50+ AWS (Amazon Web Services) job openings in Bangalore (Bengaluru) | CutShort.io

Location: Bangalore
Experience: 2–5 years
Type: Full-time | On-site
Start: Immediate
Why this role exists
Most systems don’t fail because of one big outage.
They fail because reliability is treated as an afterthought.
Right now, uptime depends too much on individual heroics.
That doesn’t scale.
This role exists to build a reliability system where:
- Uptime is predictable
- Failures are contained
- Escalations don’t depend on leadership
What you’ll do
You will not just monitor systems.
You will own reliability as a product.
1. Drive uptime to production-grade reliability
- Improve system uptime to 99.9% customer-facing SLA within 4 months
- Define and track:
- SLAs / SLOs / error budgets
- Ensure reliability is measured from the customer’s perspective, not internal metrics
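The SLO / error-budget relationship above can be sketched numerically. This is an illustrative calculation only: the 99.9% target comes from the posting, while the 30-day window is an assumed convention.

```python
# Error budget: the downtime an SLO permits over a window.
# The 30-day window is an illustrative assumption; 99.9% is from the posting.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of downtime permitted before the SLO is breached."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means SLO breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows about 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 21.6), 2))  # 0.5
```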
2. Build incident response as a system
- Set up a 24/7 incident response rotation across 3 engineers
- Eliminate dependency on leadership (no single escalation point)
- Define:
- Incident severity levels
- Response playbooks
- Escalation protocols
- Ensure fast detection → containment → resolution
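The severity levels and escalation protocols above could be encoded as a simple policy table. The level names, acknowledgement targets, and escalation windows below are illustrative assumptions, not this team's actual policy.

```python
# Illustrative incident policy table: severity level -> response targets.
# All numbers are assumed for the sketch, not taken from the posting.
SEVERITY_POLICY = {
    "SEV1": {"ack_minutes": 5,  "escalate_after_minutes": 15,  "page": True},
    "SEV2": {"ack_minutes": 15, "escalate_after_minutes": 60,  "page": True},
    "SEV3": {"ack_minutes": 60, "escalate_after_minutes": 240, "page": False},
}

def should_escalate(severity: str, minutes_unacknowledged: int) -> bool:
    """Escalate when an incident stays unacknowledged past its target,
    without any human (or leader) having to make that call."""
    policy = SEVERITY_POLICY[severity]
    return minutes_unacknowledged >= policy["escalate_after_minutes"]
```

Encoding the protocol as data rather than tribal knowledge is what removes the single escalation point the posting describes.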
3. Contain and fix erratic system behavior
- Identify and resolve:
- Latency spikes
- Downtime incidents
- Integration failures
- Build guardrails to prevent recurrence
- Focus on root cause elimination, not temporary fixes
4. Create continuous reliability feedback loops
- Work closely with engineering teams to:
- Surface recurring failure patterns
- Improve build quality
- Reduce production bugs
- Ensure learnings from incidents directly improve future releases
5. Improve observability and monitoring
- Build dashboards and alerts for:
- System health
- Performance metrics
- Failure signals
- Ensure issues are detected before customers report them
6. Reduce operational fragility
- Remove single points of failure (people, systems, workflows)
- Improve system resilience across:
- Deployments
- Integrations
- Runtime environments
What success looks like
- Uptime reaches 99.9%+ reliably
- Incidents are:
- Detected early
- Contained quickly
- Resolved permanently
- No dependency on a single individual for escalation
- System behavior becomes predictable and stable
- Engineering teams ship with higher reliability confidence
Who you are
- You have 2-5 years of experience in SRE / DevOps / backend systems
- You have worked on production systems with real uptime expectations
- You think in:
- Systems
- Failure modes
- Trade-offs
- You are comfortable debugging live, high-pressure environments
What will make you stand out
- Experience with:
- Distributed systems
- Cloud infrastructure (AWS / Azure / GCP)
- Monitoring & alerting tools
- Have built or improved:
- Incident response systems
- Reliability frameworks
- Strong debugging skills across:
- Infra
- Application
- Integrations
Compensation
₹60,000/month (fixed)
(Aligned with role scope and impact expectations)
Why join
- You will define reliability standards for a production AI platform
- Your work directly impacts:
- Customer trust
- Product performance
- Enterprise readiness
- You will move the system from reactive → predictable
What this role is not
- Not just monitoring dashboards
- Not limited to handling tickets
- Not dependent on escalation to leadership
What this role is
- A builder of reliability systems
- A guardian of uptime and performance
- A multiplier of engineering quality
One question to self-evaluate
Can you build a system where downtime is rare, predictable, and never dependent on a single person?
We are looking for a Mid-Level .NET Developer to design, develop, and maintain scalable microservices for enterprise applications. The role involves working on high-performance, reliable systems deployed in containerized environments.
Key Responsibilities:
- Develop and maintain scalable .NET microservices
- Build robust Web APIs with proper validation, error handling, and security
- Write unit and integration tests to ensure code quality
- Design portable and environment-agnostic solutions
- Collaborate with cross-functional teams and client stakeholders
- Optimize performance and implement caching strategies
- Follow security best practices for enterprise applications
- Participate in code reviews and maintain coding standards
- Support deployment and troubleshoot issues in client environments
Must-Have Skills:
Core Technical Expertise:
- 4+ years of experience with .NET Core (3.1+) / .NET 5+ and C# (8+)
- Strong hands-on experience with ASP.NET Core Web API & Entity Framework Core
- Experience building REST APIs and middleware
- Strong understanding of SOLID principles, Dependency Injection, Repository pattern
- Experience with unit testing (xUnit / NUnit / MSTest), Moq, integration testing
Microservices & Deployment:
- Hands-on experience with Docker
- Understanding of microservices architecture & distributed systems
- Experience with configuration management (appsettings.json, IConfiguration)
- Knowledge of NuGet and dependency management
Good-to-Have Skills:
Advanced Technical:
- Experience with .NET 6/7/8, Minimal APIs, gRPC, SignalR
- Advanced EF Core, Dapper, database migrations
- Kubernetes and container orchestration
- Cloud platforms: Azure / GCP / Alibaba Cloud
- Message brokers: Azure Service Bus, RabbitMQ, Kafka
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra
- API Gateways: Azure API Management, Kong
Development & Operations:
- CI/CD tools: Azure DevOps, Jenkins, GitHub Actions
- Monitoring: Application Insights, Serilog, Prometheus
- Security: HTTPS, CORS, input validation, secure coding
- Background services: Hangfire, Quartz.NET
Client-Facing Experience:
- Experience in service-based organizations
- Ability to adapt to multiple domains
- Understanding of industry standards and compliance
Job Title: AWS DevOps Engineer (MLOps)
We are looking for a highly skilled AWS + MLOps Engineer to design, build, and maintain scalable machine learning infrastructure and pipelines on AWS. The ideal candidate will have strong expertise in DevOps practices, cloud architecture, and MLOps frameworks, along with solid Python programming skills.
Job Description:
We are looking for an experienced AWS DevOps Engineer to join our team. You will be responsible for building and optimising CI/CD pipelines, managing AWS infrastructure, and automating tasks using AWS services.
Key Responsibilities:
- CI/CD Pipelines: Develop CI/CD pipelines with AWS CodePipeline, build ECR images, and update services on ECS.
- Automation: Create Python Lambda functions for automation and AWS Batch jobs for GPU processing.
- Infrastructure Management: Manage AWS infrastructure using Terraform (IAM roles, RDS, Lambda, etc.) and deploy microservices on EKS with ALB Ingress.
- Data Processing: Work with AWS Step Functions and EMR for data workflows; troubleshoot Spark jobs.
- Microservices: Deploy ATLAS on ECS, and create AWS Glue crawlers for data integration.
- Strong experience with MLOps is an added advantage.
Required Skills:
- Experience with AWS services (ECS, ECR, Lambda, Step Functions, EMR, Glue, etc.).
- Proficient in CI/CD, Terraform, and Python scripting.
- Experience deploying EKS clusters and using AWS ALB for routing.
- Strong troubleshooting skills with EMR and Spark.
- Understanding of or experience with AWS EMR, SageMaker, and Databricks would be an added advantage
Preferred:
- AWS Certification (DevOps, Solutions Architect, etc.).
- Experience with microservices and GPU-intensive processes.
- Bachelor’s in Computer Science or equivalent
- Hands-on with C#, .NET MAUI, and ReactJS.
- Experience consuming RESTful APIs and integrating with cloud backends (Azure/AWS).
- Knowledge of secure coding and app signing/deployment processes.
- Familiarity with CI/CD in Azure DevOps and version control (Git).
- Proficiency in API consumption, Git, and mobile deployment workflows.
Director of Engineering — Flights Platform
AI-First Travel Commerce · High-Scale Distributed Systems · Marketplace Infrastructure
🌏 The Problem Space
A flight search looks trivially simple. It is anything but.
Every query you fire triggers a choreography of distributed systems operating in real-time — integrating with a dozen airline GDS/NDC providers, computing dynamic fares across inventory buckets and fare rules, ranking thousands of itineraries by relevance and business intent, and returning a ranked, priced, bookable result set — all in under 100ms.
→ Millions of search queries per minute
→ <100ms end-to-end SLA with external API dependencies
→ High-value transactions — a bug here means a missed booking, not a failed render
→ Pricing errors erode trust faster than any other failure mode
We are rebuilding the Flights platform as a real-time commerce engine for Bharat — AI-native from day zero, built to power both B2C consumer journeys and high-stakes B2B enterprise corridors.
This is a once-in-a-decade opportunity to build national-scale flight infrastructure from first principles.
🧠 What You Will Own
You will own the full Flights platform — systems, architecture, and the teams that build them.
Core System Domains:
•Search Systems — high-throughput, low-latency query pipelines returning ranked, bookable options
•Pricing & Fare Engine — dynamic pricing logic, fare rules, promotional overlays, and real-time validation
•Booking & Ticketing — transaction-critical flows requiring strict consistency, idempotency, and zero data loss
•Airline Integrations — managing unreliable external GDS/NDC APIs with retries, circuit-breakers, and reconciliation
•Post-Booking Flows — cancellations, modifications, refunds — correctness at the margin is non-negotiable
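The retries and circuit-breakers mentioned for airline integrations follow a well-known pattern, sketched below. The class, thresholds, and cooldown are illustrative assumptions, not the platform's actual implementation.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker for an unreliable external API (illustrative).

    CLOSED: calls pass through; consecutive failures are counted.
    OPEN: calls fail fast until a cooldown elapses.
    After the cooldown, one trial call decides whether to close again.
    """

    def __init__(self, failure_threshold=3, cooldown_s=30.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_s
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means CLOSED

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown_s:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown over: allow one trial call through (half-open).
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()  # trip the breaker
            raise
        # Success: reset to CLOSED.
        self.failures = 0
        self.opened_at = None
        return result
```

Failing fast while a GDS/NDC provider is down keeps search latency bounded instead of queueing requests behind a dead dependency.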
Platform Scope:
•High-scale APIs serving consumer apps, B2B enterprise clients, and third-party partners
•Event-driven state machines managing booking workflows across async boundaries
•Observability and reliability infrastructure across all mission-critical flows
Team Scope:
•Lead 15–30+ engineers across multiple product and platform teams
•Manage Engineering Managers and Principal/Staff engineers
•Own hiring, org design, and technical direction
⚙️ Core Engineering Challenges
This role is fundamentally about making the right trade-offs under uncertainty — at scale.
Latency vs. Accuracy — when do you serve a cached fare vs. call a live airline API?
Availability vs. Consistency — graceful degradation at booking time vs. strict price validation
Cost vs. Performance — when is an external API call worth it vs. a cache hit?
Scalability vs. Simplicity — the best system is the one your team can reason about under incident
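The latency-vs-accuracy trade-off above (serve a cached fare or call the live airline API) can be sketched as a TTL-based decision. The TTL value and function shape are illustrative assumptions; real fare caches are considerably more nuanced.

```python
import time

# Serve a cached fare while it is fresh; fall back to the live airline API
# once the entry ages past its TTL. The 5-minute TTL is an assumption.
CACHE_TTL_S = 300

def get_fare(route, cache, fetch_live, now=time.monotonic):
    """Return (fare, source) where source is 'cache' or 'live'."""
    entry = cache.get(route)
    if entry is not None:
        fare, cached_at = entry
        if now() - cached_at < CACHE_TTL_S:
            return fare, "cache"      # fresh enough: skip the slow API call
    fare = fetch_live(route)          # slow but authoritative path
    cache[route] = (fare, now())
    return fare, "live"
```

The TTL is exactly where the trade-off lives: a longer TTL buys latency and cost at the risk of quoting a stale price at booking time.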
🤖 AI-First Engineering
AI is not an afterthought. It is load-bearing architecture.
•LLM-powered pricing intelligence — dynamic fare prediction and demand signals
•RAG pipelines for fare rules, refund policy, and support automation
•Agentic booking resolution workflows — autonomous exception handling at scale
•MCP-based orchestration layers for multi-provider integration
⚖️ Key Responsibilities
Architecture & Distributed Systems
•Design and evolve sub-100ms distributed query systems serving millions of concurrent searches
•Build fault-tolerant booking pipelines with strong consistency and durability guarantees
•Drive Kafka-based event architectures for booking state management
Reliability & Observability
•Own 99.99%+ availability for booking and pricing systems
•Build deep observability — metrics, distributed tracing, structured logging, SLOs/SLAs
•Lead post-incident reviews and drive systemic reliability improvements
Business Partnership
•Partner with Product, Revenue, and Partnerships to translate commercial goals into architecture
•Influence platform roadmap, supplier strategy, and long-term technical investment
🛠️ Technology Stack
Backend: Java · Kotlin · Go · Python
Architecture: Microservices · Event-Driven (Kafka) · gRPC
Data: Redis · Aerospike · DynamoDB · Elasticsearch
Cloud: AWS (EKS, EC2, S3)
Observability: Prometheus · Grafana · OpenTelemetry
👤 Who You Are
•12–16 years in backend/distributed systems; 5+ years in an engineering leadership role, having led teams of 15–50 engineers
•Built and scaled large B2C + B2B platforms — Travel Tech, FinTech, or high-scale Consumer
•Deep expertise in real-time systems, marketplace dynamics, and external API integration
•Tier-I institute background strongly preferred (IIT / IIIT / NIT / IISC / BITS / VIT / SRM — CSE/ISE)
🚀 Why This Matters
Build national-scale infrastructure for 1.4 billion people
Sit at the intersection of AI · distributed systems · marketplace economics
Define the future of travel commerce in India — from architecture to product
Dear Candidates,
We have an urgent requirement for a Technical Lead – Full Stack role based in Bangalore. Please find the details below:
Work Location (WFO):
Nagar, Bengaluru, Karnataka
Interview Process:
L1 Interview – Face-to-Face at Office
Experience Required:
4–6 years (minimum 1+ year in a Technical Leadership role)
Role Overview:
The candidate will lead the technical vision and architecture of a compliance platform by designing scalable, secure, and high-performance systems. The role involves driving full-stack development across .NET and open-source technologies, enabling unified AI Agent capabilities, Single Authentication (SSO), and a One-UI experience.
Key Responsibilities:
- Define and own end-to-end architecture including micro-frontends, .NET services, FastAPI APIs, and microservices
- Lead full-stack development using .NET and modern open-source technologies
- Modernize legacy systems (ASP.NET, .NET Core, MS SQL Server) to cloud-native architecture
- Design and implement AI Agents, SSO, and unified UI experiences
- Manage sprint planning, backlogs, and collaborate with Product Owners
- Implement CI/CD pipelines using Jenkins, GitHub Actions
- Drive containerization and orchestration using Docker & Kubernetes
- Ensure secure deployments and cloud infrastructure management
- Establish engineering best practices, code reviews, and architecture governance
- Mentor teams on Clean Architecture, SOLID principles, and DevOps practices
Required Skills:
- ReactJS, FastAPI, Python, REST/GraphQL
- ASP.NET, MVC, .NET Core, Entity Framework, MS SQL Server
- Strong experience in Microservices Architecture
- DevOps: CI/CD, Jenkins, GitOps, Docker, Kubernetes
- Cloud Platforms: AWS / Azure / GCP
- AI/ML & LLM tools: OpenAI, Llama, LangChain, etc.
- Security: RBAC, API security, secrets management
Qualifications:
- BE / BTech in Computer Science
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
For more details:
Website: www.wissen.com
Wissen Thought Leadership: https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Role: Java Backend Developer with AWS
Key Responsibilities
● Design & Development: Build robust, scalable, and maintainable backend services using Java 17+ and the Spring Boot ecosystem.
● API Design: Develop and document RESTful APIs for consumption by web and mobile frontends.
● Database Management: Optimize complex queries and design schemas for relational (PostgreSQL, MySQL) and/or NoSQL (MongoDB, DynamoDB) databases.
● Microservices Architecture: Contribute to the migration or maintenance of microservices, ensuring high availability and low latency.
● Code Quality: Perform thorough code reviews and write comprehensive unit, integration, and functional tests (JUnit, Mockito).
● Cloud & DevOps: Deploy and manage applications in AWS using Docker and Kubernetes.
● Collaboration: Work closely with Frontend Developers and Product Managers to translate business requirements into technical specifications.
Required Technical Skills
● Java Mastery: Deep understanding of Core Java, Multithreading, and Stream API.
● Frameworks: Extensive experience with Spring Boot, Spring Security, and Spring Data JPA/Hibernate.
● Cloud Experience: Proficiency with cloud-native services (e.g., AWS Lambda, S3, RDS, or SQS).
● Messaging: Experience with message brokers like Apache Kafka for asynchronous processing.
● CI/CD: Hands-on experience with Jenkins, GitLab CI, or GitHub Actions.
● Version Control: Expert-level knowledge of Git and branching strategies.
Qualifications & Soft Skills
● Experience: 5+ years of professional experience in backend development.
● Problem Solving: Ability to debug complex production issues and performance bottlenecks.
● Communication: Excellent verbal and written communication skills—vital for a remote/hybrid contractor setup.
● Adaptability: Proven ability to quickly learn new codebases and business domains.
Full-Stack Developer (Backend-Focused)
We are seeking a seasoned Full-Stack Developer with strong expertise in backend engineering using Python and Golang. In this role, you will take ownership of backend systems while contributing to the development of modern, responsive frontend interfaces. The focus will be on building secure, scalable, and high-performance applications, with emphasis on API development, database engineering, and cloud deployment.
Key Responsibilities
- Develop and enhance backend services using Python frameworks such as Django or FastAPI
- Design, build, and maintain RESTful APIs and microservices
- Work extensively with relational and NoSQL databases, including PostgreSQL, MySQL, and MongoDB
- Collaborate with frontend developers to integrate user-facing elements with backend logic
- Implement efficient, secure, and scalable application architectures
- Troubleshoot and resolve software defects across different environments
- Optimize performance and reliability of backend services
- Write clean, maintainable, and well-tested code following best practices
- Contribute to DevOps activities, including CI/CD pipelines and containerization
Required Skills & Qualifications
- 6+ years of experience in full-stack or backend-focused development
- Strong proficiency in Python with hands-on experience in frameworks like Django or FastAPI
- Solid understanding of SQL and NoSQL databases, including data modeling and query optimization
- Familiarity with modern frontend technologies such as React, Vue, or Angular
- Experience with Docker, Kubernetes, and at least one cloud platform (AWS, Azure, or GCP) is preferred
- Strong understanding of system design, distributed systems, and microservices architecture
- Experience with Git and CI/CD automation pipelines
- Excellent problem-solving skills and ability to work collaboratively
Lead Cloud Reliability Engineer
Job Responsibilities
● Lead and manage the Cloud Reliability teams to provide strong Managed Services support to end-customers.
● Isolate, troubleshoot and resolve issues reported by CMS clients in their cloud environment
● Drive the communication with the customer providing details about the issue, current steps, next plan of action, ETA
● Gather client's requirements related to the use of specific cloud services and provide assistance in setting them up and resolving issues
● Create SOPs and knowledge articles for use by the L1 teams to resolve common issues
● Identify recurring issues, perform root cause analysis and propose/implement preventive actions
● Follow change management procedure to identify, record and implement changes
● Plan and deploy OS, security patches in Windows/Linux environment and upgrade k8s clusters
● Identify the recurring manual activities and contribute to automation
● Provide technical guidance and educate team members on development and operations. Monitor metrics and develop ways to improve.
● System troubleshooting and problem-solving across platform and application domains. Ability to use a wide variety of open-source technologies and cloud services.
● Build, maintain, and monitor configuration standards.
● Ensuring critical system security through using best-in-class cloud security solutions.
Qualifications
● 4–7 years of experience in Cloud Infrastructure and Operations domains and IT operations, preferably in a global enterprise environment.
● Specialize in one or two cloud deployment platforms: AWS, GCP
● Hands on experience with AWS/GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine)
● Understanding of one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
● Logging and Monitoring tools (ELK, Stackdriver, CloudWatch)
● Knowledge of Configuration Management tools such as Ansible, Terraform, Puppet, Chef
● Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
● Good analytical, communication, problem solving, and learning skills.
● Knowledge of programming against cloud platforms such as Google Cloud Platform, and of lean development methodologies.
● Strong service attitude and a commitment to quality.
● Willingness to work in shifts.

at Prismberry Technologies Pvt Ltd (acquired by eYantra Ventures Limited)
Role Overview
We are seeking a highly skilled Senior Java Developer to join our engineering team. In this role, you will be responsible for designing and developing high-performance, scalable, and resilient microservices. You will work at the intersection of complex backend logic, real-time data streaming, and cloud infrastructure to deliver seamless user experiences.
Key Responsibilities
- System Design: Architect and develop robust, scalable, and maintainable backend services using Java and Spring Boot.
- Scalability: Build distributed systems capable of handling high traffic and large datasets with low latency.
- Database Management: Design and optimize complex schemas in both Relational (SQL) and NoSQL databases, ensuring data integrity and performance.
- Event-Driven Architecture: Implement real-time messaging and data pipelines using Apache Kafka.
- Cloud Infrastructure: Deploy and manage services on cloud platforms (AWS or GCP), leveraging managed services to improve system reliability.
- Collaboration: Work closely with cross-functional teams to define requirements, participate in code reviews, and mentor junior developers.
Technical Requirements
- Core Java: Deep expertise in Java (8 or higher), including concurrency, multithreading, and JVM tuning.
- Frameworks: Strong experience with Spring Boot, Spring Cloud, and Hibernate/JPA.
- Messaging: Proven experience with Apache Kafka for event streaming and asynchronous processing.
- Cloud: Proficiency in AWS (EC2, S3, RDS, Lambda) or GCP (GCE, GCS, Cloud SQL, Pub/Sub).
- Databases: Solid knowledge of PostgreSQL, MySQL, or Oracle, alongside NoSQL experience (e.g., MongoDB, Cassandra, or Redis).
- DevOps & Tools: Familiarity with Docker, Kubernetes, and CI/CD pipelines (Jenkins, GitLab CI, or GitHub Actions).
Preferred Qualifications
- Experience with Microservices Architecture and Domain-Driven Design (DDD).
- Understanding of distributed caching strategies and load balancing.
- Strong problem-solving skills and a "clean code" mentality.
Company Overview:
Planview has one mission: to build the future of connected work with market-leading portfolio management and work management solutions. Planview is a recognized innovator and industry leader; our solutions enable organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Our solutions span every class of work, resource, and organization to address the varying needs of diverse and distributed teams, departments, and enterprises.
As a Sr CloudOps Engineer II, you will oversee teams of Engineers and be a champion for configuration management, technologies in the cloud, and continuous improvement. You will work closely with global leaders to ensure that our applications, infrastructure, and processes are scalable, secure, and supportable. By leveraging your production experience and development skills you will work hand in hand with Engineers (Dev, DevOps, DBOps) to design and implement solutions that improve delivery of value to customers, reduce costs, and eliminate toil.
Responsibilities (What you will do):
- Guide the professional development of Engineers and support the teams to accomplish business goals
- Work closely with leaders in Israel to align on priorities and architect, deliver, and manage our products
- Build systems that are secure, scalable, and self-healing.
- Manage and improve deployment pipelines.
- Triage and remediate production issues.
- Participate in on-call rotations for escalations.
Qualifications (What you will bring):
- Bachelor's degree in CS or equivalent experience in a related field.
- 2+ years managing Engineering teams.
- 8+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment
- 5+ years administering Linux and Windows environments.
- 3+ years programming / scripting experience (e.g., Python, JavaScript, PowerShell)
- Strong technical knowledge in OS’s (Linux and Windows), virtualizations, storage systems, networking, and firewall implementations
- Maintaining production environments on-premise (90%) and in the cloud (10%) (e.g., AWS, Google Cloud, Azure)
- Solid understanding of networking principles and how it applies to data flow and security.
- Automating deployments of cloud-based services (e.g., AWS EC2 / RDS, Docker, Kubernetes)
- Experience managing CI/CD infrastructures, with strong proficiency in platforms like Bitbucket and Jenkins, to streamline deployment pipelines and ensure efficient software delivery.
- Management of resources using Infrastructure as Code tools (e.g., CloudFormation, Terraform, Chef)
- Knowledge of observability tools such as LogicMonitor, New Relic, Prometheus, and Coralogix, as well as their implementation.
- Worked within Agile and Lean software development teams.
- Experience working in globally distributed teams.
- Ability to see the big picture and manage risks.
Build, deploy, and maintain production-grade AI/ML solutions for Fortune 500 enterprise clients on Google Cloud Platform. Hands-on role focused on shipping scalable AI systems across GenAI, agentic workflows, traditional ML, and computer vision.
Key Responsibilities:
Generative AI & Agentic Systems
- Design and build GenAI applications (RAG, agentic workflows, multi-agent systems)
- Develop intelligent systems with memory, planning, and reasoning capabilities
- Implement prompt engineering, context optimization, and evaluation frameworks
- Build observable and reliable multi-agent architectures
Traditional ML & Computer Vision
- Develop ML pipelines (forecasting, recommendation, classification, regression)
- Build production-grade computer vision solutions (document AI, image analysis)
- Perform feature engineering, model optimization, and benchmarking
MLOps & Production Engineering
- Own end-to-end ML lifecycle (CI/CD, testing, versioning, deployment)
- Build scalable APIs, microservices, and data pipelines
- Monitor models, detect drift, and implement A/B testing frameworks
Knowledge Solutions
- Architect knowledge graphs and semantic search systems
- Implement hybrid retrieval (vector + keyword search)
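Hybrid retrieval as described above can be sketched as a weighted blend of a vector-similarity score and a keyword-overlap score. The blending weight and the toy scoring functions below (cosine similarity, term overlap standing in for BM25) are assumptions for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query_terms, doc_terms):
    """Fraction of query terms present in the document (a crude BM25 stand-in)."""
    if not query_terms:
        return 0.0
    return len(set(query_terms) & set(doc_terms)) / len(set(query_terms))

def hybrid_rank(query_vec, query_terms, docs, alpha=0.6):
    """Rank docs by alpha * vector score + (1 - alpha) * keyword score.

    docs maps doc_id -> (embedding_vector, term_list); alpha is an assumed weight.
    """
    scored = []
    for doc_id, (vec, terms) in docs.items():
        s = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query_terms, terms)
        scored.append((s, doc_id))
    return [doc_id for s, doc_id in sorted(scored, reverse=True)]
```

Blending the two signals lets exact-term matches (product codes, policy names) surface even when embeddings alone would miss them.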
Client Collaboration
- Present technical solutions to enterprise clients
- Collaborate with architects, data engineers, and business teams
Required Skills & Experience
- 3–6 years of hands-on ML Engineering experience
- Strong Python and software engineering fundamentals
- Experience shipping production ML systems on cloud (GCP preferred)
- Experience across GenAI, Traditional ML, Computer Vision
- MLOps experience and RAG-based systems
Preferred
- GCP Professional ML Engineer certification
- Knowledge graphs / semantic search experience
- Experience in regulated industries (Healthcare / BFSI)
- Open-source or technical publications
We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.
This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.
Responsibilities
- Design, develop, and implement machine learning models and algorithms to solve complex business problems.
- Collaborate with data scientists to transition models from research and development into production-ready systems.
- Build and maintain scalable data pipelines for ML model training and inference using Databricks.
- Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
- Deploy and manage ML models in production environments on Azure, leveraging services such as:
- Azure Machine Learning
- Azure Kubernetes Service (AKS)
- Azure Functions
- Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
- Ensure the reliability, performance, and scalability of ML systems in production.
- Monitor model performance, detect model drift, and implement retraining strategies.
- Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
- Document model architecture, data flows, and operational procedures.
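The drift-monitoring responsibility above can be sketched as a simple check on a feature's distribution between a baseline and a recent window. The mean-shift test and threshold below are illustrative assumptions; production systems typically use PSI or Kolmogorov-Smirnov tests instead.

```python
import statistics

def drift_detected(baseline, recent, z_threshold=3.0):
    """Flag drift when the recent mean shifts more than z_threshold
    baseline standard deviations from the baseline mean (illustrative check)."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    if sigma == 0:
        # Degenerate baseline: any change at all counts as drift.
        return statistics.mean(recent) != mu
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > z_threshold
```

A check like this would sit behind the retraining trigger: when it fires, the pipeline schedules re-evaluation rather than silently serving a degraded model.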
Qualifications
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.
Experience
- Minimum 3+ years of professional experience as an ML Engineer or in a similar role.
Required Skills
- Strong proficiency in Python for data manipulation, machine learning, and scripting.
- Hands-on experience with machine learning frameworks, such as:
- Scikit-learn
- TensorFlow
- PyTorch
- Keras
- Demonstrated experience with MLflow for:
- Experiment tracking
- Model management
- Model deployment
- Proven experience working with Microsoft Azure cloud services, specifically:
- Azure Machine Learning
- Azure Databricks
- Related compute and storage services
- Solid experience with Databricks for:
- Data processing
- ETL pipelines
- ML model development
- Strong understanding of MLOps principles and practices, including:
- CI/CD for ML
- Model versioning
- Model monitoring
- Model retraining
- Experience with containerization and orchestration technologies, including:
- Docker
- Kubernetes (especially AKS)
- Familiarity with SQL and data warehousing concepts.
- Experience working with large datasets and distributed computing frameworks.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Nice-to-Have Skills
- Experience with other cloud platforms (AWS or GCP).
- Knowledge of big data technologies such as Apache Spark.
- Experience with Azure DevOps for CI/CD pipelines.
- Familiarity with real-time inference patterns and streaming data.
- Understanding of Responsible AI principles, including fairness, explainability, and privacy.
Certifications (Preferred)
- Microsoft Certified: Azure AI Engineer Associate
- Databricks Certified Machine Learning Associate (or higher)
Dear Candidates,
Experience: 3+ years
Notice Period: Immediate to 7 days
Location: Bangalore, Chennai
5-day work week
Job Description
Function: Software Engineering → Full-Stack Development
Fintech/BFSI domain experience.
- React.js
- Node.js
- AWS
Requirements:
- Mandatory skills: strong experience in React.js, Node.js, and AWS, with 3+ years of relevant experience from current projects.
- Expertise with at least one object-oriented JavaScript framework (React, Angular, Ember, Dojo, Node, etc.).
- Good to have hands-on experience in Python development.
- Proficiency with Object Oriented Programming, multi-threading, data serialization, and REST API to connect applications to back-end services.
- Proficiency in Docker, Kubernetes (k8s), Jenkins, and GitHub Actions is essential for this role.
- Proven cloud development experience on AWS.
- Understanding of IT lifecycle methodology and processes.
- Experience understanding and leading enterprise platforms/solutions.
- Experience working with Microservices/Service-Oriented Architecture frameworks.
- Good understanding of middleware technologies.
- Expertise in at least one unit testing framework.
- Education: B.E./B.Tech/MCA/M.Sc. required (a UG degree alone is not sufficient).

Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* Developing DevOps practices that promote automation, including asset creation, enterprise strategy definition, and team training
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, including 10+ years leading teams and at least 4 years building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product.
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best-in-class talent
* Competitive compensation + ESOPs
Excellent Opportunity: Lead Java Full Stack (React + AWS + DynamoDB) - Wissen Technology, Whitefield, Bengaluru
Please find the company's details and the job description below:
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
For more details:
Website: www.wissen.com
Wissen Thought leadership : https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Job Description:
Requirements: Lead (Java + React + AWS + DynamoDB)
- Bachelor’s degree in computer science or related field.
- 7-12 years of experience in software development.
- Hands-on experience working on AWS cloud environment and DynamoDB.
- Proficiency in Java, J2EE, Spring, Hibernate, REST API, Microservices.
- Experience in developing applications using J2EE Design Patterns and AWS services.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Amita Soni
Senior Consultant-Talent Acquisition-Wissen Technology, Pune
Lead Data Engineer
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
What you will wake up to solve.
- Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
- Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
- Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
- Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
- Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
- Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
- Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
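The mentoring bullet above calls out best-practice dimensional modeling (Star/Snowflake schema). Purely as an illustration, and with table and column names invented for the sketch, here is the shape of a star schema in miniature: a narrow fact table joined to small dimension tables, which is what keeps analytical queries fast on warehouses like BigQuery, Snowflake, or Redshift (SQLite stands in here so the sketch is self-contained):

```python
import sqlite3

# Minimal star-schema sketch: one fact table (sales) keyed to a product
# dimension and a date dimension. All names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO dim_date    VALUES (20240101, 2024), (20230101, 2023);
INSERT INTO fact_sales  VALUES (1, 20240101, 10.0), (1, 20230101, 5.0),
                               (2, 20240101, 7.5);
""")
# Analytical queries filter on dimension attributes and aggregate the fact table:
rows = con.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    JOIN dim_date d USING (date_id)
    WHERE d.year = 2024
    GROUP BY p.category
    ORDER BY p.category
""").fetchall()
assert rows == [('books', 10.0), ('games', 7.5)]
```

The design choice being modeled: facts stay append-only and narrow, while descriptive attributes live once in the dimensions, so the same pattern scales from this toy example to the petabyte-scale platforms described above.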
Welcome to Searce
The AI-Native tech consultancy that's rewriting the rules.
Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads.
Functional Skills
the solver personas.
- The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
- The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
- The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
- The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
- The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.
Experience & Relevance
- Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
- Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
- AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
- Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
- Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Job Role: Sr. Full Stack Developer
Experience: Min 6 years
Location: Bangalore
Company Profile- https://www.wissen.com/
Domain
Fintech, Banking, Capital Markets, Investment Banking
Job Summary
We are looking for a highly experienced Senior Full Stack Engineer with strong hands-on expertise in Java, Spring Boot, AWS, React, and DynamoDB. The ideal candidate will have a strong background in building secure, scalable, high-performance applications for financial services, with experience in regulated environments such as banking, capital markets, or investment banking.
Key Responsibilities
- Design, develop, and maintain scalable backend services using Java and Spring Boot.
- Build responsive and reusable user interfaces using React.
- Design and optimize data models and access patterns in DynamoDB.
- Develop RESTful APIs and integrate them with front-end and downstream systems.
- Work on microservices-based architecture and cloud-native application design.
- Collaborate with product managers, business analysts, architects, QA, and DevOps teams to deliver business-critical solutions.
- Ensure application security, performance, reliability, and maintainability.
- Participate in code reviews, architecture reviews, and design discussions.
- Troubleshoot production issues and support enhancements in live environments.
- Follow SDLC, Agile, and DevOps best practices in a fast-paced financial services environment.
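One responsibility above is designing data models and access patterns in DynamoDB. As a hedged illustration only (the entity names and table layout below are invented for the sketch, not this role's actual schema), single-table design composes partition and sort keys so that each access pattern becomes a single Query:

```python
# Illustrative single-table DynamoDB key design for a payments-style domain.

def account_item(account_id, name):
    return {
        "PK": f"ACCOUNT#{account_id}",   # partition key groups all items for one account
        "SK": "PROFILE",                 # sort key distinguishes item types
        "name": name,
    }

def txn_item(account_id, txn_id, amount, ts):
    return {
        "PK": f"ACCOUNT#{account_id}",
        "SK": f"TXN#{ts}#{txn_id}",      # ISO timestamp prefix enables range queries by date
        "amount": amount,
    }

# Access pattern "all 2024 transactions for account 42" maps to one Query:
#   PK = "ACCOUNT#42" AND begins_with(SK, "TXN#2024")
items = [
    account_item("42", "Asha"),
    txn_item("42", "t1", 120.0, "2024-03-01T10:00:00"),
    txn_item("42", "t2", -40.0, "2023-12-31T23:59:00"),
]
hits = [i for i in items if i["PK"] == "ACCOUNT#42" and i["SK"].startswith("TXN#2024")]
assert [i["SK"] for i in hits] == ["TXN#2024-03-01T10:00:00#t1"]
```

In a real service the same key expressions would be passed to the DynamoDB Query API; the point of the sketch is that access patterns are decided first and keys are designed to serve them.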
Required Skills
- 8+ years of experience in software development.
- Hands-on experience working on React, AWS cloud environment and DynamoDB.
- Proficiency in Java, J2EE, Spring, Hibernate, REST API, Microservices.
- Experience in developing applications using J2EE Design Patterns and AWS services.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, IT, or a related discipline.
- Proven experience delivering enterprise-grade applications in regulated financial environments.
Director - Data engineering
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
what you will wake up to solve.
1. Delivery & Tactical Rigor
- Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
- Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
- Execution & Technical Resolution
- Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
- Quality Enforcement
- Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.
2. Strategic Growth & Practice Scaling
- Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
- Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
- Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.
3. Leadership & Unit Management
- Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
- Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
- Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.
Welcome to Searce
The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.
We don’t do traditional.
As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.
Functional Skills
1. Delivery Management & Operational Excellence
- Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
- Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
- SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
- Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.
2. Architectural Implementation & Technical Oversight
- Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
- Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
- Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
- DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.
3. Unit Management & Commercial Execution
- Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
- Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
- Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
- Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.
Tech Superpowers
- Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
- End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
- Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
- Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
- AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
- Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
- Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
- Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. A business-first, data-second, outcome-focused technology leader.
Experience & Relevance
- Executive Experience: 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director-level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
- Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
- Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
- Operational Leadership: Proven expertise in managing and scaling large professional services organizations, with a demonstrated ability to optimize utilization, resource allocation, and operational expense.
- Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
- Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Solutions Architect - Data Engineering
Modern tech solutions advisory & 'futurify' consulting as a Searce lead fds (‘forward deployed solver’) architecting scalable data platforms and robust data engineering solutions that power intelligent insights and fuel AI innovation.
If you’re a tech-savvy, consultative seller with the brain of a strategist, the heart of a builder, and the charisma of a storyteller — we’ve got a seat for you at the front of the table.
You're not a sales lead. You're the transformation driver.
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
- Improver. Solver. Futurist.
- Great sense of humor.
- ‘Possible. It is.’ Mindset.
- Compassionate collaborator. Bold experimenter. Tireless iterator.
- Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
- Thinks in systems. Solves at scale.
This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way— and then identifies 3 better ways to do it — we’d love to chat with you.
Your Responsibilities
what you will wake up to solve.
You are not just a Solutions Architect; you are a futurifier of our data universe and the primary enabler of our AI ambitions. With a deep-seated passion for data engineering, you will architect and build the foundational data infrastructure that powers the customer's entire data intelligence ecosystem.
As the Directly Responsible Individual (DRI) for our enterprise-grade data platforms, you own the outcome, end-to-end. You are the definitive solver for our customer's most complex data challenges, leveraging a powerful tech stack including Snowflake, Databricks, etc., and core GCP & AWS services (BigQuery, Spanner, Airflow, Kafka). This is a hands-on-keys role where you won't just design solutions; you'll build them, break them, and perfect them.
- Solution Design & Pre-sales Excellence: Collaborate with cross-functional teams, including sales, engineering, and operations, to ensure successful project delivery.
- Design Core Data Engineering: Master data modeling, architect high-performance data ingestion pipelines, and ensure data quality and governance throughout the data lifecycle.
- Enable Cloud & AI: Design and implement solutions utilizing core GCP data services, building foundational data platforms that efficiently support advanced analytics and AI/ML initiatives.
- Optimize Performance & Cost: Continuously optimize data architectures and implementations for performance, efficiency, and cost-effectiveness within the cloud environment.
- Bridge Business & Tech: Translate complex business requirements into clear technical designs, providing technical leadership and guidance to data engineering teams.
- Stay Ahead of the Curve: Continuously research and evaluate new data technologies, architectural patterns, and industry trends to keep our data platforms at the cutting edge.
Functional Skills:
- Enterprise Data Architecture Design: Expert ability to design holistic, scalable, and resilient data architectures for complex enterprise environments.
- Cloud Data Platform Strategy: Proven capability to strategize, design, and implement cloud-native data platforms.
- Pre-Sales & Technical Storyteller: Crafts compelling, client-ready proposals, architectural decks, and technical demonstrations. Doesn't just present; shapes the strategic technical narrative behind every proposed solution.
- Advanced Data Modelling: Mastery in designing various data models for analytical, operational, and transactional use cases.
- Data Ingestion & Pipeline Orchestration: Strong expertise in designing and optimizing robust data ingestion and transformation pipelines.
- Stakeholder Communication: Exceptional skills in articulating complex technical concepts and architectural decisions to both technical and non-technical stakeholders.
- Performance & Cost Optimization: Adept at optimizing data solutions for performance, efficiency, and cost within a cloud environment.
Tech Superpowers:
- Cloud Data Mastery: You're a wizard at leveraging public cloud data services, with deep expertise in GCP (BigQuery, Spanner, etc.) and expert proficiency in modern data warehouse solutions like Snowflake.
- Data Engineering Core: Highly skilled in designing, implementing, and managing data workflows using tools like Apache Airflow and Apache Kafka. You're also an authority on advanced data modeling and ETL/ELT patterns.
- AI/ML Data Foundation: You instinctively design data pipelines and structures that efficiently feed and empower Machine Learning and Artificial Intelligence applications.
- Programming for Data: You have a strong command over key programming languages (Python, SQL) for scripting, automation, and building data processing applications.
Experience & Relevance:
- Architectural Leadership (8+ Years): You bring extensive experience (8+ years) specifically in a Solutions Architect role, focused on data engineering and platform building.
- Cloud Data Expertise: You have a proven track record of designing and implementing production-grade data solutions leveraging major public cloud platforms, with significant experience in Google Cloud Platform (GCP).
- Data Warehousing & Data Platform: Demonstrated hands-on experience in the end-to-end design, implementation, and optimization of modern data warehouses and comprehensive data platforms.
- Databricks & BigQuery Mastery: You possess significant practical experience with Databricks as a core data warehouse and GCP BigQuery for analytical workloads.
- Data Ingestion & Orchestration: Proven experience designing and implementing complex data ingestion pipelines and workflow orchestration using tools like Airflow and real-time streaming technologies like Kafka.
- AI/ML Data Enablement: Experience in building data foundations specifically geared towards supporting Machine Learning and Artificial Intelligence initiatives.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’.
Don’t Just Send a Resume. Send a Statement.
Job Title : AWS Data Engineer
Experience : 4+ Years
Location : Bengaluru (HSR – Hybrid, 3 Days WFO)
Notice Period : Immediate Joiner
💡 Role Overview :
We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.
🔥 Mandatory Skills :
Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security
🚀 Key Responsibilities :
- Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
- Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
- Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
- Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
- Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
- Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
- Collaborate with data analysts and data scientists to deliver actionable insights
- Work in an Agile environment to deliver high-quality data solutions
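The responsibilities above include managing data lakes on S3 in formats like Parquet. Engines such as Athena, Glue, and Spark can prune partitions when object keys embed `column=value` segments (Hive-style partitioning). A minimal sketch of that layout; the bucket and dataset names are placeholders, not part of this role's environment:

```python
from datetime import date

# Hive-style partitioned S3 key layout for a data lake. All names illustrative.
def partition_key(dataset: str, event_date: date, region: str) -> str:
    return (
        f"s3://example-data-lake/{dataset}/"
        f"year={event_date.year}/month={event_date.month:02d}/"
        f"day={event_date.day:02d}/region={region}/part-000.parquet"
    )

key = partition_key("orders", date(2024, 3, 7), "apac")
assert key == (
    "s3://example-data-lake/orders/"
    "year=2024/month=03/day=07/region=apac/part-000.parquet"
)
# A query filtering on year/month/day/region then scans only the matching
# prefixes instead of the whole dataset, cutting both latency and cost.
```

Zero-padding the month and day keeps lexicographic prefix ordering aligned with chronological ordering, which is what makes date-range pruning on the key prefix reliable.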
✅ Mandatory Skills :
- Strong Python (including AWS SDKs), SQL, Spark
- Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Experience with DBT and ETL/ELT pipeline development
- Workflow orchestration using Airflow / Step Functions
- Knowledge of data lake formats (Parquet, ORC, Iceberg)
- Exposure to DevOps practices (Terraform, CI/CD)
- Strong understanding of data governance and security best practices
- 4–7 years in Data Engineering, with 3+ years on AWS
➕ Good to Have :
- Understanding of Data Mesh architecture
- Experience with platforms like Data.World
- Exposure to Hadoop / HDFS ecosystems
🤝 What We’re Looking For :
- Strong problem-solving and analytical skills
- Ability to work in a collaborative, cross-functional environment
- Good communication and stakeholder management skills
- Self-driven and adaptable to fast-paced environments
📝 Interview Process :
- Online Assessment
- Technical Interview
- Fitment Round
- Client Round
Required Skills
- 8+ years of DevOps / Cloud Engineering experience
- Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, etc.)
- Expertise in Kubernetes (deployment, scaling, cluster management)
- Strong experience in PostgreSQL and AWS RDS administration
- Proficiency in Terraform for infrastructure automation
- Experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.)
- Strong knowledge of Java (mandatory) and application deployment lifecycle
- Experience with Docker and containerization
- Solid understanding of networking, security, and system architecture
- Strong troubleshooting and problem-solving skills

A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage
Responsibilities:
- Lead architecture, technical decisions, and ensure code quality, scalability, and performance
- Develop backend systems using Python & SQL; build APIs and optimize databases
- Work with frontend (React/Angular) and API-driven architectures
- Integrate AI/ML models and support analytics/LLM-based solutions
- Manage cloud deployments (Azure/AWS) and implement CI/CD practices
- Ensure system reliability, monitoring, and production readiness
- Mentor team members, conduct reviews, and collaborate with cross-functional teams
Key Responsibilities:
- Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
- Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
- Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
- Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
- Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
- Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
- Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
- Contribute to the development of technical documentation and training materials.
Required Skillset:
- Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
- Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
- Experience in designing and developing scalable, high-performance, and secure software solutions.
- Strong understanding of software development methodologies, including Agile and Waterfall.
- Excellent communication, interpersonal, and problem-solving skills.
- Ability to work effectively in a fast-paced, dynamic environment.
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
WHAT YOU'LL WORK ON
- Build and scale backend services using Node.js & Express
- Architect and optimize MongoDB schemas for performance
- Contribute to frontend features with Next.js & React
- Debug production issues, optimize API latency & CI/CD pipelines
- Integrate MathJax (LaTeX rendering) & VdoCipher (secure video)
WHAT WE'RE LOOKING FOR
- Strong DSA fundamentals — logical thinking over competitive coding
- Deep JavaScript/TypeScript knowledge: Closures, Promises, Event Loop
- 1–2 original projects (no To-Do apps or tutorial clones)
- Ability to independently pick up Docker, Redis, or AWS
- Ownership mindset — ensure it works in production, not just locally
BONUS POINTS
- Docker / containerization basics
- Real-world AWS experimentation (EC2, S3, Lambda)
- Active GitHub profile: open-source contributions or unique projects
AI Usage Policy:
We encourage AI tools (Cursor, Copilot, GPT-4) as force multipliers — but you must own your code, explain trade-offs, and debug without relying solely on AI.
HOW TO APPLY
- Share your GitHub profile link
- Include live demo links to your best, most original projects
We value what you've built far more than what's on your resume.
WHAT YOU'LL WORK ON
- Design and implement scalable APIs and microservices using Node.js & Express
- Manage deployments via GitHub Actions and CodeDeploy; work with Docker & AWS
- Optimize MongoDB queries and use Redis caching for high-concurrency traffic
- Bridge Figma designs to backend logic using Next.js and Tailwind CSS
- Maintain monitoring with Nginx & PM2 to ensure 99.9% uptime
WHAT WE'RE LOOKING FOR
- 1+ year of professional experience building and maintaining production applications
- Deep Node.js knowledge: async programming, RESTful API architecture
- MongoDB mastery: schema design, indexing strategies, complex aggregation pipelines
- Hands-on AWS (EC2/S3 minimum) and practical CI/CD pipeline experience
- Proven ability to take a feature from PRD / Figma to stable production deployment
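The "complex aggregation pipelines" requirement above can be pictured as a PyMongo-style pipeline. The collection and field names below are hypothetical; since no MongoDB server is assumed here, the same match/group/sort logic is also computed in plain Python so it can be checked standalone.

```python
# A PyMongo-style aggregation pipeline (collection and field names are
# hypothetical). With a live MongoDB you would run:
#   db.enrollments.aggregate(pipeline)
pipeline = [
    {"$match": {"status": "active"}},
    {"$group": {"_id": "$course", "students": {"$sum": 1}}},
    {"$sort": {"students": -1}},
]

# Plain-Python equivalent of the same match/group/sort, so the logic
# can be verified without a database.
docs = [
    {"course": "algebra", "status": "active"},
    {"course": "algebra", "status": "active"},
    {"course": "physics", "status": "active"},
    {"course": "physics", "status": "dropped"},
]
counts = {}
for d in docs:
    if d["status"] == "active":                               # $match
        counts[d["course"]] = counts.get(d["course"], 0) + 1  # $group / $sum
result = sorted(counts.items(), key=lambda kv: -kv[1])        # $sort
print(result)  # [('algebra', 2), ('physics', 1)]
```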
WHAT WILL MAKE YOU STAND OUT
- Experience maintaining apps with high concurrent user counts
- Comfortable with Nginx configs and Dockerfiles
- Hands-on with payment gateway integration (Cashfree) and webhook handling
- Obsession with maintainable, well-documented, DRY code
AI Usage Policy:
AI tools (Cursor, Copilot, GPT-4) are force multipliers — use them. But you must own your code, reason through architectural trade-offs, and debug without relying solely on AI.
HOW TO APPLY:
- Tell us about the most complex bug you've solved or a backend system you built from scratch
- Share your GitHub profile
- Include at least two live project links showcasing your best work
Your code will directly impact the learning outcomes of thousands of students.
About the Role
Qiro is building the infrastructure powering the next generation of underwriting, credit analytics, and tokenized private credit markets.
We are looking for a Tech Lead — Credit & Blockchain Infrastructure to lead the architecture and execution of our core systems — spanning underwriting engines, credit lifecycle workflows, and blockchain-integrated capital markets infrastructure.
This is not a feature delivery role. This is a system ownership role.
You will be hands-on while leading a growing engineering team in a fast-moving, in-office environment.
What You’ll Own
- Define and evolve the long-term technical vision for Qiro’s programmable credit infrastructure — architecting cohesive systems that unify underwriting engines, credit lifecycle workflows, and tokenized capital markets.
- Own the end-to-end architecture of scalable backend platforms (Python and/or TypeScript), establishing clear boundaries between risk logic, platform APIs, and smart contract integrations while ensuring scalability, auditability, and extensibility.
- Build and standardize configurable underwriting and credit lifecycle systems — from onboarding and drawdown orchestration to repayment waterfalls and early closures — ensuring deterministic, traceable financial state transitions at institutional scale.
- Set integration and infrastructure standards across API contracts, data models, validation layers, and event-driven architectures, enabling reliable synchronization between off-chain services and on-chain contracts.
- Architect secure and resilient blockchain integrations, including wallet interactions, capital flow coordination, and observable on-chain/off-chain state reconciliation.
- Lead high-impact, cross-product initiatives from RFC and system design through production launch — validating architectural decisions, aligning stakeholders, and delivering measurable improvements in reliability, performance, and developer velocity.
- Elevate reliability and operational excellence by defining SLOs, strengthening CI/CD and observability practices, reducing latency, and minimizing systemic risk in financial workflows.
- Build and scale the engineering organization — mentoring senior engineers, shaping hiring standards, driving architecture reviews, and fostering a culture of ownership, craftsmanship, and first-principles thinking.
- Partner closely with Product, Design, Security, and Operations to translate complex lending and capital market mechanics into simple, robust platform primitives.
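The "deterministic, traceable financial state transitions" called out above can be sketched as an explicit state machine: every transition is whitelisted and logged, and anything else is rejected. The states and events below are illustrative assumptions, not Qiro's actual lifecycle model.

```python
# Minimal sketch of a deterministic credit-lifecycle state machine.
# States and transitions are illustrative assumptions only.
TRANSITIONS = {
    ("onboarded", "drawdown"): "active",
    ("active", "repayment"): "active",
    ("active", "final_repayment"): "closed",
    ("active", "early_closure"): "closed",
}

def apply_event(state: str, event: str, audit_log: list) -> str:
    """Apply one lifecycle event; reject anything not explicitly allowed."""
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"illegal transition: {event!r} from {state!r}")
    new_state = TRANSITIONS[key]
    audit_log.append((state, event, new_state))  # traceability
    return new_state

log = []
state = "onboarded"
for event in ["drawdown", "repayment", "final_repayment"]:
    state = apply_event(state, event, log)
print(state)  # closed
```

Making the transition table explicit is what keeps financial state changes deterministic and auditable: the log reconstructs exactly how each position reached its current state.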
Who You Are
- 6-8+ years of engineering experience, with 3+ years in technical leadership roles.
- Strong backend architecture experience in Python and/or TypeScript.
- Comfortable designing distributed systems and financial workflows.
- Experience building fintech, lending, underwriting, trading, or blockchain-integrated systems.
- Strong understanding of API design, state management, and data modeling.
- Able to navigate ambiguity and build 0→1 infrastructure.
- Hands-on builder who leads by writing production-grade code.
We Value
- Experience with underwriting engines or policy-driven decision systems.
- Exposure to smart contracts and blockchain integrations.
- Familiarity with PostgreSQL and event-driven architectures.
- Experience in early-stage or high-growth startups.
- Strong product thinking and ability to translate complex financial logic into scalable systems.
Why Join Qiro
- Lead the architecture of a programmable credit infrastructure platform.
- Join the founding technical leadership team.
- High autonomy and ownership — your decisions shape the company.
- In-office collaboration in Bangalore for speed and iteration.
- Competitive compensation and meaningful equity.
Our Culture
We operate with:
- First-principles thinking
- Technical craftsmanship
- High ownership
- Fast execution with long-term architectural discipline
Key Responsibilities
- Frontend Development: Designing and building responsive, interactive user interfaces using React.js, HTML5, CSS3, and modern JavaScript (ES6+).
- Backend Development: Developing robust, scalable server-side applications and microservices using Java and the Spring Boot framework.
- API Integration: Creating and consuming RESTful APIs to ensure seamless communication between the React frontend and the Java backend.
- Database Management: Designing and optimizing database schemas and queries using SQL (e.g., MySQL, PostgreSQL, Oracle) or NoSQL (e.g., MongoDB) databases.
- State Management: Managing application state in React using tools like Redux, Hooks, or Context API.
- Testing & Quality Assurance: Writing unit and integration tests using frameworks such as JUnit for backend and Jest or React Testing Library for frontend.
- DevOps & Deployment: Collaborating on CI/CD pipelines and using containerization tools like Docker and Kubernetes for application deployment.
Required Skills & Qualifications
- Core Technical Skills:
- Deep proficiency in Java (8+) and the Spring ecosystem (Spring Boot, Spring Security, Spring Data JPA).
- Expertise in React.js workflows, component-based architecture, and hooks.
- Strong understanding of Microservices architecture and cloud platforms (AWS, Azure, or GCP).
- Experience & Education:
- Typically requires a Bachelor's degree in Computer Science or a related field.
- Proven experience (often 1–5+ years depending on seniority) in full-stack development.
- Tools: Version control systems like Git, build tools like Maven or Gradle, and Agile project management tools like Jira.
Typical Salary Ranges (India)
- Freshers: ₹3.8 Lakh to ₹12 Lakh per year.
- Experienced (5+ years): ₹18 Lakh to ₹30 Lakh+ per year.
- Average (General): Approximately ₹29 Lakh per year for high-demand specialized roles.
Job Summary
We are seeking a skilled Python Platform Developer to join our engineering team. You will be responsible for building, optimizing, and maintaining the core backend infrastructure and internal platforms that power our applications. The ideal candidate will build scalable API architectures, enhance data security, and implement automation to improve developer productivity.
Key Responsibilities
- Platform Development: Design, develop, and maintain robust and scalable backend services, API frameworks, and shared libraries using Python.
- Infrastructure Automation: Build and maintain tools for infrastructure automation using technologies such as AWS (Lambda, EC2, S3), Docker, and Infrastructure as Code (IaC) tools like Terraform or CloudFormation.
- Performance Optimization: Improve system performance, low-latency API interactions, and data storage solutions.
- CI/CD Optimization: Develop, maintain, and improve automated testing and continuous integration/continuous deployment (CI/CD) pipelines.
- Collaboration: Work closely with product engineers, DevOps, and frontend developers to define requirements and deliver reliable infrastructure solutions.
- Security & Monitoring: Implement strong security protocols and monitoring solutions (e.g., Prometheus, Datadog) to ensure platform reliability.
Required Skills and Qualifications
- Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
- Experience: 3–5+ years of experience in software development with a heavy focus on Python.
- Core Python: Deep understanding of Python 3.x, object-oriented programming (OOP), and asynchronous programming (e.g., asyncio).
- Frameworks: Hands-on experience with web frameworks like FastAPI, Django, or Flask.
- Cloud Platforms: Experience with AWS or GCP services.
- Tools: Proficient with Git, Docker, and CI/CD pipelines.
- Database: Strong knowledge of SQL and database management.
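The asynchronous-programming requirement above (asyncio) boils down to running I/O-bound work concurrently on one thread. A minimal sketch, with `asyncio.sleep` standing in for real network or database calls:

```python
import asyncio

# Minimal asyncio sketch: run several I/O-bound calls concurrently.
# asyncio.sleep stands in for a real network or database call.
async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main() -> list:
    # gather schedules all coroutines concurrently, so total wall time
    # is roughly the longest delay, not the sum of delays.
    return await asyncio.gather(
        fetch("users", 0.02),
        fetch("orders", 0.01),
        fetch("events", 0.03),
    )

results = asyncio.run(main())
print(results)  # ['users:done', 'orders:done', 'events:done']
```

Note that `gather` returns results in argument order regardless of which coroutine finishes first.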
Preferred Skills
- Experience with serverless architectures.
- Knowledge of Kubernetes.
- Experience in a DevOps or Site Reliability Engineering (SRE) role.
DevOps Engineer
Location: Bangalore office
About Peliqan
Peliqan is an all-in-one data platform combining ELT/ETL pipelines, a built-in data warehouse, SQL and low-code Python transformations, reverse ETL, and AI-powered data activation. We connect 250+ data sources and serve enterprise teams, consultants, and SaaS companies. SOC 2 Type II certified and GDPR compliant.
The Role
Own and evolve the infrastructure powering Peliqan's multi-tenant data platform. You'll manage Kubernetes clusters, cloud resources, CI/CD pipelines, and monitoring — keeping everything reliable, secure, and scalable. You'll be the go-to person for infrastructure support across the engineering team.
Responsibilities
Manage and optimise Kubernetes clusters running production workloads — data pipelines, APIs, and customer-facing services.
Maintain Docker-based local development environments for the engineering team.
Administer cloud infrastructure on AWS and Google Cloud (compute, storage, networking, managed databases).
Build and maintain CI/CD pipelines for automated testing, building, and deploying across staging and production.
Set up and manage monitoring, alerting, and logging for platform health and incident response.
Manage release processes — deployments, rollbacks, and release strategies.
- Maintain infrastructure-as-code using Helm charts.
- Support security hardening and compliance efforts (SOC 2, GDPR).
Requirements
3+ years in a DevOps, SRE, or Infrastructure Engineering role.
Strong hands-on experience with Kubernetes and Helm charts.
Deep familiarity with Docker for containerisation and local dev workflows.
Production experience with AWS and/or Google Cloud.
- Proficiency in Python and Bash scripting for automation and tooling.
- Solid grasp of DevOps principles: infrastructure-as-code, GitOps, observability, continuous delivery.
- Experience with CI/CD platforms (GitHub Actions, GitLab CI, or similar).
Nice to Have
- Experience supporting multi-tenant SaaS platforms or data infrastructure at scale.
- Knowledge of PostgreSQL, MySQL, or cloud-managed database administration.
- Exposure to security compliance frameworks (SOC 2, ISO 27001, GDPR).

Job Title: Consultant – Enterprise Application Development
Location: Bengaluru (Hybrid / On-site)
Engagement: Full-Time
Experience: 10 – 15 years preferred
About Us: Introducing VTT, a comprehensive mobility service provider catering to diverse multinational sectors like IT/ITES, KPO/BPO, Financial, Pharma, and more across Indian cities. Our “Managed Mobility Program” includes Fleet Management, Technology, Resource Management, Car Rentals, Logistics, and Special Services (Ambulance and PWD vehicles). Trusted by Fortune companies such as Cisco, Morgan Stanley, Wells Fargo, Google, PWC, and others, we pride ourselves on leveraging expertise and cutting-edge technology for safe, efficient, and uninterrupted service delivery. With a commitment to excellence, we ensure best-in-class standards for all our clients. Trip to school is now timely, comfy and secure! Our well-maintained fleet is here to enrich your child’s commute, keeping students punctual and safe thanks to GPS tracking paired with well-trained drivers. Our routes are carefully planned, our drivers attentive, and everything hassle-free.
Role Overview
We are looking for a seasoned Consultant with comprehensive expertise in enterprise-level application development across backend, frontend, mobile, DevOps, and cloud. The role demands a strong architectural mindset combined with hands-on execution. The Consultant will also play a critical role in understanding the current system architecture end-to-end, driving technical improvements, building the tech team foundation, and establishing structured technical documentation.
Key Responsibilities
• Understand the complete architecture of the existing systems, including web, mobile, backend services, and cloud environment.
• Provide hands-on leadership across backend, frontend, mobile, DevOps, and cloud infrastructure.
• Architect and optimize enterprise-grade applications for scalability, security, performance, and reliability.
• Conduct technical due diligence on current systems and propose improvements or refactoring plans.
• Build the foundation for the internal engineering team including hiring support, role definitions, and best-practice processes.
• Drive engineering workflows including coding standards, branching strategy, CI/CD, monitoring, and release management.
• Create comprehensive technical documentation covering system architecture, API specs, deployment playbooks, and SOPs.
• Review code and provide mentorship to engineering resources.
• Coordinate with product and business teams to translate requirements into technical design and actionable development roadmap.
• Troubleshoot and resolve deep-stack issues during development or production.
Technical Expertise Required
Backend
• Java / Spring Boot
• Node.js
• Microservices architecture
• REST / GraphQL
Frontend
• React.js
• Responsive UI, component-based architecture, state management
Mobile
• Flutter
• React Native
Cloud & DevOps
• AWS (ECS / EKS / EC2 / RDS / Lambda / S3 / IAM / CloudWatch etc.)
• CI/CD pipelines (GitHub Actions / Jenkins / GitLab CI or equivalent)
• Docker / Kubernetes
• Infrastructure-as-code (Terraform / CloudFormation)
Database
• MongoDB
• Knowledge of PostgreSQL / MySQL is an added advantage
Professional Attributes
• Strong architectural thinking with the ability to simplify complex systems.
• Excellent communication and stakeholder management skills.
• Ability to work independently without constant supervision.
• Capability to mentor, lead, and build an engineering team from scratch.
• Process-driven mindset with a focus on best practices and documentation.
Deliverables
• Architectural understanding and documentation of current systems.
• Recommendations and implementation plan for system upgrades or restructuring.
• Establishment of core engineering processes and standards.
• Hiring support and technical evaluation of developers.
Job Title : Golang Backend Developer
Experience : 3+ Years
Location : Bangalore (Work From Office)
Notice Period : Immediate to 15 Days (Strict)
🚀 About the Role :
We are looking for a Backend Developer with strong Golang expertise to build scalable, high-performance systems. You will play a key role in designing microservices, handling concurrent workloads, and developing robust backend architectures for production-scale applications.
🔥 Mandatory Skills :
Strong hands-on experience in Golang, Microservices Architecture, REST APIs, Concurrency (goroutines & channels), PostgreSQL/MySQL, Redis, Messaging Systems (Kafka/RabbitMQ/SQS), AWS/GCP, Docker & Kubernetes, and CI/CD pipelines.
🛠️ Key Responsibilities :
- Design, develop, and maintain scalable backend services using Golang.
- Build high-performance REST APIs and microservices.
- Develop concurrent and distributed systems using goroutines and channels.
- Implement event-driven and asynchronous architectures.
- Optimize system performance, latency, and database efficiency.
- Integrate messaging systems and caching layers for scalability.
- Collaborate with cross-functional teams for end-to-end delivery.
- Ensure high code quality, testing, and system reliability.
- Monitor, debug, and enhance production systems.
Required Skills & Qualifications :
- Strong hands-on experience in Golang (must-have).
- Solid understanding of Concurrency in Go (goroutines, channels, worker pools).
- Experience with Microservices Architecture.
- Strong knowledge of RESTful API development.
- Proficiency in Databases : PostgreSQL / MySQL / MongoDB.
- Hands-on experience with Redis (caching).
- Experience with Messaging Systems: Kafka / RabbitMQ / SQS.
- Hands-on experience with AWS or GCP.
- Experience with Docker & Kubernetes.
- Familiarity with CI/CD pipelines (GitHub Actions, Jenkins, etc.).
- Strong understanding of Data Structures, Algorithms, and System Design.
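The worker-pool pattern referenced above is expressed in Go with goroutines draining a channel; as a language-neutral illustration of the same idea (and to keep this document's examples in Python), here is the equivalent shape with a fixed thread pool. The `handle` function is a hypothetical stand-in for real work.

```python
from concurrent.futures import ThreadPoolExecutor

# Worker-pool sketch: a fixed pool of workers drains a queue of jobs,
# the same shape Go expresses with goroutines reading from a channel.
def handle(job: int) -> int:
    # stand-in for real work (an API call, a DB write, ...)
    return job * job

jobs = range(8)
with ThreadPoolExecutor(max_workers=3) as pool:  # 3 concurrent workers
    results = list(pool.map(handle, jobs))       # map preserves job order
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In Go the pool size caps how many goroutines touch a downstream resource at once; `max_workers` plays the same role here.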
Good to Have :
- Experience with gRPC-based microservices.
- Familiarity with monitoring tools like Prometheus, Grafana.
- Exposure to high-scale distributed systems.
- Experience with event-driven architectures.
- Knowledge of security practices (JWT, OAuth2, RBAC).
What We’re Looking For :
- Strong problem-solving and debugging skills.
- Ownership mindset with end-to-end feature delivery.
- Ability to write clean, efficient, and maintainable code.
- Comfortable working in a fast-paced, high-growth environment.
You can also register yourself on the below platform to proceed further :
Job Description - Manager Sales
Minimum 15 years of experience.
Should have experience selling the Cloud IT SaaS product portfolio that Savex deals with.
Team management experience, leading a cloud business including teams.
Sales manager - Cloud Solutions
Reporting to Sr Management
Good personality
Distribution background
Keen on Channel partners
Good database of OEMs and channel partners.
Age group - 35 to 45yrs
Male Candidate
Good communication
B2B Channel Sales
Location - Bangalore
If interested, reply with your CV and the details below:
Total exp -
Current ctc -
Exp ctc -
Np -
Current location -
Qualification -
Total exp Channel Sales -
Which Cloud IT products have you sold?
What annual revenue have you generated through sales?
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 4+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
Role & Responsibilities:
We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.
Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT workflows
- Work with Dataform or DBT to implement transformation logic and data models
- Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
- Support data migration initiatives and data mesh architecture patterns
- Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
- Apply data governance and quality best practices across the data lifecycle
- Troubleshoot pipeline issues and drive proactive monitoring and resolution
Ideal Candidate:
- Strong Data Engineer Profile
- Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
- Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
- Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
- Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
- Must have strong SQL skills with experience in writing complex queries and optimizing performance.
- Must have programming experience in Python and/or SQL for data processing.
- Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
- Exposure to data migration projects and/or data mesh architecture concepts.
- Experience with Spark / PySpark or large-scale data processing frameworks.
- Experience working in product-based companies or data-driven environments.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
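The "complex queries and optimizing performance" requirement above often means window-function work, such as keeping only the latest record per key during a pipeline run. A sketch of that pattern, with SQLite standing in for BigQuery and hypothetical table/column names:

```python
import sqlite3

# Common pipeline task: deduplicate to the latest record per key using
# ROW_NUMBER(). SQLite stands in for BigQuery; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_customers (id INTEGER, name TEXT, updated_at TEXT);
INSERT INTO raw_customers VALUES
  (1, 'Asha v1', '2024-01-01'),
  (1, 'Asha v2', '2024-03-01'),
  (2, 'Ravi',    '2024-02-01');
""")

latest = conn.execute("""
SELECT id, name FROM (
  SELECT id, name,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
  FROM raw_customers
)
WHERE rn = 1
ORDER BY id
""").fetchall()
print(latest)  # [(1, 'Asha v2'), (2, 'Ravi')]
```

In BigQuery the same `ROW_NUMBER() OVER (PARTITION BY ...)` form applies; performance tuning there is mostly about partitioning and clustering the underlying table so the window scan stays cheap.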
NOTE:
- An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates will be expected to be available on these dates. Only immediate joiners are considered.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for 1000s of renewable energy assets and grid operators. We are currently serving customers across India, Bhutan and the Middle East with expansion planned in US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● Bachelor's or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
● Proficient in Python programming with frameworks like Django/FastAPI/Flask, and Java frameworks like Spring, Hibernate, Spring Boot, etc.
● Ability to debug and resolve technical issues that arise during development or after deployment.
● Experience in databases including MySQL and NoSQL
● Experience in designing, developing and maintaining high availability systems.
● Experience in MVC pattern, Tomcat, Git, and Jira.
● Experience working with AWS cloud platform.
● 4-5 years of experience building highly available systems
● 2-3 years experience leading a team of engineers and analysts
● Strong analytical and data driven approach to problem solving
Advanced Software Architect
Position Responsibilities :
- Lead the architecture and development of AI-powered, distributed systems that meet enterprise-grade performance and security standards.
- Leverage AI tools for code generation, architectural design, and documentation to accelerate delivery and improve quality.
- Design, build, and maintain services using Python, Java, and Node.js, following clean-code and secure design principles.
- Develop agentic AI-based tools, domain-specific copilots, and developer productivity enhancements.
- Collaborate with cross-functional teams to define modular, scalable, and compliant architecture patterns.
- Conduct technical design reviews and produce detailed documentation, including system specifications, API docs, and architecture diagrams.
- Integrate AI solutions into CI/CD pipelines, ensuring observability, automated testing, and deployment standards are met.
- Implement robust monitoring and performance engineering practices to maintain high-quality deployments.
- Continuously evaluate emerging AI technologies and integrate them into development workflows for maximum impact.
- Champion best practices in security, automation, and performance optimization across the organization.
Qualifications :
- 8+ years in software engineering with full-stack or backend development in Python, Java, and/or Node.js.
- 3+ years with AI tools for development, prototyping, or documentation tasks.
- Experience with cloud-native development and containerized deployment (Docker, Kubernetes).
- Knowledge of AI integration patterns, vector stores, prompt engineering, and RAG pipelines.
- Ability to design software architecture using sequence diagrams, ERDs, data models, and threat models.
- Comfortable with Gen AI-first environments and working with remote Agile teams.
Preferred Qualifications
- Experience building AI copilots or developer tools using OpenAI/Claude SDKs, LangChain, or similar frameworks.
- Experience working in a fast-paced, AIDLC environment, with a strong understanding of CI/CD practices.
- Familiarity with GitHub Actions, Argo Workflows, Terraform, and monitoring/observability tools.
- Containerization and Orchestration: proficiency in Docker and Kubernetes.
- Cloud Platforms: Experience with cloud computing platforms such as AWS, Azure, or OCI Cloud.
🚨 We’re Building a “Top 1% Engineering Org”
We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.
Think:
→ Rewriting legacy systems into AI-native architectures
→ Embedding LLMs + Agentic AI into core workflows
→ Reimagining platforms, infra, and data systems for the next decade
This is the kind of shift you’d expect from Google, Microsoft, or Meta —
Except you get to build it from day 0 → scale it globally.
About the Role / Team
We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.
This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.
You will be working on:
- Agentic AI systems & LLM-powered workflows
- Distributed, scalable backend systems
- Enterprise-grade AI platforms
- Automation-first engineering environments
🚀 The Mandate
Own and evolve the technical backbone of an AI-first enterprise platform.
You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.
🧩 What You’ll Do
- Architect large-scale distributed systems powering AI-driven workflows
- Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
- Redesign legacy systems into scalable, modular, AI-native architectures
- Drive system design excellence across teams (APIs, infra, observability, reliability)
- Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
- Mentor senior engineers and influence engineering culture/org standards
- Partner with product, data, and leadership on long-term technical strategy
🧠 What We’re Looking For
- Proven track record building high-scale backend or platform systems
- Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
- Strong exposure to data systems, infrastructure, and real-time architectures
- Experience or strong interest in LLMs, GenAI, or AI system design
- Exceptional system design, abstraction, and problem-solving ability
- High ownership mindset — you think in terms of systems, not tickets
- Strong coding skills in Python / Java / Go / Node.js
- Solid understanding of data structures, system design basics, and backend architecture
- Experience building scalable APIs and services
- Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
- Strong debugging, problem-solving, and ownership mindset
- Solve hard system problems (latency, scale, reliability)
- Drive cross-team technical decisions and standards
- Mentor senior engineers and influence org-wide architecture
- Design large-scale distributed systems and backend platforms
- Mentorship & Technical Leadership
- Expertise in system design, scalability, and performance optimization
Nice to Have
- Experience integrating LLMs, vector databases, or AI pipelines
- Contributions to architecture at scale
- Experience with Agentic AI / LLM orchestration frameworks
- Background in product engineering or platform companies
- Exposure to global-scale systems (millions of users / high throughput)
🔥 What Sets You Apart
- Built platforms used by millions of users / high-throughput systems
- Experience with event-driven systems, stream processing, or infra platforms
- Prior work on AI/ML platforms, model serving, or intelligent systems
Job Details
- Job Title: Director of Engineering
- Industry: SaaS
- Function: Information Technology
- Experience Required: 9-14 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, AWS, NodeJS, MongoDB, React.js, WebGL, Three.js, AI/ML, Docker, Kubernetes
Criteria
Candidates must have 9+ years of engineering experience, with 3–4 years in technical leadership
Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
Ability to design scalable architectures for high-performance systems.
Should have AI/ML deployment experience
Strong 3D graphics/WebGL/Three.js knowledge.
Candidates should be from SaaS/software/IT-services startups or scale-up companies only
Job Description
The Role:
Company is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.
This is not a pure management role - expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, taking full ownership of technical vision and long-term strategy.
What You’ll Own:
1. Technical Leadership & Architecture
● Architect company’s full-stack platform across frontend, backend, infrastructure, and AI.
● Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.
● Make decisions on stack, scalability patterns, architecture, and technical debt.
● Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.
● Lead architectural discussions, design reviews, and set engineering standards.
2. Hands-On Development
● Write production-grade code across frontend, backend, APIs, and cloud infra.
● Build critical features and core system components independently.
● Debug complex systems and optimize performance end-to-end.
● Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.
● Build scalable backend services for large-scale asset processing and real-time pipelines.
● Develop WebGL/Three.js rendering and AR workflows.
3. Team Building & Engineering Management
● Hire and grow a team of 5–8 engineers initially (scaling to 15–20).
● Establish engineering culture, values, and best practices.
● Build career frameworks, performance systems, and growth plans.
● Conduct 1:1s, mentor engineers, and drive continuous improvement.
● Set up processes for agile execution, deployments, and incident response.
4. Product & Cross-Functional Collaboration
● Work with the founder and product team on roadmap, feasibility, and prioritization.
● Translate product requirements into technical execution plans.
● Collaborate with design for UX quality and technical alignment.
● Support sales and customer success with integrations and technical discussions.
● Contribute technical inputs to product strategy and customer-facing initiatives.
5. Engineering Operations & Infrastructure
● Own CI/CD, testing frameworks, deployments, and automation.
● Create monitoring, logging, and alerting setups for reliability.
● Manage AWS infrastructure with a focus on cost and performance.
● Build internal tools, documentation, and developer workflows.
● Ensure enterprise-grade security, compliance, and reliability.
Tech Stack:
1. Frontend
React.js, Next.js, TypeScript, WebGL, Three.js
2. Backend
Node.js, Python, Express/FastAPI, REST, GraphQL
3. AI/ML
PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines
4. 3D & Graphics
Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization
5. Databases
PostgreSQL, MongoDB, Redis, vector databases
6. Cloud & Infra
AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes
CI/CD: GitHub Actions
Monitoring: Datadog, Sentry
What We’re Looking For:
1. Must-Haves
● 9+ years of engineering experience, with 3–4 years in technical leadership.
● Deep full-stack experience with strong system design fundamentals.
● Proven success building products from 0→1 in fast-paced environments.
● Hands-on expertise with React/Next.js, Node.js/Python, and AWS.
● Ability to design scalable architectures for high-performance systems.
● Strong people leadership with experience hiring and mentoring teams.
● Ready to code, review, design, and lead from the front.
● Startup mindset: fast execution, problem-solving, ownership.
2. Highly Desirable
● AI/ML deployment experience (CV, generative AI, 3D reconstruction).
● Strong 3D graphics/WebGL/Three.js knowledge.
● Experience with real-time systems, rendering optimizations, or large-scale pipelines.
● Background in B2B SaaS, XR, gaming, or immersive tech.
● Experience scaling engineering teams from 5 → 20+.
● Open-source contributions or technical content creation.
● Experience working closely with founders or executive leadership.
Why Company:
● Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.
● Build from day zero – architecture, team, and culture.
● Path to CTO as the company scales.
● High autonomy to drive technical decisions.
● Direct founder collaboration on product vision.
● High ownership, high-growth environment.
● Backed by global leaders: Microsoft, Google, NVIDIA, AWS.
Location & Work Culture:
● Location: HSR Layout, Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: High-intensity, high-integrity, engineering-first
● Team: Young, ambitious, technically strong
Minimum 5 years in backend engineering with strong system design expertise
Experience building scalable systems from scratch
Expert-level proficiency in Node.js
Deep understanding of distributed systems
Strong NoSQL design skills
Hands-on AWS cloud experience
Proven leadership and mentoring capability
Preferred candidates from SaaS/software/IT-services startups or scale-up companies
Job Details
- Job Title: Full Stack Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-7 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, NodeJS, MongoDB, RESTful APIs, React.js
Criteria
Candidates should have at least 4 years of professional experience as a Full Stack Engineer
Hands-on experience with both React.js and Node.js
Solid understanding of MongoDB
Should have experience in RESTful APIs
Should be from a startup or scale-up company
Should have good experience in Typescript
Strong understanding of asynchronous programming patterns
Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies
Job Description
The Role:
We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for the company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.
You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.
What You’ll Own
1. Full Stack Development
● Design, develop, test, and deploy robust and scalable web applications.
● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.
● Contribute to frontend feature development and integration.
● Participate in feature planning, estimation, and execution.
2. Backend & API Engineering
● Design and develop RESTful APIs and backend services.
● Implement asynchronous workflows and scalable microservice architectures.
● Ensure performance, reliability, and security of backend systems.
● Implement authentication, authorization, and data protection best practices.
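The asynchronous workflows mentioned above can be sketched with Python's asyncio (the stack here is Node.js, but the fan-out pattern is identical); `fake_fetch` and `load_dashboard` are hypothetical names standing in for real HTTP or database calls:

```python
import asyncio

# Sketch of an asynchronous fan-out: fetch several backend resources
# concurrently instead of sequentially.
async def fake_fetch(name, delay):
    await asyncio.sleep(delay)   # stands in for network/database latency
    return f"{name}:ok"

async def load_dashboard():
    # gather() runs the awaitables concurrently; total wall time is
    # roughly the slowest call, not the sum of all calls.
    results = await asyncio.gather(
        fake_fetch("user", 0.01),
        fake_fetch("orders", 0.02),
        fake_fetch("inventory", 0.015),
    )
    return results

results = asyncio.run(load_dashboard())
assert results == ["user:ok", "orders:ok", "inventory:ok"]
```

In Node.js the equivalent shape is `await Promise.all([...])`; the design point is the same in both runtimes.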
3. Database Design & Optimization
● Design and manage MongoDB schemas using Mongoose.
● Optimize queries and database performance for scale.
● Ensure data integrity and efficient data access patterns.
4. Frontend Collaboration & Integration
● Collaborate with frontend developers to integrate React components and APIs seamlessly.
● Ensure responsive, high-performing application behavior.
5. System Design & Scalability
● Contribute to system architecture and technical design discussions.
● Design scalable, maintainable, and future-ready solutions.
● Optimize applications for speed and scalability.
6. Product & Cross-Functional Collaboration
● Work closely with product and design teams to deliver high-quality features in rapid iterations.
● Participate in the full development lifecycle—from concept to deployment and maintenance.
7. Code Quality & Best Practices
● Write clean, testable, and maintainable code.
● Follow Git-based version control and code review best practices.
● Contribute to improving engineering standards and workflows.
What We’re Looking For
Must-Haves
● 4+ years of professional experience as a Full Stack Engineer or similar role.
● Strong proficiency in JavaScript and TypeScript.
● Hands-on experience with Node.js and Express.js.
● Solid understanding of MongoDB and Mongoose.
● Experience building and consuming RESTful APIs and microservices.
● Strong understanding of asynchronous programming patterns.
● Good grasp of system design principles and application architecture.
● Experience with Git and version control best practices.
● Bachelor’s degree in Computer Science, Engineering, or a related field.
Good-to-Have / Preferred
● Frontend development experience with React.js.
● Exposure to Three.js or similar 3D/visualization libraries.
● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).
● Knowledge of Docker and containerization workflows.
● Experience with testing frameworks (Jest, Mocha, etc.).
● Familiarity with CI/CD pipelines and automated deployments.
Tools You’ll Use
● Backend: Node.js, Express.js, TypeScript
● Frontend: React.js (preferred)
● Database: MongoDB, Mongoose
● Version Control: Git, GitHub / GitLab
● Cloud & DevOps: AWS / GCP / Azure, Docker
● Collaboration: Google Workspace, Notion, Slack
Key Metrics You’ll Own
● Code quality, performance, and scalability
● Timely delivery of features and releases
● System reliability and reduction in production issues
● Contribution to architectural improvements
Why company
● Work on impactful, product-driven tech platforms.
● High-ownership role with end-to-end engineering exposure.
● Opportunity to work with modern technologies and evolving architectures.
● Collaborative startup culture with strong learning and growth opportunities.
Job Details
- Job Title: Senior Backend Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-8 years
- Working Days: 6 days a week (5 days in office, Saturdays WFH)
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL
Criteria
· Minimum 5 years in backend engineering with strong system design expertise
· Experience building scalable systems from scratch
· Expert-level proficiency in Node.js
· Deep understanding of distributed systems
· Strong NoSQL design skills
· Hands-on AWS cloud experience
· Proven leadership and mentoring capability
· Preferred candidates from SaaS/software/IT-services startups or scale-up companies
Job Description
The Role:
What You’ll Build:
1. System Architecture & Design
● Architect highly scalable backend systems from the ground up
● Define technology choices: frameworks, databases, queues, caching layers
● Evaluate microservices vs monoliths based on product stage
● Design REST, GraphQL, and real-time WebSocket APIs
● Build event-driven systems for asynchronous processing
● Architect multi-tenant systems with strict data isolation
● Maintain architectural documentation and technical specs
2. Core Backend Services
● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions
● Create 3D asset processing pipelines for uploads, conversions, and optimization
● Develop distributed job workers for CPU/GPU-intensive tasks
● Build authentication/authorization systems (RBAC)
● Implement billing, subscription, and usage metering
● Build secure webhook systems and third-party integration APIs
● Create real-time collaboration features via WebSockets/SSE
3. Data Architecture & Databases
● Design scalable schemas for 3D metadata, XR sessions, and analytics
● Model complex product catalogs with variants and hierarchies
● Implement Redis-based caching strategies
● Build search and indexing systems (Elasticsearch/Algolia)
● Architect ETL pipelines and data warehouses
● Implement sharding, partitioning, and replication strategies
● Design backup, restore, and disaster recovery workflows
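One common Redis-based caching strategy is cache-aside (read through the cache, fall back to the database, then populate). A minimal sketch, using a plain dict with TTLs as a stand-in for a Redis client; `FakeCache` and `fetch_product` are illustrative names, not part of any listed stack:

```python
import time

class FakeCache:
    """In-memory stand-in for a Redis client; a real deployment would use
    redis-py with the same get / set-with-TTL semantics."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]     # expired: behave like a miss
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def fetch_product(cache, db, product_id, ttl_seconds=60):
    """Cache-aside read: try the cache first, fall back to the DB, then populate."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = db[product_id]           # simulated database lookup
    cache.set(key, value, ttl_seconds)
    return value

cache = FakeCache()
db = {42: {"name": "Sofa", "variants": 3}}
first = fetch_product(cache, db, 42)         # miss -> reads DB, fills cache
db[42] = {"name": "changed"}                 # DB changes...
second = fetch_product(cache, db, 42)        # ...but cache still serves old value until TTL
assert first == second == {"name": "Sofa", "variants": 3}
```

The TTL bounds staleness; invalidation-on-write is the usual refinement when stale reads are unacceptable.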
4. Scalability & Performance
● Build systems designed for 10x–100x traffic growth
● Implement load balancing, autoscaling, and distributed processing
● Optimize API response times and database performance
● Implement global CDN delivery for heavy 3D assets
● Build rate limiting, throttling, and backpressure mechanisms
● Optimize storage and retrieval of large 3D files
● Profile and improve CPU, memory, and network performance
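Rate limiting and throttling are often implemented as a token bucket: bursts are allowed up to a capacity, and tokens refill at a steady rate. A minimal per-client sketch (names hypothetical; production systems usually keep buckets in Redis so limits hold across instances):

```python
import time

class TokenBucket:
    """Allows bursts up to `capacity`; refills at `rate` tokens per second."""
    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False             # over limit: caller should return HTTP 429

bucket = TokenBucket(rate=2, capacity=5, now=0.0)   # 2 req/s, burst of 5
results = [bucket.allow(now=0.0) for _ in range(6)]
assert results == [True] * 5 + [False]              # burst exhausted after 5 requests
assert bucket.allow(now=1.0)                        # 1 s later, tokens have refilled
```

Backpressure is the complementary mechanism on the producer side: rather than rejecting, slow the upstream down when queues grow.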
5. Infrastructure & DevOps
● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)
● Build CI/CD pipelines for automated deployments and rollbacks
● Use IaC tools (Terraform/CloudFormation) for infra provisioning
● Set up monitoring, logging, and alerting systems
● Use Docker + Kubernetes for container orchestration
● Implement security best practices for data, networks, and secrets
● Define disaster recovery and business continuity plans
6. Integration & APIs
● Build integrations with Shopify, WooCommerce, Magento
● Design webhook systems for real-time events
● Build SDKs, client libraries, and developer tools
● Integrate payment gateways (Stripe, Razorpay)
● Implement SSO and OAuth for enterprise customers
● Define API versioning and lifecycle/deprecation strategies
7. Data Processing & Analytics
● Build analytics pipelines for engagement, conversions, and XR performance
● Process high-volume event streams at scale
● Build data warehouses for BI and reporting
● Develop real-time dashboards and insights systems
● Implement analytics export pipelines and platform integrations
● Enable A/B testing and experimentation frameworks
● Build personalization and recommendation systems
Technical Stack:
1. Backend Languages & Frameworks
● Primary: Node.js (Express, NestJS), Python (FastAPI, Django)
● Secondary: Go, Java/Kotlin (Spring)
● APIs: REST, GraphQL, gRPC
2. Databases & Storage
● SQL: PostgreSQL, MySQL
● NoSQL: MongoDB, DynamoDB
● Caching: Redis, Memcached
● Search: Elasticsearch, Algolia
● Storage/CDN: AWS S3, CloudFront
● Queues: Kafka, RabbitMQ, AWS SQS
3. Cloud & Infrastructure:
● Cloud: AWS (primary), GCP/Azure (nice to have)
● Compute: EC2, Lambda, ECS, EKS
● Infrastructure: Terraform, CloudFormation
● CI/CD: GitHub Actions, Jenkins, CircleCI
● Containers: Docker, Kubernetes
4. Monitoring & Operations
● Monitoring: Datadog, New Relic, CloudWatch
● Logging: ELK Stack, CloudWatch Logs
● Error Tracking: Sentry, Rollbar
● APM tools
5. Security & Auth
● Auth: JWT, OAuth 2.0, SAML
● Secrets: AWS Secrets Manager, Vault
● Security: Encryption (at rest/in transit), TLS/SSL, IAM
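For illustration, the HS256 signing scheme behind JWTs can be reproduced with the standard library alone; a production service would use a vetted library such as PyJWT and also validate claims like `exp`, which this sketch omits:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload, secret):
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token, secret):
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None                      # signature mismatch: reject
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

secret = b"demo-secret"
token = sign_jwt({"sub": "user-1", "role": "admin"}, secret)
assert verify_jwt(token, secret) == {"sub": "user-1", "role": "admin"}
assert verify_jwt(token, b"wrong-secret") is None
```

The constant-time `hmac.compare_digest` matters here: a naive `==` comparison can leak signature bytes through timing.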
What We’re Looking For:
1. Must-Haves
● 5+ years in backend engineering with strong system design expertise
● Experience building scalable systems from scratch
● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)
● Deep understanding of distributed systems and microservices
● Strong SQL/NoSQL design skills with performance optimization
● Hands-on AWS cloud experience
● Ability to write high-quality production code daily
● Experience building and scaling RESTful APIs
● Strong understanding of caching, sharding, horizontal scaling
● Solid security and best-practice implementation experience
● Proven leadership and mentoring capability
2. Highly Desirable
● Experience with large file processing (3D, video, images)
● Background in SaaS, multi-tenancy, or e-commerce
● Experience with real-time systems (WebSockets, streams)
● Knowledge of ML/AI infrastructure
● Experience with HA systems, DR planning
● Familiarity with GraphQL, gRPC, event-driven systems
● DevOps/infrastructure engineering background
● Experience with XR/AR/VR backend systems
● Open-source contributions or technical writing
● Prior senior technical leadership experience
Technical Challenges You’ll Solve:
● Designing large-scale 3D asset processing pipelines
● Serving XR content globally with ultra-low latency
● Scaling from thousands to millions of daily requests
● Efficiently handling CPU/GPU-heavy workloads
● Architecting multi-tenancy with complete data isolation
● Managing billions of analytics events at scale
● Building future-proof APIs with backward compatibility
Why company:
● Architectural Ownership: Build foundational systems from scratch
● Deep Technical Work: Solve distributed systems and scaling challenges
● Hands-On Impact: Design and code mission-critical infrastructure
● Diverse Problems: APIs, infra, data, ML, XR, asset processing
● Massive Scale Opportunity: Build systems for exponential growth
● Modern Stack and best practices
● Product Impact: Your architecture directly powers millions of users
● Leadership Opportunity: Shape engineering culture and direction
● Learning Environment: Stay at the forefront of backend engineering
● Backed by AWS, Microsoft, Google
Location & Work Culture:
● Location: Bengaluru
● Schedule: 6 days a week (5 days in office, Saturdays WFH)
● Culture: Builder mindset, strong ownership, technical excellence
● Team: Small, highly skilled backend and infra team
● Resources: AWS credits, latest tooling, learning budget
About Us
Euphoric Thought Technologies is a fast-growing technology company focused on delivering scalable, high-performance digital solutions. We are looking for a skilled Backend Developer to join our dynamic team and contribute to building robust and efficient systems.
Key Responsibilities
Design, develop, and maintain scalable backend services and APIs
Write clean, maintainable, and efficient code
Collaborate with frontend developers, DevOps, and product teams
Optimize applications for maximum speed and scalability
Troubleshoot, debug, and upgrade existing systems
Implement security and data protection best practices
Participate in code reviews and technical discussions
Required Skills & Qualifications
4–5 years of hands-on experience in backend development
Strong proficiency in at least one backend language, such as Java
Experience with frameworks like Spring Boot, Django, Express.js, etc.
Good understanding of RESTful APIs and Microservices architecture
Strong experience with databases (MySQL, PostgreSQL, MongoDB)
Familiarity with version control systems (Git)
Experience with cloud platforms (AWS/Azure/GCP) is a plus
Knowledge of Docker, Kubernetes, CI/CD pipelines is an added advantage
Strong problem-solving and analytical skills
We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.
You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.
Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.
WHAT YOU BRING:
You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.
You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.
WHAT YOU WILL DO:
Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:
- Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
- Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns.
- Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
- Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
- Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
- Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
- Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
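As a rough illustration of the RAG pattern above, the sketch below ranks documents against a query and grounds a prompt in the top matches. It uses a toy bag-of-words similarity so it stays self-contained; a real pipeline would use a learned embedding model and a vector database, and the function names are hypothetical:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': bag-of-words term counts (illustration only)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top-k."""
    q = embed(query)
    scored = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Retrieval-augmented prompt: ground the LLM call in retrieved context."""
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Shipping is free for orders above $50.",
]
top = retrieve("how long do refunds take", docs, k=1)
assert top == ["Refunds are processed within 5 business days."]
```

The production concerns the role lists (context-window management, caching, evaluation) all hang off this loop: what gets retrieved, how much of it fits, and how you measure whether the grounded answer was correct.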
BASIC QUALIFICATIONS:
- 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
- Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
- Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
- Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
- Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
- Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
- Understanding of vector databases, embedding models, and semantic search implementations.
- Comfortable working in fast-moving, startup-style environments with high ownership.
PREFERRED QUALIFICATIONS:
- Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
- Familiarity with ML ops tools and practices for production AI systems.
- Prior work on conversational AI, chatbots, or virtual assistants at scale.
- Experience with real-time systems, WebSockets, and streaming responses.
- Knowledge of browser automation, web scraping, or RPA technologies.
- Experience with multi-tenant SaaS architectures and enterprise security requirements.
- Contributions to open-source AI/LLM projects or published work in the field.
WHAT WE OFFER:
- Competitive salary + meaningful equity.
- High ownership and the opportunity to shape product direction.
- Direct impact on cutting-edge AI product development.
- A collaborative team that values clarity, autonomy, and velocity.
About the Role
We are looking for an experienced Senior Backend Developer to design and build scalable, secure, and high-performance backend systems. The ideal candidate will have deep expertise in Python/Django, microservices architecture, and cloud technologies, along with strong problem-solving skills and leadership capabilities.
Key Responsibilities
•Design and develop backend services using Django and Python.
•Architect and implement microservices-based solutions for scalability and maintainability.
•Work with PostgreSQL and Redis for efficient data storage and caching.
•Build and maintain RESTful APIs and ensure robust API design principles.
•Implement system design best practices for high availability and fault tolerance.
•Containerize applications using Docker and manage deployments with Kubernetes.
•Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.
•Apply security best practices to protect data and application integrity.
•Collaborate with frontend, QA, and DevOps teams for seamless delivery.
•Mentor junior developers and conduct code reviews to maintain quality standards.
Required Skills & Expertise
•Django/Python – Advanced proficiency in backend development.
•Microservices Architecture – Strong understanding of distributed systems.
•PostgreSQL & Redis – Expertise in relational and in-memory databases.
•Docker/Kubernetes – Hands-on experience with containerization and orchestration.
•API Design & System Design – Ability to design scalable and secure systems.
•Cloud (AWS/Azure) – Practical experience with cloud services and deployments.
•Security Best Practices – Knowledge of authentication, authorization, and data protection.
Preferred Qualifications
•Experience with CI/CD pipelines and DevOps practices.
•Familiarity with message queues (e.g., RabbitMQ, Kafka).
•Exposure to monitoring tools (Prometheus, Grafana).
What We Offer
•Competitive salary and benefits.
•Opportunity to work on cutting-edge backend technologies.
•Collaborative and growth-oriented work environment.
About Us:
REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We are currently serving customers across India, Bhutan and the Middle East, with expansion planned into US and European markets.
We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI.
Responsibilities:
● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.
● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.
● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime.
Requirements:
● 4–5 years of experience building highly available systems
● 2–3 years of experience leading a team of engineers and analysts
● Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or equivalent
● Proficient in Python programming, with expertise in data engineering and machine learning deployment
● Experience with databases, including MySQL and NoSQL stores
● Experience in developing and maintaining critical and high availability systems will be given strong preference
● Experience in software design using design principles and architectural modeling.
● Experience working with AWS cloud platform.
● Strong analytical and data-driven approach to problem solving
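The "zero downtime" and high-availability expectations above usually rest on small, boring building blocks. As an illustrative sketch (the function name and parameters are our own, not part of the posting), here is a retry-with-exponential-backoff wrapper in Python, a common pattern for surviving transient failures in 24x7 services:

```python
import time

def with_retries(operation, max_attempts=3, base_delay=0.01):
    """Call `operation`; on failure, retry with exponentially growing
    delays (base_delay, 2*base_delay, 4*base_delay, ...). Re-raise the
    last exception once all attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In production this would typically be combined with jitter and a circuit breaker so that retries do not amplify an outage, but the core loop looks like this.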
Description:
Experience in backend development with a strong focus on Java, microservices, and Java migration
Java microservices back-end developer with Angular UI experience
Java Spring Boot microservices, with working knowledge of databases
Hands-on experience with Spring Boot, web services, microservices, SOAP, and REST
Drive analysis of business requirements, functional requirements, and technical specification documents to design and develop technical solutions that meet business needs
Assess opportunities for application and process improvement and obtain broader buy-in across global stakeholders
Proactively share and report risks, issues, challenges, blockers, and upcoming tasks with stakeholders and the team
Strong JavaScript (ES6+), HTML5, and CSS3 fundamentals; familiarity with matrix operations and nested data structures (arrays, objects, maps); strong debugging and optimization skills
Knowledge of a test-driven development (TDD) framework
GraphQL knowledge is good to have
Immediate hiring for Senior Data Engineer
📍 Location: Hyderabad/Bangalore
💼 Experience: 7+Years
🕒 Employment Type: Full-Time
🏢 Work Mode: Hybrid
📅 Notice Period: 0–1 month (candidates serving notice only)
We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.
🔎 Key Responsibilities:
- Data Pipeline Development
- Data Modeling and Architecture
- Data Integration and API Development
- Data Infrastructure Management
- Collaboration and Documentation
🎯 Required Skills:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- 7+ years of proven experience in data engineering, software development, or related technical roles.
- 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
- 7+ years of experience with database systems, data modeling, and advanced SQL.
- 7+ years of experience with ETL tools such as SSIS, Snowflake, Databricks, Azure Data Factory, Stored Procedures, etc.
- Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
- 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
- Strong analytical, problem-solving, and debugging skills with high attention to detail.
- Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
- Ability to adapt to rapidly evolving technologies and business requirements.
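To make the "data pipeline development" responsibility above more concrete, here is a minimal extract-transform-load sketch in Python using only the standard library's `sqlite3` as the target store. It is purely illustrative: a real pipeline for this role would load into Snowflake, Databricks, or a warehouse via Azure Data Factory, but the extract/clean/load shape is the same. All names (`run_pipeline`, the `sales` table) are our own.

```python
import sqlite3

def run_pipeline(rows):
    """Toy ETL: take raw (name, amount) records, normalize them,
    drop invalid rows, and load the result into a SQLite table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (name TEXT, amount REAL)")
    # Transform: trim/lowercase names, drop non-positive amounts
    cleaned = [(name.strip().lower(), float(amount))
               for name, amount in rows if float(amount) > 0]
    # Load
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()
    total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
    conn.close()
    return cleaned, total
```

Production versions add incremental loading, schema validation, and observability, but interviewers for roles like this often start from exactly this kind of skeleton.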