50+ AWS (Amazon Web Services) Jobs in India
Software Engineer (Backend) – Kotlin & React
About Us
We are a high-agency startup building elegant technological solutions to real-world problems.
Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.
We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.
Role Overview
As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.
This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.
Key Responsibilities
System Development & Architecture
- Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
- Architect systems that are robust, high-performance, and production-ready.
- Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.
Full Stack Development
- Build fast, maintainable front-end applications using React.
- Ensure seamless integration between frontend systems and backend services.
Cloud Infrastructure
- Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
- Implement scalable deployment pipelines, monitoring, and infrastructure optimization.
Product & Technical Collaboration
- Work closely with founders and product stakeholders to translate business problems into technical solutions.
- Contribute actively to product and engineering roadmap decisions.
Performance Optimization
- Continuously improve system performance, scalability, and reliability.
- Implement efficient algorithms and system optimizations to gain a technical advantage.
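To make the "efficient algorithms and system optimizations" bullet concrete, here is one classic example of that kind of work: a bounded LRU cache built on `LinkedHashMap`'s access-order mode. This is an illustrative sketch in Java (one of the JVM languages the posting names), not code from the company:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded LRU cache: the least-recently-used entry is evicted once
// capacity is exceeded. LinkedHashMap does the bookkeeping for us.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = order entries by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // called after every put; evict when over capacity
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a" so it becomes most-recently-used
        cache.put("c", 3); // evicts "b", the least-recently-used entry
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```

The `removeEldestEntry` override is the whole trick: `LinkedHashMap` calls it after each insertion, so eviction stays O(1) with no extra data structure.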
Engineering Excellence
- Write clean, well-tested, and maintainable code.
- Maintain strong engineering standards across the codebase.
Required Skills & Qualifications
We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.
Core Requirements
- Strong computer science fundamentals (Data Structures, Algorithms, System Design).
- Experience with Kotlin or JVM languages such as Java or Scala.
- Experience building modern React applications.
- Hands-on experience with cloud platforms (AWS / Azure / GCP).
- Experience designing and deploying scalable distributed systems.
- Strong problem-solving and analytical thinking.
Preferred / Bonus Skills
- Experience with Google Maps APIs or geospatial integrations.
- Prior startup experience.
- Contributions to open-source projects.
- Personal side projects demonstrating strong engineering ability.
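As a taste of the geospatial work the "Google Maps APIs or geospatial integrations" bullet hints at, the haversine formula gives the great-circle distance between two coordinates; it is what you often reach for before (or alongside) a Maps distance API. The `Geo` class and the example coordinates below are our own illustration, not part of the posting:

```java
// Great-circle distance via the haversine formula -- a common building
// block for geofencing and nearest-point lookups.
class Geo {
    static final double EARTH_RADIUS_KM = 6371.0;

    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Approximate straight-line distance from Mumbai to Pune
        System.out.printf("%.0f km%n",
                haversineKm(19.0760, 72.8777, 18.5204, 73.8567));
    }
}
```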
Ideal Candidate
You will thrive in this role if you:
- Take ownership of problems, not just tasks.
- Are comfortable working in high-ambiguity environments.
- Have a builder mindset and enjoy creating systems from scratch.
- Learn quickly and execute with speed and precision.
This Role May Not Be For You If
- You prefer strict task assignments and detailed specifications before starting work.
- You want to focus only on coding tickets without product involvement.
- You prefer large teams with multiple layers of management.
Why Join Us
- Build 0 → 1 products with massive ownership.
- Work in a flat organization with no unnecessary hierarchy.
- Collaborate directly with founders and core product builders.
- Your contributions will have immediate and visible impact.
- Flexible remote work environment.
- Opportunity to shape the technology, culture, and future of the company.
If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.
Job Title: Senior Java Architect (12+ Years Experience)
Location: Remote (2 PM - 11 PM IST)
Experience: 12+ Years
Salary: ₹15L - ₹21L/yr
Employment Type: Contract (1 Year Extendable)
Job Description:
We are looking for a highly experienced Java Architect to join our team on a long-term contract basis. The ideal candidate should have deep expertise in designing scalable enterprise applications using Java and microservices architecture. The candidate should be capable of driving architecture decisions, mentoring development teams, and delivering high-performance solutions for enterprise-grade systems.
Key Responsibilities:
- Design and architect scalable enterprise applications using Java and microservices
- Lead system design and architecture decisions for complex applications
- Develop and implement microservices architecture patterns
- Drive technical architecture across multiple development teams
- Mentor and guide senior developers and engineering teams
- Handle high-traffic, scalable enterprise application architecture
- Collaborate with stakeholders to define technical requirements and roadmaps
- Ensure system performance, scalability, and reliability
- Review code and architecture designs for best practices
- Work with Spring Boot, Spring Cloud, and modern Java frameworks
Required Skills & Qualifications:
- 12+ years of hands-on experience in Java development and architecture
- Strong expertise in microservices architecture and design patterns
- Deep knowledge of system design principles and enterprise architecture
- Hands-on experience with Spring Boot and Spring Cloud
- Experience designing scalable, high-performance enterprise applications
- Proficiency in RESTful APIs, messaging systems, and API gateways
- Strong understanding of cloud platforms (AWS, Azure, or GCP)
- Experience with containerization (Docker) and orchestration (Kubernetes)
- Knowledge of database design (SQL and NoSQL)
- Expertise in JVM tuning and performance optimization
Technical Skills:
- Java 11+ (Java 17/21 preferred)
- Spring Boot, Spring Cloud, Spring Security
- Microservices architecture and design patterns
- RESTful APIs, GraphQL, gRPC
- Apache Kafka, RabbitMQ, or similar messaging systems
- Docker, Kubernetes, CI/CD pipelines
- PostgreSQL, MySQL, MongoDB, or similar databases
- Redis, Elasticsearch, or caching solutions
- Maven, Gradle, Git
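One concrete microservice design pattern from the list above is retry with exponential backoff for flaky downstream calls. This is a bare-bones sketch of the idea; in production you would typically reach for Resilience4j or Spring Retry, and the `Retry` class name here is ours:

```java
import java.util.concurrent.Callable;

// Retry with exponential backoff -- a staple resilience pattern when one
// microservice calls another over an unreliable network.
class Retry {
    static <T> T withBackoff(Callable<T> op, int maxAttempts, long baseDelayMs)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                if (attempt == maxAttempts) throw e;        // give up
                Thread.sleep(baseDelayMs << (attempt - 1)); // delay doubles each attempt
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        String result = withBackoff(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, 5, 50);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

In real systems you would also add jitter to the delay so that many clients retrying at once do not stampede the downstream service.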
Additional Requirements:
- Ability to work 2 PM - 11 PM IST (US/Europe Shift Timing)
- Immediate availability or short notice period (15 days max)
- Strong problem-solving and analytical skills
- Excellent communication skills for stakeholder collaboration
- Experience mentoring technical teams
- Contract commitment for a minimum of 1 year (extendable)
Good to Have (Preferred Skills):
- Experience with reactive programming (WebFlux, Project Reactor)
- Cloud certification (AWS Solutions Architect, etc.)
- Experience with observability tools (Prometheus, Grafana)
- Knowledge of domain-driven design (DDD)
- Experience with multi-cloud or hybrid cloud architectures
- Freelance/contract experience (preferred)
What We Offer:
- ₹1.2 to ₹1.8 lakh per month fixed contract salary
- Long-term contract (1-year extendable)
- Remote work with a flexible 2-11 PM IST schedule
- Work with cutting-edge enterprise technologies
- Opportunity to architect large-scale systems
- Collaborate with experienced engineering teams
- Immediate start for the right candidates
Who we are:
MangoApps is a modern, cloud-based platform that unifies content, communication, training, and operations for the entire organization in one place. Unlike dozens of point solutions, our integrated approach provides a unified employee experience that saves time and cost.
As large enterprises invest in MangoApps, we need strategic, high-energy techno-functional talent who will create, develop, and maintain one-on-one relationships with our enterprise customers. In this role, you will be responsible for onboarding large enterprise accounts onto the MangoApps platform. Driving ongoing implementation and providing world-class support will be among your key KRAs. You will also partner closely with cross-functional team members to translate business needs and product requirements into new solutions for these customers.
The Opportunity:
- You will be an expert on MangoApps platform driving the end-to-end implementation to meet the customer technical and business requirements. You will also serve as the primary point of contact for the customer, managing customer expectations and ensuring high-level customer satisfaction.
- You will work like a Forward Deployed Engineer (FDE), working directly in customer environments to diagnose, troubleshoot, and implement solutions in near real time, collaborating closely with Engineering, DevOps, QA, and Product teams.
- You will translate customer-specific requirements into scalable product improvements and engineering inputs.
- You will participate in project-based, architectural and design discussions to ensure solutions are optimal for customers. You will ensure timely response and resolution to technical and product related incidents.
- You are expected to own support escalations, investigate technical issues, and coordinate with engineering and product teams to diagnose problems and take corrective action for customers.
- Conducting regular status calls and business reviews would be part of your responsibilities as a TAM on the Mango team. You will assess the health of the account by identifying risks and preparing risk mitigation plans to avoid and minimize churn.
- You will proactively update customers about product enhancements and upgrades, and take the necessary action to maintain availability and customer satisfaction.
- You will be responsible for enabling our customers with sound product knowledge, and will conduct live training sessions for customers in the onboarding phase.
What makes you a great fit for our team:
- 5+ years of experience leading technical implementations and ongoing technical relationships requiring ownership and execution of complex, client-facing enterprise projects with senior internal and external stakeholders.
- Effective communication skills, including active listening, reading non-verbal cues (even over Zoom!), storytelling, and rapport building. Strong experience handling US customers.
- Strong customer orientation with an eagerness and ability to tune, tweak and improve a rapidly evolving client engagement process.
- Solid understanding of cloud and SaaS architecture, with working knowledge of cloud environments (AWS/Azure/GCP basics), system integrations, and authentication (SSO, APIs).
- Strong hands-on technical debugging skills, with the ability to diagnose and troubleshoot issues in live customer environments across APIs, logs, integrations, and cloud infrastructure. You should be adept at breaking down complex issues, identifying the root cause (not just symptoms), and proposing practical, implementable fixes.
- Excellent problem-solving skills with an ability to manage multiple, complex, high-priority tasks, and situations across multiple accounts.
- Strong conflict resolution and negotiation skills, with a sense of urgency in driving escalations and open technical issues to closure.
- Highly efficient team player, with the ability to work independently to juggle multiple priorities in a fast-paced and fluid environment.
- You thrive in a fast-paced and dynamic environment and are a self-starter who gets things across the finish line.
- Ability to curate training content and presentations for product training sessions.
- Prior experience architecting, developing, and seeding customer-facing intranets with high-value content to drive user adoption is highly desirable.
Why work with us:
- We take delight in what we do, and it shows in the products we offer and in the ratings our products receive from leading industry analysts like IDC, Forrester, and Gartner, and from independent sites like Capterra.
- Be part of the team that has a great product-market fit, solving some of the most relevant communication and collaboration challenges faced by big and small organizations across the globe.
- MangoApps is a highly collaborative place, and careers at MangoApps come with plenty of growth and learning opportunities. If you're looking to make an impact, MangoApps is the place for you.
- We focus on getting things done and know how to have fun while we do them. We have a team that brings creativity, energy, and excellence to every engagement.
- A workplace listed as one of the top 51 Dream Companies to Work For by the World HRD Congress in 2019.
- As a group, we are flat and treat everyone the same.
Benefits:
We are a young organization growing fast. Along with a fantastic workplace culture that helps you meet your career aspirations, we provide comprehensive benefits.
- Comprehensive Health Insurance for Family (Including Parents) with no riders attached.
- Accident Insurance for each employee.
- Sponsored Trainings, Courses and Nano Degrees.
About You:
- Self-motivated: You can work with a minimum of supervision and be capable of strategically prioritizing multiple tasks in a proactive manner.
- Driven: You are a driven team player, collaborator, and relationship builder whose infectious can-do attitude inspires others and encourages great performance in a fast-moving environment.
- Entrepreneurial: You thrive in a fast-paced, changing environment and you're excited by the chance to play a large role.
- Passionate: You must be passionate about online collaboration and ensuring our clients are successful; we love seeing hunger and ambition.
- You thrive in a start-up environment and bring a whatever-it-takes attitude.
We are seeking a Technical cum Pre‑Sales Engineer who can effectively bridge technology and business, with a strong focus on driving sales growth. The role involves supporting the sales team through technical expertise, solution design, customer engagement, and on‑field activities such as events and customer meetings. Strong knowledge of Microsoft and AWS infrastructure is essential, and experience with AI and security solutions adds significant weight.
This is a hybrid role based in Chennai, offering a mix of on-site customer engagement and remote work flexibility.
Key Responsibilities
Pre‑Sales & Revenue Growth
- Work closely with the sales team to identify opportunities, qualify leads, and close deals
- Understand customer business requirements and recommend suitable cloud, infrastructure, AI, and security solutions
- Deliver technical presentations, product demos, proposals, and proof‑of‑concepts
- Assist in preparing technical inputs for RFPs, RFQs, and solution proposals
- Actively contribute to achieving and exceeding sales and revenue targets
Technical Responsibilities
- Design and propose solutions using Microsoft and AWS platforms
- Provide strong technical expertise on:
- Microsoft 365 & Exchange (Online / Hybrid)
- Active Directory, Azure AD / Entra ID
- Azure infrastructure and core services
- AWS compute, storage, networking, and security services
- Demonstrate practical understanding of AI services (Azure AI, AWS AI/ML services) and relevant business use cases
- Recommend and explain security solutions, including:
- Identity & access management
- Cloud security
- Email, endpoint, and data security
- Compliance and governance best practices
Travel, Events & Customer Engagement
- Travel will be required for:
- Customer meetings
- Industry events, expos, and roadshows
- Partner and networking events
- Represent the organization in customer engagements and technical discussions
- Build strong relationships with customers, partners, and internal teams
Location & Collaboration
- Preferred location: Chennai
- Open to candidates willing to relocate to Chennai
- Work closely with delivery, support, and operations teams to ensure smooth post‑sales handover
Required Skills & Qualifications
- Bachelor’s degree in Engineering, IT, Computer Science, or equivalent experience
- 3–6 years of experience in Pre‑Sales, Solution Consulting, or Technical Sales
- Strong hands‑on or conceptual understanding of:
- Microsoft Exchange & Active Directory
- Azure and AWS infrastructure
- Cloud networking and security fundamentals
- Ability to translate complex technical concepts into business‑friendly solutions
- Excellent communication, presentation, and stakeholder management skills
- Sales‑oriented mindset with the ability to support deal closures
Preferred / Added Advantage
- Experience with AI, automation, or modern cloud workloads
- Exposure to Microsoft Defender, Sentinel, AWS IAM, Security Hub
- Microsoft and/or AWS certifications (Azure Administrator, Solutions Architect, etc.)
- Experience working with System Integrators, MSPs, or Cloud Service Providers
Compensation & Benefits
- Fixed + Variable pay structure, with variable incentives linked directly to performance and sales contributions
- Growth‑oriented role with exposure to advanced cloud, AI, and security solutions
- Opportunity to work with enterprise and mid‑market customers
- Performance‑driven career progression

at Prismberry Technologies Pvt Ltd (acquired by eYantra Ventures Limited)
Role Overview
We are seeking a highly skilled Senior Java Developer to join our engineering team. In this role, you will be responsible for designing and developing high-performance, scalable, and resilient microservices. You will work at the intersection of complex backend logic, real-time data streaming, and cloud infrastructure to deliver seamless user experiences.
Key Responsibilities
- System Design: Architect and develop robust, scalable, and maintainable backend services using Java and Spring Boot.
- Scalability: Build distributed systems capable of handling high traffic and large datasets with low latency.
- Database Management: Design and optimize complex schemas in both Relational (SQL) and NoSQL databases, ensuring data integrity and performance.
- Event-Driven Architecture: Implement real-time messaging and data pipelines using Apache Kafka.
- Cloud Infrastructure: Deploy and manage services on cloud platforms (AWS or GCP), leveraging managed services to improve system reliability.
- Collaboration: Work closely with cross-functional teams to define requirements, participate in code reviews, and mentor junior developers.
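The Kafka bullet above is, at its core, producer/consumer decoupling through a durable log. The dependency-free toy below shows only the shape of that pattern with an in-memory queue; Kafka adds durability, partitioning, and consumer groups on top. All names here are ours, not the team's code:

```java
import java.util.List;
import java.util.concurrent.*;

// A toy event pipeline: a producer enqueues events and a background
// consumer processes them asynchronously. Kafka provides the same
// decoupling durably and at scale; this only illustrates the shape.
class EventPipeline {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> processed = new CopyOnWriteArrayList<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final CountDownLatch remaining;

    EventPipeline(int expectedEvents) {
        remaining = new CountDownLatch(expectedEvents);
        worker.submit(() -> {
            while (remaining.getCount() > 0) {
                String event = queue.poll(1, TimeUnit.SECONDS);
                if (event != null) {
                    processed.add(event.toUpperCase()); // stand-in for real processing
                    remaining.countDown();
                }
            }
            return null;
        });
    }

    void publish(String event) { queue.offer(event); }

    List<String> awaitAll() throws InterruptedException {
        remaining.await(5, TimeUnit.SECONDS);
        worker.shutdownNow();
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        EventPipeline pipeline = new EventPipeline(2);
        pipeline.publish("ride-started");
        pipeline.publish("ride-ended");
        System.out.println(pipeline.awaitAll()); // prints [RIDE-STARTED, RIDE-ENDED]
    }
}
```

The publisher never waits on the consumer, which is exactly why event-driven services tolerate bursts of traffic better than synchronous request chains.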
Technical Requirements
- Core Java: Deep expertise in Java (8 or higher), including concurrency, multithreading, and JVM tuning.
- Frameworks: Strong experience with Spring Boot, Spring Cloud, and Hibernate/JPA.
- Messaging: Proven experience with Apache Kafka for event streaming and asynchronous processing.
- Cloud: Proficiency in AWS (EC2, S3, RDS, Lambda) or GCP (GCE, GCS, Cloud SQL, Pub/Sub).
- Databases: Solid knowledge of PostgreSQL, MySQL, or Oracle, alongside NoSQL experience (e.g., MongoDB, Cassandra, or Redis).
- DevOps & Tools: Familiarity with Docker, Kubernetes, and CI/CD pipelines (Jenkins, GitLab CI, or GitHub Actions).
Preferred Qualifications
- Experience with Microservices Architecture and Domain-Driven Design (DDD).
- Understanding of distributed caching strategies and load balancing.
- Strong problem-solving skills and a "clean code" mentality.
Work mode: WFO, 5 days a week
Location: Hyderabad (Onsite)
Experience: 7+ years
- K8s Hands-on experience
- Linux Troubleshooting Skills
- Experience with on-prem servers and their management
- Helm
- Docker
- Ingress and Ingress Controllers
- Networking Basics
- Proficient Communication
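For the "Ingress and Ingress Controllers" bullet, a minimal manifest looks something like the following. Every name here (`echo-ingress`, `echo-service`, `example.internal`) is a placeholder, the annotation assumes the ingress-nginx controller, and an ingress controller must already be running in the cluster:

```yaml
# Minimal Ingress: route HTTP traffic for one host to a backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: example.internal
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echo-service
                port:
                  number: 80
```

The Ingress object itself does nothing without a controller (ingress-nginx, Traefik, etc.) watching the API server and programming a proxy accordingly, which is why the two are listed together above.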
Must-Have Skills:
- Hands-on experience with airgap Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
- Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
- Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
- Security & compliance knowledge for regulated industries.
- Preferred: Experience with GKE, RKE, Rook-Ceph, and certifications like CKA, CKAD.
Who You Are
- A Kubernetes expert who thrives on scalability, automation, and security.
- Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
- Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
- A problem solver who simplifies complexity and drives cloud-native adoption.
What You’ll Do
- Architect & automate Kubernetes solutions for airgap and multi-region clusters.
- Optimize CI/CD pipelines & cloud-native deployments.
- Work with open-source projects, selecting the right tools for the job.
- Educate & guide teams on modern cloud-native infrastructure best practices.
- Solve real-world scaling, security, and infrastructure automation challenges.
Why Join Us?
- Work on high-impact Kubernetes projects in regulated industries.
- Solve real-world automation & infrastructure challenges with cutting-edge tools.
- Grow in a team that values learning, open-source contributions, and innovation.
What are we looking for?
- You have a good understanding of and work experience with AKS, Kubernetes, and EKS.
- You are able to manage multi-region clusters for disaster recovery.
- You have a good understanding of the AWS stack.
- You have production-level experience with Kubernetes.
- You are comfortable coding/programming and can do so whenever required.
- You have worked with programmable infrastructure in some way: built a CI/CD pipeline, provisioned infrastructure programmatically, or provisioned monitoring and logging infrastructure for large sets of machines.
- You love automating things, sometimes even things that seem impossible to automate; one of our engineers, for example, used Ansible to set up his Ubuntu workstation and runs a playbook every time something has to be installed.
- You don't throw around words such as "high availability" or "resilient systems" without understanding at least their basics, because you know that words are easy, while building such a system in practice takes a fair amount of work.
- You love coaching people, whether about 12-factor apps or the latest tool that reduced the time a task takes by X times. You lead by example when it comes to technical work and community.
- You understand the areas you have worked on very well, but you are also curious about systems you haven't worked on and want to fiddle with them.
- You know that understanding applications and the runtime technologies together gives you a better perspective; you have never looked at them as two different things.
What will you be learning and doing?
- You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
- The problems and solutions in this space are continuously evolving, but fundamentally you will be solving them with the simplest, most scalable automation.
- You will build open-source tools for problems that you think are common across customers and the industry. No one ever benefited from reinventing the wheel, did they?
- You will hack around open-source projects, understand their capabilities and limitations, and apply the right tool for the right job.
- You will be educating the customers - from their operations engineers to developers on scalable ways to build and operate applications in modern cloud-native infrastructure.
- Proficiency in Java 8+.
- Solid understanding of REST APIs (Spring Boot), microservices, databases (SQL/NoSQL), and caching systems like Redis/Aerospike.
- Familiarity with cloud platforms (AWS, GCP, Azure) and DevOps tools (Docker, Kubernetes, CI/CD).
About Us:
We are hiring for Zeromoblt (https://zeromoblt.com/), a pre-seed-funded, high-agency, Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.
Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.
What You'll Do
- Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
- Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
- Craft scalable React UIs that power ops dashboards and parent-facing apps.
- Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
- Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
- Shape the engineering roadmap: propose, prioritise, and ship features with founders.
- Mentor juniors while executing solo on high-impact bets—no layers, just results.
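"Dynamic route optimization" ultimately rests on shortest-path search. Below is a minimal Dijkstra sketch, the DSA skeleton that a real routing engine layers live traffic, pickup windows, and vehicle capacity on top of. It is written in Java for brevity (the posting accepts any JVM language), and the `Routes` class is our own illustration:

```java
import java.util.*;

// Dijkstra's shortest-path algorithm over an adjacency list.
class Routes {
    // graph.get(u) holds edges as [neighbor, weight] pairs
    static int[] dijkstra(List<List<int[]>> graph, int src) {
        int n = graph.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        PriorityQueue<int[]> pq =
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
        pq.offer(new int[]{src, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int u = cur[0], d = cur[1];
            if (d > dist[u]) continue; // stale queue entry, already improved
            for (int[] edge : graph.get(u)) {
                int v = edge[0], nd = d + edge[1];
                if (nd < dist[v]) {
                    dist[v] = nd;
                    pq.offer(new int[]{v, nd});
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        // 0 --4--> 1 --1--> 2, plus a direct 0 --7--> 2 edge
        List<List<int[]>> g = new ArrayList<>();
        for (int i = 0; i < 3; i++) g.add(new ArrayList<>());
        g.get(0).add(new int[]{1, 4});
        g.get(1).add(new int[]{2, 1});
        g.get(0).add(new int[]{2, 7});
        System.out.println(Arrays.toString(dijkstra(g, 0))); // prints [0, 4, 5]
    }
}
```

For truly dynamic routing (traffic updates, new pickups mid-route) production systems re-run or incrementally repair searches like this, often with A* heuristics over road-network graphs.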
We're Looking For
- 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
- Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
- Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
- Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
- Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.
This Role Is Not For You If…
- You need structured roadmaps, PM hand-holding, or big-tech process.
- Comfort > impact: stable salary over equity upside and chaos.
- You've never worn all hats (dev, ops, product) in a resource-constrained environment.
Why Join Us
- Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
- Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
- Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
- Hungry to Leap? Apply now!
Angular Full Stack Developer (7+ Years)
Location: Ahmedabad / Pune (Onsite)
Experience: 7+ Years
Compensation: No bar for the right candidate
What You’ll Be Building
You’ll work on enterprise-grade, high-performance web applications using Angular and modern serverless backend architecture on AWS. This is not basic CRUD work—expect complex data-heavy systems, scalable UI, and real-world impact.
What You’ll Do
- Build and scale Angular-based frontend systems with strong architecture principles
- Develop and integrate REST APIs + serverless backend workflows
- Work on data-heavy UI using AG Grid & AG Charts
- Implement RxJS, state management, and performance optimization
- Write clean, testable code with Jest & Cypress
- Work with AWS serverless stack (Lambda, API Gateway, DynamoDB, etc.)
- Manage infrastructure using Terraform (IaC)
- Contribute to architecture decisions, code reviews, and best practices
- Work in a fast-paced Agile environment
Must-Have Skills (Don’t Apply If Missing)
- 7+ years in full-stack / frontend-heavy development
- Strong expertise in Angular + TypeScript
- Hands-on with RxJS and scalable frontend architecture
- Experience with AG Grid / AG Charts (or similar complex UI libs)
- Experience with Jest + Cypress testing
- Solid understanding of AWS serverless architecture
- Experience with Terraform (IaC)
Good to Have (Will Give You an Edge)
- CI/CD experience (GitHub Actions or similar)
- Exposure to AI dev tools (Copilot, etc.)
- Backend experience (Node.js / NestJS)
- Experience working in large distributed teams
Why This Role Matters
You won’t be another developer pushing tickets. You’ll be working on scalable systems, real architecture decisions, and performance-critical applications.
Who Should Apply
People who:
- Can own modules end-to-end
- Don’t need hand-holding
- Care about clean code + performance
- Have actually built complex systems—not just tutorials
Who Should NOT Apply
- Less than 7 years experience
- Only basic Angular knowledge
- No real exposure to production-scale applications
Company Overview:
Planview has one mission: to build the future of connected work with market-leading portfolio management and work management solutions. Planview is a recognized innovator and industry leader; our solutions enable organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Our solutions span every class of work, resource, and organization to address the varying needs of diverse and distributed teams, departments, and enterprises.
As a Sr CloudOps Engineer II, you will oversee teams of Engineers and be a champion for configuration management, cloud technologies, and continuous improvement. You will work closely with global leaders to ensure that our applications, infrastructure, and processes are scalable, secure, and supportable. By leveraging your production experience and development skills, you will work hand in hand with Engineers (Dev, DevOps, DBOps) to design and implement solutions that improve the delivery of value to customers, reduce costs, and eliminate toil.
Responsibilities (What you will do):
- Guide the professional development of Engineers and support the teams to accomplish business goals
- Work closely with leaders in Israel to align on priorities and to architect, deliver, and manage our products
- Build systems that are secure, scalable, and self-healing.
- Manage and improve deployment pipelines.
- Triage and remediate production issues.
- Participate in on-call rotations for escalations.
Qualifications (What you will bring):
- Bachelor's degree in CS, or equivalent experience in a related field.
- 2+ years managing Engineering teams.
- 8+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment
- 5+ years administering Linux and Windows environments.
- 3+ years programming / scripting experience (e.g., Python, JavaScript, PowerShell)
- Strong technical knowledge of operating systems (Linux and Windows), virtualization, storage systems, networking, and firewall implementations
- Experience maintaining production environments on-premise (90%) and in the cloud (10%) (e.g., AWS, Google Cloud, Azure)
- Solid understanding of networking principles and how they apply to data flow and security.
- Experience automating deployments of cloud-based services (e.g., AWS EC2 / RDS, Docker, Kubernetes)
- Experience managing CI/CD infrastructure, with strong proficiency in platforms like Bitbucket and Jenkins to streamline deployment pipelines and ensure efficient software delivery.
- Management of resources using Infrastructure as Code tools (e.g., CloudFormation, Terraform, Chef)
- Knowledge of observability tools such as LogicMonitor, New Relic, Prometheus, and Coralogix, as well as their implementation.
- Worked within Agile and Lean software development teams.
- Experience working in globally distributed teams.
- Ability to see the big picture and manage risks.
Your Responsibilities
What you will wake up to solve:
- Principal Technical Expert: Act as a hands-on leader and the core technical authority tasked with "futurifying" client businesses through advanced, process-first AI. Take full ownership of the AI Engineering squad, transforming ambitious concepts into high-impact, tangible realities.
- Engineering & Intelligent Deployment: Execute the full-lifecycle development of innovative AI/ML solutions, including hands-on design, coding, testing, and deployment of robust, scalable systems that prioritize technical excellence and business relevance.
- Scalability & Architectural Optimization: Directly build and optimize high-performance AI architectures and core system components to ensure solutions are reliable, production-ready, and optimized for long-term operational success.
- Impact-Driven Technical Expertise: Deliver intelligent client outcomes through direct technical contribution, maintaining an "Always Beta" mindset and a relentless focus on solving complex engineering challenges.
- Leadership through Action: Lead by example rather than control, coaching and mentoring a high-performing squad of "happier Do-ers" to foster a vibrant culture of continuous innovation and technical excellence.
- Strategic Integration & Collaboration: Partner across internal teams to translate chaotic business challenges into precise technical requirements, ensuring seamless solution integration and adoption for global clients.
- The "Agentic" Shift: You will lead the transition from simple predictive models to Agentic Workflows. You will build systems where AI agents can plan, reason, and execute complex tasks autonomously to solve intricate business problems.
- Talent & Culture: You will mentor a high-performance squad of AI Engineers and Data Scientists. You will teach them to look beyond the algorithm and understand the business outcome.
Functional Skills
1. Scaling Intelligent Workforce through Delivery Excellence
- Deep Technical Acumen: Operates at the cutting edge of AI, applying advanced technical knowledge to engineer and implement groundbreaking solutions, and guide the squad in developing future capabilities.
- Client Advocacy & Revenue Growth: Skill in cultivating and maintaining trusted client partnerships. Drives strategic engagement that results in repeat business and expanded client portfolios within the region.
- Contract & Risk Governance: High proficiency in reviewing and managing complex project agreements (SoW), mitigating delivery risks, and navigating commercial negotiations to safeguard BU profitability.
- Structured Problem-Solving: Simplifies chaotic technical challenges for the squad by breaking them into solvable chunks using first-principles thinking.
- Squad Delivery Ownership: Follows through on the squad's solution execution—owning technical outcomes from ideation to deployment with rigor, precision, and pride, ensuring tangible, real-world business value.
2. Technical Oversight & Execution Charter
- Technical Troubleshooting & Crisis Resolution: Actively manages technical roadblocks within the squad, personally intervening to troubleshoot ML or MLOps constraints. You ensure the protection of sprint timelines and the guaranteed performance of deployed models through hands-on problem-solving.
- Cloud-Native Technical Command: Maintains deep, functional knowledge of modern AI system design (e.g., RAG Frameworks, Agentic Workflows, and Inference Optimization) across GCP and AWS. You hold the responsibility to validate squad-level technical roadmaps, ensuring they are technically feasible and production-hardened.
- End-to-End Project Management: Expertly manage all aspects of a project, including scope, budget, timelines, and stakeholder communication. Accountable for the entire delivery, not just the technical parts.
- Talent Strategy & Mentorship: Drive the hiring and development of specialized talent. You will be responsible for defining and optimizing effective team structures while proactively fostering an environment that champions creative problem-solving and technical agility.
Tech Superpowers
- Deep AI Engineering Mastery & Guidance: Possesses profound, hands-on expertise in engineering, optimizing, and deploying foundational models, custom AI solutions, and complex multi-modal systems. You'll also guide your squad in understanding model architectures, training methodologies, and ethical AI development from the ground up, ensuring their collective proficiency.
- Intelligent Systems Architecture & Oversight: You'll directly contribute to and oversee the coding and implementation of robust, scalable, and production-grade AI platforms and MLOps components for your squad's projects. You'll translate abstract technical requirements into high-performance, maintainable AI system designs, always considering reliability, security, and future extensibility across the squad's work.
- Cloud-Native AI Capability: More than cloud-certified, you are deeply cloud-capable in applied AI engineering. You proficiently leverage and guide your team in utilizing leading cloud AI/ML ecosystems to build, deploy, and manage AI solutions.
- Technical Integrity & Ethical Governance: Establishes and audits mandatory technical quality benchmarks, ensuring strict adherence to rigorous policies regarding model validation, automated testing coverage, and ethical governance.
Experience & Relevance
- A value-driven AI/ML Engineering Manager with 8+ years of experience in building and scaling end-to-end AI engineering and solution delivery.
- Leadership Track Record: Proven track record as a hands-on builder, and lead, contributing to the design, development, and deployment of complex, enterprise-grade AI/ML platforms and solutions. Expert in leveraging Google Cloud's AI/ML ecosystem (Vertex AI, BigQuery ML, GKE for MLOps) to deliver highly performant, scalable, and impactful AI transformations.
- Delivery & Advisory Record: Experience in building and optimizing intelligent systems and personally driving the technical execution from conception to scalable deployment.
- Applied AI & Domain Expertise: Hands-On AI Deployment: Extensive hands-on experience deploying AI-powered workflows, copilots, and automation solutions in production environments.
- Client-Facing Lead: Demonstrated hands-on experience as an AI/ML Product Manager, Data Science Manager, or Technical Architect in client-facing capacities. This involves directly building, implementing, and advising on complex AI solutions, consistently acting as the trusted technical authority for strategic clients.
Bonus Points (you will thrive if you have)
- Founder’s Energy: Bias for action, thrive in ambiguity, relentless focus on outcomes.
- Low-Code/No-Code Fluency: Experience with AI integrations via Power Platform or similar.
- AI Copilots & Extensions: Built plugins, copilots, or agentic automation frameworks.
- Thought Leadership DNA: Industry content creation, technical blogs, public speaking.
- Ethical Compass: Strong commitment to responsible AI practices.
- Engineer at Heart: Background in product development or engineering before moving into architecture.
Why you’ll love being a ‘Searcian’
NOT your ‘usual’ management consultancy; we ‘solve differently’.
- ‘We are happier. No, really, happier’: A vibrant, inclusive, and supportive work environment. We even have a dedicated role for ‘Better Living’.
- The Company You Keep (Says Everything): solvers. engineers. tinkerers. improvers. futurists operating across 12 countries.
- No room for CAVEers (Constantly Against Virtually Everything people). Instead, we make room for a meditation room in our offices.
- No bloat: 27-person meetings with 23 clueless people? Not happening here. We also don't do meetings to plan for pre-meetings.
- No bureaucracy. Zero entropy. Real decision-making velocity: We’re large enough to solve the world’s most complex business challenges, yet small and agile enough to value individual humans. With us, you’re a name, not an employee ID number lost in a sea of 37,000 people where it takes a year just to decide ‘who will decide’.
- Ideas over Hierarchy: We reject HiPPOs (Highest Paid Person’s Opinion). The most well-reasoned ideas win - regardless of whose name is on them. That dangerous phrase, "We’ve always done it this way," dies here.
- Own-the-outcome: The buck stops with you. Doesn’t matter if you are an intern. (Psst: A ‘real intern’ actually drafted this JD.)
- Expert ‘wholesome generalists’, Not ‘one-nut-tighteners’. At Searce, you see the whole picture — how the car is designed, built, and driven — not just how to tighten the third nut on a red 1962 Ford Falcon owned by Vinny’s cousin. Real impact comes from knowing why that nut matters to the person behind the wheel.
- You ‘do stuff’ that matters. Not just “follow up on the deck we shared.”
- Gain more years in your Searce-perience: We operate at a 3.65x experience velocity—yes, we measured it (and charted it to scale, too).
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Responsibilities:
We want you to show off your technical skills, but we also want you to be creative and think outside the box. Here are some of the ways you'll be flexing your tech muscles:
- Use your superpowers to solve complex technical problems, combining your excellent abstract reasoning ability with problem-solving skills.
- Become proficient in at least one product or technology of strategic importance to the organisation, and be a true tech ninja.
- Stay up-to-date with emerging trends in the field, so that you can keep bringing fresh ideas to the table.
- Implement robust and extensible code modules as per guidelines. We love all code that's functional (Don’t we?)
- Develop good quality, maintainable code modules without any defects, exhibiting attention to detail. Nothing should look sus!
- Manage assigned tasks well and schedule them appropriately for self and team, while providing visibility to the mentor and understanding the mentor's expectations of work. But don't be afraid to add your own twist to the work you're doing.
- Consistently apply and improve team software development processes such as estimations, tracking, testing, code and design reviews, etc., but do it with a funky twist that reflects your personality.
- Clarify requirements and provide end-to-end estimates. We all love it when requirements are clear (Don’t we?)
- Participate in release planning and design complex modules & features.
- Work with product and business teams directly for critical issue ownership. Isn’t it better when one of us understands what they say?
- Feel empowered by managing deployments and assisting in infra management.
- Act as a role model for the team and guide them to brilliance. We all feel secure when we have someone to look up to.
Qualifications:
We want to make sure you're a funky, tech-loving person with a passion for learning and growing. Here are some of the things we're looking for:
- You have a Bachelor's or Master’s degree in Computer Science or a related field, but you also have a creative side that you're not afraid to show.
- You have excellent abstract reasoning ability and a strong understanding of core computer science fundamentals.
- You're proficient in web technologies such as HTML, CSS, and JavaScript, with at least 5 years of experience, but you're also open to learning new languages and technologies that might not be as mainstream.
- You have 5+ years of experience with the Django backend framework and Django REST Framework (DRF).
- You have 5+ years of experience with the React frontend framework.
- Your knowledge of cloud service providers like AWS, GCP, Azure, etc. will be an added bonus.
- You have experience with testing, code, and design reviews.
- You have strong written and verbal communication skills, but you're also not afraid to show your personality and let your funky side shine through.
- You can work independently and in a team environment, but you're also excited to collaborate with others and share your ideas.
- You've demonstrated your ability to lead a small team of developers.
- And most important, you're also excited to learn about new things and try out new ideas.
This is a mid-level position where you'll get to flex your coding muscles, work on exciting projects, and grow your skills in a fast-paced, dynamic environment. So, if you're passionate about all things tech and ready to take your skills to the next level, we want YOU to apply! Let's make some magic happen together!
We are located in Delhi. This post may require relocation.
Build, deploy, and maintain production-grade AI/ML solutions for Fortune 500 enterprise clients on Google Cloud Platform. Hands-on role focused on shipping scalable AI systems across GenAI, agentic workflows, traditional ML, and computer vision.
Key Responsibilities:
Generative AI & Agentic Systems
- Design and build GenAI applications (RAG, agentic workflows, multi-agent systems)
- Develop intelligent systems with memory, planning, and reasoning capabilities
- Implement prompt engineering, context optimization, and evaluation frameworks
- Build observable and reliable multi-agent architectures
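Evaluation frameworks like those mentioned above often start as a small harness that scores model outputs against expectations. A library-free sketch, where the grading rule (keyword containment) and the stand-in model are invented purely for illustration:

```python
def evaluate(cases, generate):
    """Run each prompt through `generate` and score against expectations."""
    results = []
    for case in cases:
        output = generate(case["prompt"])
        passed = case["expect"] in output.lower()
        results.append({"prompt": case["prompt"], "passed": passed})
    score = sum(r["passed"] for r in results) / len(results)
    return score, results

# A stand-in for a real LLM call:
def fake_llm(prompt):
    return "The capital of France is Paris." if "France" in prompt else "I am not sure."

cases = [
    {"prompt": "What is the capital of France?", "expect": "paris"},
    {"prompt": "What is the capital of Atlantis?", "expect": "atlantis"},
]
score, _ = evaluate(cases, fake_llm)
print(f"pass rate: {score:.0%}")  # 50%
```

Real harnesses add richer graders (semantic similarity, LLM-as-judge) behind the same interface, so the `generate` function can be swapped for an actual model client.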
Traditional ML & Computer Vision
- Develop ML pipelines (forecasting, recommendation, classification, regression)
- Build production-grade computer vision solutions (document AI, image analysis)
- Perform feature engineering, model optimization, and benchmarking
MLOps & Production Engineering
- Own end-to-end ML lifecycle (CI/CD, testing, versioning, deployment)
- Build scalable APIs, microservices, and data pipelines
- Monitor models, detect drift, and implement A/B testing frameworks
Knowledge Solutions
- Architect knowledge graphs and semantic search systems
- Implement hybrid retrieval (vector + keyword search)
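Hybrid retrieval of the kind described above is commonly implemented by fusing the ranked results of a vector search and a keyword search. A minimal sketch of Reciprocal Rank Fusion (RRF), assuming the two ranked document-ID lists have already been produced by hypothetical vector and keyword backends:

```python
def rrf_fuse(rankings, k=60):
    """Fuse several ranked lists of doc IDs with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant used in the original RRF formulation.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs from a vector index and a keyword (BM25) index:
vector_hits = ["doc3", "doc1", "doc7"]
keyword_hits = ["doc1", "doc5", "doc3"]
fused = rrf_fuse([vector_hits, keyword_hits])
print(fused[0])  # 'doc1' — ranked highly by both retrievers, so it wins
```

RRF is attractive here because it needs no score normalization between the two retrievers, only their ranks.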
Client Collaboration
- Present technical solutions to enterprise clients
- Collaborate with architects, data engineers, and business teams
Required Skills & Experience
- 3–6 years of hands-on ML Engineering experience
- Strong Python and software engineering fundamentals
- Experience shipping production ML systems on cloud (GCP preferred)
- Experience across GenAI, Traditional ML, Computer Vision
- MLOps experience and RAG-based systems
Preferred
- GCP Professional ML Engineer certification
- Knowledge graphs / semantic search experience
- Experience in regulated industries (Healthcare / BFSI)
- Open-source or technical publications
We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.
This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.
Responsibilities
- Design, develop, and implement machine learning models and algorithms to solve complex business problems.
- Collaborate with data scientists to transition models from research and development into production-ready systems.
- Build and maintain scalable data pipelines for ML model training and inference using Databricks.
- Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
- Deploy and manage ML models in production environments on Azure, leveraging services such as:
- Azure Machine Learning
- Azure Kubernetes Service (AKS)
- Azure Functions
- Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
- Ensure the reliability, performance, and scalability of ML systems in production.
- Monitor model performance, detect model drift, and implement retraining strategies.
- Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
- Document model architecture, data flows, and operational procedures.
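Drift detection of the kind listed in these responsibilities often starts with a simple distribution comparison between training and live data. A minimal sketch using the Population Stability Index (PSI) over binned values; the bin edges, sample values, and 0.2 alert threshold are illustrative conventions, not a prescription:

```python
import math

def psi(expected, actual, edges):
    """Population Stability Index between two samples of one feature.

    Bins both samples with the given edges and sums
    (p_actual - p_expected) * ln(p_actual / p_expected) over bins.
    A common rule of thumb: PSI > 0.2 suggests significant drift.
    """
    def proportions(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(edges) - 1):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(values), 1)
        # Small floor avoids log(0) for empty bins.
        return [max(c / total, 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

train_scores = [0.1, 0.2, 0.25, 0.3, 0.4, 0.45]
live_scores = [0.6, 0.7, 0.75, 0.8, 0.85, 0.9]   # clearly shifted distribution
drift = psi(train_scores, live_scores, edges=[0.0, 0.25, 0.5, 0.75, 1.0])
print(f"PSI = {drift:.2f}, drifted: {drift > 0.2}")
```

In a production pipeline the same check would run on a schedule against fresh inference logs, with a PSI breach triggering the retraining strategy described above.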
Qualifications
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.
Experience
- Minimum 3+ years of professional experience as an ML Engineer or in a similar role.
Required Skills
- Strong proficiency in Python for data manipulation, machine learning, and scripting.
- Hands-on experience with machine learning frameworks, such as:
- Scikit-learn
- TensorFlow
- PyTorch
- Keras
- Demonstrated experience with MLflow for:
- Experiment tracking
- Model management
- Model deployment
- Proven experience working with Microsoft Azure cloud services, specifically:
- Azure Machine Learning
- Azure Databricks
- Related compute and storage services
- Solid experience with Databricks for:
- Data processing
- ETL pipelines
- ML model development
- Strong understanding of MLOps principles and practices, including:
- CI/CD for ML
- Model versioning
- Model monitoring
- Model retraining
- Experience with containerization and orchestration technologies, including:
- Docker
- Kubernetes (especially AKS)
- Familiarity with SQL and data warehousing concepts.
- Experience working with large datasets and distributed computing frameworks.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
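The MLflow practices this list asks for (experiment tracking, model versioning, a model registry) boil down to a bookkeeping pattern. A toy, library-free sketch of the registry idea, with stage names borrowed from MLflow's conventions and all other names invented for illustration:

```python
class ModelRegistry:
    """Toy model registry: versioned artifacts with promotion stages."""

    def __init__(self):
        self._versions = {}   # name -> list of {"model", "stage", "metrics"}

    def register(self, name, model, metrics):
        entries = self._versions.setdefault(name, [])
        entries.append({"model": model, "stage": "None", "metrics": metrics})
        return len(entries)  # 1-based version number

    def promote(self, name, version, stage):
        # Stages mirror MLflow's conventions: None -> Staging -> Production
        self._versions[name][version - 1]["stage"] = stage

    def latest(self, name, stage="Production"):
        # Most recent version currently in the requested stage, if any.
        for entry in reversed(self._versions[name]):
            if entry["stage"] == stage:
                return entry
        return None

registry = ModelRegistry()
v1 = registry.register("churn", model="weights-v1", metrics={"auc": 0.81})
v2 = registry.register("churn", model="weights-v2", metrics={"auc": 0.84})
registry.promote("churn", v2, "Production")
print(registry.latest("churn")["model"])  # weights-v2
```

MLflow's real registry adds artifact storage, lineage back to tracked runs, and access control, but the versioning-plus-stages mental model is the same.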
Nice-to-Have Skills
- Experience with other cloud platforms (AWS or GCP).
- Knowledge of big data technologies such as Apache Spark.
- Experience with Azure DevOps for CI/CD pipelines.
- Familiarity with real-time inference patterns and streaming data.
- Understanding of Responsible AI principles, including fairness, explainability, and privacy.
Certifications (Preferred)
- Microsoft Certified: Azure AI Engineer Associate
- Databricks Certified Machine Learning Associate (or higher)
About the company:
At Inteliment, we help organizations turn data into powerful decisions. With two decades of proven expertise, we work with global customers to solve complex business problems using advanced data and analytics solutions. Our ACE – Analytical Centre of Excellence brings together some of the best minds in data engineering, analytics, and AI to build next-generation decision intelligence platforms. If you are passionate about data engineering, modern data platforms, and solving real business problems, this role will give you the opportunity to work on global enterprise data ecosystems.
About the role
We are seeking a highly skilled Data Architect with strong hands-on expertise in Data Engineering and/or Data Visualization tools, having 6+ years of experience in the pure Data Analytics domain. The ideal candidate will be responsible for architecting scalable data solutions, guiding technical teams, and ensuring robust data pipelines, analytics frameworks, and visualization ecosystems aligned with business objectives.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 6+ years of hands-on experience in Data Analytics domain.
- Strong experience in designing enterprise data solutions.
- Proven experience in handling large-scale data systems.
- Experience in client-facing roles is preferred.
- Certifications in a related field will be an added advantage.
Technical Skills
✔ Data Engineering Stack
- Python / PySpark / SQL
- ETL Tools (e.g., Informatica, Talend, SSIS, or equivalent)
- Cloud Platforms (AWS / Azure / GCP)
- Data Warehousing (Snowflake, Redshift, BigQuery, etc.)
- Big Data Technologies (Spark, Hadoop – preferred)
✔ Visualization & BI Tools (At least one advanced tool mandatory)
- Power BI
- Tableau
- Qlik
- Looker or equivalent
✔ Database Technologies
- SQL (MySQL, PostgreSQL, SQL Server, Oracle)
- NoSQL (MongoDB, Cassandra – preferred)
✔ Additional Preferred Skills
- Data Modeling (Star/Snowflake schema)
- API integrations
- CI/CD for data pipelines
- Version control (Git)
- Agile methodology exposure
Soft Skills
- Leadership: Strong leadership and mentoring capabilities to guide technical teams.
- Communication: Excellent communication skills for collaborating with cross-functional teams and stakeholders.
- Problem-Solving: Analytical mindset with a keen attention to detail.
- Adaptability: Ability to manage shifting priorities and requirements effectively.
- Team Collaboration: Strong interpersonal skills for fostering a collaborative work environment.
Responsibilities:
✔ Solution Architecture & Design
- Design end-to-end data architecture solutions including data ingestion, transformation, storage, and visualization.
- Architect scalable and high-performance data pipelines.
- Define best practices, standards, and governance frameworks for data analytics projects.
✔ Data Engineering
- Build and optimize ETL/ELT pipelines.
- Work with structured and unstructured datasets.
- Design and implement data lakes, data warehouses, and modern data platforms.
- Ensure data quality, integrity, and performance tuning.
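ETL/ELT pipelines like those described above follow the same extract, transform, load shape regardless of tooling. A toy sketch in plain Python, with the source records, quality rule, and warehouse stand-in all invented for illustration:

```python
def extract():
    # Stand-in for reading from an API, file, or source database.
    return [
        {"id": 1, "amount": "120.50", "country": "IN"},
        {"id": 2, "amount": None, "country": "IN"},      # bad record
        {"id": 3, "amount": "80.00", "country": "US"},
    ]

def transform(rows):
    # Data-quality gate plus type normalization.
    clean = [r for r in rows if r["amount"] is not None]
    return [{**r, "amount": float(r["amount"])} for r in clean]

def load(rows, warehouse):
    # Stand-in for a warehouse write (Snowflake, BigQuery, ...).
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0]["amount"])   # 2 rows survive the quality gate
```

Orchestration tools (Airflow, dbt, Informatica) add scheduling, retries, and lineage around exactly these three stages.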
✔ Data Visualization & Analytics
- Architect and implement enterprise-level dashboards and reporting solutions.
- Define data models optimized for BI tools.
- Guide teams in building intuitive, performance-driven visualizations.
- Translate business requirements into scalable analytics solutions.
✔ Technical Leadership
- Provide technical direction to data engineers, BI developers, and analysts.
- Conduct code reviews and enforce architectural standards.
- Collaborate with cross-functional teams including business stakeholders and delivery teams.
- Mentor junior team members and drive capability building.
✔ Stakeholder Engagement
- Participate in client discussions, solution presentations, and requirement workshops.
- Provide effort estimations and solution proposals.
- Act as a technical escalation point.
About Superclaims
Superclaims modernizes health insurance claims adjudication with intelligent automation. We help insurers and TPAs replace manual, document-heavy workflows with faster, more accurate decisions at scale.
Role: Python Backend Developer
We are looking for a Python Backend Developer who is excited to build AI-powered automation products in a fast-paced startup environment.
What you'll do
- Build and maintain scalable backend systems and APIs
- Develop intelligent data extraction pipelines using AI/ML
- Design and implement agentic workflows with LangGraph
- Design efficient database schemas and optimize queries in PostgreSQL
- Integrate and work with LLMs (OpenAI, Gemini, or similar)
- Collaborate with product, frontend, and data teams to deliver end-to-end features
- Write clean, tested, and well-documented code
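Agentic workflows such as the LangGraph-based ones mentioned above are, at their core, state machines: nodes transform a shared state and edges decide the next step. A library-free sketch of that pattern, where the node names, routing rule, and toy extraction logic are all invented for illustration:

```python
def extract(state):
    # Pretend to extract fields from a claim document.
    state["fields"] = {"amount": state["raw"].count("$") * 100}
    return state

def validate(state):
    state["valid"] = state["fields"]["amount"] > 0
    return state

def route_after_validate(state):
    # Conditional edge: finish, or loop back and re-extract.
    return "finish" if state["valid"] else "extract"

def finish(state):
    state["done"] = True
    return state

NODES = {"extract": extract, "validate": validate, "finish": finish}
EDGES = {"extract": lambda s: "validate", "validate": route_after_validate}

def run(state, entry="extract", max_steps=10):
    node = entry
    for _ in range(max_steps):
        state = NODES[node](state)
        if node == "finish":
            return state
        node = EDGES[node](state)
    raise RuntimeError("workflow did not terminate")

result = run({"raw": "Invoice total: $120"})
print(result["done"], result["fields"])
```

LangGraph formalizes the same idea (a `StateGraph` with nodes and conditional edges) and adds persistence and streaming; the control flow above is the mental model.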
Must-have skills
- Strong proficiency in Python and a modern web framework (FastAPI or similar)
- Experience with PostgreSQL and an ORM (SQLAlchemy preferred)
- Solid understanding of RESTful API design and best practices
- Hands-on experience or strong familiarity with LangGraph
- Experience working with LLMs (OpenAI, Gemini, or similar providers)
- Comfort with Git/version control and collaborative development workflows
Nice-to-have skills
- Experience with Docker and containerized deployments
- Knowledge of Redis for caching or background tasks
- Exposure to cloud platforms (GCP, AWS, or Azure)
- Experience with vector databases and retrieval-augmented generation
- Basic prompt engineering skills
- Experience with object storage (S3/MinIO)
What we're looking for
- 1+ years of Python backend development experience (open to exceptional freshers)
- Fast learner with genuine curiosity about AI/ML and automation
- Prior startup experience preferred
- Ownership mindset, bias for action, and comfort with ambiguity
- Ready to relocate to Hyderabad (work location)
How to apply
Please share:
- Your resume
- GitHub/Portfolio link
- A brief note on why you're interested in AI-powered automation and Superclaims
Who are we ?
Searce means ‘a fine sieve’ & indicates ‘to refine, to analyze, to improve’. It signifies our way of working: To improve to the finest degree of excellence, ‘solving for better’ every time. Searcians are passionate improvers & solvers who love to question the status quo.
The primary purpose of all of us, at Searce, is driving intelligent, impactful & futuristic business outcomes using new-age technology. This purpose is driven passionately by HAPPIER people who aim to become better, every day.
Tech Superpowers
End-to-End Ecosystem Thinker: You build modular, reusable data products across ingestion, transformation (ETL/ELT), and consumption layers. You ensure the entire data lifecycle is governed, scalable, and optimized for high-velocity delivery.
The MDS Architect: You reimagine business with the Modern Data Stack (MDS) to deliver Data Mesh implementations and real value. You treat every dataset as a measurable ‘Data Product’, with a clear focus on ROI and time-to-insight.
Distributed Compute & Scale Savant: You craft resilient architectures that survive petabyte-scale volume and data skew without ‘breaking the bank’. You prove your designs with cost-performance benchmarks, not just slideware.
AI-Ready Orchestrator: You engineer the bridge between structured data and unstructured/vector stores. By mastering pipelines for RAG models and GenAI, you turn raw data into the fuel for intelligent, automated workflows.
The Quality Craftsman (Builder @ Heart): You are an outcome-focused leader who lives in the code. From embedding GDPR/PII privacy-by-design to optimizing SQL, Python, and Spark daily, you ensure integrity is baked into every table.
Experience & Relevance
Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both batch and streaming workloads.
Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
AI-Native Workflow: You don't just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery, and have a track record of building the data foundations required for Generative AI.
Architectural Portfolio: Evidence of leading 2-3 large-scale transformations, including platform migrations, data lakehouse builds, or real-time analytics architectures.
Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO's business vision into a Lead Engineer's technical specification without losing anything in translation.
The "Solver" Mindset: A track record of solving ‘impossible’ data problems, whether it's fixing massive data skew, optimizing spiraling cloud costs, or architecting 99.9%-available data services.
About the role
We are looking for talented Senior Backend Engineers (5+ years of experience) to join our team and take ownership of different parts of our stack. You will be working alongside a team of Engineers locally and directly with the U.S. Engineering team on all aspects of product/application development. You will leverage your experiences and abilities to inform decisions across product development and technology. You will help us build the foundation of our 2nd Headquarters in Pune: its culture, its processes, and its practices. There are a ton of interesting problems to solve, so come hungry. If your colleagues describe you as curious, driven, kind, and creative, you are a culture fit.
What Success Looks Like
- You write, review and ship code in production. Your employer or client's success depends on the software you build
- You use Generative AI tools on a daily basis to enhance the quality and efficacy of your software and non-software deliverables
- You are a self-starter and enjoy working with minimal supervision
- You evaluate and make technical architecture decisions with a long-term view, optimizing for speed, quality, and safety
- You take pride in the product you create and the code that you write
- Your team can rely on you to get them out of a sticky situation in production
- You can work well on a team of sales executives, designers and engineers in an in-person environment
- You are passionate about the enterprise software development lifecycle and feel strongly about improving it
- You are a first principles engineer who exercises curiosity about the technologies you work with
- You can learn quickly about technologies, software and code that you are not familiar with, often from rudimentary documentation
- You take ownership of the code that you write, and you help the team operate with everything that you build, throughout its lifecycle
- You communicate openly and solicit feedback on important decisions, keeping the team aligned on your rationale
- You exercise an optimistic mindset and are willing to go the extra mile to make things work
Areas of Ownership
Our hiring process is designed for you to demonstrate a generalist set of capabilities, with a specialization in Backend Technologies.
Required Technical Experience (MUST HAVE):
- Expertise in Python
- Deep hands-on experience with Terraform
- Proficiency in Kubernetes
- Experience with cloud platforms (GCP strongly preferred, AWS/Azure acceptable)
Additional experience with some of the following:
- Backend Frameworks and Technologies (Node.js, NuxtJS, Express.js)
- Programming languages (JavaScript, TypeScript, Java, C++, Go)
- RPCs (REST, gRPC or GraphQL)
- Databases (SQL, NoSQL, Postgres, MongoDB, or Firebase)
- CI/CD (Jenkins, CircleCI, GitLab or similar)
- Source code versioning tools such as Git or Perforce
- Microservices architecture
Ways to stand out
- Familiarity with AI Platforms
- Extensive experience with building enterprise-scale applications with >99% SLAs
- Deep expertise across the full required stack: Python, Terraform, Kubernetes, and GCP
You'll Get...
- Competitive Salary
- Medical Insurance Benefits
- Employer Provident Fund contributions with Gratuity after 5 years of service
- Company-sponsored US onsite trips for high performers, based on business requirements
- Potential international transfer support for top performers, based on business requirements
- Technology (hardware, software, trainings, etc.) equipment and/or allowance
- The opportunity to re-shape an entire industry
- Beautiful office environment
- Meal allowance and/or food provision on site
Culture
Who we are: Our Co-Founder and CTO is a Serial Gen AI Inventor who grew up in Pune, India, is a BITS Pilani graduate, and worked at NVIDIA's Pune office for 6 years. There, he was promoted 5 times in 6 years and was transferred to the NVIDIA Headquarters in Santa Clara, California. After making significant contributions to NVIDIA, he proceeded to attend Harvard for his dual Masters in Engineering and MBA from HBS. Our other Co-Founder/CEO is a successful Serial Entrepreneur who has built multiple companies. As a team, we work very hard, have a curious mind-set, and believe in a low-ego high output approach.
Virtual Hiring Drive Site Reliability Engineer (SRE)
Date: 25th April 2026, Saturday (Single-Day Drive)
Mode: 100% Virtual - All interview rounds on the same day
Experience: 3 to 7 Years
Note: We are looking for quick joiners who can join us within 30 days.
About the Role
We are looking for a Site Reliability Engineer who understands the realities of running production systems at scale. If building reliable, scalable, and observable systems excites you, you'll enjoy working with us.
At One2N, we solve One-to-N problems where proof of concept is already built and the real challenge lies in scalability, maintainability, performance, and reliability.
You will work closely with startups and mid-sized clients, helping them architect production-grade infrastructure and observability systems.
Key Responsibilities
- Design and build platform engineering solutions with a self-serve model
- Architect and optimize observability systems (metrics, logs, traces)
- Implement monitoring, logging, alerting & dashboards
- Build and optimize CI/CD pipelines
- Automate repetitive operational and infrastructure tasks (IaC-first approach)
- Improve Developer Experience (DX)
- Guide teams on SRE best practices & on-call processes
- Participate in code reviews and mentor engineers
- Contribute to cloud-native and platform engineering initiatives
Must-Have Skills
- 3-7 years of experience in DevOps / SRE / Platform Engineering
- Strong hands-on with Kubernetes on AWS
- Expertise in observability tools like Datadog / Honeycomb / ELK / Grafana / Prometheus
- Experience with Docker & Microservices architecture
- Infrastructure as Code using Terraform / Pulumi
- Strong Linux troubleshooting skills
- Programming knowledge in Golang / Python / Java
- Automation & scripting expertise
Job Title: Lead Data Architect (AI & Cloud)
Company: Risosu Consulting
About the Role
Risosu Consulting is hiring a Lead Data Architect / Crew Manager for one of our global clients in the Cloud, Data & AI space. This role focuses on designing scalable data architectures and driving AI-led transformation across modern cloud platforms.
Key Responsibilities
- Design data strategies, architectures, and scalable cloud solutions
- Build and optimize data pipelines, data lakes, and warehouses
- Collaborate with cross-functional teams to enable AI/ML use cases
- Lead client engagements and translate business needs into data solutions
- Mentor and manage a team of consultants as a Crew Manager
Requirements
- 5+ years of experience in Data Architecture / Engineering
- Strong expertise in cloud platforms (GCP/AWS/Azure)
- Experience with data modeling, ETL, and data governance
- Exposure to tools like BigQuery, dbt, Airbyte, or Power BI
- Strong communication skills and stakeholder management
Why Join via Risosu?
- Opportunity to work on high-impact global projects
- Fast-growing, entrepreneurial environment
- Clear growth path with learning & certification support
- Work with cutting-edge Cloud, Data & AI technologies
If you’re passionate about building scalable data systems and leading teams, let’s connect.
Java Architect
Location: Pune, Maharashtra (partially remote)
Role Overview
We are seeking an experienced Java Architect to lead the design and evolution of scalable, cloud-native, microservices-based platforms. This role requires deep expertise in distributed systems, Domain-Driven Design (DDD), multi-tenant architectures, and event-driven systems.
As a Java Architect, you will define technical vision, drive architectural decisions, establish engineering standards, and mentor development teams while ensuring scalability, resilience, security, and performance across the platform.
Key Responsibilities -
Architecture & Design
- Define and own end-to-end architecture for microservices-based, distributed systems using Java and Spring Boot.
- Design scalable, resilient, and high-performing multi-tenant platforms with strong tenant isolation and configurability.
- Architect and implement event-driven systems using Kafka, RabbitMQ, or similar messaging platforms.
- Apply Domain-Driven Design (DDD) principles to define bounded contexts, aggregates, domain models, and integration patterns.
- Establish architectural blueprints, best practices, design standards, and reusable frameworks.
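The event-driven responsibilities above hinge on one recurring pattern: brokers like Kafka and RabbitMQ deliver at-least-once, so consumers must be idempotent. Below is a broker-agnostic sketch in Python (the document's roles span several languages); the `Event` shape, the in-memory dedupe set, and the handler are all assumptions for illustration.

```python
# Illustrative idempotent-consumer sketch: redelivered events become no-ops.
# Event shape and in-memory dedupe are assumptions; production systems persist
# processed ids (e.g. a DB table) and tie the write to the same transaction.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    event_id: str   # unique id assigned by the producer
    payload: dict

class IdempotentConsumer:
    def __init__(self, handler):
        self._handler = handler
        self._seen: set[str] = set()

    def consume(self, event: Event) -> bool:
        """Apply the handler once per event_id; duplicates are skipped."""
        if event.event_id in self._seen:
            return False  # redelivery, already processed
        self._handler(event)
        self._seen.add(event.event_id)
        return True

balance = {"total": 0}

def credit(event: Event) -> None:
    balance["total"] += event.payload["amount"]

consumer = IdempotentConsumer(credit)
consumer.consume(Event("evt-1", {"amount": 100}))
consumer.consume(Event("evt-1", {"amount": 100}))  # duplicate delivery
print(balance["total"])  # 100, not 200
```

The same guard generalizes to DDD aggregates: the dedupe key lives inside the aggregate's bounded context, so each consumer group tracks its own processed-event set.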
Technical Leadership
- Provide technical direction and mentorship to engineering teams.
- Review and validate architectural decisions, design documents, and critical code.
- Drive coding excellence through clean architecture principles, SOLID design, TDD, and code review best practices.
- Guide teams in building secure, fault-tolerant, and observable systems.
Cloud & Platform Engineering
- Design cloud-native solutions leveraging AWS services and containerized deployments.
- Define strategies for API management using API Gateways such as Kong.
- Architect CI/CD pipelines, container strategies (Docker), and orchestration using Kubernetes.
- Ensure observability through logging, monitoring, and tracing using ELK Stack, Datadog, Prometheus, or similar tools.
Performance, Security & Scalability
- Architect for high availability, disaster recovery, and fault tolerance.
- Conduct performance tuning and scalability planning.
- Ensure secure design principles across services and integrations.
- Make architecture trade-offs balancing scalability, cost, maintainability, and performance.
Collaboration & Stakeholder Engagement
- Work closely with Product, DevOps, QA, and Engineering teams to align technical solutions with business goals.
- Communicate complex architectural concepts to technical and non-technical stakeholders.
- Participate in roadmap planning and technology strategy discussions.
Required Skills & Experience -
- 8+ years of professional software development experience with strong expertise in Java and Spring Boot.
- Proven experience architecting large-scale distributed, microservices-based systems.
- Deep expertise in Event-Driven Architecture and messaging platforms like Kafka, RabbitMQ, etc.
- Strong hands-on experience with Domain-Driven Design (DDD) in complex enterprise systems.
- Experience designing multi-tenant SaaS platforms.
- Strong knowledge of AWS and cloud-native architecture principles.
- Experience with API Gateway solutions (Kong or similar).
- Hands-on expertise in SQL/NoSQL databases (Postgres, Oracle, MongoDB, Cassandra, Redis, etc.).
- Strong experience with Docker and Kubernetes in production environments.
- Expertise in system design, scalability patterns, and performance optimization.
- Strong problem-solving skills with the ability to evaluate architectural trade-offs.
- Excellent communication and leadership skills.
Good to Have
- Experience building internal engineering platforms.
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Exposure to service mesh technologies.
- Experience in high-scale SaaS product environments.
6+ years of hands-on development experience and in-depth knowledge of Java, Spring, Spring Boot, and Quarkus; front-end technologies like Angular and React JS are nice to have
● Excellent Engineering skills in designing and implementing scalable solutions
● Good knowledge of CI/CD Pipeline with strong focus on TDD
● Strong communication skills and ownership
● Exposure to Cloud, Kubernetes, Docker, Microservices is highly desired.
● Experience in working on public cloud environments like AWS, Azure, GCP w.r.t. solutions development, deployment & adoption of cloud-based technology components like IaaS / PaaS offerings
● Proficiency in PL/SQL and Database development.
● Strong in J2EE & OOP design patterns.
Dear Candidates,
Exp: 3+ years
NP: Immediate to 7 days
Location: Bangalore, Chennai
5 days week
Job Description
Function: Software Engineering → Full-Stack Development
Fintech/BFSI domain experience.
- React.js
- Node.js
- AWS
Requirements:
- Mandatory skill: strong experience in React JS, Node JS, and AWS; 3+ years of relevant experience from current projects.
- Expertise with at least one object-oriented JavaScript framework (React, Angular, Ember, Dojo, Node, etc.).
- Good to have hands-on experience in Python development.
- Proficiency with Object Oriented Programming, multi-threading, data serialization, and REST API to connect applications to back-end services.
- Proficiency in Docker, Kubernetes (k8s), Jenkins, and GitHub Actions is essential for this role.
- Proven cloud development experience on AWS.
- Understanding of IT life cycle methodology and processes.
- Experience in understanding and Leading Enterprise Platforms/Solutions.
- Experience working with Microservices/Service Oriented Architecture Frameworks.
- Good Understanding of Middleware technologies.
- Possess expertise in at least one unit testing framework.
- Education: B.E/B.Tech/MCA/M.Sc only; a UG degree alone will not be considered.
Must-Have Skills:
- Hands-on experience with air-gapped Kubernetes clusters, ideally in regulated industries (finance, healthcare, etc.).
- Strong expertise in CI/CD pipelines, programmable infrastructure, and automation.
- Proficiency in Linux troubleshooting, observability (Prometheus, Grafana, ELK), and multi-region disaster recovery.
- Security & compliance knowledge for regulated industries.
- Preferred: Experience with GKE, RKE, Rook-Ceph and certifications like CKA, CKAD.
Who You Are
- A Kubernetes expert who thrives on scalability, automation, and security.
- Passionate about optimizing infrastructure, CI/CD, and high-availability systems.
- Comfortable troubleshooting Linux, improving observability, and ensuring disaster recovery readiness.
- A problem solver who simplifies complexity and drives cloud-native adoption.
What You’ll Do
- Architect & automate Kubernetes solutions for air-gapped and multi-region clusters.
- Optimize CI/CD pipelines & cloud-native deployments.
- Work with open-source projects, selecting the right tools for the job.
- Educate & guide teams on modern cloud-native infrastructure best practices.
- Solve real-world scaling, security, and infrastructure automation challenges.
Why Join Us?
- Work on high-impact Kubernetes projects in regulated industries.
- Solve real-world automation & infrastructure challenges with cutting-edge tools.
- Grow in a team that values learning, open-source contributions, and innovation.
Job role: Systems Engineer (L2)
Location: Remote/Bengaluru
Experience: 3-6 years
About the Role:
We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.
Key Responsibilities:
— Design, deploy, and maintain scalable cloud infrastructure — AWS/GCP/Azure.
— Manage and optimize networking components — routers, switches, firewalls, load balancers.
— Handle incident response — monitor systems, identify issues, resolve production problems.
— Implement DevOps best practices — CI/CD pipelines, automation, containerization.
— Collaborate with backend and product teams on system architecture.
— Performance tuning — ensure high availability and reliability of platform.
— Security management — implement security protocols and compliance standards.
Required Skills:
Technical:
- Linux/Unix administration — strong fundamentals
- Networking — TCP/IP, DNS, BGP, VoIP protocols
- Cloud platforms — AWS/GCP/Azure — minimum 2 years
- DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
- Monitoring tools — Grafana, Prometheus, Kibana, Datadog
- Scripting — Python, Bash, Shell
- Databases — MySQL, PostgreSQL, Redis
Soft skills:
- Strong problem-solving under pressure
- Good communication — English written and verbal
- Team player — collaborative mindset
Good to Have:
- Experience in telecom/CPaaS/cloud communications industry
- Knowledge of VoIP, SIP, RTP protocols
- AI/ML operations experience
- CCNA/AWS certifications
We are hiring an L1 IT Support Engineer with 2–3 years of experience in desktop/helpdesk support to provide first-level technical assistance across end-user systems, cloud, and enterprise IT environments.
Key Responsibilities
- Troubleshoot Windows OS and Office 365 issues (Outlook, Teams, OneDrive)
- Manage Active Directory tasks: password resets, access/user management
- Install/configure laptops, desktops, printers, and software
- Perform basic network troubleshooting (Wi-Fi, VPN, DNS, DHCP)
- Support AWS CloudWatch alerts and basic Linux troubleshooting
- Handle patching, RCA, documentation, and SOP updates
- Manage tickets in ServiceNow/Jira and meet SLA timelines
- Support onboarding/offboarding and escalate complex issues to L2
Required Skills
- 2–3 years in IT Support / Helpdesk / Desktop Support
- Strong in Windows 10/11, Office 365, Active Directory
- Basic exposure to AWS / CloudWatch and Linux/Unix
- Familiarity with ServiceNow/Jira, ITIL/SLA processes
- Knowledge of SIP/VoIP basics is a plus
- Strong communication and troubleshooting skills

Mid Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* DevOps practice development on promoting automation, including asset creation, enterprise strategy definition, and training teams
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have 20+ years of experience as a Software Engineer, including 10+ years leading teams and at least 4 years building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product
* You have extensive experience with DevOps environment and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best in class talent
* Competitive compensation + ESOPs
Excellent Opportunity- Lead Java Full Stack (React +AWS+ Dynamo) -Wissen Technology, Whitefield Bengaluru
About Wissen Technology
Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.
For more details:
Website: www.wissen.com
Wissen Thought leadership : https://www.wissen.com/articles/
LinkedIn: Wissen Technology
Job Description:
Requirements: Lead (Java + React + AWS + DynamoDB)
- Bachelor’s degree in computer science or related field.
- 7-12 years of experience in software development.
- Hands-on experience working on AWS cloud environment and DynamoDB.
- Proficiency in Java, J2EE, Spring, Hibernate, REST API, Microservices.
- Experience in developing applications using J2EE Design Patterns and AWS services.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills
Amita Soni
Senior Consultant-Talent Acquisition-Wissen Technology, Pune
Lead Data Engineer
What are we looking for?
A real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
What you will wake up to solve.
- Lead Technical Design & Data Architecture: Architect and lead the end-to-end development of scalable, cloud-native data platforms. You’ll guide the squad on critical architectural decisions—choosing between Batch vs. Streaming or ETL vs. ELT—while remaining 100% hands-on, contributing high-quality, production-grade code.
- Build High-Velocity Data Pipelines: Drive the implementation of robust data transports and ingestion frameworks using Python, SQL, and Spark. You will build integration layers that connect heterogeneous sources (SaaS, RDBMS, NoSQL) into unified, high-availability environments like BigQuery, Snowflake, or Redshift.
- Mentor & Elevate the Squad: Foster a culture of technical excellence by mentoring and inspiring a team of data analysts and engineers. Lead deep-dive code reviews, promote best-practice data modeling (Star/Snowflake schema), and ensure the squad adopts modern engineering standards like CI/CD for data.
- Drive AI-Ready Data Strategy: Be the expert in designing data foundations optimized for AI and Machine Learning. You will champion the use of GCP (Dataflow, Pub/Sub, BigQuery) and AWS (Lambda, Glue, EMR) to create "clean room" environments that fuel advanced analytics and generative AI models.
- Partner with Clients as a Technical DRI: Act as the Directly Responsible Individual for client success. Translate ambiguous business questions into elegant data services, manage project deliverables using Agile methodologies, and ensure that the data provided is accurate, consistent, and mission-critical.
- Troubleshoot & Optimize for Scale: Own the reliability of the reporting layer. You will proactively monitor pipelines, troubleshoot complex transformation bottlenecks, and propose ways to improve platform performance and cost-efficiency.
- Innovate and Build Reusable IP: Spearhead the creation of reusable data frameworks, custom operators, and transformation libraries that accelerate future projects and establish Searce’s unique technical advantage in the market.
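The batch-pipeline responsibilities above follow a recognizable extract → transform → load shape. Here is a toy, pure-Python sketch of that shape using generators; the source records and the in-memory "warehouse" dict are stand-ins for real connectors (RDBMS/SaaS sources, BigQuery/Snowflake/Redshift sinks), and the field names are invented for the example.

```python
# Toy batch ETL sketch: extract -> transform -> load with generators.
# Records and the dict "warehouse" are stand-ins for real connectors.

def extract(rows):
    """Yield raw records from a heterogeneous source."""
    yield from rows

def transform(records):
    """Normalize field names and types; drop records that fail validation."""
    for r in records:
        try:
            yield {"user_id": int(r["id"]), "spend": round(float(r["amt"]), 2)}
        except (KeyError, ValueError):
            continue  # in production, route to a dead-letter store instead

def load(records, warehouse):
    """Idempotent upsert keyed on user_id, so reruns don't duplicate rows."""
    for r in records:
        warehouse[r["user_id"]] = r
    return warehouse

raw = [{"id": "1", "amt": "19.99"}, {"id": "2", "amt": "bad"}, {"id": "3", "amt": "5"}]
warehouse = load(transform(extract(raw)), {})
print(sorted(warehouse))  # [1, 3]  (record 2 failed validation)
```

The generator chain is the same streaming-friendly shape a Spark or Dataflow job generalizes: transforms are lazy, so nothing materializes until the load stage pulls.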
Welcome to Searce
The AI-Native tech consultancy that's rewriting the rules.
Searce is an AI-native, engineering-led, modern tech consultancy that empowers clients to futurify their business by delivering intelligent, impactful, real business outcomes. Searce solvers co-innovate with clients as their trusted transformational partners ensuring sustained competitive advantage. Searce clients realize smarter, faster, better business outcomes delivered by AI-native Searce solver squads.
Functional Skills
the solver personas.
- The Data Architect: This persona deconstructs ambiguous business goals into scalable, elegant data blueprints. They don't just move data; they design the foundation—from schema design to partitioning strategies—that allows data scientists and analysts to thrive, foreseeing technical bottlenecks and making pragmatic trade-offs.
- The Player-Coach: As a hands-on leader, this persona leads from the front by writing exemplary, production-grade SQL and Python while simultaneously mentoring and elevating the skills of the squad. Their success is measured by the team's ability to deliver high-quality, maintainable code and their growth as engineers.
- The Pragmatic Innovator: This individual balances a passion for modern data tech (like Generative AI and Real-time Streaming) with a sharp focus on business outcomes. They champion new tools where they add real value but are disciplined enough to choose stable, cost-effective solutions to meet deadlines and deliver robust products.
- The Client-Facing Technologist: This persona acts as the crucial technical bridge between the data squad and the client. They build trust by listening actively, explaining complex data concepts (like data latency or idempotency) in simple terms, and demonstrating how engineering decisions align with the client’s strategic goals.
- The Quality Craftsman: This individual possesses an unwavering commitment to data integrity and treats data engineering as a craft. They are the guardian of the reporting layer, advocating for robust testing, data validation frameworks, and clean, modular code to ensure the long-term reliability of the data platform.
Experience & Relevance
- Engineering Depth: 7-10 years of professional experience in end-to-end data product development. You have a portfolio that proves your ability to build complex, high-velocity pipelines for both Batch and Streaming workloads.
- Cloud-Native Fluency: Deep, hands-on experience designing and deploying scalable data solutions on at least one major cloud platform (AWS, GCP, or Azure). You are comfortable navigating the nuances of EMR, BigQuery, or Synapse at scale.
- AI-Native Workflow: You don’t just build for AI; you build with AI. You must be proficient in using AI coding assistants (e.g., GitHub Copilot) to accelerate your delivery and have a track record of building the data foundations required for Generative AI.
- Architectural Portfolio: Evidence of leading 2-3 large-scale transformations—including platform migrations, data lakehouse builds, or real-time analytics architectures.
- Client-Facing Acumen: You have direct experience in a consultative, client-facing role. You can confidently translate a CEO’s business vision into a Lead Engineer’s technical specification without losing anything in translation.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹18,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days working )
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend )
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with full-time work experience (1 to 2 years) in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
Overview
We are looking for a highly skilled Lead Data Engineer with strong expertise in Data Warehousing & Analytics to join our team. The ideal candidate will have extensive experience in designing and managing data solutions, advanced SQL proficiency, and hands-on expertise in Python & Power BI.
Skills : Python, Databricks, SQL
Key Responsibilities:
- Design, develop, and maintain scalable data warehouse solutions.
- Write and optimize complex SQL queries for data extraction, transformation, and reporting.
- Develop and automate data pipelines using Python.
- Work with AWS cloud services for data storage, processing, and analytics.
- Collaborate with cross-functional teams to provide data-driven insights and solutions.
- Ensure data integrity, security, and performance optimization.
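The "write and optimize complex SQL" responsibility above can be illustrated end to end with the stdlib's sqlite3 as a stand-in warehouse. Table, index, and column names below are invented for the example; a real deployment would run equivalent SQL on Databricks or another warehouse engine.

```python
# Self-contained SQL sketch using sqlite3 as a stand-in warehouse.
# Schema and data are illustrative only.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL);
    CREATE INDEX idx_orders_customer ON orders (customer);  -- supports the GROUP BY
""")
conn.executemany(
    "INSERT INTO orders (customer, amount) VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)

# Per-customer totals, filtered after aggregation with HAVING
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING total > 60
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('acme', 200.0)]
```

The index on `customer` is the kind of optimization the role description alludes to: without it, every aggregation over that column is a full scan.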
Required Skills & Experience:
- Must have a minimum of 6-10 years of experience in Data Warehousing & Analytics.
- Must have strong experience in Databricks
- Strong proficiency in writing complex SQL queries with deep understanding of query optimization, stored procedures, and indexing.
- Hands-on experience with Python for data processing and automation.
- Experience working with AWS cloud services.
- Hands-on experience with reporting tools like Power BI or Tableau.
- Ability to work independently and collaborate with teams across different time zones.
About the Role
At CAW Studios, we are building the future with agentic AI systems, RAG pipelines, and intelligent automation. From autonomous AI agents at KnackLabs to developer productivity tools at CodeKnack, we ship production-ready AI products that solve real problems for enterprises and startups alike.
This is your chance to work on cutting-edge GenAI, LLM fine-tuning, and agent frameworks—and see your code power products used in the real world. If you’re excited about experimenting, shipping fast, and solving complex AI challenges hands-on, you’ll love it here.
Who should apply
Engineers with 2 to 4 years of full-time experience building high-scale software systems, with a proven track record of deploying complex Generative AI products to production.
Role Overview
We are hiring an AI/ML Engineer II (SE2) to own the architectural implementation and deployment of production-grade agentic AI systems. This role requires a hybrid of traditional engineering rigour (OOP, SOLID, high-concurrency) and advanced AI specialization to build the next generation of intelligent tools.
Responsibilities
● Independently design modular and maintainable multi-agent AI systems aligned with SOLID principles
● Build high-concurrency, async FastAPI backends for complex AI workloads with enterprise stability
● Architect sophisticated agentic workflows using LangGraph with a focus on state persistence and error-recovery
● Design and optimize RAG pipelines involving advanced chunking, hybrid search, and re-ranking
● Take ownership of containerization and cloud deployment for observable, cost-efficient AI services
● Collaborate on reusable AI components and internal frameworks to enhance team engineering velocity
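The RAG responsibilities above (chunking, hybrid search, re-ranking) can be sketched at toy scale in pure Python. The fixed-size word chunking with overlap below is a common pre-indexing step, and the keyword-overlap score is a deliberately naive stand-in for hybrid vector + lexical search; chunk sizes, the scoring rule, and the sample document are all assumptions for illustration.

```python
# Tiny RAG retrieval-stage sketch: overlapping chunks + lexical scoring.
# Real pipelines use embeddings, hybrid search, and a re-ranker.

def chunk(text: str, size: int = 8, overlap: int = 2):
    """Split on words into overlapping fixed-size chunks."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def retrieve(query: str, chunks, k: int = 1):
    """Rank chunks by word overlap with the query (lexical stand-in)."""
    q = set(query.lower().split())
    scored = [(len(q & set(c.lower().split())), c) for c in chunks]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [c for score, c in scored[:k] if score > 0]

doc = ("FastAPI backends serve agentic workloads while LangGraph manages "
       "agent state and error recovery paths")
chunks = chunk(doc)
print(retrieve("how does LangGraph manage state", chunks))
```

The overlap parameter matters in practice: without it, an answer that straddles a chunk boundary is split across two chunks and neither scores well.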
Expectations
● Deep obsession with automation, DevOps, OOPS, and SOLID principles
● Advanced experience deploying RAG or agent-based systems with LangGraph orchestration
● Expert-level mastery of async Python, system thinking, and building scalable backends
● High ownership and a "production-first" mindset for end-to-end system reliability
● Hands-on experience across multiple AI modalities (Vision, Audio, Text) and their architectures
Job Role: Sr. Full Stack Developer
Experience: Min 6 years
Location: Bangalore
Company Profile- https://www.wissen.com/
Domain
Fintech, Banking, Capital Markets, Investment Banking
Job Summary
We are looking for a highly experienced Senior Full Stack Engineer with strong hands-on expertise in Java, Spring Boot, AWS, React, and DynamoDB. The ideal candidate will have a strong background in building secure, scalable, high-performance applications for financial services, with experience in regulated environments such as banking, capital markets, or investment banking.
Key Responsibilities
- Design, develop, and maintain scalable backend services using Java and Spring Boot.
- Build responsive and reusable user interfaces using React.
- Design and optimize data models and access patterns in DynamoDB.
- Develop RESTful APIs and integrate them with front-end and downstream systems.
- Work on microservices-based architecture and cloud-native application design.
- Collaborate with product managers, business analysts, architects, QA, and DevOps teams to deliver business-critical solutions.
- Ensure application security, performance, reliability, and maintainability.
- Participate in code reviews, architecture reviews, and design discussions.
- Troubleshoot production issues and support enhancements in live environments.
- Follow SDLC, Agile, and DevOps best practices in a fast-paced financial services environment.
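"Design and optimize data models and access patterns in DynamoDB" usually means single-table design with composite keys. The sketch below emulates that idea in plain Python so it is runnable; the entity names, key formats (`CUSTOMER#…`, `ACCOUNT#…`), and the list-backed "table" are assumptions for illustration, where a real service would issue a boto3 `Query` against an actual table.

```python
# Hedged sketch of DynamoDB single-table key design: composite partition/sort
# keys let one table serve several access patterns. Key formats are invented.

def account_item(customer_id: str, account_id: str, balance: float) -> dict:
    return {
        "PK": f"CUSTOMER#{customer_id}",  # partition key groups a customer's items
        "SK": f"ACCOUNT#{account_id}",    # sort key orders/filters within it
        "balance": balance,
    }

def query_accounts(table: list[dict], customer_id: str) -> list[dict]:
    """Emulates Query(PK = ..., begins_with(SK, 'ACCOUNT#'))."""
    pk = f"CUSTOMER#{customer_id}"
    return [i for i in table if i["PK"] == pk and i["SK"].startswith("ACCOUNT#")]

table = [
    account_item("c1", "a1", 100.0),
    account_item("c1", "a2", 250.0),
    account_item("c2", "a9", 70.0),
]
print(len(query_accounts(table, "c1")))  # 2
```

The design choice this illustrates: access patterns are decided first, then keys are shaped so each pattern is a single partition query rather than a scan.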
Required Skills
- 8+ years of experience in software development.
- Hands-on experience working on React, AWS cloud environment and DynamoDB.
- Proficiency in Java, J2EE, Spring, Hibernate, REST API, Microservices.
- Experience in developing applications using J2EE Design Patterns and AWS services.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, IT, or a related discipline.
- Proven experience delivering enterprise-grade applications in regulated financial environments.
Role Overview:
As a Database Administrator, you will be responsible for the full lifecycle management of our MySQL or PostgreSQL database systems. This includes installation, configuration, performance tuning, security implementation, backup and recovery, and proactive monitoring. You will ensure the reliability, availability, and security of our database infrastructure, supporting our internal operations and client projects.
Job Responsibilities
- Installation and configuration of database software across diverse operating systems.
- Designing efficient physical database models derived from logical designs and application specifications, along with configuring database servers according to best practices and workload requirements.
- Establishing and implementing robust backup and recovery strategies tailored to data volatility and application availability needs.
- Implementing comprehensive security measures at the OS, database, and network levels to ensure authorized data access and maintain a rigorous security infrastructure with auditing capabilities for compliance.
- Fine-tuning hardware/VM resources for optimal database performance.
- Proactive monitoring of the database environment, including performance optimization through adjustments to data structures, SQL, application logic, or the DBMS subsystem.
- Configuration and implementation of database replication technologies (e.g., Master-Slave, Master-Master, Log Shipping, Mirroring, Always On).
- Automation of routine DBA tasks utilizing scripting languages such as Shell, PowerShell, Python, or GO.
- Proficiency in writing general SQL queries.
- Setting up comprehensive monitoring solutions for databases (OS and database levels) using custom scripts or third-party monitoring tools.
- Basic Cloud platform knowledge (AWS/GCP/Azure)
Qualification
Experienced DBA (4-8 years) with deep expertise in MySQL or PostgreSQL architecture, configuration, and management.
Proficient in SQL, backup/recovery, security implementation, performance tuning, and replication for both systems.
Skilled in scripting (e.g., Shell, Python) and Linux, possessing strong problem-solving, communication, and teamwork abilities with a proactive approach.
A relevant Bachelor's degree in Computer Science, Information Technology, or a related field.
We identify better ways of doing things.
Solver? Absolutely. But not the usual kind. We are searching for the architects of the audacious & the pioneers of the possible. If you are the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you are speaking our language.
➔ Improver. Solver. Futurist.
➔ Great sense of humor.
➔ ‘Possible. It is.’ Mindset.
➔ Compassionate collaborator. Bold experimenter. Tireless iterator.
➔ Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
➔ Thinks in systems. Solves at scale.
This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way—and then identifies 3 better ways to do it—we’d love to chat with you.
Director - Data engineering
What are we looking for?
A real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
Your Responsibilities
What you will wake up to solve.
1. Delivery & Tactical Rigor
- Methodology Implementation: Implement and manage a unified, 'DataOps-First' methodology for data engineering delivery (ETL/ELT pipelines, Data Modeling, MLOps, Data Governance) within assigned business units. This ensures predictable outcomes and trusted data integrity by reducing architecture variability at the project level.
- Operational Stewardship: Drive initiatives to optimize team utilization and enhance operational efficiency within the practice. You manage the commercial success of your squads, ensuring data delivery models (from migration to modern data stack implementation) are executed profitably, scalably, and cost-effectively.
Execution & Technical Resolution
- Technical Escalation: Serve as the primary escalation point for delivery issues, personally leading the resolution of complex data integration bottlenecks and pipeline failures to protect client timelines and data reliability standards.
- Quality Enforcement
- Quality Oversight: Execute and monitor technical data quality standards, ensuring engineering teams adhere to strict policies regarding data lineage, automated quality checks (observability), security/privacy compliance (GDPR/CCPA/PII), and active catalog management.
2. Strategic Growth & Practice Scaling
- Talent & Scaling Execution: Execute the strategy for data engineering talent acquisition and development within your business units. Implement objective metrics to assess and grow the 'Data-Native' DNA of your teams, ensuring squads are consistently equipped to handle petabyte-scale environments and high-impact delivery.
- Offerings Alignment: Drive the adoption of standardized regional offerings (e.g., Modern Data Platform, Data Mesh, Lakehouse Implementation). Ensure your teams leverage the profitable frameworks defined by the practice to accelerate time-to-insight and eliminate architectural fragmentation in client environments.
- Innovation & IP Development: Lead the practical integration of Vector Databases and LLM-ready architectures into project delivery. Champion the hands-on development of IP and reusable accelerators (e.g., automated ingestion engines) that improve delivery speed and enhance data availability across your portfolio.
3. Leadership & Unit Management
- Unit Leadership: Directly lead, mentor, and manage the Engineering Managers and Lead Architects within your business unit. Hold your teams accountable for project-level operational consistency, technical talent development, and strict adherence to the practice's data governance standards.
- Stakeholder Communication: Clearly articulate the business unit’s operational performance, technical quality metrics, and delivery progress to the C-suite Stakeholders and regional client leadership, bridging the gap between technical execution and business value.
- Ecosystem Alignment: Maintain strong technical relationships with key partner contacts (Snowflake, Databricks, AWS/GCP). Align team delivery capabilities with current product roadmaps and ensure squad-level participation in training, certifications, and partner-led enablement opportunities.
Welcome to Searce
The ‘process-first’, AI-native modern tech consultancy that's rewriting the rules.
We don’t do traditional.
As an engineering-led consultancy, we are dedicated to relentlessly improving real business outcomes. Our solvers co-innovate with clients to futurify operations and make processes smarter, faster & better.
Functional Skills
1. Delivery Management & Operational Excellence
- Methodology Execution: Expert capability in implementing and enforcing a unified delivery methodology (DataOps, Agile, Mesh Principles) within specific business units. Proven track record of auditing squad-level adherence to ensure consistency across the project lifecycle.
- Operational Performance: High proficiency in managing day-to-day operational metrics, including squad utilization, resource forecasting, and productivity tracking. Skilled at optimizing team performance to meet profitability and efficiency targets.
- SOW & Risk Mitigation: Proven experience in operationalizing Statement of Work (SOW) requirements and identifying technical delivery risks early. Expert at mitigating scope creep and data-specific bottlenecks (e.g., latency, ingestion gaps) before they impact client outcomes.
- Technical Escalation Leadership: Demonstrated ability to lead "war room" efforts to resolve complex pipeline failures or data integrity issues. Skilled at providing clear, rapid remediation plans and communicating technical status directly to regional stakeholders.
2. Architectural Implementation & Technical Oversight
- Modern Stack Proficiency: Deep, hands-on expertise in implementing Cloud-Native architectures (Lakehouse, Data Mesh, MPP) on Snowflake, Databricks, or hyperscalers. Ability to conduct deep-dive architectural reviews and course-correct design decisions at the squad level to ensure scalability.
- Operationalizing Governance: Proven experience in embedding data quality and observability (completeness, freshness, accuracy) directly into the CI/CD pipeline. Responsible for technical enforcement of regulatory compliance (GDPR/PII) and maintaining the integrity of data catalogs across active projects.
- Applied Domain Expertise: Practical experience leading the delivery of high-growth solutions, specifically Generative AI infrastructure (RAG, Vector DBs), Real-Time Streaming, and large-scale platform migrations with a focus on zero-downtime execution.
- DataOps & Engineering Standards: Expert-level mastery of DataOps, including the setup and management of orchestration frameworks (Airflow, Dagster) and Infrastructure as Code (IaC). You ensure that automation is a baseline requirement, not an afterthought, for all delivery teams.
3. Unit Management & Commercial Execution
- Unit & Team Management: Proven success in leading and mentoring Engineering Managers and Lead Architects. Responsible for the operational metrics, technical output, and career development of the business unit's talent pool.
- Offerings Implementation & Scoping: Expertise in translating service offerings (e.g., Data Maturity Assessments, Lakehouse Builds) into accurate project scopes, technical estimates, and resource plans to ensure delivery is both profitable and competitive.
- Talent Growth & Mentorship: Functional ability to implement growth frameworks for data engineering roles. Focus on hands-on coaching and scaling high-performance technical talent to meet the demands of complex, petabyte-scale environments.
- Partner Enablement: Functional competence in managing regional technical relationships with major partners (Snowflake, Databricks, GCP/AWS). Drives squad-level certifications, joint technical enablement, and alignment with partner product roadmaps.
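The 'Operationalizing Governance' bullet above asks for completeness and freshness checks embedded directly into the CI/CD pipeline. As a purely illustrative sketch (the function names, row shapes, and thresholds are ours, not Searce's), such checks often reduce to a few lines that a pipeline step can assert on and fail the build when breached:

```python
from datetime import datetime, timedelta, timezone

def completeness(rows, required_fields):
    """Fraction of rows where every required field is present and non-null."""
    if not rows:
        return 0.0
    ok = sum(1 for r in rows if all(r.get(f) is not None for f in required_fields))
    return ok / len(rows)

def freshness_ok(latest_event_time, max_lag=timedelta(hours=1), now=None):
    """True if the newest record landed within the allowed lag window."""
    now = now or datetime.now(timezone.utc)
    return (now - latest_event_time) <= max_lag

# Hypothetical rows, for illustration only; a CI job would load a sample batch.
rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},  # incomplete row drags the score down
]
assert completeness(rows, ["order_id", "amount"]) == 0.5
assert freshness_ok(datetime.now(timezone.utc))
```

In practice these checks run inside an observability layer or a CI gate; the point is that quality thresholds become assertions, not dashboards someone remembers to look at.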
Tech Superpowers
- Modern Data Architect – Reimagines business with the Modern Data Stack (MDS) to deliver data mesh implementations, insights, & real value to clients.
- End-to-End Ecosystem Thinker – Builds modular, reusable data products across ingestion, transformation (ETL/ELT), governance, and consumption layers.
- Distributed Compute Savant – Crafts resilient, high-throughput architectures that survive petabyte-scale volume and data skew without breaking the bank.
- Governance & Integrity Guardian – Embeds data quality, complete lineage, and privacy-by-design (GDPR/PII) into every table, view, and pipeline.
- AI-Ready Orchestrator – Engineers pipelines that bridge structured data with Unstructured/Vector stores, powering RAG models and Generative AI workflows.
- Product-Minded Strategist – Balances architectural purity with time-to-insight; treats every dataset as a measurable "Data Product" with clear ROI.
- Pragmatic Stack Curator – Chooses the simplest tools that compound reliability; fluent in SQL, Python, Spark, dbt, and Cloud Warehouses.
- Builder @ Heart – Writes, reviews, and optimizes queries daily; proves architectures with cost-performance benchmarks, not slideware. Business-first, data-second, outcome focused technology leader.
Experience & Relevance
- Executive Experience: 10+ years of progressive experience in data engineering and analytics, with at least 3 years in a Senior Manager or Director-level role managing multiple technical teams and owning significant operational and efficiency metrics for a large data service line.
- Delivery Standardization: Demonstrated success in defining and implementing globally consistent, repeatable delivery methodologies (DataOps/Agile Data Warehousing) across diverse teams.
- Architectural Depth: Must retain deep, current expertise in Modern Data Stack architectures (Lakehouse, MPP, Mesh) and maintain the ability to personally validate high-level architectural and data pipeline design decisions.
- Operational Leadership: Proven expertise in managing and scaling large professional services organizations, with a demonstrated ability to optimize utilization, resource allocation, and operational expense.
- Domain Expertise: Strong background in Enterprise Data Platforms, Applied AI/ML, Generative AI integration, or large-scale Cloud Data Migration.
- Communication: Exceptional executive-level presentation and negotiation skills, particularly in communicating complex operational, data quality, and governance metrics to C-level stakeholders.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’. Don’t Just Send a Resume. Send a Statement.
Solutions Architect - Data Engineering
Provide modern tech solutions advisory and 'futurify' consulting as a Searce lead FDS ('forward deployed solver'), architecting scalable data platforms and robust data engineering solutions that power intelligent insights and fuel AI innovation.
If you’re a tech-savvy, consultative seller with the brain of a strategist, the heart of a builder, and the charisma of a storyteller — we’ve got a seat for you at the front of the table.
You're not a sales lead. You're the transformation driver.
What are we looking for
real solver?
Solver? Absolutely. But not the usual kind. We're searching for the architects of the audacious & the pioneers of the possible. If you're the type to dismantle assumptions, re-engineer ‘best practices,’ and build solutions that make the future possible NOW, then you're speaking our language.
- Improver. Solver. Futurist.
- Great sense of humor.
- ‘Possible. It is.’ Mindset.
- Compassionate collaborator. Bold experimenter. Tireless iterator.
- Natural creativity that doesn’t just challenge the norm, but solves to design what’s better.
- Thinks in systems. Solves at scale.
This Isn’t for Everyone. But if you’re the kind who questions why things are done a certain way—and then identifies 3 better ways to do it — we’d love to chat with you.
Your Responsibilities
what you will wake up to solve.
You are not just a Solutions Architect; you are a futurifier of our data universe and the primary enabler of our AI ambitions. With a deep-seated passion for data engineering, you will architect and build the foundational data infrastructure that powers the customers entire data intelligence ecosystem.
As the Directly Responsible Individual (DRI) for our enterprise-grade data platforms, you own the outcome, end-to-end. You are the definitive solver for our customer's most complex data challenges, leveraging a powerful tech stack including Snowflake, Databricks, etc. and core GCP & AWS services (BigQuery, Spanner, Airflow, Kafka). This is a hands-on-keys role where you won't just design solutions—you'll build them, break them, and perfect them.
- Solution Design & Pre-sales Excellence: Collaborate with cross-functional teams, including sales, engineering, and operations, to ensure successful project delivery.
- Design Core Data Engineering: Master data modeling, architecting high-performance data ingestion pipelines and ensuring data quality and governance throughout the data lifecycle.
- Enable Cloud & AI: Design and implement solutions utilizing core GCP data services, building foundational data platforms that efficiently support advanced analytics and AI/ML initiatives.
- Optimize Performance & Cost: Continuously optimize data architectures and implementations for performance, efficiency, and cost-effectiveness within the cloud environment.
- Bridge Business & Tech: Translate complex business requirements into clear technical designs, providing technical leadership and guidance to data engineering teams.
- Stay Ahead of the Curve: Continuously research and evaluate new data technologies, architectural patterns, and industry trends to keep our data platforms at the cutting edge.
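The ingestion and data-quality bullets above hinge on pipelines being idempotent and watermark-aware, so duplicate deliveries and late, out-of-order records never corrupt downstream state. A simplified, stdlib-only sketch of that contract (the record shapes and the late-data policy are our assumptions, not a prescribed design):

```python
def ingest(batch, seen_keys, watermark):
    """Deduplicating, watermark-aware ingestion step.

    - Skips records already seen, so re-running the step is idempotent.
    - Skips records older than the watermark (one possible late-data policy).
    Returns (accepted_records, new_watermark).
    """
    accepted = []
    for rec in sorted(batch, key=lambda r: r["event_time"]):
        if rec["id"] in seen_keys or rec["event_time"] < watermark:
            continue
        seen_keys.add(rec["id"])
        accepted.append(rec)
    new_watermark = max([watermark] + [r["event_time"] for r in accepted])
    return accepted, new_watermark

# Hypothetical batch: one duplicate delivery and one record behind the watermark.
batch = [
    {"id": "a", "event_time": 5},
    {"id": "a", "event_time": 5},  # duplicate delivery, dropped
    {"id": "b", "event_time": 3},  # older than watermark 4, dropped
]
accepted, wm = ingest(batch, set(), watermark=4)
assert [r["id"] for r in accepted] == ["a"] and wm == 5
```

Real platforms push the `seen_keys` set and watermark into durable state (a merge key in the warehouse, a streaming checkpoint), but the invariant a reviewer looks for is the same as in this toy version.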
Functional Skills:
- Enterprise Data Architecture Design: Expert ability to design holistic, scalable, and resilient data architectures for complex enterprise environments.
- Cloud Data Platform Strategy: Proven capability to strategize, design, and implement cloud-native data platforms.
- Pre-Sales & Technical Storyteller: Crafts compelling, client-ready proposals, architectural decks, and technical demonstrations. Doesn't just present; shapes the strategic technical narrative behind every proposed solution.
- Advanced Data Modelling: Mastery in designing various data models for analytical, operational, and transactional use cases.
- Data Ingestion & Pipeline Orchestration: Strong expertise in designing and optimizing robust data ingestion and transformation pipelines.
- Stakeholder Communication: Exceptional skills in articulating complex technical concepts and architectural decisions to both technical and non-technical stakeholders.
- Performance & Cost Optimization: Adept at optimizing data solutions for performance, efficiency, and cost within a cloud environment.
Tech Superpowers:
- Cloud Data Mastery: You're a wizard at leveraging public cloud data services, with deep expertise in GCP (BigQuery, Spanner, etc.) and expert proficiency in modern data warehouse solutions like Snowflake.
- Data Engineering Core: Highly skilled in designing, implementing, and managing data workflows using tools like Apache Airflow and Apache Kafka. You're also an authority on advanced data modeling and ETL/ELT patterns.
- AI/ML Data Foundation: You instinctively design data pipelines and structures that efficiently feed and empower Machine Learning and Artificial Intelligence applications.
- Programming for Data: You have a strong command over key programming languages (Python, SQL) for scripting, automation, and building data processing applications.
Experience & Relevance:
- Architectural Leadership (8+ Years): You bring 8+ years of experience specifically in a Solutions Architect role, focused on data engineering and platform building.
- Cloud Data Expertise: You have a proven track record of designing and implementing production-grade data solutions leveraging major public cloud platforms, with significant experience in Google Cloud Platform (GCP).
- Data Warehousing & Data Platform: Demonstrated hands-on experience in the end-to-end design, implementation, and optimization of modern data warehouses and comprehensive data platforms.
- Databricks & BigQuery Mastery: You possess significant practical experience with Databricks as a core data warehouse and GCP BigQuery for analytical workloads.
- Data Ingestion & Orchestration: Proven experience designing and implementing complex data ingestion pipelines and workflow orchestration using tools like Airflow and real-time streaming technologies like Kafka.
- AI/ML Data Enablement: Experience in building data foundations specifically geared towards supporting Machine Learning and Artificial Intelligence initiatives.
Join the ‘real solvers’
ready to futurify?
If you are excited by the possibilities of what an AI-native engineering-led, modern tech consultancy can do to futurify businesses, apply here and experience the ‘Art of the possible’.
Don’t Just Send a Resume. Send a Statement.
So, if you are passionate about tech, the future & what you read above (we really are!), apply here to experience the ‘Art of the possible’.
Job Title : AWS Data Engineer
Experience : 4+ Years
Location : Bengaluru (HSR – Hybrid, 3 Days WFO)
Notice Period : Immediate Joiner
💡 Role Overview :
We are looking for a skilled AWS Data Engineer to design, build, and scale modern data platforms. The role involves working with AWS-native services, Python, Spark, and DBT to deliver secure, scalable, and high-performance data solutions in an Agile environment.
🔥 Mandatory Skills :
Python, SQL, Spark, AWS (S3, Glue, EMR, Redshift, Athena, Lambda), DBT, ETL/ELT pipeline development, Airflow/Step Functions, Data Lake (Parquet/ORC/Iceberg), Terraform & CI/CD, Data Governance & Security
🚀 Key Responsibilities :
- Design, build, and optimize ETL/ELT pipelines using Python, DBT, and AWS services
- Develop and manage scalable data lakes on S3 using formats like Parquet, ORC, and Iceberg
- Build end-to-end data solutions using Glue, EMR, Lambda, Redshift, and Athena
- Implement data governance, security, and metadata management using Glue Data Catalog, Lake Formation, IAM, and KMS
- Orchestrate workflows using Airflow, Step Functions, or AWS-native tools
- Ensure reliability and automation via CloudWatch, CloudTrail, CodePipeline, and Terraform
- Collaborate with data analysts and data scientists to deliver actionable insights
- Work in an Agile environment to deliver high-quality data solutions
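The data lake bullets above lean on Hive-style `dt=` partitioning over S3, which is what lets engines like Glue and Athena prune scans down to the requested date range. As a purely illustrative sketch (the bucket and table names are hypothetical), the partition key for a daily Parquet drop can be built like this:

```python
from datetime import date

def partition_path(bucket: str, table: str, dt: date) -> str:
    """Hive-style partition key: query engines prune scans by the dt= folder."""
    return f"s3://{bucket}/{table}/dt={dt.isoformat()}/part-00000.parquet"

# Hypothetical bucket/table names, for illustration only.
path = partition_path("acme-data-lake", "orders", date(2024, 1, 15))
assert path == "s3://acme-data-lake/orders/dt=2024-01-15/part-00000.parquet"
```

Table formats like Iceberg manage this layout (and safe schema evolution) for you, but the underlying partition-pruning idea is the same.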
✅ Mandatory Skills :
- Strong Python (including AWS SDKs), SQL, Spark
- Hands-on experience with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
- Experience with DBT and ETL/ELT pipeline development
- Workflow orchestration using Airflow / Step Functions
- Knowledge of data lake formats (Parquet, ORC, Iceberg)
- Exposure to DevOps practices (Terraform, CI/CD)
- Strong understanding of data governance and security best practices
- 4–7 years in Data Engineering (3+ years on AWS)
➕ Good to Have :
- Understanding of Data Mesh architecture
- Experience with platforms like Data.World
- Exposure to Hadoop / HDFS ecosystems
🤝 What We’re Looking For :
- Strong problem-solving and analytical skills
- Ability to work in a collaborative, cross-functional environment
- Good communication and stakeholder management skills
- Self-driven and adaptable to fast-paced environments
📝 Interview Process :
- Online Assessment
- Technical Interview
- Fitment Round
- Client Round
Required Skills
- 8+ years of DevOps / Cloud Engineering experience
- Strong hands-on experience with AWS services (EC2, S3, RDS, IAM, VPC, etc.)
- Expertise in Kubernetes (deployment, scaling, cluster management)
- Strong experience in PostgreSQL and AWS RDS administration
- Proficiency in Terraform for infrastructure automation
- Experience building and maintaining CI/CD pipelines (Jenkins, GitLab CI, etc.)
- Strong knowledge of Java (mandatory) and application deployment lifecycle
- Experience with Docker and containerization
- Solid understanding of networking, security, and system architecture
- Strong troubleshooting and problem-solving skills

A leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage
Responsibilities:
- Lead architecture, technical decisions, and ensure code quality, scalability, and performance
- Develop backend systems using Python & SQL; build APIs and optimize databases
- Work with frontend (React/Angular) and API-driven architectures
- Integrate AI/ML models and support analytics/LLM-based solutions
- Manage cloud deployments (Azure/AWS) and implement CI/CD practices
- Ensure system reliability, monitoring, and production readiness
- Mentor team members, conduct reviews, and collaborate with cross-functional teams
Key Responsibilities:
- Lead and mentor a team of Java and Python developers, providing technical guidance and fostering a culture of continuous learning and improvement.
- Oversee the design, development, and implementation of high-performance, scalable, and secure software solutions for the financial services industry.
- Collaborate with product managers and architects to translate business requirements into technical specifications and ensure alignment with overall product strategy.
- Drive the adoption of best practices in software development, including code reviews, testing, and continuous integration/continuous deployment (CI/CD).
- Manage project timelines and resources effectively, ensuring on-time and within-budget delivery of projects.
- Identify and mitigate technical risks, proactively addressing potential issues and ensuring the stability and reliability of our platforms.
- Stay abreast of emerging technologies and trends in Java, Python, and related fields, and evaluate their potential application to our products and services.
- Contribute to the development of technical documentation and training materials.
Required Skillset:
- Demonstrated expertise in Java and Python development, with a strong understanding of object-oriented principles, design patterns, and data structures.
- Proven ability to lead and mentor a team of software engineers, fostering a collaborative and high-performing environment.
- Experience in designing and developing scalable, high-performance, and secure software solutions.
- Strong understanding of software development methodologies, including Agile and Waterfall.
- Excellent communication, interpersonal, and problem-solving skills.
- Ability to work effectively in a fast-paced, dynamic environment.
- Bachelor's or Master's degree in Computer Science or a related field.
- Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
Brikito — Lead Full-Stack Developer
Job Description
About Brikito
Brikito is an early-stage PropTech startup building a construction management platform for SME developers and contractors. The founder has 7+ years of hands-on construction experience and an MBA from Warwick Business School. We have initial funding, a domain (brikito.com), wireframes ready, and active customer validation underway. We need our first technical leader to take this from wireframes to a live product.
This is a ground-floor opportunity. You will be the first technical hire — the person who makes every architecture decision and writes the first line of production code.
The Role
Title: CTO / Lead Full-Stack Developer (title depends on experience and equity arrangement)
Location: India (remote OK, occasional visits to Chennai office and overseas office planning to set up in Singapore or Dubai)
Type: Full-time
Compensation: ₹1,00,000–₹2,50,000/month + meaningful equity (0.5%–5% depending on role level, vesting over 4 years with a cliff.)
Start Date: May 2026
Reports to: Founder/CEO
What You Will Do
Months 1–3: Build the MVP
- Own all technical decisions — architecture, tech stack, database design, hosting
- Build and ship a working MVP with 3 core features: project dashboard, billing/invoicing, and indent/procurement management
- Set up CI/CD pipeline, staging, and production environments
- Integrate payment gateway (Razorpay for India)
- Build both web and mobile-responsive interfaces
- Ship the MVP within 12 weeks
Months 3–6: Iterate and Scale
- Onboard beta users and fix bugs based on real usage
- Build features based on customer feedback (not assumptions)
- Integrate AI capabilities where they add clear user value (e.g., auto-generated progress reports)
- Hire and manage 1–2 junior developers as the team grows
- Set up monitoring, error tracking, and basic analytics
Months 6–12: Lead the Technical Team
- Grow the engineering team to 4–6 people
- Establish code review processes, documentation standards, and sprint rhythms
- Own the technical roadmap alongside the founder
- Participate in investor conversations as the technical co-founder (if CTO-level)
- Make build-vs-buy decisions for new features
Required Skills
Must Have
- 7+ years of professional software development experience
- Strong proficiency in React or Next.js (frontend)
- Strong proficiency in Node.js (backend) — Express, Nest.js, or similar
- PostgreSQL or MySQL — database design, query optimisation, migrations
- REST API design — clean, well-documented APIs
- Cloud deployment — AWS (EC2, RDS, S3) or GCP equivalent
- Expertise in AI tools and integrations - Anthropic, OpenAI, Perplexity, etc.
- Git — clean branching, PR-based workflow
- Has shipped at least one product that real users used — not just academic or internal tools
- Comfortable working independently — no one will tell you what to do step by step
Strongly Preferred
- Previous experience at a startup (Series A or earlier)
- Experience building SaaS or B2B products
- Experience with mobile development (React Native or Flutter)
- Experience integrating payment gateways (Razorpay, Stripe)
- Experience with third-party API integrations (OpenAI, Twilio, etc.)
- Understanding of CI/CD pipelines (GitHub Actions, Docker)
- Basic understanding of construction, real estate, or field operations (not required, but a plus)
Nice to Have
- Experience with TypeScript
- Experience with real-time features (WebSockets, push notifications)
- Familiarity with Figma (to translate wireframes into UI)
- Experience hiring and mentoring junior developers
- Open source contributions or a personal project portfolio
What We Are NOT Looking For
- Someone who needs detailed specifications for every task — we move fast and figure things out together
- Someone who only wants to code and not think about the product — you will be in customer calls and strategy discussions
- Someone who optimises for perfect code over shipping — we ship first, refactor later
- Someone looking for a stable corporate job — this is a startup with all the chaos and excitement that comes with it
What You Get
- Equity ownership in an early-stage company with a large addressable market ($14.9B global construction SaaS)
- Founding team credit — you will be recognised as a technical co-founder if you take the CTO role
- Direct impact — every line of code you write will be used by real customers within weeks
- Technical freedom — you choose the stack, the tools, the architecture
- A founder who understands the domain — you will never have to guess what contractors need because the CEO has built construction projects himself
- Growth path — as we raise funding and scale, you grow into VP Engineering or CTO of a funded company
How to Apply
Send the following:
- A short note (5–10 lines) on why this role interests you and what you'd bring
- Your LinkedIn profile or resume
- One link to something you've built — a live product, a GitHub repo, an app, anything that shows your work
- Your availability — when can you start?
We will respond within 48 hours. The process is:
- 30-minute video call with the founder
- Small paid technical task (8 hours of work, ₹5,000 paid regardless of outcome)
- Final conversation about role, equity, and start date
- Offer within 1 week of first call
Questions?
DM the founder on LinkedIn: https://www.linkedin.com/in/aashiqahamed/
This is not a job posting from HR. This is a founder looking for his first technical partner. If this excites you, reach out.
WHAT YOU'LL WORK ON
- Build and scale backend services using Node.js & Express
- Architect and optimize MongoDB schemas for performance
- Contribute to frontend features with Next.js & React
- Debug production issues, optimize API latency & CI/CD pipelines
- Integrate MathJax (LaTeX rendering) & VdoCipher (secure video)
WHAT WE'RE LOOKING FOR
- Strong DSA fundamentals — logical thinking over competitive coding
- Deep JavaScript/TypeScript knowledge: Closures, Promises, Event Loop
- 1–2 original projects (no To-Do apps or tutorial clones)
- Ability to independently pick up Docker, Redis, or AWS
- Ownership mindset — ensure it works in production, not just locally
BONUS POINTS
- Docker / containerization basics
- Real-world AWS experimentation (EC2, S3, Lambda)
- Active GitHub profile: open-source contributions or unique projects
AI Usage Policy:
We encourage AI tools (Cursor, Copilot, GPT-4) as force multipliers — but you must own your code, explain trade-offs, and debug without solely relying on AI.
HOW TO APPLY
- Share your GitHub profile link
- Include live demo links to your best, most original projects
We value what you've built far more than what's on your resume.
WHAT YOU'LL WORK ON
- Design and implement scalable APIs and microservices using Node.js & Express
- Manage deployments via GitHub Actions and CodeDeploy; work with Docker & AWS
- Optimize MongoDB queries and use Redis caching for high-concurrency traffic
- Bridge Figma designs to backend logic using Next.js and Tailwind CSS
- Maintain monitoring with Nginx & PM2 to ensure 99.9% uptime
WHAT WE'RE LOOKING FOR
- 1+ year of professional experience building and maintaining production applications
- Deep Node.js knowledge: async programming, RESTful API architecture
- MongoDB mastery: schema design, indexing strategies, complex aggregation pipelines
- Hands-on AWS (EC2/S3 minimum) and practical CI/CD pipeline experience
- Proven ability to take a feature from PRD / Figma to stable production deployment
WHAT WILL MAKE YOU STAND OUT
- Experience maintaining apps with high concurrent user counts
- Comfortable with Nginx configs and Dockerfiles
- Hands-on with payment gateway integration (Cashfree) and webhook handling
- Obsession with maintainable, well-documented, DRY code
AI Usage Policy:
AI tools (Cursor, Copilot, GPT-4) are force multipliers — use them. But you must own your code, reason through architectural trade-offs, and debug without relying solely on AI.
HOW TO APPLY:
- Tell us about the most complex bug you've solved or a backend system you built from scratch
- Share your GitHub profile
- Include at least two live project links showcasing your best work
Your code will directly impact the learning outcomes of thousands of students.
Experience: 1–3 Years
Qualification: B.Tech (Computer Science / IT or related field)
Shift Timing: 5:00 PM – 2:00 AM (Late Evening Shift)
Location: Hyderabad
Job Summary
We are seeking a proactive and detail-oriented Application Support Engineer with 1–3 years of experience in Linux/Windows environments, application servers, and monitoring tools. The candidate will be responsible for ensuring the stability, performance, and availability of applications, along with providing L2/L3 support in a fast-paced production environment.
Key Responsibilities :
- Provide application support and incident management for production systems.
- Monitor system performance using hardware/software monitoring and trending tools.
- Troubleshoot issues in Linux and Windows environments.
- Manage and support Apache and Tomcat servers.
- Analyze logs and debug application/system issues.
- Work on SQL/Oracle databases for query execution, troubleshooting, and performance tuning.
- Handle deployments and support CI/CD pipelines using tools like Docker and Jenkins.
- Ensure SLA adherence and timely resolution of incidents and service requests.
- Coordinate with development, infrastructure, and database teams for issue resolution.
- Maintain documentation for incidents, processes, and knowledge base articles.
- Support SaaS applications hosted in data center environments.
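Several of the responsibilities above (log analysis, monitoring, SLA adherence) come down to spotting error spikes in application logs quickly. A minimal stdlib sketch of the idea, using a made-up log format for illustration (real Apache/Tomcat log formats differ):

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> <LEVEL> <message>" (illustrative only).
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<msg>.*)$")

def summarize(lines):
    """Count log lines per severity level; a sudden ERROR jump flags an incident."""
    levels = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            levels[m.group("level")] += 1
    return levels

sample = [
    "2024-05-01 10:00:01 INFO request served",
    "2024-05-01 10:00:02 ERROR db timeout",
    "2024-05-01 10:00:03 ERROR db timeout",
]
assert summarize(sample)["ERROR"] == 2
```

Monitoring tools automate exactly this kind of aggregation and alert when the ERROR count crosses a threshold; being able to do it by hand with `grep`/`awk` or a short script is what makes L2/L3 triage fast.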
Required Skills :
Strong knowledge of Linux and Windows OS administration
Experience with Apache and Tomcat servers
Hands-on experience with monitoring and alerting tools
Good understanding of log analysis and troubleshooting techniques
Working knowledge of SQL / Oracle databases
Familiarity with Docker and Jenkins (CI/CD pipelines)
Understanding of ITIL processes (Incident, Problem, Change Management)
Knowledge of SaaS applications and data center operations.
Preferred Skills :
Experience with automation/scripting (Shell, Python, etc.)
Exposure to cloud platforms (AWS/Azure/GCP) is a plus
Basic networking knowledge
Soft Skills :
Strong analytical and problem-solving abilities
Good communication skills
Ability to work in night shifts and handle production support
Team player with a proactive attitude
About the Role
Qiro is building the infrastructure powering the next generation of underwriting, credit analytics, and tokenized private credit markets.
We are looking for a Tech Lead — Credit & Blockchain Infrastructure to lead the architecture and execution of our core systems — spanning underwriting engines, credit lifecycle workflows, and blockchain-integrated capital markets infrastructure.
This is not a feature delivery role. This is a system ownership role.
You will be hands-on while leading a growing engineering team in a fast-moving, in-office environment.
What You’ll Own
- Define and evolve the long-term technical vision for Qiro’s programmable credit infrastructure — architecting cohesive systems that unify underwriting engines, credit lifecycle workflows, and tokenized capital markets.
- Own the end-to-end architecture of scalable backend platforms (Python and/or TypeScript), establishing clear boundaries between risk logic, platform APIs, and smart contract integrations while ensuring scalability, auditability, and extensibility.
- Build and standardize configurable underwriting and credit lifecycle systems — from onboarding and drawdown orchestration to repayment waterfalls and early closures — ensuring deterministic, traceable financial state transitions at institutional scale.
- Set integration and infrastructure standards across API contracts, data models, validation layers, and event-driven architectures, enabling reliable synchronization between off-chain services and on-chain contracts.
- Architect secure and resilient blockchain integrations, including wallet interactions, capital flow coordination, and observable on-chain/off-chain state reconciliation.
- Lead high-impact, cross-product initiatives from RFC and system design through production launch — validating architectural decisions, aligning stakeholders, and delivering measurable improvements in reliability, performance, and developer velocity.
- Elevate reliability and operational excellence by defining SLOs, strengthening CI/CD and observability practices, reducing latency, and minimizing systemic risk in financial workflows.
- Build and scale the engineering organization — mentoring senior engineers, shaping hiring standards, driving architecture reviews, and fostering a culture of ownership, craftsmanship, and first-principles thinking.
- Partner closely with Product, Design, Security, and Operations to translate complex lending and capital market mechanics into simple, robust platform primitives.
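The "deterministic, traceable financial state transitions" above can be sketched with a toy repayment waterfall. The tier names and priority order here are assumptions for illustration, not Qiro's actual model; `Decimal` keeps the allocation exact and auditable.

```python
from decimal import Decimal

# Hypothetical priority order: fees first, then interest, then principal.
WATERFALL = ("fees", "interest", "principal")

def allocate_payment(payment: Decimal, due: dict) -> dict:
    """Allocate a repayment across tiers in strict priority order.

    `due` maps tier name -> amount outstanding. Returns the per-tier
    allocation; any surplus is recorded under 'excess'. The same inputs
    always produce the same allocation, so the transition is deterministic.
    """
    remaining = payment
    allocation = {}
    for tier in WATERFALL:
        paid = min(remaining, due.get(tier, Decimal("0")))
        allocation[tier] = paid
        remaining -= paid
    allocation["excess"] = remaining
    return allocation

if __name__ == "__main__":
    due = {"fees": Decimal("50"), "interest": Decimal("200"),
           "principal": Decimal("1000")}
    print(allocate_payment(Decimal("400"), due))
```

A production system would also emit an event per tier so the on-chain and off-chain ledgers can be reconciled.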
Who You Are
- 6-8+ years of engineering experience, with 3+ years in technical leadership roles.
- Strong backend architecture experience in Python and/or TypeScript.
- Comfortable designing distributed systems and financial workflows.
- Experience building fintech, lending, underwriting, trading, or blockchain-integrated systems.
- Strong understanding of API design, state management, and data modeling.
- Able to navigate ambiguity and build 0→1 infrastructure.
- Hands-on builder who leads by writing production-grade code.
We Value
- Experience with underwriting engines or policy-driven decision systems.
- Exposure to smart contracts and blockchain integrations.
- Familiarity with PostgreSQL and event-driven architectures.
- Experience in early-stage or high-growth startups.
- Strong product thinking and ability to translate complex financial logic into scalable systems.
Why Join Qiro
- Lead the architecture of a programmable credit infrastructure platform.
- Join the founding technical leadership team.
- High autonomy and ownership — your decisions shape the company.
- In-office collaboration in Bangalore for speed and iteration.
- Competitive compensation and meaningful equity.
Our Culture
We operate with:
- First-principles thinking
- Technical craftsmanship
- High ownership
- Fast execution with long-term architectural discipline
Key Responsibilities:
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
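The pipeline shape described above (preprocessing, feature scaling, training, evaluation) can be sketched with a stdlib-only toy: min-max scaling feeding a nearest-centroid classifier. This is purely illustrative; a real pipeline would use scikit-learn or similar, and the data here is made up.

```python
from statistics import mean

def min_max_fit(rows):
    """Learn per-feature (min, max) from training rows (preprocessing fit)."""
    return [(min(c), max(c)) for c in zip(*rows)]

def min_max_transform(rows, params):
    """Scale each feature into [0, 1] using the fitted (min, max) pairs."""
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, (lo, hi) in zip(row, params)] for row in rows]

def fit_centroids(rows, labels):
    """'Train': compute one centroid per class (nearest-centroid model)."""
    by_label = {}
    for row, y in zip(rows, labels):
        by_label.setdefault(y, []).append(row)
    return {y: [mean(c) for c in zip(*rs)] for y, rs in by_label.items()}

def predict(row, centroids):
    """Assign the class whose centroid is nearest (squared Euclidean)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2
                                 for a, b in zip(row, centroids[y])))

if __name__ == "__main__":
    X = [[1.0, 10.0], [2.0, 12.0], [9.0, 90.0], [10.0, 95.0]]
    y = ["low", "low", "high", "high"]
    params = min_max_fit(X)          # fit on training data only
    Xs = min_max_transform(X, params)
    model = fit_centroids(Xs, y)
    print([predict(r, model) for r in Xs])
```

The key habit the sketch shows is fitting the scaler on training data and reusing the same parameters at inference time, which carries over directly to production deployments.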
Required Skills:
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience building, fine-tuning, and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience with LLM fine-tuning, Generative AI, or Voice AI is an added advantage.
Educational Qualification:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or IT; candidates from IIT/NIT colleges are strongly preferred
Key Skills:
• Hands-on experience with AWS services such as EC2, S3, Lambda, API Gateway, RDS, or DynamoDB ☁️
• Basic understanding of AI/ML concepts and experience with Python-based ML libraries (NumPy, Pandas, Scikit-learn, etc.) 🤖
• Experience in Python / Node.js / Java for backend development 💻
• Understanding of REST APIs and microservices architecture
• Familiarity with Git, CI/CD pipelines, and DevOps fundamentals
• Knowledge of Docker / containerization (preferred) 🐳
• Basic understanding of cloud security, IAM roles, and policies 🔐
• Experience in using AI tools (e.g., ChatGPT, GitHub Copilot, or similar tools) for development, debugging, documentation, and productivity in day-to-day tasks ⚡
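The IAM roles and policies point above can be illustrated by constructing a hypothetical least-privilege, read-only S3 bucket policy in Python. The bucket name is made up; the `Version`, action names, and ARN format follow standard AWS IAM policy syntax.

```python
import json

def s3_read_policy(bucket: str) -> dict:
    """Build a hypothetical least-privilege IAM policy granting
    read-only access to a single S3 bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # bucket-level (ListBucket)
                f"arn:aws:s3:::{bucket}/*",    # object-level (GetObject)
            ],
        }],
    }

if __name__ == "__main__":
    print(json.dumps(s3_read_policy("example-app-logs"), indent=2))
```

Note the split between the bucket ARN and the `/*` object ARN: `s3:ListBucket` applies to the former and `s3:GetObject` to the latter, a common source of AccessDenied errors when conflated.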
Roles & Responsibilities:
• Develop and maintain cloud-based applications on AWS ☁️
• Build and integrate APIs and backend services
• Assist in deploying, monitoring, and managing applications on AWS infrastructure
• Work with the team to integrate AI/ML models or AI-powered services into applications 🤖
• Utilize AI tools for coding assistance, debugging, automation, and improving development efficiency
• Optimize applications for performance, scalability, and reliability
• Collaborate with cross-functional teams for design, development, and deployment
• Troubleshoot and resolve cloud or application-related issues
AWS Certification is mandatory
Education Qualification:
B.Tech/M.Tech in CSE/IT/AI/ML/ECE
Key Requirements / Skills
- 6+ years of overall experience in software development with strong expertise in building scalable web applications.
- 2+ years of experience as a Technical Lead, managing development teams and driving project delivery.
- Strong technical decision-making ability, including architecture design, technology selection, and implementation of best practices.
- Front-end expertise: Strong experience in React, JavaScript, TypeScript, and building responsive and user-friendly UI/UX.
- Back-end development: Hands-on experience with Node.js, RESTful APIs, API design, and server-side architecture.
- AI/ML knowledge: Experience in implementing AI/ML models or integrating AI-based solutions to solve business problems.
- Cloud & DevOps exposure: Experience with AWS/Azure, understanding of CI/CD pipelines, and cloud-based deployments.
- Code quality & best practices: Experience in code reviews, Git version control, and ensuring maintainable and secure code.
- Team leadership: Ability to mentor developers, guide technical discussions, and collaborate across teams.
- Strong communication skills to effectively interact with technical and non-technical stakeholders.
- Experience working in high-compliance environments such as healthcare systems is a plus.
Education Qualifications:
- B.Tech/M.Tech in CSE/IT/AI/ML from a good university