50+ AWS (Amazon Web Services) Jobs in India
Apply to 50+ AWS (Amazon Web Services) Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS (Amazon Web Services) Jobs and apply today!
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of the latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines.
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
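To make the agent-orchestration expectation above concrete, here is a minimal, framework-agnostic sketch of a tool-calling loop. The model is stubbed out (`fake_llm`), and all names are illustrative rather than any specific framework's API:

```python
# Minimal sketch of an agent tool-calling loop with a stubbed model.
# fake_llm, TOOLS, and get_weather are illustrative names, not a real API.

def get_weather(city: str) -> str:
    """Toy tool: returns a canned weather string."""
    return f"22C and clear in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(messages):
    """Stand-in for a real LLM call: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Delhi"}}
    tool_result = [m for m in messages if m["role"] == "tool"][-1]["content"]
    return {"final": f"Weather report: {tool_result}"}

def run_agent(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    for _ in range(5):  # cap iterations so a confused agent cannot loop forever
        reply = fake_llm(messages)
        if "final" in reply:
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

answer = run_agent("What's the weather in Delhi?")
```

Frameworks such as LangChain or LangGraph wrap this loop with structured tool schemas, retries, and state management, but the underlying control flow is essentially the same.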
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)
Experience : 5 to 10 Years
Location : Bengaluru, India
Employment Type : Full-Time | Onsite
Role Overview :
We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.
In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.
Mandatory Skills :
Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).
Key Responsibilities :
- Architect, design, and develop scalable full-stack applications for data and AI-driven products.
- Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
- Deploy, integrate, and scale ML/AI models in production environments.
- Drive system design, architecture discussions, and API/interface standards.
- Ensure engineering best practices across code quality, testing, performance, and security.
- Mentor and guide junior developers through reviews and technical decision-making.
- Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
- Monitor, diagnose, and optimize performance issues across the application stack.
- Maintain comprehensive technical documentation for scalability and knowledge-sharing.
Required Skills & Experience :
- Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
- Experience : 5+ years in software development, with at least 2 years in a senior or lead role.
- Full Stack Proficiency :
- Front-end : React / Angular / Vue.js
- Back-end : Node.js / Python / Java
- Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
- AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
- Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
- Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
- Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).
Soft Skills :
- Excellent communication and cross-functional collaboration skills.
- Strong analytical mindset with structured problem-solving ability.
- Self-driven with ownership mentality and adaptability in fast-paced environments.
Preferred Qualifications (Bonus) :
- Experience deploying distributed, large-scale ML or data-driven platforms.
- Understanding of data governance, privacy, and security compliance.
- Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
- Experience working in Agile environments (Scrum/Kanban).
- Active open-source contributions or a strong GitHub technical portfolio.
Roles & Responsibilities
- Data Engineering Excellence: Design and implement data pipelines using formats like JSON, Parquet, CSV, and ORC, utilizing batch and streaming ingestion.
- Cloud Data Migration Leadership: Lead cloud migration projects, developing scalable Spark pipelines.
- Medallion Architecture: Implement Bronze, Silver, and Gold tables for scalable data systems.
- Spark Code Optimization: Optimize Spark code to ensure efficient cloud migration.
- Data Modeling: Develop and maintain data models with strong governance practices.
- Data Cataloging & Quality: Implement cataloging strategies with Unity Catalog to maintain high-quality data.
- Delta Live Table Leadership: Lead the design and implementation of Delta Live Tables (DLT) pipelines for reliable, governed data management.
- Customer Collaboration: Collaborate with clients to optimize cloud migrations and ensure best practices in design and governance.
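As an illustration of the Medallion flow described above, here is a sketch in plain Python; in a real Databricks project each stage would be a Spark job writing Delta (Bronze/Silver/Gold) tables, and the data and function names here are purely illustrative:

```python
# Medallion-style pipeline sketched with plain Python lists; in practice
# each stage would be a Spark/Delta table on Databricks.

raw_events = [  # Bronze: raw ingested records, kept as-is
    {"order_id": "1", "amount": "250.0", "country": "IN"},
    {"order_id": "2", "amount": "bad",   "country": "IN"},
    {"order_id": "3", "amount": "100.0", "country": "US"},
]

def to_silver(bronze):
    """Silver: cleaned, typed records; malformed rows are dropped."""
    silver = []
    for row in bronze:
        try:
            silver.append({"order_id": int(row["order_id"]),
                           "amount": float(row["amount"]),
                           "country": row["country"]})
        except ValueError:
            continue  # quarantine/drop rows that fail type conversion
    return silver

def to_gold(silver):
    """Gold: business-level aggregate, revenue per country."""
    gold = {}
    for row in silver:
        gold[row["country"]] = gold.get(row["country"], 0.0) + row["amount"]
    return gold

gold = to_gold(to_silver(raw_events))
```

The key property is that each layer is derived from the previous one, so malformed Bronze rows can be dropped at the Silver stage without ever losing the raw record.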
Qualifications
- Experience: Minimum 5 years of hands-on experience in data engineering, with a proven track record in complex pipeline development and cloud-based data migration projects.
- Education: Bachelor’s or higher degree in Computer Science, Data Engineering, or a related field.
Skills (Must-Have)
- Proficiency in Spark, SQL, Python, and other relevant data processing technologies.
- Strong knowledge of Databricks and its components, including Delta Live Table (DLT) pipeline implementations.
- Expertise in on-premises-to-cloud Spark code optimization and Medallion Architecture.
Good to Have
- Familiarity with AWS services (experience with additional cloud platforms like GCP or Azure is a plus).
Soft Skills
- Excellent communication and collaboration skills, with the ability to work effectively with clients and internal teams.
Certifications
- AWS/GCP/Azure Data Engineer Certification.
Job Title: Technical Lead (Java/Spring Boot/Cloud)
Location: Bangalore
Experience: 8 to 12 Years
Overview
We are seeking a highly accomplished and charismatic Technical Lead to drive the design, development, and delivery of high-volume, scalable, and secure enterprise applications. The ideal candidate will possess deep expertise in the Java ecosystem, particularly with Spring Boot and Microservices Architecture, coupled with significant experience in Cloud Solutions (AWS/Azure) and DevOps practices. This role requires a proven leader capable of setting "big picture" strategy while mentoring a high-performing team.
Key Responsibilities
Architecture Design
- Lead the architecture and design of complex, scalable, and secure cloud-native applications using Java/J2EE and the Spring Boot Framework.
- Design and implement Microservices Architecture and RESTful/SOAP APIs.
- Spearhead Cloud Solution Architecture, including the design and optimization of cloud-based infrastructure deployments with auto-scaling, fault tolerance, and reliability (AWS/Azure).
- Guide teams on applying Architecture Concepts, Architectural Styles, and Design Patterns (e.g., UML, Object-Oriented Analysis and Design).
- Architect complex migrations of enterprise applications to the cloud.
- Conduct Proof-of-Concepts (PoCs) for new technologies such as Blockchain (Hyperledger) for solutions like Identity Management.
Technical Leadership & Development
- Lead the entire software development process from conception to completion within an Agile/Waterfall and Cleanroom Engineering environment.
- Define and enforce best practices and coding standards for Java development, ensuring code quality, security, and performance optimization.
- Implement and manage CI/CD Pipelines & DevOps Practices to automate software delivery.
- Oversee cloud migration and transformation programs for enterprise applications, focusing on reducing infrastructure costs and improving scalability.
- Troubleshoot and resolve complex technical issues related to the Java/Spring Boot stack, databases (SQL Server, Oracle, MySQL, PostgreSQL, Elasticsearch, Redis), and cloud components.
- Ensure the adoption of Test-Driven Development (TDD), unit testing, and mock-based testing practices.
People & Delivery Management
- Act as a Charismatic people leader and Transformative Force, building and mentoring high-performing teams from the ground up.
- Drive Delivery Management, collaborating with stakeholders to align technical solutions with business objectives and managing large-scale programs from initiation to delivery.
- Utilize Excellent Communication & Presentation Skills to articulate technical strategies to both technical and non-technical stakeholders.
- Champion organizational change, driving adoption of new processes, ways of working, and technology platforms.
Required Technical Skills
- Languages & Frameworks: Java (JDK 1.5+), Spring Core Framework, Spring Batch, Java Server Pages (JSP), Servlets, Apache Struts, JSON, Hibernate.
- Cloud: Extensive experience with Amazon Web Services (AWS) (Solution Architect certification preferred) and familiarity with Azure.
- DevOps/Containerization: CI/CD Pipelines, Docker.
- Databases: Strong proficiency in MS SQL Server, Oracle, MySQL, PostgreSQL, and NoSQL/caching technologies (Elasticsearch, Redis).
Education and Certifications
- Master's or Bachelor's degree in a relevant field.
- Certified Amazon Web Services Solution Architect (or equivalent).
- Experience or certification in leadership is a plus.
Technical Lead – Golang | AWS | Database Design
Work Model: Hybrid (Mandatory Work From Office for the first 1 month in Chennai, followed by remote work)
Location: Chennai, India
Experience: 8–12 Years
Budget: 1L ~ 1.2L Monthly
Role Summary
We are seeking an experienced Technical Lead with strong expertise in Golang, AWS, and Database Design to spearhead backend development initiatives, drive architectural decisions, and mentor engineering teams. The ideal candidate will combine hands-on technical skills with leadership capabilities to deliver scalable, secure, and high-performance solutions.
Key Responsibilities
Backend Development Leadership:
Lead the design and development of backend systems using Golang and microservices architecture.
Ensure scalability, reliability, and maintainability of backend services.
Database Design & Optimization:
Own database schema modeling, normalization, and performance tuning.
Work with MySQL, PostgreSQL, and NoSQL databases to design efficient data storage solutions.
Implement strategies for query optimization and high availability.
Cloud Infrastructure Management:
Architect and manage scalable solutions on AWS cloud services including EC2, ECS/EKS, Lambda, RDS, DynamoDB, and S3.
Ensure cost optimization, security compliance, and disaster recovery planning.
Technical Governance & Mentorship:
Review code, enforce best practices, and maintain coding standards.
Mentor and guide developers, fostering a culture of continuous learning and innovation.
Collaboration & Delivery:
Partner with product managers, architects, and stakeholders to align technical solutions with business goals.
Drive end-to-end delivery of projects with a focus on quality and timelines.
Production Support & Optimization:
Troubleshoot and resolve production issues.
Continuously monitor system performance and implement improvements.
Required Skills & Qualifications
Technical Expertise:
Strong hands-on experience with Golang in production-grade applications.
Solid knowledge of Database Design (MySQL, PostgreSQL, NoSQL).
Proficiency in AWS services (EC2, ECS/EKS, Lambda, RDS, DynamoDB, S3).
Strong understanding of microservices and distributed systems.
DevOps & Tools:
Experience with Docker, Kubernetes, and container orchestration.
Familiarity with CI/CD pipelines using tools like Jenkins, Maven, or GitHub Actions.
Soft Skills:
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities.
Ability to mentor and inspire engineering teams.
We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.
Responsibilities:
* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.
* Deploy and orchestrate containerized applications using Kubernetes.
* Implement and maintain infrastructure as code (IaC) using Terraform.
* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.
* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.
* Collaborate with cross-functional teams to define technical requirements and deliver solutions.
* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.
Requirements:
* 2+ years of experience with AWS, including practical exposure to its services in production environments.
* Demonstrated expertise in Kubernetes for container orchestration.
* Proficiency in using Terraform for managing infrastructure as code.
* Exposure to at least one CI/CD tool, such as Jenkins or Chef.
* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
Backend Engineer (Python / Django + DevOps)
Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)
About SurgePV
SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.
Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.
As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.
Role Overview
We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.
This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.
Key Responsibilities
- Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows.
- Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
- Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
- Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
- Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
- Implement caching strategies and performance optimizations where required.
- Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
- Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
Required Skills & Qualifications (Must-Have)
- 2–5 years of experience as a Backend Engineer.
- Strong proficiency in Python and Django / Django REST Framework.
- Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
- Proven experience designing and maintaining REST APIs in production environments.
- Hands-on DevOps experience, including:
- Docker and containerized services
- CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
- Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
- Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
- Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
- Ownership mindset with the ability to take systems from spec → implementation → production → iteration.
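To illustrate the indexing and query-plan work this role calls for, here is a self-contained sketch using SQLite as a stand-in for PostgreSQL (the table and index names are made up); on Postgres you would inspect the same thing with `EXPLAIN (ANALYZE)`:

```python
# Indexing sketch: SQLite stands in for PostgreSQL so the example is
# self-contained; the planner concept (seek via index vs. full scan)
# carries over directly.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE projects(id INTEGER PRIMARY KEY, owner_email TEXT)")
con.executemany("INSERT INTO projects(owner_email) VALUES (?)",
                [(f"user{i}@example.com",) for i in range(1000)])

# Without an index this lookup scans every row; with one it seeks directly.
con.execute("CREATE INDEX idx_projects_owner ON projects(owner_email)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM projects WHERE owner_email = ?",
    ("user42@example.com",),
).fetchall()
plan_text = " ".join(str(row) for row in plan)  # plan mentions the index used
```

On large production tables the same discipline applies: check the plan before and after adding an index, and confirm the query actually uses it.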
Good-to-Have Skills
- Experience working in early-stage startups or building 0→1 products.
- Familiarity with Kubernetes or other container orchestration tools.
- Experience with Infrastructure as Code (Terraform, Pulumi).
- Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
- Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.
What We Offer
- Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
- Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
- A mission-driven, fast-growing product focused on sustainability and clean energy.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
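As a small example of the operational Python scripting mentioned above, here is a hedged sketch of a retry-with-backoff helper; the names, delays, and the simulated health check are illustrative only:

```python
# Sketch of a retry-with-backoff helper, the kind of small utility that
# shows up in operational automation scripts.
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn until it succeeds or attempts are exhausted,
    doubling the delay between tries (exponential backoff)."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the error
            time.sleep(delay)
            delay *= 2

calls = {"n": 0}
def flaky_health_check():
    """Fails twice, then succeeds, simulating a slow-to-recover service."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

status = retry(flaky_health_check)
```

In a real setup the same wrapper would guard calls to cloud APIs or health endpoints, with longer delays and jitter to avoid thundering-herd retries.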
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
We are seeking a highly skilled software developer with proven experience in developing and scaling education ERP solutions. The ideal candidate should have strong expertise in Node.js or PHP (Laravel), MySQL, and MongoDB, along with hands-on experience in implementing ERP modules such as HR, Exams, Inventory, Learning Management System (LMS), Admissions, Fee Management, and Finance.
Key Responsibilities
Design, develop, and maintain scalable Education ERP modules.
Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fees, and finance.
Build and optimize REST APIs/GraphQL services and ensure seamless integrations.
Optimize system performance, scalability, and security for high-volume ERP usage.
Conduct code reviews, enforce coding standards, and mentor junior developers.
Stay updated with emerging technologies and recommend improvements for ERP solutions.
Required Skills & Qualifications
Strong expertise in Node.js and PHP (Laravel, Core PHP).
Proficiency with MySQL, MongoDB, and PostgreSQL (database design & optimization).
Frontend knowledge: JavaScript, jQuery, HTML, CSS (React/Vue preferred).
Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
Hands-on with Git/GitHub, Docker, and CI/CD pipelines.
Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
4+ years of professional development experience, with a minimum of 2 years in ERP systems.
Preferred Experience
Prior work in the education ERP domain.
Deep knowledge of HR, Exam, Inventory, LMS, Admissions, Fees & Finance modules.
Exposure to high-traffic enterprise applications.
Strong leadership, mentoring, and problem-solving abilities.
Benefit:
Permanent Work From Home
Location : Bengaluru, India
Type : Full-time
Experience : 4-7 Years
Mode : Hybrid
The Role
We're looking for a Full Stack Engineer who thrives on building high-performance applications at scale. You'll work across our entire stack—from optimizing PostgreSQL queries on 140M+ records to crafting intuitive React interfaces. This is a high-impact role where your code directly influences how sales teams discover and engage with prospects worldwide.
What You'll Do
- Build and optimize REST APIs using Django REST Framework handling millions of records
- Design and implement complex database queries, indexes, and caching strategies for PostgreSQL
- Develop responsive, high-performance front-end interfaces with Next.js and React
- Implement Redis caching layers and optimize query performance for sub-second response times
- Design and implement smart search/filter systems with complex logic
- Collaborate on data pipeline architecture for processing large datasets
- Write clean, testable code with comprehensive unit and integration tests
- Participate in code reviews, architecture discussions, and technical planning
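To illustrate the caching-layer work described above, here is a minimal cache-aside sketch; a dict with TTLs stands in for Redis so the example runs anywhere, and all names (including the fake query) are illustrative:

```python
# Cache-aside sketch: a TTL dict stands in for Redis; in production you
# would swap in a redis client with the same get/set semantics.
import time

class TTLCache:
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        hit = self._store.get(key)
        if hit is None or hit[1] < time.monotonic():
            return None  # miss, or entry expired
        return hit[0]

    def set(self, key, value, ttl=60):
        self._store[key] = (value, time.monotonic() + ttl)

db_calls = {"n": 0}
def expensive_query(prospect_id):
    """Stand-in for a heavy PostgreSQL query."""
    db_calls["n"] += 1
    return {"id": prospect_id, "name": f"Prospect {prospect_id}"}

cache = TTLCache()
def get_prospect(prospect_id):
    key = f"prospect:{prospect_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached            # cache hit: no database round-trip
    value = expensive_query(prospect_id)
    cache.set(key, value, ttl=30)
    return value

first = get_prospect(7)   # miss -> hits the "database"
second = get_prospect(7)  # hit  -> served from cache
```

The TTL is the main tuning knob: short enough that stale records age out, long enough that hot keys stay off the database.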
Required Skills
- 4-7 years of professional experience in full stack development
- Strong proficiency in Python and Django/Django REST Framework
- Expert-level PostgreSQL knowledge: query optimization, indexing, EXPLAIN ANALYZE, partitioning
- Solid experience with Next.js, React, and modern JavaScript/TypeScript
- Experience with state management (Zustand, Redux, or similar)
- Working knowledge of Redis for caching and session management
- Familiarity with AWS services (RDS, EC2, S3, CloudFront)
- Understanding of RESTful API design principles and best practices
- Experience with Git, CI/CD pipelines, and agile development workflows
Nice to Have
- Experience with Elasticsearch for full-text search at scale
- Knowledge of data scraping, ETL pipelines, or data enrichment
- Experience with Celery for async task processing
- Familiarity with Tailwind CSS and modern UI/UX practices
- Previous work on B2B SaaS or data-intensive applications
- Understanding of security best practices and anti-scraping measures
Our Tech Stack
Backend
Python, Django REST Framework
Frontend
Next.js, React, Zustand, Tailwind CSS
Database
PostgreSQL 17, Redis
Infrastructure
AWS (RDS, EC2, S3, CloudFront), Docker
Tools
GitHub, PgBouncer
Why Join Us
- Work on a product processing 140M+ records—real scale, real challenges
- Direct impact on product direction and technical decisions
- Modern tech stack with room to experiment and innovate
- Collaborative team environment with a focus on growth
- Competitive compensation and flexible hybrid work model
About the role:
We are looking for a Backend Engineer to join a mature, scaled product platform that is already serving business-critical workflows. This role focuses on enhancing existing backend systems, improving reliability, performance, and scalability, and building new features on top of a well-established architecture.
The ideal candidate is strong at writing production-quality code and debugging complex distributed systems, and understands how design decisions impact scalability, availability, and long-term maintainability. You will work closely with cross-functional teams to ensure the platform continues to perform reliably at scale while evolving with business needs.
What you will be expected to do:
- Develop, enhance, and maintain backend services for existing user, inventory, pricing, order, and payment management systems running at scale.
- Write clean, efficient, and highly reliable code using Java 8+ and Spring Boot 2.7+.
- Own and improve production systems with a strong focus on performance, scalability, availability, and fault tolerance.
- Debug and resolve complex production issues involving services, databases, caches, and messaging systems.
- Contribute to low-level design (LLD) and actively participate in high-level architecture (HLD) discussions for new features and system improvements.
- Work with event-driven and asynchronous architectures, ensuring correctness and reliability of data flows.
- Optimize database schemas, queries, indexing strategies, and caching layers for high-throughput workloads.
- Partner with DevOps, QA, and Product teams to support smooth 24×7 production operations.
- Participate in code reviews, design reviews, and incident post-mortems to continuously improve system quality.
- Take end-to-end ownership of backend components, from design and implementation to deployment and production support.
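As a sketch of the correctness concerns in the event-driven work above, here is a minimal idempotent consumer. The role's stack is Java/Spring, but the pattern is language-agnostic, so it is shown in Python with illustrative names; the duplicate delivery mirrors Kafka's at-least-once semantics:

```python
# Idempotent-consumer sketch: processed event IDs are tracked so that
# redelivered messages do not double-apply a payment.

processed_ids = set()      # in production: a DB table or Redis set
balances = {"acct-1": 0}

def handle_payment_event(event):
    """Apply a payment exactly once, even if the event is redelivered."""
    if event["event_id"] in processed_ids:
        return False  # duplicate delivery, skip
    balances[event["account"]] += event["amount"]
    processed_ids.add(event["event_id"])
    return True

stream = [
    {"event_id": "e1", "account": "acct-1", "amount": 100},
    {"event_id": "e2", "account": "acct-1", "amount": 50},
    {"event_id": "e1", "account": "acct-1", "amount": 100},  # redelivery
]
results = [handle_payment_event(e) for e in stream]
```

In a real system the dedupe check and the balance update would happen in one database transaction, so a crash between the two cannot leave them inconsistent.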
You might be a strong candidate if you have/are:
- Bachelor’s degree in computer science, engineering, or equivalent experience.
- 2+ years of experience building and maintaining backend systems in production environments.
- Strong proficiency in Java with hands-on experience in Spring Boot–based microservices.
- Solid knowledge of data structures, algorithms, and backend problem solving.
- Strong experience with PostgreSQL or other relational databases in high-scale systems.
- Experience building and consuming RESTful APIs and working with asynchronous systems.
- Strong debugging and troubleshooting skills in live production systems.
- Good understanding of software engineering best practices, including testing, code reviews, CI/CD, and release management.
- Clear communication skills and the ability to partner effectively within a team.
- Hands-on experience with Kafka or similar messaging/event-streaming platforms.
- Exposure to distributed systems, microservices architecture, and scaling strategies.
Good to have:
- Experience using Redis, Elasticsearch, and MongoDB in production systems.
- Experience with monitoring, logging, and observability tools (e.g., Prometheus, Grafana, ELK).
- Familiarity with cloud infrastructure and containerized environments (Docker, Kubernetes).
- Experience participating in on-call rotations and handling production incidents.
What Sun King offers:
- Professional growth in a dynamic, rapidly expanding, high-social-impact industry
- An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards profound impact on people and the planet.
- A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
- Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun King Center for Leadership.
About Sun King
Sun King is a leading off-grid solar energy company providing affordable, reliable electricity to the 1.8 billion people who lack grid access. Operating across Africa and Asia, Sun King has connected over 20 million homes, adding 200,000 homes monthly.
Through a ‘pay-as-you-go’ model, customers make small daily payments (as low as $0.11) via mobile money or cash, eventually owning their solar equipment and saving on costly kerosene or diesel. To date, Sun King products have saved customers over $4 billion.
With 28,000 field agents and embedded electronics that regulate usage based on payments, Sun King ensures seamless energy access. Its products range from home lighting and phone charging systems to solar inverters capable of powering high-energy appliances.
Sun King is expanding into clean cooking, electric mobility, and entertainment while serving a wide range of income segments.
The company employs 2,800 staff across 12 countries, with women representing 44% of the workforce, and expertise spanning product design, data science, logistics, sales, software, and operations.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
- Experience/familiarity with RDBMS and NoSQL database technologies such as MySQL, MongoDB, Redis, and Elasticsearch.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experience with JS-based build/package tools such as Grunt, Gulp, Bower, and Webpack.
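The batch/cron experience mentioned above usually implies guarding against overlapping runs. A minimal, dependency-free Python sketch (the lock-file path and job are illustrative, not this company's setup) uses an exclusive-create lock file:

```python
import os

def run_exclusive(lock_path: str, job) -> bool:
    """Runs `job` only if no other instance holds the lock file;
    returns False when a previous cron run is still in flight."""
    try:
        # O_EXCL makes creation fail atomically if the lock already exists.
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    try:
        job()
        return True
    finally:
        os.close(fd)
        os.remove(lock_path)
```

In a real deployment a stale-lock timeout or `flock`-based locking is usually added on top, since a crashed run can otherwise leave the lock file behind.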
Senior PHP Laravel Backend Developer
Requirements:
• Proficiency in MySQL, AWS, Git, PHP, HTML.
• Minimum 2 years of experience in Laravel framework.
• Minimum 3 years of experience in PHP development.
• Overall professional experience of 3+ years.
• Basic knowledge of JavaScript, TypeScript, Node.js, and Express framework.
• Education: Graduation with an aggregate of 70%.
Roles and Responsibilities:
• The primary role will be development, quality checks, and maintenance of the platform to ensure improvement and stability.
• Contribute to the development of effective functions and systems that can meet the overall objectives of the company.
• Understanding of performance engineering and optimization.
• Ability to design and code complex programs.
Company Description
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Role Description
This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you
About the role
We are seeking a seasoned Backend Tech Lead with deep expertise in Golang and Python to lead our backend team. The ideal candidate has 6+ years of experience in backend technologies and 2–3 years of proven engineering mentoring experience, having successfully scaled systems and shipped B2C applications in collaboration with product teams.
Responsibilities
Technical & Product Delivery
● Oversee design and development of backend systems operating at 10K+ RPM scale.
● Guide the team in building transactional systems (payments, orders, etc.) and behavioral systems (analytics, personalization, engagement tracking).
● Partner with product managers to scope, prioritize, and release B2C product features and applications.
● Ensure architectural best practices, high-quality code standards, and robust testing practices.
● Own delivery of projects end-to-end with a focus on scalability, reliability, and business impact.
Operational Excellence
● Champion observability, monitoring, and reliability across backend services.
● Continuously improve system performance, scalability, and resilience.
● Streamline development workflows and engineering processes for speed and quality.
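Admission control is one of the standard building blocks behind the 10K+ RPM scale mentioned above. As a generic sketch (not this company's implementation), a token-bucket rate limiter in Python looks like:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: sustains `rate` requests/second
    while permitting short bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In production this state typically lives in Redis or an API gateway rather than process memory, so limits hold across replicas.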
Requirements
● Experience:
7+ years of professional experience in backend technologies.
2–3 years as a tech lead driving delivery.
● Technical Skills:
Strong hands-on expertise in Golang and Python.
Proven track record with high-scale systems (≥10K RPM).
Solid understanding of distributed systems, APIs, SQL/NoSQL databases, and cloud platforms.
● Leadership Skills:
Demonstrated success in managing teams through 2–3 appraisal cycles.
Strong experience working with product managers to deliver consumer-facing applications.
● Excellent communication and stakeholder management abilities.
Nice-to-Have
● Familiarity with containerization and orchestration (Docker, Kubernetes).
● Experience with observability tools (Prometheus, Grafana, OpenTelemetry).
● Previous leadership experience in B2C product companies operating at scale.
What We Offer
● Opportunity to lead and shape a backend engineering team building at scale.
● A culture of ownership, innovation, and continuous learning.
● Competitive compensation, benefits, and career growth opportunities.
🚀 RECRUITING BOND HIRING
Role: CLOUD OPERATIONS & MONITORING ENGINEER - (THE GUARDIAN OF UPTIME)
⚡ THIS IS NOT A MONITORING ROLE
THIS IS A COMMAND ROLE
You don’t watch dashboards.
You control outcomes.
You don’t react to incidents.
You eliminate them before they escalate.
This role powers an AI-driven SaaS + IoT platform where:
---> Uptime is non-negotiable
---> Latency is hunted
---> Failures are never allowed to repeat
Incidents don’t grow.
Problems don’t hide.
Uptime is enforced.
🧠 WHAT YOU’LL OWN
(Real Work. Real Impact.)
🔍 Total Observability
---> Real-time visibility across cloud, application, database & infrastructure
---> High-signal dashboards (Grafana + cloud-native tools)
---> Performance trends tracked before growth breaks systems
🚨 Smart Alerting (No Noise)
---> Alerts that fire only when action is required
---> Zero false positives. Zero alert fatigue
Right signal → right person → right time
⚙ Automation as a Weapon
---> End-to-end automation of operational tasks
---> Standardized logging, metrics & alerting
---> Systems that scale without human friction
🧯 Incident Command & Reliability
---> First responder for critical incidents (on-call rotation)
---> Root cause analysis across network, app, DB & storage
Fix fast — then harden so it never breaks the same way again
📘 Operational Excellence
---> Battle-tested runbooks
---> Documentation that actually works under pressure
Every incident → a stronger platform
🛠️ TECHNOLOGIES YOU’LL MASTER
☁ Cloud: AWS | Azure | Google Cloud
📊 Monitoring: Grafana | Metrics | Traces | Logs
📡 Alerting: Production-grade alerting systems
🌐 Networking: DNS | Routing | Load Balancers | Security
🗄 Databases: Production systems under real pressure
⚙ DevOps: Automation | Reliability Engineering
🎯 WHO WE’RE LOOKING FOR
Engineers who take uptime personally.
You bring:
---> 3+ years in Cloud Ops / DevOps / SRE
---> Live production SaaS experience
---> Deep AWS / Azure / GCP expertise
---> Strong monitoring & alerting experience
---> Solid networking fundamentals
---> Calm, methodical incident response
---> Bonus (Highly Preferred):
---> B2B SaaS + IoT / hybrid platforms
---> Strong automation mindset
---> Engineers who think in systems, not tickets
💼 JOB DETAILS
📍 Bengaluru
🏢 Hybrid (WFH)
💰 (Final CTC depends on experience & interviews)
🌟 WHY THIS ROLE?
Most cloud teams manage uptime. We weaponize it.
Your work won’t just keep systems running — it will keep customers confident, operations flawless, and competitors wondering how it all works so smoothly.
📩 APPLY / REFER : 🔗 Know someone who lives for reliability, observability & cloud excellence?
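"Alerts that fire only when action is required" is often implemented by requiring several consecutive threshold breaches before paging, which suppresses one-off spikes. A minimal sketch of that pattern (threshold and window size are illustrative):

```python
from collections import deque

class Alerter:
    """Fires only after `breaches` consecutive datapoints exceed
    `threshold` -- a common noise-reduction pattern in alerting rules."""

    def __init__(self, threshold: float, breaches: int):
        self.threshold = threshold
        self.window = deque(maxlen=breaches)

    def observe(self, value: float) -> bool:
        self.window.append(value > self.threshold)
        # Alert only when the window is full and every datapoint breached.
        return len(self.window) == self.window.maxlen and all(self.window)
```

Tools like Prometheus express the same idea declaratively (e.g. an alert rule with a `for:` duration) instead of in application code.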
- Multi-Cloud Operations: minimum 2 public clouds (GCP & Azure preferred)
- Kubernetes: strong hands-on experience with production clusters
- DevOps: CI/CD pipelines, automation, IaC (Terraform preferred)
- Troubleshooting: deep Linux, networking, performance, and distributed systems debugging
Lead Software Engineer
Bidgely is seeking an exceptional and visionary Lead Software Engineer to join its core team in Bangalore. As a Lead Software Engineer, you will be working closely with EMs and org heads in shaping the roadmap and planning and set the technical direction for the team, influence architectural decisions, and mentor other engineers while delivering highly reliable, scalable products powered by large data, advanced machine learning models, and responsive user interfaces. Renowned for your deep technical expertise, you are capable of deconstructing any system, solving complex problems creatively, and elevating those around you. Join our innovative and dynamic team that thrives on creativity, technical excellence, and a belief that nothing is impossible with collaboration and hard work.
Responsibilities
- Lead the design and delivery of complex, scalable web services, APIs, and backend data modules.
- Define and drive adoption of best practices in system architecture, component reusability, and software design patterns across teams.
- Provide technical leadership in product, architectural, and strategic engineering discussions.
- Mentor and guide engineers at all levels, fostering a culture of learning and growth.
- Collaborate with cross-functional teams (engineering, product management, data science, and UX) to translate business requirements into scalable, maintainable solutions.
- Champion and drive continuous improvement initiatives for code quality, performance, security, and reliability.
- Evaluate and implement emerging technologies, tools, and methodologies to ensure competitive advantage.
- Present technical concepts and results clearly to both technical and non-technical stakeholders; influence organizational direction and recommend key technical investments.
Requirements
- 6+ years of experience in designing and developing highly scalable backend and middle tier systems.
- BS/MS/PhD in Computer Science or a related field from a leading institution.
- Demonstrated mastery of data structures, algorithms, and system design; experience architecting large-scale distributed systems and leading significant engineering projects.
- Deep fluency in Java, Spring, Hibernate, J2EE, RESTful services; expertise in at least one additional backend language/framework.
- Strong hands-on experience with both SQL (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra, Redis) databases, including schema design, optimization, and performance tuning for large data sets.
- Experience with Distributed Systems, Cloud Architectures, CI/CD, and DevOps principles.
- Strong leadership, mentoring, and communication skills; proven ability to drive technical vision and alignment across teams.
- Track record of delivering solutions in fast-paced and dynamic start-up environments.
- Commitment to quality, attention to detail, and a passion for coaching others.
Python Backend Developer
We are seeking a skilled Python Backend Developer responsible for managing the interchange of data between the server and the users. Your primary focus will be on developing server-side logic to ensure high performance and responsiveness to requests from the front end. You will also be responsible for integrating front-end elements built by your coworkers into the application, as well as managing AWS resources.
Roles & Responsibilities
- Develop and maintain scalable, secure, and robust backend services using Python
- Design and implement RESTful APIs and/or GraphQL endpoints
- Integrate user-facing elements developed by front-end developers with server-side logic
- Write reusable, testable, and efficient code
- Optimize components for maximum performance and scalability
- Collaborate with front-end developers, DevOps engineers, and other team members
- Troubleshoot and debug applications
- Implement data storage solutions (e.g., PostgreSQL, MySQL, MongoDB)
- Ensure security and data protection
Mandatory Technical Skill Set
- Implementing optimal data storage (e.g., PostgreSQL, MySQL, MongoDB, S3)
- Python backend development experience
- Design, implement, and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, or GitHub Actions
- Implemented and managed containerization platforms such as Docker and orchestration tools like Kubernetes
- Previous hands-on experience in:
- EC2, S3, ECS, EMR, VPC, Subnets, SQS, CloudWatch, CloudTrail, Lambda, SageMaker, RDS, SES, SNS, IAM, AWS Backup, AWS WAF
- SQL
Specific Knowledge/Skills
- 4-6 years of experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries.
- Code versioning and collaboration (Git).
- Knowledge of libraries for extracting data from websites.
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
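The "extracting data from websites" skill above usually means libraries like BeautifulSoup or Scrapy; as a hedged, dependency-free illustration of the core idea, the standard library's `html.parser` can already pull links out of a page:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags -- the core of simple web extraction."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html: str) -> list[str]:
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links
```

Real scrapers add fetching, politeness (robots.txt, rate limits), and error handling on top of parsing.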
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience with AWS cloud, including at recent employers
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide CTC Breakup (Fixed + Variable)?
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
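The data-drift monitoring responsibility above can range from PSI/KS tests to full observability platforms; as a minimal sketch of the idea (the z-score threshold is illustrative, and real systems compare whole distributions, not just means):

```python
import statistics

def mean_drift(baseline: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
    """Flags drift when the live mean sits more than `z_threshold` standard
    errors from the baseline mean -- a crude proxy for PSI/KS-style checks.
    Assumes the baseline has non-zero variance."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    se = sigma / (len(live) ** 0.5)  # standard error of the live mean
    z = abs(statistics.mean(live) - mu) / se
    return z > z_threshold
```

In an Airflow deployment a check like this would run as a scheduled task per feature, pushing a metric to CloudWatch or Prometheus and alerting on breach.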
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
The firm :
It’s an amazing time to be joining SalaryBox as we continue to transform attendance and payroll for over 60 million MSMEs in India.
We launched the app in Jan 2021, and now have more than 2 million downloads of the app. We support more than 200k businesses and operate at scale.
Backed by Y-Combinator, SalaryBox is India’s leading attendance and payroll app. Considered to be the engine of economies around the world, the MSME segment in India alone has ~63 million units, and employs ~100 million people. The sector accounts for 27% of GDP and is crucial to the functioning of the economy.
We are on a mission to make work easier for these business owners, managers, and employees so that they can focus on the things they do best.
Today, SalaryBox is a fun bunch of analytical and ambitious folks building the first-of-its-kind technologies for the MSME ecosystem. We are here to enhance the employee experience of over 10 million end consumers in the next twelve months. Our mission is big, so we act with urgency in everything we do. We find creative ways to test ideas and learn today so that we focus on the right things tomorrow.
About the Role
We are looking for a Senior Backend Engineer with at least 3 years of experience. You will be an integral part of building the back-end architecture and developing core systems.
Responsibilities:
- Architect and develop our core systems from scratch.
- Deploy and maintain the product on AWS.
- Make strategic technical decisions.
- Commit to best practices for testing, logging, and deployments
- Help build the engineering team & subsequently mentor junior developers.
Requirements:
- 3+ years’ experience as a backend engineer.
- Expert in Python and Django.
- Experience with databases (Postgres, Redis etc. ).
- Good understanding of platforms (Docker, AWS).
- Basic understanding of Dev Ops.
- Previous experience architecting and scaling back-end systems
- Hands-on attitude and ability to drive solutions to completion.
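The "best practices for testing, logging, and deployments" requirement typically starts with structured logs. A hedged stdlib-only sketch (field names are illustrative) of a JSON log formatter that ships cleanly to CloudWatch or ELK:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emits one JSON object per log record, so log aggregators can
    filter and index on fields instead of parsing free text."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),  # applies % args, if any
        })
```

Attaching it is the usual `handler.setFormatter(JsonFormatter())`; in Django this would go in the `LOGGING` settings dict.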
Review Criteria:
- Strong Software Engineer fullstack profile using NodeJS / Python and React
- 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
- Must have strong experience in working on Typescript
- Must have experience in message-based systems like Kafka, RabbitMq, Redis
- Databases - PostgreSQL & NoSQL databases like MongoDB
- Product Companies Only
- Tier 1 Engineering Institutes (IIT, NIT, BITS, IIIT, DTU or equivalent)
Preferred:
- Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
- Experience in mentoring, coaching the team.
Role & Responsibilities:
We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.
The Ideal Candidate Will Be Able To-
- Take ownership of delivering performant, scalable and high-quality cloud-based software, both frontend and backend side.
- Mentor team members to develop in line with product requirements.
- Collaborate with Senior Architect for design and technology choices for product development roadmap.
- Do code reviews.
Ideal Candidate:
- Thorough knowledge of developing cloud-based software including backend APIs and react based frontend.
- Thorough knowledge of scalable design patterns and message-based systems such as Kafka, RabbitMq, Redis, MongoDB, ORM, SQL etc.
- Experience with AWS services such as S3, IAM, Lambda etc.
- Expert level coding skills in Python FastAPI/Django, NodeJs, TypeScript, ReactJs.
- Eye for user responsive designs on the frontend.
Perks, Benefits and Work Culture:
- We prioritize people above all else. While we're recognized for our innovative technology solutions, it's our people who drive our success. That’s why we offer a comprehensive and competitive benefits package designed to support your well-being and growth:
- Medical Insurance with coverage up to INR 8,00,000 for the employee and their family
We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.
6-Month Accomplishments
- Familiarize yourself with the Poshmark tech stack and functional requirements.
- Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
- Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
- Start contributing by working on small- to medium-scale projects.
- Join the on-call rotation as a secondary to get familiar with the on-call process.
12+ Month Accomplishments
- Execute projects related to comms functionality independently, with little guidance from the lead.
- Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
- Identify gaps in infrastructure and suggest or implement improvements.
- Get involved in on-call rotation.
Responsibilities
- Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services.
- Gain deep knowledge of our complex applications.
- Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
- Work closely with development teams to ensure that platforms are designed with "operability" in mind.
- Function well in a fast-paced, rapidly-changing environment.
- Participate in a 24x7 on-call rotation.
Desired Skills
- 5+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
- 5+ years in a UNIX-based large-scale web operations role.
- 5+ years of experience in doing 24/7 support for large scale production environments.
- Battle-proven, real-life experience in running a large scale production operation.
- Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
- Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
- Scripting/coding experience.
- Ability to use a wide variety of open source technologies and tools.
Technologies we use:
- Ruby, JavaScript, NodeJs, Tomcat, Nginx, HaProxy
- MongoDB, RabbitMQ, Redis, ElasticSearch.
- Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
- Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
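Much of the automation this role describes reduces to making transient failures invisible. A generic retry helper with exponential backoff (a sketch, not Poshmark's tooling; delays are illustrative):

```python
import time

def retry(fn, attempts: int = 3, base_delay: float = 0.01, backoff: float = 2.0):
    """Calls `fn`, retrying on any exception with exponential backoff;
    re-raises the last error once attempts are exhausted."""
    delay = base_delay
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
            delay *= backoff
```

Production versions usually add jitter (randomized delay) and retry only on error types known to be transient, to avoid hammering a struggling dependency in lockstep.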
About Poshmark
Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.
We’re looking for an experienced Site Reliability Engineer to fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.
6-Month Accomplishments
- Familiarize yourself with the Poshmark tech stack and functional requirements.
- Get comfortable with the automation tools/frameworks used within the CloudOps organization and the deployment processes associated with them.
- Gain in-depth knowledge of the relevant product functionality and the infrastructure required for it.
- Start contributing by working on small- to medium-scale projects.
- Join the on-call rotation as a secondary to get familiar with the on-call process.
12+ Month Accomplishments
- Execute projects related to comms functionality independently, with little guidance from the lead.
- Create meaningful alerts and dashboards for the various sub-systems involved in the targeted infrastructure.
- Identify gaps in infrastructure and suggest or implement improvements.
- Get involved in on-call rotation.
Responsibilities
- Serve as a primary point responsible for the overall health, performance, and capacity of one or more of our Internet-facing services.
- Gain deep knowledge of our complex applications.
- Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
- Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
- Work closely with development teams to ensure that platforms are designed with "operability" in mind.
- Function well in a fast-paced, rapidly-changing environment.
- Participate in a 24x7 on-call rotation
Desired Skills
- 4+ years of experience in Systems Engineering/Site Reliability Operations role is required, ideally in a startup or fast-growing company.
- 4+ years in a UNIX-based large-scale web operations role.
- 4+ years of experience in doing 24/7 support for large scale production environments.
- Battle-proven, real-life experience in running a large scale production operation.
- Experience working on cloud-based infrastructure, e.g., AWS, GCP, Azure.
- Hands-on experience with continuous integration tools such as Jenkins, configuration management with Ansible, and systems monitoring and alerting with tools such as Nagios, New Relic, and Graphite.
- Scripting/coding experience.
- Ability to use a wide variety of open source technologies and tools.
Technologies we use:
- Ruby, JavaScript, NodeJs, Tomcat, Nginx, HaProxy
- MongoDB, RabbitMQ, Redis, ElasticSearch.
- Amazon Web Services (EC2, RDS, CloudFront, S3, etc.)
- Terraform, Packer, Jenkins, Datadog, Kubernetes, Docker, Ansible and other DevOps tools.
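As a rough illustration of the alerting work this role involves (meaningful alerts rather than noisy ones), here is a minimal sketch of multi-window error-budget burn-rate logic of the kind teams encode in tools like Datadog or Nagios. All names and thresholds are illustrative assumptions, not any specific company's policy.

```python
def burn_rate(error_ratio: float, slo: float) -> float:
    """How fast the error budget is being consumed (1.0 = exactly on budget)."""
    budget = 1.0 - slo  # allowed error fraction, e.g. 0.001 for a 99.9% SLO
    return error_ratio / budget

def should_page(short_window_errors: float, long_window_errors: float,
                slo: float = 0.999, threshold: float = 14.4) -> bool:
    """Page only if both a short and a long window burn fast (reduces flapping)."""
    return (burn_rate(short_window_errors, slo) >= threshold
            and burn_rate(long_window_errors, slo) >= threshold)

# A 2% error rate against a 99.9% SLO burns the budget ~20x too fast:
print(round(burn_rate(0.02, 0.999), 3))  # 20.0
print(should_page(0.02, 0.018))          # True: both windows exceed 14.4x
print(should_page(0.02, 0.0005))         # False: the long window is healthy
```

Requiring both windows to burn is a common way to avoid paging on brief blips while still catching sustained incidents quickly.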
SENIOR INFORMATION SECURITY ENGINEER (DEVSECOPS)
Key Skills: Software Development Life Cycle (SDLC), CI/CD
About Company: Consumer Internet / E-Commerce
Company Size: Mid-Sized
Experience Required: 6 - 10 years
Working Days: 5 days/week
Office Location: Bengaluru [Karnataka]
Review Criteria:
Mandatory:
- Strong DevSecOps profile
- Must have 5+ years of hands-on experience in Information Security, with a primary focus on cloud security across AWS, Azure, and GCP environments.
- Must have strong practical experience working with Cloud Security Posture Management (CSPM) tools such as Prisma Cloud, Wiz, or Orca along with SIEM / IDS / IPS platforms
- Must have proven experience in securing Kubernetes and containerized environments, including image security, runtime protection, RBAC, and network policies.
- Must have hands-on experience integrating security within CI/CD pipelines using tools such as Snyk, GitHub Advanced Security, or equivalent security scanning solutions.
- Must have a solid understanding of core security domains including network security, encryption, identity and access management, key management, and security governance, including cloud-native security services like GuardDuty, Azure Security Center, etc.
- Must have practical experience with Application Security Testing tools including SAST, DAST, and SCA in real production environments
- Must have hands-on experience with security monitoring, incident response, alert investigation, root-cause analysis (RCA), and managing VAPT / penetration testing activities
- Must have experience securing infrastructure-as-code and cloud deployments using Terraform, CloudFormation, ARM, Docker, and Kubernetes
- Must have experience working in B2B SaaS product companies
- Must have working knowledge of globally recognized security frameworks and standards such as ISO 27001, NIST, and CIS with exposure to SOC2, GDPR, or HIPAA compliance environments
Preferred:
- Experience with DevSecOps automation, security-as-code, and policy-as-code implementations
- Exposure to threat intelligence platforms, cloud security monitoring, and proactive threat detection methodologies, including EDR / DLP or vulnerability management tools
- Strong ownership mindset, proactive security-first thinking, and the ability to communicate risks in clear business language
Roles & Responsibilities:
We are looking for a Senior Information Security Engineer who can help protect our cloud infrastructure, applications, and data while enabling teams to move fast and build securely.
This role sits deep within our engineering ecosystem. You’ll embed security into how we design, build, deploy, and operate systems—working closely with Cloud, Platform, and Application Engineering teams. You’ll balance proactive security design with hands-on incident response, and help shape a strong, security-first culture across the organization.
If you enjoy solving real-world security problems, working close to systems and code, and influencing how teams build securely at scale, this role is for you.
What You’ll Do-
Cloud & Infrastructure Security:
- Design, implement, and operate cloud-native security controls across AWS, Azure, GCP, and Oracle.
- Strengthen IAM, network security, and cloud posture using services like GuardDuty, Azure Security Center and others.
- Partner with platform teams to secure VPCs, security groups, and cloud access patterns.
Application & DevSecOps Security:
- Embed security into the SDLC through threat modeling, secure code reviews, and security-by-design practices.
- Integrate SAST, DAST, and SCA tools into CI/CD pipelines.
- Secure infrastructure-as-code and containerized workloads using Terraform, CloudFormation, ARM, Docker, and Kubernetes.
Security Monitoring & Incident Response:
- Monitor security alerts and investigate potential threats across cloud and application layers.
- Lead or support incident response efforts, root-cause analysis, and corrective actions.
- Plan and execute VAPT and penetration testing engagements (internal and external), track remediation, and validate fixes.
- Conduct red teaming activities and tabletop exercises to test detection, response readiness, and cross-team coordination.
- Continuously improve detection, response, and testing maturity.
Security Tools & Platforms:
- Manage and optimize security tooling including firewalls, SIEM, EDR, DLP, IDS/IPS, CSPM, and vulnerability management platforms.
- Ensure tools are well-integrated, actionable, and aligned with operational needs.
Compliance, Governance & Awareness:
- Support compliance with industry standards and frameworks such as SOC2, HIPAA, ISO 27001, NIST, CIS, and GDPR.
- Promote secure engineering practices through training, documentation, and ongoing awareness programs.
- Act as a trusted security advisor to engineering and product teams.
Continuous Improvement:
- Stay ahead of emerging threats, cloud vulnerabilities, and evolving security best practices.
- Continuously raise the bar on the company's security posture through automation and process improvement.
Endpoint Security (Secondary Scope):
- Provide guidance on endpoint security tooling such as SentinelOne and Microsoft Defender when required.
Ideal Candidate:
- Strong hands-on experience in cloud security across AWS and Azure.
- Practical exposure to CSPM tools (e.g., Prisma Cloud, Wiz, Orca) and SIEM / IDS / IPS platforms.
- Experience securing containerized and Kubernetes-based environments.
- Familiarity with CI/CD security integrations (e.g., Snyk, GitHub Advanced Security, or similar).
- Solid understanding of network security, encryption, identity, and access management.
- Experience with application security testing tools (SAST, DAST, SCA).
- Working knowledge of security frameworks and standards such as ISO 27001, NIST, and CIS.
- Strong analytical, troubleshooting, and problem-solving skills.
Nice to Have:
- Experience with DevSecOps automation and security-as-code practices.
- Exposure to threat intelligence and cloud security monitoring solutions.
- Familiarity with incident response frameworks and forensic analysis.
- Security certifications such as CISSP, CISM, CCSP, or CompTIA Security+.
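To make the CI/CD security-gating responsibility above concrete, here is a minimal sketch of a pipeline gate that fails a build when a scanner (SAST/DAST/SCA) reports findings at or above a severity threshold. The finding format and threshold are illustrative assumptions, not the output format of any specific tool.

```python
# Severity ordering used by the gate; an assumed convention for this sketch.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list[dict], fail_at: str = "high") -> tuple[bool, list[dict]]:
    """Return (passed, blocking_findings) for a list of scanner findings."""
    floor = SEVERITY_ORDER[fail_at]
    blocking = [f for f in findings
                if SEVERITY_ORDER.get(f.get("severity", "low"), 0) >= floor]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "CVE-2021-0001", "severity": "medium"},
    {"id": "CVE-2021-0002", "severity": "critical"},
]
passed, blocking = gate(findings)
print(passed)                        # False: one critical finding blocks the build
print([f["id"] for f in blocking])  # ['CVE-2021-0002']
```

In practice the same shape appears as a pipeline step: parse the scanner's report, apply the policy, and exit non-zero to stop the deploy.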
Perks, Benefits and Work Culture:
A wholesome opportunity in a fast-paced environment that will enable you to juggle multiple concepts while maintaining quality, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the comprehensive benefits that the company offers.
Note: Salary will be offered based on your overall experience and last drawn salary.
Job Title: Python / Django Backend Developer
Experience: 3+ Years
Location: Gurgaon (Onsite)
Work Mode: 5 Days Working
About the Company
We are hiring for a product-based global furniture and homeware organization with operations in the UK and India. The company builds and maintains in-house digital platforms focused on design-to-delivery, supply-chain, and logistics. The team focuses on building scalable, high-performance internal systems.
Roles & Responsibilities
- Design, develop, and maintain RESTful APIs and backend services using Python & Django / Django REST Framework
- Build scalable, secure, and optimized database schemas and queries using PostgreSQL/MySQL
- Collaborate with frontend, product, and QA teams for end-to-end feature delivery
- Write clean, reusable, and testable code following best engineering practices
- Optimize application performance, reliability, and scalability
- Participate in code reviews, documentation, and CI/CD processes
- Deploy and manage backend services on cloud infrastructure and web servers
Required Skills & Qualifications
- Strong proficiency in Python and Django / Django REST Framework
- Solid understanding of relational databases (PostgreSQL/MySQL)
- Experience with REST API design, authentication & authorization
- Working knowledge of AWS services: EC2, ELB, S3, IAM, RDS
- Experience configuring and managing Nginx/Apache
- Familiarity with Git, Docker, and CI/CD workflows
- Strong problem-solving and debugging skills
Preferred Qualifications
- Experience with cloud platforms (AWS/GCP/Azure)
- Familiarity with microservices architecture
- Experience with Celery, RabbitMQ, Kafka
- Knowledge of testing frameworks (Pytest, unittest)
- Exposure to e-commerce platforms or high-traffic scalable systems
Procedure is hiring for Drover.
This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.
About Drover
Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers as they provide for a growing population. Drover is empowering ranchers to feed the world efficiently and sustainably by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce their carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.
We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.
Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.
We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.
About The Role
As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.
Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.
What You'll Do
- Develop Drover IoT cloud architecture from the ground up (it’s a green field project)
- Design and implement services to support wearable devices, mobile app, and backend API
- Implement data processing and storage pipelines
- Create and maintain Infrastructure-as-Code
- Support the engineering team across all aspects of early-stage development -- after all, this is a startup
Requirements
- 5+ years of experience developing cloud architecture on AWS
- In-depth understanding of various AWS services, especially those related to IoT
- Expertise in cloud-hosted, event-driven, serverless architectures
- Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
- Experience with networking and socket programming
- Experience with Kubernetes or similar orchestration platforms
- Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
- Familiarity with relational databases (PostgreSQL)
- Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
Nice To Have
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field
Job Description: Full Stack Developer
Role Overview
We are seeking a skilled Full Stack Developer with a minimum of 3 years of hands-on experience building modern web and mobile applications. The ideal candidate will have strong expertise in React and/or Flutter on the frontend, backed by Java-based backend development, and the ability to work across the full software delivery lifecycle.
This role requires a pragmatic engineer who can translate business requirements into scalable, maintainable solutions while collaborating effectively with product, QA, and DevOps teams.
Key Responsibilities
- Design, develop, and maintain full-stack applications with responsive web and/or cross platform mobile interfaces
- Build and optimize frontend components using React and/or Flutter with a focus on performance and usability
- Develop backend services and APIs using Java (Spring / Spring Boot preferred)
- Integrate frontend applications with backend services via RESTful APIs
- Write clean, well structured, and testable code following best practices
- Participate in architecture discussions, code reviews, and technical decision-making
- Debug, troubleshoot, and resolve application issues across the stack
- Collaborate closely with designers, product managers, and other engineers
- Support deployments and work with DevOps pipelines where required
Required Skills & Experience
- Minimum 3 years of professional experience as a Full Stack Developer
- Strong experience with React and/or Flutter
- Solid backend development experience using Java
- Experience building REST APIs and integrating frontend with backend services
- Working knowledge of HTML, CSS, JavaScript, and modern frontend tooling
- Familiarity with relational databases (PostgreSQL/MySQL) and basic query optimization
- Experience with Git and collaborative development workflows
- Understanding of application security, authentication, and authorization concepts
Preferred Skills
- Experience with Spring Boot, Hibernate/JPA
- Exposure to Node.js or Angular
- Experience with cloud platforms (AWS preferred)
- Familiarity with CI/CD pipelines and containerization (Docker)
- Experience building offline capable or mobile first applications
- Prior work in enterprise or product based environments
Soft Skills
- Strong problem solving and analytical abilities
- Good communication skills and ability to work in cross functional teams
- Ownership mindset with attention to code quality and maintainability
- Ability to adapt quickly in a fast paced development environment
Experience Level
- 3+ years of relevant full stack development experience
Educational Qualification
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- A Master’s degree or equivalent experience is a plus
To design, automate, and manage scalable cloud infrastructure that powers real-time AI and communication workloads globally.
Key Responsibilities
- Implement and manage CI/CD pipelines (GitHub Actions, Jenkins, or GitLab).
- Manage Kubernetes/EKS clusters
- Implement infrastructure as code (provisioning via Terraform, CloudFormation, Pulumi, etc.).
- Implement observability (Grafana, Loki, Prometheus, ELK/CloudWatch).
- Enforce security/compliance guardrails (GDPR, DPDP, ISO 27001, PCI, HIPAA).
- Drive cost-optimization and zero-downtime deployment strategies.
- Collaborate with developers to containerize and deploy services.
Required Skills & Experience
- 4–8 years in DevOps or Cloud Infrastructure roles.
- Proficiency with AWS (EKS, Lambda, API Gateway, S3, IAM).
- Experience with infrastructure-as-code and CI/CD automation.
- Familiarity with monitoring, alerting, and incident management.
What Success Looks Like
- < 10 min build-to-deploy cycle.
- 99.999% uptime with proactive incident response.
- Documented and repeatable DevOps workflows.
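The uptime target above translates into a concrete downtime budget. A quick back-of-the-envelope calculation (using a 365.25-day year) shows how little room "five nines" leaves:

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Downtime a given availability target permits, in minutes per year."""
    return (1.0 - availability) * 365.25 * 24 * 60

for target in (0.999, 0.9999, 0.99999):
    print(f"{target:.3%}: {downtime_minutes_per_year(target):.2f} min/year")
# 99.900%: 525.96 min/year (~8.8 hours)
# 99.990%: 52.60 min/year
# 99.999%: 5.26 min/year
```

At 99.999% the annual budget is barely five minutes, which is why zero-downtime deployments and proactive incident response are listed together: a single botched deploy can consume the whole year's budget.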
We are looking for a hands-on PostgreSQL Lead / Senior DBA (L3) to join our production engineering team. This is not an architect role. The focus is on deep PostgreSQL expertise, real-world production ownership, and mentoring junior DBAs within an existing database ecosystem.
You will work as a senior individual contributor with technical leadership responsibilities, operating in a live, high-availability environment with guidance and support from a senior team.
Key Responsibilities
- Own and manage PostgreSQL databases in production environments
- Perform PostgreSQL installation, upgrades, migrations, and configuration
- Handle L2/L3 production incidents, root cause analysis, and performance bottlenecks
- Execute performance tuning and query optimization
- Manage backup, recovery, replication, HA, and failover strategies
- Support re-architecture and optimization initiatives led by senior stakeholders
- Monitor database health, capacity, and reliability proactively
- Collaborate with application, infra, and DevOps teams
- Mentor and guide L1/L2 DBAs as part of the L3 role
- Demonstrate ownership during night/weekend production issues (comp-offs provided)
Must-Have Skills (Non-Negotiable)
- Very strong PostgreSQL expertise
- Deep understanding of PostgreSQL internals and behavior
- Proven experience with:
- Performance tuning & optimization
- Production troubleshooting (L2/L3)
- Backup & recovery
- Replication & High Availability
- Ability to work independently in critical production scenarios
- PostgreSQL-focused profiles are absolutely acceptable (no requirement to know other DBs)
Good-to-Have (Not Mandatory)
- Exposure to AWS and/or Azure
- Experience with cloud-managed or self-hosted Postgres
- Knowledge of other databases (Oracle, MS SQL, DB2, ClickHouse, Neo4j, etc.) — purely a plus
Note: Strong on-prem PostgreSQL DBAs are welcome. Cloud gaps can be trained post-joining.
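As a sketch of the routine L3 triage this role describes, here is the kind of ranking a DBA does when reading `pg_stat_statements`: order queries by cumulative execution time (mean time x calls) to find the real offenders, not just the individually slowest statements. The rows below are invented sample data standing in for a real catalog query.

```python
def top_offenders(rows: list[dict], n: int = 2) -> list[str]:
    """Return the n queries consuming the most cumulative execution time."""
    ranked = sorted(rows, key=lambda r: r["mean_ms"] * r["calls"], reverse=True)
    return [r["query"] for r in ranked[:n]]

stats = [
    {"query": "SELECT * FROM orders WHERE user_id = $1", "calls": 90000, "mean_ms": 4.0},
    {"query": "UPDATE stock SET qty = qty - 1 WHERE id = $1", "calls": 500, "mean_ms": 2.0},
    {"query": "SELECT report_blob FROM reports", "calls": 20, "mean_ms": 30000.0},
]
print(top_offenders(stats))
# ['SELECT report_blob FROM reports', 'SELECT * FROM orders WHERE user_id = $1']
```

Note how a cheap query called 90,000 times outranks everything except the rare 30-second report: cumulative cost, not per-call latency, is usually what drives tuning priorities.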
Work Model & Availability (Important – Please Read Carefully)
- Work From Office only (Bangalore – Koramangala)
- Regular day shift, but with a 24×7 production ownership mindset
- Availability for night/weekend troubleshooting when required
- No rigid shifts; expectation is responsible lead-level ownership
- Comp-offs provided for off-hours work
Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS CodeBuild/CodeDeploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
Required Skills & Experience:
- 3-4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
About VisibilityStack
VisibilityStack is a demand capture platform that helps businesses capture demand for their solution wherever their audience is searching, whether that's Google Search, AI-search platforms, or LLMs. The platform is powered by AI agents that identify what your audience is asking when looking for a solution like yours, engineer content that answers those questions, and structure it so Google and AI systems can easily understand and recommend it in their responses. We also strengthen your online credibility through strategic backlinks and a strong social presence.
Everything is guided by real-time data and analytics. The system continuously analyzes what's working, removes what doesn't, and keeps your content working around the clock. The result is simple: the right people can find you, trust you, and reach out when they need what you offer.
The Role
We need a Senior Engineer who ships production code that scales. You'll be the technical anchor—building critical infrastructure, solving complex problems, and mentoring junior developers through code reviews and pair programming.
You'll shape our future by writing the code that becomes our foundation. We’re looking for a Next.js Developer (Full-Stack) who can build and ship modern web products end-to-end. You’ll work across frontend, backend, and cloud (AWS) and collaborate closely with product and design to deliver fast, reliable experiences. A strong plus: you have a knack for AI-enabled products, meaning you understand how to integrate LLM features thoughtfully, not just “add a chatbot”. This is about technical leadership through excellence.
What's in it for you:
- Own mission-critical systems end-to-end — Your code directly generates customer revenue
- Skip the politics, ship products — No layers of approval, no enterprise bureaucracy
- Shape product direction — Your technical insights influence product strategy, not just implementation
- Learn cutting-edge AI in production — Work with LLMs, vector databases, and agent orchestration at scale
- Shape technical decisions and processes — Your input matters on how we build, not just what
- Accelerated growth path — As we scale, you choose: become our technical lead or remain a deeply influential IC
- Direct founder access — Collaborate on product vision, not just execute specs
Location: Janakpuri, Delhi (Hybrid - Maker's Schedule)
Our Work Philosophy:
We follow the Maker's Schedule, not the Manager's Schedule. This means uninterrupted blocks of deep work when you're building, and high-bandwidth collaboration when we're solving problems together.
In Practice:
- In-office days: Whiteboard architecture sessions, rapid product iterations, deep dives into product strategy, complex debugging that needs three minds on one problem
- Deep work days: Uninterrupted coding from wherever you work best—home, office, or that coffee shop with perfect noise levels
- Balance by design: We optimize for both intense collaboration and deep focus
The best technical breakthroughs happen in two modes: intense in-person collaboration where ideas bounce rapidly, and deep solo work where complex problems get solved. We protect both.
Responsibilities
- Build and maintain production-grade web applications using Next.js (App Router preferred), React, and TypeScript.
- Develop full-stack features including APIs, server actions, background jobs, and integrations.
- Architect and implement scalable systems on AWS (deployment, monitoring, security, performance).
- Work with databases (SQL/NoSQL), caching layers, and queues where appropriate.
- Collaborate with product/design to turn requirements into clean UX and shippable increments.
- Own quality through testing, code reviews, CI/CD, observability, and performance tuning.
- Contribute to technical decisions, architecture, and best practices across the team.
What You’ll Bring (Required)
- Strong experience with Next.js + React + TypeScript in real production apps.
- Solid full-stack fundamentals: API design, authentication/authorization, data modeling and query optimization, and debugging distributed-systems issues
- Strong working knowledge of AWS services such as Lambda, ECS/Fargate, EC2, S3, CloudFront, RDS/DynamoDB, API Gateway, IAM, secrets management, and VPC basics
- CI/CD and infrastructure practices (Terraform/CDK is a plus)
- Good engineering habits: clean, maintainable code, a testing mindset, and performance and security awareness
AI Product “Knack” (Preferred / Nice to Have)
- Experience integrating AI features using APIs (e.g., OpenAI, Anthropic, Bedrock), including prompt/version management, structured outputs / tool calling, and retrieval (RAG) patterns and embeddings (basic familiarity is fine)
- Ability to design AI features with UX and safety in mind: latency expectations, fallbacks, streaming responses, evaluation/quality loops, guardrails, and logging
- Curiosity about how AI changes workflows and product capabilities.
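The retrieval (RAG) pattern referenced above boils down to ranking documents by embedding similarity. A bare-bones sketch, with tiny hand-made vectors standing in for real embeddings from a provider API:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec: list[float], docs: dict[str, list[float]], k: int = 1) -> list[str]:
    """Return the k document names most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

docs = {
    "pricing page": [0.9, 0.1, 0.0],
    "api reference": [0.1, 0.9, 0.2],
}
print(retrieve([0.0, 1.0, 0.1], docs))  # ['api reference']
```

In a real system the vectors come from an embeddings endpoint and live in a vector database; the retrieved passages are then fed into the LLM prompt, which is where the prompt-management and guardrail concerns above come in.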
Tech Stack (Example)
- Next.js, React, TypeScript, Tailwind (or similar)
- Node.js, Postgres/MySQL (and/or DynamoDB)
- AWS (Lambda/ECS/S3/CloudFront/RDS), CI/CD (GitHub Actions, etc.)
- Observability: CloudWatch, Sentry, OpenTelemetry (any equivalent)
What Success Looks Like
- You can take a feature from idea → implementation → deployment with minimal oversight.
- You ship reliable releases and improve system performance and DX over time.
- You help the team build AI-powered features that users actually trust and use.
Job Details
- Job Title: Lead I - Data Engineering
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 6-9 years
- Employment Type: Full Time
- Job Location: Pune
- CTC Range: Best in Industry
Job Description
Job Title: Senior Data Engineer (Kafka & AWS)
Responsibilities:
- Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
- Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
- Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
- Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
- Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
- Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
- Uphold data security, governance, and compliance standards across all data operations.
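One reliability concern implicit in the pipeline work above: Kafka delivers at-least-once by default, so consumers typically deduplicate on a message key before writing downstream. A hedged sketch using plain lists in place of real consumer code (confluent-kafka or kafka-python), with invented event data:

```python
def process_batch(messages: list[dict], seen: set) -> list[dict]:
    """Keep only the first occurrence of each event_id (idempotent sink write)."""
    out = []
    for msg in messages:
        if msg["event_id"] in seen:
            continue  # duplicate redelivery: skip the write
        seen.add(msg["event_id"])
        out.append(msg)
    return out

seen: set = set()
batch1 = [{"event_id": "a1", "amount": 10}, {"event_id": "a2", "amount": 5}]
batch2 = [{"event_id": "a2", "amount": 5}, {"event_id": "a3", "amount": 7}]  # a2 redelivered
writes = process_batch(batch1, seen) + process_batch(batch2, seen)
print([m["event_id"] for m in writes])  # ['a1', 'a2', 'a3']
```

In production the "seen" set would be a keyed store (or the sink's own unique constraint) so deduplication survives consumer restarts; an in-memory set is only for illustration.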
Requirements:
- Minimum of 5 years of experience in Data Engineering or related roles.
- Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
- Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
- Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
- Excellent problem-solving, communication, and collaboration skills.
- Flexibility to write production-quality code in both Python and Java as required.
Skills: AWS, Kafka, Python
Notice period: 0 to 15 days only
ROLES AND RESPONSIBILITIES:
You will be responsible for architecting, implementing, and optimizing Dremio-based data Lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.
- Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
- Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
- Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
- Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
- Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
- Support self-service analytics by enabling governed data products and semantic layers.
- Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
- Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
IDEAL CANDIDATE:
- Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
- 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
- Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
- Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
- Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
- Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
- Excellent problem-solving, documentation, and stakeholder communication skills.
PREFERRED:
- Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
- Exposure to Snowflake, Databricks, or BigQuery environments.
- Experience in high-tech, manufacturing, or enterprise data modernization programs.
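The query-optimization work described above (reflections, Parquet/Iceberg layouts, distributed query planning) largely comes down to letting the engine skip data it provably doesn't need. A sketch of min/max-statistics pruning, the mechanism engines like Dremio, Trino, and Athena apply to Parquet file footers; the file stats here are invented sample data:

```python
def prune(files: list[dict], lo: int, hi: int) -> list[str]:
    """Return only files whose [min, max] range overlaps the predicate [lo, hi]."""
    return [f["name"] for f in files
            if not (f["max"] < lo or f["min"] > hi)]

files = [
    {"name": "part-0.parquet", "min": 1,   "max": 100},
    {"name": "part-1.parquet", "min": 101, "max": 200},
    {"name": "part-2.parquet", "min": 201, "max": 300},
]
# Predicate: WHERE order_id BETWEEN 150 AND 180 -> only one file is scanned
print(prune(files, 150, 180))  # ['part-1.parquet']
```

This is why sort order and partitioning strategy matter so much in lakehouse design: pruning is only effective when related values cluster into the same files.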
Role Overview
We are seeking a DevOps Engineer with 2 years of experience to join our innovative team. The ideal candidate will bridge the gap between development and operations, implementing and maintaining our cloud infrastructure while ensuring secure deployment pipelines and robust security practices for our client projects.
Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Containerize applications using Docker and orchestrate deployments
- Manage and optimize cloud infrastructure on AWS and Azure platforms
- Monitor system performance and implement automation for operational tasks to ensure optimal performance, security, and scalability.
- Troubleshoot and resolve infrastructure and deployment issues
- Create and maintain documentation for processes and configurations
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
- Stay informed about emerging technologies and best practices within the fields of DevOps and cloud computing.
Requirements:
- 2+ years of hands-on experience with AWS cloud services
- Strong proficiency in CI/CD pipeline configuration
- Expertise in Docker containerisation and container management
- Proficiency in shell scripting (Bash/PowerShell)
- Working knowledge of monitoring and logging tools
- Knowledge of network security and firewall configuration
- Strong communication and collaboration skills, with the ability to work effectively within a team environment
- Understanding of networking concepts and protocols in AWS and/or Azure
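The CI/CD pipeline work described above follows a simple contract: stages run in order and the pipeline stops at the first failure, as Jenkins or GitHub Actions jobs would. A minimal sketch with illustrative stage names:

```python
# Fail-fast pipeline runner: each step returns True on success; the first
# failure stops the run and is reported. Stage callables are stand-ins for
# real build/test/deploy commands.
def run_pipeline(stages):
    completed = []
    for name, step in stages:
        if not step():
            return completed, name   # fail fast: report the failing stage
        completed.append(name)
    return completed, None

stages = [
    ("build", lambda: True),
    ("test", lambda: False),    # simulated test failure
    ("deploy", lambda: True),   # never reached
]
done, failed = run_pipeline(stages)
```

Real pipeline tools add parallelism, artifacts, and retries on top, but the fail-fast ordering is the core idea.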
What You’ll Do:
We are looking for a Staff Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Analytics Organization and will require integration and partnership with the Engineering Organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams.
- Develop and standardize all interface points for analysts to retrieve and analyze data with a focus on research methodologies and data-based decision-making.
- Optimize queries and data access efficiencies, serve as an expert in how to most efficiently attain desired data points.
- Build “mastered” versions of the data for Analytics-specific querying use cases.
- Help with data ETL, table performance optimization.
- Establish a formal data practice for the Analytics practice in conjunction with the rest of DeepIntent
- Build & operate scalable and robust data architectures.
- Interpret analytics methodology requirements and apply them to data architecture to create standardized queries and operations for use by analytics teams.
- Implement DataOps practices.
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics-specific objectives.
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
- Operate between Engineers and Analysts to unify both practices for analytics insight creation.
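The "mastered" data idea above can be shown in miniature: raw events are rolled up once into a summary table so analyst queries avoid rescanning raw data. This is an illustrative sketch using SQLite; table and column names are hypothetical, not DeepIntent's actual schema.

```python
# Build a pre-aggregated "mastered" table from raw events so that
# analytics queries hit the small rollup instead of the raw data.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_events (campaign TEXT, impressions INT)")
con.executemany(
    "INSERT INTO raw_events VALUES (?, ?)",
    [("a", 100), ("a", 50), ("b", 30)],
)
con.execute(
    "CREATE TABLE mastered_campaigns AS "
    "SELECT campaign, SUM(impressions) AS impressions "
    "FROM raw_events GROUP BY campaign"
)
rows = dict(con.execute("SELECT * FROM mastered_campaigns ORDER BY campaign"))
```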
Who You Are:
- 8+ years of experience in tech support, specialising in monitoring and maintaining data pipelines.
- Adept in market research methodologies and using data to deliver representative insights.
- Inquisitive, curious, understands how to query complicated data sets, move and combine data between databases.
- Deep SQL experience is a must.
- Exceptional communication skills with the ability to collaborate and translate between technical and non-technical needs.
- English Language Fluency and proven success working with teams in the U.S.
- Experience in designing, developing and operating configurable Data pipelines serving high-volume and velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies.
- Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
- Proficient with SQL, Python or JVM-based language, Bash.
- Experience with any of Apache open-source projects such as Spark, Druid, Beam, Airflow etc. and big data databases like BigQuery, Clickhouse, etc.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious.
What you'll be doing:
As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. You will work with our dedicated engineering team designing, building, and deploying our core platforms and APIs, and you will build and maintain back-end interfaces using modern frameworks. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision.
- Design & Implement: Lead the design, implementation and management of Trential’s products.
- Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
- Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
- Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
- Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.
What we're looking for:
- 3+ years of experience in backend development.
- Deep proficiency in JavaScript and Node.js, with experience building and operating distributed, fault-tolerant systems.
- Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
- Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.
Preferred Qualifications (Nice to Have)
- Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
- Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and experience designing systems that comply with them.
- Experience integrating AI/ML models into verification or data extraction workflows.
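The W3C Verifiable Credentials data model mentioned above gives a credential a fixed shape: `@context`, `type`, `issuer`, `credentialSubject`, and a `proof`. A hedged structural sketch (shape validation only; real verification also checks the cryptographic proof, which is omitted here):

```python
# Check a credential dict against the required top-level fields of the
# W3C VC data model. This validates structure, not the proof itself.
REQUIRED = {"@context", "type", "issuer", "credentialSubject", "proof"}

def missing_vc_fields(credential: dict) -> set:
    """Return the set of required VC fields the credential lacks."""
    return REQUIRED - credential.keys()

vc = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:issuer",
    "credentialSubject": {"id": "did:example:holder"},
}
gaps = missing_vc_fields(vc)   # no proof attached yet
```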
Immediately available performance test engineer with hands-on exposure to LoadRunner and JMeter who has tested Java applications in AWS environments.
Hiring for Full Stack with Agentic AI
Exp : 5 - 10 yrs
Work Location : Mumbai (Vikhroli)
Hybrid
Skills :
5+ yrs in full-stack development with demonstrated technical leadership.
• Backend: Node.js (Express/Nest.js), Java (Spring Boot / Micronaut).
• Frontend: React, TypeScript, HTML5, CSS3.
• Database: MySQL / MongoDB / Graph DB and familiarity with ORM frameworks.
• Deep understanding of microservices, RESTful APIs, and event-driven architectures.
• Familiarity with cloud platforms (AWS).
• Experience with WebSocket, HTTP, and similar communication protocols.
• Experience in CI/CD pipelines (GitHub Actions, Jenkins, etc.) and infrastructure-as-code concepts.
• Excellent problem-solving, debugging, and communication skills.
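The event-driven architectures listed above reduce to one core pattern: services publish events to topics and subscribers react to them. A minimal in-memory sketch (real systems dispatch asynchronously via a broker such as Kafka; here dispatch is synchronous for clarity, and the topic name is illustrative):

```python
# Tiny publish/subscribe event bus: handlers register per topic and are
# invoked for every event published to that topic.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.handlers[topic]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("order.created", seen.append)
bus.publish("order.created", {"id": 1})
```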
Exp: 7- 10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
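The p95/p99 SLO monitoring mentioned above boils down to computing latency percentiles and comparing them to a target. A hedged sketch using the nearest-rank method (monitoring stacks like Prometheus estimate percentiles from histograms instead; the sample values and the 300 ms target are illustrative):

```python
# Nearest-rank percentile over raw latency samples, checked against an SLO.
import math

def percentile(samples, p):
    """Smallest sample such that at least p% of samples are at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

latencies_ms = [12, 15, 11, 14, 250, 13, 16, 12, 14, 13]
p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
slo_ok = p95 <= 300   # example SLO target: p95 latency under 300 ms
```

Note how a single slow outlier dominates the tail percentiles while leaving the median untouched, which is exactly why SLOs are stated at p95/p99 rather than the mean.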
If interested, kindly share your updated resume at 82008 31681.
Seeking a Senior Staff Cloud Engineer who will lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals.
Responsibilities:
- Cloud Architecture & Strategy
- Define and evolve the company’s cloud architecture, with AWS as the primary platform.
- Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
- Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.)
- Partner with engineering and product to convert custom solutions into productised capabilities.
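The webhook integrations mentioned above typically rely on a shared-secret signature: the sender signs the payload and the receiver recomputes the HMAC before trusting the event. A hedged sketch; the secret and event names are hypothetical, and real platforms usually also bind a timestamp into the signature to prevent replay:

```python
# Webhook payload signing and verification with HMAC-SHA256.
# compare_digest avoids timing side channels when checking signatures.
import hashlib
import hmac

SECRET = b"shared-webhook-secret"   # assumption: provisioned per customer

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(payload), signature)

body = b'{"event": "batch.ingested"}'
ok = verify(body, sign(body))                     # genuine delivery
tampered = verify(b'{"event": "forged"}', sign(body))  # altered payload
```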
- Security & Compliance Enablement
- Act as a foundational partner in building out the company’s security and compliance functions.
- Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
- Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
- Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
- Customer & Solutions Enablement
- Work with Solutions Engineering and customers to design and validate complex deployments.
- Contribute to processes that productise custom implementations into scalable platform features.
- Leadership & Influence
- Serve as a technical thought leader across cloud, data, and security domains.
- Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
- Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
- Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
- Data Platforms & Governance
- Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
- Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
- Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
- Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
- Developer Experience & DevOps
- Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
- Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
- Embed security, compliance, and reliability standards into the development lifecycle.
Requirements:
- 12+ years of experience in cloud engineering or architecture roles.
- Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
- Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
- Strong foundation in data management and governance, including lifecycle and compliance.
- Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
- Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
- Strong foundation in networking, security, observability, and performance engineering.
- Excellent communication and influencing skills, with the ability to partner across technical and business functions.
Good to Have:
- Exposure to Azure, GCP, or other cloud environments.
- Experience working in SaaS/PaaS at enterprise scale.
- Background in product engineering, with experience shaping technical direction in collaboration with product teams.
- Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.
Why Join Albert Invent
- Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
- You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
- The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
- You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges, from sustainability to advanced manufacturing, while growing your career in a high-energy environment.
The Senior Staff Engineer will play a critical role in shaping the technical direction and long-term architecture of the Albert platform. This role is responsible for driving scalable, reliable, and high-impact software engineering that aligns with business goals and customer needs. The position requires a strong balance of technical depth, execution excellence, and cross-functional leadership to accelerate product development while maintaining high standards of quality, performance, and maintainability.
Responsibilities:
Technical Leadership
- Drive the architectural vision for core product areas across the Albert platform.
- Own the technical roadmap for major product features, ensuring alignment with business priorities and long-term platform evolution.
- Lead the design and development of highly reliable, performant, and scalable applications using modern tech stack.
- Establish durable engineering patterns and frameworks that enable product teams to move quickly with high confidence.
- Provide mentorship to Staff, Senior, and Mid-level engineers to uplevel engineering capabilities across product teams
Execution Excellence
- Translate business goals and customer needs into scalable technical designs that accelerate product development.
- Solve complex, multi-system issues and guide teams through debugging, incident response, and performance improvements.
- Lead design reviews, define coding standards, and elevate system observability, reliability, and maintainability.
- Drive technical decisions involving tradeoffs between speed, quality, and scalability, bringing clarity to ambiguity.
- Identify, prioritise, and drive down technical debt that impacts product velocity and quality
Cross-Team Influence & Collaboration
- Work with senior technical leadership to establish and uphold company-wide architectural standards and engineering practices.
- Partner closely with PMs to shape feature requirements, estimate complexity, and define engineering milestones.
- Collaborate with engineering, data, ML, and infra teams to develop cohesive, well-integrated product experiences
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 12+ years of software engineering experience, with 3+ years in senior technical leadership roles supporting product-oriented teams.
- Proven ability to lead end-to-end product development at scale — from concept through production rollout.
- Deep expertise in modern backend technologies, including Node.js, RESTful API design, backend services, and distributed system fundamentals, with strong proficiency across multiple programming languages.
- Strong understanding of product architecture patterns: domain-driven design, modular monoliths, micro-services, event-driven systems.
- Proficiency with SQL & NoSQL databases (PostgreSQL, DynamoDB, MongoDB, etc.).
- Significant experience with AWS services and modern cloud architectures.
- Strong product intuition — ability to understand user needs, evaluate tradeoffs, and craft solutions that balance speed with quality.
- Outstanding communication, collaboration, and organisational influence skills
Good to Have:
- Experience with modern front-end frameworks such as React.
- Experience building AI- or ML-driven user experiences.
- Experience scaling a product engineering team from 1 to N
About The Role
- As a Data Platform Lead, you will utilize your strong technical background and hands-on development skills to design, develop, and maintain data platforms.
- Leading a team of skilled data engineers, you will create scalable and robust data solutions that enhance business intelligence and decision-making.
- You will ensure the reliability, efficiency, and scalability of data systems while mentoring your team to achieve excellence.
- Collaborating closely with our client’s CXO-level stakeholders, you will oversee pre-sales activities, solution architecture, and project execution.
- Your ability to stay ahead of industry trends and integrate the latest technologies will be crucial in maintaining our competitive edge.
Key Responsibilities
- Client-Centric Approach: Understand client requirements deeply and translate them into robust technical specifications, ensuring solutions meet their business needs.
- Architect for Success: Design scalable, reliable, and high-performance systems that exceed client expectations and drive business success.
- Lead with Innovation: Provide technical guidance, support, and mentorship to the development team, driving the adoption of cutting-edge technologies and best practices.
- Champion Best Practices: Ensure excellence in software development and IT service delivery, constantly assessing and evaluating new technologies, tools, and platforms for project suitability.
- Be the Go-To Expert: Serve as the primary point of contact for clients throughout the project lifecycle, ensuring clear communication and high levels of satisfaction.
- Build Strong Relationships: Cultivate and manage relationships with CxO/VP level stakeholders, positioning yourself as a trusted advisor.
- Deliver Excellence: Manage end-to-end delivery of multiple projects, ensuring timely and high-quality outcomes that align with business goals.
- Report with Clarity: Prepare and present regular project status reports to stakeholders, ensuring transparency and alignment.
- Collaborate Seamlessly: Coordinate with cross-functional teams to ensure smooth and efficient project execution, breaking down silos and fostering collaboration.
- Grow the Team: Provide timely and constructive feedback to support the professional growth of team members, creating a high-performance culture.
Qualifications
- Master’s (M. Tech., M.S.) in Computer Science or equivalent from reputed institutes like IIT, NIT preferred
- Overall 6–8 years of experience with minimum 2 years of relevant experience and a strong technical background
- Experience working in a mid-size IT services company is preferred
Preferred Certification
- AWS Certified Data Analytics Specialty
- AWS Solution Architect Professional
- Azure Data Engineer + Solution Architect
- Databricks Certified Data Engineer / ML Professional
Technical Expertise
- Advanced knowledge of distributed architectures and data modeling practices.
- Extensive experience with Data Lakehouse systems like Databricks and data warehousing solutions such as Redshift and Snowflake.
- Hands-on experience with data technologies such as Apache Spark, SQL, Airflow, Kafka, Jenkins, Hadoop, Flink, Hive, Pig, HBase, Presto, and Cassandra.
- Knowledge of BI tools including Power BI, Tableau, and QuickSight, and open-source equivalents like Superset and Metabase, is good to have.
- Strong knowledge of data storage formats including Iceberg, Hudi, and Delta.
- Proficient programming skills in Python, Scala, Go, or Java.
- Ability to architect end-to-end solutions from data ingestion to insights, including designing data integrations using ETL and other data integration patterns.
- Experience working with multi-cloud environments, particularly AWS and Azure.
- Excellent teamwork and communication skills, with the ability to thrive in a fast-paced, agile environment.
As Senior Backend Developer, you will play a key role in building a product that will impact the way users experience yoga and fitness. Working closely with our technical and product leadership, you will help secure the performance, experience, and scalability of our product. With your experience, you will play a key part in our product and growth roadmap. We’re looking for an engineer who not only writes high-quality backend code but also embodies a forward-thinking, AI-augmented development mindset: someone who embraces AI and automation as a force multiplier, leveraging modern AI tools to accelerate delivery, increase code quality, and focus time on higher-order problems.
Responsibilities
- At least 3 years of experience in product development and backend technologies, with strong understanding of the technology and familiarity with latest trends in backend technology developments.
- Design, develop, and maintain scalable backend services and APIs, ensuring high performance and reliability.
- Lead the architecture and implementation of new features, driving projects from concept to deployment.
- Optimize application performance and ensure high availability across systems.
- Implement robust security and data protection measures to safeguard critical information.
- Contribute to technical decision-making and architectural planning, ensuring long-term scalability and efficiency.
- Create and maintain clear, concise technical documentation for new systems, architectures, and codebases.
- Lead knowledge-sharing sessions to promote best practices across teams.
- Work closely with product managers, front-end developers, and other stakeholders to define requirements, design systems, and deliver impactful product features within reasonable timelines.
- Continuously identify opportunities for system improvements, automation, and optimizations.
- Lead efforts to implement new technologies and processes that enhance engineering productivity and product performance.
- Take ownership of critical incidents, performing root cause analysis and implementing long-term solutions to minimize downtime and ensure business continuity.
- Ability to communicate clearly and effectively at various levels - intra-team and inter-group, spoken and written - including email, presentation, and articulation skills.
- Strong knowledge of AI-assisted development tools, with hands-on experience reducing boilerplate code, identifying bugs faster, and optimizing system design.
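The high-availability and incident-ownership responsibilities above lean on a standard reliability pattern: retry a flaky dependency with exponential backoff before escalating. An illustrative sketch (delays are computed but not slept so the example runs instantly; a production version would sleep, add jitter, and cap the total budget):

```python
# Retry with exponential backoff: delays double each attempt, and the
# last failure is re-raised so the caller can escalate.
def call_with_retry(fn, max_attempts=4, base_delay=0.5):
    delays = []
    for attempt in range(max_attempts):
        try:
            return fn(), delays
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            delays.append(base_delay * 2 ** attempt)   # 0.5, 1.0, 2.0, ...

attempts = {"n": 0}
def flaky():
    """Simulated dependency that fails twice, then recovers."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result, delays = call_with_retry(flaky)
```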
Qualifications
- 2+ years of strong experience developing services in Go language
- Bachelor's degree in Computer Science, Software Engineering, or related field.
- 3+ years of experience in backend software engineering with a strong track record of delivering complex backend systems, preferably in cloud-native environments.
- Strong experience with designing and maintaining large-scale databases (SQL and NoSQL) and knowledge of performance optimization techniques.
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and cloud-native architectures (containers, serverless, microservices) is highly desirable.
- Familiarity with modern software development practices, including CI/CD, test automation, and Agile methodologies.
- Proven ability to solve complex engineering problems with innovative solutions and practical thinking.
- Strong leadership and interpersonal skills, with the ability to work cross-functionally and influence technical direction across teams.
- Excellent communication skills, with the ability to communicate complex technical ideas to both technical and non-technical stakeholders.
- Demonstrated ability to boost engineering output through strategic use of AI tools and practices—contributing to a “10x developer” mindset focused on outcomes, not just effort.
- Comfortable working in a fast-paced, high-leverage environment where embracing automation and AI-first workflows is part of the engineering culture.
Required Skills
- Strong experience in Go language
- Strong experience in backend technologies and cloud-native environments.
- Proficiency in designing and maintaining large-scale databases.
- Strong problem-solving skills and familiarity with modern software development practices.
- Excellent communication skills.
Preferred Skills
- Experience with AI-assisted development tools.
- Knowledge of performance optimization techniques.
- Experience in Agile methodologies.
About the Company
MyYogaTeacher is a fast-growing health tech startup with a mission to improve the physical and mental well-being of the entire planet. We are the first online marketplace to connect qualified Fitness and Yoga coaches from India with consumers worldwide to provide personalized 1-on-1 sessions via live video conference (app, web). We started in 2019 and have been showing tremendous traction with rave customer reviews.
- Over 200,000 happy customers
- Over 335,000 5 star reviews
- Over 150 Highly qualified coaches on the platform
- 95% of sessions are completed with a 5-star rating
Headquartered in California, with operations based in Bangalore, we are dedicated to providing exceptional service and promoting the benefits of yoga and fitness coaching worldwide.
Role Description
This is a full-time on-site role in Bengaluru for a Full Stack Python Developer at Euphoric Thought Technologies Pvt. Ltd. The developer will be responsible for back-end and front-end web development, software development, full-stack development, and using Cascading Style Sheets (CSS) to build effective and efficient applications.
Qualifications
- Back-End Web Development and Full-Stack Development skills
- Front-End Development and Software Development skills
- Proficiency in Cascading Style Sheets (CSS)
- Experience with Python, Django, and Flask frameworks
- Strong problem-solving and analytical skills
- Ability to work collaboratively in a team environment
- Bachelor's or Master's degree in Computer Science or relevant field
- Agile Methodologies: Proven experience working in agile teams, demonstrating the application of agile principles with lean thinking.
- Front end - React.js
- Data Engineering: Useful experience blending data engineering with core software engineering.
- Additional Programming Skills: Desirable experience with other programming languages (C++, .NET) and frameworks.
- CI/CD Tools: Familiarity with GitHub Actions is a plus.
- Cloud Platforms: Experience with cloud platforms (e.g., Azure, AWS) and containerization technologies (e.g., Docker, Kubernetes).
- Code Optimization: Proficient in profiling and optimizing Python code.
Profile: Senior Data Engineer (Informatica MDM)
Primary Purpose:
The Senior Data Engineer will be responsible for building new segments in a Customer Data Platform (CDP), maintaining those segments, and understanding the data requirements, data integrity, data quality, and data sources involved in building specific use cases. The resource should also understand ETL processes. This position requires an understanding of integrations with cloud service providers such as Microsoft Azure, Azure Data Lake Services, and Azure Data Factory, and with cloud data warehouse platforms in addition to Enterprise Data Warehouse environments. The ideal candidate will also have proven experience in data analysis and management, with excellent analytical and problem-solving abilities.
Major Functions/Responsibilities
• Design, develop and implement robust and extensible solutions to build segmentations using Customer Data Platform.
• Work closely with subject matter experts to identify and document business requirements and functional specs, and translate them into appropriate technical solutions.
• Responsible for estimating, planning, and managing user stories and tasks, and reporting on Agile projects.
• Develop advanced SQL Procedures, Functions and SQL jobs.
• Performance tuning and optimization of ETL Jobs, SQL Queries and Scripts.
• Configure and maintain scheduled ETL jobs, data segments and refresh.
• Support exploratory data analysis, statistical analysis, and predictive analytics.
• Support production issues and maintain existing data systems by researching and troubleshooting any issues in a timely manner.
• Proactive, great attention to detail, results-oriented problem solver.
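The segment-building work described above amounts to materializing a rule over customer data into a table that downstream tools can activate. An illustrative sketch using SQLite; the schema and the 500 spend threshold are hypothetical:

```python
# CDP-style segmentation: customers matching a rule are materialized into
# a segment table for downstream activation.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INT, total_spend REAL)")
con.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, 120.0), (2, 740.0), (3, 980.0)],
)
con.execute(
    "CREATE TABLE segment_high_value AS "
    "SELECT id FROM customers WHERE total_spend > 500"
)
members = [row[0] for row in con.execute(
    "SELECT id FROM segment_high_value ORDER BY id")]
```

Refreshing a segment is then just re-running the rule on the latest data, which is why scheduled ETL jobs and segment refreshes appear together in the responsibilities above.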
Preferred Experience
• 6+ years of experience in writing SQL queries and stored procedures to extract, manipulate and load data.
• 6+ years’ experience with design, build, test, and maintain data integrations for data marts and data warehouses.
• 3+ years of experience in integrations Azure / AWS Data Lakes, Azure Data Factory & IDMC (Informatica Cloud Services).
• In depth understanding of database management systems, online analytical processing (OLAP) and ETL (Extract, transform, load) framework.
• Excellent verbal and written communication skills
• Collaboration with both onshore and offshore development teams.
• A good understanding of marketing tools like Salesforce Marketing Cloud, Adobe Marketing, or Microsoft Customer Insights Journey and Customer Data Platform will be important to this role.
Communication
• Facilitate project team meetings effectively.
• Effectively communicate relevant project information to superiors
• Deliver engaging, informative, well-organized presentations that are effectively tailored to the intended audience.
• Serve as a technical liaison with development partner.
• Serve as a communication bridge between applications team, developers and infrastructure team members to facilitate understanding of current systems
• Resolve and/or escalate issues in a timely fashion.
• Understand how to communicate difficult/sensitive information tactfully.
• Works under the direction of the Technical Data Lead / Data Architect.
Education
• Bachelor’s Degree or higher in Engineering, Technology or related field experience required.
Senior DevOps Engineer (8–10 years)
Location: Mumbai
Role Summary
As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.
Key Responsibilities
Platform & Cloud Infrastructure
- Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN).
- Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
- Lead capacity planning, cost optimization, and performance tuning across environments.
CI/CD & Release Engineering
- Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags.
- Drive artifact management, environment promotion, and release governance with compliance-friendly controls.
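The canary releases mentioned above rest on a deterministic traffic split: each user hashes to a stable bucket, so the same user always lands on the same version while only a small percentage hits the canary. A hedged sketch (the 5% split and user IDs are illustrative; real rollouts are usually driven by the service mesh or a feature-flag system):

```python
# Deterministic hash-based canary routing: a user's ID maps to a stable
# bucket in 0-99, and buckets below the canary percentage get the canary.
import hashlib

def routes_to_canary(user_id: str, canary_percent: int) -> bool:
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = int.from_bytes(digest[:2], "big") % 100   # stable 0-99 bucket
    return bucket < canary_percent

decision = routes_to_canary("user-42", 5)
repeat = routes_to_canary("user-42", 5)   # same user, same answer
```

Stickiness matters: if routing were random per request, a user could bounce between versions mid-session, masking canary-specific failures.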
Containers, Kubernetes & Runtime
- Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries.
Reliability, Observability & Incident Management
- Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets.
- Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
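The SLO/SLI error budgets referenced above come down to simple arithmetic: the budget is the downtime an availability target still permits over a window. A sketch (figures illustrative, function name hypothetical):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime;
# tightening to 99.99% shrinks the budget to ~4.3 minutes.
budget = error_budget_minutes(0.999)
strict_budget = error_budget_minutes(0.9999)
```

Teams typically burn this budget with releases and incidents, and freeze risky changes once it is exhausted.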
Security & Compliance (DevSecOps)
- Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest).
- Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.
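Policy-as-code with OPA is normally written in Rego; as a rough analogue of what such a rule checks, here is a hypothetical Python sketch that flags containers running as root or pulling mutable `:latest` images (not OPA syntax, purely illustrative):

```python
def violations(deployment: dict) -> list[str]:
    """Flag containers that run as root or pull ':latest' images."""
    problems = []
    for c in deployment.get("containers", []):
        if c.get("runAsUser", 0) == 0:  # absent runAsUser defaults to root
            problems.append(f"{c['name']}: runs as root")
        if c.get("image", "").endswith(":latest"):
            problems.append(f"{c['name']}: uses mutable ':latest' tag")
    return problems

spec = {
    "containers": [
        {"name": "app", "image": "registry.example.com/app:1.4.2", "runAsUser": 1000},
        {"name": "sidecar", "image": "registry.example.com/proxy:latest"},
    ]
}
issues = violations(spec)
```

In a real pipeline the equivalent Rego rule would run in admission control or CI via Conftest, rejecting the manifest before it reaches the cluster.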
Data, Networking & Edge
- Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies.
- Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.
Ways of Working & Leadership
- Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
- Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.
Must‑Have Qualifications
- 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
- Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
- Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD.
- Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
- Experience implementing observability stacks and responding to production incidents.
- Scripting in Bash/Python; ability to automate ops workflows and platform tasks.
Good‑to‑Have / Preferred
- Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
- Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
- Experience with GitOps patterns, service tiering, and SLO/SLI design.
- Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.
Education
- Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).
Job Description: DevOps Engineer
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
Role Summary:
We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.
Key Responsibilities:
• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes
Required Qualifications:
• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments
Preferred Qualifications:
• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
Tools and Technologies You’ll Use:
• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack
Why Join Us?
• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com