50+ AWS (Amazon Web Services) Job Openings in Bangalore (Bengaluru)



Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for clients ranging from startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where an idea is only as good as the returns it fetches for our clients.
Key Responsibilities:
- Design and implement advanced AI/ML models and algorithms to address real-world challenges.
- Analyze large and complex datasets to derive actionable insights and train predictive models.
- Build and deploy scalable, production-ready AI solutions on cloud platforms such as AWS, Azure, or GCP.
- Collaborate closely with cross-functional teams, including data engineers, product managers, and software developers, to integrate AI solutions into business workflows.
- Continuously monitor and optimize model performance, ensuring scalability, robustness, and reliability.
- Stay abreast of the latest advancements in AI, ML, and Generative AI technologies, and proactively apply them where applicable.
- Implement MLOps best practices using tools such as MLflow, Docker, and CI/CD pipelines.
- Work with Large Language Models (LLMs) like GPT and LLaMA, and develop Retrieval-Augmented Generation (RAG) pipelines when needed.
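For context on the MLOps item above, here is a minimal, illustrative MLflow tracking sketch in Python; the experiment name, model choice, and metric are assumptions for illustration, not part of the role description.

```python
# Hedged sketch: track params, a metric, and the trained model with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("demo-classifier")  # assumed experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_params(params)             # hyperparameters
    mlflow.log_metric("accuracy", acc)    # evaluation metric
    mlflow.sklearn.log_model(model, artifact_path="model")  # model artifact
```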
Required Skills:
- Strong programming skills in Python (preferred); experience with R or Java is also valuable.
- Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Solid foundation in data structures, algorithms, statistics, and machine learning principles.
- Familiarity with MLOps tools and practices, including MLflow, Docker, and Kubernetes.
- Proven experience in deploying and maintaining AI/ML models in production environments.
- Exposure to Large Language Models (LLMs), Generative AI, and vector databases is a strong plus.

Backend Engineer - Python
Location
Bangalore, India
Experience Required
2-3 years minimum
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.
Key Requirements
Technical Skills
• CS Fundamentals: CN, DBMS, OS, system design, and OOP concepts are a must
• Python Expertise: Advanced proficiency in Python with a deep understanding of frameworks like Django, FastAPI, or Flask
• Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization
• API Development: Strong experience in designing and implementing RESTful APIs and GraphQL
• Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services
• Containerization: Proficiency with Docker and Kubernetes
• Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka
• Version Control: Advanced Git workflows and collaboration
Experience Requirements
• Minimum 2-3 years of backend development experience
• Proven track record of working on enterprise-level applications
• Experience building scalable systems handling high traffic loads
• Background in microservices architecture and distributed systems
• Experience with CI/CD pipelines and DevOps practices
Responsibilities
• Design, develop, and maintain robust backend services and APIs
• Optimize application performance and scalability
• Collaborate with frontend teams and product managers
• Implement security best practices and data protection measures
• Write comprehensive tests and maintain code quality
• Participate in code reviews and architectural discussions
• Monitor system performance and troubleshoot production issues
Preferred Qualifications
• Knowledge of caching strategies (Redis, Memcached)
• Understanding of software architecture patterns
• Experience with Agile/Scrum methodologies
• Open source contributions or personal projects

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
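To make the retrieval step of the pipeline above concrete, here is a hedged Python/FastAPI sketch of embedding a query and running a MongoDB Atlas $vectorSearch; the cluster URI, collection, index, and field names are illustrative assumptions.

```python
# Minimal embed-and-retrieve sketch for a RAG pipeline (assumed names throughout).
from fastapi import FastAPI
from openai import OpenAI
from pymongo import MongoClient

app = FastAPI()
oai = OpenAI()  # reads OPENAI_API_KEY from the environment
coll = MongoClient("mongodb+srv://<cluster-uri>")["rag"]["chunks"]  # assumed db/collection

def embed(text: str) -> list[float]:
    resp = oai.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

@app.get("/search")
def search(q: str, k: int = 5):
    # Similarity metric (cosine/dot-product) is configured on the Atlas index itself.
    pipeline = [
        {"$vectorSearch": {
            "index": "chunk_embedding_index",  # assumed index name
            "path": "embedding",
            "queryVector": embed(q),
            "numCandidates": 200,
            "limit": k,
        }},
        {"$project": {"_id": 0, "text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return {"query": q, "results": list(coll.aggregate(pipeline))}
```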
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands-on experience with parameter-efficient fine-tuning (LoRA, QLoRA, Hugging Face PEFT).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Location: Hybrid/ Remote
Openings: 2
Experience: 5–12 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities
Architect & Design:
- Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
- Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
- Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
- Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.
Development & Debugging:
- Write clean, maintainable, and efficient frontend code.
- Debug and troubleshoot code to ensure robust, high-performing applications.
- Develop reusable frontend libraries that can be leveraged across multiple projects.
AI Awareness (Preferred):
- Understand AI/ML fundamentals and how they can enhance frontend applications.
- Collaborate with teams integrating AI-based features into chat applications.
Collaboration & Reporting:
- Work closely with cross-functional teams to align on architecture and deliverables.
- Regularly report progress, identify risks, and propose mitigation strategies.
Quality Assurance:
- Implement unit tests and end-to-end tests to ensure code quality.
- Participate in code reviews and enforce best practices.
Required Skills
- 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
- Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
- Proficiency with modern frameworks like React, Angular, or Node.js
- Backend familiarity with Java, Spring Boot (or similar technologies).
- Experience developing real-world, at-scale products.
- General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
- Strong problem-solving, debugging, and performance optimization skills.

Location: Hybrid/ Remote
Openings: 2
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or related field
Job Responsibilities
Problem Solving & Optimization:
- Analyze and resolve complex technical and application issues.
- Optimize application performance, scalability, and reliability.
Design & Develop:
- Build, test, and deploy scalable full-stack applications with high performance and security.
- Develop clean, reusable, and maintainable code for both frontend and backend.
AI Integration (Preferred):
- Collaborate with the team to integrate AI/ML models into applications where applicable.
- Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.
Technical Leadership & Mentorship:
- Provide guidance, mentorship, and code reviews for junior developers.
- Foster a culture of technical excellence and knowledge sharing.
Agile & Delivery Management:
- Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
- Define and scope backlog items, track progress, and ensure timely delivery.
Collaboration:
- Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
- Coordinate with geographically distributed teams.
Quality Assurance & Security:
- Conduct peer reviews of designs and code to ensure best practices.
- Implement security measures and ensure compliance with industry standards.
Innovation & Continuous Improvement:
- Identify areas for improvement in the software development lifecycle.
- Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.
Required Skills
- Strong proficiency in JavaScript, HTML5, CSS3
- Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
- Backend development experience with Java, Spring Boot (Node.js is a plus)
- Knowledge of REST APIs, microservices, and scalable architectures
- Familiarity with cloud platforms (AWS, Azure, or GCP)
- Experience with Agile/Scrum methodologies and JIRA for project tracking
- Proficiency in Git and version control best practices
- Strong debugging, performance optimization, and problem-solving skills
- Ability to analyze customer requirements and translate them into technical specifications

Location: Hybrid/ Remote
Openings: 5
Experience: 0–2 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities:
Backend Development & APIs
- Build microservices that provide REST APIs to power web frontends.
- Design clean, reusable, and scalable backend code meeting enterprise security standards.
- Conceptualize and implement optimized data storage solutions for high-performance systems.
Deployment & Cloud
- Deploy microservices using a common deployment framework on AWS and GCP.
- Inspect and optimize server code for speed, security, and scalability.
Frontend Integration
- Work on modern front-end frameworks to ensure seamless integration with back-end services.
- Develop reusable libraries for both frontend and backend codebases.
AI Awareness (Preferred)
- Understand how AI/ML or Generative AI can enhance enterprise software workflows.
- Collaborate with AI specialists to integrate AI-driven features where applicable.
Quality & Collaboration
- Participate in code reviews to maintain high code quality.
- Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.
Required Skills:
- Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
- Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
- Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
- Ability to design and implement RESTful APIs and understand their impact on client-side applications
- Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
- Experience working with Agile and Scrum methodologies
- Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
- Experience with ETL tools: Talend
- Experience with databases: Snowflake, Oracle, Amazon RDS, and Cassandra
- Experience in big data and Amazon services: Apache Sqoop, AWS S3, AWS CLI, Amazon EMR, Amazon MSK, Amazon SageMaker
- Experience in reporting: Power BI
- Experience in scripting: SQL, PL/SQL, Python, R
- Experience with data modeling tools: Archimate, Erwin, Oracle Data Modeler (secondary/preferred)
- Experience in the insurance domain

Job Description:
Title: Python AWS Developer with API
Tech Stack: AWS API Gateway, Lambda, Oracle RDS, SQL & database management, OOP principles, JavaScript, Object-Relational Mapper (ORM), Git, Docker, Java dependency management, CI/CD, AWS Cloud & S3, Secrets Manager, Python, API frameworks; well-versed in front-end and back-end programming (Python).
Responsibilities:
· Build high-performance APIs using AWS services and Python; write and debug Python code and integrate the application with third-party web services (see the sketch after this list).
· Troubleshoot and debug non-prod defects; back-end development and API work, with the main focus on coding and monitoring applications.
· Design core application logic.
· Support dependent teams in UAT and perform functional application testing, including Postman testing.
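As referenced in the first responsibility, here is a minimal sketch of an API Gateway proxy-integration Lambda handler in Python that reads database credentials from Secrets Manager; the secret name, route shape, and response body are illustrative assumptions.

```python
# Hedged sketch: API Gateway (proxy integration) -> Lambda, with credentials from Secrets Manager.
import json
import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id: str = "prod/api/oracle-rds") -> dict:  # assumed secret name
    resp = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the HTTP request in `event`.
    order_id = (event.get("pathParameters") or {}).get("order_id")
    creds = get_db_credentials()
    # ... connect to Oracle RDS using `creds` (e.g., via an ORM) and fetch the record ...
    body = {"order_id": order_id, "status": "ok"}  # placeholder response
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```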

🔍 Job Description:
We are looking for an experienced and highly skilled Technical Lead to guide the development and enhancement of a large-scale Data Observability solution built on AWS. This platform is pivotal in delivering monitoring, reporting, and actionable insights across the client's data landscape.
The Technical Lead will drive end-to-end feature delivery, mentor junior engineers, and uphold engineering best practices. The position reports to the Programme Technical Lead / Architect and involves close collaboration to align on platform vision, technical priorities, and success KPIs.
🎯 Key Responsibilities:
- Lead the design, development, and delivery of features for the data observability solution.
- Mentor and guide junior engineers, promoting technical growth and engineering excellence.
- Collaborate with the architect to align on platform roadmap, vision, and success metrics.
- Ensure high quality, scalability, and performance in data engineering solutions.
- Contribute to code reviews, architecture discussions, and operational readiness.
🔧 Primary Must-Have Skills (Non-Negotiable):
- 5+ years in Data Engineering or Software Engineering roles.
- 3+ years in a technical team or squad leadership capacity.
- Deep expertise in AWS Data Services: Glue, EMR, Kinesis, Lambda, Athena, S3.
- Advanced programming experience with PySpark, Python, and SQL.
- Proven experience in building scalable, production-grade data pipelines on cloud platforms.

- Build and maintain full stack applications—from planning and design to deployment and maintenance.
- Develop responsive and dynamic user interfaces using React.js.
- Create robust server-side logic, APIs, and microservices with Node.js.
- Design and optimize schemas in MongoDB and manage high-performance caching with Redis.
- Deploy, scale, and manage applications on AWS (EC2, S3, Lambda, etc.).
- Write well-tested, maintainable code following TDD principles.
- Partner with product managers, designers, and fellow engineers to deliver top-quality features and improvements.
- Proven expertise in Node.js and React.
- Strong experience with MongoDB and Redis.
- Deep understanding of AWS services and cloud-native application design.
- Solid grasp of TDD, clean code practices, and software craftsmanship.
- B.E. in Computer Science, Engineering, or a related field.
- Experience with CI/CD tools such as Jenkins, GitLab CI, or similar.
- Familiarity with software design patterns and scalable system architecture.
- Excellent communication and teamwork skills.
- A self-starter attitude with strong problem-solving abilities.

We are seeking a Senior Laravel Developer with a minimum of 8 years of experience and a proven track record in developing and maintaining PHP/Laravel-based websites and applications. The ideal candidate should excel in creating high-performance web applications using PHP and MySQL, with expertise in debugging, performance optimisation, and scalability.
Responsibilities:
- Write clean, maintainable code adhering to company coding standards.
- Develop and enhance existing PHP/Laravel/CodeIgniter projects.
- Troubleshoot, test, and maintain core product software and databases for optimisation and functionality.
- Contribute to all phases of the development lifecycle.
- Follow industry best practices for secure and scalable development.
Key Skills
Technical Proficiency
- Expertise in PHP, MySQL, and related web technologies.
- Strong experience in debugging, performance optimisation, and scalability.
- Familiarity with resource-intensive application architectures.
- Hands-on experience with AWS services (e.g., EC2, S3, RDS, Lambda) for scalable application deployment.
- Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS CodePipeline.
- Strong knowledge of Git version control and collaborative workflows.
- Experience with microservices architecture and containerization (Docker/Kubernetes) is a plus.
- Familiarity with front-end frameworks (Vue.js, React, or Angular) and modern JavaScript practices is a bonus.
Frameworks
- Hands-on experience with Laravel and CodeIgniter frameworks.
Proactive Problem-Solving
- Ability to identify potential bottlenecks and provide innovative solutions.
Soft Skills
- Adaptability to balance support and development tasks effectively.
- Ability to work independently and collaboratively in teams.
- Exceptional problem-solving skills and attention to detail.
Leadership and Mentorship
- Mentor junior and mid-level developers, providing guidance on coding standards, best practices, and technical challenges.
- Conduct code reviews to ensure high-quality, maintainable codebases.
- Collaborate with stakeholders to define technical requirements and project roadmaps.
Documentation and Best Practices
- Maintain comprehensive technical documentation for projects, processes, and workflows.
- Establish and enforce coding standards, security protocols, and development guidelines across the team.
Preferred Background
- Proven experience managing performance-critical applications.
- Exposure to legacy systems alongside modern development practices.
- Prior experience balancing support and development roles.
- Demonstrated expertise in resource-intensive application development and optimisation.
- Evidence of significant past performance improvements in web applications.
- Proficiency in frameworks such as Laravel and CodeIgniter, and relevant tech stacks.
Qualifications
- Minimum 8 years of experience managing performance-critical applications.
- BE/B.Tech in Computer Science or equivalent degree required.
- Sound knowledge of OOP principles and best practices.
- Familiarity with Git version control and Agile/Scrum methodologies.
This role offers the opportunity to work on innovative projects while making meaningful contributions to high-scale applications. If you are passionate about creating exceptional web applications and thrive in a fast-paced environment, we encourage you to apply!
• Strong knowledge of JavaScript, TypeScript, and Node.js (experienced in Nest.js, Express.js, or any other framework).
• Knowledge of AWS technologies such as DynamoDB, Elasticsearch, relational and NoSQL databases, EventBridge, and messaging and queuing solutions like SQS and SNS (or any other cloud platform like Google Cloud or Azure).
• General understanding of common design and architectural patterns, with the ability to produce elegant designs in back-end, REST API, EDA and microservice architectures.
• Passion for delivering clean code, API tests, and maintainable documentation.
• Familiarity with Agile/Scrum methodologies and DevOps best practices.
• Knowledge of common tooling such as GitLab, Docker, and CI/CD solutions.
About Eazeebox
Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.
About the Role
We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows.
What We’re Looking For
- 2+ years in a DevOps or SRE role in production-grade, cloud-native environments (AWS-focused)
- Solid hands-on experience with Docker, Kubernetes/EKS, and container networking
- Proficiency with CI/CD tools, especially GitHub Actions
- Experience with staged rollout strategies for microservices
- Familiarity with event-driven architectures using SNS, SQS, and Step Functions
- Strong ability to optimize cloud costs without compromising uptime or performance
- Scripting/automation skills in Python, Go, or Bash
- Good understanding of observability, on-call readiness, and incident response workflows
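As an illustration of the event-driven item above, here is a small boto3 sketch of an SNS-to-SQS flow (publish an order event, then drain the subscribed queue); the topic/queue names and message shape are assumptions, not Eazeebox's actual setup.

```python
# Hedged sketch: fan out an order event over SNS and consume it from a subscribed SQS queue.
import json
import boto3

sns = boto3.client("sns", region_name="ap-south-1")
sqs = boto3.client("sqs", region_name="ap-south-1")

TOPIC_ARN = "arn:aws:sns:ap-south-1:123456789012:order-events"                      # assumed
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/fulfilment-worker"   # assumed

# Publisher side: emit an order-created event.
sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"order_id": "ORD-1", "store_id": "S-42"}))

# Worker side: long-poll the queue and process messages.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    envelope = json.loads(msg["Body"])      # SNS wraps the payload in an envelope
    event = json.loads(envelope["Message"])
    print("processing order", event["order_id"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```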
Nice to Have
- Experience in B2B commerce, delivery/logistics networks, or on-demand operations
- Exposure to real-time inventory systems or marketplaces
- Worked on high-concurrency, low-latency backend systems
We are looking for someone with a hacker mindset who is ready to pick up new problems and build full stack AI solutions for some of the biggest brands in the country and the world.

Mode of Hire: Permanent
Required Skill Set (Mandatory): Linux, Shell Scripting, Python, AWS, Security best practices, Git
Desired Skills (Good if you have): Ansible, Terraform
Job Responsibilities
- Design, develop, and maintain deployment pipelines and automation tooling to improve platform efficiency, scalability, and reliability.
- Manage infrastructure and services in production AWS environments.
- Drive platform improvements with a focus on security, scalability, and operational excellence.
- Collaborate with engineering teams to enhance development tooling, streamline access workflows, and improve platform usability through feedback.
- Mentor junior engineers and help foster a culture of high-quality engineering and knowledge sharing.
Job Requirements
- Strong foundational understanding of Linux systems.
- Cloud experience (e.g., AWS) with strong problem-solving in cloud-native environments.
- Proven track record of delivering robust, well-documented, and secure automation solutions.
- Comfortable owning end-to-end delivery of infrastructure components and tooling.
Preferred Qualifications
- Advanced system and cloud optimization skills.
- Prior experience in platform teams or DevOps roles at product-focused startups.
- Demonstrated contributions to internal tooling, open-source, or automation projects.
Job Description
We are looking for a hands-on Tech Lead – Java with strong software engineering fundamentals, a deep understanding of Java-based backend systems, and proven experience leading agile teams. This role involves a balance of individual contribution and technical leadership — mentoring developers, designing scalable architectures, and driving the success of product delivery in fast-paced environments.
Key Responsibilities
- Lead the end-to-end design, development, and deployment of Java-based applications and RESTful APIs.
- Collaborate with product managers and architects to define technical solutions and translate business requirements into scalable software.
- Guide and mentor team members in best coding practices, design patterns, and architectural decisions.
- Drive code reviews, technical discussions, and ensure high code quality and performance standards.
- Troubleshoot critical production issues and implement long-term fixes and improvements.
- Advocate for continuous improvement in tools, processes, and systems across the engineering team.
- Stay up to date with modern technologies and recommend their adoption where appropriate.
Required Skills
- 5+ years of experience in Java backend development with expertise in Spring/Spring Boot and RESTful services.
- Solid grasp of Object-Oriented Programming (OOP), system design, and design patterns.
- Proven experience leading a team of engineers or taking ownership of modules/projects.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.) is a strong advantage.
- Familiarity with Agile/Scrum methodologies and working in cross-functional teams.
- Excellent problem-solving, debugging, and analytical skills.
- Strong communication and leadership skills.
About HummingWave
HummingWave is a leading IT product development company specializing in building full-scale application systems with robust cloud backends, sleek mobile/web frontends, and seamless enterprise integrations. With 50+ digital products delivered across domains for clients in the US, Europe, and Asia-Pacific, we are a team of highly skilled engineers committed to technical excellence and innovation.
Thanks
Create and manage Jenkins pipelines using Linux, Groovy scripting, and Python.
Analyze and fix issues in Jenkins, GitHub, Nexus, SonarQube, and AWS Cloud.
Perform Jenkins, GitHub, SonarQube, and Nexus administration.
Create resources in the AWS environment using infrastructure-as-code; analyze and fix issues in AWS Cloud.
Good-to-Have
- AWS Cloud certification
- Terraform certification
- Kubernetes/Docker experience
Job Title: Backend Developer
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.
Annual Compensation: 6-10 LPA
Responsibilities:
- Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
- Promoting a culture of collaboration, knowledge sharing, and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and contribute to architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
- Extensive experience with JavaScript backend frameworks (e.g., Express, Socket) and a deep understanding of their ecosystems.
- Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
- Practical experience with Redis and caching mechanisms to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
How to Apply
Visit: https://www.thealteroffice.com/about

Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
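As an illustration of the infrastructure-as-code requirement above, here is a minimal AWS CDK (Python) sketch defining a Lambda function fronted by API Gateway; the stack name, handler path, and asset directory are assumptions for illustration.

```python
# Hedged sketch: a tiny CDK app with one Lambda behind an API Gateway REST API.
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class ApiStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        handler = _lambda.Function(
            self, "ApiHandler",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="app.lambda_handler",             # assumed module.function
            code=_lambda.Code.from_asset("lambda"),   # assumed source directory
            timeout=Duration.seconds(30),
        )
        # Proxy all routes of the REST API to the Lambda function.
        apigw.LambdaRestApi(self, "RestApi", handler=handler)

app = App()
ApiStack(app, "python-aws-engineer-demo")  # assumed stack name
app.synth()
```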
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field
About Us:
At Vahan, we are building the first AI powered recruitment marketplace for India’s 300 million strong Blue Collar workforce, opening doors to economic opportunities and brighter futures.
Already India’s largest recruitment platform, Vahan is supported by marquee investors like Khosla Ventures, Y Combinator, Airtel, Vijay Shekhar Sharma (CEO, Paytm), and leading executives from Google and Facebook.
Our customers include names like Swiggy, Zomato, Rapido, Zepto, and many more. We leverage cutting-edge technology and AI to recruit for the workforces of some of the most recognized companies in the country.
Our vision is ambitious: to become the go-to platform for blue-collar professionals worldwide, empowering them with not just earning opportunities but also the tools, benefits, and support they need to thrive. We aim to impact over a billion lives worldwide, creating a future where everyone has access to economic prosperity.
If our vision excites you, Vahan might just be your next adventure. We’re on the hunt for driven individuals who love tackling big challenges. If this sounds like your kind of journey, dive into the details and see where you can make your mark.
What You Will Be Doing:
- Build & Automate Cloud Infrastructure – Design, deploy, and optimize cloud environments, ensuring scalability, reliability, and cost efficiency.
- Set Up CI/CD & Deployment Pipelines – Develop automated workflows to streamline code integration, testing, and deployment for faster releases.
- Monitor & Improve System Performance – Implement robust monitoring, logging, and alerting mechanisms to proactively identify and resolve issues.
- Manage Containers & Scalability – Deploy and maintain containerized applications, ensuring efficient resource utilization and high availability.
- Ensure Security & Reliability – Enforce access controls, backup strategies, and disaster recovery plans to safeguard infrastructure and data.
- Adapt & Scale with the Startup – Take on dynamic responsibilities, quickly learn new technologies, and evolve processes to meet growing business needs.
You Will Thrive in This Role If You:
Must Haves:
- Experience: 3+ years in DevOps or related roles, focusing on cloud environments, automation, CI/CD, and Linux system administration. Strong expertise in debugging and infrastructure performance improvements.
- Cloud Expertise: In-depth experience with one or more cloud platforms (AWS, GCP), including services like EC2, RDS, S3, VPC, etc.
- IaC Tools: Proficiency in Terraform, Ansible, CloudFormation, or similar tools.
- Scripting Skills: Strong scripting abilities in Python, Bash, or PowerShell.
- Containerization: Experience with Docker, including managing containers in production.
- Monitoring Tools: Hands-on experience with tools like ELK, Prometheus, Grafana, CloudWatch, New Relic, and Datadog.
- Version Control: Proficiency with Git and code repository management.
- Soft Skills: Excellent problem-solving skills, attention to detail, and effective communication with both technical and non-technical team members.
- Database Management: Experience with managing and tuning databases like MySQL and PostgreSQL.
- Deployment Pipelines: Experience with Jenkins and similar CI/CD tools.
- Message Queues: Experience with RabbitMQ, SQS, or Kafka.
Nice to Have:
- Certifications: AWS Certified DevOps Engineer, Certified Kubernetes Administrator (CKA), or similar.
- SRE Practices: Familiarity with Site Reliability Engineering (SRE) principles, including error budgeting and service level objectives (SLOs).
- Serverless Computing: Knowledge of AWS Lambda, Azure Functions, or similar architectures.
- Containerization: Experience with Docker and Kubernetes, including managing production clusters.
- Security: Awareness of security best practices and implementations.
- Cloud Cost Optimization: Experience with cost-saving initiatives in cloud environments.
- Data Pipelines & ETL: Experience in setting up and managing data pipelines and ETL workflows.
- Familiarity with Modern Tech Stacks: Exposure to Python, Node.js, React.js, and Kotlin for app deployment CI/CD pipelines.
- MLOps Pipelines: Understanding of ML model deployment and operationalization.
- Data Retrieval & Snapshots: Experience with point-in-time recovery and EC2/RDS snapshots.
- System Resiliency & Recovery: Strategies for ensuring system reliability and recovery in case of downtime.
At Vahan, you’ll have the opportunity to make a real impact in a sector that touches millions of lives. We’re committed to not only advancing the livelihoods of our workforce but also, in taking care of the people who make this mission possible. Here’s what we offer:
- Unlimited PTO: Trust and flexibility to manage your time in the way that works best for you.
- Comprehensive Medical Insurance: We’ve got you covered with plans designed to support you and your loved ones.
- Monthly Wellness Leaves: Regular time off to recharge and focus on what matters most.
- Competitive Pay: Your contributions are recognized and rewarded with a compensation package that reflects your impact.
Join us, and be part of something bigger—where your work drives real, positive change in the world.


About Vahan
At Vahan.ai, we are building India’s first AI-powered recruitment marketplace for the 300 million-strong blue-collar workforce — opening doors to economic opportunities and brighter futures.
Already India’s largest recruitment platform, Vahan.ai is backed by marquee investors like Khosla Ventures, Bharti Airtel, Vijay Shekhar Sharma (CEO, Paytm), and leading executives from Google and Facebook.
Our customers include Swiggy, Zomato, Rapido, Zepto, and many more. We leverage cutting-edge technology and AI to recruit for the workforces of some of the most recognized companies in the country.
Our vision is ambitious:
To become the go-to platform for blue-collar professionals worldwide — empowering them with not just earning opportunities but also the tools, benefits, and support they need to thrive. We aim to impact over a billion lives globally, creating a future where everyone has access to economic prosperity.
If our vision excites you, Vahan.ai might just be your next adventure. We’re on the hunt for driven individuals who love tackling big challenges. Dive into the details below and see where you can make your mark.
Role Overview:
We're seeking an experienced Senior Engineering Manager to lead multiple engineering pods/projects (2–3) and drive technical excellence across our product portfolio. This role combines hands-on technical leadership with people management, requiring someone who can scale teams, deliver complex projects, and maintain high engineering standards in a fast-paced startup environment.
Key Responsibilities
Team Leadership & Management
- Lead and manage multiple engineering pods (15–20 engineers in total)
- Hire, onboard, and develop engineering talent across different experience levels
- Conduct performance reviews, provide career guidance, and manage team growth
- Foster a culture of technical excellence, collaboration, and continuous learning
- Build and implement engineering processes that scale with company growth
Technical Leadership
- Guide technical architecture decisions across full-stack applications
- Provide technical mentorship and code review guidance to engineers
- Lead both frontend (React.js, React Native) and backend (Node.js) development initiatives
- Oversee web and mobile application development strategies
- Drive technical problem-solving and debugging for complex production issues
Infrastructure & Operations
- Manage AWS infrastructure, ensuring scalability, reliability, and cost-effectiveness
- Build and maintain production support processes and incident response procedures
- Implement monitoring, alerting, and observability practices
- Lead post-mortem processes and drive continuous improvement initiatives
Cross-Functional Collaboration
- Partner closely with Product, QA, and Business teams to deliver on company objectives
- Translate business requirements into technical roadmaps and delivery plans
- Manage stakeholder expectations and communicate technical decisions effectively
- Balance feature delivery with technical debt management and system scalability
Strategic Planning
- Develop engineering roadmaps aligned with business goals and product strategy
- Make data-driven decisions about technology choices and team structure
- Identify and mitigate technical and operational risks
- Drive engineering metrics and KPIs to measure team performance and product quality
Required Qualifications
Experience
- 7–10 years of software engineering experience
- 3+ years of engineering management experience, preferably in startup environments
- Proven track record of leading teams of 10+ engineers across multiple projects
- Experience hiring and scaling engineering teams in high-growth companies
Technical Skills
- Strong expertise in Node.js, React.js, and React Native (at least one frontend and backend)
- Experience with full-stack web and mobile application development
- Solid understanding of AWS services and cloud infrastructure management
- Knowledge of software architecture, system design, and scalability principles
- Experience with CI/CD pipelines, monitoring tools, and DevOps practices
Leadership & Management
- Demonstrated ability to mentor and develop engineering talent
- Experience conducting performance reviews and managing career development
- Strong stakeholder management and cross-functional collaboration skills
- Proven ability to balance technical decisions with business objectives
- Experience managing production systems and incident response processes
Soft Skills
- Excellent communication and presentation skills
- Strong problem-solving and analytical thinking abilities
- Ability to work effectively in fast-paced, ambiguous environments
- Experience making difficult prioritization decisions under resource constraints
- Cultural fit for startup environment with hands-on, results-oriented approach
Preferred Qualifications
- Experience in Series B+ stage startups or high-growth technology companies
- Background in tech-first / AI-first startups
- AI/ML product knowledge and experience (good to have)
- Experience with microservices architecture and distributed systems
- Contributions to open-source projects or technical community involvement
What We Offer
- Unlimited PTO – Trust and flexibility to manage your time in the way that works best for you
- Comprehensive Medical Insurance – Plans designed to support you and your loved ones
- Monthly Wellness Leaves – Regular time off to recharge and focus on what matters most
- Competitive Pay – Your contributions are recognized and rewarded with a compensation package that reflects your impact
Join us, and be part of something bigger — where your work drives real, positive change in the world.
Job Description :
We are seeking a highly experienced Sr. Data Modeler / Solution Architect to join the Data Architecture team at our corporate office in Bangalore. The ideal candidate will have 4 to 8 years of experience in data modeling and architecture with deep expertise in the AWS cloud stack, data warehousing, and enterprise data modeling tools. This individual will be responsible for designing and creating enterprise-grade data models and driving the implementation of a Layered Scalable Architecture or Medallion Architecture to support robust, scalable, and high-quality data marts across multiple business units.
This role will involve managing complex datasets from systems like PoS, ERP, CRM, and external sources, while optimizing performance and cost. You will also provide strategic leadership on data modeling standards, governance, and best practices, ensuring the foundation for analytics and reporting is solid and future-ready.
Key Responsibilities:
· Design and deliver conceptual, logical, and physical data models using tools like ERWin.
· Implement Layered Scalable Architecture / Medallion Architecture for building scalable, standardized data marts.
· Optimize performance and cost of AWS-based data infrastructure (Redshift, S3, Glue, Lambda, etc.).
· Collaborate with cross-functional teams (IT, business, analysts) to gather data requirements and ensure model alignment with KPIs and business logic.
· Develop and optimize SQL code, materialized views, stored procedures in AWS Redshift.
· Ensure data governance, lineage, and quality mechanisms are established across systems.
· Lead and mentor technical teams in an Agile project delivery model.
· Manage data layer creation and documentation: data dictionary, ER diagrams, purpose mapping.
· Identify data gaps and availability issues with respect to source systems.
Required Skills & Qualifications:
· Bachelor’s or Master’s degree in Computer Science, IT, or related field (B.E./B.Tech/M.E./M.Tech/MCA).
· Minimum 4 years of experience in data modeling and architecture.
· Proficiency with data modeling tools such as ERWin, with strong knowledge of forward and reverse engineering.
· Deep expertise in SQL (including advanced SQL, stored procedures, performance tuning).
· Strong experience in data warehousing, RDBMS, and ETL tools like AWS Glue, IBM DataStage, or SAP Data Services.
· Hands-on experience with AWS services: Redshift, S3, Glue, RDS, Lambda, Bedrock, and Q.
· Good understanding of reporting tools such as Tableau, Power BI, or AWS QuickSight.
· Exposure to DevOps/CI-CD pipelines, AI/ML, Gen AI, NLP, and polyglot programming is a plus.
· Familiarity with data governance tools (e.g., ORION/EIIG).
· Domain knowledge in Retail, Manufacturing, HR, or Finance preferred.
· Excellent written and verbal communication skills.
Certifications (Preferred):
· AWS Certification (e.g., AWS Certified Solutions Architect or Data Analytics – Specialty)
· Data Governance or Data Modeling Certifications (e.g., CDMP, Databricks, or TOGAF)
Mandatory Skills
AWS, Technical Architecture, AI/ML, SQL, Data Warehousing, Data Modelling
Employment type- Contract basis
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using PySpark and distributed computing frameworks.
- Implement ETL processes and integrate data from structured and unstructured sources into cloud data warehouses.
- Work across Azure or AWS cloud ecosystems to deploy and manage big data workflows.
- Optimize performance of SQL queries and develop stored procedures for data transformation and analytics.
- Collaborate with Data Scientists, Analysts, and Business teams to ensure reliable data availability and quality.
- Maintain documentation and implement best practices for data architecture, governance, and security.
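To illustrate the pipeline work described above, here is a small PySpark sketch of a batch ETL job that reads raw JSON from S3, cleans it, and writes partitioned Parquet for a downstream warehouse; the bucket paths and column names are assumptions.

```python
# Hedged sketch: batch ETL with PySpark (assumed S3 paths and schema).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")  # assumed source path

cleaned = (
    raw.dropDuplicates(["event_id"])                      # de-duplicate on a business key
       .filter(F.col("event_type").isNotNull())           # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))   # derive a partition column
       .withColumn("amount", F.col("amount").cast("double"))
)

(cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/"))      # assumed destination

spark.stop()
```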
⚙️ Required Skills
- Programming: Proficient in PySpark, Python, and SQL.
- Cloud Platforms: Hands-on experience with Azure Data Factory, Databricks, or AWS Glue/Redshift.
- Data Engineering Tools: Familiarity with Apache Spark, Kafka, Airflow, or similar tools.
- Data Warehousing: Strong knowledge of designing and working with data warehouses like Snowflake, BigQuery, Synapse, or Redshift.
- Data Modeling: Experience in dimensional modeling, star/snowflake schema, and data lake architecture.
- CI/CD & Version Control: Exposure to Git, Terraform, or other DevOps tools is a plus.
🧰 Preferred Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or related field.
- Certifications in Azure/AWS are highly desirable.
- Knowledge of business intelligence tools (Power BI, Tableau) is a bonus.

Supply Wisdom: Full Stack Developer
Location: Hybrid Position based in Bangalore
Reporting to: Tech Lead Manager
Supply Wisdom is a global leader in transformative risk intelligence, offering real-time insights to drive business growth, reduce costs, enhance security and compliance, and identify revenue opportunities. Our AI-based SaaS products cover various risk domains, including financial, cyber, operational, ESG, and compliance. With a diverse workforce that is 57% female, our clients include Fortune 100 and Global 2000 firms in sectors like financial services, insurance, healthcare, and technology.
Objective: We are seeking a skilled Full Stack Developer to design and build scalable software solutions. You will be part of a cross-functional team responsible for the full software development life cycle, from conception to deployment.
As a Full Stack Developer, you should be proficient in both front-end and back-end technologies, development frameworks, and third-party libraries. We’re looking for a team player with strong problem-solving abilities, attention to visual design, and a focus on utility. Familiarity with Agile methodologies, including Scrum and Kanban, is essential.
Responsibilities
- Collaborate with the development team and product manager to ideate software solutions.
- Write effective and secure REST APIs.
- Integrate third-party libraries for product enhancement.
- Design and implement client-side and server-side architecture.
- Work with data scientists and analysts to enhance software using RPA and AI/ML techniques.
- Develop and manage well-functioning databases and applications.
- Ensure software responsiveness and efficiency through testing.
- Troubleshoot, debug, and upgrade software as needed.
- Implement security and data protection settings.
- Create features and applications with mobile-responsive design.
- Write clear, maintainable technical documentation.
- Build front-end applications with appealing, responsive visual design.
Requirements
- Degree in Computer Science (or related field) with 4+ years of hands-on experience in Python development, with strong expertise in the Django framework and Django REST Framework (DRF).
- Proven experience in designing and building RESTful APIs, with a solid understanding of API versioning, authentication (JWT/OAuth2), and best practices.
- Experience with relational databases such as PostgreSQL or MySQL; familiarity with query optimization and database migrations.
- Basic front-end development skills using HTML, CSS, and JavaScript; experience with any JavaScript framework (like React or Next Js) is a plus.
- Good understanding of Object-Oriented Programming (OOP) and design patterns in Python.
- Familiarity with Git and collaborative development workflows (e.g., GitHub, GitLab).
- Knowledge of Docker, CI/CD pipelines.
- Hands-on experience with AWS services, Nginx web server, RabbitMQ (or similar message brokers), event handling, and synchronization.
- Familiarity with Postgres, SSO implementation (desirable), and integration of third-party libraries.
- Experience with unit testing, debugging, and code reviews.
- Experience using tools like Jira and Confluence.
- Ability to work in Agile/Scrum teams with good communication and problem-solving skills.
Our Commitment to You:
We offer a competitive salary and generous benefits. In addition, we offer a vibrant work environment, a global team filled with passionate and fun-loving people coming from diverse cultures and backgrounds.
If you are looking to make an impact in delivering market-leading risk management solutions, empowering our clients, and making the world a better place, then Supply Wisdom is the place for you.
You can learn more at supplywisdom.com and on LinkedIn.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two

Job Title: Site Reliability Engineer (SRE)
Experience: 4+ Years
Work Location: Bangalore / Chennai / Pune / Gurgaon
Work Mode: Hybrid or Onsite (based on project need)
Domain Preference: Candidates with past experience working in shoe/footwear retail brands (e.g., Nike, Adidas, Puma) are highly preferred.
🛠️ Key Responsibilities
- Design, implement, and manage scalable, reliable, and secure infrastructure on AWS.
- Develop and maintain Python-based automation scripts for deployment, monitoring, and alerting.
- Monitor system performance, uptime, and overall health using tools like Prometheus, Grafana, or Datadog.
- Handle incident response, root cause analysis, and ensure proactive remediation of production issues.
- Define and implement Service Level Objectives (SLOs) and Error Budgets in alignment with business requirements.
- Build tools to improve system reliability, automate manual tasks, and enforce infrastructure consistency.
- Collaborate with development and DevOps teams to ensure robust CI/CD pipelines and safe deployments.
- Conduct chaos testing and participate in on-call rotations to maintain 24/7 application availability.
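As a concrete example of the Python-based automation item above, here is a hedged boto3 sketch that creates a CloudWatch alarm on ALB 5xx errors and routes it to an SNS topic; the metric dimensions, threshold, and ARNs are illustrative assumptions.

```python
# Hedged sketch: define a CloudWatch alarm for load-balancer 5xx errors via boto3.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="alb-5xx-error-rate-high",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"}],  # assumed
    Statistic="Sum",
    Period=60,                      # evaluate per minute
    EvaluationPeriods=5,            # must breach for 5 consecutive minutes
    Threshold=10,                   # assumed error-count threshold
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall-alerts"],  # assumed topic
)
```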
✅ Must-Have Skills
- 4+ years of experience in Site Reliability Engineering or DevOps with a focus on reliability, monitoring, and automation.
- Strong programming skills in Python (mandatory).
- Hands-on experience with AWS cloud services (EC2, S3, Lambda, ECS/EKS, CloudWatch, etc.).
- Expertise in monitoring and alerting tools like Prometheus, Grafana, Datadog, CloudWatch, etc.
- Strong background in Linux-based systems and shell scripting.
- Experience implementing infrastructure as code using tools like Terraform or CloudFormation.
- Deep understanding of incident management, SLOs/SLIs, and postmortem practices.
- Prior working experience in footwear/retail brands such as Nike or similar is highly preferred.
Role Overview
Zolvit is seeking a Node.js Backend Lead Engineer to lead our engineering efforts in building scalable systems with a microservices architecture. The ideal candidate will have 7+ years of experience in backend development, platformization expertise, and the ability to mentor junior engineers. You will play a key role in driving architectural decisions, ensuring system scalability, and fostering a strong engineering culture.
Responsibilities
- Design and implement scalable backend systems using Node.js and microservices architecture.
- Lead the development of platform components to enable efficient code reuse, modularity, and scalability.
- Collaborate with stakeholders to define system architecture and technical roadmap.
- Design and build solutions using event-driven architecture and middleware such as Kafka.
- Develop and maintain SQL and NoSQL databases, ensuring optimal performance and scalability.
- Define and implement high-level and low-level designs, documenting key decisions and ensuring junior engineers understand the architecture.
- Mentor junior engineers, conduct code reviews, and promote best practices in coding, design, and system architecture.
- Lead technical discussions, participate in hiring processes, and contribute to building a high-performance engineering team.
- Implement and maintain CI/CD pipelines to ensure seamless integration and deployment.
- Leverage AWS services for scalable infrastructure and deployment solutions.
Requirements
- 7+ years of hands-on experience in building scalable backend systems using Node.js.
- Strong understanding of microservices architecture, event-driven systems, and middleware like Kafka.
- Experience in building platform solutions with a focus on reusability and modularity.
- Proficient in SQL and NoSQL databases with a clear understanding of their tradeoffs.
- Solid knowledge of high-level and low-level system design concepts.
- Proven experience in mentoring engineers, conducting code reviews, and driving engineering excellence.
- Experience working with CI/CD pipelines and modern DevOps practices.
- Proficient in leveraging AWS services for building scalable infrastructure.
- Strong problem-solving skills, effective communication, and ability to thrive in a fast-paced environment.
What We Offer
- Opportunity to lead technical initiatives and shape the platform architecture.
- Work on cutting-edge technologies with a team that values innovation and engineering excellence.
- A collaborative environment where mentorship and growth are highly encouraged.
- Competitive compensation and growth opportunities aligned with your contributions.


About Eazeebox
Eazeebox is India’s first specialized B2B platform for home electrical goods. We simplify supply chain logistics and empower electrical retailers through our one-stop digital platform — offering access to 100+ brands across 15+ categories, no MOQs, flexible credit options, and 4-hour delivery. We’re on a mission to bring technological inclusion to India's massive electrical retail industry.
Role Overview
We’re looking for a hands-on Full Stack Engineer who can build scalable backend systems using Python and mobile applications using React Native. You’ll work directly with the founder and a lean engineering team to architect and deliver core modules across our Quick Commerce stack – including retailer apps, driver apps, order management systems, and more.
What You’ll Do
- Develop and maintain backend services using Python
- Build and ship high-performance React Native apps for Android and iOS
- Collaborate on API design, microservices, and systems integration
- Ensure performance, reliability, and scalability across the stack
- Contribute to decisions on re-engineering, tech stack, and infra setup
- Work closely with the founder and product team to own end-to-end delivery
- Participate in collaborative working sessions and pair programming when needed
What We’re Looking For
- Strong proficiency in Python for backend development
- Experience building mobile apps with React Native
- Solid understanding of microservices architecture, API layers, and shared data models
- Familiarity with AWS or equivalent cloud platforms
- Exposure to Docker, Kubernetes, and CI/CD pipelines
- Ability to thrive in a fast-paced, high-ownership environment
Good-to-Have (Bonus Points)
- Experience working in Quick Commerce, logistics, or consumer apps
- Knowledge of PIM (Product Information Management) systems
- Understanding of key commerce algorithms (search, ranking, filtering, order management)
- Ability to use AI-assisted coding tools to speed up development
Why Join Us
- Build from scratch, not maintain legacy
- Work directly with the founder and influence tech decisions
- Shape meaningful digital infrastructure for a $35B+ industry
- Minimum 5 years of experience in a customer-facing role such as pre-sales, solutions engineering, or technical architecture.
- Exceptional communication and presentation skills.
- Proven ability in technical integrations and conducting POCs.
- Proficiency in coding with high-level programming languages (Java, Go, Python).
- Solid understanding of Monitoring, Observability, Log Management, SIEM.
- Background in Engineering/DevOps will be considered an advantage.
- Previous experience in Technical Sales of Log Analytics, Monitoring, APM, RUM, SIEM is desirable.
Technical Expertise :
- In-depth knowledge of Kubernetes, AWS, Azure, GCP, Docker, Prometheus, OpenTelemetry.
- Candidates should have hands-on experience and the ability to integrate these technologies into customer environments, providing tailored solutions that meet diverse operational requirements.
Responsibilities
- Provide technology contributions in the following areas:
- Working in an agile development environment
- Translating business requirements into low-level application design
- Application code development through a collaborative approach
- Doing Full-scale unit testing
- Applying test-driven and behaviour-driven development (TDD/BDD) QA concepts
- Applying continuous integration and continuous deployment (CI/CD) concepts
Soft Skills
- Able to contribute as an individual contributor
- Able to execute responsibilities independently
- Excellent problem-solving skills and attention to detail.
- Able to plan one's own work proactively
- Strong communication skills
Mandatory Skills
- Java, Spring Boot, Python and relational / non-relational databases
- Container orchestration - Kubernetes, Docker
- Development experience in Linux environment
- Modern SDLC tooling (Maven, Git)
- Microservices-oriented application design and development, deployed using container orchestration in a cloud environment
- Understanding of CI/CD pipelines and the related development environment
Nice-to-have Skills
- Front-end technologies (JavaScript, HTML5, CSS, Angular)
Job Title : Senior Software Engineer – Backend
Experience Required : 6 to 12 Years
Location : Bengaluru (Hybrid – 3 Days Work From Office)
Number of Openings : 2
Work Hours : 11:00 AM – 8:00 PM IST
Notice Period : 30 Days Preferred
Work Location : SmartWorks The Cube, Karle Town SEZ, Building No. 5, Nagavara, Bangalore – 560045
Note : Face-to-face interview in Bangalore is mandatory during the second round.
Role Overview :
We are looking for an experienced Senior Backend Developer to join our growing team. This is a hands-on role focused on building cloud-based, scalable applications in the mortgage finance domain.
Key Responsibilities :
- Design, develop, and maintain backend components for cloud-based web applications.
- Contribute to architectural decisions involving microservices and distributed systems.
- Work extensively with Node.js and RESTful APIs.
- Implement scalable solutions using AWS services (e.g., Lambda, SQS, SNS, RDS); a minimal messaging sketch follows this list.
- Utilize both relational and NoSQL databases effectively.
- Collaborate with cross-functional teams to deliver robust and maintainable code.
- Participate in agile development practices and deliver rapid iterations based on feedback.
- Take ownership of system performance, scalability, and reliability.
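This posting is Node.js-centric, but since Python experience is listed as a plus, the following is a small, hedged Python (boto3) sketch of the SQS-style messaging mentioned above; the queue URL and payload are placeholders.

    # Minimal sketch of sending and receiving a message via Amazon SQS with boto3.
    # The queue URL is a placeholder; credentials are assumed to come from the
    # environment or an attached IAM role.
    import json
    import boto3

    sqs = boto3.client("sqs", region_name="ap-south-1")
    QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/orders"  # placeholder

    # Producer side: enqueue an event for asynchronous processing.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps({"order_id": "A-1001"}))

    # Consumer side: long-poll for messages, process, then delete.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
    for msg in resp.get("Messages", []):
        payload = json.loads(msg["Body"])
        print("processing", payload)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])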
Core Requirements :
- 5+ Years of total experience in backend development.
- Minimum 3 Years of experience in building scalable microservices or delivering large-scale products.
- Strong expertise in Node.js and REST APIs.
- Solid experience with RDBMS, SQL, and data modeling.
- Good understanding of distributed systems, scalability, and availability.
- Familiarity with AWS infrastructure and services.
- Development experience in Python and/or Java is a plus.
Preferred Skills :
- Experience with frontend frameworks like React.js or AngularJS.
- Working knowledge of Docker and containerized applications.
Interview Process :
- Round 1 : Online technical assessment (1 hour)
- Round 2 : Virtual technical interview
- Round 3 : In-person interview at the Bangalore office (2 hours – mandatory)
- Develop and maintain Java applications using Core Java, the Spring framework, JDBC, and threading concepts.
- Strong understanding of the Spring framework and its various modules.
- Experience with JDBC for database connectivity and manipulation
- Utilize database management systems to store and retrieve data efficiently.
- Proficiency in Core Java 8 and a thorough understanding of threading concepts and concurrent programming.
- Experience working with relational and NoSQL databases.
- Basic understanding of cloud platforms such as Azure and GCP; experience with DevOps practices is an added advantage.
- Knowledge of containerization technologies (e.g., Docker, Kubernetes)
- Perform debugging and troubleshooting of applications using log analysis techniques.
- Understand multi-service flow and integration between components.
- Handle large-scale data processing tasks efficiently and effectively.
- Hands-on experience using Spark is an added advantage.
- Good problem-solving and analytical abilities.
- Collaborate with cross-functional teams to identify and solve complex technical problems.
- Knowledge of Agile methodologies such as Scrum or Kanban
- Stay updated with the latest technologies and industry trends to continuously improve development processes and methodologies.

Role: Data Engineer (14+ years of experience)
Location: Whitefield, Bangalore
Mode of Work: Hybrid (3 days from office)
Notice period: Immediate, or currently serving notice with 30 days or less remaining
Note: Candidates should be based in Bangalore, as one round has to be taken face to face (F2F).
Job Summary:
Role and Responsibilities
● Design and implement scalable data pipelines for ingesting, transforming, and loading data from various tools and sources.
● Design data models to support data analysis and reporting.
● Automate data engineering tasks using scripting languages and tools.
● Collaborate with engineers, process managers, data scientists to understand their needs and design solutions.
● Act as a bridge between the engineering and the business team in all areas related to Data.
● Automate monitoring and alerting for data pipelines, products, and dashboards, and troubleshoot any issues; on-call participation is required.
● Create and optimize SQL, including modularization and optimizations that may require views or table creation in the source systems.
● Define best practices for data validation and automate them as far as possible, aligning with enterprise standards (a minimal validation sketch follows this list).
● Manage data in QA environments (e.g., test data management).
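As a hedged illustration of the data-validation item above, here is a minimal Python sketch that reconciles row counts between a source table and its warehouse copy. The connection URLs and table names are placeholders, and the alerting step is only a print statement.

    # Minimal sketch: reconcile row counts between a source table and the
    # warehouse copy, and flag drift. Connection URLs and table names are
    # hypothetical placeholders.
    from sqlalchemy import create_engine, text

    SOURCE_URL = "postgresql://user:pass@source-db/app"        # placeholder
    WAREHOUSE_URL = "snowflake://user:pass@account/db/schema"  # placeholder

    def row_count(url: str, table: str) -> int:
        engine = create_engine(url)
        with engine.connect() as conn:
            return conn.execute(text(f"SELECT COUNT(*) FROM {table}")).scalar_one()

    source_rows = row_count(SOURCE_URL, "orders")
    target_rows = row_count(WAREHOUSE_URL, "analytics.orders")

    if source_rows != target_rows:
        # In practice this would page on-call or post to a monitoring channel.
        print(f"VALIDATION FAILED: source={source_rows}, target={target_rows}")
    else:
        print("row counts match")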
Qualifications
● 14+ years of experience as a Data engineer or related role.
● Experience with Agile engineering practices.
● Strong experience in writing queries for RDBMS, cloud-based data warehousing solutions like Snowflake and Redshift.
● Experience with SQL and NoSQL databases.
● Ability to work independently or as part of a team.
● Experience with cloud platforms, preferably AWS.
● Strong experience with data warehousing and data lake technologies (Snowflake)
● Expertise in data modelling
● Experience with ETL/ELT tools and methodologies.
● 5+ years of experience in application development including Python, SQL, Scala, or Java
● Experience working with real-time data streaming and data streaming platforms.
NOTE: It is mandatory to attend one technical round face to face.
Role: Sr. Java Developer
Experience: 6+ Years
Location: Bangalore (Whitefield)
Work Mode: Hybrid (2-3 days WFO)
Shift Timing: Regular Morning Shift
About the Role:
We are looking for a seasoned Java Developer with 6+ years of experience to join our growing engineering team. The ideal candidate should have strong expertise in Java, Spring Boot, Microservices, and cloud-based deployment using AWS or DevOps tools. This is a hybrid role based out of our Whitefield, Bangalore location.
Key Responsibilities:
- Participate in agile development processes and scrum ceremonies.
- Translate business requirements into scalable and maintainable technical solutions.
- Design and develop applications using Java, Spring Boot, and Microservices architecture.
- Ensure robust and reliable code through full-scale unit testing and TDD/BDD practices.
- Contribute to CI/CD pipeline setup and cloud deployments.
- Work independently and as an individual contributor on complex features.
- Troubleshoot production issues and optimize application performance.
Mandatory Skills:
- Strong Core Java and Spring Boot expertise.
- Proficiency in AWS or DevOps (Docker & Kubernetes).
- Experience with relational and/or non-relational databases (SQL, NoSQL).
- Sound understanding of Microservices architecture and RESTful APIs.
- Containerization experience using Docker and orchestration via Kubernetes.
- Familiarity with Linux-based development environments.
- Exposure to modern SDLC tools – Maven, Git, Jenkins, etc.
- Good understanding of CI/CD pipelines and cloud-based deployment.
Soft Skills:
- Self-driven, proactive, and an individual contributor.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal abilities.
- Able to plan, prioritize, and manage tasks independently.
Nice-to-Have Skills:
- Exposure to frontend technologies like Angular, JavaScript, HTML5, and CSS.
Looking for Fresher developers
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skills:
Experience in a DevOps Engineer or similar software engineering role
Good knowledge of Terraform, Kubernetes
Working knowledge of AWS and Google Cloud
You can contact me directly at 9316120132.

- 5+ years of IT development experience, with a minimum of 3+ years of hands-on experience in Snowflake.
- Strong experience in building/designing data warehouses or data lakes, with end-to-end data mart implementation experience focused on large enterprise-scale Snowflake implementations on any of the hyperscalers.
- Strong experience building productionized data ingestion and data pipelines in Snowflake.
- Good knowledge of Snowflake's architecture and features like Zero-Copy Cloning, Time Travel, and performance tuning capabilities (a minimal illustration follows this list).
- Good experience with Snowflake RBAC and data security.
- Strong experience with Snowflake features, including newly released Snowflake features.
- Good experience in Python/PySpark.
- Experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and some Azure services (Blob Storage, ADLS, ADF).
- Experience/knowledge of orchestration and scheduling tools such as Airflow.
- Good understanding of ETL or ELT processes and ETL tools.
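As a hedged illustration of the Zero-Copy Cloning and Time Travel features listed above, here is a minimal Python sketch using the snowflake-connector-python package; the account, credentials, and object names are placeholders.

    # Minimal sketch: zero-copy clone a table and query it with Time Travel,
    # using snowflake-connector-python. Account, credentials and object names
    # are placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account",      # placeholder
        user="etl_user",           # placeholder
        password="********",       # placeholder
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="PUBLIC",
    )
    cur = conn.cursor()

    # Zero-Copy Cloning: instant, storage-free copy for testing or backfills.
    cur.execute("CREATE OR REPLACE TABLE orders_clone CLONE orders")

    # Time Travel: query the table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print(cur.fetchone())

    cur.close()
    conn.close()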
Job Description:
We are seeking a skilled and experienced Java Developer with expertise in Spring Boot to join our development team. The ideal candidate should have strong backend development experience, with a focus on building scalable and high-performing applications.
Key Responsibilities:
- Design, develop, and maintain backend services using Java and Spring Boot.
- Write clean, efficient, and reusable code following best practices.
- Work closely with front-end developers, architects, and product owners to deliver high-quality solutions.
- Participate in code reviews and provide constructive feedback.
- Optimize application performance and ensure high availability.
- Debug and resolve technical issues in development and production environments.
- Contribute to system design and architecture decisions.
Required Skills:
- Strong hands-on experience with Java (8 or above), Spring Boot, and Hibernate.
- Good understanding of RESTful APIs and microservices architecture.
- Experience with Angular 8+.
- Experience with databases such as MySQL, PostgreSQL, or Oracle.
- Familiarity with tools like Maven/Gradle, Git, and CI/CD pipelines.
- Knowledge of cloud platforms (AWS, Azure, or GCP) is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration with Kubernetes.
- Exposure to Agile development methodologies.
About the Role
We are looking for a skilled Backend Engineer with strong experience in building scalable microservices, integrating with distributed data systems, and deploying web APIs that serve UI applications in the cloud. You’ll work on high-performance systems involving Kafka, DynamoDB, Redis, and other modern backend technologies.
Responsibilities
- Design, develop, and deploy backend microservices and APIs that power UI applications.
- Implement event-driven architectures using Apache Kafka or similar messaging platforms (see the sketch after this list).
- Build scalable and highly available systems using NoSQL databases (e.g., DynamoDB, MongoDB).
- Optimize backend systems using caching layers like Redis to enhance performance.
- Ensure seamless deployment and operation of services in cloud environments (AWS, GCP, or Azure).
- Write clean, maintainable, and well-tested code; contribute to code reviews and architecture discussions.
- Collaborate closely with frontend, DevOps, and product teams to deliver integrated solutions.
- Monitor and troubleshoot production issues and participate in on-call rotations as needed.
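As a hedged sketch of the Kafka-based event-driven pattern referenced above, here is a minimal Python example using the kafka-python package; the broker address, topic name, and event payload are placeholders.

    # Minimal sketch of an event-driven flow with kafka-python: a producer
    # publishes an order event and a consumer processes it. Broker address and
    # topic name are placeholders.
    import json
    from kafka import KafkaProducer, KafkaConsumer

    BROKER = "localhost:9092"   # placeholder
    TOPIC = "order-events"      # placeholder

    # Producer: publish a JSON-encoded event.
    producer = KafkaProducer(
        bootstrap_servers=BROKER,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send(TOPIC, {"order_id": "A-1001", "status": "CREATED"})
    producer.flush()

    # Consumer: read events and hand them to downstream processing.
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKER,
        group_id="order-service",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    for event in consumer:
        print("handling event", event.value)
        break  # demo only: stop after the first message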
Required Qualifications
- 3–7 years of professional experience in backend development.
- Strong programming skills in one or more languages: Java, Python, Go, Node.js.
- Hands-on experience with microservices architecture and API design (REST/gRPC).
- Practical experience with Kafka, RabbitMQ, or other event streaming/message queue systems.
- Solid knowledge of NoSQL databases, especially DynamoDB or equivalents.
- Experience using Redis or Memcached for caching or pub/sub mechanisms (a minimal cache-aside sketch follows this list).
- Proficiency with cloud platforms (preferably AWS – e.g., Lambda, ECS, EKS, API Gateway).
- Familiarity with Docker, Kubernetes, and CI/CD pipelines.
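For the Redis caching item above, here is a minimal, hedged cache-aside sketch in Python (redis-py); the Redis host, key format, and load_user_from_db helper are hypothetical.

    # Minimal cache-aside sketch with redis-py: try the cache first, fall back
    # to the database, then populate the cache with a TTL. Host, key format and
    # load_user_from_db are hypothetical placeholders.
    import json
    import redis

    cache = redis.Redis(host="localhost", port=6379, db=0)  # placeholder host

    def load_user_from_db(user_id: str) -> dict:
        # Placeholder for a real database lookup.
        return {"id": user_id, "name": "Asha"}

    def get_user(user_id: str) -> dict:
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)              # cache hit
        user = load_user_from_db(user_id)          # cache miss: go to the DB
        cache.set(key, json.dumps(user), ex=300)   # cache for 5 minutes
        return user

    print(get_user("42"))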

Position : Senior Data Analyst
Experience Required : 5 to 8 Years
Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)
Shift Timing : 11:00 AM – 8:00 PM IST
Notice Period : Immediate Joiners Only
Job Summary :
We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.
The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.
Key Responsibilities :
- Lead the design, execution, and delivery of advanced data analysis projects.
- Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
- Create and maintain interactive dashboards, reports, and visualizations.
- Perform root cause analysis and uncover meaningful patterns from large datasets.
- Present analytical findings to senior leaders and non-technical audiences.
- Maintain data integrity, quality, and governance in all reporting and analytics solutions.
- Mentor junior analysts and support their professional development.
- Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.
Must-Have Skills :
- Strong proficiency in SQL and Databricks
- Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
- Sound understanding of data warehousing concepts and BI best practices
Good-to-Have :
- Experience with AWS
- Exposure to machine learning and predictive analytics
- Industry-specific analytics experience (preferred but not mandatory)

About Role
We are seeking a skilled Backend Engineer with 2+ years of experience to join our dynamic team, focusing on building scalable web applications using Python frameworks (Django/FastAPI) and cloud technologies. You'll be instrumental in developing and maintaining our cloud-native backend services.
Responsibilities:
- Design and develop scalable backend services using Django and FastAPI (see the sketch after this list)
- Create and maintain RESTful APIs
- Implement efficient database schemas and optimize queries
- Implement containerisation using Docker and container orchestration
- Design and implement cloud-native solutions using microservices architecture
- Participate in technical design discussions, code reviews and maintain coding standards
- Document technical specifications and APIs
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
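As a hedged sketch of the kind of API work described above, here is a minimal FastAPI service with a Pydantic request model; the route and field names are illustrative only, not part of any actual product.

    # Minimal FastAPI sketch: a typed request model and one REST endpoint.
    # Route and field names are illustrative only.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class OrderIn(BaseModel):
        sku: str
        quantity: int

    @app.post("/orders")
    def create_order(order: OrderIn) -> dict:
        # A real service would persist the order and enqueue downstream events;
        # here we simply echo a confirmation.
        return {"status": "accepted", "sku": order.sku, "quantity": order.quantity}

    # Run locally (assuming this file is main.py):  uvicorn main:app --reload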
Requirements:
- Experience with Django and/or FastAPI (2+ years)
- Proficiency in SQL and ORM frameworks
- Docker containerisation and orchestration
- Proficiency in shell scripting (Bash/PowerShell)
- Understanding of microservices architecture
- Experience building serverless backends
- Knowledge of deployment and debugging on cloud platforms (AWS/Azure)


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.


Job Title : Python Developer – API Integration & AWS Deployment
Experience : 5+ Years
Location : Bangalore
Work Mode : Onsite
Job Overview :
We are seeking an experienced Python Developer with strong expertise in API development and AWS cloud deployment.
The ideal candidate will be responsible for building scalable RESTful APIs, automating power system simulations using PSS®E (psspy), and deploying automation workflows securely and efficiently on AWS.
Mandatory Skills : Python, FastAPI/Flask, PSS®E (psspy), RESTful API Development, AWS (EC2, Lambda, S3, EFS, API Gateway), AWS IAM, CloudWatch.
Key Responsibilities :
Python Development & API Integration :
- Design, build, and maintain RESTful APIs using FastAPI or Flask to interface with PSS®E.
- Automate simulations and workflows using the PSS®E Python API (psspy).
- Implement robust bulk case processing, result extraction, and automated reporting systems.
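Purely as an illustration of the bulk case processing described above, here is a heavily hedged Python sketch. The psspy calls shown (psseinit, case, fnsl) and the file paths are illustrative placeholders and should be verified against the PSS®E documentation for the version in use.

    # Hedged sketch of bulk case processing with the PSS(R)E Python API (psspy).
    # The specific psspy calls and file paths are illustrative placeholders and
    # should be checked against the PSS(R)E documentation for your version.
    import glob
    import psse35          # sets up the PSS(R)E environment (version-specific, assumed)
    import psspy

    psspy.psseinit(10000)  # initialise PSS(R)E with a bus limit (assumed call)

    results = []
    for case_file in glob.glob("cases/*.sav"):   # placeholder directory
        ierr = psspy.case(case_file)             # load the saved case
        if ierr != 0:
            results.append((case_file, "load failed"))
            continue
        ierr = psspy.fnsl()                      # run a full Newton-Raphson power flow
        results.append((case_file, "solved" if ierr == 0 else f"error {ierr}"))

    for case_file, status in results:
        print(case_file, status)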
AWS Cloud Deployment :
- Deploy APIs and automation pipelines using AWS services such as EC2, Lambda, S3, EFS, and API Gateway.
- Apply cloud-native best practices to ensure reliability, scalability, and cost efficiency.
- Manage secure access control using AWS IAM, API keys, and implement monitoring using CloudWatch.
Required Skills :
- 5+ Years of professional experience in Python development.
- Hands-on experience with RESTful API development (FastAPI/Flask).
- Solid experience working with PSS®E and its psspy Python API.
- Strong understanding of AWS services, deployment, and best practices.
- Proficiency in automation, scripting, and report generation.
- Knowledge of cloud security and monitoring tools like IAM and CloudWatch.
Good to Have :
- Experience in power system simulation and electrical engineering concepts.
- Familiarity with CI/CD tools for AWS deployments.

You will:
- Collaborate with the I-Stem Voice AI team and CEO to design, build and ship new agent capabilities
- Develop, test and refine end-to-end voice agent models (ASR, NLU, dialog management, TTS); a small ASR sketch follows this list
- Stress-test agents in noisy, real-world scenarios and iterate for improved robustness and low latency
- Research and prototype cutting-edge techniques (e.g. robust speech recognition, adaptive language understanding)
- Partner with backend and frontend engineers to seamlessly integrate AI components into live voice products
- Monitor agent performance in production, analyze failure cases, and drive continuous improvement
- Occasionally demo our Voice AI solutions at industry events and user forums
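As a small, hedged illustration of the ASR step in such a pipeline (not the team's actual stack), here is a sketch using the open-source openai-whisper package; the model size and audio path are placeholders.

    # Minimal ASR sketch with the open-source openai-whisper package.
    # Model size and audio file path are placeholders; a production voice agent
    # would use a streaming, latency-optimised setup instead of batch transcription.
    import whisper

    model = whisper.load_model("base")              # small model for a quick demo
    result = model.transcribe("sample_call.wav")    # placeholder audio file
    print(result["text"])                           # recognised transcript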
You are:
- An AI/Software Engineer with hands-on experience in speech-centric ML (ASR, NLU or TTS)
- Skilled in building and tuning transformer-based speech models and handling real-time audio pipelines
- Obsessed with reliability: you design experiments to push agents to their limits and root-cause every error
- A clear thinker who deconstructs complex voice interactions from first principles
- Passionate about making voice technology inclusive and accessible for diverse users
- Comfortable moving fast in a small team, yet dogged about code quality, testing and reproducibility
Job Summary:
We are seeking a skilled and experienced Java Developer with hands-on expertise in AWS, Spring Boot, and Microservices architecture. As a core member of our backend development team, you will design and build scalable cloud-native applications that support high-performance systems and business logic.
Key Responsibilities:
- Design, develop, and maintain backend services using Java (Spring Boot).
- Build and deploy microservices-based architectures hosted on AWS.
- Collaborate with DevOps and architecture teams to ensure scalable and secure cloud solutions.
- Write clean, efficient, and well-documented code.
- Optimize application performance and troubleshoot production issues.
- Participate in code reviews, technical discussions, and architecture planning.
Must-Have Skills:
- 4.5+ years of experience in Java development.
- Strong proficiency in Spring Boot and RESTful APIs.
- Proven hands-on experience with AWS services (EC2, S3, Lambda, RDS, etc.).
- Solid understanding of microservices architecture, CI/CD, and containerization tools.
- Experience with version control (Git), and deployment tools.
About Root Node
We’re an early-stage startup building intelligent tools for planning, scheduling, and optimization—starting with timetabling and warehouse logistics. Backed by deep domain expertise and a growing customer pipeline, we’re now building our core tech team. This is not just a coding job — it's a chance to build something meaningful from the ground up.
About the job
- Design and implement robust backend systems and APIs using Java or similar backend language and Spring Boot or equivalent frameworks
- Integrate backend services with existing custom ERP systems
- Work closely with the founder on product architecture, feature prioritization, and go-to-market feedback
- Take full ownership of features — from system design and development to deployment and iterative improvements
- Help shape our engineering culture and technical foundations
You're a Great Fit If You:
- Have 3+ years of experience in backend development
- Are strong in Java or similar languages (e.g., Kotlin, Go, Node.js)
- Have solid experience with Spring Boot or equivalent backend frameworks
- Have integrated with ERP or enterprise systems in production environments
- Are comfortable with both SQL (PostgreSQL) and NoSQL (MongoDB)
- Understand REST API development, authentication, Docker
- Have an entrepreneurial mindset — you're excited about ownership, ambiguity, and making decisions that shape the product and company
- Want more than just a job — you want to build, solve, and learn rapidly
What We Offer
- Competitive salary
- High degree of ownership and autonomy
- Ability to shape the tech and product direction from Day 1
- Transparent and fast decision-making culture
- A builder’s environment — solve real-world problems with real impact

Job Title : Python Data Engineer
Experience : 4+ Years
Location : Bangalore / Hyderabad (On-site)
Job Summary :
We are seeking a skilled Python Data Engineer to work on cloud-native data platforms and backend services.
The role involves building scalable APIs, working with diverse data systems, and deploying containerized services using modern cloud infrastructure.
Mandatory Skills : Python, AWS, RESTful APIs, Microservices, SQL/PostgreSQL/NoSQL, Docker, Kubernetes, CI/CD (Jenkins/GitLab CI/AWS CodePipeline)
Key Responsibilities :
- Design, develop, and maintain backend systems using Python.
- Build and manage RESTful APIs and microservices architectures.
- Work extensively with AWS cloud services for deployment and data storage.
- Implement and manage SQL, PostgreSQL, and NoSQL databases.
- Containerize applications using Docker and orchestrate with Kubernetes.
- Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or AWS CodePipeline.
- Collaborate with teams to ensure scalable and reliable software delivery.
- Troubleshoot and optimize application performance.
Must-Have Skills :
- 4+ years of hands-on experience in Python backend development.
- Strong experience with AWS cloud infrastructure.
- Proficiency in building microservices and APIs.
- Good knowledge of relational and NoSQL databases.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD tools and DevOps processes.
- Strong problem-solving and collaboration skills.

Job Title : Full Stack Drupal Developer
Experience : Minimum 5 Years
Location : Hyderabad / Bangalore / Mumbai / Pune / Chennai / Gurgaon (Hybrid or On-site)
Notice Period : Immediate to 15 Days Preferred
Job Summary :
We are seeking a skilled and experienced Full Stack Drupal Developer with a strong background in Drupal (version 8 and above) for both front-end and back-end development. The ideal candidate will have hands-on experience in AWS deployments, Drupal theming and module development, and a solid understanding of JavaScript, PHP, and core Drupal architecture. Acquia certifications and contributions to the Drupal community are highly desirable.
Mandatory Skills :
Drupal 8+, PHP, JavaScript, Custom Module & Theming Development, AWS (EC2, Lightsail, S3, CloudFront), Acquia Certified, Drupal Community Contributions.
Key Responsibilities :
- Develop and maintain full-stack Drupal applications, including both front-end (theming) and back-end (custom module) development.
- Deploy and manage Drupal applications on AWS using services like EC2, Lightsail, S3, and CloudFront.
- Work with the Drupal theming layer and module layer to build custom and reusable components.
- Write efficient and scalable PHP code integrated with JavaScript and core JS concepts.
- Collaborate with UI/UX teams to ensure high-quality user experiences.
- Optimize performance and ensure high availability of applications in cloud environments.
- Contribute to the Drupal community and utilize contributed modules effectively.
- Follow best practices for code versioning, documentation, and CI/CD deployment processes.
Required Skills & Qualifications :
- Minimum 5 Years of hands-on experience in Drupal development (Drupal 8 onwards).
- Strong experience in front-end (theming, JavaScript, HTML, CSS) and back-end (custom module development, PHP).
- Experience with Drupal deployment on AWS, including services such as EC2, Lightsail, S3, and CloudFront.
- Proficiency in JavaScript, core JS concepts, and PHP coding.
- Acquia certifications such as:
- Drupal Developer Certification
- Site Management Certification
- Acquia Certified Developer (preferred)
- Experience with contributed modules and active participation in the Drupal community is a plus.
- Familiarity with version control (Git), Agile methodologies, and modern DevOps tools.
Preferred Certifications :
- Acquia Certified Developer.
- Acquia Site Management Certification.
- Any relevant AWS certifications are a bonus.
Key Responsibilities
- Develop and maintain backend services and APIs using Java (Spring Boot preferred).
- Integrate Large Language Models (LLMs) and Generative AI models (e.g., OpenAI, Hugging Face, LangChain) into applications (see the sketch after this list).
- Collaborate with data scientists to build data pipelines and enable intelligent application features.
- Design scalable systems to support AI model inference and deployment.
- Work with cloud platforms (AWS, GCP, or Azure) for deploying AI-driven services.
- Write clean, maintainable, and well-tested code.
- Participate in code reviews and technical discussions.
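Since the role involves LLM integration and lists Python familiarity as a plus, here is a minimal, hedged sketch of calling an OpenAI chat model from Python; the model name and prompts are placeholders, and the API key is read from the environment.

    # Minimal sketch: call an OpenAI chat model from Python (openai>=1.x client).
    # Model name and prompts are placeholders; OPENAI_API_KEY is read from the
    # environment.
    from openai import OpenAI

    client = OpenAI()  # uses the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You summarise support tickets."},
            {"role": "user", "content": "Customer cannot reset their password."},
        ],
    )
    print(response.choices[0].message.content)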
Required Skills
- 3–5 years of experience in Java development (preferably with Spring Boot).
- Experience working with RESTful APIs, microservices, and cloud-based deployments.
- Exposure to LLMs, NLP, or GenAI tools (OpenAI, Cohere, Hugging Face, LangChain, etc.).
- Familiarity with Python for data science/ML integration is a plus.
- Good understanding of software engineering best practices (CI/CD, testing, etc.).
- Ability to work collaboratively in cross-functional teams.
Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue (a minimal PySpark sketch follows this list)
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
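As a hedged sketch of the PySpark transformation work described above, here is a plain PySpark job skeleton (shown without the Glue-specific wrappers); the S3 paths and column names are placeholders.

    # Minimal PySpark ETL sketch: read raw CSV from S3, clean and aggregate it,
    # then write partitioned Parquet back to S3. Bucket paths and column names
    # are placeholders; an AWS Glue job would wrap similar logic with GlueContext.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")  # placeholder path

    daily = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("status") == "COMPLETED")
           .groupBy("order_date")
           .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
    )

    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://curated-bucket/daily_orders/"  # placeholder path
    )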
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills