50+ AWS (Amazon Web Services) Jobs in India
Apply to 50+ AWS (Amazon Web Services) Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS (Amazon Web Services) Jobs and apply today!


Job Title: Frontend Engineer - React.js, Next.js, MUI
Location: Hybrid, 2–3 days WFO per week (Bengaluru, India)
About the Role:
We're looking for a passionate and skilled Frontend Engineer with 1–3 years of experience to join our growing development team. This role is front-end-heavy, focused on building clean, scalable, and high-performance user interfaces using the latest technologies in the MERN stack—particularly Next.js, React, TypeScript, and Material UI (MUI).
You'll work alongside a collaborative and talented team to design and build seamless web experiences that delight users. If you're excited about modern frontend architecture and want to grow in a fast-moving environment, we'd love to hear from you.
Key Responsibilities:
- Develop responsive, high-performance web applications using Next.js, React, and TypeScript.
- Translate UI/UX designs into functional frontend components using MUI.
- Collaborate with backend developers, designers, and product managers to deliver new features and improvements.
- Ensure code quality through best practices, code reviews, and testing.
- Optimize applications for maximum speed and scalability.
Must-Have Skills:
- 1–3 years of professional experience in frontend development.
- Strong proficiency in React, Next.js, and TypeScript.
- Experience with Material UI (MUI) or similar component libraries.
- Understanding of responsive design, modern frontend tooling, and web performance best practices.
- Familiarity with Git and collaborative workflows.
Nice-to-Have (Bonus) Skills:
- Familiarity with testing libraries (Jest, React Testing Library, Cypress).
- Experience working with design tools like Figma or Adobe XD.
- Basic knowledge of accessibility (a11y) standards and performance optimization.
- Basic experience with Node.js, MongoDB, or working in a MERN stack environment.
- Familiarity with AWS services or cloud deployment practices.
- Experience with RESTful APIs or integrating with backend services.

Job Title: MERN Stack Developer
Location: Hybrid, 2–3 days WFO per week (Bengaluru, India)
About the Role:
We're looking for a passionate and skilled Frontend Engineer with 1–3 years of experience to join our growing development team. This role is front-end-heavy, focused on building clean, scalable, and high-performance user interfaces using the latest technologies in the MERN stack—particularly Next.js, React, TypeScript, and Material UI (MUI).
You'll work alongside a collaborative and talented team to design and build seamless web experiences that delight users. If you're excited about modern frontend architecture and want to grow in a fast-moving environment, we'd love to hear from you.
Key Responsibilities:
- Develop responsive, high-performance web applications using Next.js, React, and TypeScript.
- Translate UI/UX designs into functional frontend components using MUI.
- Collaborate with backend developers, designers, and product managers to deliver new features and improvements.
- Ensure code quality through best practices, code reviews, and testing.
- Optimize applications for maximum speed and scalability.
Must-Have Skills:
- 1–3 years of professional experience in frontend development.
- Strong proficiency in React, Next.js, and TypeScript.
- Experience with Material UI (MUI) or similar component libraries.
- Understanding of responsive design, modern frontend tooling, and web performance best practices.
- Familiarity with Git and collaborative workflows.
Nice-to-Have (Bonus) Skills:
- Familiarity with testing libraries (Jest, React Testing Library, Cypress).
- Experience working with design tools like Figma or Adobe XD.
- Basic knowledge of accessibility (a11y) standards and performance optimization.
- Basic experience with Node.js, MongoDB, or working in a MERN stack environment.
- Familiarity with AWS services or cloud deployment practices.
- Experience with RESTful APIs or integrating with backend services.
Job Title: Backend Engineer - NodeJS, NestJS, and Python
Location: Hybrid, 2–3 days WFO per week (Bengaluru, India)
About the role:
We are looking for a skilled and passionate Senior Backend Developer to join our dynamic team. The ideal candidate should have strong experience in Node.js and NestJS, along with a solid understanding of database management, query optimization, and microservices architecture. As a backend developer, you will be responsible for developing and maintaining scalable backend systems, building robust APIs, integrating databases, and working closely with frontend and DevOps teams to deliver high-quality software solutions.
What You'll Do 🛠️
- Design, develop, and maintain server-side logic using Node.js, NestJS, and Python.
- Develop and integrate RESTful APIs and microservices to support scalable systems.
- Work with NoSQL and SQL databases (e.g., MongoDB, PostgreSQL, MySQL) to create and manage schemas, write complex queries, and optimize performance.
- Collaborate with cross-functional teams including frontend, DevOps, and QA.
- Ensure code quality, maintainability, and scalability through code reviews, testing, and documentation.
- Monitor and troubleshoot production systems, ensuring high availability and performance.
- Implement security and data protection best practices.
What You'll Bring 💼
- 4 to 6 years of professional experience as a backend developer.
- Strong proficiency in Node.js and NestJS framework.
- Good hands-on experience with Python (Django/Flask experience is a plus).
- Solid understanding of relational and non-relational databases.
- Proficient in writing complex SQL and NoSQL queries.
- Experience with microservices architecture and distributed systems.
- Familiarity with version control systems like Git.
- Basic understanding of containerization (e.g., Docker) and cloud services is a plus.
- Excellent problem-solving skills and a collaborative mindset.
Bonus Points ➕
- Experience with CI/CD pipelines.
- Exposure to cloud platforms like AWS, GCP or Azure.
- Familiarity with event-driven architecture or message brokers (MQTT, Kafka, RabbitMQ).
Why this role matters
You will help build the company from the ground up—shaping our culture and having an impact from Day 1 as part of the foundational team.
Role Description
This is a full-time on-site role for a Cloud & DevOps Engineer located in Coimbatore. The Cloud & DevOps Engineer will be responsible for managing and optimizing cloud infrastructure, implementing continuous integration and deployment processes, and ensuring the smooth operation of services. The role will involve working with Kubernetes for container orchestration and managing Linux environments.
Qualifications
- Proficiency in Software Development skills
- Experience with Continuous Integration and Deployment processes
- Skills in Kubernetes management and orchestration
- Strong understanding of Linux operating systems
- Ability to work collaboratively in a team environment
- Relevant certifications in cloud and DevOps technologies are a plus

Job Title: Sr DevOps Engineer
Location: Bengaluru, India (Hybrid)
Reports to: Sr Engineering Manager
About Our Client :
We are a solution-driven, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our client's IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without the costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending.
- Containerization & Orchestration: Deploy and manage applications and orchestrate them.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster recovery strategies, for Kubernetes cluster and database backups.
- Monitoring & Alerting: Set up and maintain robust systems using tools for application and infrastructure health and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools such as GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with optimizing scaling, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
- You will help build the company from the ground up, shaping our culture and having an impact from Day 1 as part of the foundational team.
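As a small illustration of the "Automation & Scripting" and "Billing & Cost Optimization" duties in this listing, here is a standard-library sketch that scans a simplified `aws ec2 describe-instances` JSON dump for instances missing a cost-allocation tag. The JSON shape is reduced for readability, and the required `team` tag is an invented example, not from any real account.

```python
import json

# Simplified (hypothetical) shape of `aws ec2 describe-instances --output json`;
# real output nests instances under Reservations[].Instances[] with many more fields.
SAMPLE = json.loads("""
{"Reservations": [
  {"Instances": [
    {"InstanceId": "i-0abc", "Tags": [{"Key": "team", "Value": "iot"}]},
    {"InstanceId": "i-0def", "Tags": []}
  ]}
]}
""")

# Assumption for illustration: the tags your billing dashboard groups costs by.
REQUIRED_TAGS = {"team"}

def untagged_instances(describe_output, required=REQUIRED_TAGS):
    """Return IDs of instances missing any required cost-allocation tag."""
    missing = []
    for reservation in describe_output.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            tag_keys = {t["Key"] for t in inst.get("Tags", [])}
            if not required <= tag_keys:  # required tags not a subset of actual tags
                missing.append(inst["InstanceId"])
    return missing

if __name__ == "__main__":
    print(untagged_instances(SAMPLE))
```

In practice the JSON would come from the AWS CLI or an SDK call rather than an inline string; the subset check is the whole trick.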
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls, and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams, including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor's or master's degree in Computer Science, Electronics Engineering, or a related field.
- 3-6 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
NonStop io is seeking a proficient Java Developer to join our dynamic team. In this role, you will contribute to designing, developing, and maintaining high-quality Java-based applications. You will work closely with cross-functional teams, ensuring the delivery of robust and scalable software solutions.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you.


Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.
Key Responsibilities:
- Design and implement advanced AI/ML models and algorithms to address real-world challenges.
- Analyze large and complex datasets to derive actionable insights and train predictive models.
- Build and deploy scalable, production-ready AI solutions on cloud platforms such as AWS, Azure, or GCP.
- Collaborate closely with cross-functional teams, including data engineers, product managers, and software developers, to integrate AI solutions into business workflows.
- Continuously monitor and optimize model performance, ensuring scalability, robustness, and reliability.
- Stay abreast of the latest advancements in AI, ML, and Generative AI technologies, and proactively apply them where applicable.
- Implement MLOps best practices using tools such as MLflow, Docker, and CI/CD pipelines.
- Work with Large Language Models (LLMs) like GPT and LLaMA, and develop Retrieval-Augmented Generation (RAG) pipelines when needed.
Required Skills:
- Strong programming skills in Python (preferred); experience with R or Java is also valuable.
- Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Solid foundation in data structures, algorithms, statistics, and machine learning principles.
- Familiarity with MLOps tools and practices, including MLflow, Docker, and Kubernetes.
- Proven experience in deploying and maintaining AI/ML models in production environments.
- Exposure to Large Language Models (LLMs), Generative AI, and vector databases is a strong plus.
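To make the "machine learning principles" requirement above concrete, here is a toy, dependency-free sketch of the optimization loop that libraries like scikit-learn and PyTorch wrap: fitting a line by gradient descent on mean squared error. The data and hyperparameters are invented for illustration.

```python
def fit_line(xs, ys, lr=0.01, epochs=5000):
    """Fit y ≈ w*x + b by minimizing mean squared error with gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Analytic gradients of MSE = mean((w*x + b - y)^2) w.r.t. w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w  # step against the gradient
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    xs = [1, 2, 3, 4]
    ys = [3, 5, 7, 9]  # generated from y = 2x + 1
    w, b = fit_line(xs, ys)
    print(round(w, 2), round(b, 2))
```

Production work replaces this loop with a library optimizer, but the shape (forward pass, loss gradient, parameter update) is the same one being monitored and tuned in the responsibilities above.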
Job Summary:
We are seeking a highly skilled and proactive DevOps Engineer with 4+ years of experience to join our dynamic team. This role requires strong technical expertise across cloud infrastructure, CI/CD pipelines, container orchestration, and infrastructure as code (IaC). The ideal candidate should also have direct client-facing experience and a proactive approach to managing both internal and external stakeholders.
Key Responsibilities:
- Collaborate with cross-functional teams and external clients to understand infrastructure requirements and implement DevOps best practices.
- Design, build, and maintain scalable cloud infrastructure on AWS (EC2, S3, RDS, ECS, etc.).
- Develop and manage infrastructure using Terraform or CloudFormation.
- Manage and orchestrate containers using Docker and Kubernetes (EKS).
- Implement and maintain CI/CD pipelines using Jenkins or GitHub Actions.
- Write robust automation scripts using Python and Shell scripting.
- Monitor system performance and availability, and ensure high uptime and reliability.
- Execute and optimize SQL queries for MSSQL and PostgreSQL databases.
- Maintain clear documentation and provide technical support to stakeholders and clients.
Required Skills:
- Minimum 4+ years of experience in a DevOps or related role.
- Proven experience in client-facing engagements and communication.
- Strong knowledge of AWS services – EC2, S3, RDS, ECS, etc.
- Proficiency in Infrastructure as Code using Terraform or CloudFormation.
- Hands-on experience with Docker and Kubernetes (EKS).
- Strong experience in setting up and maintaining CI/CD pipelines with Jenkins or GitHub Actions.
- Solid understanding of SQL and working experience with MSSQL and PostgreSQL.
- Proficient in Python and Shell scripting.
Preferred Qualifications:
- AWS Certifications (e.g., AWS Certified DevOps Engineer) are a plus.
- Experience working in Agile/Scrum environments.
- Strong problem-solving and analytical skills.
Work Mode & Timing:
- Hybrid – Pune-based candidates preferred.
- Working hours: 12:30 PM to 9:30 PM IST to align with client time zones.
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources
▪ Develop ETL processes to collect, clean, and transform data from internal and external systems
▪ Support integration of data into dashboards, analytics tools, and reporting systems
▪ Collaborate with data analysts and software developers to improve data accessibility and performance
▪ Document workflows and maintain data infrastructure best practices
▪ Assist in identifying opportunities to automate repetitive data tasks
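The ETL responsibilities above can be sketched end to end with only the standard library: extract rows from CSV text, transform them (drop incomplete records, cast types), and load them into SQLite. The table, column names, and sample rows are invented for illustration.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; a real pipeline would read from files, APIs, or queues.
RAW = """property_id,city,price_usd
P-1,Shanghai,1200000
P-2,Chicago,
P-3,Shenzhen,950000
"""

def run_pipeline(raw_csv):
    """Extract CSV rows, clean them, and load them into an in-memory SQLite DB."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE listings (property_id TEXT, city TEXT, price_usd INTEGER)"
    )
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        if not rec["price_usd"]:  # transform: drop records missing a price
            continue
        rows.append((rec["property_id"], rec["city"], int(rec["price_usd"])))
    conn.executemany("INSERT INTO listings VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

if __name__ == "__main__":
    db = run_pipeline(RAW)
    print(db.execute("SELECT COUNT(*), MAX(price_usd) FROM listings").fetchone())
```

The same extract/transform/load shape scales up when SQLite is swapped for a warehouse and the inline string for real sources.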
Position Responsibilities:
- Develop and maintain architectural frameworks, standards and processes for Deltek cloud platform services
- Conceptual, logical and physical designs of cloud solutions and services
- Ensure all designs follow well-architected principles to meet security, reliability, performance and cost optimization requirements
- Collaborate with internal teams for end-to-end service delivery and support
- Work closely with other architects and technical leaders to create and refine roadmaps for the cloud platform
- Stay up to date with emerging cloud technologies and leverage them to continuously improve service quality and supportability
- Create and maintain technical design documents and participate in peer design reviews
Qualifications:
- B.S. in Computer Science, Engineering or related experience.
- Extensive knowledge and experience with public cloud providers: AWS, Azure, OCI
- 8+ years of experience in cloud design and implementation
- Strong hands-on experience with authentication services, DNS, SMTP, SFTP, NFS, monitoring tools and products, and backup and recovery
- Solid understanding of container orchestration, serverless architecture, CI/CD concepts and technologies
- Comprehensive knowledge and understanding of web, database, networking, and security standards and technologies
- Proven ability to work cross-functionally and collaboratively
- Strong analytical and communication skills, attention to detail
- Experience with SOC, NIST, GDPR, and FedRAMP compliance standards

Backend Engineer - Python
Location
Bangalore, India
Experience Required
2-3 years minimum
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.
Key Requirements
Technical Skills
• CS fundamentals are a must (Computer Networks, DBMS, OS, System Design, OOP)
• Python Expertise: Advanced proficiency in Python with a deep understanding of frameworks like Django, FastAPI, or Flask
• Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization
• API Development: Strong experience in designing and implementing RESTful APIs and GraphQL
• Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services
• Containerization: Proficiency with Docker and Kubernetes
• Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka
• Version Control: Advanced Git workflows and collaboration
Experience Requirements
• Minimum 2-3 years of backend development experience
• Proven track record of working on enterprise-level applications
• Experience building scalable systems handling high traffic loads
• Background in microservices architecture and distributed systems
• Experience with CI/CD pipelines and DevOps practices
Responsibilities
• Design, develop, and maintain robust backend services and APIs
• Optimize application performance and scalability
• Collaborate with frontend teams and product managers
• Implement security best practices and data protection measures
• Write comprehensive tests and maintain code quality
• Participate in code reviews and architectural discussions
• Monitor system performance and troubleshoot production issues
Preferred Qualifications
• Knowledge of caching strategies (Redis, Memcached)
• Understanding of software architecture patterns
• Experience with Agile/Scrum methodologies
• Open source contributions or personal projects


🚀 We're Hiring: Senior Backend Developer | Gurugram (On-site)
Looking to work with a fast-growing, tech-driven team in the SaaS space?
We’re on the lookout for a Senior Backend Developer with 2–3+ years of experience to help us build robust and scalable solutions.
🎯 What We're Looking For:
✅ Strong hands-on experience with Node.js & React
✅ Proficient in AWS (EC2, S3, Lambda, etc.)
✅ Good knowledge of MySQL
✨ Bonus: Exposure to AI/ML technologies
🏢 Preferred Industry Background: SaaS
📍 Location: Candidate should be from Delhi NCR
💰 Budget: Up to 11 LPA
If you're passionate about backend systems and want to make an impact in a high-growth environment, let’s connect!

We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you’ll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you’ll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence.
As the Sr. Staff Engineer, you’ll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We’re looking for a technical leader who’s not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.
Responsibilities
- Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
- Drive the design and development of scalable backend services using Python (Django, FastAPI, and task orchestration systems).
- Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
- Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
- Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
- Set and enforce engineering best practices, code quality standards, and operational excellence.
- Stay up-to-date with industry trends and advocate for continuous improvement in engineering processes.
Nice to Have:
- Experience in fintech, tax, or compliance industries.
- Familiarity with containerization tools like Docker and orchestration with Kubernetes.
- Background in security, observability, or compliance automation.
Requirements
- 7+ years of software engineering experience, with at least 2+ years in a leadership or principal-level role.
- Deep expertise in Python, including API development, performance optimization, and testing.
- Experience in Event-driven architecture, Kafka/RabbitMQ-like systems.
- Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
- Solid understanding of Terraform for infrastructure as code.
- Proficiency with Jenkins or similar CI/CD tooling.
- Comfortable balancing technical leadership with hands-on coding and problem-solving.
- Strong communication skills and a collaborative mindset.
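The event-driven architecture requirement above, in miniature: an in-process publish/subscribe bus standing in for a Kafka or RabbitMQ topic. The topic name and payload fields are made up for illustration.

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process pub/sub bus; brokers add durability and fan-out across services."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of handlers

    def subscribe(self, topic, handler):
        """Register a callable to receive every event published to `topic`."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        """Deliver `event` to all handlers subscribed to `topic`, in order."""
        for handler in self._subscribers[topic]:
            handler(event)

if __name__ == "__main__":
    bus = EventBus()
    filings = []
    # Hypothetical topic a tax-compliance pipeline might emit on.
    bus.subscribe("tax.filing.submitted", filings.append)
    bus.publish("tax.filing.submitted", {"filing_id": "F-001", "period": "2024-04"})
    print(filings)
```

Swapping this class for a Kafka producer/consumer keeps the publisher and handler code shaped the same way, which is what makes the pattern testable locally.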
Job Title: Sr. Node.js Developer
Location: Ahmedabad, Gujarat
Job Type: Full Time
Department: MEAN Stack
About Simform:
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market.
Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.
Role Overview:
We are looking for a Sr. Node.js Developer who not only possesses extensive backend expertise but also demonstrates proficiency in system design, cloud services, microservices architecture, and containerization. Additionally, a good understanding of the frontend tech stack, enabling support for frontend developers, is highly valued.
Key Responsibilities:
- Develop reusable, testable, maintainable, and scalable code with a focus on unit testing.
- Implement robust security measures and data protection mechanisms across projects.
- Champion the implementation of design patterns such as Test-Driven Development (TDD) and Behavior-Driven Development (BDD).
- Actively participate in architecture design sessions and sprint planning meetings, contributing valuable insights.
- Lead code reviews, providing insightful comments and guidance to team members.
- Mentor team members, assisting in debugging complex issues and providing optimal solutions.
Required Skills & Qualifications:
- Excellent written and verbal communication skills.
- Experience: 4+ years
- Advanced knowledge of JavaScript and TypeScript, including core concepts and best practices.
- Extensive experience in developing highly scalable services and APIs using various protocols.
- Proficiency in data modeling and optimizing database performance in both SQL and NoSQL databases.
- Hands-on experience with PostgreSQL and MongoDB, leveraging technologies like TypeORM, Sequelize, or Knex.
- Proficient in working with frameworks like NestJS, LoopBack, Express, and other TypeScript-based frameworks.
- Strong familiarity with unit testing libraries such as Jest, Mocha, and Chai.
- Expertise in code versioning using Git or Bitbucket.
- Practical experience with Docker for building and deploying microservices.
- Strong command of Linux, including familiarity with server configurations.
- Familiarity with queuing protocols and asynchronous messaging systems.
Preferred Qualification:
- Experience with frontend JavaScript concepts and frameworks such as ReactJS.
- Proficiency in designing and implementing cloud architectures, particularly on AWS services.
- Knowledge of GraphQL and its associated libraries like Apollo and Prisma.
- Hands-on experience with deployment pipelines and CI/CD processes.
- Experience with document, key/value, or other non-relational database systems like Elasticsearch, Redis, and DynamoDB.
- Ability to build AI-centric applications and work with machine learning models, Langchain, vector databases, embeddings, etc.
Why Join Us:
- Young Team, Thriving Culture
- Flat-hierarchical, friendly, engineering-oriented, and growth-focused culture.
- Well-balanced learning and growth opportunities
- Free health insurance.
- Office facilities with a game zone, in-office kitchen with affordable lunch service, and free snacks.
- Sponsorship for certifications/events and library service.
- Flexible work timing, leaves for life events, WFH, and hybrid options

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
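The ingestion and retrieval steps above (chunk, embed, upsert, similarity search) can be sketched end to end in miniature. The bag-of-words `embed` function below is a deliberately toy stand-in for a real embedding model (e.g. OpenAI embeddings), the in-memory `store` list stands in for a vector store such as MongoDB Atlas Vector Search, and the corpus text is invented:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks before embedding."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity, one of the metrics named above (vs. dot product)."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Ingest: chunk the corpus and store (chunk, vector) pairs -- the "vector store".
corpus = "User asked about tax filing deadlines. The filing deadline is April 15."
store = [(c, embed(c)) for c in chunk(corpus, size=6)]

# Retrieve: rank stored chunks by similarity to the query vector; the winning
# chunk would be injected into the LLM prompt as context.
query = embed("when is the tax filing deadline")
best = max(store, key=lambda pair: cosine(query, pair[1]))
print(best[0])
```

In the real pipeline each stage is swapped for production infrastructure (embedding API, vector index, LLM call), but the data flow is the same.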
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands-on with parameter-efficient fine-tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Location: Hybrid/ Remote
Openings: 2
Experience: 5–12 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities
Architect & Design:
- Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
- Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
- Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
- Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.
Development & Debugging:
- Write clean, maintainable, and efficient frontend code.
- Debug and troubleshoot code to ensure robust, high-performing applications.
- Develop reusable frontend libraries that can be leveraged across multiple projects.
AI Awareness (Preferred):
- Understand AI/ML fundamentals and how they can enhance frontend applications.
- Collaborate with teams integrating AI-based features into chat applications.
Collaboration & Reporting:
- Work closely with cross-functional teams to align on architecture and deliverables.
- Regularly report progress, identify risks, and propose mitigation strategies.
Quality Assurance:
- Implement unit tests and end-to-end tests to ensure code quality.
- Participate in code reviews and enforce best practices.
Required Skills
- 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
- Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
- Proficiency with modern frameworks like React, Angular, or Node.js.
- Backend familiarity with Java, Spring Boot (or similar technologies).
- Experience developing real-world, at-scale products.
- General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
- Strong problem-solving, debugging, and performance optimization skills.

Location: Hybrid/ Remote
Openings: 2
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or related field
Job Responsibilities
Problem Solving & Optimization:
- Analyze and resolve complex technical and application issues.
- Optimize application performance, scalability, and reliability.
Design & Develop:
- Build, test, and deploy scalable full-stack applications with high performance and security.
- Develop clean, reusable, and maintainable code for both frontend and backend.
AI Integration (Preferred):
- Collaborate with the team to integrate AI/ML models into applications where applicable.
- Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.
Technical Leadership & Mentorship:
- Provide guidance, mentorship, and code reviews for junior developers.
- Foster a culture of technical excellence and knowledge sharing.
Agile & Delivery Management:
- Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
- Define and scope backlog items, track progress, and ensure timely delivery.
Collaboration:
- Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
- Coordinate with geographically distributed teams.
Quality Assurance & Security:
- Conduct peer reviews of designs and code to ensure best practices.
- Implement security measures and ensure compliance with industry standards.
Innovation & Continuous Improvement:
- Identify areas for improvement in the software development lifecycle.
- Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.
Required Skills
- Strong proficiency in JavaScript, HTML5, CSS3
- Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
- Backend development experience with Java, Spring Boot (Node.js is a plus)
- Knowledge of REST APIs, microservices, and scalable architectures
- Familiarity with cloud platforms (AWS, Azure, or GCP)
- Experience with Agile/Scrum methodologies and JIRA for project tracking
- Proficiency in Git and version control best practices
- Strong debugging, performance optimization, and problem-solving skills
- Ability to analyze customer requirements and translate them into technical specifications

Location: Hybrid/ Remote
Openings: 5
Experience: 0–2 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities:
Backend Development & APIs
- Build microservices that provide REST APIs to power web frontends.
- Design clean, reusable, and scalable backend code meeting enterprise security standards.
- Conceptualize and implement optimized data storage solutions for high-performance systems.
Deployment & Cloud
- Deploy microservices using a common deployment framework on AWS and GCP.
- Inspect and optimize server code for speed, security, and scalability.
Frontend Integration
- Work on modern front-end frameworks to ensure seamless integration with back-end services.
- Develop reusable libraries for both frontend and backend codebases.
AI Awareness (Preferred)
- Understand how AI/ML or Generative AI can enhance enterprise software workflows.
- Collaborate with AI specialists to integrate AI-driven features where applicable.
Quality & Collaboration
- Participate in code reviews to maintain high code quality.
- Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.
Required Skills:
- Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
- Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
- Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
- Ability to design and implement RESTful APIs and understand their impact on client-side applications
- Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
- Experience working with Agile and Scrum methodologies
- Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
- 5+ years of experience in SQL and NoSQL database development and optimization.
- Strong hands-on experience with Amazon Redshift, MySQL, MongoDB, and Flyway.
- In-depth understanding of data warehousing principles and performance tuning techniques.
- Strong hands-on experience in building complex aggregation pipelines in NoSQL databases such as MongoDB.
- Proficient in Python or Scala for data processing and automation.
- 3+ years of experience working with AWS-managed database services.
- 3+ years of experience with Power BI or similar BI/reporting platforms.
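The aggregation-pipeline requirement can be illustrated with a small sketch. The `orders` collection, field names, and pipeline are all hypothetical; in production the `pipeline` list would be passed to pymongo's `collection.aggregate()`, while the pure-Python loop below mirrors what the `$match`/`$group`/`$sort` stages compute server-side:

```python
# Hypothetical orders data standing in for a MongoDB collection.
orders = [
    {"status": "paid", "region": "south", "amount": 120},
    {"status": "paid", "region": "north", "amount": 80},
    {"status": "refunded", "region": "south", "amount": 50},
    {"status": "paid", "region": "south", "amount": 30},
]

# The aggregation pipeline as it would be sent to MongoDB:
pipeline = [
    {"$match": {"status": "paid"}},                               # filter documents
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}}, # sum per region
    {"$sort": {"total": -1}},                                     # highest total first
]

# Pure-Python equivalent of the three stages, for illustration only.
matched = [o for o in orders if o["status"] == "paid"]
totals: dict[str, int] = {}
for o in matched:
    totals[o["region"]] = totals.get(o["region"], 0) + o["amount"]
result = sorted(
    ({"_id": region, "total": total} for region, total in totals.items()),
    key=lambda d: d["total"],
    reverse=True,
)
print(result)
```

Pushing this work into the pipeline (rather than fetching rows and aggregating client-side) is what makes it a performance-tuning concern: the database can use indexes for `$match` and avoids shipping raw documents over the wire.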

Job Description:
Title : Python AWS Developer with API
Tech Stack : AWS API Gateway, Lambda, Oracle RDS, SQL and database management, object-oriented programming (OOP) principles, JavaScript, object-relational mappers (ORMs), Git, Docker, dependency management, CI/CD, AWS cloud and S3, Secrets Manager, Python, API frameworks; well-versed in both front-end and back-end programming (Python).
Responsibilities:
· Build high-performance APIs using AWS services and Python; write and debug Python code and integrate the application with third-party web services.
· Troubleshoot and debug non-production defects, with a focus on back-end development, APIs, coding, and application monitoring.
· Design core application logic.
· Support dependent teams during UAT and perform functional application testing, including API testing with Postman.
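A minimal sketch of the kind of handler this role involves, assuming an API Gateway proxy integration in front of Lambda. The route payload and response shape are illustrative only; a real handler would also pull configuration (e.g. an Oracle RDS connection string) from Secrets Manager rather than hard-coding anything:

```python
import json

def lambda_handler(event: dict, context: object = None) -> dict:
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    API Gateway delivers the HTTP body as a JSON string in event["body"];
    the handler must return statusCode/headers/body in this exact shape.
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")  # hypothetical request field
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test mirroring what Postman would send through API Gateway.
resp = lambda_handler({"body": json.dumps({"name": "dev"})})
print(resp["statusCode"], resp["body"])
```

Keeping the handler a plain function like this makes the Postman-style functional testing mentioned above easy to reproduce locally before deploying.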
We are hiring a Site Reliability Engineer (SRE) to join our high-performance engineering team. In this role, you'll be responsible for driving reliability, performance, scalability, and security across cloud-native systems while bridging the gap between development and operations.
Key Responsibilities
- Design and implement scalable, resilient infrastructure on AWS
- Take ownership of the SRE function – availability, latency, performance, monitoring, incident response, and capacity planning
- Partner with product and engineering teams to improve system reliability, observability, and release velocity
- Set up, maintain, and enhance CI/CD pipelines using Jenkins, GitHub Actions, or AWS CodePipeline
- Conduct load and stress testing, identify performance bottlenecks, and implement optimization strategies
Required Skills & Qualifications
- Proven hands-on experience in cloud infrastructure design (AWS strongly preferred)
- Strong background in DevOps and SRE principles
- Proficiency with performance testing tools like JMeter, Gatling, k6, or Locust
- Deep understanding of cloud security and best practices for reliability engineering
- AWS Solution Architect Certification – Associate or Professional (preferred)
- Solid problem-solving skills and a proactive approach to systems improvement
Why Join Us?
- Work with cutting-edge technologies in a cloud-native, fast-paced environment
- Collaborate with cross-functional teams driving meaningful impact
- Hybrid work culture with flexibility and autonomy
- Open, inclusive work environment focused on innovation and excellence
We are looking for a highly skilled DevOps/Cloud Engineer with over 6 years of experience in infrastructure automation, cloud platforms, networking, and security. If you are passionate about designing scalable systems and love solving complex cloud and DevOps challenges—this opportunity is for you.
Key Responsibilities
- Design, deploy, and manage cloud-native infrastructure using Kubernetes (K8s), Helm, Terraform, and Ansible
- Automate provisioning and orchestration workflows for cloud and hybrid environments
- Manage and optimize deployments on AWS, Azure, and GCP for high availability and cost efficiency
- Troubleshoot and implement advanced network architectures including VPNs, firewalls, load balancers, and routing protocols
- Implement and enforce security best practices: IAM, encryption, compliance, and vulnerability management
- Collaborate with development and operations teams to improve CI/CD workflows and system observability
Required Skills & Qualifications
- 6+ years of experience in DevOps, Infrastructure as Code (IaC), and cloud-native systems
- Expertise in Helm, Terraform, and Kubernetes
- Strong hands-on experience with AWS and Azure
- Solid understanding of networking, firewall configurations, and security protocols
- Experience with CI/CD tools like Jenkins, GitHub Actions, or similar
- Strong problem-solving skills and a performance-first mindset
Why Join Us?
- Work on cutting-edge cloud infrastructure across diverse industries
- Be part of a collaborative, forward-thinking team
- Flexible hybrid work model – work from anywhere while staying connected
- Opportunity to take ownership and lead critical DevOps initiatives

🔍 Job Description:
We are looking for an experienced and highly skilled Technical Lead to guide the development and enhancement of a large-scale Data Observability solution built on AWS. This platform is pivotal in delivering monitoring, reporting, and actionable insights across the client's data landscape.
The Technical Lead will drive end-to-end feature delivery, mentor junior engineers, and uphold engineering best practices. The position reports to the Programme Technical Lead / Architect and involves close collaboration to align on platform vision, technical priorities, and success KPIs.
🎯 Key Responsibilities:
- Lead the design, development, and delivery of features for the data observability solution.
- Mentor and guide junior engineers, promoting technical growth and engineering excellence.
- Collaborate with the architect to align on platform roadmap, vision, and success metrics.
- Ensure high quality, scalability, and performance in data engineering solutions.
- Contribute to code reviews, architecture discussions, and operational readiness.
🔧 Primary Must-Have Skills (Non-Negotiable):
- 5+ years in Data Engineering or Software Engineering roles.
- 3+ years in a technical team or squad leadership capacity.
- Deep expertise in AWS Data Services: Glue, EMR, Kinesis, Lambda, Athena, S3.
- Advanced programming experience with PySpark, Python, and SQL.
- Proven experience in building scalable, production-grade data pipelines on cloud platforms.
Job Title: Backend Engineer – SDE II
Location: Delhi
Employment Type: Full-Time
About the Role:
We are looking for a passionate and experienced Backend Engineer (SDE II) to join our growing team. As a core backend contributor, you'll be responsible for building scalable, secure, and high-performance backend systems. You’ll collaborate closely with product, DevOps, and frontend teams to deliver best-in-class technology solutions.
Key Responsibilities:
- Design, develop, and maintain robust backend systems using Golang and PostgreSQL.
- Build and manage scalable microservices and APIs for high availability and performance.
- Ensure secure authentication and authorization using AWS services.
- Work with Docker, Kubernetes (EKS) to build and deploy containerized applications.
- Implement messaging and real-time systems using Kafka and WebSockets.
- Maintain best practices in performance tuning, monitoring, and CI/CD.
- Collaborate with cross-functional teams to gather requirements and translate into technical solutions.
- Write clean, maintainable, and well-tested code.
- Contribute to architectural decisions and code reviews.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field.
- Proficiency in Golang and strong experience with PostgreSQL.
- Solid understanding of AWS Cloud, EKS, and Kubernetes.
- Hands-on experience with Docker and container orchestration tools.
- Strong experience in building and maintaining RESTful APIs and real-time systems using Kafka/WebSocket.
- Expertise in building and scaling microservices-based architecture.
- Good understanding of Go concurrency and parallelism.
What We Offer:
- Opportunity to work in a fast-paced, innovative environment.
- Flexible work culture and flat team structure.
- Competitive salary and benefits.
- Work with cutting-edge technologies and a highly talented team.

- Build and maintain full stack applications—from planning and design to deployment and maintenance.
- Develop responsive and dynamic user interfaces using React.js.
- Create robust server-side logic, APIs, and microservices with Node.js.
- Design and optimize schemas in MongoDB and manage high-performance caching with Redis.
- Deploy, scale, and manage applications on AWS (EC2, S3, Lambda, etc.).
- Write well-tested, maintainable code following TDD principles.
- Partner with product managers, designers, and fellow engineers to deliver top-quality features and improvements.
- Proven expertise in Node.js and React.
- Strong experience with MongoDB and Redis.
- Deep understanding of AWS services and cloud-native application design.
- Solid grasp of TDD, clean code practices, and software craftsmanship.
- B.E. in Computer Science, Engineering, or a related field.
- Experience with CI/CD tools such as Jenkins, GitLab CI, or similar.
- Familiarity with software design patterns and scalable system architecture.
- Excellent communication and teamwork skills.
- A self-starter attitude with strong problem-solving abilities.


Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics

We are seeking a Senior Laravel Developer with at least 8 years of experience and a proven track record in developing and maintaining PHP/Laravel-based websites and applications. The ideal candidate should excel in creating high-performance web applications using PHP and MySQL, with expertise in debugging, performance optimisation, and scalability.
Responsibilities:
- Write clean, maintainable code adhering to company coding standards.
- Develop and enhance existing PHP/Laravel/CodeIgniter projects.
- Troubleshoot, test, and maintain core product software and databases for optimisation and functionality.
- Contribute to all phases of the development lifecycle.
- Follow industry best practices for secure and scalable development.
Key Skills
Technical Proficiency
- Expertise in PHP, MySQL, and related web technologies.
- Strong experience in debugging, performance optimisation, and scalability.
- Familiarity with resource-intensive application architectures.
- Hands-on experience with AWS services (e.g., EC2, S3, RDS, Lambda) for scalable application deployment.
- Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or AWS CodePipeline.
- Strong knowledge of Git version control and collaborative workflows.
- Experience with microservices architecture and containerization (Docker/Kubernetes) is a plus.
- Familiarity with front-end frameworks (Vue.js, React, or Angular) and modern JavaScript practices is a bonus.
Frameworks
- Hands-on experience with Laravel and CodeIgniter frameworks.
Proactive Problem-Solving
- Ability to identify potential bottlenecks and provide innovative solutions.
Soft Skills
- Adaptability to balance support and development tasks effectively.
- Ability to work independently and collaboratively in teams.
- Exceptional problem-solving skills and attention to detail.
Leadership and Mentorship
- Mentor junior and mid-level developers, providing guidance on coding standards, best practices, and technical challenges.
- Conduct code reviews to ensure high-quality, maintainable codebases.
- Collaborate with stakeholders to define technical requirements and project roadmaps.
Documentation and Best Practices
- Maintain comprehensive technical documentation for projects, processes, and workflows.
- Establish and enforce coding standards, security protocols, and development guidelines across the team.
Preferred Background
- Proven experience managing performance-critical applications.
- Exposure to legacy systems alongside modern development practices.
- Prior experience balancing support and development roles.
- Demonstrated expertise in resource-intensive application development and optimisation.
- Evidence of significant past performance improvements in web applications.
- Proficiency in frameworks such as Laravel and CodeIgniter, and relevant tech stacks.
Qualifications
- At least 8 years of experience managing performance-critical applications.
- BE/B.Tech in Computer Science or equivalent degree required.
- Sound knowledge of OOP principles and best practices.
- Familiarity with Git version control and Agile/Scrum methodologies.
This role offers the opportunity to work on innovative projects while making meaningful contributions to high-scale applications. If you are passionate about creating exceptional web applications and thrive in a fast-paced environment, we encourage you to apply!

What you’ll do here:
• Define and evolve the architecture of the Resulticks platform, ensuring alignment with business strategy and scalability requirements.
• Lead the design of highly scalable, fault-tolerant, secure, and performant systems.
• Provide architectural oversight across application layers—UI, services, data, and integration.
• Drive modernization of the platform using cloud-native, microservices, and API-first approaches.
• Collaborate closely with product managers, developers, QA, and DevOps to ensure architectural integrity across development lifecycles.
• Identify and address architectural risks, tech debt, and scalability challenges early in the design process.
• Guide the selection and integration of third-party technologies and platforms.
• Define and enforce architectural best practices, coding standards, and technology governance.
• Contribute to roadmap planning by assessing feasibility and impact of new features or redesigns.
• Participate in code reviews and architectural discussions, mentoring developers and technical leads.
• Stay current on emerging technologies, architecture patterns, and industry best practices to maintain platform competitiveness.
• Ensure security, compliance, and data privacy are embedded in architectural decisions, especially for industries like BFSI and telecom.
What you will need to thrive:
• 15+ years of experience in software/product development with at least 5 years in a senior architecture role.
• Proven experience architecting SaaS or large-scale B2B platforms, ideally in MarTech, AdTech, or CRM domains.
• Deep expertise in cloud architecture (AWS, Azure, or GCP), containerization (Docker, Kubernetes), and serverless technologies.
• Strong command of modern backend and frontend frameworks (e.g., .NET Core, Java, Python, React).
• Excellent understanding of data architecture including SQL/NoSQL, event streaming, and analytics pipelines.
• Familiarity with CI/CD, DevSecOps, and monitoring frameworks.
• Strong understanding of security protocols, compliance standards (e.g., GDPR, ISO 27001), and authentication/authorization frameworks (OAuth, SSO, etc.).
• Effective communication and leadership skills, with experience influencing C-level and cross-functional stakeholders.
• Strong analytical and problem-solving abilities, with a strategic mindset.
About us
RockED is the premier people development platform for the automotive industry, supporting the entire employee lifecycle from pre-hire and onboarding to upskilling and career transitions. With microlearning content, gamified delivery, and real-time feedback, RockED is educating the automotive workforce and solving the industry's greatest business challenges.
The RockED Company Inc. is headquartered in Florida. Backed by top industry experts and investors, we’re a well-funded startup on an exciting growth journey. Our R&D team (Indian entity) is at the core of all product and technology innovation.
Check out our website https://www.rocked.us/
Your Impact
We’re looking for passionate and self-driven Backend Software Engineers who can combine technical depth with strategic product thinking. This role is ideal for someone who enjoys working with modern backend stacks, brings strong critical thinking to the table, and can balance short-term delivery with long-term technical vision.
- Build scalable and reliable backend services and APIs that power the product.
- Design, implement, and maintain databases, ensuring data integrity, security, and efficient retrieval.
- Implement the core logic that makes applications work, handling data processing, user requests, and system operations.
- Contribute actively to system architecture, proposing enhancements and leading design discussions for new features and services.
- Work closely with product managers and designers to turn ideas into reality and shape the product roadmap.
- Optimize systems for performance, scalability, and security
- Stay up-to-date with new technologies and frameworks, contributing to the advancement of software development practices
- Drive code quality through writing unit tests, code reviews, and documentation.
- Take ownership of end-to-end feature development — from design to deployment and monitoring.
- Mentor junior developers, setting high standards for engineering excellence within the team
What skills do you need?
- Extensive (Must have) hands-on experience in JavaScript / TypeScript, backend development using Node.js and Express.js, and database management with MySQL.
- Strong command of JavaScript and understanding of its quirks and best practices
- Exposure to system design and interest in building scalable, high-availability systems.
- Experience from a high-growth product-based startup is a must-have.
- Ability to think strategically when designing systems—not just how to build, but why
- Prior work on B2C applications with a focus on performance and user experience
- Ensure that applications can handle increasing loads and maintain performance, even under heavy traffic
- Work with complex queries for performing sophisticated data manipulation, analysis, and reporting.
- Knowledge of Sequelize, MongoDB, and AWS would be an advantage.
- Experience in optimizing backend systems for speed and scalability.
Why choose a career at RockED?
- Remote-first culture with initial in-office training at our Bangalore HQ to help you settle in and connect with the team.
- Company-sponsored travel and stay during quarterly in-person meetups.
- Comprehensive health insurance – ₹10 lakh coverage for you and your family, fully paid by RockED.
- A rare opportunity to learn directly from our investors, who have built and exited multi-million dollar companies.
- Direct access to leadership with experience at top global companies like Adobe, Microsoft, Walmart, and more.
- Be part of a diverse and global team working across the US, India, and Germany.
- Unlimited leave policy – built on trust and so far used responsibly by our team.
• Strong knowledge of JavaScript, TypeScript, and Node.js (experienced in Nest.js, Express.js, or any other framework).
• Knowledge of AWS technologies such as DynamoDB, Elasticsearch, relational and NoSQL databases, EventBridge and messaging and queuing solutions like SQS, SNS (or any other cloud platform like Google Cloud or Azure).
• General understanding of common design and architectural patterns, with the ability to produce elegant designs in back-end, REST API, EDA and microservice architectures.
• Passion for delivering clean code, API tests, and maintainable documentation.
• Familiarity with Agile/Scrum methodologies and DevOps best practices.
• Knowledge of common tooling such as GitLab, Docker, and CI/CD solutions.
About Eazeebox
Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.
About the Role
We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows.
What We’re Looking For
- 2+ years in a DevOps or SRE role in production-grade, cloud-native environments (AWS-focused)
- Solid hands-on experience with Docker, Kubernetes/EKS, and container networking
- Proficiency with CI/CD tools, especially GitHub Actions
- Experience with staged rollout strategies for microservices
- Familiarity with event-driven architectures using SNS, SQS, and Step Functions
- Strong ability to optimize cloud costs without compromising uptime or performance
- Scripting/automation skills in Python, Go, or Bash
- Good understanding of observability, on-call readiness, and incident response workflows
Nice to Have
- Experience in B2B commerce, delivery/logistics networks, or on-demand operations
- Exposure to real-time inventory systems or marketplaces
- Worked on high-concurrency, low-latency backend systems
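Since the role calls for Python automation skills and familiarity with SQS-based event-driven systems, here is a minimal, illustrative sketch of the retry-with-exponential-backoff pattern commonly used when polling AWS APIs. It is not from the posting; the function names and the simulated flaky call are invented for illustration.

```python
import random
import time
from functools import wraps

def with_backoff(max_attempts=5, base_delay=0.5, cap=8.0):
    """Retry a flaky call with exponential backoff and full jitter,
    a common pattern when polling cloud APIs such as SQS."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of attempts: surface the error
                    # Sleep a random fraction of the capped, doubling delay.
                    delay = min(cap, base_delay * 2 ** attempt)
                    time.sleep(random.uniform(0, delay))
        return wrapper
    return decorator

@with_backoff(max_attempts=4, base_delay=0.01)
def flaky_poll(state={"calls": 0}):
    # Stand-in for a receive call that fails transiently twice.
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("transient error")
    return "message received"
```

In a real consumer, `flaky_poll` would wrap the actual SDK call (e.g. an SQS `receive_message`), and the jitter keeps many workers from retrying in lockstep.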
We are looking for a proactive and detail-oriented AWS Project Manager / AWS Administrator to join our team. The ideal candidate will be responsible for configuring, monitoring, and managing AWS cloud services while optimizing usage, ensuring compliance, and driving automation across deployments.
Key Responsibilities:
1) Configure and manage AWS Cloud services including VPCs, URL proxies, Bastion Hosts, and C2S access points.
2) Monitor AWS resources and ensure high availability and performance.
3) Automate deployment processes to reduce manual efforts and enhance operational efficiency.
4) Monitor costs daily and set up backups of EC2-hosted source code to S3.
5) Manage AWS Elastic Beanstalk (EB), RDS, CloudFront, Load Balancer, and NAT Gateway traffic.
6) Implement security and compliance best practices using AWS Secret Manager, IAM, and organizational policies.
7) Perform system setup for CBT (Computer-Based Test) exams.
8) Utilize AWS Pricing Calculator for resource estimation and budgeting.
9) Implement RI (Reserved Instance) and Spot Instance strategies for cost savings.
10) Create and manage S3 bucket policies with controlled access, especially for specific websites.
11) Automate disaster recovery and backup strategies for RDS and critical infrastructure.
12) Configure and manage AWS environments using EB, RDS, and routing setups.
13) Create and maintain AWS CloudFormation templates for infrastructure as code.
14) Develop comprehensive documentation for all AWS-related activities and configurations.
15) Ensure adherence to AWS best practices in all deployments and services.
16) Handle account creation, policy management, and SSO configuration for organizational identity management.
17) Configure and design AWS architectures end-to-end.
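The website-restricted S3 bucket policies mentioned in the responsibilities above can be expressed as a JSON policy document. Below is a hedged sketch that builds one in Python; the bucket name and referring site are hypothetical placeholders, not values from this posting.

```python
import json

# Hypothetical bucket and referring site; substitute real values.
BUCKET = "example-exam-assets"
ALLOWED_REFERER = "https://www.example.edu/*"

def restricted_read_policy(bucket: str, referer: str) -> str:
    """Build an S3 bucket policy allowing GetObject only when the
    request's Referer header matches a specific website."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowGetFromSpecificSite",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                # aws:Referer is a soft control; pair it with other restrictions.
                "Condition": {"StringLike": {"aws:Referer": referer}},
            }
        ],
    }
    return json.dumps(policy)

policy_json = restricted_read_policy(BUCKET, ALLOWED_REFERER)
```

The resulting JSON could then be applied with the S3 `put_bucket_policy` API (via the console, CLI, or an SDK); note that Referer headers are spoofable, so this is access shaping, not hard security.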
Required Skills:
· Strong understanding of AWS services and architecture.
· Hands-on experience with:
1) Elastic Beanstalk (EB)
2) RDS
3) EC2
4) S3
5) CloudFront
6) Load Balancer & NAT Gateway
7) CloudFormation
8) IAM, SSO & Identity Management
· Proficient in AWS cost optimization techniques.
· Ability to automate regular tasks and create efficient workflows.
· Clear and structured documentation skills.
Qualifications:
AWS Certified Solutions Architect certification (required).
5+ years of experience in this field (required).
Previous experience with managed AWS services.
We are looking for someone with a hacker mindset who is ready to pick up new problems and build full-stack AI solutions for some of the biggest brands in the country and the world.

Mode of Hire: Permanent
Required Skills Set (Mandatory): Linux, Shell Scripting, Python, AWS, Security best practices, Git
Desired Skills (Good if you have): Ansible, Terraform
Job Responsibilities
- Design, develop, and maintain deployment pipelines and automation tooling to improve platform efficiency, scalability, and reliability.
- Manage infrastructure and services in production AWS environments.
- Drive platform improvements with a focus on security, scalability, and operational excellence.
- Collaborate with engineering teams to enhance development tooling, streamline access workflows, and improve platform usability through feedback.
- Mentor junior engineers and help foster a culture of high-quality engineering and knowledge sharing.
Job Requirements
- Strong foundational understanding of Linux systems.
- Cloud experience (e.g., AWS) with strong problem-solving in cloud-native environments.
- Proven track record of delivering robust, well-documented, and secure automation solutions.
- Comfortable owning end-to-end delivery of infrastructure components and tooling.
Preferred Qualifications
- Advanced system and cloud optimization skills.
- Prior experience in platform teams or DevOps roles at product-focused startups.
- Demonstrated contributions to internal tooling, open-source, or automation projects.

About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians (many with Ph.D.s) from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Scientist at Moative, you’ll play a crucial role in extracting valuable insights from data to drive informed decision-making. You’ll work closely with cross-functional teams to build predictive models and develop solutions to complex business problems. You will also be involved in conducting experiments, building POCs and prototypes.
Responsibilities
- Support end-to-end development and deployment of ML/ AI models - from data preparation, data analysis and feature engineering to model development, validation and deployment
- Gather, prepare and analyze data, write code to develop and validate models, and continuously monitor and update them as needed.
- Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
- Document methodologies and results, present findings and communicate insights to non-technical audiences
Skills & Requirements
- Proficiency in Python and familiarity with basic Python libraries for data analysis and ML algorithms (such as NumPy, pandas, scikit-learn, NLTK).
- Strong understanding and experience with data analysis, statistical and mathematical concepts and ML algorithms
- Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
- Broad understanding of data structures and data engineering.
- Strong communication skills
- Strong collaboration skills, continuous learning attitude and a problem solving mind-set
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, since rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hiring someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration, and we expect you to be present in the city. We intend to move to a hybrid model in a few months’ time.


About Moative
Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians (many with Ph.D.s) from top engineering and research institutes such as the IITs, CERN, IISc, and UZH. Our team includes academicians, IBM Research Fellows, and former founders.
Work you’ll do
As a Data Engineer, you will work on data architecture, large-scale processing systems, and data flow management. You will build and maintain optimal data architecture and data pipelines, assemble large, complex data sets, and ensure that data is readily available to data scientists, analysts, and other users. In close collaboration with ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade solutions that directly impact business outcomes. Ultimately, you will be responsible for developing and implementing systems that optimize the organization’s data use and data quality.
Responsibilities
- Create and maintain optimal data architecture and data pipelines on cloud infrastructure (such as AWS/ Azure/ GCP)
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements
- Build the pipeline infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Support development of analytics that utilize the data pipeline to provide actionable insights into key business metrics
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs
Who you are
You are a passionate and results-oriented engineer who understands the importance of data architecture and data quality to impact solution development, enhance products, and ultimately improve business applications. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You have experience in developing and deploying data pipelines to support real-world applications. You have a good understanding of data structures and are excellent at writing clean, efficient code to extract, create and manage large data sets for analytical uses. You have the ability to conduct regular testing and debugging to ensure optimal data pipeline performance. You are excited at the possibility of contributing to intelligent applications that can directly impact business services and make a positive difference to users.
Skills & Requirements
- 3+ years of hands-on experience as a data engineer, data architect or similar role, with a good understanding of data structures and data engineering.
- Solid knowledge of cloud infrastructure and data-related services on AWS (EC2, EMR, RDS, Redshift) and/or Azure.
- Advanced knowledge of SQL, including writing complex queries, stored procedures, views, etc.
- Strong experience with data pipeline and workflow management tools (such as Luigi, Airflow).
- Experience with common relational (SQL), NoSQL, and graph databases.
- Strong experience with scripting in Python (including PySpark), Scala, etc.
- Practical experience with basic DevOps concepts: CI/CD, containerization (Docker, Kubernetes), etc.
- Experience with big data tools (Spark, Kafka, etc) and stream processing.
- Excellent communication skills to collaborate with colleagues from both technical and business backgrounds, discuss and convey ideas and findings effectively.
- Ability to analyze complex problems, think critically for troubleshooting and develop robust data solutions.
- Ability to identify and tackle issues efficiently and proactively, conduct thorough research and collaborate to find long-term, scalable solutions.
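The extract–transform–load flow at the core of this role can be sketched with the standard library alone. This is a toy illustration only: the sample data, field names, and in-memory "warehouse" are invented, and a production pipeline would use orchestration (e.g. Airflow) and real sources and sinks instead.

```python
import csv
import io
from collections import defaultdict

# Invented sample data; a real pipeline would read from S3, RDS, etc.
RAW = """region,amount
south,120
north,80
south,45
"""

def extract(source: str):
    """Extract: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows):
    """Transform: cast types and aggregate totals per region."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["region"]] += int(row["amount"])
    return dict(totals)

def load(totals, sink):
    """Load: write aggregates to a destination (here, a plain dict)."""
    sink.update(totals)
    return sink

warehouse = {}
load(transform(extract(RAW)), warehouse)
```

Keeping each stage a small, pure function is what makes pipelines like this easy to test, monitor, and re-run on failure.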
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creep sets in unless we constantly question it. We are deliberate about which rituals we commit to, since rituals take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes rather than hiring someone who has less to do. We don’t like the squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high-quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration, and we expect you to work out of our offices in Chennai.
Job Description
We are looking for a hands-on Tech Lead – Java with strong software engineering fundamentals, a deep understanding of Java-based backend systems, and proven experience leading agile teams. This role involves a balance of individual contribution and technical leadership — mentoring developers, designing scalable architectures, and driving the success of product delivery in fast-paced environments.
Key Responsibilities
- Lead the end-to-end design, development, and deployment of Java-based applications and RESTful APIs.
- Collaborate with product managers and architects to define technical solutions and translate business requirements into scalable software.
- Guide and mentor team members in best coding practices, design patterns, and architectural decisions.
- Drive code reviews, technical discussions, and ensure high code quality and performance standards.
- Troubleshoot critical production issues and implement long-term fixes and improvements.
- Advocate for continuous improvement in tools, processes, and systems across the engineering team.
- Stay up to date with modern technologies and recommend their adoption where appropriate.
Required Skills
- 5+ years of experience in Java backend development with expertise in Spring/Spring Boot and RESTful services.
- Solid grasp of Object-Oriented Programming (OOP), system design, and design patterns.
- Proven experience leading a team of engineers or taking ownership of modules/projects.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.) is a strong advantage.
- Familiarity with Agile/Scrum methodologies and working in cross-functional teams.
- Excellent problem-solving, debugging, and analytical skills.
- Strong communication and leadership skills.
About HummingWave
HummingWave is a leading IT product development company specializing in building full-scale application systems with robust cloud backends, sleek mobile/web frontends, and seamless enterprise integrations. With 50+ digital products delivered across domains for clients in the US, Europe, and Asia-Pacific, we are a team of highly skilled engineers committed to technical excellence and innovation.


iSchoolConnect is an online platform that makes the University Admissions process hassle-free, fun and accessible to students around the globe. Using our unique AI technology, we allow students to apply to multiple universities with a single application. iSchoolConnect also connects with institutions worldwide and aids them in the transformation of their end-to-end admission processes using our various cutting-edge use cases.
Designation : Senior Fullstack Developer
We are seeking an experienced and highly skilled Senior Full Stack Developer to join our growing development team. The ideal candidate will have extensive experience in building scalable, high-performance web applications and will be responsible for delivering robust backend services and modern, user-friendly frontend solutions. This role will also involve working with cloud services, databases, and ensuring the technical success of projects from inception to deployment.
Responsibilities:
- End-to-End Development: Lead the development and maintenance of both frontend and backend applications. Write clean, scalable, and efficient code for web applications.
- Backend Development: Develop RESTful APIs and microservices using technologies like Node.js, Express.js, and Nest.js.
- Frontend Development: Implement and maintain modern, responsive web applications using frameworks such as React and Angular.
- Database Management: Design and maintain scalable databases, including MongoDB and MySQL, to ensure data consistency, performance, and reliability.
- Cloud Services: Manage cloud infrastructure on AWS and Google Cloud, ensuring optimal performance, scalability, and cost-efficiency.
- Collaboration: Work closely with product managers, designers, and other engineers to deliver new features and improvements.
- Code Quality & Testing: Follow best practices for code quality and maintainability, utilizing Test-Driven Development (TDD), and write unit and integration tests using Jest and Postman.
- Mentorship: Provide guidance to junior developers, perform code reviews, and ensure high standards of development across the team.
Requirements:
- Experience: 5+ years of hands-on experience in full stack development, with a proven track record in both backend and frontend development.
- Backend Technologies: Proficiency in Node.js, Express.js, and Nest.js for building scalable backend services and APIs.
- Frontend Technologies: Strong experience with frameworks such as React or Angular to build dynamic and responsive user interfaces.
- Databases: Strong knowledge of both relational (MySQL) and NoSQL (MongoDB) databases.
- Cloud Infrastructure: Hands-on experience with AWS and Google Cloud for managing cloud services, databases, and deployments.
- Version Control: Proficient in Git for version control and collaboration.
- Testing: Experience in writing unit and integration tests with Jest and Postman.
- Problem Solving: Strong analytical and problem-solving skills to work with complex systems.
- Communication: Excellent communication and teamwork skills, with the ability to collaborate cross-functionally.
Nice-to-Have:
- Experience with Docker, Kubernetes, and CI/CD tools.
- Familiarity with GraphQL and Microservices Architecture.
- Experience working in an Agile/Scrum environment.
- Create and manage Jenkins pipelines using Groovy scripting and Python on Linux.
- Analyze and fix issues in Jenkins, GitHub, Nexus, SonarQube, and AWS Cloud.
- Perform Jenkins, GitHub, SonarQube, and Nexus administration.
- Create resources in the AWS environment using infrastructure-as-code.
Good-to-Have:
- AWS Cloud certification
- Terraform certification
- Kubernetes/Docker experience

Tableau Server Administrator (10+ Yrs Exp.) 📊🔒
📍Location: Remote
🗓️ Experience: 10+ years
Mandatory Skills & Qualifications:
1. Proven expertise in Tableau architecture, clustering, scalability, and high availability.
2. Proficiency in PowerShell, Python, or Shell scripting.
3. Experience with cloud platforms (AWS, Azure, GCP) and Tableau Cloud.
4. Familiarity with database systems (SQL Server, Oracle, Snowflake).
5. Any relevant certification is a plus.



About NxtWave:
NxtWave is one of India’s fastest-growing edtech startups, transforming the way students learn and build careers in tech. With a strong community of learners across the country, we’re building cutting-edge products that make industry-ready skills accessible and effective at scale.
What will you do:
- Build and ship full-stack features end-to-end (frontend, backend, data).
- Own your code – from design to deployment with CI/CD pipelines.
- Make key architectural decisions and implement scalable systems.
- Lead code reviews, enforce clean code practices, and mentor SDE-1s.
- Optimize performance across the frontend (Lighthouse) and backend (tracing, metrics).
- Ensure secure, accessible, and SEO-friendly applications.
- Collaborate with Product, Design, and Ops to deliver fast and effectively.
- Work in a fast-paced, high-impact environment with rapid release cycles.
What we are expecting:
- 3–5 years of experience building production-grade full-stack applications.
- Proficiency in React (or Angular/Vue), TypeScript, Node.js / NestJS / Django / Spring Boot.
- Strong understanding of REST/GraphQL APIs, relational & NoSQL databases.
- Experience with Docker, AWS (Lambda, EC2, S3, API Gateway), Redis, Elasticsearch.
- Solid testing experience – unit, integration, and E2E (Jest, Cypress, Playwright).
- Strong problem-solving, communication, and team collaboration skills.
- Passion for learning, ownership, and building great software.
Location: Hyderabad (In-office)
Apply here: https://forms.gle/QeoNC8LmWY6pwckX9

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Deploy and manage applications on cloud platforms (preferably AWS).
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
Job Description:
We are seeking a highly motivated and skilled Full Stack Java Developer with strong experience in backend and frontend development, cloud deployment, and modern DevOps practices. The ideal candidate will have a solid background in Java/J2EE technologies, hands-on experience in developing microservices on AWS, and a good understanding of CI/CD pipelines and monitoring tools.
Key Responsibilities:
- Design, develop, test, and maintain scalable web applications and microservices.
- Implement backend logic using Core Java, J2EE, Spring Boot, Spring Batch, JPA, and REST APIs.
- Build responsive UIs using React, HTML, CSS, and integrate them with backend services.
- Develop and deploy microservices using AWS services like ECS, EC2, S3, API Gateway, Aurora, ALB, and Route 53.
- Participate in Agile development processes, including sprint planning, stand-ups, and retrospectives.
- Write unit and integration tests using JUnit and Cucumber.
- Monitor application performance using APM tools and analyze logs using Splunk or similar tools.
- Implement CI/CD pipelines using Jenkins, Maven/Gradle, and Git/Bitbucket.
- Work with containerization and orchestration tools like Docker and Kubernetes.
- Collaborate with cross-functional teams using tools like JIRA and Confluence.
Required Skills & Experience:
- Strong hands-on experience in Java/J2EE application development.
- Proficiency in Spring frameworks (Spring Boot, Batch) and RESTful services.
- Frontend development experience with React, HTML, CSS.
- Solid experience with AWS cloud technologies and deployment practices.
- Experience in writing and maintaining automated tests with JUnit and Cucumber.
- Knowledge of APM tools and log analysis tools (e.g., Splunk).
- Familiarity with CI/CD pipelines and tools (Jenkins, Maven/Gradle, Git).
- Experience with Docker and Kubernetes.
- Excellent problem-solving, debugging, and analytical skills.
- Good communication skills and ability to work in Agile teams.
Preferred Qualifications:
- AWS certification is a plus.
- Experience with Amazon Aurora and API Gateway in production environments.
- Knowledge of SOA, messaging systems (e.g., MQ), and security best practices.


Job Title: Senior Backend Developer
Location: Sector 58, Noida (On-site)
Type: Full-time | Founding Team
Experience: 4–5 Years
About the Company:
Weya AI is on a mission to revolutionize how businesses communicate with their customers. Our intelligent AI agents automate voice, WhatsApp, and email interactions, powering end-to-end support, collections, onboarding, and more for banks, NBFCs, and fintechs. We're building the future of customer engagement: AI systems that talk like humans, operate at scale, and deliver 24/7 responsiveness with precision.
If you're looking to work on real-world AI problems with enterprise impact, Weya AI is one of the most mission-driven startups you can join right now.
About the Role:
As a Senior Backend Developer at Weya.ai, you’ll join our core founding tech team and take full ownership of backend systems that power our real-time AI communication stack.
This isn’t just a development job — it's a leadership opportunity to help scale a live product already in production. You'll architect, build, and optimize scalable, secure backend services while collaborating across AI, frontend, and telephony teams. Expect rapid release cycles, close collaboration with founders, and the freedom to influence architecture, tooling, and team culture.
What You'll Do?
- Architect and build scalable microservices using Node.js or Go
- Design clean, secure APIs and data pipelines for real-time voice and chat systems
- Manage cloud infrastructure on Azure or AWS (App Services, containers, databases)
- Optimize backend systems for performance, cost-efficiency, and reliability
- Collaborate with product, AI, and frontend teams to launch new features rapidly
- Own deployments, monitoring, and backend stability in a production environment
- Contribute to code reviews, system design decisions, and long-term architecture strategy
What We're Looking For?
- 4-5 years of backend or full-stack experience with Node.js, Go (Golang), React, PostgreSQL or MongoDB
- Strong understanding of system design, microservice architecture, and clean code practices
- Experience deploying and scaling cloud applications on Azure or AWS
- Proven track record of delivering end-to-end features in SaaS, voice tech, or AI-driven platforms
- A product-first mindset — you think about UX and business impact, not just code
- Strong communication, ownership, and cross-functional collaboration skills
- A self-starter attitude — you thrive in startup environments and ambiguity
Good-to-Have (Bonus Points)
- Exposure to Next.js, or frontend integration workflows
- Familiarity with CRM integrations, WhatsApp APIs, Twilio, or Knowlarity
- Experience in early-stage startups or mentoring junior engineers
- Passion for AI, voice interfaces, or building tools that transform CX
What You'll Get?
- High Ownership: Your code and work directly impact and help businesses scale across India and the US
- Direct Client and Live Product Exposure: No red-tape culture; ideas get shipped, not stuck in slides
- Founding Team Vibe: Influence our tech stack, culture, and product roadmap
- Career Growth: Build, lead, and scale systems at the heart of a fast-growing AI startup
Let’s Build the Future of Conversations.
If you're excited about scaling backend systems, building impactful tech, and owning your work — apply now!
Apply now!
We’re looking for an Engineering Manager to guide our micro-service platform and mentor a fully remote backend team. You’ll blend hands-on technical ownership with people leadership—shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.
Key Responsibilities:
Area
What You’ll Own
Architecture & Delivery
• Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (Containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, and Redis.
• Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations.
• Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.
Team Leadership & Growth
• Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones.
• Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture.
• Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.
Stakeholder Collaboration
• Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions.
• Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.
Technical Vision & Governance
• Own coding standards, architectural principles, and technology selection.
• Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies.
• Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications:
● 8+ years designing, building, and operating distributed backend systems with Java & Spring Boot
● Proven experience leading or mentoring engineers; direct people-management a plus
● Expert knowledge of AWS services and cloud-native design patterns
● Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads
● Demonstrated success scaling systems to millions of users or billions of events
● Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks
● Excellent communication and stakeholder-management skills in a remote-first environment
Nice-to-Have:
● Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning
● Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously
● Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company
● Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Position Overview: We are looking for an experienced and highly skilled Senior Data Engineer to join our team and help design, implement, and optimize data systems that support high-end analytical solutions for our clients. As a customer-centric Data Engineer, you will work closely with clients to understand their business needs and translate them into robust, scalable, and efficient technical solutions. You will be responsible for end-to-end data modelling, integration workflows, and data transformation processes while ensuring security, privacy, and compliance. In this role, you will also leverage the latest advancements in artificial intelligence, machine learning, and large language models (LLMs) to deliver high-impact solutions that drive business success. The ideal candidate will have a deep understanding of data infrastructure, optimization techniques, and cost-effective data management.
Key Responsibilities:
• Customer Collaboration:
– Partner with clients to gather and understand their business requirements, translating them into actionable technical specifications.
– Act as the primary technical consultant to guide clients through data challenges and deliver tailored solutions that drive value.
• Data Modeling & Integration:
– Design and implement scalable, efficient, and optimized data models to support business operations and analytical needs.
– Develop and maintain data integration workflows to seamlessly extract, transform, and load (ETL) data from various sources into data repositories.
– Ensure smooth integration between multiple data sources and platforms, including cloud and on-premise systems.
• Data Processing & Optimization:
– Develop, optimize, and manage data processing pipelines to enable real-time and batch data processing at scale.
– Continuously evaluate and improve data processing performance, optimizing for throughput while minimizing infrastructure costs.
• Data Governance & Security:
– Implement and enforce data governance policies and best practices, ensuring data security, privacy, and compliance with relevant industry regulations (e.g., GDPR, HIPAA).
– Collaborate with security teams to safeguard sensitive data and maintain privacy controls across data environments.
• Cross-Functional Collaboration:
– Work closely with data engineers, data scientists, and business analysts to ensure that the data architecture aligns with organizational objectives and delivers actionable insights.
– Foster collaboration across teams to streamline data workflows and optimize solution delivery.
• Leveraging Advanced Technologies:
– Utilize AI, machine learning models, and large language models (LLMs) to automate processes, accelerate delivery, and provide smart, data-driven solutions to business challenges.
– Identify opportunities to apply cutting-edge technologies to improve the efficiency, speed, and quality of data processing and analytics.
• Cost Optimization:
– Proactively manage infrastructure and cloud resources to optimize throughput while minimizing operational costs.
– Make data-driven recommendations to reduce infrastructure overhead and increase efficiency without sacrificing performance.
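To make the ETL responsibilities above concrete, here is a minimal sketch of an extract-transform-load step using Python's standard sqlite3 module. The `sales` table name, the raw row shape, and the normalization rules are all hypothetical placeholders, not a prescribed pipeline design:

```python
import sqlite3

def run_etl(raw_rows, conn):
    """Extract raw (name, amount) rows, normalize them, and load into 'sales'."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    # Transform: trim and lowercase names, coerce amounts to floats
    cleaned = [(name.strip().lower(), float(amount)) for name, amount in raw_rows]
    # Load: bulk-insert the cleaned rows and commit
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

conn = sqlite3.connect(":memory:")
count = run_etl([(" Alice ", "10.5"), ("BOB", "3")], conn)
```

In a production pipeline the same extract/transform/load split would typically target a warehouse or data lake rather than an in-memory database, with the transform stage handling schema validation and error records.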
Qualifications:
• Experience:
– Proven experience (5+ years) as a Data Engineer or similar role, designing and implementing data solutions at scale.
– Strong expertise in data modelling, data integration (ETL), and data transformation processes.
– Experience with cloud platforms (AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).
• Technical Skills:
– Advanced proficiency in SQL, data modelling tools (e.g., Erwin, PowerDesigner), and data integration frameworks (e.g., Apache NiFi, Talend).
– Strong understanding of data security protocols, privacy regulations, and compliance requirements.
– Experience with data storage solutions (e.g., data lakes, data warehouses, NoSQL, relational databases).
• AI & Machine Learning Exposure:
– Familiarity with leveraging AI and machine learning technologies (e.g., TensorFlow, PyTorch, scikit-learn) to optimize data processing and analytical tasks.
– Ability to apply advanced algorithms and automation techniques to improve business processes.
• Soft Skills:
– Excellent communication skills to collaborate with clients, stakeholders, and cross-functional teams.
– Strong problem-solving ability with a customer-centric approach to solution design.
– Ability to translate complex technical concepts into clear, understandable terms for non-technical audiences.
• Education:
– Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or a related field (or equivalent practical experience).
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance for spouses, kids, and parents.
- PF/ESI or equivalent
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - culture that helps you grow exponentially!
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 120+ people from around the world who are radically open-minded and believe in excellence, respecting one another, and pushing our boundaries further than ever before.

Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
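As a small illustration of the serverless work described above, the sketch below shows a Lambda-style handler for an API Gateway proxy request. The function name, the request/response shapes, and the `name` field are illustrative assumptions, not a specific service's contract:

```python
import json

def lambda_handler(event, context):
    """Handle an API Gateway proxy event and return a JSON response."""
    # The proxy integration delivers the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    # Return the proxy-integration response shape API Gateway expects
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

A handler like this would typically be wired to an API Gateway route, with IAM permissions and CloudWatch logging configured through infrastructure-as-code (CloudFormation, Terraform, or the CDK, as listed below).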
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field
Job Opening: Cloud and Observability Engineer
📍 Location: Work From Office – Gurgaon (Sector 43)
🕒 Experience: 2+ Years
💼 Employment Type: Full-Time
Role Overview:
As a Cloud and Observability Engineer, you will play a critical role in helping customers transition and optimize their monitoring and observability infrastructure. You'll be responsible for building high-quality extension packages for alerts, dashboards, and parsing rules using the organization's platform. Your work will directly impact the reliability, scalability, and efficiency of monitoring across cloud-native environments.
This is a work-from-office role requiring collaboration with global customers and internal stakeholders.
Key Responsibilities:
- Extension Delivery:
  - Develop, enhance, and maintain extension packages for alerts, dashboards, and parsing rules to improve the monitoring experience.
  - Conduct in-depth research to create world-class observability solutions (e.g., for cloud-native and container technologies).
- Customer & Internal Support:
  - Act as a technical advisor to both internal teams and external clients.
  - Respond to queries, resolve issues, and incorporate feedback related to deployed extensions.
- Observability Solutions:
  - Design and implement optimized monitoring architectures.
  - Migrate and package dashboards, alerts, and rules based on customer environments.
- Automation & Deployment:
  - Use CI/CD tools and version control systems to package and deploy monitoring components.
  - Continuously improve deployment workflows.
- Collaboration & Enablement:
  - Work closely with DevOps, engineering, and customer success teams to gather requirements and deliver solutions.
  - Deliver technical documentation and training for customers.
Requirements:
Professional Experience:
- Minimum 2 years in Systems Engineering or similar roles.
- Focus on monitoring, observability, and alerting tools.
Cloud & Container Tech:
- Hands-on experience with AWS, Azure, or GCP.
- Experience with Kubernetes, EKS, GKE, or AKS.
- Cloud DevOps certifications (preferred).
Observability Tools:
- Practical experience with at least two observability platforms (e.g., Prometheus, Grafana, Datadog, etc.).
- Strong understanding of alerting, dashboards, and infrastructure monitoring.
Scripting & Automation:
- Familiarity with CI/CD, deployment pipelines, and version control.
- Experience in packaging and managing observability assets.
Technical Skills:
- Working knowledge of PromQL, Grafana, and related query languages.
- Willingness to learn Dataprime and Lucene syntax.
Soft Skills:
- Excellent problem-solving and debugging abilities.
- Strong verbal and written communication in English.
- Ability to work across US and European time zones as needed.
Why Join Us?
- Opportunity to work on cutting-edge observability platforms.
- Collaborate with global teams and top-tier clients.
- Shape the future of cloud monitoring and performance optimization.
- Growth-oriented, learning-focused environment.