50+ AWS (Amazon Web Services) Jobs in Delhi, NCR and Gurgaon | AWS (Amazon Web Services) Job openings in Delhi, NCR and Gurgaon


One of the reputed clients in India

Our client is looking to hire a Databricks Admin immediately.
This is a pan-India bulk hiring drive.
A minimum of 6–8+ years of experience with Databricks, PySpark/Python, and AWS is required.
A notice period of 15–30 days is preferred.
Share profiles at hr at etpspl dot com
Please share this with friends or colleagues who are looking for a job.

Job Title: Mid-Level .NET Developer (Agile/SCRUM)
Location: Mohali (PTP) or anywhere else
Night Shift from 6:30 pm to 3:30 am IST
Experience: 5 years
Job Summary:
We are seeking a proactive and detail-oriented Mid-Level .NET Developer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining high-quality applications using Microsoft technologies with a strong emphasis on .NET Core, C#, Web API, and modern front-end frameworks. You will collaborate with cross-functional teams in an Agile/SCRUM environment and participate in the full software development lifecycle—from requirements gathering to deployment—while ensuring adherence to best coding and delivery practices.
Key Responsibilities:
- Design, develop, and maintain applications using C#, .NET, .NET Core, MVC, and databases such as SQL Server, PostgreSQL, and MongoDB.
- Create responsive and interactive user interfaces using JavaScript, TypeScript, Angular, HTML, and CSS.
- Develop and integrate RESTful APIs for multi-tier, distributed systems.
- Participate actively in Agile/SCRUM ceremonies, including sprint planning, daily stand-ups, and retrospectives.
- Write clean, efficient, and maintainable code following industry best practices.
- Conduct code reviews to ensure high-quality and consistent deliverables.
- Assist in configuring and maintaining CI/CD pipelines (Jenkins or similar tools).
- Troubleshoot, debug, and resolve application issues effectively.
- Collaborate with QA and product teams to validate requirements and ensure smooth delivery.
- Support release planning and deployment activities.
Required Skills & Qualifications:
- 4–6 years of professional experience in .NET development.
- Strong proficiency in C#, .NET Core, MVC, and relational databases such as SQL Server.
- Working knowledge of NoSQL databases like MongoDB.
- Solid understanding of JavaScript/TypeScript and the Angular framework.
- Experience in developing and integrating RESTful APIs.
- Familiarity with Agile/SCRUM methodologies.
- Basic knowledge of CI/CD pipelines and Git version control.
- Hands-on experience with AWS cloud services.
- Strong analytical, problem-solving, and debugging skills.
- Excellent communication and collaboration skills.
Preferred / Nice-to-Have Skills:
- Advanced experience with AWS services.
- Knowledge of Kubernetes or other container orchestration platforms.
- Familiarity with IIS web server configuration and management.
- Experience in the healthcare domain.
- Exposure to AI-assisted code development tools (e.g., GitHub Copilot, ChatGPT).
- Experience with application security and code quality tools such as Snyk or SonarQube.
- Strong understanding of SOLID principles and clean architecture patterns.
Technical Proficiencies:
- ASP.NET Core, ASP.NET MVC
- C#, Entity Framework, Razor Pages
- SQL Server, MongoDB
- REST API, jQuery, AJAX
- HTML, CSS, JavaScript, TypeScript, Angular
- Azure Services, Azure Functions, AWS
- Visual Studio
- CI/CD, Git



Frontend Architect
Experience: 6+ years
Location: Delhi / Gurgaon
Roles & Responsibilities:
- Design, develop, and maintain scalable applications using React.js and FastAPI/Node.js.
- Write clean, modular, and well-documented code in Python and JavaScript.
- Deploy and manage applications on AWS using ECS, ECR, EKS, S3, and CodePipeline.
- Build and maintain CI/CD pipelines to automate testing, deployment, and monitoring.
- Implement unit, integration, and end-to-end tests using frameworks like Pytest, and document/validate APIs with Swagger.
- Ensure secure coding practices, including authentication and authorization.
- Collaborate with cross-functional teams and mentor junior developers.
Skills Required:
- Strong expertise in React.js and modern frontend development
- Experience with FastAPI and Node.js backend
- Proficient in Python and JavaScript
- Hands-on experience with AWS cloud services and containerization (Docker)
- Knowledge of CI/CD pipelines, automated testing, and secure coding practices
- Excellent problem-solving, communication, and leadership skills


Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases such as MySQL, including SQL query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
Job Specification:
- Job Location - Noida
- Experience - 2-5 Years
- Qualification - B.Tech, BE, MCA (Technical background required)
- Working Days - 5
- Job nature - Permanent
- Role - IT Cloud Engineer
- Proficient in Linux.
- Hands on experience with AWS cloud or Google Cloud.
- Knowledge of container technology like Docker.
- Expertise in scripting languages. (Shell scripting or Python scripting)
- Working knowledge of LAMP/LEMP stack, networking and version control system like Gitlab or Github.
Job Description:
The incumbent would be responsible for:
- Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
- Server monitoring, analysis and troubleshooting.
- Deploying multi-tier architectures using microservices.
- Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
- Automating workflow with python or shell scripting.
- CI and CD integration for application lifecycle management.
- Hosting and managing websites on Linux machines.
- Frontend, backend and database optimization.
- Protecting operations by keeping information confidential.
- Providing information by collecting, analyzing, summarizing development & service issues.
- Preparing and installing solutions by determining and designing system specifications, standards, and programming.
About GradRight
Our vision is to be the world’s leading Ed-Fin Tech company dedicated to making higher education accessible and affordable to all. Our mission is to drive transparency and accountability in the global higher education sector and create significant impact using the power of technology, data science and collaboration.
GradRight is the world’s first SaaS ecosystem that brings together students, universities and financial institutions in an integrated manner. It enables students to find and fund high return college education, universities to engage and select the best-fit students and banks to lend in an effective and efficient manner.
In the last three years, we have enabled students to get the best deals on over $2.8 billion of loan requests and facilitated disbursements of more than $350 million in loans. GradRight won the HSBC Fintech Innovation Challenge supported by the Ministry of Electronics & IT, Government of India, and was among the top 7 global finalists in The PIEoneer Awards, UK.
GradRight’s team possesses extensive domestic and international experience in the launch and scale-up of premier higher education institutions. It is led by alumni of IIT Delhi, BITS Pilani, IIT Roorkee, ISB Hyderabad and University of Pennsylvania. GradRight is a Delaware, USA registered company with a wholly owned subsidiary in India.
About the Role
We are looking for a passionate DevOps Engineer with hands-on experience in AWS cloud infrastructure, containerization, and orchestration. The ideal candidate will be responsible for building, automating, and maintaining scalable cloud solutions, ensuring smooth CI/CD pipelines, and supporting development and operations teams.
Core Responsibilities
Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.
Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions.
Containerize applications using Docker and manage deployments with Kubernetes (EKS, self-managed, or other distributions).
Monitor system performance, availability, and security using tools like CloudWatch, Prometheus, Grafana, ELK/EFK stack.
Collaborate with development teams to optimize application performance and deployment processes.
Required Skills & Experience
3–4 years of professional experience as a DevOps Engineer or similar role.
Strong expertise in AWS services (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, etc.).
Hands-on experience with Docker and Kubernetes (EKS or self-hosted clusters).
Proficiency in CI/CD pipeline design and automation.
Experience with Infrastructure as Code (Terraform / AWS CloudFormation).
Solid understanding of Linux/Unix systems and shell scripting.
Knowledge of monitoring, logging, and alerting tools.
Familiarity with networking concepts (DNS, Load Balancing, Security Groups, Firewalls).
Basic programming/scripting experience in Python, Bash, or Go.
Nice to Have
Exposure to microservices architecture and service mesh (Istio/Linkerd).
Knowledge of serverless (AWS Lambda, API Gateway).

SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Extensive hands-on Terraform practice; deep CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
Job Title: PySpark/Scala Developer
Functional Skills: Experience in Credit Risk/Regulatory risk domain
Technical Skills: Spark, PySpark, Python, Hive, Scala, MapReduce, Unix shell scripting
Good to Have Skills: Exposure to Machine Learning Techniques
Job Description:
5+ years of experience developing, fine-tuning, and implementing programs/applications using Python/PySpark/Scala on Big Data/Hadoop platforms.
Roles and Responsibilities:
a) Work with a leading bank's Risk Management team on specific projects/requirements pertaining to risk models in consumer and wholesale banking
b) Enhance machine learning models using PySpark or Scala
c) Work with data scientists to build ML models based on business requirements, and follow the ML cycle to deploy them all the way to the production environment
d) Participate in feature engineering, model training, scoring, and retraining
e) Architect data pipelines and automate data ingestion and model jobs
Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark and Scala to develop, validate, and implement models and code in Credit Risk/Banking.
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
- Experience in systems integration, web services, and batch processing.
- Experience migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Willingness to learn and comprehend periodic changes in regulatory requirements per the FED.
Job Summary:
We are seeking passionate Developers with experience in Microservices architecture to join our team in Noida. The ideal candidate should have hands-on expertise in Java, Spring Boot, Hibernate, and front-end technologies like Angular, JavaScript, and Bootstrap. You will be responsible for developing enterprise-grade software applications that enhance patient safety worldwide.
Key Responsibilities:
- Develop and maintain applications using Microservices architecture.
- Work with modern technologies like Java, Spring Boot, Hibernate, Angular, Kafka, Redis, and Hazelcast.
- Utilize AWS, Git, Nginx, Tomcat, Oracle, Jira, Confluence, and Jenkins for development and deployment.
- Collaborate with cross-functional teams to design and build scalable enterprise applications.
- Develop intuitive UI/UX components using Bootstrap, jQuery, and JavaScript.
- Ensure high-performance, scalable, and secure applications for Fortune 100 pharmaceutical companies.
- Participate in Agile development, managing changing priorities effectively.
- Conduct code reviews, troubleshoot issues, and optimize application performance.
Required Skills & Qualifications:
- 5+ years of hands-on experience in Java 7/8, Spring Boot, and Hibernate.
- Strong knowledge of OOP concepts and Design Patterns.
- Experience working with relational databases (Oracle/MySQL).
- Proficiency in Bootstrap, JavaScript, jQuery, HTML, and Angular.
- Hands-on experience in Microservices-based application development.
- Strong problem-solving, debugging, and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to adapt to new technologies and manage multiple priorities.
- Experience in developing high-quality web applications.
Good to Have:
- Exposure to Kafka, Redis, and Hazelcast.
- Experience working with cloud-based solutions (AWS preferred).
- Familiarity with DevOps tools like Jenkins, Docker, and Kubernetes.
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).

About the Role
We’re looking for a passionate Fullstack Product Engineer with a strong JavaScript foundation to work on a high-impact, scalable product. You’ll collaborate closely with product and engineering teams to build intuitive UIs and performant backends using modern technologies.
Responsibilities
- Build and maintain scalable features across the frontend and backend.
- Work with tech stacks like Node.js, React.js, Vue.js, and others.
- Contribute to system design, architecture, and code quality enforcement.
- Follow modern engineering practices including TDD, CI/CD, and live coding evaluations.
- Collaborate in code reviews, performance optimizations, and product iterations.
Required Skills
- 4–6 years of hands-on fullstack development experience.
- Strong command over JavaScript, Node.js, and React.js.
- Solid understanding of REST APIs and/or GraphQL.
- Good grasp of OOP principles, TDD, and writing clean, maintainable code.
- Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, etc.
- Familiarity with HTML, CSS, and frontend performance optimization.
Good to Have
- Exposure to Docker, AWS, Kubernetes, or Terraform.
- Experience in other backend languages or frameworks.
- Experience with microservices and scalable system architectures.
Job Title: MERN TECH
Location: Paschim Vihar, West Delhi
Company: Eye Mantra
Job Type: Full-time, Onsite
Salary: Commensurate with experience and interview performance
Experience: 3+ years
Contact: +91 97180 11146 (Rizwana Siddique, HR)
Interview Mode: Face to Face
About Eye Mantra:
Eye Mantra is a premier eye care organization committed to delivering exceptional services using advanced technology. We’re growing fast and looking to strengthen our in-house tech team with talented individuals who share our passion for innovation and excellence in patient care.
Position Overview:
We are currently hiring a skilled Full Stack Developer to join our in-house development team. If you have strong experience working with the MERN stack, including Node.js, React.js, MongoDB, and SQL, and you thrive in a collaborative, fast-paced work environment, we’d love to connect with you.
This role requires working onsite at our West Delhi office (Paschim Vihar), where you’ll contribute directly to building and maintaining robust, scalable, and user-friendly applications that support our medical operations and patient services.
Responsibilities:
- Build and manage web applications using the MERN stack (MongoDB, Express, React, Node).
- Create and maintain efficient backend services and RESTful APIs.
- Develop intuitive frontend interfaces with React.js.
- Design and optimize relational databases using SQL.
- Work closely with internal teams to implement new features and enhance existing ones.
- Ensure applications perform well across all platforms and devices.
- Identify and resolve bugs and performance issues quickly.
- Stay current with emerging web development tools and trends.
- (Bonus) Leverage AWS or other cloud platforms to enhance scalability and performance.
Required Skills & Qualifications:
- Proficiency in Node.js for backend programming.
- Strong hands-on experience with React.js for frontend development.
- Good command of SQL and understanding of database design.
- Practical knowledge of the MERN stack.
- Experience using Git for version control and team collaboration.
- Excellent analytical and problem-solving abilities.
- Strong interpersonal and communication skills.
- Self-motivated, with the ability to manage tasks independently.

🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like React.js.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.

Senior Cloud & ML Infrastructure Engineer
Location: Bangalore / Bengaluru, Hyderabad, Pune, Mumbai, Mohali, Panchkula, Delhi
Experience: 6–10+ Years
Night Shift - 9 pm to 6 am
About the Role:
We’re looking for a Senior Cloud & ML Infrastructure Engineer to lead the design, scaling, and optimization of cloud-native machine learning infrastructure. This role is ideal for someone passionate about solving complex platform engineering challenges across AWS, with a focus on model orchestration, deployment automation, and production-grade reliability. You’ll architect ML systems at scale, provide guidance on infrastructure best practices, and work cross-functionally to bridge DevOps, ML, and backend teams.
Key Responsibilities:
● Architect and manage end-to-end ML infrastructure using SageMaker, AWS Step Functions, Lambda, and ECR
● Design and implement multi-region, highly-available AWS solutions for real-time inference and batch processing
● Create and manage IaC blueprints for reproducible infrastructure using AWS CDK
● Establish CI/CD practices for ML model packaging, validation, and drift monitoring
● Oversee infrastructure security, including IAM policies, encryption at rest/in-transit, and compliance standards
● Monitor and optimize compute/storage cost, ensuring efficient resource usage at scale
● Collaborate on data lake and analytics integration
● Serve as a technical mentor and guide AWS adoption patterns across engineering teams
Required Skills:
● 6+ years designing and deploying cloud infrastructure on AWS at scale
● Proven experience building and maintaining ML pipelines with services like SageMaker, ECS/EKS, or custom Docker pipelines
● Strong knowledge of networking, IAM, VPCs, and security best practices in AWS
● Deep experience with automation frameworks, IaC tools, and CI/CD strategies
● Advanced scripting proficiency in Python, Go, or Bash
● Familiarity with observability stacks (CloudWatch, Prometheus, Grafana)
Nice to Have:
● Background in robotics infrastructure, including AWS IoT Core, Greengrass, or OTA deployments
● Experience designing systems for physical robot fleet telemetry, diagnostics, and control
● Familiarity with multi-stage production environments and robotic software rollout processes
● Competence in frontend hosting for dashboard or API visualization
● Involvement with real-time streaming, MQTT, or edge inference workflows
● Hands-on experience with ROS 2 (Robot Operating System) or similar robotics frameworks, including launch file management, sensor data pipelines, and deployment to embedded Linux devices

🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀
We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.
If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.
What you’ll do:
🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)
🔹 Build highly available, multi-region solutions for real-time & batch inference
🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines
🔹 Ensure security, compliance, and cost efficiency
🔹 Collaborate across DevOps, ML, and backend teams
What we’re looking for:
✔️ 6+ years AWS cloud infrastructure experience
✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)
✔️ Proficiency in Python/Go/Bash scripting
✔️ Knowledge of networking, IAM, and security best practices
✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)
✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)
📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi
5 days working, Work from Office
Night shifts: 9pm to 6am IST
👉 If this sounds like you (or someone you know), let’s connect!
Apply here:
- Java AWS engineer with experience building AWS services such as Lambda, Batch, SQS, S3, and DynamoDB using the AWS Java SDK and CloudFormation templates.
- 4 to 8 years of experience in design, development and triaging for large, complex systems. Experience in Java and object-oriented design skills
- 3-4+ years of microservices development
- 2+ years working in Spring Boot
- Experienced using API dev tools like IntelliJ/Eclipse, Postman, Git, Cucumber
- Hands on experience in building microservices based application using Spring Boot and REST, JSON
- DevOps understanding – containers, cloud, automation, security, configuration management, CI/CD
- Experience using CI/CD processes for application software integration and deployment using Maven, Git, and Jenkins.
- Experience dealing with NoSQL databases like Cassandra
- Experience building scalable and resilient applications in private or public cloud environments and cloud technologies
- Experience in Utilizing tools such as Maven, Docker, Kubernetes, ELK, Jenkins
- Agile Software Development (typically Scrum, Kanban, Safe)
- Experience with API gateway and API security.


Job Title : Full Stack Engineer
Location : New Delhi
Job Type : Full-Time
About the Role :
We are seeking a passionate Full Stack Engineer who enjoys building scalable, high-performance applications across web and mobile platforms.
This role is ideal for someone eager to work in a fast-paced startup environment, contributing across the full development lifecycle.
Mandatory Skills :
React.js, Next.js, React Native, Node.js, Express.js, PostgreSQL, AWS (EC2/S3/RDS/IAM), RESTful APIs.
Key Responsibilities :
- Design, build, and maintain scalable applications for web and mobile.
- Translate UI/UX wireframes into efficient, reusable, and maintainable code.
- Optimize applications for maximum performance, scalability, and reliability.
- Work on both front-end and back-end development using modern frameworks.
- Participate in code reviews, debugging, testing, and deployment.
- Collaborate with cross-functional teams to deliver high-quality products.
Requirements :
- Experience : 1+ years of software development (internships + full-time combined acceptable).
- Frontend Skills : React.js, Next.js, React Native.
- Backend Skills : Node.js, Express.js, RESTful APIs.
- Database : Strong knowledge of PostgreSQL.
- Cloud : Exposure to AWS services (EC2, Elastic Beanstalk, RDS, S3, ElastiCache, IAM).
- Additional : Experience with Redis and BullMQ is a strong plus.
- Strong debugging, problem-solving, and analytical skills.
- Enthusiasm for working in a dynamic startup environment.
Senior Associate Technology L1 – Java Microservices
Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
Job Description
We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.
We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.
Your Impact:
• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.
• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.
• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.
Qualifications
➢ 5 to 7 Years of software development experience
➢ Strong development skills in Java JDK 1.8 or above
➢ Java fundamentals such as exception handling, serialization/deserialization, and immutability concepts
➢ Good fundamental knowledge of Enums, Collections, Annotations, Generics, autoboxing, and data structures
➢ Database experience with RDBMS/NoSQL (SQL, joins, indexing)
➢ Multithreading (Re-entrant Lock, Fork & Join, Sync, Executor Framework)
➢ Spring Core & Spring Boot, security, transactions
➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)
➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing with JMeter or a similar tool
➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker and containerization)
➢ Logical/analytical skills; thorough understanding of OOPS concepts, design principles, and implementation of different types of design patterns
➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)
➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks
➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.
➢ Good communication skills and ability to work with global teams to define and deliver on projects.
➢ Sound understanding of, and experience in, the software development process and test-driven development
➢ Cloud – AWS / AZURE / GCP / PCF or any private cloud would also be fine
➢ Experience in Microservices

Location: Delhi NCR (Hybrid)
Experience: Minimum 5 years in software development, with prior exposure to leading projects or mentoring team members
Employment Type: Full-time
Key Responsibilities:
- Lead development efforts across backend, frontend, and infrastructure in collaboration with the product team.
- Be hands-on with the MERN stack and Python while mentoring junior developers.
- Design and maintain microservices and event-driven systems on AWS.
- Manage deployments and scaling on AWS ECS, Lambda, SQS, S3, SES, CloudFront, ELB.
- Build and optimize data pipelines & reporting using BigQuery.
- Set up and manage Dockerized applications with proper CI/CD pipelines.
- Implement and own monitoring & alerting systems (Prometheus, Loki, Grafana, CloudWatch).
- Ensure best practices for code quality, security, and system performance.
- Collaborate closely with product managers, designers, and testers to deliver features on time.
Required Skills & Experience:
- 5-8 years of experience in full-stack/backend engineering.
- Strong expertise in MERN stack (MongoDB, Express.js, React.js, Node.js).
- Working knowledge of Python (APIs, scripting, or data processing).
- Experience with AWS services – ECS, Lambda, SQS, S3, SES.
- Hands-on with Docker & container orchestration.
- Exposure to data warehousing/analytics with BigQuery (or similar).
- Experience in CI/CD automation.
- Familiarity with logging & monitoring tools (Prometheus, Loki, Grafana, or AWS CloudWatch).
- Ability to mentor junior developers and take ownership of projects.
Good to Have:
- Experience in SaaS product development.
- Knowledge of multi-tenant architectures.
- Familiarity with AI/RAG-based chatbots.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
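The cleansing and transformation work described above reduces to familiar transform logic. Below is a minimal, illustrative sketch in plain Python of the kind of data-quality gate and partition-key derivation that would normally run as a PySpark or Glue job; the field names (`user_id`, `amount`, `ts`) are invented for the example.

```python
from datetime import datetime

def cleanse_records(raw_rows):
    """Drop incomplete rows, normalise types, and derive a date partition key.

    Illustrative stand-in for a PySpark/Glue transform: the same logic maps
    onto DataFrame operations (dropna, withColumn, etc.) at scale.
    """
    cleaned = []
    for row in raw_rows:
        # Data-quality gate: skip rows missing mandatory fields.
        if not row.get("user_id") or row.get("amount") is None:
            continue
        cleaned.append({
            "user_id": str(row["user_id"]).strip(),
            "amount": round(float(row["amount"]), 2),
            # Date partition for a data-lake layout such as s3://bucket/dt=YYYY-MM-DD/
            "dt": datetime.fromisoformat(row["ts"]).date().isoformat(),
        })
    return cleaned

rows = [
    {"user_id": " u1 ", "amount": "12.5", "ts": "2024-05-01T10:00:00"},
    {"user_id": None, "amount": 3.0, "ts": "2024-05-01T11:00:00"},
]
print(cleanse_records(rows))
```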
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Role Overview:
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
Key Responsibilities:
- Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
- Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
Required Skills and Qualifications
- Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.
Preferred Qualifications
- A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
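The retrieval step of the RAG pipeline described above comes down to ranking stored chunk embeddings by similarity to the query embedding. A minimal pure-Python sketch follows; in practice a vector store (MongoDB Atlas Vector Search, Pinecone, Weaviate) performs this search with an approximate index, and the sample texts and vectors here are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, store, k=2):
    """Return the texts of the k chunks most similar to the query embedding."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["embedding"]),
                    reverse=True)
    return [item["text"] for item in ranked[:k]]

store = [
    {"text": "refund policy", "embedding": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "embedding": [0.0, 1.0, 0.1]},
    {"text": "returns window", "embedding": [0.8, 0.2, 0.1]},
]
print(top_k([1.0, 0.0, 0.0], store, k=2))
```

The retrieved chunks would then be stuffed into the LLM prompt as context for the chat-generation step.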
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Location: Hybrid/ Remote
Openings: 2
Experience: 5–12 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities
Architect & Design:
- Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
- Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
- Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
- Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.
Development & Debugging:
- Write clean, maintainable, and efficient frontend code.
- Debug and troubleshoot code to ensure robust, high-performing applications.
- Develop reusable frontend libraries that can be leveraged across multiple projects.
AI Awareness (Preferred):
- Understand AI/ML fundamentals and how they can enhance frontend applications.
- Collaborate with teams integrating AI-based features into chat applications.
Collaboration & Reporting:
- Work closely with cross-functional teams to align on architecture and deliverables.
- Regularly report progress, identify risks, and propose mitigation strategies.
Quality Assurance:
- Implement unit tests and end-to-end tests to ensure code quality.
- Participate in code reviews and enforce best practices.
Required Skills
- 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
- Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
- Proficiency with Modern frameworks like React, Angular, or Node.js
- Backend familiarity with Java, Spring Boot (or similar technologies).
- Experience developing real-world, at-scale products.
- General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
- Strong problem-solving, debugging, and performance optimization skills.

Location: Hybrid/ Remote
Openings: 2
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or related field
Job Responsibilities
Problem Solving & Optimization:
- Analyze and resolve complex technical and application issues.
- Optimize application performance, scalability, and reliability.
Design & Develop:
- Build, test, and deploy scalable full-stack applications with high performance and security.
- Develop clean, reusable, and maintainable code for both frontend and backend.
AI Integration (Preferred):
- Collaborate with the team to integrate AI/ML models into applications where applicable.
- Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.
Technical Leadership & Mentorship:
- Provide guidance, mentorship, and code reviews for junior developers.
- Foster a culture of technical excellence and knowledge sharing.
Agile & Delivery Management:
- Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
- Define and scope backlog items, track progress, and ensure timely delivery.
Collaboration:
- Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
- Coordinate with geographically distributed teams.
Quality Assurance & Security:
- Conduct peer reviews of designs and code to ensure best practices.
- Implement security measures and ensure compliance with industry standards.
Innovation & Continuous Improvement:
- Identify areas for improvement in the software development lifecycle.
- Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.
Required Skills
- Strong proficiency in JavaScript, HTML5, CSS3
- Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
- Backend development experience with Java, Spring Boot (Node.js is a plus)
- Knowledge of REST APIs, microservices, and scalable architectures
- Familiarity with cloud platforms (AWS, Azure, or GCP)
- Experience with Agile/Scrum methodologies and JIRA for project tracking
- Proficiency in Git and version control best practices
- Strong debugging, performance optimization, and problem-solving skills
- Ability to analyze customer requirements and translate them into technical specifications

Location: Hybrid/ Remote
Openings: 5
Experience: 0–2 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities:
Backend Development & APIs
- Build microservices that provide REST APIs to power web frontends.
- Design clean, reusable, and scalable backend code meeting enterprise security standards.
- Conceptualize and implement optimized data storage solutions for high-performance systems.
Deployment & Cloud
- Deploy microservices using a common deployment framework on AWS and GCP.
- Inspect and optimize server code for speed, security, and scalability.
Frontend Integration
- Work on modern front-end frameworks to ensure seamless integration with back-end services.
- Develop reusable libraries for both frontend and backend codebases.
AI Awareness (Preferred)
- Understand how AI/ML or Generative AI can enhance enterprise software workflows.
- Collaborate with AI specialists to integrate AI-driven features where applicable.
Quality & Collaboration
- Participate in code reviews to maintain high code quality.
- Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.
Required Skills:
- Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
- Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
- Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
- Ability to design and implement RESTful APIs and understand their impact on client-side applications
- Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
- Experience working with Agile and Scrum methodologies
- Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory

Job Description:
Title : Python AWS Developer with API
Tech Stack : AWS API Gateway, Lambda, Oracle RDS, SQL & database management, OOP principles, JavaScript, object-relational mappers (ORMs), Git, Docker, Java dependency management, CI/CD, AWS cloud & S3, Secrets Manager, Python, API frameworks; well-versed in both front-end and back-end programming (Python).
Responsibilities:
· Build high-performance APIs using AWS services and Python; write and debug Python code and integrate the application with third-party web services.
· Troubleshoot and debug non-prod defects; contribute to back-end development and APIs, with a main focus on coding and monitoring applications.
· Design core application logic.
· Support dependent teams in UAT and perform functional application testing, including Postman testing.

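As a rough illustration of the Lambda-behind-API-Gateway pattern this role centres on, here is a minimal handler sketch using the proxy-integration event/response shape; the `name` query parameter is invented for the example.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler behind an API Gateway proxy integration.

    API Gateway delivers the HTTP request as an event dict; the handler
    returns a statusCode/headers/body shape that API Gateway maps back to
    the HTTP response.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local smoke test with a Postman-style request shape.
resp = handler({"queryStringParameters": {"name": "dev"}}, None)
print(resp["statusCode"], resp["body"])
```
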
We are hiring a Site Reliability Engineer (SRE) to join our high-performance engineering team. In this role, you'll be responsible for driving reliability, performance, scalability, and security across cloud-native systems while bridging the gap between development and operations.
Key Responsibilities
- Design and implement scalable, resilient infrastructure on AWS
- Take ownership of the SRE function – availability, latency, performance, monitoring, incident response, and capacity planning
- Partner with product and engineering teams to improve system reliability, observability, and release velocity
- Set up, maintain, and enhance CI/CD pipelines using Jenkins, GitHub Actions, or AWS CodePipeline
- Conduct load and stress testing, identify performance bottlenecks, and implement optimization strategies
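Load- and stress-test analysis typically reports latency percentiles (p95, p99). A small nearest-rank percentile sketch in Python, with invented sample latencies, shows the arithmetic the tools above (JMeter, k6, etc.) perform:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile, as commonly reported by load-testing tools."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Invented response times in milliseconds; the outliers dominate the tail.
latencies_ms = [12, 15, 11, 240, 14, 13, 16, 18, 12, 900]
print("p50:", percentile(latencies_ms, 50))
print("p95:", percentile(latencies_ms, 95))
```

Comparing p50 against p95/p99 is usually how tail-latency bottlenecks are spotted before optimization work begins.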
Required Skills & Qualifications
- Proven hands-on experience in cloud infrastructure design (AWS strongly preferred)
- Strong background in DevOps and SRE principles
- Proficiency with performance testing tools like JMeter, Gatling, k6, or Locust
- Deep understanding of cloud security and best practices for reliability engineering
- AWS Solution Architect Certification – Associate or Professional (preferred)
- Solid problem-solving skills and a proactive approach to systems improvement
Why Join Us?
- Work with cutting-edge technologies in a cloud-native, fast-paced environment
- Collaborate with cross-functional teams driving meaningful impact
- Hybrid work culture with flexibility and autonomy
- Open, inclusive work environment focused on innovation and excellence

Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
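One small pattern relevant to the "troubleshoot production issues" responsibility above is retrying throttled or transient failures with exponential backoff. A minimal, generic sketch (not tied to any specific AWS client; the `flaky` function stands in for a downstream call):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn, retrying with exponential backoff; re-raise after the last try.

    The same pattern applies to throttled AWS API calls or flaky downstreams.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky():
    """Stand-in downstream call that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky)
print(result)
```

In production, boto3 clients already expose configurable retry modes, so hand-rolled backoff like this is mostly useful for non-AWS dependencies.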
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field

A fast-growing, tech-driven loyalty programs and benefits business is looking to hire a Technical Architect with the following expertise:
Key Responsibilities:
1. Architectural Design & Governance
• Define, document, and maintain the technical architecture for projects and product modules.
• Ensure architectural decisions meet scalability, performance, and security requirements.
2. Solution Development & Technical Leadership
• Translate product and client requirements into robust technical solutions, balancing short-term deliverables with long-term product viability.
• Oversee system integrations, ensuring best practices in coding standards, security, and performance optimization.
3. Collaboration & Alignment
• Work closely with Product Managers and Project Managers to prioritize and plan feature development.
• Facilitate cross-team communication to ensure technical feasibility and timely execution of features or client deliverables.
4. Mentorship & Code Quality
• Provide guidance to senior developers and junior engineers through code reviews, design reviews, and technical coaching.
• Advocate for best-in-class engineering practices, encouraging the use of CI/CD, automated testing, and modern development tooling.
5. Risk Management & Innovation
• Proactively identify technical risks or bottlenecks, proposing mitigation strategies.
• Investigate and recommend new technologies, frameworks, or tools that enhance product capabilities and developer productivity.
6. Documentation & Standards
• Maintain architecture blueprints, design patterns, and relevant documentation to align the team on shared standards.
• Contribute to the continuous improvement of internal processes, ensuring streamlined development and deployment workflows.
Skills:
1. Technical Expertise
• 7–10 years of overall experience in software development with at least a couple of years in senior or lead roles.
• Strong proficiency in at least one mainstream programming language (e.g., Golang, Python, JavaScript).
• Hands-on experience with architectural patterns (microservices, monolithic systems, event-driven architectures).
• Good understanding of Cloud Platforms (AWS, Azure, or GCP) and DevOps practices (CI/CD pipelines, containerization with Docker/Kubernetes).
• Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Location: Saket, Delhi (Work from Office)
Schedule: Monday – Friday
Experience : 7-10 Years
Compensation: As per industry standards
Job Title: Tableau BI Developer
Years of Experience: 4–8 years
Engagement: $12 per hour (FTE)
Working hours: 8 hours per day
Required Skills & Experience:
✅ 4–8 years of experience in BI development and data engineering
✅ Expertise in BigQuery and/or Snowflake for large-scale data processing
✅ Strong SQL skills with experience writing complex analytical queries
✅ Experience in creating dashboards in tools like Power BI, Looker, or similar
✅ Hands-on experience with ETL/ELT tools and data pipeline orchestration
✅ Familiarity with cloud platforms (GCP, AWS, or Azure)
✅ Strong understanding of data modeling, data warehousing, and analytics best practices
✅ Excellent communication skills with the ability to explain technical concepts to non-technical stakeholders
About the Role
We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.
Responsibilities
1. Infrastructure & Cloud Management
• Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)
• Implement containerization (Docker, Kubernetes) and microservices orchestration
• Optimize infrastructure cost, scalability, and performance
2. CI/CD & Automation
• Build and maintain CI/CD pipelines for automated deployments
• Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation
• Implement GitOps practices for streamlined deployments
3. Security & Compliance
• Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards
• Implement role-based access controls, encryption, and network security best practices
• Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits
4. Monitoring & Incident Management
• Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.)
• Optimize system reliability and automate incident response mechanisms
• Improve MTTR (Mean Time to Recovery) and system uptime KPIs
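The MTTR and uptime KPIs above are simple arithmetic over incident durations; a small sketch with invented figures shows how they are usually derived from incident logs:

```python
def availability(total_minutes, incident_minutes):
    """Compute uptime percentage and MTTR from a list of incident durations.

    MTTR here is mean downtime per incident; real postmortem tooling would
    measure detection-to-recovery per incident instead of a flat duration.
    """
    downtime = sum(incident_minutes)
    uptime_pct = 100.0 * (total_minutes - downtime) / total_minutes
    mttr = downtime / len(incident_minutes) if incident_minutes else 0.0
    return round(uptime_pct, 3), mttr

# One 30-day month with two incidents lasting 20 and 40 minutes.
print(availability(30 * 24 * 60, [20, 40]))
```
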
5. Collaboration & Process Improvement
• Work closely with development and QA teams to streamline deployments
• Improve DevSecOps practices and cloud security policies
• Participate in architecture discussions and performance tuning
Required Skills & Qualifications
• 2+ years of experience in DevOps, cloud infrastructure, and automation
• Hands-on experience with AWS and Kubernetes
• Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)
• Experience with Terraform, Ansible, or CloudFormation
• Strong knowledge of Linux, shell scripting, and networking
• Experience with cloud security, monitoring, and logging solutions
Nice to Have
• Experience in healthcare or other regulated industries
• Familiarity with serverless architectures and AI-driven infrastructure automation
• Knowledge of big data pipelines and analytics workflows
What You'll Gain
• Opportunity to build and scale a mission-critical healthcare infrastructure
• Work in a fast-paced startup environment with cutting-edge technologies
• Growth potential into Lead DevOps Engineer or Cloud Architect roles


Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
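The role above is Go-focused, but the reliability pattern behind "optimize reliability" (retry transient failures with exponential backoff) is language-agnostic; a minimal Python sketch, with illustrative function names:

```python
import time

def call_with_retries(operation, max_attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff.

    `operation` is any zero-argument callable; the delay doubles after
    each failed attempt, and the last exception is re-raised if every
    attempt fails.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# A simulated flaky dependency that succeeds on its third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # ok
```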
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.
Summary
We are seeking a highly skilled and motivated Software Engineer with expertise in both backend development and DevOps practices. The ideal candidate will have a proven track record of designing, developing, and deploying robust and scalable backend systems, while also possessing strong knowledge of cloud infrastructure and DevOps principles. This role requires a collaborative individual who thrives in a fast-paced environment and is passionate about building high-quality software.
Responsibilities
Design, develop, and maintain backend services using appropriate technologies.
Implement and maintain CI/CD pipelines.
Manage and monitor cloud infrastructure (e.g., AWS, Azure, GCP).
Troubleshoot and resolve production issues.
Collaborate with frontend developers to integrate backend services.
Contribute to the design and implementation of database schemas.
Participate in code reviews and ensure code quality.
Contribute to the improvement of DevOps processes and tools.
Write clear and concise documentation.
Stay up-to-date with the latest technologies and best practices.
Qualifications
Bachelor's degree in Computer Science or a related field.
3+ years of experience in backend software development.
2+ years of experience in DevOps.
Proficiency in at least one backend programming language (e.g., Java, Python, Node.js, Go).
Experience with cloud platforms (e.g., AWS, Azure, GCP).
Experience with containerization technologies (e.g., Docker, Kubernetes).
Experience with CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI).
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
Strong understanding of database technologies (e.g., SQL, NoSQL).
Excellent problem-solving and debugging skills.
Strong communication and collaboration skills.
Bonus Points
Experience with specific technologies used by our company (list technologies if applicable).
Experience with serverless architectures.
Experience with infrastructure as code (e.g., Terraform, CloudFormation).
Contributions to open-source projects.
Relevant certifications.

🚀 We're Urgently Hiring – Node.js Backend Development Intern
Join our backend team as an intern and get hands-on experience building scalable, real-world applications with Node.js, Firebase, and AWS.
📍 Remote / Onsite
📅 Duration: 2 Months
🔧 What You’ll Work On:
Backend development using Node.js
Firebase, SQL & NoSQL database management
RESTful API integration
Deployment on AWS infrastructure
Roles and Responsibilities:
• Independently analyze, solve, and correct issues in real time, providing problem resolution end-to-end.
• Strong experience with development tools and CI/CD pipelines; extensive experience with Agile.
• Strong proficiency with technologies such as: Java 8, Spring, Spring MVC, RESTful web services, Hibernate, Oracle PL/SQL, Spring Security, Ansible, Docker, JMeter, and Angular.
• Strong fundamentals and clarity of REST web services; should have exposure to developing REST services that handle large data sets.
• Fintech or lending domain experience is a plus but not necessary.
• Deep understanding of cloud technologies on at least one of the cloud platforms AWS, Azure or Google Cloud
• Wide knowledge of technology solutions and ability to learn and work with emerging technologies, methodologies, and solutions.
• Strong communicator with ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.
• Provide vision leadership for the technology roadmap of our products. Understand product capabilities and strategize technology for its alignment with business objectives and maximizing ROI.
• Define technical software architectures and lead development of frameworks.
• Engage end to end in product development, starting from business requirements to realization of product and to its deployment in production.
• Research, design, and implement the complex features being added to existing products and/or create new applications / components from scratch.
Minimum Qualifications
• Bachelor's or higher engineering degree in Computer Science or a related technical field, or equivalent additional professional experience.
• 5 years of experience in delivering solutions from concept to production that are based on Java and open-source technologies as an enterprise architect in global organizations.
• 12-15 years of industry experience in the design, development, deployment, and operation of technical solutions, including managing their non-functional aspects.

Job Title : Backend Developer (.NET Core)
Experience : 2 to 5 Years
Work Environment :
- Hybrid model : Minimum 3 days/week from office
- Office Location : Hauz Khas, Delhi
- Flexible working hours based on project needs
Job Summary :
We are looking for a skilled and motivated Backend Developer with 2–5 years of experience, proficient in .NET Core and experienced with cloud platforms (AWS preferred or Azure).
You will be responsible for building scalable backend services and RESTful APIs, integrating with cloud components, and ensuring robust data handling.
Key Responsibilities :
- Design, develop, and maintain robust backend systems using .NET Core 6 or above.
- Develop and manage Web APIs for frontend and third-party integrations.
- Work with cloud infrastructure (preferably AWS: Lambda, EventBridge, SQS, CloudWatch, S3; or Azure: AppService, Functions, AzureSQL, EventHub, Blob Storage).
- Work with databases such as SQL Server, PostgreSQL, and tools like PGAdmin.
- Utilize Entity Framework Core, Dapper, or equivalent ORM tools.
- Collaborate using Git or TFS for version control.
- Develop, debug, and maintain code in Visual Studio.
- Write optimized and scalable SQL queries and stored procedures.
- Participate in code reviews and maintain high code quality.
Required Skills :
- Proficient in .NET Core 6+.
- Experience with RESTful API development.
- Strong command of SQL and relational databases.
- Familiarity with cloud platforms (AWS preferred, Azure acceptable).
- Good understanding of Git/TFS.
- Hands-on experience with tools like Visual Studio, SQL Server, PGAdmin.
- ORM experience with Entity Framework Core, Dapper, or similar.
Nice to Have :
- Exposure to asynchronous processing or event-driven architecture.
- Experience working with Cursor-based data fetching or streaming.
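Cursor-based (keyset) fetching, mentioned above, pages through data by "last id seen" instead of OFFSET, which stays fast on large tables. A minimal Python sketch where an in-memory list stands in for a table query (names are illustrative):

```python
def fetch_page(rows, cursor=None, page_size=2):
    """Keyset ('cursor-based') pagination over rows sorted by id.

    Instead of an OFFSET, the client passes the last id it saw; the
    next page starts strictly after that id.
    """
    page = [r for r in rows if cursor is None or r["id"] > cursor][:page_size]
    next_cursor = page[-1]["id"] if page else None
    return page, next_cursor

rows = [{"id": i, "name": f"item-{i}"} for i in range(1, 6)]
page1, c1 = fetch_page(rows)             # ids 1, 2
page2, c2 = fetch_page(rows, cursor=c1)  # ids 3, 4
```

In SQL this becomes `WHERE id > :cursor ORDER BY id LIMIT :page_size`.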
Job description
- Database Design & Development:
- Design, implement, and maintain PostgreSQL databases.
- Develop efficient schemas, indexes, and stored procedures to optimize data performance.
- Backend Development:
- Develop RESTful APIs and backend services using Node.js.
- Integrate databases with backend applications for seamless data flow.
- Query Optimization & Performance Tuning:
- Optimize SQL queries for performance and scalability.
- Monitor database health and troubleshoot slow queries or deadlocks.
- Security & Compliance:
- Implement database security best practices, including role-based access control (RBAC) and encryption.
- Ensure compliance with industry standards like GDPR, HIPAA, etc.
- Data Migration & Backup:
- Develop and maintain data migration scripts between different PostgreSQL versions or from other databases such as MongoDB.
- Set up and manage database backup and recovery strategies.
- Write efficient SQL queries and manage database schemas using PostgreSQL.
- Build RESTful APIs and integrate third-party APIs/services.
- Optimize application performance and troubleshoot production issues.
- Ensure data security and protection practices are followed.
- Write clean, maintainable code and participate in code reviews.
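The schema-and-index work described above can be sketched with stdlib `sqlite3` standing in for PostgreSQL (the DDL and the indexing idea carry over, though production tooling differs; table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
# An index on the column we filter by turns a full scan into a lookup.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [("a@example.com",), ("b@example.com",)])

# EXPLAIN QUERY PLAN confirms the lookup uses the index, not a scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE email = ?",
    ("a@example.com",),
).fetchall()
print(plan)
```

PostgreSQL offers the same check via `EXPLAIN ANALYZE`, which is the usual first step when hunting slow queries.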
Technical Skills:
- Hands-on experience with AWS, Google Cloud Platform (GCP), and Microsoft Azure cloud computing
- Proficiency in Windows Server and Linux server environments
- Proficiency with Internet Information Services (IIS), Nginx, Apache, etc.
- Experience in deploying .NET applications (ASP.NET, MVC, Web API, WCF, etc.), .NET Core, Python, and Node.js applications
- Familiarity with GitLab or GitHub for version control and Jenkins for CI/CD processes
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
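The preferred "scripting for automation" skill above often starts with small monitoring scripts. A hedged Python sketch (the threshold and path are illustrative; a real script would feed an alerting channel instead of printing):

```python
import shutil

def disk_usage_percent(path="."):
    """Percentage of the filesystem containing `path` that is in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

ALERT_THRESHOLD = 90.0  # illustrative cutoff
percent = disk_usage_percent(".")
print(f"disk usage: {percent:.1f}% (alert above {ALERT_THRESHOLD}%)")
```

The same shape (measure, compare to threshold, alert) applies to memory, CPU, and service health checks, whether written in Python, Bash, or PowerShell.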


Job Description:
Deqode is seeking a skilled .NET Full Stack Developer with expertise in .NET Core, Angular, and C#. The ideal candidate will have hands-on experience with either AWS or Azure cloud platforms. This role involves developing robust, scalable applications and collaborating with cross-functional teams to deliver high-quality software solutions.
Key Responsibilities:
- Develop and maintain web applications using .NET Core, C#, and Angular.
- Design and implement RESTful APIs and integrate with front-end components.
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality products.
- Deploy and manage applications on cloud platforms (AWS or Azure).
- Write clean, scalable, and efficient code following best practices.
- Participate in code reviews and provide constructive feedback.
- Troubleshoot and debug applications to ensure optimal performance.
- Stay updated with emerging technologies and propose improvements to existing systems.
Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 4 years of professional experience in software development.
- Proficiency in .NET Core, C#, and Angular.
- Experience with cloud services (either AWS or Azure).
- Strong understanding of RESTful API design and implementation.
- Familiarity with version control systems like Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently and collaboratively in a team environment.
Preferred Qualifications:
- Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Knowledge of CI/CD pipelines and DevOps practices.
- Familiarity with Agile/Scrum methodologies.
- Strong communication and interpersonal skills.
What We Offer:
- Competitive salary and performance-based incentives.
- Flexible working hours and remote work options.
- Opportunities for professional growth and career advancement.
- Collaborative and inclusive work environment.
- Access to the latest tools and technologies.



Job Title: .NET Developer
Location: Pan India (Hybrid)
Employment Type: Full-Time
Join Date: Immediate / Within 15 Days
Experience: 4+ Years
Deqode is looking for a skilled and passionate Senior .NET Developer to join our growing tech team. The ideal candidate is an expert in building scalable web applications and has hands-on experience with cloud platforms and modern front-end technologies.
Key Responsibilities:
- Design, develop, and maintain scalable web applications using .NET Core.
- Work on RESTful APIs and integrate third-party services.
- Collaborate with UI/UX designers and front-end developers using Angular or React.
- Deploy, monitor, and maintain applications on AWS or Azure.
- Participate in code reviews, technical discussions, and architecture planning.
- Write clean, well-structured, and testable code following best practices.
Must-Have Skills:
- 4+ years of experience in software development using .NET Core.
- Proficiency with Angular or React for front-end development.
- Strong working knowledge of AWS or Microsoft Azure.
- Experience with SQL/NoSQL databases.
- Excellent communication and team collaboration skills.
Education:
- Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field.
Required Skills:
- Experience in systems administration, SRE or DevOps focused role
- Experience in handling production support (on-call)
- Good understanding of the Linux operating system and networking concepts.
- Demonstrated competency with the following AWS services: ECS, EC2, EBS, EKS, S3, RDS, ELB, IAM, Lambda.
- Experience with Docker containers and containerization concepts
- Experience with managing and scaling Kubernetes clusters in a production environment
- Experience building scalable infrastructure in AWS with Terraform.
- Strong knowledge of protocols such as HTTP/HTTPS, SMTP, DNS, and LDAP
- Experience monitoring production systems
- Expertise in leveraging Automation / DevOps principles, experience with operational tools, and able to apply best practices for infrastructure and software deployment (Ansible).
- HAProxy, Nginx, SSH, MySQL configuration and operation experience
- Ability to work seamlessly with software developers, QA, project managers, and business development
- Ability to produce and maintain written documentation

Role - MLOps Engineer
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.
Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.
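The data-drift monitoring responsibility above can be sketched with a crude signal: how far the live feature mean has shifted from the reference (training-time) mean, in units of the reference standard deviation. Production systems use proper tests (PSI, Kolmogorov-Smirnov), but the monitoring loop is the same; all numbers below are invented:

```python
import statistics

def drift_score(reference, live):
    """Shift of the live mean from the reference mean, measured in
    reference standard deviations. Large values suggest data drift.
    """
    ref_mean = statistics.fmean(reference)
    ref_std = statistics.stdev(reference)
    return abs(statistics.fmean(live) - ref_mean) / ref_std

reference = [10.0, 11.0, 9.0, 10.5, 9.5]   # training-time feature values
stable = [10.2, 9.8, 10.1]                  # live data, no drift
shifted = [14.0, 15.0, 13.5]                # live data, drifted

print(drift_score(reference, stable))   # well under 1
print(drift_score(reference, shifted))  # well over 1: flag for review
```

A scheduled job computing such scores per feature, with alerts on threshold breaches, is the simplest form of the drift monitoring described above.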
Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS SageMaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
Job Title : Senior Backend Engineer – Java, AI & Automation
Experience : 4+ Years
Location : Any Cognizant location (India)
Work Mode : Hybrid
Interview Rounds :
- Virtual
- Face-to-Face (In-person)
Job Description :
Join our Backend Engineering team to design and maintain services on the Intuit Data Exchange (IDX) platform.
You'll work on scalable backend systems powering millions of daily transactions across Intuit products.
Key Qualifications :
- 4+ years of backend development experience.
- Strong in Java, Spring framework.
- Experience with microservices, databases, and web applications.
- Proficient in AWS and cloud-based systems.
- Exposure to AI and automation tools (Workato preferred).
- Python development experience.
- Strong communication skills.
- Comfortable with occasional US shift overlap.

Role - MLOps Engineer
Required Experience - 4 Years
Location - Pune, Gurgaon, Noida, Bhopal, Bangalore
Mode - Hybrid
Key Requirements:
- 4+ years of experience in Software Engineering with MLOps focus
- Strong expertise in AWS, particularly AWS SageMaker (required)
- AWS DataZone experience (preferred)
- Proficiency in Python, R, Scala, or Spark
- Experience developing scalable, reliable, and secure applications
- Track record of production-grade development, integration and support
POSITION: Sr. DevOps Engineer
Job Type: Work From Office (5 days)
Location: Sector 16A, Film City, Noida / Mumbai
Relevant Experience: Minimum 4+ years
Salary: Competitive
Education: B.Tech
About the Company: Devnagri is an AI company dedicated to personalizing business communication and making it hyper-local to attract non-English speakers. We address the significant gap in internet content availability for most of the world's population who do not speak English. For more details, visit www.devnagri.com
We seek a highly skilled and experienced Senior DevOps Engineer to join our dynamic team. As a key member of our technology department, you will play a crucial role in designing and implementing scalable, efficient and robust infrastructure solutions with a strong focus on DevOps automation and best practices.
Roles and Responsibilities
- Design, plan, and implement scalable, reliable, secure, and robust infrastructure architectures
- Manage and optimize cloud-based infrastructure components
- Architect and implement containerization technologies, such as Docker and Kubernetes
- Implement the CI/CD pipelines to automate the build, test, and deployment processes
- Design and implement effective monitoring and logging solutions for applications and infrastructure. Establish metrics and alerts for proactive issue identification and resolution
- Work closely with cross-functional teams to troubleshoot and resolve issues.
- Implement and enforce security best practices across infrastructure components
- Establish and enforce configuration standards across various environments.
- Implement and manage infrastructure using Infrastructure as Code principles
- Leverage tools like Terraform for provisioning and managing resources.
- Stay abreast of industry trends and emerging technologies.
- Evaluate and recommend new tools and technologies to enhance infrastructure and operations
Must have Skills:
Cloud (AWS & GCP), Redis, MongoDB, MySQL, Docker, Bash scripting, Jenkins, Prometheus, Grafana, ELK Stack, Apache, Linux
Good to have Skills:
Kubernetes, Collaboration and Communication, Problem Solving, IAM, WAF, SAST/DAST
Interview Process:
Screening and shortlisting >> 3 technical rounds >> 1 managerial round >> HR closure
Apply with a short success story about your journey in DevOps and tech.
Cheers
For more details, visit our website- https://www.devnagri.com

Unstop (Formerly Dare2Compete) is looking for Frontend and Full Stack Developers. Developer responsibilities include building our application from concept to completion from the bottom up, fashioning everything from the home page to site layout and function.
Requirements:-
- Write well-designed, testable, efficient code by using the best software development practices
- Integrate data from various back-end services and databases
- Gather and refine specifications and requirements based on technical needs
- Be responsible for maintaining, expanding, and scaling our products
- Stay plugged into emerging technologies/industry trends and apply them into operations and activities
- End-to-end management and coding of all our products and services
- To make products modular, flexible, scalable and robust
Tech Skill:-
- Angular 10 or later
- PHP Laravel
- NodeJS
- MYSQL 8
- NoSQL DB
- Amazon AWS services – EC2, WAF, EBS, SNS, SES, Lambda, Fargate, etc.
- The whole ecosystem of AWS
Qualifications:-
- Freshers and candidates with up to 10 years of experience in the technologies we work with
- Proven working experience in programming – Full Stack
- Top-notch programming and analytical skills
- Must have hands-on experience with Angular 2 or later
- A solid understanding of how web applications work including security, session management, and best development practices
- Adequate knowledge of relational database systems, Object-Oriented Programming and web application development
- Ability to work and thrive in a fast-paced environment, learn rapidly and master diverse web technologies and techniques
- B.Tech in Computer Science or a related field or equivalent
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We build products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, In office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java : Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
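The Kafka-style real-time pipeline mentioned in the responsibilities follows a producer/topic/consumer shape. A toy in-memory Python sketch of that flow (real deployments use a Kafka client library and persistent, partitioned topics; class and topic names here are invented):

```python
from collections import defaultdict, deque

class MiniBroker:
    """Toy in-memory topic broker illustrating the producer/consumer
    flow of a Kafka-style pipeline; not a real message broker.
    """
    def __init__(self):
        self.topics = defaultdict(deque)

    def publish(self, topic, message):
        self.topics[topic].append(message)

    def consume(self, topic):
        # Yield messages in the order they were published.
        while self.topics[topic]:
            yield self.topics[topic].popleft()

broker = MiniBroker()
broker.publish("orders", {"id": 1, "amount": 250})
broker.publish("orders", {"id": 2, "amount": 90})
processed = [msg["id"] for msg in broker.consume("orders")]
print(processed)  # [1, 2]
```

What Kafka adds on top of this shape is durability, partitioning for parallel consumers, and replayable offsets.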

Job Title : Sr. Data Engineer
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2-11 PM
Availability : Immediate
Job Description :
- We are seeking a Senior Data Engineer to design, develop, and optimize data solutions.
- The role involves building ETL pipelines, integrating data into BI tools, and ensuring data quality while working with SQL, Python (Pandas, NumPy), and cloud platforms (AWS/GCP).
- You will also develop dashboards using Looker Studio and work with AWS services like S3, Lambda, Glue ETL, Athena, RDS, and Redshift.
- Strong debugging, collaboration, and communication skills are essential.
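The ETL pipelines described above follow an extract-transform-load shape. A hedged stdlib-Python sketch (in the actual role this would be Pandas plus Glue/Redshift, but the structure is the same; the sample data and sink are invented):

```python
import csv
import io

# Raw extract: CSV text standing in for data pulled from S3.
raw = "city,revenue\nDelhi,1200\nNoida,800\nDelhi,300\n"

def extract(text):
    """Parse raw CSV into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Aggregate revenue per city (the 'T' step)."""
    totals = {}
    for row in rows:
        totals[row["city"]] = totals.get(row["city"], 0) + int(row["revenue"])
    return totals

def load(totals, sink):
    """Write results to the warehouse; a dict stands in for Redshift/RDS."""
    sink.update(totals)

warehouse = {}
load(transform(extract(raw)), warehouse)
print(warehouse)  # {'Delhi': 1500, 'Noida': 800}
```

In production each step maps onto a service from the stack above: extract from S3, transform in Glue or Lambda, load into Redshift, with Athena or Looker Studio on top.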