AWS (Amazon Web Services) Jobs in Delhi, NCR and Gurgaon

Apply to 50+ AWS (Amazon Web Services) Jobs in Delhi, NCR and Gurgaon on CutShort.io. Explore the latest AWS (Amazon Web Services) Job opportunities across top companies like Google, Amazon & Adobe.

Media and Entertainment Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Infrastructure
Scripting
+28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

The company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. The role requires hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or other high-traffic applications.

 

 Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts.
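To make the rollout strategies above concrete, here is a minimal, hedged sketch (Python with boto3) of the weighted traffic shift behind a canary or blue/green cutover on an Application Load Balancer. The listener and target group ARNs are placeholders, and a real rollout would gate each step on alarms and health checks rather than a fixed pause.

    # Minimal sketch: gradually shift ALB traffic from a "blue" to a "green"
    # target group (canary-style rollout). ARNs below are placeholders.
    import time
    import boto3

    elbv2 = boto3.client("elbv2")

    LISTENER_ARN = "arn:aws:elasticloadbalancing:region:account:listener/app/example/..."  # placeholder
    BLUE_TG = "arn:aws:elasticloadbalancing:region:account:targetgroup/blue/..."           # placeholder
    GREEN_TG = "arn:aws:elasticloadbalancing:region:account:targetgroup/green/..."         # placeholder

    def shift_traffic(green_weight: int) -> None:
        """Send `green_weight` percent of traffic to the green target group."""
        elbv2.modify_listener(
            ListenerArn=LISTENER_ARN,
            DefaultActions=[{
                "Type": "forward",
                "ForwardConfig": {
                    "TargetGroups": [
                        {"TargetGroupArn": BLUE_TG, "Weight": 100 - green_weight},
                        {"TargetGroupArn": GREEN_TG, "Weight": green_weight},
                    ]
                },
            }],
        )

    # Canary ramp: 10% -> 50% -> 100%, pausing between steps to watch metrics.
    for weight in (10, 50, 100):
        shift_traffic(weight)
        time.sleep(300)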

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.
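As an illustration of the metrics and alerting bullet above, the following is a minimal sketch of application-side instrumentation with the Python prometheus_client library; the metric names, labels, and port are illustrative only.

    # Minimal sketch: expose request counters and latency histograms for Prometheus to scrape.
    import random
    import time
    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests", ["endpoint", "status"])
    LATENCY = Histogram("app_request_latency_seconds", "Request latency", ["endpoint"])

    def handle_request(endpoint: str) -> None:
        with LATENCY.labels(endpoint=endpoint).time():
            time.sleep(random.uniform(0.01, 0.1))   # stand-in for real work
        REQUESTS.labels(endpoint=endpoint, status="200").inc()

    if __name__ == "__main__":
        start_http_server(9100)          # metrics served at :9100/metrics
        while True:
            handle_request("/home")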

 

4. Security, Compliance & Risk Highlighting

• Conduct frequent risk assessments and identify vulnerabilities in:
  o Cloud architecture
  o Access policies (IAM)
  o Secrets & key management
  o Data flows & network exposure
• Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
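As one small, hedged example of the kind of automated risk check implied above (network exposure in particular), this boto3 sketch flags security group ingress rules that are open to the whole internet; it is a starting point, not a full assessment.

    # Minimal sketch: list security group rules that allow ingress from 0.0.0.0/0.
    import boto3

    ec2 = boto3.client("ec2")

    def world_open_rules():
        findings = []
        paginator = ec2.get_paginator("describe_security_groups")
        for page in paginator.paginate():
            for sg in page["SecurityGroups"]:
                for perm in sg.get("IpPermissions", []):
                    for ip_range in perm.get("IpRanges", []):
                        if ip_range.get("CidrIp") == "0.0.0.0/0":
                            findings.append((sg["GroupId"], perm.get("FromPort"), perm.get("ToPort")))
        return findings

    if __name__ == "__main__":
        for group_id, from_port, to_port in world_open_rules():
            print(f"{group_id}: ports {from_port}-{to_port} open to 0.0.0.0/0")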

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
      o Microservices
      o Caching layers
      o CDN distribution (CloudFront)
      o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing.
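A minimal, hedged example of the indexing and query-tuning work above, using PyMongo; the database, collection, and field names are hypothetical.

    # Minimal sketch: add a compound index for a hot read path and inspect the query plan.
    from pymongo import MongoClient, ASCENDING, DESCENDING

    client = MongoClient("mongodb://localhost:27017")
    watch_history = client["ott"]["watch_history"]   # hypothetical collection

    # Index supporting "latest items for a user" queries.
    watch_history.create_index([("userId", ASCENDING), ("watchedAt", DESCENDING)])

    plan = watch_history.find({"userId": "u123"}).sort("watchedAt", -1).explain()
    print(plan["queryPlanner"]["winningPlan"])   # expect an IXSCAN stage, not COLLSCAN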

 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.
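Terraform or Ansible would usually drive this, but as a minimal Python-side illustration of the CloudFormation option listed above, here is a hedged boto3 sketch; the stack name and template file are placeholders.

    # Minimal sketch: create a CloudFormation stack from a local template and wait for completion.
    import boto3

    cfn = boto3.client("cloudformation")

    with open("network.yaml") as f:        # hypothetical template file
        template_body = f.read()

    cfn.create_stack(
        StackName="demo-network",          # placeholder stack name
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
        Tags=[{"Key": "env", "Value": "staging"}],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName="demo-network")
    print("stack created")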

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why join the company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

ConvertLens

Posted by Reshika Mendiratta
Remote, Noida
2 yrs+
Best in industry
Python
FastAPI
AI Agents
Artificial Intelligence (AI)
Large Language Models (LLM)
+9 more

🚀 About Us

At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.


We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.

We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.


🛠️ What You’ll Do

  • Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
  • Develop Agentic AI applications and workflows to drive automation and insights.
  • Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
  • Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
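As a small, hedged illustration of the back-end and connector work described above, a FastAPI route that pulls records from a third-party API and returns them in one unified shape might look like the sketch below; the CRM endpoint and field names are hypothetical.

    # Minimal sketch: FastAPI connector that fetches leads from a hypothetical
    # third-party CRM API and normalizes them into a single response shape.
    import httpx
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()
    CRM_BASE_URL = "https://crm.example.com/api"   # hypothetical endpoint

    class Lead(BaseModel):
        id: str
        name: str
        source: str

    @app.get("/connectors/crm/leads", response_model=list[Lead])
    async def crm_leads() -> list[Lead]:
        async with httpx.AsyncClient(timeout=10) as client:
            resp = await client.get(f"{CRM_BASE_URL}/leads")
        if resp.status_code != 200:
            raise HTTPException(status_code=502, detail="CRM upstream error")
        return [
            Lead(id=item["id"], name=item["full_name"], source="crm")
            for item in resp.json()
        ]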


⚙️ What You Bring

  • 2+ years of hands-on experience in Python back-end development.
  • Strong understanding of REST API design and integration.
  • Proficiency with relational databases (MySQL/PostgreSQL).
  • Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
  • Experience maintaining production systems with a focus on reliability and scalability.
  • Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
  • Strong problem-solving skills and comfort working in a startup/product environment.
  • A builder mindset — scrappy, curious, and ready to ship.


💼 Perks & Culture

  • Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
  • A high-growth, high-impact environment where your code goes live fast.
  • Opportunities to work with Agentic AI and cutting-edge tech.
  • Small team, big vision — your work truly matters here.
Service Co

Agency job via Vikash Technologies by Rishika Teja
Noida, Delhi
6 - 8 yrs
₹15L - ₹18L / yr
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
Kubernetes
+2 more

  • Hands-on experience with Infrastructure as Code tools like Terraform, Ansible, Puppet, Chef, CloudFormation, etc.
  • Expertise in any cloud (AWS, Azure, GCP)
  • Good understanding of version control (Git, GitLab, GitHub)
  • Hands-on experience with container infrastructure (Docker, Kubernetes)
  • Ensuring availability, performance, security, and scalability of production systems
  • Hands-on experience with infrastructure automation tools like Chef/Puppet/Ansible, Terraform, ARM, CloudFormation
  • Hands-on experience with artifact repositories (Nexus, JFrog Artifactory)
  • Hands-on experience with CI/CD tools on-premises/cloud (Jenkins, CircleCI, etc.)
  • Hands-on experience with monitoring, logging, and security (CloudWatch, CloudTrail, Log Analytics, hosted tools such as ELK, EFK, Splunk, Datadog, Prometheus)
  • Hands-on experience with scripting languages like Python, Ant, Bash, and Shell
  • Hands-on experience in designing pipelines and pipelines as code
  • Hands-on experience in the end-to-end deployment process and strategy
  • Hands-on experience with GCP/AWS/Azure and a good understanding of compute, networks, storage, IAM, security, and integration services

Lovoj

Posted by LOVOJ CONTACT
Delhi
3 - 10 yrs
₹8L - ₹14L / yr
Amazon Web Services (AWS)
AWS Lambda
CI/CD
DevOps

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
  • Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
  • Configure and maintain Docker containers and/or Kubernetes clusters.
  • Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation.
  • Automate build, deployment, and monitoring processes.
  • Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
  • Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk.
  • Ensure system scalability, reliability, and security.
  • Troubleshoot production issues and perform root-cause analysis.
  • Collaborate with engineering teams to improve deployment and development workflows.
  • Optimize infrastructure costs and improve performance.
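One hedged example of the monitoring and alerting work listed above: creating a CloudWatch CPU alarm with boto3. The instance ID and SNS topic ARN are placeholders.

    # Minimal sketch: alarm when an EC2 instance averages over 80% CPU for 10 minutes.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cloudwatch.put_metric_alarm(
        AlarmName="api-server-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],      # placeholder
    )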

Required Skills & Qualifications

  • 3+ years of experience in DevOps, SRE, or Cloud Engineering.
  • Strong hands-on knowledge of AWS cloud services.
  • Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
  • Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
  • Experience with Linux administration and shell scripting.
  • Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
  • Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
  • Experience with Terraform or CloudFormation (IaC).
  • Good understanding of Node.js or similar application deployments.
  • Knowledge of NGINX/Apache and load balancing concepts.
  • Strong problem-solving and communication skills.

Preferred/Good to Have

  • Experience with Kubernetes (EKS).
  • Experience with Serverless architectures (Lambda).
  • Experience with Redis, MongoDB, RDS.
  • Certification in AWS Solutions Architect / DevOps Engineer.
  • Experience with security best practices, IAM policies, and DevSecOps.
  • Understanding of cost optimization and cloud cost management.


Media and Entertainment

Agency job via Jobdost by Saida Pathan
Noida
4 - 6 yrs
₹30L - ₹35L / yr
TypeScript
Express
Next.js
MVC Framework
Microsoft Windows Azure
+2 more

What you will be doing

  • Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
  • Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At STAGE, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
  • Design scalable platforms that empower our product and marketing teams to rapidly experiment.
  • Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
  • Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
  • Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
  • Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.

 

The role could be ideal for you if you:

  • Have 4-6 years of experience in backend engineering, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
  • Are well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
  • Are experienced in writing automated tests (especially integration tests) and Continuous Integration. At STAGE, engineers own quality, and hence writing automated tests is crucial to the role.
  • Have experience managing production infrastructure on public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience using Kubernetes.
  • Have experience with observability techniques like code instrumentation for metrics, tracing and logging.
  • Care deeply about code quality, code reviews, software architecture (think Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
  • Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
  • Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members of the team sooner rather than later.
  • Can take ownership of goals and deliver them with high accountability.


AdTech Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Ansible
Terraform
Amazon Web Services (AWS)
Platform as a Service (PaaS)
CI/CD
+30 more

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
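As one hedged example of the encryption-at-rest item above, a small boto3 check that reports S3 buckets without a default encryption configuration:

    # Minimal sketch: report S3 buckets that have no default encryption configuration.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                print(f"{name}: no default encryption configured")
            else:
                raise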

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, audit logging.
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
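A minimal, hedged sketch of one common drift signal mentioned above, the Population Stability Index (PSI) between a training sample and live traffic, computed with NumPy; the bin count and the 0.2 threshold are conventional rules of thumb, not fixed requirements.

    # Minimal sketch: Population Stability Index (PSI) as a simple data-drift signal.
    # PSI = sum((p_live - p_train) * ln(p_live / p_train)) over shared bins.
    import numpy as np

    def psi(train: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
        edges = np.histogram_bin_edges(train, bins=bins)
        train_counts, _ = np.histogram(train, bins=edges)
        live_counts, _ = np.histogram(live, bins=edges)
        eps = 1e-6  # avoid division by zero in empty bins
        p_train = train_counts / train_counts.sum() + eps
        p_live = live_counts / live_counts.sum() + eps
        return float(np.sum((p_live - p_train) * np.log(p_live / p_train)))

    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)
    current = rng.normal(0.3, 1.0, 10_000)         # shifted distribution
    print(f"PSI = {psi(baseline, current):.3f}")   # values above ~0.2 are often treated as drift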


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
AdTech Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹0.1L - ₹0.1L / yr
Python
MLOps
Apache Airflow
Apache Spark
AWS CloudFormation
+23 more

Review Criteria

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, including at recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria

  • CV attachment is mandatory
  • Please provide a CTC breakup (fixed + variable)
  • Are you okay with an F2F round?
  • Has the candidate filled the Google form?

 

Role & Responsibilities

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue).
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
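As a hedged illustration of the Airflow/MWAA pipeline work in the list above, a minimal DAG (assuming Airflow 2.x) with placeholder task logic might look like this:

    # Minimal sketch: a daily training pipeline as an Airflow DAG.
    # Task bodies are placeholders for real extract/train/publish logic.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_features():
        print("pull features from the warehouse")

    def train_model():
        print("train and validate the model")

    def publish_model():
        print("push the approved model to the registry")

    with DAG(
        dag_id="daily_model_training",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
        train = PythonOperator(task_id="train_model", python_callable=train_model)
        publish = PythonOperator(task_id="publish_model", python_callable=publish_model)

        extract >> train >> publish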

 

Ideal Candidate

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
      o Compute/Orchestration: EKS, ECS, EC2, Lambda
      o Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
      o Workflow: MWAA/Airflow, Step Functions
      o Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.

 

Media and Entertainment Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Noida
4 - 8 yrs
₹20L - ₹45L / yr
TypeScript
MongoDB
Microservices
MVC Framework
Google Cloud Platform (GCP)
+14 more

Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js

 

Criteria:

Need candidates from growing startups or product-based companies only.

1. 4–8 years’ experience in backend engineering

2. Minimum 2+ years hands-on experience with:

  • TypeScript
  • Express.js / Nest.js

3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)

4. Strong understanding of system design & scalable architecture

5. Hands-on experience in:

  • Event-driven architecture / Domain-driven design
  • MVC / Microservices

6. Strong in automated testing (especially integration tests)

7. Experience with CI/CD pipelines (GitHub Actions or similar)

8. Experience managing production systems

9. Solid understanding of performance, reliability, observability

10. Cloud experience (AWS preferred; GCP/Azure acceptable)

11. Strong coding standards — Clean Code, code reviews, refactoring

 

Description 

About the opportunity

We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.

As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.

 

What you will be doing

  • Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
  • Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
  • Design scalable platforms that empower our product and marketing teams to rapidly experiment.
  • Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
  • Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
  • Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
  • Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.

 

The role could be ideal for you if you:

  • Have 4-8 years of experience in backend engineering, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
  • Are well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
  • Are experienced in writing automated tests (especially integration tests) and Continuous Integration. At the company, engineers own quality, and hence writing automated tests is crucial to the role.
  • Have experience managing production infrastructure on public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience using Kubernetes.
  • Have experience with observability techniques like code instrumentation for metrics, tracing and logging.
  • Care deeply about code quality, code reviews, software architecture (think Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
  • Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
  • Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members of the team sooner rather than later.
  • Can take ownership of goals and deliver them with high accountability.

 

Don’t hesitate to try out new technologies. At the company, nobody is limited to a role. Every engineer in our team is an expert in at least one technology but often ventures out into adjacent technologies like React.js, Flutter, Data Platforms, AWS and Kubernetes. If you are not excited by this, you will not like working at the company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, Infrastructure as Code (Terraform, Pulumi, etc.), etc.

 

 

Talent Pro

Posted by Mayank Choudhary
Noida
8 - 12 yrs
₹60L - ₹80L / yr
DevOps
CI/CD
Amazon Web Services (AWS)
Terraform
Ansible
+1 more

Strong DevSecOps / Cloud Security profile

Mandatory (Experience 1) – Must have 8+ years total experience in DevSecOps / Cloud Security / Platform Security roles securing AWS workloads and CI/CD systems.

Mandatory (Experience 2) – Must have strong hands-on experience securing AWS services (including but not limited to) KMS, WAF, Shield, CloudTrail, AWS Config, Security Hub, Inspector, Macie and IAM governance

Mandatory (Experience 3) – Must have hands-on expertise in Identity & Access Security including RBAC, IRSA, PSP/PSS, SCPs and IAM least-privilege enforcement

Mandatory (Experience 4) – Must have hands-on experience with security automation using Terraform and Ansible for configuration hardening and compliance

Mandatory (Experience 5) – Must have strong container & Kubernetes security experience including Docker image scanning, EKS runtime controls, network policies, and registry security

Mandatory (Experience 6) – Must have strong CI/CD pipeline security expertise including SAST, DAST, SCA, Jenkins Security, artifact integrity, secrets protection, and automated remediation

Mandatory (Experience 7) – Must have experience securing data & ML platforms including databases, data centers/on-prem environments, MWAA/Airflow, and sensitive ETL/ML workflows

Mandatory (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

McKinley Rice

Pune, Noida
5 - 15 yrs
₹5L - ₹25L / yr
MongoDB
NodeJS (Node.js)
Generative AI
Express
DevOps
+2 more

Company Overview 

McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.

Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era


Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.

Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700 M+ decision-makers, all in just a few clicks.

At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:

  • Precision prospecting
  • Intent-based targeting
  • Data enrichment from 16+ premium sources
  • AI-driven workflows to book more meetings, faster

We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.



EXPERIENCE



Duties you'll be entrusted with:


  • Develop and execute scalable APIs and applications using the Node.js or Nest.js framework
  • Writing efficient, reusable, testable, and scalable code.
  • Understanding, analyzing, and implementing – Business needs, feature modification requests, and conversion into software components
  • Integration of user-oriented elements into different applications, data storage solutions
  • Developing – Backend components to enhance performance and responsiveness, server-side logic and platform, statistical learning models, and highly responsive web applications
  • Designing and implementing – High availability and low latency applications, data protection and security features
  • Performance tuning and automation of applications and enhancing the functionalities of current software systems.
  • Keeping abreast with the latest technology and trends.


Expectations from you:


Basic Requirements


  • Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
  • Experience with Cloud platforms (AWS, Azure, GCP).
  • Strong understanding of monitoring, logging, and observability practices.
  • Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
  • Expertise in designing, implementing, and optimizing Elasticsearch.
  • Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
  • Expertise in Event driven architecture.
  • Experience in Integrating Generative AI APIs.
  • Working experience in high user concurrency.
  • Experience with scaled databases handling millions of records - indexing, retrieval, etc.


Technical Skills


  • Demonstrable experience in web application development with expertise in Node.js or Nest.js.
  • Knowledge of database technologies and agile development methodologies.
  • Experience working with databases, such as MySQL or MongoDB.
  • Familiarity with web development frameworks, such as Express.js.
  • Understanding of microservices architecture and DevOps principles.
  • Well-versed with AWS and serverless architecture.



Soft Skills


  • A quick and critical thinker with the ability to come up with a number of ideas about a topic and bring fresh and innovative ideas to the table to enhance the visual impact of our content.
  • Potential to apply innovative and exciting ideas, concepts, and technologies.
  • Stay up-to-date with the latest design trends, animation techniques, and software advancements.
  • Multi-tasking and time-management skills, with the ability to prioritize tasks.


THRIVE


Some of the extensive benefits of being part of our team:


  • We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
  • The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
  • The McKinley Cares Program has a wide range of benefits:
  • The wellness program covers sessions for mental wellness and fitness, and offers health insurance.
  • In-house benefits have a referral bonus window and sponsored social functions.
  • An Expanded Leave Basket including paid Maternity and Paternity Leaves and rejuvenation Leaves apart from the regular 20 leaves per annum. 
  • Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
  • In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
  • We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.


iDreamCareer

Posted by Recruitment Team
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
10 - 20 yrs
₹40L - ₹65L / yr
MERN Stack
Artificial Intelligence (AI)
Machine Learning (ML)
Retrieval Augmented Generation (RAG)
Amazon Web Services (AWS)

At iDreamCareer, we’re on a mission to democratize career guidance for millions of young learners across India and beyond. Technology is at the heart of this mission — and we’re looking for an Engineering Manager who thrives in high-ownership environments, thinks with an enterprising mindset, and gets excited about solving problems that genuinely change lives.

This is not just a management role. It’s a chance to shape the product, scale the platform, influence the engineering culture, and lead a team that builds with heart and hustle.



As a Director of Engineering here, you will:


  • Lead a talented team of engineers while remaining hands-on with architecture and development.
  • Champion the use of AI/ML, LLM-driven features, and intelligent systems to elevate learner experience.
  • Inspire a culture of high performance, clear thinking, and thoughtful engineering.
  • Partner closely with product, design, and content teams to deliver delightful, meaningful user experiences.
  • Bring structure, clarity, and energy to complex problem-solving.
  • This role is ideal for someone who loves building, mentoring, scaling, and thinking several steps ahead.


Key Responsibilities

Technical Leadership & Ownership

  • Lead end-to-end development across backend, frontend, architecture, and infrastructure in partnership with product and design teams.
  • Stay hands-on with the MERN stack, Python, and AI/ML technologies, while guiding and coaching a high-performance engineering team.
  • Architect, develop, and maintain distributed microservices, event-driven systems, and robust APIs on AWS.


AI/ML Engineering

  • Build and deploy AI-powered features, leveraging LLMs, RAG pipelines, embeddings, vector databases, and model evaluation frameworks.
  • Drive prompt engineering, retrieval optimization, and continuous refinement of AI system performance.
  • Champion the adoption of modern AI coding tools and emerging AI platforms to boost team productivity.
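A hedged, minimal sketch of the retrieval step in a RAG pipeline like the one described above; embed() is a stand-in for whatever embedding model and vector store the team actually uses, so the similarity scores here are not meaningful.

    # Minimal sketch: rank passages for a query by cosine similarity over embeddings.
    # `embed` is a placeholder; swap in a real embedding model / vector database.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder: deterministic pseudo-embedding so the sketch runs standalone.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.normal(size=384)

    def top_k(query: str, passages: list[str], k: int = 2) -> list[str]:
        q = embed(query)
        scored = []
        for passage in passages:
            p = embed(passage)
            score = float(np.dot(q, p) / (np.linalg.norm(q) * np.linalg.norm(p)))
            scored.append((score, passage))
        scored.sort(reverse=True)
        return [passage for _, passage in scored[:k]]

    docs = [
        "Career options after class 12 science.",
        "Scholarships for design students.",
        "How to prepare for engineering entrance exams.",
    ]
    print(top_k("entrance exam preparation", docs))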


Cloud, Data, DevOps & Scaling

  • Own deployments and auto-scaling on AWS (ECS, Lambda, CloudFront, SQS, SES, ELB, S3).
  • Build and optimize real-time and batch data pipelines using BigQuery and other analytics tools.
  • Implement CI/CD pipelines for Dockerized applications, ensuring strong observability through Prometheus, Loki, Grafana, CloudWatch.
  • Enforce best practices around security, code quality, testing, and system performance.

Collaboration & Delivery Excellence

  • Partner closely with product managers, designers, and QA to deliver features with clarity, speed, and reliability.
  • Drive agile rituals, ensure engineering predictability, and foster a culture of ownership, innovation, and continuous improvement


Required Skills & Experience

  • 8-15 years of experience in full-stack or backend engineering with at least 5+ years leading engineering teams.
  • Strong hands-on expertise in the MERN stack and modern JavaScript/TypeScript ecosystems.
  • 5+ years building and scaling production-grade applications and distributed systems.
  • 2+ years building and deploying AI/ML products — including training, tuning, integrating, and monitoring AI models in production.
  • Practical experience with SQL, NoSQL, vector databases, embeddings, and production-grade RAG systems.
  • Strong understanding of LLM prompt optimization, evaluation frameworks, and AI-driven system design.
  • Hands-on with AI developer tools, automation utilities, and emerging AI productivity platforms.

Preferred Skills

  • Familiarity with LLM orchestration frameworks (LangChain, LlamaIndex, etc.) and advanced tool-calling workflows.
  • Experience building async workflows, schedulers, background jobs, and offline processing systems.
  • Exposure to modern frontend testing frameworks, QA automation, and performance testing.
AdTech Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹30L - ₹40L / yr
DevOps
Docker
CI/CD
Amazon Web Services (AWS)
AWS CloudFormation
+22 more

ROLES AND RESPONSIBILITIES:

We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.


KEY RESPONSIBILITIES:

Cloud & Infrastructure as Code (IaC)-

  • Architect and manage AWS environments ensuring scalability, security, and high availability.
  • Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
  • Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.


CI/CD & Automation:

  • Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
  • Automate deployments, provisioning, and monitoring across environments.


Containers & Orchestration:

  • Deploy and operate workloads on Docker and Kubernetes (EKS).
  • Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
  • Optimize performance of containerized and microservices applications.


Monitoring & Reliability:

  • Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Establish logging, alerting, and proactive monitoring for high availability.


Security & Compliance:

  • Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
  • Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
  • Configure VPNs, firewalls, and secure access policies and AWS organizations.


Databases & Analytics:

  • Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Manage data reliability, performance tuning, and cloud-native integrations.
  • Experience with Apache Airflow and Spark.
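As a hedged illustration of the Spark work noted above, a minimal PySpark aggregation job; the input path and column names are hypothetical.

    # Minimal sketch: roll up ad impressions per campaign per day with PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("impressions-rollup").getOrCreate()

    events = spark.read.parquet("s3://example-bucket/events/impressions/")   # hypothetical path
    daily = (
        events
        .withColumn("day", F.to_date("event_time"))
        .groupBy("campaign_id", "day")
        .agg(F.count("*").alias("impressions"))
    )
    daily.write.mode("overwrite").parquet("s3://example-bucket/rollups/daily_impressions/")
    spark.stop()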


IDEAL CANDIDATE:

  • 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES, etc.).
  • Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
  • Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
  • Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
  • Proven ability with CI/CD pipeline automation and DevSecOps practices.
  • Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and other advanced AWS networking and security tools.
  • Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
  • Strong scripting skills (Shell/Bash, Python, or similar) for automation.
  • Bachelor's / Master's degree
  • Effective communication skills


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
Spark Eighteen

Posted by Rishabh Jain
Delhi
5 - 10 yrs
₹23L - ₹30L / yr
CI/CD
Amazon Web Services (AWS)
Docker
Kubernetes
Jenkins
+2 more

About the Job

This is a full-time role for a Lead DevOps Engineer at Spark Eighteen. We are seeking an experienced DevOps professional to lead our infrastructure strategy, design resilient systems, and drive continuous improvement in our deployment processes. In this role, you will architect scalable solutions, mentor junior engineers, and ensure the highest standards of reliability and security across our cloud infrastructure. The job location is flexible with preference for the Delhi NCR region.


Responsibilities

  • Lead and mentor the DevOps/SRE team
  • Define and drive DevOps strategy and roadmaps
  • Oversee infrastructure automation and CI/CD at scale
  • Collaborate with architects, developers, and QA teams to integrate DevOps practices
  • Ensure security, compliance, and high availability of platforms
  • Own incident response, postmortems, and root cause analysis
  • Budgeting, team hiring, and performance evaluation


Requirements

Technical Skills

  • Bachelor's or Master's degree in Computer Science, Engineering, or related field.
  • 7+ years of professional DevOps experience with demonstrated progression.
  • Strong architecture and leadership background
  • Deep hands-on knowledge of infrastructure as code, CI/CD, and cloud
  • Proven experience with monitoring, security, and governance
  • Effective stakeholder and project management
  • Experience with tools like Jenkins, ArgoCD, Terraform, Vault, ELK, etc.
  • Strong understanding of business continuity and disaster recovery


Soft Skills

  • Cross-functional communication excellence with ability to lead technical discussions.
  • Strong mentorship capabilities for junior and mid-level team members.
  • Advanced strategic thinking and ability to propose innovative solutions.
  • Excellent knowledge transfer skills through documentation and training.
  • Ability to understand and align technical solutions with broader business strategy.
  • Proactive problem-solving approach with focus on continuous improvement.
  • Strong leadership skills in guiding team performance and technical direction.
  • Effective collaboration across development, QA, and business teams.
  • Ability to make complex technical decisions with minimal supervision.
  • Strategic approach to risk management and mitigation.


What We Offer

  • Professional Growth: Continuous learning opportunities through diverse projects and mentorship from experienced leaders
  • Global Exposure: Work with clients from 20+ countries, gaining insights into different markets and business cultures
  • Impactful Work: Contribute to projects that make a real difference, with solutions generating over $1B in revenue
  • Work-Life Balance: Flexible arrangements that respect personal wellbeing while fostering productivity
  • Career Advancement: Clear progression pathways as you develop skills within our growing organization
  • Competitive Compensation: Attractive salary packages that recognize your contributions and expertise


Our Culture

At Spark Eighteen, our culture centers on innovation, excellence, and growth. We believe in:

  • Quality-First: Delivering excellence rather than just quick solutions
  • True Partnership: Building relationships based on trust and mutual respect
  • Communication: Prioritizing clear, effective communication across teams
  • Innovation: Encouraging curiosity and creative approaches to problem-solving
  • Continuous Learning: Supporting professional development at all levels
  • Collaboration: Combining diverse perspectives to achieve shared goals
  • Impact: Measuring success by the value we create for clients and users


Apply Here - https://tinyurl.com/t6x23p9b

QAgile Services

Posted by Radhika Chotai
Noida
3 - 6 yrs
₹5L - ₹12L / yr
DevOps
Windows Azure
AWS CloudFormation
Amazon Web Services (AWS)
Kubernetes
+3 more

We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.

Responsibilities:

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Azure CLI.
  • Monitor and optimize Azure environments to ensure high availability, performance, and security.
  • Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
  • Troubleshoot and resolve issues related to build, deployment, and infrastructure.
  • Implement and manage version control systems, primarily using Git.
  • Manage containerization and orchestration using tools like Docker and Kubernetes.
  • Ensure compliance with industry standards and best practices for security, scalability, and reliability.


Webkul Software Pvt Ltd

Posted by Avantika Giri
Noida
2 - 7 yrs
₹4L - ₹8L / yr
DevOps
Docker
Jenkins
Amazon Web Services (AWS)
Windows Azure
+2 more

Job Specification:

  • Job Location - Noida
  • Experience - 2-5 Years
  • Qualification - B.Tech, BE, MCA (Technical background required)
  • Working Days - 5
  • Job nature - Permanent
  • Role - IT Cloud Engineer
  • Proficient in Linux.
  • Hands on experience with AWS cloud or Google Cloud.
  • Knowledge of container technology like Docker.
  • Expertise in scripting languages. (Shell scripting or Python scripting)
  • Working knowledge of LAMP/LEMP stack, networking and version control system like Gitlab or Github.


Job Description:

The incumbent would be responsible for:

  • Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
  • Server monitoring, analysis and troubleshooting.
  • Deploying multi-tier architectures using microservices.
  • Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
  • Automating workflow with python or shell scripting.
  • CI and CD integration for application lifecycle management.
  • Hosting and managing websites on Linux machines.
  • Frontend, backend and database optimization.
  • Protecting operations by keeping information confidential.
  • Providing information by collecting, analyzing, summarizing development & service issues.
  • Preparing and installing solutions by determining and designing system specifications, standards & programming.
Global Digital Transformation Solutions Provider

Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad
6 - 8 yrs
₹10L - ₹26L / yr
Large Language Models (LLM)
Prompt engineering
Knowledge base
Large Language Models (LLM) tuning
Artificial Intelligence (AI)
+7 more

MUST-HAVES:

  • LLM Integration & Prompt Engineering
  • Context & Knowledge Base Design
  • Experience running LLM evals


NOTICE PERIOD: Immediate – 30 Days


SKILLS: LLM, AI, PROMPT ENGINEERING


NICE TO HAVES:

  • Data Literacy & Modelling Awareness
  • Familiarity with Databricks, AWS, and ChatGPT environments



ROLE PROFICIENCY:

Role Scope / Deliverables:

  • Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
  • Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
  • Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
  • Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
  • Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
  • Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.



KEY DELIVERABLES:

  • LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic.
  • Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
  • Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time.
  • Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
  • AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
  • Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
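As a hedged, minimal sketch of the eval workflow implied by the deliverables above (eval sets, scoring criteria, output analysis), here is a tiny keyword-coverage harness; ask_llm() is a placeholder for however the team actually calls the model, and real evals would use richer scoring than keyword matching.

    # Minimal sketch: score LLM answers against expected keywords for a small eval set.
    # `ask_llm` is a placeholder for the real model call (API client, prompt template, etc.).
    def ask_llm(prompt: str) -> str:
        return "Gross margin is revenue minus cost of goods sold, divided by revenue."

    EVAL_SET = [
        {"prompt": "Define gross margin for a sales dashboard user.",
         "expected_keywords": ["revenue", "cost of goods sold"]},
        {"prompt": "Which data sources feed the commercial pipeline metrics?",
         "expected_keywords": ["crm", "pipeline"]},
    ]

    def score(answer: str, keywords: list[str]) -> float:
        hits = sum(1 for kw in keywords if kw.lower() in answer.lower())
        return hits / len(keywords)

    scores = [score(ask_llm(case["prompt"]), case["expected_keywords"]) for case in EVAL_SET]
    print(f"mean keyword coverage: {sum(scores) / len(scores):.2f}")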



KEY SKILLS:

Technical & Analytical Skills:

  • LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
  • Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
  • Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases.
  • Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
  • Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
  • Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
Business & Collaboration Skills:

  • Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
  • Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
  • Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
  • Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
  • Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.
HelloTrade (An IndiaMART Company)
Noida
5 - 8 yrs
Up to ₹40L / yr (varies)
NodeJS (Node.js)
React.js
Next.js
PostgreSQL
Amazon Web Services (AWS)

HelloTrade is a wholly-owned subsidiary of Indiamart, India’s largest B2B marketplace. We are building a next-generation financial services and wealth management platform for MSMEs.

Leveraging Indiamart’s user base of over 20 million SMEs, HelloTrade aims to revolutionize access to business loans, insurance, credit cards, deposits, and financing solutions.

Our vision is to become the financial operating system for Indian MSMEs—simplifying access to credit, enabling smarter financial decisions, and building long-term financial health.


Role Overview:

We’re looking for a Tech Lead who will own the development of HelloTrade’s fintech platform.

This is a founding engineer + leadership role, where you’ll be hands-on in building scalable systems while shaping the engineering culture and aligning technology decisions with business goals.


The ideal candidate blends strong technical depth with leadership skills—someone who can code, mentor, set engineering standards, and drive delivery in a fast-paced, startup-style environment.


Key Responsibilities:


Technology Leadership

  • Own key technical decisions and the overall tech roadmap aligned with HelloTrade’s fintech vision.
  • Design scalable, modular, and secure architecture to support multi-product financial journeys.
  • Make build-vs-buy and architectural decisions balancing speed, scalability, and compliance.
  • Stay current on fintech innovations (AI/ML credit scoring, Account Aggregator APIs, digital KYC) and introduce relevant solutions.

Execution Excellence

  • Be hands-on with coding and code reviews while balancing leadership duties.
  • Oversee AWS infrastructure, DevOps pipelines, monitoring, and cost optimization.
  • Ensure platform reliability, scalability, and security with proactive risk management.
  • Deliver features on time and with quality, managing dependencies and trade-offs effectively.

People & Team Leadership

  • Lead and mentor developers—setting engineering best practices, coding standards, and review processes.
  • Build an engineering culture of ownership, accountability, and innovation.
  • Drive agile practices (sprints, standups, retrospectives) for predictable delivery.
  • Collaborate closely with Product, Design, and Business teams for seamless execution.

Must-Have Skills

  • Strong proficiency in Next.js, Node.js, PostgreSQL, and AWS with 3+ years of hands-on development.
  • Proven experience leading teams (mentoring, code reviews, sprint leadership).
  • Expertise in designing and building scalable APIs and microservices.
  • Strong understanding of system architecture, database design, and performance tuning.
  • Experience with secure authentication, authorization, and partner integrations.
  • Proficiency in Git, CI/CD pipelines, Docker/Kubernetes.
  • Excellent communication and stakeholder management skills.

Good-to-Have Skills

  • Experience in fintech or financial services (loans, payments, insurance).
  • Familiarity with OCR, AI/ML-driven decision engines, or Account Aggregator APIs.
  • Knowledge of DevSecOps and regulatory compliance frameworks.
  • Prior experience in early-stage or startup environments, scaling teams and systems.
Read more
One of the reputed Client in India

One of the reputed Client in India

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Hyderabad, Pune
6 - 8 yrs
₹12L - ₹13L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark

Our client is looking to hire a Databricks Admin immediately.


This is PAN-INDIA Bulk hiring


Minimum of 6-8+ years with Databricks, Pyspark/Python and AWS.

Must have AWS experience.


Notice period of 15-30 days is preferred.


Share profiles at hr at etpspl dot com

Please refer/share our email with friends/colleagues who are looking for a job.

Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali
5 - 7 yrs
₹10L - ₹15L / yr
dot net
skill iconC#
.Net core
mvc
sql server
+10 more

Job Title: Mid-Level .NET Developer (Agile/SCRUM)

Location: Mohali, PTP, or anywhere else

Night Shift from 6:30 pm to 3:30 am IST

Experience:

5 Years


Job Summary:

We are seeking a proactive and detail-oriented Mid-Level .NET Developer to join our dynamic team. The ideal candidate will be responsible for designing, developing, and maintaining high-quality applications using Microsoft technologies with a strong emphasis on .NET Core, C#, Web API, and modern front-end frameworks. You will collaborate with cross-functional teams in an Agile/SCRUM environment and participate in the full software development lifecycle—from requirements gathering to deployment—while ensuring adherence to best coding and delivery practices.


Key Responsibilities:

  • Design, develop, and maintain applications using C#, .NET, .NET Core, MVC, and databases such as SQL Server, PostgreSQL, and MongoDB.
  • Create responsive and interactive user interfaces using JavaScript, TypeScript, Angular, HTML, and CSS.
  • Develop and integrate RESTful APIs for multi-tier, distributed systems.
  • Participate actively in Agile/SCRUM ceremonies, including sprint planning, daily stand-ups, and retrospectives.
  • Write clean, efficient, and maintainable code following industry best practices.
  • Conduct code reviews to ensure high-quality and consistent deliverables.
  • Assist in configuring and maintaining CI/CD pipelines (Jenkins or similar tools).
  • Troubleshoot, debug, and resolve application issues effectively.
  • Collaborate with QA and product teams to validate requirements and ensure smooth delivery.
  • Support release planning and deployment activities.


Required Skills & Qualifications:

  • 4–6 years of professional experience in .NET development.
  • Strong proficiency in C#, .NET Core, MVC, and relational databases such as SQL Server.
  • Working knowledge of NoSQL databases like MongoDB.
  • Solid understanding of JavaScript/TypeScript and the Angular framework.
  • Experience in developing and integrating RESTful APIs.
  • Familiarity with Agile/SCRUM methodologies.
  • Basic knowledge of CI/CD pipelines and Git version control.
  • Hands-on experience with AWS cloud services.
  • Strong analytical, problem-solving, and debugging skills.
  • Excellent communication and collaboration skills.


Preferred / Nice-to-Have Skills:

  • Advanced experience with AWS services.
  • Knowledge of Kubernetes or other container orchestration platforms.
  • Familiarity with IIS web server configuration and management.
  • Experience in the healthcare domain.
  • Exposure to AI-assisted code development tools (e.g., GitHub Copilot, ChatGPT).
  • Experience with application security and code quality tools such as Snyk or SonarQube.
  • Strong understanding of SOLID principles and clean architecture patterns.


Technical Proficiencies:

  • ASP.NET Core, ASP.NET MVC
  • C#, Entity Framework, Razor Pages
  • SQL Server, MongoDB
  • REST API, jQuery, AJAX
  • HTML, CSS, JavaScript, TypeScript, Angular
  • Azure Services, Azure Functions, AWS
  • Visual Studio
  • CI/CD, Git


Read more
TekPillar
Agency job
via TekPillar by Tulsi Virani
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 12 yrs
₹15L - ₹30L / yr
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconPython
skill iconFlask
FastAPI
+3 more

Frontend Architect

Experience: 6+ years

Location: Delhi / Gurgaon


Roles & Responsibilities:

  • Design, develop, and maintain scalable applications using React.js and FastAPI/Node.js.
  • Write clean, modular, and well-documented code in Python and JavaScript.
  • Deploy and manage applications on AWS using ECS, ECR, EKS, S3, and CodePipeline.
  • Build and maintain CI/CD pipelines to automate testing, deployment, and monitoring.
  • Implement unit, integration, and end-to-end tests with frameworks such as Pytest, and maintain API contracts/documentation with Swagger (see the sketch after this list).
  • Ensure secure coding practices, including authentication and authorization.
  • Collaborate with cross-functional teams and mentor junior developers.
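
For illustration, a minimal sketch of the kind of Pytest-based API test referenced in the responsibilities above; the FastAPI app, endpoints, and payloads are hypothetical placeholders, not the team's actual service:

```python
# test_api.py -- illustrative API tests with Pytest and FastAPI's TestClient.
# The endpoints and payloads below are assumptions made for the example.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/items")
def create_item(item: dict):
    # Echo back the item with a fake id; a real service would persist it.
    return {"id": 1, **item}

client = TestClient(app)

def test_health_returns_ok():
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json() == {"status": "ok"}

def test_create_item_echoes_payload():
    resp = client.post("/items", json={"name": "widget"})
    assert resp.status_code == 200
    assert resp.json()["name"] == "widget"
```

Tests like these would typically run in the CI/CD pipeline alongside the end-to-end suites mentioned above.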


Skills Required:

  • Strong expertise in React.js and modern frontend development
  • Experience with FastAPI and Node.js backend
  • Proficient in Python and JavaScript
  • Hands-on experience with AWS cloud services and containerization (Docker)
  • Knowledge of CI/CD pipelines, automated testing, and secure coding practices
  • Excellent problem-solving, communication, and leadership skills


Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Nagpur, Ahmedabad, Jaipur, Kochi (Cochin)
3.6 - 8 yrs
₹4L - ₹18L / yr
skill iconPython
skill iconDjango
skill iconFlask
skill iconAmazon Web Services (AWS)
AWS Lambda
+3 more

Job Summary:

Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
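
As a hedged illustration of the backend-plus-AWS work described above, here is a minimal Python Lambda sketch that accepts an API Gateway payload and stores it in S3; the bucket name, environment variable, and key scheme are assumptions, not Deqode's actual setup:

```python
# lambda_handler.py -- minimal sketch of a Python Lambda storing an API payload in S3.
# Bucket name and key scheme are hypothetical placeholders.
import json
import os
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ.get("PAYLOAD_BUCKET", "example-payload-bucket")  # hypothetical

def handler(event, context):
    """Entry point for an API Gateway (proxy integration) -> Lambda route."""
    body = json.loads(event.get("body") or "{}")
    key = f"payloads/{uuid.uuid4()}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(body).encode("utf-8"))
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"stored_at": f"s3://{BUCKET}/{key}"}),
    }
```

In practice a handler like this would be packaged and released through the Terraform/Jenkins or GitHub Actions automation listed below.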


Key Responsibilities:

  • Design, develop, and deploy backend services and APIs using Python.
  • Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
  • Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
  • Implement containerized environments using Docker and manage orchestration via Kubernetes.
  • Write automation and scripting solutions in Bash/Shell to streamline operations.
  • Work with relational databases like MySQL and SQL, including query optimization.
  • Collaborate directly with clients to understand requirements and provide technical solutions.
  • Ensure system reliability, performance, and scalability across environments.


Required Skills:

  • 3.5+ years of hands-on experience in Python development.
  • Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
  • Good understanding of Terraform or other Infrastructure as Code tools.
  • Proficient with Docker and container orchestration using Kubernetes.
  • Experience with CI/CD tools like Jenkins or GitHub Actions.
  • Strong command of SQL/MySQL and scripting with Bash/Shell.
  • Experience working with external clients or in client-facing roles.


Preferred Qualifications:

  • AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
  • Familiarity with Agile/Scrum methodologies.
  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder management abilities.


Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
skill iconKubernetes
helm
skill iconDocker
skill iconAmazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
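
For illustration, a minimal sketch of the kind of infrastructure-aware automated check this role implies, written with Pytest and the official Kubernetes Python client; the namespace and kube-config handling are assumptions:

```python
# test_deployments_ready.py -- illustrative smoke test using pytest and the official
# kubernetes Python client; namespace and context are assumptions for the example.
import pytest
from kubernetes import client, config

NAMESPACE = "default"  # assumption; a real suite would parametrize this

@pytest.fixture(scope="session")
def apps_api():
    config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
    return client.AppsV1Api()

def test_all_deployments_have_ready_replicas(apps_api):
    deployments = apps_api.list_namespaced_deployment(NAMESPACE).items
    assert deployments, f"no deployments found in namespace {NAMESPACE}"
    for dep in deployments:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        assert ready == desired, f"{dep.metadata.name}: {ready}/{desired} replicas ready"
```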


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
Read more
Webkul Software Pvt Ltd
Avantika Giri
Posted by Avantika Giri
Noida
2 - 5 yrs
₹5L - ₹15L / yr
skill iconAmazon Web Services (AWS)
AWS Lambda
DevOps
Cloud Computing
Amazon EC2
+10 more

Job Specification:

  • Job Location - Noida
  • Experience - 2-5 Years
  • Qualification - B.Tech, BE, MCA (Technical background required)
  • Working Days - 5
  • Job nature - Permanent
  • Role - IT Cloud Engineer
  • Proficient in Linux.
  • Hands on experience with AWS cloud or Google Cloud.
  • Knowledge of container technology like Docker.
  • Expertise in scripting languages (shell scripting or Python scripting).
  • Working knowledge of LAMP/LEMP stack, networking and version control system like Gitlab or Github.

Job Description:

The incumbent would be responsible for:

  • Deployment of various infrastructures on Cloud platforms like AWS, GCP, Azure, OVH etc.
  • Server monitoring, analysis and troubleshooting.
  • Deploying multi-tier architectures using microservices.
  • Integration of Container technologies like Docker, Kubernetes etc as per application requirement.
  • Automating workflow with python or shell scripting.
  • CI and CD integration for application lifecycle management.
  • Hosting and managing websites on Linux machines.
  • Frontend, backend and database optimization.
  • Protecting operations by keeping information confidential.
  • Providing information by collecting, analyzing, summarizing development & service issues.
  • Preparing and installing solutions by determining and designing system specifications, standards & programming.


Read more
GradRight

at GradRight

4 recruiters
Patrali M
Posted by Patrali M
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 4 yrs
₹8L - ₹15L / yr
skill iconAmazon Web Services (AWS)
CI/CD
Computer Networking

About GradRight

Our vision is to be the world’s leading Ed-Fin Tech company dedicated to making higher education accessible and affordable to all. Our mission is to drive transparency and accountability in the global higher education sector and create significant impact using the power of technology, data science and collaboration.

GradRight is the world’s first SaaS ecosystem that brings together students, universities and financial institutions in an integrated manner. It enables students to find and fund high return college education, universities to engage and select the best-fit students and banks to lend in an effective and efficient manner.

In the last three years, we have enabled students to get the best deals on $2.8+ billion of loan requests and facilitated disbursements of more than $350 million in loans. GradRight won the HSBC Fintech Innovation Challenge supported by the Ministry of Electronics & IT, Government of India, and was among the top 7 global finalists in The PIEoneer Awards, UK.

GradRight’s team possesses extensive domestic and international experience in the launch and scale-up of premier higher education institutions. It is led by alumni of IIT Delhi, BITS Pilani, IIT Roorkee, ISB Hyderabad and University of Pennsylvania. GradRight is a Delaware, USA registered company with a wholly owned subsidiary in India. 


About the Role

We are looking for a passionate DevOps Engineer with hands-on experience in AWS cloud infrastructure, containerization, and orchestration. The ideal candidate will be responsible for building, automating, and maintaining scalable cloud solutions, ensuring smooth CI/CD pipelines, and supporting development and operations teams.


Core Responsibilities

Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.

Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions.

Containerize applications using Docker and manage deployments with Kubernetes (EKS, self-managed, or other distributions).

Monitor system performance, availability, and security using tools like CloudWatch, Prometheus, Grafana, ELK/EFK stack.

Collaborate with development teams to optimize application performance and deployment processes.


Required Skills & Experience

3–4 years of professional experience as a DevOps Engineer or similar role.

Strong expertise in AWS services (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, etc.).

Hands-on experience with Docker and Kubernetes (EKS or self-hosted clusters).

Proficiency in CI/CD pipeline design and automation.

Experience with Infrastructure as Code (Terraform / AWS CloudFormation).

Solid understanding of Linux/Unix systems and shell scripting.

Knowledge of monitoring, logging, and alerting tools.

Familiarity with networking concepts (DNS, Load Balancing, Security Groups, Firewalls).

Basic programming/scripting experience in Python, Bash, or Go.


Nice to Have

Exposure to microservices architecture and service mesh (Istio/Linkerd).

Knowledge of serverless (AWS Lambda, API Gateway).



Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Hyderabad, Noida, Mumbai, Navi Mumbai, Ahmedabad, Chennai, Coimbatore, Gurugram, Kochi (Cochin), Kolkata, Calcutta, Pune, Thiruvananthapuram, Trivandrum
7 - 15 yrs
₹15L - ₹30L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
Data Lake

SENIOR DATA ENGINEER:

ROLE SUMMARY:

Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
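
As a hedged illustration of the orchestration work described above, a minimal Airflow sketch of a daily ingest/transform/data-quality pipeline; the DAG id, task bodies, and paths are placeholders, not the actual platform design:

```python
# ingestion_dag.py -- minimal Airflow sketch of a daily ingest/transform/DQ pipeline.
# Task bodies, table names, and the S3 path are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**_):
    print("pull from source RDBMS/API into s3://example-raw/ ...")   # placeholder

def transform(**_):
    print("run Spark/Glue job to build curated tables ...")          # placeholder

def dq_check(**_):
    print("validate row counts, nulls, and freshness SLAs ...")      # placeholder

with DAG(
    dag_id="example_lakehouse_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_dq = PythonOperator(task_id="dq_check", python_callable=dq_check)

    t_ingest >> t_transform >> t_dq
```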



RESPONSIBILITIES:

  • Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
  • Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
  • Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
  • Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
  • DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
  • Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
  • Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
  • Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
  • Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
  • Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.



REQUIRED QUALIFICATIONS:

  • Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
  •  Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
  • Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
  • ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
  • Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
  • DevOps/IaC: Extensive hands-on Terraform practice; CI/CD (GitHub Actions, Jenkins); config governance and release management.
  • Serverless and events: Design event-driven distributed systems on AWS.
  • NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
  • AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.



NICE-TO-HAVE QUALIFICATIONS:

  • Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
  • Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
  • Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
  • Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.



OUTCOMES AND MEASURES:

  • Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
  • Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
  • Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
  • Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
  • Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.



LOCATION AND SCHEDULE:

•      Location: Outside US (OUS).

•      Schedule: Minimum 6 hours of overlap with US time zones.

Read more
Tata Consultancy Services
Chennai, Hyderabad, Kolkata, Delhi, Pune, Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹30L / yr
Scala
PySpark
Spark
skill iconAmazon Web Services (AWS)

Job Title: PySpark/Scala Developer

 

Functional Skills: Experience in Credit Risk/Regulatory risk domain

Technical Skills: Spark, PySpark, Python, Hive, Scala, MapReduce, Unix shell scripting

Good to Have Skills: Exposure to Machine Learning Techniques

Job Description:

5+ years of experience developing, fine-tuning, and implementing programs/applications using Python/PySpark/Scala on Big Data/Hadoop platforms.
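
For illustration, a minimal PySpark sketch in the spirit of this work: read a Hive table, derive a simple risk feature, and persist the result; the database, table, and column names are hypothetical:

```python
# risk_features.py -- illustrative PySpark job: read a Hive table, derive simple
# risk features, and write the result back. All names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("risk-feature-engineering")
    .enableHiveSupport()
    .getOrCreate()
)

accounts = spark.table("risk_db.accounts")   # hypothetical Hive table

features = (
    accounts
    .withColumn("utilization", F.col("balance") / F.col("credit_limit"))
    .withColumn(
        "high_risk_flag",
        (F.col("utilization") > 0.9) | (F.col("days_past_due") > 30),
    )
)

features.write.mode("overwrite").saveAsTable("risk_db.account_features")
spark.stop()
```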

Roles and Responsibilities:

a)     Work with a leading bank's Risk Management team on specific projects/requirements pertaining to risk models in consumer and wholesale banking

b)     Enhance machine learning models using PySpark or Scala

c)     Work with data scientists to build ML models based on business requirements and follow the ML lifecycle to deploy them all the way to the production environment

d)     Participate in feature engineering, model training, scoring, and retraining

e)     Architect data pipelines and automate data ingestion and model jobs

 

Skills and competencies:

Required:

  • Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
  • Working experience in PySpark and Scala to develop, validate, and implement models and code in Credit Risk/Banking.
  • Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
  • Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
  • Experience in systems integration, web services, and batch processing.
  • Experience migrating code to PySpark/Scala is a big plus.
  • Ability to act as a liaison, conveying the information needs of the business to IT and data constraints back to the business, with equal attention to business strategy, IT strategy, business processes, and workflow.
  • Flexibility in approach and thought process.
  • Willingness to learn and keep up with periodic changes in regulatory requirements (e.g., FED).

 

 

Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Pune, Hyderabad
4 - 8 yrs
₹18L - ₹30L / yr
skill iconJava
skill iconSpring Boot
skill iconAmazon Web Services (AWS)
RESTful APIs
CI/CD
+3 more

Job Overview:

We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.


Responsibilities:


  • Design, develop, and maintain backend services and microservices.
  • Build and integrate RESTful APIs across distributed systems.
  • Ensure performance, scalability, and reliability of backend systems.
  • Collaborate with cross-functional teams and participate in agile development.
  • Deploy and maintain applications on AWS cloud infrastructure.
  • Contribute to automation initiatives and AI/ML feature integration.
  • Write clean, testable, and maintainable code following best practices.
  • Participate in code reviews and technical discussions.


Required Skills:

  • 4+ years of backend development experience.
  • Strong proficiency in Java and Spring/Spring Boot frameworks.
  • Solid understanding of microservices architecture.
  • Experience with REST APIs, CI/CD, and debugging complex systems.
  • Proficient in AWS services such as EC2, Lambda, S3.
  • Strong analytical and problem-solving skills.
  • Excellent communication in English (written and verbal).


Good to Have:

  • Experience with automation tools like Workato or similar.
  • Hands-on experience with Python development.
  • Familiarity with AI/ML features or API integrations.
  • Comfortable working with US-based teams (flexible hours).


Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune
4 - 8 yrs
₹15L - ₹25L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconJavascript
TypeScript
RESTful APIs
+8 more

About the Role

We’re looking for a passionate Fullstack Product Engineer with a strong JavaScript foundation to work on a high-impact, scalable product. You’ll collaborate closely with product and engineering teams to build intuitive UIs and performant backends using modern technologies.


Responsibilities

  • Build and maintain scalable features across the frontend and backend.
  • Work with tech stacks like Node.js, React.js, Vue.js, and others.
  • Contribute to system design, architecture, and code quality enforcement.
  • Follow modern engineering practices including TDD, CI/CD, and live coding evaluations.
  • Collaborate in code reviews, performance optimizations, and product iterations.


Required Skills

  • 4–6 years of hands-on fullstack development experience.
  • Strong command over JavaScript, Node.js, and React.js.
  • Solid understanding of REST APIs and/or GraphQL.
  • Good grasp of OOP principles, TDD, and writing clean, maintainable code.
  • Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, etc.
  • Familiarity with HTML, CSS, and frontend performance optimization.


Good to Have

  • Exposure to Docker, AWS, Kubernetes, or Terraform.
  • Experience in other backend languages or frameworks.
  • Experience with microservices and scalable system architectures.
Read more
MantraCare
Kiran Saini
Posted by Kiran Saini
Delhi
1 - 6 yrs
₹3.5L - ₹10L / yr
MERN Stack
SQL
skill iconAmazon Web Services (AWS)
skill iconNextJs (Next.js)

Job Title: MERN TECH

Location: Paschim Vihar, West Delhi

Company: Eye Mantra

Job Type: Full-time, Onsite

Salary: Commensurate with experience and interview performance

Experience: 3+ years

Contact: +91 97180 11146 (Rizwana Siddique, HR)

Interview Mode: Face to Face

About Eye Mantra:

Eye Mantra is a premier eye care organization committed to delivering exceptional services using advanced technology. We’re growing fast and looking to strengthen our in-house tech team with talented individuals who share our passion for innovation and excellence in patient care.

Position Overview:

We are currently hiring a skilled Full Stack Developer to join our in-house development team. If you have strong experience working with the MERN stack, including Node.js, React.js, MongoDB, and SQL, and you thrive in a collaborative, fast-paced work environment, we’d love to connect with you.

This role requires working onsite at our West Delhi office (Paschim Vihar), where you’ll contribute directly to building and maintaining robust, scalable, and user-friendly applications that support our medical operations and patient services.

Responsibilities:

  • Build and manage web applications using the MERN stack (MongoDB, Express, React, Node).
  • Create and maintain efficient backend services and RESTful APIs.
  • Develop intuitive frontend interfaces with React.js.
  • Design and optimize relational databases using SQL.
  • Work closely with internal teams to implement new features and enhance existing ones.
  • Ensure applications perform well across all platforms and devices.
  • Identify and resolve bugs and performance issues quickly.
  • Stay current with emerging web development tools and trends.
  • (Bonus) Leverage AWS or other cloud platforms to enhance scalability and performance.

Required Skills & Qualifications:

  • Proficiency in Node.js for backend programming.
  • Strong hands-on experience with React.js for frontend development.
  • Good command of SQL and understanding of database design.
  • Practical knowledge of the MERN stack.
  • Experience using Git for version control and team collaboration.
  • Excellent analytical and problem-solving abilities.
  • Strong interpersonal and communication skills.
  • Self-motivated, with the ability to manage tasks independently.


Read more
Hunarstreet technologies pvt ltd

Hunarstreet technologies pvt ltd

Agency job
Bengaluru (Bangalore), Mumbai, Hyderabad, Pune, Mohali, Panchkula, Delhi
6 - 10 yrs
₹10L - ₹15L / yr
cloud
DevOps
skill iconMachine Learning (ML)
IT infrastructure
sagemaker
+7 more

Senior Cloud & ML Infrastructure Engineer

Location: Bangalore / Bengaluru, Hyderabad, Pune, Mumbai, Mohali, Panchkula, Delhi

Experience: 6–10+ Years

Night Shift - 9 pm to 6 am


About the Role:

We’re looking for a Senior Cloud & ML Infrastructure Engineer to lead the design, scaling, and optimization of cloud-native machine learning infrastructure. This role is ideal for someone passionate about solving complex platform engineering challenges across AWS, with a focus on model orchestration, deployment automation, and production-grade reliability. You’ll architect ML systems at scale, provide guidance on infrastructure best practices, and work cross-functionally to bridge DevOps, ML, and backend teams.
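
As a hedged illustration of the model-orchestration side of this role, a small boto3 sketch that starts a Step Functions execution and calls a real-time SageMaker endpoint; the ARN, endpoint name, and payloads are placeholders, not the team's actual resources:

```python
# orchestrate_inference.py -- hedged sketch: kick off a Step Functions batch run and
# call a real-time SageMaker endpoint. ARN and endpoint name are placeholders.
import json

import boto3

sfn = boto3.client("stepfunctions")
smr = boto3.client("sagemaker-runtime")

STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:example-batch"  # placeholder
ENDPOINT_NAME = "example-realtime-endpoint"                                              # placeholder

def start_batch_run(payload: dict) -> str:
    """Start one execution of the (hypothetical) batch-inference state machine."""
    resp = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps(payload),
    )
    return resp["executionArn"]

def realtime_predict(features: dict) -> dict:
    """Call the real-time endpoint with a JSON payload."""
    resp = smr.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(features),
    )
    return json.loads(resp["Body"].read())

if __name__ == "__main__":
    print(start_batch_run({"s3_input": "s3://example-bucket/batch/2024-01-01/"}))
    print(realtime_predict({"feature_a": 1.0, "feature_b": 0.3}))
```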

Key Responsibilities:

● Architect and manage end-to-end ML infrastructure using SageMaker, AWS Step Functions, Lambda, and ECR

● Design and implement multi-region, highly-available AWS solutions for real-time inference and batch processing

● Create and manage IaC blueprints for reproducible infrastructure using AWS CDK

● Establish CI/CD practices for ML model packaging, validation, and drift monitoring

● Oversee infrastructure security, including IAM policies, encryption at rest/in-transit, and compliance standards

● Monitor and optimize compute/storage cost, ensuring efficient resource usage at scale

● Collaborate on data lake and analytics integration

● Serve as a technical mentor and guide AWS adoption patterns across engineering teams


Required Skills:

● 6+ years designing and deploying cloud infrastructure on AWS at scale

● Proven experience building and maintaining ML pipelines with services like SageMaker, ECS/EKS, or custom Docker pipelines

● Strong knowledge of networking, IAM, VPCs, and security best practices in AWS

● Deep experience with automation frameworks, IaC tools, and CI/CD strategies

● Advanced scripting proficiency in Python, Go, or Bash

● Familiarity with observability stacks (CloudWatch, Prometheus, Grafana)


Nice to Have:

● Background in robotics infrastructure, including AWS IoT Core, Greengrass, or OTA deployments

● Experience designing systems for physical robot fleet telemetry, diagnostics, and control

● Familiarity with multi-stage production environments and robotic software rollout processes

● Competence in frontend hosting for dashboard or API visualization

● Involvement with real-time streaming, MQTT, or edge inference workflows

● Hands-on experience with ROS 2 (Robot Operating System) or similar robotics frameworks, including launch file management, sensor data pipelines, and deployment to embedded Linux devices


Read more
IT Industry - Night Shifts

IT Industry - Night Shifts

Agency job
Bengaluru (Bangalore), Hyderabad, Mumbai, Navi Mumbai, Pune, Mohali, Delhi
5 - 10 yrs
₹20L - ₹30L / yr
skill iconAmazon Web Services (AWS)
IT infrastructure
skill iconMachine Learning (ML)
DevOps
Automation
+1 more

🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀


We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.

If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.


What you’ll do:

🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)

🔹 Build highly available, multi-region solutions for real-time & batch inference

🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines

🔹 Ensure security, compliance, and cost efficiency

🔹 Collaborate across DevOps, ML, and backend teams


What we’re looking for:

✔️ 6+ years AWS cloud infrastructure experience

✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)

✔️ Proficiency in Python/Go/Bash scripting

✔️ Knowledge of networking, IAM, and security best practices

✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)


✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)


📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi

5 days working, Work from Office

Night shifts: 9pm to 6am IST

👉 If this sounds like you (or someone you know), let’s connect!


Apply here:

Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 10 yrs
₹5L - ₹20L / yr
skill iconJava
Microservices
skill iconAmazon Web Services (AWS)

Java AWS engineer with experience in building AWS services like Lambda, Batch, SQS, S3, DynamoDB etc. using AWS Java SDK and Cloud formation templates.

 

  • 4 to 8 years of experience in design, development, and triaging for large, complex systems; strong experience in Java and object-oriented design skills
  • 3-4+ years of microservices development
  • 2+ years working in Spring Boot
  • Experienced using API dev tools like IntelliJ/Eclipse, Postman, Git, Cucumber
  • Hands on experience in building microservices based application using Spring Boot and REST, JSON
  • DevOps understanding – containers, cloud, automation, security, configuration management, CI/CD
  • Experience using CI/CD processes for application software integration and deployment with Maven, Git, and Jenkins.
  • Experience dealing with NoSQL databases like Cassandra
  • Experience building scalable and resilient applications in private or public cloud environments and cloud technologies
  • Experience in Utilizing tools such as Maven, Docker, Kubernetes, ELK, Jenkins
  • Agile Software Development (typically Scrum, Kanban, Safe)
  • Experience with API gateway and API security.


Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
1 - 3 yrs
₹2L - ₹5L / yr
skill iconReact.js
skill iconNextJs (Next.js)
skill iconReact Native
skill iconNodeJS (Node.js)
skill iconExpress
+5 more

Job Title : Full Stack Engineer

Location : New Delhi

Job Type : Full-Time


About the Role :

We are seeking a passionate Full Stack Engineer who enjoys building scalable, high-performance applications across web and mobile platforms.

This role is ideal for someone eager to work in a fast-paced startup environment, contributing across the full development lifecycle.


Mandatory Skills :

React.js, Next.js, React Native, Node.js, Express.js, PostgreSQL, AWS (EC2/S3/RDS/IAM), RESTful APIs.


Key Responsibilities :

  • Design, build, and maintain scalable applications for web and mobile.
  • Translate UI/UX wireframes into efficient, reusable, and maintainable code.
  • Optimize applications for maximum performance, scalability, and reliability.
  • Work on both front-end and back-end development using modern frameworks.
  • Participate in code reviews, debugging, testing, and deployment.
  • Collaborate with cross-functional teams to deliver high-quality products.

Requirements :

  • Experience : 1+ years of software development (internships + full-time combined acceptable).
  • Frontend Skills : React.js, Next.js, React Native.
  • Backend Skills : Node.js, Express.js, RESTful APIs.
  • Database : Strong knowledge of PostgreSQL.
  • Cloud : Exposure to AWS services (EC2, Elastic Beanstalk, RDS, S3, ElastiCache, IAM).
  • Additional : Experience with Redis and BullMQ is a strong plus.
  • Strong debugging, problem-solving, and analytical skills.
  • Enthusiasm for working in a dynamic startup environment.
Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Dipika
Posted by Dipika
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune
5 - 7 yrs
₹5L - ₹20L / yr
skill iconJava
Microservices
Apache Kafka
Apache ActiveMQ
+3 more

Senior Associate Technology L1 – Java Microservices


Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.


Job Description

We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.

We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.


Your Impact:

• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.

• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.

• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.


Qualifications

➢ 5 to 7 Years of software development experience

➢ Strong development skills in Java JDK 1.8 or above

➢ Java fundamentals such as exception handling, serialization/deserialization, and immutability concepts

➢ Good fundamental knowledge of Enums, Collections, Annotations, Generics, Autoboxing, and data structures

➢ Database RDBMS/No SQL (SQL, Joins, Indexing)

➢ Multithreading (Re-entrant Lock, Fork & Join, Sync, Executor Framework)

➢ Spring Core & Spring Boot, security, transactions

➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)

➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing (JMeter or similar tools)

➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker and containerization)

➢ Logical/analytical skills; thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns

➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)

➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks

➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.

➢ Good communication skills and ability to work with global teams to define and deliver on projects.

➢ Sound understanding/experience in software development process, test-driven development.

➢ Cloud – AWS / AZURE / GCP / PCF or any private cloud would also be fine

➢ Experience in Microservices

Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Mumbai, Pune, Noida
4 - 6 yrs
₹3L - ₹21L / yr
AWS Data Engineer
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark
databricks
+1 more

 Key Responsibilities

  • Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue (see the sketch after this list)
  • Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
  • Perform data wrangling, cleansing, and transformation using Python and SQL
  • Collaborate with data scientists to integrate Generative AI models into analytics workflows
  • Build dashboards and reports to visualize insights using tools like Power BI or Tableau
  • Ensure data quality, governance, and security across all data assets
  • Optimize performance of data pipelines and troubleshoot bottlenecks
  • Work closely with stakeholders to understand data requirements and deliver actionable insights
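
As a concrete illustration of the ETL/ELT responsibility above, a minimal PySpark sketch that cleanses CSV data landed in S3 and writes curated Parquet; the bucket paths and columns are assumptions, not an actual pipeline definition:

```python
# etl_orders.py -- minimal PySpark ETL sketch: cleanse CSV landed in S3 and write
# curated, partitioned Parquet. Bucket paths and column names are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .csv("s3a://example-raw-zone/orders/")            # hypothetical landing path
)

curated = (
    raw
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").isNotNull())
)

(
    curated.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-curated-zone/orders/")    # hypothetical curated path
)

spark.stop()
```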

🧪 Required Skills

  • Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
  • Big Data: Databricks, Apache Spark, PySpark
  • Programming: Python, SQL
  • Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
  • Analytics: Data Modeling, Visualization, BI Reporting
  • Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
  • DevOps (Bonus): Git, Jenkins, Terraform, Docker

📚 Qualifications

  • Bachelor's or Master’s degree in Computer Science, Data Science, or related field
  • 3+ years of experience in data engineering or data analytics
  • Hands-on experience with Databricks, PySpark, and AWS
  • Familiarity with Generative AI tools and frameworks is a strong plus
  • Strong problem-solving and communication skills

🌟 Preferred Traits

  • Analytical mindset with attention to detail
  • Passion for data and emerging technologies
  • Ability to work independently and in cross-functional teams
  • Eagerness to learn and adapt in a fast-paced environment


Read more
CLOUDSUFI
Noida
6 - 12 yrs
₹22L - ₹34L / yr
Natural Language Processing (NLP)
Large Language Models (LLM) tuning
Generative AI
skill iconPython
CI/CD
+4 more

About Us


CLOUDSUFI, a Google Cloud Premier Partner, a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.


Our Values


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.


Role Overview:


As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
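
For illustration, a minimal sketch of the kind of AI microservice this role describes, wrapping a Hugging Face text-generation pipeline in FastAPI; the model choice (gpt2) and endpoint shape are placeholders, not CLOUDSUFI's production design:

```python
# nlp_service.py -- illustrative FastAPI microservice wrapping a Hugging Face
# text-generation pipeline; the model and endpoint shape are stand-ins only.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="example-genai-service")
generator = pipeline("text-generation", model="gpt2")  # placeholder model

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(prompt: Prompt):
    out = generator(prompt.text, max_new_tokens=prompt.max_new_tokens)
    return {"completion": out[0]["generated_text"]}
```

A production variant would typically containerize this with Docker and put latency, cost, and drift monitoring around it, as the responsibilities below suggest.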


Key Responsibilities:


  • Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
  • Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
  • Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
  • Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
  • End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
  • Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
  • Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.

Required Skills and Qualifications


  • Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
  • 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
  • Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
  • Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
  • Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
  • Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
  • Experience with containerization technologies, specifically Docker.
  • Solid understanding of software engineering principles and experience building APIs and microservices.


Preferred Qualifications


  • A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
  • Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
  • Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
  • Proven ability to lead technical teams and mentor other engineers.
  • Experience developing custom tools or packages for data science workflows.


Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹20L / yr
Artificial Intelligence (AI)
Generative AI
skill iconPython
skill iconNodeJS (Node.js)
Vector database
+7 more

Location: Hybrid/ Remote

Type: Contract / Full‑Time

Experience: 5+ Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Responsibilities:

  • Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation (a minimal sketch follows this list).
  • Design and build Python‑based services (FastAPI) for generating and updating embeddings.
  • Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
  • Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
  • Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
  • Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
  • Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
  • Collaborate with frontend engineers to define API contracts and demo endpoints.
  • Document architecture diagrams, API specifications, and runbooks for future team onboarding.
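
As a hedged illustration of the retrieval step in the first bullet above, a small sketch that embeds a query with the OpenAI API and runs a MongoDB Atlas $vectorSearch aggregation; the index, database, collection, and environment variable names are assumptions:

```python
# rag_retrieval.py -- hedged sketch of the retrieval step: embed a query with OpenAI
# and run an Atlas $vectorSearch aggregation. Index/collection names are placeholders
# and the code assumes an Atlas Vector Search index already exists on "embedding".
import os

from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()                         # reads OPENAI_API_KEY from the env
mongo = MongoClient(os.environ["MONGODB_URI"])   # hypothetical connection string
docs = mongo["ragdb"]["chunks"]                  # hypothetical database/collection

def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def retrieve(query: str, k: int = 5) -> list[dict]:
    """Return the k most similar chunks for the query."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "chunk_embedding_index",   # assumed Atlas index name
                "path": "embedding",
                "queryVector": embed(query),
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"_id": 0, "text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(docs.aggregate(pipeline))

if __name__ == "__main__":
    for hit in retrieve("What did the user log yesterday?"):
        print(round(hit["score"], 3), hit["text"][:80])
```

The retrieved chunks would then be passed as context to the chat-generation service described above.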


Required Skills

  • Strong Python expertise (FastAPI, async programming).
  • Proficiency with Node.js and Express for API development.
  • Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
  • Familiarity with OpenAI’s APIs (embeddings, chat completions).
  • Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
  • Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).

  • Containerization skills (Docker).
  • Good understanding of RAG architectures, prompt design, and memory management.
  • Strong Git workflow and collaborative development practices (GitHub, CI/CD).


Nice‑to‑Have:

  • Experience with Llama family models or other open‑source LLMs.
  • Familiarity with MongoDB Atlas free tier and cluster management.
  • Background in data engineering for streaming or batch processing.
  • Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
  • Frontend skills in React to prototype demo UIs.
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹10L - ₹20L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Azure
skill iconJavascript
skill iconReact.js
+5 more

 Location: Hybrid/ Remote

Openings: 2

Experience: 5–12 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities

Architect & Design:

  • Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
  • Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
  • Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
  • Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.

Development & Debugging:

  • Write clean, maintainable, and efficient frontend code.
  • Debug and troubleshoot code to ensure robust, high-performing applications.
  • Develop reusable frontend libraries that can be leveraged across multiple projects.

AI Awareness (Preferred):

  • Understand AI/ML fundamentals and how they can enhance frontend applications.
  • Collaborate with teams integrating AI-based features into chat applications.

Collaboration & Reporting:

  • Work closely with cross-functional teams to align on architecture and deliverables.
  • Regularly report progress, identify risks, and propose mitigation strategies.

Quality Assurance:

  • Implement unit tests and end-to-end tests to ensure code quality.
  • Participate in code reviews and enforce best practices.


Required Skills 

  • 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
  • Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
  • Proficiency with Modern frameworks like React, Angular, or Node.js
  • Backend familiarity with Java, Spring Boot (or similar technologies).
  • Experience developing real-world, at-scale products.
  • General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
  • Strong problem-solving, debugging, and performance optimization skills.
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹20L / yr
Fullstack Developer
skill iconJavascript
skill iconHTML/CSS
skill iconReact.js
skill iconSpring Boot
+9 more

Location: Hybrid/ Remote

Openings: 2

Experience: 5+ Years

Qualification: Bachelor’s or Master’s in Computer Science or related field


Job Responsibilities


Problem Solving & Optimization:

  • Analyze and resolve complex technical and application issues.
  • Optimize application performance, scalability, and reliability.

Design & Develop:

  • Build, test, and deploy scalable full-stack applications with high performance and security.
  • Develop clean, reusable, and maintainable code for both frontend and backend.

AI Integration (Preferred):

  • Collaborate with the team to integrate AI/ML models into applications where applicable.
  • Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.

Technical Leadership & Mentorship:

  • Provide guidance, mentorship, and code reviews for junior developers.
  • Foster a culture of technical excellence and knowledge sharing.

Agile & Delivery Management:

  • Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
  • Define and scope backlog items, track progress, and ensure timely delivery.

Collaboration:

  • Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
  • Coordinate with geographically distributed teams.

Quality Assurance & Security:

  • Conduct peer reviews of designs and code to ensure best practices.
  • Implement security measures and ensure compliance with industry standards.

Innovation & Continuous Improvement:

  • Identify areas for improvement in the software development lifecycle.
  • Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.

Required Skills

  • Strong proficiency in JavaScript, HTML5, CSS3
  • Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
  • Backend development experience with Java, Spring Boot (Node.js is a plus)
  • Knowledge of REST APIs, microservices, and scalable architectures
  • Familiarity with cloud platforms (AWS, Azure, or GCP)
  • Experience with Agile/Scrum methodologies and JIRA for project tracking
  • Proficiency in Git and version control best practices
  • Strong debugging, performance optimization, and problem-solving skills
  • Ability to analyze customer requirements and translate them into technical specifications
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
0 - 2 yrs
₹3.5L - ₹4.5L / yr
Fullstack Developer
skill iconJavascript
skill iconReact.js
skill iconNodeJS (Node.js)
RESTful APIs
+6 more

Location: Hybrid/ Remote

Openings: 5

Experience: 0 - 2 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities:

Backend Development & APIs

  • Build microservices that provide REST APIs to power web frontends.
  • Design clean, reusable, and scalable backend code meeting enterprise security standards.
  • Conceptualize and implement optimized data storage solutions for high-performance systems.

Deployment & Cloud

  • Deploy microservices using a common deployment framework on AWS and GCP.
  • Inspect and optimize server code for speed, security, and scalability.

Frontend Integration

  • Work on modern front-end frameworks to ensure seamless integration with back-end services.
  • Develop reusable libraries for both frontend and backend codebases.


AI Awareness (Preferred)

  • Understand how AI/ML or Generative AI can enhance enterprise software workflows.
  • Collaborate with AI specialists to integrate AI-driven features where applicable.

Quality & Collaboration

  • Participate in code reviews to maintain high code quality.
  • Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.


Required Skills:

  • Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
  • Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
  • Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
  • Ability to design and implement RESTful APIs and understand their impact on client-side applications
  • Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
  • Experience working with Agile and Scrum methodologies
  • Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
Read more
VDart
Don Blessing
Posted by Don Blessing
Hyderabad, Bengaluru (Bangalore), Noida, Gurugram
5 - 15 yrs
₹10L - ₹15L / yr
skill iconPython
skill iconAmazon Web Services (AWS)
API

Job Description:


Title : Python AWS Developer with API

 

Tech Stack : AWS API Gateway, Lambda, Oracle RDS, SQL & database management, object-oriented programming (OOP) principles, JavaScript, object-relational mappers (ORM), Git, Docker, Java dependency management, CI/CD, AWS cloud & S3, Secrets Manager, Python, API frameworks; well-versed in front-end and back-end programming (Python).

 

Responsibilities: 

·      Build high-performance APIs using AWS services and Python; write and debug Python code and integrate the application with third-party web services.

·      Troubleshoot and debug non-prod defects; focus on back-end development, APIs, coding, and application monitoring.

·      Design core application logic.

·      Support dependent teams in UAT and perform functional application testing, including Postman testing (see the sketch after this list).
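
For illustration, a minimal sketch of an API Gateway-backed Lambda in Python that reads a credential from Secrets Manager and calls a third-party web service; the secret id, endpoint URL, and response handling are hypothetical:

```python
# api_handler.py -- hedged sketch of an API Gateway -> Lambda handler that pulls a
# credential from Secrets Manager and calls a third-party API; names are placeholders.
import json
import os
import urllib.request

import boto3

secrets = boto3.client("secretsmanager")
SECRET_ID = os.environ.get("PARTNER_API_SECRET", "example/partner-api-key")  # placeholder

def _partner_api_key() -> str:
    resp = secrets.get_secret_value(SecretId=SECRET_ID)
    return json.loads(resp["SecretString"])["api_key"]

def handler(event, context):
    order_id = (event.get("pathParameters") or {}).get("order_id", "unknown")
    req = urllib.request.Request(
        f"https://partner.example.com/v1/orders/{order_id}",   # hypothetical endpoint
        headers={"Authorization": f"Bearer {_partner_api_key()}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        payload = json.loads(resp.read())
    return {"statusCode": 200, "body": json.dumps(payload)}
```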

 

Read more
Deqode

at Deqode

1 recruiter
Shubham Das
Posted by Shubham Das
Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Chennai
6 - 9 yrs
₹15L - ₹17L / yr
Amazon Web Services (AWS)
Reliability engineering

We are hiring a Site Reliability Engineer (SRE) to join our high-performance engineering team. In this role, you'll be responsible for driving reliability, performance, scalability, and security across cloud-native systems while bridging the gap between development and operations.

Key Responsibilities

  • Design and implement scalable, resilient infrastructure on AWS
  • Take ownership of the SRE function – availability, latency, performance, monitoring, incident response, and capacity planning
  • Partner with product and engineering teams to improve system reliability, observability, and release velocity
  • Set up, maintain, and enhance CI/CD pipelines using Jenkins, GitHub Actions, or AWS CodePipeline
  • Conduct load and stress testing, identify performance bottlenecks, and implement optimization strategies

Required Skills & Qualifications

  • Proven hands-on experience in cloud infrastructure design (AWS strongly preferred)
  • Strong background in DevOps and SRE principles
  • Proficiency with performance testing tools like JMeter, Gatling, k6, or Locust (see the Locust sketch after this list)
  • Deep understanding of cloud security and best practices for reliability engineering
  • AWS Solution Architect Certification – Associate or Professional (preferred)
  • Solid problem-solving skills and a proactive approach to systems improvement
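Because Locust appears in the tool list above, here is a minimal load-test sketch; the target host, endpoints, user counts, and task weights are illustrative assumptions only, not a prescribed test plan.

```python
# Minimal Locust load-test sketch. Endpoints and weights are hypothetical.
from locust import HttpUser, task, between


class ApiUser(HttpUser):
    # Each simulated user pauses 1-3 seconds between requests.
    wait_time = between(1, 3)

    @task(3)
    def browse_catalog(self):
        # Weighted 3x: most traffic exercises the read path.
        self.client.get("/api/v1/catalog")

    @task(1)
    def health_check(self):
        self.client.get("/healthz")


# Example run against a staging environment:
#   locust -f loadtest.py --host https://staging.example.com --users 200 --spawn-rate 20
```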

Why Join Us?

  • Work with cutting-edge technologies in a cloud-native, fast-paced environment
  • Collaborate with cross-functional teams driving meaningful impact
  • Hybrid work culture with flexibility and autonomy
  • Open, inclusive work environment focused on innovation and excellence


Read more
Deqode

Posted by Sneha Jain
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad
3.5 - 9 yrs
₹3L - ₹13L / yr
Python
Amazon Web Services (AWS)
AWS Lambda
Django
Amazon S3

Job Summary:

We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.

Key Responsibilities:

  • Develop and maintain backend applications using Python and frameworks like Django or Flask
  • Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
  • Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS (see the sketch after this list)
  • Write clean, efficient, and testable code following best practices
  • Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
  • Monitor and optimize system performance and troubleshoot production issues
  • Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
  • Maintain and improve application security and compliance with industry standards
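As a concrete, purely illustrative example of the pipeline work in the bullets above, here is a minimal sketch of an S3-triggered Lambda that parses a newly uploaded JSON file and writes its records to DynamoDB. The bucket trigger, table name, and record fields are assumptions, not part of any existing system.

```python
# Minimal sketch: S3 "ObjectCreated" event -> Lambda -> DynamoDB.
# The table name ("orders-processed") and record shape are hypothetical.
import json

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("orders-processed")


def lambda_handler(event, context):
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the newly uploaded JSON object from S3.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        items = json.loads(body)

        # batch_writer handles batching and retries for DynamoDB writes.
        with table.batch_writer() as writer:
            for item in items:
                writer.put_item(Item={"order_id": str(item["id"]), "payload": json.dumps(item)})

    return {"processed_objects": len(records)}
```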

Required Skills:

  • Strong programming skills in Python
  • Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
  • Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
  • Good understanding of RESTful API design and microservices architecture
  • Hands-on experience with CI/CD, Git, and version control systems
  • Familiarity with containerization (Docker, ECS, or EKS) is a plus
  • Strong problem-solving and communication skills

Preferred Qualifications:

  • Experience with PySpark, Pandas, or data engineering tools
  • Working knowledge of Django, Flask, or other Python frameworks
  • AWS Certification (e.g., AWS Certified Developer – Associate) is a plus

Educational Qualification:

  • Bachelor's or Master’s degree in Computer Science, Engineering, or related field


Read more
TalentRep

Agency job
via TalentRep by Vrinda Makhija
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
7 - 12 yrs
₹30L - ₹50L / yr
Microsoft Windows Azure
Amazon Web Services (AWS)
Golang
Python
Javascript
+8 more

A fast-growing, tech-driven loyalty programs and benefits business is looking to hire a Technical Architect with expertise across the areas below.


Key Responsibilities:


1. Architectural Design & Governance

• Define, document, and maintain the technical architecture for projects and product modules.

• Ensure architectural decisions meet scalability, performance, and security requirements.


2. Solution Development & Technical Leadership

• Translate product and client requirements into robust technical solutions, balancing short-term deliverables with long-term product viability.

• Oversee system integrations, ensuring best practices in coding standards, security, and performance optimization.


3. Collaboration & Alignment

• Work closely with Product Managers and Project Managers to prioritize and plan feature development.

• Facilitate cross-team communication to ensure technical feasibility and timely execution of features or client deliverables.


4. Mentorship & Code Quality

• Provide guidance to senior developers and junior engineers through code reviews, design reviews, and technical coaching.

• Advocate for best-in-class engineering practices, encouraging the use of CI/CD, automated testing, and modern development tooling.


5. Risk Management & Innovation

• Proactively identify technical risks or bottlenecks, proposing mitigation strategies.

• Investigate and recommend new technologies, frameworks, or tools that enhance product capabilities and developer productivity.


6. Documentation & Standards

• Maintain architecture blueprints, design patterns, and relevant documentation to align the team on shared standards.

• Contribute to the continuous improvement of internal processes, ensuring streamlined development and deployment workflows.


Skills:


1. Technical Expertise

• 7–10 years of overall experience in software development, with at least a couple of years in senior or lead roles.

• Strong proficiency in at least one mainstream programming language (e.g., Golang, Python, JavaScript).

• Hands-on experience with architectural patterns (microservices, monolithic systems, event-driven architectures).

• Good understanding of cloud platforms (AWS, Azure, or GCP) and DevOps practices (CI/CD pipelines, containerization with Docker/Kubernetes).

• Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).


Location: Saket, Delhi (Work from Office)

Schedule: Monday – Friday

Experience: 7-10 years

Compensation: As per industry standards

Read more
Metric Vibes

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Noida
4 - 8 yrs
₹10L - ₹15L / yr
PowerBI
Javascript
RESTful APIs
Embedded software
SQL
+9 more

Job Title: Tableau BI Developer

Years of Experience: 4–8 years

Engagement: $12 per hour (full-time equivalent)

Working Hours: 8 hours per day


Required Skills & Experience:

✅ 4–8 years of experience in BI development and data engineering

✅ Expertise in BigQuery and/or Snowflake for large-scale data processing

✅ Strong SQL skills with experience writing complex analytical queries (see the BigQuery sketch after this list)

✅ Experience in creating dashboards in tools like Power BI, Looker, or similar

✅ Hands-on experience with ETL/ELT tools and data pipeline orchestration

✅ Familiarity with cloud platforms (GCP, AWS, or Azure)

✅ Strong understanding of data modeling, data warehousing, and analytics best practices

✅ Excellent communication skills with the ability to explain technical concepts to non-technical stakeholders
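To make the SQL/BigQuery expectation above concrete, here is a minimal sketch of running an analytical query from Python with the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical, and credentials are assumed to come from the default GCP environment.

```python
# Minimal sketch: monthly-active-users aggregation in BigQuery, the kind of
# query a dashboard tile might be built on. Table and columns are hypothetical.
from google.cloud import bigquery


def monthly_active_users():
    client = bigquery.Client()  # picks up default GCP credentials
    sql = """
        SELECT DATE_TRUNC(event_date, MONTH) AS month,
               COUNT(DISTINCT user_id)       AS active_users
        FROM `my-project.analytics.events`
        GROUP BY month
        ORDER BY month
    """
    return [dict(row) for row in client.query(sql).result()]


if __name__ == "__main__":
    for row in monthly_active_users():
        print(row)
```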

Read more
Eyther
Posted by Atharva Kulkarni
Delhi, Faridabad, Bhopal
2 - 5 yrs
₹6L - ₹8L / yr
Kubernetes
Amazon Web Services (AWS)

About the Role


We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.


Responsibilities


1. Infrastructure & Cloud Management


• Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)

• Implement containerization (Docker, Kubernetes) and microservices orchestration

• Optimize infrastructure cost, scalability, and performance (see the cost-hygiene sketch below)
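As a small, hedged example of the cost-optimization work in the last bullet, here is a sketch that flags running EC2 instances missing a cost-allocation tag; the tag key and region are assumptions rather than project specifics.

```python
# Minimal sketch of a cost-hygiene check: list running EC2 instances that lack
# a cost-allocation tag. REQUIRED_TAG and the region are hypothetical.
import boto3

REQUIRED_TAG = "cost-center"


def find_untagged_instances(region="ap-south-1"):
    ec2 = boto3.client("ec2", region_name=region)
    untagged = []
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "instance-state-name", "Values": ["running"]}]
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {tag["Key"] for tag in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    return untagged


if __name__ == "__main__":
    print("Instances missing cost tag:", find_untagged_instances())
```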


2. CI/CD & Automation


• Build and maintain CI/CD pipelines for automated deployments

• Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation

• Implement GitOps practices for streamlined deployments


3. Security & Compliance


• Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards

• Implement role-based access controls, encryption, and network security best practices

• Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits


4. Monitoring & Incident Management


• Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.) – see the metrics sketch after this list

• Optimize system reliability and automate incident response mechanisms

• Improve MTTR (Mean Time to Recovery) and system uptime KPIs
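To illustrate the monitoring bullet above, here is a minimal sketch of exposing custom application metrics with the Python prometheus_client library for Prometheus to scrape and Grafana to chart; the metric names, labels, and port are assumptions.

```python
# Minimal sketch: expose request count and latency metrics at :9000/metrics.
# Metric names, the endpoint label, and the simulated work are hypothetical.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request(endpoint="/consultations"):
    with LATENCY.time():  # records how long the simulated work takes
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.labels(endpoint=endpoint).inc()


if __name__ == "__main__":
    start_http_server(9000)  # Prometheus scrapes http://host:9000/metrics
    while True:
        handle_request()
```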


5. Collaboration & Process Improvement


• Work closely with development and QA teams to streamline deployments

• Improve DevSecOps practices and cloud security policies

• Participate in architecture discussions and performance tuning


Required Skills & Qualifications


• 2+ years of experience in DevOps, cloud infrastructure, and automation

• Hands-on experience with AWS and Kubernetes

• Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)

• Experience with Terraform, Ansible, or CloudFormation

• Strong knowledge of Linux, shell scripting, and networking

• Experience with cloud security, monitoring, and logging solutions


Nice to Have


• Experience in healthcare or other regulated industries

• Familiarity with serverless architectures and AI-driven infrastructure automation

• Knowledge of big data pipelines and analytics workflows


What You'll Gain


• Opportunity to build and scale a mission-critical healthcare infrastructure

• Work in a fast-paced startup environment with cutting-edge technologies

• Growth potential into Lead DevOps Engineer or Cloud Architect roles

Read more
Deqode

Posted by Apoorva Jain
Bengaluru (Bangalore), Mumbai, Gurugram, Noida, Pune, Chennai, Nagpur, Indore, Ahmedabad, Kochi (Cochin), Delhi
3.5 - 8 yrs
₹4L - ₹15L / yr
Go Programming (Golang)
Amazon Web Services (AWS)
Python

Role Overview:


We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.


Key Responsibilities:

  • Design and develop backend services, APIs, and microservices using Golang.
  • Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
  • Optimize application performance, scalability, and reliability.
  • Collaborate closely with frontend, DevOps, and product teams.
  • Write clean, maintainable code and participate in code reviews.
  • Implement best practices in security, performance, and cloud architecture.
  • Contribute to CI/CD pipelines and automated deployment processes.
  • Debug and resolve technical issues across the stack.


Required Skills & Qualifications:

  • 3.5+ years of hands-on experience with Golang development.
  • Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
  • Proficient in developing and consuming RESTful APIs.
  • Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
  • Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
  • Good understanding of microservices architecture and distributed systems.
  • Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
  • Familiarity with Git, CI/CD pipelines, and agile workflows.
  • Strong problem-solving, debugging, and communication skills.


Nice to Have:

  • Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
  • Exposure to NoSQL databases like DynamoDB or MongoDB.
  • Contributions to open-source Golang projects or an active GitHub portfolio.


Read more
Hiringdog Interview Platform
Noida
2 - 4 yrs
₹12L - ₹18L / yr
DevOps
CI/CD
Jenkins
Amazon Web Services (AWS)
Git
+1 more

Summary


We are seeking a highly skilled and motivated Software Engineer with expertise in both backend development and DevOps practices. The ideal candidate will have a proven track record of designing, developing, and deploying robust and scalable backend systems, while also possessing strong knowledge of cloud infrastructure and DevOps principles. This role requires a collaborative individual who thrives in a fast-paced environment and is passionate about building high-quality software.


Responsibilities


Design, develop, and maintain backend services using appropriate technologies.

Implement and maintain CI/CD pipelines.

Manage and monitor cloud infrastructure (e.g., AWS, Azure, GCP).

Troubleshoot and resolve production issues.

Collaborate with frontend developers to integrate backend services.

Contribute to the design and implementation of database schemas.

Participate in code reviews and ensure code quality.

Contribute to the improvement of DevOps processes and tools.

Write clear and concise documentation.

Stay up-to-date with the latest technologies and best practices.


Qualifications


Bachelor's degree in Computer Science or a related field.

3+ years of experience in backend software development.

2+ years of experience in DevOps.

Proficiency in at least one backend programming language (e.g., Java, Python, Node.js, Go).

Experience with cloud platforms (e.g., AWS, Azure, GCP).

Experience with containerization technologies (e.g., Docker, Kubernetes).

Experience with CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI).

Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).

Strong understanding of database technologies (e.g., SQL, NoSQL).

Excellent problem-solving and debugging skills.

Strong communication and collaboration skills.



Bonus Points


Experience with specific technologies used by our company (list technologies if applicable).

Experience with serverless architectures.

Experience with infrastructure as code (e.g., Terraform, CloudFormation).

Contributions to open-source projects.

Relevant certifications.

Read more
Parksmart
Agency job
via Parksmart by Saurav Kumar
Remote, Noida
0 - 1 yrs
₹10000 - ₹15000 / mo
NodeJS (Node.js)
Amazon Web Services (AWS)
React.js
SQL
MongoDB
+1 more


🚀 We're Urgently Hiring – Node.js Backend Development Intern

Join our backend team as an intern and get hands-on experience building scalable, real-world applications with Node.js, Firebase, and AWS.

📍 Remote / Onsite

📅 Duration: 2 Months


🔧 What You’ll Work On:

Backend development using Node.js

Firebase, SQL & NoSQL database management

RESTful API integration

Deployment on AWS infrastructure


Read more
Hunarstreet Technologies pvt ltd

Agency job
via Hunarstreet Technologies pvt ltd by Sakshi Patankar
Noida
9 - 15 yrs
₹40L - ₹60L / yr
Java
CI/CD
Agile/Scrum
Hibernate (Java)
java 8
+8 more

Roles and Responsibilities:


• Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.


• Strong experience in development tools, CI/CD pipelines. Extensive experience with Agile.

• Good proficiency with technologies such as Java 8, Spring, Spring MVC, RESTful web services, Hibernate, Oracle PL/SQL, Spring Security, Ansible, Docker, JMeter, and Angular.

• Strong fundamentals and clarity in REST web services; should have exposure to developing REST services that handle large data sets.

• Fintech or lending domain experience is a plus but not necessary.

• Deep understanding of cloud technologies on at least one of the cloud platforms AWS, Azure or Google Cloud

• Wide knowledge of technology solutions and ability to learn and work with emerging technologies, methodologies, and solutions.

• Strong communicator with ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.

• Provide vision leadership for the technology roadmap of our products. Understand product capabilities and strategize technology for its alignment with business objectives and maximizing ROI.

• Define technical software architectures and lead development of frameworks.

• Engage end to end in product development, starting from business requirements to realization of product and to its deployment in production.

• Research, design, and implement the complex features being added to existing products and/or create new applications / components from scratch.

Minimum Qualifications

• Bachelor's or higher engineering degree in Computer Science, or a related technical field, or equivalent additional professional experience.

• 5 years of experience in delivering solutions from concept to production that are based on Java and open-source technologies as an enterprise architect in global organizations.

• 12-15 years of industry experience in design, development, deployments, operations and managing non-functional perspectives of technical solutions.

Read more