
50+ AWS (Amazon Web Services) Jobs in India

Apply to 50+ AWS (Amazon Web Services) Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS (Amazon Web Services) Jobs and apply today!

Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
8 - 10 yrs
₹12L - ₹20L / yr
CI/CD
Amazon Web Services (AWS)
Jenkins
GitHub
ArgoCD
+1 more

Senior DevOps Engineer (8–10 years)

Location: Mumbai


Role Summary

As a Senior DevOps Engineer, you will own end-to-end platform reliability and delivery automation for mission-critical lending systems. You’ll architect cloud infrastructure, standardize CI/CD, enforce DevSecOps controls, and drive observability at scale—ensuring high availability, performance, and compliance consistent with BFSI standards.


Key Responsibilities


Platform & Cloud Infrastructure

  • Design, implement, and scale multi-account, multi-VPC cloud architectures on AWS and/or Azure (compute, networking, storage, IAM, RDS, EKS/AKS, Load Balancers, CDN). 
  • Champion Infrastructure as Code (IaC) using Terraform (and optionally Pulumi/Crossplane) with GitOps workflows for repeatable, auditable deployments.
  • Lead capacity planning, cost optimization, and performance tuning across environments.

CI/CD & Release Engineering

  • Build and standardize CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps, ArgoCD) for microservices, data services, and frontends; enable blue‑green/canary releases and feature flags (see the sketch after this list).
  • Drive artifact management, environment promotion, and release governance with compliance-friendly controls.
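
To make the canary expectation concrete, here is a minimal, hypothetical promotion gate in Python (not Tradelab's actual pipeline; the /healthz endpoint, probe count, and failure threshold are illustrative assumptions):

```python
import time
import requests  # third-party HTTP client, assumed available

CANARY_URL = "https://canary.example.internal/healthz"  # hypothetical endpoint
MAX_FAILURE_RATE = 0.01  # roll back if more than 1% of probes fail
PROBES = 50

def canary_healthy() -> bool:
    """Probe the canary deployment and decide promote vs. roll back."""
    failures = 0
    for _ in range(PROBES):
        try:
            resp = requests.get(CANARY_URL, timeout=2)
            if resp.status_code >= 500:
                failures += 1
        except requests.RequestException:
            failures += 1
        time.sleep(1)
    return failures / PROBES <= MAX_FAILURE_RATE

if __name__ == "__main__":
    print("promote" if canary_healthy() else "rollback")
```

In a real pipeline a gate like this would run as a stage after the canary receives a slice of traffic, with the promote/rollback decision wired into the deployment tool (e.g. an ArgoCD rollout step).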

Containers, Kubernetes & Runtime

  • Operate production-grade Kubernetes (EKS/AKS), including cluster lifecycle, autoscaling, ingress, service mesh, and workload security; manage Docker/containerd images and registries. 

Reliability, Observability & Incident Management

  • Implement end-to-end monitoring, logging, and tracing (Prometheus, Grafana, ELK/EFK, CloudWatch/Log Analytics, Datadog/New Relic) with SLO/SLI error budgets (see the error-budget sketch below).
  • Establish on-call rotations, run postmortems, and continuously improve MTTR and change failure rate.
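
For context on error budgets, the arithmetic is simple; a hedged Python sketch (the SLO target and request counts are made-up numbers):

```python
# Error budget for a 99.9% availability SLO over a 30-day window.
SLO = 0.999
WINDOW_MINUTES = 30 * 24 * 60            # 43,200 minutes in the window

budget_minutes = (1 - SLO) * WINDOW_MINUTES
print(f"Allowed downtime: {budget_minutes:.1f} min / 30 days")   # ~43.2 min

# Burn rate from request counts (e.g. pulled from Prometheus):
total_requests, failed_requests = 1_200_000, 1_500   # hypothetical counts
error_rate = failed_requests / total_requests
burn_rate = error_rate / (1 - SLO)   # >1.0 means budget is burning too fast
print(f"Burn rate: {burn_rate:.2f}")                 # 1.25 here -> alert-worthy
```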

Security & Compliance (DevSecOps)

  • Enforce cloud and container hardening, secrets management (AWS Secrets Manager / HashiCorp Vault), vulnerability scanning (Snyk/SonarQube), and policy-as-code (OPA/Conftest) (see the sketch below).
  • Partner with infosec/risk to meet BFSI regulatory expectations for DR/BCP, audits, and data protection.
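
As one concrete flavor of secrets management, a minimal boto3 sketch for fetching a database credential from AWS Secrets Manager (the secret name and region are hypothetical):

```python
import json
import boto3  # AWS SDK for Python

def get_db_credentials(secret_id: str, region: str = "ap-south-1") -> dict:
    """Fetch a JSON secret at runtime instead of baking it into config."""
    client = boto3.client("secretsmanager", region_name=region)
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])

creds = get_db_credentials("prod/lending/db")  # hypothetical secret name
```

The same pattern applies with HashiCorp Vault via its HTTP API or client library; the point is that credentials never live in code or pipeline variables.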

Data, Networking & Edge

  • Optimize networking (DNS, TCP/IP, routing, OSI layers) and edge delivery (CloudFront/Fastly), including WAF rules and caching strategies. 
  • Support persistence layers (MySQL, Elasticsearch, DynamoDB) for performance and reliability.

Ways of Working & Leadership

  • Lead cross-functional squads (Product, Engineering, Data, Risk) and mentor junior DevOps/SREs.
  • Document runbooks, architecture diagrams, and operating procedures; drive automation-first culture.


Must‑Have Qualifications

  • 8–10 years of total experience with 5+ years hands-on in DevOps/SRE roles.
  • Strong expertise in AWS and/or Azure, Linux administration, Kubernetes, Docker, and Terraform.
  • Proven track record building CI/CD with Jenkins/GitHub Actions/Azure DevOps/ArgoCD. 
  • Solid grasp of networking fundamentals (DNS, TLS, TCP/IP, routing, load balancing).
  • Experience implementing observability stacks and responding to production incidents. 
  • Scripting in Bash/Python; ability to automate ops workflows and platform tasks. 
Good‑to‑Have / Preferred

  • Exposure to BFSI/fintech systems and compliance standards; DR/BCP planning.
  • Secrets management (Vault), policy-as-code (OPA), and security scanning (Snyk/SonarQube).
  • Experience with GitOps patterns, service tiering, and SLO/SLI design.
  • Knowledge of CDNs (CloudFront/Fastly) and edge caching/WAF rule authoring.

Education

  • Bachelor’s/Master’s in Computer Science, Information Technology, or related field (or equivalent experience).


Ciroos


Agency job
via Uplers by Sainayan Rai
Gurugram
5 - 8 yrs
₹50L - ₹70L / yr
Kubernetes
Go Programming (Golang)
Rust
C++
Amazon Web Services (AWS)
+4 more

What You'll Work On


• Design and develop a next-generation scalable observability platform for modern cloud-native and hybrid infrastructures that works in tandem with AI agents.

• Create intelligent AI agents to analyze logs, traces, and metrics in real time, delivering automated insights and remediation (see the sketch after this list).

• Build scalable and fault-tolerant AI agent frameworks.

• Engineer and optimize large-scale analytics pipelines to process high-velocity telemetry data.

• Build resilient distributed systems with high reliability, performance, and fault tolerance.

• Implement and fine-tune LLMs for natural language querying and automated troubleshooting.

• Partner with ML engineers to streamline AI model deployment and management.
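
As a small taste of the "automated insights" idea above, here is a self-contained Python sketch of a rolling z-score anomaly check over a metric stream (illustrative only; a production agent would consume real telemetry from Prometheus/OpenTelemetry):

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Flag metric points that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal baseline
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(x - mean) / std > self.threshold
        self.values.append(x)
        return anomalous

detector = RollingAnomalyDetector()
for latency_ms in [12, 11, 13, 12, 14, 11, 12, 13, 12, 11, 95]:
    if detector.observe(latency_ms):
        print(f"anomaly: {latency_ms} ms")   # fires on the 95 ms spike
```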


What We're Looking For


• Strong programming skills in Python and Golang (experience with Rust is a plus)

• Track record of building distributed systems and large-scale analytics pipelines

• Hands-on experience with cloud infrastructure (AWS, GCP, or Azure) and Kubernetes

• Deep understanding of observability technologies (Prometheus, OpenTelemetry, Grafana, Elastic, etc.)

• Knowledge of LLMs, AI agents, and agent frameworks like LangChain or AutoGen is a plus

• Experience with stream processing and real-time data processing frameworks

• Proficiency in database technologies (SQL & NoSQL, Clickhouse, Time-Series DBs)

• Bachelor's degree in Computer Science, Engineering, or related field (Master's/PhD is a plus)

Hashone Careers
Madhavan I
Posted by Madhavan I
Remote only
5 - 10 yrs
₹20L - ₹40L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
CI/CD
Python
+1 more

Job Description: DevOps Engineer

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning


About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.


Role Summary:

We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.


Key Responsibilities:

  • Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
  • Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
  • Deploy and manage Kubernetes clusters and containerized microservices
  • Define and implement infrastructure as code using Terraform/CloudFormation
  • Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana (see the sketch after this list)
  • Support MongoDB and MySQL database administration and optimization
  • Ensure high availability, performance tuning, and cost optimization
  • Guide and mentor junior engineers, and enforce DevOps best practices
  • Drive system security, compliance, and audit readiness in cloud environments
  • Collaborate with engineering, product, and QA teams to streamline release processes
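
To illustrate the monitoring bullet, a minimal boto3 sketch that creates a CloudWatch alarm (the instance ID, SNS topic, and thresholds are hypothetical):

```python
import boto3  # AWS SDK for Python

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alert when average CPU on an app instance stays above 80%
# for three consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="app-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```

An equivalent alarm would normally be declared in Terraform/CloudFormation so it is versioned with the rest of the infrastructure.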


Required Qualifications:

• 5+ years of DevOps/Infrastructure experience in production-grade environments

• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.

• Proven experience with Kubernetes and Docker in production

• Proficient with Terraform, CloudFormation, or similar IaC tools

• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar

• Advanced scripting in Python, Bash, or Go

• Solid understanding of networking, firewalls, DNS, and security protocols

• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)

• Experience with MongoDB and MySQL in cloud environments

Preferred Qualifications:

• AWS Certified DevOps Engineer or Solutions Architect

• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD

• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments

• Background in high-availability systems and incident response

• Prior experience in a SaaS, ML, or hospitality-tech environment


Tools and Technologies You’ll Use:

• Cloud: AWS

• Containers: Docker, Kubernetes, Helm

• CI/CD: Jenkins, GitHub Actions

• IaC: Terraform, CloudFormation

• Monitoring: Prometheus, Grafana, CloudWatch

• Databases: MongoDB, MySQL

• Scripting: Bash, Python

• Collaboration: Git, Jira, Confluence, Slack


Why Join Us?

• Competitive salary and performance bonuses.

• Remote-friendly work culture.

• Opportunity to work on cutting-edge tech in AI and ML.

• Collaborative, high-growth startup environment.

• For more information, visit http://www.lodgiq.com

Matchmaking platform


Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
Data Science
Python
Natural Language Processing (NLP)
MySQL
Machine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in at least two of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases do you have hands-on experience with?
  • Are you ok for Mumbai location (if candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs (see the drift sketch below).
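
One common way to quantify the drift mentioned above is the Population Stability Index; a hedged NumPy sketch (the distributions and thresholds are illustrative):

```python
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between training-time (expected) and live (actual) score distributions.
    Rule of thumb: <0.1 stable, 0.1-0.25 moderate shift, >0.25 investigate."""
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    edges = np.linspace(lo, hi, bins + 1)
    e_frac = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_scores = rng.beta(2, 5, 50_000)    # hypothetical scores at training time
live_scores = rng.beta(2.5, 5, 10_000)   # slightly shifted production scores
print(f"PSI = {population_stability_index(train_scores, live_scores):.3f}")
```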


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.



ICloudEMS
Ruchi ICloudEMS
Posted by Ruchi ICloudEMS
Remote only
3 - 6 yrs
₹4L - ₹8L / yr
NodeJS (Node.js)
PHP
ERP management
EdTech
MySQL
+2 more

We are looking for a skilled Node.js Developer with PHP experience to build, enhance, and maintain ERP and EdTech platforms. The role involves developing scalable backend services, integrating ERP modules, and supporting education-focused systems such as LMS, student management, exams, and fee management.


Key Responsibilities

Develop and maintain backend services using Node.js and PHP.


Build and integrate ERP modules for EdTech platforms (Admissions, Students, Exams, Attendance, Fees, Reports).


Design and consume RESTful APIs and third-party integrations (payment gateway, SMS, email).


Work with databases (MySQL / MongoDB / PostgreSQL) for high-volume education data.


Optimize application performance, scalability, and security.


Collaborate with frontend, QA, and product teams.


Debug, troubleshoot, and provide production support.


Required Skills

Strong experience in Node.js (Express.js / NestJS).


Working experience in PHP (Core PHP / Laravel / CodeIgniter).


Hands-on experience with ERP systems.


Domain experience in EdTech / Education ERP / LMS.


Strong knowledge of MySQL and database design.


Experience with authentication, role-based access, and reporting.


Familiarity with Git, APIs, and server environments.



Preferred Skills

Experience with online examination systems.


Knowledge of cloud platforms (AWS / Azure).


Understanding of security best practices (CSRF, XSS, SQL Injection).


Exposure to microservices or modular architecture.



Qualification

Bachelor’s degree in Computer Science or equivalent experience.


3–6 years of relevant experience in Node.js & PHP development

Cspar Enterprises Private Limited
Bengaluru (Bangalore)
7 - 14 yrs
₹9L - ₹12L / yr
Django
Python
.NET
PHP
React.js
+16 more

Job Description -Technical Project Manager

Job Title: Technical Project Manager

Location: Bhopal / Bangalore (On-site)

Experience Required: 7+ Years

Industry: Fintech / SaaS / Software Development

Role Overview

We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.

Key Responsibilities

Project & Team Management

  • Manage daily tasks for Android, Frontend, and Backend developers
  • Conduct daily stand-ups, weekly planning, and reviews
  • Track progress, identify blockers, and ensure timely delivery
  • Maintain sprint boards, task estimations, and timelines

Technical Requirement Translation

  • Convert business requirements into technical tasks
  • Communicate requirements clearly to developers
  • Create user stories, flow diagrams, and PRDs
  • Ensure requirements are understood and implemented correctly

Quality & Build Review

  • Validate build quality, UI/UX flow, functionality
  • Check API integrations, errors, performance issues
  • Ensure coding practices and architecture guidelines are followed
  • Perform preliminary QA before handover to testing or clients

Issue Resolution

  • Identify development issues early
  • Coordinate with developers to fix bugs
  • Escalate major issues to founders with clear insights

Reporting & Documentation

  • Daily/weekly reports to management
  • Sprint documentation, release notes
  • Maintain project documentation & version control processes

Cross-Team Communication

  • Act as the single point of contact for management
  • Align multiple tech teams with business goals
  • Coordinate with HR and operations for resource planning

Required Skills

  • Strong understanding of Android, Web (Frontend/React), Backend development flows
  • Knowledge of APIs, Git, CI/CD, basic testing
  • Experience with Agile/Scrum methodologies
  • Ability to review builds and suggest improvements
  • Strong documentation skills (Jira, Notion, Trello, Asana)
  • Excellent communication & leadership
  • Ability to handle pressure and multiple projects

Good to Have

  • Prior experience in Fintech projects
  • Basic knowledge of UI/UX
  • Experience in preparing FSD/BRD/PRD
  • QA experience or understanding of test cases

Salary Range: 9 to 12 LPA

AryuPay Technologies
Bengaluru (Bangalore), Bhopal
4 - 8 yrs
₹5L - ₹10L / yr
Django
RESTful APIs
Flask
PostgreSQL
CI/CD
+7 more

Senior Python Django Developer 

Experience: Back-end development: 6 years (Required)


Location: Bangalore / Bhopal

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.

Requirements:

  • 6 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy (see the sketch after this list).
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
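
As a flavor of the TDD expectation above, a minimal pytest sketch for a hypothetical settlement-reconciliation helper (names and amounts are illustrative, not AryuPay code):

```python
# reconcile.py — hypothetical settlement helper (illustrative only)
def reconcile(ledger: dict, bank: dict) -> dict:
    """Compare internal ledger amounts (in paise) against a bank statement."""
    missing = [txn for txn in ledger if txn not in bank]
    mismatched = [txn for txn, amt in ledger.items()
                  if txn in bank and bank[txn] != amt]
    return {"missing_in_bank": missing, "amount_mismatch": mismatched}

# In TDD this test is written first; run it with `pytest`.
def test_flags_missing_and_mismatched_transactions():
    ledger = {"TXN1": 10_000, "TXN2": 5_000, "TXN3": 7_500}
    bank = {"TXN1": 10_000, "TXN2": 4_999}
    result = reconcile(ledger, bank)
    assert result["missing_in_bank"] == ["TXN3"]
    assert result["amount_mismatch"] == ["TXN2"]
```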

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.

Job Types: Full-time, Permanent


Schedule:

  • Day shift

Supplemental Pay:

  • Performance bonus
  • Yearly bonus

Ability to commute/relocate:

  • JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)

Codemonk

Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
7yrs+
Up to ₹35L / yr (varies)
NodeJS (Node.js)
Python
Google Cloud Platform (GCP)
RESTful APIs
SQL
+4 more

Like us, you'll be deeply committed to delivering impactful outcomes for customers.

  • 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
  • Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
  • Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
  • Experience writing batch/cron jobs using Python and Shell scripting.
  • Experience in web application development using JavaScript and JavaScript libraries.
  • Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
  • Experience/Familiarity with RDBMS and NoSQL database technologies like MySQL, MongoDB, Redis, ElasticSearch, and other similar databases.
  • Understanding of code versioning tools such as Git.
  • Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
  • Experience with JS-based build/package tools like Grunt, Gulp, Bower, Webpack.
Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period - 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Arcitech
Navi Mumbai
5 - 7 yrs
₹12L - ₹14L / yr
Cyber Security
VAPT
Cloud Computing
CI/CD
Jenkins
+4 more

Senior DevSecOps Engineer (Cybersecurity & VAPT) - Arcitech AI



Arcitech AI, located in Mumbai's bustling Lower Parel, is a trailblazer in software and IT, specializing in software development, AI, mobile apps, and integrative solutions. Committed to excellence and innovation, Arcitech AI offers incredible growth opportunities for team members. Enjoy unique perks like weekends off and a provident fund. Our vibrant culture is friendly and cooperative, fostering a dynamic work environment that inspires creativity and forward-thinking. Join us to shape the future of technology.

Full-time

Navi Mumbai, Maharashtra, India

5+ Years Experience

₹12,00,000 - ₹14,00,000

Job Title: Senior DevSecOps Engineer (Cybersecurity & VAPT)

Location: Vashi, Navi Mumbai (On-site)

Shift: 10:00 AM - 7:00 PM

Experience: 5+ years

Salary : INR 12,00,000 - 14,00,000


Job Summary

Hiring a Senior DevSecOps Engineer with strong cloud, CI/CD, automation skills and hands-on experience in Cybersecurity & VAPT to manage deployments, secure infrastructure, and support DevSecOps initiatives.


Key Responsibilities

Cloud & Infrastructure

  • Manage deployments on AWS/Azure
  • Maintain Linux servers & cloud environments
  • Ensure uptime, performance, and scalability


CI/CD & Automation

  • Build and optimize pipelines (Jenkins, GitHub Actions, GitLab CI/CD)
  • Automate tasks using Bash/Python
  • Implement IaC (Terraform/CloudFormation)


Containerization

  • Build and run Docker containers
  • Work with basic Kubernetes concepts


Cybersecurity & VAPT

  • Perform Vulnerability Assessment & Penetration Testing
  • Identify, track, and mitigate security vulnerabilities
  • Implement hardening and support DevSecOps practices
  • Assist with firewall/security policy management


Monitoring & Troubleshooting

  • Use ELK, Prometheus, Grafana, CloudWatch
  • Resolve cloud, deployment, and infra issues


Cross-Team Collaboration

  • Work with Dev, QA, and Security for secure releases
  • Maintain documentation and best practices


Required Skills

  • AWS/Azure, Linux, Docker
  • CI/CD tools: Jenkins, GitHub Actions, GitLab
  • Terraform / IaC
  • VAPT experience + understanding of OWASP, cloud security
  • Bash/Python scripting
  • Monitoring tools (ELK, Prometheus, Grafana)
  • Strong troubleshooting & communication
Daten Wissen Pvt Ltd

Ashwini poojari
Posted by Ashwini poojari
Mumbai, Bhayander, Thane
1 - 2 yrs
₹2L - ₹5L / yr
Django
DRF
Python
RESTful APIs
Celery
+6 more

Backend Developer (Django)


About the Role:

We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.


Key Responsibilities:

  • Develop and maintain Python-based web applications using Django and Django Rest Framework.
  • Build and integrate RESTful APIs.
  • Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
  • Contribute to improving development workflows through automation.
  • Assist in deploying applications using cloud platforms like Heroku or AWS.
  • Write clean, maintainable, and efficient code.

Requirements:

Backend:

  • Strong understanding of Django and Django Rest Framework (DRF).
  • Experience with task queues like Celery.
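
For the Celery requirement, a minimal task sketch (the task, helper, and Redis broker are hypothetical assumptions):

```python
# tasks.py — minimal Celery sketch; assumes a local Redis broker
from celery import Celery

app = Celery("worker", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3)
def send_enrollment_email(self, student_id: int) -> None:
    """Send an enrollment email (stubbed), retrying on transient failures."""
    try:
        deliver_email(student_id)  # hypothetical helper that may raise
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)  # back off 30s, up to 3 retries

def deliver_email(student_id: int) -> None:
    print(f"sending enrollment email for student {student_id}")
```

Run a worker with `celery -A tasks worker` and enqueue work from Django views via `send_enrollment_email.delay(42)`.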

Frontend (Basic Understanding):

  • Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.

Hosting & Deployment:

  • Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.

Linux/Server Knowledge:

  • Basic to intermediate understanding of Linux commands and server environments.
  • Ability to work with terminal, virtual environments, SSH, and basic server configurations.

Python Knowledge:

  • Good grasp of OOP concepts.
  • Familiarity with Pandas for data manipulation is a plus.

Soft & Team Skills:

  • Strong collaboration and team management abilities.
  • Ability to work in a team-driven environment and coordinate tasks smoothly.
  • Problem-solving mindset and attention to detail.
  • Good communication skills and eagerness to learn

What We Offer:

  • A collaborative, friendly, and growth-focused work environment.
  • Opportunity to work on real-time projects using modern technologies.
  • Guidance and mentorship to help you advance in your career.
  • Flexible and supportive work culture.
  • Opportunities for continuous learning and skill development.

Location : Bhayander (Onsite)

Immediate to 30-day joiner and Mumbai-based candidate preferred. 



Media and Entertainment Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
5 - 7 yrs
₹15L - ₹25L / yr
DevOps
Amazon Web Services (AWS)
CI/CD
Infrastructure
Scripting
+28 more

Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting


Criteria:

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
  • Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
  • Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
  • Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
  • Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
  • Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
  • Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
  • Strong scripting skills (Bash, Shell, Python) for automation
  • Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
  • Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
  • Strong experience in incident management, root cause analysis & production firefighting

 

Description

Role Overview

Company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone who has hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or high-traffic applications.

 

 Key Responsibilities

1. Cloud Infrastructure — AWS (Primary Focus)

  • Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
  • Optimize cloud cost, resource utilization, and performance across environments.
  • Design high-availability, fault-tolerant systems for streaming workloads.

 

2. CI/CD Automation

  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
  • Automate deployments for microservices, mobile apps, and backend APIs.
  • Implement blue/green and canary deployments for seamless production rollouts.

 

3. Observability & Monitoring

  • Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.
  • Perform proactive performance analysis to minimize downtime and bottlenecks.
  • Set up dashboards for real-time visibility into system health and user traffic spikes.

 

4. Security, Compliance & Risk Highlighting

  • Conduct frequent risk assessments and identify vulnerabilities in:
    o Cloud architecture
    o Access policies (IAM)
    o Secrets & key management
    o Data flows & network exposure
  • Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.

 

5. Scalability & Reliability Engineering

  • Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
  • Identify scalability gaps and propose solutions across:
    o Microservices
    o Caching layers
    o CDN distribution (CloudFront)
    o Database workloads
  • Perform capacity planning and load testing to ensure readiness for 10x traffic growth.

 

6. Database & Storage Support

  • Administer and optimize MongoDB for high-read/low-latency use cases.
  • Design backup, recovery, and data replication strategies.
  • Work closely with backend teams to tune query performance and indexing.
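
To make the indexing point concrete, a minimal PyMongo sketch (the connection string, collection, and fields are hypothetical):

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")
videos = client["ott"]["watch_history"]   # hypothetical collection

# Compound index to serve "recent items for a user" without a collection scan.
videos.create_index([("user_id", ASCENDING), ("watched_at", DESCENDING)])

# Verify the query plan uses the index (look for IXSCAN in the winning plan).
plan = videos.find({"user_id": 42}).sort("watched_at", -1).limit(20).explain()
print(plan["queryPlanner"]["winningPlan"])
```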

 

7. Automation & Infrastructure as Code

  • Implement IaC using Terraform, CloudFormation, or Ansible.
  • Automate repetitive infrastructure tasks to ensure consistency across environments.

 

Required Skills & Experience

Technical Must-Haves

  • 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
  • Strong hands-on experience with AWS (core and advanced services).
  • Expertise in Jenkins CI/CD pipelines.
  • Solid background working with MongoDB in production environments.
  • Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
  • Strong scripting experience (Bash, Python, Shell).
  • Experience handling risk identification, root cause analysis, and incident management.

 

Nice to Have

  • Experience with OTT, video streaming, media, or any content-heavy product environments.
  • Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
  • Understanding of CDN, caching, and streaming pipelines.

 

Personality & Mindset

  • Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
  • Proactive problem solver with ability to think about long-term scalability.
  • Comfortable working with cross-functional engineering teams.

 

Why Join company?

• Build and operate infrastructure powering millions of monthly users.

• Opportunity to shape DevOps culture and cloud architecture from the ground up.

• High-impact role in a fast-scaling Indian OTT product.

Inteliment Technologies

Ariba Khan
Posted by Ariba Khan
Pune
10 - 20 yrs
Up to ₹25L / yr (varies)
Project Management
PowerBI
Amazon Web Services (AWS)
Windows Azure
Informatica

About the company:

Inteliment is a niche business analytics company with an almost two-decade proven track record of partnering with hundreds of Fortune 500 global companies. Inteliment operates its ISO-certified development centre in Pune, India, and has business operations in multiple countries through subsidiaries in Singapore and Europe, with its headquarters in India.


About the role:

As a Technical Project Manager, you will lead the planning, execution, and delivery of complex technical projects while ensuring alignment with business objectives and timelines. You will act as a bridge between technical teams and stakeholders, managing resources, risks, and communications to deliver high-quality solutions. This role demands strong leadership, project management expertise, and technical acumen to drive project success in a dynamic and collaborative environment.


Qualifications:

  • Education Background: Any ME / M Tech / BE / B Tech 


Key Competencies:

Technical Skills

1. Data & BI Technologies

  • Proficiency in SQL & PL/SQL for database querying and optimization.
  • Understanding of data warehousing concepts, dimensional modeling, and data lake/lakehouse architectures.
  • Experience with BI tools such as Power BI, Tableau, Qlik Sense/View.
  • Familiarity with traditional platforms like Oracle, Informatica, SAP BO, BODS, BW.

2. Cloud & Data Engineering :

  • Strong knowledge of AWS (EC2, S3, Lambda, Glue, Redshift), Azure (Data Factory, Synapse, Databricks, ADLS), Snowflake (warehouse architecture, performance tuning), and Databricks (Delta Lake, Spark).
  • Experience with cloud-based ETL/ELT pipelines, data ingestion, orchestration, and workflow automation.

3. Programming

  • Hands-on experience in Python or similar scripting languages for data processing and automation.

Soft Skills

  • Strong leadership and team management skills.
  • Excellent verbal and written communication for stakeholder alignment.
  • Structured problem-solving and decision-making capability.
  • Ability to manage ambiguity and handle multiple priorities.

Tools & Platforms

  • Cloud: AWS, Azure
  • Data Platforms: Snowflake, Databricks
  • BI Tools: Power BI, Tableau, Qlik
  • Data Management: Oracle, Informatica, SAP BO
  • Project Tools: JIRA, MS Project, Confluence


Key Responsibilities:

  • End-to-End Project Management: Lead the team through the full project lifecycle, delivering techno-functional solutions.
  • Methodology Expertise: Apply Agile, PMP, and other frameworks to ensure effective project execution and resource management.
  • Technology Integration: Oversee technology integration and ensure alignment with business goals.
  • Stakeholder & Conflict Management: Manage relationships with customers, partners, and vendors, addressing expectations and conflicts proactively.
  • Technical Guidance: Provide expertise in software design, architecture, and ensure project feasibility.
  • Change Management: Analyse new requirements/change requests, ensuring alignment with project goals.
  • Effort & Cost Estimation: Estimate project efforts and costs and identify potential risks early.
  • Risk Mitigation: Proactively identify risks and develop mitigation strategies, escalating issues in advance.
  • Hands-On Contribution: Participate in coding, code reviews, testing, and documentation as needed.
  • Project Planning & Monitoring: Develop detailed project plans, track progress, and monitor task dependencies.
  • Scope Management: Manage project scope, deliverables, and exclusions, ensuring technical feasibility.
  • Effective Communication: Communicate with stakeholders to ensure agreement on scope, timelines, and objectives.
  • Reporting: Provide status and RAG reports, proactively addressing risks and issues.
  • Change Control: Manage changes in project scope, schedule, and costs using appropriate verification techniques.
  • Performance Measurement: Measure project performance with tools and techniques to ensure progress.
  • Operational Process Management: Oversee operational tasks like timesheet approvals, leave, appraisals, and invoicing.
ConvertLens

Reshika Mendiratta
Posted by Reshika Mendiratta
Remote, Noida
2yrs+
Best in industry
Python
FastAPI
AI Agents
Artificial Intelligence (AI)
Large Language Models (LLM)
+9 more

🚀 About Us

At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.


We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.

We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.


🛠️ What You’ll Do

  • Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
  • Develop Agentic AI applications and workflows to drive automation and insights.
  • Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data (see the sketch after this list).
  • Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
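
For the connector work above, a minimal sketch using FastAPI and httpx (the CRM base URL, endpoint shape, and field names are hypothetical):

```python
import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()
CRM_BASE = "https://api.example-crm.com"   # hypothetical third-party system

@app.get("/patients/{patient_id}/reviews")
async def fetch_reviews(patient_id: str):
    """Pull review data from an external CRM and normalize its shape."""
    async with httpx.AsyncClient(timeout=5.0) as client:
        resp = await client.get(f"{CRM_BASE}/v1/reviews",
                                params={"patient": patient_id})
    if resp.status_code != 200:
        raise HTTPException(status_code=502, detail="upstream CRM error")
    return [{"id": r["id"], "rating": r["rating"]} for r in resp.json()]
```

Served with `uvicorn main:app`; in production the upstream call would also carry auth headers, retries, and tracing.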


⚙️ What You Bring

  • 2+ years of hands-on experience in Python back-end development.
  • Strong understanding of REST API design and integration.
  • Proficiency with relational databases (MySQL/PostgreSQL).
  • Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
  • Experience maintaining production systems with a focus on reliability and scalability.
  • Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
  • Strong problem-solving skills and comfort working in a startup/product environment.
  • A builder mindset — scrappy, curious, and ready to ship.


💼 Perks & Culture

  • Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
  • A high-growth, high-impact environment where your code goes live fast.
  • Opportunities to work with Agentic AI and cutting-edge tech.
  • Small team, big vision — your work truly matters here.
Service Co


Agency job
via Vikash Technologies by Rishika Teja
Noida, Delhi
6 - 8 yrs
₹15L - ₹18L / yr
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
Kubernetes
+2 more

Hands-on experience with Infrastructure-as-Code tools like Terraform, Ansible, Puppet, Chef, CloudFormation, etc.


Expertise in any Cloud (AWS, Azure, GCP)


Good understanding of version control (Git, GitLab, GitHub)


Hands-on experience in Container Infrastructure (Docker, Kubernetes)



Ensuring availability, performance, security and scalability of production system


Hands-on experience with infrastructure automation tools like Chef/Puppet/Ansible, Terraform, ARM, CloudFormation


Hands-on experience with artifact repositories (Nexus, JFrog Artifactory)


Hands-on experience with CI/CD tools on-premises/cloud (Jenkins, CircleCI, etc.)


Hands-on experience with monitoring, logging, and security (CloudWatch, CloudTrail, Log Analytics, and hosted tools such as ELK, EFK, Splunk, Datadog, Prometheus)


Hands-on experience with scripting languages like Python, Ant, Bash, and Shell


Hands-on experience in designing pipelines & pipelines as code.


Hands-on experience in end-to-end deployment process & strategy


Hands-on experience of GCP/AWS/AZURE with a good understanding of computing, networks, storage, IAM, Security, and integration services

Albert Invent

Nikita Sinha
Posted by Nikita Sinha
Hyderabad
2 - 4 yrs
Up to ₹16L / yr (varies)
Automation
Terraform
Python
NodeJS (Node.js)
Amazon Web Services (AWS)

The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.


Key Responsibilities

  • Act as a passionate representative of the Albert product and brand.
  • Collaborate with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
  • Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
  • Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
  • Design and deliver the mission-critical stack, focusing on security, resiliency, scale, and performance.
  • Take ownership of end-to-end performance and operability.
  • Apply strong knowledge of automation and orchestration principles.
  • Serve as the ultimate escalation point for complex or critical issues not yet documented as Standard Operating Procedures (SOPs).
  • Troubleshoot and define mitigations using a deep understanding of service topology and dependencies.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 2+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
  • Strong experience in Infrastructure as Code (IAC), preferably using Terraform.
  • Proficiency in Python or Node.js, with experience designing RESTful APIs and working in microservices architecture.
  • Solid expertise in AWS cloud infrastructure and platform technologies including APIs, distributed systems, and microservices.
  • Hands-on experience with observability stacks, including centralized log management, metrics, and tracing.
  • Familiarity with CI/CD tools (e.g., CircleCI) and performance testing tools like K6.
  • Passion for bringing automation and standardization to engineering operations.
  • Ability to build high-performance APIs with low latency (<200ms).
  • Ability to work in a fast-paced environment, learning from peers and leaders.
  • Demonstrated ability to mentor other engineers and contribute to team growth, including participation in recruiting activities.

Good to Have

  • Experience with Kubernetes and container orchestration.
  • Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, or Datadog.
  • Experience building Internal Developer Platforms (IDPs) or reusable frameworks for engineering teams.
  • Exposure to ML infrastructure or data engineering workflows.
  • Experience working in compliance-heavy environments (e.g., SOC2, HIPAA).


DeepIntent

Amruta Mundale
Posted by Amruta Mundale
Pune
4 - 8 yrs
Best in industry
Java
SQL
Spring Boot
Apache
Amazon Web Services (AWS)
+1 more

What You’ll Do:

  • Setting up formal data practices for the company.
  • Building and running super stable and scalable data architectures.
  • Making it easy for folks to add and use new data with self-service pipelines.
  • Getting DataOps practices in place.
  • Designing, developing, and running data pipelines to help out Products, Analytics, data scientists and machine learning engineers.
  • Creating simple, reliable data storage, ingestion, and transformation solutions that are a breeze to deploy and manage.
  • Writing and Managing reporting API for different products.
  • Implementing different methodologies for different reporting needs.
  • Teaming up with all sorts of people – business folks, other software engineers, machine learning engineers, and analysts.

Who You Are:

  • Bachelor’s degree in engineering (CS / IT) or equivalent degree from a well-known Institute / University.
  • 3.5+ years of experience in building and running data pipelines for tons of data.
  • Experience with public clouds like GCP or AWS.
  • Experience with Apache open-source projects like Spark, Druid, Airflow, and big data databases like BigQuery, Clickhouse.
  • Experience making data architectures that are optimised for both performance and cost.
  • Good grasp of software engineering, DataOps, data architecture, Agile, and DevOps.
  • Proficient in SQL, Java, Spring Boot, Python, and Bash.
  • Good communication skills for working with technical and non-technical people.
  • Someone who thinks big, takes chances, innovates, dives deep, gets things done, hires and develops the best, and is always learning and curious.


Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹28L / yr
Business Analysis
Data integration
SQL
PMS
CRS
+2 more

Job Description: Business Analyst – Data Integrations

Location: Bangalore / Hybrid / Remote

Company: LodgIQ

Industry: Hospitality / SaaS / Machine Learning

About LodgIQ

Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.

About the Role

We’re looking for a skilled Business Analyst – Data Integrations who can bridge the gap between business operations and technology teams, ensuring smooth, efficient, and scalable integrations. If you’re passionate about hospitality tech and enjoy solving complex data challenges, we’d love to hear from you!

What You’ll Do

Key Responsibilities

  • Collaborate with vendors to gather requirements for API development and ensure technical feasibility.
  • Collect API documentation from vendors; document and explain business logic to use external data sources effectively.
  • Access vendor applications to create and validate sample data; ensure the accuracy and relevance of test datasets.
  • Translate complex business logic into documentation for developers, ensuring clarity for successful integration.
  • Monitor all integration activities and support tickets in Jira, proactively resolving critical issues.
  • Lead QA testing for integrations, overseeing pilot onboarding and ensuring solution viability before broader rollout.
  • Document onboarding processes and best practices to streamline future integrations and improve efficiency.
  • Build, train, and deploy machine learning models for forecasting, pricing, and optimization, supporting strategic goals.
  • Drive end-to-end execution of data integration projects, including scoping, planning, delivery, and stakeholder communication.
  • Gather and translate business requirements into actionable technical specifications, liaising with business and technical teams.
  • Oversee maintenance and enhancement of existing integrations, performing RCA and resolving integration-related issues.
  • Document workflows, processes, and best practices for current and future integration projects.
  • Continuously monitor system performance and scalability, recommending improvements to increase efficiency.
  • Coordinate closely with Operations for onboarding and support, ensuring seamless handover and issue resolution.

Desired Skills & Qualifications

  • Strong experience in API integration, data analysis, and documentation.
  • Familiarity with Jira for ticket management and project workflow.
  • Hands-on experience with machine learning model development and deployment.
  • Excellent communication skills for requirement gathering and stakeholder engagement.
  • Experience with QA test processes and pilot rollouts.
  • Proficiency in project management, data workflow documentation, and system monitoring.
  • Ability to manage multiple integrations simultaneously and work cross-functionally.

Required Qualifications

  • Experience: Minimum 4 years in hotel technology or business analytics, preferably handling data integration or system interoperability projects.
  • Technical Skills:
    o Basic proficiency in SQL or database querying.
    o Familiarity with data integration concepts such as APIs or ETL workflows (preferred but not mandatory).
    o Eagerness to learn and adapt to new tools, platforms, and technologies.
  • Hotel Technology Expertise: Understanding of systems such as PMS, CRS, Channel Managers, or RMS.
  • Project Management: Strong organizational and multitasking abilities.
  • Problem Solving: Analytical thinker capable of troubleshooting and driving resolution.
  • Communication: Excellent written and verbal skills to bridge technical and non-technical discussions.
  • Attention to Detail: Methodical approach to documentation, testing, and deployment.

Preferred Qualification

  • Exposure to debugging tools and troubleshooting methodologies.
  • Familiarity with cloud environments (AWS).
  • Understanding of data security and privacy considerations in the hospitality industry.

Why LodgIQ?

  • Join a fast-growing, mission-driven company transforming the future of hospitality.
  • Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
  • Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
  • Competitive salary and performance bonuses.
  • For more information, visit https://www.lodgiq.com

Global digital transformation solutions provider.


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹28L / yr
Databricks
Python
SQL
PySpark
Amazon Web Services (AWS)
+9 more

Role Proficiency:

This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.


Skill Examples:

  1. Proficiency in SQL, Python, or other programming languages used for data manipulation.
  2. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
  3. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery).
  4. Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
  5. Experience in performance tuning.
  6. Experience in data warehouse design and cost improvements.
  7. Apply and optimize data models for efficient storage retrieval and processing of large datasets.
  8. Communicate and explain design/development aspects to customers.
  9. Estimate time and resource requirements for developing/debugging features/components.
  10. Participate in RFP responses and solutioning.
  11. Mentor team members and guide them in relevant upskilling and certification.

 

Knowledge Examples:

  1. Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, and Azure ADF and ADLF.
  2. Proficient in SQL for analytics and windowing functions.
  3. Understanding of data schemas and models.
  4. Familiarity with domain-related data.
  5. Knowledge of data warehouse optimization techniques.
  6. Understanding of data security concepts.
  7. Awareness of patterns frameworks and automation practices.


 

Additional Comments:

# of Resources: 22
Role(s): Technical Role
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026

Project Overview:

Role Scope / Deliverables: We are seeking highly skilled Data Engineers with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.

The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.

Design, build, and maintain scalable data pipelines using Databricks and PySpark.

Develop and optimize complex SQL queries for data extraction, transformation, and analysis.

Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).

Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.

Ensure data quality, performance, and reliability across data workflows.

Participate in code reviews, data architecture discussions, and performance optimization initiatives.

Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
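
As a flavor of the pipeline work described above, a minimal PySpark sketch (the paths, columns, and table layout are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("orders-daily").getOrCreate()

# Ingest raw orders (hypothetical S3 path and columns).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Keep only the latest event per order_id (dedup via a window function).
w = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
latest = (orders
          .withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")
          .drop("rn"))

# Daily revenue aggregate for downstream analytics.
daily = (latest
         .groupBy(F.to_date("updated_at").alias("order_date"))
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("order_id").alias("orders")))

# Write as a partitioned Delta table on Databricks (Parquet would also work).
(daily.write.format("delta").mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-bucket/curated/daily_revenue/"))
```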


Key Skills:

Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).

Excellent problem-solving, communication, and collaboration skills.
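For the SQL skills above, here is a self-contained illustration of a CTE plus a window function; it uses the standard-library sqlite3 driver purely so the snippet runs anywhere, and the table and columns are invented.

    # CTE + window function demo against an in-memory SQLite database.
    # Window functions require SQLite 3.25+ (bundled with modern Python).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE payments (customer_id TEXT, paid_on TEXT, amount REAL);
        INSERT INTO payments VALUES
            ('c1', '2025-01-05', 120.0), ('c1', '2025-02-05', 80.0),
            ('c2', '2025-01-11', 200.0), ('c2', '2025-03-02', 50.0);
    """)

    query = """
    WITH monthly AS (                  -- CTE: aggregate to customer/month
        SELECT customer_id,
               substr(paid_on, 1, 7) AS month,
               SUM(amount) AS total
        FROM payments
        GROUP BY customer_id, month
    )
    SELECT customer_id, month, total,
           SUM(total) OVER (           -- window function: running total
               PARTITION BY customer_id ORDER BY month
           ) AS running_total
    FROM monthly
    ORDER BY customer_id, month;
    """
    for row in conn.execute(query):
        print(row)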

 

Skills: Databricks, PySpark & Python, SQL, AWS Services

 

Must-Haves

Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)

Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.

Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).

Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).

Experience with data modeling, schema design, and performance optimization.

Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).


******

Notice period - Immediate to 15 days

Location: Bangalore

Lovoj
Posted by LOVOJ CONTACT
Delhi
3 - 10 yrs
₹8L - ₹14L / yr
Amazon Web Services (AWS)
AWS Lambda
CI/CD
DevOps

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines for backend, frontend, and mobile applications.
  • Manage cloud infrastructure using AWS (EC2, Lambda, S3, VPC, RDS, CloudWatch, ECS/EKS).
  • Configure and maintain Docker containers and/or Kubernetes clusters.
  • Implement and maintain Infrastructure as Code (IaC) using Terraform / CloudFormation (see the sketch after this list).
  • Automate build, deployment, and monitoring processes.
  • Manage code repositories using Git/GitHub/GitLab, enforce branching strategies.
  • Implement monitoring and alerting using tools like Prometheus, Grafana, CloudWatch, ELK, Splunk.
  • Ensure system scalability, reliability, and security.
  • Troubleshoot production issues and perform root-cause analysis.
  • Collaborate with engineering teams to improve deployment and development workflows.
  • Optimize infrastructure costs and improve performance.
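As one concrete IaC option (the listing names Terraform/CloudFormation; the AWS CDK shown here synthesizes to CloudFormation), a minimal Python sketch might look like this; the stack and bucket names are placeholders.

    # Minimal AWS CDK (v2) stack: a versioned, encrypted, non-public S3 bucket.
    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3

    class ArtifactStack(cdk.Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            s3.Bucket(
                self, "ArtifactBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )

    app = cdk.App()
    ArtifactStack(app, "ArtifactStack")
    app.synth()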

Required Skills & Qualifications

  • 3+ years of experience in DevOps, SRE, or Cloud Engineering.
  • Strong hands-on knowledge of AWS cloud services.
  • Experience with Docker, containers, and orchestrators (ECS, EKS, Kubernetes).
  • Strong understanding of CI/CD tools: GitHub Actions, Jenkins, GitLab CI, or AWS CodePipeline.
  • Experience with Linux administration and shell scripting.
  • Strong understanding of Networking, VPC, DNS, Load Balancers, Security Groups.
  • Experience with monitoring/logging tools: CloudWatch, ELK, Prometheus, Grafana.
  • Experience with Terraform or CloudFormation (IaC).
  • Good understanding of Node.js or similar application deployments.
  • Knowledge of NGINX/Apache and load balancing concepts.
  • Strong problem-solving and communication skills.

Preferred/Good to Have

  • Experience with Kubernetes (EKS).
  • Experience with Serverless architectures (Lambda).
  • Experience with Redis, MongoDB, RDS.
  • Certification in AWS Solutions Architect / DevOps Engineer.
  • Experience with security best practices, IAM policies, and DevSecOps.
  • Understanding of cost optimization and cloud cost management.


IndArka Energy Pvt Ltd
Posted by Mita Hemant
Bengaluru (Bangalore)
4 - 5 yrs
₹15L - ₹18L / yr
Microsoft Windows Azure
CI/CD
Scripting language
Docker
Kubernetes

About Us

At Arka Energy, we're redefining how renewable energy is experienced and adopted in homes. Our focus is on developing next-generation residential solar energy solutions through a unique combination of custom product design, intuitive simulation software, and high-impact technology. With engineering teams in Bangalore and the Bay Area, we’re committed to building innovative products that transform rooftops into smart energy ecosystems.

Our flagship product is a 3D simulation platform that models rooftops and commercial sites, allowing users to design solar layouts and generate accurate energy estimates — streamlining the residential solar design process like never before.

 

What We're Looking For

We're seeking a Senior DevOps Engineer who will be responsible for managing and automating cloud infrastructure and services, ensuring seamless integration and deployment of applications, and maintaining high availability and reliability. You will work closely with development and operations teams to streamline processes and enhance productivity.

Key Responsibilities

  • Design and implement CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning and configuration in the Azure cloud environment.
  • Monitor and manage system health, performance, and security.
  • Collaborate with development teams to ensure smooth and secure deployment of applications.
  • Troubleshoot and resolve issues related to deployment and operations.
  • Implement best practices for configuration management and infrastructure as code.
  • Maintain documentation of processes and solutions.

 

Requirements

  • Total relevant experience of 4 to 5 years.
  • Proven experience as a DevOps Engineer, specifically with Azure.
  • Experience with CI/CD tools and practices.
  • Strong understanding of infrastructure as code (IaC) using tools like Terraform or ARM templates.
  • Knowledge of scripting languages such as PowerShell or Python.
  • Familiarity with containerization technologies like Docker and Kubernetes.
  • Good to have: knowledge of AWS, DigitalOcean, GCP
  • Excellent troubleshooting and problem-solving skills
  • High ownership, self-starter attitude, and ability to work independently
  • Strong aptitude and reasoning ability with a growth mindset

 

Nice to Have

  • Experience working in a SaaS or product-driven startup
  • Familiarity with the solar industry (preferred but not required)

Capace Software Private Limited
Bengaluru (Bangalore), Bhopal
5 - 10 yrs
₹4L - ₹10L / yr
Django
CI/CD
Software deployment
RESTful APIs
Flask

Senior Python Django Developer 

Experience: Back-end development: 6 years (Required)


Location:  Bangalore/ Bhopal

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the start-up environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.

Requirements:

  • 6 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy (see the sketch after this list).
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
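A small test-first sketch of the kind of TDD referenced above, using pytest; the validation rule and limits are invented to show the shape, not a real payments spec.

    # pytest TDD sketch for a hypothetical payment amount validator.
    import pytest

    def validate_amount(amount_paise: int, max_paise: int = 10_000_000) -> int:
        """Return the amount if valid; reject non-positive or over-limit values."""
        if amount_paise <= 0:
            raise ValueError("amount must be positive")
        if amount_paise > max_paise:
            raise ValueError("amount exceeds per-transaction limit")
        return amount_paise

    def test_accepts_valid_amount():
        assert validate_amount(50_000) == 50_000

    @pytest.mark.parametrize("bad", [0, -1, 10_000_001])
    def test_rejects_invalid_amounts(bad):
        with pytest.raises(ValueError):
            validate_amount(bad)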

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.

Job Types: Full-time, Permanent


Schedule:

  • Day shift

Supplemental Pay:

  • Performance bonus
  • Yearly bonus

Ability to commute/relocate:

  • JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)


Technology, Information and Internet Company
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a “let’s figure this out” attitude.

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Global digital transformation solutions provider
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for analysts who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in Computer Science / IT, or Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 

******

Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT, or Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST

Phi Commerce
Posted by Ariba Khan
Pune
10 - 14 yrs
Upto ₹25L / yr (Varies)
Automation
Manual testing
Selenium
Java
Playwright

About Phi Commerce

Founded in 2015, Phi Commerce has created PayPhi, a ground-breaking omni-channel payment processing platform which processes digital payments at doorstep, online & in-store across a variety of form factors such as cards, net-banking, UPI, Aadhaar, BharatQR, wallets, NEFT, RTGS, and NACH. The company was established with the objective of digitizing white spaces in payments & going beyond routine payment processing.


Phi Commerce's PayPhi Digital Enablement suite has been developed with the mission of empowering very large untapped blue-ocean sectors dominated by offline payment modes such as cash & cheque to accept digital payments.


The core team comprises industry veterans with complementary skill sets and nearly 100 years of global experience with noteworthy players such as Mastercard, Euronet, ICICI Bank, Opus Software and Electra Card Services.


Awards & Recognitions:

The company's innovative work has been recognized at prestigious forums in the short span of its existence:


  • Certification of Recognition as StartUp by Department of Industrial Policy and Promotion.
  • Winner of the "Best Payment Gateway" of the year award at Payments & Cards Awards 2018
  • Winner at Payments & Cards Awards 2017 in 3 categories: Best Startup of the Year, Best Online Payment Solution of the Year (Consumer), and Best Online Payment Solution of the Year (Merchant)
  • Winner of NPCI IDEATHON on Blockchain in Payments
  • Shortlisted by Govt. of Maharashtra as top 100 start-ups pan-India across 8 sectors


About the role:

We are seeking an experienced and dynamic QA Manager to lead our quality assurance team in delivering high-quality software products for our organization. The ideal candidate will have a strong background in manual and automation testing, with hands-on experience in SQL, UNIX commands, STLC/SDLC, and managing QA for critical financial systems. You will be responsible for test strategy creation, resource planning, stakeholder communication, and ensuring process adherence to deliver robust and secure systems.


Key Responsibilities:

Team & Test Management

  • Lead and manage a team of manual and automation testers, providing guidance, mentorship, and performance feedback.
  • Define and execute test strategies and plans for each product release in alignment with business goals and timelines.
  • Oversee test case design, execution, and test data management to ensure full coverage across all functionalities.
  • Plan and manage QA deliverables in coordination with release and sprint planning.

Process & Quality Oversight

  • Ensure compliance with STLC, SDLC, and Defect Management processes.
  • Maintain and manage QA environments, ensuring they are up-to-date and aligned with production-like conditions.
  • Implement best practices and continuously improve QA processes for efficiency and quality.

Stakeholder & Communication Management

  • Serve as a primary point of contact for all QA-related updates across internal teams and external partners.
  • Provide regular DSR (Daily Status Reports) and WSR (Weekly Status Reports) to stakeholders.
  • Communicate effectively with both technical and non-technical stakeholders regarding quality issues, risks, and expectations.

Technical Responsibilities

  • Work with SQL for data validation and backend testing.
  • Use UNIX commands for system checks, log analysis, and troubleshooting.
  • Collaborate closely with developers, product managers, and release engineers to ensure high-quality deliverables.


Required Skills & Experience:


Technical Skills:

  • Strong hands-on experience with SQL and UNIX/Linux commands.
  • Proficient in manual test case creation and automation testing processes.
  • Good understanding of QA tools like JIRA, TestRail, Confluence, and defect tracking systems.
  • Knowledge of test automation frameworks and scripting languages (optional but a plus).

Domain Expertise:

  • Solid understanding of payment systems, including ATM, E-commerce transactions, settlement, and reconciliation workflows.
  • Experience in testing APIs, transaction flows, chargebacks, refunds, and financial reporting systems.

Leadership & Soft Skills:

  • Proven experience in leading QA teams and managing test resources effectively.
  • Strong analytical and problem-solving skills to identify root causes of defects and quality issues.
  • Excellent communication and interpersonal skills for effective collaboration across teams and stakeholders.


Qualifications:

  • 10+ years of total QA experience with at least 2 years in a QA leadership/managerial role.
  • Experience in fintech, banking, or payment processing environments is strongly preferred.
Phi Commerce
Posted by Nikita Sinha
Pune
11 - 15 yrs
Upto ₹32L / yr (Varies)
Linux/Unix
SQL
Shell Scripting
Amazon Web Services (AWS)
CI/CD

The Production Infrastructure Manager is responsible for overseeing and maintaining the infrastructure that powers our payment gateway systems in a high-availability production environment. This role requires deep technical expertise in cloud platforms, networking, and security, along with strong leadership capability to guide a team of infrastructure engineers. You will ensure the system’s reliability, performance, and compliance with regulatory standards while driving continuous improvement.


Key Responsibilities:

Infrastructure Management

  • Manage and optimize infrastructure for payment gateway systems to ensure high availability, reliability, and scalability.
  • Oversee daily operations of production environments, including AWS cloud services, load balancers, databases, and monitoring systems.
  • Implement and maintain infrastructure automation, provisioning, configuration management, and disaster recovery strategies.
  • Develop and maintain capacity planning, monitoring, and backup mechanisms to support peak transaction periods.
  • Oversee regular patching, updates, and version control to minimize vulnerabilities.

Team Leadership

  • Lead and mentor a team of infrastructure engineers and administrators.
  • Provide technical direction to ensure efficient and effective implementation of infrastructure solutions.

Cross-Functional Collaboration

  • Work closely with development, security, and product teams to ensure infrastructure aligns with business needs and regulatory requirements (PCI-DSS, GDPR).
  • Ensure infrastructure practices meet industry standards and security requirements (PCI-DSS, ISO 27001).

Monitoring & Incident Management

  • Monitor infrastructure performance using tools like Prometheus, Grafana, Datadog, etc.
  • Conduct incident response, root cause analysis, and post-mortems to prevent recurring issues.
  • Manage and execute on-call duties, ensuring timely resolution of infrastructure-related issues.

Documentation

  • Maintain comprehensive documentation, including architecture diagrams, processes, and disaster recovery plans.

Skills and Qualifications

Required

  • Bachelor’s degree in Computer Science, IT, or equivalent experience.
  • 8+ years of experience managing production infrastructure in high-availability, mission-critical environments (fintech or payment gateways preferred).
  • Expertise in AWS cloud environments.
  • Strong experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
  • Deep understanding of:
  • Networking (load balancers, firewalls, VPNs, distributed systems)
  • Database systems (SQL/NoSQL), HA & DR strategies
  • Automation tools (Ansible, Chef, Puppet) and containerization/orchestration (Docker, Kubernetes)
  • Security best practices, encryption, vulnerability management, PCI-DSS compliance
  • Experience with monitoring tools (Prometheus, Grafana, Datadog).
  • Strong analytical and problem-solving skills.
  • Excellent communication and leadership capabilities.

Preferred

  • Experience in fintech/payment industry with regulatory exposure.
  • Ability to operate effectively under pressure and ensure service continuity.


CFRA
Posted by Bisman Gill
Remote only
3yrs+
Upto ₹15L / yr (Varies)
Amazon Web Services (AWS)
SQL
Selenium
Appium
Cypress

The Quality Engineer is responsible for planning, developing, and executing tests for CFRA’s financial software. The responsibilities include designing and implementing tests, debugging, and defining corrective actions. The role plays an important part in our company’s product development process. Our ideal candidate will be responsible for conducting tests to ensure software runs efficiently and meets client needs, while at the same time being cost-effective.

You will be part of the CFRA Data Collection Team, responsible for collecting, processing and publishing financial market data for internal and external stakeholders. The team uses a contemporary stack in the AWS Cloud to design, build and maintain a robust data architecture, data engineering pipelines, and large-scale data systems. You will be responsible for verifying and validating all data quality and completeness parameters for the automated (ETL) pipeline processes (new and existing).

Key Responsibilities

  • Review requirements, specifications and technical design documents to provide timely and meaningful feedback
  • Create detailed, comprehensive and well-structured test plans and test cases
  • Estimate, prioritize, plan and coordinate testing activities
  • Identify, record, document thoroughly and track bugs
  • Develop and apply testing processes for new and existing products to meet client needs
  • Liaise with internal teams to identify system requirements and develop testing plans
  • Investigate the causes of non-conforming software and train users to implement solutions
  • Stay up-to-date with new testing tools and test strategies

Desired Skills

  • Proven work experience in software development and quality assurance
  • Strong knowledge of software QA methodologies, tools and processes
  • Experience in writing clear, concise and comprehensive test plans and test cases
  • Hands-on experience with automated testing tools
  • Acute attention to detail
  • Experience working in an Agile/Scrum development process
  • Excellent collaboration skills

Technical Skills

  • Proficient with SQL, and capable of developing queries for testing (see the sketch after this list)
  • Familiarity with Python, especially for scripting tests
  • Familiarity with Cloud Technology and working with remote servers
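An illustrative data-quality test of the kind described above: pytest running SQL assertions against a stand-in SQLite database. In practice the connection would target the real pipeline output; the table and rows are invented.

    # Data-quality checks as pytest tests against an in-memory stand-in table.
    import sqlite3
    import pytest

    @pytest.fixture
    def conn():
        c = sqlite3.connect(":memory:")
        c.executescript("""
            CREATE TABLE prices (symbol TEXT, trade_date TEXT, close REAL);
            INSERT INTO prices VALUES ('ABC', '2025-01-02', 10.5),
                                      ('XYZ', '2025-01-02', 99.0);
        """)
        yield c
        c.close()

    def test_no_null_closing_prices(conn):
        nulls = conn.execute(
            "SELECT COUNT(*) FROM prices WHERE close IS NULL").fetchone()[0]
        assert nulls == 0

    def test_expected_row_count(conn):
        rows = conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0]
        assert rows == 2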


CFRA
Posted by Bisman Gill
Remote only
4yrs+
Upto ₹23L / yr (Varies)
Amazon Web Services (AWS)
SQL
Python
NodeJS (Node.js)
Java

The Senior Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.

The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.


Key Responsibilities

  • Analyst Workflows: Design and development of CFRA’s integrated content publishing platform using a proprietary 3rd party editorial and publishing platform for integrated digital publishing.
  • Designing and Developing APIs: Design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency (a minimal handler sketch follows this list).
  • AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
  • Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
  • Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
  • Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
  • Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
  • Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
  • Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
  • Problem Solving: Participate in troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
  • Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
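A minimal handler sketch for the API Gateway + Lambda + CloudWatch combination named above; the route, payload shape, and field names are assumptions for illustration, not CFRA's actual API.

    # Lambda proxy-integration handler with structured logging to CloudWatch.
    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def handler(event, context):
        try:
            params = event.get("queryStringParameters") or {}
            report_id = params.get("report_id")
            if not report_id:
                return {"statusCode": 400,
                        "body": json.dumps({"error": "report_id is required"})}
            logger.info(json.dumps({"action": "fetch_report",
                                    "report_id": report_id}))
            # ... fetch/render the report here ...
            return {"statusCode": 200,
                    "body": json.dumps({"report_id": report_id,
                                        "status": "ready"})}
        except Exception:
            logger.exception("unhandled error")
            return {"statusCode": 500,
                    "body": json.dumps({"error": "internal error"})}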

Desired Skills and Experience

  • Development: 5+ years of extensive experience in designing, developing, and deploying applications using modern technologies, with a focus on scalability, performance, and security.
  • AWS Services: Proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
  • Programming Languages: Proficiency in programming languages commonly used for development, such as Python, Node.js, or others, as well as experience with serverless frameworks on AWS.
  • Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
  • Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
  • DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
  • Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure stability and performance.
  • Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
  • Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
  • Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
  • Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.




CFRA
Posted by Bisman Gill
Remote only
7yrs+
Upto ₹36L / yr (Varies)
Amazon Web Services (AWS)
SQL
Python
NodeJS (Node.js)
Java

The Lead Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.

The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.


Key Responsibilities

  • Analyst Workflows: Lead the design and development of CFRA’s integrated content publishing platform using a proprietary 3rd party editorial and publishing platform for integrated digital publishing.
  • Designing and Developing APIs: Lead the design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
  • Architecture Planning: Collaborate with architects and stakeholders to define architecture, including API gateway, microservices, and serverless components, ensuring alignment with business goals and AWS best practices.
  • Technical Leadership: Provide technical guidance and leadership to the development team, ensuring adherence to coding standards, best practices, and AWS guidelines.
  • AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
  • Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
  • Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
  • Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
  • Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
  • Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
  • Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
  • Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
  • Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.


Desired Skills and Experience

  • Development: 10+ years of extensive experience in designing, developing, and deploying applications using modern technologies, with a focus on scalability, performance, and security.
  • AWS Services: Strong proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
  • Programming Languages: Proficiency in programming languages commonly used for development, such as Python, Node.js, or others, as well as experience with serverless frameworks on AWS.
  • Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
  • Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
  • DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
  • Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure stability and performance.
  • Team Leadership: Experience leading and mentoring a team of developers, providing technical guidance, code reviews, and fostering a collaborative and innovative environment.
  • Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
  • Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
  • Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
  • Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.


Technoidentity
Posted by Human Resources
Remote only
5 - 12 yrs
Best in industry
Tableau
Administrative support
SQL server
Amazon Web Services (AWS)
Linux administration

Supercharge Your Career as a Tableau Systems Engineer at Technoidentity!

Are you ready to solve people challenges that fuel business growth? At Technoidentity, we’re a Data+AI product engineering company building cutting-edge solutions in the FinTech domain for over 13 years—and we’re expanding globally. It’s the perfect time to join our team of tech innovators and leave your mark!


What’s in it for You?

Our Self Serve Business Intelligence team is focused on enabling strong data exploration and analytics across the organisation. The team manages Tableau dashboards, provides foundational Tableau systems administration, and resolves connectivity issues across platforms. We also oversee Tableau Server and Tableau Cloud, including license utilisation and allocation. As the organisation continues to scale its analytical maturity, this team plays a key role in expanding Tableau adoption across business functions and setting standards for canonical dashboards and curated data sources. The team also contributes to company-wide analytics education.

As a Tableau and Analytical Systems Engineer, you will strengthen our data visualisation ecosystem and ensure seamless data exploration for stakeholders. Your work will span installation, configuration, maintenance, and support of Tableau Server environments. You will manage user accounts, permissions, and security settings across analytics systems, monitor performance, and troubleshoot issues related to Tableau and its integrations. Your contribution will support our goal of building a data-literate culture and ensuring reliable analytical infrastructure.


What Will You Be Doing?

• Install, configure, and maintain multi-node, Linux-based Tableau Server environments and Tableau Cloud, including upgrades and patches

• Manage user accounts, permissions, and security configurations across analytical systems

• Monitor system performance and troubleshoot issues related to Tableau, integrations, and upstream data systems

• Automate tasks such as deployment of data sources or workbooks, alerting, and integrations with Airflow using scripting languages (see the sketch after this list)

• Collaborate with the Tableau vendor on support cases, enhancements, and product fixes

• Partner with analysts and data teams to understand visualisation needs and deliver efficient Tableau solutions

• Track server activity and usage insights to identify performance improvements

• Create and maintain data extracts, refresh schedules, and quality checks

• Manage migrations of dashboards and workbooks across development and production environments

• Document procedures and administration practices for all analytical systems

• Stay updated on latest Tableau features, product updates, and industry best practices

• Maintain server logs, backups, and metadata exports

• Provide training, onboarding support, and guidance to users on Tableau Desktop and Server
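As a sketch of the scripted automation mentioned above, the tableauserverclient library (Tableau's REST API client for Python) can publish a workbook from a pipeline; the server URL, token, site, project id, and file path below are placeholders.

    # Publish a workbook via the Tableau REST API (tableauserverclient).
    import tableauserverclient as TSC

    auth = TSC.PersonalAccessTokenAuth("ci-token", "TOKEN_VALUE",
                                       site_id="analytics")
    server = TSC.Server("https://tableau.example.com", use_server_version=True)

    with server.auth.sign_in(auth):
        item = TSC.WorkbookItem(project_id="PROJECT-ID")
        published = server.workbooks.publish(
            item, "dashboards/sales.twbx",
            mode=TSC.Server.PublishMode.Overwrite,
        )
        print(f"published workbook id: {published.id}")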


What Makes You the Perfect Fit?

• At least 3 years of experience in Tableau Server administration

• Strong technical experience with AWS and familiarity with Linux

• Two or more years of coding experience in Python or JavaScript, with additional scripting knowledge in PowerShell or Bash

• Experience in designing cloud architectures for compute and storage

• Strong understanding of relational databases such as PostgreSQL, and of query engines such as Trino or Hive

• Experience with Tableau APIs and Airflow is preferred

• Strong analytical and problem-solving skills

• Ability to communicate clearly, collaborate effectively, and work both independently and in a team environment

Hiret Consulting
Posted by Sanikha M
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹35L / yr
Spring Boot
NextJs (Next.js)
NestJS
Native android
PostgreSQL

Required Qualifications

● Experience: 5-8 years of professional experience in software engineering, with a strong background in developing and deploying scalable applications.

● Technical Skills:

○ Architecture: Demonstrated experience in architecture/system design for scale, preferably as a digital public good.

○ Full Stack: Extensive experience with full-stack development, including mobile app development and backend technologies.

○ App Development: Hands-on experience building and launching mobile applications, preferably for Android.

○ Cloud Infrastructure: Familiarity with cloud platforms and containerization technologies (Docker, Kubernetes).

○ (Bonus) ML Ops: Proven experience with ML Ops practices and tools.

● Soft Skills:

○ Experience in hiring team members.

○ A proactive and independent problem-solver, comfortable working in a fast-paced environment.

○ Excellent communication and leadership skills, with the ability to mentor junior engineers.

○ A strong desire to use technology for social good.


Preferred Qualifications

● Experience working in a startup or smaller team environment.

● Familiarity with the healthcare or public health sector.

● Experience in developing applications for low-resource environments.

● Experience with data management in privacy and security-sensitive applications.

Forbes Advisor
Posted by Nikita Sinha
Chennai
11 - 16 yrs
Upto ₹50L / yr (Varies)
Google Webmaster Tools
CI/CD
Cloud Computing
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

A DevSecOps Staff Engineer integrates security into DevOps practices, designing secure CI/CD pipelines, building and automating secure cloud infrastructure and ensuring compliance across development, operations, and security teams.


Responsibilities

• Design, build and maintain secure CI/CD pipelines utilizing DevSecOps principles and practices to increase automation and reduce human involvement in the process.

• Integrate tools for SAST, DAST, SCA, etc. within pipelines to enable automated application building, testing, securing and deployment.

• Implement security controls for cloud platforms (AWS, GCP), including IAM, container security (EKS/ECS), and data encryption for services like S3 or BigQuery.

• Automate vulnerability scanning, monitoring, and compliance processes by collaborating with DevOps and Development teams to minimize risks in deployment pipelines (see the sketch after this list).

• Suggest architecture improvements and recommend process improvements.

• Review cloud deployment architectures and implement required security controls.

• Mentor other engineers on security practices and processes.
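One small example of such compliance automation, sketched with boto3: flag S3 buckets that have no default-encryption configuration. This illustrates the pattern only, not a full scanning pipeline.

    # Audit every S3 bucket for a default-encryption configuration.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def unencrypted_buckets():
        flagged = []
        for bucket in s3.list_buckets()["Buckets"]:
            name = bucket["Name"]
            try:
                s3.get_bucket_encryption(Bucket=name)
            except ClientError as err:
                code = err.response["Error"]["Code"]
                if code == "ServerSideEncryptionConfigurationNotFoundError":
                    flagged.append(name)  # no default encryption configured
                else:
                    raise
        return flagged

    if __name__ == "__main__":
        for name in unencrypted_buckets():
            print(f"bucket without default encryption: {name}")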


Requirements

• Bachelor's degree, preferably in CS or a related field, or equivalent experience.

• 10+ years of overall industry experience, with the AWS Certified - Security Specialist certification.

• Must have implementation experience using security tools and processes related to SAST, DAST and Pen Testing.

• AWS-specific: 5+ years’ experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, CloudWatch) to develop and maintain an Amazon AWS based cloud solution, with an emphasis on best-practice cloud security.

• Experienced with the CI/CD tool chain (GitHub Actions, Packages, Jenkins, etc.).

• Passionate about solving security challenges and staying informed of available and emerging security threats and various security technologies.

• Must be familiar with the OWASP Top 10 Security Risks and Controls.

• Good skills in at least one or more scripting languages: Python, Bash.

• Good knowledge of Kubernetes, Docker Swarm or other cluster management software.

• Willing to work in shifts as required.


Good to Have

• AWS Certified DevOps Engineer.

• Observability: Experience with system monitoring tools (e.g. CloudWatch, New Relic, etc.).

• Experience with Terraform/Ansible/Chef/Puppet.

• Operating Systems: Windows and Linux system administration.


Perks:

● Day off on the 3rd Friday of every month (one long weekend each month)

● Monthly Wellness Reimbursement Program to promote health and well-being

● Monthly Office Commutation Reimbursement Program

● Paid paternity and maternity leaves

Media and entertainment
Agency job via Jobdost by Saida Pathan
Noida
4 - 6 yrs
₹30L - ₹35L / yr
TypeScript
Express
NextJs (Next.js)
MVC Framework
Microsoft Windows Azure

What you will be doing

  • Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
  • Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At STAGE, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
  • Design scalable platforms that empower our product and marketing teams to rapidly experiment.
  • Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
  • Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
  • Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
  • Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.

 

The role could be ideal for you if you

  • Experience of 4-6 years of working in backend engineering with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
  • Well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
  • Experienced in writing automated tests (especially integration tests) and Continuous Integration. At STAGE, engineers own quality and hence, writing automated tests is crucial to the role.
  • Experience with managing production infrastructure using technologies like public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience in using Kubernetes.
  • Experience in observability techniques like code instrumentation for metrics, tracing and logging.
  • Care deeply about code quality, code reviews, software architecture (think about Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
  • Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
  • Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner than later.
  • Can take ownership of goals and deliver them with high accountability.


Client
Posted by Priyanka Malik
Vasant Vihar
5 - 10 yrs
₹12L - ₹20L / yr
Amazon Web Services (AWS)

Job Title: Senior Java Developer (AWS Mandatory)

Location: Onsite

Experience: 5–10 Years

Employment Type: Full-Time

Job Description:

We are seeking an experienced Senior Java Developer with strong expertise in AWS, Spring Boot, and Microservices. The candidate will be responsible for designing, developing, and scaling complex enterprise applications while providing technical leadership within the team.

Key Responsibilities:

  • Architect, design, and develop robust Java-based applications.
  • Lead the development of microservices using Spring Boot.
  • Utilize AWS cloud services for application deployment, scaling, and automation.
  • Participate in code reviews, provide technical guidance, and mentor junior developers.
  • Collaborate with product, QA, and DevOps teams to deliver high-quality solutions.
  • Troubleshoot, optimize, and improve application performance.

Required Skills:

  • Java (Advanced) – Strong hands-on coding and design skills.
  • AWS (Mandatory) – Experience with EC2, S3, Lambda, RDS, ECS/EKS, CloudWatch, API Gateway.
  • Spring Boot & Microservices – Deep understanding and real project experience.
  • Strong understanding of distributed systems and design patterns.
  • Experience with CI/CD tools (Jenkins, GitLab, etc.).
  • Familiarity with Git, Maven/Gradle, and modern development practices.

Good to Have:

  • Experience with Docker and Kubernetes.
  • Exposure to NoSQL databases (MongoDB, DynamoDB).
  • Understanding of event-driven architecture (Kafka, RabbitMQ).


Kuku FM
Bengaluru (Bangalore)
5 - 12 yrs
₹30L - ₹60L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

We're seeking an experienced Engineer to join our engineering team, handling massive-scale data processing and analytics infrastructure that supports over 1B daily events, 3M+ DAU, and 50k+ hours of content. The ideal candidate will bridge the gap between raw data collection and actionable insights, while supporting our ML initiatives.

Key Responsibilities

  • Lead and scale the Infrastructure Pod, setting technical direction for data, platform, and DevOps initiatives.
  • Architect and evolve our cloud infrastructure to support 1B+ daily events — ensuring reliability, scalability, and cost efficiency.
  • Collaborate with Data Engineering and ML pods to build high-performance pipelines and real-time analytics systems.
  • Define and implement SLOs, observability standards, and best practices for uptime, latency, and data reliability.
  • Mentor and grow engineers, fostering a culture of technical excellence, ownership, and continuous learning.
  • Partner with leadership on long-term architecture and scaling strategy — from infrastructure cost optimization to multi-region availability.
  • Lead initiatives on infrastructure automation, deployment pipelines, and platform abstractions to improve developer velocity.
  • Own security, compliance, and governance across infrastructure and data systems.

 

Who You Are

  • Previously a Tech Co-founder / Founding Engineer / First Infra Hire who scaled a product from early MVP to significant user or data scale.
  • 5–12 years of total experience, with at least 2+ years in leadership or team-building roles.
  • Deep experience with cloud infrastructure (AWS/GCP).
  • Experience with containers (Docker, Kubernetes) and IaC tools (Terraform, Pulumi, or CDK).
  • Hands-on expertise in data-intensive systems, streaming (Kafka, RabbitMQ, Spark Streaming), and distributed architecture design (see the sketch after this list).
  • Proven experience building scalable CI/CD pipelines, observability stacks (Prometheus, Grafana, ELK), and infrastructure for data and ML workloads.
  • Comfortable being hands-on when needed — reviewing design docs, debugging issues, or optimizing infrastructure.
  • Strong system design and problem-solving skills; understands trade-offs between speed, cost, and scalability.
  • Passionate about building teams, not just systems — can recruit, mentor, and inspire engineers.
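For flavour, a tiny consumer sketch using the kafka-python client, counting events per type on a high-volume topic; the topic name, brokers, and payload shape are assumptions, not this platform's actual event schema.

    # Consume an event topic and keep per-type counts (illustrative only).
    import json
    from collections import Counter
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "playback-events",
        bootstrap_servers=["localhost:9092"],
        group_id="infra-sketch",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    counts = Counter()
    for message in consumer:
        counts[message.value.get("event_type", "unknown")] += 1
        if sum(counts.values()) % 10_000 == 0:
            print(dict(counts))  # periodic snapshot, not per-event logging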

 

Preferred Skills

  • Experience managing infra-heavy or data-focused teams.
  • Familiarity with real-time streaming architectures.
  • Exposure to ML infrastructure, data governance, or feature stores.
  • Prior experience in the OTT / streaming / consumer platform domain is a plus.
  • Contributions to open-source infra/data tools or strong engineering community presence.

 

What We Offer

  • Opportunity to build and scale infrastructure from the ground up, with full ownership and autonomy.
  • High-impact leadership role shaping our data and platform backbone.
  • Competitive compensation + ESOPs.
  • Continuous learning budget and certification support.
  • A team that values velocity, clarity, and craftsmanship.

 

Success Metrics

  • Reduction in infra cost per active user and event processed.
  • Increase in developer velocity (faster pipeline deployments, reduced MTTR).
  • High system availability and data reliability SLAs met.
  • Successful rollout of infra automation and observability frameworks.
  • Team growth, retention, and technical quality.


Sagri
Bengaluru (Bangalore)
5 - 8 yrs
₹14L - ₹15L / yr
React.js
Python
NextJs (Next.js)
Amazon Web Services (AWS)
TypeScript
  • 5+ years full-stack development
  • Proficiency in AWS cloud-native development
  • Experience with microservices & async architectures
  • Strong TypeScript proficiency
  • Strong Python proficiency
  • React.js expertise
  • Next.js expertise
  • PostgreSQL + PostGIS experience
  • GraphQL development experience
  • Prisma ORM experience
  • Experience in B2C product development (Retail/Ecommerce)
  • Looking for candidates based out of Bangalore only


CTC: up to 40 LPA


If interested, kindly share your updated resume at 82008 31681.


IT Industry
Agency job via Truetech by Meimozhi balu
Remote only
4 - 8 yrs
₹20L - ₹30L / yr
Python
Go Programming (Golang)
Docker
Kubernetes
Amazon Web Services (AWS)

Key Responsibilities

●     Design and maintain high-performance backend applications and microservices

●     Architect scalable, cloud-native systems and collaborate across engineering teams

●     Write high-quality, performant code and conduct thorough code reviews

●     Build and operate CI/CD pipelines and production systems

●     Work with databases, containerization (Docker/Kubernetes), and cloud platforms

●     Lead agile practices and continuously improve service reliability

Required Qualifications

●     4+ years of professional software development experience

●     2+ years contributing to service design and architecture

●     Strong expertise in modern languages like Golang, Python

●     Deep understanding of scalable, cloud-native architectures and microservices

●     Production experience with distributed systems and database technologies

●     Experience with Docker, software engineering best practices

●     Bachelor's Degree in Computer Science or related technical field

Preferred Qualifications

●     Experience with Golang, AWS, and Kubernetes

●     CI/CD pipeline experience with GitHub Actions

●     Start-up environment experience

BlogVault
Posted by Nikita Sinha
Bengaluru (Bangalore)
3 - 6 yrs
Up to ₹30L / yr (varies)
Skills: Ruby, Node.js, Go (Golang), React.js, Angular (2+) (+3 more)

We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.


What You’ll Do

  • Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
  • Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
  • Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
  • Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
  • Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
  • Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.


What Makes You a Strong Fit

  • You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
  • You think in systems and second-order effects—not just in ticket-by-ticket outputs.
  • You prefer well-reasoned defaults over overengineering.
  • You take ownership—not just of code, but of the outcomes it enables.
  • You work cleanly, write clear code, and make life easier for those who come after you.
  • You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.


Bonus if You Have Experience With

  • Building tools or workflows that accelerate other developers.
  • Working with AI coding tools and integrating them meaningfully into your workflow.
  • Building for SaaS products, especially those with large user bases or self-serve motions.
  • Working in small, fast-moving product teams with a high bar for ownership.


Why Join Us

  • A small team that values craftsmanship, curiosity, and momentum.
  • A product-driven culture where engineering decisions are informed by customer outcomes.
  • A chance to work on multiple zero-to-one opportunities with strong PMF.
  • No vanity perks—just meaningful work with people who care.
Virtana
Posted by Krutika Devadiga
Pune
3 - 6 yrs
Best in industry
Skills: Kubernetes, Java, Docker, Test Automation (QA), Manual Testing (+8 more)

Position Overview:

As a Senior QA Engineer, you will play a critical role in driving quality across our product offerings. You will work closely with developers and product/support teams to ensure that our storage and networking monitoring solutions are thoroughly tested and meet enterprise-level reliability. A strong background in automation testing using Python and scripting is essential, along with proven debugging experience in enterprise products utilizing AWS, Cloud, and Kubernetes technologies. You will act as a key advocate for quality across the organization, interacting with diverse teams and stakeholders to push the boundaries of product excellence.


Work Location: Pune


Job Type: Hybrid


Key Responsibilities:

  • QA and Automation Testing: Create exhaustive test plans and automated test cases using Python and scripting languages to validate end-to-end, real-world scenarios (see the pytest sketch after this list).
  • Enterprise Product Testing: Test enterprise-grade products deployed in AWS, Cloud, and Kubernetes environments, ensuring that they perform optimally in large-scale, real-world scenarios.
  • Debugging and Issue Resolution: Work closely with development teams to identify, debug, and resolve issues in enterprise-level products, ensuring high-quality and reliable product releases.
  • Test Automation Frameworks: Develop and maintain test automation frameworks to streamline testing processes, reduce manual testing efforts, and increase test coverage.
  • Customer Interaction: Be open to interacting with cross-geo customers to understand their quality requirements, test against real-world use cases, and ensure their satisfaction with product performance.
  • Voice of Quality: Act as an advocate for quality within the organization, pushing for excellence in product development and championing improvements in testing practices and processes.
  • Documentation: Create and maintain detailed documentation of testing processes, test cases, and issue resolutions, enabling knowledge sharing and consistent quality assurance practices.
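As an illustration of the automation expected here, a minimal pytest-style check against a monitoring API might look like the sketch below. The endpoint, response fields, and SLO threshold are hypothetical; it assumes the requests and pytest packages.

```python
# Minimal pytest-style automation sketch for a monitoring API.
# The base URL, payload shape, and thresholds are hypothetical.
import requests

BASE_URL = "https://monitoring.example.com/api/v1"  # hypothetical endpoint

def get_metric(host: str, metric: str) -> dict:
    """Fetch one metric series for one host."""
    resp = requests.get(
        f"{BASE_URL}/metrics",
        params={"host": host, "metric": metric},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def test_latency_metric_within_slo():
    data = get_metric("storage-node-1", "read_latency_ms")
    assert data["status"] == "ok"
    # Example SLO check: p99 read latency should stay under 20 ms.
    assert data["p99"] < 20.0
```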


Qualifications:

  • Bachelor’s or master’s degree in Computer Science, Software Engineering, or a related field.
  • 3+ years of hands-on experience in QA and automation testing, with a strong focus on Python and scripting.
  • Proven experience in testing and debugging enterprise products deployed in AWS, Cloud, and Kubernetes environments.
  • Solid understanding of storage and networking domains, with practical exposure to monitoring use-cases.
  • Strong experience with automation testing frameworks, including the development and execution of automated test cases.
  • Excellent debugging, problem-solving, and analytical skills.
  • Strong communication skills, with the ability to collaborate with diverse teams across geographies and time zones.
  • Experience in working in agile development environments, with a focus on continuous integration and delivery.
  • Passion for quality and a relentless drive to push the boundaries of what can be achieved in product excellence.


Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud.


As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Senior QA Engineer with strong Automation focus to contribute to the futuristic development of our Platform.


Why Join Us

  • Opportunity to play a pivotal role in driving quality for a leading performance monitoring company with a focus on storage and networking monitoring.
  • Collaborative and innovative work environment with a global team.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Exposure to cutting-edge technologies and enterprise-level challenges.


If you are a passionate QA Engineer with a strong background in automation, testing, and debugging in AWS, Cloud, and Kubernetes environments, and if you are eager to be the voice of quality in a rapidly growing company, we invite you to apply and help us raise the bar on product excellence.

AdTech Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹50L - ₹75L / yr
Skills: Ansible, Terraform, Amazon Web Services (AWS), Platform as a Service (PaaS), CI/CD (+30 more)

ROLE & RESPONSIBILITIES:

We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.


KEY RESPONSIBILITIES:

1.     Cloud Security (AWS)-

  • Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
  • Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance (see the sketch after this list).
  • Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
  • Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
  • Ensure encryption of data at rest/in transit across all cloud services.
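For illustration, the Secrets Manager item above often reduces to replacing hard-coded credentials with a lookup like this boto3 sketch; the secret name and region are hypothetical, and the calling role is assumed to be scoped to that one secret (least privilege).

```python
# Sketch: fetching a database credential from AWS Secrets Manager
# instead of hard-coding it. Secret name and region are hypothetical.
import json
import boto3

def get_db_credentials(secret_name: str = "prod/ml-platform/db") -> dict:
    client = boto3.client("secretsmanager", region_name="ap-south-1")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials()
    print(f"Loaded credentials for user: {creds.get('username', '<unset>')}")
```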

 

2.     DevOps Security (IaC, CI/CD, Kubernetes, Linux)-

Infrastructure as Code & Automation Security:

  • Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
  • Enforce misconfiguration scanning and automated remediation.

CI/CD Security:

  • Secure Jenkins, GitHub, GitLab pipelines with SAST, DAST, SCA, secrets scanning, image scanning.
  • Implement secure build, artifact signing, and deployment workflows.

Containers & Kubernetes:

  • Harden Docker images, private registries, runtime policies.
  • Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring.
  • Apply CIS Benchmarks for Kubernetes and Linux.

Monitoring & Reliability:

  • Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
  • Ensure audit logging across cloud/platform layers.


3.     MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)-

Pipeline & Workflow Security:

  • Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
  • Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.

ML Platform Security:

  • Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
  • Control model access, artifact protection, model registry security, and ML metadata integrity.

Data Security:

  • Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
  • Enforce data versioning security, lineage tracking, PII protection, and access governance.

ML Observability:

  • Implement drift detection (data drift/model drift), feature monitoring, and audit logging (see the PSI sketch below).
  • Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
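One common way to implement the data-drift detection described above is a population stability index (PSI) check between a baseline sample and live data. The sketch below is illustrative; the bucket count and threshold are the usual rules of thumb, not fixed standards.

```python
# Sketch of a population-stability-index (PSI) data-drift check.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """Compare two samples of one feature; higher PSI = more drift."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)         # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Rule of thumb often used in practice: PSI > 0.2 suggests significant drift.
baseline = np.random.normal(0, 1, 10_000)   # stand-in for training data
today = np.random.normal(0.3, 1.1, 10_000)  # stand-in for live data
print(f"PSI = {psi(baseline, today):.3f}")
```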


4.     Network & Endpoint Security-

  • Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
  • Conduct vulnerability assessments, penetration test coordination, and network segmentation.
  • Secure remote workforce connectivity and internal office networks.


5.     Threat Detection, Incident Response & Compliance-

  • Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
  • Build security alerts, automated threat detection, and incident workflows.
  • Lead incident containment, forensics, RCA, and remediation.
  • Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
  • Maintain security policies, procedures, RRPs (Runbooks), and audits.


IDEAL CANDIDATE:

  • 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
  • Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
  • Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
  • Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
  • Strong Linux security (CIS hardening, auditing, intrusion detection).
  • Proficiency in Python, Bash, and automation/scripting.
  • Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
  • Understanding of microservices, API security, serverless security.
  • Strong understanding of vulnerability management, penetration testing practices, and remediation plans.


EDUCATION:

  • Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
  • Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.


PERKS, BENEFITS AND WORK CULTURE:

  • Competitive Salary Package
  • Generous Leave Policy
  • Flexible Working Hours
  • Performance-Based Bonuses
  • Health Care Benefits
NeoGenCode Technologies Pvt Ltd
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹30L / yr
Skills: Python, React.js, PostgreSQL, TypeScript, Next.js (+11 more)


Job Summary

We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).

This role is crucial in shaping product experiences and driving innovation at scale.


Mandatory Candidate Background

  • Experience working in product-based companies only
  • Strong academic background
  • Stable work history
  • Excellent coding skills and hands-on development experience
  • Strong foundation in Data Structures & Algorithms (DSA)
  • Strong problem-solving mindset
  • Understanding of clean architecture and code quality best practices


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications
  • Build responsive, performant, user-friendly UIs using TypeScript & Next.js
  • Develop APIs and backend services using Python (FastAPI/Django) — see the sketch after this list
  • Collaborate with product, design, and business teams to translate requirements into technical solutions
  • Ensure code quality, security, and performance across the stack
  • Own features end-to-end: architecture, development, deployment, and monitoring
  • Contribute to system design, best practices, and the overall technical roadmap
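As a flavor of the backend work, a minimal FastAPI service might look like the sketch below — purely illustrative, with a hypothetical resource model and an in-memory dict standing in for PostgreSQL.

```python
# Minimal FastAPI service sketch; route and model are illustrative
# assumptions. Assumes fastapi and uvicorn are installed.
# Run with: uvicorn app:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="demo-service")

class Item(BaseModel):
    name: str
    price: float

_DB: dict[int, Item] = {}  # in-memory stand-in for PostgreSQL

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    _DB[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in _DB:
        raise HTTPException(status_code=404, detail="Item not found")
    return _DB[item_id]
```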


Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience
  • Strong expertise in TypeScript/Next.js or Python (FastAPI, Django) — working familiarity with both areas is expected
  • Experience building RESTful APIs and microservices
  • Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
  • Strong debugging, optimization, and problem-solving abilities
  • Comfortable working in fast-paced startup environments


Good-to-Have:

  • Experience with containerization (Docker/Kubernetes)
  • Exposure to message queues or event-driven architectures
  • Familiarity with modern DevOps and observability tooling


NeoGenCode Technologies Pvt Ltd
Posted by Shivank Bhardwaj
Pune
6 - 8 yrs
₹12L - ₹22L / yr
Skills: Node.js, React.js, JavaScript, Go (Golang), Elixir (+10 more)


Job Description – Full Stack Developer (React + Node.js)

Experience: 5–8 Years

Location: Pune

Work Mode: WFO

Employment Type: Full-time


About the Role

We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications using React and Node.js.
  • Collaborate with cross-functional teams to define, design, and deliver new features.
  • Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
  • Work with relational databases such as PostgreSQL or MySQL.
  • Deploy and manage applications in cloud environments (preferably GCP or AWS).
  • Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
  • Utilize containerization tools like Docker for efficient development and deployment workflows.
  • Integrate third-party services and APIs, including AI APIs and tools.
  • Contribute to improving development processes, documentation, and best practices.


Required Skills

  • Strong experience with React.js (frontend).
  • Solid hands-on experience with Node.js (backend).
  • Good understanding of relational databases: PostgreSQL / MySQL.
  • Experience working in production environments and debugging live systems.
  • Strong understanding of OOP or Functional Programming, and clean coding standards.
  • Knowledge of Docker or other containerization tools.
  • Experience with cloud platforms (GCP or AWS).
  • Excellent written and verbal communication skills.


Good to Have

  • Experience with Golang or Elixir.
  • Familiarity with Kubernetes, RabbitMQ, Redis, etc.
  • Contributions to open-source projects.
  • Previous experience working with AI APIs or machine learning tools.


Virtana
Posted by Krutika Devadiga
Pune
4 - 10 yrs
Best in industry
Skills: Java, Kubernetes, Go (Golang), Python, Apache Kafka (+13 more)

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
  • Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services like Kafka.
  • Experience with CI/CD and cloud-based software development and delivery.
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

Inteliment Technologies
Posted by Ariba Khan
Pune
3 - 5 yrs
Up to ₹16L / yr (varies)
Skills: SQL, Python, ETL, Amazon Web Services (AWS), Azure (+1 more)

About the company:

Inteliment is a niche business analytics company with an almost two-decade proven track record of partnering with hundreds of Fortune 500 global companies. Inteliment operates its ISO-certified development centre in Pune, India, and has business operations in multiple countries through subsidiaries in Singapore and Europe, with its headquarters in India.


About the Role:

As a Data Engineer, you will contribute to cutting-edge global projects and innovative product initiatives, delivering impactful solutions for our Fortune clients. In this role, you will take ownership of the entire data pipeline and infrastructure development lifecycle—from ideation and design to implementation and ongoing optimization. Your efforts will ensure the delivery of high-performance, scalable, and reliable data solutions. Join us to become a driving force in shaping the future of data infrastructure and innovation, paving the way for transformative advancements in the data ecosystem.


Qualifications:

  • Bachelor’s or master’s degree in computer science, Information Technology, or a related field.
  • Certifications with related field will be an added advantage.


Key Competencies:

  • Must have experience with SQL, Python and Hadoop
  • Good to have experience with Cloud Computing Platforms (AWS, Azure, GCP, etc.), DevOps Practices, Agile Development Methodologies
  • ETL or other similar technologies will be an advantage.
  • Core Skills: Proficiency in SQL, Python, or Scala for data processing and manipulation
  • Data Platforms: Experience with cloud platforms such as AWS, Azure, or Google Cloud.
  • Tools: Familiarity with tools like Apache Spark, Kafka, and modern data warehouses (e.g., Snowflake, Big Query, Redshift).
  • Soft Skills: Strong problem-solving abilities, collaboration, and communication skills to work effectively with technical and non-technical teams.
  • Additional: Knowledge of SAP would be an advantage 


Key Responsibilities:

  • Data Pipeline Development: Build, maintain, and optimize ETL/ELT pipelines for seamless data flow (see the PySpark sketch after this list).
  • Data Integration: Consolidate data from various sources into unified systems.
  • Database Management: Design and optimize scalable data storage solutions.
  • Data Quality Assurance: Ensure data accuracy, consistency, and completeness.
  • Collaboration: Work with analysts, scientists, and stakeholders to meet data needs.
  • Performance Optimization: Enhance pipeline efficiency and database performance.
  • Data Security: Implement and maintain robust data security and governance policies
  • Innovation: Adopt new tools and design scalable solutions for future growth.
  • Monitoring: Continuously monitor and maintain data systems for reliability.
  • Data Engineers ensure reliable, high-quality data infrastructure for analytics and decision-making.
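For illustration, a small ETL step of the kind described above might look like this PySpark sketch; the bucket paths, column names, and quality rule are hypothetical.

```python
# Sketch of a small PySpark ETL step: read raw CSV, clean it, and
# write partitioned Parquet. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.csv("s3a://raw-bucket/orders/", header=True, inferSchema=True)

clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)                      # basic quality rule
       .withColumn("order_date", F.to_date("order_ts"))  # derive partition key
)

clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-bucket/orders/"
)
spark.stop()
```
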
Chargebackguru

Agency job via TrueTech Solutions by GayathriBajwana M
Chennai
3 - 8 yrs
₹6L - ₹12L / yr
Skills: Amazon Web Services (AWS), Business Intelligence (BI), Power BI, Qlik

Job brief

An ideal candidate will have 3 to 6 years of experience working in live projects related to data analysis or Business Intelligence.

 

Responsibilities:

 

  • Work with business groups and technical teams to develop and maintain BI Reports & Dashboards.
  • Designing, developing, and maintaining complex BI solutions, such as dashboards, reports, and data visualizations.
  • Knowledge of extracting data from various sources such as files, cloud services, and databases, and loading it into data warehouses and data lakes.
  • Responsible for performance tuning and optimization of BI solutions, ensuring they perform efficiently and respond quickly.
  • Provide technical support during weekends, after-hours and holidays when needed.

 

Skills Areas

  • Good knowledge of Data Analysis and Data Visualization and BI performance optimization techniques.
  • Experience in developing Reports & Dashboards, relational and multidimensional models, and report migration & upgrade activities.
  • Visualization Tools: Experience in one or more of the following visualization tools (Amazon QuickSight (preferred), Bold BI, Power BI, Tableau, Qlik)
  • Strong knowledge of writing simple and complex SQL queries
  • Knowledge of ETL job development using Informatica, DataStage, or other ETL tools
  • Knowledge in one or more of the following domains – fraud and risk, retail payments, banking, and financial services.
  • Willing to learn new technologies and implement them on demand as per business requirements.
  • A self-motivated professional with excellent problem-solving skills
  • Business communication skills (written & verbal)
  • Presentation & documentation skills
  • Mentoring and people management skills

 

Experience

The ideal candidate will have more than 3 years of relevant experience. A professional degree or post-graduation is desirable, and certifications in the relevant technology areas will be an added advantage.

TrueTech Solutions
Posted by Poorvi S
Remote only
4 - 15 yrs
₹20L - ₹30L / yr
Skills: Go (Golang), Python, Amazon Web Services (AWS), CI/CD, Kubernetes (+1 more)

Required Qualifications

  • 4+ years of professional software development experience
  • 2+ years contributing to service design and architecture
  • Strong expertise in modern languages like Golang, Python
  • Deep understanding of scalable, cloud-native architectures and microservices
  • Production experience with distributed systems and database technologies
  • Experience with Docker, software engineering best practices
  • Bachelor's Degree in Computer Science or related technical field

Preferred Qualifications

  • Experience with Golang, AWS, and Kubernetes
  • CI/CD pipeline experience with GitHub Actions

  • Start-up environment experience

Remote only
8 - 15 yrs
₹20L - ₹30L / yr
Skills: PySpark, Python, ETL, Databricks, Amazon Web Services (AWS) (+14 more)

Job Summary: Lead/Senior ML Data Engineer (Cloud-Native, Healthcare AI)

Experience Required: 8+ Years

Work Mode: Remote


We are seeking a highly autonomous and experienced Lead/Senior ML Data Engineer to drive the critical data foundation for our AI analytics and Generative AI platforms. This is a specialized hybrid position, focusing on designing, building, and optimizing scalable data pipelines (ETL/ELT) that transform complex, messy clinical and healthcare data into high-quality, production-ready feature stores for Machine Learning and NLP models.

The successful candidate will own technical work streams end-to-end, ensuring data quality, governance, and low-latency delivery in a cloud-native environment.

Key Responsibilities & Focus Areas:

  • ML Data Pipeline Ownership (70-80% Focus): Design and implement high-performance, scalable ETL/ELT pipelines using PySpark and a Lakehouse architecture (such as Databricks) to ingest, clean, and transform large-scale healthcare datasets.
  • AI Data Preparation: Specialize in Feature Engineering and data preparation for complex ML workloads, including transforming unstructured clinical data (e.g., medical notes) for Generative AI and NLP model training.
  • Cloud Architecture & Orchestration: Deploy, manage, and optimize data workflows using Airflow in a production AWS environment.
  • Data Governance & Compliance: Mandatorily implement pipelines with robust data masking, pseudonymization, and security controls to ensure continuous adherence to HIPAA and other relevant health data privacy regulations.
  • Technical Leadership: Lead and define technical requirements from ambiguous business problems, acting as a key contributor to the data architecture strategy for the core AI platform.
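A minimal sketch of the masking requirement, assuming PySpark over a Lakehouse-style store: direct identifiers are pseudonymized with a salted hash, free-text fields are dropped, and dates are generalized. The paths, column names, and salt handling are hypothetical; a real pipeline would pull the salt from a secrets manager.

```python
# Sketch of column-level pseudonymization in PySpark, one way to meet
# a HIPAA-oriented masking requirement. All names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("phi-masking").getOrCreate()

records = spark.read.parquet("s3a://raw-clinical/encounters/")  # hypothetical path

SALT = "rotate-me"  # assumption: injected at runtime from a secrets manager

masked = (
    records
    # Pseudonymize direct identifiers with a salted SHA-256 hash.
    .withColumn("patient_id", F.sha2(F.concat(F.col("patient_id"), F.lit(SALT)), 256))
    # Drop free-text fields that may embed PHI.
    .drop("clinician_notes")
    # Generalize date of birth to year only.
    .withColumn("birth_year", F.year("date_of_birth")).drop("date_of_birth")
)

masked.write.mode("overwrite").parquet("s3a://curated-clinical/encounters/")
```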

Non-Negotiable Requirements (The "Must-Haves"):

  • 5+ years of progressive experience as a Data Engineer, with a clear focus on ML/AI support.
  • Deep expertise in PySpark/Python for distributed data processing.
  • Mandatory proficiency with Lakehouse platforms (e.g., Databricks) in an AWS production environment.
  • Proven experience handling complex clinical/healthcare data (EHR, Claims), including unstructured text.
  • Hands-on experience with HIPAA/GDPR compliance in data pipeline design.


Hardwin Software Solutions
Posted by Reshika Mendiratta
Remote only
4+ yrs
Up to ₹12L / yr (varies)
Skills: Python, Retrieval Augmented Generation (RAG), Artificial Intelligence (AI), Amazon Web Services (AWS), Generative AI (+4 more)

About the Role

We are looking for a passionate GenAI Developer to join our dynamic team at Hardwin Software Solutions. In this role, you will design and develop scalable backend systems, leverage AWS services for data processing, and work on cutting-edge Generative AI solutions. If you enjoy solving complex problems and building impactful applications, we’d love to hear from you.


What You Will Do

  • Develop robust and scalable backend services and APIs using Python, integrating with various AWS services.
  • Design, implement, and maintain data processing pipelines leveraging AWS (e.g., S3, Lambda) — see the sketch after this list.
  • Collaborate with cross-functional teams to translate requirements into efficient technical solutions.
  • Write clean, maintainable code while following agile engineering practices (CI/CD, version control, release cycles).
  • Optimize application performance and scalability by fine-tuning AWS resources and leveraging advanced Python techniques.
  • Contribute to the development and integration of Generative AI techniques into business applications.
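As an illustration of the S3/Lambda pipeline work, an S3-triggered handler might look like the sketch below; the bucket names and the transform are placeholder assumptions.

```python
# Sketch of an S3-triggered AWS Lambda handler: read a newly uploaded
# object and write a processed copy. Bucket names are hypothetical.
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        processed = body.upper()  # stand-in for the real transformation

        s3.put_object(
            Bucket="processed-bucket",  # hypothetical destination
            Key=f"processed/{key}",
            Body=processed.encode("utf-8"),
        )
    return {"statusCode": 200, "body": json.dumps({"processed": len(event["Records"])})}
```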


What You Should Have

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 3+ years of professional experience in software development.
  • Strong programming skills in Python and good understanding of data structures & algorithms.
  • Hands-on experience with AWS services: S3, Lambda, DynamoDB, OpenSearch.
  • Experience with Relational Databases, Source Control, and CI/CD pipelines.
  • Practical knowledge of Generative AI techniques (mandatory).
  • Strong analytical and mathematical problem-solving abilities.
  • Excellent communication skills in English.
  • Ability to work both independently and collaboratively, with a proactive and self-motivated attitude.
  • Strong organizational skills with the ability to prioritize tasks and meet deadlines.
Helius Technologies
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8+ yrs
Up to ₹27L / yr (varies)
Skills: Java, Spring Boot, Microservices, RESTful APIs, Amazon Web Services (AWS) (+1 more)

Required Skills & Experience

  • Must have 8+ years of relevant experience in Java design and development.
  • Extensive experience working on solution design and API design.
  • Experience in Java development at an enterprise level (Spring Boot, Java 17+, Spring Security, Microservices, Spring).
  • Extensive work experience in monolithic applications using Spring.
  • Extensive experience leading API development and integration (REST/JSON).
  • Extensive work experience using Apache Camel.
  • In-depth technical knowledge of database systems (Oracle, SQL Server).
  • Ability to refactor and optimize existing code for performance, readability, and maintainability.
  • Experience working with Continuous Delivery/Continuous Integration (CI/CD) pipelines.
  • Experience in container platforms (Docker, OpenShift, Kubernetes).
  • DevOps knowledge, including:
      • Configuring continuous integration, deployment, and delivery tools like Jenkins or Codefresh
      • Container-based development using Docker, Kubernetes, and OpenShift
      • Instrumenting monitoring and logging of applications
AdTech Industry

Agency job via Peak Hire Solutions by Dhara Thakkar
Noida
8 - 12 yrs
₹0.1L - ₹0.1L / yr
Skills: Python, MLOps, Apache Airflow, Apache Spark, AWS CloudFormation (+23 more)

Review Criteria

  • Strong MLOps profile
  • 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
  • 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
  • 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
  • Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
  • Must have hands-on Python for pipeline & automation development
  • 4+ years of experience in AWS cloud, with recent companies
  • (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth

 

Preferred

  • Hands-on in Docker deployments for ML workflows on EKS / ECS
  • Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
  • Experience with CI / CD / CT using GitHub Actions / Jenkins.
  • Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
  • Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.

 

Job Specific Criteria

  • CV attachment is mandatory
  • Please provide the CTC breakup (fixed + variable)
  • Are you open to a face-to-face (F2F) round?
  • Has the candidate filled in the Google form?

 

Role & Responsibilities

We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.

 

Key Responsibilities:

  • Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
  • Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue) — see the DAG sketch after this list.
  • Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
  • Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
  • Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
  • Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
  • Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
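To illustrate the orchestration piece, a stripped-down Airflow DAG for a train-and-validate workflow might look like the sketch below. It assumes Airflow 2.4+; the DAG id, schedule, and task bodies are placeholders.

```python
# Minimal Airflow DAG sketch for a train-and-validate workflow.
# Task bodies are placeholders; names and schedule are illustrative.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features(**context):
    print("Pulling features from the feature store (placeholder).")

def train_model(**context):
    print("Training model on EMR/Spark (placeholder).")

def validate_model(**context):
    print("Checking metrics against promotion thresholds (placeholder).")

with DAG(
    dag_id="ml_training_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    validate = PythonOperator(task_id="validate_model", python_callable=validate_model)

    extract >> train >> validate
```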

 

Ideal Candidate

  • 8+ years in MLOps/DevOps with strong ML pipeline experience.
  • Strong hands-on experience with AWS:
      • Compute/Orchestration: EKS, ECS, EC2, Lambda
      • Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
      • Workflow: MWAA/Airflow, Step Functions
      • Monitoring: CloudWatch, OpenSearch, Grafana
  • Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
  • Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
  • Strong Linux, scripting, and troubleshooting skills.
  • Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.

 

Education:

  • Master’s degree in computer science, Machine Learning, Data Engineering, or related field.

 
