
50+ AWS (Amazon Web Services) Jobs in India

Apply to 50+ AWS (Amazon Web Services) Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS (Amazon Web Services) Jobs and apply today!

MNK Global Corporate Solutions
Rithika Raghavan
Posted by Rithika Raghavan
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
Python
Django
Amazon Web Services (AWS)

About the Role

We are looking for an experienced Senior Backend Developer to design and build scalable, secure, and high-performance backend systems. The ideal candidate will have deep expertise in Python/Django, microservices architecture, and cloud technologies, along with strong problem-solving skills and leadership capabilities.


Key Responsibilities

• Design and develop backend services using Django and Python.

• Architect and implement microservices-based solutions for scalability and maintainability.

• Work with PostgreSQL and Redis for efficient data storage and caching.

• Build and maintain RESTful APIs and ensure robust API design principles.

• Implement system design best practices for high availability and fault tolerance.

• Containerize applications using Docker and manage deployments with Kubernetes.

• Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.

• Apply security best practices to protect data and application integrity.

• Collaborate with frontend, QA, and DevOps teams for seamless delivery.

• Mentor junior developers and conduct code reviews to maintain quality standards.
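As an illustration of the PostgreSQL/Redis caching responsibility above, here is a minimal cache-aside sketch (not part of the posting; the Redis client is stood in by an in-memory dict, and `get_user_profile` and the key format are hypothetical names):

```python
import json
import time

# Stand-in for a Redis client: key -> (expiry timestamp, JSON payload).
# In production these would be redis-py get/setex calls against a real server.
_cache: dict[str, tuple[float, str]] = {}

def cache_get(key: str):
    entry = _cache.get(key)
    if entry and entry[0] > time.time():
        return json.loads(entry[1])
    _cache.pop(key, None)  # expired or missing
    return None

def cache_set(key: str, value, ttl_seconds: int = 60) -> None:
    _cache[key] = (time.time() + ttl_seconds, json.dumps(value))

def get_user_profile(user_id: int, fetch_from_db) -> dict:
    """Cache-aside: try the cache first, fall back to the DB, then populate."""
    key = f"user:{user_id}"
    cached = cache_get(key)
    if cached is not None:
        return cached
    profile = fetch_from_db(user_id)  # e.g. a Django ORM query
    cache_set(key, profile, ttl_seconds=300)
    return profile
```

Repeated reads for the same user then hit the cache instead of the database until the TTL lapses.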


Required Skills & Expertise

• Django/Python – Advanced proficiency in backend development.

• Microservices Architecture – Strong understanding of distributed systems.

• PostgreSQL & Redis – Expertise in relational and in-memory databases.

• Docker/Kubernetes – Hands-on experience with containerization and orchestration.

• API Design & System Design – Ability to design scalable and secure systems.

• Cloud (AWS/Azure) – Practical experience with cloud services and deployments.

• Security Best Practices – Knowledge of authentication, authorization, and data protection.


Preferred Qualifications

• Experience with CI/CD pipelines and DevOps practices.

• Familiarity with message queues (e.g., RabbitMQ, Kafka).

• Exposure to monitoring tools (Prometheus, Grafana).


What We Offer

• Competitive salary and benefits.

• Opportunity to work on cutting-edge backend technologies.

• Collaborative and growth-oriented work environment.

TVARIT GmbH

Dr. Soumya Sahadevan
Posted by Dr. Soumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
PySpark
Databricks

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE). With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.

· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

· Utilize Docker and Kubernetes for scalable data processing.

· Collaborate with automation team, data scientists and engineers to provide clean, structured data for AI/ML models.
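The pre-processing bullet (cleaning, deduplication, normalization) can be sketched as follows. This is purely illustrative and not from the posting: in the actual stack these would be PySpark DataFrame operations on Databricks, and the record fields (`sensor_id`, `timestamp`, `value`) are hypothetical:

```python
# Minimal cleaning -> deduplication -> min-max normalization pipeline
# over lists of dicts standing in for Spark rows.

def clean(records: list[dict]) -> list[dict]:
    """Drop rows with missing readings."""
    return [r for r in records if r.get("value") is not None]

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record per (sensor_id, timestamp) pair."""
    seen, out = set(), []
    for r in records:
        key = (r["sensor_id"], r["timestamp"])
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

def normalize(records: list[dict]) -> list[dict]:
    """Min-max scale 'value' into [0, 1]."""
    values = [r["value"] for r in records]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant columns
    return [{**r, "value": (r["value"] - lo) / span} for r in records]

def preprocess(records: list[dict]) -> list[dict]:
    return normalize(deduplicate(clean(records)))
```

The same three stages map one-to-one onto `dropna`, `dropDuplicates`, and a computed-column transform in PySpark.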


Desired Skills and Qualifications

· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

· Proficiency in PySpark, Azure Databricks, Python, Apache Spark, etc.

· 2 years of team handling experience.

· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

· Experience in containerization (Docker, Kubernetes).

· Strong analytical and problem-solving skills with attention to detail.

· Good to have: MLOps and DevOps experience, including model lifecycle management.

· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.

Remote
0 - 0 yrs
₹1L - ₹1.5L / yr
Amazon Web Services (AWS)
Troubleshooting
IT infrastructure
Disaster recovery
IT operations

📍 Position: IT Intern

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.


💻 Technical Requirements:

- Laptop with i5 or higher processor

- Reliable internet connectivity with at least 100 Mbps speed

REConnect Energy

Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
4.5 - 7 yrs
Up to ₹30L / yr (varies)
Python
MLOps
Machine Learning (ML)
SQL
Amazon Web Services (AWS)

About Us:

REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned into US and European markets.


We are headquartered in central Bangalore with a team of 150+ and growing. You will join the Bangalore-based engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences, and AI.


Responsibilities:

● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.

● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.

● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.

● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime. 


Requirements:

● 4-5 years of experience building highly available systems

● 2-3 years of experience leading a team of engineers and analysts

● Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or equivalent

● Proficiency in Python programming, with expertise in data engineering and machine learning deployment

● Experience with databases, including MySQL and NoSQL

● Experience in developing and maintaining critical and high availability systems will be given strong preference

● Experience in software design using design principles and architectural modeling.

● Experience working with AWS cloud platform.

● Strong analytical and data-driven approach to problem solving

Redtring
Keshav Senthil
Posted by Keshav Senthil
Hyderabad
3 - 6 yrs
₹15L - ₹20L / yr
Java
Kotlin
Amazon Web Services (AWS)
Redis
Apache Kafka

About Us:


We are hiring for ZeroMoblt (https://zeromoblt.com/), a pre-seed funded, high-agency Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.


Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.


What You'll Do

  • Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
  • Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
  • Craft scalable React UIs that power ops dashboards and parent-facing apps.
  • Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
  • Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
  • Shape the engineering roadmap: propose, prioritise, and ship features with founders.
  • Mentor juniors while executing solo on high-impact bets—no layers, just results.


We're Looking For

  • 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
  • Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
  • Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
  • Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
  • Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.


This Role Is Not For You If…

  • You need structured roadmaps, PM hand-holding, or big-tech process.
  • Comfort > impact: stable salary over equity upside and chaos.
  • You've never worn all hats (dev, ops, product) in a resource-constrained environment.


Why Join Us

  • Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
  • Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
  • Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
  • Hungry to Leap? Apply now!
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Kubernetes

Job Title : Senior DevOps Engineer (Only Mumbai Candidates)

Experience : 5+ Years

Location : Mumbai (On-site)

Notice Period : Immediate to 15 Days

Interview Process : 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
ManpowerGroup
Shirisha Jangi
Posted by Shirisha Jangi
Bengaluru (Bangalore), Hyderabad
7 - 15 yrs
₹20L - ₹27L / yr
Data engineering
Java
Python
SQL
Scala

Immediate hiring for Senior Data Engineer

📍 Location: Hyderabad/Bangalore

💼 Experience: 7+ Years

🕒 Employment Type: Full-Time

🏢 Work Mode: Hybrid

📅 Notice Period: 0-1 month (serving notice only)

 

We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.

 

🔎 Key Responsibilities:

  • Data Pipeline Development
  • Data Modeling and Architecture
  • Data Integration and API Development
  • Data Infrastructure Management
  • Collaboration and Documentation

 

🎯 Required Skills:

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • 7+ years of proven experience in data engineering, software development, or related technical roles.
  • 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
  • 7+ years of experience with database systems, data modeling, and advanced SQL.
  • 7+ years of experience with ETL tools and platforms such as SSIS, Snowflake, Databricks, Azure Data Factory, stored procedures, etc.
  • Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
  • Strong analytical, problem-solving, and debugging skills with high attention to detail.
  • Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
  • Ability to adapt to rapidly evolving technologies and business requirements.

 

 

Product development MNC
Hyderabad
12 - 20 yrs
₹45L - ₹60L / yr
NodeJS (Node.js)
React.js
Amazon Web Services (AWS)
Fullstack Developer
TypeScript

Work Mode: 5 days in office

Notice: Max 30 days

*1 final round will be in-person


Responsibilities

●      Own and champion the development process of our web-based applications, including SDLC, coding standards, code reviews, check-ins and builds, issue tracking, bug triage, incident management, and testing.

●      Build and maintain a high-performing software development team including hiring, training, and onboarding.

●      Identify opportunities to eliminate non-value add activities to enable our developers to do what they love best—developing! No pointless meetings, no unnecessary interruptions, no random changes of course, no new problems from on high dumped in their lap each month.

●      Identify growth opportunities for team members to continue to learn and develop in a supportive environment.

●      Provide an engaging and challenging landscape for career growth.

●      Provide leadership, mentorship, and motivation to the engineering team to sustain high levels of productivity and morale.

●      Collaborate with Product Management on product requirements.

●      Champion and advocate for the engineering team to the rest of the organization.

●      Create a positive culture of fairness, quality, and accountability while challenging the status quo and bringing new ideas to light.

●      Participate as a member of company’s Engineering Leadership team to build a high performing organization across multiple locations.

 

Requirements

●      12+ years of software development experience, 2+ years of development leadership experience.

●      Demonstrated technical leadership and people management skills.

●      Experience with agile development processes.

●      Hands-on experience in driving/leading technical efforts in cloud-based applications.

●      Proven track record of driving quality within a team, with a commitment to automated testing.

●      Strong communication skills with the ability to effectively influence product at different levels of abstraction and communicate to both technical and non-technical audiences.

●      Excellent coding skills to provide guidance and craftsmanship for our engineers.

●      Technical acumen to exercise sound judgment, making optimal short-term decisions without sacrificing long-term technology goals.

●      Demonstrated critical analysis skills to drive continuous improvement of technology, process, and productivity.

 

Technical Experience

We are looking for someone who has experience working in environments that utilize some of the following technologies:

●      AWS & Azure

●      Typescript

●      Node.js

●      React.js

●      Material UI

●      Jira

●      GitHub

●      CI/CD

●      SQL (MySQL, PostgreSQL, SQL Server)

●      MongoDB

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Pune
4 - 8 yrs
₹12L - ₹21L / yr
Java
Spring Boot
Microservices
Apache Kafka
Amazon Web Services (AWS)

Proficiency in Java 8+. Solid understanding of REST APIs (Spring Boot), microservices, databases (SQL/NoSQL), and caching systems like Redis/Aerospike.


Solid understanding of different messaging systems like Kafka with experience in writing respective publisher & consumers.


Familiarity with cloud platforms (AWS) and DevOps tools (Docker, Kubernetes, CI/CD).


Good understanding of data structures, algorithms, and software design principles.

Albert Invent
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
1 - 4 yrs
Up to ₹22L / yr (varies)
Automation
Terraform
Python
NodeJS (Node.js)
Amazon Web Services (AWS)

The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.


Responsibilities

  • Act as a passionate representative of the Albert product and brand.
  • Work closely with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
  • Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
  • Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
  • Be responsible for the design and delivery of the mission-critical stack with a focus on security, resiliency, scale, and performance.
  • Own end-to-end performance and operability.
  • Demonstrate a clear understanding of automation and orchestration principles.
  • Act as the escalation point for complex or critical issues that have not yet been documented as Standard Operating Procedures (SOPs).
  • Use a deep understanding of service topology and dependencies to troubleshoot issues and define mitigations.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 1+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
  • Strong experience with Infrastructure as Code (IaC), preferably using Terraform.
  • Strong expertise in Python or Node.js, including designing RESTful APIs and microservices architecture.
  • Strong expertise in cloud infrastructure (AWS) and platform technologies including microservices, APIs, and distributed systems.
  • Hands-on experience with observability stacks including centralized log management, metrics, and tracing.
  • Familiarity with CI/CD tools such as CircleCI and performance testing using K6.
  • Passion for bringing more automation and engineering standards to organizations.
  • Experience building high-performance APIs with low latency (<200 ms).
  • Ability to work in a fast-paced environment and collaborate with peers and leaders.
  • Ability to lead technically, mentor engineers, and contribute to hiring and team growth.
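The "low latency (<200 ms)" requirement above implies routinely measuring handler latency against a budget. A toy sketch of that idea (not from the posting; the handler, the `timed` decorator, and the 200 ms budget constant are all illustrative):

```python
import time
from functools import wraps

LATENCY_BUDGET_MS = 200  # illustrative budget from the "<200 ms" requirement

def timed(fn):
    """Record each call's wall-clock latency in milliseconds on fn.latencies."""
    samples: list[float] = []
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        samples.append((time.perf_counter() - start) * 1000.0)
        return result
    wrapper.latencies = samples
    return wrapper

@timed
def handle_request(payload: dict) -> dict:
    # Stand-in for a real API handler.
    return {"ok": True, "echo": payload}

def p95(samples: list[float]) -> float:
    """95th-percentile latency (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(int(round(0.95 * len(ordered))) - 1, 0)
    return ordered[rank]
```

In practice this is what tools like K6 (named in the posting) report for you; the point of the sketch is that the budget is checked at a percentile, not at the mean.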

Good to Have

  • Experience with Kubernetes and container orchestration.
  • Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
  • Experience building internal developer platforms (IDPs) or reusable engineering frameworks.
  • Exposure to ML infrastructure or data engineering workflows.
  • Experience working in compliance-heavy environments (SOC2, HIPAA, etc.).


About Albert Invent


Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, helping bring better products to market faster.

Why Join Albert Invent

  • Work with a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
  • Collaborate with world-class scientists and technologists to redefine how new materials are discovered and developed.
  • Culture built on curiosity, collaboration, ownership, and continuous learning.
  • Opportunity to build cutting-edge AI tools that accelerate real-world R&D and solve global challenges such as sustainability and advanced manufacturing.


Redtring
Keshav Senthil
Posted by Keshav Senthil
Hyderabad
1 - 3 yrs
₹8L - ₹12L / yr
Kotlin
Java
Spring Boot
React.js
Amazon Web Services (AWS)

Software Engineer (Backend) – Kotlin & React

About Us

We are a high-agency startup building elegant technological solutions to real-world problems.

Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.

We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.

Role Overview

As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.

This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.

Key Responsibilities

System Development & Architecture

  • Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
  • Architect systems that are robust, high-performance, and production-ready.
  • Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.

Full Stack Development

  • Build fast, maintainable front-end applications using React.
  • Ensure seamless integration between frontend systems and backend services.

Cloud Infrastructure

  • Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
  • Implement scalable deployment pipelines, monitoring, and infrastructure optimization.

Product & Technical Collaboration

  • Work closely with founders and product stakeholders to translate business problems into technical solutions.
  • Contribute actively to product and engineering roadmap decisions.

Performance Optimization

  • Continuously improve system performance, scalability, and reliability.
  • Implement efficient algorithms and system optimizations to gain a technical advantage.

Engineering Excellence

  • Write clean, well-tested, and maintainable code.
  • Maintain strong engineering standards across the codebase.

Required Skills & Qualifications

We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.

Core Requirements

  • Strong computer science fundamentals (Data Structures, Algorithms, System Design).
  • Experience with Kotlin or JVM languages such as Java or Scala.
  • Experience building modern React applications.
  • Hands-on experience with cloud platforms (AWS / Azure / GCP).
  • Experience designing and deploying scalable distributed systems.
  • Strong problem-solving and analytical thinking.

Preferred / Bonus Skills

  • Experience with Google Maps APIs or geospatial integrations.
  • Prior startup experience.
  • Contributions to open-source projects.
  • Personal side projects demonstrating strong engineering ability.

Ideal Candidate

You will thrive in this role if you:

  • Take ownership of problems, not just tasks.
  • Are comfortable working in high-ambiguity environments.
  • Have a builder mindset and enjoy creating systems from scratch.
  • Learn quickly and execute with speed and precision.

This Role May Not Be For You If

  • You prefer strict task assignments and detailed specifications before starting work.
  • You want to focus only on coding tickets without product involvement.
  • You prefer large teams with multiple layers of management.

Why Join Us

  • Build 0 → 1 products with massive ownership.
  • Work in a flat organization with no unnecessary hierarchy.
  • Collaborate directly with founders and core product builders.
  • Your contributions will have immediate and visible impact.
  • Flexible remote work environment.
  • Opportunity to shape the technology, culture, and future of the company.

If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.

Pace Wisdom Solutions
Bengaluru (Bangalore)
7 - 10 yrs
₹15L - ₹30L / yr
.NET
ASP.NET
ASP.NET MVC
MVC Framework
Amazon Web Services (AWS)

Location: Bangalore

Experience required: 7-10 years.

Key skills: .NET Core, ASP.NET, Microsoft Azure, MVC, AWS


"At Pace Wisdom Solutions, our .NET team is a dynamic and collaborative group of experts specializing in end-to-end development. With a focus on both front-end and back-end technologies, we leverage the robust .NET framework and Azure to deliver innovative and scalable solutions. Our agile approach ensures adaptability to industry changes, empowering us to provide clients with cutting-edge and tailored applications."


We are seeking a highly skilled and experienced Senior .NET Developer with a minimum of 7 years of hands-on experience. The ideal candidate will possess expertise in both front-end and back-end development, with a strong background in MVC architecture and exposure to Microsoft Azure technologies. The role requires an individual who can work independently, lead a team effectively, and contribute to the successful delivery of projects.


Engineering Culture at Pace Wisdom:

We foster a collaborative and communicative environment where engineers are empowered to share ideas freely. Teamwork is paramount, and we believe the best solutions come from diverse perspectives. We are committed to promoting from within, providing clear career paths and mentorship opportunities to help our engineers reach their full potential. Our culture prioritizes continuous learning and growth, offering a safe space to experiment, innovate, and refine your skills.


Responsibilities:

• Create scalable solutions by understanding business requirements, writing code, and testing according to best practices.

• Own and collaborate with the team, including our customers, QA, design, and other stakeholders, to drive successful project delivery.

• Advocate for and mentor teams to follow best practices around documentation, unit testing, code reviews, etc.

• Comply with security policies and processes.


Qualifications:

• 7-10 years of professional experience in developing applications using .NET framework, .NET Core, Azure Services, Entity Framework

• Good knowledge of common software architecture design patterns, Object Oriented Programming, Data structures, Algorithms, Database design patterns and other best practices.

• Exposure to Cloud technologies (AWS, Azure, Google Cloud - at least one of them)

• Exposure to developing SPA on React, Angular or VueJS

• Experience with micro services, messaging systems (RabbitMQ/Kafka)

• Proven ability to lead and mentor development teams.

• Effective communication and interpersonal skills.


About the Company:

Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.


We engage with our clients at various stages:

• Right from the idea stage to scope out business requirements.

• Design & architect the right solution and define tangible milestones.

• Setup dedicated and on-demand tech teams for agile delivery.

• Take accountability for successful deployments to ensure efficient go-to-market implementations.


Pace Wisdom has been working with Fortune 500 enterprises and growth-stage startups/SMEs since 2012. We also work as an extended tech team, and at times we have played the role of a virtual CTO too. We believe in building lasting relationships, providing value-add every time, and going beyond business.

Hyderabad
5 - 8 yrs
₹15L - ₹25L / yr
ETL
Snowflake
Python
SQL
Fivetran

Role Overview


We are looking for a Senior Data Quality Engineer who is passionate about building reliable and scalable data platforms. In this role, you will ensure high-quality, trustworthy data across pipelines and analytics systems by designing robust data ingestion frameworks, implementing data quality checks, and optimizing data transformations.

You will work closely with data engineers, analytics teams, and product stakeholders to ensure data accuracy, consistency, and reliability across the organization.


Key Responsibilities


  • Cleanse, normalize, and enhance data quality across operational systems and new data sources flowing through the data platform.
  • Design, build, monitor, and maintain ETL/ELT pipelines using Python, SQL, and Airflow.
  • Develop and optimize data models, tables, and transformations in Snowflake.
  • Build and maintain data ingestion workflows, including API integrations, file ingestion, and database connectors.
  • Ensure data reliability, integrity, and performance across pipelines.
  • Perform comprehensive data profiling to understand data structures, detect anomalies, and resolve inconsistencies.
  • Implement data quality validation frameworks and automated checks across pipelines.
  • Use data integration and data quality tools such as Deequ, Great Expectations (GX), Splink, Fivetran, Workato, Informatica, etc., to onboard new data sources.
  • Troubleshoot pipeline failures and implement data monitoring and alerting mechanisms.
  • Collaborate with engineering, analytics, and product teams in an Agile development environment.
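The validation-framework responsibilities above can be illustrated with a minimal, hand-rolled sketch of the kind of checks that tools like Great Expectations or Deequ automate at scale. The column names, dataset, and thresholds here are hypothetical, not from any real pipeline:

```python
# Minimal data-quality checks of the kind frameworks like Great
# Expectations or Deequ automate. Column names and thresholds are
# illustrative only.

def check_not_null(rows, column, max_null_rate=0.0):
    """Fail if the share of None values in `column` exceeds the threshold."""
    nulls = sum(1 for r in rows if r.get(column) is None)
    rate = nulls / len(rows) if rows else 0.0
    return {"check": f"not_null({column})", "passed": rate <= max_null_rate,
            "null_rate": rate}

def check_unique(rows, column):
    """Fail if `column` contains duplicate non-null values."""
    values = [r[column] for r in rows if r.get(column) is not None]
    return {"check": f"unique({column})", "passed": len(values) == len(set(values))}

def run_suite(rows, checks):
    """Run every check and report whether the whole suite passed."""
    results = [check(rows) for check in checks]
    return results, all(r["passed"] for r in results)

orders = [
    {"order_id": 1, "amount": 120.0},
    {"order_id": 2, "amount": None},
    {"order_id": 2, "amount": 75.5},   # duplicate id and a null amount
]
results, passed = run_suite(orders, [
    lambda rows: check_not_null(rows, "amount"),
    lambda rows: check_unique(rows, "order_id"),
])
```

In a real pipeline these results would feed the monitoring and alerting mechanisms mentioned above rather than being inspected by hand.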


Required Technical Skills


Core Technologies


  • Strong hands-on experience with SQL
  • Python for data transformation and pipeline development
  • Workflow orchestration using Apache Airflow
  • Experience working with Snowflake data warehouse
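As a small, self-contained illustration of the window-function SQL this stack leans on, the sketch below keeps only the latest record per business key, a common dedup step in ELT. SQLite stands in for Snowflake here, and the table and column names are made up:

```python
import sqlite3

# ROW_NUMBER() dedup: keep the most recent row per business key.
# SQLite (3.25+) stands in for Snowflake; the schema is illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_updates (customer_id INT, email TEXT, updated_at TEXT);
INSERT INTO customer_updates VALUES
  (1, 'old@example.com', '2024-01-01'),
  (1, 'new@example.com', '2024-06-01'),
  (2, 'only@example.com', '2024-03-15');
""")
latest = conn.execute("""
SELECT customer_id, email FROM (
  SELECT customer_id, email,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY updated_at DESC) AS rn
  FROM customer_updates
) WHERE rn = 1
ORDER BY customer_id
""").fetchall()
```

The same `ROW_NUMBER() OVER (PARTITION BY … ORDER BY …)` pattern carries over to Snowflake unchanged.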


Data Engineering Expertise


  • Strong understanding of ETL / ELT pipeline design
  • Data profiling and data quality validation techniques
  • Experience building data ingestion pipelines from APIs, files, and databases
  • Data modeling and schema design


Tools & Platforms


  • Data Quality Tools: Deequ, Great Expectations (GX), Splink
  • Data Integration Tools: Fivetran, Workato, Informatica
  • Cloud Platforms: AWS (preferred)
  • Version Control & DevOps: Git, CI/CD pipelines


Qualifications


  • 5–8 years of experience in Data Quality Engineering / Data Engineering
  • Strong expertise in SQL, Python, Airflow, and Snowflake
  • Experience working with large-scale datasets and distributed data systems
  • Solid understanding of data engineering best practices across the development lifecycle
  • Experience working in Agile environments (Scrum, sprint planning, etc.)
  • Strong analytical and problem-solving skills


What We Look For


  • Passion for data accuracy, reliability, and governance
  • Ability to identify and resolve complex data issues
  • Strong collaboration skills across data, engineering, and analytics teams
  • Ownership mindset and attention to data integrity and performance


Why Join Us


  • Opportunity to work on modern data platforms and large-scale datasets
  • Collaborate with high-performing data and engineering teams
  • Exposure to cloud data architecture and modern data tools
  • Competitive compensation and strong career growth opportunities
Read more
HireTo
Rishita Sharma
Posted by Rishita Sharma
Hyderabad
5 - 13 yrs
₹15L - ₹30L / yr
snowflake
skill iconPython
SQL
Windows Azure
databricks
+4 more

Position Title : Senior Data Engineer (Founding Member) - Insurtech Startup

Location : Hyderabad (Onsite)

Immediate to 15 days Joiners

Experience : 5 to 13 Years

Role Summary

We are looking for a Senior Data Engineer who will play a foundational role in:

  • Client onboarding from a data perspective
  • Understanding complex insurance data flows
  • Designing secure, scalable ingestion pipelines
  • Establishing strong data modeling and governance standards

This role sits at the intersection of technology, data architecture, security, and business onboarding.


Key Responsibilities

  • Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
  • Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
  • Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
  • Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
  • Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
  • Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
  • Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
  • Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
  • Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
  • Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
  • Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards
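The multi-format ingestion responsibility above can be sketched as normalizing records from different feeds into one canonical shape before they enter the pipeline. This is a stdlib-only sketch; the field names ("claim_id", "amount") and feeds are illustrative, not a real client schema:

```python
import csv
import io
import json

# Sketch: normalize claim records arriving as CSV or JSON into one
# canonical record shape, as an ingestion layer might. Field names
# are hypothetical, not from any real insurance schema.

def from_csv(text):
    return [
        {"claim_id": row["id"], "amount": float(row["amount"])}
        for row in csv.DictReader(io.StringIO(text))
    ]

def from_json(text):
    return [
        {"claim_id": str(obj["claimId"]), "amount": float(obj["amt"])}
        for obj in json.loads(text)
    ]

csv_feed = "id,amount\nC-1,150.00\nC-2,99.50\n"
json_feed = '[{"claimId": "C-3", "amt": 20}]'

# Downstream transformations see one shape regardless of source format.
claims = from_csv(csv_feed) + from_json(json_feed)
```

A production version would add the schema validation, credential handling, and tenancy isolation described above; the point here is only the canonicalization step.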

Required Technical Skills

  • Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
  • Platforms: Azure, AWS, Databricks, Snowflake
  • ETL / Orchestration: Airflow or similar frameworks
  • Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
  • Visualization Exposure: Power BI
  • Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
  • Integrations: APIs, real-time data streaming, ML model integration exposure

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Strong ability to align technical solutions with business objectives
  • Excellent communication and stakeholder management skills

What We Offer

  • Direct collaboration with the core US data leadership team
  • High ownership and trust to manage the function end-to-end
  • Exposure to a global environment with advanced tools and best practices
Read more
Remote only
2 - 7 yrs
₹5L - ₹15L / yr
DevOps
CI/CD
skill iconDocker
skill iconKubernetes
skill iconAmazon Web Services (AWS)
+8 more

BluePMS Software Solutions Pvt Ltd is hiring a talented DevOps Engineer to join our growing engineering team. In this role, you will be responsible for building and maintaining scalable infrastructure, automating deployment processes, and improving the reliability of our software delivery pipelines.


Key Responsibilities:

 1: Design, build, and maintain CI/CD pipelines for faster and reliable deployments.

 2: Manage and monitor cloud infrastructure and servers.

 3: Automate build, testing, and deployment processes.

 4: Collaborate with development and QA teams to improve release cycles.

 5: Monitor system performance and ensure high availability and reliability.

 6: Troubleshoot infrastructure and deployment issues.

 7: Implement security best practices in DevOps workflows.


Required Skills:

 1: Strong understanding of DevOps principles and CI/CD pipelines.

 2: Experience with Docker, Kubernetes, or containerization technologies.

 3: Familiarity with cloud platforms such as AWS, Azure, or GCP.

 4: Experience with Git, Jenkins, GitHub Actions, or similar tools.

 5: Basic scripting knowledge (Bash, Python, or Shell).

 6: Good understanding of Linux systems and networking concepts.
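The CI/CD responsibilities above can be reduced to one core idea: run stages in order and stop at the first failure. Real pipelines live in Jenkins or GitHub Actions; this toy runner only shows the fail-fast sequencing, and the stage names are illustrative:

```python
# Toy CI/CD stage runner: execute stages in order, stop at the first
# failure. Real pipelines are defined in Jenkins/GitHub Actions; this
# only illustrates fail-fast sequencing. Stage names are made up.

def run_pipeline(stages):
    """stages: list of (name, callable returning True on success)."""
    log = []
    for name, step in stages:
        ok = step()
        log.append((name, ok))
        if not ok:
            break  # fail fast: later stages never run
    return log

log = run_pipeline([
    ("build",  lambda: True),
    ("test",   lambda: False),   # simulated test failure
    ("deploy", lambda: True),    # skipped because tests failed
])
```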


Eligibility:

 1: Experience: 2 – 7 years

 2: Qualification: Bachelor's degree in Computer Science, IT, or related field

 3: Strong analytical and problem-solving skills.


Location: Chennai / Remote


Apply here: https://connectsblue.com/jobs/753/devops-engineer-at-bluepms-software-solutions-pvt-ltd

Read more
Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snowflake schema
SQL
ETL
Data modeling
+4 more

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work on the Australian Time Zone.
  • Immediate joiners are highly encouraged.


Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹25L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconJava
skill iconAmazon Web Services (AWS)



We’re Hiring Backend Developers | Java / Go / Python | 3–5 Years | Bangalore

We are expanding our engineering team and looking for talented Backend Developers with 3–5 years of experience to join us in Bangalore.

If you enjoy building scalable systems, working with modern cloud technologies, and solving complex problems, this opportunity is for you!


💼 Position

Backend Developer (Java / Go / Python)

📍 Location: Bangalore

👨‍💻 Experience: 3–5 Years

🔎 What You Bring

✔ Strong proficiency in Go or similar backend stacks such as Python with FastAPI or Java with Spring Boot.

✔ Experience designing RESTful APIs

✔ Hands-on experience with AWS / GCP

✔ Experience working with PostgreSQL, Redis, Kafka, or SQS

✔ Strong experience with Microservices architecture

✔ Hands-on experience with CI/CD pipelines

✔ Experience with containerized environments (Docker / Kubernetes)

✔ Familiarity with monitoring tools like Prometheus, Grafana, and Spring Boot Actuator

✔ Strong understanding of data structures, algorithms, and system design fundamentals

✔ Ability to own features end-to-end and solve complex engineering problems

✔ Strong focus on code quality, observability, and operational ownership

✔ Comfortable working in fast-paced, high-growth environments





Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
DrSoumya Sahadevan
Posted by DrSoumya Sahadevan
Pune
5 - 15 yrs
₹20L - ₹38L / yr
skill iconReact.js
API
AWS CloudFormation
skill iconDjango
skill iconNodeJS (Node.js)
+7 more

Availability: Full time 

Location: Pune, India 

Experience: 5–6 years

 

Tvarit Solutions Private Limited (wholly owned subsidiary of TVARIT GmbH, Germany). TVARIT provides software to reduce manufacturing waste such as scrap, energy, and machine downtime using its patented technology. With its software products and a highly competent team from renowned universities, TVARIT has gained customer trust across 4 continents within a short span of 3 years. TVARIT was ranked among the top 8 of 490 AI companies by the European Data Incubator, apart from many more awards by the German government and industrial organizations, making TVARIT one of the most innovative AI companies in Germany and Europe.

 

We are looking for a passionate Full Stack Developer (Level 2) to join our technology team in our Pune centre. You will be responsible for architecting, designing, developing, and testing software, leading the software development team, and working toward infrastructure development that will support the company’s solutions. You will get an opportunity to work closely on projects involving the automation of the manufacturing process.

 

Key Responsibilities 

· Full Stack Development: Design, develop, and maintain scalable web applications using React with TypeScript for the frontend and Node.js/Python for the backend.

· AI Integration: Collaborate with data scientists and ML engineers to integrate AI/ML models into the SaaS platform, ensuring seamless performance and usability.

· API Development & Optimization: Build and optimize high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI) to support real-time data processing and analytics.

· Database Engineering: Design, manage, and optimize data storage using relational (PostgreSQL), NoSQL (MongoDB/DynamoDB), graph, and vector databases for handling complex industrial data.

· Cloud-Native Deployment: Deploy, monitor, and manage services in containerized environments using Docker and Kubernetes on Linux-based systems (Ubuntu/Debian).

· System Architecture & Design: Contribute to architectural decisions, leveraging OOPs, microservices, domain-driven design, and design patterns to ensure scalability, security, and maintainability.

· Data Handling & Processing: Work with large-scale manufacturing datasets using Python (pandas) to enable predictive analytics and AI-driven insights.

· Collaboration & Agile Delivery: Partner with cross-functional teams—including product managers, manufacturing domain experts, and AI researchers—to translate business needs into technical solutions.

· Performance & Security: Ensure robust, secure, and high-performance software by implementing best practices in algorithms, data structures, and system design.

· Continuous Improvement: Stay updated on emerging technologies in AI, SaaS, and manufacturing systems to propose innovative solutions that enhance product capability.
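The data-handling responsibility above can be illustrated with a small sketch of the kind of preprocessing done with pandas at scale: flagging sensor readings that deviate from a rolling baseline. This stdlib-only version, with a hypothetical window size and threshold, shows the idea:

```python
from statistics import mean

# Sketch: flag sensor readings that deviate from a rolling baseline,
# a simplified form of the preprocessing done with pandas on large
# manufacturing datasets. Window size and threshold are illustrative.

def flag_anomalies(readings, window=3, threshold=1.5):
    flags = []
    for i, value in enumerate(readings):
        if i < window:
            flags.append(False)        # not enough history yet
            continue
        baseline = mean(readings[i - window:i])
        flags.append(abs(value - baseline) > threshold)
    return flags

# A temperature spike at index 4 stands out from its recent baseline.
temps = [70.0, 70.2, 69.9, 70.1, 75.0, 70.3]
flags = flag_anomalies(temps)
```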

 

Must-have technologies:

· 5+ years of experience working with React/TypeScript and Node.js at a production level

· Python, pandas, and high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI)

· Databases: relational DBs like PostgreSQL, NoSQL DBs like MongoDB or DynamoDB, vector databases, graph DBs

· OS: Linux flavor like Ubuntu, Debian

· Source Control and CI/CD

· Software Fundamentals: Excellent command on Algorithms and Data Structures

· Software design and architecture: OOP, design patterns, microservices, monolithic architectures, domain-driven design

· Containers: Docker and Kubernetes

· Cloud: fundamentals of AWS such as S3 buckets, EC2, IAM, security groups


Benefits and Perks:

· Be part of the product which is transforming the manufacturing landscape with AI

· Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you.

· Progressive leave policy for effective work-life balance.

· Get mentored by highly qualified internal resource groups and opportunities to avail industry-driven mentorship programs.

· Multicultural peer groups and supportive workplace policies. 

· Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.

 

 

 

What is it like to work at a startup?

Working for TVARIT (a deep-tech German IT startup) can offer you a unique blend of innovation, collaboration, and growth opportunities. But it's essential to approach it with a willingness to adapt and thrive in a dynamic environment.

 

If this position sparked your interest, do apply today!

Read more
Bengaluru (Bangalore)
1 - 2 yrs
₹5L - ₹6L / yr
skill iconAmazon Web Services (AWS)
DevOps

AWS DevOps Engineer (1–2 Years Experience)

We are looking for a motivated AWS DevOps Engineer with 1–2 years of experience to join our team. The ideal candidate should have hands-on experience with cloud infrastructure, CI/CD pipelines, and automation tools.

Key Responsibilities

  • Manage and maintain cloud infrastructure on Amazon Web Services
  • Build and manage CI/CD pipelines for automated deployments
  • Work with containerization tools like Docker
  • Assist in deployment and orchestration using Kubernetes
  • Monitor applications and infrastructure performance
  • Collaborate with development teams to improve deployment processes
  • Automate infrastructure using scripts and DevOps tools

Required Skills

  • 1–2 years experience in DevOps or Cloud Engineering
  • Strong knowledge of Amazon Web Services services such as EC2, S3, IAM
  • Experience with CI/CD tools like Jenkins or GitHub Actions
  • Knowledge of container tools like Docker
  • Familiarity with version control systems like Git
  • Basic scripting knowledge (Shell / Python)

Good to Have

  • Experience with Infrastructure as Code tools like Terraform
  • Knowledge of monitoring tools such as Prometheus or Grafana
  • Understanding of Linux environments


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Monika Sekaran
Posted by Monika Sekaran
Pune
7 - 11 yrs
Best in industry
skill iconJava
skill iconSpring Boot
skill iconAmazon Web Services (AWS)
Microservices
Design patterns
+2 more

Key Responsibilities:

  • Design, develop, and maintain scalable backend applications using Java and Spring Boot.
  • Build and consume RESTful APIs and ensure secure, reliable API integrations.
  • Develop microservices-based architecture and deploy applications in cloud environments.
  • Work with cloud platforms such as AWS/Azure/GCP for application deployment and management.
  • Write clean, maintainable, and efficient code following best practices.
  • Implement CI/CD pipelines and support DevOps practices.
  • Optimize applications for performance, scalability, and reliability.
  • Collaborate with cross-functional teams including frontend, QA, DevOps, and product teams.
  • Participate in code reviews, technical design discussions, and architectural decisions.
  • Troubleshoot production issues and provide timely resolution.

Required Skills & Qualifications:

  • 5–10 years of hands-on experience in Java (Java 8 or above).
  • Strong experience with Spring Boot, Spring MVC, Spring Data, Spring Security.
  • Solid understanding of RESTful API design & development.
  • Experience in microservices architecture.
  • Hands-on experience with at least one cloud platform (AWS / Azure / GCP).
  • Knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
  • Experience with relational and/or NoSQL databases (MySQL, PostgreSQL, MongoDB).
  • Familiarity with CI/CD tools (Jenkins, GitHub Actions, etc.).
  • Strong understanding of Git and version control practices.
  • Good understanding of design patterns and object-oriented programming principles.


Read more
Tradelab Technologies

at Tradelab Technologies

1 candid answer
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
10 - 15 yrs
₹30L - ₹50L / yr
CI/CD
skill iconAmazon Web Services (AWS)
Terraform
skill icongrafana

Key Responsibilities

DevOps Strategy & Leadership

  • Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
  • Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
  • Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.

CI/CD & Infrastructure Automation

  • Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
  • Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
  • Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.

Cloud & On‑Prem Infrastructure Management

  • Design and manage hybrid infrastructures across AWS, GCP, and on-premise data centers ensuring high availability and fault tolerance.
  • Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
  • Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.

Performance Monitoring & Optimization

  • Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
  • Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
  • Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.
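The predictive-alerting responsibility above rests on a simple idea that Prometheus-style rules encode with a `for:` clause: only fire after a threshold has been breached for several consecutive samples, so a single noisy spike doesn't page anyone. A stdlib sketch with made-up latency numbers:

```python
# Sketch of the debounced-alert idea behind Prometheus-style rules:
# fire only after the threshold holds for `sustained` consecutive
# samples. Metric name and values are illustrative.

def evaluate_alerts(samples, threshold, sustained=3):
    """Return the indices at which the alert is firing."""
    firing, streak = [], 0
    for i, value in enumerate(samples):
        streak = streak + 1 if value > threshold else 0
        if streak >= sustained:
            firing.append(i)
    return firing

# One isolated spike (index 1) never fires; a sustained breach does.
latency_ms = [40, 95, 42, 90, 92, 96, 97, 50]
fired = evaluate_alerts(latency_ms, threshold=85)
```

Production observability stacks evaluate this over time windows and attach routing, deduplication, and automated remediation, but the debounce logic is the core.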

Security & Compliance

  • Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
  • Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
  • Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.


Required Skills & Qualifications

  • 10+ years of DevOps experience, with 5+ years in a leadership capacity.
  • Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
  • Strong command of AWS, GCP, and hybrid cloud infrastructures.
  • Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
  • Advanced proficiency in Terraform, Helm, and overall IaC workflows.
  • Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
  • Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
  • Excellent scripting skills in Python, Bash, or Go for automation and tooling.
  • Deep understanding of security principles, encryption, IAM, and compliance frameworks.


Good to Have

  • Experience with ultra-low-latency or high-frequency trading systems.
  • Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
  • Familiarity with Redis, Nginx, or other high‑throughput systems.
  • Exposure to micro‑second‑level performance tuning or network acceleration technologies.


Why Join Us?

  • Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
  • A culture where innovation, ownership, and bold thinking are valued.
  • Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
  • Build systems that influence markets and redefine the fintech landscape.


This isn’t just a role—it’s a challenge, a platform, and a proving ground.

Ready to step up? Apply now.

Read more
Applix

at Applix

3 candid answers
Eman Khan
Posted by Eman Khan
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
skill iconPython
Microsoft Windows Azure
Windows Azure
Artificial Intelligence (AI)
skill iconAmazon Web Services (AWS)
+1 more

About the Role

Applix is looking for a Python Software Engineer with strong Azure cloud experience to build and operate AI-powered applications and agentic workflows. The engineer will work closely with our enterprise client teams to develop, deploy, and maintain AI solutions running on the Azure platform.


This role combines Python application development, AI platform integration, and cloud deployment responsibilities.


Key Responsibilities

  • Build and maintain Python-based services and AI agents
  • Develop and manage agentic workflows and automation pipelines
  • Deploy and monitor applications on Azure cloud services
  • Integrate with Azure AI services such as Azure OpenAI and Azure Document Intelligence
  • Manage application deployments using Azure App Services or equivalent cloud platforms
  • Monitor system performance, logs, and reliability in production environments
  • Work with engineering teams to ensure scalable and secure deployments
  • Support CI/CD pipelines and DevOps practices for application delivery


Experience

3–8 years of relevant experience in software engineering and cloud development.


Required Skills

  • Strong programming experience in Python
  • Experience deploying applications on Microsoft Azure
  • Familiarity with Azure App Services or equivalent cloud services
  • Understanding of cloud deployment, monitoring, and DevOps practices
  • Experience building APIs, automation workflows, or backend services
  • Good problem-solving ability and communication skills
  • Experience with Azure OpenAI
  • Experience with Azure Document Intelligence
  • Familiarity with Azure AI Foundry or AI platform services
  • Exposure to LLM-based applications or AI workflows
  • Experience with CI/CD pipelines and cloud automation
Read more
Service Co

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹25L - ₹35L / yr
SQL
skill iconPython
skill iconAmazon Web Services (AWS)
Data Lake
OLTP
+6 more

Hiring for Lead Data Engineer


Exp : 6 - 10 yrs

Edu : Any Graduates

Work Location : Noida WFO


Skills :

Team handling experience


Advanced SQL and PySpark


Data Engineering concepts (Data Warehouse (DW), Data Lake, OLTP vs OLAP, etc.)


API development experience (preferably in Python)


Familiarity with Docker and Kubernetes


Experience with Airflow and DBT


Exposure to Hudi, Iceberg, or Delta Lake


Strong AWS project experience

Read more
Service Co

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹10L - ₹15L / yr
PySpark
SQL
skill iconAmazon Web Services (AWS)
Apache Airflow
Hadoop
+1 more

Hiring for Data Engineer - AWS


Exp : 3 - 6 yrs

Edu : BE/B.Tech

Work Location : Noida WFO


Skills


Data engineering, PySpark, SQL, AWS, data pipelines, Airflow, Hadoop

Read more
Techjays
Agency job
via techjays by Samuel Santhosh P
Remote, Coimbatore
5 - 6.5 yrs
₹30L - ₹45L / yr
skill iconPython
skill iconDjango
skill iconFlask
RESTful APIs
WebSocket
+12 more

We are seeking an experienced Python Lead to design, develop, and scale high-performance backend systems. The ideal candidate will have strong expertise in Python-based backend development, system design, and cloud-native architectures. You will lead the development of scalable APIs, work with modern cloud platforms, and collaborate with cross-functional teams to deliver reliable and efficient applications.

Key Responsibilities

  • Design and develop scalable backend services using Python (Django/Flask).
  • Build and maintain RESTful APIs and WebSocket-based applications.
  • Implement efficient algorithms, data structures, and design patterns for high-performance systems.
  • Develop and optimize database schemas and queries using PostgreSQL, MySQL, or MongoDB.
  • Integrate caching and queuing systems to improve system performance and reliability.
  • Deploy and manage applications on AWS or GCP cloud environments.
  • Implement and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions.
  • Work with Docker containers and Linux-based environments for development and deployment.
  • Collaborate with engineering teams to design scalable system architectures.
  • Explore and integrate AI-driven capabilities such as RAG, LLMs, and vector databases where applicable.
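The caching responsibility above reduces to memoization: serve repeated reads from memory instead of repeating an expensive lookup. Production systems would use Redis or Memcached over the network; this in-process sketch with `functools.lru_cache` and a stand-in "query" shows the idea:

```python
from functools import lru_cache

# Minimal illustration of the caching idea: memoize an expensive
# lookup so repeated calls skip the slow path. Production systems
# would use Redis/Memcached; the function below is a stand-in.

CALLS = {"count": 0}

@lru_cache(maxsize=128)
def get_user_profile(user_id):
    CALLS["count"] += 1            # stands in for a slow DB query
    return {"id": user_id, "name": f"user-{user_id}"}

first = get_user_profile(42)
second = get_user_profile(42)      # cache hit: no second "query"
```

The same hit/miss and eviction (`maxsize`) trade-offs carry over to an external cache, with invalidation and TTLs added on top.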

Required Skills

  • Strong expertise in Python backend development using Django or Flask
  • Experience with REST APIs, WebSockets, and microservices architecture
  • Solid knowledge of design patterns, algorithms, and data structures
  • Experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB)
  • Hands-on experience with AWS or GCP cloud services
  • Experience with CI/CD pipelines and containerization (Docker)
  • Proficiency in Git and Linux environments

Preferred Skills

  • Familiarity with AI/ML concepts
  • Experience with RAG architectures and LLM integrations
  • Knowledge of vector databases such as Pinecone or ChromaDB

What We’re Looking For

  • Strong problem-solving and system design skills
  • Ability to lead backend development initiatives
  • Experience building scalable and production-grade systems
  • Excellent collaboration and communication skills


Read more
Wohlig Transformations Pvt Ltd
Apoorva Lakshkar
Posted by Apoorva Lakshkar
Mumbai
7 - 10 yrs
₹15L - ₹23L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
DevOps
skill iconKubernetes

Job Overview 


We are seeking an experienced Senior Solution Architect to join our dynamic DevOps organization. The ideal candidate will have a strong background in cloud technologies, with expertise in migration projects across platforms such as GCP, AWS, and Azure. The candidate should possess a deep understanding of DevOps principles, Kubernetes orchestration, data migration and management, and automation tools like CI/CD pipelines and Terraform. The individual should be highly skilled in designing scalable application architectures capable of handling substantial workloads while ensuring the highest standards of quality.


Key Responsibilities 


  • Lead and drive cloud migration projects from on-premises data centers or other cloud platforms to GCP, AWS, or Azure.
  • Design and implement migration strategies that ensure minimal downtime and maximum efficiency.
  • Demonstrate proficiency in GCP, AWS, and Azure, with the ability to choose and optimize solutions based on specific business requirements.
  • Provide guidance on selecting the appropriate cloud services for various workloads.
  • Design, implement, and optimize CI/CD pipelines to streamline software delivery.
  • Utilize Terraform for infrastructure as code (IaC) to automate deployment processes.
  • Collaborate with development and operations teams to enhance the overall DevOps culture.
  • Possess in-depth knowledge and practical experience with Kubernetes orchestration for containerized applications.
  • Architect and optimize Kubernetes clusters for high availability and scalability.
  • Engage in research and development activities to stay abreast of industry trends and emerging technologies.
  • Evaluate and introduce new tools and methodologies to enhance the efficiency and effectiveness of cloud solutions.
  • Architect solutions that can handle large-scale workloads and provide guidance on scaling strategies.
  • Ensure high-performance levels and reliability in production environments.
  • Design scalable and high-performance database architectures tailored to meet business needs.
  • Execute database migrations with a keen focus on data consistency, integrity, and performance.
  • Develop and implement database pipelines to automate processes such as data migrations, schema changes, and backups.
  • Optimize database workflows to enhance efficiency and reliability.
  • Work closely with clients to assess and enhance the quality of existing architectures.
  • Implement best practices to ensure robust, secure, and well-architected solutions.
  • Drive migration projects, collaborating with cross-functional teams to ensure successful execution.
  • Provide technical leadership and mentorship to junior team members.


Required Skills and Qualifications: 


  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Relevant industry experience in a Solution Architect role.
  • Proven experience in leading cloud migration projects across GCP, AWS, and Azure.
  • Expertise in DevOps practices, CI/CD pipelines, and infrastructure automation.
  • In-depth knowledge of Kubernetes and container orchestration.
  • Strong background in scaling architectures to handle significant workloads.
  • Sound knowledge of database migrations.
  • Excellent communication skills and the ability to articulate complex technical concepts to both technical and non-technical stakeholders.


Read more
PhotonMatters
Human Resource
Posted by Human Resource
Remote only
2 - 11 yrs
₹4L - ₹12L / yr
CI/CD
skill iconAmazon Web Services (AWS)
Terraform
Ansible
skill iconDocker
+4 more

Role Overview:

We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.

Key Responsibilities:

  • Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
  • Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
  • Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
  • Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
  • Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
  • Implement security best practices in pipelines, infrastructure, and cloud environments.
  • Maintain version control and manage release cycles.
  • Troubleshoot and resolve production issues efficiently.

Required Skills & Qualifications:

  • Bachelor’s degree in Computer Science, IT, or related field.
  • Proven experience in DevOps, system administration, or cloud engineering.
  • Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
  • Hands-on experience with containerization (Docker, Kubernetes).
  • Experience with cloud platforms (AWS, Azure, or GCP).
  • Scripting skills (Python, Bash, or PowerShell).
  • Knowledge of infrastructure as code (Terraform, CloudFormation).
  • Familiarity with monitoring and logging tools.
  • Strong problem-solving, communication, and teamwork skills.

Preferred Qualifications:

  • Experience with microservices architecture.
  • Knowledge of networking, load balancing, and firewalls.
  • Exposure to Agile/Scrum methodologies.

What We Offer:

  • Competitive salary
  • Flexible working hours and remote options.
  • Learning and development opportunities.
  • Collaborative and inclusive work environment.


Read more
PhotonMatters
Human Resource
Posted by Human Resource
Remote only
4 - 13 yrs
₹8L - ₹20L / yr
skill iconPython
ETL
Spark
skill iconAmazon Web Services (AWS)
ELT
+2 more

Job Title: Data Engineer

Experience: 4–14 Years

Work Mode: Remote

Employment Type: Full-Time

 

Position Overview:

We are looking for highly experienced Senior Data Engineers to design, architect, and lead scalable, cloud-based data platforms on AWS. The role involves building enterprise-grade data pipelines, modernizing legacy systems, developing high-performance scoring engines and analytics solutions, and collaborating closely with architecture, analytics, risk, and business teams to deliver secure, reliable, and scalable data solutions.

 

Key Responsibilities:

  • Design and build scalable data pipelines for financial and customer data
  • Build and optimize scoring engines (credit, risk, fraud, customer scoring)
  • Design, develop, and optimize complex ETL/ELT pipelines (batch and real-time)
  • Ensure data quality, governance, reliability, and compliance standards
  • Optimize large-scale data processing using SQL, Spark/PySpark, and cloud technologies
  • Lead cloud data architecture, cost optimization, and performance tuning initiatives
  • Collaborate with Data Science, Analytics, and Product teams to deliver business-ready datasets
  • Mentor junior engineers and establish best practices for data engineering
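
The scoring-engine responsibility above is, at its simplest, a rule-based decision engine: weighted rules evaluate a record and the total maps to a decision band. A minimal pure-Python sketch (the rule names, weights, and thresholds are illustrative assumptions, not any actual credit model):

```python
# Minimal rule-based scoring engine sketch. Each rule inspects an
# applicant record and contributes a weighted score; the total maps
# to a decision band. All rules and weights here are illustrative.

RULES = [
    # (name, predicate, points)
    ("stable_income",   lambda r: r["monthly_income"] >= 50_000, 30),
    ("low_utilization", lambda r: r["credit_utilization"] < 0.4,  25),
    ("no_defaults",     lambda r: r["past_defaults"] == 0,        45),
]

def score(record: dict) -> int:
    """Sum the points of every rule the record satisfies."""
    return sum(points for _, pred, points in RULES if pred(record))

def decide(record: dict) -> str:
    s = score(record)
    if s >= 70:
        return "approve"
    if s >= 40:
        return "review"
    return "reject"

applicant = {"monthly_income": 60_000, "credit_utilization": 0.2, "past_defaults": 1}
print(decide(applicant))  # 30 + 25 = 55 -> "review"
```

Production engines add rule versioning, explainability (which rules fired), and audit logging, but the evaluate-then-band shape stays the same.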

 

Key Requirements:

  • Strong programming skills in Python and advanced SQL
  • Experience building scalable scoring or rule-based decision engines
  • Hands-on experience with Big Data technologies (Spark/PySpark/Kafka)
  • Strong expertise in designing ETL/ELT pipelines and data modeling
  • Experience with cloud platforms (AWS/Azure) and modern data architectures
  • Solid understanding of data warehousing, data lakes, and performance tuning
  • Knowledge of CI/CD, version control (Git), and production support best practices

Read more
Software and consulting company

Software and consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
skill icon.NET
ReAct (Reason + Act)
skill iconAmazon Web Services (AWS)
Unit testing
skill iconReact Native
+22 more

FULL STACK DEVELOPER

JOB DESCRIPTION – FULL STACK DEVELOPER 

Location: Bangalore 

 

Key Responsibilities

  • Establish processes, SLAs, and escalation protocols for the support and maintenance of web applications
  • Manage stakeholders with effective communication and collaborate with cross-functional teams to address issues and maintain business continuity
  • Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and the latest tech such as Agentic AI and GitHub Copilot
  • Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups
  • Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team
  • Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows
  • Assist software designers/implementers with the creation of detailed software design specifications
  • Participate in the system specification review process to ensure system requirements can be translated into valid software architecture
  • Integrate internal and external product designs into a cohesive user experience
  • Identify and keep track of metrics that indicate how software is performing
  • Handle technical and non-technical queries from the development team and stakeholders
  • Ensure that all development practices follow best practices and any relevant policies/procedures

 

Other Duties

  • Maintain project reporting including dashboards, status reports, road maps, burn down, velocity, and resource utilization
  • Own the technical solution and ensure all technical aspects are implemented as designed
  • Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability
  • Flexible to work in rotational shifts

 

Required Qualifications

  • Previous experience leading full stack technology projects with scrum teams and stakeholder management
  • BTech or MTech in computer science, or a related field
  • 3-5 years of experience

 

Required Knowledge, Skills and Abilities

  • Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material, Bootstrap, TypeScript, SCSS, Microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, Agentic AI, GitHub Copilot
  • Azure DevOps, Design System, micro frontends, Data Science
  • Stakeholder management and excellent communication skills

 

Must have skills

React - 3 years

React Native - 3 years

Redux - 1 year

Material UI - 1 year

Typescript - 1 year

Bootstrap - 1 year

Microservices - 2 years

SQL - 1 year

Azure - 1 year

 

Nice to have skills

.NET Core - 3 years

.NET 8 - 3 years

AWS - 1 year

LINQ - 1 year

Read more
CAW.Tech

at CAW.Tech

5 recruiters
Ranjana Singh
Posted by Ranjana Singh
Bengaluru (Bangalore), Hyderabad
4 - 7 yrs
Best in industry
skill iconPython
FastAPI
skill iconNodeJS (Node.js)
skill iconReact.js
Large Language Models (LLM)
+5 more

Role

We are looking for a Full Stack Engineer who can own the entire technical stack, design systems that scale, and ship products fast. You will work across frontend, backend, and AI systems, making key architectural decisions while building a product used by real users.

This role offers high ownership, where engineers move ideas to production quickly and take responsibility for both technical decisions and product impact.


What would you do?

  • Build and own the end-to-end platform using React, Node.js microservices, Python AI agents, and AWS.
  • Design and implement scalable system architecture, including caching, databases, and state management between AI and UI.
  • Develop AI-powered backend services and orchestrate LLM workflows using modern frameworks.
  • Build highly interactive front-end experiences using modern React and real-time communication tools.
  • Define and maintain engineering best practices, including CI/CD pipelines, monorepo structures, and development workflows.
  • Collaborate closely with users and product teams to identify problems and ship impactful solutions.
  • Continuously simplify systems by removing unnecessary complexity and keeping architecture clean.


Who should apply?

  • Engineers with 4+ years of experience building and shipping production-grade products.
  • Strong understanding of system design, architecture, and scalable backend systems.
  • Hands-on experience with Python (FastAPI, async systems) and LLM-based applications.
  • Proficiency in JavaScript / TypeScript with Node.js and modern backend frameworks.
  • Experience building modern frontend applications using React (React 18+).
  • Familiarity with databases such as Redis, PostgreSQL, or MongoDB, and designing scalable APIs.
  • Engineers comfortable working in fast-paced environments with high ownership and minimal process overhead.


Technical Skills

  • Backend: Node.js, Express, Python, FastAPI
  • Frontend: React (React 18+), interactive UI development
  • AI/LLM Systems: LLM orchestration, multi-model integrations
  • Databases: Redis, PostgreSQL, MongoDB
  • Infrastructure: AWS, CI/CD pipelines, microservices architecture
  • Real-time Systems: Socket.IO, Server-Sent Events (SSE)


Read more
Towards AGI
Shivani Sharma
Posted by Shivani Sharma
Bengaluru (Bangalore), Chennai
5 - 11 yrs
₹20L - ₹25L / yr
Data Transformation Tool (DBT)
skill iconAmazon Web Services (AWS)
Apache Airflow
SQL
Data engineering
+4 more

We are looking for an experienced Data Engineer with strong expertise in AWS, DBT, Databricks, and Apache Airflow to join our growing data engineering team.


Immediate joiners preferred


Role Overview 


The ideal candidate will design, develop, and maintain scalable data pipelines and data platforms to support analytics and business intelligence initiatives.


Key Responsibilities

  1. Design and build scalable data pipelines using AWS, Databricks, DBT, and Airflow.
  2. Develop and optimize ETL/ELT workflows for large-scale data processing.
  3. Implement data transformation models using DBT.
  4. Orchestrate workflows using Apache Airflow.
  5. Work with Databricks for big data processing and analytics.
  6. Ensure data quality, reliability, and performance optimization.
  7. Collaborate with data analysts, engineers, and business teams.
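
Orchestrating workflows with Airflow (item 4) ultimately means executing tasks in an order that respects DAG dependencies, which Airflow derives by topologically sorting the graph. A toy pure-Python illustration of that ordering (the task names are hypothetical; a real pipeline would declare them as Airflow operators):

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical pipeline: extract -> transform (DBT) -> load, plus a
# data-quality check that must run after the load. Each key maps a
# task to the set of tasks it depends on.
deps = {
    "transform": {"extract"},
    "load": {"transform"},
    "quality_check": {"load"},
}

# static_order() yields each task only after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'transform', 'load', 'quality_check']
```

Airflow adds scheduling, retries, and parallel execution of independent branches on top of this same dependency-resolution idea.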


Required Skills

  1. Strong experience with AWS data services
  2. Hands-on experience with Databricks
  3. Experience in DBT (Data Build Tool)
  4. Workflow orchestration using Apache Airflow
  5. Strong SQL and Python skills
  6. Experience in data warehousing and ETL pipelines


Read more
Generative AI Persona platform

Generative AI Persona platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹15L - ₹20L / yr
skill iconMachine Learning (ML)
skill iconPython
ETL
skill iconData Science
ELT
+6 more

Description

We are currently hiring for the position of Data Scientist / Senior Machine Learning Engineer (6–7 years’ experience).

 

We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)

 

Kindly note:

  • Location: Pune (Work From Office)
  • Immediate joiners preferred

 

While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort

 

Must have skills

Machine Learning - 6 years

Python - 6 years

ETL (Extract, Transform, Load) - 6 years

SQL - 6 years

Azure - 6 years

 

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Remote only
5 - 12 yrs
₹2L - ₹10L / yr
Generative AI
Data engineering
AWS Bedrock
Retrieval Augmented Generation (RAG)
Llama
+6 more

Job Title : Data / Generative AI Engineer

Experience : 5+ Years (Mid-Level) | 10+ Years (Senior)

Location : Remote

Employment Type : Contract

Open Positions : 5


Job Overview :

We are hiring Data / Generative AI Engineers for remote contract engagements supporting client-facing AI implementations. The role involves building production-grade Generative AI solutions on AWS, including conversational AI systems, RAG-based architectures, intelligent automation platforms, and scalable data engineering pipelines.


Mandatory Skills :

Amazon Bedrock, Generative AI, RAG Architecture, LangChain/LlamaIndex/Bedrock Agents, Python (3.9+), AWS Serverless (Lambda, API Gateway, Step Functions), Vector Databases, Data Engineering & ETL, AWS Glue, Amazon Athena.


Key Responsibilities :

  • Design and build production-ready Generative AI applications on AWS.
  • Implement Retrieval-Augmented Generation (RAG) architectures for enterprise AI solutions.
  • Integrate Amazon Bedrock with foundation models and enterprise systems.
  • Develop AI agent orchestration workflows using frameworks such as LangChain, LlamaIndex, or Bedrock Agents.
  • Build and manage serverless architectures using AWS services like Lambda, API Gateway, and Step Functions.
  • Implement vector databases and semantic search solutions for intelligent knowledge retrieval.
  • Design and maintain data engineering pipelines and ETL workflows for large-scale data processing.
  • Use AWS Glue for data transformation and orchestration.
  • Utilize Amazon Athena for querying large datasets and performing analytics.
  • Develop scalable Python-based APIs and backend services.
  • Collaborate with cross-functional teams and clients to deliver AI-powered solutions in production environments.
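
At its core, the RAG pattern above retrieves the documents most relevant to a query and supplies them to the model as context. A toy retrieval step using bag-of-words cosine similarity (a sketch only: production systems would use Bedrock embeddings and a vector database, and these documents are invented):

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; real RAG would use dense embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "glue jobs transform raw data into parquet",
    "athena queries data in s3 using sql",
    "bedrock hosts foundation models for generation",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by similarity to the query; the top-k become LLM context."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

print(retrieve("which service queries data with sql"))
```

The retrieved passages are then prepended to the prompt sent to the foundation model, grounding its answer in enterprise data rather than model weights alone.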


Required Skills :

  • Strong experience with Amazon Bedrock and foundation model integrations
  • Hands-on experience with LangChain, LlamaIndex, or Bedrock Agents
  • Advanced Python (3.9+) development and API building
  • Experience with AWS serverless architectures (Lambda, API Gateway, Step Functions)
  • Experience implementing vector databases and semantic search systems
  • Strong knowledge of data engineering and ETL pipeline development
  • Hands-on experience with AWS Glue for data transformation and orchestration
  • Experience using Amazon Athena for querying and analytics
  • Experience building RAG-based AI applications

Engagement Details :

  • Contract Duration : Minimum 3 to 6 Months
  • Work Timing : 8:00 AM – 4:00 PM EST
  • Start Timeline : Within 2 Weeks
  • Open Positions : 5
Read more
ManpowerGroup
Shirisha Jangi
Posted by Shirisha Jangi
Hyderabad, Bengaluru (Bangalore)
6 - 10 yrs
₹8L - ₹23L / yr
ASP.NET MVC
skill iconAngular (2+)
skill iconReact.js
Microsoft Windows Azure
skill iconAmazon Web Services (AWS)
+1 more

Great Opportunity for .NET Developer

📍 Location: Hyderabad/Bangalore

💼 Experience: 3 - 10 Years

🕒 Employment Type: Full-Time

🏢 Work Mode: Hybrid

📅 Notice Period: 0 - 1 Month

 

Responsibilities:

  • Lead moderately complex initiatives and deliverables within technical domain environments
  • Contribute to large-scale planning of strategies
  • Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments
  • Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
  • Resolve moderately complex issues and lead a team to meet existing or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
  • Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
  • Contribute heavily to projects, act as an escalation point, and provide guidance and direction to less experienced staff
  • Participate in daily scrum activities

 

Required Qualifications:

  • 3+ years of Software Engineering experience, or equivalent demonstrated through work experience
  • 2+ years of experience in C#, Web Forms, .NET, .NET Framework, .NET Core
  • 2+ years of experience in Angular/React.js
  • 2+ years of experience with SDLC and Agile tools such as JIRA, GitHub, Jenkins, Confluence, etc.

 

Desired Qualifications:

  • Bachelor's/Master's Degree in Computer Science or equivalent
  • 2+ years of experience in Enterprise Architecture
  • 2+ years of DevOps toolset-based continuous incremental delivery experience
  • 2+ years working in Azure Public Cloud Platform
  • 2+ years working in Micro Frontend Architecture
  • 2+ years using SQL Server or Oracle DB

 

Skillset:

  • SQL (PL/SQL and T-SQL)
  • CI/CD
  • REST APIs
  • C#, Web Forms, .NET, .NET Framework, .NET Core
  • Angular/React.js

Read more
Appsforbharat
Pooja V
Posted by Pooja V
Bengaluru (Bangalore)
6 - 13 yrs
Best in industry
skill iconGo Programming (Golang)
skill iconPython
skill iconAmazon Web Services (AWS)
SQL

About the role


We are seeking a seasoned Backend Tech Lead with deep expertise in Golang and Python to lead our backend team. The ideal candidate has 6+ years of experience in backend technologies and 2–3 years of proven engineering mentoring experience, having successfully scaled systems and shipped B2C applications in collaboration with product teams.

Responsibilities

Technical & Product Delivery

● Oversee design and development of backend systems operating at 10K+ RPM scale.

● Guide the team in building transactional systems (payments, orders, etc.) and behavioral systems (analytics, personalization, engagement tracking).

● Partner with product managers to scope, prioritize, and release B2C product features and applications.

● Ensure architectural best practices, high-quality code standards, and robust testing practices.

● Own delivery of projects end-to-end with a focus on scalability, reliability, and business impact.

Operational Excellence

● Champion observability, monitoring, and reliability across backend services.

● Continuously improve system performance, scalability, and resilience.

● Streamline development workflows and engineering processes for speed and quality.

Requirements

Experience:

● 7+ years of professional experience in backend technologies.

● 2-3 years as a tech lead driving delivery.

Technical Skills:

● Strong hands-on expertise in Golang and Python.

● Proven track record with high-scale systems (≥10K RPM).

● Solid understanding of distributed systems, APIs, SQL/NoSQL databases, and cloud platforms.

Leadership Skills:

● Demonstrated success in managing teams through 2-3 appraisal cycles.

● Strong experience working with product managers to deliver consumer-facing applications.

● Excellent communication and stakeholder management abilities.

Nice-to-Have

● Familiarity with containerization and orchestration (Docker, Kubernetes).

● Experience with observability tools (Prometheus, Grafana, OpenTelemetry).

● Previous leadership experience in B2C product companies operating at scale.

What We Offer

● Opportunity to lead and shape a backend engineering team building at scale.

● A culture of ownership, innovation, and continuous learning.

● Competitive compensation, benefits, and career growth opportunities.

Read more
LogIQ Labs Pvt.Ltd.

at LogIQ Labs Pvt.Ltd.

2 recruiters
HR eShipz
Posted by HR eShipz
Remote only
6 - 8 yrs
₹8L - ₹16L / yr
Terraform
skill iconAmazon Web Services (AWS)
IaC
YAML

Key Responsibilities

  • Design, implement, and maintain highly available infrastructure on AWS.
  • Automate infrastructure provisioning using Terraform (Infrastructure as Code).
  • Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
  • Build and manage CI/CD pipelines to enable safe and frequent deployments.
  • Implement robust monitoring, alerting, and logging solutions.
  • Perform incident response, root cause analysis (RCA), and postmortems.
  • Improve system resilience through automation and self-healing mechanisms.
  • Optimize cloud resource utilization and cost (FinOps awareness).
  • Collaborate with development teams to improve application reliability.
  • Manage containerized workloads using Docker and Kubernetes (EKS preferred).
  • Implement security and compliance best practices across infrastructure.
  • Maintain operational runbooks and documentation.
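
The SLI/SLO responsibility above can be made concrete: for an availability SLO, the error budget is simply the downtime the target permits over the window, and reliability work is prioritised by how much of that budget has been burned. A minimal sketch (the 99.9% target and downtime figures are illustrative):

```python
# Error budget: the downtime an availability SLO permits over a window.
# Example: a 99.9% SLO over a 30-day window allows 43.2 minutes down.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total minutes of allowed downtime for the window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

budget = error_budget_minutes(0.999)  # 43.2 minutes over 30 days
print(round(budget, 1), round(budget_remaining(0.999, 10.8), 2))  # 43.2 0.75
```

When remaining budget approaches zero, teams typically freeze risky releases and shift effort to reliability work; when plenty remains, they can ship faster.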

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 7–8 years of experience in SRE, DevOps, or Production Engineering.
  • Strong hands-on experience with AWS services.
  • Proven experience with Terraform for infrastructure automation.
  • Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
  • Strong scripting skills (Python, Bash, or Shell).
  • Experience with Linux system administration.
  • Hands-on experience with monitoring and observability tools.
  • Good understanding of networking and cloud security fundamentals.
  • Experience with Git and branching strategies


Read more
Pentabay Softwares

at Pentabay Softwares

1 recruiter
Sandhiya M
Posted by Sandhiya M
Chennai
1 - 5 yrs
₹2L - ₹8L / yr
ISO9001
ISO27001
Security Information and Event Management (SIEM)
Cyber Security
skill iconAmazon Web Services (AWS)
+4 more

Hi folks, we are currently hiring for a Security Engineer.


Hiring: Security Engineer

Company : Pentabay Softwares

Location : Anna salai, Mount Road

Mode: Fulltime


Pentabay Softwares INC is looking for a proactive Security Engineer (2–7 Years Exp) to fortify our global digital solutions. As we scale our footprint in the Healthcare IT sector, you will play a critical role in safeguarding sensitive data (ePHI) and ensuring our cloud-native architectures are resilient against evolving threats.


The Mission

You will be the architect of our defense, bridging the gap between high-speed development and rigorous security standards. Your day-to-day will involve "shifting security left" by embedding DevSecOps practices into our CI/CD pipelines and leading our compliance efforts for SOC 2, ISO 27001, and HIPAA.


Key Responsibilities


Defense & Architecture: Design and maintain secure cloud (AWS/Azure/GCP) and on-prem environments. Implement IAM policies, Zero Trust frameworks, and robust secrets management.

Offensive Testing: Conduct regular vulnerability assessments (VAPT), penetration testing, and code reviews using tools like Burp Suite and Nessus.

DevSecOps & Automation: Integrate SAST/DAST/SCA scanning into engineering workflows. Automate security tasks using Python or Bash.

Incident Response: Monitor SIEM tools (Splunk/CrowdStrike), respond to threats, and develop risk mitigation strategies.

Healthcare Compliance (Plus): Ensure data integrity for HL7/FHIR APIs and maintain HIPAA/HITECH audit readiness for healthcare clients.


What You Bring


Experience: 2–7 years in Information/Application Security with a strong grasp of the OWASP Top 10 and threat modeling (STRIDE).

Technical Depth: Proficiency in network/endpoint security, PKI, encryption standards (TLS/SSL), and container security (Docker/Kubernetes).

Compliance Knowledge: Familiarity with NIST, GDPR, and SOC 2 frameworks.

Tools: Hands-on experience with Metasploit, Wireshark, and Infrastructure-as-Code (Terraform).

Bonus Points: Industry certifications like OSCP, CISSP, or CEH, and experience in Healthcare IT workflows.

Experience in the auditing space (ISO 27001, ISO 9001) preferred.


Why Pentabay?

At Pentabay, we offer more than just a job; we offer a security-first engineering culture.

Growth: A dedicated learning budget for certifications and conferences.

Impact: Work on cutting-edge Healthcare projects that demand the highest levels of data privacy.


Send resumes to : sandhiya.m at pentabay.com

Read more
Truetech solutions

Truetech solutions

Agency job
via TrueTech Solutions by Meimozhi balu
Chennai
5 - 7 yrs
₹10L - ₹15L / yr
skill iconJava
skill iconAmazon Web Services (AWS)
AWS Lambda
fargate
AWS Simple Notification Service (SNS)
+1 more

Key Responsibilities:

• Design, develop, and maintain Java-based applications using best practices and modern frameworks.

• Integrate and manage AWS services such as EC2, S3, RDS, Lambda, and CloudFormation.

• Develop RESTful APIs and microservices architecture to support scalable and robust applications.

• Implement CI/CD pipelines using tools like Jenkins, Git, and Docker to automate deployment processes.

• Monitor and troubleshoot application performance, defect triaging, and reliability issues, providing timely support and resolution.

• Collaborate with cross-functional teams to gather requirements, design solutions, and deliver high-quality software.

• Conduct code reviews and ensure adherence to coding standards and best practices.

• Stay updated with the latest industry trends and AWS services to continuously improve our technology stack.


Qualifications:

• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

• 5-7 years of experience in Java development, with a strong understanding of object-oriented programming and design patterns.

• Proficiency in AWS services such as EC2, S3, RDS, Lambda, CloudFormation, etc.

• Experience with Spring Boot and other Java frameworks.

• Strong knowledge of SQL and NoSQL databases.

• Familiarity with containerization technologies like Docker and Kubernetes.

• Experience with CI/CD tools and processes.

• Excellent problem-solving skills and the ability to work independently and as part of a team.

• Strong communication skills and the ability to articulate technical concepts to non-technical stakeholders.

Read more
Recruiting Bond

at Recruiting Bond

2 candid answers
Pavan Kumar
Posted by Pavan Kumar
Mumbai, Navi Mumbai
10 - 15 yrs
₹55L - ₹80L / yr
Distributed Systems
Systems design
Systems architecture
High-level design
LLD
+77 more

Location: Mumbai, Maharashtra, India

Sector: Technology, Information & Media

Company Size: 500 - 1,000 Employees

Employment: Full-Time, Permanent

Experience: 10 - 14 Years (Engineering Leadership)

Level: Engineering Manager / Group EM


ABOUT THIS MANDATE :


Recruiting Bond has been exclusively retained by one of India's most prominent and well-established digital platform organisations operating at the intersection of Technology, Information, and Media to identify and place an exceptional Engineering Manager who can lead engineering teams through an enterprise-wide AI adoption and digital transformation agenda.


This is a high-impact, hands-on leadership role at the nexus of people, product, and technology. The organisation is executing one of the most ambitious AI transformation programmes in its sector, and this Engineering Manager will be a core driver of that change. You will lead multiple squads, own engineering delivery end-to-end, embed AI tooling and practices into the team's DNA, and shape the engineering culture of tomorrow.


We are seeking leaders who code when it matters, who build systems and teams with equal conviction, and who view AI not as a trend but as a fundamental shift in how great software is built.


THE OPPORTUNITY AT A GLANCE :


AI-First Engineering Culture :

  • Own AI adoption across your squads - from LLM tooling integration to automation-first delivery workflows. Make AI a default, not an afterthought.


Hands-On Engineering Leadership :

  • Stay close to the code. Lead architecture reviews, unblock engineers, and set the technical bar - not just the management agenda.


People & Org Builder :

  • Grow engineers into leaders. Build squads of 6 - 15 across functions. Drive hiring, career frameworks, and a culture of psychological safety.


KEY RESPONSIBILITIES :


1. Hands-On Technical Engagement :

  • Remain deeply embedded in the technical work : participate in design reviews, architecture decisions, and critical code reviews
  • Set and uphold the engineering quality bar : performance benchmarks, security standards, test coverage, and release quality
  • Provide technical direction on backend platform strategy, API design, service decomposition, and data architecture
  • Identify and resolve systemic technical debt and architectural risks across team-owned services
  • Unblock engineers by diving into complex problems : debugging, pair programming, and system analysis when it matters
  • Own key technical decisions in collaboration with Tech Leads and Principal Engineers; balance pragmatism with long-term sustainability


2. AI Adoption, Integration & Transformation (2026 Mandate) :

  • Define and execute the team's AI adoption roadmap - from developer tooling to product-facing AI features
  • Champion the integration of GenAI tools (GitHub Copilot, Cursor, Claude, ChatGPT) across the full engineering workflow : coding, testing, documentation, incident response
  • Embed LLM-powered capabilities into the product : recommendation engines, intelligent search, conversational interfaces, content generation, and predictive systems
  • Lead evaluation and adoption of AI-assisted SDLC practices : automated code review, AI-generated test suites, intelligent observability, and anomaly detection
  • Partner with Data Science and ML Platform teams to productionise ML models with robust MLOps pipelines
  • Build team literacy in prompt engineering, RAG (Retrieval-Augmented Generation), and AI agent frameworks
  • Create an experimentation culture : run structured AI pilots, measure productivity impact, and scale what works
  • Stay ahead of the AI tooling landscape and advise senior leadership on strategic AI investments and engineering implications


3. People Leadership & Team Development :

  • Lead, manage, and grow squads of 6–15 engineers across seniority levels (L2 through L6 / Junior through Staff)
  • Conduct structured 1:1s, career growth conversations, and development planning with every direct report
  • Design and execute personalised AI upskilling programmes; ensure every engineer develops practical AI fluency by end of 2026
  • Build and maintain a high-performance team culture: clarity of ownership, accountability, fast feedback loops, and psychological safety
  • Drive performance management fairly and rigorously: recognise top performers, manage underperformance constructively
  • Lead technical hiring end-to-end: define job requirements, conduct bar-raising interviews, and make data-driven hire decisions
  • Contribute to engineering career frameworks and level definitions in partnership with the VP / Director of Engineering


4. Engineering Delivery & Execution Excellence :

  • Own end-to-end delivery for multiple product squads, from planning and scoping through production release and post-launch stability
  • Implement and refine agile delivery frameworks (Scrum, Kanban, Shape Up) calibrated to squad needs and product cadence
  • Drive predictable delivery: maintain healthy sprint velocity, manage WIP limits, and ensure dependency resolution across teams
  • Establish and own engineering KPIs: DORA metrics (deployment frequency, lead time, MTTR, change failure rate), uptime SLOs, and velocity trends
  • Lead incident management: build blameless post-mortem culture, own RCA processes, and drive systemic reliability improvements
  • Balance technical debt repayment with feature velocity; negotiate prioritisation transparently with Product leadership
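As a rough illustration of the DORA-style KPIs named above (the record shape and numbers here are hypothetical, not taken from the role description), deployment frequency, change failure rate, and lead time can all be computed from simple deployment records:

```python
from datetime import datetime

# Hypothetical deployment records: (timestamp, caused_failure, lead_time_hours)
deployments = [
    (datetime(2026, 1, 5), False, 12.0),
    (datetime(2026, 1, 9), True, 30.0),
    (datetime(2026, 1, 12), False, 8.0),
    (datetime(2026, 1, 19), False, 20.0),
]

def dora_metrics(deploys, window_days=28):
    """Compute simple DORA-style metrics over a reporting window."""
    per_week = len(deploys) / (window_days / 7)               # deployment frequency
    failure_rate = sum(d[1] for d in deploys) / len(deploys)  # change failure rate
    avg_lead_time = sum(d[2] for d in deploys) / len(deploys) # lead time for changes
    return {
        "deploys_per_week": per_week,
        "change_failure_rate": failure_rate,
        "avg_lead_time_hours": avg_lead_time,
    }

print(dora_metrics(deployments))
# prints {'deploys_per_week': 1.0, 'change_failure_rate': 0.25, 'avg_lead_time_hours': 17.5}
```

In practice these records would come from the CI/CD system and incident tracker rather than a hand-built list; the point is that each DORA metric reduces to a simple aggregate over deployment events.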


5. Strategic Leadership & Cross-Functional Influence :

  • Serve as the primary engineering partner for Product, Design, Data, and Business stakeholders; translate ambiguity into executable engineering plans
  • Participate in quarterly roadmap planning, capacity forecasting, and OKR definition for engineering teams
  • Represent engineering in leadership forums; articulate technical constraints, risks, and opportunities in business terms
  • Contribute to org-wide engineering strategy: platform investments, build-vs-buy decisions, and shared infrastructure priorities
  • Build relationships across geographies (Mumbai HQ + distributed teams) to maintain alignment and delivery cohesion
  • Act as a culture carrier and ambassador for engineering excellence, innovation, and responsible AI use


AI TRANSFORMATION LEADERSHIP 2026 EXPECTATIONS :


In 2026, Engineering Managers at this organisation are expected to be active architects of AI transformation, not passive observers. The following outlines the specific AI leadership expectations for this role:


AI Developer Productivity

  • Drive measurable uplift in developer velocity through AI tooling adoption. Target : 30%+ reduction in code review cycle time and 40%+ increase in test coverage automation by Q3 2026.


LLM & GenAI Product Features

  • Own delivery of GenAI-powered product capabilities : intelligent content, semantic search, personalisation, and conversational UX in production, at scale.


AI-Augmented Observability

  • Implement AI-driven monitoring and anomaly detection pipelines. Reduce MTTR by leveraging predictive alerting, intelligent runbooks, and auto-remediation scripts.
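A minimal sketch of the kind of anomaly detection this implies (a trailing-window z-score over a latency series; the window size and threshold are illustrative assumptions, not the organisation's actual pipeline):

```python
import statistics

def anomalies(series, window=5, z_threshold=3.0):
    """Flag indices that deviate more than z_threshold sigmas from the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev and abs(series[i] - mean) / stdev > z_threshold:
            flagged.append(i)
    return flagged

# Latency samples (ms) with one obvious spike at index 7.
latencies = [102, 98, 101, 99, 103, 100, 97, 450, 101, 99]
print(anomalies(latencies))  # prints [7]
```

Production systems would use seasonality-aware models and stream processing, but the core idea of predictive alerting is the same: learn a baseline and page only on statistically significant deviations.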


Team AI Fluency :

  • Build mandatory AI literacy across all engineering levels.
  • Every engineer understands prompt engineering basics, AI ethics guardrails, and responsible AI deployment practices.


Responsible AI Governance :

  • Partner with Security, Legal, and Data Privacy to ensure all AI deployments meet compliance standards, bias mitigation requirements, and explainability benchmarks.


TECHNOLOGY STACK & DOMAIN FAMILIARITY REQUIRED :


  • Languages: Java / Go / Python / Node.js / PHP / Rust (must be hands-on in at least 2)
  • Cloud: AWS / GCP / Azure (multi-cloud exposure strongly preferred)
  • AI & GenAI: OpenAI / Anthropic / Gemini APIs / LangChain / LlamaIndex / RAG / Vector DBs / GitHub Copilot / Cursor / Hugging Face
  • Containers: Docker / Kubernetes / Helm / Service Mesh (Istio / Linkerd)
  • Databases: PostgreSQL / MongoDB / Redis / Cassandra / Elasticsearch / Pinecone (Vector DB)
  • Messaging: Apache Kafka / RabbitMQ / AWS SQS/SNS / Google Pub/Sub
  • MLOps & DataOps: MLflow / Kubeflow / SageMaker / Vertex AI / Airflow / dbt
  • Observability: Datadog / Prometheus / Grafana / OpenTelemetry / Jaeger / ELK Stack
  • CI/CD & IaC: GitHub Actions / ArgoCD / Jenkins / Terraform / Ansible / Backstage (IDP)


QUALIFICATIONS & CANDIDATE PROFILE :

Education :

  • B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution - CS, IS, ECE, AI/ML streams strongly preferred
  • Demonstrated engineering depth and leadership impact may complement institution pedigree


Experience :

  • 10 to 14 years of progressive engineering experience, with at least 3 years in a formal Engineering Manager or equivalent people-leadership role
  • Proven track record of managing and scaling engineering teams (6–15+ engineers) in a fast-growing SaaS or digital product environment
  • Hands-on backend engineering background: must be able to read, write, and critique production code
  • Direct experience driving AI/ML feature delivery or AI tooling adoption within engineering organisations
  • Exposure across start-up, mid-size, and large-scale product organisations preferred; adaptability is a core requirement
  • Strong CS fundamentals: distributed systems, algorithms, system design, and software architecture
  • Demonstrated career stability: minimum of 2 years of average tenure per organisation


The Ideal Engineering Manager in 2026 :

  • Leads with context, not control; empowers engineers while maintaining accountability and quality
  • Is fluent in both people language and technical language; switches registers naturally with engineers and executives alike
  • Sees AI as a force multiplier for the team, not a threat. Actively experiments with and advocates for AI tooling
  • Measures success by team outcomes, not personal output. Takes pride in what the team ships, not what they build alone
  • Creates feedback loops obsessively: between product and engineering, between seniors and juniors, between metrics and decisions
  • Has strong opinions, loosely held; brings conviction to discussions but updates on evidence
  • Invests in engineering excellence as seriously as delivery velocity; knows that quality and speed are not opposites


WHY THIS ROLE STANDS APART :


AI Transformation at Scale :

  • Lead one of the most significant AI adoption programmes in India's digital media sector.
  • Your decisions will shape how hundreds of engineers work in 2026 and beyond.


Hands-On & Strategic Balance :

  • A rare EM role that actively encourages technical depth.
  • Stay close to the code while owning the people agenda - the best of both worlds.


Established Platform, Real Scale :

  • 500–1,000 engineers, proven product-market fit, and the org maturity to execute.
  • This is not a greenfield startup gamble; it is a serious company with serious ambition.


Clear Leadership Growth Path :

  • A visible, direct path toward Director / VP of Engineering.
  • Senior leadership is invested in growing its next generation of technology executives.


Read more
LogIQ Labs Pvt.Ltd.

at LogIQ Labs Pvt.Ltd.

2 recruiters
HR eShipz
Posted by HR eShipz
Remote only
7 - 9 yrs
₹8L - ₹16L / yr
skill iconAmazon Web Services (AWS)
Terraform
skill iconJenkins

Key Responsibilities

  • Design, implement, and maintain highly available infrastructure on AWS.
  • Automate infrastructure provisioning using Terraform (Infrastructure as Code).
  • Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
  • Build and manage CI/CD pipelines to enable safe and frequent deployments.
  • Implement robust monitoring, alerting, and logging solutions.
  • Perform incident response, root cause analysis (RCA), and postmortems.
  • Improve system resilience through automation and self-healing mechanisms.
  • Optimize cloud resource utilization and cost (FinOps awareness).
  • Collaborate with development teams to improve application reliability.
  • Manage containerized workloads using Docker and Kubernetes (EKS preferred).
  • Implement security and compliance best practices across infrastructure.
  • Maintain operational runbooks and documentation.
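To make the SLI/SLO/error-budget bullet concrete, here is a minimal sketch (the 99.9% target, request counts, and function name are illustrative assumptions) of how consumption of an availability error budget can be tracked:

```python
def error_budget(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    """Track how much of an availability error budget has been consumed.

    slo_target: e.g. 0.999 for a 99.9% availability SLO.
    """
    allowed_failures = total_requests * (1 - slo_target)  # the error budget
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "allowed_failures": allowed_failures,
        "budget_consumed": consumed,          # 1.0 means the budget is exhausted
        "budget_remaining": max(0.0, 1.0 - consumed),
    }

# 1M requests against a 99.9% SLO allows ~1,000 failures; 250 failures uses ~25%.
status = error_budget(0.999, 1_000_000, 250)
print(status)
```

Teams typically alert on the burn rate (how fast the budget is being consumed) rather than on raw failure counts, and freeze risky releases once `budget_remaining` approaches zero.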

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field.
  • 7–8 years of experience in SRE, DevOps, or Production Engineering.
  • Strong hands-on experience with AWS services.
  • Proven experience with Terraform for infrastructure automation.
  • Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
  • Strong scripting skills (Python, Bash, or Shell).
  • Experience with Linux system administration.
  • Hands-on experience with monitoring and observability tools.
  • Good understanding of networking and cloud security fundamentals.
  • Experience with Git and branching strategies.


Read more
Recruiting Bond

at Recruiting Bond

2 candid answers
Pavan Kumar
Posted by Pavan Kumar
Bengaluru (Bangalore)
6 - 12 yrs
₹40L - ₹55L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconNodeJS (Node.js)
skill iconJava
Distributed Systems
+27 more

NOW HIRING · WORLD-CLASS TALENT: Backend Tech Lead (Senior-Level Engineering Leadership)

Placed by Recruiting Bond on behalf of a Confidential Digital Platform Leader

📍Location: Bengaluru, India (Hybrid / On-Site)

🏢Sector: Technology, Information & Media

👥Company Size: 500 – 1,000 Employees

💼Employment: Full-Time, Permanent

🎯Experience: 6 – 9 Years (Backend Engineering)

🚀 Level: Tech Lead


ABOUT THIS MANDATE

Recruiting Bond has been exclusively retained by one of India's most well-established digital platform organisations — a company operating at the intersection of Technology, Information, and Media — to identify and place a world-class Backend Tech Lead who can drive a transformational engineering agenda at scale.

This is not an ordinary role. The organisation is executing a high-stakes, large-scale modernisation of its backend infrastructure — migrating from legacy monolithic systems to resilient, cloud-native, AI-augmented distributed architectures that serve millions of concurrent users. The person in this seat will be a core pillar of that transformation.

We are looking exclusively for the top 1% — engineers who think in systems, own outcomes, and lead by example.


THE OPPORTUNITY AT A GLANCE

🏗️ Architecture Ownership

Drive system design decisions across the entire backend platform. Shape the future of distributed, fault-tolerant architecture.

🤖 AI-Augmented Engineering

Embed GenAI and LLM tooling directly into the SDLC. Champion automation-first development practices across squads.

🎓 Engineering Leadership

Mentor and grow the next generation of backend engineers. Lead hiring, reviews, and cross-functional technical alignment.


KEY RESPONSIBILITIES


1. Architecture & Platform Modernisation

  • Lead the full migration of legacy monolithic systems to a scalable, cloud-native microservices architecture
  • Design and own distributed, fault-tolerant backend systems with sub-millisecond SLO targets
  • Architect API-first and event-driven platforms using async messaging patterns (Kafka, Pub/Sub, SQS)
  • Resolve systemic performance bottlenecks, concurrency conflicts, and scalability ceilings
  • Establish backend design standards, coding guidelines, and architectural review processes


2. Distributed Systems Engineering (Production-Grade)

  • Design and implement Webhook reliability frameworks with intelligent retry and exponential backoff strategies
  • Build idempotent, versioned APIs with enterprise-grade rate limiting and throttling controls
  • Implement circuit breakers, bulkheads, and resilience patterns using Resilience4j / Hystrix or equivalents
  • Engineer Dead-Letter Queue (DLQ) strategies and event reprocessing pipelines with guaranteed delivery semantics
  • Apply Saga orchestration and choreography patterns for distributed transaction integrity
  • Execute zero-downtime deployments and canary release strategies with rollback capability
  • Design and enforce multi-region disaster recovery and business continuity protocols
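The retry-with-exponential-backoff pattern named above can be sketched as follows (a simplified, generic illustration with full jitter, not the organisation's actual webhook framework; the delay values are assumptions):

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a flaky call with exponential backoff and full jitter.

    Doubles the delay ceiling on each failure and sleeps a random amount
    up to that ceiling, so retrying clients do not synchronise.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the error (or route to a DLQ)
            ceiling = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, ceiling))

# Example: a webhook delivery that fails twice before succeeding.
attempts = {"n": 0}
def flaky_delivery():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("endpoint unavailable")
    return "delivered"

print(call_with_backoff(flaky_delivery, base_delay=0.01))  # prints delivered
```

A production version would retry only on retryable errors, attach an idempotency key so redelivered webhooks are safe to process twice, and hand permanently failing events to a dead-letter queue.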


3. AI-Driven Engineering Practices

  • Champion LLM and GenAI adoption as first-class tooling across the software development lifecycle
  • Apply prompt engineering techniques for automated code generation, review, and documentation workflows
  • Utilise AI-assisted debugging, root cause analysis, and predictive performance optimisation
  • Build automation-first pipelines that reduce toil and accelerate delivery velocity
  • Evaluate and integrate emerging AI developer tools into the engineering ecosystem


4. Engineering Leadership & Culture

  • Own backend platforms end-to-end with full accountability across development, stability, and performance
  • Actively mentor, coach, and elevate engineers at all levels (L3–L6) through structured 1:1s and code reviews
  • Drive and lead technical hiring — from designing assessments to final hire decisions
  • Partner with Product, Data, DevOps, and Security stakeholders to align engineering with business objectives
  • Represent the engineering org in cross-functional roadmap planning and architecture decision reviews
  • Foster a culture of technical excellence, psychological safety, and high-velocity delivery


TECHNOLOGY STACK (HANDS-ON PROFICIENCY REQUIRED)

Languages: Java (primary) · Go · Python · Node.js · PHP · Rust

Cloud: AWS · GCP · Azure (Multi-cloud exposure preferred)

Containers: Docker · Kubernetes · Helm · Service Mesh (Istio / Linkerd)

Databases: PostgreSQL · MySQL · MongoDB · Cassandra · Redis · Elasticsearch

Messaging: Apache Kafka · RabbitMQ · AWS SQS/SNS · Google Pub/Sub

Observability: Datadog · Prometheus · Grafana · OpenTelemetry · Jaeger · ELK Stack

CI/CD & IaC: GitHub Actions · Jenkins · ArgoCD · Terraform · Ansible

AI & GenAI: OpenAI / Claude APIs · LangChain · RAG Pipelines · GitHub Copilot · Cursor


QUALIFICATIONS & CANDIDATE PROFILE


Education

  • B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution — CS, IS, ECE, AI/ML streams strongly preferred
  • Exceptional real-world engineering track record may be considered in lieu of institution pedigree


Experience

  • 6 to 9 years of progressive backend engineering experience with demonstrable ownership and impact
  • Proven track record of shipping and scaling production SaaS / Product systems at significant user load
  • Exposure to and success within start-up, mid-size, and large-scale product organisations — the full spectrum
  • Strong computer science fundamentals: algorithms, data structures, distributed systems theory, OS internals
  • Demonstrated career stability — minimum 2 years average tenure per organisation


The Ideal Candidate Exemplifies

  • System-level thinking with an ability to hold context across code, architecture, product, and business
  • An ownership mindset — no task is 'not my job'; outcomes and quality are personal commitments
  • Strong written and verbal communication skills for asynchronous, cross-functional collaboration
  • Intellectual curiosity: actively follows engineering trends, contributes to the community (OSS, blogs, talks)
  • Bias for automation, observability, and engineering efficiency at every level
  • A mentor's instinct — genuine desire to grow others and raise the capability of the team around them


WHY THIS ROLE STANDS APART

✅ Transformational Scope

Lead platform modernisation at scale. Your architectural choices will define systems serving millions of users for years.

✅ AI-Forward Engineering Culture

Be at the forefront of AI-augmented development. This org invests in tools and practices that make great engineers exceptional.

✅ Established, Stable Platform

Join a company with 500–1,000 employees, proven product-market fit, and the resources to execute on a serious technical vision.

✅ Career-Defining Leadership

Operate with strategic influence, direct access to senior leadership, and a clear path toward Principal / Staff / VP Engineering.


HOW TO APPLY

This search is being managed exclusively by Recruiting Bond

Submit your application with an updated resume

Only shortlisted candidates will be contacted. All applications are treated with the strictest confidentiality.

⚡ We move fast — qualified candidates can expect a response within 48–72 business hours.


Recruiting Bond | Bengaluru, Karnataka, India | 2026

Read more
LogIQ Labs Pvt.Ltd.
Remote only
6 - 8 yrs
₹8L - ₹18L / yr
skill iconPython
skill iconMongoDB
skill iconFlask
skill iconAmazon Web Services (AWS)
Team leadership

Key Responsibilities

  • Design end-to-end architecture for scalable full-stack applications.
  • Lead backend development using Python and Flask framework.
  • Design and optimize MongoDB data models and queries.
  • Define frontend architecture (React/Angular/Vue – as applicable).
  • Establish coding standards, design patterns, and best practices.
  • Build and optimize RESTful APIs and microservices.
  • Implement authentication, authorization, and security best practices.
  • Ensure high performance, scalability, and reliability of applications.
  • Drive CI/CD implementation and DevOps best practices.
  • Review code, mentor developers, and guide technical decisions.
  • Collaborate with product, DevOps, and data teams.
  • Troubleshoot complex production issues and perform root cause analysis.
  • Lead cloud deployment strategies (Azure/AWS/GCP preferred).

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science or related field.
  • 8+ years of software development experience.
  • 4+ years of hands-on Python backend development.
  • Strong expertise in Flask framework.
  • Deep experience with MongoDB (schema design, indexing, aggregation).
  • Experience designing RESTful and microservices architectures.
  • Strong understanding of frontend technologies (JavaScript, HTML, CSS).
  • Experience with Git and modern CI/CD pipelines.
  • Solid knowledge of system design, scalability, and performance tuning.
  • Experience with containerization (Docker preferred).
  • Strong problem-solving and architectural thinking skills.


Read more
Techjays
SREEHARIVASU S
Posted by SREEHARIVASU S
Remote only
5 - 10 yrs
₹30L - ₹50L / yr
Design patterns
Data Structures
Relational Database (RDBMS)
skill iconGit
Linux/Unix
+3 more

What makes Techjays an inspiring place to work

At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.

We operate as part of the 1% of companies globally that can truly leverage AI the right way: not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.

Our strength lies in combining deep backend engineering with AI system design, building AI-native platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready.

Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.

We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications.

You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.


Years of Experience: 5 - 8 years


Location: Remote/ Coimbatore


Key Skills:

  • Backend Development (Expert): Python, Django/Flask, RESTful APIs, Websockets
  • Cloud Technologies (Proficient): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins/GitLab CI or Github Actions
  • Databases (Advanced): PostgreSQL, MySQL, MongoDB
  • AI/ML (Familiar): Basic understanding of Machine Learning concepts, experience with RAG, Vector Databases (Pinecone or ChromaDB or others)
  • Tools (Expert): Git, Docker, Linux
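As a toy illustration of the retrieval step behind the RAG and vector-database skills listed above (the vectors here are hand-made stand-ins; a real system would use an embedding model and a vector store such as Pinecone or ChromaDB):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Tiny in-memory "vector store": document text -> pretend embedding.
store = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the top-k documents ranked by cosine similarity to the query."""
    ranked = sorted(store, key=lambda doc: cosine(store[doc], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([0.0, 0.1, 1.0], k=1))  # prints ['api rate limits']
```

In a full RAG pipeline the retrieved documents are then stuffed into the LLM prompt as grounding context; the retrieval logic itself stays this simple at heart.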

Roles and Responsibilities:

  • Design, development, and implementation of highly scalable and secure backend services using Python and Django.
  • Architect and develop complex features for our AI-powered platforms
  • Write clean, maintainable, and well-tested code, adhering to best practices and coding standards.
  • Collaborate with cross-functional teams, including front-end developers, data scientists, and product managers, to deliver high-quality software.
  • Mentor junior developers and provide technical guidance.

What We’re Looking For Beyond Skills

  • Builder mindset — you think in systems, not just tickets
  • Ownership — you take features from idea to production
  • Structured thinking in ambiguous environments
  • Clear communication and collaborative approach
  • Ability to work in a fast-paced, evolving startup environment


What We Offer

  • Competitive compensation
  • Flexible work environment (Remote / Coimbatore office)
  • Paid holidays & flexible time off
  • Medical insurance (Self & Family up to ₹4 Lakhs per person)
  • Opportunity to work on production-grade AI systems
  • Exposure to global clients and high-impact projects
  • A culture that values clarity, integrity, and continuous growth

If you want to build AI-native systems that are used in the real world, not just prototypes, Techjays is the place to do it.

Survey Form Link


Read more
Optimo Capital

at Optimo Capital

2 candid answers
Ajinkya Pokharkar
Posted by Ajinkya Pokharkar
Bengaluru (Bangalore)
3 - 8 yrs
₹5.5L - ₹9L / yr
System Administration
IT security
Endpoint protection
Sophos
Patch Management
+14 more


About the role


We’re hiring an IT Systems Administrator for an NBFC to secure endpoints, SaaS, and networks across ~50 branches, ~250+ field staff, and ~50+ office users.

This is primarily an IT Admin + Security role, with secondary exposure to AWS cloud ops + light DevOps + basic DB access management.


If you’re an IT Admin aiming to break into AWS Cloud Ops + DevOps, this role is a strong next step — you’ll own core IT/security and get hands-on exposure to cloud operations and deployments.


Key responsibilities (Primary: IT Admin + Security)

  • Manage endpoint security for laptops and mobiles (policies, patching, encryption, antivirus/EDR); drive MDM implementation now/future (e.g., Intune/Jamf).
  • Administer Google Workspace (Gmail/Drive/Calendar): users, groups, permissions, SSO, MFA, sharing controls.
  • Own joiner–mover–leaver lifecycle: provisioning/deprovisioning, access controls, periodic access reviews.
  • Secure branch connectivity: VPN, internal Wi-Fi, internet usage controls; coordinate troubleshooting and standardization across branches.
  • Manage HO security stack: firewall operations, rule changes with change control, monitoring/log review (basic but consistent).
  • Secure SaaS tools (CRM/HRMS/comms like Slack/Zoom): role-based access, MFA enforcement, offboarding, integration/OAuth controls.
  • Maintain IT asset inventory: procurement coordination, issuance/return, audits, warranty/AMC, license renewals; remote lock/wipe for lost devices.
  • Handle security incidents: phishing, account compromise, device loss/theft — contain, investigate, recover, and prevent recurrence.
  • Run backups and basic DR testing; maintain SOPs/documentation and train staff on cyber hygiene.
  • Provide hands-on user support: laptop builds, software installs, Outlook/Excel issues, VPN/Wi-Fi troubleshooting, escalations and vendor coordination.


Secondary responsibilities (AWS + DevOps + DB ops support)

  • Support AWS administration: IAM users/roles/policies, MFA, access key hygiene, basic log review (e.g., CloudTrail).
  • Manage AWS access controls: security groups/firewall rules, IP allowlists/whitelisting (admin tools, databases, vendor access).
  • Assist engineering with DevOps operations:
  • CI/CD support (deployment coordination, rollbacks, environment configuration)
  • Secrets/credentials management and rotation (no shared creds)
  • DNS + SSL/TLS certificates, basic monitoring/alerting coordination
  • Bonus: Docker/Kubernetes and Terraform exposure
  • Basic database operations (admin-lite):
  • DB user creation, roles/permissions, least-privilege access
  • IP allowlisting/whitelisting for DB access via VPN/approved sources
  • Backup/restore verification coordination and basic monitoring signals (connections/storage)


Requirements

  • 3+ years in IT security / systems administration (BFSI or branch-heavy org preferred).
  • Hands-on with Google Workspace administration.
  • Strong endpoint/security fundamentals: encryption, patching, AV/EDR, remote support, device compliance.
  • Comfortable with networks: VPN/Wi-Fi/LAN troubleshooting; firewall basics and change discipline.
  • Strong operational discipline: asset tracking, vendor management, documentation, ticketing, user communication.
  • Practical AWS familiarity (IAM, access controls, logging) and ability to support DevOps workflows.


Nice to have

  • Experience implementing MDM at scale (Intune/Jamf/SureMDM).
  • Exposure to SOC2 / ISO27001 evidence, controls, and audit workflows.
  • Scripting for automation (PowerShell/Bash/Python).
  • Familiarity with managed databases and secure access patterns.
Read more
WITS Innovation Lab
Prabhnoor Kaur
Posted by Prabhnoor Kaur
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 5 yrs
₹3L - ₹7L / yr
Terraform
skill iconKubernetes
skill iconJenkins
Ansible
skill iconAmazon Web Services (AWS)
+8 more

We are looking for a skilled DevOps Engineer with hands-on experience in cloud platforms, CI/CD pipelines, container orchestration, and infrastructure automation. The ideal candidate is someone who loves solving reliability challenges, automating everything, and ensuring seamless delivery across environments.

Key Responsibilities

  • Design, implement, and maintain CI/CD pipelines using GitHub Actions, Jenkins, and GitHub.
  • Manage and optimize infrastructure on AWS/GCP, ensuring scalability, security, and high availability.
  • Deploy and manage containerized applications using Docker and Kubernetes.
  • Build, automate, and manage infrastructure as code using Terraform.
  • Configure and manage automation tools and workflows using Ansible.
  • Monitor system performance, troubleshoot production issues, and ensure smooth operations.
  • Implement best practices for code management, release processes, and DevOps standards.
  • Collaborate closely with development teams to improve build pipelines and deployment workflows.
  • Write scripts in Python/Bash to automate operational tasks.
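A small example of the kind of operational script this bullet describes (the endpoint URLs are hypothetical; in practice they would come from configuration or service discovery):

```python
import json
import urllib.request

# Hypothetical service health endpoints; in practice loaded from config.
SERVICES = {
    "api": "http://localhost:8080/healthz",
    "worker": "http://localhost:8081/healthz",
}

def check_services(services, timeout=2):
    """Return a status map, marking services whose health endpoint fails."""
    status = {}
    for name, url in services.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                status[name] = "up" if resp.status == 200 else f"degraded ({resp.status})"
        except OSError as exc:
            status[name] = f"down ({exc.__class__.__name__})"
    return status

if __name__ == "__main__":
    print(json.dumps(check_services(SERVICES), indent=2))
```

Such a script might run on a cron schedule or inside a monitoring job, feeding its output into alerting rather than printing to stdout.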

Required Skills & Experience

  • 2+ years of hands-on experience as a DevOps Engineer or in a similar role.
  • Strong expertise in AWS or GCP cloud services.
  • Solid understanding of Kubernetes (deployment, scaling, service mesh, packaging).
  • Proficiency with Terraform for infrastructure automation.
  • Experience with Git, GitHub, and GitHub Actions for source control and CI/CD.
  • Good knowledge of Jenkins pipelines and automation.
  • Hands-on experience with Ansible for configuration management.
  • Strong scripting skills using Python or Bash.
  • Understanding of monitoring, logging, and security best practices.


Read more
Digital solutions and services company

Digital solutions and services company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹17L - ₹23L / yr
skill iconMachine Learning (ML)
skill iconPython
ETL
skill iconData Science
SQL
+5 more

Data Scientist or Senior Machine Learning Engineer


We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years' experience).


Please find the detailed Job Description attached for your reference.

We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)


Kindly note:

  • Location: Pune (Work from Office)
  • Immediate joiners preferred


While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort


Must have Skills

  • Machine Learning - 6 Years
  • Python - 6 Years
  • ETL (Extract, Transform, Load) - 6 Years
  • SQL - 6 Years
  • Azure - 6 Years


Request you to share relevant profiles at the earliest. Looking forward to your support.

Read more
SPGConsulting
Anitha K
Posted by Anitha K
Bengaluru (Bangalore)
7 - 12 yrs
₹8L - ₹16L / yr
DevOps
skill iconAmazon Web Services (AWS)
cicd
skill iconKubernetes
skill iconDocker
+5 more

Hiring: AWS DevOps Developer

📍 Location: Bangalore

🧑‍💻 Experience: 4–7 Years

📌 Job Summary

We are looking for a skilled AWS DevOps Developer with strong experience in AWS cloud infrastructure, CI/CD automation, containerization, and Infrastructure as Code. The ideal candidate should have hands-on experience building scalable and secure cloud environments.

🛠 Required Technical Skills

☁️ AWS Services

  • Amazon EC2
  • Amazon S3
  • IAM
  • VPC
  • Amazon EKS
  • RDS
  • Route 53
  • CloudWatch
  • AWS Lambda

🔄 DevOps & CI/CD

  • Jenkins (Pipelines, Shared Libraries)
  • Git / GitHub
  • Maven / Build tools
  • CI/CD pipeline design & implementation

🐳 Containers & Orchestration

  • Docker
  • Kubernetes (EKS preferred)
  • Helm

🏗 Infrastructure as Code

  • Terraform
  • Ansible

📊 Monitoring & Logging

  • CloudWatch
  • Prometheus
  • Grafana

📋 Roles & Responsibilities

  • Design and implement scalable AWS infrastructure
  • Build and maintain CI/CD pipelines
  • Deploy containerized applications using Docker & Kubernetes
  • Automate infrastructure provisioning using Terraform
  • Implement monitoring and alerting solutions
  • Ensure security, compliance, and cost optimization
  • Troubleshoot production issues and improve system reliability

➕ Good to Have

  • AWS Certification (Solutions Architect / DevOps Engineer)
  • Experience with Microservices architecture
  • Knowledge of DevSecOps practices
  • Experience in Agile methodology


Read more
Amura Health

at Amura Health

3 candid answers
1 video
supriya C
Posted by supriya C
Chennai
4 - 14 yrs
₹20L - ₹50L / yr
skill iconNodeJS (Node.js)
skill iconPython
skill iconAmazon Web Services (AWS)

Role Overview:

We are seeking a Tech Lead (5–14 yrs experience) to design, build, and scale the technology foundation for our Support Excellence function. This role sits at the intersection of engineering, product, and operations, ensuring that our internal teams and, eventually, our end users experience seamless, efficient, and data-driven support.

You will lead a small but high-impact team of engineers, own the support tooling roadmap, and implement solutions that handle ticket triage, data quality issues, automation, and integrations with our healthcare SaaS platform.

This is a hands-on technical leadership role, ideal for someone who thrives on solving operational challenges through technology, building frameworks from scratch, and enabling customer-facing and internal support teams to scale effectively.

Key Responsibilities

1. Build & Enhance Support Platform:

  • Own the engineering roadmap for support tooling: ticketing systems, triage workflows, knowledge bases, and automation bots.
  • Design and implement scalable support frameworks, ensuring fast triage, data-driven escalation, and high-quality resolution.
  • Integrate support tooling with product backend, CMS, and analytics systems to enable context-aware assistance.

2. Technical Leadership & Delivery:

  • Lead a small team of engineers (SEs and SSEs), providing guidance on design, architecture, and coding standards.
  • Stay hands-on with coding and reviews while enabling the team to deliver high-quality, maintainable solutions.
  • Partner closely with Program Managers and Business Analysts to translate requirements into technical execution.

3. Automation, Data & AI-Driven Support:

  • Implement automation workflows (bots, routing, notifications) to reduce manual load and optimize SLA adherence.
  • Drive the adoption of AI/ML solutions for ticket classification, triage, and predictive resolution.
  • Build analytics dashboards to track support KPIs (FRT, TTR, resolution quality).
  • Partner with Product Managers, Designers, and Engineers to ensure delivery fidelity.
  • Restore transparency and speed between business stakeholders and tech teams.
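The triage-and-SLA work described above can be sketched in miniature. The keyword rules, priority labels, and SLA targets below are invented for illustration; a production system would replace the rule-based classifier with an ML/LLM model and read real ticket timestamps:

```python
# Sketch: rule-based ticket triage plus a first-response-time (FRT) SLA check.
# Keywords, priorities, and SLA windows are hypothetical examples only.
from datetime import datetime, timedelta

SLA_FRT = {"P1": timedelta(minutes=15), "P2": timedelta(hours=1), "P3": timedelta(hours=8)}

def classify(subject: str) -> str:
    """Assign a priority from simple keyword rules (stand-in for an ML model)."""
    text = subject.lower()
    if "outage" in text or "data loss" in text:
        return "P1"
    if "error" in text or "failed" in text:
        return "P2"
    return "P3"

def frt_breached(priority: str, created: datetime, first_response: datetime) -> bool:
    """True if the first response came later than the SLA window allows."""
    return (first_response - created) > SLA_FRT[priority]

t0 = datetime(2024, 1, 1, 9, 0)
print(classify("Login outage for all users"))              # P1
print(frt_breached("P1", t0, t0 + timedelta(minutes=30)))  # True
```

The same breach flags, aggregated per team and per week, are what would feed the KPI dashboards (FRT, TTR, resolution quality) mentioned above.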

4. Cross-functional Collaboration:

  • Work with Product, QA, Customer Success, and Ops to ensure support needs are captured early in the roadmap.
  • Serve as the engineering voice in discussions on escalation flows, release readiness, and customer-facing support enablement.
  • Collaborate with content and ops teams to power self-service experiences (knowledge bases, FAQs, in-app help).

5. Documentation & Knowledge Management:

  • Maintain technical documentation for support tooling, workflows, and integrations.
  • Contribute to knowledge bases (internal + external) alongside ops and content teams.
  • Foster a documentation-first culture to enable faster onboarding and smoother collaboration.

What We’re Looking For

Must-Have

  • 5–7 years of experience in software engineering, with at least 2+ years in a senior/lead role.
  • Proven track record in building internal platforms, support tools, or automation systems.
  • Strong technical skills: Python/Node/Java, SQL, cloud services (AWS/GCP), and integration experience with ticketing/support platforms (e.g., Zendesk, Freshdesk, ServiceNow, Jira Service Management).
  • Experience leading small teams and owning delivery from design → build → release.
  • Excellent problem-solving skills with a bias for execution in fast-paced environments.

Nice to Have

  • Exposure to SaaS or healthcare platforms with multi-tenant architecture.
  • Familiarity with AI/ML-driven support solutions (classification, prediction, NLP chatbots).
  • Hands-on experience with support metrics and dashboards (CSAT, SLA adherence, TTR).
  • Knowledge of documentation frameworks (Confluence, Notion, Git-based wikis).

Spatial Alphabet
Hyderabad, Bengaluru (Bangalore)
10 - 15 yrs
₹28L - ₹50L / yr
Python
Django
React.js
Amazon Web Services (AWS)
SQL

Job Title: Senior Full-stack Developer (Python, React)

Location: Hyderabad, India (On-site Only)

Employment Type: Full-Time

Work Mode: Office-Based; Remote or Hybrid Not Allowed

Role Summary

We are looking for a skilled Senior Fullstack Developer with expertise in Django (Python), React, RESTful APIs, GraphQL, microservices architecture, Redis, and AWS services (SNS, SQS, etc.). The ideal candidate will be responsible for designing, developing, and maintaining scalable backend systems and APIs to support dynamic frontend applications and services.

 

Required Skillset:

  • 9+ years of professional experience writing production-grade software, including experience leading the design of complex systems.
  • Strong expertise in Python (Django or equivalent frameworks) and REST API development.
  • Solid experience with frontend frameworks such as React and TypeScript.
  • Strong understanding of relational databases (MySQL or PostgreSQL preferred).
  • Experience with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
  • Hands-on experience with cloud infrastructure (AWS preferred).
  • Proven experience debugging complex production issues and improving observability.

Preferred Skillset:

  • Experience in enterprise SaaS or B2B systems with multi-tenancy, authentication (OAuth, SSO, SAML), and data partitioning; exposure to Kafka or RabbitMQ and microservices.
  • Knowledge of event-driven architecture, A/B testing frameworks, and analytics pipelines.
  • Familiarity with accessibility standards, best practices, and Agile/Scrum methodologies.
  • Exposure to the Open edX ecosystem or open-source contributions in education tech.
  • Demonstrated history of technical mentorship, team leadership, or cross-team collaboration.

Tech Stack:

  • Backend: Python (Django), Celery and Redis for asynchronous workflows, REST APIs
  • Frontend: React, TypeScript, SCSS
  • Data: MySQL, Snowflake, Elasticsearch
  • DevOps/Cloud: Docker, Kubernetes, GitHub Actions, AWS
  • Monitoring: Datadog
  • Collaboration Tools: GitHub, Jira, Slack, Segment

Primary Responsibilities:

  • Lead, guide, and mentor a team of Python/Django engineers, offering hands-on technical support and direction.
  • Architect, design, and deliver secure, scalable, and high-performing web applications.
  • Manage the complete software development lifecycle, including requirements gathering, system design, development, testing, deployment, and post-launch maintenance.
  • Ensure compliance with coding standards, architectural patterns, and established development best practices.
  • Collaborate with product teams, QA, UI/UX, and other stakeholders to ensure timely and high-quality product releases.
  • Perform detailed code reviews, optimize system performance, and resolve production-level issues.
  • Drive engineering improvements such as automation, CI/CD implementation, and modernization of outdated systems.
  • Create and maintain technical documentation while providing regular updates to leadership and stakeholders.

A real-time Customer Data Platform and cross-channel marketing automation provider that delivers superior experiences, resulting in increased revenue for some of the largest enterprises in the world.

Agency job
via HyrHub by Neha Koshy
Bengaluru (Bangalore)
1 - 2 yrs
₹7L - ₹14L / yr
Java
Spring Boot
Apache Kafka
SQL
PostgreSQL

Key Responsibilities:

  • Design and develop backend components and sub-systems for large-scale platforms under guidance from senior engineers.
  • Contribute to building and evolving the next-generation customer data platform.
  • Write clean, efficient, and well-tested code with a focus on scalability and performance.
  • Explore and experiment with modern technologies, especially open-source frameworks, and build small prototypes or proof-of-concepts.
  • Use AI-assisted development tools to accelerate coding, testing, debugging, and learning while adhering to engineering best practices.
  • Participate in code reviews, design discussions, and continuous improvement of the platform.

Qualifications:

  • 0–2 years of experience (or strong academic/project background) in backend development with Java.
  • Good fundamentals in algorithms, data structures, and basic performance optimizations.
  • Bachelor’s or Master’s degree in Computer Science or IT (B.E / B.Tech / M.Tech / M.S) from premier institutes.

Technical Skill Set:

  • Strong aptitude and analytical skills with emphasis on problem solving and clean coding.
  • Working knowledge of SQL and NoSQL databases.
  • Familiarity with unit testing frameworks and writing testable code is a plus.
  • Basic understanding of distributed systems, messaging, or streaming platforms is a bonus.

AI-Assisted Engineering (LLM-Era Skills):

  • Familiarity with modern AI coding tools such as Cursor, Claude Code, Codex, Windsurf, Opencode, or similar.
  • Ability to use AI tools for code generation, refactoring, test creation, and learning new systems responsibly.
  • Willingness to learn how to combine human judgment with AI assistance for high-quality engineering outcomes.

Soft Skills & Nice to Have

  • Appreciation for technology and its ability to create real business value, especially in data and marketing platforms.
  • Clear written and verbal communication skills.
  • Strong ownership mindset and ability to execute in fast-paced environments.
  • Prior internship or startup experience is a plus.