
50+ AWS (Amazon Web Services) Jobs in India

Apply to 50+ AWS (Amazon Web Services) Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS (Amazon Web Services) Jobs and apply today!

Mango Sciences
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Python
SQL
ETL
Data pipeline
Data warehousing
+12 more

The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.

What You’ll Own

  • Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
  • Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
  • The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
  • Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
  • Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.

The Stack You’ll Command

  • Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
  • Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
  • Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
  • Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
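Airflow expresses retry behaviour declaratively on task definitions rather than in code, but for candidates newer to the idea, the concept behind `retries`/`retry_delay` amounts to something like this plain-Python sketch (the function names here are illustrative, not Airflow's API):

```python
import time

def run_with_retries(task, retries=3, retry_delay=0.01):
    """Conceptual stand-in for Airflow's retries/retry_delay task args:
    re-invoke a failing task up to `retries` extra times before giving up."""
    attempt = 0
    while True:
        try:
            return task()
        except Exception:
            attempt += 1
            if attempt > retries:
                raise  # out of retries: surface the failure upward
            time.sleep(retry_delay)  # wait between attempts

# A flaky task that fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient upstream error")
    return "extracted"

result = run_with_retries(flaky_extract, retries=3)
```

In real Airflow the scheduler handles this (and SLA misses) for you; the point is only that a "complex DAG" is still built from tasks with this kind of failure semantics.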

Who You Are

  • Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
  • Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
  • Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
  • Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
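The EXPLAIN/ANALYZE habit mentioned above can be illustrated without a Postgres instance. This sketch uses SQLite's `EXPLAIN QUERY PLAN` (from Python's standard library) as a stand-in; the table and index names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (patient_id INTEGER, code TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i % 100, f"C{i % 7}") for i in range(1000)])

query = "SELECT COUNT(*) FROM events WHERE patient_id = 42"

# Without an index, the planner scans the whole table.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Add an index on the filter column, then re-check the plan.
conn.execute("CREATE INDEX idx_events_patient ON events(patient_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# The last field of each plan row is the human-readable step description.
plan_before = " ".join(row[-1] for row in before)  # e.g. a full-table SCAN
plan_after = " ".join(row[-1] for row in after)    # e.g. SEARCH ... USING INDEX
```

Postgres's `EXPLAIN (ANALYZE)` goes further, reporting actual row counts and timings per plan node, which is what makes it the tool of choice for shaving minutes off a query.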

Bonus Points for:

  • Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
  • Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
  • Search Experts: Experience with near-real-time indexing via Elasticsearch.

To move forward in the hiring process, please fill out the Google Form with your updated resume.

 

Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7

 

Details: https://forms.gle/FGgkmQvLnS8tJqo5A

StarApps Studio

Posted by Shivani Kawade
Pune
4 - 8 yrs
₹18L - ₹25L / yr
React.js
TypeScript
JavaScript
HTML/CSS
Ruby on Rails (ROR)
+9 more

StarApps Studio is a product-driven SaaS company building Shopify apps that power thousands of online stores. We’ve developed 6 highly-rated Shopify apps (averaging 4.9★) used by 30,000+ Shopify merchants worldwide, including over 1,000 Shopify Plus stores. In just a few years, our bootstrapped team grew from $5.5M to $10M in Annual Recurring Revenue (ARR) by obsessing over quality and merchant success. We’re a tight-knit, 20-person team based in Baner, Pune, on a mission to help e-commerce brands create world-class shopping experiences.


Role Overview

We are looking for a Full Stack Developer who will own features end-to-end with an emphasis on backend excellence. In this role, you will design and optimize complex data models and API architectures in Ruby on Rails, implement robust background job queues (e.g. delayed_job) for heavy workloads, and perform rigorous performance tuning to ensure our systems scale. On the frontend, you'll build and integrate React components to deliver complete, user-friendly features. This is a role for someone who loves tackling deep technical challenges in the backend while also crafting intuitive user interfaces – an opportunity to leverage your backend expertise while driving full-stack product ownership.
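For context on the background-job pattern the role calls out: `delayed_job` persists jobs to the database and worker processes pick them up outside the request cycle. A minimal in-memory Python sketch of that producer/worker idea (not delayed_job's actual API, and no persistence) looks like this:

```python
import queue
import threading

jobs = queue.Queue()
results = []

def worker():
    # Pull jobs until a None sentinel arrives, mimicking a delayed_job-style
    # worker that processes queued work outside the web request cycle.
    while True:
        job = jobs.get()
        if job is None:
            break
        func, arg = job
        results.append(func(arg))
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The "web request" enqueues heavy work instead of running it inline,
# so the response returns immediately.
def send_welcome_email(user):
    return f"emailed {user}"

for user in ["asha", "ravi"]:
    jobs.put((send_welcome_email, user))

jobs.put(None)  # sentinel: shut the worker down
t.join()
```

The production version adds what the in-memory sketch cannot: durability across restarts, retries on failure, and multiple workers, which is exactly where the "performance tuning for heavy workloads" in this role lives.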


Key Responsibilities

  • Architect & Optimize Backend: Design scalable database schemas and efficient data models. Develop high-performance RESTful APIs and services in Ruby on Rails, ensuring clean, maintainable code and great performance.
  • Backend API Development: Design, implement, and maintain robust backend services and RESTful APIs in Ruby on Rails to support new features and internal tools.
  • End-to-End Performance Tuning: Optimize application performance across the stack – from minimizing frontend load times to improving backend query efficiency – for our high-traffic, data-intensive apps.
  • Collaboration & Agile Delivery: Work closely with designers, product managers, and QA to translate requirements into technical solutions. Participate in sprint planning, code reviews, and daily deployments to ship features continuously and reliably.
  • Quality & Maintenance: Write clean, maintainable code with appropriate test coverage (unit and integration tests) to ensure reliability. Monitor, debug, and resolve issues in production, and continually refactor and improve existing code for stability and performance.


What We’re Looking For (Requirements)

  • 4–8 Years Experience: Proven experience as a software developer in a product company (experience in e-commerce or SaaS is highly preferred). You have built real products used by actual customers at scale.
  • Ruby on Rails Expertise: Strong command of Ruby on Rails. Experience designing RESTful APIs, working with MVC architecture, and using Rails best practices. You should understand how to structure large Rails applications for maintainability.
  • Backend Proficiency: Comfortable building server-side applications and APIs with Ruby on Rails. You can implement business logic, integrate with databases, and create RESTful endpoints (bonus if you’ve worked with GraphQL or other backend frameworks).
  • Database Skills: Proficiency with PostgreSQL (or similar RDBMS). Capable of writing complex SQL queries, optimizing queries/indexes, and designing efficient relational schemas. Familiarity with Redis or caching strategies is a plus.
  • Front-End Proficiency: Comfortable building user interfaces with React and modern JavaScript/TypeScript. Able to implement frontend components that consume APIs and provide a smooth user experience.
  • System Design & Quality: Solid understanding of web application architecture, performance tuning, and scalability concerns. Experience with profiling, benchmarking, and optimizing web applications. Commitment to writing clean, maintainable code with proper tests.
  • Product Mindset: You care about the why behind the features. You are comfortable digging into requirements, questioning assumptions, and ensuring that we build solutions that truly solve merchant problems.
  • Adaptability & Collaboration: Excellent problem-solving skills, communication, and ability to work in a fast-paced, collaborative environment. You are a self-starter who can take ownership of tasks and drive them to completion, but also know when to ask for help.


Tech Stack

  • Frontend: React, TypeScript/JavaScript, HTML5, CSS3 (Tailwind/Bootstrap), modern build tools (Webpack, Babel).
  • Backend: Ruby on Rails (REST APIs, background jobs), some services in Python.
  • Database: PostgreSQL.
  • Cloud & DevOps: Amazon Web Services (EC2, S3, RDS, CloudFront), Docker, CI/CD for daily deployments.
  • Tools: Git (GitHub), Agile issue tracking (JIRA/Trello), and a keen use of automated testing.


(Don’t worry if you aren’t familiar with every item – we value willingness to learn. This is our current stack, and we continually adopt new technologies that improve our products.)


Why Join Us

  • High Impact & Ownership: Your work will directly enhance the shopping experience of 50M+ shoppers daily. At StarApps, developers deploy code daily and see the immediate impact on thousands of merchants – you’ll own projects end-to-end and build features that matter.
  • Fast-Growing, Profitable Startup: Join a bootstrapped, profitable company on an exciting growth trajectory (from $4M to $10M ARR). There’s no bureaucracy here – just a passionate team obsessed with product quality and merchant happiness. You’ll be part of our core team as we scale, with ample opportunities to grow into leadership roles.
  • Cutting-Edge Tech & Challenges: Work with modern technologies (React, Rails, AWS) on performance-intensive applications. Tackle complex challenges in scaling, optimization, and UX for a global user base – continuously sharpen your skills in a supportive, learning-focused environment.
  • Collaborative Culture: We are a small 25-person team that operates like a close-knit family. You’ll work side by side with experienced founders and a talented team that values innovation, humility, and continuous improvement. Our culture is open, empathetic, and growth-oriented – every voice is heard, and every team member plays a crucial role in our success.


Growth & Benefits: We invest in our team’s growth. Expect a competitive salary, performance bonuses, and whatever tools you need to do your best work. We sponsor professional development (courses, conferences, books) and encourage knowledge-sharing. You’ll enjoy a flexible leave policy, team off-sites, and the excitement of building a global product from our new office in Baner, Pune.


Euphoric Thought Technologies
Bengaluru (Bangalore)
4 - 5 yrs
₹8L - ₹14L / yr
Java
Spring Boot
MySQL
Amazon Web Services (AWS)
Django
+3 more

About Us

Euphoric Thought Technologies is a fast-growing technology company focused on delivering scalable, high-performance digital solutions. We are looking for a skilled Backend Developer to join our dynamic team and contribute to building robust and efficient systems.


Key Responsibilities

  • Design, develop, and maintain scalable backend services and APIs
  • Write clean, maintainable, and efficient code
  • Collaborate with frontend developers, DevOps, and product teams
  • Optimize applications for maximum speed and scalability
  • Troubleshoot, debug, and upgrade existing systems
  • Implement security and data protection best practices
  • Participate in code reviews and technical discussions


Required Skills & Qualifications

  • 4–5 years of hands-on experience in backend development
  • Strong proficiency in at least one backend language, such as Java (including Core Java)
  • Experience with frameworks like Spring Boot, Django, Express.js, etc.
  • Good understanding of RESTful APIs and microservices architecture
  • Strong experience with databases (MySQL, PostgreSQL, MongoDB)
  • Familiarity with version control systems (Git)
  • Experience with cloud platforms (AWS/Azure/GCP) is a plus
  • Knowledge of Docker, Kubernetes, and CI/CD pipelines is an added advantage
  • Strong problem-solving and analytical skills

ARDEM Incorporated
Posted by Isha Ashwini
Remote only
0 - 0 yrs
₹1.5L - ₹1.8L / yr
IT operations
Network
Help desk
Amazon Web Services (AWS)
Microsoft Windows Azure
+2 more

📍 Position: IT Intern (Only candidates from BTech-IT background will be considered)

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


Key Responsibilities:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.


💻 Technical Requirements:

- Laptop with an i5 or higher processor

- Reliable internet connectivity with at least 100 Mbps speed

ARDEM Incorporated
Remote only
0 - 1 yrs
₹1L - ₹1.8L / yr
Amazon Web Services (AWS)
Amazon EC2
Windows Azure
Troubleshooting
IAM
+2 more

Position Title: IT Intern (Full Time)

Department: Information Technology

Work Mode: Work From Home (WFH)

Educational Qualification: B.Tech (IT) / M.Tech (IT)

Shift: Rotational Shifts (6am to 3pm, 2pm to 11pm, and 10pm to 7am)

---

Role Summary

The IT Intern will support day-to-day IT operations and assist in maintaining the organization’s IT infrastructure. This role provides structured exposure to desktop support, cloud platforms, user management, server infrastructure, and IT security practices under the guidance of senior IT team members.

---

Key Responsibilities

· Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

· Support user account management activities using Azure Entra ID (Azure Active Directory), Active Directory, and Microsoft 365.

· Assist the IT team in configuring, monitoring, and supporting AWS cloud services, including EC2, S3, IAM, and WorkSpaces.

· Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

· Assist with data backups, basic disaster recovery tasks, and implementation of security procedures in line with company policies.

· Create, update, and maintain technical documentation, SOPs, and knowledge base articles.

· Collaborate with internal teams to support system upgrades, IT infrastructure improvements, and ongoing IT projects.

· Adhere to company IT policies, data security standards, and confidentiality requirements.

---

Required Skills & Competencies

· Basic understanding of IT infrastructure, networking concepts, and operating systems

· Familiarity with cloud platforms such as AWS and/or Microsoft Azure

· Fundamental knowledge of Active Directory and user access management

· Strong willingness to learn and adapt to new technologies

· Good analytical, problem-solving, and communication skills

· Ability to work independently in a remote environment

---

Technical Requirements

· Personal laptop/desktop with required specifications

· Reliable internet connectivity to support remote work

---

Learning & Development Opportunities

· Hands-on exposure to enterprise IT environments

· Practical experience with cloud technologies and infrastructure support

· Mentorship from experienced IT professionals

· Opportunity to develop technical, documentation, and operational skills


About ARDEM

ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.

Cambridge Wealth (Baker Street Fintech)
Posted by Sangeeta Bhagwat
Pune
3 - 5 yrs
₹10L - ₹12L / yr
Python
SQL
Amazon Web Services (AWS)
PostgreSQL
pandas
+9 more


Department: Product & Technology

Location: On-site | Prabhat Road, Pune

Experience: 3–5 years in a Data Engineering or Analytics role

Domain: Fintech / Wealth Management — non-negotiable

Compensation: ₹11–12 LPA fixed + performance bonus

Growth: Title upgrade + salary revision at 12–18 months for strong performers


Why this role is different from most Data Engineer postings

You will work directly with the founding team on a live wealth management platform used by HNI and NRI clients. You will not spend years in a queue waiting to matter: your work ships to production, your analysis influences product decisions, and you will guide junior teammates from day one. If you perform, a raise and title upgrade are on the table within 12–18 months. This is the kind of early-team role that defines careers.


About Cambridge Wealth

Cambridge Wealth is a fast-growing, award-winning Financial Services and Fintech firm obsessed with quality and exceptional client service. We serve a high-profile clientele of NRI, Mass Affluent, HNI, and ultra-HNI professionals, and have received multiple awards from major Mutual Fund houses and BSE. We are past the zero-to-one stage and now focused on scaling our features and intelligence layer. You will be joining at exactly the right time.


What You Will Be Doing

This is a central, hands-on data engineering role at the intersection of financial analytics and applied ML. You will own the data pipelines and analytical models that power investment insights for wealth management clients, transforming transaction data and portfolio information into measurable, actionable intelligence.

We are not looking for someone who just keeps the lights on. We want someone who looks at a working system and immediately sees how to make it 10x faster, cleaner, and smarter using AI and automation wherever possible.


Key Responsibilities:


Data Engineering & Pipelines

  • Build and optimize PostgreSQL-based pipelines to process large volumes of investment transaction data.
  • Design and maintain database schemas, foreign tables, and analytical structures for performance at scale.
  • Write advanced SQL — window functions, stored procedures, query optimization, index design.
  • Build Python automation scripts for data ingestion, transformation, and scheduled pipeline runs.
  • Monitor AWS RDS workloads and troubleshoot performance issues proactively.
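As a concrete taste of the window-function SQL this role involves, here is a minimal sketch run against SQLite via Python's standard library (SQLite supports window functions; the `txns` table and column names are hypothetical, and production work would target PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (client_id INTEGER, txn_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO txns VALUES (?, ?, ?)",
    [(1, "2024-01-01", 100.0),
     (1, "2024-02-01", 50.0),
     (1, "2024-03-01", -30.0),   # a redemption
     (2, "2024-01-15", 200.0)],
)

# Running invested amount per client: SUM(...) OVER a per-client window,
# ordered by transaction date.
rows = conn.execute("""
    SELECT client_id,
           txn_date,
           SUM(amount) OVER (
               PARTITION BY client_id
               ORDER BY txn_date
           ) AS running_total
    FROM txns
    ORDER BY client_id, txn_date
""").fetchall()
```

The same `PARTITION BY ... ORDER BY` shape underlies most portfolio analytics here: running balances, period-over-period deltas, and rank-within-category comparisons.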


Financial Analytics & Modelling

  • Develop analytical frameworks to evaluate client portfolios against benchmarks and category averages.
  • Build data models covering mutual fund schemes, SIPs, redemptions, switches, and transfer lifecycles.
  • Create materialized views and derived tables optimized for dashboards and internal reporting tools.
  • Analyse client transaction history to surface patterns in investment behaviour and financial discipline.


Applied ML & AI-Driven Development

  • Use Python (Pandas, NumPy, Scikit-learn) for trend analysis, forecasting, and predictive modelling.
  • Implement classification or regression models to support financial pattern detection.
  • Use AI tools — LLMs, Copilots — to accelerate ETL development, code quality, and data cleaning.
  • Identify opportunities to automate repetitive data tasks and advocate for smarter tooling.


Data Quality & Governance

  • Own data integrity end-to-end in a live, high-stakes financial environment.
  • Build and maintain validation and cleaning protocols across all financial datasets.
  • Maintain Excel models, Power Query workflows, and structured reporting outputs.


Collaboration & Junior Mentorship

  • Work directly with Product, Investment Research, and Wealth Advisory teams.
  • Translate open-ended business questions into structured queries and measurable outputs.
  • Guide 1–2 junior trainees — review their work, set code quality standards, and help them grow.
  • Present findings clearly to non-technical stakeholders — no jargon, just clarity.


Skills — What We Need vs. What Helps


Must-Haves:

  • SQL & PostgreSQL (window functions, stored procedures, optimization)
  • Python — Pandas, NumPy for data processing and automation
  • ML fundamentals — classification or regression (Scikit-learn)
  • AWS RDS or equivalent cloud database experience
  • Financial domain knowledge — mutual funds, SIPs, portfolio concepts
  • Python data visualization — Matplotlib, Seaborn, or Plotly

Strong Advantage:

  • Excel — Power Query, advanced modelling
  • Materialized views, query planning, index optimization
  • Experience with BI/dashboard tools

Good to Have:

  • NoSQL databases
  • Prior fintech or wealth management startup experience


Financial Domain — Non-Negotiable

This is a wealth management platform. You must come in with a working understanding of:

  • Mutual fund structures, scheme types, and NAV-based transactions
  • Investment lifecycle — SIPs, Lump Sum, Redemptions, Switches, and STPs
  • Portfolio allocation and benchmarking against indices (e.g. Nifty 50, category averages)
  • How HNI/NRI clients interact with financial products differently from retail investors

You do not need to be a CFA. But if mutual funds and portfolio analytics are completely new territory, this role is not the right fit right now.


The Culture Fit — Read This Carefully

We are a small, fast-moving team. This is not a place where you wait for a ticket to arrive in your queue. The right person for this role:

  • Has worked at a small startup before and is used to wearing multiple hats
  • Finds broken or slow data systems genuinely irritating and fixes them without being asked
  • Reaches for Python or an LLM when there is a repetitive task — automating is instinctive
  • Is comfortable saying 'I don't know but I'll find out' and follows through independently
  • Wants visibility and ownership, not just a well-defined job description
  • Is looking for a role where strong performance is directly visible and rewarded


Growth Path — What Happens If You Perform

This is not a vague 'growth opportunity' pitch.

If you hit the bar in your first 12–18 months, you will receive a salary revision and a title upgrade to Senior Data Engineer or Lead Data Engineer depending on team expansion. As we scale our Data and AI team, this role is the natural stepping stone to a team lead position. You will also gain direct exposure to founding-team decision-making — the kind of access that is hard to get at larger companies.


Preferred Background

  • 2–4 years in a data engineering or analytics role at a startup or small Fintech
  • Experience in a live product environment where data errors have real consequences
  • Exposure to portfolio analytics, investment research, or wealth management platforms
  • Has mentored or reviewed code for at least one junior team member


Hiring Process

We respect your time. The process is direct and moves fast.

  • Screening Questions — 5 minutes online
  • Online Challenge — MCQs (data, SQL, AWS, etc.), one applied ML or analytics problem, and a short communication-and-personality assessment (focused, not trick questions)
  • People Round — 30-minute video call, culture and communication
  • Technical Deep-Dive — 1 hour in person, live financial data problems and your past work
  • Founder's Interview — 1 hour in person, growth conversation and mutual fit
  • Offer & Background Verification


CipherSonic Labs
Remote only
7 - 10 yrs
Best in industry
C++
C
Amazon Web Services (AWS)
Python

 

Job Title: Software Developer (Contractor)

Location: Remote, Up to 1-year contract

Compensation: Hourly

About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.

Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
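To make "optimizing cryptographic algorithms" concrete for applicants: a classic example is replacing a linear-time modular exponentiation with square-and-multiply, the workhorse behind RSA and Diffie-Hellman. This is an illustrative Python sketch, not the company's code, and real implementations live in C/C++ with constant-time requirements this ignores:

```python
def modexp_naive(base, exp, mod):
    """O(exp) multiplications: fine for tiny exponents,
    hopeless for crypto-sized ones."""
    acc = 1
    for _ in range(exp):
        acc = (acc * base) % mod
    return acc

def modexp_square_multiply(base, exp, mod):
    """O(log exp) multiplications via square-and-multiply.
    Note: NOT constant-time; production crypto code must also
    resist timing side channels."""
    acc = 1
    base %= mod
    while exp:
        if exp & 1:                 # multiply in the current power when the bit is set
            acc = (acc * base) % mod
        base = (base * base) % mod  # square for the next bit
        exp >>= 1
    return acc

# Both agree with Python's built-in three-argument pow().
args = (7, 1000, 13)
```

The jump from O(n) to O(log n) multiplications is the flavor of algorithmic win this role targets, before memory layout, threading, and hardware acceleration enter the picture.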

Key Responsibilities:

·     Develop and optimize software using C/C++ for high-performance computing applications.

·     Work on cryptographic algorithm implementations and performance tuning.

·     Optimize memory management, threading, and parallel computing techniques.

·     Debug, profile, and test software for performance and reliability.

·     Write clean, efficient, and well-documented code.

Qualifications:

·     Completed a B.S. or higher degree in Computer Science or Computer Engineering.

·     Strong programming skills in C and C++.

·     Familiarity with Linux-based development environments.

·     Basic understanding of cryptographic algorithms and security principles is a plus.

·     Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.

·     Knowledge of other programming languages such as Python, Rust, or Go is a plus.

·     Strong problem-solving skills and attention to detail.

·     Ability to work independently and collaboratively in a fast-paced startup environment.

What You’ll Gain:

·     Hands-on experience in systems programming, cryptography, and high-performance computing.

·     Opportunities to work on real-world security and privacy-focused projects.

·     Mentorship from experienced software engineers and researchers.

·     Exposure to cutting-edge cryptographic acceleration and secure computing techniques.

·     Potential for future full-time employment based on performance.

AEGION- A Legion of Agents

Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 8 yrs
Up to ₹80L / yr (varies)
Python
FastAPI
NodeJS (Node.js)
TypeScript
React.js
+4 more

We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.


You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.


Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.


WHAT YOU BRING:

You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.

You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.


WHAT YOU WILL DO:

Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:

  • Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
  • Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns.
  • Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
  • Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
  • Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
  • Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
  • Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
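The RAG pattern named above boils down to "retrieve relevant context, then splice it into the prompt." This is a deliberately toy sketch using bag-of-words cosine similarity from the standard library; production systems use embedding models and vector databases, and every name below is hypothetical:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refunds are processed within five business days",
    "agents can call external tools via function calling",
    "the widget embeds with a single script tag",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Retrieval step: rank documents by similarity to the question.
    q = Counter(question.split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.split())), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    # Augmentation step: splice retrieved context into the LLM prompt.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."

prompt = build_prompt("how fast are refunds processed")
```

Swapping the bag-of-words vectors for learned embeddings and the `docs` list for a vector store gives the production shape of the pipelines this role owns, with chunking, reranking, and citation handling layered on top.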


BASIC QUALIFICATIONS:

  • 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
  • Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
  • Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
  • Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
  • Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
  • Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
  • Understanding of vector databases, embedding models, and semantic search implementations.
  • Comfortable working in fast-moving, startup-style environments with high ownership.


PREFERRED QUALIFICATIONS:

  • Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
  • Familiarity with ML ops tools and practices for production AI systems.
  • Prior work on conversational AI, chatbots, or virtual assistants at scale.
  • Experience with real-time systems, WebSockets, and streaming responses.
  • Knowledge of browser automation, web scraping, or RPA technologies.
  • Experience with multi-tenant SaaS architectures and enterprise security requirements.
  • Contributions to open-source AI/LLM projects or published work in the field.


WHAT WE OFFER:

  • Competitive salary + meaningful equity.
  • High ownership and the opportunity to shape product direction.
  • Direct impact on cutting-edge AI product development.
  • A collaborative team that values clarity, autonomy, and velocity.
CipherSonic Labs
Posted by Ajay Joshi
Remote only
3 - 5 yrs
₹20L - ₹30L / yr
C++
C
Linux/Unix
Amazon Web Services (AWS)
Python
+2 more

 

Job Title: Software Developer

Location: Remote

About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.

Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.

Key Responsibilities:

·     Develop and optimize software using C/C++ for high-performance computing applications.

·     Work on cryptographic algorithm implementations and performance tuning.

·     Optimize memory management, threading, and parallel computing techniques.

·     Debug, profile, and test software for performance and reliability.

·     Write clean, efficient, and well-documented code.

Qualifications:

·     B.S. or higher degree in Computer Science, Computer Engineering, or a related field.

·     Strong programming skills in C and C++.

·     Familiarity with Linux-based development environments.

·     Basic understanding of cryptographic algorithms and security principles is a plus.

·     Experience with AWS services (Lambda, EC2, S3, DynamoDB, API Gateway) and containerization technologies (e.g., Docker, Kubernetes) is a plus.

·     Knowledge of other programming languages such as Python, Rust, or Go is a plus.

·     Strong problem-solving skills and attention to detail.

·     Ability to work independently and collaboratively in a fast-paced startup environment.
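The production work here is in C/C++, but one of the security principles the posting mentions — avoiding timing side channels when verifying secrets — is language-neutral and can be sketched with the Python standard library:

```python
import hmac
import hashlib

def verify_tag(key: bytes, message: bytes, tag: bytes) -> bool:
    """Check a MAC in constant time to avoid timing side channels."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # `==` can short-circuit on the first differing byte, leaking timing
    # information; compare_digest takes time independent of the contents.
    return hmac.compare_digest(expected, tag)

key, msg = b"demo-key", b"payload"          # placeholder values
good = hmac.new(key, msg, hashlib.sha256).digest()
assert verify_tag(key, msg, good)
assert not verify_tag(key, msg, b"\x00" * 32)
```

The C/C++ equivalent is the same idea: a fixed-iteration byte comparison that accumulates differences instead of returning early.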

What You’ll Gain:

·     Hands-on experience in systems programming, cryptography, and high-performance computing.

·     Opportunities to work on real-world security and privacy-focused projects.

·     Mentorship from experienced software engineers and researchers.

·     Exposure to cutting-edge cryptographic acceleration and secure computing techniques.

·     Potential for future full-time employment based on performance.

Read more
AI Recruiting Platform

AI Recruiting Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
1 - 15 yrs
₹70L - ₹99L / yr
MySQL
skill iconPython
Microservices
API
skill iconJava
+18 more

Description

Join the company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of the company's mission to streamline hiring solutions.


Requirements:

  • 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
  • Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
  • Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
  • Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
  • Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
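The schema-design and query-optimization requirement above is easy to demonstrate with SQLite's `EXPLAIN QUERY PLAN`: the same lookup switches from a full table scan to an index search once a suitable index exists. Table, column, and index names are invented for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
con.executemany("INSERT INTO users (email) VALUES (?)",
                [(f"user{i}@example.com",) for i in range(1000)])

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM users WHERE email = 'user500@example.com'"
print(plan(query))   # full table scan before the index exists
con.execute("CREATE INDEX idx_users_email ON users(email)")
print(plan(query))   # index search afterwards
```

The same habit — read the plan, then add or adjust indexes — carries over directly to PostgreSQL's `EXPLAIN ANALYZE` and MySQL's `EXPLAIN`.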


Roles and Responsibilities:

  • Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
  • Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
  • Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
  • Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
  • Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
  • Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
  • Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.


Budget:

  • Job Type: payroll
  • Experience Range: 1–15 years


Read more
Remote only
0 - 0 yrs
₹1L - ₹1.5L / yr
skill iconAmazon Web Services (AWS)
Cyber Security
IT infrastructure
IT security
AWS CloudFormation
+11 more

📍 Position: IT Intern

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Microsoft Entra ID (formerly Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.


💻 Technical Requirements:

- Laptop with i5 or higher processor

- Reliable internet connectivity with at least 100 Mbps speed

Read more
Zenius IT Services Pvt Ltd
Hyderabad, Chennai
2 - 4 yrs
₹5L - ₹9L / yr
skill iconJava
skill iconSpring Boot
Microservices
skill iconAmazon Web Services (AWS)
RESTful APIs
+3 more

Core Technical Skills

  • Strong in Core Java, Java 8, OOPs
  • Hands-on experience with Spring Boot, Spring MVC, Spring Data JPA
  • Experience in Microservices Architecture & REST API development
  • Good knowledge of SQL databases (MySQL / SQL Server / PostgreSQL)
  • Experience with AWS services (Lambda, S3, DynamoDB, EC2)
  • Familiarity with Kafka / Event-driven architecture (good to have)
  • Knowledge of Spring Security, JWT, OAuth 2.0
  • Experience with Docker, Jenkins, Git
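The stack above is Java, but the JWT structure behind the Spring Security / OAuth 2.0 items is language-neutral. A hand-rolled HS256 token in Python shows what libraries like jjwt produce under the hood; the secret and claims below are placeholders, and real tokens would carry `exp`/`iss`/`aud` claims:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: bytes) -> str:
    """Assemble an HS256 JWT by hand to show what libraries do internally."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{signature}"

token = make_jwt({"sub": "user-42", "role": "admin"}, b"demo-secret")
print(token.count("."))  # 2: header.payload.signature
```

Verification is the mirror image: recompute the signature over `header.payload` and compare in constant time.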



🔧 Development Responsibilities

  • Design and develop scalable REST APIs and microservices
  • Work on backend systems handling large data and real-time processing
  • Optimize database queries, performance, and indexing
  • Collaborate with cross-functional teams for end-to-end delivery


🛠️ Support (L2) Responsibilities

  • Handle production issues, bug fixing, and root cause analysis
  • Provide post-deployment and migration support
  • Troubleshoot performance issues, DB deadlocks, and API failures
  • Work on incident resolution and system stability improvements
  • Coordinate with L1/support teams (if applicable)


Read more
Deltek
Harsha Mehrotra
Posted by Harsha Mehrotra
Remote only
4 - 7 yrs
Best in industry
Artificial Intelligence (AI)
skill icon.NET
skill iconReact.js
Microsoft Windows Azure
ASP.NET
+5 more

Position Responsibilities:

  • Collaborate with the development team to maintain, enhance, and scale the product for enterprise use.
  • Design and develop scalable, high-performance solutions using cloud technologies and containerization.
  • Contribute to all phases of the development lifecycle, following SOLID principles and best practices.
  • Write well-designed, testable, and efficient code with a strong emphasis on Test-Driven Development (TDD), ensuring comprehensive unit, integration, and performance testing.
  • Ensure software designs comply with specifications and security best practices.
  • Recommend changes to improve application architecture, maintainability, and performance.
  • Develop and optimize database queries using T-SQL.
  • Prepare and produce software component releases.
  • Develop and execute unit, integration, and performance tests.
  • Support formal testing cycles and resolve test defects.

AI-Specific Responsibilities:

  • Integrate AI-powered tools and frameworks to enhance code quality and development efficiency.
  • Utilize AI-driven analytics to identify performance bottlenecks and optimize system performance.
  • Implement AI-based security measures to proactively detect and mitigate potential threats.
  • Leverage AI for automated testing and continuous integration/continuous deployment (CI/CD) processes.
  • Guide the adoption and effective use of AI agents for automating repetitive development, deployment, and testing processes within the engineering team.


Qualifications:

  • Bachelor’s degree in Computer Science, IT, or a related field.
  • Highly proficient in ASP.NET Core (C#) and full-stack development.
  • Experience developing REST APIs.
  • Proficiency in front-end technologies (JavaScript, HTML, CSS, Bootstrap, and UI frameworks).
  • Strong database experience, particularly with T-SQL and relational database design.
  • Advanced understanding of object-oriented programming (OOP) and SOLID principles.
  • Experience with security best practices in web and API development.
  • Knowledge of Agile SCRUM methodology and experience in collaborative environments.
  • Experience with Test-Driven Development (TDD).
  • Strong analytical skills, problem-solving abilities, and curiosity to explore new technologies.
  • Ability to communicate effectively, including explaining technical concepts to non-technical stakeholders.
  • High commitment to continuous learning, innovation, and improvement.

AI-Specific Qualifications:

  • Proficiency in AI-driven development tools and platforms such as GitHub Copilot in Agentic Mode.
  • Knowledge of AI-based security protocols and threat detection systems.
  • Experience integrating GenAI or Agentic AI agents into full-stack workflows (e.g., using AI for code reviews, automated bug fixes, or system monitoring).
  • Demonstrated proficiency with AI-assisted development tools and prompt engineering for code generation, testing, or documentation.
Read more
MNC with 5000+ employees

MNC with 5000+ employees

Agency job
via True tech professionals by Saffan Shaikh
Gurugram
6 - 12 yrs
₹15L - ₹28L / yr
skill iconPython
Large Language Models (LLM)
skill iconAmazon Web Services (AWS)
FastAPI

Backend Engineer III – Senior Python Developer (LLM & AI)

Location: Gurgaon, India

Positions: 1

Experience: 6 to 9 Years

Gurgaon Hybrid

About the Role

We are seeking an experienced Backend Engineer III / Senior Python Developer to join our AI engineering team and play a critical role in building scalable, secure, and high-performance backend platforms for LLM and AI-driven applications. You will work as a hands-on individual contributor while collaborating closely with Machine Learning Engineers, Data Scientists, Product Managers, and Cloud/DevOps teams to deliver innovative, production-grade AI solutions.

Key Responsibilities

  • Design, develop, and maintain scalable backend systems and services using Python to support LLM and AI-based applications
  • Build and maintain RESTful APIs and microservices that serve machine learning models and AI components
  • Write clean, modular, efficient, and testable code following industry best practices and coding standards
  • Participate actively in code reviews, ensuring high quality, security, and maintainability of the codebase
  • Debug, profile, and optimize applications to improve performance, reliability, and scalability
  • Identify and resolve performance bottlenecks in AI/ML pipelines and backend services
  • Collaborate with ML engineers, data scientists, and product teams to translate business and technical requirements into robust backend solutions
  • Mentor and support junior developers, promoting a culture of technical excellence and continuous learning
  • Design and implement CI/CD pipelines and automate deployment workflows to ensure consistent and reliable releases
  • Stay up to date with emerging trends in Python, cloud-native development, and LLM/AI engineering practices and apply them to improve systems and processes

Required Skills & Experience

  • 6 to 9 years of strong hands-on experience in Python development
  • Solid understanding of Python software design, architecture patterns, and testing best practices
  • Proven experience working on AI, Machine Learning, or LLM-based projects
  • Strong experience in building and consuming RESTful APIs and microservices architectures
  • Hands-on experience with FastAPI, Flask, or similar model-serving frameworks
  • Strong debugging, performance profiling, and optimization skills
  • Experience with CI/CD tools and workflows (e.g., GitHub Actions, Azure DevOps, Jenkins, etc.)
  • Working knowledge of Docker and Kubernetes is a strong plus
  • Excellent analytical, problem-solving, and communication skills
  • Ability to work independently in a fast-paced, evolving AI/ML environment while mentoring junior team members
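A tiny illustration of the profiling-and-optimization skills listed above: a timing decorator layered over memoisation. The function and numbers are arbitrary; the point is measuring before and after an optimization:

```python
import functools
import time

def timed(fn):
    """Minimal profiling helper: record the last call's wall-clock time."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        wrapper.last_ms = (time.perf_counter() - start) * 1000
        return result
    return wrapper

@timed
@functools.lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(200)  # memoisation keeps this linear instead of exponential
print(fib.last_ms)
```

In practice the same workflow scales up to `cProfile`, `py-spy`, or APM traces, but the loop is identical: measure, change, re-measure.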

Education & Certifications

  • Bachelor’s degree in Computer Science, Software Engineering, or a related technical field
  • AWS or other relevant cloud certifications are preferred but not mandatory

Why Join Us?

  • Work on cutting-edge AI and LLM platforms
  • Collaborate with top-tier engineering and data science teams
  • Opportunity to influence system architecture and technical direction
  • Competitive compensation and career growth opportunities


Read more
MNC with 5000+ employees

MNC with 5000+ employees

Agency job
via True tech professionals by Saffan Shaikh
Gurugram
9 - 18 yrs
₹30L - ₹70L / yr
skill iconAmazon Web Services (AWS)
skill iconNodeJS (Node.js)
skill iconReact.js
Cloud Computing
Architecture
+2 more

Job Title: Principal Architect / Scalability Lead (AWS)

📍 Location: Gurgaon (Hybrid)


🕒 Employment Type: Full-Time

Role Overview

We are seeking a Principal Architect / Scalability Lead with deep expertise in AWS and large-scale distributed systems to architect and scale cloud-native products from MVP to enterprise scale.

This role demands a senior technical leader who has proven experience designing systems that handle high throughput, large concurrent workloads, and enterprise-grade reliability, while ensuring exceptional end-user experience.

You will work closely with Product, Data Engineering, AI/ML, and Backend teams to define architecture standards, scalability roadmaps, and engineering best practices.

Key Responsibilities

🏗 Architecture & Scalability Leadership

  • Architect highly scalable, resilient, and high-performance cloud-native systems on AWS.
  • Design distributed systems capable of supporting 100K+ concurrent users.
  • Lead architecture evolution from MVP to enterprise-grade deployment.
  • Translate business and consumer requirements into robust technical architecture.
  • Drive scalability planning, capacity modeling, and performance engineering.

🔄 End-to-End Ownership

  • Own full SDLC visibility from discovery and design to release, monitoring, and optimization.
  • Establish best practices for:
      • Microservices architecture
      • Distributed systems design
      • Observability & monitoring
      • DevSecOps & CI/CD
  • Ensure system uptime, fault tolerance, and cost efficiency.

☁ AWS Cloud & Infrastructure

  • Design and implement scalable systems using AWS services.
  • Lead containerization and orchestration using Docker and Kubernetes (EKS).
  • Architect secure, automated CI/CD pipelines.
  • Drive cloud cost optimization and infrastructure efficiency.

📈 Performance & Reliability Engineering

  • Define and enforce SLAs, SLOs, and reliability metrics.
  • Lead performance testing, load testing, and scalability validation.
  • Implement monitoring, alerting, and observability frameworks.
  • Design fault-tolerant and highly available systems.

🧠 Backend, Data & AI Collaboration

  • Provide architectural guidance for:
      • Backend services using Node.js and Python
      • Frontend platforms using React / Next.js
      • Data platforms using Snowflake
  • Collaborate with Data Engineering and AI/ML teams on data-intensive and AI-driven systems.
  • Design architectures supporting asynchronous processing, caching, and event-driven workflows.
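The asynchronous, event-driven workflows mentioned above reduce to a producer/consumer queue. This stdlib sketch mimics — at toy scale — what SQS or Kafka consumers do; the event names and handler are invented:

```python
import queue
import threading

# Producers enqueue events; a background consumer drains them asynchronously.
events = queue.Queue()
processed = []

def worker():
    while True:
        evt = events.get()
        if evt is None:            # sentinel: shut down cleanly
            break
        processed.append(f"handled:{evt}")
        events.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
for evt in ("order.created", "order.paid"):
    events.put(evt)
events.put(None)
t.join()
print(processed)
```

Swapping `queue.Queue` for a broker buys durability, fan-out, and cross-service decoupling, but the mental model stays the same.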

👥 Leadership & Governance

  • Mentor senior engineers and guide architecture best practices.
  • Lead architecture governance and design reviews.
  • Influence senior stakeholders with data-driven technical decisions.
  • Drive cross-functional alignment across Product, Engineering, Data, and AI teams.

Required Qualifications

  • 8–15 years of experience in software engineering.
  • Proven experience scaling distributed systems handling 100K+ users or high-throughput workloads.
  • Deep hands-on expertise in AWS cloud architecture.
  • Strong experience with Docker, Kubernetes, and container orchestration.
  • Expertise in microservices, caching strategies, asynchronous processing, and distributed systems.
  • Strong understanding of performance engineering and reliability frameworks.
  • Experience building enterprise-grade systems for large-scale organizations.

Preferred Skills

  • Experience with event-driven architectures (Kafka, SQS, SNS, etc.).
  • Knowledge of database scalability and data warehousing (Snowflake).
  • Exposure to Data Engineering and AI/ML platforms.
  • Strong stakeholder communication and strategic thinking skills.


Read more
VDart
Remote only
7 - 15 yrs
₹15L - ₹20L / yr
Test Automation (QA)
SaaS
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Large Language Models (LLM)
+7 more

Senior Quality Engineer – AI Products

Fulltime

Remote

Requirements

● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.

● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.

● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.

● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.

● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.

● Experience with AWS or other major cloud platforms.

● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.

● Advanced skills with API and SQL testing methodologies.

● Familiarity with test management tools such as TestRail; experience with Qase is a plus.

● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.

● Experience with testing tools: Jira, Sentry, DataDog.

● Strong understanding of Agile/Scrum methodologies.

● Proven track record of mentoring junior engineers and contributing to process improvements.

● Excellent analytical and problem-solving abilities.

● Strong communication skills with ability to present to both technical and non-technical stakeholders.

● Proficiency in English (C1-C2 level).

● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
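Log-file analysis of the kind described above usually starts with a severity histogram before drilling into individual failures. A small sketch — the log format and messages are invented:

```python
import re
from collections import Counter

LOG = """\
2024-05-01 10:00:01 INFO  request served in 42ms
2024-05-01 10:00:02 ERROR upstream timeout on /v1/chat
2024-05-01 10:00:03 WARN  retrying request
2024-05-01 10:00:04 ERROR upstream timeout on /v1/chat
"""

def summarize(log_text):
    """Count log lines per severity level -- a first triage step."""
    levels = Counter()
    for line in log_text.splitlines():
        m = re.match(r"\S+ \S+ (\w+)", line)  # date, time, then level
        if m:
            levels[m.group(1)] += 1
    return dict(levels)

print(summarize(LOG))  # {'INFO': 1, 'ERROR': 2, 'WARN': 1}
```

The same one-liner logic maps to `grep -c` and `awk` on a CLI, which is often the faster path during an incident.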

 

Preferred Qualifications

● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).

● Hands-on experience with document parsing, OCR, or unstructured data pipelines.

● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.

● Experience testing SaaS products in regulated environments (e.g., those requiring PCI DSS compliance).

● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).

● Experience with microservice architectures and distributed systems.

● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.

● Background in security or compliance testing for AI systems.

● Certifications such as ISTQB or CSTE.

● Experience working in legal technology, fintech, or professional services software.

● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.

● Experience evaluating and implementing new QE tools and processes

 

Read more
Vintronics Consulting
Remote, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹8L - ₹17L / yr
skill iconJava
skill iconReact.js
skill iconAmazon Web Services (AWS)

Job Summary

We are looking for an experienced Java Full Stack Developer with strong expertise in Java, React.js, and AWS to design, develop, and maintain scalable web applications. The ideal candidate should have experience building high-performance applications and working across both front-end and back-end technologies.

 

Key Responsibilities

  • Develop and maintain full-stack web applications using Java and React.js
  • Design and build RESTful APIs and microservices using Java frameworks
  • Develop responsive and interactive frontend interfaces using React.js
  • Work with AWS services for deployment, scalability, and infrastructure
  • Collaborate with cross-functional teams including product managers, designers, and QA
  • Write clean, maintainable, and efficient code following best practices
  • Participate in code reviews, testing, debugging, and performance optimization
  • Implement CI/CD pipelines and cloud-based solutions

 

Required Skills

  • Strong experience in Java (Spring Boot / Spring Framework)
  • Good knowledge of React.js, JavaScript, HTML, CSS
  • Experience building REST APIs and microservices architecture
  • Hands-on experience with AWS services (EC2, S3, Lambda, RDS, etc.)
  • Familiarity with Git, CI/CD pipelines, and Agile development
  • Experience with database technologies (MySQL, PostgreSQL, or MongoDB)

 

Preferred Skills

  • Experience with Docker / Kubernetes
  • Knowledge of serverless architecture
  • Experience working in cloud-native environments
  • Understanding of system design and scalable architecture
Read more
Neuvamacro Technology Pvt Ltd
Chennai
3 - 6 yrs
₹12L - ₹17L / yr
skill iconJavascript
skill iconPython
skill iconDjango
skill iconFlask
skill iconNodeJS (Node.js)
+11 more

Years of Experience – 3 to 6 years

Location – Chennai

Work Mode: Hybrid – 3 days mandatory Work From Office (WFO).

Job Type: Full-Time


Role Description:

• Develops software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; following the software development lifecycle.

• Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.

• Documents and demonstrates solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code.

• Prepares and installs solutions by determining and designing system specifications, standards, and programming.

• Improves operations by conducting systems analysis, recommending changes in policies and procedures.

• Updates job knowledge by studying state-of-the-art development tools, programming techniques, and computing equipment; participating in educational opportunities; reading professional publications; maintaining personal networks; participating in professional organizations.

• Protects operations by keeping information confidential.

• Provides information by collecting, analyzing, and summarizing development and service issues. Accomplishes engineering and organization mission by completing related results as needed.

• Supports and develops software engineers by providing advice, coaching, and educational opportunities.


Mandatory skills:

• Hands-on experience with web development in any of the following programming languages: Python, JavaScript

• Hands-on experience in the following JavaScript framework: React

• Hands-on experience in any of the following frameworks: Python (Django, Flask) or NodeJS (Express, NestJS)

• Experience with back-end development, basic microservices implementation, and containerization using Docker

• Expertise in relational databases such as Postgres, MySQL, Oracle, etc.

• Expertise in NoSQL databases such as MongoDB, Amazon DynamoDB, Cassandra, etc.

• Good knowledge of any of the cloud providers such as Amazon Web Services, Microsoft Azure, or Google Cloud.

• Excellent verbal and written communication skills.

Read more
MNK Global Corporate Solutions
Rithika Raghavan
Posted by Rithika Raghavan
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
skill iconPython
skill iconDjango
skill iconAmazon Web Services (AWS)

About the Role

We are looking for an experienced Senior Backend Developer to design and build scalable, secure, and high-performance backend systems. The ideal candidate will have deep expertise in Python/Django, microservices architecture, and cloud technologies, along with strong problem-solving skills and leadership capabilities.


Key Responsibilities

• Design and develop backend services using Django and Python.

• Architect and implement microservices-based solutions for scalability and maintainability.

• Work with PostgreSQL and Redis for efficient data storage and caching.

• Build and maintain RESTful APIs and ensure robust API design principles.

• Implement system design best practices for high availability and fault tolerance.

• Containerize applications using Docker and manage deployments with Kubernetes.

• Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.

• Apply security best practices to protect data and application integrity.

• Collaborate with frontend, QA, and DevOps teams for seamless delivery.

• Mentor junior developers and conduct code reviews to maintain quality standards.


Required Skills & Expertise

• Django/Python – Advanced proficiency in backend development.

• Microservices Architecture – Strong understanding of distributed systems.

• PostgreSQL & Redis – Expertise in relational and in-memory databases.

• Docker/Kubernetes – Hands-on experience with containerization and orchestration.

• API Design & System Design – Ability to design scalable and secure systems.

• Cloud (AWS/Azure) – Practical experience with cloud services and deployments.

• Security Best Practices – Knowledge of authentication, authorization, and data protection.
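The Redis caching listed above can be mimicked in-process to show the TTL semantics (set with expiry, lazy eviction on read) that a Django view would rely on. Keys and values below are illustrative only:

```python
import time

class TTLCache:
    """In-process stand-in for Redis-style caching with expiry (SETEX/GET)."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires_at = item
        if time.monotonic() >= expires_at:  # lazily evict expired keys
            del self._store[key]
            return default
        return value

cache = TTLCache()
cache.set("user:42:profile", {"name": "Asha"}, ttl_seconds=60)
print(cache.get("user:42:profile"))
```

Real Redis adds the crucial parts — a shared process, eviction policies, atomicity — but the cache-aside read/write contract a backend service codes against looks just like this.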


Preferred Qualifications

• Experience with CI/CD pipelines and DevOps practices.

• Familiarity with message queues (e.g., RabbitMQ, Kafka).

• Exposure to monitoring tools (Prometheus, Grafana).


What We Offer

• Competitive salary and benefits.

• Opportunity to work on cutting-edge backend technologies.

• Collaborative and growth-oriented work environment.

Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
DrSoumya Sahadevan
Posted by DrSoumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
PySpark
databricks
+2 more

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE). With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.

· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

· Utilize Docker and Kubernetes for scalable data processing.

· Collaborate with automation team, data scientists and engineers to provide clean, structured data for AI/ML models.
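In this stack the pre-processing steps above (cleaning, deduplication, normalization/scaling) would run as PySpark DataFrame transformations; the same logic is sketched here in pure Python on invented sensor records to show the contract:

```python
def clean_records(records):
    """Drop incomplete rows, deduplicate on (sensor, ts), min-max scale values."""
    seen, rows = set(), []
    for r in records:
        key = (r.get("sensor"), r.get("ts"))
        if None in (r.get("sensor"), r.get("ts"), r.get("value")) or key in seen:
            continue  # skip rows with missing fields or repeated keys
        seen.add(key)
        rows.append(dict(r))
    if not rows:
        return rows
    vals = [r["value"] for r in rows]
    lo, hi = min(vals), max(vals)
    for r in rows:  # min-max normalisation into [0, 1]
        r["value"] = (r["value"] - lo) / (hi - lo) if hi > lo else 0.0
    return rows

raw = [
    {"sensor": "furnace-1", "ts": 1, "value": 700.0},
    {"sensor": "furnace-1", "ts": 1, "value": 700.0},  # exact duplicate
    {"sensor": "furnace-1", "ts": 2, "value": None},   # missing reading
    {"sensor": "furnace-1", "ts": 3, "value": 900.0},
]
cleaned = clean_records(raw)
print(cleaned)  # two rows, values scaled to 0.0 and 1.0
```

The PySpark version would express the same three stages as `dropna`, `dropDuplicates`, and a `MinMaxScaler`/window expression, distributed across the cluster.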


Desired Skills and Qualifications

· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.

· 2 years of team leadership experience.

· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

· Experience in containerization (Docker, Kubernetes).

· Strong analytical and problem-solving skills with attention to detail.

· Good to have: MLOps and DevOps experience, including model lifecycle management.

· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.

Read more
Remote, remote
0 - 0 yrs
₹1L - ₹1.5L / yr
skill iconAmazon Web Services (AWS)
IT infrastructure
Microsoft Windows Azure
AWS CloudFormation
IT consulting
+11 more

📍 Position: IT Intern

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Microsoft Entra ID (formerly Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
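As one concrete illustration, the backup assistance mentioned above might look like the small Python sketch below. The function name and layout are hypothetical, and real backup jobs would follow company policy and tooling:

```python
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def backup_directory(src: str, dest_dir: str) -> str:
    """Create a timestamped zip archive of src inside dest_dir."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive_base = Path(dest_dir) / f"{Path(src).name}-{stamp}"
    # shutil.make_archive appends the .zip extension itself.
    return shutil.make_archive(str(archive_base), "zip", root_dir=src)

# Demo against throwaway temporary directories.
src = tempfile.mkdtemp()
dest = tempfile.mkdtemp()
(Path(src) / "notes.txt").write_text("hello")
archive_path = backup_directory(src, dest)
print(archive_path)
```

A production job would also verify the archive, rotate old backups, and ship copies off-host, but the core create-and-timestamp step is this simple.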


💻 Technical Requirements:

- Laptop with i5 or higher processor

- Reliable internet connectivity with 100 Mbps speed

Read more
REConnect Energy

at REConnect Energy

4 candid answers
2 recruiters
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
4.5 - 7 yrs
Upto ₹30L / yr (Varies)
skill iconPython
MLOps
skill iconMachine Learning (ML)
SQL
skill iconAmazon Web Services (AWS)

About Us:

REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned in the US and European markets.


We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore-based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences, and AI.


Responsibilities:

● Engineering - Take complete ownership of engineering stacks, including Data Engineering and MLOps. Define and maintain software systems architecture for high-availability, 24x7 systems.

● Leadership - Lead a team of engineers and analysts, managing engineering development as well as round-the-clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.

● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.

● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime. 


Requirements:

● 4-5 years of experience building highly available systems

● 2-3 years experience leading a team of engineers and analysts

● Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent

● Proficiency in Python programming, with expertise in data engineering and machine learning deployment

● Experience with databases, including MySQL and NoSQL systems

● Experience in developing and maintaining critical and high availability systems will be given strong preference

● Experience in software design using design principles and architectural modeling.

● Experience working with AWS cloud platform.

● Strong analytical and data driven approach to problem solving 

Read more
Redtring
Keshav Senthil
Posted by Keshav Senthil
Hyderabad
3 - 6 yrs
₹15L - ₹20L / yr
skill iconJava
skill iconKotlin
skill iconAmazon Web Services (AWS)
skill iconRedis
Apache Kafka
+7 more

About Us:


We are hiring for a pre-seed funded startup called ZeroMoblt (https://zeromoblt.com/), a high-agency Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.


Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.


What You'll Do

  • Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
  • Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
  • Craft scalable React UIs that power ops dashboards and parent-facing apps.
  • Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
  • Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
  • Shape the engineering roadmap: propose, prioritise, and ship features with founders.
  • Mentor juniors while executing solo on high-impact bets—no layers, just results.


We're Looking For

  • 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
  • Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
  • Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
  • Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
  • Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.


This Role Is Not For You If…

  • You need structured roadmaps, PM hand-holding, or big-tech process.
  • Comfort > impact: stable salary over equity upside and chaos.
  • You've never worn all hats (dev, ops, product) in a resource-constrained environment.


Why Join Us

  • Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
  • Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
  • Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
  • Hungry to Leap? Apply now!
Read more
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
skill iconKubernetes
+12 more

Job Title : Senior DevOps Engineer (Only Mumbai Candidates)

Experience : 5+ Years

Location : Mumbai (On-site)

Notice Period : Immediate to 15 Days

Interview Process : 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Read more
ManpowerGroup
Shirisha Jangi
Posted by Shirisha Jangi
Bengaluru (Bangalore), Hyderabad
7 - 15 yrs
₹20L - ₹27L / yr
Data engineering
skill iconJava
skill iconPython
SQL
skill iconScala
+3 more

Immediate hiring for Senior Data Engineer

📍 Location: Hyderabad/Bangalore

💼 Experience: 7+Years

🕒 Employment Type: Full-Time

🏢 Work Mode: Hybrid

📅 Notice Period: 0–1 month (candidates serving notice only)

 

   We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.

 

🔎 Key Responsibilities:

  • Data Pipeline Development
  • Data Modeling and Architecture
  • Data Integration and API Development
  • Data Infrastructure Management
  • Collaboration and Documentation

 

🎯 Required Skills:

  • Bachelor’s degree in computer science, Engineering, Information Systems, or a related field.
  • 7+ years of proven experience in data engineering, software development, or related technical roles.
  • 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
  • 7+ years of experience with database systems, data modeling, and advanced SQL.
  • 7+ years of experience with ETL tools such as SSIS, Snowflake, Databricks, Azure Data Factory, Stored Procedures, etc.
  • Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
  • Strong analytical, problem-solving, and debugging skills with high attention to detail.
  • Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
  • Ability to adapt to rapidly evolving technologies and business requirements.

 

 

Read more
Product development MNC
Hyderabad
12 - 20 yrs
₹45L - ₹60L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconAmazon Web Services (AWS)
Fullstack Developer
TypeScript

Work Mode: 5 days in office

Notice: Max 30 days

*1 final round will be in-person


Responsibilities

●      Own and champion the development process of our web-based applications, including: SDLC, coding standards, code reviews, check-ins and builds, issue tracking, bug triages, incident management, and testing.

●      Build and maintain a high-performing software development team including hiring, training, and onboarding.

●      Identify opportunities to eliminate non-value add activities to enable our developers to do what they love best—developing! No pointless meetings, no unnecessary interruptions, no random changes of course, no new problems from on high dumped in their lap each month.

●      Identify growth opportunities for team members to continue to learn and develop in a supportive environment.

●      Provide an engaging and challenging landscape for career growth.

●      Provide leadership, mentorship, and motivation to the engineering team to sustain high levels of productivity and morale.

●      Collaborate with Product Management on product requirements.

●      Champion and advocate for the engineering team to the rest of the organization.

●      Create a positive culture of fairness, quality, and accountability while challenging the status quo and bringing new ideas to light.

●      Participate as a member of company’s Engineering Leadership team to build a high performing organization across multiple locations.

 

Requirements

●      12+ years of software development experience, 2+ years of development leadership experience.

●      Demonstrated technical leadership and people management skills.

●      Experience with agile development processes.

●      Hands-on experience in driving/leading technical efforts in cloud-based applications.

●      Proven track record of driving quality within a team, with a commitment to automated testing.

●      Strong communication skills with the ability to effectively influence product at different levels of abstraction and communicate to both technical and non-technical audiences.

●      Excellent coding skills to provide guidance and craftsmanship for our engineers.

●      Technical acumen to exercise solid judgment, making optimal short-term decisions without sacrificing long-term technology goals.

●      Demonstrated critical analysis skills to drive continuous improvement of technology, process, and productivity.

 

Technical Experience

We are looking for someone who has experience working in environments that utilize some of the following technologies:

●      AWS & Azure

●      Typescript

●      Node.js

●      React.js

●      Material UI

●      Jira

●      GitHub

●      CI/CD

●      SQL (MySQL, PostgreSQL, SQL Server)

●      MongoDB

Read more
Albert Invent

at Albert Invent

4 candid answers
3 recruiters
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
1 - 4 yrs
Upto ₹22L / yr (Varies)
Automation
Terraform
skill iconPython
skill iconNodeJS (Node.js)
skill iconAmazon Web Services (AWS)

The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.


Responsibilities

  • Act as a passionate representative of the Albert product and brand.
  • Work closely with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
  • Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
  • Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
  • Be responsible for the design and delivery of the mission-critical stack with a focus on security, resiliency, scale, and performance.
  • Own end-to-end performance and operability.
  • Demonstrate a clear understanding of automation and orchestration principles.
  • Act as the escalation point for complex or critical issues that have not yet been documented as Standard Operating Procedures (SOPs).
  • Use a deep understanding of service topology and dependencies to troubleshoot issues and define mitigations.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 1+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
  • Strong experience with Infrastructure as Code (IaC), preferably using Terraform.
  • Strong expertise in Python or Node.js, including designing RESTful APIs and microservices architecture.
  • Strong expertise in cloud infrastructure (AWS) and platform technologies including microservices, APIs, and distributed systems.
  • Hands-on experience with observability stacks including centralized log management, metrics, and tracing.
  • Familiarity with CI/CD tools such as CircleCI and performance testing using K6.
  • Passion for bringing more automation and engineering standards to organizations.
  • Experience building high-performance APIs with low latency (<200 ms).
  • Ability to work in a fast-paced environment and collaborate with peers and leaders.
  • Ability to lead technically, mentor engineers, and contribute to hiring and team growth.

Good to Have

  • Experience with Kubernetes and container orchestration.
  • Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
  • Experience building internal developer platforms (IDPs) or reusable engineering frameworks.
  • Exposure to ML infrastructure or data engineering workflows.
  • Experience working in compliance-heavy environments (SOC2, HIPAA, etc.).


About Albert Invent


Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, helping bring better products to market faster.

Why Join Albert Invent

  • Work with a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
  • Collaborate with world-class scientists and technologists to redefine how new materials are discovered and developed.
  • Culture built on curiosity, collaboration, ownership, and continuous learning.
  • Opportunity to build cutting-edge AI tools that accelerate real-world R&D and solve global challenges such as sustainability and advanced manufacturing.


Read more
Redtring
Keshav Senthil
Posted by Keshav Senthil
Hyderabad
1 - 3 yrs
₹8L - ₹12L / yr
skill iconKotlin
skill iconJava
skill iconSpring Boot
skill iconReact.js
skill iconAmazon Web Services (AWS)
+6 more

Software Engineer (Backend) – Kotlin & React

About Us

We are a high-agency startup building elegant technological solutions to real-world problems.

Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.

We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.

Role Overview

As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.

This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.

Key Responsibilities

System Development & Architecture

  • Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
  • Architect systems that are robust, high-performance, and production-ready.
  • Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.

Full Stack Development

  • Build fast, maintainable front-end applications using React.
  • Ensure seamless integration between frontend systems and backend services.

Cloud Infrastructure

  • Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
  • Implement scalable deployment pipelines, monitoring, and infrastructure optimization.

Product & Technical Collaboration

  • Work closely with founders and product stakeholders to translate business problems into technical solutions.
  • Contribute actively to product and engineering roadmap decisions.

Performance Optimization

  • Continuously improve system performance, scalability, and reliability.
  • Implement efficient algorithms and system optimizations to gain a technical advantage.

Engineering Excellence

  • Write clean, well-tested, and maintainable code.
  • Maintain strong engineering standards across the codebase.

Required Skills & Qualifications

We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.

Core Requirements

  • Strong computer science fundamentals (Data Structures, Algorithms, System Design).
  • Experience with Kotlin or JVM languages such as Java or Scala.
  • Experience building modern React applications.
  • Hands-on experience with cloud platforms (AWS / Azure / GCP).
  • Experience designing and deploying scalable distributed systems.
  • Strong problem-solving and analytical thinking.

Preferred / Bonus Skills

  • Experience with Google Maps APIs or geospatial integrations.
  • Prior startup experience.
  • Contributions to open-source projects.
  • Personal side projects demonstrating strong engineering ability.

Ideal Candidate

You will thrive in this role if you:

  • Take ownership of problems, not just tasks.
  • Are comfortable working in high-ambiguity environments.
  • Have a builder mindset and enjoy creating systems from scratch.
  • Learn quickly and execute with speed and precision.

This Role May Not Be For You If

  • You prefer strict task assignments and detailed specifications before starting work.
  • You want to focus only on coding tickets without product involvement.
  • You prefer large teams with multiple layers of management.

Why Join Us

  • Build 0 → 1 products with massive ownership.
  • Work in a flat organization with no unnecessary hierarchy.
  • Collaborate directly with founders and core product builders.
  • Your contributions will have immediate and visible impact.
  • Flexible remote work environment.
  • Opportunity to shape the technology, culture, and future of the company.

If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.

Read more
Pace Wisdom Solutions
Bengaluru (Bangalore)
7 - 10 yrs
₹15L - ₹30L / yr
skill icon.NET
ASP.NET
ASP.NET MVC
MVC Framework
skill iconAmazon Web Services (AWS)
+1 more

Location: Bangalore

Experience required: 7-10 years.

Key skills: .NET Core, ASP.NET, Microsoft Azure, MVC, AWS


"At Pace Wisdom Solutions, our .NET team is a dynamic and collaborative group of experts specializing in end-to-end development. With a focus on both front-end and back-end technologies, we leverage the robust .NET framework and Azure to deliver innovative and scalable solutions. Our agile approach ensures adaptability to industry changes, empowering us to provide clients with cutting-edge and tailored applications."


We are seeking a highly skilled and experienced Senior .NET Developer with a minimum of 7 years of hands-on experience. The ideal candidate will possess expertise in both front-end and back-end development, with a strong background in MVC architecture and exposure to Microsoft Azure technologies. The role requires an individual who can work independently, lead a team effectively, and contribute to the successful delivery of projects.


Engineering Culture at Pace Wisdom:

We foster a collaborative and communicative environment where engineers are empowered to share ideas freely. Teamwork is paramount, and we believe the best solutions come from diverse perspectives. We are committed to promoting from within, providing clear career paths and mentorship opportunities to help our engineers reach their full potential. Our culture prioritizes continuous learning and growth, offering a safe space to experiment, innovate, and refine your skills.


Responsibilities:

• Create scalable solutions by understanding business requirements, writing code, and testing according to best practices.

• Own projects and collaborate with the team, including our customers, QA, design, and other stakeholders, to drive successful project delivery.

• Advocate for and mentor teams to follow best practices around documentation, unit testing, code reviews, etc.

• Comply with security policies and processes.


Qualifications:

• 7-10 years of professional experience in developing applications using the .NET Framework, .NET Core, Azure services, and Entity Framework

• Good knowledge of common software architecture design patterns, Object Oriented Programming, Data structures, Algorithms, Database design patterns and other best practices.

• Exposure to Cloud technologies (AWS, Azure, Google Cloud - at least one of them)

• Exposure to developing SPA on React, Angular or VueJS

• Experience with micro services, messaging systems (RabbitMQ/Kafka)

• Proven ability to lead and mentor development teams.

• Effective communication and interpersonal skills.


About the Company:

Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.


We engage with our clients at various stages:

• Right from the idea stage to scope out business requirements.

• Design & architect the right solution and define tangible milestones.

• Setup dedicated and on-demand tech teams for agile delivery.

• Take accountability for successful deployments to ensure efficient go-to-market implementations.


Pace Wisdom has been working with Fortune 500 Enterprises and growth-stage startups/SMEs since 2012. We also work as an extended Tech team and at times we have played the role of a Virtual CTO too. We believe in building lasting relationships and providing value-add every time and going beyond business. 

Read more
Hyderabad
5 - 8 yrs
₹15L - ₹25L / yr
ETL
Snowflake
skill iconPython
SQL
Fivetran
+4 more

Role Overview


We are looking for a Senior Data Quality Engineer who is passionate about building reliable and scalable data platforms. In this role, you will ensure high-quality, trustworthy data across pipelines and analytics systems by designing robust data ingestion frameworks, implementing data quality checks, and optimizing data transformations.

You will work closely with data engineers, analytics teams, and product stakeholders to ensure data accuracy, consistency, and reliability across the organization.


Key Responsibilities


  • Cleanse, normalize, and enhance data quality across operational systems and new data sources flowing through the data platform.
  • Design, build, monitor, and maintain ETL/ELT pipelines using Python, SQL, and Airflow.
  • Develop and optimize data models, tables, and transformations in Snowflake.
  • Build and maintain data ingestion workflows, including API integrations, file ingestion, and database connectors.
  • Ensure data reliability, integrity, and performance across pipelines.
  • Perform comprehensive data profiling to understand data structures, detect anomalies, and resolve inconsistencies.
  • Implement data quality validation frameworks and automated checks across pipelines.
  • Use data integration and data quality tools such as Deequ, Great Expectations (GX), Splink, Fivetran, Workato, Informatica, etc., to onboard new data sources.
  • Troubleshoot pipeline failures and implement data monitoring and alerting mechanisms.
  • Collaborate with engineering, analytics, and product teams in an Agile development environment.
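As a sketch of what the automated quality checks above can look like before a framework such as Deequ or Great Expectations formalizes them, consider this plain-Python example (record fields and rule names are invented for illustration):

```python
def run_quality_checks(rows: list[dict]) -> dict:
    """Run basic data-quality checks of the completeness, uniqueness,
    and range variety, returning one boolean result per rule."""
    ids = [r.get("record_id") for r in rows]
    results = {
        # Completeness: every row must carry an ID.
        "no_null_ids": all(i is not None for i in ids),
        # Uniqueness: IDs must not repeat.
        "unique_ids": len(ids) == len(set(ids)),
        # Range: amounts must be non-negative.
        "amount_non_negative": all(r.get("amount", 0) >= 0 for r in rows),
    }
    results["passed"] = all(results.values())
    return results

sample = [
    {"record_id": "R1", "amount": 120.0},
    {"record_id": "R2", "amount": -5.0},  # fails the range check
]
print(run_quality_checks(sample))
```

In a real pipeline these rules would run per-batch inside the orchestrator (e.g., as an Airflow task) and route failures to alerting rather than just returning a dict.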


Required Technical Skills


Core Technologies


  • Strong hands-on experience with SQL
  • Python for data transformation and pipeline development
  • Workflow orchestration using Apache Airflow
  • Experience working with Snowflake data warehouse


Data Engineering Expertise


  • Strong understanding of ETL / ELT pipeline design
  • Data profiling and data quality validation techniques
  • Experience building data ingestion pipelines from APIs, files, and databases
  • Data modeling and schema design


Tools & Platforms


  • Data Quality Tools: Deequ, Great Expectations (GX), Splink
  • Data Integration Tools: Fivetran, Workato, Informatica
  • Cloud Platforms: AWS (preferred)
  • Version Control & DevOps: Git, CI/CD pipelines


Qualifications


  • 5–8 years of experience in Data Quality Engineering / Data Engineering
  • Strong expertise in SQL, Python, Airflow, and Snowflake
  • Experience working with large-scale datasets and distributed data systems
  • Solid understanding of data engineering best practices across the development lifecycle
  • Experience working in Agile environments (Scrum, sprint planning, etc.)
  • Strong analytical and problem-solving skills


What We Look For


  • Passion for data accuracy, reliability, and governance
  • Ability to identify and resolve complex data issues
  • Strong collaboration skills across data, engineering, and analytics teams
  • Ownership mindset and attention to data integrity and performance


Why Join Us


  • Opportunity to work on modern data platforms and large-scale datasets
  • Collaborate with high-performing data and engineering teams
  • Exposure to cloud data architecture and modern data tools
  • Competitive compensation and strong career growth opportunities
Read more
HireTo
Rishita Sharma
Posted by Rishita Sharma
Hyderabad
5 - 13 yrs
₹15L - ₹30L / yr
Snowflake
skill iconPython
SQL
Windows Azure
Databricks
+4 more

Position Title : Senior Data Engineer (Founding Member) - Insurtech Startup

Location : Hyderabad (Onsite)

Immediate to 15 days Joiners

Experience : 5 to 13 Years

Role Summary

We are looking for a Senior Data Engineer who will play a foundational role in:

  • Client onboarding from a data perspective
  • Understanding complex insurance data flows
  • Designing secure, scalable ingestion pipelines
  • Establishing strong data modeling and governance standards

This role sits at the intersection of technology, data architecture, security, and business onboarding.


Key Responsibilities

  • Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
  • Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
  • Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
  • Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
  • Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
  • Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
  • Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
  • Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
  • Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
  • Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
  • Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards

Required Technical Skills

  • Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
  • Platforms: Azure, AWS, Databricks, Snowflake
  • ETL / Orchestration: Airflow or similar frameworks
  • Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
  • Visualization Exposure: Power BI
  • Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
  • Integrations: APIs, real-time data streaming, ML model integration exposure
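For instance, the advanced SQL called for above (window functions in particular) might look like the query below. It runs here against an in-memory SQLite table whose insurance-flavored columns are made up purely for illustration:

```python
import sqlite3

# Build a throwaway in-memory table of (hypothetical) claim rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (policy_id TEXT, claim_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("P1", "2024-01-05", 100.0),
     ("P1", "2024-02-10", 250.0),
     ("P2", "2024-01-20", 80.0)],
)

# Window function: running total of claim amounts per policy,
# ordered by claim date within each partition.
rows = conn.execute("""
    SELECT policy_id, claim_date, amount,
           SUM(amount) OVER (
               PARTITION BY policy_id ORDER BY claim_date
           ) AS running_total
    FROM claims
    ORDER BY policy_id, claim_date
""").fetchall()
for row in rows:
    print(row)
```

The same `PARTITION BY ... ORDER BY` pattern carries over directly to Snowflake and PySpark SQL; only the engine and performance characteristics change.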

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Strong ability to align technical solutions with business objectives
  • Excellent communication and stakeholder management skills

What We Offer

  • Direct collaboration with the core US data leadership team
  • High ownership and trust to manage the function end-to-end
  • Exposure to a global environment with advanced tools and best practices
Read more
Remote only
2 - 7 yrs
₹5L - ₹15L / yr
DevOps
CI/CD
skill iconDocker
skill iconKubernetes
skill iconAmazon Web Services (AWS)
+8 more

BluePMS Software Solutions Pvt Ltd is hiring a talented DevOps Engineer to join our growing engineering team. In this role, you will be responsible for building and maintaining scalable infrastructure, automating deployment processes, and improving the reliability of our software delivery pipelines.


Key Responsibilities:

 1: Design, build, and maintain CI/CD pipelines for faster and reliable deployments.

 2: Manage and monitor cloud infrastructure and servers.

 3: Automate build, testing, and deployment processes.

 4: Collaborate with development and QA teams to improve release cycles.

 5: Monitor system performance and ensure high availability and reliability.

 6: Troubleshoot infrastructure and deployment issues.

 7: Implement security best practices in DevOps workflows.


Required Skills:

 1: Strong understanding of DevOps principles and CI/CD pipelines.

 2: Experience with Docker, Kubernetes, or containerization technologies.

 3: Familiarity with cloud platforms such as AWS, Azure, or GCP.

 4: Experience with Git, Jenkins, GitHub Actions, or similar tools.

 5: Basic scripting knowledge (Bash, Python, or Shell).

 6: Good understanding of Linux systems and networking concepts.
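As a small illustration of the scripting bullet above, here is a dependency-free Python sketch of a post-deploy health check with exponential backoff; the probe callable, attempt count, and delays are hypothetical stand-ins for a real service ping:

```python
import time

def wait_until_healthy(probe, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `probe()` until it returns True, backing off exponentially.

    `probe` is any zero-argument callable (e.g. an HTTP ping wrapped in
    try/except). Returns True on success, False once attempts run out.
    """
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False

# Simulated flaky service: fails twice, then recovers.
responses = iter([False, False, True])
delays = []
ok = wait_until_healthy(lambda: next(responses), sleep=delays.append)
print(ok, delays)  # True [1.0, 2.0]
```

Injecting `sleep` keeps the retry logic unit-testable without real waits, which is the usual trick for testing automation scripts like this.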


Eligibility:

 1: Experience: 2 – 7 years

 2: Qualification: Bachelor's degree in Computer Science, IT, or related field

 3: Strong analytical and problem-solving skills.


Location: Chennai / Remote


Apply here: https://connectsblue.com/jobs/753/devops-engineer-at-bluepms-software-solutions-pvt-ltd

Read more
Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snow flake schema
SQL
ETL
Data modeling
+4 more

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work on the Australian Time Zone.
  • Immediate joiners are highly encouraged.


Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹25L / yr
skill iconPython
skill iconGo Programming (Golang)
skill iconJava
skill iconAmazon Web Services (AWS)



We’re Hiring Backend Developers | Java / Go / Python | 3–5 Years | Bangalore

We are expanding our engineering team and looking for talented Backend Developers with 3–5 years of experience to join us in Bangalore.

If you enjoy building scalable systems, working with modern cloud technologies, and solving complex problems, this opportunity is for you!


💼 Position

Backend Developer (Java / Go / Python)

📍 Location: Bangalore

👨‍💻 Experience: 3–5 Years

🔎 What You Bring

✔ Strong proficiency in Go or a similar backend stack, such as Python with FastAPI or Java with Spring Boot.

✔ Experience designing RESTful APIs

✔ Hands-on experience with AWS / GCP

✔ Experience working with PostgreSQL, Redis, Kafka, or SQS

✔ Strong experience with Microservices architecture

✔ Hands-on experience with CI/CD pipelines

✔ Experience with containerized environments (Docker / Kubernetes)

✔ Familiarity with monitoring tools like Prometheus, Grafana, and Spring Actuator

✔ Strong understanding of data structures, algorithms, and system design fundamentals

✔ Ability to own features end-to-end and solve complex engineering problems

✔ Strong focus on code quality, observability, and operational ownership

✔ Comfortable working in fast-paced, high-growth environments





Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
DrSoumya Sahadevan
Posted by DrSoumya Sahadevan
Pune
5 - 15 yrs
₹20L - ₹38L / yr
skill iconReact.js
API
AWS CloudFormation
skill iconDjango
skill iconNodeJS (Node.js)
+7 more

Availability: Full time 

Location: Pune, India 

Experience: 5–6 years

 

Tvarit Solutions Private Limited is the wholly owned subsidiary of TVARIT GmbH, Germany. TVARIT provides software to reduce manufacturing waste such as scrap, energy, and machine downtime using its patented technology. With its software products and a highly competent team from renowned universities, TVARIT has gained customer trust across 4 continents within a short span of 3 years. TVARIT was ranked among the top 8 of 490 AI companies by the European Data Incubator, alongside many more awards from the German government and industrial organizations, making TVARIT one of the most innovative AI companies in Germany and Europe.

 

We are looking for a passionate Full Stack Developer (Level 2) to join our technology team in Pune. You will be responsible for the architecture, design, development, and testing of our software, leading the software development team, and working on the infrastructure that supports the company’s solutions. You will get an opportunity to work closely on projects involving the automation of the manufacturing process. 

 

Key Responsibilities 

· Full Stack Development: Design, develop, and maintain scalable web applications using React with TypeScript for the frontend and Node.js/Python for the backend.

· AI Integration: Collaborate with data scientists and ML engineers to integrate AI/ML models into the SaaS platform, ensuring seamless performance and usability.

· API Development & Optimization: Build and optimize high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI) to support real-time data processing and analytics.

· Database Engineering: Design, manage, and optimize data storage using relational (PostgreSQL), NoSQL (MongoDB/DynamoDB), graph, and vector databases for handling complex industrial data.

· Cloud-Native Deployment: Deploy, monitor, and manage services in containerized environments using Docker and Kubernetes on Linux-based systems (Ubuntu/Debian).

· System Architecture & Design: Contribute to architectural decisions, leveraging OOPs, microservices, domain-driven design, and design patterns to ensure scalability, security, and maintainability.

· Data Handling & Processing: Work with large-scale manufacturing datasets using Python (pandas) to enable predictive analytics and AI-driven insights.

· Collaboration & Agile Delivery: Partner with cross-functional teams—including product managers, manufacturing domain experts, and AI researchers—to translate business needs into technical solutions.

· Performance & Security: Ensure robust, secure, and high-performance software by implementing best practices in algorithms, data structures, and system design.

· Continuous Improvement: Stay updated on emerging technologies in AI, SaaS, and manufacturing systems to propose innovative solutions that enhance product capability.

 

Must have worked on these technologies.

· 5+ years of experience working with React + TypeScript and Node.js at a production level

· Python, pandas, high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI)

· Databases: relational DBs like PostgreSQL, NoSQL DBs like MongoDB or DynamoDB, vector databases, graph databases

· OS: Linux flavors like Ubuntu, Debian

· Source Control and CI/CD

· Software Fundamentals: Excellent command on Algorithms and Data Structures

· Software design and architecture: OOP, design patterns, microservices and monolithic architectures, domain-driven design

· Containers: Docker and Kubernetes

· Cloud: fundamentals of AWS such as S3 buckets, EC2, IAM, security groups
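To make the REST-API requirement concrete, here is a minimal, framework-free sketch using only the Python standard library (a production service would use one of the listed frameworks such as FastAPI, Django, or Flask; the /health route is purely illustrative):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind to port 0 so the OS picks a free port; serve on a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status, payload = resp.status, json.loads(resp.read())
print(status, payload)  # 200 {'status': 'ok'}

server.shutdown()
```

Frameworks like FastAPI add routing, validation, and async I/O on top of this same request/response cycle.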


Benefits and Perks:

· Be part of the product which is transforming the manufacturing landscape with AI

· Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you.

· Progressive leave policy for effective work-life balance.

· Get mentored by highly qualified internal resource groups and opportunities to avail industry-driven mentorship programs.

· Multicultural peer groups and supportive workplace policies. 

· Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.

 

 

 

What's it like to work at a startup?

Working for TVARIT (a deep-tech German IT startup) can offer you a unique blend of innovation, collaboration, and growth opportunities. It's essential, though, to approach it with a willingness to adapt and thrive in a dynamic environment.

 

If this position sparked your interest, do apply today!

Read more
Bengaluru (Bangalore)
1 - 2 yrs
₹5L - ₹6L / yr
skill iconAmazon Web Services (AWS)
DevOps

AWS DevOps Engineer (1–2 Years Experience)

We are looking for a motivated AWS DevOps Engineer with 1–2 years of experience to join our team. The ideal candidate should have hands-on experience with cloud infrastructure, CI/CD pipelines, and automation tools.

Key Responsibilities

  • Manage and maintain cloud infrastructure on Amazon Web Services
  • Build and manage CI/CD pipelines for automated deployments
  • Work with containerization tools like Docker
  • Assist in deployment and orchestration using Kubernetes
  • Monitor applications and infrastructure performance
  • Collaborate with development teams to improve deployment processes
  • Automate infrastructure using scripts and DevOps tools

Required Skills

  • 1–2 years experience in DevOps or Cloud Engineering
  • Strong knowledge of Amazon Web Services services such as EC2, S3, IAM
  • Experience with CI/CD tools like Jenkins or GitHub Actions
  • Knowledge of container tools like Docker
  • Familiarity with version control systems like Git
  • Basic scripting knowledge (Shell / Python)

Good to Have

  • Experience with Infrastructure as Code tools like Terraform
  • Knowledge of monitoring tools such as Prometheus or Grafana
  • Understanding of Linux environments


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Monika Sekaran
Posted by Monika Sekaran
Pune
7 - 11 yrs
Best in industry
skill iconJava
skill iconSpring Boot
skill iconAmazon Web Services (AWS)
Microservices
Design patterns
+2 more

Key Responsibilities:

  • Design, develop, and maintain scalable backend applications using Java and Spring Boot.
  • Build and consume RESTful APIs and ensure secure, reliable API integrations.
  • Develop microservices-based architecture and deploy applications in cloud environments.
  • Work with cloud platforms such as AWS/Azure/GCP for application deployment and management.
  • Write clean, maintainable, and efficient code following best practices.
  • Implement CI/CD pipelines and support DevOps practices.
  • Optimize applications for performance, scalability, and reliability.
  • Collaborate with cross-functional teams including frontend, QA, DevOps, and product teams.
  • Participate in code reviews, technical design discussions, and architectural decisions.
  • Troubleshoot production issues and provide timely resolution.

Required Skills & Qualifications:

  • 5–10 years of hands-on experience in Java (Java 8 or above).
  • Strong experience with Spring Boot, Spring MVC, Spring Data, Spring Security.
  • Solid understanding of RESTful API design & development.
  • Experience in microservices architecture.
  • Hands-on experience with at least one cloud platform (AWS / Azure / GCP).
  • Knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
  • Experience with relational and/or NoSQL databases (MySQL, PostgreSQL, MongoDB).
  • Familiarity with CI/CD tools (Jenkins, GitHub Actions, etc.).
  • Strong understanding of Git and version control practices.
  • Good understanding of design patterns and object-oriented programming principles.


Read more
Tradelab Technologies

at Tradelab Technologies

1 candid answer
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai
10 - 15 yrs
₹30L - ₹50L / yr
CI/CD
skill iconAmazon Web Services (AWS)
Terraform
skill icongrafana

Key Responsibilities

DevOps Strategy & Leadership

  • Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
  • Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
  • Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.

CI/CD & Infrastructure Automation

  • Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
  • Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
  • Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.

Cloud & On‑Prem Infrastructure Management

  • Design and manage hybrid infrastructures across AWS, GCP, and on-premise data centers ensuring high availability and fault tolerance.
  • Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
  • Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.

Performance Monitoring & Optimization

  • Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
  • Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
  • Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.

Security & Compliance

  • Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
  • Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
  • Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.


Required Skills & Qualifications

  • 10+ years of DevOps experience, with 5+ years in a leadership capacity.
  • Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
  • Strong command of AWS, GCP, and hybrid cloud infrastructures.
  • Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
  • Advanced proficiency in Terraform, Helm, and overall IaC workflows.
  • Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
  • Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
  • Excellent scripting skills in Python, Bash, or Go for automation and tooling.
  • Deep understanding of security principles, encryption, IAM, and compliance frameworks.


Good to Have

  • Experience with ultra-low-latency or high-frequency trading systems.
  • Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
  • Familiarity with Redis, Nginx, or other high‑throughput systems.
  • Exposure to micro‑second‑level performance tuning or network acceleration technologies.


Why Join Us?

  • Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
  • A culture where innovation, ownership, and bold thinking are valued.
  • Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
  • Build systems that influence markets and redefine the fintech landscape.


This isn’t just a role—it’s a challenge, a platform, and a proving ground.

Ready to step up? Apply now.

Read more
Applix

at Applix

3 candid answers
Eman Khan
Posted by Eman Khan
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹30L / yr
skill iconPython
Microsoft Windows Azure
Windows Azure
Artificial Intelligence (AI)
skill iconAmazon Web Services (AWS)
+1 more

About the Role

Applix is looking for a Python Software Engineer with strong Azure cloud experience to build and operate AI-powered applications and agentic workflows. The engineer will work closely with our enterprise client teams to develop, deploy, and maintain AI solutions running on the Azure platform.


This role combines Python application development, AI platform integration, and cloud deployment responsibilities.


Key Responsibilities

  • Build and maintain Python-based services and AI agents
  • Develop and manage agentic workflows and automation pipelines
  • Deploy and monitor applications on Azure cloud services
  • Integrate with Azure AI services such as Azure OpenAI and Azure Document Intelligence
  • Manage application deployments using Azure App Services or equivalent cloud platforms
  • Monitor system performance, logs, and reliability in production environments
  • Work with engineering teams to ensure scalable and secure deployments
  • Support CI/CD pipelines and DevOps practices for application delivery


Experience

3–8 years of relevant experience in software engineering and cloud development.


Required Skills

  • Strong programming experience in Python
  • Experience deploying applications on Microsoft Azure
  • Familiarity with Azure App Services or equivalent cloud services
  • Understanding of cloud deployment, monitoring, and DevOps practices
  • Experience building APIs, automation workflows, or backend services
  • Good problem-solving ability and communication skills
  • Experience with Azure OpenAI
  • Experience with Azure Document Intelligence
  • Familiarity with Azure AI Foundry or AI platform services
  • Exposure to LLM-based applications or AI workflows
  • Experience with CI/CD pipelines and cloud automation
Read more
Service Co

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹25L - ₹35L / yr
SQL
skill iconPython
skill iconAmazon Web Services (AWS)
Data Lake
OLTP
+6 more

Hiring for Lead Data Engineer


Exp : 6 - 10 yrs

Edu : Any Graduates

Work Location : Noida WFO


Skills :

Team Handling Experience .


Advanced SQL and PySpark


Data Engineering concepts (Data Warehouse (DW), Data Lake, OLTP vs OLAP, etc.)


API development experience (preferably in Python)


Familiarity with Docker and Kubernetes


Experience with Airflow and DBT


Exposure to Hudi, Iceberg, or Delta Lake


Strong AWS project experience

Read more
Service Co

Service Co

Agency job
via Vikash Technologies by Rishika Teja
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹10L - ₹15L / yr
PySpark
SQL
skill iconAmazon Web Services (AWS)
Apache Airflow
Hadoop
+1 more

Hiring for Data Engineer - AWS


Exp : 3 - 6 yrs

Edu : BE/B.Tech

Work Location : Noida WFO


Skills


Data engineering, PySpark, SQL, AWS, data pipelines, Airflow, Hadoop

Read more
Techjays
Agency job
via techjays by Samuel Santhosh P
Remote, Coimbatore
5 - 6.5 yrs
₹30L - ₹45L / yr
skill iconPython
skill iconDjango
skill iconFlask
RESTful APIs
WebSocket
+12 more

We are seeking an experienced Python Lead to design, develop, and scale high-performance backend systems. The ideal candidate will have strong expertise in Python-based backend development, system design, and cloud-native architectures. You will lead the development of scalable APIs, work with modern cloud platforms, and collaborate with cross-functional teams to deliver reliable and efficient applications.

Key Responsibilities

  • Design and develop scalable backend services using Python (Django/Flask).
  • Build and maintain RESTful APIs and WebSocket-based applications.
  • Implement efficient algorithms, data structures, and design patterns for high-performance systems.
  • Develop and optimize database schemas and queries using PostgreSQL, MySQL, or MongoDB.
  • Integrate caching and queuing systems to improve system performance and reliability.
  • Deploy and manage applications on AWS or GCP cloud environments.
  • Implement and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, or GitHub Actions.
  • Work with Docker containers and Linux-based environments for development and deployment.
  • Collaborate with engineering teams to design scalable system architectures.
  • Explore and integrate AI-driven capabilities such as RAG, LLMs, and vector databases where applicable.
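The caching bullet above can be sketched in its simplest in-process form with functools.lru_cache; `get_profile` here is a hypothetical stand-in for an expensive database or HTTP lookup, and distributed caches such as Redis follow the same get-or-compute pattern:

```python
from functools import lru_cache

calls = {"n": 0}  # tracks how often the "expensive" backend is hit

@lru_cache(maxsize=1024)
def get_profile(user_id: int) -> tuple:
    """Stand-in for an expensive DB/HTTP lookup; the result is memoized."""
    calls["n"] += 1
    return (user_id, f"user-{user_id}")

get_profile(7)
get_profile(7)  # served from the cache; the backend is not hit again
get_profile(8)
print(calls["n"], get_profile.cache_info().hits)  # 2 1
```

Note the function returns an immutable tuple: caching mutable objects (dicts, lists) risks callers mutating the shared cached value.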

Required Skills

  • Strong expertise in Python backend development using Django or Flask
  • Experience with REST APIs, WebSockets, and microservices architecture
  • Solid knowledge of design patterns, algorithms, and data structures
  • Experience with relational and NoSQL databases (PostgreSQL, MySQL, MongoDB)
  • Hands-on experience with AWS or GCP cloud services
  • Experience with CI/CD pipelines and containerization (Docker)
  • Proficiency in Git and Linux environments

Preferred Skills

  • Familiarity with AI/ML concepts
  • Experience with RAG architectures and LLM integrations
  • Knowledge of vector databases such as Pinecone or ChromaDB

What We’re Looking For

  • Strong problem-solving and system design skills
  • Ability to lead backend development initiatives
  • Experience building scalable and production-grade systems
  • Excellent collaboration and communication skills


Read more
Wohlig Transformations Pvt Ltd
Apoorva Lakshkar
Posted by Apoorva Lakshkar
Mumbai
7 - 10 yrs
₹15L - ₹23L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
DevOps
skill iconKubernetes

Job Overview 


We are seeking an experienced Senior Solution Architect to join our dynamic DevOps organization. The ideal candidate will have a strong background in cloud technologies, with expertise in migration projects across platforms such as GCP, AWS, and Azure. The candidate should possess a deep understanding of DevOps principles, Kubernetes orchestration, data migration and management, and automation tools like CI/CD pipelines and Terraform. The individual should be highly skilled in designing scalable application architectures capable of handling substantial workloads while ensuring the highest standards of quality.


Key Responsibilities 


  • Lead and drive cloud migration projects from on-premises data centers or other cloud platforms to GCP, AWS, or Azure.
  • Design and implement migration strategies that ensure minimal downtime and maximum efficiency.
  • Demonstrate proficiency in GCP, AWS, and Azure, with the ability to choose and optimize solutions based on specific business requirements.
  • Provide guidance on selecting the appropriate cloud services for various workloads.
  • Design, implement, and optimize CI/CD pipelines to streamline software delivery.
  • Utilize Terraform for infrastructure as code (IaC) to automate deployment processes.
  • Collaborate with development and operations teams to enhance the overall DevOps culture.
  • Possess in-depth knowledge and practical experience with Kubernetes orchestration for containerized applications.
  • Architect and optimize Kubernetes clusters for high availability and scalability.
  • Engage in research and development activities to stay abreast of industry trends and emerging technologies.
  • Evaluate and introduce new tools and methodologies to enhance the efficiency and effectiveness of cloud solutions.
  • Architect solutions that can handle large-scale workloads and provide guidance on scaling strategies.
  • Ensure high-performance levels and reliability in production environments.
  • Design scalable and high-performance database architectures tailored to meet business needs.
  • Execute database migrations with a keen focus on data consistency, integrity, and performance.
  • Develop and implement database pipelines to automate processes such as data migrations, schema changes, and backups.
  • Optimize database workflows to enhance efficiency and reliability.
  • Work closely with clients to assess and enhance the quality of existing architectures.
  • Implement best practices to ensure robust, secure, and well-architected solutions.
  • Drive migration projects, collaborating with cross-functional teams to ensure successful execution.
  • Provide technical leadership and mentorship to junior team members.


Required Skills and Qualifications: 


  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Relevant industry experience in a Solution Architect role.
  • Proven experience in leading cloud migration projects across GCP, AWS, and Azure.
  • Expertise in DevOps practices, CI/CD pipelines, and infrastructure automation.
  • In-depth knowledge of Kubernetes and container orchestration.
  • Strong background in scaling architectures to handle significant workloads.
  • Sound knowledge of database migrations
  • Excellent communication skills and the ability to articulate complex technical concepts to both technical and non-technical stakeholders.


Read more
PhotonMatters
Human Resource
Posted by Human Resource
Remote only
2 - 11 yrs
₹4L - ₹12L / yr
CI/CD
skill iconAmazon Web Services (AWS)
Terraform
Ansible
skill iconDocker
+4 more

Role Overview:

We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.

Key Responsibilities:

  • Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
  • Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
  • Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
  • Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
  • Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
  • Implement security best practices in pipelines, infrastructure, and cloud environments.
  • Maintain version control and manage release cycles.
  • Troubleshoot and resolve production issues efficiently.

Required Skills & Qualifications:

  • Bachelor’s degree in Computer Science, IT, or related field.
  • Proven experience in DevOps, system administration, or cloud engineering.
  • Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
  • Hands-on experience with containerization (Docker, Kubernetes).
  • Experience with cloud platforms (AWS, Azure, or GCP).
  • Scripting skills (Python, Bash, or PowerShell).
  • Knowledge of infrastructure as code (Terraform, CloudFormation).
  • Familiarity with monitoring and logging tools.
  • Strong problem-solving, communication, and teamwork skills.

Preferred Qualifications:

  • Experience with microservices architecture.
  • Knowledge of networking, load balancing, and firewalls.
  • Exposure to Agile/Scrum methodologies.

What We Offer:

  • Competitive salary
  • Flexible working hours and remote options.
  • Learning and development opportunities.
  • Collaborative and inclusive work environment.


Read more
PhotonMatters
Human Resource
Posted by Human Resource
Remote only
4 - 13 yrs
₹8L - ₹20L / yr
skill iconPython
ETL
Spark
skill iconAmazon Web Services (AWS)
ELT
+2 more

 

 

 

Job Title: Data Engineer

Experience: 4–14 Years

Work Mode: Remote

Employment Type: Full-Time

 

Position Overview:

We are looking for highly experienced Senior Data Engineers to design, architect, and lead scalable, cloud-based data platforms on AWS. The role involves building enterprise-grade data pipelines, modernizing legacy systems, developing high-performance scoring engines and analytics solutions, and collaborating closely with architecture, analytics, risk, and business teams to deliver secure, reliable, and scalable data solutions.

 

Key Responsibilities:

·      Design and build scalable data pipelines for financial and customer data

·      Build and optimize scoring engines (credit, risk, fraud, customer scoring)

·      Design, develop, and optimize complex ETL/ELT pipelines (batch & real-time)

·      Ensure data quality, governance, reliability, and compliance standards

·      Optimize large-scale data processing using SQL, Spark/PySpark, and cloud technologies

·      Lead cloud data architecture, cost optimization, and performance tuning initiatives

·      Collaborate with Data Science, Analytics, and Product teams to deliver business-ready datasets

·      Mentor junior engineers and establish best practices for data engineering

 

Key Requirements:

·      Strong programming skills in Python and advanced SQL

·      Experience building scalable scoring or rule-based decision engines

·      Hands-on experience with Big Data technologies (Spark/PySpark/Kafka)

·      Strong expertise in designing ETL/ELT pipelines and data modeling

·      Experience with cloud platforms (AWS/Azure) and modern data architectures

·      Solid understanding of data warehousing, data lakes, and performance tuning

·      Knowledge of CI/CD, version control (Git), and production support best practices
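A "rule-based decision engine" of the kind mentioned above can be sketched as a table of (name, predicate, points) rules folded over an applicant record; every field name and threshold below is invented for illustration:

```python
# Each rule: (description, predicate over the record, points if it fires).
RULES = [
    ("salaried income",    lambda r: r["employment"] == "salaried", 30),
    ("income >= 50k",      lambda r: r["monthly_income"] >= 50_000, 25),
    ("no recent defaults", lambda r: r["defaults_12m"] == 0,        35),
    ("low utilisation",    lambda r: r["credit_utilisation"] < 0.4, 10),
]

def score(record: dict) -> tuple:
    """Return (total points, names of the rules that fired)."""
    total, fired = 0, []
    for name, pred, points in RULES:
        if pred(record):
            total += points
            fired.append(name)
    return total, fired

applicant = {"employment": "salaried", "monthly_income": 62_000,
             "defaults_12m": 0, "credit_utilisation": 0.55}
print(score(applicant))
# (90, ['salaried income', 'income >= 50k', 'no recent defaults'])
```

Keeping rules as data rather than branching code is what lets such engines be versioned, audited, and tuned without redeploying the pipeline, and the same pattern maps onto Spark UDFs or SQL CASE expressions at scale.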

Read more
Software and consulting company

Software and consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
skill icon.NET
ReAct (Reason + Act)
skill iconAmazon Web Services (AWS)
Unit testing
skill iconReact Native
+22 more

FULL STACK DEVELOPER

JOB DESCRIPTION – FULL STACK DEVELOPER 

Location: Bangalore 

 

Key Responsibilities      

Establish processes, SLAs, and escalation protocols for the support & maintenance of web applications       

Manage stakeholders with effective communication & collaborate with cross-functional teams to address issues and maintain business continuity.      

Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and the latest tech like Agentic AI and GitHub Copilot.     

Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups       

Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team       

Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows    

Assist Software Designer/Implementers with the creation of detailed software design specifications      

Participate in the system specification review process to ensure system requirements can be translated into valid software architecture       

Integrate internal and external product designs into a cohesive user experience       

Identify and keep track of metrics that indicate how software is performing     

Handle technical and non-technical queries from the development team and stakeholders      

Ensure that all development practices follow best practices and any relevant policies / procedures 

 

Other Duties

Maintain project reporting including dashboards, status reports, road maps, burn-down, velocity, and resource utilization.

Own the technical solution and ensure all technical aspects are implemented as designed.

Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability

Flexibility to work in rotational shifts

 

Required Qualifications

Previous experience leading full stack technology projects with scrum teams and stakeholder management

BTech or MTech in computer science or a related field

3-5 years of experience.  

 

Required Knowledge, Skills and Abilities:

Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material UI, Bootstrap, TypeScript, SCSS, microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, Agentic AI, and GitHub Copilot

Exposure to Azure DevOps, design systems, micro frontends, and data science

Stakeholder management and excellent communication skills.

 

Must have skills

React - 3 years

React Native - 3 years

Redux - 1 year

Material UI - 1 year

TypeScript - 1 year

Bootstrap - 1 year

Microservices - 2 years

SQL - 1 year

Azure - 1 year

 

Nice to have skills

.NET Core - 3 years

.NET 8 - 3 years

AWS - 1 year

LINQ - 1 year

CAW.Tech

Posted by Ranjana Singh
Bengaluru (Bangalore), Hyderabad
4 - 7 yrs
Best in industry
Python
FastAPI
NodeJS (Node.js)
React.js
Large Language Models (LLM)
+5 more

Role

We are looking for a Full Stack Engineer who can own the entire technical stack, design systems that scale, and ship products fast. You will work across frontend, backend, and AI systems, making key architectural decisions while building a product used by real users.

This role offers high ownership, where engineers move ideas to production quickly and take responsibility for both technical decisions and product impact.


What would you do?

  • Build and own the end-to-end platform using React, Node.js microservices, Python AI agents, and AWS.
  • Design and implement scalable system architecture, including caching, databases, and state management between AI and UI.
  • Develop AI-powered backend services and orchestrate LLM workflows using modern frameworks.
  • Build highly interactive front-end experiences using modern React and real-time communication tools.
  • Define and maintain engineering best practices, including CI/CD pipelines, monorepo structures, and development workflows.
  • Collaborate closely with users and product teams to identify problems and ship impactful solutions.
  • Continuously simplify systems by removing unnecessary complexity and keeping architecture clean.


Who should apply?

  • Engineers with 4+ years of experience building and shipping production-grade products.
  • Strong understanding of system design, architecture, and scalable backend systems.
  • Hands-on experience with Python (FastAPI, async systems) and LLM-based applications.
  • Proficiency in JavaScript / TypeScript with Node.js and modern backend frameworks.
  • Experience building modern frontend applications using React (React 18+).
  • Familiarity with databases such as Redis, PostgreSQL, or MongoDB, and designing scalable APIs.
  • Engineers comfortable working in fast-paced environments with high ownership and minimal process overhead.


Technical Skills

  • Backend: Node.js, Express, Python, FastAPI
  • Frontend: React (React 18+), interactive UI development
  • AI/LLM Systems: LLM orchestration, multi-model integrations
  • Databases: Redis, PostgreSQL, MongoDB
  • Infrastructure: AWS, CI/CD pipelines, microservices architecture
  • Real-time Systems: Socket.IO, Server-Sent Events (SSE)
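
Server-Sent Events, listed above under real-time systems, use a simple line-oriented wire format that is worth knowing before reaching for a framework; a stdlib-only framing helper (the `llm_chunk` event name is illustrative, not from any real API):

```python
# Frame a payload as one Server-Sent Events message (text/event-stream).
# Each field is a "name: value" line; a blank line terminates the event.
import json
from typing import Optional

def sse_event(data: dict, event: Optional[str] = None,
              event_id: Optional[str] = None) -> str:
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")   # lets clients resume via Last-Event-ID
    if event is not None:
        lines.append(f"event: {event}")   # named event type for addEventListener
    # data may span multiple "data:" lines; JSON keeps it to a single line here
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"

msg = sse_event({"token": "hello"}, event="llm_chunk", event_id="1")
print(msg)
# id: 1
# event: llm_chunk
# data: {"token": "hello"}
```

In a FastAPI service, strings framed this way would be yielded from a `StreamingResponse` with the `text/event-stream` media type to push LLM tokens to the UI as they arrive.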


Towards AGI
Posted by Shivani Sharma
Bengaluru (Bangalore), Chennai
5 - 11 yrs
₹20L - ₹25L / yr
Data Transformation Tool (DBT)
Amazon Web Services (AWS)
Apache Airflow
SQL
Data engineering
+4 more

We are looking for an experienced Data Engineer with strong expertise in AWS, DBT, Databricks, and Apache Airflow to join our growing data engineering team.


Immediate joiners preferred


Role Overview 


The ideal candidate will design, develop, and maintain scalable data pipelines and data platforms to support analytics and business intelligence initiatives.


Key Responsibilities

  1. Design and build scalable data pipelines using AWS, Databricks, DBT, and Airflow.
  2. Develop and optimize ETL/ELT workflows for large-scale data processing.
  3. Implement data transformation models using DBT.
  4. Orchestrate workflows using Apache Airflow.
  5. Work with Databricks for big data processing and analytics.
  6. Ensure data quality, reliability, and performance optimization.
  7. Collaborate with data analysts, engineers, and business teams.


Required Skills

  1. Strong experience with AWS data services
  2. Hands-on experience with Databricks
  3. Experience in DBT (Data Build Tool)
  4. Workflow orchestration using Apache Airflow
  5. Strong SQL and Python skills
  6. Experience in data warehousing and ETL pipelines
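
Much of the ETL/ELT pipeline experience asked for above comes down to making incremental loads idempotent, so a replayed batch never duplicates rows. A toy sketch where sqlite3 stands in for the warehouse and the table and column names are invented:

```python
# Idempotent incremental load sketch: upserting makes a replayed batch safe.
# sqlite3 is a stand-in for the warehouse; stg_orders/id/amount are invented names.
import sqlite3

def incremental_load(conn: sqlite3.Connection, rows: list) -> int:
    """Upsert (id, amount) rows into a staging table and return its row count."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_orders (id INTEGER PRIMARY KEY, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO stg_orders (id, amount) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount = excluded.amount",
        rows,
    )
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM stg_orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
incremental_load(conn, [(1, 10.0), (2, 20.0)])
count = incremental_load(conn, [(2, 25.0), (3, 30.0)])  # replays id 2 safely
print(count)  # 3
```

In the stack named above, the same upsert pattern would typically be a Databricks `MERGE INTO` or a DBT incremental model, with Airflow triggering each batch.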


Generative AI Persona platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹15L - ₹20L / yr
Machine Learning (ML)
Python
ETL
Data Science
ELT
+6 more

Description

We are currently hiring for the position of Data Scientist / Senior Machine Learning Engineer (6–7 years of experience).

 

Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)

 

Kindly note:

  • Location: Pune (Work From Office)
  • Immediate joiners preferred

 

While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort

 

Must have skills

Machine Learning - 6 years

Python - 6 years

ETL (Extract, Transform, Load) - 6 years

SQL - 6 years

Azure - 6 years

 

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Remote only
5 - 12 yrs
₹2L - ₹10L / yr
Generative AI
Data engineering
AWS Bedrock
Retrieval Augmented Generation (RAG)
Llama
+6 more

Job Title : Data / Generative AI Engineer

Experience : 5+ Years (Mid-Level) | 10+ Years (Senior)

Location : Remote

Employment Type : Contract

Open Positions : 5


Job Overview :

We are hiring Data / Generative AI Engineers for remote contract engagements supporting client-facing AI implementations. The role involves building production-grade Generative AI solutions on AWS, including conversational AI systems, RAG-based architectures, intelligent automation platforms, and scalable data engineering pipelines.


Mandatory Skills :

Amazon Bedrock, Generative AI, RAG Architecture, LangChain/LlamaIndex/Bedrock Agents, Python (3.9+), AWS Serverless (Lambda, API Gateway, Step Functions), Vector Databases, Data Engineering & ETL, AWS Glue, Amazon Athena.


Key Responsibilities :

  • Design and build production-ready Generative AI applications on AWS.
  • Implement Retrieval-Augmented Generation (RAG) architectures for enterprise AI solutions.
  • Integrate Amazon Bedrock with foundation models and enterprise systems.
  • Develop AI agent orchestration workflows using frameworks such as LangChain, LlamaIndex, or Bedrock Agents.
  • Build and manage serverless architectures using AWS services like Lambda, API Gateway, and Step Functions.
  • Implement vector databases and semantic search solutions for intelligent knowledge retrieval.
  • Design and maintain data engineering pipelines and ETL workflows for large-scale data processing.
  • Use AWS Glue for data transformation and orchestration.
  • Utilize Amazon Athena for querying large datasets and performing analytics.
  • Develop scalable Python-based APIs and backend services.
  • Collaborate with cross-functional teams and clients to deliver AI-powered solutions in production environments.


Required Skills :

  • Strong experience with Amazon Bedrock and foundation model integrations
  • Hands-on experience with LangChain, LlamaIndex, or Bedrock Agents
  • Advanced Python (3.9+) development and API building
  • Experience with AWS serverless architectures (Lambda, API Gateway, Step Functions)
  • Experience implementing vector databases and semantic search systems
  • Strong knowledge of data engineering and ETL pipeline development
  • Hands-on experience with AWS Glue for data transformation and orchestration
  • Experience using Amazon Athena for querying and analytics
  • Experience building RAG-based AI applications

Engagement Details :

  • Contract Duration : Minimum 3 to 6 Months
  • Work Timing : 8:00 AM – 4:00 PM EST
  • Start Timeline : Within 2 Weeks
  • Open Positions : 5