
50+ Apache Kafka Jobs in India

Apply to 50+ Apache Kafka Jobs on CutShort.io. Find your next job, effortlessly. Browse Apache Kafka Jobs and apply today!

Tarento Group
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
8yrs+
Up to ₹30L / yr (varies)
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+7 more

About Tarento:

 

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Job Summary:

We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.


Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures (see the sketch after this list).
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.
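
The Kafka-integration item above (flagged in the list) as a minimal sketch. The role's stack is Java/Spring Boot; Python with the confluent-kafka client is used here only for brevity, and the broker address, topic name, and payload are illustrative assumptions:

    import json
    from confluent_kafka import Producer  # pip install confluent-kafka

    producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed local broker

    def on_delivery(err, msg):
        # Invoked from flush()/poll(); surfaces the broker's ack or failure per message.
        if err is not None:
            print("delivery failed:", err)

    event = {"orderId": "o-1", "status": "CREATED"}  # hypothetical payload
    producer.produce("orders", key="o-1", value=json.dumps(event).encode(), callback=on_delivery)
    producer.flush()  # block until outstanding messages are acknowledged

In a Spring Boot service the same pattern is usually expressed with spring-kafka's KafkaTemplate; the delivery-callback idea carries over directly.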


Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.


Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.


Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Global digital transformation solutions provider
Agency job via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 9 yrs
₹15L - ₹25L / yr
Data engineering
Apache Kafka
Python
Amazon Web Services (AWS)
AWS Lambda
+11 more

Job Details

- Job Title: Lead I - Data Engineering 

- Industry: Global digital transformation solutions provider

- Domain: Information Technology (IT)

- Experience Required: 6-9 years

- Employment Type: Full Time

- Job Location: Pune

- CTC Range: Best in Industry


Job Description

Job Title: Senior Data Engineer (Kafka & AWS)

Responsibilities:

  • Develop and maintain real-time data pipelines using Apache Kafka (MSK or Confluent) and AWS services.
  • Configure and manage Kafka connectors, ensuring seamless data flow and integration across systems.
  • Demonstrate strong expertise in the Kafka ecosystem, including producers, consumers, brokers, topics, and schema registry.
  • Design and implement scalable ETL/ELT workflows to efficiently process large volumes of data.
  • Optimize data lake and data warehouse solutions using AWS services such as Lambda, S3, and Glue.
  • Implement robust monitoring, testing, and observability practices to ensure reliability and performance of data platforms.
  • Uphold data security, governance, and compliance standards across all data operations.
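
To make the pipeline responsibilities above concrete, a minimal consumer-to-S3 sketch using the confluent-kafka client and boto3; the bootstrap address, topic, bucket, and batch size are illustrative assumptions:

    import json
    import boto3                            # AWS SDK for Python
    from confluent_kafka import Consumer    # pip install confluent-kafka

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # or the MSK bootstrap string
        "group.id": "s3-sink",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["events"])              # hypothetical topic
    s3 = boto3.client("s3")

    batch = []
    while True:
        msg = consumer.poll(1.0)                # wait up to 1s for the next message
        if msg is None or msg.error():
            continue
        batch.append(json.loads(msg.value()))
        if len(batch) >= 500:                   # micro-batch before landing in the lake
            key = f"raw/events-{msg.offset()}.json"
            s3.put_object(Bucket="my-data-lake", Key=key, Body=json.dumps(batch).encode())
            consumer.commit()                   # commit offsets only after a successful write
            batch = []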

 

Requirements:

  • Minimum of 5 years of experience in Data Engineering or related roles.
  • Proven expertise with Apache Kafka and the AWS data stack (MSK, Glue, Lambda, S3, etc.).
  • Proficient in coding with Python, SQL, and Java — with Java strongly preferred.
  • Experience with Infrastructure-as-Code (IaC) tools (e.g., CloudFormation) and CI/CD pipelines.
  • Excellent problem-solving, communication, and collaboration skills.
  • Flexibility to write production-quality code in both Python and Java as required.

 

Skills: AWS, Kafka, Python


Notice period: 0 to 15 days only

AI-First Company
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data engineering
Data architecture
SQL
Data modeling
GCS
+47 more

ROLES AND RESPONSIBILITIES:

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.


  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.
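
Much of the reflection and query-tuning work above starts from being able to query Dremio programmatically. A minimal sketch over Dremio's Arrow Flight endpoint using pyarrow (the host, credentials, and dataset path are placeholders; 32010 is Dremio's default Flight port):

    from pyarrow import flight  # pip install pyarrow

    client = flight.FlightClient("grpc+tcp://dremio-host:32010")
    # Basic auth returns a bearer-token header to attach to subsequent calls.
    token = client.authenticate_basic_token("user", "password")
    options = flight.FlightCallOptions(headers=[token])

    query = "SELECT * FROM my_space.sales LIMIT 10"   # placeholder dataset
    info = client.get_flight_info(flight.FlightDescriptor.for_command(query), options)
    reader = client.do_get(info.endpoints[0].ticket, options)
    table = reader.read_all()   # results arrive as a pyarrow.Table
    print(table.num_rows)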


IDEAL CANDIDATE:

  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.


PREFERRED:

  • Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) and data catalogs (Collibra, Alation, Purview).
  • Exposure to Snowflake, Databricks, or BigQuery environments.
  • Experience in high-tech, manufacturing, or enterprise data modernization programs.
Product Based Co
Agency job via Vikash Technologies by Rishika Teja
Hyderabad
6 - 10 yrs
₹30L - ₹50L / yr
Java
Thread
Socket Programming
JVM
JMS
+2 more

Required Skills & Experience:


Core Technical:


  • 5–10 years of experience with Java (strong command of core Java & concurrency).
  • Deep understanding of:
      • Threads, locks, synchronization
      • NIO, socket programming
      • File I/O, persistence, journaling
      • JVM memory model
  • Experience debugging distributed or messaging systems.

 

Messaging & Protocols:


  • Experience with JMS or other message brokers (Kafka, RabbitMQ, ActiveMQ, EMS).
  • Understanding of message delivery semantics (ACK, transactions, redelivery, selectors).
  • Familiarity with wire protocols (OpenWire, STOMP, MQTT) or similar protocol stacks.
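
As a quick illustration of the broker and wire-protocol items above, a minimal STOMP round-trip against a broker such as ActiveMQ, using the stomp.py library. The localhost broker, default STOMP port 61613, credentials, and queue name are assumptions, and the on_message signature shown is stomp.py 8.x:

    import time
    import stomp  # pip install stomp.py

    class Listener(stomp.ConnectionListener):
        def on_message(self, frame):            # stomp.py 8.x signature
            print("received:", frame.body)

    conn = stomp.Connection([("localhost", 61613)])   # ActiveMQ's default STOMP port
    conn.set_listener("", Listener())
    conn.connect("admin", "admin", wait=True)         # assumed dev credentials
    conn.subscribe(destination="/queue/demo", id=1, ack="auto")
    conn.send(destination="/queue/demo", body="hello broker")
    time.sleep(2)                                     # give the listener time to fire
    conn.disconnect()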

 

Debugging & Problem Solving:


  • Strong ability to read, understand, and extend large legacy Java codebases.
  • Experience diagnosing performance bottlenecks or production failures.

Open Source Mindset:


  • Prior open-source contribution is a strong advantage.
  • Good written communication skills for interacting with the Apache community.


Trential Technologies
Posted by Garima Jangid
Gurugram
3 - 5 yrs
₹20L - ₹35L / yr
Javascript
NodeJS (Node.js)
Amazon Web Services (AWS)
NOSQL Databases
Google Cloud Platform (GCP)
+7 more

What you'll be doing:

As a Software Developer at Trential, you will be the bridge between technical strategy and hands-on execution. Working with our dedicated engineering team, you will design, build, and deploy our core platforms and APIs, and build and maintain back-end services using modern frameworks. You will ensure our solutions are scalable, secure, interoperable, and aligned with open standards and our core vision.

  • Design & Implement: Lead the design, implementation and management of Trential’s products.
  • Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
  • Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
  • Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
  • Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.


What we're looking for:

  • 3+ years of experience in backend development.
  • Deep proficiency in JavaScript and Node.js, with experience building and operating distributed, fault-tolerant systems.
  • Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
  • Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.

Preferred Qualifications (Nice to Have)

  • Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
  • Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and experience designing systems that comply with them.
  • Experience integrating AI/ML models into verification or data extraction workflows.
iMerit
Bengaluru (Bangalore)
6 - 9 yrs
₹10L - ₹15L / yr
DevOps
Terraform
Apache Kafka
Python
Go Programming (Golang)
+4 more

Exp: 7–10 years

CTC: up to 35 LPA


Skills:

  • 6–10 years DevOps / SRE / Cloud Infrastructure experience
  • Expert-level Kubernetes (networking, security, scaling, controllers)
  • Terraform Infrastructure-as-Code mastery
  • Hands-on Kafka production experience
  • AWS cloud architecture and networking expertise
  • Strong scripting in Python, Go, or Bash
  • GitOps and CI/CD tooling experience


Key Responsibilities:

  • Design highly available, secure cloud infrastructure supporting distributed microservices at scale
  • Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
  • Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
  • Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
  • Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
  • Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
  • Ensure production-ready disaster recovery and business continuity testing



If interested, kindly share your updated resume at 82008 31681.

Oneture Technologies
Posted by Eman Khan
Mumbai
4 - 7 yrs
Up to ₹21L / yr (varies)
Data architecture
Data modeling
ETL
ELT
Spark
+3 more

About The Role

  • As a Data Platform Lead, you will utilize your strong technical background and hands-on development skills to design, develop, and maintain data platforms.
  • Leading a team of skilled data engineers, you will create scalable and robust data solutions that enhance business intelligence and decision-making.
  • You will ensure the reliability, efficiency, and scalability of data systems while mentoring your team to achieve excellence.
  • Collaborating closely with our client’s CXO-level stakeholders, you will oversee pre-sales activities, solution architecture, and project execution.
  • Your ability to stay ahead of industry trends and integrate the latest technologies will be crucial in maintaining our competitive edge.

Key Responsibilities

  • Client-Centric Approach: Understand client requirements deeply and translate them into robust technical specifications, ensuring solutions meet their business needs.
  • Architect for Success: Design scalable, reliable, and high-performance systems that exceed client expectations and drive business success.
  • Lead with Innovation: Provide technical guidance, support, and mentorship to the development team, driving the adoption of cutting-edge technologies and best practices.
  • Champion Best Practices: Ensure excellence in software development and IT service delivery, constantly assessing and evaluating new technologies, tools, and platforms for project suitability.
  • Be the Go-To Expert: Serve as the primary point of contact for clients throughout the project lifecycle, ensuring clear communication and high levels of satisfaction.
  • Build Strong Relationships: Cultivate and manage relationships with CxO/VP level stakeholders, positioning yourself as a trusted advisor.
  • Deliver Excellence: Manage end-to-end delivery of multiple projects, ensuring timely and high-quality outcomes that align with business goals.
  • Report with Clarity: Prepare and present regular project status reports to stakeholders, ensuring transparency and alignment.
  • Collaborate Seamlessly: Coordinate with cross-functional teams to ensure smooth and efficient project execution, breaking down silos and fostering collaboration.
  • Grow the Team: Provide timely and constructive feedback to support the professional growth of team members, creating a high-performance culture.

Qualifications

  • Master’s (M. Tech., M.S.) in Computer Science or equivalent from reputed institutes like IIT, NIT preferred
  • Overall 6–8 years of experience with minimum 2 years of relevant experience and a strong technical background
  • Experience working in a mid-size IT services company is preferred

Preferred Certification

  • AWS Certified Data Analytics Specialty
  • AWS Solution Architect Professional
  • Azure Data Engineer + Solution Architect
  • Databricks Certified Data Engineer / ML Professional

Technical Expertise

  • Advanced knowledge of distributed architectures and data modeling practices.
  • Extensive experience with Data Lakehouse systems like Databricks and data warehousing solutions such as Redshift and Snowflake.
  • Hands-on experience with data technologies such as Apache Spark, SQL, Airflow, Kafka, Jenkins, Hadoop, Flink, Hive, Pig, HBase, Presto, and Cassandra.
  • Knowledge of BI tools including Power BI, Tableau, and QuickSight, as well as open-source equivalents like Superset and Metabase, is good to have.
  • Strong knowledge of data storage formats including Iceberg, Hudi, and Delta.
  • Proficient programming skills in Python, Scala, Go, or Java.
  • Ability to architect end-to-end solutions from data ingestion to insights, including designing data integrations using ETL and other data integration patterns.
  • Experience working with multi-cloud environments, particularly AWS and Azure.
  • Excellent teamwork and communication skills, with the ability to thrive in a fast-paced, agile environment.


Webnyay
Posted by Ishita Jindal
Noida
2 - 4 yrs
₹3L - ₹9L / yr
Python
Django
Google Cloud Platform (GCP)
Apache Kafka

About the role

Webnyay is looking for an experienced Backend Developer to build and scale reliable backend systems for our legal tech platform. You will work on core product architecture, high-performance APIs, and cloud-native services that support AI-driven workflows and large-scale data processing.


Responsibilities

  • Develop backend services using Python, Django, and FastAPI
  • Build scalable APIs and microservices for product features
  • Implement event-driven and asynchronous workflows using Kafka (see the sketch after this list)
  • Design and maintain backend integrations and data pipelines
  • Deploy and manage services on Google Cloud Platform (GCP)
  • Ensure performance, security, and reliability of backend systems
  • Collaborate with product and engineering teams to deliver production-ready features
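
A minimal async sketch of the event-driven item flagged above, using the aiokafka client; the broker address, topic name, and payload are illustrative assumptions:

    import asyncio
    import json
    from aiokafka import AIOKafkaProducer  # pip install aiokafka

    async def publish_case_event(payload: dict) -> None:
        producer = AIOKafkaProducer(bootstrap_servers="localhost:9092")
        await producer.start()                  # connect before sending
        try:
            await producer.send_and_wait("case-events", json.dumps(payload).encode())
        finally:
            await producer.stop()               # flush and release connections

    asyncio.run(publish_case_event({"case_id": 42, "status": "filed"}))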


Requirements

  • 4+ years of backend development experience
  • Strong proficiency in Python
  • Hands-on experience with Django and FastAPI
  • Experience working with Kafka or similar messaging systems
  • Working knowledge of GCP and cloud-based deployments
  • Solid understanding of backend architecture and API design
  • Experience with databases and production systems
  • Experience building SaaS or platform-based products
  • Exposure to AI-driven or data-intensive applications


Why Webnyay

  • Build technology that improves access to justice
  • Work on real-world, high-impact legal tech problems
  • Collaborative and ownership-driven work culture
  • Opportunity to grow with a fast-scaling startup
Blurgs AI
Posted by Nikita Sinha
Hyderabad, Bengaluru (Bangalore)
1 - 3 yrs
Up to ₹16L / yr (varies)
Python
Apache Kafka
MongoDB
Java

We are seeking a Senior Data Engineer to design, build, and maintain a robust, scalable on-premise data infrastructure. The role focuses on real-time and batch data processing using technologies such as Apache Pulsar, Apache Flink, MongoDB, ClickHouse, Docker, and Kubernetes.

Ideal candidates have strong systems knowledge, deep backend data experience, and a passion for building efficient, low-latency data pipelines in a non-cloud, on-prem environment.


Key Responsibilities

1. Data Pipeline & Streaming Development

  • Design and implement real-time data pipelines using Apache Pulsar and Apache Flink to support mission-critical systems.
  • Build high-throughput, low-latency data ingestion and processing workflows across streaming and batch workloads.
  • Integrate internal systems and external data sources into a unified on-premise data platform.
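
A minimal Pulsar produce/consume round-trip for the pipeline items above, using the pulsar-client library; the broker URL, topic, subscription name, and payload are illustrative assumptions:

    import pulsar  # pip install pulsar-client

    client = pulsar.Client("pulsar://localhost:6650")   # assumed local broker

    producer = client.create_producer("sensor-readings")         # hypothetical topic
    producer.send(b'{"vessel_id": "V1", "speed_kn": 12.4}')

    consumer = client.subscribe("sensor-readings", subscription_name="etl")
    msg = consumer.receive(timeout_millis=5000)
    print(msg.data())
    consumer.acknowledge(msg)   # ack so the broker can discard the message

    client.close()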

2. Data Storage & Modelling

  • Design efficient data models for MongoDB, ClickHouse, and other on-prem databases to support analytical and operational use cases.
  • Optimize storage formats, indexing strategies, and partitioning schemes for performance and scalability.

3. Infrastructure & Containerization

  • Deploy, manage, and monitor containerized data services using Docker and Kubernetes in on-prem environments.

4. Performance, Monitoring & Reliability

  • Monitor and fine-tune the performance of streaming jobs and database queries.
  • Implement robust logging, metrics, and alerting frameworks to ensure high availability and operational stability.
  • Identify pipeline bottlenecks and implement proactive optimizations.

Required Skills & Experience

  • Strong experience in data engineering with a focus on on-premise environments.
  • Expertise in streaming technologies such as Apache Pulsar, Apache Flink, or similar platforms.
  • Deep hands-on experience with MongoDB, ClickHouse, or other NoSQL/columnar databases.
  • Proficient in Python for data processing and backend development.
  • Practical experience deploying and managing systems using Docker and Kubernetes.
  • Strong understanding of Linux systems, performance tuning, and resource monitoring.

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields (or equivalent experience).

Additional Responsibilities for Senior-Level Hires

Leadership & Mentorship

  • Guide, mentor, and support junior engineers; establish best practices and code quality standards.

System Architecture

  • Lead the design and optimization of complex real-time and batch data pipelines for scalability and performance.

Sensor Data Expertise

  • Build and optimize sensor-driven data pipelines and stateful stream processing systems for mission-critical domains such as maritime and defense.

End-to-End Ownership

  • Take full responsibility for the performance, reliability, and optimization of on-premise data systems.
Oneture Technologies
Posted by Eman Khan
Mumbai
2 - 4 yrs
₹6L - ₹12L / yr
PySpark
ETL
ELT
Python
Flink
+2 more

About Oneture Technologies 

Oneture Technologies is a cloud-first digital engineering company helping enterprises and high-growth startups build modern, scalable, and data-driven solutions. Our teams work on cutting-edge big data, cloud, analytics, and platform engineering engagements where ownership, innovation, and continuous learning are core values. 


Role Overview 

We are looking for an experienced Data Engineer with 2-4 years of hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate must have strong expertise in PySpark and exposure to real-time or streaming frameworks such as Apache Flink. You will work closely with architects, data scientists, and product teams to design and deliver robust, high-performance data solutions. 


Key Responsibilities

  • Design, develop, and maintain scalable ETL/ELT data pipelines using PySpark
  • Implement real-time or near real-time data processing using Apache Flink
  • Optimize data workflows for performance, scalability, and reliability
  • Work with large-scale data platforms and distributed environments
  • Collaborate with cross-functional teams to integrate data solutions into products and analytics platforms
  • Ensure data quality, integrity, and governance across pipelines
  • Conduct performance tuning, debugging, and root-cause analysis of data processes
  • Write clean, modular, and well-documented code following best engineering practices


Primary Skills

  • Strong hands-on experience in PySpark (RDD, DataFrame API, Spark SQL)
  • Experience with Apache Flink, Spark or Kafka (streaming or batch)
  • Solid understanding of distributed computing concepts
  • Proficiency in Python for data engineering workflows
  • Strong SQL skills for data manipulation and transformation
  • Experience with data pipeline orchestration tools (Airflow, Step Functions, etc.)
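
Tying the PySpark and streaming items above together, a minimal structured-streaming sketch that reads from Kafka and lands Parquet files; the broker, topic, and paths are illustrative, and the spark-sql-kafka connector package must be on the classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")   # assumed broker
        .option("subscribe", "events")                         # hypothetical topic
        .load()
    )

    # Kafka delivers key/value as binary; cast to strings before downstream parsing.
    parsed = events.select(col("key").cast("string"), col("value").cast("string"))

    query = (
        parsed.writeStream.format("parquet")
        .option("path", "/data/events")                        # placeholder paths
        .option("checkpointLocation", "/data/checkpoints/events")
        .start()
    )
    query.awaitTermination()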


Secondary Skills

  • Experience with cloud platforms (AWS, Azure, or GCP)
  • Knowledge of data lakes, lakehouse architectures, and modern data stack tools
  • Familiarity with Delta Lake, Iceberg, or Hudi
  • Experience with CI/CD pipelines for data workflows
  • Understanding of messaging and streaming systems (Kafka, Kinesis)
  • Knowledge of DevOps and containerization tools (Docker)


Soft Skills

  • Strong analytical and problem-solving capabilities
  • Ability to work independently and as part of a collaborative team
  • Good communication and documentation skills
  • Ownership mindset with a willingness to learn and adapt


Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field


Why Join Oneture Technologies?

  • Opportunity to work on high-impact, cloud-native data engineering projects
  • Collaborative team environment with a strong learning culture
  • Exposure to modern data platforms, scalable architectures, and real-time data systems
  • Growth-oriented role with hands-on ownership across end-to-end data engineering initiatives
Bangalore and Kochi
Agency job via Truetech by Nithya A
Bengaluru (Bangalore), Kochi (Cochin), Thiruvananthapuram
8 - 12 yrs
₹20L - ₹30L / yr
PySpark
Apache Kafka
Data modeling
Kimball
Medallion

Full time

Kochi / Trivandrum / Bangalore

Mandatory skills: Data Engineer, Spark, Kafka, AWS/GCP/Azure, ETL, SQL, Databricks (Medallion architecture), Data modeling (Kimball methodology).

Minimum 8 years of experience.

 

Key Responsibilities:

Silver/Gold Layer Development

- Translate raw data from sources (Zuora, Commerce Tools, APIs, legacy databases) into curated Silver and Gold tables.
- Apply business logic, standardization, and data quality checks.
- Partner with analysts and business stakeholders to validate metric definitions.
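
A minimal sketch of the Bronze-to-Silver step described above, using PySpark with Delta tables on Databricks; the table and column names are illustrative assumptions:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()   # on Databricks, `spark` already exists

    bronze = spark.read.table("bronze.zuora_invoices")   # hypothetical Bronze table

    silver = (
        bronze.dropDuplicates(["invoice_id"])
        .withColumn("amount", F.col("amount").cast("decimal(18,2)"))   # standardize types
        .filter(F.col("invoice_id").isNotNull())                       # basic quality gate
    )

    silver.write.format("delta").mode("overwrite").saveAsTable("silver.invoices")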


Replatforming Support

- Audit existing pipelines for dependencies on legacy data structures.
- Update models and jobs to consume data from new transactional systems.
- Ensure lineage tracking and compatibility with downstream reporting and analytics.

Collaboration & Documentation

- Work closely with engineers, analysts, and business partners to document requirements, transformations, and validations.
- Contribute to metadata tagging and data cataloging (Unity Catalog).
- Deliver detailed documentation including data dictionaries, constraints (PK, FK), and pipeline logic.

Must-Have Qualifications:

- Strong data modeling skills (Kimball methodology) with hands-on Databricks experience.
- Proficiency in SQL and Python.
- Expertise with Databricks features such as workflows, DLTs, Delta tables, and Unity Catalog.
- Familiarity with Medallion Architecture (Bronze/Silver/Gold layers).
- Experience with version control systems (GitHub).
- Solid understanding of data governance, data quality frameworks, and observability practices.
- Ability to work effectively across analytics, product, and engineering teams.

Preferred Qualifications:

- Experience with replatforming large transactional systems.
- Exposure to subscription and commerce data domains.

Tarento Group
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
4yrs+
Best in industry
Java
Spring Boot
Microservices
Windows Azure
RESTful APIs
+5 more

Job Summary:

We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.

Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.

Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.

Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.

Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Upswing
Agency job via Talentfoxhr by ANMOL SINGH
Pune
2 - 5 yrs
₹5L - ₹7.5L / yr
Python
Google Cloud Platform (GCP)
FastAPI
RabbitMQ
Apache Kafka
+7 more

🚀 We’re Hiring: Python Developer – Pune 🚀


Are you a skilled Python Developer looking to work on high-performance, scalable backend systems?

If you’re passionate about building robust applications and working with modern technologies — this opportunity is for you! 💼✨


📍 Location: Pune

🏢 Role: Python Backend Developer

🕒 Type: Full-Time | Permanent


🔍 What We’re Looking For:

We need a strong backend professional with experience in:

🐍 Python (Advanced)

⚡ FastAPI

🛢️ MongoDB & Postgres

📦 Microservices Architecture

📨 Message Brokers (RabbitMQ / Kafka)

🌩️ Google Cloud Platform (GCP)

🧪 Unit Testing & TDD

🔐 Backend Security Standards

🔧 Git & Project Collaboration


🛠️ Key Responsibilities:

✔ Build and optimize Python backend services using FastAPI

✔ Design scalable microservices

✔ Manage and tune MongoDB & Postgres

✔ Implement message brokers for async workflows

✔ Drive code reviews and uphold coding standards

✔ Mentor team members

✔ Manage cloud deployments on GCP

✔ Ensure top-notch performance, scalability & security

✔ Write robust unit tests and follow TDD
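
A minimal FastAPI sketch of the kind of backend service described above; the endpoint, model, and fields are illustrative assumptions:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Order(BaseModel):     # request bodies are validated automatically
        symbol: str
        qty: int

    @app.post("/orders")
    async def create_order(order: Order) -> dict:
        # A real service would hand off to a message broker or database here.
        return {"status": "accepted", "symbol": order.symbol, "qty": order.qty}

    # Run with: uvicorn main:app --reload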


🎓 Qualifications:

➡ 2–4 years of backend development experience

➡ Strong hands-on Python + FastAPI

➡ Experience with microservices, DB management & cloud tech

➡ Knowledge of Agile/Scrum

➡ Bonus: Docker, Kubernetes, CI/CD


Capace Software Private Limited
Bengaluru (Bangalore), Bhopal
5 - 10 yrs
₹4L - ₹10L / yr
Django
CI/CD
Software deployment
RESTful APIs
Flask
+8 more

Senior Python Django Developer 

Experience: Back-end development: 6 years (Required)


Location: Bangalore / Bhopal

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the start-up environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.

Requirements:

  • 6 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ); see the sketch after this list.
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.
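
A minimal sketch of the event-driven pattern flagged in the list above, using Celery with a RabbitMQ broker; the broker URL, task name, and retry policy are illustrative assumptions:

    from celery import Celery  # pip install celery

    # Broker URL assumes a local RabbitMQ; swap in your own broker settings.
    app = Celery("payments", broker="amqp://guest:guest@localhost//")

    @app.task(bind=True, max_retries=3)
    def reconcile_transaction(self, txn_id: str) -> None:
        try:
            ...  # fetch the transaction and compare it against the gateway's settlement record
        except Exception as exc:
            raise self.retry(exc=exc, countdown=30)   # retry with a 30-second delay

    # Enqueue from Django code: reconcile_transaction.delay("txn_123")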

Job Types: Full-time, Permanent


Schedule:

  • Day shift

Supplemental Pay:

  • Performance bonus
  • Yearly bonus

Ability to commute/relocate:

  • JP Nagar, 5th Phase, Bangalore, Karnataka or Indrapuri, Bhopal, Madhya Pradesh: Reliably commute or willing to relocate with an employer-provided relocation package (Preferred)


Tradelab Software Private Limited
Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Python
FastAPI
RabbitMQ
Apache Kafka
Redis
+1 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort; we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.



What We Expect:

• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.

• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.

• You measure success by your own growth, not external validation.

• You thrive on challenges, not on perks or financial rewards.

• Taking calculated risks excites you: you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes: milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and maintain scalable backend systems using Python.

• Design and implement REST APIs and socket-based communication.

• Optimize code for speed, performance, and reliability.

• Collaborate with frontend teams to integrate server-side logic.

• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.

• Build fault-tolerant, multi-producer/consumer systems (see the sketch below).
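
To make the multi-producer/consumer bullet concrete, a minimal sketch using only the standard library; thread counts and queue size are illustrative:

    import queue
    import threading

    q = queue.Queue(maxsize=1000)   # a bounded queue gives natural back-pressure

    def producer(name: str) -> None:
        for i in range(5):
            q.put(f"{name}-tick-{i}")   # blocks while the queue is full

    def consumer() -> None:
        while True:
            item = q.get()
            print("processed", item)
            q.task_done()               # lets q.join() track completion

    consumers = [threading.Thread(target=consumer, daemon=True) for _ in range(2)]
    producers = [threading.Thread(target=producer, args=(f"p{i}",)) for i in range(3)]
    for t in consumers + producers:
        t.start()
    q.join()   # wait until every produced item has been processed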


Must-Have Skills:

• 3–4 years of experience in Python and backend development.

• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).

• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.

• Bachelor’s degree in Computer Science or related field.


Nice-to-Have Skills:

• Past experience in fintech, trading systems, or algorithmic trading.

• Experience with GoLang, C/C++, Erlang, or Elixir.

• Exposure to trading, fintech, or low-latency systems.

• Familiarity with microservices and CI/CD pipelines.

Virtana
Posted by Krutika Devadiga
Pune
4 - 10 yrs
Best in industry
Java
Kubernetes
Go Programming (Golang)
Python
Apache Kafka
+13 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more of these high level languages like GO, Python, Java. Deep experience with one of these languages is required. 
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

DigitalSprint AI Solutions
Posted by Nalini Sanka
Bengaluru (Bangalore)
6 - 12 yrs
₹15L - ₹25L / yr
Java
Spring Boot
Amazon Web Services (AWS)
Apache Kafka
Jenkins

Requirements

  • 6–12 years of backend development experience.
  • Strong expertise in Java 11+, Spring Boot, REST APIs, AWS.
  • Solid experience with distributed, high-volume systems.
  • Strong knowledge of RDBMS (e.g., MySQL, Oracle) and NoSQL databases (e.g., DynamoDB, MongoDB, Cassandra).
  • Hands-on with CI/CD (Jenkins) and caching technologies (Redis or similar).
  • Strong debugging and system troubleshooting skills.
  • Experience in payment systems is a must.


Tradelab Technologies
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
MySQL
Redis
Apache Kafka

We're Hiring for Golang Developer (2–4 years experience)

Company: Tradelab Technologies

Location: Bangalore (Preferred candidates from Bangalore or nearby locations only)


We are looking for a Golang Developer with strong experience in backend development, microservices, and system-level programming. The role involves contributing to high-performance trading systems, low-latency architecture, and scalable backend solutions.


Key Responsibilities

• Develop and maintain backend services using Golang

• Build scalable, secure and high-performance microservices

• Work with REST APIs, WebSockets, message queues and distributed systems

• Collaborate with DevOps, frontend and product teams for smooth delivery

• Optimize performance, troubleshoot issues and ensure system stability


Skills & Experience Required

• 2–4 years of experience in Golang development

• Strong fundamentals in data structures, concurrency and networking

• Experience with MySQL/Redis/Kafka or similar technologies

• Understanding of microservices, APIs and cloud environments

• Experience in fintech/trading systems is a plus


If you are passionate about backend engineering and want to work on fast, scalable trading systems, apply now.


Tradelab Technologies
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
Python
RestAPI
FastAPI
RabbitMQ
Apache Kafka
+3 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort; we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What We Expect:

• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.

• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.

• You measure success by your own growth, not external validation.

• You thrive on challenges, not on perks or financial rewards.

• Taking calculated risks excites you: you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes: milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and maintain scalable backend systems using Python.

• Design and implement REST APIs and socket-based communication.

• Optimize code for speed, performance, and reliability.

• Collaborate with frontend teams to integrate server-side logic.

• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.

• Build fault-tolerant, multi-producer/consumer systems.


Must-Have Skills:

• 3–4 years of experience in Python and backend development.

• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).

• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.

• Bachelor’s degree in Computer Science or related field.


Nice-to-Have Skills:

• Past experience in fintech, trading systems, or algorithmic trading.

• Experience with GoLang, C/C++, Erlang, or Elixir.

• Exposure to trading, fintech, or low-latency systems.

• Familiarity with microservices and CI/CD pipelines.



Inflection.io
Posted by Renu Philip
Remote only
5 - 8 yrs
₹48L - ₹60L / yr
Java
Spring Boot
Microservices
Python
Go Programming (Golang)
+4 more

Inflection.io is a venture-backed B2B marketing automation company, enabling businesses to communicate with their customers and prospects from one platform. We're used by leading SaaS companies like Sauce Labs, Sigma Computing, BILL, Mural, and Elastic, many of which pay more than $100K/yr (over ₹1 crore).

And... it’s working! We have world-class stats: our largest deal is over ₹3 crore, we have a 5-star rating on G2, over 100% NRR, and we constantly break sales and customer records. We’ve raised $14M in total since 2021, with $7.6M of fresh funding in 2024, giving us many years of runway.

However, we’re still in startup mode with approximately 30 employees and are looking for the next SDE3 to help propel Inflection forward. Do you want to join a fast-growing startup that is aiming to build a very large company?

Key Responsibilities:

  • Lead the design, development, and deployment of complex software systems and applications.
  • Collaborate with engineers and product managers to define and implement innovative solutions
  • Provide technical leadership and mentorship to junior engineers, promoting best practices and fostering a culture of continuous improvement.
  • Write clean, maintainable and efficient code, ensuring high performance and scalability of the software.
  • Conduct code reviews and provide constructive feedback to ensure code quality and adherence to coding standards.
  • Troubleshoot and resolve complex technical issues, optimizing system performance and reliability.
  • Stay updated with the latest industry trends and technologies, evaluating their potential for adoption in our projects.
  • Participate in the full software development lifecycle, from requirements gathering to deployment and monitoring.

Qualifications:

  • 5+ years of professional software development experience, with a strong focus on backend development.
  • Proficiency in one or more programming languages such as Java, Python, Golang or C#
  • Strong understanding of database systems, both relational (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra).
  • Hands-on experience with message brokers such as Kafka, RabbitMQ, or Amazon SQS.
  • Experience with cloud platforms (AWS or Azure or Google Cloud) and containerization technologies (Docker, Kubernetes).
  • Proven track record of designing and implementing scalable, high-performance systems.
  • Excellent problem-solving skills and the ability to think critically and creatively.
  • Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, team-oriented environment.


AI company
Agency job via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
5 - 17 yrs
₹30L - ₹45L / yr
Data architecture
Data engineering
SQL
Data modeling
GCS
+21 more

Review Criteria

  • Strong Dremio / Lakehouse Data Architect profile
  • 5+ years of experience in Data Architecture / Data Engineering, with minimum 3+ years hands-on in Dremio
  • Strong expertise in SQL optimization, data modeling, query performance tuning, and designing analytical schemas for large-scale systems
  • Deep experience with cloud object storage (S3 / ADLS / GCS) and file formats such as Parquet, Delta, Iceberg along with distributed query planning concepts
  • Hands-on experience integrating data via APIs, JDBC, Delta/Parquet, object storage, and coordinating with data engineering pipelines (Airflow, DBT, Kafka, Spark, etc.)
  • Proven experience designing and implementing lakehouse architecture including ingestion, curation, semantic modeling, reflections/caching optimization, and enabling governed analytics
  • Strong understanding of data governance, lineage, RBAC-based access control, and enterprise security best practices
  • Excellent communication skills with ability to work closely with BI, data science, and engineering teams; strong documentation discipline
  • Candidates must come from enterprise data modernization, cloud-native, or analytics-driven companies


Preferred

  • Preferred (Nice-to-have) – Experience integrating Dremio with BI tools (Tableau, Power BI, Looker) or data catalogs (Collibra, Alation, Purview); familiarity with Snowflake, Databricks, or BigQuery environments


Job Specific Criteria

  • CV Attachment is mandatory
  • How many years of experience you have with Dremio?
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

You will be responsible for architecting, implementing, and optimizing Dremio-based data lakehouse environments integrated with cloud storage, BI, and data engineering ecosystems. The role requires a strong balance of architecture design, data modeling, query optimization, and governance enablement in large-scale analytical environments.

  • Design and implement Dremio lakehouse architecture on cloud (AWS/Azure/Snowflake/Databricks ecosystem).
  • Define data ingestion, curation, and semantic modeling strategies to support analytics and AI workloads.
  • Optimize Dremio reflections, caching, and query performance for diverse data consumption patterns.
  • Collaborate with data engineering teams to integrate data sources via APIs, JDBC, Delta/Parquet, and object storage layers (S3/ADLS).
  • Establish best practices for data security, lineage, and access control aligned with enterprise governance policies.
  • Support self-service analytics by enabling governed data products and semantic layers.
  • Develop reusable design patterns, documentation, and standards for Dremio deployment, monitoring, and scaling.
  • Work closely with BI and data science teams to ensure fast, reliable, and well-modeled access to enterprise data.


Ideal Candidate

  • Bachelor’s or Master’s in Computer Science, Information Systems, or a related field.
  • 5+ years in data architecture and engineering, with 3+ years in Dremio or modern lakehouse platforms.
  • Strong expertise in SQL optimization, data modeling, and performance tuning within Dremio or similar query engines (Presto, Trino, Athena).
  • Hands-on experience with cloud storage (S3, ADLS, GCS), Parquet/Delta/Iceberg formats, and distributed query planning.
  • Knowledge of data integration tools and pipelines (Airflow, DBT, Kafka, Spark, etc.).
  • Familiarity with enterprise data governance, metadata management, and role-based access control (RBAC).
  • Excellent problem-solving, documentation, and stakeholder communication skills.
Appiness Interactive
Remote only
6 - 10 yrs
₹10L - ₹14L / yr
Python
Django
FastAPI
Flask
pandas
+9 more

Position Overview: The Lead Software Architect - Python & Data Engineering is a senior technical leadership role responsible for designing and owning end-to-end architecture for data-intensive, AI/ML, and analytics platforms, while mentoring developers and ensuring technical excellence across the organization. 


Key Responsibilities: 

  • Design end-to-end software architecture for data-intensive applications, AI/ML pipelines, and analytics platforms
  • Evaluate trade-offs between competing technical approaches 
  • Define data models, API approach, and integration patterns across systems 
  • Create technical specifications and architecture documentation 
  • Lead by example through production-grade Python code and mentor developers on engineering fundamentals 
  • Conduct design and code reviews focused on architectural soundness 
  • Establish engineering standards, coding practices, and design patterns for the team 
  • Translate business requirements into technical architecture 
  • Collaborate with data scientists, analysts, and other teams to design integrated solutions 
  • Whiteboard and defend system design and architectural choices 
  • Take responsibility for system performance, reliability, and maintainability 
  • Identify and resolve architectural bottlenecks proactively 


Required Skills:  

  • 8+ years of experience in software architecture and development  
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field 
  • Strong foundations in data structures, algorithms, and computational complexity 
  • Experience in system design for scale, including caching strategies, load balancing, and asynchronous processing  
  • 6+ years of Python development experience 
  • Deep knowledge of Django, Flask, or FastAPI 
  • Expert understanding of Python internals including GIL and memory management 
  • Experience with RESTful API design and event-driven architectures (Kafka, RabbitMQ) 
  • Proficiency in data processing frameworks such as Pandas, Apache Spark, and Airflow 
  • Strong SQL optimization and database design experience (PostgreSQL, MySQL, MongoDB)  Experience with AWS, GCP, or Azure cloud platforms 
  • Knowledge of containerization (Docker) and orchestration (Kubernetes) 
  • Hands-on experience designing CI/CD pipelines Preferred (Bonus) 


Skills

  • Experience deploying ML models to production (MLOps, model serving, monitoring) Understanding of ML system design including feature stores and model versioning 
  • Familiarity with ML frameworks such as scikit-learn, TensorFlow, and PyTorch  
  • Open-source contributions or technical blogging demonstrating architectural depth 
  • Experience with modern front-end frameworks for a full-stack perspective


Read more
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
skill iconGo Programming (Golang)
CI/CD
Apache Kafka
RabbitMQ
skill iconDocker
+1 more

Location: Bengaluru, India; Exp: 3-5 Yrs

Backend Developer (Golang) - Trading & Fintech


About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance driven backend systems, we want you.


What we expect:

• You should already be exceptional at Golang. If you need hand-holding, this isn’t the place for you.

• You thrive on challenges, not on perks or financial rewards.

• You measure success by your own growth, not external validation.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.



What you will do:

• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.

• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.

• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.

• Own your work—no babysitting, no micromanagement.

• Work alongside equally driven engineers who expect nothing less than brilliance.

• Learn faster than you ever thought possible.


Must have skills:

• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).

• Deep understanding of concurrency, memory management, and system design.

• Experience with trading, market data processing, or low-latency systems.

• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.

• Hands-on with Docker, Kubernetes, and CI/CD pipelines.

• A portfolio of work that speaks louder than a resume.


Nice-to-Have Skills:

• Past experience in fintech, trading systems, or algorithmic trading.

• Contributions to open-source Golang projects.

• A history of building something impactful from scratch.

• Understanding of FIX protocol, WebSockets, and streaming APIs.


Why Join Us?

• Work with a team that expects and delivers excellence.

• A culture where risk-taking is rewarded, and complacency is not.

• Limitless opportunities for growth—if you can handle the pace.

• A place where learning is currency, and outperformance is the only metric that matters.


The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech. This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.



Read more
HighLevel Inc.
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote only
6yrs+
Upto ₹50L / yr (Varies
)
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconMongoDB
skill iconExpress
MERN Stack
+2 more

About Us

HighLevel is an AI-powered, all-in-one white-label sales & marketing platform that empowers agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. We are proud to support a global and growing community of over 2 million businesses, comprised of agencies, consultants, and businesses of all sizes and industries. HighLevel empowers users with all the tools needed to capture, nurture, and close new leads into repeat customers. As of mid-2025, HighLevel processes over 15 billion API hits and handles more than 2.5 billion message events every day. Our platform manages over 470 terabytes of data distributed across five databases, operates with a network of over 250 microservices, and supports over 1 million domain names.


About the Role

  • We’re looking for a Lead Software Engineer with deep expertise in Node.js, MongoDB, and modern frontend frameworks, along with a bold vision for how AI-assisted development can accelerate engineering workflows.
  • You’ll join our Snapshots Team, the engine behind how agencies and SaaS creators templatise and clone complete automation systems—workflows, funnels, forms, calendars, settings, and more—instantly across thousands of client accounts.
  • Snapshots are now being cloned to more than 200,000 businesses every month, driving rapid onboarding and platform extensibility. The system saves businesses countless hours by replicating complex structures and business processes seamlessly across accounts, making automation scalable, repeatable, and fast.
  • Under the hood, Snapshots tackles engineering challenges like asset graph traversal, dependency resolution, real-time syncing across multi-tenant environments, and copying gigabytes of data across accounts—all while managing CPU- and memory-intensive operations at scale (a small dependency-resolution sketch follows this list).
  • As a Lead Engineer, you’ll own architectural decisions and solve deep product-platform problems around high-performance snapshot cloning, rollback systems, and version management. You’ll collaborate across product, infra, and platform teams while mentoring engineers and driving the long-term roadmap for one of HighLevel’s most strategic surface areas. If you enjoy building technically rigorous systems that scale, this team is where your work will have immediate and visible impact.
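
To make the asset-graph challenge concrete, here is a minimal dependency-resolution sketch in Python, assuming a hypothetical snapshot whose assets declare what they reference; all names are illustrative, not HighLevel's actual data model:

```python
from graphlib import TopologicalSorter

# Hypothetical snapshot asset graph: each asset lists the assets it depends on.
# A funnel references a form; a workflow references a funnel and a calendar.
assets = {
    "form:lead-capture": set(),
    "calendar:demo-booking": set(),
    "funnel:onboarding": {"form:lead-capture"},
    "workflow:nurture": {"funnel:onboarding", "calendar:demo-booking"},
}

# static_order() yields assets so that every dependency is cloned
# before any asset that references it.
clone_order = list(TopologicalSorter(assets).static_order())
print(clone_order)
# e.g. ['form:lead-capture', 'calendar:demo-booking', 'funnel:onboarding', 'workflow:nurture']
```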

Responsibilities

  • Lead the design and development of scalable, high-performance systems to improve reliability, latency, and throughput across time-sensitive APIs and workflows
  • Own features end-to-end — from architecture and implementation to testing, deployment, and ongoing optimization.
  • Work hands-on with technologies like NestJS, Firestore, MongoDB, PostgreSQL, Redis, Queuing Systems and Service Mesh-based microservices.
  • Drive technical direction across both product features and platform layers to ensure stability, scalability, and maintainability.
  • Integrate and optimize AI-assisted development tools, improving developer productivity while ensuring accuracy and minimizing hallucinations.
  • Collaborate closely with Product, Design, and AI teams to deliver impactful, user-facing features and backend systems.
  • Mentor and support other engineers, fostering a culture of technical excellence, learning, and ownership.
  • Proactively identify and solve performance bottlenecks, scalability challenges, and security concerns in a multi-tenant environment.

Requirements

  • 6+ years of backend engineering experience, including designing fault-tolerant systems and working on high-scale platforms.
  • Deep expertise in distributed systems, event-driven architectures, and asynchronous job processing.
  • Strong experience with relational and NoSQL data models (especially with complex temporal data).
  • Experience with modern front-end frameworks (e.g., React, Vue, Angular) and building full-stack web applications.
  • Proven track record of architecting complex systems and delivering scalable, high-performance web apps in production.
  • Strong understanding of software design patterns, API design, and microservices architecture in a multi-tenant environment.
  • Skilled in guiding technical architecture, making high-impact engineering decisions, and mentoring fellow engineers.
  • Experience with code quality practices, automated testing, CI/CD pipelines, and dev tooling optimization.
  • Excellent problem-solving skills, with clear and collaborative communication across cross-functional teams.

Our People

With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.


Our Impact

As of mid 2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.

Read more
prep study
Pooja Sharma
Posted by Pooja Sharma
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
FastAPI
RESTful APIs
RabbitMQ
Apache Kafka
skill iconElastic Search
+2 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance driven backend systems, we want you.

What We Expect:

• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.

• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.

• You measure success by your own growth, not external validation.

• You thrive on challenges, not on perks or financial rewards.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.

What You Will Do:

• Develop and maintain scalable backend systems using Python.

• Design and implement REST APIs and socket-based communication.

• Optimize code for speed, performance, and reliability.

• Collaborate with frontend teams to integrate server-side logic.

• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.

• Build fault-tolerant, multi-producer/consumer systems (a minimal sketch follows this list).
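
To illustrate the multi-producer/consumer item above, here is a minimal asyncio sketch; a production system would put RabbitMQ or Kafka in place of the in-process queue:

```python
import asyncio
import random

async def producer(queue: asyncio.Queue, name: str) -> None:
    for i in range(3):
        await queue.put(f"{name}-tick-{i}")        # enqueue a market event
        await asyncio.sleep(random.random() / 10)

async def consumer(queue: asyncio.Queue, name: str) -> None:
    while True:
        event = await queue.get()
        print(f"{name} processed {event}")
        queue.task_done()                          # mark the event as handled

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=100)
    workers = [asyncio.create_task(consumer(queue, f"c{i}")) for i in range(2)]
    await asyncio.gather(*(producer(queue, f"p{i}") for i in range(3)))
    await queue.join()                             # wait until every event is handled
    for w in workers:
        w.cancel()                                 # shut down the idle consumers

asyncio.run(main())
```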

Must-Have Skills:

• 3–4 years of experience in Python and backend development.

• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).

• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.

• Bachelor’s degree in Computer Science or related field.

Read more
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹35L / yr
skill iconJava
skill iconSpring Boot
Apache Kafka

SE/SSE - Backend (Java)



JAVA: Software Engineer / Senior Software Engineer


Job Overview: RevSure.AI is building a world-class team of Engineers with a mandate to architect, design, build, scale, and maintain our cutting-edge RevOps platform and derive insights from customer data. If you are a great developer in Java (or, for that matter, in any major programming language) with experience in building SaaS web applications and looking for an opportunity to build world-class products using cutting-edge technologies, please read on -


Responsibilities:

● Contribute to the growth and development of our product by writing and maintaining code, focusing on quality and maintainability. We believe that our teams own the code they build.

● Share your knowledge, both inside and outside your own team.

● Design, implement, test, and monitor valuable solutions that achieve the team’s goals while keeping a smooth delivery flow and balancing tradeoffs between scope, time, and effort.

● Continuously improve your and your team’s way of working, expanding the boundaries of your team’s autonomy and self-organisation.


Requirements:


Must-Have:

● Good interpersonal and communication skills. An analytical mind and an eye for detail

● Experience of 4-7 years in building large applications, preferably from the heavy data processing & analytics side

● Experience in architecting, designing, and building end-to-end multi-tenant SaaS applications

● Experience architecting interactive configuration experiences around DAGs, configuration-heavy reporting features, interactive data pipeline design interfaces, and charting.

● Experience in Java, Spring Boot, and GCP

● In-depth understanding of design patterns, OOPs, and Functional programming.

● Passionate programmer focused on Backend Programming.


Good to have:

● Working knowledge of GCP managed services like Kubernetes, PubSub


Please feel free to send your resume to chandana at revsure.ai


**Based on your experience, the title could vary. We care more about mindset, ownership, and craft than just titles.

Read more
Whiz IT Services
Sheeba Harish
Posted by Sheeba Harish
Remote only
10 - 15 yrs
₹20L - ₹20L / yr
skill iconJava
skill iconSpring Boot
Microservices
API
Apache Kafka
+5 more

We are looking for highly experienced Senior Java Developers who can architect, design, and deliver high-performance enterprise applications using Spring Boot and microservices. The role requires a strong understanding of distributed systems, scalability, and data consistency.

Read more
Sonatype

at Sonatype

5 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Hyderabad
5 - 8 yrs
Upto ₹28L / yr (Varies
)
ETL
skill iconPython
skill iconJava
databricks
SQL
+7 more

Who We Are

At Sonatype, we help organizations build better, more secure software by enabling them to understand and control their software supply chains. Our products are trusted by thousands of engineering teams globally, providing critical insights into dependency health, license risk, and software security. We’re passionate about empowering developers—and we back it with data.


The Opportunity

We’re looking for a Data Engineer with full stack expertise to join our growing Data Platform team. This role blends data engineering, microservices, and full-stack development to deliver end-to-end services that power analytics, machine learning, and advanced search across Sonatype.

You will design and build data-driven microservices and workflows using Java, Python, and Spring Batch, implement frontends for data workflows, and deploy everything through CI/CD pipelines into AWS ECS/Fargate. You’ll also ensure services are monitorable, debuggable, and reliable at scale, while clearly documenting designs with Mermaid-based sequence and dataflow diagrams.

This is a hands-on engineering role for someone who thrives at the intersection of data systems, fullstack development, ML, and cloud-native platforms.


What You’ll Do

  • Design, build, and maintain data pipelines, ETL/ELT workflows, and scalable microservices.
  • Develop complex web scraping (Playwright) and real-time pipelines (Kafka/queues/Flink).
  • Develop end-to-end microservices with backend (Java 5+, Python 5+, Spring Batch 2+) and frontend (React or any).
  • Deploy, publish, and operate services in AWS ECS/Fargate using CI/CD pipelines (Jenkins, GitOps).
  • Architect and optimize data storage models in SQL (MySQL, PostgreSQL) and NoSQL stores.
  • Implement web scraping and external data ingestion pipelines.
  • Enable Databricks and PySpark-based workflows for large-scale analytics.
  • Build advanced data search capabilities (fuzzy matching, vector similarity search, semantic retrieval); a small fuzzy-matching sketch follows this list.
  • Apply ML techniques (scikit-learn, classification algorithms, predictive modeling) to data-driven solutions.
  • Implement observability, debugging, monitoring, and alerting for deployed services.
  • Create Mermaid sequence diagrams, flowcharts, and dataflow diagrams to document system architecture and workflows.
  • Drive best practices in fullstack data service development, including architecture, testing, and documentation.
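
As a toy illustration of the fuzzy-matching capability above, the Python standard library alone gets surprisingly far; the package names below are illustrative:

```python
from difflib import SequenceMatcher

# Toy fuzzy ranking: score each candidate name against the query and keep
# the closest matches. Real systems layer indexing, tokenization, and
# vector similarity on top of this basic idea.
def fuzzy_rank(query: str, names: list[str], cutoff: float = 0.4) -> list[tuple[float, str]]:
    scored = [(SequenceMatcher(None, query, n).ratio(), n) for n in names]
    return sorted((pair for pair in scored if pair[0] >= cutoff), reverse=True)

packages = ["log4j-core", "log4j-api", "logback-classic", "slf4j-api"]
print(fuzzy_rank("log4j", packages))
```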


What We’re Looking For


Minimum Qualifications

  • 2+ years of experience as a Data Engineer or in a backend software engineering role
  • Strong programming skills in Python, Scala, or Java
  • Hands-on experience with HBase or similar NoSQL columnar stores
  • Hands-on experience with distributed data systems like Spark, Kafka, or Flink
  • Proficient in writing complex SQL and optimizing queries for performance
  • Experience building and maintaining robust ETL/ELT pipelines in production
  • Familiarity with workflow orchestration tools (Airflow, Dagster, or similar)
  • Understanding of data modeling techniques (star schema, dimensional modeling, etc.)
  • Familiarity with CI/CD pipelines (Jenkins or similar)
  • Ability to visualize and communicate architectures using Mermaid diagrams

Bonus Points

  • Experience working with Databricks, dbt, Terraform, or Kubernetes
  • Familiarity with streaming data pipelines or real-time processing
  • Exposure to data governance frameworks and tools
  • Experience supporting data products or ML pipelines in production
  • Strong understanding of data privacy, security, and compliance best practices


Why You’ll Love Working Here

  • Data with purpose: Work on problems that directly impact how the world builds secure software
  • Modern tooling: Leverage the best of open-source and cloud-native technologies
  • Collaborative culture: Join a passionate team that values learning, autonomy, and impact
Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹25L - ₹30L / yr
skill iconMachine Learning (ML)
AWS CloudFormation
Online machine learning
skill iconAmazon Web Services (AWS)
ECS
+20 more

MUST-HAVES: 

  • Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
  • Notice period - 0 to 15 days only 
  • Hybrid work mode- 3 days office, 2 days at home


SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS


ADDITIONAL GUIDELINES:

  • Interview process: 2 technical rounds + 1 client round
  • 3 days in office, Hybrid model. 


CORE RESPONSIBILITIES:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Build algorithms and architectures spanning traditional statistical methods to deep learning, including employing LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.


SKILLS:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and the Neo4j graph database.
  • Model Deployment and Monitoring: MLOps Experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.


REQUIRED EXPERIENCE:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Read more
Tradelab Software Private Limited
Pooja Sharma
Posted by Pooja Sharma
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹15L / yr
concurrency
Apache Kafka
RabbitMQ
skill iconDocker
skill iconKubernetes

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What We Expect:

• You should already be exceptional at backend engineering. If you need hand-holding, this isn’t the place for you.

• You thrive on challenges, not on perks or financial rewards.

• You measure success by your own growth, not external validation.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and optimize high-performance backend systems for trading platforms and financial services.

• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.

• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.

• Own your work—no babysitting, no micromanagement.

• Work alongside equally driven engineers who expect nothing less than brilliance.

• Learn faster than you ever thought possible.


Must-Have Skills:

• Proven expertise in backend development (if you need to prove yourself, this isn’t the role for you).

• Deep understanding of concurrency, memory management, and system design.

• Experience with Trading, market data processing, or low-latency systems.

• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.

• Hands-on with Docker, Kubernetes, and CI/CD pipelines.

• A portfolio of work that speaks louder than a resume.


Nice-to-Have Skills:

• Past experience in fintech, trading systems, or algorithmic trading.

• Contributions to open-source projects.

• A history of building something impactful from scratch.

• Understanding of FIX protocol, WebSockets, and streaming APIs.


Why Join Us?

• Work with a team that expects and delivers excellence.

• A culture where risk-taking is rewarded, and complacency is not.

• Limitless opportunities for growth—if you can handle the pace.

• A place where learning is currency, and outperformance is the only metric that matters.

• The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.

Read more
Tradelab Software Private Limited
Pooja Sharma
Posted by Pooja Sharma
Bengaluru (Bangalore)
3 - 5 yrs
₹10L - ₹15L / yr
RESTful APIs
RabbitMQ
Apache Kafka
skill iconRedis
skill iconElastic Search
+2 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What We Expect:

• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.

• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.

• You measure success by your own growth, not external validation.

• You thrive on challenges, not on perks or financial rewards.

• Taking calculated risks excites you—you’re here to build, break, and learn.

• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.

• You understand the stakes—milliseconds can make or break trades, and precision is everything.


What You Will Do:

• Develop and maintain scalable backend systems using Python.

• Design and implement REST APIs and socket-based communication.

• Optimize code for speed, performance, and reliability.

• Collaborate with frontend teams to integrate server-side logic.

• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.

• Build fault-tolerant, multi-producer/consumer systems.


Must-Have Skills:

• 3–4 years of experience in Python and backend development.

• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).

• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.

• Bachelor’s degree in Computer Science or a related field.

Read more
ADTRAN

at ADTRAN

1 recruiter
Reema Meshram
Posted by Reema Meshram
Bengaluru (Bangalore)
3 - 7 yrs
₹12L - ₹35L / yr
skill iconScala
Functional programming
Akka
Apache Kafka
skill iconPostgreSQL

Key Responsibilities

  • Provide technical leadership in the design, development, and delivery of scalable, high-performance software systems.
  • Partner with product managers, architects, and cross-functional teams to define technical strategy and ensure alignment with business objectives.
  • Lead by example in writing high-quality, testable, and maintainable code.
  • Drive best practices in software engineering, including code reviews, system design, and performance optimization.
  • Mentor and guide engineers across teams, fostering a culture of technical excellence and continuous learning.
  • Evaluate and introduce new technologies, tools, and frameworks to improve productivity, scale and system robustness.

Required Skills & Qualifications

  • Strong foundation in computer science fundamentals: data structures, algorithms, and functional programming techniques.
  • Expertise in Scala, with strong preference for functional programming.
  • Solid experience in software design, implementation, and debugging, including inter-process communication and multi-threading.
  • Hands-on experience with distributed systems and event-driven architectures.
  • Familiarity with databases (Postgres preferred).
  • Proficiency with Apache Kafka for messaging and persistence.
  • Working knowledge of Python for unit and integration testing.
  • Basic to intermediate experience with Ansible for automation.
  • Strong problem-solving, analytical, and communication skills.

Nice-to-Have / Bonus Skills

  • Experience with modeling in YANG.
  • Experience with Scala libraries such as Cats Effect (2/3), Monix, and Akka.
  • Experience working in Agile/Scrum environments.

What We Offer

  • Opportunity to work on cutting-edge technologies in a collaborative environment.
  • A role with strong ownership, technical influence, and visibility across teams.
  • Competitive compensation and benefits.


Read more
Lookup

at Lookup

2 candid answers
1 recruiter
Ajay Kumar
Posted by Ajay Kumar
Bengaluru (Bangalore)
3 - 8 yrs
₹30L - ₹35L / yr
skill iconPython
skill iconPostgreSQL
FastAPI
skill iconRedis
skill iconAmazon Web Services (AWS)
+2 more

Our Mission

To make video as accessible to machines as text and voice are today.


At lookup, we believe the world's most valuable asset is trapped. Video is everywhere, but it's unsearchable—a black box of insight that no one can open, or at least open affordably. We’re changing that. We're building the search engine for the visual world, so anyone can find or do anything with video just by asking.


Text is queryable. Voice is transcribed. Video, the largest and richest data source of all, is still a black box. A computer can't understand it, and so its value remains trapped.


Our mission at lookup is to fix this.


About the Role

We are looking for founding Backend Engineers to build a highly performant, reliable, and scalable API platform that makes enterprise video knowledge readily available for video search, summarization, and natural‑language Q&A. You will partner closely with our ML team working on vision‑language models to productionize research and deliver fast, trustworthy APIs for customers.


Examples of technical challenges you will work on include: distributed video storage, a unified application framework and data model for indexing large video libraries, low‑latency clip retrieval, vector search at scale, and end‑to‑end build, test, deploy, and observability in cloud environments.


What You’ll Do:

  • Design and build robust backend services and APIs (REST, gRPC) for vector search, video summarization, and video Q&A (a minimal endpoint sketch follows this list).
  • Own API performance and reliability, including low‑latency retrieval, pagination, rate limiting, and backwards‑compatible versioning.
  • Design schemas and tune queries in Postgres, and integrate with unstructured storage.
  • Implement observability across metrics, logs, and traces. Set error budgets and SLOs.
  • Write clear design docs and ship high‑quality, well‑tested code.
  • Collaborate with ML engineers to integrate and productionize VLMs and retrieval pipelines.
  • Take ownership of architecture from inception to production launch.
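
To sketch what such an API could look like, here is a minimal FastAPI endpoint; the embed and vector_search stubs are hypothetical stand-ins for the real model and vector store:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str
    top_k: int = 5

def embed(text: str) -> list[float]:
    # Hypothetical stand-in for the real embedding model.
    return [float(ord(c)) for c in text[:8]]

def vector_search(vector: list[float], k: int) -> list[tuple[str, float]]:
    # Hypothetical stand-in for the vector store (e.g., Weaviate) query.
    return [("clip-001", 0.93), ("clip-007", 0.88)][:k]

@app.post("/v1/search")
def search(q: Query) -> dict:
    hits = vector_search(embed(q.text), q.top_k)
    return {"results": [{"clip_id": cid, "score": score} for cid, score in hits]}
```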


Who You Are:

  • 3+ years of professional experience in backend development.
  • Proven experience building and scaling polished WebSocket, gRPC, and REST APIs.
  • Exposure to distributed systems and container orchestration (Docker and Kubernetes).
  • Hands‑on experience with AWS.
  • Strong knowledge of SQL (Postgres) and NoSQL (e.g., Cassandra), including schema design, query optimization, and scaling.
  • Familiarity with our stack is a plus, but not mandatory: Python (FastAPI), Celery, Kafka, Postgres, Redis, Weaviate, React.
  • Ability to diagnose complex issues, identify root causes, and implement effective fixes.
  • Comfortable working in a fast‑paced startup environment.


Nice to have:

  • Hands-on work with LLM agents, vector embeddings, or RAG applications.
  • Building video streaming pipelines and storage systems at scale (FFmpeg, RTSP, WebRTC).
  • Proficiency with modern frontend frameworks (React, TypeScript, Tailwind CSS) and responsive UI design.


Location & Culture

  • Full-time, in-office role in Bangalore (we’re building fast and hands-on).
  • Must be comfortable with a high-paced environment and collaboration across PST time zones for our US customers and investors.
  • Expect startup speed — daily founder syncs, rapid design-to-prototype cycles, and a culture of deep ownership.


Why You’ll Love This Role

  • Work on the frontier of video understanding and real-world AI — products that can redefine trust and automation.
  • Build core APIs that make video queryable and power real customer use.
  • Own systems end to end: performance, reliability, and developer experience.
  • Work closely with founders and collaborate in person in Bangalore.
  • Competitive salary with meaningful early equity.
Read more
Beyond Seek Technologies Pvt Ltd
Remote only
4 - 7 yrs
₹12L - ₹24L / yr
Process automation
RPA
skill iconPython
AWS Lambda
AWS Simple Queuing Service (SQS)
+7 more

Responsibilities:


  • Design, build, and maintain backend services and APIs using Python frameworks such as FastAPI or Django.
  • Implement RAG-based features and services, including document ingestion pipelines, vector indexing, and retrieval logic using modern LLM tooling.
  • Build robust data ingestion, scraping, and automation workflows (web scraping, headless browsers, APIs) to integrate with third-party systems and internal tools.
  • Develop and operate ETL/ELT pipelines to move, clean, and transform data across databases, file stores, and external platforms.
  • Own reliability, performance, and observability of services: logging, metrics, alerting, and debugging in production.
  • Collaborate closely with product and business stakeholders to translate ambiguous workflows into clear technical designs and automation logic.
  • Write clean, testable code with solid unit/integration coverage, and contribute to internal libraries, tooling, and best practices documentation.
  • Participate in code reviews, architectural discussions, and mentor junior engineers on Python, RAG patterns, and automation best practices.


Requirements:


  • 4–6 years of hands-on experience as a Python engineer building production systems (FastAPI, Django, or similar).
  • Strong understanding of backend fundamentals: REST APIs, authentication/authorisation, async patterns, background jobs, and task queues.
  • Experience with event-driven architectures (Kafka, SQS, RabbitMQ) and workflow engines (e.g., Temporal, Airflow, Prefect).
  • Practical experience with at least one LLM/RAG stack (e.g., LangChain, LlamaIndex, custom vector store integrations) and working with embeddings, chunking, and retrieval (a minimal chunking sketch follows this list).
  • Solid experience with web scraping and automation: requests/HTTP clients, Selenium/Playwright or similar, rate-limiting, anti-bot handling, and resilient scraping patterns.
  • Experience building data pipelines or ETLs: extracting from APIs/files/DBs, transforming/cleaning, and loading into relational or NoSQL stores.
  • Hands-on experience with AWS or similar cloud platforms (e.g., Lambda, S3, API Gateway, ECS/Fargate, or equivalent).
  • Strong debugging skills and comfort with distributed, asynchronous systems and eventual consistency.
  • Ability to take loosely defined business workflows and design clean, maintainable technical solutions.
  • Strong communication skills and a habit of documenting decisions, APIs, and workflows.
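
To ground the chunking step mentioned above, here is a minimal fixed-size chunker sketch in Python; the sizes and sample document are illustrative:

```python
# Minimal fixed-size chunker with overlap, the kind of preprocessing an
# ingestion pipeline runs before computing embeddings.
def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "lorem ipsum " * 200           # stand-in for an ingested document
chunks = chunk_text(doc)
print(len(chunks), len(chunks[0]))   # each chunk would be embedded and indexed
```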


Nice to Have:


  • Experience with vector databases (e.g., Pinecone, Weaviate, Qdrant, OpenSearch vector, etc.) and search tuning.
  • Exposure to building internal tools or low-code-like automation platforms for operations or support teams.
  • Prior experience integrating with ERP/CRM/marketplace or other enterprise/legacy systems.
  • Understanding of cloud security, IAM, and secret management best practices.
Read more
Cspar Enterprises Private Limited
Bhopal, Bengaluru (Bangalore)
4 - 10 yrs
₹3L - ₹8L / yr
skill iconDjango
RESTful APIs
deployment tools
RabbitMQ
Apache Kafka
+11 more

Designation: Senior Python Django Developer 

Position: Senior Python Developer

Job Types: Full-time, Permanent

Pay: Up to ₹800,000.00 per year

Schedule: Day shift

Ability to commute/relocate: Bhopal Indrapuri (MP) and Bangalore JP Nagar

 

Experience: Back-end development: 4 years (Required)

 

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

 

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.

 

Requirements:

  • 4 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy (a minimal test sketch follows this list).
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
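
To illustrate the TDD expectation above, here is a minimal pytest sketch; the reconcile() helper and its semantics are hypothetical:

```python
# test_reconciliation.py -- run with: pytest test_reconciliation.py
def reconcile(ledger: dict, gateway: dict) -> list[str]:
    """Return transaction ids present in the ledger but missing at the gateway."""
    return sorted(set(ledger) - set(gateway))

def test_flags_missing_settlements():
    ledger = {"txn1": 100, "txn2": 250}
    gateway = {"txn1": 100}
    assert reconcile(ledger, gateway) == ["txn2"]

def test_no_mismatch_when_in_sync():
    assert reconcile({"txn1": 100}, {"txn1": 100}) == []
```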

 

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.


Read more
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹15L / yr
Apache Kafka
RabbitMQ
skill iconDocker
skill iconKubernetes
CI/CD

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.


What You Will Do:

• Develop and optimize high-performance backend systems in Golang for trading platforms and financial services.

• Architect low-latency, high-throughput microservices that push the boundaries of speed and efficiency.

• Build event-driven, fault-tolerant systems that can handle massive real-time data streams.

• Own your work—no babysitting, no micromanagement.

• Work alongside equally driven engineers who expect nothing less than brilliance.

• Learn faster than you ever thought possible.


Must-Have Skills:

• Proven expertise in Golang (if you need to prove yourself, this isn’t the role for you).

• Deep understanding of concurrency, memory management, and system design.

• Experience with Trading, market data processing, or low-latency systems.

• Strong knowledge of distributed systems, message queues (Kafka, RabbitMQ), and real-time processing.

• Hands-on with Docker, Kubernetes, and CI/CD pipelines.

• A portfolio of work that speaks louder than a resume.


Nice-to-Have Skills:

• Past experience in fintech, trading systems, or algorithmic trading.

• Contributions to open-source Golang projects.

• A history of building something impactful from scratch.

• Understanding of FIX protocol, WebSockets, and streaming APIs.

Read more
Inncircles
Gangadhar M
Posted by Gangadhar M
Hyderabad
3 - 5 yrs
Best in industry
PySpark
Spark
skill iconPython
ETL
Amazon EMR
+7 more


We are looking for a highly skilled Sr. Big Data Engineer with 3-5 years of experience in building large-scale data pipelines, real-time streaming solutions, and batch/stream processing systems. The ideal candidate should be proficient in Spark, Kafka, Python, and AWS Big Data services, with hands-on experience in implementing CDC (Change Data Capture) pipelines and integrating multiple data sources and sinks.


Responsibilities

  • Design, develop, and optimize batch and streaming data pipelines using Apache Spark and Python.
  • Build and maintain real-time data ingestion pipelines leveraging Kafka and AWS Kinesis.
  • Implement CDC (Change Data Capture) pipelines using Kafka Connect, Debezium, or similar frameworks (a minimal consumer sketch follows this list).
  • Integrate data from multiple sources and sinks (databases, APIs, message queues, file systems, cloud storage).
  • Work with AWS Big Data ecosystem: Glue, EMR, Kinesis, Athena, S3, Lambda, Step Functions.
  • Ensure pipeline scalability, reliability, and performance tuning of Spark jobs and EMR clusters.
  • Develop data transformation and ETL workflows in AWS Glue and manage schema evolution.
  • Collaborate with data scientists, analysts, and product teams to deliver reliable and high-quality data solutions.
  • Implement monitoring, logging, and alerting for critical data pipelines.
  • Follow best practices for data security, compliance, and cost optimization in cloud environments.
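
As a concrete illustration of the CDC work above, here is a hedged sketch of consuming Debezium-style change events, assuming Debezium's default JSON envelope and the kafka-python client; the topic name, broker address, and sink helpers are hypothetical:

```python
import json
from kafka import KafkaConsumer

def upsert(row: dict) -> None:
    print("upsert", row)   # hypothetical sink helper

def remove(row: dict) -> None:
    print("delete", row)   # hypothetical sink helper

consumer = KafkaConsumer(
    "pg.public.orders",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b) if b else None,
)

for record in consumer:
    if record.value is None:              # tombstone emitted after a delete
        continue
    payload = record.value.get("payload", {})
    op = payload.get("op")                # 'c' create, 'u' update, 'd' delete, 'r' snapshot read
    if op in ("c", "r", "u"):
        upsert(payload["after"])
    elif op == "d":
        remove(payload["before"])
```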


Required Skills & Experience

  • Programming: Strong proficiency in Python (PySpark, data frameworks, automation).
  • Big Data Processing: Hands-on experience with Apache Spark (batch & streaming).
  • Messaging & Streaming: Proficient in Kafka (brokers, topics, partitions, consumer groups) and AWS Kinesis.
  • CDC Pipelines: Experience with Debezium / Kafka Connect / custom CDC frameworks.
  • AWS Services: AWS Glue, EMR, S3, Athena, Lambda, IAM, CloudWatch.
  • ETL/ELT Workflows: Strong knowledge of data ingestion, transformation, partitioning, schema management.
  • Databases: Experience with relational databases (MySQL, Postgres, Oracle) and NoSQL (MongoDB, DynamoDB, Cassandra).
  • Data Formats: JSON, Parquet, Avro, ORC, Delta/Iceberg/Hudi.
  • Version Control & CI/CD: Git, GitHub/GitLab, Jenkins, or CodePipeline.
  • Monitoring/Logging: CloudWatch, Prometheus, ELK/Opensearch.
  • Containers & Orchestration (nice-to-have): Docker, Kubernetes, Airflow/Step Functions for workflow orchestration.


Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
  • Experience in large-scale data lake / lake house architectures.
  • Knowledge of data warehousing concepts and query optimisation.
  • Familiarity with data governance, lineage, and cataloging tools (Glue Data Catalog, Apache Atlas).
  • Exposure to ML/AI data pipelines is a plus.


Tools & Technologies (must-have exposure)

  • Big Data & Processing: Apache Spark, PySpark, AWS EMR, AWS Glue
  • Streaming & Messaging: Apache Kafka, Kafka Connect, Debezium, AWS Kinesis
  • Cloud & Storage: AWS (S3, Athena, Lambda, IAM, CloudWatch)
  • Programming & Scripting: Python, SQL, Bash
  • Orchestration: Airflow / Step Functions
  • Version Control & CI/CD: Git, Jenkins/CodePipeline
  • Data Formats: Parquet, Avro, ORC, JSON, Delta, Iceberg, Hudi
Read more
Technology Industry

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
10 - 15 yrs
₹105L - ₹140L / yr
Data engineering
Apache Spark
Apache
Apache Kafka
skill iconJava
+25 more

MANDATORY:

  • Super Quality Data Architect, Data Engineering Manager / Director Profile
  • Must have 12+ YOE in Data Engineering roles, with at least 2+ years in a Leadership role
  • Must have 7+ YOE in hands-on Tech development with Java (Highly preferred) or Python, Node.JS, GoLang
  • Must have strong experience in large data technologies, tools like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto etc.
  • Strong expertise in HLD and LLD, to design scalable, maintainable data architectures.
  • Must have managed a team of at least 5+ Data Engineers (Read Leadership role in CV)
  • Product Companies (Prefers high-scale, data-heavy companies)


PREFERRED:

  • Must be from Tier - 1 Colleges, preferred IIT
  • Candidates must have spent a minimum of 3 years in each company.
  • Must have recent 4+ YOE with high-growth Product startups, and should have implemented Data Engineering systems from an early stage in the Company


ROLES & RESPONSIBILITIES:

  • Lead and mentor a team of data engineers, ensuring high performance and career growth.
  • Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
  • Drive the development and implementation of data governance frameworks and best practices.
  • Work closely with cross-functional teams to define and execute a data roadmap.
  • Optimize data processing workflows for performance and cost efficiency.
  • Ensure data security, compliance, and quality across all data platforms.
  • Foster a culture of innovation and technical excellence within the data team.


IDEAL CANDIDATE:

  • 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
  • Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.
  • Proficiency in SQL, Python, and Scala for data processing and analytics.
  • Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
  • Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
  • Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
  • Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
  • Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
  • Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
  • Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
  • Proven ability to drive technical strategy and align it with business objectives.
  • Strong leadership, communication, and stakeholder management skills.


PREFERRED QUALIFICATIONS:

  • Experience in machine learning infrastructure or MLOps is a plus.
  • Exposure to real-time data processing and analytics.
  • Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
  • Prior experience in a SaaS or high-growth tech company.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Robin Silverster
Posted by Robin Silverster
Bengaluru (Bangalore)
5 - 11 yrs
₹10L - ₹35L / yr
skill iconPython
Spark
Apache Kafka
Snow flake schema
databricks
+1 more

Required Skills:

· 8+ years as a practitioner in data engineering or a related field.

· Proficiency in Python programming

· Experience with data processing frameworks like Apache Spark or Hadoop.

· Experience working on Databricks.

· Familiarity with cloud platforms (AWS, Azure) and their data services.

· Experience with data warehousing concepts and technologies.

· Experience with message queues and streaming platforms (e.g., Kafka).

· Excellent communication and collaboration skills.

· Ability to work independently and as part of a geographically distributed team.

Read more
Gurugram
3 - 8 yrs
₹8L - ₹20L / yr
skill iconPython
FastAPI
LangChain
Web Realtime Communication (WebRTC)
WebSocket
+6 more

Job Title : Full Stack Engineer (Real-Time Audio Systems) – Voice AI

Experience : 4+ Years

Location : Gurgaon (Hybrid)


About the Role :

We’re looking for a Voice AI / Full Stack Engineer to build our real-time Voice AI platform for low-latency, intelligent voice-driven agents in healthcare and beyond.

You’ll work closely with the founding team, combining audio infrastructure, AI, and full stack development to deliver natural, production-grade voice experiences.


Hands-on experience with WebRTC, WebSocket, and streaming services is required.

Experience with TTS (Text-to-Speech) and STT (Speech-to-Text) modules is a strong plus.


Mandatory Skills :

Python (FastAPI, Async frameworks, LangChain), WebRTC, WebSockets, Redis, Kafka, Docker, AWS, real-time streaming systems, TTS/STT modules.


Responsibilities :

  • Build and optimize voice-driven AI systems integrating ASR, TTS, and LLM inference with WebRTC & WebSocket.
  • Develop scalable backend APIs and streaming pipelines for real-time communication (a minimal relay sketch follows this list).
  • Translate AI audio models into reliable, production-ready services.
  • Collaborate across teams for rapid prototyping and deployment.
  • Monitor and improve system performance, latency, and reliability.
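
As a rough illustration of the streaming work above, here is a minimal WebSocket relay sketch, assuming a recent version of the Python websockets package; the ack payload is a placeholder for streamed TTS audio:

```python
import asyncio
import websockets  # assumes a recent version of the websockets package

# Minimal relay: accept binary audio frames over a WebSocket and answer with
# a placeholder ack. In the real pipeline each frame would be forwarded to
# ASR and synthesized TTS audio streamed back; names here are illustrative.
async def handle(ws):
    async for frame in ws:
        if isinstance(frame, (bytes, bytearray)):
            await ws.send(b"ack:" + bytes(frame[:8]))

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # serve until cancelled

asyncio.run(main())
```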

Requirements :

  • 4+ years of experience in real-time systems, streaming, or conversational AI.
  • Strong in Python (FastAPI, Async frameworks, LangChain).
  • Hands-on with WebRTC, WebSockets, Redis, Kafka, Docker, AWS.
  • Familiarity with Next.js or similar frontend frameworks is a plus.
  • Experience in healthcare tech or regulated domains preferred.

Bonus Skills :

  • Contributions to open-source audio/media projects.
  • Background in DSP, live streaming, or media infrastructure.
  • Familiarity with Grafana, Prometheus, or other observability tools.

Why Join Us :

Be part of a team working at the intersection of AI research and product engineering, shaping next-gen voice intelligence for real-world applications.

Own your systems end-to-end, innovate fast, and make a direct impact in healthcare AI.


Interview Process :

  1. Screening & Technical Task
  2. Technical Discussion
  3. HR/Leadership Round
Read more
Tata Consultancy Services
Agency job
via Risk Resources LLP hyd by Jhansi Padiy
Hyderabad
4 - 10 yrs
₹5L - ₹30L / yr
Kafka tester
Partitions
Consumer
producer
security
+2 more

Desired Competencies (Technical/Behavioral Competency)

Must-Have

·       Strong understanding of Kafka concepts, including topics, partitions, consumers, producers, and security (a minimal produce/consume sketch follows this list).

·       Experience with testing Kafka Connect, Kafka Streams, and other Kafka ecosystem components.

·       API Testing Experience

·       X-RAY and Test Automation Experience

·       Expertise with Postman/SOAP

·       Agile/JIRA/Confluence

·       Strong familiarity with data formats such as XML, JSON, CSV, Avro, etc.

·       Strong hands-on experience with SQL and MongoDB.

·       Continuous integration and automated testing.

·       Working knowledge and experience of Git.
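
To illustrate the producer/consumer concepts above, a minimal round trip might look like the following sketch, assuming the kafka-python client and a broker on localhost; the topic and group names are illustrative:

```python
from kafka import KafkaConsumer, KafkaProducer

# Minimal produce/consume round trip against a local broker
# (assumes the kafka-python package and a broker on localhost:9092).
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("orders", key=b"order-42", value=b'{"status": "filled"}')
producer.flush()  # block until the broker acknowledges the record

consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="qa-checks",              # consumer group for offset tracking
    auto_offset_reset="earliest",      # start from the beginning for new groups
    consumer_timeout_ms=5000,          # stop iterating when idle
)
for record in consumer:
    print(record.partition, record.offset, record.key, record.value)
```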

Good-to-Have

·       Troubleshoot Kafka-related issues; strong in Kafka client configuration and troubleshooting

 

Responsibilities / Expectations from the Role:

1. Engage with the customer to understand the requirements, provide technical solutions, and offer value-added suggestions.

2. Help build and manage the team of Kafka and Java developers in the near future.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Manisha Gouda
Posted by Manisha Gouda
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹18L / yr
DevOps
Red Hat Linux
Apache Kafka
IBM WebSphere MQ
skill iconKubernetes
+1 more

- Experience comparable to a DevOps SRE providing SME-level application or platform support, with responsibility for designing and automating operational procedures and best practices


- Experience writing Python and shell scripts to perform health checks and automations (a minimal sketch follows)
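
A health-check script of the kind described above might, as a minimal sketch, look like this; the endpoints and ports are illustrative:

```python
import socket
import sys
import urllib.request

# Probe an HTTP health endpoint and a broker port, exiting non-zero on
# failure so a scheduler or alerter can act on the result.
def http_ok(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def port_open(host: str, port: int) -> bool:
    try:
        with socket.create_connection((host, port), timeout=5):
            return True
    except OSError:
        return False

checks = [http_ok("http://localhost:8080/health"), port_open("localhost", 9092)]
sys.exit(0 if all(checks) else 1)
```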


- Experience with Linux System Administration (preferably Red Hat)


- Hands-on experience with multi-tenant hosting environments for middleware applications (for example: centrally managed platform or infrastructure as a service) 


- Experience with implementing observability, monitoring and alerting tools


- Excellent written and oral English communication skills. The candidate must write user-facing documentation, prepare and deliver presentations to an internal audience and effectively interact with upper management, colleagues, and customers 


- Independent problem-solving skills, self-motivated, and a mindset for taking ownership


- A minimum of 5 years of infrastructure production support or DevOps experience 




Additional Technical Skills 


Experience with broker-based messaging infrastructure such as Apache Kafka, IBM MQ (or similar technology like ActiveMQ, Azure Service Bus) including configuration and performance tuning 

Experience with public/private cloud and containerization technologies (e.g. Kubernetes) 

Experience with Agile development methodology, CI/CD and automated build pipelines

Experience with DevOps methodology (e.g. Phoenix Project) 

Experience with tools such as Jira, Confluence and ServiceNow 

Experience working with JSON, XML, Google Protocol Buffers, Avro, FIX 

Experience with troubleshooting tools such as TCPdump and Wireshark

Experience with NoSQL databases such as MongoDB and Redis

Interest in and understanding of emerging IT trends

Experience with system architecture design


Read more
Noida
6 - 10 yrs
₹25L - ₹35L / yr
skill iconPostgreSQL
Apache Kafka
CI/CD
Apache Airflow
Slowly changing dimensions
+2 more

Title: Data Platform / Database Architect (Postgres + Kafka) — AI‑Ready Data Infrastructure

Location: Noida (Hybrid). Remote within IST±3 considered for exceptional candidates.

Employment: Full‑time


About Us

We are building a high‑throughput, audit‑friendly data platform that powers a SaaS for financial data automation and reconciliation. The stack blends OLTP (Postgres), streaming (Kafka/Debezium), and OLAP (ClickHouse/Snowflake/BigQuery), with hooks for AI use‑cases (vector search, feature store, RAG).


Role Summary

Own the end‑to‑end design and performance of our data platform—from multi‑tenant Postgres schemas to CDC pipelines and analytics stores—while laying the groundwork for AI‑powered product features.


What You’ll Do

• Design multi‑tenant Postgres schemas (partitioning, indexing, normalization, RLS), and define retention/archival strategies (a minimal RLS sketch follows this list).

• Make Postgres fast and reliable: EXPLAIN/ANALYZE, connection pooling, vacuum/bloat control, query/index tuning, replication.  

• Build event‑streaming/CDC with Kafka/Debezium (topics, partitions, schema registry), and deliver data to ClickHouse/Snowflake/BigQuery.  

• Model analytics layers (star/snowflake), orchestrate jobs (Airflow/Dagster), and implement dbt‑based transformations.  

• Establish observability and SLOs for data: query/queue metrics, tracing, alerting, capacity planning.  

• Implement data security: encryption, masking, tokenization of PII, IAM boundaries; contribute to PCI‑like audit posture.  

• Integrate AI plumbing: vector embeddings (pgvector/Milvus), basic feature‑store patterns (Feast), retrieval pipelines and metadata lineage.  

• Collaborate with backend/ML/product to review designs, coach engineers, write docs/runbooks, and lead migrations.
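
To make the RLS item concrete, here is a minimal sketch of the row-level-security pattern, assuming psycopg2; the table, column, and setting names are illustrative, not the actual schema:

```python
import psycopg2

# Enable RLS on a tenant-scoped table and add a policy that filters every
# query by the tenant pinned on the current session.
DDL = """
ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON invoices
    USING (tenant_id = current_setting('app.tenant_id')::uuid);
"""

with psycopg2.connect("dbname=platform") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
        # Each request-scoped session pins its tenant before querying;
        # the policy then limits reads and writes to that tenant's rows.
        cur.execute("SELECT set_config('app.tenant_id', %s, false)",
                    ("11111111-1111-1111-1111-111111111111",))
        cur.execute("SELECT count(*) FROM invoices")
        print(cur.fetchone())
```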


Must‑Have Qualifications

• 6+ years building high‑scale data platforms with deep PostgreSQL experience (partitioning, advanced indexing, query planning, replication/HA).  

• Hands‑on with Kafka (or equivalent) and Debezium/CDC patterns; schema registry (Avro/Protobuf) and exactly‑once/at‑least‑once tradeoffs.  

• One or more analytics engines at scale: ClickHouse, Snowflake, or BigQuery, plus strong SQL.  

• Python for data tooling (pydantic, SQLAlchemy, or similar); orchestration with Airflow or Dagster; transformations with dbt.  

• Solid cloud experience (AWS/GCP/Azure)—networking, security groups/IAM, secrets management, cost controls.  

• Pragmatic performance engineering mindset; excellent communication and documentation.


Nice‑to‑Have

• Vector/semantic search (pgvector/Milvus/Pinecone), feature store (Feast), or RAG data pipelines.  

• Experience in fintech‑style domains (reconciliation, ledgers, payments) and SOX/PCI‑like controls.  

• Infra‑as‑Code (Terraform), containerized services (Docker/K8s), and observability stacks (Prometheus/Grafana/OpenTelemetry).  

• Exposure to Go/Java for stream processors/consumers.  

• Lakehouse formats (Delta/Iceberg/Hudi).

Read more
Publicis Sapient

at Publicis Sapient

10 recruiters
Dipika
Posted by Dipika
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune
5 - 7 yrs
₹5L - ₹20L / yr
skill iconJava
Microservices
Apache Kafka
Apache ActiveMQ
+3 more

Senior Associate Technology L1 – Java Microservices


Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.


Job Description

We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.

We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.


Your Impact:

• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.

• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ businesses.

• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.


Qualifications

➢ 5 to 7 Years of software development experience

➢ Strong development skills in Java JDK 1.8 or above

➢ Java fundamentals such as exception handling, serialization/deserialization, and immutability concepts

➢ Good fundamental knowledge of enums, collections, annotations, generics, autoboxing, and data structures

➢ Databases: RDBMS/NoSQL (SQL, joins, indexing)

➢ Multithreading (ReentrantLock, Fork/Join, synchronization, Executor framework)

➢ Spring Core & Spring Boot, security, transactions ➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka etc)

➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing (JMeter or similar tools)

➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker and containerization)

➢ Logical/analytical skills; thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns.

➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)

➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks.

➢ Practical experience with Maven/Gradle and knowledge of version control systems such as Git/SVN.

➢ Good communication skills and ability to work with global teams to define and deliver on projects.

➢ Sound understanding of, and experience in, the software development process, including test-driven development.

➢ Cloud – AWS/Azure/GCP/PCF; any private cloud would also be fine

➢ Experience in Microservices

Read more
Victrix Systems  Labs

at Victrix Systems Labs

1 recruiter
Vijayalaxmi Yadav
Posted by Vijayalaxmi Yadav
Pune
6 - 9 yrs
₹20L - ₹30L / yr
Apache Kafka
skill iconElastic Search
skill iconAmazon Web Services (AWS)
skill iconJava
skill iconSpring Boot
+1 more

Role & Responsibilities :


- Lead the design, analysis, and implementation of technical solutions.


- Take full ownership of product features.


- Participate in detailed discussions with the product management team regarding requirements.


- Work closely with the engineering team to design and implement scalable solutions.


- Create detailed functional and technical specifications.


- Follow Test-Driven Development (TDD) and deliver high-quality code.


- Communicate proactively with your manager regarding risks and progress.


- Mentor junior team members and provide technical guidance.


- Troubleshoot and resolve production issues with RCA and long-term solutions.


Required Skills & Experience :


- Bachelor's/Master's degree in Computer Science or a related field with a solid academic track record.


- 6+ years of hands-on experience in backend development for large-scale enterprise products.


- Strong programming skills in Java; familiarity with Python is a plus.


- Deep understanding of data structures, algorithms, and problem-solving.


- Proficient in Spring Boot and RESTful APIs.


- Experience with distributed data technologies such as ElasticSearch, Kafka, MongoDB, Hazelcast, Ceph, etc.


- Strong experience in building scalable, concurrent applications.


- Exposure to Service-Oriented Architecture (SOA) and Test-Driven Development (TDD).


- Excellent communication and collaboration skills.


Preferred Technologies :


- Java


- Spring Boot, J2EE


- ElasticSearch


- Kafka


- MongoDB, Ceph


- AWS


- Storm, Hazelcast


- TDD, SOA



Read more
iRage

at iRage

3 recruiters
Jyosana Jadhav
Posted by Jyosana Jadhav
Mumbai
4 - 7 yrs
₹25L - ₹40L / yr
skill iconReact.js
skill iconJavascript
DOM
WebSocket
Chart.js
+20 more

We are seeking a highly skilled React JS Developer with exceptional DOM manipulation expertise and real-time data handling experience to join our team. You'll be building and optimizing high-performance user interfaces for stock market trading applications where milliseconds matter and data flows continuously.


The ideal candidate thrives in fast-paced environments, understands the intricacies of browser performance, and has hands-on experience with WebSockets and real-time data streaming architectures.


Key Responsibilities


Core Development

  • Advanced DOM Operations: Implement complex, performance-optimized DOM manipulations for real-time trading interfaces
  • Real-time Data Management: Build robust WebSocket connections and handle high-frequency data streams with minimal latency
  • Performance Engineering: Create lightning-fast, scalable front-end applications that process thousands of market updates per second
  • Custom Component Architecture: Design and build reusable, high-performance React components optimized for trading workflows


Collaboration & Integration

  • Work closely with traders, quants, and backend developers to translate complex trading requirements into intuitive interfaces
  • Collaborate with UX/UI designers and product managers to create responsive, trader-focused experiences
  • Integrate with real-time market data APIs and trading execution systems


Technical Excellence

  • Implement sophisticated data visualizations and interactive charts using libraries like Chart.js, TradingView, or custom D3.js solutions
  • Ensure cross-browser compatibility and responsiveness across multiple devices and screen sizes
  • Debug and resolve complex performance issues, particularly in real-time data processing and rendering
  • Maintain high-quality code through reviews, testing, and comprehensive documentation


Required Skills & Experience


React & JavaScript Mastery

  • 5+ years of professional React.js development with deep understanding of React internals, hooks, and advanced patterns
  • Expert-level JavaScript (ES6+) with strong proficiency in asynchronous programming, closures, and memory management
  • Advanced HTML5 & CSS3 skills with focus on performance and cross-browser compatibility


Real-time & Performance Expertise

  • Proven experience with WebSockets and real-time data streaming protocols
  • Strong DOM manipulation skills - direct DOM access, virtual scrolling, efficient updates, and performance optimization
  • RESTful API integration with experience in handling high-frequency data feeds
  • Browser performance optimization - understanding of rendering pipeline, memory management, and profiling tools


Development Tools & Practices

  • Proficiency with modern build tools: Webpack, Babel, Vite, or similar
  • Experience with Git version control and collaborative development workflows
  • Agile/Scrum development environment experience
  • Understanding of testing frameworks (Jest, React Testing Library)


Financial Data Visualization

  • Experience with financial charting libraries: Chart.js, TradingView, D3.js, or custom visualization solutions
  • Understanding of market data structures, order books, and trading terminology
  • Knowledge of data streaming optimization techniques for financial applications


Nice-to-Have Skills


Domain Expertise

  • Prior experience in stock market, trading, or financial services - understanding of trading workflows, order management, risk systems
  • Algorithmic trading knowledge or exposure to quantitative trading systems
  • Financial market understanding - equities, derivatives, commodities


Technical Plus Points

  • Backend development experience with GoLang, Python, or Node.js
  • Database knowledge: SQL, NoSQL, time-series databases (InfluxDB, TimescaleDB)
  • Cloud platform experience: AWS, Azure, GCP for deploying scalable applications
  • Message queue systems: Redis, RabbitMQ, Kafka, NATS for real-time data processing
  • Microservices architecture understanding and API design principles


Advanced Skills

  • Service Worker implementation for offline-first applications
  • Progressive Web App (PWA) development
  • Mobile-first responsive design expertise


Qualifications

  • Bachelor's degree in Computer Science, Engineering, or related field (or equivalent professional experience)
  • 5+ years of professional React.js development with demonstrable experience in performance-critical applications
  • Portfolio or examples of complex real-time applications you've built
  • Financial services experience strongly preferred


Why You'll Love Working Here


We're a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.


What We Offer

💰 Competitive salary – Get paid what you're worth

🌴 Generous paid time off – Recharge and come back sharper

🌍 Work with the best – Collaborate with top-tier global talent

✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings

🎯 Performance rewards – Multiple bonuses for those who go above and beyond

🏥 Health covered – Comprehensive insurance so you're always protected

Fun, not just work – On-site sports, games, and a lively workspace

🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers

📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft

🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best

🚚 Relocation support – Smooth move? We've got your back

🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting


We work hard, play hard, and grow together. Join us.



Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
6 - 8 yrs
Best in industry
Salesforce
Salesforce Apex
Lightning Web Component
Salesforce API
Sales Cloud
+7 more

Key Responsibilities:-

  • Design, build, and enhance Salesforce applications using Apex, Lightning Web Components (LWC), Visualforce, and SOQL.
  • Implement integrations with external systems using REST APIs and event-driven messaging (e.g., Kafka).
  • Collaborate with architects and business analysts to translate requirements into scalable, maintainable solutions.
  • Establish and follow engineering best practices, including source control (Git), code reviews, branching strategies, CI/CD pipelines, automated testing, and environment management.
  • Establish and maintain Azure DevOps-based workflows (repos, pipelines, automated testing) for Salesforce engineering.
  • Ensure solutions follow Salesforce security, data modeling, and performance guidelines.
  • Participate in Agile ceremonies, providing technical expertise and leadership within sprints and releases.
  • Optimize workflows, automations, and data processes across Sales Cloud, Service Cloud, and custom Salesforce apps.
  • Provide technical mentoring and knowledge sharing when required.
  • Support production environments, troubleshoot issues, and drive root-cause analysis for long-term reliability.
  • Stay current on Salesforce platform updates, releases, and new features, recommending adoption where beneficial.


Required Qualifications:-

  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience).
  • 6+ years of Salesforce development experience with strong knowledge of Apex, Lightning Web Components, and Salesforce APIs.
  • Proven experience with Salesforce core clouds (Sales Cloud, Service Cloud, or equivalent).
  • Strong hands-on experience with API integrations (REST/SOAP) and event-driven architectures (Kafka, Pub/Sub).
  • Solid understanding of engineering practices: Git-based source control (Salesforce DX/metadata), branching strategies, CI/CD, automated testing, and deployment management.
  • Familiarity with Azure DevOps repositories and pipelines.
  • Strong knowledge of Salesforce data modeling, security, and sharing rules.
  • Excellent problem-solving skills and ability to collaborate across teams.


Preferred Qualifications:-

  • Salesforce Platform Developer II certification (or equivalent advanced credentials).
  • Experience with Health Cloud, Financial Services Cloud, or other industry-specific Salesforce products.
  • Experience implementing logging, monitoring, and observability within Salesforce and integrated systems.
  • Background in Agile/Scrum delivery with strong collaboration skills.
  • Prior experience establishing or enforcing engineering standards across Salesforce teams.
Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Gurugram
2 - 6 yrs
₹4L - ₹9L / yr
skill iconPython
skill iconDjango
skill iconRedis
RabbitMQ
Celery
+5 more

Job Title : Python Django Developer

Experience : 3+ Years

Location : Gurgaon (Work from Office)


Job Summary :

We are looking for an experienced Python Django Developer with strong expertise in building scalable web applications and distributed systems. The ideal candidate must have hands-on experience with Django, Redis, Celery, RabbitMQ, PostgreSQL, and Kafka to design and optimize high-performance applications.


Mandatory Skills :

Python, Django, Redis, Celery, RabbitMQ, PostgreSQL, Kafka


Key Responsibilities :

  • Design, develop, and maintain web applications using Python & Django.
  • Implement asynchronous tasks and background job processing using Celery with RabbitMQ/Redis (see the sketch after this list).
  • Work with PostgreSQL for database design, optimization, and complex queries.
  • Integrate and optimize messaging/streaming systems using Kafka.
  • Write clean, scalable, and efficient code following best practices.
  • Troubleshoot, debug, and optimize application performance.
  • Collaborate with cross-functional teams (frontend, DevOps, QA) for end-to-end delivery.
  • Stay updated with the latest backend development trends and technologies.
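
As referenced in the Celery bullet above, a minimal sketch assuming a local RabbitMQ broker and Redis result backend; the module, task, and parameter names are placeholders, not a prescribed setup:

```python
from celery import Celery

app = Celery(
    "tasks",
    broker="amqp://guest:guest@localhost:5672//",  # assumption: local RabbitMQ
    backend="redis://localhost:6379/0",            # assumption: local Redis
)

@app.task(bind=True, max_retries=3)
def send_invoice_email(self, order_id: int) -> str:
    """Hypothetical background job: render and send an invoice email."""
    try:
        # stub: fetch the order, render the template, call the mail gateway
        return f"invoice for order {order_id} sent"
    except Exception as exc:
        # retry transient failures with exponential backoff
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)

# From Django view code: send_invoice_email.delay(42)
# Worker (assuming this file is tasks.py): celery -A tasks worker --loglevel=info
```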

Requirements :

  • Minimum 3+ years of experience in backend development using Python & Django.
  • Hands-on experience with Redis, Celery, RabbitMQ, Kafka, and PostgreSQL.
  • Strong understanding of REST APIs, microservices architecture, and asynchronous task management.
  • Knowledge of performance tuning, caching strategies, and scalable system design.
  • Familiarity with Git, CI/CD pipelines, and cloud deployment (AWS/GCP/Azure) is a plus.
  • Excellent problem-solving and communication skills.
Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Gurugram
3 - 6 yrs
₹6L - ₹9L / yr
skill iconPython
skill iconDjango
skill iconRedis
Celery
RabbitMQ
+2 more

Job Title: Python Developer - Django (Full Time)

Location: Gurgaon, Onsite

Interview: Virtual Interview

Experience Required: 3+ Years

About the Role

We are looking for a skilled Python Developer with hands-on experience in building scalable backend systems. The ideal candidate should have strong expertise in Python, Django, distributed task queues using Celery, Redis, RabbitMQ, and experience working with event streaming platforms like Kafka.

Key Responsibilities

  • Design, develop, and maintain backend services using Python and Django.
  • Implement and optimize task queues using Celery with Redis/RabbitMQ as brokers.
  • Develop and integrate event-driven systems using Apache Kafka (see the sketch after this list).
  • Write clean, reusable, and efficient code following best practices.
  • Build RESTful APIs and integrate with external services.
  • Ensure performance, scalability, and security of applications.
  • Collaborate with frontend developers, DevOps, and product teams to deliver high-quality solutions.
  • Troubleshoot and debug issues in production and staging environments.
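
As referenced in the Kafka bullet above, a minimal producer-side sketch using the confluent-kafka client; the broker address, topic, and event shape are hypothetical:

```python
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumption: local broker

def delivery_report(err, msg):
    # invoked from poll()/flush() with the broker's acknowledgement
    if err is not None:
        print(f"delivery failed: {err}")

def publish_user_signed_up(user_id: int, email: str) -> None:
    event = {"type": "user-signed-up", "user_id": user_id, "email": email}
    producer.produce(
        "user-events",               # hypothetical topic
        key=str(user_id),            # keying by user keeps a user's events ordered
        value=json.dumps(event),
        callback=delivery_report,
    )
    producer.poll(0)  # serve delivery callbacks without blocking

# Call after the DB commit (e.g. transaction.on_commit in Django), and call
# producer.flush() on shutdown so buffered events are not lost.
```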

Required Skills & Experience

  • 2+ years of professional experience in Python backend development.
  • Strong knowledge of Django Framework.
  • Hands-on experience with Celery, Redis, RabbitMQ, and Kafka.
  • Good understanding of REST API design principles.
  • Experience with relational databases (PostgreSQL/MySQL).
  • Familiarity with version control (Git) and Agile development.
  • Strong problem-solving skills and ability to work in a fast-paced environment.


Read more
MindCrew Technologies

at MindCrew Technologies

3 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai
6 - 8 yrs
₹10L - ₹15L / yr
skill iconC#
ADO.NET
Entity Framework
skill iconReact.js
skill iconPostgreSQL
+6 more

Role: .NET + React Developer

Experience: 6+ Years

Location: Andheri (Navi Mumbai)

Budget: 18 LPA

Opportunity: Contract


Technical Expertise:

* Proficiency in OOP concepts, C#, .NET Core, Entity Framework, React, SQL Server, PostgreSQL, Dapper, ADO.NET, LINQ, and Web API Development.

* Experience with Kafka or RabbitMQ for event-driven architecture and messaging systems.

* Debugging and troubleshooting skills with an understanding of performance optimization.

* Strong knowledge of database development, including tables, views, stored procedures, triggers, and functions.

* Familiarity with unit testing frameworks such as xUnit.

* Experience with JWT services, Git, and third-party API integration.

* Experience reviewing junior developers' code.


Read more