Python Jobs in Mumbai


Apply to 50+ Python Jobs in Mumbai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
Data Science
Artificial Intelligence (AI)
Machine Learning (ML)
Generative AI
Deep Learning
+14 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
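Since several bullets above hinge on RAG pipelines, here is a hedged, stdlib-only sketch of the retrieval-plus-augmentation idea: a toy bag-of-words ranker stands in for a real embedding model and vector store, and all document text and function names are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (stand-in for a real encoder)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """The 'R' in RAG: rank documents by similarity to the query."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus, k=2):
    """The 'A' in RAG: prepend retrieved context before the LLM call."""
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "LoRA adds low-rank adapter matrices to frozen model weights.",
    "Kubernetes schedules containers across a cluster.",
    "RAG grounds LLM answers in retrieved documents.",
]
print(retrieve("How does RAG ground LLM answers?", docs, k=1))
```

In production the ranker would be a learned embedding model with an approximate-nearest-neighbour index (e.g., the Weaviate/PGVector tools named below), but the control flow — embed, rank, stuff context into the prompt — is the same.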


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


Auxo AI
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
2 - 8 yrs
₹10L - ₹35L / yr
Python
SQL
Google Cloud Platform (GCP)

Responsibilities:

Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow)

Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views

Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration

Implement SQL-based transformations using Dataform (or dbt)

Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture

Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability

Partner with solution architects and product teams to translate data requirements into technical designs

Mentor junior data engineers and support knowledge-sharing across the team

Contribute to documentation, code reviews, sprint planning, and agile ceremonies

Requirements

2+ years of hands-on experience in data engineering, with at least 2 years on GCP

Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)

Strong programming skills in Python and/or Java

Experience with SQL optimization, data modeling, and pipeline orchestration

Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks

Exposure to Dataform, dbt, or similar tools for ELT workflows

Solid understanding of data architecture, schema design, and performance tuning

Excellent problem-solving and collaboration skills

Bonus Skills:

GCP Professional Data Engineer certification

Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures

Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)

Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)

venanalytics
Posted by Rincy jain
Remote, Mumbai
3 - 4 yrs
₹7L - ₹10L / yr
Python
SQL
PowerBI
Client Servicing
Team Management
+6 more

About Ven Analytics


At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.


Role Overview


We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.


Key Responsibilities


  • Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.


  • Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.


  • Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.


  • Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.


  • Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.


  • Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.


  • Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.


  • Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.


  • Power BI Development: Use Power BI Desktop for report building and the Power BI Service for distribution.


  • Backend development: Develop optimized SQL queries that are easy to consume, maintain and debug.


  • Version Control: Maintain strict version control by tracking change requests (CRs) and bug fixes, and maintain separate Prod and Dev dashboards.


  • Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.


  • Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.
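As a small illustration of the "SQL and Python for complex data manipulation" responsibility above, here is a hedged, stdlib-only sketch of a group-by aggregation; the rows, region names, and figures are invented for the example.

```python
from collections import defaultdict

rows = [
    {"region": "West", "month": "2024-01", "revenue": 120.0},
    {"region": "West", "month": "2024-02", "revenue": 150.0},
    {"region": "East", "month": "2024-01", "revenue": 90.0},
]

def region_totals(rows):
    """Equivalent to SQL: SELECT region, SUM(revenue) FROM rows GROUP BY region."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["region"]] += r["revenue"]
    return dict(totals)

print(region_totals(rows))
```

In practice the same shaping would usually be done in Power Query, DAX, or pandas before the data reaches a dashboard; the point is the transformation logic, not the tool.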


Must-Have Skills


  • Strong experience building robust data models in Power BI
  • Hands-on expertise with DAX (complex measures and calculated columns)
  • Proficiency in M Language (Power Query) beyond drag-and-drop UI
  • Clear understanding of data visualization best practices (less fluff, more insight)
  • Solid grasp of SQL and Python for data processing
  • Strong analytical thinking and ability to craft compelling data stories
  • Client Servicing Background.


Good-to-Have (Bonus Points)


  • Experience using DAX Studio and Tabular Editor
  • Prior work in a high-volume data processing production environment
  • Exposure to modern CI/CD practices or version control with BI tools

 

Why Join Ven Analytics?


  • Be part of a fast-growing startup that puts data at the heart of every decision.
  • Opportunity to work on high-impact, real-world business challenges.
  • Collaborative, transparent, and learning-oriented work environment.
  • Flexible work culture and focus on career development.


Avhan Technologies Pvt Ltd
Posted by Avhan Technologies
Mumbai
3 - 6 yrs
₹5L - ₹9L / yr
Artificial Intelligence (AI)
Generative AI
Natural Language Processing (NLP)
Node.js
Large Language Models (LLM)
+5 more

Location: Mumbai / Remote

Department: AI & Automation

Role Objective

Build intelligent, multilingual AI agents that combine LLM reasoning with live communication channels to assist humans in real-time.

Key Responsibilities

  • Design and deploy AI agent workflows using LangChain / Bedrock / OpenAI APIs.
  • Develop contextual memory, persona, and tone control.
  • Integrate agents into Jodo Online, Jodo QA, Jodo Admin, Jodo C3, and Toolbar Apps.
  • Optimize latency and conversation flow for real-time interactions.
  • Implement compliance and audit hooks within AI pipelines.

Required Skills & Experience

  • 3–7 years in AI engineering, NLP, or chatbot frameworks.
  • Experience with LLM APIs in Python/Node.js.
  • Familiar with RAG architectures and vector DBs.
  • Understanding of multilingual processing and AI ethics.

What Success Looks Like

  • < 2 s response time.
  • 95% contextual accuracy.
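A hedged sketch of how the response-time target above might be checked in practice; the `respond` function is a hypothetical stand-in for the agent's LLM call, and the budget constant mirrors the "< 2 s" figure.

```python
import time

LATENCY_BUDGET_S = 2.0  # the "< 2 s response time" target above

def respond(query):
    """Hypothetical stand-in for the agent's LLM call."""
    time.sleep(0.01)  # simulate a fast model round-trip
    return f"answer to: {query}"

def timed_call(fn, *args):
    """Measure wall-clock latency of a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

answer, elapsed = timed_call(respond, "order status?")
assert elapsed < LATENCY_BUDGET_S, f"latency budget breached: {elapsed:.2f}s"
print(answer)
```

A real deployment would track this as a latency percentile (p95/p99) across many calls rather than a single sample.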

Why Join Us

Shape the digital workforce of the borderless economy — where humans and AI execute seamlessly together.


https://www.avhan.com/job/ai-agent-developer/

Deqode
Posted by Apoorva Jain
Bengaluru (Bangalore), Pune, Mumbai, Nagpur, Ahmedabad
3 - 7 yrs
₹3L - ₹14L / yr
Python
Django
Flask
FastAPI
Amazon Web Services (AWS)
+2 more

Job Summary:


Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.


Key Responsibilities:

  • Design, develop, and deploy backend services and APIs using Python.
  • Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
  • Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
  • Implement containerized environments using Docker and manage orchestration via Kubernetes.
  • Write automation and scripting solutions in Bash/Shell to streamline operations.
  • Work with relational databases like MySQL and SQL, including query optimization.
  • Collaborate directly with clients to understand requirements and provide technical solutions.
  • Ensure system reliability, performance, and scalability across environments.
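The reliability responsibility above often comes down to handling transient failures. A minimal sketch of retry with exponential backoff, a common pattern for flaky API or AWS calls — the `flaky` endpoint and its failure count are simulated for illustration, not a real service.

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Retry a flaky call with exponential backoff (sketch, not a library)."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

calls = {"n": 0}

def flaky():
    """Simulated endpoint that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky)
print(result, calls["n"])
```

Production code would typically add jitter to the delay and use a vetted library (e.g., boto3's built-in retry configuration) instead of hand-rolling this.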


Required Skills:

  • 3.5+ years of hands-on experience in Python development.
  • Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
  • Good understanding of Terraform or other Infrastructure as Code tools.
  • Proficient with Docker and container orchestration using Kubernetes.
  • Experience with CI/CD tools like Jenkins or GitHub Actions.
  • Strong command of SQL/MySQL and scripting with Bash/Shell.
  • Experience working with external clients or in client-facing roles.

Preferred Qualifications:

  • AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
  • Familiarity with Agile/Scrum methodologies.
  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder management abilities.


Hashone Careers
Posted by Madhavan I
Mumbai
5 - 8 yrs
₹12L - ₹24L / yr
Data engineering
Python
SQL

Job Description

Location: Mumbai (with short/medium-term travel opportunities within India & abroad)

Experience: 5–8 years

Job Type: Full-time

About the Role

We are looking for experienced data engineers who can independently build, optimize, and manage scalable data pipelines and platforms. In this role, you’ll work closely with clients and internal teams to deliver robust data solutions that power analytics, AI/ML, and operational systems. You’ll also help mentor junior engineers and bring engineering discipline into our data engagements.

Key Responsibilities

Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.


Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.


Collaborate with cross-functional stakeholders to understand business requirements and translate them into technical data solutions.


Drive performance tuning, monitoring, and reliability of data pipelines.


Write clean, modular, and production-ready code with proper documentation and testing.


Contribute to architectural discussions, tool evaluations, and platform setup.


Mentor junior engineers and participate in code/design reviews.


Must-Have Skills

Strong programming skills in Python and advanced SQL expertise.


Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.


Experience with distributed data processing frameworks (e.g., Apache Spark, Flink, or similar).


Exposure to Java is mandatory.


Experience with building pipelines using orchestration tools like Airflow or similar.


Familiarity with CI/CD pipelines and version control tools like Git.


Ability to debug, optimize, and scale data pipelines in real-world settings.
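To make the "stream processing" concept in the skills above concrete, here is a hedged, stdlib-only sketch of a tumbling-window count — the core idea behind windowed aggregations in engines like Spark Structured Streaming or Flink. The event tuples and window size are invented for the example.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    """Assign each (timestamp, key) event to a fixed-size window and count.

    window_start = floor(ts / window_s) * window_s, so windows tile the
    timeline with no overlap (a "tumbling" window).
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_s) * window_s
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (59, "click"), (61, "click"), (130, "view")]
print(tumbling_window_counts(events))
```

Real engines add the hard parts — late/out-of-order events, watermarks, and state checkpointing — but the window-assignment arithmetic is exactly this.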


Good to Have

Experience working on any major cloud platform (AWS preferred; GCP or Azure also welcome).


Exposure to Databricks, dbt, or similar platforms is a plus.


Experience with Snowflake is preferred.


Understanding of data governance, data quality frameworks, and observability.


Certification in AWS (e.g., Data Analytics, Solutions Architect) or Databricks is a plus.


Other Expectations

Comfortable working in fast-paced, client-facing environments.


Strong analytical and problem-solving skills with attention to detail.


Ability to adapt across tools, stacks, and business domains.


Willingness to travel within India for short/medium-term client engagements as needed.



Daten Wissen Pvt Ltd
Posted by Ashwini poojari
Mumbai
1 - 3 yrs
₹4L - ₹7L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
+2 more

Artificial Intelligence Researcher (Computer Vision)


Responsibilities

• Work on various SOTA computer vision models and dataset augmentation & dataset generation techniques that help improve model accuracy & precision.
• Work on development & improvement of end-to-end pipeline use cases running at scale.
• Programming skills with multi-threaded GPU CUDA computing and API solutions.
• Proficient with training of detection, classification & segmentation models with TensorFlow, PyTorch, MXNet, etc.


Required Skills

• Strong development skills in Python and C++.
• Ability to architect a solution based on given requirements and convert business requirements into a technical computer vision problem statement.
• Ability to work in a fast-paced environment and coordinate across different parts of different projects.
• Bring technical expertise around the implementation of best coding standards and practices across the team.
• Extensive experience working on edge devices like Jetson Nano, Raspberry Pi, and other GPU-powered low-compute devices.
• Experience with Docker, Nvidia Docker, and Nvidia NGC containers for computer vision deep learning.
• Experience with scalable cloud deployment architecture for video analytics (involving Kubernetes and/or Kafka).
• Good experience with any one of the cloud technologies: AWS, Azure, or Google Cloud.
• Experience with model optimisation for Nvidia hardware (TensorRT conversion of both TensorFlow & PyTorch models).
• Proficient understanding of code versioning tools, such as Git.
• Proficient in data structures & algorithms.
• Well versed in software design paradigms and good development practices.






Big Rattle Technologies
Posted by Sreelakshmi Nair (Big Rattle Technologies)
Remote, Mumbai
5 - 7 yrs
₹8L - ₹12L / yr
Python
SQL
Machine Learning (ML)
Data profiling
E2E
+8 more

Position: QA Engineer – Machine Learning Systems (5 - 7 years)

Location: Remote (Company in Mumbai)

Company: Big Rattle Technologies Private Limited


Immediate Joiners only.


Summary:

The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.


Key Responsibilities:

Test Strategy & Governance

  • Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
  • Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.


Data Quality & Transformation

  • Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
  • Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
  • Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines.
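A minimal sketch of such an automated check, using nothing beyond the stdlib — a real suite would use Great Expectations or pytest as the bullet says, and the column names and thresholds here are invented for illustration.

```python
def check_batch(rows, required_cols, max_null_rate=0.1):
    """Schema-presence and null-threshold checks, CI-gate style.

    Returns a list of human-readable failures; an empty list means the
    batch passes and the pipeline may promote the data.
    """
    failures = []
    for col in required_cols:
        missing = sum(1 for r in rows if r.get(col) is None)
        if any(col not in r for r in rows):
            failures.append(f"schema: column '{col}' absent in some rows")
        elif missing / len(rows) > max_null_rate:
            failures.append(
                f"nulls: '{col}' null rate {missing / len(rows):.0%} exceeds threshold"
            )
    return failures

rows = [
    {"site": "A", "price": 10.0},
    {"site": "B", "price": None},
    {"site": "C", "price": 12.5},
]
print(check_batch(rows, ["site", "price"], max_null_rate=0.1))
```

Wiring this into CI so a non-empty failure list fails the build is what turns a data check into a promotion gate.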

Model Training & Evaluation

  • Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
  • Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic.
  • Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
  • Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.
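The metric verification and champion/challenger gating above can be sketched as plain functions; the thresholds and series values are invented for illustration, and a real gate would run over held-out data per site/cluster.

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mape(y_true, y_pred):
    """Mean absolute percentage error (assumes no zero targets)."""
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def passes_gate(y_true, challenger, champion, max_mape=0.2):
    """Champion/challenger acceptance: promote only if the challenger beats
    the champion on MAE and stays under an absolute MAPE threshold."""
    return (mae(y_true, challenger) < mae(y_true, champion)
            and mape(y_true, challenger) <= max_mape)

y = [100.0, 200.0, 300.0]
champ = [90.0, 210.0, 330.0]
chall = [95.0, 205.0, 310.0]
print(passes_gate(y, chall, champ))
```

In AML the gate's verdict would be recorded as a signed test artifact next to the model version, so promotion is reproducible.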


Predictions, Optimization & Guardrails

  • Validate batch predictions: result shapes, coverage, latency, and failure handling.
  • Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
  • Verify API push to third party system (idempotency keys, retry/backoff, delivery receipts).
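One way to read the idempotency-key requirement above, as a hedged sketch: a retried push carrying the same key is applied once and returns the original receipt, so duplicate API calls cannot double-write. The class and field names are invented; a real implementation would persist the key-to-receipt map in a database.

```python
import uuid

class IdempotentPusher:
    """Sketch of idempotency-key delivery semantics."""

    def __init__(self):
        self.delivered = {}  # idempotency_key -> receipt

    def push(self, idempotency_key, payload):
        if idempotency_key in self.delivered:
            # Duplicate (e.g., a retry after a timeout): return prior receipt.
            return self.delivered[idempotency_key]
        receipt = {"id": str(uuid.uuid4()), "payload": payload}
        self.delivered[idempotency_key] = receipt
        return receipt

pusher = IdempotentPusher()
first = pusher.push("batch-2024-01-07", {"price": 10.0})
retry = pusher.push("batch-2024-01-07", {"price": 10.0})
print(first["id"] == retry["id"])
```

Tests for this property are exactly what "prove idempotent writes" asks for: push twice with one key, assert a single write happened.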


Pipelines & E2E

  • Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization) including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
  • Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, asserting freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).


Automation & Tooling

  • Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
  • Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
  • Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).


Reporting & Quality Ops

  • Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
  • Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.


Required Skills (hands-on experience in the following):

  • Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
  • Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
  • Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
  • API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) plus idempotency and retry patterns.
  • Familiarity with feature stores/feature engineering concepts and reproducibility.
  • Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.


Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
  • 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
  • Certification in Azure Data or ML Engineer Associate is a plus.



Why should you join Big Rattle?

Big Rattle Technologies specializes in AI/ ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients.

Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.

What We Offer:

  • Opportunity to work on diverse projects for Fortune 500 clients.
  • Competitive salary and performance-based growth.
  • Dynamic, collaborative, and growth-oriented work environment.
  • Direct impact on product quality and client satisfaction.
  • 5-day hybrid work week.
  • Certification reimbursement.
  • Healthcare coverage.

How to Apply:

Interested candidates are invited to submit their resume detailing their experience. Please detail out your work experience and the kind of projects you have worked on. Ensure you highlight your contributions and accomplishments to the projects.


Wissen Technology
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
8 - 14 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
Kubernetes
DevOps
Python

JD for Cloud Engineer

 

Job Summary:


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.

 

Key Responsibilities:

1. Cloud Infrastructure Design & Management

  • Architect, deploy, and maintain GCP cloud resources via terraform/other automation.
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments.


2. Kubernetes & Container Orchestration

  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
  • Work with Helm charts for microservices deployments.
  • Automate scaling, rolling updates, and zero-downtime deployments.

 

3. Serverless & Compute Services

  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance.

 

4. CI/CD & DevOps Automation

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure deployment using Terraform and Bash/PowerShell scripting.
  • Integrate security and compliance checks into the DevOps workflow (DevSecOps).

 

 

Required Skills & Qualifications:

Experience: 8+ years in Cloud Engineering, with a focus on GCP.

Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.

CoffeeBeans
Posted by Ariba Khan
Mumbai, Hyderabad
4 - 6 yrs
Up to ₹27L / yr (varies)
Python
SQL
Java
Data engineering

We are looking for experienced Data Engineers who can independently build, optimize, and manage scalable data pipelines and data platforms. In this role, you will collaborate with clients and internal teams to deliver robust data solutions that support analytics, AI/ML, and operational systems. You will also mentor junior engineers and bring strong engineering discipline to our data engagements.


Key Responsibilities

  • Design, build, and optimize large-scale, distributed batch and streaming data pipelines.
  • Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.
  • Work closely with cross-functional stakeholders to translate business requirements into technical data solutions.
  • Drive performance tuning, monitoring, and reliability of data pipelines.
  • Write clean, modular, production-ready code with proper documentation and testing.
  • Contribute to architecture discussions, tool evaluations, and platform setup.
  • Mentor junior engineers and participate in code/design reviews.

Must-Have Skills

  • Strong programming skills in Python (experience with Java is good to have).
  • Advanced SQL expertise with ability to work on complex queries and optimizations.
  • Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing.
  • Experience with distributed processing frameworks like Apache Spark, Flink, or similar.
  • Experience with Snowflake (preferred).
  • Hands-on experience building pipelines using orchestration tools such as Airflow or similar.
  • Familiarity with CI/CD, version control (Git), and modern development practices.
  • Ability to debug, optimize, and scale data pipelines in real-world environments.

Good to Have

  • Experience with major cloud platforms (AWS preferred; GCP/Azure also welcome).
  • Exposure to Databricks, dbt, or similar platforms.
  • Understanding of data governance, data quality frameworks, and observability.
  • Certifications in AWS (Data Analytics / Solutions Architect) or Databricks.

Other Expectations

  • Comfortable working in fast-paced, client-facing environments.
  • Strong analytical and problem-solving skills with excellent attention to detail.
  • Ability to adapt across tools, stacks, and business domains.
  • Willingness to travel within India for short/medium-term client engagements as needed.
shaadi.com
Agency job
via hirezyai by Aardra Suresh
Mumbai
2 - 8 yrs
₹24L - ₹30L / yr
Machine Learning (ML)
Python
SQL
Neural networks

What We’re Looking For

  • 3-5 years of Data Science & ML experience in consumer internet / B2C products.
  • Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
  • Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection.
  • Statistical chops: finding meaningful insights in large data sets.
  • Programming ninja: R, Python, SQL + hands-on with NumPy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
  • Visualization skills: Redshift, Tableau, Looker, or similar.
  • A strong problem-solver with curiosity hardwired into your DNA.

Brownie Points

  • Experience with big data platforms: Hadoop, Spark, Hive, Pig.
  • Extra love if you’ve played with BI tools like Tableau or Looker.


Daten Wissen Pvt Ltd
Mumbai, Bhayandar, Thane
1 - 2 yrs
₹2L - ₹4L / yr
Django
RESTful APIs
DRF
Python
Amazon Web Services (AWS)
+1 more

About the Role:


We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.


Key Responsibilities:

  • Develop and maintain Python-based web applications using Django and Django Rest Framework.
  • Build and integrate RESTful APIs.
  • Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
  • Contribute to improving development workflows through automation.
  • Assist in deploying applications using cloud platforms like Heroku or AWS.
  • Write clean, maintainable, and efficient code.


Requirements:

Backend:

  • Strong understanding of Django and Django Rest Framework (DRF).
  • Experience with task queues like Celery.


Frontend (Basic Understanding):

  • Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.


Hosting & Deployment:

  • Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.

Linux/Server Knowledge:

  • Basic to intermediate understanding of Linux commands and server environments.
  • Ability to work with terminal, virtual environments, SSH, and basic server configurations.

Python Knowledge:

  • Good grasp of OOP concepts.
  • Familiarity with Pandas for data manipulation is a plus.

Soft & Team Skills:

  • Strong collaboration and team management abilities.
  • Ability to work in a team-driven environment and coordinate tasks smoothly.
  • Problem-solving mindset and attention to detail.
  • Good communication skills and eagerness to learn.

What We Offer:

  • A collaborative, friendly, and growth-focused work environment.
  • Opportunity to work on real-time projects using modern technologies.
  • Guidance and mentorship to help you advance in your career.
  • Flexible and supportive work culture.
  • Opportunities for continuous learning and skill development.


Read more
Oneture Technologies

at Oneture Technologies

1 recruiter
Eman Khan
Posted by Eman Khan
Mumbai
5 - 8 yrs
₹15L - ₹23L / yr
skill iconPython
FastAPI
skill iconDjango
skill iconReact.js
skill iconAmazon Web Services (AWS)

About The Role

We are seeking a Full Stack Cloud Engineer with strong hands-on experience in Python (FastAPI), React.js, and AWS Serverless architecture to lead and contribute to the design and development of scalable, modern web applications. The ideal candidate will bring both technical depth and leadership skills, mentoring a small team of developers while remaining actively involved in coding, architectural decisions, and deployment.


You will play a key role in building and optimizing cloud-native, serverless applications using AWS services, integrating front-end and back-end components, and ensuring reliability, scalability, and performance.

 

Responsibilities


Technical Leadership

  • Lead and mentor a small team of engineers, ensuring adherence to coding standards and best practices.
  • Drive architectural and design decisions aligned with scalability, performance, and maintainability.
  • Conduct code reviews, guide junior developers, and foster a collaborative engineering culture.

Backend Development

  • Design, build, and maintain RESTful APIs using FastAPI or Flask.
  • Develop and deploy serverless microservices on AWS Lambda using AWS SAM.
  • Work with relational databases (PostgreSQL/MySQL) and optimize SQL queries.
  • Manage asynchronous task queues with Celery and Redis/SQS.
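The decorator-based routing pattern behind FastAPI can be illustrated with a stdlib-only sketch: a decorator registers async path handlers, mimicking `@app.get`. This is a demonstration of the pattern, not FastAPI itself, and the `/health` route is an invented example.

```python
import asyncio

ROUTES = {}

def get(path):
    """Register an async handler for GET <path>, FastAPI-style."""
    def register(fn):
        ROUTES[("GET", path)] = fn
        return fn
    return register

@get("/health")
async def health():
    return {"status": "ok"}

async def dispatch(method, path):
    # Look up the registered coroutine and await it, else report 404.
    handler = ROUTES.get((method, path))
    return await handler() if handler else {"error": 404}

print(asyncio.run(dispatch("GET", "/health")))  # {'status': 'ok'}
```

FastAPI layers validation, serialization, and OpenAPI generation on top of this same registration idea.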

Frontend Development

  • Build and maintain responsive, scalable front-end applications using React.js.
  • Implement reusable components using Redux, Hooks, and TypeScript.
  • Integrate APIs and optimize front-end performance, accessibility, and security.

AWS Cloud & DevOps

  • Architect and deploy applications using AWS SAM, Lambda, Glue, Cognito, AppSync, and Amplify.
  • (Good-to-have) Experience designing and consuming GraphQL APIs via AWS AppSync.
  • Implement CI/CD pipelines and manage deployments via Amplify, CodePipeline, or equivalent.
  • Ensure proper authentication, authorization, and identity management with Cognito.
  • Use GitLab/DevOps pipelines, Docker, and AWS ECS/EKS for containerized deployments where required.


Preferred Skills

  • Experience with GraphQL (AppSync) and data integrations.
  • Exposure to container orchestration (ECS/EKS).
  • AWS Certification (e.g., AWS Developer or Architect Associate) is a plus.


Soft Skills

  • Strong communication and leadership abilities.
  • Ability to mentor and motivate team members.
  • Problem-solving mindset with attention to detail and scalability.
  • Passion for continuous learning and improvement.


About Oneture Technologies

 

Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of Digital Technologies and Data to drive transformations and turn ideas into business realities.

 

Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results from Ideas to Reality for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions - from ideation, project inception, and planning through deployment to ongoing support and maintenance.

 

Our core competencies and technical expertise include cloud-powered Product Engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners and our “Startups-like agility with Enterprises-like maturity” philosophy have helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
AZURE
skill iconJava
skill iconRuby
Oracle NoSQL Database
+5 more

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 3-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have

Read more
Chtrbox
Smruti Kedare
Posted by Smruti Kedare
Mumbai
2 - 8 yrs
₹10L - ₹18L / yr
skill iconMongoDB
skill iconAmazon Web Services (AWS)
RESTful APIs
API
skill iconNextJs (Next.js)
+2 more

Backend Engineer (MongoDB / API Integrations / AWS / Vectorization)


Position Summary

We are hiring a Backend Engineer with expertise in MongoDB, data vectorization, and advanced AI/LLM integrations. The ideal candidate will have hands-on experience developing backend systems that power intelligent data-driven applications, including robust API integrations with major social media platforms (Meta, Instagram, Facebook, with expansion to TikTok, Snapchat, etc.). In addition, this role requires deep AWS experience (Lambda, S3, EventBridge) to manage serverless workflows, automate cron jobs, and execute both scheduled and manual data pulls. You will collaborate closely with frontend developers and AI engineers to deliver scalable, resilient APIs that power our platform.


Key Responsibilities

  • Design, implement, and maintain backend services with MongoDB and scalable data models.
  • Build pipelines to vectorize data for retrieval-augmented generation (RAG) and other AI-driven features.
  • Develop robust API integrations with major social platforms (Meta, Instagram Graph API, Facebook API; expand to TikTok, Snapchat, etc.).
  • Implement and maintain AWS Lambda serverless functions for scalable backend processes.
  • Use AWS EventBridge to schedule cron jobs and manage event-driven workflows.
  • Leverage AWS S3 for structured and unstructured data storage, retrieval, and processing.
  • Build workflows for manual and automated data pulls from external APIs.
  • Optimize backend systems for performance, scalability, and reliability at high data volumes.
  • Collaborate with frontend engineers to ensure smooth integration into Next.js applications.
  • Ensure security, compliance, and best practices in API authentication (OAuth, tokens, etc.).
  • Contribute to architecture planning, documentation, and system design reviews.
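The retrieval step of the RAG pipelines described above can be sketched in pure Python: documents are stored as embedding vectors and ranked by cosine similarity to a query vector. In a real system the embeddings would come from a model (e.g. an OpenAI embedding endpoint) and the search from a vector index such as MongoDB Atlas Vector Search; the 3-dimensional vectors and document names below are made up for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector store": document id -> embedding.
DOCS = {
    "post_metrics": [0.9, 0.1, 0.0],
    "creator_bio":  [0.2, 0.8, 0.1],
    "campaign_faq": [0.1, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

print(retrieve([1.0, 0.0, 0.1]))  # 'post_metrics' ranks first
```

The retrieved documents would then be stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.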


Required Skills/Qualifications

  • Strong expertise with MongoDB (including Atlas) and schema design.
  • Experience with data vectorization and embeddings (OpenAI, Pinecone, MongoDB Atlas Vector Search, etc.).
  • Proven track record of social media API integrations (Meta, Instagram, Facebook; additional platforms a plus).
  • Proficiency in Node.js, Python, or other backend languages for API development.
  • Deep understanding of AWS services:
    • Lambda for serverless functions.
    • S3 for structured/unstructured data storage.
    • EventBridge for cron jobs, scheduled tasks, and event-driven workflows.
  • Strong understanding of REST and GraphQL API design.
  • Experience with data optimization, caching, and large-scale API performance.


Preferred Skills/Experience

  • Experience with real-time data pipelines (Kafka, Kinesis, or similar).
  • Familiarity with CI/CD pipelines and automated deployments on AWS.
  • Knowledge of serverless architecture best practices.
  • Background in SaaS platform development or data analytics systems.


Read more
Apprication pvt ltd

at Apprication pvt ltd

1 recruiter
Adam patel
Posted by Adam patel
Mumbai
2.5 - 4 yrs
₹6L - ₹12L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Huggingface
skill iconPython
PyTorch
+13 more

Job Title: AI / Machine Learning Engineer

Company: Apprication Pvt Ltd

Location: Goregaon East

Employment Type: Full-time

Experience: 2.5-4 Years


  • Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
  • Proven experience of 2.5-4 years as an AI/ML Engineer, Data Scientist, or AI Application Developer.
  • Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
  • Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
  • Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
  • Hands-on experience with cloud ML services (Sage Maker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
  • Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
  • Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
  • Strong understanding of data structures, algorithms, APIs, and distributed systems.
  • Excellent problem-solving, analytical, and communication skills.
  • Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
  • Ensure solutions meet security, compliance, and performance standards.
  • Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.
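The feature-engineering work mentioned in the responsibilities can be illustrated with a minimal sketch: turning raw records into numeric feature vectors (a one-hot category plus a normalized amount). The field names and categories are invented for the example; production pipelines would typically use Pandas and scikit-learn transformers instead of hand-written loops.

```python
# Hypothetical category vocabulary for the one-hot encoding.
CATEGORIES = ["food", "travel", "retail"]

def featurize(record, max_amount=1000.0):
    """Map a raw record to [one-hot category..., clipped normalized amount]."""
    one_hot = [1.0 if record["category"] == c else 0.0 for c in CATEGORIES]
    amount = min(record["amount"] / max_amount, 1.0)  # clip outliers to 1.0
    return one_hot + [amount]

rows = [{"category": "travel", "amount": 250.0},
        {"category": "food", "amount": 1500.0}]
features = [featurize(r) for r in rows]
print(features)  # [[0.0, 1.0, 0.0, 0.25], [1.0, 0.0, 0.0, 1.0]]
```

The same shape of transform, fitted on training data and versioned, is what an ETL/preprocessing workflow would apply consistently at training and serving time.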
Read more
Apprication pvt ltd

at Apprication pvt ltd

1 recruiter
Adam patel
Posted by Adam patel
Mumbai
2 - 3 yrs
₹6L - ₹12L / yr
skill iconPython
Natural Language Processing (NLP)
LangChain

Apprication Pvt Ltd is hiring a Senior Data Analyst with a minimum of 2 years of full-time experience (excluding internships).

- Lead and mentor Data Science team members, ensuring knowledge sharing and growth through structured guidance.

- Architect and deploy end-to-end AI/ML solutions, including LLM applications, RAG systems, and multi-agent workflows.

- Collaborate with cross-functional teams (engineering, product, domain experts) to align AI solutions with business goals.

- Establish MLOps practices, CI/CD pipelines, and standardized evaluation frameworks for production-ready AI.

- Drive innovation by researching, prototyping, and implementing state-of-the-art techniques in Generative AI and Machine Learning.


Read more
HaystackAnalytics
Careers Hr
Posted by Careers Hr
Navi Mumbai
2 - 4 yrs
₹5L - ₹10L / yr
skill iconNextJs (Next.js)
skill iconReact.js
skill iconReact Native
skill iconNodeJS (Node.js)
skill iconPython
+11 more


Job Description


Position - Full Stack Developer

Location - Mumbai

Experience - 2-5 Years


Who are we

Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics based diagnostic solution for Tuberculosis was recognized as one of top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.


Objectives of this Role:

  • Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
  • Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
  • Develop state-of-the-art, enterprise-standard front-end and backend services
  • Develop cloud platform services based on a container orchestration platform
  • Continuously embrace automation for repetitive tasks
  • Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
  • Build robust, unit-testable tech modules, automating recurring tasks and processes
  • Engage effectively with team members and collaborate to upskill and unblock each other



Frontend Skills 

  • HTML5
  • CSS frameworks (LESS / SASS / Tailwind)
  • ES6 / TypeScript
  • Desktop app frameworks (Electron / Tauri)
  • Component libraries (Bootstrap, Material UI, Lit)
  • Responsive web layout (Flex layout, Grid layout)
  • Package managers (yarn / npm / turbo)
  • Build tools (Vite / Webpack / Parcel)
  • Frameworks: React with Redux or MobX / Next.js
  • Design patterns
  • Testing (Jest / Mocha / Jasmine / Cypress)
  • Functional programming concepts
  • Scripting (PowerShell, bash, Python)



Backend Skills 

  • Node.js - Express / NestJS
  • Python / Rust
  • REST APIs
  • SOLID design principles
  • Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
  • Caching (Redis)
  • Container technology (Docker / Kubernetes)
  • Cloud (Azure, AWS, OpenShift, Google Cloud)
  • Version control - Git
  • GitOps
  • Automation (Terraform, Ansible)


Cloud  Skills 

  • Object storage
  • VPC concepts
  • Containerized deployment
  • Serverless architecture


Other  Skills 

  • Innovation and thought leadership
  • UI-UX design skills
  • Interest in learning new tools, languages, workflows, and philosophies to grow
  • Communication


To know more about us- https://haystackanalytics.in/




Read more
Wissen Technology

at Wissen Technology

4 recruiters
Bipasha Rath
Posted by Bipasha Rath
Pune, Mumbai
5 - 8 yrs
Best in industry
Google Cloud Platform (GCP)
AZURE
Terraform
skill icon.NET
skill iconPython
+2 more

Job Description:


Position - Cloud Developer

Experience - 5 - 8 years

Location - Mumbai & Pune


Responsibilities:

  • Design, develop, and maintain robust software applications using the coding languages best suited to the application design, with a strong focus on clean, maintainable, and efficient code.
  • Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
  • Develop RESTful APIs and backend services aligned with modern architectural practices.
  • Apply object-oriented programming principles and design patterns to build scalable systems.
  • Build and maintain automated test frameworks and scripts to ensure high product quality.
  • Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
  • Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
  • Use Git and related version control practices effectively in a team-based development environment.
  • Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.


Skills:

  • 5+ years of experience
  • Experience with IaC modules
  • Terraform coding experience, including Terraform modules, as part of a central platform team
  • Azure/GCP cloud experience is a must
  • C#/Python/Java coding experience is good to have


Read more
Wissen Technology
Pune, Mumbai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
skill iconPython
skill iconKubernetes
Shell Scripting
SRE Engineer
+1 more

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
  • Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.



Job Description: 

Please find below details:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate shall be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams like Client Services, Product and Research, and Infrastructure/Technology and Application Development teams to perform environment and application maintenance and support.

 

Resource's key Responsibilities


• Provide Tier 2/3 product technical support.

• Build software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).

• Must spend a minimum of one week a month on call to help with off-hour emergencies and maintenance activities.
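The "building software to help operations" responsibility often starts with small triage scripts. As a hedged sketch (the `"<service> <level> <message>"` log format is an assumption for the example), the snippet below tallies ERROR lines per service so a tier 2/3 engineer can see which component to look at first:

```python
from collections import Counter

def error_counts(lines):
    """Count ERROR log lines per service from '<service> <level> <message>' lines."""
    counts = Counter()
    for line in lines:
        service, level, *_ = line.split(" ", 2)
        if level == "ERROR":
            counts[service] += 1
    return counts

# Invented sample log lines for the demonstration.
logs = [
    "pricing ERROR timeout calling quotes",
    "pricing INFO request served",
    "auth ERROR token expired",
    "pricing ERROR timeout calling quotes",
]
print(error_counts(logs).most_common(1))  # [('pricing', 2)]
```

The same idea, pointed at real log streams and wired into alerting, is a typical first automation win in a support rotation.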

 

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have




Read more
Talent Pro
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 8 yrs
₹25L - ₹35L / yr
skill iconPython
skill iconReact.js

Strong Full stack developer Profile

Mandatory (Experience 1) - Must Have Minimum 5+ YOE in Software Development,

Mandatory (Experience 2) - Must have 4+ YOE in backend using Python.

Mandatory (Experience 3) - Must have good experience in frontend using React JS with knowledge of HTML, CSS, and JavaScript.

Mandatory (Experience 4) - Must have Experience in any databases - MySQL / PostgreSQL / Postgres / Oracle / SQL Server /

Read more
One of the reputed Client in India

One of the reputed Client in India

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Hyderabad, Pune
6 - 8 yrs
₹12L - ₹13L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark

Our client is looking to hire a Databricks Admin immediately.


This is PAN-INDIA Bulk hiring


Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.

AWS experience is a must.


Notice 15-30 days is preferred.


Share profiles at hr at etpspl dot com

Please refer/share our email to your friends/colleagues who are looking for job.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Sonali RajeshKumar
Posted by Sonali RajeshKumar
Bengaluru (Bangalore), Pune, Mumbai
4 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Reliability engineering
skill iconPython
Shell Scripting

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   


Job Description: 

Please find below details:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate shall be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams like Client Services, Product and Research, and Infrastructure/Technology and Application Development teams to perform environment and application maintenance and support.

 

Resource's key Responsibilities


• Provide Tier 2/3 product technical support.

• Build software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).

• Must spend a minimum of one week a month on call to help with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have


Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Chennai
7 - 10 yrs
₹10L - ₹18L / yr
full stack
skill iconReact.js
skill iconPython
skill iconGo Programming (Golang)
CI/CD
+9 more

Full-Stack Developer

Exp: 5+ years required

Night shift: 8 PM-5 AM / 9 PM-6 AM

Only immediate joiners can apply


We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.

Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.

Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.


Read more
Hunarstreet technologies pvt ltd

Hunarstreet technologies pvt ltd

Agency job
Chennai, Hyderabad, Bengaluru (Bangalore), Mumbai, Pune, Gurugram, Mohali, Panchkula
5 - 15 yrs
₹10L - ₹15L / yr
Fullstack Developer
Web Development
skill iconJavascript
TypeScript
skill iconGo Programming (Golang)
+5 more

We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.


Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.


Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.

Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Nagpur, Ahmedabad, Jaipur, Kochi (Cochin)
3.6 - 8 yrs
₹4L - ₹18L / yr
skill iconPython
skill iconDjango
skill iconFlask
skill iconAmazon Web Services (AWS)
AWS Lambda
+3 more

Job Summary:

Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.


Key Responsibilities:

  • Design, develop, and deploy backend services and APIs using Python.
  • Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
  • Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
  • Implement containerized environments using Docker and manage orchestration via Kubernetes.
  • Write automation and scripting solutions in Bash/Shell to streamline operations.
  • Work with relational databases like MySQL and SQL, including query optimization.
  • Collaborate directly with clients to understand requirements and provide technical solutions.
  • Ensure system reliability, performance, and scalability across environments.
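The "query optimization" responsibility can be demonstrated end to end with SQLite (standing in for the MySQL/RDS databases the role targets): add an index and confirm with `EXPLAIN QUERY PLAN` that a full table scan becomes an index lookup. The `orders` table and its columns are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 50, i * 1.5) for i in range(1000)])

def plan(sql):
    """Return the query plan text for a statement."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 7"
before = plan(query)  # without an index: a SCAN of the whole table
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)   # with the index: a SEARCH using idx_orders_customer
print("SCAN" in before, "idx_orders_customer" in after)  # True True
```

MySQL's `EXPLAIN` plays the same role there; the workflow of measure, index, re-measure carries over directly.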


Required Skills:

  • 3.5+ years of hands-on experience in Python development.
  • Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
  • Good understanding of Terraform or other Infrastructure as Code tools.
  • Proficient with Docker and container orchestration using Kubernetes.
  • Experience with CI/CD tools like Jenkins or GitHub Actions.
  • Strong command of SQL/MySQL and scripting with Bash/Shell.
  • Experience working with external clients or in client-facing roles.


Preferred Qualifications:

  • AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
  • Familiarity with Agile/Scrum methodologies.
  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder management abilities.


Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
skill iconKubernetes
helm
skill iconDocker
skill iconAmazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible
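Since the mandatory skills include Python and Pytest, here is a minimal pytest-style sketch of an automated health check. The `probe` function and service names are hypothetical stand-ins; a real suite would call services over HTTP and let pytest discover the test functions.

```python
# Sketch of a pytest-style check for service health.
# In a real framework, probe() would issue an HTTP request to the service.

def probe(service):
    """Pretend health probe: returns a status dict for a known service."""
    known = {"api": "ok", "worker": "ok", "cache": "degraded"}
    return {"service": service, "status": known.get(service, "unknown")}

def test_api_is_healthy():
    assert probe("api")["status"] == "ok"

def test_unknown_service_reported():
    assert probe("ghost")["status"] == "unknown"

# Run directly (pytest would normally collect and run these):
if __name__ == "__main__":
    test_api_is_healthy()
    test_unknown_service_reported()
    print("all checks passed")
```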

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
Read more
Mumbai
2.5 - 4 yrs
₹5L - ₹10L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
skill iconData Science
skill iconPython
TensorFlow
+14 more

Job Title: AI / Machine Learning Engineer

 Company: Apprication Pvt Ltd

 Location: Goregaon East

 Employment Type: Full-time

 Experience: 2.5-4 Years




 About the Role

We’re seeking a highly motivated AI / Machine Learning Engineer to join our growing engineering team. You will design, build, and deploy AI-powered solutions for web and application platforms, bringing cutting-edge machine learning research into real-world production systems.




This role blends applied machine learning, backend engineering, and cloud deployment, with opportunities to work on NLP, computer vision, generative AI, and intelligent automation across diverse industries.




Key Responsibilities

  • Design, train, and deploy machine learning models for NLP, computer vision, recommendation systems, and other AI-driven use cases.
  • Integrate ML models into production-ready web and mobile applications, ensuring scalability and reliability.
  • Collaborate with data scientists to optimize algorithms, pipelines, and inference performance.
  • Build APIs and microservices for model serving, monitoring, and scaling.
  • Leverage cloud platforms (AWS, Azure, GCP) for ML workflows, containerization (Docker/Kubernetes), and CI/CD pipelines.
  • Implement AI-powered features such as chatbots, personalization engines, predictive analytics, or automation systems.
  • Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
  • Ensure solutions meet security, compliance, and performance standards.
  • Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.
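One of the feature-engineering steps mentioned above can be sketched in plain Python: z-score standardization, a common preprocessing step before model training. This is a generic illustration, not Apprication's pipeline; production code would typically use Scikit-learn's `StandardScaler`.

```python
import math

def standardize(values):
    """Z-score standardization: rescale features to zero mean, unit variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    if std == 0:
        # Constant feature: no variance to normalize away.
        return [0.0 for _ in values]
    return [(v - mean) / std for v in values]

if __name__ == "__main__":
    print(standardize([1.0, 2.0, 3.0, 4.0, 5.0]))
```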

Skills & Qualifications

  • Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
  • Proven experience (2.5–4 years) as an AI/ML Engineer, Data Scientist, or AI Application Developer.
  • Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
  • Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
  • Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
  • Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
  • Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
  • Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
  • Strong understanding of data structures, algorithms, APIs, and distributed systems.
  • Excellent problem-solving, analytical, and communication skills.
Read more
Mumbai
4 - 8 yrs
₹3L - ₹7L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
skill iconData Science
skill iconPython
TensorFlow
+15 more

Job Title: AI / Machine Learning Engineer

 Company: Apprication Pvt Ltd

 Location: Goregaon East

 Employment Type: Full-time

 Experience: 4 Years




 About the Role

We’re seeking a highly motivated AI / Machine Learning Engineer to join our growing engineering team. You will design, build, and deploy AI-powered solutions for web and application platforms, bringing cutting-edge machine learning research into real-world production systems.




This role blends applied machine learning, backend engineering, and cloud deployment, with opportunities to work on NLP, computer vision, generative AI, and intelligent automation across diverse industries.




Key Responsibilities

  • Design, train, and deploy machine learning models for NLP, computer vision, recommendation systems, and other AI-driven use cases.
  • Integrate ML models into production-ready web and mobile applications, ensuring scalability and reliability.
  • Collaborate with data scientists to optimize algorithms, pipelines, and inference performance.
  • Build APIs and microservices for model serving, monitoring, and scaling.
  • Leverage cloud platforms (AWS, Azure, GCP) for ML workflows, containerization (Docker/Kubernetes), and CI/CD pipelines.
  • Implement AI-powered features such as chatbots, personalization engines, predictive analytics, or automation systems.
  • Develop and maintain ETL pipelines, data preprocessing workflows, and feature engineering processes.
  • Ensure solutions meet security, compliance, and performance standards.
  • Stay updated with the latest research and trends in deep learning, generative AI, and LLMs.

Skills & Qualifications

  • Bachelor’s or Master’s in Computer Science, Machine Learning, Data Science, or related field.
  • Proven experience of 4 years as an AI/ML Engineer, Data Scientist, or AI Application Developer.
  • Strong programming skills in Python (TensorFlow, PyTorch, Scikit-learn); familiarity with LangChain, Hugging Face, OpenAI API is a plus.
  • Experience in model deployment, serving, and optimization (FastAPI, Flask, Django, or Node.js).
  • Proficiency with databases (SQL and NoSQL: MySQL, PostgreSQL, MongoDB).
  • Hands-on experience with cloud ML services (SageMaker, Vertex AI, Azure ML) and DevOps tools (Docker, Kubernetes, CI/CD).
  • Knowledge of MLOps practices: model versioning, monitoring, retraining, experiment tracking.
  • Familiarity with frontend frameworks (React.js, Angular, Vue.js) for building AI-driven interfaces (nice to have).
  • Strong understanding of data structures, algorithms, APIs, and distributed systems.
  • Excellent problem-solving, analytical, and communication skills.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Gagandeep Kaur
Posted by Gagandeep Kaur
Bengaluru (Bangalore), Mumbai, Pune
4 - 7 yrs
Best in industry
skill iconPython
PySpark
pandas
Airflow
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time.

Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives.

We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.

Experience: 4-7 years

Notice Period: Immediate- 15 days

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python and Pandas.
  • Implement and manage workflows using Airflow.
  • Utilize Azure Cloud Services for data storage and processing.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Optimize and scale data infrastructure to meet business needs.
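The data-quality responsibility above can be sketched as a simple validation pass. This is an illustrative stand-in (column names and rules are invented); real pipelines would typically express such checks in a framework like Great Expectations or dbt tests.

```python
def run_dq_checks(rows, required, unique_key):
    """Simple data-quality pass: null checks on required fields and
    duplicate detection on a key column. Returns a list of issues."""
    issues, seen = [], set()
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                issues.append(f"row {i}: missing '{col}'")
        key = row.get(unique_key)
        if key in seen:
            issues.append(f"row {i}: duplicate {unique_key}={key!r}")
        seen.add(key)
    return issues

if __name__ == "__main__":
    rows = [
        {"id": 1, "name": "a"},
        {"id": 1, "name": ""},  # missing name AND duplicate id
    ]
    for issue in run_dq_checks(rows, required=["name"], unique_key="id"):
        print(issue)
```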

Qualifications and Required Skills:

  • Proficiency in Python (Must Have).
  • Strong experience with Pandas (Must Have).
  • Expertise in Airflow (Must Have).
  • Experience with Azure Cloud Services.
  • Good communication skills.

Good to Have Skills:

  • Experience with Pyspark.
  • Knowledge of Kubernetes.

Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Hyderabad, Noida, Mumbai, Navi Mumbai, Ahmedabad, Chennai, Coimbatore, Gurugram, Kochi (Cochin), Kolkata, Calcutta, Pune, Thiruvananthapuram, Trivandrum
7 - 15 yrs
₹15L - ₹30L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
Data Lake

SENIOR DATA ENGINEER:

ROLE SUMMARY:

Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.



RESPONSIBILITIES:

  • Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
  • Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
  • Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
  • Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
  • DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
  • Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
  • Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
  • Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
  • Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
  • Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
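The PII detection and masking responsibility above is often layered: a cheap rule-based pass first, with LLM-based classification (as mentioned in the qualifications below) for ambiguous cases. The following is only the rule-based sketch, with invented patterns covering emails and 10-digit phone numbers:

```python
import re

# Hypothetical rule-based PII pass; real systems combine such rules
# with ML/LLM classifiers and far more pattern coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{10}\b"),
}

def mask_pii(text):
    """Replace detected PII spans with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

if __name__ == "__main__":
    print(mask_pii("Contact jane@example.com or 9876543210"))
```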



REQUIRED QUALIFICATIONS:

  • Experience: Extensive experience designing distributed systems at petabyte scale, including building data lakes and multi-source ingestion.
  • Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
  • Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
  • ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
  • Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
  • DevOps/IaC: Strong hands-on Terraform practice; CI/CD (GitHub Actions, Jenkins); configuration governance and release management.
  • Serverless and events: Design event-driven distributed systems on AWS.
  • NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
  • AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.



NICE-TO-HAVE QUALIFICATIONS:

  • Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
  • Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
  • Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
  • Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.



OUTCOMES AND MEASURES:

  • Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
  • Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
  • Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
  • Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
  • Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.



LOCATION AND SCHEDULE:

  • Location: Outside US (OUS).
  • Schedule: Minimum 6 hours of overlap with US time zones.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Bipasha Rath
Posted by Bipasha Rath
Mumbai, Bengaluru (Bangalore), Pune
3 - 7 yrs
Best in industry
skill iconPython
pandas
PySpark

Experience: 3–7 Years

Locations: Pune / Bangalore / Mumbai

Notice Period: Immediate joiner only


Employment Type: Full-time

🛠️ Key Skills (Mandatory):

  • Python: Strong coding skills for data manipulation and automation.
  • PySpark: Experience with distributed data processing using Spark.
  • SQL: Proficient in writing complex queries for data extraction and transformation.
  • Azure Databricks: Hands-on experience with notebooks, Delta Lake, and MLflow


Interested candidates, please share your resume with the details below.


Total Experience -

Relevant Experience in Python, PySpark, SQL, Azure Databricks -

Current CTC -

Expected CTC -

Notice period -

Current Location -

Desired Location -


Read more
Snaphyr

Snaphyr

Agency job
via SnapHyr by MUKESHKUMAR CHAUHAN
Mumbai
3 - 6 yrs
₹10L - ₹25L / yr
skill iconPython
Data-flow analysis
Backend testing
Market analysis
Market Research
+1 more

🚀 We’re Hiring: Python Developer – Quant Strategies & Backtesting | Mumbai (Goregaon East)


Are you a skilled Python Developer passionate about financial markets and quantitative trading?


We’re looking for someone to join our growing Quant Research & Algo Trading team, where you’ll work on:

🔹 Developing & optimizing trading strategies in Python

🔹 Building backtesting frameworks across multiple asset classes

🔹 Processing and analyzing large market datasets

🔹 Collaborating with quant researchers & traders on real-world strategies


What we’re looking for:

✔️ 3+ years of experience in Python development (preferably in fintech/trading/quant domains)

✔️ Strong knowledge of Pandas, NumPy, SciPy, SQL

✔️ Experience in backtesting, data handling & performance optimization

✔️ Familiarity with financial markets is a big plus
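A toy moving-average crossover backtest illustrates the kind of framework this role builds. This is a pure-Python sketch with made-up prices; production backtesters use Pandas/NumPy, real market data, and account for costs, slippage, and risk.

```python
def sma(prices, window, i):
    """Simple moving average of the `window` prices ending at index i."""
    return sum(prices[i - window + 1 : i + 1]) / window

def backtest(prices, fast=3, slow=5):
    """Go long when the fast SMA is above the slow SMA; return final value."""
    cash, units = 100.0, 0.0
    for i in range(slow - 1, len(prices)):
        go_long = sma(prices, fast, i) > sma(prices, slow, i)
        if go_long and units == 0:
            units, cash = cash / prices[i], 0.0   # buy at today's close
        elif not go_long and units > 0:
            cash, units = units * prices[i], 0.0  # sell at today's close
    return cash + units * prices[-1]

if __name__ == "__main__":
    prices = [10, 11, 12, 13, 14, 13, 12, 11, 12, 14]
    print(round(backtest(prices), 2))
```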


📍 Location: Goregaon East, Mumbai

💼 Competitive package + exposure to cutting-edge quant strategies


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Pune, Mumbai
7 - 12 yrs
Best in industry
skill iconPython
pandas
PySpark
SQL
Data engineering

Wissen Technology is hiring for Data Engineer

About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time.

Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives.

We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.

Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables, Parquet, and be proficient in Pandas and PySpark.

Experience: 7+ years

Location: Pune, Mumbai, Bangalore

Mode of Work: Hybrid

Key Responsibilities:

  • Develop and maintain data pipelines using Python (Pandas, PySpark).
  • Optimize data workflows and ensure efficient data processing.
  • Work with Delta Tables and Parquet for data storage and management.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure data quality and integrity throughout the data lifecycle.
  • Implement best practices for data engineering and workflow optimization.

Qualifications and Required Skills:

  • Proficiency in Python, specifically with Pandas and PySpark.
  • Strong experience in data engineering and workflow optimization.
  • Knowledge of Delta Tables and Parquet.
  • Excellent problem-solving skills and attention to detail.
  • Ability to work collaboratively in a team environment.
  • Strong communication skills.

Good to Have Skills:

  • Experience with Databricks.
  • Knowledge of Apache Spark, DBT, and Airflow.
  • Advanced Pandas optimizations.
  • Familiarity with PyTest/DBT testing frameworks.

Wissen Sites:

Wissen | Driving Digital Transformation

A technology consultancy that drives digital innovation by connecting strategy and execution, helping global clients to strengthen their core technology.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Ritika Mehra
Posted by Ritika Mehra
Mumbai
8 - 15 yrs
Best in industry
skill iconPython
Data engineering
SQL
skill iconAmazon Web Services (AWS)
skill iconData Analytics

Job Title: Data Engineering Support Engineer / Manager

Experience range: 8+ Years

Location: Mumbai

Experience:

Knowledge, Skills and Abilities 

- Python, SQL 

- Familiarity with data engineering 

- Experience with AWS data and analytics services or similar cloud vendor services 

- Strong problem solving and communication skills 

- Ability to organise and prioritise work effectively 

Key Responsibilities 

- Incident and user management for data and analytics platform  

- Development and maintenance of Data Quality framework (including anomaly detection) 

- Implementation of Python & SQL hotfixes and working with data engineers on more complex issues 

- Diagnostic tools implementation and automation of operational processes 


Key Relationships  

- Work closely with data scientists, data engineers, and platform engineers in a highly commercial environment 

- Support research analysts and traders with issue resolution 



Read more
Teknobuilt Solutions Pvt Ltd
Mumbai, Navi Mumbai
3 - 6 yrs
₹8L - ₹10L / yr
skill iconPython
skill iconJava
Agile/Scrum
QTP
ALM

Teknobuilt is an innovative construction technology company accelerating a Digital and AI platform that supports all aspects of program management and execution by automating workflows, collaborative manual tasks, and siloed systems. Our platform has received innovation awards and grants in Canada, UK and S. Korea and we are at the frontiers of solving key challenges in the built environment and digital health, safety and quality.

Teknobuilt's vision is helping the world build better- safely, smartly and sustainably. We are on a mission to modernize construction by bringing Digitally Integrated Project Execution System - PACE and expert services for midsize to large construction and infrastructure projects. PACE is an end-to-end digital solution that helps in Real Time Project Execution, Health and Safety, Quality and Field management for greater visibility and cost savings. PACE enables digital workflows, remote working, AI based analytics to bring speed, flow and surety in project delivery. Our platform has received recognition globally for innovation and we are experiencing a period of significant growth for our solutions.

 

Job Responsibilities

As a Quality Analyst Engineer, you will be expected to:

  • Thoroughly analyze project requirements, design specifications, and user stories to understand the scope and objectives.
  • Arrange, set up, and configure necessary test environments for effective test case execution.
  • Participate in and conduct review meetings to discuss test plans, test cases, and defect statuses.
  • Execute manual test cases with precision, analyze results, and identify deviations from expected behavior.
  • Accurately track, log, prioritize, and manage defects through their lifecycle, ensuring clear communication with developers until resolution.
  • Maintain continuous and clear communication with the Test Manager and development team regarding testing progress, roadblocks, and critical findings.
  • Develop, maintain, and manage comprehensive test documentation, including:
    • Detailed Test Plans
    • Well-structured Test Cases for various testing processes
    • Concise Summary Reports on test execution and defect status
    • Thorough Test Data preparation for test cases
    • "Lessons Learned" documents based on testing inputs from previous projects
    • "Suggestion Documents" aimed at improving overall software quality
    • Clearly defined Test Scenarios
  • Clearly report identified bugs to developers with precise steps to reproduce, expected results, and actual results, facilitating efficient defect resolution.

Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Pune, Hyderabad
4 - 8 yrs
₹18L - ₹30L / yr
skill iconJava
skill iconSpring Boot
skill iconAmazon Web Services (AWS)
RESTful APIs
CI/CD
+3 more

Job Overview:

We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.


Responsibilities:


  • Design, develop, and maintain backend services and microservices.
  • Build and integrate RESTful APIs across distributed systems.
  • Ensure performance, scalability, and reliability of backend systems.
  • Collaborate with cross-functional teams and participate in agile development.
  • Deploy and maintain applications on AWS cloud infrastructure.
  • Contribute to automation initiatives and AI/ML feature integration.
  • Write clean, testable, and maintainable code following best practices.
  • Participate in code reviews and technical discussions.


Required Skills:

  • 4+ years of backend development experience.
  • Strong proficiency in Java and Spring/Spring Boot frameworks.
  • Solid understanding of microservices architecture.
  • Experience with REST APIs, CI/CD, and debugging complex systems.
  • Proficient in AWS services such as EC2, Lambda, S3.
  • Strong analytical and problem-solving skills.
  • Excellent communication in English (written and verbal).


Good to Have:

  • Experience with automation tools like Workato or similar.
  • Hands-on experience with Python development.
  • Familiarity with AI/ML features or API integrations.
  • Comfortable working with US-based teams (flexible hours).


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Archana M
Posted by Archana M
Mumbai
5 - 7 yrs
Best in industry
ETL
skill iconPython
Apache Spark

📢 DATA SOURCING & ANALYSIS EXPERT (L3 Support) – Mumbai 📢

Are you ready to supercharge your Data Engineering career in the financial domain?

We’re seeking a seasoned professional (5–7 years experience) to join our Mumbai team and lead in data sourcing, modelling, and analysis. If you’re passionate about solving complex challenges in Relational & Big Data ecosystems, this role is for you.

What You’ll Be Doing

  • Translate business needs into robust data models, program specs, and solutions
  • Perform advanced SQL optimization, query tuning, and L3-level issue resolution
  • Work across the entire data stack: ETL, Python / Spark, Autosys, and related systems
  • Debug, monitor, and improve data pipelines in production
  • Collaborate with business, analytics, and engineering teams to deliver dependable data services

What You Should Bring

  • 5+ years in financial / fintech / capital markets environment
  • Proven expertise in relational databases and big data technologies
  • Strong command over SQL tuning, query optimization, indexing, partitioning
  • Hands-on experience with ETL pipelines, Spark / PySpark, Python scripting, job scheduling (e.g. Autosys)
  • Ability to troubleshoot issues at the L3 level, root cause analysis, performance tuning
  • Good communication skills — you’ll coordinate with business users, analytics, and tech teams
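The SQL tuning and indexing work described above boils down to comparing query plans before and after adding an index. Here is a self-contained sketch using the stdlib `sqlite3` module (the table and column names are illustrative; production work would target the actual warehouse engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [(i, "SYM" + str(i % 50), i) for i in range(1000)],
)

def plan(sql):
    """Return the SQLite query plan as a single text line."""
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM trades WHERE symbol = 'SYM7'")
conn.execute("CREATE INDEX idx_symbol ON trades(symbol)")
after = plan("SELECT * FROM trades WHERE symbol = 'SYM7'")
print(before)  # full table scan
print(after)   # index search on idx_symbol
```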


Read more
Arcitech
Arcitech HR Department
Posted by Arcitech HR Department
Navi Mumbai
2 - 5 yrs
₹4L - ₹12L / yr
AIML
Langchain
Retrieval Augmented Generation (RAG)
skill iconMachine Learning (ML)
Websockets
+5 more

Designation: Python Developer

Experienced in AI/ML

Location: Turbhe, Navi Mumbai

CTC: 6-12 LPA

Years of Experience: 2-5 years

At Arcitech.ai, we’re redefining the future with AI-powered software solutions across education, recruitment, marketplaces, and beyond. We’re looking for a Python Developer passionate about AI/ML, who’s ready to work on scalable, cloud-native platforms and help build the next generation of intelligent, LLM-driven products.

💼 Your Responsibilities

AI/ML Engineering

  • Develop, train, and optimize ML models using PyTorch/TensorFlow/Keras.
  • Build end-to-end LLM and RAG (Retrieval-Augmented Generation) pipelines using LangChain.
  • Collaborate with data scientists to convert prototypes into production-grade AI applications.
  • Integrate NLP, Computer Vision, and Recommendation Systems into scalable products.
  • Work with transformer-based architectures (BERT, GPT, LLaMA, etc.) for real-world AI use cases.
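The retrieval step of the RAG pipelines mentioned above can be sketched in plain Python: bag-of-words cosine similarity stands in for the embedding model, and the documents are invented. LangChain wires the same idea to real vector stores and LLMs.

```python
import math
from collections import Counter

def vectorize(text):
    """Toy embedding: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query; return top-k."""
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, docs, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    docs = ["Celery handles background tasks", "LangChain builds LLM pipelines"]
    print(build_prompt("how do I build LLM pipelines?", docs))
```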

Backend & Systems Development

  • Design, develop, and maintain robust Python microservices with REST/GraphQL APIs.
  • Implement real-time communication with Django Channels/WebSockets.
  • Containerize AI services with Docker and deploy on Kubernetes (EKS/GKE/AKS).
  • Configure and manage AWS (EC2, S3, RDS, SageMaker, CloudWatch) for AI/ML workloads.

Reliability & Automation

  • Develop background task queues with Celery, ensuring smart retries and monitoring.
  • Implement CI/CD pipelines for automated model training, testing, and deployment.
  • Write automated unit & integration tests (pytest/unittest) with ≥80% coverage.

Collaboration

  • Contribute to MLOps best practices and mentor peers in LangChain/AI integration.
  • Participate in tech talks, code reviews, and AI learning sessions within the team.

🎓 Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, AI/ML, or related field.
  • 2–5 years of experience in Python development with strong AI/ML exposure.
  • Hands-on experience with LangChain for building LLM-powered workflows and RAG systems.
  • Deep learning experience with PyTorch or TensorFlow.
  • Experience deploying ML models and LLM apps into production systems.
  • Familiarity with REST/GraphQL APIs and cloud platforms (AWS/Azure/GCP).
  • Skilled in Git workflows, automated testing, and CI/CD practices.

🌟 Nice to Have

  • Experience with vector databases (Pinecone, Weaviate, FAISS, Milvus) for retrieval pipelines.
  • Knowledge of LLM fine-tuning, prompt engineering, and evaluation frameworks.
  • Familiarity with Airflow/Prefect/Dagster for data and model pipelines.
  • Background in statistics, optimization, or applied mathematics.
  • Contributions to AI/ML or LangChain open-source projects.
  • Experience with model monitoring and drift detection in production.

🎁 Why Join Us

  • Competitive compensation and benefits 💰
  • Work on cutting-edge LLM and AI/ML applications 🤖
  • A collaborative, innovation-driven work culture 📚
  • Opportunities to grow into AI/ML leadership roles 🚀


Read more
Jio Haptik
Arjun Pillai
Posted by Arjun Pillai
Mumbai
4 - 11 yrs
₹25L - ₹35L / yr
skill iconPython
API
Cloud Computing
Software Testing (QA)
Software Development
+1 more

What we want to accomplish and why we need you? 

Jio Haptik is an AI leader having pioneered AI-powered innovation since 2013. Reliance Jio Digital Services acquired Haptik in April 2019. Haptik currently leads India’s AI market having become the first to process 15 billion+ two-way conversations across 10+ channels and in 135 languages. Haptik is also a Category Leader across platforms including Gartner, G2, Opus Research & more. Recently Haptik won the award for “Tech Startup of the Year” in the AI category at Entrepreneur India Awards 2023, and gold medal for “Best Chat & Conversational Bot” at Martequity Awards 2023. Haptik has a headcount of 200+ employees with offices in Mumbai, Delhi, and Bangalore. 


What will you do every day?


Our Implementation & Professional Services team plays a pivotal role in ensuring clients derive maximum value from our solutions. We are looking for an Engineering Manager to lead a delivery-focused team that bridges technology, client success, and execution excellence. You will ensure seamless solution implementation, timely project delivery, and a high-quality client experience.


  • Manage and lead a team of solution engineers, developers, and QA professionals working on client implementations and custom delivery projects.
  • Collaborate with project managers and client-facing teams to ensure smooth execution of implementation roadmaps.
  • Translate client requirements into scalable, maintainable technical solutions while ensuring alignment with Haptik’s platform architecture.
  • Own sprint planning, resource allocation, and delivery commitments across multiple concurrent client projects.
  • Establish and enforce delivery best practices across coding, testing, deployment, and documentation.
  • Act as the escalation point for technical challenges during delivery, unblocking teams and ensuring minimal disruption to project timelines.
  • Drive continuous improvements in tools, processes, and automation to accelerate implementations.
  • Coach and mentor engineers to develop their skills while keeping them aligned to client-centric delivery goals.
  • Partner closely with cross-functional teams (Product, Solutions, Customer Success, and Sales Engineering) to ensure successful hand-offs and long-term customer satisfaction.
  • Support recruitment and team scaling as we grow our services and delivery footprint.


Ok, you're sold, but what are we looking for in the perfect candidate? 


  • Strong technical foundation with the ability to guide teams on architecture, integrations, and scalable solution design.
  • Demonstrated experience managing agile delivery teams that have implemented enterprise-grade software solutions.
  • Experience in Professional Services, Client Implementations, or Delivery Engineering is highly preferred.
  • Proven ability to balance customer requirements, delivery commitments, and engineering best practices.
  • Strong exposure to technologies relevant to distributed systems: programming languages (Python preferred), APIs, cloud platforms, databases, CI/CD, containerization, queues, caches, Elasticsearch/SOLR, etc.
  • Excellent stakeholder management and communication skills — comfortable working with both internal teams and enterprise customers.
  • Ability to mentor, motivate, and inspire teams to deliver on ambitious goals while ensuring a culture of ownership and accountability.
  • 5–8 years of professional experience, including 2–4 years in leading delivery or implementation-focused engineering teams.

Requirements*:


  • B.E. / B.Tech. / MCA in Computer Science, Engineering or a related field is required.
  • Overall 5-8 years of professional experience. 2-4 years of hands-on experience in managing and leading agile teams delivering tech products.
  • Deep understanding of best practices in development and testing processes.
  • Good exposure to various technical and architectural concepts of building distributed systems - not limited to but including at least one programming language/framework, version control, CI/CD, queues, caches, SOLR/Elasticsearch, databases, containerization, cloud platforms.
  • Excellent written and verbal communication skills.
  • Exceptional organizational skills and time management abilities.
  • Prior experience of working with Python Programming language.
  • Background of being part of a high-paced development team having delivered client-facing products with hands-on involvement.


* Requirements is such a strong word. We don’t necessarily expect to find a candidate who has done everything listed, but you should be able to make a credible case that you’ve done most of it and are ready for the challenge of adding some new things to your resume. 


Tell me more about Haptik


  • On a roll: Announced major strategic partnership in April 2019 with Jio in a $100 million deal.
  • Great team: You will be working with great leaders who have been listed in Business World 40 Under 40, Forbes 30 Under 30 and MIT 35 Under 35 Innovators.
  • Great culture: The freedom to think and innovate is something that defines the culture of Haptik. Every person is approachable. While we are working hard, it is also important to take breaks to not get too worked up.
  • Huge market: Disrupting a massive, growing AI market. The global market is projected to attain a valuation of $9 billion by the end of 2024.
  • Emerging technology: We are moving to a Gen AI first world, and Haptik is one of the largest Generative AI first companies globally, based out of India.
  • Great customers: Some of the most notable brands in the world - Jio, Paytm, Adani, Paisabazaar, Puma & Whirlpool
  • Impact: A fun and exciting start-up culture that empowers its people to make a huge impact.


Working hard for things that we don't care about is stress, but working hard for something we love is called passion! At Haptik we passionately solve problems in order to be able to move faster and each Haptikan imbibes our key values of honesty, ownership, perseverance, communication, impact, curiosity, courage, agility and selflessness.

Dolat Capital Market Private Ltd.
Mahima Desai
Posted by Mahima Desai
Mumbai
1 - 2 yrs
₹3L - ₹4L / yr
Perl
Bash
SQL
Linux administration
Red Hat Linux
+2 more

About The Company:

Dolat is a dynamic team of traders, puzzle solvers, and coding enthusiasts focused on tackling complex challenges in the financial world. We specialize in trading in volatile markets and developing cutting-edge technologies and strategies. We're seeking a skilled Linux Support Engineer to manage over 400 servers and support 100+ users. Our engineers ensure a high-performance, low-latency environment while maintaining simplicity and control. If you're passionate about technology, trading, and problem-solving, this is the place to engineer your skills into a rewarding career.


Qualifications:

  • B.E/ B.Tech
  • Experience: 1-3 years.
  • Job location – Andheri West, Mumbai.


Responsibilities:

  • Troubleshoot network issues, kernel panics, system hangs, and performance bottlenecks.
  • Fine-tune processes for minimal jitter in a low-latency environment.
  • Support low-latency servers, lines, and networks, participating in on-call rotations.
  • Install, configure, and deploy fully-distributed Red Hat Linux systems.
  • Deploy, configure, and monitor complex trading applications.
  • Provide hands-on support to trading, risk, and compliance teams (Linux & Windows platforms).
  • Automate processes and analyze performance metrics to improve system efficiency.
  • Collaborate with development teams to maintain a stable, high-performance trading environment.
  • Drive continuous improvement, system reliability, and simplicity.
  • Resolve issues in a fast-paced, results-driven IT team.
  • Provide level-one and level-two support for tools, systems testing, and production releases.
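As a rough illustration of the low-latency work above, the sketch below measures how far timer wakeups overshoot their requested interval, a crude proxy for scheduler jitter. Real tuning would rely on tools like cyclictest and kernel/IRQ configuration rather than this Python stand-in.

```python
import time
import statistics

def measure_jitter(interval_s: float = 0.001, samples: int = 200) -> dict:
    """Sleep for a fixed interval repeatedly and record how far each wakeup
    overshoots the request -- a rough proxy for scheduler jitter."""
    overshoots = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(interval_s)
        elapsed = time.perf_counter() - start
        overshoots.append(max(0.0, elapsed - interval_s))
    return {
        "mean_us": statistics.mean(overshoots) * 1e6,
        "max_us": max(overshoots) * 1e6,
    }

stats = measure_jitter()
print(f"mean overshoot: {stats['mean_us']:.1f} us, worst: {stats['max_us']:.1f} us")
```

On a tuned low-latency host (isolated cores, pinned processes) the worst-case number is what matters, not the mean.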


Skills Required:

  • Expertise in Linux kernel tuning and configuration management (CFEngine).
  • Experience with hardware testing/integration and IT security.
  • Proficient in maintaining Cisco, Windows, and PC hardware.
  • Good knowledge of Perl, Python, PowerShell, and Bash.
  • Hands-on knowledge of SSH, iptables, NFS, DNS, DHCP, and LDAP.
  • Experience with open-source monitoring tools (Nagios, SmokePing, MRTG) for enterprise-level systems.
  • Knowledge of maintaining Cisco ports, VLANs, and 802.1X.
  • Solid understanding of OS and network architectures.
  • Red Hat certification and SQL/database knowledge.
  • Ability to manage multiple tasks in a fast-paced environment.
  • Excellent communication skills and fluency in English.
  • Proven technical problem-solving capabilities.
  • Strong documentation and knowledge management skills.
  • Software development skills are a bonus.
  • SQL and database administration skills preferred.

Industry

  • Financial Services



Pluginlive

Harsha Saggi
Posted by Harsha Saggi
Mumbai, Chennai
1 - 3 yrs
₹5L - ₹8L / yr
Python
SQL
Data Structures
ETL
Dashboard
+3 more

About Us:

PluginLive is an all-in-one tech platform that bridges the gap between all its stakeholders: Corporates, Institutes, Students, and Assessment & Training Partners. This ecosystem helps Corporates build and position their brand with colleges and the student community to scale their human capital, while increasing student placements for Institutes and giving Students a real-time perspective of the corporate world to help them upskill into more desirable candidates.


Role Overview:

Entry-level Data Engineer position focused on building and maintaining data pipelines while developing visualization skills. You'll work alongside senior engineers to support our data infrastructure and create meaningful insights through data visualization.


Responsibilities:

  • Assist in building and maintaining ETL/ELT pipelines for data processing
  • Write SQL queries to extract and analyze data from various sources
  • Support data quality checks and basic data validation processes
  • Create simple dashboards and reports using visualization tools
  • Learn and work with Oracle Cloud services under guidance
  • Use Python for basic data manipulation and cleaning tasks
  • Document data processes and maintain data dictionaries
  • Collaborate with team members to understand data requirements
  • Participate in troubleshooting data issues with senior support
  • Contribute to data migration tasks as needed
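The extract-transform-load flow described above can be sketched end-to-end with nothing but the standard library; the in-memory CSV source, table name, and validation rule below are illustrative stand-ins for a real pipeline.

```python
import csv
import io
import sqlite3

# Extract: read raw rows (an in-memory CSV standing in for a source file).
raw = io.StringIO("id,amount\n1,100\n2,\n3,250\n")
rows = list(csv.DictReader(raw))

# Transform: a basic data-quality check -- drop rows with a missing amount.
clean = [(int(r["id"]), int(r["amount"])) for r in rows if r["amount"]]

# Load: write the validated rows into a warehouse table (SQLite here).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount INTEGER)")
con.executemany("INSERT INTO payments VALUES (?, ?)", clean)

total = con.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 350
```

The same extract/transform/load shape carries over directly when the pieces become object storage, Pandas, and a cloud warehouse.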


Qualifications:

Required:

  • Bachelor's degree in Computer Science, Information Systems, or related field
  • Around 2 years of experience in data engineering or a related field
  • Strong SQL knowledge and database concepts
  • Comfortable with Python programming
  • Understanding of data structures and ETL concepts
  • Problem-solving mindset and attention to detail
  • Good communication skills
  • Willingness to learn cloud technologies


Preferred:

  • Exposure to Oracle Cloud or any cloud platform (AWS/GCP)
  • Basic knowledge of data visualization tools (Tableau, Power BI, or Python libraries like Matplotlib)
  • Experience with Pandas for data manipulation
  • Understanding of data warehousing concepts
  • Familiarity with version control (Git)
  • Academic projects or internships involving data processing


Nice-to-Have:

  • Knowledge of dbt, BigQuery, or Snowflake
  • Exposure to big data concepts
  • Experience with Jupyter notebooks
  • Comfort with AI-assisted coding tools (Copilot, GPTs)
  • Personal projects showcasing data work


What We Offer:

  • Mentorship from senior data engineers
  • Hands-on learning with modern data stack
  • Access to paid AI tools and learning resources
  • Clear growth path to mid-level engineer
  • Direct impact on product and data strategy
  • No unnecessary meetings — focused execution
  • Strong engineering culture with continuous learning opportunities
Highfly Sourcing

Highfly Hr
Posted by Highfly Hr
Dubai, Augsburg, Germany, Zaragoza (Spain), Qatar, Salalah (Oman), Kuwait, Lebanon, Marseille (France), Genova (Italy), Winnipeg (Canada), Denmark, Poznan (Poland), Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Hyderabad, Pune
3 - 10 yrs
₹25L - ₹30L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript
+14 more

Job Description

We are looking for a talented Java Developer to work abroad. You will be responsible for developing high-quality software solutions, working on server-side components and integrations, and ensuring optimal performance and scalability.


Preferred Qualifications

  • Experience with microservices architecture.
  • Knowledge of cloud platforms (AWS, Azure).
  • Familiarity with Agile/Scrum methodologies.
  • Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.


Requirement Details

Bachelor’s degree in Computer Science, Information Technology, or a related field (or equivalent experience).

Proven experience as a Java Developer or similar role.

Strong knowledge of Java programming language and its frameworks (Spring, Hibernate).

Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.

Familiarity with RESTful APIs and web services.

Understanding of version control systems (e.g., Git).

Solid understanding of object-oriented programming (OOP) principles.

Strong problem-solving skills and attention to detail.

Mumbai, Pune, Hyderabad, Bengaluru (Bangalore), Panchkula, Mohali
5 - 8 yrs
₹10L - ₹20L / yr
Python
FastAPI
Flask
Django
Git

Job Title: Python Developer (FastAPI)

Experience Required: 4+ years

Location: Pune, Bangalore, Hyderabad, Mumbai, Panchkula, Mohali 

Shift: Night shift, 6:30 PM to 3:30 AM IST

About the Role

We are seeking an experienced Python Developer with strong expertise in FastAPI to join our engineering team. The ideal candidate should have a solid background in backend development, RESTful API design, and scalable application development.


Required Skills & Qualifications

· 4+ years of professional experience in backend development with Python.

· Strong hands-on experience with FastAPI (or Flask/Django with migration experience).

· Familiarity with asynchronous programming in Python.

· Working knowledge of version control systems (Git).

· Good problem-solving and debugging skills.

· Strong communication and collaboration abilities.
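Since the role leans on asynchronous programming, here is a minimal stdlib sketch of the pattern FastAPI endpoints build on: awaiting independent I/O concurrently instead of serially. The `fetch_user` call is a hypothetical stand-in for a database or HTTP request.

```python
import asyncio

async def fetch_user(uid: int) -> dict:
    """Stand-in for an async DB or HTTP call an endpoint might await."""
    await asyncio.sleep(0.1)  # simulated I/O latency
    return {"id": uid, "name": f"user-{uid}"}

async def handler() -> list[dict]:
    """Like an async endpoint: overlap independent I/O rather than serializing it."""
    return await asyncio.gather(*(fetch_user(i) for i in range(5)))

users = asyncio.run(handler())
print(len(users))  # 5 -- the five 0.1s waits overlap instead of summing to 0.5s
```

Inside FastAPI, the framework runs the event loop for you; an `async def` route body follows exactly this shape.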

Faclon LABS
HR Faclon
Posted by HR Faclon
Mumbai
3 - 4 yrs
₹19L - ₹25L / yr
Python
Statistical Analysis
Machine Learning (ML)
Deep Learning
Generative AI
+3 more

Lead Data Scientist


Location: Mumbai


Application Link: https://flpl.keka.com/careers/jobdetails/40052


What you’ll do

  • Manage end-to-end data science projects from scoping to deployment, ensuring accuracy, reliability and measurable business impact
  • Translate business needs into actionable DS tasks, lead data wrangling, feature engineering, and model optimization
  • Communicate insights to non-technical stakeholders to guide decisions, while mentoring a 14-member DS team
  • Implement scalable MLOps, automated pipelines, and reusable frameworks to accelerate delivery and experimentation
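A tiny, hypothetical sketch of the "reusable frameworks" idea above: a pipeline object that chains feature-engineering steps (imputation, then scaling) so they can be reused across projects. Real systems would build this on scikit-learn pipelines or an MLOps platform; the step names and data are illustrative.

```python
import statistics

class Pipeline:
    """Minimal reusable pipeline: chain named steps, each a callable on the data."""
    def __init__(self, steps):
        self.steps = steps

    def run(self, data):
        for name, fn in self.steps:
            data = fn(data)
        return data

def impute_missing(xs):
    """Replace None with the mean of the observed values."""
    mean = statistics.mean(x for x in xs if x is not None)
    return [mean if x is None else x for x in xs]

def standardize(xs):
    """Zero-mean, unit-variance scaling."""
    mu, sigma = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sigma for x in xs]

pipe = Pipeline([("impute", impute_missing), ("scale", standardize)])
features = pipe.run([10.0, None, 14.0])
print(features)
```

Packaging steps this way is what lets the same wrangling logic move unchanged from a notebook experiment into an automated training pipeline.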


What we’re looking for

  • 4-5 years of hands-on experience in Data Science/ML with strong foundations in statistics, Linear Algebra, and optimization
  • Proficient in Python (NumPy, pandas, scikit-learn, XGBoost) and experienced with at least one cloud platform (AWS, GCP, or Azure)
  • Skilled in building data pipelines (Airflow, Spark) and deploying models using Docker, FastAPI, etc
  • Adept at communicating insights effectively to both technical and non-technical audiences
  • Bachelor’s degree in any field


You might have an edge over others if

  • Experience with LLMs or GenAI apps
  • Contributions to open-source or published research
  • Exposure to real-time analytics and industrial datasets


You should not apply with us if

  • You don’t want to work in agile environments
  • The unpredictability and super iterative nature of startups scare you
  • You hate working with people who are smarter than you
  • You don’t thrive in self-driven, “owner mindset” environments (nothing wrong, just not our type!)


About us

We’re Faclon Labs – a high-growth, deep-tech startup on a mission to make infrastructure and utilities smarter using IoT and SaaS. Sounds heavy? That’s because we do heavy lifting — in tech, in thinking, and in creating real-world impact.

We’re not your average startup. We don’t do corporate fluff. We do ownership, fast iterations, and big ideas. If you're looking for ping-pong tables, we're still saving up. But if you want to shape the soul of the company while it's being built- this is the place!


Read more
Deqode

purvisha Bhavsar
Posted by purvisha Bhavsar
Mumbai (Andheri East)
2 - 5 yrs
₹4L - ₹15L / yr
Python
Django
PostgreSQL
MongoDB

🚀 Hiring: Python Developer

⭐ Experience: 2+ Years

📍 Location: Mumbai

⭐ Work Mode:- 5 Days Work From Office

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


Looking for a skilled Python Developer with experience in Django / FastAPI and MongoDB / PostgreSQL.


⭐ Must-Have Skills:-

✅ 2+ years of professional experience as a Python Developer

✅Proficient in Django or FastAPI

✅Hands-on with MongoDB & PostgreSQL

✅Strong understanding of REST APIs & Git

IT Industry - Night Shifts


Agency job
Bengaluru (Bangalore), Hyderabad, Mumbai, Navi Mumbai, Pune, Mohali, Delhi
5 - 10 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
IT infrastructure
Machine Learning (ML)
DevOps
Automation
+1 more

🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀


We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.

If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.


What you’ll do:

🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)

🔹 Build highly available, multi-region solutions for real-time & batch inference

🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines

🔹 Ensure security, compliance, and cost efficiency

🔹 Collaborate across DevOps, ML, and backend teams


What we’re looking for:

✔️ 6+ years AWS cloud infrastructure experience

✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)

✔️ Proficiency in Python/Go/Bash scripting

✔️ Knowledge of networking, IAM, and security best practices

✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)


✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)


📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi

5 days working, Work from Office

Night shifts: 9pm to 6am IST

👉 If this sounds like you (or someone you know), let’s connect!



Nagpur, Maharashtra, Mumbai, Pune
2 - 4 yrs
₹6L - ₹12L / yr
Python
Django
FastAPI
Amazon Web Services (AWS)
JavaScript
+5 more

Job Title : Software Development Engineer (Python, Django & FastAPI + React.js)

Experience : 2+ Years

Location : Nagpur / Remote (India)

Job Type : Full Time

Collaboration Hours : 11:00 AM – 7:00 PM IST


About the Role :

We are seeking a Software Development Engineer to join our growing team. The ideal candidate will have strong expertise in backend development with Python, Django, and FastAPI, as well as working knowledge of AWS.

While backend development is the primary focus, you should also be comfortable contributing to frontend development using JavaScript, TypeScript, and React.


Mandatory Skills : Python, Django, FastAPI, AWS, JavaScript/TypeScript, React, REST APIs, SQL/NoSQL.


Key Responsibilities :

  • Design, develop, and maintain backend services using Python (Django / FastAPI).
  • Deploy, scale, and manage applications on AWS cloud services.
  • Collaborate with frontend developers and contribute to React (JS/TS) development when required.
  • Write clean, efficient, and maintainable code following best practices.
  • Ensure system performance, scalability, and security.
  • Participate in the full software development lifecycle : planning, design, development, testing, and deployment.
  • Work collaboratively with cross-functional teams to deliver high-quality solutions.

Requirements :

  • Bachelor’s degree in Computer Science, Computer Engineering, or related field.
  • 2+ years of professional software development experience.
  • Strong proficiency in Python, with hands-on experience in Django and FastAPI.
  • Practical experience with AWS cloud services.
  • Basic proficiency in JavaScript, TypeScript, and React for frontend development.
  • Solid understanding of REST APIs, databases (SQL/NoSQL), and software design principles.
  • Familiarity with Git and collaborative workflows.
  • Strong problem-solving ability and adaptability in a fast-paced environment.

Good to Have :

  • Experience with Docker for containerization.
  • Knowledge of CI/CD pipelines and DevOps practices.
Hunarstreet Technologies


Agency job
via Hunarstreet Technologies pvt ltd by Priyanka Londhe
Mumbai, Pune, Bengaluru (Bangalore), Hyderabad, Panchkula, Mohali
5 - 8 yrs
₹15L - ₹22L / yr
Python
FastAPI
Django
Flask
backend development
+2 more

Required Skills & Qualifications

  • 4+ years of professional experience in backend development with Python.
  • Strong hands-on experience with FastAPI (or Flask/Django with migration experience).
  • Familiarity with asynchronous programming in Python.
  • Working knowledge of version control systems (Git).
  • Good problem-solving and debugging skills.
  • Strong communication and collaboration abilities.
  • Solid background in backend development, RESTful API design, and scalable application development.


Shift: Night shift, 6:30 PM to 3:30 AM IST

Wissen Technology

Annie Varghese
Posted by Annie Varghese
Pune, Mumbai, Bengaluru (Bangalore)
3 - 8 yrs
Best in industry
Snowflake
Apache Airflow
ETL
Python
PySpark
+1 more

Job Summary:

We are looking for a highly skilled and experienced Data Engineer with deep expertise in Airflow, dbt, Python, and Snowflake. The ideal candidate will be responsible for designing, building, and managing scalable data pipelines and transformation frameworks to enable robust data workflows across the organization.

Key Responsibilities:

  • Design and implement scalable ETL/ELT pipelines using Apache Airflow for orchestration.
  • Develop modular and maintainable data transformation models using dbt.
  • Write high-performance data processing scripts and automation using Python.
  • Build and maintain data models and pipelines on Snowflake.
  • Collaborate with data analysts, data scientists, and business teams to deliver clean, reliable, and timely data.
  • Monitor and optimize pipeline performance and troubleshoot issues proactively.
  • Follow best practices in version control, testing, and CI/CD for data projects.
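Orchestration of the kind Airflow provides boils down to executing a DAG of tasks in dependency order. The stdlib sketch below (with made-up task names) shows the idea by computing which tasks are runnable in parallel at each step; it illustrates the concept, not Airflow's API.

```python
from graphlib import TopologicalSorter

# Task dependencies in the style of an orchestration DAG: each task runs
# only after everything in its dependency set has finished.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"transform"},
    "report": {"quality_check", "load"},
}

ts = TopologicalSorter(dag)
ts.prepare()
while ts.is_active():
    batch = tuple(ts.get_ready())  # all dependencies met -- runnable in parallel
    print(sorted(batch))
    ts.done(*batch)
```

An Airflow DAG file declares the same structure with operators and `>>` dependencies; the scheduler then performs essentially this ready-set computation across workers.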

Must-Have Skills:

  • Strong hands-on experience with Apache Airflow for scheduling and orchestrating data workflows.
  • Proficiency in dbt (data build tool) for building scalable and testable data models.
  • Expert-level skills in Python for data processing and automation.
  • Solid experience with Snowflake, including SQL performance tuning, data modeling, and warehouse management.
  • Strong understanding of data engineering best practices including modularity, testing, and deployment.

Good to Have:

  • Experience working with cloud platforms (AWS/GCP/Azure).
  • Familiarity with CI/CD pipelines for data (e.g., GitHub Actions, GitLab CI).
  • Exposure to modern data stack tools (e.g., Fivetran, Stitch, Looker).
  • Knowledge of data security and governance best practices.


Note : One face-to-face (F2F) round is mandatory, and as per the process, you will need to visit the office for this.
