50+ Python Jobs in Mumbai | Python Job openings in Mumbai
Apply to 50+ Python Jobs in Mumbai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
Job Description – Backend Python Developer (Mid-Level)
📍 Location: Mumbai/Gurgaon | Full-time
Backend Python Developer
Role Overview
We are seeking a skilled Backend Python Developer to design, develop, and maintain backend services, APIs, and integrations that power our AI-driven automation solutions.
You will collaborate closely with senior engineers, AI/ML teams, and frontend developers to build scalable, high-performance systems. This role is ideal for professionals with solid backend experience who are eager to deepen their expertise in Python, cloud technologies, and AI-based applications.
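For illustration only, a minimal sketch of the kind of Python backend endpoint this role describes, using FastAPI (one of the frameworks listed below); the route names, fields, and in-memory store are assumptions for the example, not part of this posting.
# Hypothetical FastAPI service sketch (illustrative only; not an actual codebase).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
app = FastAPI(title="Example backend service")
class Item(BaseModel):
    name: str
    price: float
# In-memory store standing in for a real SQL/NoSQL database.
ITEMS: dict[int, Item] = {}
@app.get("/health")
def health() -> dict:
    # Simple liveness probe used by load balancers / Kubernetes.
    return {"status": "ok"}
@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="Item already exists")
    ITEMS[item_id] = item
    return item
A sketch like this would be run locally with, for example, uvicorn main:app --reload (assuming the file is named main.py).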
Key Responsibilities
- Develop and maintain backend APIs, services, and system integrations using Python
- Collaborate on system design and architecture discussions with senior engineers
- Write clean, scalable, and well-documented code following best practices
- Ensure performance, scalability, and reliability in cloud environments
- Design and manage SQL/NoSQL databases for structured and unstructured data
- Support integration of AI/ML models into production workflows
- Participate in code reviews, unit testing, and debugging
- Contribute to CI/CD pipelines, containerization, and DevOps processes
Required Skills & Qualifications
- 3–5 years of experience in backend development
- Strong proficiency in Python
- Hands-on experience with frameworks such as FastAPI, Flask, or Django
- Experience building and consuming REST APIs (GraphQL is a plus)
- Strong database knowledge: PostgreSQL, MySQL, MongoDB, or Redis
- Familiarity with cloud platforms (AWS, GCP, or Azure)
- Hands-on experience with Docker and Kubernetes
- Strong understanding of OOP, data structures, algorithms, and design patterns
Preferred Skills
- Exposure to AI/ML workflows or a strong interest in learning
- Experience with message brokers such as Kafka, RabbitMQ, or Celery
- Knowledge of asynchronous programming (asyncio, Celery, etc.)
- Experience with unit testing frameworks (PyTest, unittest)
- Understanding of API security and authentication (OAuth2, JWT)
What We Offer
- Competitive compensation with growth opportunities
- Opportunity to work on AI-first automation products used globally
- Mentorship from experienced senior engineers
- Flexible work environment
- Continuous learning support in Python, Cloud, and AI/Automation technologies
Job Title : AI Analyst (Fresher / Associate)
Experience : 0 to 3 Years
Location : Andheri West, Mumbai (Onsite)
Reporting To : AI Architect
Employment Type : Full-Time
About the Role :
We are hiring an AI Analyst to work with enterprise clients on the assessment, design, and validation of AI systems. This is a hands-on role at the intersection of business, technology, and responsible AI, focused on building production-ready, scalable, and governed AI solutions aligned with real business outcomes.
Mandatory Skills :
Artificial Intelligence (AI), Large Language Models (LLM), AI Agents, Generative AI, Machine Learning basics, Python, Prompt Engineering, Analytical Thinking.
Key Responsibilities :
- Review existing AI workflows, agents, and LLM usage to identify risks, gaps, and inefficiencies.
- Support the design of AI agent workflows aligned with business requirements.
- Help implement AI guardrails, governance frameworks, and safety mechanisms.
- Design evaluation and validation frameworks to test accuracy, reliability, and cost efficiency.
- Support AI pilot launches and production readiness.
- Communicate AI system behavior and insights to technical and non-technical stakeholders.
Required Skills :
- Strong analytical and systems thinking.
- Exposure to LLMs, AI agents, or AI workflows.
- Ability to translate business requirements into AI solutions.
- Good problem-solving and communication skills.
- Comfortable working in fast-paced environments.
Preferred :
- Consulting or client-facing experience.
- Exposure to enterprise AI deployments or regulated environments.
Education :
- Degree in Computer Science, Engineering, AI, or Data Science preferred.
- Strong practical AI skills are also valued.
Why Join Us :
- Work on real-world AI systems with enterprise clients, gain exposure to production AI and responsible AI deployment, and build a strong foundation in Applied AI and AI Systems Architecture.
Location: Mumbai, Maharashtra, India
Sector: Technology, Information & Media
Company Size: 500 - 1,000 Employees
Employment: Full-Time, Permanent
Experience: 10 - 14 Years (Engineering Leadership)
Level: Engineering Manager / Group EM
ABOUT THIS MANDATE :
Recruiting Bond has been exclusively retained by one of India's most prominent and well-established digital platform organisations operating at the intersection of Technology, Information, and Media to identify and place an exceptional Engineering Manager who can lead engineering teams through an enterprise-wide AI adoption and digital transformation agenda.
This is a high-impact, hands-on leadership role at the nexus of people, product, and technology. The organisation is executing one of the most ambitious AI transformation programmes in its sector and this Engineering Manager will be a core driver of that change. You will lead multiple squads, own engineering delivery end-to-end, embed AI tooling and practices into the team's DNA, and shape the engineering culture of tomorrow.
We are seeking leaders who code when it matters, who build systems and teams with equal conviction, and who view AI not as a trend but as a fundamental shift in how great software is built.
THE OPPORTUNITY AT A GLANCE :
AI-First Engineering Culture :
- Own AI adoption across your squads - from LLM tooling integration to automation-first delivery workflows. Make AI a default, not an afterthought.
Hands-On Engineering Leadership :
- Stay close to the code. Lead architecture reviews, unblock engineers, and set the technical bar - not just the management agenda.
People & Org Builder :
- Grow engineers into leaders. Build squads of 6–15 across functions. Drive hiring, career frameworks, and a culture of psychological safety.
KEY RESPONSIBILITIES :
1. Hands-On Technical Engagement :
- Remain deeply embedded in the technical work: participate in design reviews, architecture decisions, and critical code reviews
- Set and uphold the engineering quality bar : performance benchmarks, security standards, test coverage, and release quality
- Provide technical direction on backend platform strategy, API design, service decomposition, and data architecture
- Identify and resolve systemic technical debt and architectural risks across team-owned services
- Unblock engineers by diving into complex problems: debugging, pair programming, and system analysis when it matters
- Own key technical decisions in collaboration with Tech Leads and Principal Engineers; balance pragmatism with long-term sustainability
2. AI Adoption, Integration & Transformation (2026 Mandate) :
- Define and execute the team's AI adoption roadmap - from developer tooling to product-facing AI features
- Champion the integration of GenAI tools (GitHub Copilot, Cursor, Claude, ChatGPT) across the full engineering workflow: coding, testing, documentation, incident response
- Embed LLM-powered capabilities into the product : recommendation engines, intelligent search, conversational interfaces, content generation, and predictive systems
- Lead evaluation and adoption of AI-assisted SDLC practices : automated code review, AI-generated test suites, intelligent observability, and anomaly detection
- Partner with Data Science and ML Platform teams to productionise ML models with robust MLOps pipelines
- Build team literacy in prompt engineering, RAG (Retrieval-Augmented Generation), and AI agent frameworks (see the sketch after this list)
- Create an experimentation culture : run structured AI pilots, measure productivity impact, and scale what works
- Stay ahead of the AI tooling landscape and advise senior leadership on strategic AI investments and engineering implications
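As referenced in the RAG bullet above, a minimal sketch of the retrieval step in a RAG pipeline; the embed() function is a pure placeholder (a real system would call an embedding model or API), and the documents and similarity scheme are illustrative assumptions, not this organisation's stack.
# Illustrative RAG retrieval step (hypothetical embed() function assumed).
import numpy as np
def embed(text: str) -> np.ndarray:
    # Placeholder: a real pipeline would call an embedding model/API here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(8)
DOCS = [
    "Deployment runbook for the search service",
    "Incident response checklist",
    "API design guidelines",
]
DOC_VECS = [embed(d) for d in DOCS]
def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    sims = [float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))) for v in DOC_VECS]
    top = sorted(range(len(DOCS)), key=lambda i: sims[i], reverse=True)[:k]
    return [DOCS[i] for i in top]
# Retrieved snippets would then be injected into the LLM prompt as grounding context.
print(retrieve("how do we respond to incidents?"))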
3. People Leadership & Team Development :
- Lead, manage, and grow squads of 6 - 15 engineers across seniority levels (L2 through L6 / Junior through Staff)
- Conduct structured 1:1s, career growth conversations, and development planning with every direct report
- Design and execute personalised AI upskilling programmes; ensure every engineer develops practical AI fluency by the end of 2026
- Build and maintain a high-performance team culture : clarity of ownership, accountability, fast feedback loops, and psychological safety
- Drive performance management fairly and rigorously: recognise top performers, manage underperformance constructively
- Lead technical hiring end-to-end : define job requirements, conduct bar-raising interviews, and make data-driven hire decisions
- Contribute to engineering career frameworks and level definitions in partnership with the VP / Director of Engineering
4. Engineering Delivery & Execution Excellence :
- Own end-to-end delivery for multiple product squads from planning and scoping through production release and post-launch stability
- Implement and refine agile delivery frameworks (Scrum, Kanban, Shape Up) calibrated to squad needs and product cadence
- Drive predictable delivery : maintain healthy sprint velocity, manage WIP limits, and ensure dependency resolution across teams.
- Establish and own engineering KPIs : DORA metrics (deployment frequency, lead time, MTTR, change failure rate), uptime SLOs, and velocity trends
- Lead incident management : build blameless post-mortem culture, own RCA processes, and drive systemic reliability improvements
- Balance technical debt repayment with feature velocity; negotiate prioritisation transparently with Product leadership
5. Strategic Leadership & Cross-Functional Influence :
- Serve as the primary engineering partner for Product, Design, Data, and Business stakeholders; translate ambiguity into executable engineering plans
- Participate in quarterly roadmap planning, capacity forecasting, and OKR definition for engineering teams
- Represent engineering in leadership forums; articulate technical constraints, risks, and opportunities in business terms
- Contribute to org-wide engineering strategy : platform investments, build-vs-buy decisions, and shared infrastructure priorities
- Build relationships across geographies (Mumbai HQ + distributed teams) to maintain alignment and delivery cohesion
- Act as a culture carrier and ambassador for engineering excellence, innovation, and responsible AI use
AI TRANSFORMATION LEADERSHIP 2026 EXPECTATIONS :
In 2026, Engineering Managers at this organisation are expected to be active architects of AI transformation, not passive observers. The following outlines the specific AI leadership expectations for this role:
AI Developer Productivity
- Drive measurable uplift in developer velocity through AI tooling adoption. Target : 30%+ reduction in code review cycle time and 40%+ increase in test coverage automation by Q3 2026.
LLM & GenAI Product Features
- Own delivery of GenAI-powered product capabilities : intelligent content, semantic search, personalisation, and conversational UX in production, at scale.
AI-Augmented Observability
- Implement AI-driven monitoring and anomaly detection pipelines. Reduce MTTR by leveraging predictive alerting, intelligent runbooks, and auto-remediation scripts.
Team AI Fluency :
- Build mandatory AI literacy across all engineering levels.
- Every engineer understands prompt engineering basics, AI ethics guardrails, and responsible AI deployment practices.
Responsible AI Governance :
- Partner with Security, Legal, and Data Privacy to ensure all AI deployments meet compliance standards, bias mitigation requirements, and explainability benchmarks.
TECHNOLOGY STACK & DOMAIN FAMILIARITY REQUIRED :
- Languages: Java / Go / Python / Node.js / PHP / Rust (must be hands-on in at least 2)
- Cloud: AWS / GCP / Azure (multi-cloud exposure strongly preferred)
- AI & GenAI: OpenAI / Anthropic / Gemini APIs / LangChain / LlamaIndex / RAG / Vector DBs / GitHub Copilot / Cursor / Hugging Face
- Containers: Docker / Kubernetes / Helm / Service Mesh (Istio / Linkerd)
- Databases: PostgreSQL / MongoDB / Redis / Cassandra / Elasticsearch / Pinecone (Vector DB)
- Messaging: Apache Kafka / RabbitMQ / AWS SQS/SNS / Google Pub/Sub
- MLOps & DataOps: MLflow / Kubeflow / SageMaker / Vertex AI / Airflow / dbt
- Observability: Datadog / Prometheus / Grafana / OpenTelemetry / Jaeger / ELK Stack
- CI/CD & IaC: GitHub Actions / ArgoCD / Jenkins / Terraform / Ansible / Backstage (IDP)
QUALIFICATIONS & CANDIDATE PROFILE :
Education :
- B.E. / B.Tech or M.E. / M.Tech from a Tier-I or Tier-II Institution - CS, IS, ECE, AI/ML streams strongly preferred
- Demonstrated engineering depth and leadership impact may compensate for institution pedigree
Experience :
- 10 to 14 years of progressive engineering experience, with at least 3 years in a formal Engineering Manager or equivalent people-leadership role
- Proven track record of managing and scaling engineering teams (6–15+ engineers) in a fast-growing SaaS or digital product environment
- Hands-on backend engineering background; must be able to read, write, and critique production code
- Direct experience driving AI/ML feature delivery or AI tooling adoption within engineering organisations
- Exposure across start-up, mid-size, and large-scale product organisations preferred; adaptability is a core requirement
- Strong CS fundamentals: distributed systems, algorithms, system design, and software architecture
- Demonstrated career stability: a minimum of 2 years' average tenure per organisation.
The Ideal Engineering Manager in 2026 :
- Leads with context, not control; empowers engineers while maintaining accountability and quality
- Is fluent in both people language and technical language; switches registers naturally with engineers and executives alike
- Sees AI as a force multiplier for the team, not a threat. Actively experiments with and advocates for AI tooling
- Measures success by team outcomes, not personal output. Takes pride in what the team ships, not what they build alone
- Creates feedback loops obsessively between product and engineering, between seniors and juniors, between metrics and decisions
- Has strong opinions, loosely held, brings conviction to discussions but updates on evidence
- Invests in engineering excellence as seriously as delivery velocity; knows that quality and speed are not opposites
WHY THIS ROLE STANDS APART :
AI Transformation at Scale :
- Lead one of the most significant AI adoption programmes in India's digital media sector.
- Our decisions will shape how hundreds of engineers work in 2026 and beyond.
Hands-On & Strategic Balance :
- A rare EM role that actively encourages technical depth.
- Stay close to the code while owning the people agenda - the best of both worlds.
Established Platform, Real Scale :
- 500–1,000 engineers, proven product-market fit, and the org maturity to execute.
- This is not a greenfield startup gamble; it is a serious company with serious ambition.
Clear Leadership Growth Path :
- A visible, direct path toward Director / VP of Engineering.
- Senior leadership is invested in growing its next generation of technology executives.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What You Will Do:
• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
• You should have 2–5 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.
• You measure success by your own growth, not external validation.
• You thrive on challenges, not on perks or financial rewards.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What We Expect:
• Develop and maintain scalable backend systems using Python.
• Design and implement REST APIs and socket-based communication.
• Optimize code for speed, performance, and reliability.
• Collaborate with frontend teams to integrate server-side logic.
• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.
• Build fault-tolerant, multi-producer/consumer systems (a minimal sketch follows this list).
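As noted in the last point above, a minimal sketch of a multi-producer/consumer pattern using only Python's standard queue and threading modules; the message payloads, worker counts, and shutdown scheme are illustrative assumptions, not a description of Tradelab's systems.
# Illustrative multi-producer/consumer sketch (standard library only).
import queue
import threading
q: "queue.Queue[str | None]" = queue.Queue(maxsize=100)
def producer(name: str, n: int) -> None:
    for i in range(n):
        q.put(f"{name}-msg-{i}")  # blocks when the queue is full (back-pressure)
def consumer() -> None:
    while True:
        msg = q.get()
        if msg is None:  # sentinel value used to shut the worker down
            q.task_done()
            break
        # Process the message here (e.g., persist it or forward it downstream).
        q.task_done()
producers = [threading.Thread(target=producer, args=(f"p{i}", 50)) for i in range(2)]
consumers = [threading.Thread(target=consumer) for _ in range(3)]
for t in producers + consumers:
    t.start()
for t in producers:
    t.join()
q.join()              # wait until every produced message has been processed
for _ in consumers:
    q.put(None)       # one sentinel per consumer thread
for t in consumers:
    t.join()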
Must-Have Skills:
• 2–5 years of experience in Python and backend development.
• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).
• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.
• Bachelor’s degree in Computer Science or related field.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, or algorithmic trading.
• Experience with GoLang, C/C++, Erlang, or Elixir.
• Exposure to trading, fintech, or low-latency systems.
• Familiarity with microservices and CI/CD pipelines.
Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.
You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.
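A minimal, assumption-laden sketch of the kind of conversational-data analysis described above: pulling rough topics out of chat logs with TF-IDF and NMF (both named later under nice-to-have skills); the sample utterances and topic count are invented for illustration.
# Illustrative topic extraction from chat logs with scikit-learn (sample data is invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
chat_logs = [
    "I cannot log in to my trading account",
    "password reset link is not working",
    "what are the brokerage charges for options",
    "how do I check charges on futures trades",
    "login keeps failing after password change",
    "need the fee schedule for equity delivery",
]
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(chat_logs)
nmf = NMF(n_components=2, random_state=0)  # assume two rough topics: login issues, fees
nmf.fit(X)
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(nmf.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
In practice the extracted topics would feed intent design and chatbot training-data curation rather than be used directly.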
Key Responsibilities
- Design, develop, test, debug, and maintain chatbot and virtual agent applications
- Collaborate with business stakeholders to define and translate requirements into technical solutions
- Analyze large volumes of conversational data to improve chatbot accuracy and performance
- Develop automation workflows for data handling and refinement
- Train and optimize chatbots using historical chat logs and user-generated content
- Ensure solutions align with enterprise architecture and best practices
- Document solutions, workflows, and technical designs clearly
Required Skills
- Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
- Experience with one or more AI/NLP platforms such as:
- Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
- Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
- Strong programming knowledge in Python, JavaScript, or Node.js
- Experience training chatbots using historical conversations or large-scale text datasets
- Practical knowledge of:
- Formal syntax and semantics
- Corpus analysis
- Dialogue management
- Strong written communication skills
- Strong problem-solving ability and willingness to learn emerging technologies
Nice-to-Have Skills
- Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
- Experience building voice apps for Amazon Alexa or Google Home
- Experience with Test-Driven Development (TDD) and Agile methodologies
- Ability to design and implement end-to-end pipelines for AI-based conversational applications
- Experience in text mining, hypothesis generation, and historical data analysis
- Strong knowledge of regular expressions for data cleaning and preprocessing
- Understanding of API integrations, SSO, and token-based authentication
- Experience writing unit test cases as per project standards
- Knowledge of HTTP, REST APIs, sockets, and web services
- Ability to perform keyword and topic extraction from chat logs
- Experience training and tuning topic modeling algorithms such as LDA and NMF
- Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
- Experience with NLP frameworks such as NLTK and spaCy
JOB DESCRIPTION:
Location: Pune, Mumbai, Bangalore
Mode of Work : 3 days from Office
* Python : Strong expertise in data workflows and automation
* Pandas: For detailed data analysis and validation
* SQL: Querying and performing operations on Delta tables
* AWS Cloud: Compute and storage services
* OOP concepts
At Dolat Capital, we blend cutting-edge technology with quantitative finance to drive high-performance trading across Equities, Futures, and Options. We're a fast-moving team of traders, engineers, and data scientists building ultra-low latency systems and intelligent trading strategies.
🎯 What You’ll Work On
1. Designing and deploying high-frequency, high-Sharpe trading strategies
2. Building low-latency, high-throughput trading infrastructure (C++/Python/Linux)
3. Leveraging AI/ML to detect alpha and market patterns from large datasets
4. Building real-time risk systems, simulation tools, and performance optimization
5. Collaborating across tech and trading teams to push innovation in live markets
🧠 What We’re Looking For
1. Master’s (U.S.) in CS or Computational Finance (MANDATORY)
2. 1–2 years of experience in a quant/tech-heavy role
3. Strong in C++, Python, algorithms, Linux, TCP/UDP
4. Experience with AI/ML tools like TensorFlow, PyTorch, or Scikit-learn
5. Passion for high-performance systems and market innovation.
* Python (3 to 6 years): Strong expertise in data workflows and automation
* Spark (PySpark): Hands-on experience with large-scale data processing (see the sketch after this list)
* Pandas: For detailed data analysis and validation
* Delta Lake: Managing structured and semi-structured datasets at scale
* SQL: Querying and performing operations on Delta tables
* Azure Cloud: Compute and storage services
* Orchestrator: Good experience with either ADF or Airflow
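As flagged in the PySpark item above, a minimal sketch of reading a Delta table with PySpark and doing a quick Pandas-level validation; the table path and column names are assumptions for illustration, and the delta-spark package must be installed and configured on the cluster for the "delta" format to resolve.
# Illustrative Delta + PySpark read/validate sketch (path and columns are assumed).
from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.appName("delta-validation").getOrCreate()
# Read a (hypothetical) Delta table and run a simple aggregation in Spark.
df = spark.read.format("delta").load("/mnt/datalake/orders_delta")
daily = (
    df.groupBy(F.to_date("order_ts").alias("order_date"))
      .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)
# Pull a small aggregate into Pandas for detailed validation/reporting.
pdf = daily.orderBy("order_date").limit(30).toPandas()
assert (pdf["orders"] >= 0).all(), "order counts should never be negative"
print(pdf.head())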
About the Role:
We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.
Key Responsibilities:
Data Modelling & Visualization
- Build scalable and high-quality data models in Power BI using best practices.
- Define relationships, hierarchies, and measures to support effective storytelling.
- Ensure dashboards meet standards for accuracy, visualization principles, and timeliness.
Data Transformation & ETL
- Perform advanced data transformation using Power Query (M Language) beyond UI-based steps.
- Design and optimize ETL pipelines using SQL, Python, and Big Data tools.
- Manage and process large-scale datasets from various sources and formats.
Business Problem Translation
- Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions.
- Decompose business questions into testable hypotheses and identify relevant datasets for validation.
Performance & Troubleshooting
- Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability.
- Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.
Analytical Storytelling
- Apply analytical thinking to design insightful dashboards—prioritizing clarity and usability over aesthetics.
- Develop data narratives that drive business impact.
Solution Design
- Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.
Required Skills & Experience:
- Minimum 3+ years of experience as a Data Engineer or in a similar data-focused role.
- Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices.
- Hands-on with Python and SQL for data analysis, automation, and backend data transformation.
- Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
- Familiarity with DAX Studio and Tabular Editor.
- Experience in handling high-volume data in production environments.
Preferred (Good to Have):
- Exposure to Big Data technologies such as:
- PySpark
- Hadoop
- Hive / HDFS
- Spark Streaming (optional but preferred)
Why Join Us?
- Work with a team that's passionate about data innovation.
- Exposure to modern data stack and tools.
- Flat structure and collaborative culture.
- Opportunity to influence data strategy and architecture decisions.
About us
Cere Labs is a Mumbai-based company working in the field of Artificial Intelligence. It is a product company that uses the latest technologies such as Python, Redis, Neo4j, MVC, Docker, and Kubernetes to build its AI platform. Cere Labs’ clients are primarily from the banking and finance domain in India and the US. The company offers a great environment for its employees to learn and grow in technology.
Software Developer
Job brief
Cere Labs is seeking to hire a skilled and passionate software developer to help with the development of our current projects and product. Your duties will primarily revolve around building software by writing code, as well as modifying software to fix errors and improve its performance. You will also be involved in writing test cases and testing.
To be successful in this role, you will need extensive knowledge of programming languages and frameworks such as Java, Python, JavaScript, and React.
Ultimately, the role of the Software Engineer is to build high-quality, innovative, and fully performing software that complies with coding standards and technical design.
Responsibilities
- Develop flowcharts, layouts and documentation to identify requirements and solutions
- Write well-designed, testable code
- Develop software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug and upgrade existing systems
- Deploy programs and test the deployed code
- Comply with project plans and industry standards
Requirements
- BE/B.Tech degree in Computer Science or IT
- Ability to understand the given requirements and produce a design based on the specification
- Ability to develop unit testing of code components or complete applications.
- Must be a full-stack developer and understand concepts of software engineering.
- Ability to develop software in Python, Java, and JavaScript
- Excellent knowledge of relational databases, MySQL and ORM technologies (JPA2, Hibernate), in-memory data stores such as Redis
- Experience developing web applications using at least one popular web framework (JSF, Spring MVC, React) is preferred
- Experience with test-driven development
- Proficiency in software engineering tools, including popular IDEs such as PyCharm, Visual Studio Code, and Eclipse
- Proven work experience as a Software Engineer or Software Developer will be an added advantage
Working conditions
Hours: 9:00 AM to 6:00 PM
Weekly off: Sunday, First and Third Saturdays
Mode: Work from office
Recruitment process
The selection process includes:
- Written test
- Technical interview
- Final interview
Compensation
CTC: Rs. 3-4 lacs pa, depending on performance in the selection process.
Skills - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance
Responsibilities:
1. Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models (see the sketch after this list)
2. Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
3. Automate the training, testing, and deployment processes for machine learning models
4. Continuously monitor and maintain models in production, ensuring optimal performance, accuracy, and reliability
5. Implement best practices for version control, model reproducibility, and governance
6. Optimize machine learning pipelines for scalability, efficiency, and cost-effectiveness
7. Troubleshoot and resolve issues related to model deployment and performance
8. Ensure compliance with security and data privacy standards in all MLOps activities
9. Keep up to date with the latest MLOps tools, technologies, and trends
10. Provide support and guidance to other team members on MLOps practices
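As noted in the first responsibility above, a minimal sketch of one common building block of such a pipeline: tracking and registering a model run with MLflow. The experiment name, metrics, and model are placeholders, and a real pipeline would invoke this from CI/CD (e.g., a Jenkins stage) rather than ad hoc.
# Illustrative MLflow tracking step (experiment name and data are placeholders).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlflow.set_experiment("demo-mlops-pipeline")
with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("max_iter", 1000)
    mlflow.log_metric("accuracy", acc)
    # Log the model artifact so a downstream deployment job can pick it up / register it.
    mlflow.sklearn.log_model(model, "model")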
Required skills and experience:
• 3-10 years of experience in MLOps, DevOps or a related field
• Bachelor’s degree in computer science, Data Science or a related field
• Strong understanding of machine learning principles and model lifecycle management
• Experience in Jenkins pipeline development
• Experience in automation scripting
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in a minimum of 2+ use cases out of recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you okay with the Mumbai location (if you are from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time (see the sketch after this list)
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
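As referenced in the fraud/risk bullet above, a minimal sketch of training a fraud-style classifier on heavily imbalanced data and evaluating it with PR-AUC (average precision), the metric called out under "Ideal Candidate"; the synthetic data and model choice are illustrative assumptions.
# Illustrative imbalanced-classification sketch evaluated with PR-AUC (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split
# Simulate a heavily imbalanced fraud problem (~2% positives).
X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
clf = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
# PR-AUC (average precision) is far more informative than ROC-AUC at this imbalance.
print("PR-AUC:", round(average_precision_score(y_te, scores), 3))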
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
🚀 Hiring: QA Engineer at Deqode
⭐ Experience: 3+ Years
📍 Location: Mumbai and Bangalore
⭐ Work Mode: 5 days work from office
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We are looking for a Backend API Automation Engineer with strong experience in API testing and automation. The candidate should be skilled in scripting and capable of handling both manual and automated testing.
Key Skills Required:
- Backend API automation testing experience
- Strong scripting skills in Python or JavaScript (Java acceptable)
- Hands-on experience with REST Assured and Postman
- Experience in manual testing along with test automation
Responsibilities:
- Design, develop, and execute automated tests for backend APIs
- Perform manual and automated API testing to ensure quality and reliability
- Collaborate with development teams to identify, report, and resolve issues
Job Title: Python Developer (5–8+ Years Experience)
Location: Mumbai (Onsite)
Experience: 5–8+ Years
Salary: ₹9,00,000 – ₹12,00,000 per Annum (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for an experienced Python Developer to join our growing team in Mumbai. The ideal candidate will have strong hands-on experience in Python development, building scalable backend systems, and working with databases and APIs.
Key Responsibilities
- Design, develop, test, and maintain Python-based applications
- Build and integrate RESTful APIs
- Work with frameworks such as Django / Flask / FastAPI
- Write clean, reusable, and efficient code
- Collaborate with frontend developers, QA, and project managers
- Optimize application performance and scalability
- Debug, troubleshoot, and resolve technical issues
- Participate in code reviews and follow best coding practices
- Work with databases and ensure data security and integrity
- Deploy and maintain applications in staging/production environments
Required Skills & Qualifications
- 5–8+ years of hands-on experience in Python development
- Strong experience with Django / Flask / FastAPI
- Good understanding of REST APIs
- Experience with MySQL / PostgreSQL / MongoDB
- Familiarity with Git and version control workflows
- Knowledge of OOP concepts and design principles
- Experience with Linux-based environments
- Understanding of basic security and performance optimization
- AI tool integration: GitHub Copilot, Windsurf, Cursor, AIDE, etc
- Ability to work independently as well as in a team
Good to Have (Preferred Skills)
- Experience with AWS / cloud services
- Knowledge of Docker / CI-CD pipelines
- Good level understanding of prompt engineering
- Exposure to Microservices Architecture
- Basic frontend knowledge (HTML, CSS, JavaScript)
- Experience working in an Agile/Scrum environment
- Experience working with AI APIs such as OpenAI (ChatGPT), Gemini, or Claude (see the sketch after this list)
- Integrating AI APIs into web applications
- Experience using AI for automation, content generation, data processing, or workflow optimization
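As flagged in the AI-APIs bullet above, a minimal sketch of calling an LLM API from Python using the OpenAI SDK; the model name and prompt are assumptions, the OPENAI_API_KEY environment variable is assumed to be set, and other providers (Gemini, Claude) have analogous clients.
# Illustrative LLM API call via the OpenAI Python SDK (model name and prompt are assumptions).
from openai import OpenAI
client = OpenAI()  # reads OPENAI_API_KEY from the environment
def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever your account provides
        messages=[
            {"role": "system", "content": "You summarize text in one sentence."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
if __name__ == "__main__":
    print(summarize("FastAPI is a modern, fast web framework for building APIs with Python."))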
Experience:
- Total: 5+ years (Required)
- Python: 5 years (Required)
Location: Mumbai (Onsite)
Experience: 4–6 Years
Salary: ₹75,000 – ₹1,200,000 per month (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for a skilled React Developer to join our team in Mumbai. The ideal candidate should have strong hands-on experience in building modern, responsive web applications using React and be comfortable working with at least one backend technology such as Python, Node.js, or PHP.
Key Responsibilities
- Develop and maintain user-friendly web applications using React.js
- Convert UI/UX designs into high-quality, reusable components
- Work with REST APIs and integrate frontend with backend services
- Collaborate with backend developers (Python / Node.js / PHP)
- Optimize applications for performance, scalability, and responsiveness
- Manage application state using Redux / Context API / similar
- Write clean, maintainable, and well-documented code
- Participate in code reviews and sprint planning
- Debug and resolve frontend and integration issues
- Ensure cross-browser and cross-device compatibility
Required Skills & Qualifications
- 6–8 years of experience in frontend development
- Strong expertise in React.js
- Proficiency in JavaScript (ES6+)
- Experience with HTML5, CSS3, Responsive Design
- Hands-on experience with RESTful APIs
- Working knowledge of at least one backend technology:
- Python (Django / Flask / FastAPI) OR
- Node.js (Express / NestJS) OR
- PHP (Laravel preferred)
- Familiarity with Git / version control systems
- Understanding of component-based architecture
- Experience working in Linux environments
Good to Have (Preferred Skills)
- Experience with Next.js
- Knowledge of TypeScript
- Familiarity with Redux / React Query
- Basic understanding of databases (MySQL / MongoDB)
- Experience with CI/CD pipelines
- Exposure to AWS or cloud platforms
- Experience working in Agile/Scrum teams
What We Offer
- Competitive salary based on experience and skills
- Onsite role with a collaborative team in Mumbai
- Opportunity to work on modern tech stack and real-world projects
- Career growth and learning opportunities
Interested candidates can share their resumes at
Job Type: Full-time
Application Question(s):
- If selected, how soon can you join?
- Are you okay with the salary slab (50,000–90,000), depending upon your experience?
- Have you worked on a production React application where you integrated REST APIs and handled authentication and error scenarios with a backend (Python / Node.js / PHP)?
Experience:
- Total: 5 years (Required)
- Python: 5 years (Required)
Location:
- Mumbai, Maharashtra (Required)
Work Location: In person
Specific Knowledge/Skills
- 4-6 years of experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (web scraping)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
Role: Senior Platform Engineer (GCP Cloud)
Experience Level: 3 to 6 Years
Work location: Mumbai
Mode : Hybrid
Role & Responsibilities:
- Build automation software for cloud platforms and applications
- Drive Infrastructure as Code (IaC) adoption
- Design self-service, self-healing monitoring and alerting tools
- Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
- Build Kubernetes container platforms
- Introduce new cloud technologies for business innovation
Requirements:
- Hands-on experience with GCP Cloud
- Knowledge of cloud services (compute, storage, network, messaging)
- IaC tools experience (Terraform/CloudFormation)
- SQL & NoSQL databases (Postgres, Cassandra)
- Automation tools (Puppet/Chef/Ansible)
- Strong Linux administration skills
- Programming: Bash/Python/Java/Scala
- CI/CD pipeline expertise (Jenkins, Git, Maven)
- Multi-region deployment experience
- Agile/Scrum/DevOps methodology
Junior PHP Developer (Full-Time)
Malad, Mumbai (Mindspace) | Work from Office
We’re hiring a Junior PHP Developer at Websites.co.in, a platform where small businesses create their website in 2 minutes.
Your role
- Develop and maintain backend logic using PHP (Laravel or Core PHP)
- Write clean, reusable, and efficient code
- Work with MySQL databases (queries, joins, optimization)
- Integrate REST APIs and troubleshoot backend issues
- Collaborate with frontend, QA, and product teams for feature implementation
- Participate in code reviews, testing, and deployment activities
- Debug production issues and provide quick fixes
What we expect
- Hands-on development experience with PHP (mandatory)
- Strong knowledge of MySQL, queries, and database structures
- Understanding of MVC architecture (Laravel preferred)
- Basic knowledge of HTML, CSS, JavaScript
- Familiarity with Git version control
- Problem-solving mindset and willingness to take ownership
- 0–3.5 years of experience (freshers with strong projects are welcome)
Good to have
- Experience working with APIs, JSON, cURL
- Understanding of server basics (Linux, Apache, hosting environments)
What you get
- Real product ownership, not agency project hopping
- Direct collaboration with CTO and senior devs
- Steep learning curve in a fast-moving SaaS environment
We are seeking a motivated Data Analyst to support business operations by analyzing data, preparing reports, and delivering meaningful insights. The ideal candidate should be comfortable working with data, identifying patterns, and presenting findings in a clear and actionable way.
Key Responsibilities:
- Collect, clean, and organize data from internal and external sources
- Analyze large datasets to identify trends, patterns, and opportunities
- Prepare regular and ad-hoc reports for business stakeholders
- Create dashboards and visualizations using tools like Power BI or Tableau
- Work closely with cross-functional teams to understand data requirements
- Ensure data accuracy, consistency, and quality across reports
- Document data processes and analysis methods
We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
Key Responsibilities
• Develop, maintain, and optimize backend applications using Python.
• Build and integrate RESTful APIs and microservices.
• Work with relational and NoSQL databases for data storage, retrieval, and optimization.
• Write clean, efficient, and reusable code while following best practices.
• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high-quality features.
• Participate in code reviews to maintain high coding standards.
• Troubleshoot, debug, and upgrade existing applications.
• Ensure application security, performance, and scalability.
Required Skills & Qualifications:
• 2–4 years of hands-on experience in Python development.
• Strong command over Python frameworks such as Django, Flask, or FastAPI.
• Solid understanding of Object-Oriented Programming (OOP) principles.
• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.
• Proficiency in writing and consuming REST APIs.
• Familiarity with Git and version control workflows.
• Experience with unit testing and frameworks like PyTest or Unittest.
• Knowledge of containerization (Docker) is a plus.
AI Agent Builder – Internal Functions and Data Platform Development Tools
About the Role:
We are seeking a forward-thinking AI Agent Builder to lead the design, development, deployment, and usage reporting of Microsoft Copilot and other AI-powered agents across our data platform development tools and internal business functions. This role will be instrumental in driving automation, improving onboarding, and enhancing operational efficiency through intelligent, context-aware assistants.
This role is central to our GenAI transformation strategy. You will help shape the future of how our teams interact with data, reduce administrative burden, and unlock new efficiencies across the organization. Your work will directly contribute to our “Art of the Possible” initiative—demonstrating tangible business value through AI.
You Will:
• Copilot Agent Development: Use Microsoft Copilot Studio and Agent Builder to create, test, and deploy AI agents that automate workflows, answer queries, and support internal teams.
• Data Engineering Enablement: Build agents that assist with data connector scaffolding, pipeline generation, and onboarding support for engineers.
• Knowledge Base Integration: Curate and integrate documentation (e.g., ERDs, connector specs) into Copilot-accessible repositories (SharePoint, Confluence) to support contextual AI responses.
• Prompt Engineering: Design reusable prompt templates and conversational flows to streamline repeated tasks and improve agent usability.
• Tool Evaluation & Integration: Assess and integrate complementary AI tools (e.g., GitLab Duo, Databricks AI, Notebook LM) to extend Copilot capabilities.
• Cross-Functional Collaboration: Partner with product, delivery, PMO, and security teams to identify high-value use cases and scale successful agent implementations.
• Governance & Monitoring: Ensure agents align with Responsible AI principles, monitor performance, and iterate based on feedback and evolving business needs.
• Adoption and Usage Reporting: Use Microsoft Viva Insights and other tools to report on user adoption, usage and business value delivered.
What We're Looking For:
• Proven experience with Microsoft 365 Copilot, Copilot Studio, or similar AI platforms, ChatGPT, Claude, etc.
• Strong understanding of data engineering workflows, tools (e.g., Git, Databricks, Unity Catalog), and documentation practices.
• Familiarity with SharePoint, Confluence, and Microsoft Graph connectors.
• Experience in prompt engineering and conversational UX design.
• Ability to translate business needs into scalable AI solutions.
• Excellent communication and collaboration skills across technical and non-technical audiences.
Bonus Points:
• Experience with GitLab Duo, Notebook LM, or other AI developer tools.
• Background in enterprise data platforms, ETL pipelines, or internal business systems.
• Exposure to AI governance, security, and compliance frameworks.
• Prior work in a regulated industry (e.g., healthcare, finance) is a plus.
Job Description: Python-Azure AI Developer
Experience: 5+ years
Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal
Mandatory Skills:
- Python: Expert-level proficiency with FastAPI/Flask
- Azure Services: Hands-on experience integrating Azure cloud services
- Databases: PostgreSQL, Redis
- AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding
Good to Have:
- Workflow automation tools (n8n or similar)
- Experience with LangChain, AutoGen, or other AI agent frameworks
- Azure OpenAI Service knowledge
Key Responsibilities:
- Develop AI-powered applications using Python and Azure
- Build RESTful APIs with FastAPI/Flask
- Integrate Azure services for AI/ML workloads
- Implement agentic AI solutions
- Database optimization and management
- Workflow automation implementation
Software Tester – Automation (On-Site)
📍 Location: Navi Mumbai
Budget: 4 LPA to 7 LPA
Years of Experience: 2 to 5 years
🕒 Immediate Joiners Preferred
✨ Why Join Us?
🚀 Growth-driven environment with modern, automation-first projects
📆 Weekends off + Provident Fund benefits
🤝 Supportive, collaborative & innovation-first culture
🔍 Role Overview
We are looking for an Automation Tester with strong hands-on experience in Python-based UI, API, and WebSocket automation. You will collaborate closely with developers, project managers, and QA peers to ensure product quality, performance, and reliability, while also exploring AI-led testing initiatives.
🧩 Key Responsibilities
🧾 Requirement Analysis & Test Planning
Participate in client interactions to understand testing and automation requirements.
Convert functional/technical specifications into automation-ready test scenarios.
🤖 Automation Testing & Framework Development
Develop and maintain automation scripts using Python, Selenium, and Pytest.
Build scalable automation frameworks for UI, API, and WebSocket testing.
Improve script reusability, modularity, and performance.
🌐 API & WebSocket Testing
Perform REST API validations using Postman/Swagger.
Develop automated API test suites using Python/Pytest (see the sketch after this section).
Execute WebSocket test scenarios (real-time event/message validations, latency, connection stability).
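A minimal sketch of the kind of Pytest-based API check referenced above, hitting a hypothetical endpoint with requests; the base URL, payload, and response fields are assumptions, not an actual product API.
# Illustrative API tests with pytest + requests (endpoint and fields are hypothetical).
import requests
BASE_URL = "https://api.example.com"  # placeholder base URL
def test_create_order_returns_201_and_echoes_payload():
    payload = {"symbol": "INFY", "qty": 10}
    resp = requests.post(f"{BASE_URL}/orders", json=payload, timeout=5)
    assert resp.status_code == 201
    body = resp.json()
    assert body["symbol"] == payload["symbol"]
    assert body["qty"] == payload["qty"]
def test_health_endpoint_is_up():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
Tests like these would typically run via pytest in a CI job (e.g., Jenkins) against a QA environment.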
🧪 Manual Testing (As Needed)
Execute functional, UI, smoke, sanity, and exploratory tests.
Validate applications in development, QA, and production environments.
🐞 Defect Management
Log, track, and retest defects using Jira or Zoho Projects.
Ensure high-quality bug reporting with clear steps and severity/priority tagging.
⚡ Performance Testing
Use JMeter to conduct load, stress, and performance tests for APIs/WebSocket-based systems.
Analyze system performance and highlight bottlenecks.
🧠 AI-Driven Testing Exploration
Research and experiment with AI tools to enhance automation coverage and efficiency.
Propose AI-driven improvements for regression, analytics, and test optimization.
🤝 Collaboration & Communication
Participate in daily stand-ups and regular QA syncs.
Communicate blockers, automation progress, and risks clearly.
📊 Test Reporting & Metrics
Create reports on automation execution, defect trends, and performance benchmarks.
🛠 Key Technical Skills
✔ Strong proficiency in Python
✔ UI Automation using Selenium (Python)
✔ Pytest Framework
✔ API Testing – Postman/Swagger
✔ WebSocket Testing
✔ Performance Testing using JMeter
✔ Knowledge of CI/CD tools (such as Jenkins)
✔ Knowledge of Git
✔ SQL knowledge (added advantage)
✔ Functional/Manual Testing expertise
✔ Solid understanding of SDLC/STLC & QA processes
🧰 Tools You Will Work With
Automation: Selenium, Pytest
API & WebSockets: Postman, Swagger, Python libraries
Performance: JMeter
Project/Defect Tracking: Jira, Zoho Projects
CI/CD & Version Control: Jenkins, Git
🌟 Soft Skills
Strong communication & teamwork
Detail-oriented and analytical
Problem-solving mindset
Ownership and accountability
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What You Will Do:
• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.
• You measure success by your own growth, not external validation.
• You thrive on challenges, not on perks or financial rewards.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What We Expect:
• Develop and maintain scalable backend systems using Python.
• Design and implement REST APIs and socket-based communication.
• Optimize code for speed, performance, and reliability.
• Collaborate with frontend teams to integrate server-side logic.
• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.
• Build fault-tolerant, multi-producer/consumer systems.
Must-Have Skills:
• 3–4 years of experience in Python and backend development.
• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).
• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.
• Bachelor’s degree in Computer Science or related field.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, or algorithmic trading.
• Experience with GoLang, C/C++, Erlang, or Elixir.
• Exposure to trading, fintech, or low-latency systems.
• Familiarity with microservices and CI/CD pipelines.
Required Skills: Strong SQL Expertise, Data Reporting & Analytics, Database Development, Stakeholder & Client Communication, Independent Problem-Solving & Automation Skills
Review Criteria
· Must have Strong SQL skills (queries, optimization, procedures, triggers)
· Must have Advanced Excel skills
· Should have 3+ years of relevant experience
· Should have Reporting + dashboard creation experience
· Should have Database development & maintenance experience
· Must have Strong communication for client interactions
· Should have Ability to work independently
· Willingness to work from client locations.
Description
Who is an ideal fit for us?
We seek professionals who are analytical, demonstrate self-motivation, exhibit a proactive mindset, and possess a strong sense of responsibility and ownership in their work.
What will you get to work on?
As a member of the Implementation & Analytics team, you will:
● Design, develop, and optimize complex SQL queries to extract, transform, and analyze data
● Create advanced reports and dashboards using SQL, stored procedures, and other reporting tools
● Develop and maintain database structures, stored procedures, functions, and triggers
● Optimize database performance by tuning SQL queries, and indexing to handle large datasets efficiently
● Collaborate with business stakeholders and analysts to understand analytics requirements
● Automate data extraction, transformation, and reporting processes to improve efficiency
What do we expect from you?
For the SQL/Oracle Developer role, we are seeking candidates with the following skills and Expertise:
● Proficiency in SQL (Window functions, stored procedures) and MS Excel (advanced Excel skills)
● More than 3 years of relevant experience
● Java / Python experience is a plus but not mandatory
● Strong communication skills to interact with customers to understand their requirements
● Capable of working independently with minimal guidance, showcasing self-reliance and initiative
● Previous experience in automation projects is preferred
● Work From Office: Bangalore/Navi Mumbai/Pune/Client locations
About Us
Dolat Capital is a multi-strategy quantitative trading firm specializing in high-frequency and fully automated trading systems across global markets. We build proprietary algorithms using advanced mathematical, statistical, and computational techniques.
We are looking for an Experienced Quantitative Researcher to develop, test, and optimize quantitative trading strategies—primarily for APAC markets. The ideal candidate brings strong mathematical thinking, hands-on trading experience, and a track record of building profitable models.
Key Responsibilities
- Research, design & develop quantitative trading strategies
- Analyse large datasets and build predictive models / regression models
- Implement models in Python / C++ / Matlab (a brief Python sketch follows this list)
- Monitor, execute, and improve existing trading strategies
- Collaborate closely with traders, developers, and researchers
- Optimize trading systems, reduce latency, and enhance execution
- Identify new trading opportunities across listed products
- Oversee and manage risk for options, equities, futures, and other instruments
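Purely as a sketch of the "predictive models / regression models" item above, the following fits a lagged-return linear regression on synthetic data; the synthetic price path and feature choice are illustrative, not a trading strategy.
```python
# Illustrative only: a lagged-return linear regression on a synthetic price path.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 5000)))   # synthetic prices
returns = np.diff(np.log(prices))

lags = 5
# Each row holds the previous `lags` returns; the target is the next return.
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = returns[lags:]

split = int(0.8 * len(y))
model = LinearRegression().fit(X[:split], y[:split])
print("in-sample R^2 :", model.score(X[:split], y[:split]))
print("out-sample R^2:", model.score(X[split:], y[split:]))
```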
Required Skills & Experience
- 3+ years of experience on a high-volume equities, futures, options, or market-making desk
- Strong background in Statistics, Mathematics, Physics, or related field (PhD)
- Proven track record of profitable real-world trading strategies
- Strong programming experience: C++, Python, R, Matlab
- Experience with automated trading systems and exchange protocols
- Ability to work in a fast-paced, high-pressure trading environment
- Excellent analytical skills, precision, and attention to detail
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines (see the sketch after this list)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
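To make the LLM fine-tuning item above concrete, here is a minimal LoRA setup using Hugging Face PEFT; the base checkpoint and target modules are assumptions and vary by model family, and the actual training loop (Trainer/SFTTrainer) is omitted.
```python
# Sketch of a LoRA fine-tuning setup with Hugging Face PEFT. The checkpoint and
# target modules are placeholders and depend on the model family in use.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"                 # small, open checkpoint used as a stand-in
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

lora_cfg = LoraConfig(
    r=16,                                  # rank of the low-rank adapters
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # only the adapter weights are trainable
# Training would then proceed with transformers' Trainer or TRL's SFTTrainer.
```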
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
About Ven Analytics
At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.
Role Overview
We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.
Key Responsibilities
- Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
- Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis (a short Python sketch follows this list).
- Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
- Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
- Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
- Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
- Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
- Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.
- Power BI Development: Use Power BI Desktop for report building and Power BI Service for distribution.
- Backend Development: Develop optimized SQL queries that are easy to consume, maintain, and debug.
- Version Control: Maintain strict version control by tracking change requests (CRs) and bug fixes, and keep Prod and Dev dashboards properly maintained.
- Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.
- Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.
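As a small, hypothetical example of the Python data-manipulation work mentioned above, the snippet below shapes raw transactions into a fact table before it is loaded into a Power BI model; column names and file paths are placeholders.
```python
# Hypothetical pre-model shaping step: build a fact table that Power BI will consume.
import pandas as pd

raw = pd.read_csv("transactions.csv", parse_dates=["order_date"])   # placeholder extract

fact_sales = (
    raw.assign(month=raw["order_date"].dt.to_period("M").astype(str))
       .groupby(["month", "region", "product_id"], as_index=False)
       .agg(revenue=("amount", "sum"), orders=("order_id", "nunique"))
)
fact_sales.to_csv("fact_sales.csv", index=False)   # fed into the Power BI dataset
```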
Must-Have Skills
- Strong experience building robust data models in Power BI
- Hands-on expertise with DAX (complex measures and calculated columns)
- Proficiency in M Language (Power Query) beyond drag-and-drop UI
- Clear understanding of data visualization best practices (less fluff, more insight)
- Solid grasp of SQL and Python for data processing
- Strong analytical thinking and ability to craft compelling data stories
- Client Servicing Background.
Good-to-Have (Bonus Points)
- Experience using DAX Studio and Tabular Editor
- Prior work in a high-volume data processing production environment
- Exposure to modern CI/CD practices or version control with BI tools
Why Join Ven Analytics?
- Be part of a fast-growing startup that puts data at the heart of every decision.
- Opportunity to work on high-impact, real-world business challenges.
- Collaborative, transparent, and learning-oriented work environment.
- Flexible work culture and focus on career development.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.), as sketched after this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases such as MySQL, including SQL query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
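A hedged sketch of the Python-plus-AWS work described above: uploading an artifact to S3 and invoking a Lambda function with boto3. The bucket, key, and function name are placeholders, not project resources.
```python
# Illustrative boto3 usage; bucket, key, and function names are placeholders.
import json
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

def upload_and_trigger(path: str = "report.csv") -> dict:
    s3.upload_file(path, "example-bucket", f"uploads/{path}")   # push the artifact to S3
    response = lambda_client.invoke(
        FunctionName="process-upload",                          # hypothetical Lambda
        InvocationType="RequestResponse",
        Payload=json.dumps({"key": f"uploads/{path}"}).encode(),
    )
    return json.loads(response["Payload"].read())

if __name__ == "__main__":
    print(upload_and_trigger())
```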
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
JD for Cloud engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tools.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
What We’re Looking For
- 3-5 years of Data Science & ML experience in consumer internet / B2C products.
- Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
- Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection.
- Statistical chops: finding meaningful insights in large data sets.
- Programming ninja: R, Python, SQL + hands-on with NumPy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
- Visualization skills: Redshift, Tableau, Looker, or similar.
- A strong problem-solver with curiosity hardwired into your DNA.
Brownie Points
- Experience with big data platforms: Hadoop, Spark, Hive, Pig.
- Extra love if you’ve played with BI tools like Tableau or Looker.
Required skills and experience
• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)
• Master’s degree a plus
• 3-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must.
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• PowerShell is nice to have
• Software development skill sets in Java or Ruby.
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
Backend Engineer (MongoDB / API Integrations / AWS / Vectorization)
Position Summary
We are hiring a Backend Engineer with expertise in MongoDB, data vectorization, and advanced AI/LLM integrations. The ideal candidate will have hands-on experience developing backend systems that power intelligent data-driven applications, including robust API integrations with major social media platforms (Meta, Instagram, Facebook, with expansion to TikTok, Snapchat, etc.). In addition, this role requires deep AWS experience (Lambda, S3, EventBridge) to manage serverless workflows, automate cron jobs, and execute both scheduled and manual data pulls. You will collaborate closely with frontend developers and AI engineers to deliver scalable, resilient APIs that power our platform.
Key Responsibilities
- Design, implement, and maintain backend services with MongoDB and scalable data models.
- Build pipelines to vectorize data for retrieval-augmented generation (RAG) and other AI-driven features (a sketch follows this list).
- Develop robust API integrations with major social platforms (Meta, Instagram Graph API, Facebook API; expand to TikTok, Snapchat, etc.).
- Implement and maintain AWS Lambda serverless functions for scalable backend processes.
- Use AWS EventBridge to schedule cron jobs and manage event-driven workflows.
- Leverage AWS S3 for structured and unstructured data storage, retrieval, and processing.
- Build workflows for manual and automated data pulls from external APIs.
- Optimize backend systems for performance, scalability, and reliability at high data volumes.
- Collaborate with frontend engineers to ensure smooth integration into Next.js applications.
- Ensure security, compliance, and best practices in API authentication (OAuth, tokens, etc.).
- Contribute to architecture planning, documentation, and system design reviews.
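To illustrate the vectorization pipeline item above, here is a hedged sketch that embeds a document, stores it in MongoDB, and queries it with Atlas Vector Search; the connection URI, database/collection, index name, and embedding model are all assumptions.
```python
# Hedged sketch: embed, store, and search documents with MongoDB Atlas Vector Search.
# URI, database, collection, index name, and embedding model are assumptions.
from openai import OpenAI
from pymongo import MongoClient

openai_client = OpenAI()                                            # reads OPENAI_API_KEY
mongo = MongoClient("mongodb+srv://user:pass@cluster.example.net")  # placeholder URI
posts = mongo["social"]["posts"]

def embed(text: str) -> list[float]:
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def ingest(doc: dict) -> None:
    doc["embedding"] = embed(doc["caption"])               # vectorize before storing
    posts.insert_one(doc)

def search(query: str, k: int = 5):
    return posts.aggregate([
        {"$vectorSearch": {
            "index": "posts_vector_index",                 # assumed Atlas index name
            "path": "embedding",
            "queryVector": embed(query),
            "numCandidates": 100,
            "limit": k,
        }}
    ])
```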
Required Skills/Qualifications
- Strong expertise with MongoDB (including Atlas) and schema design.
- Experience with data vectorization and embeddings (OpenAI, Pinecone, MongoDB Atlas Vector Search, etc.).
- Proven track record of social media API integrations (Meta, Instagram, Facebook; additional platforms a plus).
- Proficiency in Node.js, Python, or other backend languages for API development.
- Deep understanding of AWS services:
- Lambda for serverless functions.
- S3 for structured/unstructured data storage.
- EventBridge for cron jobs, scheduled tasks, and event-driven workflows.
- Strong understanding of REST and GraphQL API design.
- Experience with data optimization, caching, and large-scale API performance.
Preferred Skills/Experience
- Experience with real-time data pipelines (Kafka, Kinesis, or similar).
- Familiarity with CI/CD pipelines and automated deployments on AWS.
- Knowledge of serverless architecture best practices.
- Background in SaaS platform development or data analytics systems.
Job Description
Position - Full stack Developer
Location - Mumbai
Experience - 2-5 Years
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India at the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state-of-the-art, enterprise-standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust, unit-testable tech modules and automate recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML 5
- CSS frameworks (LESS / SASS / Tailwind)
- ES6 / TypeScript
- Desktop apps (Electron / Tauri)
- Component libraries (Bootstrap, Material UI, Lit)
- Responsive web layout (Flex layout, Grid layout)
- Package managers (yarn / npm / turbo)
- Build tools (Vite / Webpack / Parcel)
- Frameworks: React with Redux or MobX / Next.js
- Design patterns
- Testing (Jest / Mocha / Jasmine / Cypress)
- Functional Programming concepts
- Scripting (PowerShell, Bash, Python)
Backend Skills
- Node.js (Express / NestJS)
- Python / Rust
- REST API
- SOLID Design Principles
- Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
- Caching (Redis)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift, Google Cloud)
- Version control (Git)
- GitOps
- Automation (Terraform, Ansible)
Cloud Skills
- Object storage
- VPC concepts
- Containerized deployment
- Serverless architecture
Other Skills
- Innovation and thought leadership
- UI - UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
To know more about us- https://haystackanalytics.in/
Job Description:
Position - Cloud Developer
Experience - 5 - 8 years
Location - Mumbai & Pune
Responsibilities:
- Design, develop, and maintain robust software applications using widely adopted languages suited to the application design, with a strong focus on clean, maintainable, and efficient code.
- Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
- Develop RESTful APIs and backend services aligned with modern architectural practices.
- Apply object-oriented programming principles and design patterns to build scalable systems.
- Build and maintain automated test frameworks and scripts to ensure high product quality.
- Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
- Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
- Use Git and related version control practices effectively in a team-based development environment.
- Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
Skills:
- 5+ years of experience
- Experience with IaC modules
- Terraform coding experience, including building Terraform modules as part of a central platform team
- Azure/GCP cloud experience is a must
- Experience with C#/Python/Java coding is good to have
Dear Candidate,
Greetings from Wissen Technology.
We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
- Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.
Job Description:
Please find below details:
Experience - 4+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate will be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and Application Development teams, to perform environment and application maintenance and support.
Key Responsibilities
• Provide Tier 2/3 product technical support.
• Build software to help with operations and support activities.
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).
• Must spend a minimum of one week per month on call to help with off-hours emergencies and maintenance activities.
Required skills and experience
• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)
• Master’s degree a plus
• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must.
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• PowerShell is nice to have
• Software development skill sets in Java or Ruby.
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
Strong Full stack developer Profile
Mandatory (Experience 1) - Must have minimum 5+ YOE in Software Development.
Mandatory (Experience 2) - Must have 4+ YOE in backend using Python.
Mandatory (Experience 3) - Must have good experience in frontend using React JS with knowledge of HTML, CSS, and JavaScript.
Mandatory (Experience 4) - Must have experience in any database - MySQL / PostgreSQL / Oracle / SQL Server.

One of the reputed Client in India
Our client is looking to hire a Databricks Admin immediately.
This is PAN-India bulk hiring.
Minimum 6-8+ years of experience with Databricks, PySpark/Python, and AWS.
AWS experience is a must.
A notice period of 15-30 days is preferred.
Share profiles at hr at etpspl dot com
Please refer/share our email to your friends/colleagues who are looking for a job.
Full-Stack Developer
Exp: 5+ years required
Night shift: 8 PM-5 AM / 9 PM-6 AM
Only immediate joiners can apply.
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices (a minimal pytest sketch follows this list).
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
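As a minimal, hypothetical example of how such automated tests plug into CI/CD, the pytest smoke test below hits a service health endpoint; the base URL and endpoint path are assumptions supplied by the environment.
```python
# Hypothetical smoke test wired into CI; the base URL and endpoint are assumptions.
import os

import pytest
import requests

BASE_URL = os.environ.get("SERVICE_BASE_URL", "http://localhost:8080")

@pytest.mark.smoke
def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/healthz", timeout=5)   # assumed health endpoint
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```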
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
Wissen Technology is hiring for Data Engineer
About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.
Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.
Experience: 4-7 years
Notice Period: Immediate to 15 days
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python and Pandas.
- Implement and manage workflows using Airflow (a minimal DAG sketch follows this list).
- Utilize Azure Cloud Services for data storage and processing.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Optimize and scale data infrastructure to meet business needs.
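A minimal sketch of the Python/Pandas/Airflow combination above; the DAG id, file paths, and schedule are assumptions, not the team's actual pipeline (older Airflow versions use schedule_interval instead of schedule).
```python
# Minimal DAG sketch; dag_id, paths, and schedule are assumptions.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def transform_daily_extract(**_):
    df = pd.read_csv("/tmp/raw_events.csv")                        # placeholder input
    daily = df.groupby("event_date", as_index=False)["value"].sum()
    daily.to_parquet("/tmp/daily_summary.parquet", index=False)    # placeholder output

with DAG(
    dag_id="daily_events_summary",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # `schedule_interval` on older Airflow versions
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=transform_daily_extract)
```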
Qualifications and Required Skills:
- Proficiency in Python (Must Have).
- Strong experience with Pandas (Must Have).
- Expertise in Airflow (Must Have).
- Experience with Azure Cloud Services.
- Good communication skills.
Good to Have Skills:
- Experience with Pyspark.
- Knowledge of Kubernetes.
Wissen Sites:
- Website: http://www.wissen.com
- LinkedIn: https://www.linkedin.com/company/wissen-technology
- Wissen Leadership: https://www.wissen.com/company/leadership-team/
- Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
- Wissen Thought Leadership: https://www.wissen.com/articles/
SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking (a sketch follows this list).
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
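As a hedged illustration of the PII detection and masking responsibility, the snippet below applies a lightweight regex-plus-hashing pass over records; patterns and field names are examples only, and a production system would more likely lean on a managed service such as AWS Comprehend.
```python
# Toy masking pass; regex patterns and field handling are illustrative only.
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{8,}\d")

def mask_pii(record: dict) -> dict:
    """Hash emails and redact phone numbers in string fields before landing data."""
    masked = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = EMAIL_RE.sub(lambda m: hashlib.sha256(m.group().encode()).hexdigest()[:12], value)
            value = PHONE_RE.sub("***REDACTED***", value)
        masked[key] = value
    return masked

print(mask_pii({"note": "reach me at jane.doe@example.com or +91 98765 43210"}))
```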
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Deep Terraform expertise; extensive CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
Experience: 3–7 Years
Locations: Pune / Bangalore / Mumbai
Notice Period: Immediate joiners only
Employment Type: Full-time
🛠️ Key Skills (Mandatory):
- Python: Strong coding skills for data manipulation and automation.
- PySpark: Experience with distributed data processing using Spark.
- SQL: Proficient in writing complex queries for data extraction and transformation.
- Azure Databricks: Hands-on experience with notebooks, Delta Lake, and MLflow
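A minimal PySpark and Delta Lake sketch in the spirit of the skills above; paths and schema are placeholders, and it assumes a Databricks cluster (or a local Spark session with the Delta package) where spark is available.
```python
# Minimal PySpark + Delta Lake sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # on Databricks, `spark` already exists

events = spark.read.json("/mnt/raw/events/")              # hypothetical landing zone
daily = (events
         .withColumn("event_date", F.to_date("timestamp"))
         .groupBy("event_date", "event_type")
         .agg(F.count("*").alias("events")))

(daily.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .save("/mnt/curated/daily_events"))                 # Delta table for downstream SQL
```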
Interested candidates please share resume with details below.
Total Experience -
Relevant Experience in Python, PySpark, SQL, Azure Databricks -
Current CTC -
Expected CTC -
Notice period -
Current Location -
Desired Location -
🚀 We’re Hiring: Python Developer – Quant Strategies & Backtesting | Mumbai (Goregaon East)
Are you a skilled Python Developer passionate about financial markets and quantitative trading?
We’re looking for someone to join our growing Quant Research & Algo Trading team, where you’ll work on:
🔹 Developing & optimizing trading strategies in Python
🔹 Building backtesting frameworks across multiple asset classes (a toy sketch follows below)
🔹 Processing and analyzing large market datasets
🔹 Collaborating with quant researchers & traders on real-world strategies
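For flavour, a toy moving-average crossover backtest on synthetic prices, showing the kind of vectorized pandas/NumPy work involved; it is not a real strategy and the parameters are arbitrary.
```python
# Toy moving-average crossover backtest on synthetic prices; not a real strategy.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 1000))), name="close")

fast, slow = prices.rolling(10).mean(), prices.rolling(50).mean()
position = (fast > slow).astype(int).shift(1).fillna(0)   # enter on the next bar, no lookahead
returns = prices.pct_change().fillna(0)
strategy = position * returns

equity = (1 + strategy).cumprod()
sharpe = np.sqrt(252) * strategy.mean() / strategy.std()
print(f"final equity: {equity.iloc[-1]:.3f}  sharpe: {sharpe:.2f}")
```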
What we’re looking for:
✔️ 3+ years of experience in Python development (preferably in fintech/trading/quant domains)
✔️ Strong knowledge of Pandas, NumPy, SciPy, SQL
✔️ Experience in backtesting, data handling & performance optimization
✔️ Familiarity with financial markets is a big plus
📍 Location: Goregaon East, Mumbai
💼 Competitive package + exposure to cutting-edge quant strategies
Wissen Technology is hiring for Data Engineer
Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables and Parquet, and be proficient in Pandas and PySpark.
Experience: 7+ years
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python (Pandas, PySpark).
- Optimize data workflows and ensure efficient data processing.
- Work with Delta Tables and Parquet for data storage and management (a Parquet example follows this list).
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement best practices for data engineering and workflow optimization.
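A small, hedged example of the Parquet workflow above: writing a partitioned Parquet dataset with pandas/pyarrow and reading back a single partition; file paths and columns are illustrative.
```python
# Hedged Parquet example; the dataset, columns, and paths are illustrative.
import pandas as pd

df = pd.DataFrame({
    "event_date": ["2024-01-01", "2024-01-01", "2024-01-02"],
    "country": ["IN", "US", "IN"],
    "value": [10, 7, 3],
})

# Partitioning by date keeps downstream, date-bounded scans cheap.
df.to_parquet("events_parquet", engine="pyarrow", partition_cols=["event_date"])

jan1 = pd.read_parquet("events_parquet", filters=[("event_date", "=", "2024-01-01")])
print(jan1)
```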
Qualifications and Required Skills:
- Proficiency in Python, specifically with Pandas and PySpark.
- Strong experience in data engineering and workflow optimization.
- Knowledge of Delta Tables and Parquet.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication skills.
Good to Have Skills:
- Experience with Databricks.
- Knowledge of Apache Spark, DBT, and Airflow.
- Advanced Pandas optimizations.
- Familiarity with PyTest/DBT testing frameworks.
Wissen | Driving Digital Transformation
A technology consultancy that drives digital innovation by connecting strategy and execution, helping global clients to strengthen their core technology.



















