50+ Python Jobs in Mumbai | Python Job openings in Mumbai
About the Role:
We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.
Key Responsibilities:
Data Modelling & Visualization
- Build scalable and high-quality data models in Power BI using best practices.
- Define relationships, hierarchies, and measures to support effective storytelling.
- Ensure dashboards meet standards for accuracy, visualization principles, and timeliness.
Data Transformation & ETL
- Perform advanced data transformation using Power Query (M Language) beyond UI-based steps.
- Design and optimize ETL pipelines using SQL, Python, and Big Data tools (see the sketch after this list).
- Manage and process large-scale datasets from various sources and formats.
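For illustration, here is a minimal extract-transform-load step in Python with pandas and SQLite; the table and column names (raw_sales, clean_sales, amount, region) are placeholders, not a reference to any real schema:

```python
# A tiny illustrative ETL pass: pull rows with SQL, clean them in pandas,
# and write the result back for BI consumption. All names are hypothetical.
import sqlite3
import pandas as pd

conn = sqlite3.connect("warehouse.db")  # stand-in for a real warehouse

# Extract: pull raw rows with SQL
raw = pd.read_sql_query("SELECT order_id, amount, region FROM raw_sales", conn)

# Transform: drop rows with missing amounts and normalize region labels
clean = raw.dropna(subset=["amount"])
clean["region"] = clean["region"].str.strip().str.title()

# Load: persist the cleaned table for dashboards to query
clean.to_sql("clean_sales", conn, if_exists="replace", index=False)
conn.close()
```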
Business Problem Translation
- Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions.
- Decompose business questions into testable hypotheses and identify relevant datasets for validation.
Performance & Troubleshooting
- Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability.
- Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.
Analytical Storytelling
- Apply analytical thinking to design insightful dashboards—prioritizing clarity and usability over aesthetics.
- Develop data narratives that drive business impact.
Solution Design
- Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.
Required Skills & Experience:
- At least 3 years of experience as a Data Engineer or in a similar data-focused role.
- Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices.
- Hands-on with Python and SQL for data analysis, automation, and backend data transformation.
- Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
- Familiarity with DAX Studio and Tabular Editor.
- Experience in handling high-volume data in production environments.
Preferred (Good to Have):
- Exposure to Big Data technologies such as:
- PySpark
- Hadoop
- Hive / HDFS
- Spark Streaming (optional but preferred)
Why Join Us?
- Work with a team that's passionate about data innovation.
- Exposure to modern data stack and tools.
- Flat structure and collaborative culture.
- Opportunity to influence data strategy and architecture decisions.
About us
Cere Labs is a Mumbai-based company working in the field of Artificial Intelligence. It is a product company that uses the latest technologies, such as Python, Redis, neo4j, MVC, Docker, and Kubernetes, to build its AI platform. Cere Labs’ clients are primarily from the Banking and Finance domain in India and the US. The company offers a great environment for its employees to learn and grow in technology.
Software Developer
Job brief
Cere Labs is seeking to hire a skilled and passionate software developer to help with the development of our current projects and product. Your duties will primarily revolve around building software by writing code, as well as modifying software to fix errors and improve its performance. You will also be involved in writing test cases and testing.
To be successful in this role, you will need extensive knowledge of programming languages like Java, Python, JavaScript, and React.
Ultimately, the role of the Software Engineer is to build high-quality, innovative, and fully performing software that complies with coding standards and technical design.
Responsibilities
- Develop flowcharts, layouts and documentation to identify requirements and solutions
- Write well-designed, testable code
- Develop software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug and upgrade existing systems
- Deploy programs and test the deployed code
- Comply with project plans and industry standards
Requirements
- BE degree in Computer Science or Information Technology
- Ability to understand given requirements and produce a design based on the specification
- Ability to develop unit tests for code components or complete applications
- Must be a full-stack developer and understand concepts of software engineering
- Ability to develop software in Python, Java, and JavaScript
- Excellent knowledge of relational databases (MySQL), ORM technologies (JPA2, Hibernate), and in-memory data stores such as Redis
- Experience developing web applications using at least one popular web framework (JSF, Spring MVC, React) is preferred
- Experience with test-driven development
- Proficiency in software engineering tools, including popular IDEs such as PyCharm, Visual Studio Code, and Eclipse
- Proven work experience as a Software Engineer or Software Developer will be an added advantage
Working conditions
Hours: 9:00 AM to 6:00 PM
Weekly off: Sunday, First and Third Saturdays
Mode: Work from office
Recruitment process
The selection process includes:
- Written test
- Technical interview
- Final interview
Compensation
CTC: Rs. 3–4 lakhs per annum, depending on performance in the selection process.
Skills - MLOps Pipeline Development | CI/CD (Jenkins) | Automation Scripting | Model Deployment & Monitoring | ML Lifecycle Management | Version Control & Governance | Docker & Kubernetes | Performance Optimization | Troubleshooting | Security & Compliance
Responsibilities
- Design, develop, and implement MLOps pipelines for the continuous deployment and integration of machine learning models
- Collaborate with data scientists and engineers to understand model requirements and optimize deployment processes
- Automate the training, testing, and deployment processes for machine learning models (see the sketch after this list)
- Continuously monitor and maintain models in production, ensuring optimal performance, accuracy and reliability
- Implement best practices for version control, model reproducibility and governance
- Optimize machine learning pipelines for scalability, efficiency and cost-effectiveness
- Troubleshoot and resolve issues related to model deployment and performance
- Ensure compliance with security and data privacy standards in all MLOps activities
- Keep up to date with the latest MLOps tools, technologies and trends
- Provide support and guidance to other team members on MLOps practices
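By way of illustration, here is a sketch of the kind of train-evaluate-promote step a CI/CD stage (for example, a Jenkins job) might invoke; the dataset, quality gate, and artifact path are all assumptions:

```python
# Hypothetical model quality gate for a CI/CD stage: train, evaluate, and
# promote only if accuracy clears a threshold. Dataset and paths are placeholders.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))

if acc >= 0.95:  # assumed quality gate
    joblib.dump(model, "model-candidate.joblib")  # artifact for the deploy stage
    print(f"PROMOTE accuracy={acc:.3f}")
else:
    raise SystemExit(f"REJECT accuracy={acc:.3f} below gate")
```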
Required Skills And Experience
- 3-10 years of experience in MLOps, DevOps or a related field
- Bachelor’s degree in Computer Science, Data Science, or a related field
- Strong understanding of machine learning principles and model lifecycle management
- Experience in Jenkins pipeline development
- Experience in automation scripting
Position: Automation Test Engineer (0-1 year)
Location: Mumbai
Company: Big Rattle Technologies Private Limited
Immediate Joiners only
Job Summary:
We are looking for a motivated and detail-oriented QA Automation Engineer (Fresher) who is eager to learn and grow in software testing and automation. The candidate will work under the guidance of senior QA engineers to design, execute, and maintain test cases for custom software and products developed within the organisation.
Key Responsibilities:
● Testing & Quality Assurance:
○ Understand business and technical requirements with guidance from senior team members
○ Design and execute manual test cases for web and mobile applications
○ Assist in identifying, documenting, and tracking defects using bug-tracking tools
● Automation Testing:
○ Learn to design, develop, and maintain basic automation test scripts for Web applications, Mobile applications, APIs
○ Execute automated test suites and analyze test results
○ Support regression testing activities during releases
● Collaboration & Documentation:
○ Work closely with developers, QA leads, and product teams to understand features and acceptance criteria
○ Prepare and maintain test documentation: test cases, test execution reports, and basic automation documentation
Required Skills:
● Testing Fundamentals
○ Understanding of SDLC, STLC, and basic Agile concepts
○ Knowledge of different testing types - Manual, Functional & Regression testing
● Automation & Tools (Exposure is sufficient)
○ Awareness or hands-on practice with automation tools such as Selenium / Cypress / Playwright and TestNG / JUnit / PyTest (see the sketch at the end of this section)
○ Basic understanding of mobile automation concepts (Appium – optional) and API testing using tools like Postman
● Technical Skills
○ Basic programming knowledge in Java / Python / JavaScript
○ Understanding of SQL queries for basic data validation
● Tools & Reporting
○ Familiarity with bug-tracking or test management tools (e.g., JIRA/ Zoho Sprints)
○ Ability to prepare simple test execution reports and defect summaries
● Soft Skills:
○ Good communication and interpersonal skills
○ Strong attention to detail and quality mindset
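For candidates new to the stack, a minimal PyTest + Selenium smoke test might look like the sketch below; the URL and element locator are invented for illustration, and a local Chrome/ChromeDriver setup is assumed:

```python
# Minimal PyTest + Selenium smoke test. The URL and locator are placeholders;
# assumes Chrome and a matching ChromeDriver are installed locally.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    drv = webdriver.Chrome()
    yield drv
    drv.quit()

def test_login_page_shows_username_field(driver):
    driver.get("https://example.com/login")           # placeholder URL
    field = driver.find_element(By.NAME, "username")  # placeholder locator
    assert field.is_displayed()
```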
Good to Have (Not Mandatory):
● Academic or personal project experience in automation testing
● Awareness of Performance testing tools (JMeter – basic understanding) and Security testing concepts (VAPT – theoretical knowledge is sufficient)
● ISTQB Foundation Level certification (or willingness to pursue)
Required Qualification:
● Bachelor’s degree in Computer Science, Engineering, or a related field
● Freshers or candidates with up to 1 year of experience in Software Testing
● Strong understanding of SDLC, STLC, and Agile methodologies.
● Solid foundation in testing principles, and eagerness to build a career in QA Automation.
Why should you join Big Rattle?
Big Rattle Technologies specializes in AI/ ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients. Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.
What We Offer:
● Opportunity to work on diverse projects for Fortune 500 clients.
● Competitive salary and performance-based growth.
● Dynamic, collaborative, and growth-oriented work environment.
● Direct impact on product quality and client satisfaction.
● 5-day hybrid work week.
● Certification reimbursement.
● Healthcare coverage.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least two use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you okay with the Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC; see the sketch after this list)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
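To make the fraud/risk expectation concrete, here is an illustrative classifier on an imbalanced synthetic dataset, evaluated with PR-AUC (average precision) as described above; the data and threshold are placeholders, not a reference implementation:

```python
# Fraud-style classification sketch on a synthetic, heavily imbalanced dataset,
# scored with PR-AUC. Sample sizes and class weights are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("PR-AUC:", average_precision_score(y_te, scores))
```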
SimplyFI is a fast-growing AI and blockchain-powered product company transforming trade finance and banking through digital innovation. We are looking for a Full Stack Developer with strong expertise in ReactJS (primary) and solid working knowledge of Python (secondary) to join our team in Thane, Mumbai.
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications with ReactJS as the primary technology.
- Build and integrate backend services using Python (Flask/Django/FastAPI).
- Develop and manage RESTful APIs for system integration.
- Collaborate on AI-driven product features and support machine learning model integration when required.
- Work closely with DevOps teams to deploy, monitor, and optimize applications on AWS.
- Ensure application performance, scalability, security, and code quality.
- Collaborate with product managers, designers, and QA teams to deliver high-quality product features.
- Write clean, maintainable, and testable code following best practices.
- Participate in agile processes—code reviews, sprint planning, and daily standups.
Required Skills & Qualifications
- Strong hands-on experience with ReactJS, including hooks, state management, Redux, and API integrations.
- Proficient in backend development using Python with frameworks like Flask, Django, or FastAPI (a minimal sketch follows this list).
- Solid understanding of RESTful API design and secure authentication (OAuth2, JWT).
- Experience working with databases: MySQL, PostgreSQL, MongoDB.
- Familiarity with microservices architecture and modern software design patterns.
- Experience with Git, CI/CD pipelines, Docker, and Kubernetes.
- Strong problem-solving, debugging, and performance optimization skills.
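As a rough sketch of the backend side of this stack, here is a minimal FastAPI service of the sort a React frontend would call; the route, model, and in-memory behavior are illustrative assumptions, not SimplyFI's actual API:

```python
# Minimal FastAPI endpoint sketch (assumes Pydantic v2 for model_dump()).
# Route name and Invoice fields are hypothetical.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Invoice(BaseModel):
    number: str
    amount: float

@app.post("/api/invoices")
def create_invoice(invoice: Invoice):
    # A real service would persist to MySQL/PostgreSQL/MongoDB instead
    return {"status": "created", "invoice": invoice.model_dump()}

# Run locally with: uvicorn main:app --reload
```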
About the Company
SimplyFI Softech India Pvt. Ltd. is a product-led company working across AI, Blockchain, and Cloud. The team builds intelligent platforms for fintech, SaaS, and enterprise use cases, focused on solving real business problems with production-grade systems.
Role Overview
This role is for someone who enjoys working hands-on with data and machine learning models. You’ll support real-world AI use cases end to end, from data prep to model integration, while learning how AI systems are built and deployed in production.
Key Responsibilities
- Design, develop, and deploy machine learning models with guidance from senior engineers
- Work with structured and unstructured datasets for cleaning, preprocessing, and feature engineering
- Implement ML algorithms using Python and standard ML libraries
- Train, test, and evaluate models and track performance metrics
- Assist in integrating AI/ML models into applications and APIs
- Perform basic data analysis and visualization to extract insights
- Participate in code reviews, documentation, and team discussions
- Stay updated on ML, AI, and Generative AI trends
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, AI, Data Science, or a related field
- Strong foundation in Python
- Clear understanding of core ML concepts: supervised and unsupervised learning
- Hands-on exposure to NumPy, Pandas, and Scikit-learn (see the sketch after this list)
- Basic familiarity with TensorFlow or PyTorch
- Understanding of data structures, algorithms, and statistics
- Good analytical thinking and problem-solving skills
- Comfortable working in a fast-moving product environment
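For a sense of the expected fundamentals, a small supervised-learning exercise with Scikit-learn might look like this; the iris dataset and model choice are just illustrative:

```python
# Train/evaluate loop on a toy dataset, showing the core supervised-learning
# workflow listed above. Dataset and hyperparameters are placeholders.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```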
Good to Have
- Exposure to NLP, Computer Vision, or Generative AI
- Experience with Jupyter Notebook or Google Colab
- Basic knowledge of SQL or NoSQL databases
- Understanding of REST APIs and model deployment concepts
- Familiarity with Git/GitHub
- AI/ML internships or academic projects
Responsible for developing, enhancing, modifying, and maintaining chatbot applications in the Global Markets environment. The role involves designing, coding, testing, debugging, and documenting conversational AI solutions, along with supporting activities aligned to the corporate systems architecture.
You will work closely with business partners to understand requirements, analyze data, and deliver optimal, market-ready conversational AI and automation solutions.
Key Responsibilities
- Design, develop, test, debug, and maintain chatbot and virtual agent applications
- Collaborate with business stakeholders to define and translate requirements into technical solutions
- Analyze large volumes of conversational data to improve chatbot accuracy and performance
- Develop automation workflows for data handling and refinement
- Train and optimize chatbots using historical chat logs and user-generated content
- Ensure solutions align with enterprise architecture and best practices
- Document solutions, workflows, and technical designs clearly
Required Skills
- Hands-on experience in developing virtual agents (chatbots/voicebots) and Natural Language Processing (NLP)
- Experience with one or more AI/NLP platforms such as:
- Dialogflow, Amazon Lex, Alexa, Rasa, LUIS, Kore.AI
- Microsoft Bot Framework, IBM Watson, Wit.ai, Salesforce Einstein, Converse.ai
- Strong programming knowledge in Python, JavaScript, or Node.js
- Experience training chatbots using historical conversations or large-scale text datasets
- Practical knowledge of:
- Formal syntax and semantics
- Corpus analysis
- Dialogue management
- Strong written communication skills
- Strong problem-solving ability and willingness to learn emerging technologies
Nice-to-Have Skills
- Understanding of conversational UI and voice-based processing (Text-to-Speech, Speech-to-Text)
- Experience building voice apps for Amazon Alexa or Google Home
- Experience with Test-Driven Development (TDD) and Agile methodologies
- Ability to design and implement end-to-end pipelines for AI-based conversational applications
- Experience in text mining, hypothesis generation, and historical data analysis
- Strong knowledge of regular expressions for data cleaning and preprocessing
- Understanding of API integrations, SSO, and token-based authentication
- Experience writing unit test cases as per project standards
- Knowledge of HTTP, REST APIs, sockets, and web services
- Ability to perform keyword and topic extraction from chat logs
- Experience training and tuning topic modeling algorithms such as LDA and NMF (see the sketch after this list)
- Understanding of classical Machine Learning algorithms and appropriate evaluation metrics
- Experience with NLP frameworks such as NLTK and spaCy
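As an illustration of topic extraction from chat logs, here is a tiny LDA sketch using scikit-learn; the corpus is placeholder data and the topic count is arbitrary:

```python
# Keyword/topic extraction from a toy chat corpus with CountVectorizer + LDA.
# The chats, topic count, and top-terms cutoff are all illustrative.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

chats = [
    "reset my password please",
    "card payment failed twice",
    "how do I reset the password",
    "payment declined on my card",
]
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(chats)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:]]
    print(f"topic {i}: {top}")
```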
🚀 Hiring: QA Engineer at Deqode
⭐ Experience: 3+ Years
📍 Location: Mumbai and Bangalore
⭐ Work Mode: 5 days work from office
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
We are looking for a Backend API Automation Engineer with strong experience in API testing and automation. The candidate should be skilled in scripting and capable of handling both manual and automated testing.
Key Skills Required:
- Backend API automation testing experience
- Strong scripting skills in Python or JavaScript (Java acceptable)
- Hands-on experience with REST Assured and Postman
- Experience in manual testing along with test automation
Responsibilities:
- Design, develop, and execute automated tests for backend APIs (see the sketch after this list)
- Perform manual and automated API testing to ensure quality and reliability
- Collaborate with development teams to identify, report, and resolve issues
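A minimal Python-flavored example of backend API automation with requests + PyTest follows; the base URL, endpoints, and expected responses are invented for illustration:

```python
# Two API automation checks with requests, runnable under PyTest.
# BASE, routes, and payloads are placeholders for a real service.
import requests

BASE = "https://api.example.com"  # placeholder base URL

def test_create_user_returns_201():
    resp = requests.post(f"{BASE}/users", json={"name": "test"}, timeout=10)
    assert resp.status_code == 201
    assert resp.json()["name"] == "test"

def test_get_unknown_user_returns_404():
    resp = requests.get(f"{BASE}/users/does-not-exist", timeout=10)
    assert resp.status_code == 404
```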
We are looking for a Senior Backend Engineer to build and operate the core AI/ML-backed systems that power large-scale, consumer-facing products. You will work on production-grade AI runtimes, retrieval systems, and ML-adjacent backend infrastructure, making pragmatic tradeoffs across quality, latency, reliability, and cost.
This role is not an entry point into AI/ML. You are expected to already have hands-on experience shipping ML-backed backend systems in production.
At Proximity, you won’t just build APIs - you’ll own critical backend systems end-to-end, collaborate closely with Applied ML and Product teams, and help define the foundations that power intelligent experiences at scale.
Responsibilities -
- Own and deliver end-to-end backend systems for AI product runtime, including orchestration, request lifecycle management, state/session handling, and policy enforcement.
- Design and implement retrieval and memory primitives end-to-end — document ingestion, chunking strategies, embeddings generation, indexing, vector/hybrid search, re-ranking, caching, freshness, and deletion semantics.
- Productionize ML workflows and interfaces, including feature and metadata services, online/offline parity, model integration contracts, and evaluation instrumentation.
- Drive performance, reliability, and cost optimization, owning P50/P95 latency, throughput, cache hit rates, token and inference costs, and infrastructure efficiency.
- Build observability by default, including structured logs, metrics, distributed tracing, guardrail signals, failure taxonomies, and reliable fallback paths.
- Collaborate closely with Applied ML teams on model routing, prompt and tool schemas, evaluation datasets, and release safety gates.
- Write clean, testable, and maintainable backend code, contributing to design reviews, code reviews, and operational best practices.
- Take systems from design → build → deploy → operate, including on-call ownership and incident response.
- Continuously identify bottlenecks and failure modes in AI-backed systems and proactively improve system robustness.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 6–10 years of experience building backend systems in production, with 2–3+ years working on ML/AI-backed products such as search, recommendations, ranking, RAG pipelines, or AI assistants.
- Strong practical understanding of ML system fundamentals, including embeddings, vector similarity, reranking, retrieval quality, and evaluation metrics (precision/recall, nDCG, MRR).
- Proven experience implementing or operating RAG pipelines, covering ingestion, chunking, indexing, query understanding, hybrid retrieval, and rerankers (a toy retrieval sketch follows this list).
- Solid distributed systems fundamentals, including API design, idempotency, concurrency, retries, circuit breakers, rate limiting, and multi-tenant reliability.
- Experience with common ML/AI platform components, such as feature stores, metadata systems, streaming or batch pipelines, offline evaluation jobs, and A/B measurement hooks.
- Strong proficiency in backend programming languages and frameworks (e.g., Go, Java, Python, or similar) and API development.
- Ability to work independently, take ownership of complex systems, and collaborate effectively with cross-functional teams.
- Strong problem-solving, communication, and system-design skills.
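To ground the retrieval piece, here is a toy sketch of the index-and-search core of a RAG pipeline using FAISS; random vectors stand in for real embeddings, and the chunks are placeholders:

```python
# Toy RAG retrieval core: normalize embeddings, build a flat inner-product
# index (cosine on normalized vectors), and search. Embeddings are random
# stand-ins; a real system would use a trained embedding model.
import numpy as np
import faiss

dim = 384
chunks = ["refund policy ...", "shipping times ...", "warranty terms ..."]
embeddings = np.random.rand(len(chunks), dim).astype("float32")
faiss.normalize_L2(embeddings)

index = faiss.IndexFlatIP(dim)
index.add(embeddings)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 2)  # top-2 nearest chunks
print([(chunks[i], float(s)) for i, s in zip(ids[0], scores[0])])
```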
Desired Skills -
- Experience with agentic runtimes, including tool-calling or function-calling patterns, structured outputs, and production guardrails.
- Hands-on exposure to vector and hybrid retrieval stacks such as FAISS, Milvus, Pinecone, or Elasticsearch.
- Experience running systems on Kubernetes, with strong knowledge of observability stacks like OpenTelemetry, Prometheus, Grafana, and distributed tracing.
- Familiarity with privacy, security, and data governance considerations for user and model data.
Benefits
- Best in class compensation: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet engineers, designers, and product leaders — and learn from experts across domains.
- Keep on learning with a world-class team: Work on real, production AI systems at scale, challenge yourself daily, and grow alongside some of the best minds in the industry.
We are looking for a Data Engineer to help build and scale the data pipelines and core datasets that power analytics, AI model evaluation, safety systems, and business decision-making across Bharat AI’s agentic AI platform.
This role sits at the heart of how data flows through the organization. You will work closely with Product, Data Science, Infrastructure, Marketing, Finance, and AI/Research teams to ensure data is reliable, accessible, and production-ready as the platform scales rapidly.
At Proximity, you won’t just move data — your work will directly influence how AI systems are trained, evaluated, monitored, and improved.
Responsibilities -
- Design, build, and manage scalable data pipelines, ensuring user event data is reliably ingested into the data warehouse.
- Develop and maintain canonical datasets to track key product metrics such as user growth, engagement, retention, and revenue.
- Collaborate with Infrastructure, Data Science, Product, Marketing, Finance, and Research teams to understand data needs and deliver effective solutions.
- Implement robust, fault-tolerant systems for data ingestion, transformation, and processing.
- Participate actively in data architecture and engineering decisions, contributing best practices and long-term scalability thinking.
- Ensure data security, integrity, and compliance in line with company policies and industry standards.
- Monitor pipeline health, troubleshoot failures, and continuously improve reliability and performance.
Requirements
- 3-5 years of professional experience working as a Data Engineer or in a similar role.
- Proficiency in at least one data engineering programming language such as Python, Scala, or Java.
- Experience with distributed data processing frameworks and technologies such as Hadoop, Flink, and distributed storage systems (e.g., HDFS).
- Strong expertise with ETL orchestration tools, such as Apache Airflow (see the DAG sketch after this list).
- Solid understanding of Apache Spark, with the ability to write, debug, and optimize Spark jobs.
- Experience designing and maintaining data pipelines for analytics, reporting, or ML use cases.
- Strong problem-solving skills and the ability to work across teams with varied data requirements.
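As a small illustration of the orchestration skills above, here is a skeleton Airflow DAG (assuming Airflow 2.x); the DAG ID, schedule, and task bodies are placeholders:

```python
# Skeleton daily ingestion DAG. Task logic is stubbed out; names are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull raw user events into the warehouse staging area")

def transform():
    print("build canonical engagement/retention tables")

with DAG(
    dag_id="user_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="ingest", python_callable=ingest)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # transform runs after ingest succeeds
```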
Desired Skills -
- Hands-on experience working with Databricks in production environments.
- Familiarity with the GCP data stack, including Pub/Sub, Dataflow, BigQuery, and Google Cloud Storage (GCS).
- Exposure to data quality frameworks, data validation, or schema management tools.
- Understanding of analytics use cases, experimentation, or ML data workflows.
Benefits
- Best in class compensation: We hire only the best, and we pay accordingly.
- Proximity Talks: Learn from experienced engineers, data scientists, and product leaders.
- High-impact work: Build data systems that directly power AI models and business decisions.
- Continuous learning: Work with a strong, collaborative team and grow your data engineering skills every day.
Job Title: Python Developer (4–6 Years Experience)
Location: Mumbai (Onsite)
Experience: 4–6 Years
Salary: ₹50,000 – ₹90,000 per month (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for an experienced Python Developer to join our growing team in Mumbai. The ideal candidate will have strong hands-on experience in Python development, building scalable backend systems, and working with databases and APIs.
Key Responsibilities
- Design, develop, test, and maintain Python-based applications
- Build and integrate RESTful APIs
- Work with frameworks such as Django / Flask / FastAPI
- Write clean, reusable, and efficient code
- Collaborate with frontend developers, QA, and project managers
- Optimize application performance and scalability
- Debug, troubleshoot, and resolve technical issues
- Participate in code reviews and follow best coding practices
- Work with databases and ensure data security and integrity
- Deploy and maintain applications in staging/production environments
Required Skills & Qualifications
- 4–6 years of hands-on experience in Python development
- Strong experience with Django / Flask / FastAPI
- Good understanding of REST APIs (see the sketch after this list)
- Experience with MySQL / PostgreSQL / MongoDB
- Familiarity with Git and version control workflows
- Knowledge of OOP concepts and design principles
- Experience with Linux-based environments
- Understanding of basic security and performance optimization
- Ability to work independently as well as in a team
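For reference, a minimal Flask REST endpoint in the spirit of the responsibilities above could look like this; routes and the in-memory store are illustrative only:

```python
# Tiny Flask CRUD sketch: create and fetch items from an in-memory dict.
# A production service would use PostgreSQL/MySQL/MongoDB instead.
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = {}

@app.route("/items", methods=["POST"])
def create_item():
    item = request.get_json()
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = item
    return jsonify(id=item_id), 201

@app.route("/items/<int:item_id>", methods=["GET"])
def get_item(item_id):
    item = ITEMS.get(item_id)
    return (jsonify(item), 200) if item else (jsonify(error="not found"), 404)

if __name__ == "__main__":
    app.run(debug=True)
```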
Good to Have (Preferred Skills)
- Experience with AWS / cloud services
- Knowledge of Docker / CI-CD pipelines
- Exposure to microservices architecture
- Basic frontend knowledge (HTML, CSS, JavaScript)
- Experience working in an Agile/Scrum environment
Job Type: Full-time
Application Question(s):
- If selected, how soon can you join?
Experience:
- Total: 3 years (Required)
- Python: 3 years (Required)
Location:
- Mumbai, Maharashtra (Required)
Work Location: In person
Location: Mumbai (Onsite)
Experience: 4–6 Years
Salary: ₹75,000 – ₹1,200,000 per month (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for a skilled React Developer to join our team in Mumbai. The ideal candidate should have strong hands-on experience in building modern, responsive web applications using React and be comfortable working with at least one backend technology such as Python, Node.js, or PHP.
Key Responsibilities
- Develop and maintain user-friendly web applications using React.js
- Convert UI/UX designs into high-quality, reusable components
- Work with REST APIs and integrate frontend with backend services
- Collaborate with backend developers (Python / Node.js / PHP)
- Optimize applications for performance, scalability, and responsiveness
- Manage application state using Redux / Context API / similar
- Write clean, maintainable, and well-documented code
- Participate in code reviews and sprint planning
- Debug and resolve frontend and integration issues
- Ensure cross-browser and cross-device compatibility
Required Skills & Qualifications
- 6–8 years of experience in frontend development
- Strong expertise in React.js
- Proficiency in JavaScript (ES6+)
- Experience with HTML5, CSS3, Responsive Design
- Hands-on experience with RESTful APIs
- Working knowledge of at least one backend technology:
- Python (Django / Flask / FastAPI) OR
- Node.js (Express / NestJS) OR
- PHP (Laravel preferred)
- Familiarity with Git / version control systems
- Understanding of component-based architecture
- Experience working in Linux environments
Good to Have (Preferred Skills)
- Experience with Next.js
- Knowledge of TypeScript
- Familiarity with Redux / React Query
- Basic understanding of databases (MySQL / MongoDB)
- Experience with CI/CD pipelines
- Exposure to AWS or cloud platforms
- Experience working in Agile/Scrum teams
What We Offer
- Competitive salary based on experience and skills
- Onsite role with a collaborative team in Mumbai
- Opportunity to work on modern tech stack and real-world projects
- Career growth and learning opportunities
Interested candidates can share their resumes at
Job Type: Full-time
Application Question(s):
- If selected, how soon can you join?
- Are you okay with the salary slab (50,000–90,000), depending upon your experience?
- Have you worked on a production React application where you integrated REST APIs and handled authentication and error scenarios with a backend (Python / Node.js / PHP)?
Experience:
- Total: 5 years (Required)
- Python: 5 years (Required)
Location:
- Mumbai, Maharashtra (Required)
Work Location: In person
About Allvest :
- AI-driven financial planning and portfolio management platform
- Secure, data-backed portfolio oversight aligned with regulatory standards
- Building cutting-edge fintech solutions for intelligent investment decisions
Role Overview :
- Architect and build scalable, high-performance backend systems
- Work on mission-critical systems handling real-time market data and portfolio analytics
- Ensure regulatory compliance and secure financial transactions
Key Responsibilities :
- Design, develop, and maintain robust backend services and APIs using NodeJS and Python
- Build event-driven architectures using RabbitMQ and Kafka for real-time data processing
- Develop data pipelines integrating PostgreSQL and BigQuery for analytics and warehousing
- Ensure system reliability, performance, and security with focus on low-latency operations
- Lead technical design discussions, code reviews, and mentor junior developers
- Optimize database queries, implement caching strategies, and enhance system performance
- Collaborate with cross-functional teams to deliver end-to-end features
- Implement monitoring, logging, and observability solutions
Required Skills & Experience :
- 5+ years of professional backend development experience
- Strong expertise in NodeJS and Python for production-grade applications
- Proven experience designing RESTful APIs and microservices architectures
- Strong proficiency in PostgreSQL including query optimization and database design
- Hands-on experience with RabbitMQ and Kafka for event-driven systems (see the sketch after this list)
- Experience with BigQuery or similar data warehousing solutions
- Solid understanding of distributed systems, scalability patterns, and high-traffic applications
- Strong knowledge of authentication, authorization, and security best practices in financial applications
- Experience with Git, CI/CD pipelines, and modern development workflows
- Excellent problem-solving and debugging skills across distributed systems
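To illustrate the event-driven piece, here is a minimal produce/consume round trip using the kafka-python client; the broker address, topic, and message shape are placeholders:

```python
# Minimal Kafka produce/consume sketch with kafka-python. Broker, topic,
# and payload are hypothetical; a real system would add keys, partitions,
# and error handling.
import json
from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)
producer.send("portfolio-updates", {"ticker": "INFY", "qty": 10})
producer.flush()

consumer = KafkaConsumer(
    "portfolio-updates",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode()),
)
for msg in consumer:
    print("received:", msg.value)
    break
```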
Preferred Qualifications :
- Prior experience in fintech, banking, or financial services
- Familiarity with cloud platforms (GCP/AWS/Azure) and containerization (Docker, Kubernetes)
- Knowledge of frontend technologies for full-stack collaboration
- Experience with Redis or Memcached
- Understanding of regulatory requirements (KYC, compliance, data privacy)
- Open-source contributions or tech community participation
What We Offer :
- Opportunity to work on cutting-edge fintech platform with modern technology stack
- Collaborative environment with experienced team from leading financial institutions
- Competitive compensation with equity participation
- Challenging problems at the intersection of finance, AI, and technology
- Career growth in fast-growing startup environment
Location: Mumbai (Phoenix Market City, Kurla West)
Also Apply at https://wohlig.keka.com/careers/jobdetails/122768
Key Responsibilities
- Automation & Reliability: Automate infrastructure and operational processes to ensure high reliability, scalability, and security.
- Cloud Infrastructure Design: Gather GCP infrastructure requirements, evaluate solution options, and implement best-fit cloud architectures.
- Infrastructure as Code (IaC): Design, develop, and maintain infrastructure using Terraform and Ansible.
- CI/CD Ownership: Build, manage, and maintain robust CI/CD pipelines using Jenkins, ensuring system reliability and performance.
- Container Orchestration: Manage Docker containers and self-managed Kubernetes clusters across multiple cloud environments.
- Monitoring & Observability: Implement and manage cloud-native monitoring solutions using Prometheus, Grafana, and the ELK stack.
- Proactive Issue Resolution: Troubleshoot and resolve infrastructure and application issues across development, testing, and production environments.
- Scripting & Automation: Develop efficient automation scripts using Python and one or more of Node.js, Go, or Shell scripting.
- Security Best Practices: Maintain and enhance the security of cloud services, Kubernetes clusters, and deployment pipelines.
- Cross-functional Collaboration: Work closely with engineering, product, and security teams to design and deploy secure, scalable infrastructure.
Specific Knowledge/Skills
- 4-6 years of experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (see the scraping sketch after this list)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure /AWS) technologies
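For the web-extraction point above, a tiny requests + BeautifulSoup example follows; the URL and CSS selector are placeholders:

```python
# Minimal scraping sketch: fetch a page and pull headline text.
# URL and selector are hypothetical; respect robots.txt in real use.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/listings", timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

titles = [h2.get_text(strip=True) for h2 in soup.select("h2.title")]
print(titles)
```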
Role: Senior Platform Engineer (GCP Cloud)
Experience Level: 3 to 6 Years
Work location: Mumbai
Mode : Hybrid
Role & Responsibilities:
- Build automation software for cloud platforms and applications
- Drive Infrastructure as Code (IaC) adoption
- Design self-service, self-healing monitoring and alerting tools
- Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
- Build Kubernetes container platforms
- Introduce new cloud technologies for business innovation
Requirements:
- Hands-on experience with GCP Cloud
- Knowledge of cloud services (compute, storage, network, messaging)
- IaC tools experience (Terraform/CloudFormation)
- SQL & NoSQL databases (Postgres, Cassandra)
- Automation tools (Puppet/Chef/Ansible)
- Strong Linux administration skills
- Programming: Bash/Python/Java/Scala
- CI/CD pipeline expertise (Jenkins, Git, Maven)
- Multi-region deployment experience
- Agile/Scrum/DevOps methodology
Junior PHP Developer (Full-Time)
Malad, Mumbai (Mindspace) | Work from Office
We’re hiring a Junior PHP Developer at Websites.co.in, a platform where small businesses create their website in 2 minutes.
Your role
- Develop and maintain backend logic using PHP (Laravel or Core PHP)
- Write clean, reusable, and efficient code
- Work with MySQL databases (queries, joins, optimization)
- Integrate REST APIs and troubleshoot backend issues
- Collaborate with frontend, QA, and product teams for feature implementation
- Participate in code reviews, testing, and deployment activities
- Debug production issues and provide quick fixes
What we expect
- Hands-on development experience with PHP (mandatory)
- Strong knowledge of MySQL, queries, and database structures
- Understanding of MVC architecture (Laravel preferred)
- Basic knowledge of HTML, CSS, JavaScript
- Familiarity with Git version control
- Problem-solving mindset and willingness to take ownership
- 0–3.5 years of experience (freshers with strong projects are welcome)
Good to have
- Experience working with APIs, JSON, cURL
- Understanding of server basics (Linux, Apache, hosting environments)
What you get
- Real product ownership, not agency project hopping
- Direct collaboration with CTO and senior devs
- Steep learning curve in a fast-moving SaaS environment
About the Role
We are looking for a hands-on and solution-oriented Senior Data Scientist – Generative AI to join our growing AI practice. This role is ideal for someone who thrives in designing and deploying Gen AI solutions on AWS, enjoys working with customers directly, and can lead end-to-end implementations. You will play a key role in architecting AI solutions, driving project delivery, and guiding junior team members.
Key Responsibilities
- Design and implement end-to-end Generative AI solutions for customers on AWS.
- Work closely with customers to understand business challenges and translate them into Gen AI use-cases.
- Own technical delivery, including data preparation, model integration, prompt engineering, deployment, and performance monitoring.
- Lead project execution – ensure timelines, manage stakeholder communications, and collaborate across internal teams.
- Provide technical guidance and mentorship to junior data scientists and engineers.
- Develop reusable components and reference architectures to accelerate delivery.
- Stay updated with latest developments in Gen AI, particularly AWS offerings like Bedrock, SageMaker, LangChain integrations, etc.
Required Skills & Experience
- 3-7 years of hands-on experience in Data Science/AI/ML, with at least 2–3 years in Generative AI projects.
- Proficient in building solutions using AWS AI/ML services (e.g., SageMaker, Amazon Bedrock, Lambda, API Gateway, S3, etc.).
- Experience with LLMs, prompt engineering, RAG pipelines, and deployment best practices (see the sketch after this list).
- Solid programming experience in Python, with exposure to libraries such as Hugging Face, LangChain, etc.
- Strong problem-solving skills and ability to work independently in customer-facing roles.
- Experience in collaborating with Systems Integrators (SIs) or working with startups in India is a major plus.
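As a hedged sketch of the AWS Gen AI work described above, here is one way to invoke a foundation model through Amazon Bedrock with boto3; the region, model ID, and request body follow the Claude Messages format and may differ for other models:

```python
# Illustrative Bedrock invocation via boto3 (bedrock-runtime). Model ID,
# region, and prompt are assumptions; request shape varies by model family.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 200,
    "messages": [{"role": "user", "content": "Summarize our returns policy."}],
})
resp = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    body=body,
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```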
Soft Skills
- Strong verbal and written communication for effective customer engagement.
- Ability to lead discussions, manage project milestones, and coordinate across stakeholders.
- Team-oriented with a proactive attitude and strong ownership mindset.
What We Offer
- Opportunity to work on cutting-edge Generative AI projects across industries.
- Collaborative, startup-like work environment with flexibility and ownership.
- Exposure to full-stack AI/ML project lifecycle and client-facing roles.
- Competitive compensation and learning opportunities in the AWS AI ecosystem.
About Oneture Technologies
Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of digital technologies and data to drive transformations and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results from Ideas to Reality for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions, from ideation, project inception, and planning through deployment to ongoing support and maintenance.
Our core competencies and technical expertise include cloud-powered product engineering, Big Data, and AI/ML. Our deep commitment to value creation for our clients and partners, and our “Startups-like agility with Enterprises-like maturity” philosophy, has helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.
About the Role
Oneture Technologies is helping global clients on their digital transformation journey to build modern, scalable, and integrated digital platforms. To strengthen our Technology and Leadership capabilities, we are looking for an experienced Technical Lead who can drive solution design, mentor teams, and ensure high-quality delivery of large-scale systems.
As a Technical Lead, you will own the technical architecture, delivery execution, and team leadership across complex projects, while working closely with clients and internal stakeholders.
Key Responsibilities
- Design, develop, and maintain highly scalable and secure application systems
- Lead solution architecture, technical design, effort estimation, and delivery planning
- Drive the implementation of cloud-native solutions following best practices for security, scalability, and reliability
- Lead and manage a team of 5–10 engineers, ensuring adherence to engineering processes and quality standards
- Mentor junior developers and provide hands-on technical guidance on day-to-day work
- Own end-to-end technical delivery, including architecture, development, testing, and release
- Collaborate closely with clients and internal stakeholders; provide regular status updates and manage expectations
- Troubleshoot complex technical issues and propose robust, long-term solutions
- Establish strong engineering practices around test-driven development, CI/CD, and automated deployments
- Contribute to continuous improvement of engineering standards, tooling, and delivery processes
Required Experience & Qualifications
- 4–6+ years of hands-on experience with proven success in technical leadership roles
- Strong experience building and scaling large, complex, high-traffic platforms
- Demonstrated ability to handle high workload, performance-sensitive, and secure systems
- Bachelor’s degree (B.E. / B.Tech) in Computer Science or a related field from a reputed institute
Technical Expertise
- Proficiency in one or more backend programming languages such as GoLang, Java, Node.js, or Python
- Strong experience architecting and implementing solutions on AWS
- Hands-on experience with cloud architecture, scalability, and security best practices
- Experience with caching technologies such as Redis or Memcached (see the cache-aside sketch after this list)
- Familiarity with containerization and orchestration tools (Docker, Kubernetes)
- Strong understanding of RESTful services, authentication mechanisms, data formats (JSON/XML), and SQL
- Experience with unit testing, functional testing, and CI/CD pipelines
- Solid understanding of system design, performance optimization, and release management
- Ability to think from a product and user-impact mindset
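To illustrate the caching expectation, here is a small cache-aside pattern with redis-py; the key scheme, TTL, and backing loader are hypothetical:

```python
# Cache-aside sketch with redis-py: check the cache, fall back to the
# (stubbed) database loader, then cache the result with a TTL.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_profile_from_db(user_id: int) -> dict:
    return {"id": user_id, "name": "demo"}  # placeholder for a real DB query

def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached:
        return json.loads(cached)
    profile = load_profile_from_db(user_id)
    r.setex(key, 300, json.dumps(profile))  # cache for 5 minutes
    return profile

print(get_profile(42))
```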
Good to Have
- AWS certifications (Solutions Architect / Professional / Specialty)
- Experience with observability tools (logging, monitoring, alerting)
- Exposure to distributed systems and microservices architecture
- Experience working in fast-paced, high-growth environments
Soft Skills & Leadership Qualities
- Strong ownership and accountability for technical outcomes
- Excellent communication and stakeholder management skills
- Ability to mentor, guide, and inspire engineering teams
- Comfortable working in a fast-paced, evolving environment
- Strong problem-solving and decision-making ability
Why Join Oneture?
- Work on large-scale, high-impact digital transformation projects
- Strong emphasis on engineering excellence and leadership growth
- Collaborative, learning-driven culture
- Opportunity to influence architecture and technology direction
- Exposure to modern cloud-native and scalable system design
SimplyFI is a fast-growing AI- and blockchain-powered product company transforming trade finance and banking through digital innovation. We build scalable, intelligent platforms that simplify complex financial workflows for enterprises and financial institutions.
We are looking for a Full Stack Tech Lead with strong expertise in ReactJS (primary) and solid working knowledge of Python (secondary) to join our team in Thane, Mumbai.
Role: Full Stack Tech Lead (ReactJS + Python)
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications, with ReactJS as the primary frontend technology
- Build and integrate backend services using Python (Flask / Django / FastAPI)
- Design and manage RESTful APIs for internal and external system integrations
- Collaborate on AI-driven product features and support machine-learning model integrations when required
- Work closely with DevOps teams to deploy, monitor, and optimize applications on AWS
- Ensure performance, scalability, security, and code quality across the application stack
- Collaborate with product managers, designers, and QA teams to deliver high-quality features
- Write clean, maintainable, and testable code following engineering best practices
- Participate in agile processes, including code reviews, sprint planning, and daily stand-ups
Required Skills & Qualifications:
- Strong hands-on experience with ReactJS, including hooks, state management, Redux, and API integrations
- Proficiency in backend development using Python (Flask, Django, or FastAPI)
- Solid understanding of RESTful API design and secure authentication mechanisms (OAuth2, JWT)
- Experience working with databases such as MySQL, PostgreSQL, and MongoDB
- Familiarity with microservices architecture and modern software design patterns
- Hands-on experience with Git, CI/CD pipelines, Docker, and Kubernetes
- Strong problem-solving, debugging, and performance optimization skills
We are seeking a motivated Data Analyst to support business operations by analyzing data, preparing reports, and delivering meaningful insights. The ideal candidate should be comfortable working with data, identifying patterns, and presenting findings in a clear and actionable way.
Key Responsibilities:
- Collect, clean, and organize data from internal and external sources (see the sketch after this list)
- Analyze large datasets to identify trends, patterns, and opportunities
- Prepare regular and ad-hoc reports for business stakeholders
- Create dashboards and visualizations using tools like Power BI or Tableau
- Work closely with cross-functional teams to understand data requirements
- Ensure data accuracy, consistency, and quality across reports
- Document data processes and analysis methods
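As a small example of this kind of cleaning-and-aggregation work, here is a pandas pass over a hypothetical sales extract; the file name and columns are placeholders:

```python
# Clean a toy sales extract and summarize it by month. The CSV, column
# names, and aggregations are illustrative only.
import pandas as pd

df = pd.read_csv("sales.csv")  # placeholder source file
df = df.dropna(subset=["amount"]).drop_duplicates()
df["month"] = pd.to_datetime(df["order_date"]).dt.to_period("M")

summary = df.groupby("month")["amount"].agg(["count", "sum", "mean"])
print(summary)
```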
We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
Key Responsibilities
• Develop, maintain, and optimize backend applications using Python.
• Build and integrate RESTful APIs and microservices.
• Work with relational and NoSQL databases for data storage, retrieval, and optimization.
• Write clean, efficient, and reusable code while following best practices.
• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.
• Participate in code reviews to maintain high coding standards.
• Troubleshoot, debug, and upgrade existing applications.
• Ensure application security, performance, and scalability.
Required Skills & Qualifications:
• 2–4 years of hands-on experience in Python development.
• Strong command over Python frameworks such as Django, Flask, or FastAPI.
• Solid understanding of Object-Oriented Programming (OOP) principles.
• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.
• Proficiency in writing and consuming REST APIs.
• Familiarity with Git and version control workflows.
• Experience with unit testing and frameworks like PyTest or Unittest (see the sketch after this list).
• Knowledge of containerization (Docker) is a plus.
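A minimal PyTest-style unit test might look like the sketch below; the function under test is a made-up stand-in:

```python
# A stub function plus two PyTest tests: one happy path, one error path.
import pytest

def apply_discount(price: float, pct: float) -> float:
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_apply_discount_rejects_bad_pct():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```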
AI Agent Builder – Internal Functions and Data Platform Development Tools
About the Role:
We are seeking a forward-thinking AI Agent Builder to lead the design, development, deployment, and usage reporting of Microsoft Copilot and other AI-powered agents across our data platform development tools and internal business functions. This role will be instrumental in driving automation, improving onboarding, and enhancing operational efficiency through intelligent, context-aware assistants.
This role is central to our GenAI transformation strategy. You will help shape the future of how our teams interact with data, reduce administrative burden, and unlock new efficiencies across the organization. Your work will directly contribute to our “Art of the Possible” initiative—demonstrating tangible business value through AI.
You Will:
• Copilot Agent Development: Use Microsoft Copilot Studio and Agent Builder to create, test, and deploy AI agents that automate workflows, answer queries, and support internal teams.
• Data Engineering Enablement: Build agents that assist with data connector scaffolding, pipeline generation, and onboarding support for engineers.
• Knowledge Base Integration: Curate and integrate documentation (e.g., ERDs, connector specs) into Copilot-accessible repositories (SharePoint, Confluence) to support contextual AI responses.
• Prompt Engineering: Design reusable prompt templates and conversational flows to streamline repeated tasks and improve agent usability.
• Tool Evaluation & Integration: Assess and integrate complementary AI tools (e.g., GitLab Duo, Databricks AI, Notebook LM) to extend Copilot capabilities.
• Cross-Functional Collaboration: Partner with product, delivery, PMO, and security teams to identify high-value use cases and scale successful agent implementations.
• Governance & Monitoring: Ensure agents align with Responsible AI principles, monitor performance, and iterate based on feedback and evolving business needs.
• Adoption and Usage Reporting: Use Microsoft Viva Insights and other tools to report on user adoption, usage and business value delivered.
What We're Looking For:
• Proven experience with Microsoft 365 Copilot, Copilot Studio, or similar AI platforms (ChatGPT, Claude, etc.).
• Strong understanding of data engineering workflows, tools (e.g., Git, Databricks, Unity Catalog), and documentation practices.
• Familiarity with SharePoint, Confluence, and Microsoft Graph connectors.
• Experience in prompt engineering and conversational UX design.
• Ability to translate business needs into scalable AI solutions.
• Excellent communication and collaboration skills across technical and non-technical audiences.
Bonus Points:
• Experience with GitLab Duo, Notebook LM, or other AI developer tools.
• Background in enterprise data platforms, ETL pipelines, or internal business systems.
• Exposure to AI governance, security, and compliance frameworks.
• Prior work in a regulated industry (e.g., healthcare, finance) is a plus.
About Oneture Technologies
Oneture Technologies is a cloud-first digital engineering company helping enterprises and high-growth startups build modern, scalable, and data-driven solutions. Our teams work on cutting-edge big data, cloud, analytics, and platform engineering engagements where ownership, innovation, and continuous learning are core values.
Role Overview
We are looking for an experienced Data Engineer with 2-4 years of hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate must have strong expertise in PySpark and exposure to real-time or streaming frameworks such as Apache Flink. You will work closely with architects, data scientists, and product teams to design and deliver robust, high-performance data solutions.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT data pipelines using PySpark
- Implement real-time or near real-time data processing using Apache Flink
- Optimize data workflows for performance, scalability, and reliability
- Work with large-scale data platforms and distributed environments
- Collaborate with cross-functional teams to integrate data solutions into products and analytics platforms
- Ensure data quality, integrity, and governance across pipelines
- Conduct performance tuning, debugging, and root-cause analysis of data processes
- Write clean, modular, and well-documented code following best engineering practices
Primary Skills
- Strong hands-on experience in PySpark (RDD, DataFrame API, Spark SQL); see the sketch after this list
- Experience with Apache Flink, Spark or Kafka (streaming or batch)
- Solid understanding of distributed computing concepts
- Proficiency in Python for data engineering workflows
- Strong SQL skills for data manipulation and transformation
- Experience with data pipeline orchestration tools (Airflow, Step Functions, etc.)
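For a flavor of the PySpark work described above, a minimal sketch using a local Spark session and toy data; the dataset and column names are illustrative:

```python
# Minimal PySpark sketch: same aggregation via the DataFrame API and Spark SQL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.createDataFrame(
    [("o1", "IN", 120.0), ("o2", "IN", 80.0), ("o3", "US", 200.0)],
    ["order_id", "country", "amount"],
)

# DataFrame API: revenue per country
revenue = orders.groupBy("country").agg(F.sum("amount").alias("revenue"))

# Equivalent Spark SQL over a temp view
orders.createOrReplaceTempView("orders")
revenue_sql = spark.sql(
    "SELECT country, SUM(amount) AS revenue FROM orders GROUP BY country"
)

revenue.show()
spark.stop()
```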
Secondary Skills
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of data lakes, lakehouse architectures, and modern data stack tools
- Familiarity with Delta Lake, Iceberg, or Hudi
- Experience with CI/CD pipelines for data workflows
- Understanding of messaging and streaming systems (Kafka, Kinesis)
- Knowledge of DevOps and containerization tools (Docker)
Soft Skills
- Strong analytical and problem-solving capabilities
- Ability to work independently and as part of a collaborative team
- Good communication and documentation skills
- Ownership mindset with a willingness to learn and adapt
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field
Why Join Oneture Technologies?
- Opportunity to work on high-impact, cloud-native data engineering projects
- Collaborative team environment with a strong learning culture
- Exposure to modern data platforms, scalable architectures, and real-time data systems
- Growth-oriented role with hands-on ownership across end-to-end data engineering initiatives
Backend Developer (Django)
About the Role:
We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.
Key Responsibilities:
- Develop and maintain Python-based web applications using Django and Django Rest Framework.
- Build and integrate RESTful APIs.
- Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
- Contribute to improving development workflows through automation.
- Assist in deploying applications using cloud platforms like Heroku or AWS.
- Write clean, maintainable, and efficient code.
Requirements:
Backend:
- Strong understanding of Django and Django Rest Framework (DRF); see the sketch below.
- Experience with task queues like Celery.
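As a quick illustration of the Django/DRF stack, a minimal sketch assuming an existing Django project with DRF installed; the model and fields are hypothetical:

```python
# Minimal DRF sketch (assumes this lives inside a registered Django app).
from django.db import models
from rest_framework import serializers, viewsets

class Task(models.Model):
    title = models.CharField(max_length=200)
    done = models.BooleanField(default=False)

class TaskSerializer(serializers.ModelSerializer):
    class Meta:
        model = Task
        fields = ["id", "title", "done"]

class TaskViewSet(viewsets.ModelViewSet):
    # Full CRUD API for Task with almost no boilerplate.
    queryset = Task.objects.all()
    serializer_class = TaskSerializer

# In urls.py: router.register("tasks", TaskViewSet) on a DefaultRouter.
```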
Frontend (Basic Understanding):
- Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.
Hosting & Deployment:
- Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.
Linux/Server Knowledge:
- Basic to intermediate understanding of Linux commands and server environments.
- Ability to work with terminal, virtual environments, SSH, and basic server configurations.
Python Knowledge:
- Good grasp of OOP concepts.
- Familiarity with Pandas for data manipulation is a plus.
Soft & Team Skills:
- Strong collaboration and team management abilities.
- Ability to work in a team-driven environment and coordinate tasks smoothly.
- Problem-solving mindset and attention to detail.
- Good communication skills and eagerness to learn
What We Offer:
- A collaborative, friendly, and growth-focused work environment.
- Opportunity to work on real-time projects using modern technologies.
- Guidance and mentorship to help you advance in your career.
- Flexible and supportive work culture.
- Opportunities for continuous learning and skill development.
Location : Bhayander (Onsite)
Immediate to 30-day joiners and Mumbai-based candidates preferred.
Job Description: Python-Azure AI Developer
Experience: 5+ years
Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal
Mandatory Skills:
- Python: Expert-level proficiency with FastAPI/Flask
- Azure Services: Hands-on experience integrating Azure cloud services
- Databases: PostgreSQL, Redis
- AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding
Good to Have:
- Workflow automation tools (n8n or similar)
- Experience with LangChain, AutoGen, or other AI agent frameworks
- Azure OpenAI Service knowledge
Key Responsibilities:
- Develop AI-powered applications using Python and Azure
- Build RESTful APIs with FastAPI/Flask (see the sketch after this list)
- Integrate Azure services for AI/ML workloads
- Implement agentic AI solutions
- Database optimization and management
- Workflow automation implementation
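For context, a minimal sketch of a FastAPI endpoint backed by an Azure OpenAI deployment; the endpoint, deployment name, API version, and environment variables are placeholders, not this employer's setup:

```python
# Minimal sketch: FastAPI route calling an assumed Azure OpenAI deployment.
import os
from fastapi import FastAPI
from openai import AzureOpenAI
from pydantic import BaseModel

app = FastAPI()
client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

class Prompt(BaseModel):
    text: str

@app.post("/ask")
def ask(prompt: Prompt) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",  # name of *your* Azure deployment (placeholder)
        messages=[{"role": "user", "content": prompt.text}],
    )
    return {"answer": resp.choices[0].message.content}
```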
Software Tester – Automation (On-Site)
📍 Location: Navi Mumbai
Budget - 4 LPA to 7 LPA
Years of Experience - 2 to 5 years
🕒 Immediate Joiners Preferred
✨ Why Join Us?
🚀 Growth-driven environment with modern, automation-first projects
📆 Weekends off + Provident Fund benefits
🤝 Supportive, collaborative & innovation-first culture
🔍 Role Overview
We are looking for an Automation Tester with strong hands-on experience in Python-based UI, API, and WebSocket automation. You will collaborate closely with developers, project managers, and QA peers to ensure product quality, performance, and reliability, while also exploring AI-led testing initiatives.
🧩 Key Responsibilities
🧾 Requirement Analysis & Test Planning
Participate in client interactions to understand testing and automation requirements.
Convert functional/technical specifications into automation-ready test scenarios.
🤖 Automation Testing & Framework Development
Develop and maintain automation scripts using Python, Selenium, and Pytest.
Build scalable automation frameworks for UI, API, and WebSocket testing.
Improve script reusability, modularity, and performance.
🌐 API & WebSocket Testing
Perform REST API validations using Postman/Swagger.
Develop automated API test suites using Python/Pytest (see the sketch after this section).
Execute WebSocket test scenarios (real-time event/message validations, latency, connection stability).
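To make the automation expectations concrete, a minimal pytest sketch with one API check and one WebSocket echo check; the URLs are placeholders for the system under test, and it assumes pytest, requests, websockets, and pytest-asyncio are installed:

```python
# Minimal pytest sketch for API and WebSocket validation (URLs are hypothetical).
import pytest
import requests
import websockets

BASE_URL = "https://api.example.com"  # placeholder service under test

def test_health_endpoint():
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

@pytest.mark.asyncio
async def test_websocket_echo():
    # Connect, send a message, and assert the server echoes it back.
    async with websockets.connect("wss://echo.example.com/ws") as ws:
        await ws.send("ping")
        assert await ws.recv() == "ping"
```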
🧪 Manual Testing (As Needed)
Execute functional, UI, smoke, sanity, and exploratory tests.
Validate applications in development, QA, and production environments.
🐞 Defect Management
Log, track, and retest defects using Jira or Zoho Projects.
Ensure high-quality bug reporting with clear steps and severity/priority tagging.
⚡ Performance Testing
Use JMeter to conduct load, stress, and performance tests for APIs/WebSocket-based systems.
Analyze system performance and highlight bottlenecks.
🧠 AI-Driven Testing Exploration
Research and experiment with AI tools to enhance automation coverage and efficiency.
Propose AI-driven improvements for regression, analytics, and test optimization.
🤝 Collaboration & Communication
Participate in daily stand-ups and regular QA syncs.
Communicate blockers, automation progress, and risks clearly.
📊 Test Reporting & Metrics
Create reports on automation execution, defect trends, and performance benchmarks.
🛠 Key Technical Skills
✔ Strong proficiency in Python
✔ UI Automation using Selenium (Python)
✔ Pytest Framework
✔ API Testing – Postman/Swagger
✔ WebSocket Testing
✔ Performance Testing using JMeter
✔ Knowledge of CI/CD tools (such as Jenkins)
✔ Knowledge of Git
✔ SQL knowledge (added advantage)
✔ Functional/Manual Testing expertise
✔ Solid understanding of SDLC/STLC & QA processes
🧰 Tools You Will Work With
Automation: Selenium, Pytest
API & WebSockets: Postman, Swagger, Python libraries
Performance: JMeter
Project/Defect Tracking: Jira, Zoho Projects
CI/CD & Version Control: Jenkins, Git
🌟 Soft Skills
Strong communication & teamwork
Detail-oriented and analytical
Problem-solving mindset
Ownership and accountability
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort; we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn't it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What You Will Do:
• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.
• You measure success by your own growth, not external validation.
• You thrive on challenges, not on perks or financial rewards.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What We Expect:
• Develop and maintain scalable backend systems using Python.
• Design and implement REST APIs and socket-based communication.
• Optimize code for speed, performance, and reliability.
• Collaborate with frontend teams to integrate server-side logic.
• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.
• Build fault-tolerant, multi-producer/consumer systems.
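As a toy illustration of the multi-producer/consumer pattern named above, a pure-stdlib asyncio sketch; production systems here would sit on RabbitMQ or Kafka, but the shape of the problem is the same:

```python
# Pure-stdlib sketch of a multi-producer/multi-consumer pattern (illustrative).
import asyncio

async def producer(q: asyncio.Queue, name: str, n: int) -> None:
    # Simulate an inbound stream of market events.
    for i in range(n):
        await q.put(f"{name}-event-{i}")

async def consumer(q: asyncio.Queue, name: str) -> None:
    while True:
        msg = await q.get()
        try:
            # Real systems would parse, validate, and route the message here.
            print(f"{name} handled {msg}")
        finally:
            q.task_done()  # always acknowledge so q.join() can complete

async def main() -> None:
    q: asyncio.Queue = asyncio.Queue(maxsize=100)  # maxsize gives backpressure
    workers = [asyncio.create_task(consumer(q, f"c{i}")) for i in range(2)]
    await asyncio.gather(*(producer(q, f"p{i}", 5) for i in range(3)))
    await q.join()  # wait until every event has been acknowledged
    for w in workers:
        w.cancel()  # consumers loop forever; cancel once drained

asyncio.run(main())
```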
Must-Have Skills:
• 3–4 years of experience in Python and backend development.
• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).
• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.
• Bachelor’s degree in Computer Science or related field.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, or algorithmic trading.
• Experience with GoLang, C/C++, Erlang, or Elixir.
• Exposure to trading, fintech, or low-latency systems.
• Familiarity with microservices and CI/CD pipelines.
Required Skills: Strong SQL Expertise, Data Reporting & Analytics, Database Development, Stakeholder & Client Communication, Independent Problem-Solving & Automation Skills
Review Criteria
· Must have Strong SQL skills (queries, optimization, procedures, triggers)
· Must have Advanced Excel skills
· Should have 3+ years of relevant experience
· Should have Reporting + dashboard creation experience
· Should have Database development & maintenance experience
· Must have Strong communication for client interactions
· Should have Ability to work independently
· Willingness to work from client locations.
Description
Who is an ideal fit for us?
We seek professionals who are analytical, demonstrate self-motivation, exhibit a proactive mindset, and possess a strong sense of responsibility and ownership in their work.
What will you get to work on?
As a member of the Implementation & Analytics team, you will:
● Design, develop, and optimize complex SQL queries to extract, transform, and analyze data (a small sketch follows this list)
● Create advanced reports and dashboards using SQL, stored procedures, and other reporting tools
● Develop and maintain database structures, stored procedures, functions, and triggers
● Optimize database performance by tuning SQL queries and indexing to handle large datasets efficiently
● Collaborate with business stakeholders and analysts to understand analytics requirements
● Automate data extraction, transformation, and reporting processes to improve efficiency
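For a sense of the SQL depth expected, a small window-function example run against an in-memory SQLite database purely for illustration (SQLite 3.25+ supports window functions; real work here targets the client's production database):

```python
# Illustrative window-function query driven from Python via sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('West', '2024-01', 100), ('West', '2024-02', 150),
        ('East', '2024-01', 90),  ('East', '2024-02', 60);
""")

# Running total per region, ordered by month
rows = conn.execute("""
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month)
               AS running_total
    FROM sales
    ORDER BY region, month
""").fetchall()

for row in rows:
    print(row)
```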
What do we expect from you?
For the SQL/Oracle Developer role, we are seeking candidates with the following skills and Expertise:
● Proficiency in SQL (Window functions, stored procedures) and MS Excel (advanced Excel skills)
● More than 3 years of relevant experience
● Java / Python experience is a plus but not mandatory
● Strong communication skills to interact with customers to understand their requirements
● Capable of working independently with minimal guidance, showcasing self-reliance and initiative
● Previous experience in automation projects is preferred
● Work From Office: Bangalore/Navi Mumbai/Pune/Client locations
About Us
Dolat Capital is a multi-strategy quantitative trading firm specializing in high-frequency and fully automated trading systems across global markets. We build proprietary algorithms using advanced mathematical, statistical, and computational techniques.
We are looking for an Experienced Quantitative Researcher to develop, test, and optimize quantitative trading strategies—primarily for APAC markets. The ideal candidate brings strong mathematical thinking, hands-on trading experience, and a track record of building profitable models.
Key Responsibilities
- Research, design & develop quantitative trading strategies
- Analyse large datasets and build predictive / regression models (a toy sketch follows this list)
- Implement models in Python / C++ / Matlab
- Monitor, execute, and improve existing trading strategies
- Collaborate closely with traders, developers, and researchers
- Optimize trading systems, reduce latency, and enhance execution
- Identify new trading opportunities across listed products
- Oversee and manage risk for options, equities, futures, and other instruments
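As a toy sketch of the modelling loop, a least-squares predictor fit on synthetic data; real research would use market data, out-of-sample validation, and transaction-cost modelling:

```python
# Toy regression sketch: fit a linear predictor of returns from lagged features.
import numpy as np

rng = np.random.default_rng(0)
n = 500
features = rng.normal(size=(n, 3))             # e.g., lagged returns/signals
true_beta = np.array([0.5, -0.2, 0.1])
returns = features @ true_beta + rng.normal(scale=0.5, size=n)

# Ordinary least squares fit
beta_hat, *_ = np.linalg.lstsq(features, returns, rcond=None)
pred = features @ beta_hat

# In-sample information coefficient: correlation of predicted vs. realised
ic = np.corrcoef(pred, returns)[0, 1]
print(f"estimated betas: {beta_hat.round(3)}, in-sample IC: {ic:.3f}")
```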
Required Skills & Experience
- Minimum 3+ years experience on a high-volume equities, futures, options, or market-making desk
- Strong background in Statistics, Mathematics, Physics, or related field (PhD)
- Proven track record of profitable real-world trading strategies
- Strong programming experience: C++, Python, R, Matlab
- Experience with automated trading systems and exchange protocols
- Ability to work in a fast-paced, high-pressure trading environment
- Excellent analytical skills, precision, and attention to detail
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines (see the retrieval sketch after this list)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
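To ground the RAG responsibility, a sketch of the retrieval step using TF-IDF as a stand-in for learned embeddings and a vector store (Weaviate/PGVector in practice); the corpus and query are illustrative:

```python
# Retrieval-step sketch: TF-IDF similarity as a stand-in for vector search.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Invoices are processed within three business days.",
    "Refunds require manager approval above 500 USD.",
    "The VPN must be used for all remote database access.",
]
query = "How long does invoice processing take?"

vec = TfidfVectorizer().fit(docs)
doc_matrix = vec.transform(docs)
scores = cosine_similarity(vec.transform([query]), doc_matrix).ravel()

top = scores.argmax()
# In a full RAG pipeline, the retrieved passage is injected into the LLM prompt.
print(f"best match (score {scores[top]:.2f}): {docs[top]}")
```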
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Responsibilities:
- Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow); a minimal sketch follows this list
- Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views
- Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration
- Implement SQL-based transformations using Dataform (or dbt)
- Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture
- Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability
- Partner with solution architects and product teams to translate data requirements into technical designs
- Mentor junior data engineers and support knowledge-sharing across the team
- Contribute to documentation, code reviews, sprint planning, and agile ceremonies
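For a flavor of the Beam work, a minimal pipeline runnable on the local DirectRunner; the events and filter rule are illustrative, and the same pipeline shape deploys to Dataflow with a runner and options change:

```python
# Minimal Apache Beam sketch (local DirectRunner; illustrative data).
import apache_beam as beam

events = ["click", "view", "click", "purchase", "view", "click"]

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(events)
        | "DropViews" >> beam.Filter(lambda e: e != "view")
        | "CountPerType" >> beam.combiners.Count.PerElement()
        | "Print" >> beam.Map(print)  # a real pipeline would write to BigQuery
    )
```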
Requirements
- 5+ years of hands-on experience in data engineering, with at least 2 years on GCP
- Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)
- Strong programming skills in Python and/or Java
- Experience with SQL optimization, data modeling, and pipeline orchestration
- Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks
- Exposure to Dataform, dbt, or similar tools for ELT workflows
- Solid understanding of data architecture, schema design, and performance tuning
- Excellent problem-solving and collaboration skills
Bonus Skills:
- GCP Professional Data Engineer certification
- Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures
- Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)
- Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)
About Ven Analytics
At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.
Role Overview
We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.
Key Responsibilities
- Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
- Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.
- Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
- Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
- Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
- Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
- Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
- Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.
- Power BI Development: Use Power BI Desktop for report building and Power BI Service for distribution.
- Backend Development: Develop optimized SQL queries that are easy to consume, maintain, and debug.
- Version Control: Maintain strict version control by tracking change requests and bug fixes, and ensure both Prod and Dev dashboards are maintained.
- Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.
- Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.
Must-Have Skills
- Strong experience building robust data models in Power BI
- Hands-on expertise with DAX (complex measures and calculated columns)
- Proficiency in M Language (Power Query) beyond drag-and-drop UI
- Clear understanding of data visualization best practices (less fluff, more insight)
- Solid grasp of SQL and Python for data processing
- Strong analytical thinking and ability to craft compelling data stories
- Client servicing background
Good-to-Have (Bonus Points)
- Experience using DAX Studio and Tabular Editor
- Prior work in a high-volume data processing production environment
- Exposure to modern CI/CD practices or version control with BI tools
Why Join Ven Analytics?
- Be part of a fast-growing startup that puts data at the heart of every decision.
- Opportunity to work on high-impact, real-world business challenges.
- Collaborative, transparent, and learning-oriented work environment.
- Flexible work culture and focus on career development.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); see the sketch after this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
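As a small illustration of the AWS work, a boto3 sketch that uploads a file to S3 and returns a time-limited presigned download URL; the bucket name is a placeholder and AWS credentials are assumed to be configured:

```python
# Minimal boto3 sketch: upload to S3, then share via a presigned URL.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # hypothetical bucket name

def upload_and_share(path: str, key: str, expires: int = 3600) -> str:
    s3.upload_file(path, BUCKET, key)  # handles multipart upload for large files
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires,
    )

print(upload_and_share("report.csv", "exports/report.csv"))
```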
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
JD for Cloud engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tools.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
What We’re Looking For
- 3-5 years of Data Science & ML experience in consumer internet / B2C products.
- Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
- Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection.
- Statistical chops: finding meaningful insights in large data sets.
- Programming ninja: R, Python, SQL + hands-on with NumPy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
- Visualization and warehousing skills: Redshift, Tableau, Looker, or similar.
- A strong problem-solver with curiosity hardwired into your DNA.
Brownie Points
- Experience with big data platforms: Hadoop, Spark, Hive, Pig.
- Extra love if you’ve played with BI tools like Tableau or Looker.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master’s degree a plus
• 3-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must.
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• Power shell is nice to have
• Software development skillsets in Java or Ruby.
• Worked upon any of the cloud platforms – GCP/Azure/AWS is nice to have
Backend Engineer (MongoDB / API Integrations / AWS / Vectorization)
Position Summary
We are hiring a Backend Engineer with expertise in MongoDB, data vectorization, and advanced AI/LLM integrations. The ideal candidate will have hands-on experience developing backend systems that power intelligent data-driven applications, including robust API integrations with major social media platforms (Meta, Instagram, Facebook, with expansion to TikTok, Snapchat, etc.). In addition, this role requires deep AWS experience (Lambda, S3, EventBridge) to manage serverless workflows, automate cron jobs, and execute both scheduled and manual data pulls. You will collaborate closely with frontend developers and AI engineers to deliver scalable, resilient APIs that power our platform.
Key Responsibilities
- Design, implement, and maintain backend services with MongoDB and scalable data models.
- Build pipelines to vectorize data for retrieval-augmented generation (RAG) and other AI-driven features (see the sketch after this list).
- Develop robust API integrations with major social platforms (Meta, Instagram Graph API, Facebook API; expand to TikTok, Snapchat, etc.).
- Implement and maintain AWS Lambda serverless functions for scalable backend processes.
- Use AWS EventBridge to schedule cron jobs and manage event-driven workflows.
- Leverage AWS S3 for structured and unstructured data storage, retrieval, and processing.
- Build workflows for manual and automated data pulls from external APIs.
- Optimize backend systems for performance, scalability, and reliability at high data volumes.
- Collaborate with frontend engineers to ensure smooth integration into Next.js applications.
- Ensure security, compliance, and best practices in API authentication (OAuth, tokens, etc.).
- Contribute to architecture planning, documentation, and system design reviews.
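To illustrate the vector-search piece, a hedged sketch of a MongoDB Atlas Vector Search query via pymongo; the cluster URI, index name, vector field, dimension, and embedding function are all placeholders, and a pre-built Atlas vector index is assumed:

```python
# Sketch of Atlas Vector Search from Python (requires an Atlas cluster
# with a vector index; all names below are assumed, not real).
import random
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<pass>@cluster.example.net")
posts = client["social"]["posts"]

def embed(text: str) -> list[float]:
    # Stand-in for a real embedding model (e.g., OpenAI); deterministic
    # pseudo-vector so the sketch is self-contained. The dimension must
    # match the Atlas index definition (1536 assumed here).
    rng = random.Random(text)
    return [rng.uniform(-1.0, 1.0) for _ in range(1536)]

results = posts.aggregate([
    {
        "$vectorSearch": {
            "index": "posts_vector_index",   # assumed Atlas index name
            "path": "embedding",             # field holding the vectors
            "queryVector": embed("top performing reels this week"),
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"caption": 1, "score": {"$meta": "vectorSearchScore"}}},
])
for doc in results:
    print(doc)
```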
Required Skills/Qualifications
- Strong expertise with MongoDB (including Atlas) and schema design.
- Experience with data vectorization and embeddings (OpenAI, Pinecone, MongoDB Atlas Vector Search, etc.).
- Proven track record of social media API integrations (Meta, Instagram, Facebook; additional platforms a plus).
- Proficiency in Node.js, Python, or other backend languages for API development.
- Deep understanding of AWS services:
- Lambda for serverless functions.
- S3 for structured/unstructured data storage.
- EventBridge for cron jobs, scheduled tasks, and event-driven workflows.
- Strong understanding of REST and GraphQL API design.
- Experience with data optimization, caching, and large-scale API performance.
Preferred Skills/Experience
- Experience with real-time data pipelines (Kafka, Kinesis, or similar).
- Familiarity with CI/CD pipelines and automated deployments on AWS.
- Knowledge of serverless architecture best practices.
- Background in SaaS platform development or data analytics systems.
Job Description
Position - Full stack Developer
Location - Mumbai
Experience - 2-5 Years
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics based diagnostic solution for Tuberculosis was recognized as one of top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state of the art enterprise standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust tech modules that are Unit Testable, Automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS / SASS / Tailwind)
- ES6 / TypeScript
- Desktop app frameworks (Electron / Tauri)
- Component libraries (Bootstrap, Material UI, Lit)
- Responsive web layout (Flex layout, Grid layout)
- Package managers (yarn / npm / turbo)
- Build tools (Vite / Webpack / Parcel)
- Frameworks (React with Redux or MobX / Next.js)
- Design patterns
- Testing (Jest / Mocha / Jasmine / Cypress)
- Functional programming concepts
- Scripting (PowerShell, bash, Python)
Backend Skills
- Node.js (Express / NestJS)
- Python / Rust
- REST APIs
- SOLID design principles
- Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
- Caching (Redis)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift, Google Cloud)
- Version control (Git)
- GitOps
- Automation (Terraform, Ansible)
Cloud Skills
- Object storage
- VPC concepts
- Containerize Deployment
- Serverless architecture
Other Skills
- Innovation and thought leadership
- UI/UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
To know more about us- https://haystackanalytics.in/
Job Description:
Position - Cloud Developer
Experience - 5 - 8 years
Location - Mumbai & Pune
Responsibilities:
- Design, develop, and maintain robust software applications using common, widely adopted languages suited to the application design, with a strong focus on clean, maintainable, and efficient code.
- Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
- Develop RESTful APIs and backend services aligned with modern architectural practices.
- Apply object-oriented programming principles and design patterns to build scalable systems.
- Build and maintain automated test frameworks and scripts to ensure high product quality.
- Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
- Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
- Use Git and related version control practices effectively in a team-based development environment.
- Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
Skills:
- 5+ years of experience
- Experience with IaC modules
- Terraform coding experience, including Terraform modules, as part of a central platform team
- Azure/GCP cloud experience is a must
- Experience with C#/Python/Java coding is good to have
Dear Candidate,
Greetings from Wissen Technology.
We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
- Globally present with offices US, India, UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE.
Job Description:
Please find below details:
Experience - 4+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate shall be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams like Client Services, Product, and Research, as well as Infrastructure/Technology and application development teams, to perform environment and application maintenance and support.
Resource's key Responsibilities
• Provide Tier 2/3 product technical support.
• Building software to help operations and support activities.
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).
• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.
Required skills and experience
• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)
• Master’s degree a plus
• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must.
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• Power shell is nice to have
• Software development skillsets in Java or Ruby.
• Worked upon any of the cloud platforms – GCP/Azure/AWS is nice to have
Strong Full stack developer Profile
Mandatory (Experience 1) - Must Have Minimum 5+ YOE in Software Development,
Mandatory (Experience 2) - Must have 4+ YOE in backend using Python.
Mandatory (Experience 3) - Must have good experience in frontend using React JS with knowledge of HTML, CSS, and JavaScript.
Mandatory (Experience 4) - Must have experience in any of these databases - MySQL / PostgreSQL / Oracle / SQL Server

One of our reputed clients in India
Our client is looking to hire a Databricks Admin immediately.
This is a PAN-India bulk hiring drive.
Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.
AWS experience is a must.
Notice period of 15-30 days is preferred.
Share profiles at hr at etpspl dot com
Please refer/share our email with friends/colleagues who are looking for a job.
Full-Stack Developer
Exp: 5+ years required
Night shift: 8 PM-5 AM / 9 PM-6 AM
Only immediate joiners can apply.
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.