50+ Python Jobs in Mumbai | Python Job openings in Mumbai
Apply to 50+ Python Jobs in Mumbai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
Job Title: Python Developer (4–6 Years Experience)
Location: Mumbai (Onsite)
Experience: 4–6 Years
Salary: ₹50,000 – ₹90,000 per month (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for an experienced Python Developer to join our growing team in Mumbai. The ideal candidate will have strong hands-on experience in Python development, building scalable backend systems, and working with databases and APIs.
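As a rough, non-authoritative illustration of the day-to-day work this posting describes, here is a minimal REST endpoint sketch assuming FastAPI (one of the frameworks named below); the Item model and in-memory store are hypothetical stand-ins for a real database.

```python
# Minimal sketch only: a CRUD-style REST endpoint, assuming FastAPI.
# The Item model and _items dict are illustrative placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

_items: dict[int, Item] = {}  # stand-in for a real database

@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item):
    if item_id in _items:
        raise HTTPException(status_code=409, detail="Item already exists")
    _items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in _items:
        raise HTTPException(status_code=404, detail="Item not found")
    return _items[item_id]
```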
Key Responsibilities
- Design, develop, test, and maintain Python-based applications
- Build and integrate RESTful APIs
- Work with frameworks such as Django / Flask / FastAPI
- Write clean, reusable, and efficient code
- Collaborate with frontend developers, QA, and project managers
- Optimize application performance and scalability
- Debug, troubleshoot, and resolve technical issues
- Participate in code reviews and follow best coding practices
- Work with databases and ensure data security and integrity
- Deploy and maintain applications in staging/production environments
Required Skills & Qualifications
- 4–6 years of hands-on experience in Python development
- Strong experience with Django / Flask / FastAPI
- Good understanding of REST APIs
- Experience with MySQL / PostgreSQL / MongoDB
- Familiarity with Git and version control workflows
- Knowledge of OOP concepts and design principles
- Experience with Linux-based environments
- Understanding of basic security and performance optimization
- Ability to work independently as well as in a team
Good to Have (Preferred Skills)
- Experience with AWS / cloud services
- Knowledge of Docker / CI-CD pipelines
- Exposure to microservices architecture
- Basic frontend knowledge (HTML, CSS, JavaScript)
- Experience working in an Agile/Scrum environment
Job Type: Full-time
Application Question(s):
- If selected, how soon can you join?
Experience:
- Total: 3 years (Required)
- Python: 3 years (Required)
Location:
- Mumbai, Maharashtra (Required)
Work Location: In person
Teknobuilt is an innovative construction technology company building a Digital and AI platform that supports all aspects of program management and execution, automating workflows, collaborative manual tasks, and siloed systems. Our platform has received innovation awards and grants in Canada, the UK, and South Korea, and we are at the frontiers of solving key challenges in the built environment and in digital health, safety, and quality.
Teknobuilt's vision is to help the world build better: safely, smartly, and sustainably. We are on a mission to modernize construction by bringing PACE, our Digitally Integrated Project Execution System, and expert services to midsize and large construction and infrastructure projects. PACE is an end-to-end digital solution that supports real-time project execution, health and safety, quality, and field management for greater visibility and cost savings. PACE enables digital workflows, remote working, and AI-based analytics to bring speed, flow, and surety to project delivery. Our platform has received recognition globally for innovation, and we are experiencing a period of significant growth for our solutions.
Job Responsibilities
As a Quality Analyst Engineer, you will be expected to:
· Thoroughly analyze project requirements, design specifications, and user stories to understand the scope and objectives.
· Arrange, set up, and configure necessary test environments for effective test case execution.
· Participate in and conduct review meetings to discuss test plans, test cases, and defect statuses.
· Execute manual test cases with precision, analyze results, and identify deviations from expected behavior.
· Accurately track, log, prioritize, and manage defects through their lifecycle, ensuring clear communication with developers until resolution.
· Maintain continuous and clear communication with the Test Manager and development team regarding testing progress, roadblocks, and critical findings.
· Develop, maintain, and manage comprehensive test documentation, including:
o Detailed Test Plans
o Well-structured Test Cases for various testing processes
o Concise Summary Reports on test execution and defect status
o Thorough Test Data preparation for test cases
o "Lessons Learned" documents based on testing inputs from previous projects
o "Suggestion Documents" aimed at improving overall software quality
o Clearly defined Test Scenarios
· Clearly report identified bugs to developers with precise steps to reproduce, expected results, and actual results, facilitating efficient defect resolution
Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for companies ranging from startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.
Position Overview:
Senior backend engineering role focused on building and operating ML-backed backend systems powering a large-scale AI product. This is a core foundation/platform role with end-to-end system ownership in a fast-moving, ambiguous environment within a high-intent foundation engineering pod of 10 engineers.
Key Responsibilities:
● Design, build, and operate ML-backed backend systems at scale
● Own runtime orchestration, session/state management, and retrieval/memory pipelines (chunking, embeddings, indexing, vector search, re-ranking, caching, freshness & deletion); a toy retrieval sketch follows this list
● Productionize ML workflows: feature/metadata services, model integration contracts, offline/online parity, and evaluation instrumentation
● Drive performance, reliability, and cost efficiency across latency, throughput, infra usage, and token economics
● Build observability-first systems with tracing, metrics, logs, guardrails, and fallback paths
● Partner closely with applied ML teams on prompt/tool schemas, routing, evaluation datasets, and safe releases
● Ship independently and own systems end-to-end
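Illustrative only, not this company's actual stack: a toy, self-contained version of the retrieval step named in the responsibilities above (embed, index, cosine-similarity search). The embed() function is a hashing placeholder, not a real embedding model; production systems would use a learned model and a vector store.

```python
# Toy retrieval sketch: embed -> index -> cosine search over unit-norm vectors.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Placeholder embedding: hashed bag of tokens, NOT a real model.
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

docs = ["refund policy for orders", "how to reset a password", "shipping times by region"]
index = np.stack([embed(d) for d in docs])  # shape: (n_docs, dim)

def search(query: str, k: int = 2):
    scores = index @ embed(query)            # cosine similarity (vectors are unit-norm)
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in top]

print(search("reset password"))
```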
Required Skills:
● 6+ years of backend/platform engineering experience
● Strong experience building distributed, production-grade systems
● Hands-on exposure to ML-adjacent systems (serving, retrieval, orchestration, inference pipelines)
● Proven ownership of reliability, performance, and cost optimization in production
● Must be based in Mumbai or Bangalore
● Willingness to work from the office full-time (mandatory)
Preferred (Bonus) Skills:
● Experience with greenfield AI platform development
● Already based in Mumbai
● Experience working with US enterprise clients
● Foundation/platform engineering background
Location: Mumbai (Onsite)
Experience: 4–6 Years
Salary: ₹50,000 – ₹90,000 per month (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for a skilled React Developer to join our team in Mumbai. The ideal candidate should have strong hands-on experience in building modern, responsive web applications using React and be comfortable working with at least one backend technology such as Python, Node.js, or PHP.
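For context, a minimal sketch of the backend half of this stack, assuming Flask (one of the backend options listed below): a JSON endpoint a React component could fetch. The route, response data, and flask-cors usage are illustrative assumptions, not this team's actual API.

```python
# Minimal sketch: a Flask JSON endpoint for a React frontend to consume.
# flask-cors and the /api/users route are illustrative assumptions.
from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # allow the React dev server (e.g. localhost:3000) to call this API

@app.get("/api/users/<int:user_id>")
def get_user(user_id: int):
    # stand-in for a real database lookup
    return jsonify({"id": user_id, "name": "Asha", "role": "admin"})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```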
Key Responsibilities
- Develop and maintain user-friendly web applications using React.js
- Convert UI/UX designs into high-quality, reusable components
- Work with REST APIs and integrate frontend with backend services
- Collaborate with backend developers (Python / Node.js / PHP)
- Optimize applications for performance, scalability, and responsiveness
- Manage application state using Redux / Context API / similar
- Write clean, maintainable, and well-documented code
- Participate in code reviews and sprint planning
- Debug and resolve frontend and integration issues
- Ensure cross-browser and cross-device compatibility
Required Skills & Qualifications
- 4–6 years of experience in frontend development
- Strong expertise in React.js
- Proficiency in JavaScript (ES6+)
- Experience with HTML5, CSS3, Responsive Design
- Hands-on experience with RESTful APIs
- Working knowledge of at least one backend technology:
- Python (Django / Flask / FastAPI) OR
- Node.js (Express / NestJS) OR
- PHP (Laravel preferred)
- Familiarity with Git / version control systems
- Understanding of component-based architecture
- Experience working in Linux environments
Good to Have (Preferred Skills)
- Experience with Next.js
- Knowledge of TypeScript
- Familiarity with Redux / React Query
- Basic understanding of databases (MySQL / MongoDB)
- Experience with CI/CD pipelines
- Exposure to AWS or cloud platforms
- Experience working in Agile/Scrum teams
What We Offer
- Competitive salary based on experience and skills
- Onsite role with a collaborative team in Mumbai
- Opportunity to work on modern tech stack and real-world projects
- Career growth and learning opportunities
Interested candidates can share their resumes at
Job Type: Full-time
Application Question(s):
- If selected, how soon can you join?
- Are you okay with the salary slab (₹50,000–₹90,000), depending on your experience?
- Have you worked on a production React application where you integrated REST APIs and handled authentication and error scenarios with a backend (Python / Node.js / PHP)?
Experience:
- Total: 3 years (Required)
- Python: 3 years (Required)
Location:
- Mumbai, Maharashtra (Required)
Work Location: In person
About Allvest :
- AI-driven financial planning and portfolio management platform
- Secure, data-backed portfolio oversight aligned with regulatory standards
- Building cutting-edge fintech solutions for intelligent investment decisions
Role Overview :
- Architect and build scalable, high-performance backend systems
- Work on mission-critical systems handling real-time market data and portfolio analytics
- Ensure regulatory compliance and secure financial transactions
Key Responsibilities :
- Design, develop, and maintain robust backend services and APIs using NodeJS and Python
- Build event-driven architectures using RabbitMQ and Kafka for real-time data processing (see the sketch after this list)
- Develop data pipelines integrating PostgreSQL and BigQuery for analytics and warehousing
- Ensure system reliability, performance, and security with focus on low-latency operations
- Lead technical design discussions, code reviews, and mentor junior developers
- Optimize database queries, implement caching strategies, and enhance system performance
- Collaborate with cross-functional teams to deliver end-to-end features
- Implement monitoring, logging, and observability solutions
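A minimal sketch of the event-driven pattern mentioned above, assuming RabbitMQ via the pika client (Kafka would use a different client); the queue name and payload are hypothetical, not Allvest's actual schema.

```python
# Minimal sketch: publishing a persistent event to RabbitMQ with pika.
# Queue name and payload are illustrative placeholders.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="market_ticks", durable=True)

event = {"symbol": "RELIANCE", "price": 2901.5, "ts": "2024-01-01T09:15:00Z"}
channel.basic_publish(
    exchange="",
    routing_key="market_ticks",
    body=json.dumps(event),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
connection.close()
```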
Required Skills & Experience :
- 5+ years of professional backend development experience
- Strong expertise in NodeJS and Python for production-grade applications
- Proven experience designing RESTful APIs and microservices architectures
- Strong proficiency in PostgreSQL including query optimization and database design
- Hands-on experience with RabbitMQ and Kafka for event-driven systems
- Experience with BigQuery or similar data warehousing solutions
- Solid understanding of distributed systems, scalability patterns, and high-traffic applications
- Strong knowledge of authentication, authorization, and security best practices in financial applications
- Experience with Git, CI/CD pipelines, and modern development workflows
- Excellent problem-solving and debugging skills across distributed systems
Preferred Qualifications :
- Prior experience in fintech, banking, or financial services
- Familiarity with cloud platforms (GCP/AWS/Azure) and containerization (Docker, Kubernetes)
- Knowledge of frontend technologies for full-stack collaboration
- Experience with Redis or Memcached
- Understanding of regulatory requirements (KYC, compliance, data privacy)
- Open-source contributions or tech community participation
What We Offer :
- Opportunity to work on cutting-edge fintech platform with modern technology stack
- Collaborative environment with experienced team from leading financial institutions
- Competitive compensation with equity participation
- Challenging problems at the intersection of finance, AI, and technology
- Career growth in fast-growing startup environment
Location: Mumbai (Phoenix Market City, Kurla West)
Also Apply at https://wohlig.keka.com/careers/jobdetails/122768
Key Responsibilities
- Automation & Reliability: Automate infrastructure and operational processes to ensure high reliability, scalability, and security.
- Cloud Infrastructure Design: Gather GCP infrastructure requirements, evaluate solution options, and implement best-fit cloud architectures.
- Infrastructure as Code (IaC): Design, develop, and maintain infrastructure using Terraform and Ansible.
- CI/CD Ownership: Build, manage, and maintain robust CI/CD pipelines using Jenkins, ensuring system reliability and performance.
- Container Orchestration: Manage Docker containers and self-managed Kubernetes clusters across multiple cloud environments.
- Monitoring & Observability: Implement and manage cloud-native monitoring solutions using Prometheus, Grafana, and the ELK stack.
- Proactive Issue Resolution: Troubleshoot and resolve infrastructure and application issues across development, testing, and production environments.
- Scripting & Automation: Develop efficient automation scripts using Python and one or more of Node.js, Go, or Shell scripting (a small example follows this list).
- Security Best Practices: Maintain and enhance the security of cloud services, Kubernetes clusters, and deployment pipelines.
- Cross-functional Collaboration: Work closely with engineering, product, and security teams to design and deploy secure, scalable infrastructure.
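As a small, hedged example of the Python automation this role calls for, here is a health-check script that could run as a cron job or pipeline step; the service names and URLs are placeholders.

```python
# Illustrative automation snippet: poll service health endpoints, exit non-zero on failure.
# SERVICES entries are hypothetical placeholders.
import sys
import urllib.request

SERVICES = {
    "api": "http://localhost:8080/healthz",
    "worker": "http://localhost:9090/healthz",
}

def check(name: str, url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:  # connection refused, timeout, HTTP errors, etc.
        ok = False
    print(f"{name}: {'OK' if ok else 'DOWN'}")
    return ok

if __name__ == "__main__":
    sys.exit(0 if all(check(n, u) for n, u in SERVICES.items()) else 1)
```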
We are looking for a DevOps Engineer with hands-on experience in automating, monitoring, and scaling cloud-native infrastructure.
You will play a critical role in building and maintaining high-availability, secure, and scalable CI/CD pipelines for our AI- and blockchain-powered FinTech platforms.
You will work closely with Engineering, QA, and Product teams to streamline deployments, optimize cloud environments, and ensure reliable production systems.
Key Responsibilities
- Design, build, and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation
- Deploy, manage, and monitor applications on AWS, Azure, or GCP
- Ensure high availability, scalability, and performance of production environments
- Implement security best practices across infrastructure and DevOps workflows
- Automate environment provisioning, deployments, backups, and monitoring
- Configure and manage containerized applications using Docker and Kubernetes
- Collaborate with developers to improve build, release, and deployment processes
- Monitor systems using tools like Prometheus, Grafana, ELK Stack, or CloudWatch
- Perform root cause analysis (RCA) and support production incident response
Required Skills & Experience
- 2+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with AWS, Azure, or GCP
- Proven experience in setting up and managing CI/CD pipelines
- Proficiency in Docker, Kubernetes, and container orchestration
- Experience with Terraform, Ansible, or similar IaC tools
- Knowledge of monitoring, logging, and alerting systems
- Strong scripting skills using Shell, Bash, or Python
- Good understanding of Git, version control, and branching strategies
- Experience supporting production-grade SaaS or enterprise platforms
Specific Knowledge/Skills
- 4-6 years of experience
- Proficiency in Python programming.
- Basic knowledge of front-end development.
- Basic knowledge of data manipulation and analysis libraries
- Code versioning and collaboration (Git)
- Knowledge of libraries for extracting data from websites (see the sketch after this list)
- Knowledge of SQL and NoSQL databases
- Familiarity with RESTful APIs
- Familiarity with Cloud (Azure/AWS) technologies
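A minimal sketch of the web-data extraction skill noted above, assuming the requests and BeautifulSoup libraries; the URL and CSS selector are placeholders.

```python
# Minimal scraping sketch with requests + BeautifulSoup.
# URL and "h2.title" selector are illustrative placeholders.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/listings", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
titles = [h2.get_text(strip=True) for h2 in soup.select("h2.title")]
print(titles)
```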
Role: Senior Platform Engineer (GCP Cloud)
Experience Level: 3 to 6 Years
Work location: Mumbai
Mode : Hybrid
Role & Responsibilities:
- Build automation software for cloud platforms and applications
- Drive Infrastructure as Code (IaC) adoption
- Design self-service, self-healing monitoring and alerting tools
- Automate CI/CD pipelines (Git, Jenkins, SonarQube, Docker)
- Build Kubernetes container platforms
- Introduce new cloud technologies for business innovation
Requirements:
- Hands-on experience with GCP Cloud
- Knowledge of cloud services (compute, storage, network, messaging)
- IaC tools experience (Terraform/CloudFormation)
- SQL & NoSQL databases (Postgres, Cassandra)
- Automation tools (Puppet/Chef/Ansible)
- Strong Linux administration skills
- Programming: Bash/Python/Java/Scala
- CI/CD pipeline expertise (Jenkins, Git, Maven)
- Multi-region deployment experience
- Agile/Scrum/DevOps methodology
Junior PHP Developer (Full-Time)
Malad, Mumbai (Mindspace) | Work from Office
We’re hiring a Junior PHP Developer at Websites.co.in, a platform where small businesses create their website in 2 minutes.
Your role
- Develop and maintain backend logic using PHP (Laravel or Core PHP)
- Write clean, reusable, and efficient code
- Work with MySQL databases (queries, joins, optimization)
- Integrate REST APIs and troubleshoot backend issues
- Collaborate with frontend, QA, and product teams for feature implementation
- Participate in code reviews, testing, and deployment activities
- Debug production issues and provide quick fixes
What we expect
- Hands-on development experience with PHP (mandatory)
- Strong knowledge of MySQL, queries, and database structures
- Understanding of MVC architecture (Laravel preferred)
- Basic knowledge of HTML, CSS, JavaScript
- Familiarity with Git version control
- Problem-solving mindset and willingness to take ownership
- 0–3.5 years of experience (freshers with strong projects are welcome)
Good to have
- Experience working with APIs, JSON, cURL
- Understanding of server basics (Linux, Apache, hosting environments)
What you get
- Real product ownership, not agency project hopping
- Direct collaboration with CTO and senior devs
- Steep learning curve in a fast-moving SaaS environment
About the Role
We are looking for a hands-on and solution-oriented Senior Data Scientist – Generative AI to join our growing AI practice. This role is ideal for someone who thrives in designing and deploying Gen AI solutions on AWS, enjoys working with customers directly, and can lead end-to-end implementations. You will play a key role in architecting AI solutions, driving project delivery, and guiding junior team members.
Key Responsibilities
- Design and implement end-to-end Generative AI solutions for customers on AWS.
- Work closely with customers to understand business challenges and translate them into Gen AI use-cases.
- Own technical delivery, including data preparation, model integration, prompt engineering, deployment, and performance monitoring (see the sketch after this list).
- Lead project execution – ensure timelines, manage stakeholder communications, and collaborate across internal teams.
- Provide technical guidance and mentorship to junior data scientists and engineers.
- Develop reusable components and reference architectures to accelerate delivery.
- Stay updated with the latest developments in Gen AI, particularly AWS offerings such as Bedrock, SageMaker, and LangChain integrations.
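A minimal sketch, not a reference implementation: invoking a model on Amazon Bedrock from Python via boto3, one of the AWS services this role lists. The region, model ID, and prompt are illustrative assumptions.

```python
# Minimal sketch: one Bedrock model invocation via boto3.
# Region, model ID, and prompt are illustrative placeholders.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize our refund policy."}],
})
resp = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model choice
    body=body,
)
print(json.loads(resp["body"].read())["content"][0]["text"])
```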
Required Skills & Experience
- 3-7 years of hands-on experience in Data Science/AI/ML, with at least 2–3 years in Generative AI projects.
- Proficient in building solutions using AWS AI/ML services (e.g., SageMaker, Amazon Bedrock, Lambda, API Gateway, S3, etc.).
- Experience with LLMs, prompt engineering, RAG pipelines, and deployment best practices.
- Solid programming experience in Python, with exposure to libraries such as Hugging Face, LangChain, etc.
- Strong problem-solving skills and ability to work independently in customer-facing roles.
- Experience in collaborating with Systems Integrators (SIs) or working with startups in India is a major plus.
Soft Skills
- Strong verbal and written communication for effective customer engagement.
- Ability to lead discussions, manage project milestones, and coordinate across stakeholders.
- Team-oriented with a proactive attitude and strong ownership mindset.
What We Offer
- Opportunity to work on cutting-edge Generative AI projects across industries.
- Collaborative, startup-like work environment with flexibility and ownership.
- Exposure to full-stack AI/ML project lifecycle and client-facing roles.
- Competitive compensation and learning opportunities in the AWS AI ecosystem.
About Oneture Technologies
Founded in 2016, Oneture is a cloud-first, full-service digital solutions company, helping clients harness the power of Digital Technologies and Data to drive transformations and turn ideas into business realities. Our team is full of curious, full-stack, innovative thought leaders who are dedicated to providing outstanding customer experiences and building authentic relationships. We are compelled by our core values to drive transformational results from Ideas to Reality for clients across all company sizes, geographies, and industries. The Oneture team delivers full lifecycle solutions: from ideation, project inception, and planning through deployment to ongoing support and maintenance.
Our core competencies and technical expertise include cloud-powered product engineering, big data, and AI/ML. Our deep commitment to value creation for our clients and partners, and our “startup-like agility with enterprise-like maturity” philosophy, has helped us establish long-term relationships with our clients and enabled us to build and manage mission-critical platforms for them.
Build robust testing automation frameworks that assure reliability across Jodo World’s omnichannel products — web, mobile, and real-time communication systems.
Key Responsibilities
- Design and maintain automation frameworks (Selenium, Appium, Cypress); see the sketch after this list.
- Create regression, smoke, and performance suites.
- Integrate automated testing into CI/CD pipelines.
- Execute functional, UI, Mobile App, API, load, and security testing.
- Ensure 100% traceability between requirements and test cases.
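Illustrative only: a smoke test in the pytest-plus-Selenium style referenced above; the target URL and assertion are placeholders for real product flows, and Chrome is assumed to be installed.

```python
# Minimal sketch: a headless-Chrome smoke test with pytest + Selenium.
# URL and title assertion are illustrative placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

@pytest.fixture
def driver():
    opts = Options()
    opts.add_argument("--headless=new")
    drv = webdriver.Chrome(options=opts)
    yield drv
    drv.quit()

def test_homepage_loads(driver):
    driver.get("https://example.com")
    assert "Example" in driver.title
```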
Required Skills & Experience
- 3–6 years in Test automation.
- Hands-on with Java, JavaScript/Python-based frameworks.
- Proficiency in JMeter or Playwright for performance testing.
- Familiar with mobile app and REST API testing.
What Success Looks Like
- 95% automation coverage.
- Zero critical bugs post-release.
Why Join Us
Help deliver reliability for a global AI-powered communications platform — every test secures millions of user interactions.
About the Role
Oneture Technologies is helping global clients on their digital transformation journey to build modern, scalable, and integrated digital platforms. To strengthen our Technology and Leadership capabilities, we are looking for an experienced Technical Lead who can drive solution design, mentor teams, and ensure high-quality delivery of large-scale systems.
As a Technical Lead, you will own the technical architecture, delivery execution, and team leadership across complex projects, while working closely with clients and internal stakeholders.
Key Responsibilities
- Design, develop, and maintain highly scalable and secure application systems
- Lead solution architecture, technical design, effort estimation, and delivery planning
- Drive the implementation of cloud-native solutions following best practices for security, scalability, and reliability
- Lead and manage a team of 5–10 engineers, ensuring adherence to engineering processes and quality standards
- Mentor junior developers and provide hands-on technical guidance on day-to-day work
- Own end-to-end technical delivery, including architecture, development, testing, and release
- Collaborate closely with clients and internal stakeholders; provide regular status updates and manage expectations
- Troubleshoot complex technical issues and propose robust, long-term solutions
- Establish strong engineering practices around test-driven development, CI/CD, and automated deployments
- Contribute to continuous improvement of engineering standards, tooling, and delivery processes
Required Experience & Qualifications
- 4–6+ years of hands-on experience with proven success in technical leadership roles
- Strong experience building and scaling large, complex, high-traffic platforms
- Demonstrated ability to handle high workload, performance-sensitive, and secure systems
- Bachelor’s degree (B.E. / B.Tech) in Computer Science or a related field from a reputed institute
Technical Expertise
- Proficiency in one or more backend programming languages such as GoLang, Java, Node.js, or Python
- Strong experience architecting and implementing solutions on AWS
- Hands-on experience with cloud architecture, scalability, and security best practices
- Experience with caching technologies such as Redis or Memcached
- Familiarity with containerization and orchestration tools (Docker, Kubernetes)
- Strong understanding of RESTful services, authentication mechanisms, data formats (JSON/XML), and SQL
- Experience with unit testing, functional testing, and CI/CD pipelines
- Solid understanding of system design, performance optimization, and release management
- Ability to think from a product and user-impact mindset
Good to Have
- AWS certifications (Solutions Architect / Professional / Specialty)
- Experience with observability tools (logging, monitoring, alerting)
- Exposure to distributed systems and microservices architecture
- Experience working in fast-paced, high-growth environments
Soft Skills & Leadership Qualities
- Strong ownership and accountability for technical outcomes
- Excellent communication and stakeholder management skills
- Ability to mentor, guide, and inspire engineering teams
- Comfortable working in a fast-paced, evolving environment
- Strong problem-solving and decision-making ability
Why Join Oneture?
- Work on large-scale, high-impact digital transformation projects
- Strong emphasis on engineering excellence and leadership growth
- Collaborative, learning-driven culture
- Opportunity to influence architecture and technology direction
- Exposure to modern cloud-native and scalable system design
SimplyFI is a fast-growing AI- and blockchain-powered product company transforming trade finance and banking through digital innovation. We build scalable, intelligent platforms that simplify complex financial workflows for enterprises and financial institutions.
We are looking for a Full Stack Tech Lead with strong expertise in ReactJS (primary) and solid working knowledge of Python (secondary) to join our team in Thane, Mumbai.
Role: Full Stack Tech Lead (ReactJS + Python)
Key Responsibilities:
- Design, develop, and maintain scalable full-stack applications, with ReactJS as the primary frontend technology
- Build and integrate backend services using Python (Flask / Django / FastAPI)
- Design and manage RESTful APIs for internal and external system integrations
- Collaborate on AI-driven product features and support machine-learning model integrations when required
- Work closely with DevOps teams to deploy, monitor, and optimize applications on AWS
- Ensure performance, scalability, security, and code quality across the application stack
- Collaborate with product managers, designers, and QA teams to deliver high-quality features
- Write clean, maintainable, and testable code following engineering best practices
- Participate in agile processes, including code reviews, sprint planning, and daily stand-ups
Required Skills & Qualifications:
- Strong hands-on experience with ReactJS, including hooks, state management, Redux, and API integrations
- Proficiency in backend development using Python (Flask, Django, or FastAPI)
- Solid understanding of RESTful API design and secure authentication mechanisms (OAuth2, JWT)
- Experience working with databases such as MySQL, PostgreSQL, and MongoDB
- Familiarity with microservices architecture and modern software design patterns
- Hands-on experience with Git, CI/CD pipelines, Docker, and Kubernetes
- Strong problem-solving, debugging, and performance optimization skills
We are seeking a motivated Data Analyst to support business operations by analyzing data, preparing reports, and delivering meaningful insights. The ideal candidate should be comfortable working with data, identifying patterns, and presenting findings in a clear and actionable way.
Key Responsibilities:
- Collect, clean, and organize data from internal and external sources
- Analyze large datasets to identify trends, patterns, and opportunities
- Prepare regular and ad-hoc reports for business stakeholders
- Create dashboards and visualizations using tools like Power BI or Tableau
- Work closely with cross-functional teams to understand data requirements
- Ensure data accuracy, consistency, and quality across reports
- Document data processes and analysis methods
Machine Learning Engineer | 3+ Years | Mumbai (Onsite)
Location: Ghansoli, Mumbai
Work Mode: Onsite | 5 days working
Notice Period: Immediate to 30 Days preferred
About the Role
We are hiring a Machine Learning Engineer with 3+ years of experience to build and deploy prediction, classification, and recommendation models. You’ll work on end-to-end ML pipelines and production-grade AI systems.
Must-Have Skills
- 3+ years of hands-on ML experience
- Strong Python (Pandas, NumPy, Scikit-learn, TensorFlow / PyTorch)
- Experience with feature engineering, model training & evaluation
- Hands-on with Azure ML / Azure Storage / Azure Functions
- Knowledge of modern AI concepts (embeddings, transformers, LLMs)
Good to Have
- MLOps tools (MLflow, Docker, CI/CD)
- Time-series forecasting
- Model serving using FastAPI
Why Join Us?
- Work on real-world ML use cases
- Exposure to modern AI & LLM-based systems
- Collaborative engineering environment
- High ownership & learning opportunities
We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
Key Responsibilities
• Develop, maintain, and optimize backend applications using Python.
• Build and integrate RESTful APIs and microservices.
• Work with relational and NoSQL databases for data storage, retrieval, and optimization.
• Write clean, efficient, and reusable code while following best practices.
• Collaborate with cross-functional teams (frontend, QA, DevOps) to deliver high quality features.
• Participate in code reviews to maintain high coding standards.
• Troubleshoot, debug, and upgrade existing applications.
• Ensure application security, performance, and scalability.
Required Skills & Qualifications:
• 2–4 years of hands-on experience in Python development.
• Strong command over Python frameworks such as Django, Flask, or FastAPI.
• Solid understanding of Object-Oriented Programming (OOP) principles.
• Experience working with databases such as PostgreSQL, MySQL, or MongoDB.
• Proficiency in writing and consuming REST APIs.
• Familiarity with Git and version control workflows.
• Experience with unit testing and frameworks like PyTest or Unittest.
• Knowledge of containerization (Docker) is a plus.
AI Agent Builder – Internal Functions and Data Platform Development Tools
About the Role:
We are seeking a forward-thinking AI Agent Builder to lead the design, development, deployment, and usage reporting of Microsoft Copilot and other AI-powered agents across our data platform development tools and internal business functions. This role will be instrumental in driving automation, improving onboarding, and enhancing operational efficiency through intelligent, context-aware assistants.
This role is central to our GenAI transformation strategy. You will help shape the future of how our teams interact with data, reduce administrative burden, and unlock new efficiencies across the organization. Your work will directly contribute to our “Art of the Possible” initiative—demonstrating tangible business value through AI.
You Will:
• Copilot Agent Development: Use Microsoft Copilot Studio and Agent Builder to create, test, and deploy AI agents that automate workflows, answer queries, and support internal teams.
• Data Engineering Enablement: Build agents that assist with data connector scaffolding, pipeline generation, and onboarding support for engineers.
• Knowledge Base Integration: Curate and integrate documentation (e.g., ERDs, connector specs) into Copilot-accessible repositories (SharePoint, Confluence) to support contextual AI responses.
• Prompt Engineering: Design reusable prompt templates and conversational flows to streamline repeated tasks and improve agent usability.
• Tool Evaluation & Integration: Assess and integrate complementary AI tools (e.g., GitLab Duo, Databricks AI, Notebook LM) to extend Copilot capabilities.
• Cross-Functional Collaboration: Partner with product, delivery, PMO, and security teams to identify high-value use cases and scale successful agent implementations.
• Governance & Monitoring: Ensure agents align with Responsible AI principles, monitor performance, and iterate based on feedback and evolving business needs.
• Adoption and Usage Reporting: Use Microsoft Viva Insights and other tools to report on user adoption, usage and business value delivered.
What We're Looking For:
• Proven experience with Microsoft 365 Copilot, Copilot Studio, or similar AI platforms (e.g., ChatGPT, Claude).
• Strong understanding of data engineering workflows, tools (e.g., Git, Databricks, Unity Catalog), and documentation practices.
• Familiarity with SharePoint, Confluence, and Microsoft Graph connectors.
• Experience in prompt engineering and conversational UX design.
• Ability to translate business needs into scalable AI solutions.
• Excellent communication and collaboration skills across technical and non-technical stakeholders.
Bonus Points:
• Experience with GitLab Duo, Notebook LM, or other AI developer tools.
• Background in enterprise data platforms, ETL pipelines, or internal business systems.
• Exposure to AI governance, security, and compliance frameworks.
• Prior work in a regulated industry (e.g., healthcare, finance) is a plus.
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you ok for Mumbai location (if candidate is from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions (see the sketch after this list)
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
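A toy sketch of the propensity-scoring work described above, assuming scikit-learn; the features and labels are synthetic placeholders, not real user data.

```python
# Toy propensity model: logistic regression scoring per-user probabilities.
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 4))  # e.g. recency, frequency, spend, tenure
y = (X @ np.array([1.2, -0.8, 0.5, 0.3]) + rng.normal(size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]  # churn/conversion propensity per user
print("AUC:", round(roc_auc_score(y_te, scores), 3))
```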
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
About Oneture Technologies
Oneture Technologies is a cloud-first digital engineering company helping enterprises and high-growth startups build modern, scalable, and data-driven solutions. Our teams work on cutting-edge big data, cloud, analytics, and platform engineering engagements where ownership, innovation, and continuous learning are core values.
Role Overview
We are looking for an experienced Data Engineer with 2-4 years of hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate must have strong expertise in PySpark and exposure to real-time or streaming frameworks such as Apache Flink. You will work closely with architects, data scientists, and product teams to design and deliver robust, high-performance data solutions.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT data pipelines using PySpark (see the sketch after this list)
- Implement real-time or near real-time data processing using Apache Flink
- Optimize data workflows for performance, scalability, and reliability
- Work with large-scale data platforms and distributed environments
- Collaborate with cross-functional teams to integrate data solutions into products and analytics platforms
- Ensure data quality, integrity, and governance across pipelines
- Conduct performance tuning, debugging, and root-cause analysis of data processes
- Write clean, modular, and well-documented code following best engineering practices
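A minimal PySpark batch-pipeline sketch of the kind described above (DataFrame API plus aggregations); the input/output paths and column names are hypothetical.

```python
# Minimal PySpark batch job: filter, aggregate daily, write curated output.
# Paths and columns are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.parquet("s3://bucket/raw/orders/")  # placeholder path
daily = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)
daily.write.mode("overwrite").parquet("s3://bucket/curated/daily_orders/")
```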
Primary Skills
- Strong hands-on experience in PySpark (RDD, DataFrame API, Spark SQL)
- Experience with Apache Flink, Spark or Kafka (streaming or batch)
- Solid understanding of distributed computing concepts
- Proficiency in Python for data engineering workflows
- Strong SQL skills for data manipulation and transformation
- Experience with data pipeline orchestration tools (Airflow, Step Functions, etc.)
Secondary Skills
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of data lakes, lakehouse architectures, and modern data stack tools
- Familiarity with Delta Lake, Iceberg, or Hudi
- Experience with CI/CD pipelines for data workflows
- Understanding of messaging and streaming systems (Kafka, Kinesis)
- Knowledge of DevOps and containerization tools (Docker)
Soft Skills
- Strong analytical and problem-solving capabilities
- Ability to work independently and as part of a collaborative team
- Good communication and documentation skills
- Ownership mindset with a willingness to learn and adapt
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field
Why Join Oneture Technologies?
- Opportunity to work on high-impact, cloud-native data engineering projects
- Collaborative team environment with a strong learning culture
- Exposure to modern data platforms, scalable architectures, and real-time data systems
- Growth-oriented role with hands-on ownership across end-to-end data engineering initiatives
Backend Developer (Django)
About the Role:
We are looking for a highly motivated Backend Developer with hands-on experience in the Django framework to join our dynamic team. The ideal candidate should be passionate about backend development and eager to learn and grow in a fast-paced environment. You’ll be involved in developing web applications, APIs, and automation workflows.
Key Responsibilities:
- Develop and maintain Python-based web applications using Django and Django Rest Framework.
- Build and integrate RESTful APIs.
- Work collaboratively with frontend developers to integrate user-facing elements with server-side logic.
- Contribute to improving development workflows through automation.
- Assist in deploying applications using cloud platforms like Heroku or AWS.
- Write clean, maintainable, and efficient code.
Requirements:
Backend:
- Strong understanding of Django and Django Rest Framework (DRF).
- Experience with task queues like Celery (a minimal sketch follows).
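A minimal Celery sketch under stated assumptions (Redis as the broker, which the posting does not specify); the task body is illustrative.

```python
# tasks.py — minimal Celery app with one task; Redis broker is an assumption.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def send_welcome_email(user_id: int) -> str:
    # real code would render and dispatch the email here
    return f"queued welcome email for user {user_id}"

# enqueue from Django view code: send_welcome_email.delay(42)
```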
Frontend (Basic Understanding):
- Proficiency in HTML, CSS, Bootstrap, JavaScript, and jQuery.
Hosting & Deployment:
- Familiarity with at least one hosting service such as Heroku, AWS, or similar platforms.
Linux/Server Knowledge:
- Basic to intermediate understanding of Linux commands and server environments.
- Ability to work with terminal, virtual environments, SSH, and basic server configurations.
Python Knowledge:
- Good grasp of OOP concepts.
- Familiarity with Pandas for data manipulation is a plus.
Soft & Team Skills:
- Strong collaboration and team management abilities.
- Ability to work in a team-driven environment and coordinate tasks smoothly.
- Problem-solving mindset and attention to detail.
- Good communication skills and eagerness to learn.
What We Offer:
- A collaborative, friendly, and growth-focused work environment.
- Opportunity to work on real-time projects using modern technologies.
- Guidance and mentorship to help you advance in your career.
- Flexible and supportive work culture.
- Opportunities for continuous learning and skill development.
Location : Bhayander (Onsite)
Immediate to 30-day joiner and Mumbai-based candidate preferred.
Job Description: Python-Azure AI Developer
Experience: 5+ years
Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal
Mandatory Skills:
- Python: Expert-level proficiency with FastAPI/Flask
- Azure Services: Hands-on experience integrating Azure cloud services
- Databases: PostgreSQL, Redis
- AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding
Good to Have:
- Workflow automation tools (n8n or similar)
- Experience with LangChain, AutoGen, or other AI agent frameworks
- Azure OpenAI Service knowledge
Key Responsibilities:
- Develop AI-powered applications using Python and Azure
- Build RESTful APIs with FastAPI/Flask
- Integrate Azure services for AI/ML workloads
- Implement agentic AI solutions
- Database optimization and management
- Workflow automation implementation
Software Tester – Automation (On-Site)
📍 Location: Navi Mumbai
Budget: ₹4 LPA to ₹7 LPA
Years of Experience: 2 to 5 years
🕒 Immediate Joiners Preferred
✨ Why Join Us?
🚀 Growth-driven environment with modern, automation-first projects
📆 Weekends off + Provident Fund benefits
🤝 Supportive, collaborative & innovation-first culture
🔍 Role Overview
We are looking for an Automation Tester with strong hands-on experience in Python-based UI, API, and WebSocket automation. You will collaborate closely with developers, project managers, and QA peers to ensure product quality, performance, and reliability, while also exploring AI-led testing initiatives.
🧩 Key Responsibilities
🧾 Requirement Analysis & Test Planning
Participate in client interactions to understand testing and automation requirements.
Convert functional/technical specifications into automation-ready test scenarios.
🤖 Automation Testing & Framework Development
Develop and maintain automation scripts using Python, Selenium, and Pytest.
Build scalable automation frameworks for UI, API, and WebSocket testing.
Improve script reusability, modularity, and performance.
🌐 API & WebSocket Testing
Perform REST API validations using Postman/Swagger.
Develop automated API test suites using Python/Pytest.
Execute WebSocket test scenarios (real-time event/message validations, latency, connection stability); a minimal sketch follows.
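Illustrative only: an async WebSocket round-trip check, assuming the websockets library and pytest-asyncio; the endpoint is a public echo service used as a placeholder.

```python
# Minimal sketch: WebSocket round-trip test with websockets + pytest-asyncio.
# The echo endpoint is a placeholder for a real product URL.
import asyncio
import pytest
import websockets

@pytest.mark.asyncio
async def test_echo_roundtrip():
    async with websockets.connect("wss://echo.websocket.events") as ws:
        await ws.send("ping")
        reply = await asyncio.wait_for(ws.recv(), timeout=5)
        assert reply  # real suites would assert on message schema and latency
```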
🧪 Manual Testing (As Needed)
Execute functional, UI, smoke, sanity, and exploratory tests.
Validate applications in development, QA, and production environments.
🐞 Defect Management
Log, track, and retest defects using Jira or Zoho Projects.
Ensure high-quality bug reporting with clear steps and severity/priority tagging.
⚡ Performance Testing
Use JMeter to conduct load, stress, and performance tests for APIs/WebSocket-based systems.
Analyze system performance and highlight bottlenecks.
🧠 AI-Driven Testing Exploration
Research and experiment with AI tools to enhance automation coverage and efficiency.
Propose AI-driven improvements for regression, analytics, and test optimization.
🤝 Collaboration & Communication
Participate in daily stand-ups and regular QA syncs.
Communicate blockers, automation progress, and risks clearly.
📊 Test Reporting & Metrics
Create reports on automation execution, defect trends, and performance benchmarks.
🛠 Key Technical Skills
✔ Strong proficiency in Python
✔ UI Automation using Selenium (Python)
✔ Pytest Framework
✔ API Testing – Postman/Swagger
✔ WebSocket Testing
✔ Performance Testing using JMeter
✔ Knowledge of CI/CD tools (such as Jenkins)
✔ Knowledge of Git
✔ SQL knowledge (added advantage)
✔ Functional/Manual Testing expertise
✔ Solid understanding of SDLC/STLC & QA processes
🧰 Tools You Will Work With
Automation: Selenium, Pytest
API & WebSockets: Postman, Swagger, Python libraries
Performance: JMeter
Project/Defect Tracking: Jira, Zoho Projects
CI/CD & Version Control: Jenkins, Git
🌟 Soft Skills
Strong communication & teamwork
Detail-oriented and analytical
Problem-solving mindset
Ownership and accountability
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What You Will Do:
• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.
• You measure success by your own growth, not external validation.
• You thrive on challenges, not on perks or financial rewards.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What We Expect:
• Develop and maintain scalable backend systems using Python.
• Design and implement REST APIs and socket-based communication.
• Optimize code for speed, performance, and reliability.
• Collaborate with frontend teams to integrate server-side logic.
• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.
• Build fault-tolerant, multi-producer/consumer systems.
Must-Have Skills:
• 3–4 years of experience in Python and backend development.
• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).
• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.
• Bachelor’s degree in Computer Science or related field.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, or algorithmic trading.
• Experience with GoLang, C/C++, Erlang, or Elixir.
• Exposure to trading, fintech, or low-latency systems.
• Familiarity with microservices and CI/CD pipelines.
Required Skills: Strong SQL Expertise, Data Reporting & Analytics, Database Development, Stakeholder & Client Communication, Independent Problem-Solving & Automation Skills
Review Criteria
· Must have Strong SQL skills (queries, optimization, procedures, triggers)
· Must have Advanced Excel skills
· Should have 3+ years of relevant experience
· Should have Reporting + dashboard creation experience
· Should have Database development & maintenance experience
· Must have Strong communication for client interactions
· Should have Ability to work independently
· Willingness to work from client locations.
Description
Who is an ideal fit for us?
We seek professionals who are analytical, demonstrate self-motivation, exhibit a proactive mindset, and possess a strong sense of responsibility and ownership in their work.
What will you get to work on?
As a member of the Implementation & Analytics team, you will:
● Design, develop, and optimize complex SQL queries to extract, transform, and analyze data
● Create advanced reports and dashboards using SQL, stored procedures, and other reporting tools
● Develop and maintain database structures, stored procedures, functions, and triggers
● Optimize database performance by tuning SQL queries, and indexing to handle large datasets efficiently
● Collaborate with business stakeholders and analysts to understand analytics requirements
● Automate data extraction, transformation, and reporting processes to improve efficiency
What do we expect from you?
For the SQL/Oracle Developer role, we are seeking candidates with the following skills and Expertise:
● Proficiency in SQL (Window functions, stored procedures) and MS Excel (advanced Excel skills)
● More than 3 years of relevant experience
● Java / Python experience is a plus but not mandatory
● Strong communication skills to interact with customers to understand their requirements
● Capable of working independently with minimal guidance, showcasing self-reliance and initiative
● Previous experience in automation projects is preferred
● Work From Office: Bangalore/Navi Mumbai/Pune/Client locations
About Us
Dolat Capital is a multi-strategy quantitative trading firm specializing in high-frequency and fully automated trading systems across global markets. We build proprietary algorithms using advanced mathematical, statistical, and computational techniques.
We are looking for an Experienced Quantitative Researcher to develop, test, and optimize quantitative trading strategies—primarily for APAC markets. The ideal candidate brings strong mathematical thinking, hands-on trading experience, and a track record of building profitable models.
Key Responsibilities
- Research, design & develop quantitative trading strategies
- Analyse large datasets and build predictive models / regression models
- Implement models in Python / C++ / Matlab
- Monitor, execute, and improve existing trading strategies
- Collaborate closely with traders, developers, and researchers
- Optimize trading systems, reduce latency, and enhance execution
- Identify new trading opportunities across listed products
- Oversee and manage risk for options, equities, futures, and other instruments
Required Skills & Experience
- Minimum 3+ years of experience on a high-volume equities, futures, options, or market-making desk
- Strong background in Statistics, Mathematics, Physics, or related field (PhD)
- Proven track record of profitable real-world trading strategies
- Strong programming experience: C++, Python, R, Matlab
- Experience with automated trading systems and exchange protocols
- Ability to work in a fast-paced, high-pressure trading environment
- Excellent analytical skills, precision, and attention to detail
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Responsibilities:
- Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow); see the sketch after this list
- Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views
- Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration
- Implement SQL-based transformations using Dataform (or dbt)
- Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture
- Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability
- Partner with solution architects and product teams to translate data requirements into technical designs
- Mentor junior data engineers and support knowledge-sharing across the team
- Contribute to documentation, code reviews, sprint planning, and agile ceremonies
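A minimal Apache Beam (Python SDK) sketch of the batch pattern above; it runs locally by default, and the same pipeline can target Dataflow via the DataflowRunner. Paths and the filter predicate are placeholders.

```python
# Minimal Beam pipeline: read text, filter click events, write output.
# GCS paths and the string-match filter are illustrative placeholders.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://bucket/raw/events.jsonl")
        | "KeepClicks" >> beam.Filter(lambda line: '"type": "click"' in line)
        | "Write" >> beam.io.WriteToText("gs://bucket/curated/clicks")
    )
```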
Requirements
- 5+ years of hands-on experience in data engineering, with at least 2 years on GCP
- Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)
- Strong programming skills in Python and/or Java
- Experience with SQL optimization, data modeling, and pipeline orchestration
- Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks
- Exposure to Dataform, dbt, or similar tools for ELT workflows
- Solid understanding of data architecture, schema design, and performance tuning
- Excellent problem-solving and collaboration skills
Bonus Skills:
- GCP Professional Data Engineer certification
- Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures
- Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)
- Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)
About Ven Analytics
At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.
Role Overview
We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.
Key Responsibilities
- Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
- Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis (see the pandas sketch after this list).
- Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
- Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
- Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
- Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
- Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
- Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.
- Power BI Development: Use Power BI Desktop for report building and the Power BI Service for distribution.
- Backend Development: Develop optimized SQL queries that are easy to consume, maintain, and debug.
- Version Control: Maintain strict version control by tracking change requests (CRs) and bug fixes, and keep Prod and Dev dashboards properly maintained.
- Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.
- Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.
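A small, hypothetical example of the Python-side data shaping mentioned in the responsibilities: pre-aggregating raw records so the Power BI model stays small and the DAX measures stay simple. The columns and values are invented.

```python
# Pre-aggregate raw sales rows to month x region before loading into Power BI.
import pandas as pd

raw = pd.DataFrame({
    "order_date": pd.to_datetime(["2024-01-05", "2024-01-05", "2024-02-10"]),
    "region": ["West", "East", "West"],
    "revenue": [1200.0, 800.0, 950.0],
})

monthly = (
    raw.assign(month=raw["order_date"].dt.to_period("M").dt.to_timestamp())
       .groupby(["month", "region"], as_index=False)["revenue"].sum()
)
print(monthly)  # one row per month/region, ready for a simple SUM measure
```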
Must-Have Skills
- Strong experience building robust data models in Power BI
- Hands-on expertise with DAX (complex measures and calculated columns)
- Proficiency in M Language (Power Query) beyond drag-and-drop UI
- Clear understanding of data visualization best practices (less fluff, more insight)
- Solid grasp of SQL and Python for data processing
- Strong analytical thinking and ability to craft compelling data stories
- Client servicing background.
Good-to-Have (Bonus Points)
- Experience using DAX Studio and Tabular Editor
- Prior work in a high-volume data processing production environment
- Exposure to modern CI/CD practices or version control with BI tools
Why Join Ven Analytics?
- Be part of a fast-growing startup that puts data at the heart of every decision.
- Opportunity to work on high-impact, real-world business challenges.
- Collaborative, transparent, and learning-oriented work environment.
- Flexible work culture and focus on career development.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.) (see the Lambda sketch after this list).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases such as MySQL, including SQL query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
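As a sketch of the Python-on-AWS work described above, here is a minimal S3-triggered Lambda handler using boto3. The `processed/` prefix and the event wiring are assumptions for illustration, not this team's actual layout.

```python
# Sketch of a small Python Lambda: copy each newly uploaded S3 object
# to a processed/ prefix in the same bucket.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Standard S3 event notification shape: Records[].s3.bucket/object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.copy_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"status": "ok"}
```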
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
JD for Cloud Engineer
Job Summary:
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
- Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.
- Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
- Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
- Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
- Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
- Work with Helm charts for microservices deployments.
- Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
- Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
- Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting (see the sketch after this section)
- Integrate security and compliance checks into the DevOps workflow (DevSecOps).
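A hedged example of the scripted automation this role covers: a small Python check, using the google-cloud-storage client, that lists buckets without uniform bucket-level access. The project id is a placeholder, and a real compliance check would more likely run in CI or on a schedule.

```python
# Illustrative ops/compliance script for GCS (requires google-cloud-storage
# and application-default credentials).
from google.cloud import storage

def buckets_missing_ubla(project_id: str) -> list[str]:
    client = storage.Client(project=project_id)
    flagged = []
    for bucket in client.list_buckets():
        # Flag buckets still using legacy per-object ACLs.
        if not bucket.iam_configuration.uniform_bucket_level_access_enabled:
            flagged.append(bucket.name)
    return flagged

if __name__ == "__main__":
    print(buckets_missing_ubla("my-sample-project"))  # project id is a placeholder
```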
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
What We’re Looking For
- 3-5 years of Data Science & ML experience in consumer internet / B2C products.
- Degree in Statistics, Computer Science, or Engineering (or certification in Data Science).
- Machine Learning wizardry: recommender systems, NLP, user profiling, image processing, anomaly detection (see the sketch after this list).
- Statistical chops: finding meaningful insights in large data sets.
- Programming ninja: R, Python, SQL + hands-on with Numpy, Pandas, scikit-learn, Keras, TensorFlow (or similar).
- Visualization and analytics tooling: Redshift, Tableau, Looker, or similar.
- A strong problem-solver with curiosity hardwired into your DNA.
Brownie Points
- Experience with big data platforms: Hadoop, Spark, Hive, Pig.
- Extra love if you’ve played with BI tools like Tableau or Looker.
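To ground the anomaly-detection skill listed above, here is a toy scikit-learn example. The data is synthetic, and in practice `contamination` would be tuned to the real base rate of anomalies.

```python
# Toy anomaly detection with scikit-learn's IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 2))      # typical user behaviour
outliers = rng.uniform(-6, 6, size=(10, 2))   # a few anomalous sessions
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = model.predict(X)                     # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(labels == -1)} suspicious points")
```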
Required skills and experience
• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)
• Master’s degree a plus
• 3-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must.
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• PowerShell is nice to have
• Software development skillsets in Java or Ruby.
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
Backend Engineer (MongoDB / API Integrations / AWS / Vectorization)
Position Summary
We are hiring a Backend Engineer with expertise in MongoDB, data vectorization, and advanced AI/LLM integrations. The ideal candidate will have hands-on experience developing backend systems that power intelligent data-driven applications, including robust API integrations with major social media platforms (Meta, Instagram, Facebook, with expansion to TikTok, Snapchat, etc.). In addition, this role requires deep AWS experience (Lambda, S3, EventBridge) to manage serverless workflows, automate cron jobs, and execute both scheduled and manual data pulls. You will collaborate closely with frontend developers and AI engineers to deliver scalable, resilient APIs that power our platform.
Key Responsibilities
- Design, implement, and maintain backend services with MongoDB and scalable data models.
- Build pipelines to vectorize data for retrieval-augmented generation (RAG) and other AI-driven features (see the vector-search sketch after this list).
- Develop robust API integrations with major social platforms (Meta, Instagram Graph API, Facebook API; expand to TikTok, Snapchat, etc.).
- Implement and maintain AWS Lambda serverless functions for scalable backend processes.
- Use AWS EventBridge to schedule cron jobs and manage event-driven workflows.
- Leverage AWS S3 for structured and unstructured data storage, retrieval, and processing.
- Build workflows for manual and automated data pulls from external APIs.
- Optimize backend systems for performance, scalability, and reliability at high data volumes.
- Collaborate with frontend engineers to ensure smooth integration into Next.js applications.
- Ensure security, compliance, and best practices in API authentication (OAuth, tokens, etc.).
- Contribute to architecture planning, documentation, and system design reviews.
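A hedged sketch of the vectorization-plus-retrieval work above, using pymongo against MongoDB Atlas Vector Search. The index name, field names, and connection string are placeholders; the documents are assumed to carry a pre-computed `embedding` array field.

```python
# Semantic retrieval over MongoDB Atlas Vector Search (pymongo).
from pymongo import MongoClient

client = MongoClient("mongodb+srv://...")  # connection string elided
coll = client["app"]["posts"]              # db/collection names are placeholders

def semantic_search(query_vector: list[float], k: int = 5):
    pipeline = [
        {
            "$vectorSearch": {
                "index": "embedding_index",   # assumed Atlas vector index name
                "path": "embedding",          # assumed embedding field
                "queryVector": query_vector,
                "numCandidates": 100,
                "limit": k,
            }
        },
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(coll.aggregate(pipeline))
```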
Required Skills/Qualifications
- Strong expertise with MongoDB (including Atlas) and schema design.
- Experience with data vectorization and embeddings (OpenAI, Pinecone, MongoDB Atlas Vector Search, etc.).
- Proven track record of social media API integrations (Meta, Instagram, Facebook; additional platforms a plus).
- Proficiency in Node.js, Python, or other backend languages for API development.
- Deep understanding of AWS services:
- Lambda for serverless functions.
- S3 for structured/unstructured data storage.
- EventBridge for cron jobs, scheduled tasks, and event-driven workflows.
- Strong understanding of REST and GraphQL API design.
- Experience with data optimization, caching, and large-scale API performance.
Preferred Skills/Experience
- Experience with real-time data pipelines (Kafka, Kinesis, or similar).
- Familiarity with CI/CD pipelines and automated deployments on AWS.
- Knowledge of serverless architecture best practices.
- Background in SaaS platform development or data analytics systems.
Job Description
Position - Full Stack Developer
Location - Mumbai
Experience - 2-5 Years
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India at the BIRAC Showcase event in Delhi in 2022.
Objectives of this Role:
- Work across the full stack, building highly scalable distributed solutions that enable positive user experiences and measurable business growth
- Ideate and develop new product features in collaboration with domain experts in healthcare and genomics
- Develop state-of-the-art, enterprise-standard front-end and backend services
- Develop cloud platform services based on container orchestration platform
- Continuously embrace automation for repetitive tasks
- Ensure application performance, uptime, and scale, maintaining high standards of code quality by using clean coding principles and solid design patterns
- Build robust, unit-testable tech modules, automating recurring tasks and processes
- Engage effectively with team members and collaborate to upskill and unblock each other
Frontend Skills
- HTML5
- CSS frameworks (LESS / SASS / Tailwind)
- ES6 / TypeScript
- Desktop app frameworks (Electron / Tauri)
- Component libraries (Bootstrap, Material UI, Lit)
- Responsive web layout (Flex layout, Grid layout)
- Package managers (yarn / npm / turbo)
- Build tools (Vite / Webpack / Parcel)
- Frameworks (React with Redux or MobX / Next.js)
- Design patterns
- Testing (Jest / Mocha / Jasmine / Cypress)
- Functional programming concepts
- Scripting (PowerShell, Bash, Python)
Backend Skills
- Node.js (Express / NestJS)
- Python / Rust
- REST APIs
- SOLID design principles
- Databases (PostgreSQL / MySQL / Redis / Cassandra / MongoDB)
- Caching (Redis)
- Container technology (Docker / Kubernetes)
- Cloud (Azure, AWS, OpenShift, Google Cloud)
- Version control (Git)
- GitOps
- Automation (Terraform, Ansible)
Cloud Skills
- Object storage
- VPC concepts
- Containerized deployment
- Serverless architecture
Other Skills
- Innovation and thought leadership
- UI/UX design skills
- Interest in learning new tools, languages, workflows, and philosophies to grow
- Communication
To know more about us- https://haystackanalytics.in/
Job Description:
Position - Cloud Developer
Experience - 5 - 8 years
Location - Mumbai & Pune
Responsibilities:
- Design, develop, and maintain robust software applications using widely adopted coding languages suited to the application design, with a strong focus on clean, maintainable, and efficient code.
- Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
- Develop RESTful APIs and backend services aligned with modern architectural practices (see the sketch after this list).
- Apply object-oriented programming principles and design patterns to build scalable systems.
- Build and maintain automated test frameworks and scripts to ensure high product quality.
- Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
- Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
- Use Git and related version control practices effectively in a team-based development environment.
- Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
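As an illustration of the RESTful-API bullet above, a minimal FastAPI sketch in Python (one of several languages this role could use). The resource names and in-memory store are invented; you would serve it with uvicorn.

```python
# Minimal REST endpoints with FastAPI; `Item` and the dict store are placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    quantity: int

_db: dict[int, Item] = {}  # stand-in for a real database

@app.post("/items/{item_id}")
def upsert_item(item_id: int, item: Item) -> dict:
    _db[item_id] = item
    return {"id": item_id, "name": item.name, "quantity": item.quantity}

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    return _db[item_id]

# Run with: uvicorn module_name:app --reload
```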
Skills:
- 5+ years of experience
- Experience with IaC modules
- Terraform coding experience, including authoring Terraform modules as part of a central platform team
- Azure/GCP cloud experience is a must
- Experience with C#/Python/Java coding is good to have
Dear Candidate,
Greetings from Wissen Technology.
We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.
About Wissen Technology:
- The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
- Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
- Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
- Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
- Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
- We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
- Wissen Technology has been certified as a Great Place to Work®.
- Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
- Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
- The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.
We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.
Job Description:
Please find below details:
Experience - 4+ Years
Location- Bangalore/Mumbai/Pune
Team Responsibilities
The successful candidate will be part of the S&C – SRE Team. Our team provides Tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams such as Client Services, Product, and Research, as well as Infrastructure/Technology and application development teams, to perform environment and application maintenance and support.
Key Responsibilities
• Provide Tier 2/3 product technical support.
• Build software to help operations and support activities (see the sketch after this list).
• Manage system/software configurations and troubleshoot environment issues.
• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.
• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.
• Collaborate with development and software testing teams throughout the release process.
• Analyze release and deployment processes to identify key areas for automation and optimization.
• Manage hardware and software resources and coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).
• Must spend a minimum of one week a month on call to help with off-hour emergencies and maintenance activities.
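A small, hypothetical example of the support tooling mentioned above: a Python script that surfaces recent ERROR lines from a log file during triage. The path and log format are assumptions.

```python
# Print the most recent ERROR lines from a log file (path is a placeholder).
import sys

def error_lines(path: str, limit: int = 20) -> list[str]:
    hits = []
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "ERROR" in line:
                hits.append(line.rstrip())
    return hits[-limit:]  # keep only the latest matches

if __name__ == "__main__":
    for line in error_lines(sys.argv[1] if len(sys.argv) > 1 else "app.log"):
        print(line)
```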
Required skills and experience
• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)
• Master’s degree a plus
• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.
• Excellent problem-solving/troubleshooting skills, fast learner
• Strong knowledge of Unix Administration.
• Strong scripting skills in Shell, Python, and Batch are a must.
• Strong Database experience – Oracle
• Strong knowledge of Software Development Life Cycle
• PowerShell is nice to have
• Software development skillsets in Java or Ruby.
• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have
Strong Full Stack Developer Profile
Mandatory (Experience 1) - Must have a minimum of 5+ years of experience in software development.
Mandatory (Experience 2) - Must have 4+ years of experience in backend development using Python.
Mandatory (Experience 3) - Must have good experience in frontend development using React JS, with knowledge of HTML, CSS, and JavaScript.
Mandatory (Experience 4) - Must have experience with at least one database: MySQL / PostgreSQL / Oracle / SQL Server.

One of the reputed clients in India.
Our client is looking to hire a Databricks Admin immediately.
This is PAN-India bulk hiring.
Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.
AWS experience is a must.
A notice period of 15-30 days is preferred.
Share profiles at hr at etpspl dot com.
Please refer/share our email with friends/colleagues who are looking for a job.
Full-Stack Developer
Experience: 5+ years required
Night shift: 8 PM–5 AM / 9 PM–6 AM
Only immediate joiners can apply
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
Wissen Technology is hiring for Data Engineer
About Wissen Technology: At Wissen Technology, we deliver niche, custom-built products that solve complex business challenges across industries worldwide. Founded in 2015, our core philosophy is built around a strong product engineering mindset—ensuring every solution is architected and delivered right the first time. Today, Wissen Technology has a global footprint with 2000+ employees across offices in the US, UK, UAE, India, and Australia. Our commitment to excellence translates into delivering 2X impact compared to traditional service providers. How do we achieve this? Through a combination of deep domain knowledge, cutting-edge technology expertise, and a relentless focus on quality. We don’t just meet expectations—we exceed them by ensuring faster time-to-market, reduced rework, and greater alignment with client objectives. We have a proven track record of building mission-critical systems across industries, including financial services, healthcare, retail, manufacturing, and more. Wissen stands apart through its unique delivery models. Our outcome-based projects ensure predictable costs and timelines, while our agile pods provide clients the flexibility to adapt to their evolving business needs. Wissen leverages its thought leadership and technology prowess to drive superior business outcomes. Our success is powered by top-tier talent. Our mission is clear: to be the partner of choice for building world-class custom products that deliver exceptional impact—the first time, every time.
Job Summary: Wissen Technology is hiring a Data Engineer with expertise in Python, Pandas, Airflow, and Azure Cloud Services. The ideal candidate will have strong communication skills and experience with Kubernetes.
Experience: 4-7 years
Notice Period: Immediate to 15 days
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python and Pandas (see the sketch after this list).
- Implement and manage workflows using Airflow.
- Utilize Azure Cloud Services for data storage and processing.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Optimize and scale data infrastructure to meet business needs.
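To make the pipeline bullet above concrete, here is a small pandas transformation of the kind an Airflow task would call; the schema and cleaning rules are invented for illustration.

```python
# One pipeline step: parse dates, drop bad rows, guard values, de-duplicate.
import pandas as pd

def clean_orders(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["order_date"] = pd.to_datetime(out["order_date"], errors="coerce")
    out = out.dropna(subset=["order_date", "customer_id"])
    out["amount"] = out["amount"].clip(lower=0)  # guard against bad negatives
    return out.drop_duplicates(subset=["order_id"])

sample = pd.DataFrame({
    "order_id": [1, 1, 2],
    "customer_id": ["a", "a", None],
    "order_date": ["2024-05-01", "2024-05-01", "bad"],
    "amount": [10.0, 10.0, -5.0],
})
print(clean_orders(sample))  # one clean row survives
```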
Qualifications and Required Skills:
- Proficiency in Python (Must Have).
- Strong experience with Pandas (Must Have).
- Expertise in Airflow (Must Have).
- Experience with Azure Cloud Services.
- Good communication skills.
Good to Have Skills:
- Experience with Pyspark.
- Knowledge of Kubernetes.
Wissen Sites:
- Website: http://www.wissen.com
- LinkedIn: https://www.linkedin.com/company/wissen-technology
- Wissen Leadership: https://www.wissen.com/company/leadership-team/
- Wissen Live: https://www.linkedin.com/company/wissen-technology/posts/feedView=All
- Wissen Thought Leadership: https://www.wissen.com/articles/
SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, and audit/compliance; implement PII detection and masking (see the masking sketch after this list).
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
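As a hedged illustration of the PII masking responsibility noted in the list, here is a rule-based Python sketch. The regexes are simplistic placeholders; the LLM-based PII classification this role mentions would sit alongside rules like these.

```python
# Rule-based PII masking for emails and 10-digit phone numbers (illustrative).
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{10}\b")

def mask_pii(text: str) -> str:
    text = EMAIL.sub("<EMAIL>", text)
    return PHONE.sub("<PHONE>", text)

print(mask_pii("Reach me at jane.doe@example.com or 9876543210."))
```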
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Terraform with 15+ years of practice; CI/CD (GitHub Actions, Jenkins) with 10+ years; config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
Experience: 3–7 Years
Locations: Pune / Bangalore / Mumbai
Notice Period: Immediate joiners only
Employment Type: Full-time
🛠️ Key Skills (Mandatory):
- Python: Strong coding skills for data manipulation and automation.
- PySpark: Experience with distributed data processing using Spark.
- SQL: Proficient in writing complex queries for data extraction and transformation.
- Azure Databricks: Hands-on experience with notebooks, Delta Lake, and MLflow (see the PySpark sketch below)
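A minimal PySpark sketch of the Databricks-style work above; the app, table, and column names are invented, and the Delta write is shown as a comment since it assumes a Databricks workspace.

```python
# Small PySpark aggregation of the kind run in an Azure Databricks notebook.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

df = spark.createDataFrame(
    [("2024-05-01", "web", 120.0), ("2024-05-01", "store", 80.0)],
    ["sale_date", "channel", "amount"],
)

daily = df.groupBy("sale_date").agg(F.sum("amount").alias("total_amount"))
daily.show()
# On Databricks this could be persisted as a Delta table:
# daily.write.format("delta").mode("overwrite").saveAsTable("daily_sales")
```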
Interested candidates, please share your resume with the details below:
Total Experience -
Relevant Experience in Python, PySpark, SQL, Azure Databricks -
Current CTC -
Expected CTC -
Notice period -
Current Location -
Desired Location -
🚀 We’re Hiring: Python Developer – Quant Strategies & Backtesting | Mumbai (Goregaon East)
Are you a skilled Python Developer passionate about financial markets and quantitative trading?
We’re looking for someone to join our growing Quant Research & Algo Trading team, where you’ll work on:
🔹 Developing & optimizing trading strategies in Python
🔹 Building backtesting frameworks across multiple asset classes (see the sketch after this list)
🔹 Processing and analyzing large market datasets
🔹 Collaborating with quant researchers & traders on real-world strategies
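To ground the backtesting bullet above, here is a toy vectorized backtest on synthetic prices; real frameworks add transaction costs, slippage, and position sizing on top of this shape.

```python
# Toy moving-average crossover backtest on a synthetic price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

fast, slow = prices.rolling(10).mean(), prices.rolling(50).mean()
position = (fast > slow).astype(int).shift(1).fillna(0)  # trade on the next bar
returns = prices.pct_change().fillna(0)
equity = (1 + position * returns).cumprod()

print(f"final equity multiple: {equity.iloc[-1]:.2f}")
```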
What we’re looking for:
✔️ 3+ years of experience in Python development (preferably in fintech/trading/quant domains)
✔️ Strong knowledge of Pandas, NumPy, SciPy, SQL
✔️ Experience in backtesting, data handling & performance optimization
✔️ Familiarity with financial markets is a big plus
📍 Location: Goregaon East, Mumbai
💼 Competitive package + exposure to cutting-edge quant strategies
Wissen Technology is hiring for Data Engineer
Job Summary: Wissen Technology is hiring a Data Engineer with a strong background in Python, data engineering, and workflow optimization. The ideal candidate will have experience with Delta Tables and Parquet, and be proficient in Pandas and PySpark.
Experience: 7+ years
Location: Pune, Mumbai, Bangalore
Mode of Work: Hybrid
Key Responsibilities:
- Develop and maintain data pipelines using Python (Pandas, PySpark).
- Optimize data workflows and ensure efficient data processing.
- Work with Delta Tables and Parquet for data storage and management (see the Parquet sketch after this list).
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement best practices for data engineering and workflow optimization.
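A quick, hedged illustration of the Parquet bullet above: a pandas round-trip, which needs pyarrow (or fastparquet) installed. Delta tables layer ACID transactions and versioning over Parquet files like these.

```python
# Parquet round-trip with pandas; the file name and schema are placeholders.
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "value": ["a", "b"]})
df.to_parquet("events.parquet", index=False)   # columnar, compressed on disk
print(pd.read_parquet("events.parquet"))
```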
Qualifications and Required Skills:
- Proficiency in Python, specifically with Pandas and PySpark.
- Strong experience in data engineering and workflow optimization.
- Knowledge of Delta Tables and Parquet.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication skills.
Good to Have Skills:
- Experience with Databricks.
- Knowledge of Apache Spark, DBT, and Airflow.
- Advanced Pandas optimizations.
- Familiarity with PyTest/DBT testing frameworks.
Wissen | Driving Digital Transformation
A technology consultancy that drives digital innovation by connecting strategy and execution, helping global clients to strengthen their core technology.
Job Title: Data Engineering Support Engineer / Manager
Experience range: 8+ Years
Location: Mumbai
Knowledge, Skills and Abilities
- Python, SQL
- Familiarity with data engineering
- Experience with AWS data and analytics services or similar cloud vendor services
- Strong problem solving and communication skills
- Ability to organise and prioritise work effectively
Key Responsibilities
- Incident and user management for data and analytics platform
- Development and maintenance of a Data Quality framework, including anomaly detection (see the sketch after this list)
- Implementation of Python & SQL hotfixes, and working with data engineers on more complex issues
- Diagnostic tools implementation and automation of operational processes
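As a sketch of the data-quality anomaly detection noted above, here is a simple trailing-window rule in Python; the window, threshold, and sample counts are illustrative only.

```python
# Flag a daily row count that deviates sharply from its trailing-window mean.
import pandas as pd

def row_count_anomalies(counts: pd.Series, window: int = 7, z: float = 3.0) -> pd.Series:
    mu = counts.shift(1).rolling(window).mean()  # history only, excludes today
    sd = counts.shift(1).rolling(window).std()
    return (counts - mu).abs() > z * sd

daily_counts = pd.Series([1000, 1020, 980, 1010, 995, 1005, 990, 30])
print(row_count_anomalies(daily_counts))  # the final drop to 30 is flagged
```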
Key Relationships
- Work closely with data scientists, data engineers, and platform engineers in a highly commercial environment
- Support research analysts and traders with issue resolution