50+ Python Jobs in India
Required Skills: Advanced AWS Infrastructure Expertise, CI/CD Pipeline Automation, Monitoring, Observability & Incident Management, Security, Networking & Risk Management, Infrastructure as Code & Scripting
Criteria:
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies (B2C scale preferred)
- Strong hands-on AWS expertise across core and advanced services (EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, VPC, IAM, ELB/ALB, Route53)
- Proven experience designing high-availability, fault-tolerant cloud architectures for large-scale traffic
- Strong experience building & maintaining CI/CD pipelines (Jenkins mandatory; GitHub Actions/GitLab CI a plus)
- Prior experience running production-grade microservices deployments and automated rollout strategies (Blue/Green, Canary)
- Hands-on experience with monitoring & observability tools (Grafana, Prometheus, ELK, CloudWatch, New Relic, etc.)
- Solid hands-on experience with MongoDB in production, including performance tuning, indexing & replication
- Strong scripting skills (Bash, Shell, Python) for automation
- Hands-on experience with IaC (Terraform, CloudFormation, or Ansible)
- Deep understanding of networking fundamentals (VPC, subnets, routing, NAT, security groups)
- Strong experience in incident management, root cause analysis & production firefighting
Description
Role Overview
The company is seeking an experienced Senior DevOps Engineer to design, build, and optimize cloud infrastructure on AWS, automate CI/CD pipelines, implement monitoring and security frameworks, and proactively identify scalability challenges. This role requires someone with hands-on experience running infrastructure at B2C product scale, ideally in media/OTT or other high-traffic applications.
Key Responsibilities
1. Cloud Infrastructure — AWS (Primary Focus)
- Architect, deploy, and manage scalable infrastructure using AWS services such as EC2, ECS/EKS, Lambda, S3, CloudFront, RDS, ELB/ALB, VPC, IAM, Route53, etc.
- Optimize cloud cost, resource utilization, and performance across environments.
- Design high-availability, fault-tolerant systems for streaming workloads.
2. CI/CD Automation
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI.
- Automate deployments for microservices, mobile apps, and backend APIs.
- Implement blue/green and canary deployments for seamless production rollouts.
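To make the rollout strategy concrete, here is a minimal canary-gating sketch, assuming an ALB listener forwarding to weighted blue/green target groups; the ARNs and the health check are placeholders, not part of the original posting:

```python
"""Canary rollout sketch: shift ALB traffic to the new (green) target group
in steps, rolling back if health degrades. ARNs are hypothetical."""
import time
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:region:acct:listener/..."       # hypothetical
BLUE_TG = "arn:aws:elasticloadbalancing:region:acct:targetgroup/blue/..."    # hypothetical
GREEN_TG = "arn:aws:elasticloadbalancing:region:acct:targetgroup/green/..."  # hypothetical

def set_weights(green_pct: int) -> None:
    """Route green_pct% of traffic to green, the rest to blue."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {"TargetGroups": [
                {"TargetGroupArn": BLUE_TG, "Weight": 100 - green_pct},
                {"TargetGroupArn": GREEN_TG, "Weight": green_pct},
            ]},
        }],
    )

def healthy() -> bool:
    """Stub: wire this to your error-rate/latency metrics (e.g. CloudWatch)."""
    return True

for pct in (5, 25, 50, 100):   # canary steps
    set_weights(pct)
    time.sleep(300)            # bake time between steps
    if not healthy():
        set_weights(0)         # roll all traffic back to blue
        raise SystemExit("Canary failed; rolled back to blue.")
```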
3. Observability & Monitoring
- Implement logging, metrics, and alerting using tools like Grafana, Prometheus, ELK, CloudWatch, New Relic, etc. (see the sketch after this list).
- Perform proactive performance analysis to minimize downtime and bottlenecks.
- Set up dashboards for real-time visibility into system health and user traffic spikes.
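To make the metrics side concrete, a minimal sketch using the Prometheus Python client; the metric names, port, and simulated work are illustrative:

```python
"""Instrumentation sketch: expose request counts and latencies on /metrics
for Prometheus to scrape and Grafana to chart."""
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Total requests", ["route", "status"])
LATENCY = Histogram("http_request_seconds", "Request latency", ["route"])

def handle_request(route: str) -> None:
    with LATENCY.labels(route=route).time():   # records duration on exit
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(route=route, status="200").inc()

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        handle_request("/home")
```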
4. Security, Compliance & Risk Highlighting
- Conduct frequent risk assessments and identify vulnerabilities in:
  - Cloud architecture
  - Access policies (IAM)
  - Secrets & key management
  - Data flows & network exposure
- Implement security best practices including VPC isolation, WAF rules, firewall policies, and SSL/TLS management.
5. Scalability & Reliability Engineering
- Analyze traffic patterns for OTT-specific load variations (weekends, new releases, peak hours).
- Identify scalability gaps and propose solutions across:
  - Microservices
  - Caching layers
  - CDN distribution (CloudFront)
  - Database workloads
- Perform capacity planning and load testing to ensure readiness for 10x traffic growth.
6. Database & Storage Support
- Administer and optimize MongoDB for high-read/low-latency use cases.
- Design backup, recovery, and data replication strategies.
- Work closely with backend teams to tune query performance and indexing.
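As a concrete illustration of the indexing and tuning work above, a minimal pymongo sketch; the collection, fields, and connection string are hypothetical:

```python
"""MongoDB tuning sketch: add a compound index for a hot query and use
explain() to confirm the planner picks an index scan (IXSCAN) rather than
a full collection scan (COLLSCAN)."""
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # replace with replica-set URI
events = client["app"]["events"]

# Equality field first, then the sort field, matching the query below.
events.create_index([("user_id", ASCENDING), ("created_at", DESCENDING)])

plan = events.find({"user_id": 42}).sort("created_at", -1).explain()
print(plan["queryPlanner"]["winningPlan"])
```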
7. Automation & Infrastructure as Code
- Implement IaC using Terraform, CloudFormation, or Ansible.
- Automate repetitive infrastructure tasks to ensure consistency across environments.
Required Skills & Experience
Technical Must-Haves
- 5+ years of DevOps/SRE experience in cloud-native, product-based companies.
- Strong hands-on experience with AWS (core and advanced services).
- Expertise in Jenkins CI/CD pipelines.
- Solid background working with MongoDB in production environments.
- Good understanding of networking: VPCs, subnets, security groups, NAT, routing.
- Strong scripting experience (Bash, Python, Shell).
- Experience handling risk identification, root cause analysis, and incident management.
Nice to Have
- Experience with OTT, video streaming, media, or any content-heavy product environments.
- Familiarity with containers (Docker), orchestration (Kubernetes/EKS), and service mesh.
- Understanding of CDN, caching, and streaming pipelines.
Personality & Mindset
- Strong sense of ownership and urgency—DevOps is mission critical at OTT scale.
- Proactive problem solver with ability to think about long-term scalability.
- Comfortable working with cross-functional engineering teams.
Why Join the Company?
- Build and operate infrastructure powering millions of monthly users.
- Opportunity to shape DevOps culture and cloud architecture from the ground up.
- High-impact role in a fast-scaling Indian OTT product.
You will architect, implement, and operationalize the core systems that power our platform. Specifically, you will:
- Design and implement a correctness-critical ledger for tracking value movements, state transitions, and automated operations.
- Build and own event-driven pipelines that process transactions, orchestrate workflows, and ensure reliable execution across internal and external systems.
- Develop orchestration services powering automated, agent-driven decisioning and multi-step execution flows.
- Integrate external providers for identity, verification, and digital value movement with an emphasis on security and predictable behavior.
- Define schemas, invariants, error models, and execution guarantees to uphold system integrity.
- Establish operational tooling including monitoring dashboards, alerting, logs, and distributed tracing.
- Lead engineering standards and processes, including code review practices, architectural rigor, and release discipline.
- Mentor engineers and support the hiring process as we expand the team.
- Ensure delivery by breaking down complex requirements into clear milestones and driving them to completion.
BASIC QUALIFICATIONS
- 7+ years of backend or distributed systems engineering experience.
- Extensive experience designing correctness-critical systems (ledgers, orchestration engines, transactional backends, etc.).
- Expert-level proficiency in Go, Rust, or similar systems languages.
- Deep experience with schema design, data modeling, consistency models, and fault-tolerant systems.
- Proven experience integrating multiple high-reliability external systems.
- Demonstrated ability to lead technically, mentor engineers, and influence engineering practices.
- Ability to ship complex systems end-to-end with minimal oversight.
PREFERRED QUALIFICATIONS
- Experience with verification, trust-minimized execution, or blockchain-integrated systems.
- Experience leading or setting foundational processes for an engineering team.
- Strong intuition around agentic or automated decisioning flows.
- Prior work building consumer-scale systems or financial-grade infrastructure.
WHAT WE OFFER
- Competitive compensation.
- High ownership and the opportunity to shape product direction.
- Direct impact on foundational cryptographic and blockchain infrastructure.
- A collaborative team that values clarity, autonomy, and velocity.
Note: This role can be remote; however, Bengaluru or Mumbai candidates will be prioritized.
Job Description: Applied Scientist
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
About the Role
We are seeking a highly motivated Applied Scientist to join our Data Science team. This individual will play a key role in enhancing and scaling our existing forecasting and pricing systems and developing new capabilities that support our intelligent decision-making platform.
We are looking for team members who:
● Are deeply curious and passionate about applying machine learning to real-world problems.
● Demonstrate strong ownership and the ability to work independently.
● Excel in both technical execution and collaborative teamwork.
● Have a track record of shipping products in complex environments.
What You’ll Do
● Build, train, and deploy machine learning and operations research models for forecasting, pricing, and inventory optimization.
● Work with large-scale, noisy, and temporally complex datasets.
● Collaborate cross-functionally with engineering and product teams to move models from research to production.
● Generate interpretable and trusted outputs to support adoption of AI-driven rate recommendations.
● Contribute to the development of an AI-first platform that redefines hospitality revenue management.
Required Qualifications
● Master’s degree or PhD in Operations Research, Industrial/Systems Engineering, Computer Science, or Applied Mathematics.
● 3-5 years of hands-on experience in a product-centric company, ideally with full model lifecycle exposure.
● Demonstrated ability to apply machine learning and optimization techniques to solve real-world business problems.
● Proficient in Python and machine learning libraries such as PyTorch, statsmodels, LightGBM, scikit-learn, and XGBoost.
● Strong knowledge of Operations Research models (stochastic optimization, dynamic programming) and forecasting models (time-series and ML-based).
● Understanding of machine learning and deep learning foundations.
● Ability to translate research into commercial solutions.
● Strong written and verbal communication skills to explain complex technical concepts clearly to cross-functional teams.
● Ability to work independently and manage projects end-to-end.
Preferred Experience
● Experience in revenue management, pricing systems, or demand forecasting, particularly within the hotel and hospitality domain.
● Applied knowledge of reinforcement learning techniques (e.g., bandits, Q-learning, model-based control).
● Familiarity with causal inference methods (e.g., DAGs, treatment effect estimation).
● Proven experience in collaborative product development environments, working closely with engineering and product teams.
Why LodgIQ?
● Join a fast-growing, mission-driven company transforming the future of hospitality.
● Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
● Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
● Competitive salary and performance bonuses.
● For more information, visit https://www.lodgiq.com
🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data (see the sketch after this list).
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
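To illustrate the connector pattern named above, a minimal paginated REST client with retries; the CRM URL, token placeholder, and response fields are hypothetical:

```python
"""Third-party connector sketch: session with retry/backoff, bearer auth,
and page-by-page iteration over a hypothetical CRM contacts endpoint."""
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session() -> requests.Session:
    session = requests.Session()
    retry = Retry(total=3, backoff_factor=0.5,
                  status_forcelist=[429, 500, 502, 503])
    session.mount("https://", HTTPAdapter(max_retries=retry))
    session.headers["Authorization"] = "Bearer <API_TOKEN>"  # placeholder
    return session

def fetch_contacts(base_url: str = "https://crm.example.com/api/v1"):
    """Yield contact records until the API stops reporting a next page."""
    session, page = make_session(), 1
    while True:
        resp = session.get(f"{base_url}/contacts",
                           params={"page": page}, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        yield from data["results"]       # hypothetical response shape
        if not data.get("next_page"):
            break
        page += 1
```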
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like React.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
🚀 We’re Hiring: Python Developer – Pune 🚀
Are you a skilled Python Developer looking to work on high-performance, scalable backend systems?
If you’re passionate about building robust applications and working with modern technologies — this opportunity is for you! 💼✨
📍 Location: Pune
🏢 Role: Python Backend Developer
🕒 Type: Full-Time | Permanent
🔍 What We’re Looking For:
We need a strong backend professional with experience in:
🐍 Python (Advanced)
⚡ FastAPI
🛢️ MongoDB & Postgres
📦 Microservices Architecture
📨 Message Brokers (RabbitMQ / Kafka)
🌩️ Google Cloud Platform (GCP)
🧪 Unit Testing & TDD
🔐 Backend Security Standards
🔧 Git & Project Collaboration
🛠️ Key Responsibilities:
✔ Build and optimize Python backend services using FastAPI (see the sketch after this list)
✔ Design scalable microservices
✔ Manage and tune MongoDB & Postgres
✔ Implement message brokers for async workflows
✔ Drive code reviews and uphold coding standards
✔ Mentor team members
✔ Manage cloud deployments on GCP
✔ Ensure top-notch performance, scalability & security
✔ Write robust unit tests and follow TDD
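💡 For a flavor of the stack, here is a minimal FastAPI sketch with typed models and a health probe; the service and field names are illustrative only:

```python
"""FastAPI service sketch: pydantic request/response models, an async
endpoint, and a health probe. Run with: uvicorn main:app --reload"""
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class OrderIn(BaseModel):
    sku: str
    quantity: int

class OrderOut(BaseModel):
    order_id: int
    sku: str
    quantity: int

@app.get("/health")
async def health() -> dict:
    return {"status": "ok"}

@app.post("/orders", response_model=OrderOut)
async def create_order(order: OrderIn) -> OrderOut:
    # A real service would persist to MongoDB/Postgres and publish an event
    # to RabbitMQ/Kafka for async downstream workflows.
    return OrderOut(order_id=1, sku=order.sku, quantity=order.quantity)
```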
🎓 Qualifications:
➡ 2–4 years of backend development experience
➡ Strong hands-on Python + FastAPI
➡ Experience with microservices, DB management & cloud tech
➡ Knowledge of Agile/Scrum
➡ Bonus: Docker, Kubernetes, CI/CD
Responsibilities
Develop and maintain web and backend components using Python, Node.js, and Zoho tools
Design and implement custom workflows and automations in Zoho
Perform code reviews to maintain quality standards and best practices
Debug and resolve technical issues promptly
Collaborate with teams to gather and analyze requirements for effective solutions
Write clean, maintainable, and well-documented code
Manage and optimize databases to support changing business needs
Contribute individually while mentoring and supporting team members
Adapt quickly to a fast-paced environment and meet expectations within the first month
Leadership Opportunities
Lead and mentor junior developers in the team
Drive projects independently while collaborating with the broader team
Act as a technical liaison between the team and stakeholders to deliver effective solutions
Selection Process
1. HR Screening: Review of qualifications and experience
2. Online Technical Assessment: Test coding and problem-solving skills
3. Technical Interview: Assess expertise in web development, Python, Node.js, APIs, and Zoho
4. Leadership Evaluation: Evaluate team collaboration and leadership abilities
5. Management Interview: Discuss cultural fit and career opportunities
6. Offer Discussion: Finalize compensation and role specifics
Experience Required
2-6 years of relevant experience as a Software Developer
Proven ability to work as a self-starter and contribute individually
Strong technical and interpersonal skills to support team members effectively
Role: Technical Co-Founder
Experience: 3+ years (Mandatory)
Compensation: Equity Only (No Salary)
Requirements:
Strong full-stack development skills
Experience building web applications from scratch
Able to manage the complete tech stack independently
Startup mindset & ownership attitude
The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.
Key Responsibilities
- Act as a passionate representative of the Albert product and brand.
- Collaborate with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
- Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
- Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
- Design and deliver the mission-critical stack, focusing on security, resiliency, scale, and performance.
- Take ownership of end-to-end performance and operability.
- Apply strong knowledge of automation and orchestration principles.
- Serve as the ultimate escalation point for complex or critical issues not yet documented as Standard Operating Procedures (SOPs).
- Troubleshoot and define mitigations using a deep understanding of service topology and dependencies.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- 2+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
- Strong experience in Infrastructure as Code (IAC), preferably using Terraform.
- Proficiency in Python or Node.js, with experience designing RESTful APIs and working in microservices architecture.
- Solid expertise in AWS cloud infrastructure and platform technologies including APIs, distributed systems, and microservices.
- Hands-on experience with observability stacks, including centralized log management, metrics, and tracing.
- Familiarity with CI/CD tools (e.g., CircleCI) and performance testing tools like K6.
- Passion for bringing automation and standardization to engineering operations.
- Ability to build high-performance APIs with low latency (<200ms).
- Ability to work in a fast-paced environment, learning from peers and leaders.
- Demonstrated ability to mentor other engineers and contribute to team growth, including participation in recruiting activities.
Good to Have
- Experience with Kubernetes and container orchestration.
- Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, or Datadog.
- Experience building Internal Developer Platforms (IDPs) or reusable frameworks for engineering teams.
- Exposure to ML infrastructure or data engineering workflows.
- Experience working in compliance-heavy environments (e.g., SOC2, HIPAA).
Job Description: Python-Azure AI Developer
Experience: 5+ years
Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal
Mandatory Skills:
- Python: Expert-level proficiency with FastAPI/Flask
- Azure Services: Hands-on experience integrating Azure cloud services
- Databases: PostgreSQL, Redis
- AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding
Good to Have:
- Workflow automation tools (n8n or similar)
- Experience with LangChain, AutoGen, or other AI agent frameworks
- Azure OpenAI Service knowledge
Key Responsibilities:
- Develop AI-powered applications using Python and Azure
- Build RESTful APIs with FastAPI/Flask
- Integrate Azure services for AI/ML workloads
- Implement agentic AI solutions
- Database optimization and management
- Workflow automation implementation
Description
Our Engineering team is changing gears to meet the growing needs of our customers - from a handful of robots to hundreds of robots; from a small team to multiple squads. The team works closely with some of the premier enterprise customers in Japan to build state-of-the-art robotics solutions by leveraging rapyuta.io, our cloud robotics platform, and the surrounding ecosystem. The team’s mission is to pioneer scalable, collaborative, and flexible robotics solutions.
This role includes testing with real robots in a physical environment, testing virtual robots in a simulated environment, automating API tests, and automating systems level testing.
The ideal candidate is interested in working in a hands-on role with state-of-the-art robots.
In this role, the QA Engineer will be responsible for:
- Assisting in reviewing and analyzing the system specifications to define test cases
- Creating and maintaining test plans
- Executing test plans in a simulated environment and on hardware
- Defect tracking and generating bug and test reports
- Participating in implementing and improving QA processes
- Implementation of test automation for robotics systems
Requirements
Minimum qualifications
- 2.5+ years of technical experience in software Quality Assurance as an Individual Contributor
- Bachelor's degree in engineering, or combination of equivalent education and experience
- Experience writing, maintaining and executing software test cases, both manual and automated
- Experience writing, maintaining and executing software test cases that incorporate hardware interactions, including manual and automated tests to validate the integration of software with robotics systems
- Demonstrated experience with Python testing frameworks
- Expertise in Linux ecosystem
- Advanced knowledge of testing approaches: test levels; BDD/TDD; Blackbox/Whitebox approaches; regression testing
- Knowledge and practical experience of Agile principles and methodologies such as SCRUM
- HTTP API testing experience
Preferred qualifications
- Knowledge of HWIL (hardware-in-the-loop) testing, simulations, ROS
- Basic understanding of embedded systems and electronics
- Experience with developing/QA for robotics or hardware products will be a plus.
- Experience with testing frameworks such as TestNG, JUnit, Pytest, Playwright, Selenium, or similar tool
- ISTQB certification
- Japanese language proficiency
- Proficiency with version control repositories such as git
- Understanding of CI/CD systems such as GitHub Actions (GHA), Jenkins, or CircleCI
Benefits
- Competitive salary
- International working environment
- Bleeding edge technology
- Working with exceptionally talented engineers
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use-case development, and product creation, along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes (a minimal sketch follows this list).
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
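To make the RAG pattern concrete, a minimal framework-free sketch of retrieve-then-ground; the embed() function is a stand-in for a real embedding model, and the corpus and prompt are illustrative:

```python
"""RAG sketch: embed syllabus chunks, retrieve the best matches for a
question, and build a context-grounded prompt for the LLM."""
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model: a unit vector keyed on the text."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

corpus = [
    "Ohm's law: V = IR relates voltage, current, and resistance.",
    "Newton's second law: F = ma.",
    "A capacitor stores energy in an electric field.",
]
doc_vecs = np.stack([embed(c) for c in corpus])

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    scores = doc_vecs @ q  # cosine similarity, since all vectors are unit norm
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

question = "Why does current drop when resistance increases?"
context = "\n".join(retrieve(question))
prompt = (f"Answer using ONLY the context below.\n\n"
          f"Context:\n{context}\n\nQ: {question}")
print(prompt)  # this grounded prompt is what gets sent to the LLM
```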
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.

Global digital transformation solutions provider.
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Skill Examples:
- Proficiency in SQL Python or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
Additional Comments:
# of Resources: 22 | Role(s): Technical | Location(s): India | Planned Start Date: 1/1/2026 | Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
Design, build, and maintain scalable data pipelines using Databricks and PySpark.
Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
Ensure data quality, performance, and reliability across data workflows.
Participate in code reviews, data architecture discussions, and performance optimization initiatives.
Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
Key Skills:
Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines (see the sketch after this list).
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
Excellent problem-solving, communication, and collaboration skills.
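As a small illustration of the PySpark and SQL window-function skills above, a sketch that keeps each customer's latest order; the data is illustrative:

```python
"""Window-function sketch: rank each customer's orders by recency, the
Spark analogue of SQL ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)."""
from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("window-demo").getOrCreate()

orders = spark.createDataFrame(
    [("c1", "2025-01-03", 120.0), ("c1", "2025-02-10", 80.0),
     ("c2", "2025-01-20", 45.0)],
    ["customer_id", "order_date", "amount"],
)

w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())

latest = (orders
          .withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)  # keep each customer's most recent order
          .drop("rn"))
latest.show()
```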
Skills: Databricks, PySpark & Python, SQL, AWS Services
Must-Haves
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
******
Notice period - Immediate to 15 days
Location: Bangalore
We are seeking a hands-on eCommerce Analytics & Insights Lead to help establish and scale our newly launched eCommerce business. The ideal candidate is highly data-savvy, understands eCommerce deeply, and can lead KPI definition, performance tracking, insights generation, and data-driven decision-making.
You will work closely with cross-functional teams—Buying, Marketing, Operations, and Technology—to build dashboards, uncover growth opportunities, and guide the evolution of our online channel.
Key Responsibilities
Define & Monitor eCommerce KPIs
- Set up and track KPIs across the customer journey: traffic, conversion, retention, AOV/basket size, repeat rate, etc.
- Build KPI frameworks aligned with business goals.
Data Tracking & Infrastructure
- Partner with marketing, merchandising, operations, and tech teams to define data tracking requirements.
- Collaborate with eCommerce and data engineering teams to ensure data quality, completeness, and availability.
Dashboards & Reporting
- Build dashboards and automated reports to track:
- Overall site performance
- Category & product performance
- Marketing ROI and acquisition effectiveness
Insights & Performance Diagnosis
Identify trends, opportunities, and root causes of underperformance in areas such as:
- Product availability & stock health
- Pricing & promotions
- Checkout funnel drop-offs
- Customer retention & cohort behavior
- Channel acquisition performance
Conduct:
- Cohort analysis (see the sketch after this list)
- Funnel analytics
- Customer segmentation
- Basket analysis
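For the cohort analysis above, a minimal pandas sketch of monthly retention; the column names and toy data are illustrative, not from the posting:

```python
"""Cohort retention sketch: bucket customers by first-purchase month and
measure what share are active in each subsequent month."""
import pandas as pd

orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2025-01-05", "2025-02-11", "2025-01-20", "2025-03-02", "2025-02-14"]),
})

orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = orders.groupby("customer_id")["order_month"].transform("min")
orders["months_since"] = (orders["order_month"] - orders["cohort"]).apply(lambda d: d.n)

# Unique active customers per (cohort, month offset), normalized by cohort size.
counts = (orders.groupby(["cohort", "months_since"])["customer_id"]
          .nunique().unstack(fill_value=0))
retention = counts.div(counts[0], axis=0)
print(retention)  # rows: cohorts; columns: months since first purchase
```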
Data-Driven Growth Initiatives
- Propose and evaluate experiments, optimization ideas, and quick wins.
- Help business teams interpret KPIs and make informed decisions.
Required Skills & Experience
- 2–5 years of experience in eCommerce analytics (grocery retail experience preferred).
- Strong understanding of eCommerce metrics and analytics frameworks (Traffic → Conversion → Repeat → LTV).
- Proficiency with tools such as:
- Google Analytics / GA4
- Excel
- SQL
- Power BI or Tableau
- Experience working with:
- Digital marketing data
- CRM and customer data
- Product/category performance data
- Ability to convert business questions into analytical tasks and produce clear, actionable insights.
- Familiarity with:
- Customer journey mapping
- Funnel analysis
- Basket and behavioral analysis
- Comfortable working in fast-paced, ambiguous, and build-from-scratch environments.
- Strong communication and stakeholder management skills.
- Strong technical capability in at least one programming language: SQL or PySpark.
Good to Have
- Experience with eCommerce platforms (Shopify, Magento, Salesforce Commerce, etc.).
- Exposure to A/B testing, recommendation engines, or personalization analytics.
- Knowledge of Python/R for deeper analytics (optional).
- Experience with tracking setup (GTM, event tagging, pixel/event instrumentation).
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture.
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a "let's figure this out" attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why This Company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are looking for a motivated QA Engineer with 0–2 years of experience to join our team at Quanteon. The role is evolving from traditional manual testing to modern, AI-driven and automation-focused QA practices. The ideal candidate should be open to learning development concepts, working with new AI tools, and contributing to intelligent test automation.
Key Responsibilities
- Perform functional, regression, and integration testing.
- Develop and execute test cases, test plans, and test scripts.
- Work closely with developers to understand requirements and identify defects.
- Learn and implement automation using AI-based test automation tools.
- Assist in building automated test suites using scripting or programming fundamentals.
- Analyze test results, document defects, and track issues to closure.
- Contribute to improving QA processes and adopting modern QA methodologies.
Required Skills
- Strong understanding of software testing concepts (STLC, SDLC, test design techniques).
- Basic knowledge of programming concepts (Java, Python, or JavaScript preferred).
- Understanding of API testing (Postman, Swagger is a plus).
- Familiarity with automation concepts (Selenium, Playwright, or similar – optional but preferred).
- Interest/experience in AI-powered testing tools (e.g., TestGPT, Mabl, Katalon AI, etc.).
- Good analytical and problem-solving skills.
- Strong communication and documentation abilities.
Preferred Skills (Good to Have)
- Knowledge of version control (Git/GitHub).
- Basic understanding of databases and SQL queries.
- Exposure to CI/CD pipelines is an added advantage.
- Experience with any bug tracking tools (JIRA, Azure DevOps, etc.).
Who Should Apply?
- Freshers with strong testing fundamentals and willingness to learn automation & AI testing.
- QA professionals with up to 2 years of experience looking to enhance their skills in AI-driven QA.
- Candidates eager to grow into full-stack QA roles (Manual + Automation + AI tools).
Educational Qualifications
- B.Tech / B.E in IT, CSE, AI/ML, ECE
- M.Tech / M.E in IT, CSE, AI/ML, ECE
- Strong academic foundation in programming, software engineering, or testing concepts is preferred
- Certifications in Software Testing, Automation, or AI tools (optional but an added advantage)
Artificial Intelligence Research Intern
We are looking for a passionate and skilled AI Intern to join our dynamic team for a 6-month full-time internship. This is an excellent opportunity to work on cutting-edge technologies in Artificial Intelligence, Machine Learning, Deep Learning, and Natural Language Processing (NLP), contributing to real-world projects that create a tangible impact.
Key Responsibilities:
• Research, design, develop, and implement AI and Deep Learning algorithms.
• Work on NLP systems and models for tasks such as text classification, sentiment analysis, and data extraction.
• Evaluate and optimize machine learning and deep learning models.
• Collect, process, and analyze large-scale datasets.
• Use advanced techniques for text representation and classification.
• Write clean, efficient, and testable code for production-ready applications.
• Perform web scraping and data extraction using Python (requests, BeautifulSoup, Selenium, APIs, etc.).
• Collaborate with cross-functional teams and clearly communicate technical concepts to both technical and non-technical audiences.
Required Skills and Experience:
• Theoretical and practical knowledge of AI, ML, and DL concepts.
• Good understanding of Python and libraries such as TensorFlow, PyTorch, Keras, scikit-learn, NumPy, Pandas, SciPy, and Matplotlib, plus NLP tools like NLTK and spaCy.
• Strong understanding of Neural Network Architectures (CNNs, RNNs, LSTMs).
• Familiarity with data structures, data modeling, and software architecture.
• Understanding of text representation techniques (n-grams, BoW, TF-IDF, etc.; see the sketch after this list).
• Comfortable working in Linux/UNIX environments.
• Basic knowledge of HTML, JavaScript, HTTP, and Networking.
• Strong communication skills and a collaborative mindset.
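As a small illustration of the text-representation item above, a TF-IDF plus linear-classifier sketch in scikit-learn; the tiny labeled set is illustrative:

```python
"""Sentiment classification sketch: word n-gram TF-IDF features feeding a
logistic regression, the classic baseline for the tasks listed above."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works perfectly", "terrible, broke in a day",
         "absolutely love it", "waste of money"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams
    LogisticRegression(),
)
clf.fit(texts, labels)
print(clf.predict(["love this, money well spent"]))
```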
Job Type: Full-Time Internship
Location: In-Office (Bhayander)
To design, build, and optimize scalable data infrastructure and pipelines that enable efficient data collection, transformation, and analysis across the organization. The Senior Data Engineer will play a key role in driving data architecture decisions, ensuring data quality and availability, and empowering analytics, product, and engineering teams with reliable, well-structured data to support business growth and strategic decision-making.
Responsibilities:
• Develop and maintain SQL and NoSQL databases, ensuring high performance, scalability, and reliability.
• Collaborate with the API team and Data Science team to build robust data pipelines and automations.
• Work closely with stakeholders to understand database requirements and provide technical solutions.
• Optimize database queries and tune performance to enhance overall system efficiency.
• Implement and maintain data security measures, including access controls and encryption.
• Monitor database systems and troubleshoot issues proactively to ensure uninterrupted service.
• Develop and enforce data quality standards and processes to maintain data integrity.
• Create and maintain documentation for database architecture, processes, and procedures.
• Stay updated with the latest database technologies and best practices to drive continuous improvement.
• Expertise in SQL queries and stored procedures, with the ability to optimize and fine-tune complex queries for performance and efficiency.
• Experience with monitoring and visualization tools such as Grafana to monitor database performance and health.
Requirements:
• 4+ years of experience in data engineering, with a focus on large-scale data systems.
• Proven experience designing data models and access patterns across SQL and NoSQL ecosystems.
• Hands-on experience with technologies like PostgreSQL, DynamoDB, S3, GraphQL, or vector databases.
• Proficient in SQL stored procedures with extensive expertise in MySQL schema design, query optimization, and resolvers, along with hands-on experience in building and maintaining data warehouses.
• Strong programming skills in Python or JavaScript, with the ability to write efficient, maintainable code.
• Familiarity with distributed systems, data partitioning, and consistency models.
• Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and debugging production bottlenecks.
• Deep understanding of cloud infrastructure (preferably AWS), including networking, IAM, and cost optimization.
• Prior experience building multi-tenant systems with strict performance and isolation guarantees.
• Excellent communication and collaboration skills to influence cross-functional technical decisions.
The Senior Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Design and development of CFRA’s integrated content publishing platform using a proprietary 3rd party editorial and publishing platform for integrated digital publishing.
- Designing and Developing APIs: Design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions (see the sketch after this list).
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Drive troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
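To ground the API responsibilities above, a minimal Python Lambda handler in the API Gateway proxy format; the request fields and stubbed data layer are illustrative, not CFRA's actual framework:

```python
"""Lambda-behind-API-Gateway sketch: validate input, do the work, return a
proxy-integration response. Real code would call RDS/DynamoDB and emit
CloudWatch metrics."""
import json

def lambda_handler(event: dict, context) -> dict:
    try:
        body = json.loads(event.get("body") or "{}")
        report_id = body["report_id"]  # required field; missing -> 400
    except (json.JSONDecodeError, KeyError):
        return {"statusCode": 400,
                "body": json.dumps({"error": "report_id is required"})}

    result = {"report_id": report_id, "status": "generated"}  # stubbed work
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```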
Desired Skills and Experience
- Development: 5+ years of extensive experience in designing, developing, and deploying using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: Proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python, Node.js, or others, as well as experience with serverless frameworks on AWS.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure stability and performance.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.
The Lead Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Lead the design and development of CFRA’s integrated content publishing platform using a proprietary 3rd party editorial and publishing platform for integrated digital publishing.
- Designing and Developing APIs: Lead the design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
- Architecture Planning: Collaborate with architects and stakeholders to define architecture, including API gateway, microservices, and serverless components, ensuring alignment with business goals and AWS best practices.
- Technical Leadership: Provide technical guidance and leadership to the development team, ensuring adherence to coding standards, best practices, and AWS guidelines.
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
Desired Skills and Experience
- Development: 10+ years of extensive experience in designing, developing, and deploying using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: Strong proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python, Node.js, or others, as well as experience with serverless frameworks on AWS.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure the stability and performance.
- Team Leadership: Experience leading and mentoring a team of developers, providing technical guidance, code reviews, and fostering a collaborative and innovative environment.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.
Key Responsibilities
We are seeking an experienced Data Engineer with a strong background in Databricks, Python, Spark/PySpark and SQL to design, develop, and optimize large-scale data processing applications. The ideal candidate will build scalable, high-performance data engineering solutions and ensure seamless data flow across cloud and on-premise platforms.
Key Responsibilities:
- Design, develop, and maintain scalable data processing applications using Databricks, Python, and PySpark/Spark.
- Write and optimize complex SQL queries for data extraction, transformation, and analysis.
- Collaborate with data engineers, data scientists, and other stakeholders to understand business requirements and deliver high-quality solutions.
- Ensure data integrity, performance, and reliability across all data processing pipelines.
- Perform data analysis and implement data validation to ensure high data quality.
- Implement and manage CI/CD pipelines for automated testing, integration, and deployment.
- Contribute to continuous improvement of data engineering processes and tools.
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven experience as a Databricks developer, with strong expertise in Python, SQL, and Spark/PySpark.
- Strong proficiency in SQL, including working with relational databases and writing optimized queries.
- Solid programming experience in Python, including data processing and automation.
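A minimal PySpark sketch of the kind of transform-and-aggregate work this role involves, assuming hypothetical input paths and column names:

```python
# Read raw orders, keep completed ones, and aggregate daily revenue.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")
daily = (orders
         .filter(F.col("status") == "COMPLETED")
         .groupBy(F.to_date("created_at").alias("order_date"))
         .agg(F.count("*").alias("orders"),
              F.sum("amount").alias("revenue")))
daily.write.mode("overwrite").parquet("/data/curated/daily_orders")
```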
Position Responsibilities:
- Design & Develop integration and automation solutions based on technical specifications.
- Support in testing activities, including integration testing, end-to-end (business process) testing and UAT
- Be aware of CI/CD, engineering best practices, and the SDLC process
- Should have an excellent understanding of all existing integrations and automations.
- Understand product integration requirements and solve them the right way, with solutions that are scalable, performant, and resilient.
- Develop using TDD methodology, apply appropriate design methodologies & coding standards
- Able to conduct code reviews, quick at debugging
- Able to deconstruct a complex issue & resolve it
- Able to work with stakeholders/customers, synthesise the business requirements, and suggest the best integration approaches (acting as a process analyst)
- Able to suggest, own & adapt to new technical frameworks/solutions & implement continuous process improvements for better delivery
Qualifications:
- A minimum of 7-9 years of experience in developing integration/automation solutions or related experience
- 3-4 years of experience in a technical architect or lead role
- Strong working experience in Python is preferred
- Good understanding of integration concepts, methodologies, and technologies
- Good communication and presentation skills; strong interpersonal skills with the ability to convey and relate ideas to others and work collaboratively to get things done.
What you will do:-
● Partnering with Product Managers and cross-functional teams to define metrics, build dashboards, and track product performance.
● Conducting deep-dive analyses of large-scale data to identify trends, user behavior patterns, growth gaps, and improvement opportunities.
● Performing competitive benchmarking and industry research to support product strategy and prioritization.
● Generating data-backed insights to drive feature enhancements, product experiments, and business decisions.
● Tracking post-launch impact by measuring adoption, engagement, retention, and ROI of new features.
● Working with Data, Engineering, Business, and Ops teams to design and measure experiments (A/B tests, cohorts, funnels).
● Creating reports, visualizations, and presentations that simplify complex data for stakeholders and leadership.
● Supporting the product lifecycle with relevant data inputs during research, ideation, launch, and optimization phases.
What we are looking for:-
● Bachelor’s degree in engineering, statistics, business, economics, mathematics, data science, or a related field.
● Strong analytical, quantitative, and problem-solving skills.
● Proficiency in SQL and ability to work with large datasets.
● Experience with data visualization/reporting tools (e.g., Excel, Google Sheets, Power BI, Tableau, Looker, Mixpanel, GA).
● Excellent communication skills — able to turn data into clear narratives and actionable recommendations.
● Ability to work collaboratively in cross-functional teams.
● Passion for product, user behavior, and data-driven decision-making
● Prior internship or work experience in product analytics, business analysis, consulting, or growth teams.
● Familiarity with experimentation techniques (A/B testing, funnels, cohorts, retention metrics).
● Understanding of product management concepts and tools (Jira, Confluence, etc.).
● Knowledge of Python or R for data analysis (optional but beneficial).
● Exposure to consumer tech, mobility, travel, or marketplaces.
- Candidate must be a graduate from IIT, NIT, NSUT, or DTU.
- 1–2 years of pure Product Analyst experience is mandatory.
- Strong hands-on experience in product and data analysis, with Python.
- Python skill of at least 3 out of 5.
- Proficiency in SQL and ability to work with large datasets.
- Experience with A/B testing, cohorts, funnels, retention, and product metrics is required.
- Hands-on experience with data visualization tools (Tableau, Power BI, Looker, Mixpanel, GA, etc.).
- Experience with Jira is required.
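As a sketch of the experimentation skills listed above, a two-proportion z-test for an A/B experiment might look like this (the conversion counts are invented):

```python
# Two-proportion z-test for an A/B test using statsmodels.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 470]    # variant A, variant B (made-up numbers)
exposures = [10000, 10000]
z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the lift is unlikely to be chance alone.
```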
Job Description - Technical Project Manager
Job Title: Technical Project Manager
Location: Bhopal / Bangalore (On-site)
Experience Required: 7+ Years
Industry: Fintech / SaaS / Software Development
Role Overview
We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.
Key Responsibilities
Project & Team Management
- Manage daily tasks for Android, Frontend, and Backend developers
- Conduct daily stand-ups, weekly planning, and reviews
- Track progress, identify blockers, and ensure timely delivery
- Maintain sprint boards, task estimations, and timelines
Technical Requirement Translation
- Convert business requirements into technical tasks
- Communicate requirements clearly to developers
- Create user stories, flow diagrams, and PRDs
- Ensure requirements are understood and implemented correctly
Quality & Build Review
- Validate build quality, UI/UX flow, functionality
- Check API integrations, errors, performance issues
- Ensure coding practices and architecture guidelines are followed
- Perform preliminary QA before handover to testing or clients
Issue Resolution
- Identify development issues early
- Coordinate with developers to fix bugs
- Escalate major issues to founders with clear insights
Reporting & Documentation
- Daily/weekly reports to management
- Sprint documentation, release notes
- Maintain project documentation & version control processes
Cross-Team Communication
- Act as the single point of contact for management
- Align multiple tech teams with business goals
- Coordinate with HR and operations for resource planning
Required Skills
- Strong understanding of Android, Web (Frontend/React), Backend development flows
- Knowledge of APIs, Git, CI/CD, basic testing
- Experience with Agile/Scrum methodologies
- Ability to review builds and suggest improvements
- Strong documentation skills (Jira, Notion, Trello, Asana)
- Excellent communication & leadership
- Ability to handle pressure and multiple projects
Good to Have
- Prior experience in Fintech projects
- Basic knowledge of UI/UX
- Experience in preparing FSD/BRD/PRD
- QA experience or understanding of test cases
Salary Range: 9 to 12 LPA

Job Title: QA Automation Analyst – End-to-End Framework Development (Playwright)
Location: Brookefield
Experience: 2+ years
Domain: Banking / Financial Services
Job Description
We are looking for a hands-on QA Automation Analyst to design, build, and maintain end-to-end automation frameworks for high-quality banking and financial applications. You will be responsible for ensuring robust test coverage, validating business workflows, and integrating testing within CI/CD pipelines. You’ll collaborate closely with product, engineering, and DevOps teams to uphold compliance, audit readiness, and rapid delivery in an agile environment.
Key Responsibilities
- Design, develop, and maintain end-to-end automation frameworks from scratch using modern tools and best practices.
- Develop and execute test plans, test cases, and automation scripts for functional, regression, integration, and API testing.
- Build automation using Selenium, PyTest, Robot Framework, or Cypress, with Playwright as the primary framework for this role.
- Perform API testing for REST services using Postman, Swagger, or Rest Assured; validate responses, contracts, and data consistency.
- Integrate automation frameworks with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI/CD, or similar).
- Participate in requirement analysis, impact assessment, sprint ceremonies, and cross-functional discussions.
- Validate data using SQL and support User Acceptance Testing (UAT); generate reports and release sign-offs.
- Log, track, and close defects using standard bug-tracking tools; perform root-cause analysis for recurring issues.
- Maintain QA artifacts for audit and compliance purposes.
- Mentor junior QA team members and contribute to process improvements.
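A minimal sketch of the Playwright automation this role centres on, assuming the pytest-playwright plugin (which supplies the `page` fixture) and a hypothetical login flow:

```python
# End-to-end login check with Playwright's sync API under pytest.
# URL and selectors are hypothetical placeholders.
import re
from playwright.sync_api import Page, expect

def test_login_reaches_dashboard(page: Page):
    page.goto("https://example-bank.test/login")
    page.fill("#username", "qa_user")
    page.fill("#password", "s3cret")
    page.click("button[type=submit]")
    # Assert the post-login workflow landed on the dashboard.
    expect(page).to_have_url(re.compile(r"/dashboard"))
    expect(page.locator("h1")).to_contain_text("Accounts")
```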
Qualifications & Skills
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 2+ years of hands-on experience in QA automation for enterprise applications, preferably in the banking/financial domain.
- Strong understanding of SDLC, STLC, QA methodologies, tools, and best practices.
- Experience designing end-to-end automation frameworks from scratch.
- Hands-on with manual and automation testing (Selenium, PyTest, Robot Framework, Playwright, Cypress).
- Experience in API testing and validating RESTful services; knowledge of Rest Assured is a plus.
- Proficient with databases and SQL (PostgreSQL, MySQL, Oracle).
- Solid experience in Agile/Scrum environments and tools like Jira, TestLink, or equivalent.
- Strong understanding of CI/CD pipelines and deployment automation using Jenkins or similar tools.
- Knowledge of version control tools (Git) and collaborative workflows.
- Excellent analytical, problem-solving, documentation, and communication skills.
Nice to Have / Bonus
- Exposure to performance testing tools like JMeter or Gatling.
- Programming experience in Java, Python, or JavaScript for automation scripting.
- ISTQB or equivalent QA certification.
Why Join Us
- Opportunity to work on mission-critical banking applications.
- Hands-on exposure to modern automation tools and frameworks.
- Work in a collaborative, agile, and fast-paced environment.
- Contribute to cutting-edge CI/CD automation and testing strategies.
- 5+ years full-stack development
- Proficiency in AWS cloud-native development
- Experience with microservices & async architectures
- Strong TypeScript proficiency
- Strong Python proficiency
- React.js expertise
- Next.js expertise
- PostgreSQL + PostGIS experience
- GraphQL development experience
- Prisma ORM experience
- Experience in B2C product development (Retail/E-commerce)
- Looking for candidates based out of Bangalore only
CTC: up to 40 LPA
If interested, kindly share your updated resume at 82008 31681.
Job Summary:
Full-time | 6+ Years
We are looking for a Lead Data Scientist with the ability to lead a data science team and help us gain useful insight out of raw data. Lead Data Scientist responsibilities include managing the client and the data science team, planning projects, and building analytics models. You should have a strong problem-solving ability and a knack for statistical analysis. If you’re also able to align our data products with our business goals, we’d like to talk to you.
Responsibilities
● You would be required to identify, develop, and implement the appropriate statistical techniques, algorithms, and deep learning/ML models to create new, scalable solutions that address business challenges across industry domains.
● Define, develop, maintain and evolve data models, tools and capabilities.
● Communicate your findings to the appropriate teams through visualizations
● Provide solutions for, but not limited to: Object detection/Image recognition, Natural Language Processing, Sentiment Analysis, Topic Modeling, Concept Extraction, Recommender Systems, Text Classification, Clustering, Customer Segmentation & Targeting, Propensity Modeling, Churn Modeling, Lifetime Value Estimation, Forecasting, Modeling Response to Incentives, Marketing Mix Optimization, Price Optimization, GenAI and LLMs.
● Ability to build, train, and lead a team of data scientists.
● Follow/maintain an agile methodology for delivering on project milestones.
● Experience applying statistical ideas and methods to data sets to answer business problems.
● Mine and analyze data to drive optimization and improvement of product development, marketing techniques and business strategies.
● Working on ML Model Containerisation.
● Creating ML Model Inference Pipelines.
● Testing and Monitoring ML Models.
Tech Stack:
● Strong Python programming knowledge - Data Science and advanced concepts like abstract classes, function overloading and overriding, inheritance, modular programming, and reusability
● Working knowledge of Transformers, Hugging Face, etc.
● Working knowledge of implementing large language models for enterprise applications.
● Cloud experience using Azure services.
● REST APIs using Flask or FastAPI frameworks
● Good to have - PySpark: Spark DataFrame operations, SQL functions API, parallel execution
● Good to have - Unit testing using Python pytest or unittest
Preferred Qualifications:
● Bachelor’s/Master’s/PhD degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Applied Mathematics, or a related technical field.
● Minimum of 6 years of experience in a related position, as a senior data scientist building Predictive analytics, Computer Vision, NLP and GenAI solutions for various types of business problems.
● Advanced knowledge of statistical techniques, machine learning algorithms, and deep learning frameworks like TensorFlow, Keras, PyTorch, etc.
● Strong planning and project management skills.
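A quick sketch of the Transformers/Hugging Face usage the tech stack mentions; the default model is downloaded on first run, and production use would pin a specific model:

```python
# Zero-setup text classification with the Hugging Face pipeline API.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Churn dropped sharply after the pricing change."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```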
Backend Engineer
Job Overview
We are seeking an experienced Backend Engineer to architect and deploy scalable backend applications and services. The ideal candidate will have expertise in modern programming languages, distributed systems, and team leadership.
Key Responsibilities
● Design and maintain high-performance backend applications and microservices
● Architect scalable, cloud-native systems and collaborate across engineering teams
● Write high-quality, performant code and conduct thorough code reviews
● Build and operate CI/CD pipelines and production systems
● Work with databases, containerization (Docker/Kubernetes), and cloud platforms
● Lead agile practices and continuously improve service reliability
Required Qualifications
● 4+ years of professional software development experience
● 2+ years contributing to service design and architecture
● Strong expertise in modern languages like Golang, Python
● Deep understanding of scalable, cloud-native architectures and microservices
● Production experience with distributed systems and database technologies
● Experience with Docker, software engineering best practices
● Bachelor's Degree in Computer Science or related technical field
Preferred Qualifications
● Experience with Golang, AWS, and Kubernetes
● CI/CD pipeline experience with GitHub Actions
● Start-up environment experience
ML Intern
Hyperworks Imaging is a cutting-edge technology company based out of Bengaluru, India since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real world problems. We are rapidly growing, working with multiple companies around the world.
JOB OVERVIEW
We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating assistants of the future.
The ideal candidate will work on a complete ML pipeline starting from extraction, transformation and analysis of data to developing novel ML algorithms. The candidate will implement latest research papers and closely work with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.
RESPONSIBILITIES:
- Create AI agents using Model Context Protocols (MCPs), Claude Code, DSPy, etc.
- Develop custom evals for AI agents.
- Build and maintain ML pipelines
- Optimize and evaluate ML models to ensure accuracy and performance.
- Define system requirements and integrate ML algorithms into cloud-based workflows.
- Write clean, well-documented, and maintainable code following best practices
REQUIREMENTS:
- 1-3+ years of experience in data science, machine learning, or a similar role.
- Demonstrated expertise with Python, PyTorch, and TensorFlow.
- Graduated/Graduating with B.Tech/M.Tech/PhD degrees in Electrical Engg./Electronics Engg./Computer Science/Maths and Computing/Physics
- Has done coursework in Linear Algebra, Probability, Image Processing, Deep Learning and Machine Learning.
- Has demonstrated experience with Model Context Protocols (MCPs), DSPy, AI agents, MLOps, etc.
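A minimal PyTorch training-loop sketch of the deep learning fundamentals the requirements cover; the regression data is synthetic and purely illustrative:

```python
import torch
from torch import nn

# Synthetic linear-regression data (invented for illustration).
X = torch.randn(256, 10)
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```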
WHO CAN APPLY:
Only those candidates will be considered who:
- have relevant skills and interests
- can commit full time
- can show prior work and deployed projects
- can start immediately
Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.
SALARY DETAILS: Commensurate with experience.
JOINING DATE: Immediate
JOB TYPE: Full-time
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform.
Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are looking for a Software Developer Intern (Zoho Ecosystem) to join our HR Tech and Automation team at MyOperator’s Noida office. This role is ideal for passionate coders who are eager to explore the Zoho platform and learn how to automate business workflows, integrate APIs, and build internal tools that enhance organisational efficiency.
You will work directly with our Zoho Developer and Engineering Operations team, gaining hands-on experience in Deluge scripting, API integrations, and system automation within one of the fastest-growing SaaS environments.
Key Responsibilities
- Develop and test API integrations between Zoho applications and third-party platforms.
- Learn and apply Deluge scripting (Zoho’s proprietary language) to automate workflows.
- Assist in creating custom functions, dashboards, and workflow logic within Zoho apps.
- Debug and document automation setups to ensure smooth internal operations.
- Collaborate with HR Tech and cross-functional teams to bring automation ideas to life.
- Support ongoing enhancement and optimization of existing Zoho systems.
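A generic sketch of the API-integration work described above: call a REST endpoint, check the response, and work with the JSON payload. The endpoint and token below are placeholders, not real Zoho APIs:

```python
import requests

resp = requests.get(
    "https://api.example.com/v1/employees",   # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
resp.raise_for_status()                        # fail loudly on HTTP errors
for employee in resp.json().get("data", []):
    print(employee.get("id"), employee.get("name"))
```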
Required Skills & Qualifications
- Strong understanding of at least one programming language (JavaScript or Python).
- Basic knowledge of APIs, JSON, and REST.
- Logical and analytical problem-solving mindset.
- Eagerness to explore Zoho applications (People, Recruit, Creator, CRM, etc.).
- Excellent communication and documentation skills.
Good to Have (Optional)
- Exposure to HTML, CSS, or SQL.
- Experience with workflow automation or no-code platforms.
- Familiarity with SaaS ecosystems or business process automation tools.
Internship Details
- Location: 91Springboard, Plot No. D-107, Sector 2, Noida, Uttar Pradesh – 201301
- Duration: 6 Months (Full-time, Office-based)
- Working Days: Monday to Friday
- Conversion: Strong possibility of a Full-Time Offer based on performance
Why Join Us
At MyOperator, you’ll gain hands-on experience with one of the largest SaaS ecosystems, working on real-world automations, API integrations, and workflow engineering. You’ll learn directly from experienced developers, gain exposure to internal business systems, and contribute to automating operations for a fast-scaling AI-led company.
This internship provides a strong foundation to grow into roles such as Zoho Developer, Automation Engineer, or Internal Tools Engineer, along with an opportunity for full-time employment upon successful completion.
Software Tester – Automation (On-Site)
📍 Location: Navi Mumbai
Budget: 4 LPA to 7 LPA
Years of Experience: 2 to 5 years
🕒 Immediate Joiners Preferred
✨ Why Join Us?
🚀 Growth-driven environment with modern, automation-first projects
📆 Weekends off + Provident Fund benefits
🤝 Supportive, collaborative & innovation-first culture
🔍 Role Overview
We are looking for an Automation Tester with strong hands-on experience in Python-based UI, API, and WebSocket automation. You will collaborate closely with developers, project managers, and QA peers to ensure product quality, performance, and reliability, while also exploring AI-led testing initiatives.
🧩 Key Responsibilities
🧾 Requirement Analysis & Test Planning
Participate in client interactions to understand testing and automation requirements.
Convert functional/technical specifications into automation-ready test scenarios.
🤖 Automation Testing & Framework Development
Develop and maintain automation scripts using Python, Selenium, and Pytest.
Build scalable automation frameworks for UI, API, and WebSocket testing.
Improve script reusability, modularity, and performance.
🌐 API & WebSocket Testing
Perform REST API validations using Postman/Swagger.
Develop automated API test suites using Python/Pytest.
Execute WebSocket test scenarios (real-time event/message validations, latency, connection stability).
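A sketch of an automated WebSocket round-trip check using the `websockets` library with pytest-asyncio; the echo URL is a placeholder for a real service:

```python
import asyncio
import pytest
import websockets

@pytest.mark.asyncio
async def test_websocket_echo_roundtrip():
    async with websockets.connect("wss://echo.example.test") as ws:
        await ws.send('{"ping": 1}')
        # Bound the wait so a stalled connection fails fast.
        reply = await asyncio.wait_for(ws.recv(), timeout=5)
        assert reply == '{"ping": 1}'
```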
🧪 Manual Testing (As Needed)
Execute functional, UI, smoke, sanity, and exploratory tests.
Validate applications in development, QA, and production environments.
🐞 Defect Management
Log, track, and retest defects using Jira or Zoho Projects.
Ensure high-quality bug reporting with clear steps and severity/priority tagging.
⚡ Performance Testing
Use JMeter to conduct load, stress, and performance tests for APIs/WebSocket-based systems.
Analyze system performance and highlight bottlenecks.
🧠 AI-Driven Testing Exploration
Research and experiment with AI tools to enhance automation coverage and efficiency.
Propose AI-driven improvements for regression, analytics, and test optimization.
🤝 Collaboration & Communication
Participate in daily stand-ups and regular QA syncs.
Communicate blockers, automation progress, and risks clearly.
📊 Test Reporting & Metrics
Create reports on automation execution, defect trends, and performance benchmarks.
🛠 Key Technical Skills
✔ Strong proficiency in Python
✔ UI Automation using Selenium (Python)
✔ Pytest Framework
✔ API Testing – Postman/Swagger
✔ WebSocket Testing
✔ Performance Testing using JMeter
✔ Knowledge of CI/CD tools (such as Jenkins)
✔ Knowledge of Git
✔ SQL knowledge (added advantage)
✔ Functional/Manual Testing expertise
✔ Solid understanding of SDLC/STLC & QA processes
🧰 Tools You Will Work With
Automation: Selenium, Pytest
API & WebSockets: Postman, Swagger, Python libraries
Performance: JMeter
Project/Defect Tracking: Jira, Zoho Projects
CI/CD & Version Control: Jenkins, Git
🌟 Soft Skills
Strong communication & teamwork
Detail-oriented and analytical
Problem-solving mindset
Ownership and accountability
Job Summary
We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).
This role is crucial in shaping product experiences and driving innovation at scale.
Mandatory Candidate Background
- Experience working in product-based companies only
- Strong academic background
- Stable work history
- Excellent coding skills and hands-on development experience
- Strong foundation in Data Structures & Algorithms (DSA)
- Strong problem-solving mindset
- Understanding of clean architecture and code quality best practices
Key Responsibilities
- Design, develop, and maintain scalable full-stack applications
- Build responsive, performant, user-friendly UIs using TypeScript & Next.js
- Develop APIs and backend services using Python (FastAPI/Django)
- Collaborate with product, design, and business teams to translate requirements into technical solutions
- Ensure code quality, security, and performance across the stack
- Own features end-to-end: architecture, development, deployment, and monitoring
- Contribute to system design, best practices, and the overall technical roadmap
Requirements
Must-Have:
- 2+ years of professional full-stack engineering experience
- Strong expertise in TypeScript/Next.js or Python (FastAPI, Django), with working familiarity across both areas
- Experience building RESTful APIs and microservices
- Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
- Strong debugging, optimization, and problem-solving abilities
- Comfortable working in fast-paced startup environments
Good-to-Have:
- Experience with containerization (Docker/Kubernetes)
- Exposure to message queues or event-driven architectures
- Familiarity with modern DevOps and observability tooling
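A minimal FastAPI sketch of the backend half of this stack; the routes and in-memory store are illustrative only:

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()
ITEMS = {1: "first"}    # stand-in for a real database

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.get("/items/{item_id}")
def read_item(item_id: int) -> dict:
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="item not found")
    return {"id": item_id, "name": ITEMS[item_id]}
```

Run locally with `uvicorn main:app --reload`, assuming the file is named main.py.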
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn’t it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.
What You Will Do:
• We’re looking for a Backend Developer (Python) with a strong foundation in backend technologies and a deep interest in scalable, low-latency systems.
• You should have 3–4 years of experience in Python-based development and be eager to solve complex performance and scalability challenges in trading and fintech applications.
• You measure success by your own growth, not external validation.
• You thrive on challenges, not on perks or financial rewards.
• Taking calculated risks excites you—you’re here to build, break, and learn.
• You don’t clock in for a paycheck; you clock in to outperform yourself in a high-frequency trading environment.
• You understand the stakes—milliseconds can make or break trades, and precision is everything.
What We Expect:
• Develop and maintain scalable backend systems using Python.
• Design and implement REST APIs and socket-based communication.
• Optimize code for speed, performance, and reliability.
• Collaborate with frontend teams to integrate server-side logic.
• Work with RabbitMQ, Kafka, Redis, and Elasticsearch for robust backend design.
• Build fault-tolerant, multi-producer/consumer systems.
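An illustrative sketch of the multi-producer/consumer pattern mentioned above, using only the standard library; a production system would swap the in-memory queue for RabbitMQ or Kafka and add retries and backpressure:

```python
import asyncio

async def producer(q: asyncio.Queue, name: str) -> None:
    for i in range(3):
        await q.put(f"{name}:tick-{i}")

async def consumer(q: asyncio.Queue) -> None:
    while True:
        msg = await q.get()
        print("processed", msg)
        q.task_done()

async def main() -> None:
    q: asyncio.Queue = asyncio.Queue(maxsize=100)
    consumers = [asyncio.create_task(consumer(q)) for _ in range(2)]
    await asyncio.gather(*(producer(q, f"p{i}") for i in range(3)))
    await q.join()              # wait until every message is handled
    for c in consumers:
        c.cancel()

asyncio.run(main())
```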
Must-Have Skills:
• 3–4 years of experience in Python and backend development.
• Strong understanding of REST APIs, sockets, and network protocols (TCP/UDP/HTTP).
• Experience with RabbitMQ/Kafka, SQL & NoSQL databases, Redis, and Elasticsearch.
• Bachelor’s degree in Computer Science or related field.
Nice-to-Have Skills:
• Past experience in fintech, trading systems, or algorithmic trading.
• Experience with Golang, C/C++, Erlang, or Elixir.
• Exposure to trading, fintech, or low-latency systems.
• Familiarity with microservices and CI/CD pipelines.
Required Skills: Strong SQL Expertise, Data Reporting & Analytics, Database Development, Stakeholder & Client Communication, Independent Problem-Solving & Automation Skills
Review Criteria
· Must have Strong SQL skills (queries, optimization, procedures, triggers)
· Must have Advanced Excel skills
· Should have 3+ years of relevant experience
· Should have Reporting + dashboard creation experience
· Should have Database development & maintenance experience
· Must have Strong communication for client interactions
· Should have Ability to work independently
· Willingness to work from client locations.
Description
Who is an ideal fit for us?
We seek professionals who are analytical, demonstrate self-motivation, exhibit a proactive mindset, and possess a strong sense of responsibility and ownership in their work.
What will you get to work on?
As a member of the Implementation & Analytics team, you will:
● Design, develop, and optimize complex SQL queries to extract, transform, and analyze data
● Create advanced reports and dashboards using SQL, stored procedures, and other reporting tools
● Develop and maintain database structures, stored procedures, functions, and triggers
● Optimize database performance by tuning SQL queries, and indexing to handle large datasets efficiently
● Collaborate with business stakeholders and analysts to understand analytics requirements
● Automate data extraction, transformation, and reporting processes to improve efficiency
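As a sketch of the window-function SQL this role calls for, here is a query run against an in-memory SQLite database (window functions need SQLite 3.25+) with made-up rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("north", "2024-01", 100), ("north", "2024-02", 140),
    ("south", "2024-01", 90),  ("south", "2024-02", 80),
])
query = """
SELECT region, month, revenue,
       SUM(revenue) OVER (PARTITION BY region ORDER BY month)
           AS running_total
FROM sales
ORDER BY region, month
"""
for row in con.execute(query):
    print(row)
```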
What do we expect from you?
For the SQL/Oracle Developer role, we are seeking candidates with the following skills and Expertise:
● Proficiency in SQL (Window functions, stored procedures) and MS Excel (advanced Excel skills)
● More than 3 years of relevant experience
● Java / Python experience is a plus but not mandatory
● Strong communication skills to interact with customers to understand their requirements
● Capable of working independently with minimal guidance, showcasing self-reliance and initiative
● Previous experience in automation projects is preferred
● Work From Office: Bangalore/Navi Mumbai/Pune/Client locations
We’re looking for a full-stack generalist who can handle the entire product lifecycle: frontend, backend, APIs, AI integrations, deployment, and everything in between. Someone who enjoys owning features from concept to production and working across the entire stack.
This opportunity through ClanX is for Parspec (direct payroll with Parspec)
Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.
Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.
Company Details:
Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.
Requirements:
- Bachelor’s or Master’s degree in Science or Engineering.
- 5-7 years of experience in ML and data science.
- Recent hands-on work with LLMs, including fine-tuning, RAG, agent flows, and integrations.
- Strong understanding of foundational models and transformers.
- Solid grasp of ML and DL fundamentals, with experience in CV and NLP.
- Recent experience working with large datasets.
- Python experience with ML libraries like numpy, pandas, sklearn, matplotlib, nltk and others.
- Experience with frameworks like Hugging Face, Spacy, BERT, TensorFlow, PyTorch, OpenRouter or Modal.
Good to haves
- Experience building scalable AI pipelines for extracting structured data from unstructured sources.
- Experience with cloud platforms, containerization, and managed AI services.
- Knowledge of MLOps practices, CI/CD, monitoring, and governance.
- Experience with AWS or Django.
- Familiarity with databases and web application architecture.
- Experience with OCR or PDF tools.
Responsibilities:
- Design, develop, and deploy NLP, CV, and recommendation systems
- Train and implement deep learning models
- Research and explore novel ML architectures
- Build and maintain end-to-end ML pipelines
- Collaborate across product, design, and engineering teams
- Work closely with business stakeholders to shape product features
- Ensure high scalability and performance of AI solutions
- Uphold best practices in engineering and contribute to a culture of excellence
- Actively participate in R&D and innovation within the team
Interview Process
- Technical interview (coding, ML concepts, project walkthrough)
- System design and architecture round
- Culture fit and leadership interaction
- Final offer discussion
This opportunity through ClanX is for Parspec (direct payroll with Parspec)
Please note you would be interviewed for this role at Parspec while ClanX is a recruitment partner helping Parspec find the right candidates and manage the hiring process.
Parspec is hiring a Machine Learning Engineer to build state-of-the-art AI systems, focusing on NLP, LLMs, and computer vision in a high-growth, product-led environment.
Company Details:
Parspec is a fast-growing technology startup revolutionizing the construction supply chain. By digitizing and automating critical workflows, Parspec empowers sales teams with intelligent tools to drive efficiency and close more deals. With a mission to bring a data-driven future to the construction industry, Parspec blends deep domain knowledge with AI expertise.
Requirements:
- 3 to 4 years of relevant experience in ML and AI roles
- Strong grasp of ML, deep learning, and model deployment
- Proficient in Python and libraries like numpy, pandas, sklearn, etc.
- Experience with TensorFlow/Keras or PyTorch
- Familiar with AWS/GCP platforms
- Strong coding skills and ability to ship production-ready solutions
- Bachelor's/Master's in Engineering or related field
- Curious, self-driven, and a fast learner
- Passionate about NLP, LLMs, and state-of-the-art AI technologies
- Comfortable with collaboration across globally distributed teams
Preferred (Not Mandatory):
- Experience with Django, databases, and full-stack environments
- Familiarity with OCR and PDF processing
- Competitive programming or Kaggle participation
- Prior work with distributed teams across time zones
Responsibilities:
- Design, develop, and deploy NLP, CV, and recommendation systems
- Train and implement deep learning models
- Research and explore novel ML architectures
- Build and maintain end-to-end ML pipelines
- Collaborate across product, design, and engineering teams
- Work closely with business stakeholders to shape product features
- Ensure high scalability and performance of AI solutions
- Uphold best practices in engineering and contribute to a culture of excellence
- Actively participate in R&D and innovation within the team
Interview Process
- Technical interview (coding, ML concepts, project walkthrough)
- System design and architecture round
- Culture fit and leadership interaction
- Final offer discussion
Senior Software Engineer
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with expert knowledge of systems management and/or systems monitoring software, observability platforms, and/or performance management software and solutions, with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC, and NetApp, and public cloud platforms like Google Cloud and AWS, to expand the depth and breadth of Virtana products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform
- Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform.
- Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.
- Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation
- Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution
- Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery
Required Qualifications:
- Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software.
- Deep experience in public cloud environments using Kubernetes and other distributed managed services such as Kafka (Google Cloud and/or AWS)
- Experience with CI/CD and cloud-based software development and delivery
- Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.
- Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of them is required.
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent
- Highly effective verbal and written communication skills and ability to lead and participate in multiple projects
- Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities
- Must be results-focused, team-oriented and with a strong work ethic
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills
- Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
About Virtana: Virtana delivers the industry’s broadest and deepest Observability Platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success.
Entry Level | On-Site | Pune
Internship Opportunity: Data + AI Intern
Location: Pune, India (In-office)
Duration: 2 Months
Start Date: Between 11th July 2025 and 15th August 2025
Work Days: Monday to Friday
Stipend: As per company policy
About ImmersiveData.AI
Smarter Data. Smarter Decisions. Smarter Enterprises.™
At ImmersiveData.AI, we don’t just transform data—we challenge and redefine business models. By leveraging cutting-edge AI, intelligent automation, and modern data platforms, we empower enterprises to unlock new value and drive strategic transformation.
About the Internship
As a Data + AI Intern, you will gain hands-on experience at the intersection of data engineering and AI. You’ll be part of a collaborative team working on real-world data challenges using modern tools like Snowflake, DBT, Airflow, and LLM frameworks. This internship is a launchpad for students looking to enter the rapidly evolving field of Data & AI.
Key Responsibilities
- Assist in designing, building, and optimizing data pipelines and ETL workflows
- Work with structured and unstructured datasets across various sources
- Contribute to AI-driven automation and analytics use cases
- Support backend integration of large language models (LLMs)
- Collaborate in building data platforms using tools like Snowflake, DBT, and Airflow
Required Skills
- Proficiency in Python
- Strong understanding of SQL and relational databases
- Basic knowledge of Data Engineering and Data Analysis concepts
- Familiarity with cloud data platforms or willingness to learn (e.g., Snowflake)
Preferred Learning Certifications (Optional but Recommended)
- Python Programming
- SQL & MySQL/PostgreSQL
- Statistical Modeling
- Tableau / Power BI
- Voice App Development (Bonus)
Who Can Apply
Only candidates who:
- Are available full-time (in-office, Pune)
- Can start between 11th July and 15th August 2025
- Are available for a minimum of 2 months
- Have relevant skills and interest in data and AI
Perks
- Internship Certificate
- Letter of Recommendation
- Work with cutting-edge tools and technologies
- Informal dress code
- Exposure to real industry use cases and mentorship
About the company:
Inteliment is a niche business analytics company with an almost two-decade proven track record of partnering with hundreds of Fortune 500 global companies. Inteliment operates its ISO-certified development centre in Pune, India, and has business operations in multiple countries through subsidiaries in Singapore and Europe, with its headquarters in India.
About the Role:
As a Data Engineer, you will contribute to cutting-edge global projects and innovative product initiatives, delivering impactful solutions for our Fortune clients. In this role, you will take ownership of the entire data pipeline and infrastructure development lifecycle—from ideation and design to implementation and ongoing optimization. Your efforts will ensure the delivery of high-performance, scalable, and reliable data solutions. Join us to become a driving force in shaping the future of data infrastructure and innovation, paving the way for transformative advancements in the data ecosystem.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- Certifications in a related field will be an added advantage.
Key Competencies:
- Must have experience with SQL, Python and Hadoop
- Good to have experience with Cloud Computing Platforms (AWS, Azure, GCP, etc.), DevOps Practices, Agile Development Methodologies
- Experience with ETL or other similar technologies will be an advantage.
- Core Skills: Proficiency in SQL, Python, or Scala for data processing and manipulation
- Data Platforms: Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Tools: Familiarity with tools like Apache Spark, Kafka, and modern data warehouses (e.g., Snowflake, BigQuery, Redshift).
- Soft Skills: Strong problem-solving abilities, collaboration, and communication skills to work effectively with technical and non-technical teams.
- Additional: Knowledge of SAP would be an advantage
Key Responsibilities:
- Data Pipeline Development: Build, maintain, and optimize ETL/ELT pipelines for seamless data flow.
- Data Integration: Consolidate data from various sources into unified systems.
- Database Management: Design and optimize scalable data storage solutions.
- Data Quality Assurance: Ensure data accuracy, consistency, and completeness.
- Collaboration: Work with analysts, scientists, and stakeholders to meet data needs.
- Performance Optimization: Enhance pipeline efficiency and database performance.
- Data Security: Implement and maintain robust data security and governance policies
- Innovation: Adopt new tools and design scalable solutions for future growth.
- Monitoring: Continuously monitor and maintain data systems for reliability.
- Data Engineers ensure reliable, high-quality data infrastructure for analytics and decision-making.
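A sketch of an orchestrated pipeline for the responsibilities above, written as an Airflow 2.x DAG (the `schedule` argument assumes Airflow 2.4+); the task bodies and names are hypothetical stubs:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():   print("pull from source systems")
def transform(): print("clean and conform")
def load():      print("write to the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```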
About Us
Dolat Capital is a multi-strategy quantitative trading firm specializing in high-frequency and fully automated trading systems across global markets. We build proprietary algorithms using advanced mathematical, statistical, and computational techniques.
We are looking for an Experienced Quantitative Researcher to develop, test, and optimize quantitative trading strategies—primarily for APAC markets. The ideal candidate brings strong mathematical thinking, hands-on trading experience, and a track record of building profitable models.
Key Responsibilities
- Research, design & develop quantitative trading strategies
- Analyse large datasets and build predictive models / regression models
- Implement models in Python / C++ / Matlab
- Monitor, execute, and improve existing trading strategies
- Collaborate closely with traders, developers, and researchers
- Optimize trading systems, reduce latency, and enhance execution
- Identify new trading opportunities across listed products
- Oversee and manage risk for options, equities, futures, and other instruments
Required Skills & Experience
- Minimum 3+ years experience on a high-volume equities, futures, options, or market-making desk
- Strong background in Statistics, Mathematics, Physics, or a related field (PhD preferred)
- Proven track record of profitable real-world trading strategies
- Strong programming experience: C++, Python, R, Matlab
- Experience with automated trading systems and exchange protocols
- Ability to work in a fast-paced, high-pressure trading environment
- Excellent analytical skills, precision, and attention to detail
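A toy sketch of the regression modelling listed above: ordinary least squares on synthetic signal-versus-return data. The numbers are fabricated and this is not a trading strategy:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)                        # hypothetical alpha signal
returns = 0.05 * signal + rng.normal(scale=0.2, size=1000)

X = np.column_stack([np.ones_like(signal), signal])   # intercept + signal
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
print(f"intercept={beta[0]:.4f}, signal beta={beta[1]:.4f}")
```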
Apply: https://lnkd.in/gVHVTMG6
About Us
Binocs.co empowers institutional lenders with next-generation loan management software, streamlining the entire loan lifecycle and facilitating seamless interaction among stakeholders.
Team: Binocs.co is led by a passionate team with extensive experience in financial technology, lending, AI, and software development.
Investors: Our journey is backed by renowned investors who share our vision for transforming the loan management landscape: Beenext, Arkam Ventures, Accel, Saison Capital, Blume Ventures, Premji Invest, and Better Capital.
What we're looking for
We seek a motivated, talented, and intelligent individual who shares our vision of being a changemaker. We value individuals dissatisfied with the status quo, strive to make improvements, and envision making significant contributions. We look for those who embrace challenges and dedicate themselves to solutions. We seek individuals who push for data-driven decisions, are unconstrained by titles, and value collaboration. We are building a fast-paced team to shape various business and technology aspects.
Responsibilities
- Be a part of the initial team to define and set up a best-in-class digital platform for the Private Credit industry, and take full ownership of the components of the digital platform
- You will build robust and scalable web-based applications and need to think of platforms & reuse
- Driving and active contribution to High-Level Designs(HLDs) and Low-Level Designs (LLDs).
- Collaborate with frontend developers, product managers, and other stakeholders to understand requirements and translate them into technical specifications.
- Mentor team members in adopting effective coding practices. Conduct comprehensive code reviews, focusing on both functional and non-functional aspects.
- Ensure the security, performance, and reliability of backend systems through proper testing, monitoring, and optimization.
- Participate in code reviews, sprint planning, and agile development processes to maintain code quality and project timelines.
- Simply, be an owner of the platform and do whatever it takes to get the required output for customers
- Be curious about product problems and have an open mind to dive into new domains eg: gen-AI.
- Stay up-to-date with the latest development trends, tools, and technologies.
Qualifications
- 3-5 years of experience in backend development, with a strong track record of successfully architecting and implementing scalable and high-performance backend systems.
- Proficiency in at least one backend programming language (e.g., Python, Golang, Node.js, Java) and its tech stack, to write maintainable, scalable, unit-tested code.
- Good understanding of databases (e.g. MySQL, PostgreSQL) and NoSQL (e.g. MongoDB, Elasticsearch, etc)
- Solid understanding of RESTful API design principles and best practices.
- Experience with multi-threading and concurrent programming (a minimal sketch follows this list)
- Extensive object-oriented design experience, knowledge of design patterns, and a strong passion and ability to design intuitive module- and class-level interfaces.
- Experience with cloud computing platforms and services (e.g., AWS, Azure, Google Cloud Platform)
- Knowledge of Test-Driven Development (TDD)
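For illustration only, here is a minimal sketch of the concurrency pattern referenced above, using Python's standard-library thread pool; the URLs and worker count are hypothetical, not part of the role:

```python
# A minimal I/O-bound concurrency sketch; URLs and worker count are hypothetical.
from concurrent.futures import ThreadPoolExecutor, as_completed
from urllib.request import urlopen

URLS = ["https://example.com", "https://example.org"]

def fetch(url: str) -> tuple[str, int]:
    """Fetch a URL and return (url, response size in bytes)."""
    with urlopen(url, timeout=5) as resp:
        return url, len(resp.read())

if __name__ == "__main__":
    # Threads suit I/O-bound work; CPU-bound work would swap in
    # ProcessPoolExecutor to sidestep the GIL.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch, u) for u in URLS]
        for fut in as_completed(futures):
            url, size = fut.result()
            print(f"{url}: {size} bytes")
```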
Good to have
- Experience with microservices architecture
- Knowledge of serverless computing and event-driven architectures (e.g., AWS Lambda, Azure Functions)
- Understanding of DevOps practices and tools for continuous integration and deployment (CI/CD).
- Contributions to open-source projects or active participation in developer communities.
- Experience working with LLMs and AI technologies
Benefits
By joining Binocs, you’ll become part of a vibrant and dynamic team dedicated to disrupting the fintech space with cutting-edge solutions. We offer a stimulating work environment where innovation is at the heart of everything we do. Our competitive compensation package, inclusive of equity, is designed to reward your contributions to our success.
Required Qualifications
- 4+ years of professional software development experience
- 2+ years contributing to service design and architecture
- Strong expertise in modern languages such as Golang and Python
- Deep understanding of scalable, cloud-native architectures and microservices
- Production experience with distributed systems and database technologies
- Experience with Docker, software engineering best practices
- Bachelor's Degree in Computer Science or related technical field
Preferred Qualifications
- Experience with Golang, AWS, and Kubernetes
- CI/CD pipeline experience with GitHub Actions
- Start-up environment experience
Job Summary: Lead/Senior ML Data Engineer (Cloud-Native, Healthcare AI)
Experience Required: 8+ Years
Work Mode: Remote
We are seeking a highly autonomous and experienced Lead/Senior ML Data Engineer to drive the critical data foundation for our AI analytics and Generative AI platforms. This is a specialized hybrid position, focusing on designing, building, and optimizing scalable data pipelines (ETL/ELT) that transform complex, messy clinical and healthcare data into high-quality, production-ready feature stores for Machine Learning and NLP models.
The successful candidate will own technical work streams end-to-end, ensuring data quality, governance, and low-latency delivery in a cloud-native environment.
Key Responsibilities & Focus Areas:
- ML Data Pipeline Ownership (70-80% Focus): Design and implement high-performance, scalable ETL/ELT pipelines using PySpark and a Lakehouse architecture (such as Databricks) to ingest, clean, and transform large-scale healthcare datasets.
- AI Data Preparation: Specialize in Feature Engineering and data preparation for complex ML workloads, including transforming unstructured clinical data (e.g., medical notes) for Generative AI and NLP model training.
- Cloud Architecture & Orchestration: Deploy, manage, and optimize data workflows using Airflow in a production AWS environment.
- Data Governance & Compliance: Implement pipelines with robust data masking, pseudonymization, and security controls (a mandatory requirement), ensuring continuous adherence to HIPAA and other relevant health data privacy regulations; a minimal masking sketch follows this list.
- Technical Leadership: Lead and define technical requirements from ambiguous business problems, acting as a key contributor to the data architecture strategy for the core AI platform.
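As a purely illustrative sketch of the masking and pseudonymization step described above, assuming hypothetical S3 paths, column names (patient_name, ssn), and a placeholder salt (a real salt would come from a secrets store, not a literal):

```python
# A minimal PySpark masking/pseudonymization sketch; paths, columns, and the
# salt literal are hypothetical assumptions, not the platform's actual schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("phi-masking-sketch").getOrCreate()

df = spark.read.parquet("s3://example-bucket/raw/claims/")  # illustrative path

masked = (
    df
    # Keyed hash keeps the pseudonym join-stable across tables without being
    # trivially reversible.
    .withColumn(
        "patient_pseudo_id",
        F.sha2(F.concat(F.col("patient_name"), F.lit("salt-placeholder")), 256),
    )
    # Redact the quasi-identifier outright.
    .withColumn("ssn", F.lit("***-**-****"))
    .drop("patient_name")
)

masked.write.mode("overwrite").parquet("s3://example-bucket/curated/claims/")
```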
Non-Negotiable Requirements (The "Must-Haves"):
- 5+ years of progressive experience as a Data Engineer, with a clear focus on ML/AI support.
- Deep expertise in PySpark/Python for distributed data processing.
- Mandatory proficiency with Lakehouse platforms (e.g., Databricks) in an AWS production environment.
- Proven experience handling complex clinical/healthcare data (EHR, Claims), including unstructured text.
- Hands-on experience with HIPAA/GDPR compliance in data pipeline design.
About the Role
We are looking for a passionate GenAI Developer to join our dynamic team at Hardwin Software Solutions. In this role, you will design and develop scalable backend systems, leverage AWS services for data processing, and work on cutting-edge Generative AI solutions. If you enjoy solving complex problems and building impactful applications, we’d love to hear from you.
What You Will Do
- Develop robust and scalable backend services and APIs using Python, integrating with various AWS services.
- Design, implement, and maintain data processing pipelines leveraging AWS (e.g., S3, Lambda); a minimal sketch follows this list.
- Collaborate with cross-functional teams to translate requirements into efficient technical solutions.
- Write clean, maintainable code while following agile engineering practices (CI/CD, version control, release cycles).
- Optimize application performance and scalability by fine-tuning AWS resources and leveraging advanced Python techniques.
- Contribute to the development and integration of Generative AI techniques into business applications.
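As a hedged illustration of the S3-and-Lambda pipelines mentioned above, here is a minimal S3-triggered Lambda handler; the bucket layout and the "processed" flag are stand-ins, not the company's design:

```python
# A minimal, hypothetical S3-triggered Lambda step; the transform is a stand-in.
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Handle an S3 ObjectCreated event and write a processed copy back."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        payload = json.loads(body)
        payload["processed"] = True  # placeholder transform
        s3.put_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            Body=json.dumps(payload).encode("utf-8"),
        )
    return {"statusCode": 200}
```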
What You Should Have
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 3+ years of professional experience in software development.
- Strong programming skills in Python and good understanding of data structures & algorithms.
- Hands-on experience with AWS services: S3, Lambda, DynamoDB, OpenSearch.
- Experience with Relational Databases, Source Control, and CI/CD pipelines.
- Practical knowledge of Generative AI techniques (mandatory).
- Strong analytical and mathematical problem-solving abilities.
- Excellent communication skills in English.
- Ability to work both independently and collaboratively, with a proactive and self-motivated attitude.
- Strong organizational skills with the ability to prioritize tasks and meet deadlines.
Review Criteria
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python experience for pipeline and automation development
- 4+ years of experience with AWS cloud, including in recent roles
- Company background: product companies preferred; exceptions for service-company candidates with strong MLOps and AWS depth
Preferred
- Hands-on with Docker deployments for ML workflows on EKS/ECS
- Experience with ML observability (data drift, model drift, performance monitoring, alerting) using CloudWatch, Grafana, Prometheus, or OpenSearch.
- Experience with CI/CD/CT using GitHub Actions or Jenkins.
- Experience with JupyterHub/notebooks, Linux, scripting, and metadata tracking for the ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria
- CV Attachment is mandatory
- Please provide your CTC breakup (fixed + variable).
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled out the Google form?
Role & Responsibilities
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); a minimal DAG sketch follows this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
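To make the orchestration concrete, here is a minimal Airflow 2.x DAG sketch for a train-then-validate flow; the DAG id, schedule, and task bodies are placeholders, not the actual platform pipeline:

```python
# A minimal Airflow 2.x DAG sketch; task logic is a placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def train_model(**context):
    # Placeholder: fit the model, log metrics, register the artifact.
    print("training...")

def validate_model(**context):
    # Placeholder: compare against the current champion, gate deployment.
    print("validating...")

with DAG(
    dag_id="ml_training_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow < 2.4
    catchup=False,
) as dag:
    train = PythonOperator(task_id="train", python_callable=train_model)
    validate = PythonOperator(task_id="validate", python_callable=validate_model)
    train >> validate  # validation runs only after training succeeds
```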
Ideal Candidate
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
  o Compute/Orchestration: EKS, ECS, EC2, Lambda
  o Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
  o Workflow: MWAA/Airflow, Step Functions
  o Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master's degree in Computer Science, Machine Learning, Data Engineering, or a related field.
Job Summary
We are seeking an experienced Databricks Developer with strong skills in PySpark, SQL, and Python, and hands-on experience deploying data solutions on AWS (preferred) or Azure. The role involves designing, developing, and optimizing scalable data pipelines and analytics workflows on the Databricks platform.
Key Responsibilities
- Develop and optimize ETL/ELT pipelines using Databricks and PySpark.
- Build scalable data workflows on AWS (EC2, S3, Glue, Lambda, IAM) or Azure (ADF, ADLS, Synapse).
- Implement and manage Delta Lake (ACID transactions, schema evolution, time travel); see the sketch after this list.
- Write efficient, complex SQL for transformation and analytics.
- Build and support batch and streaming ingestion (Kafka, Kinesis, Event Hubs).
- Optimize Databricks clusters, jobs, notebooks, and PySpark performance.
- Collaborate with cross-functional teams to deliver reliable data solutions.
- Ensure data governance, security, and compliance.
- Troubleshoot pipelines and support CI/CD deployments.
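As an illustrative sketch (not the project's code) of the Delta Lake features named above, assuming a Databricks or Delta-enabled Spark session and a hypothetical /mnt/lake/orders table:

```python
# A minimal Delta Lake sketch; the path and data are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # Databricks supplies `spark` already
path = "/mnt/lake/orders"  # hypothetical table location

# Seed the table (this becomes version 0).
spark.createDataFrame([(1, "pending"), (2, "pending")], ["order_id", "status"]) \
    .write.format("delta").mode("overwrite").save(path)

# ACID upsert via MERGE: matched rows update, new rows insert, atomically.
updates = spark.createDataFrame([(1, "shipped"), (3, "pending")],
                                ["order_id", "status"])
(DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())

# Schema evolution: mergeSchema lets an append introduce a new column.
(updates.withColumn("carrier", F.lit("dhl"))
    .write.format("delta").mode("append")
    .option("mergeSchema", "true").save(path))

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```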
Required Skills & Experience
- 4–8 years in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks (clusters, jobs, workflows).
- Advanced PySpark and strong Python skills.
- Expert-level SQL (complex queries, window functions); a window-function sketch follows this list.
- Practical experience with AWS (preferred) or Azure cloud services.
- Experience with Delta Lake, Parquet, and data lake architectures.
- Familiarity with CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Good understanding of data modeling, optimization, and distributed systems.
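Since window functions come up above, here is a minimal PySpark equivalent of the classic ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...) top-per-group query; the data is made up for illustration:

```python
# A minimal top-per-group query using a window function; data is made up.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

sales = spark.createDataFrame(
    [("north", "alice", 120), ("north", "bob", 90), ("south", "cara", 200)],
    ["region", "rep", "amount"],
)

# Rank reps within each region by amount, then keep the top seller per region;
# the SQL equivalent is ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC).
w = Window.partitionBy("region").orderBy(F.desc("amount"))
top = sales.withColumn("rn", F.row_number().over(w)).where(F.col("rn") == 1)
top.show()
```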