50+ Python Jobs in India



Must-Have:
1. Experience working with ML libraries and packages such as Scikit-learn, NumPy, Pandas, TensorFlow, Matplotlib, and Caffe.
2. Deep learning frameworks: PyTorch, spaCy, Keras.
3. Deep learning architectures: LSTM, CNN, self-attention, and Transformers.
4. Experience with image processing and computer vision is a must.
5. Designing data science applications: Large Language Models (LLMs), Generative Pre-trained Transformers (GPT), generative AI techniques, Natural Language Processing (NLP), machine learning techniques, Python, Jupyter Notebook, common data science packages (TensorFlow, Scikit-learn, Keras, etc.), LangChain, Flask, FastAPI, and prompt engineering.
6. Programming experience in Python.
7. Strong written and verbal communication.
8. Excellent interpersonal and collaboration skills.
Good-to-Have:
1. Experience working with vector databases and graph representations of documents.
2. Experience building or maintaining MLOps pipelines.
3. Experience with cloud computing infrastructure such as AWS SageMaker or Azure ML for implementing ML solutions.
4. Exposure to Docker and Kubernetes.
Role descriptions / Expectations from the role:
1. Design and implement scalable and efficient data architectures to support generative AI workflows.
2. Fine-tune and optimize large language models (LLMs) for generative AI; conduct performance evaluation and benchmarking for LLMs and machine learning models.
3. Apply prompt engineering techniques as required by the use case.
4. Collaborate with research and development teams to build large language models for generative AI use cases; plan and break down larger data science tasks into lower-level tasks.
5. Lead junior data engineers on tasks such as designing data pipelines, dataset creation, and deployment; use data visualization tools, machine learning techniques, natural language processing, feature engineering, deep learning, and statistical modelling as required by the use case.
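For context, a minimal sketch of the kind of LLM prompting workflow this role covers, using the Hugging Face transformers pipeline (the model name and prompt are illustrative placeholders, not part of this role's actual stack):

```python
# Hedged sketch: prompt a small open causal LM via transformers.
# "gpt2" and the prompt below are stand-ins for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize in one sentence: LangChain helps developers compose LLM applications."
result = generator(prompt, max_new_tokens=40, do_sample=False)

print(result[0]["generated_text"])  # prompt plus the generated continuation
```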

About Us:
At Vahan, we are building India’s first AI-powered recruitment marketplace for India’s 300-million-strong blue-collar workforce, opening doors to economic opportunities and brighter futures. Already India’s largest recruitment platform, Vahan is supported by marquee investors like Khosla Ventures, Bharti Airtel, Vijay Shekhar Sharma (CEO, Paytm), and leading executives from Google and Facebook. Our customers include names like Swiggy, Zomato, Rapido, Zepto, and many more. We leverage cutting-edge technology and AI to recruit for the workforces of some of the most recognized companies in the country.
Our vision is ambitious: to become the go-to platform for blue-collar professionals worldwide, empowering them with not just earning opportunities but also the tools, benefits, and support they need to thrive. We aim to impact over a billion lives worldwide, creating a future where everyone has access to economic prosperity. If our vision excites you, Vahan might just be your next adventure. We’re on the hunt for driven individuals who love tackling big challenges. If this sounds like your kind of journey, dive into the details and see where you can make your mark.
What you will be doing:
- Architect and Implement Data Infrastructure: Design, build, and maintain robust and scalable data pipelines and a data warehouse/lake solution using open-source and cloud-based technologies, optimized for both high-frequency small file and large file data ingestion, and real-time data streams. This includes implementing efficient mechanisms for handling high volumes of data arriving at frequent intervals.
- Develop and Optimize Data Processes: Create custom tools, primarily using Python, for data validation, processing, analysis, and automation. Continuously improve ETL/ELT processes for efficiency, reliability, and scalability. This includes building processes to bridge gaps between different databases and data sources, ensuring data consistency and accessibility. This also includes processing and integrating data from streaming sources.
- Lead and Mentor: Collaborate with product, engineering, and business teams to understand data requirements and provide data-driven solutions. Mentor and guide junior data engineers (as the team grows) and foster a culture of data excellence.
- Data Quality and Governance: Proactively identify and address data quality issues. Implement and maintain robust data quality monitoring, alerting, and measurement systems to ensure the accuracy, completeness, and consistency of our data assets. Implement and enforce data governance and security best practices, taking proactive ownership (a minimal validation sketch follows this list).
- Research: Research and adapt newer technologies to suit evolving requirements.
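As referenced above, a minimal sketch of the kind of Python data-validation check such a pipeline might run (column names and thresholds are illustrative assumptions, not Vahan's actual schema):

```python
# Hedged sketch of a simple data-quality check with pandas.
import pandas as pd

def validate_events(df: pd.DataFrame) -> list[str]:
    """Return human-readable data-quality issues found in df."""
    issues = []
    if df["event_id"].duplicated().any():
        issues.append("duplicate event_id values")
    if df["created_at"].isna().any():
        issues.append("missing created_at timestamps")
    for col in df.columns:  # completeness check on every column
        if df[col].isna().mean() > 0.2:
            issues.append(f"column '{col}' is more than 20% null")
    return issues

sample = pd.DataFrame({
    "event_id": [1, 1, 2],
    "created_at": ["2024-01-01", None, "2024-01-02"],
})
print(validate_events(sample))
```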
You will thrive in this role if you:
- Are a Hands-On Technical Leader: You possess deep technical expertise in data engineering and are comfortable leading by example, diving into code, and setting technical direction.
- Are a Startup-Minded Problem Solver: You thrive in a fast-paced, dynamic environment, are comfortable with ambiguity, and are eager to build from the ground up. You proactively identify and address challenges.
- Are a Collaborative Communicator: You can effectively communicate complex technical concepts to both technical and non-technical audiences and build strong relationships with stakeholders.
- Are a Strategic Thinker: You can think ahead and architect long-lasting systems.
At Vahan, you’ll have the opportunity to make a real impact in a sector that touches millions of lives. We’re committed not only to advancing the livelihoods of our workforce but also to taking care of the people who make this mission possible. Here’s what we offer:
- Unlimited PTO: Trust and flexibility to manage your time in the way that works best for you.
- Comprehensive Medical Insurance: We’ve got you covered with plans designed to support you and your loved ones.
- Monthly Wellness Leaves: Regular time off to recharge and focus on what matters most.
- Competitive Pay: Your contributions are recognized and rewarded with a compensation package that reflects your impact.
Join us, and be part of something bigger—where your work drives real, positive change in the world.
Job Title: Senior/Lead Performance Test Engineer (JMeter Specialist)
Experience: 5-10 Years
Location: Remote / Pune, India
Job Summary:
We are looking for a highly skilled and experienced Senior/Lead Performance Test Engineer with a strong background in Apache JMeter to lead and execute performance testing initiatives for our web and mobile applications. The ideal candidate will be a hands-on expert in designing, scripting, executing, and analyzing complex performance tests, identifying bottlenecks, and collaborating with cross-functional teams to optimize system performance. This role is critical in ensuring our applications deliver exceptional user experiences under various load conditions.
Key Responsibilities:
Performance Test Strategy & Planning:
Define, develop, and implement comprehensive performance test strategies and plans aligned with project requirements and business objectives for web and mobile applications.
Collaborate with product owners, developers, architects, and operations teams to understand non-functional requirements (NFRs) and service level agreements (SLAs).
Determine appropriate performance test types (Load, Stress, Endurance, Spike, Scalability) and define relevant performance metrics and acceptance criteria.
Scripting & Test Development (JMeter Focus):
Design, develop, and maintain robust and scalable performance test scripts using Apache JMeter for various protocols (HTTP/S, REST, SOAP, JDBC, etc.).
Implement advanced JMeter features including correlation, parameterization, assertions, custom listeners, and logic controllers to simulate realistic user behavior.
Develop modular and reusable test assets.
Integrate performance test scripts into CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps) for continuous performance monitoring.
Test Execution & Monitoring:
Set up and configure performance test environments, ensuring they accurately mimic production infrastructure (including cloud environments like AWS, Azure, GCP).
Execute performance tests in various environments, managing large-scale load generation using JMeter (standalone or distributed mode).
Monitor system resources (CPU, Memory, Disk I/O, Network) and application performance metrics using various tools (e.g., Grafana, Prometheus, ELK stack, AppDynamics, Dynatrace, New Relic) during test execution.
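For illustration, a hedged sketch of how a CI job might drive a JMeter plan in non-GUI mode and gate on a p95 latency SLA (file paths, the 500 ms threshold, and the CSV .jtl column layout are assumptions):

```python
# Run a JMeter plan headless and fail the build if p95 latency breaches the SLA.
import csv
import statistics
import subprocess

PLAN, RESULTS = "load_test.jmx", "results.jtl"

# -n non-GUI mode, -t test plan, -l results log (standard JMeter CLI flags)
subprocess.run(["jmeter", "-n", "-t", PLAN, "-l", RESULTS], check=True)

# CSV .jtl output normally includes an 'elapsed' column (response time in ms).
with open(RESULTS, newline="") as fh:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(fh)]

p95 = statistics.quantiles(elapsed, n=100)[94]  # 95th percentile cut point
print(f"p95 latency: {p95:.0f} ms over {len(elapsed)} samples")
assert p95 < 500, "p95 latency exceeded the 500 ms SLA"
```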
Analysis & Reporting:
Analyze complex performance test results, identify performance bottlenecks, and pinpoint root causes across application, database, and infrastructure layers.
Interpret monitoring data, logs, and profiling reports to provide actionable insights and recommendations for performance improvements.
Prepare clear, concise, and comprehensive performance test reports, presenting findings, risks, and optimization recommendations to technical and non-technical stakeholders.
Collaboration & Mentorship:
Work closely with development and DevOps teams to troubleshoot, optimize, and resolve performance issues.
Act as a subject matter expert in performance testing, providing technical guidance and mentoring to junior team members.
Contribute to the continuous improvement of performance testing processes, tools, and best practices.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
5-10 years of hands-on experience in performance testing, with a strong focus on web and mobile applications.
Expert-level proficiency with Apache JMeter for scripting, execution, and analysis.
Strong understanding of performance testing methodologies, concepts (e.g., throughput, response time, latency, concurrency), and lifecycle.
Experience with performance monitoring tools such as Grafana, Prometheus, CloudWatch, Azure Monitor, GCP Monitoring, AppDynamics, Dynatrace, or New Relic.
Solid understanding of web technologies (HTTP/S, REST APIs, WebSockets, HTML, CSS, JavaScript) and modern application architectures (Microservices, Serverless).
Experience with database performance analysis (SQL/NoSQL) and ability to write complex SQL queries.
Familiarity with cloud platforms (AWS, Azure, GCP) and experience in testing applications deployed in cloud environments.
Proficiency in scripting languages (e.g., Groovy, Python, Shell scripting) for custom scripting and automation.
Excellent analytical, problem-solving, and debugging skills.
Strong communication (written and verbal) and interpersonal skills, with the ability to effectively collaborate with diverse teams and stakeholders.
Ability to work independently, manage multiple priorities, and thrive in a remote or hybrid work setup.
Good to Have Skills:
Experience with other performance testing tools (e.g., LoadRunner, Gatling, k6, BlazeMeter).
Knowledge of CI/CD pipelines and experience integrating performance tests into automated pipelines.
Understanding of containerization technologies (Docker, Kubernetes).
Experience with mobile application performance testing tools and techniques (e.g., device-level monitoring, network emulation).
Certifications in performance testing or cloud platforms.
Mumbai (Malad), work from office
6 days working
1st & 3rd Saturday off
AWS Expertise: Minimum 2 years of experience working with AWS services like RDS, S3, EC2, and Lambda.
Roles and Responsibilities
1. Backend Development: Develop scalable and high-performance APIs and backend systems using Node.js. Write clean, modular, and reusable code following best practices. Debug, test, and optimize backend services for performance and scalability.
2. Database Management: Design and maintain relational databases using MySQL, PostgreSQL, or AWS RDS. Optimize database queries and ensure data integrity. Implement data backup and recovery plans.
3. AWS Cloud Services: Deploy, manage, and monitor applications using AWS infrastructure. Work with AWS services including RDS, S3, EC2, Lambda, API Gateway, and CloudWatch. Implement security best practices for AWS environments (IAM policies, encryption, etc.).
4. Integration and Microservices: Integrate third-party APIs and services. Develop and manage microservices architecture for modular application development.
5. Version Control and Collaboration: Use Git for code versioning and maintain repositories. Collaborate with front-end developers and project managers for end-to-end project delivery.
6. Troubleshooting and Debugging: Analyze and resolve technical issues and bugs. Provide maintenance and support for existing backend systems.
7. DevOps and CI/CD: Set up and maintain CI/CD pipelines. Automate deployment processes and ensure zero-downtime releases.
8. Agile Development:
Participate in Agile/Scrum ceremonies such as daily stand-ups, sprint planning, and retrospectives.
Deliver tasks within defined timelines while maintaining high quality.
Required Skills
Strong proficiency in Node.js and JavaScript/TypeScript.
Expertise in working with relational databases like MySQL/PostgreSQL and AWS RDS.
Proficient with AWS services including Lambda, S3, EC2, and API Gateway.
Experience with RESTful API design and GraphQL (optional).
Knowledge of containerization using Docker is a plus.
Strong problem-solving and debugging skills.
Familiarity with tools like Git, Jenkins, and Jira.
Salesforce DevOps/Release Engineer
Resource type - Salesforce DevOps/Release Engineer
Experience - 5 to 8 years
Norms - PF & UAN mandatory
Resource Availability - Immediate or Joining time in less than 15 days
Job - Remote
Shift timings - UK timing (1pm to 10 pm or 2pm to 11pm)
Required Experience:
- 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
- Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
- Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
- Solid understanding of Salesforce architecture, metadata, and development lifecycle.
- Familiarity with version control systems (e.g., Git) and agile methodologies
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum.
- Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
- Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
- Monitor, troubleshoot, and resolve deployment and release issues.
- Maintain documentation for deployment processes and provide training on best practices.
- Stay updated on the latest Salesforce DevOps tools, features, and best practices.
Technical Skills:
- Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
- CI/CD: Building and maintaining pipelines, automation, and release management
- Version Control: Proficiency with Git and related workflows
- Salesforce Platform: Understanding of metadata, SFDX, and environment management
- Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
- Communication: Strong written and verbal communication skills
Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or related field.
Certifications:
Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
Experience with Salesforce DX and deployment strategies for large-scale orgs.


Job Description : Quantitative R&D Engineer
As a Quantitative R&D Engineer, you’ll explore data and design logic that becomes live trading strategies. You’ll bridge the gap between raw research and deployed, autonomous capital systems.
What You’ll Work On
- Analyze on-chain and market data to identify inefficiencies and behavioral patterns.
- Develop and prototype systematic trading strategies using statistical and ML-based techniques.
- Contribute to signal research, backtesting infrastructure, and strategy evaluation frameworks (see the backtest sketch after this list).
- Monitor and interpret DeFi protocol mechanics (AMMs, perps, lending markets) for alpha generation.
- Collaborate with engineers to turn research into production-grade, automated trading systems.
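As referenced above, an illustrative-only sketch of a vectorized backtest for a simple moving-average crossover signal (synthetic prices, no fees or slippage; not a real strategy):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000))))  # synthetic walk

fast, slow = prices.rolling(10).mean(), prices.rolling(50).mean()
position = (fast > slow).astype(int).shift(1).fillna(0)  # enter on the next bar

returns = prices.pct_change().fillna(0)
strat = position * returns

sharpe = np.sqrt(252) * strat.mean() / strat.std()  # annualized, daily bars assumed
print(f"Total return: {(1 + strat).prod() - 1:.2%}, Sharpe: {sharpe:.2f}")
```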
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language
- Understanding of probability, statistics, or ML concepts.
- Self-driven and comfortable with ambiguity, iteration, and fast learning cycles.
- Strong interest in markets, trading, or algorithmic systems.
Bonus Points For
- Experience with backtesting or feature engineering.
- Exposure to crypto primitives (AMMs, perps, mempools, etc.)
- Projects involving alpha signals, strategy testing, or DeFi bots.
- Participation in quant contests, hackathons, or open-source work.
What You’ll Gain:
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters.
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.


A Concise Glimpse into the Role
We’re on the hunt for young, energetic, and hustling talent ready to bring fresh ideas and unstoppable drive to the table.
This isn’t just another role—it’s a launchpad for change-makers. If you’re driven to disrupt, innovate, and challenge the norm, we want you to make your mark with us.
Are you ready to redefine the future?
Apply now and step into a career where your ideas power the impossible!
Your Time Will Be Invested In
· AI/ML Model Innovation and Research
· Are you ready to lead transformative projects at the cutting edge of AI and machine learning? We're looking for a visionary mind with a passion for building groundbreaking solutions that redefine the possible.
What You'll Own
Pioneering AI/ML Model Innovation
· Take ownership of designing, developing, and deploying sophisticated AI and ML models that push boundaries.
· Spearhead the creation of generative AI applications that revolutionize real-world experiences.
· Drive end-to-end implementation of AI-driven products with a focus on measurable impact.
Data Engineering and Advanced Development
· Architect robust pipelines for data collection, pre-processing, and analysis, ensuring precision at every stage.
· Deliver clean, scalable, and high-performance Python code that empowers our AI systems to excel.
Trailblazing Research and Strategic Collaboration
· Dive into the latest research to stay ahead of AI/ML trends, identifying opportunities to integrate state-of-the-art techniques.
· Foster innovation by brainstorming with a dynamic team to conceptualize novel AI solutions.
· Elevate the team's expertise by preparing insightful technical documentation and presenting actionable findings.
What We Want You to Have
· 1–2 years of live AI project experience, from conceptualization to real-world deployment.
· Foundational knowledge in AI, ML, and generative AI applications.
· Proficient in Python and familiar with libraries like TensorFlow, PyTorch, Scikit-learn.
· Experience working with structured & unstructured data, as well as predictive analytics.
· Basic understanding of Deep Learning Techniques.
· Knowledge of AutoGen for building scalable multi-agent AI systems & familiarity with LangChain or similar frameworks for building AI Agents.
· Knowledge of using AI tools like VS Copilot.
· Proficient in working with vector databases for managing and retrieving data (see the retrieval sketch after this list).
· Understanding of AI/ML deployment tools such as Docker, Kubernetes.
· Understanding of JavaScript and TypeScript with React and Tailwind.
· Proficiency in Prompt Engineering for various use cases, including content generation and data extraction.
· Ability to work independently and as part of a collaborative team.
· Excellent communication skills and a strong willingness to learn.
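As referenced in the list above, a toy sketch of the idea behind vector-database retrieval: embed, normalize, and rank by cosine similarity (the embeddings here are random stand-ins, not the output of a real model):

```python
import numpy as np

rng = np.random.default_rng(0)
doc_vectors = rng.normal(size=(1000, 384))  # pretend document embeddings
query = rng.normal(size=384)                # pretend query embedding

# Normalize so a dot product equals cosine similarity.
docs_unit = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
query_unit = query / np.linalg.norm(query)

scores = docs_unit @ query_unit
top_k = np.argsort(scores)[::-1][:5]  # ids of the 5 nearest documents
print("top-5 doc ids:", top_k, "scores:", scores[top_k].round(3))
```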
Nice to Have
· Prior project or coursework experience in AI/ML.
· Background in Big Data technologies (Spark, Hadoop, Databricks).
· Experience with containerization and deployment tools.
· Proficiency in SQL & NoSQL databases.
· Familiarity with Data Visualization tools (e.g., Matplotlib, Seaborn).
Soft Skills
· Strong problem-solving and analytical capabilities.
· Excellent teamwork and interpersonal communication.
· Ability to thrive in a fast-paced and innovation-driven environment.

We are looking for an experienced and dynamic technical trainer to join our team. The ideal candidate will be responsible for designing, developing, and delivering high-quality technical training programs to students.


🧩 About the Role
We’re seeking a highly skilled Generative AI Engineer to join our growing AI/ML team. You’ll take ownership of building and deploying RAG and Graph-RAG systems using cutting-edge LLMs, vector/graph databases, and Agentic AI frameworks. If you’re passionate about operationalizing intelligent AI systems at scale, this is the role for you.
🔧 What You’ll Do
- Design, build, and deploy RAG and Graph-RAG pipelines in production environments (a minimal retrieval sketch follows this list)
- Work hands-on with LLMs, embedding models, and vector/graph retrieval systems
- Integrate and optimize vector databases and graph DBs (e.g., Neo4j, TigerGraph)
- Automate workflows using MLOps tools (MLflow, Airflow, Docker, Kubernetes)
- Experiment with agentic AI frameworks (LangChain Agents, AutoGPT, CrewAI)
- Collaborate with cross-functional teams to scale and productionize AI solutions
- Communicate complex system designs and project outcomes to varied stakeholders
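As referenced above, a minimal retrieval sketch for a RAG pipeline using sentence-transformers and FAISS (the model name, toy corpus, and prompt template are illustrative assumptions, not our production pipeline):

```python
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "Neo4j stores data as nodes and relationships.",
    "FAISS performs fast similarity search over dense vectors.",
    "Airflow schedules and monitors data workflows.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)  # unit vectors: inner product == cosine

index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

query = "Which tool does vector similarity search?"
q = model.encode([query], normalize_embeddings=True)
scores, ids = index.search(q, k=1)

context = docs[ids[0][0]]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be sent to the LLM
```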
🧠 What You Bring
- ✅ 5+ years of experience building RAG and Graph-RAG systems
- ✅ Strong Python programming and familiarity with retrieval frameworks
- ✅ Hands-on experience with LLMs (e.g., OpenAI, Cohere, Mistral)
- ✅ Deep knowledge of vector DBs (Pinecone, FAISS, Weaviate) and graph DBs (Neo4j, ArangoDB)
- ✅ Proficiency in Cypher, Gremlin, or SPARQL query languages
- ✅ Exposure to agentic systems and autonomous agent frameworks
- ✅ Experience with MLOps (MLflow, Docker, Kubernetes, Airflow)
🛠️ Tech Stack
RAG | Graph-RAG | LLMs | LangChain | AutoGPT | Python | FAISS | Pinecone | Neo4j | Cypher | MLflow | Airflow | Kubernetes | Docker
🚀 Why Join Us?
- Work on state-of-the-art AI with a strong, collaborative team
- Lead AI solutions from concept to production
- Explore Agentic AI and intelligent system automation
- Flexible remote setup with competitive compensation


Job description
Opportunity to work on cutting-edge tech pieces & build from scratch
Ensure seamless performance while handling large volumes of data without system slowdowns
Collaborate with cross-functional teams to meet business goals
Required Skills:
Frontend: ReactJS (Next.js is a must)
Backend: Experience in Node.js, Python, or Java
Databases: SQL (must), MongoDB (nice to have)
Caching & Messaging: Experience with Redis, Kafka, or Cassandra
Cloud certification is a bonus

JOB REQUIREMENT:
Wissen Technology is now hiring an Azure Data Engineer with 7+ years of relevant experience.
We are solving complex technical problems in the financial industry and need talented software engineers to join our mission and be a part of a global software development team. A brilliant opportunity to become a part of a highly motivated and expert team, which has made a mark as a high-end technical consultant.
Required Skills:
· 6+ years as a practitioner in data engineering or a related field.
· Strong programming proficiency in Python.
· Experience with data processing frameworks like Apache Spark or Hadoop.
· Experience working on Snowflake and Databricks.
· Familiarity with cloud platforms (AWS, Azure) and their data services.
· Experience with data warehousing concepts and technologies.
· Experience with message queues and streaming platforms (e.g., Kafka).
· Excellent communication and collaboration skills.
· Ability to work independently and as part of a geographically distributed team.

Role Overview:
As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries—owning APIs that crunch 1 billion learning events & the AI that supports it with <200 ms latency.
What You'll Do:
At LearnTube, we’re pushing the boundaries of Generative AI to revolutionise how the world learns. As a Backend Engineer, you will be building the backend for an AI system and working directly on AI. Your roles and responsibilities will include:
- Ship Micro-services – Build FastAPI services that handle ≈ 800 req/s today and will triple within a year (sub-200 ms p95); a minimal endpoint sketch follows this list.
- Power Real-Time Learning – Drive the quiz-scoring & AI-tutor engines that crunch millions of events daily.
- Design for Scale & Safety – Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
- Deploy Globally – Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
- Automate Releases – GitLab CI/CD + blue-green / canary = multiple safe prod deploys each week.
- Own Reliability – Instrument with Prometheus / Grafana, chase 99.9 % uptime, trim infra spend.
- Expose Gen-AI at Scale – Publish LLM inference & vector-search endpoints in partnership with the AI team.
- Ship Fast, Learn Fast – Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod in < 2 weeks.
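As referenced in the list above, a hedged sketch of a FastAPI endpoint in the spirit of the quiz-scoring service (the route, payload shape, and scoring logic are illustrative placeholders):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class QuizAttempt(BaseModel):
    learner_id: str
    answers: list[int]
    answer_key: list[int]

@app.post("/score")
async def score(attempt: QuizAttempt) -> dict:
    # Simple synchronous scoring; heavier work would be queued (e.g., via SQS).
    correct = sum(a == k for a, k in zip(attempt.answers, attempt.answer_key))
    total = len(attempt.answer_key) or 1  # guard against an empty key
    return {"learner_id": attempt.learner_id, "score": correct / total}

# Local run: uvicorn main:app --reload
```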
What makes you a great fit?
Must-Haves:
- 2+ yrs Python back-end experience (FastAPI)
- Strong with Docker & container orchestration
- Hands-on with GitLab CI/CD, AWS (EC2, S3, SQS) or GCP (GKE / Compute) in production
- SQL/NoSQL (Postgres, MongoDB) + You’ve built systems from scratch & have solid system-design fundamentals
Nice-to-Haves
- Kubernetes (k8s) at scale, Terraform
- Experience with AI/ML inference services (LLMs, vector DBs)
- Go / Rust for high-perf services
- Observability: Prometheus, Grafana, OpenTelemetry
About Us:
At LearnTube, we’re on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with:
- AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback.
- Seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.
Meet the Founders:
LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi’s focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We’re proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.
Why Work With Us?
At LearnTube, we believe in creating a work environment that’s as transformative as the products we build. Here’s why this role is an incredible opportunity:
- Cutting-Edge Technology: You’ll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you’ll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters, and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that’s redefining education for millions of learners and making AI accessible to everyone.


We are looking for a dynamic and skilled Business Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in Business Analysis, Power BI, Tableau, and Machine Learning.

Only candidates currently in Bihar or open to relocating to Bihar should apply:
Job Description:
This is an exciting opportunity for an experienced industry professional with strong analytical and technical skills to join and add value to a dedicated and friendly team. We are looking for a Data Analyst who is driven by data-driven decision-making and insights. As a core member of the Analytics Team, the candidate will take ownership of data analysis projects by working independently with little supervision.
The ideal candidate is a highly resourceful and innovative professional with extensive experience in data analysis, statistical modeling, and data visualization. The candidate must have a strong command of data analysis tools like SAS/SPSS, Power BI/Tableau, or R, along with expertise in MS Excel and MS PowerPoint. The role requires optimizing data collection procedures, generating reports, and applying statistical techniques for hypothesis testing and data interpretation.
Key Responsibilities:
• Perform data analysis using tools like SAS, SPSS, Power BI, Tableau, or R.
• Optimize data collection procedures and generate reports on a weekly, monthly, and quarterly basis.
• Utilize statistical techniques for hypothesis testing to validate data and interpretations (see the worked example after this list).
• Apply data mining techniques and OLAP methodologies for in-depth insights.
• Develop dashboards and data visualizations to present findings effectively.
• Collaborate with cross-functional teams to define, design, and execute data-driven strategies.
• Ensure the accuracy and integrity of data used for analysis and reporting.
• Utilize advanced Excel skills to manipulate and analyze large datasets.
• Prepare technical documentation and presentations for stakeholders.
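As referenced above, an illustrative hypothesis test in Python (the posting equally allows SAS/SPSS/R): a two-sample Welch t-test on synthetic weekly sales figures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
region_a = rng.normal(loc=105, scale=12, size=40)  # synthetic sample A
region_b = rng.normal(loc=100, scale=12, size=40)  # synthetic sample B

t_stat, p_value = stats.ttest_ind(region_a, region_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
print("Reject H0 at the 5% level" if p_value < 0.05 else "Fail to reject H0 at the 5% level")
```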
Candidate Profile:
Required Qualifications:
• Qualification: Graduate / Post Graduate in Statistics, MCA, or BE/B.Tech in Computer Science & Engineering, Information Technology, or Electronics.
• A minimum of 2 years' experience in data analysis using SAS/SPSS, Power BI/Tableau, or R.
• Proficiency in MS Office with expertise in MS Excel & MS PowerPoint.
• Strong analytical skills with attention to detail.
• Experience in data mining and OLAP methodologies.
• Ability to generate insights and reports based on data trends.
• Excellent communication and presentation skills.
Desired Qualifications:
• Experience in predictive analytics and machine learning techniques.
• Knowledge of SQL and database management.
• Familiarity with Python for data analysis.
• Experience in automating reporting processes.



General Summary:
The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of time), performing peer code reviews, handling pull requests and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.
Essential Job Functions:
- Design, develop, and maintain scalable applications using Python and Django.
- Build responsive and dynamic user interfaces using React and TypeScript.
- Implement and integrate GraphQL APIs for efficient data querying and real-time updates.
- Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
- Develop and manage RESTful APIs for seamless integration with third-party services.
- Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
- Use version control systems (primarily Git) and follow collaborative workflows.
- Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
- Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD); a short example appears below.
- Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products
Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.
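As referenced above, a short sketch of the TDD style this role expects: a Django unit test written before the implementation (the billing app and Invoice model are hypothetical examples, not this codebase):

```python
from django.test import TestCase

from billing.models import Invoice  # hypothetical app and model


class InvoiceTotalTests(TestCase):
    def test_total_includes_tax(self):
        # Written first; drives the design of Invoice.total().
        invoice = Invoice.objects.create(subtotal=100, tax_rate=0.18)
        self.assertEqual(invoice.total(), 118)

    def test_zero_subtotal_yields_zero_total(self):
        invoice = Invoice.objects.create(subtotal=0, tax_rate=0.18)
        self.assertEqual(invoice.total(), 0)

# Run with: python manage.py test billing
```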
Supportive Job Functions:
- Remain knowledgeable of new emerging technologies and their impact on internal systems.
- Available to work on call when needed.
- Perform other miscellaneous duties as assigned by management.
These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.
Skills
- The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles. Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
- The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture.
- Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
- Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
- The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
- In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
- Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
- Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached.
- Understanding of OAuth 2.0, JWT, and SSO authentication mechanisms, and adherence to API security best practices following OWASP guidelines, is beneficial.
- Experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog, will be considered an advantage.
Abilities:
- Ability to organize, prioritize, and handle multiple assignments on a daily basis.
- Strong and effective interpersonal and communication skills.
- Ability to interact professionally with a diverse group of clients and staff.
- Must be able to work flexible hours on-site and remote.
- Must be able to coordinate with other staff and provide technological leadership.
- Ability to work in a complex, dynamic team environment with minimal supervision.
- Must possess good organizational skills.
Education, Experience, and Certification:
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- 2+ years of relevant experience required.
- Experience using version control daily in a developer environment.
- Experience with Python, JavaScript, and React is required.
- Experience using rapid development frameworks like Django or Flask.
- Experience using front end build tools.
Scope of Job:
- No direct reports.
- No supervisory responsibility.
- Consistent work week with minimal travel
- Errors may be serious, costly, and difficult to discover.
- Contact with others inside and outside the company is regular and frequent.
- Some access to confidential data.

A cloud tech firm offering secure, scalable hybrid storage solutions.

Role: Python Developer (Immediate Joiner)
Location: Gurugram, India (Onsite)
Experience: 4+ years
Company: Mizzle Cloud Pvt Ltd
Working Days: 6 days (5 days WFO, Saturday WFH)
Job Summary
We are seeking a skilled Python Django Developer with expertise in building robust, scalable, and efficient web applications. Must have 3+ years of core work experience. The ideal candidate will have hands-on experience with RabbitMQ, Redis, Celery, and PostgreSQL to ensure seamless background task management, caching, and database performance.
Key Responsibilities
- Develop, maintain, and enhance Django-based web applications and APIs.
- Design and implement message broker solutions using RabbitMQ to manage asynchronous communication.
- Integrate Redis for caching and session storage to optimize performance.
- Implement and manage task queues using Celery for background processes (a minimal wiring sketch follows this list).
- Work with PostgreSQL for database design, optimization, and query tuning.
- Collaborate with front-end developers, DevOps engineers, and stakeholders to deliver high-quality software solutions.
- Write clean, modular, and well-documented code following best practices and standards.
- Debug, troubleshoot, and resolve issues across the application stack.
- Participate in code reviews, system design discussions, and team meetings.
- Ensure scalability, reliability, and security of applications.
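As referenced in the list above, a minimal sketch of the Celery + RabbitMQ + Redis wiring this stack implies (broker/backend URLs and the task body are illustrative placeholders):

```python
from celery import Celery

app = Celery(
    "tasks",
    broker="amqp://guest:guest@localhost:5672//",  # RabbitMQ as the message broker
    backend="redis://localhost:6379/0",            # Redis stores task results
)

@app.task(bind=True, max_retries=3)
def send_welcome_email(self, user_id: int) -> str:
    try:
        # ... call the mail service here ...
        return f"sent to user {user_id}"
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)  # back off 30 s between retries

# Enqueue from Django code: send_welcome_email.delay(42)
# Start a worker: celery -A tasks worker --loglevel=info
```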
Technical Skills:
- Must have a minimum of 4 years of relevant work experience.
- Strong proficiency in Python and Django framework.
- Experience with message brokers, particularly RabbitMQ.
- Familiarity with Redis for caching and session management.
- Hands-on experience with Celery for distributed task queues.
- Proficiency in PostgreSQL, including database design and optimization.
- Knowledge of RESTful API design and development.
- Understanding of Docker and containerized applications.
Preferred Skills:
- Experience with CI/CD pipelines.
- Familiarity with cloud platforms (AWS, GCP).
- Knowledge of Django ORM and its optimizations.
- Basic understanding of front-end technologies (HTML, CSS, JavaScript).
Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
- Ability to work in an agile environment and adapt to changing requirements.
Educational Requirements
- Bachelor’s degree in Computer Science, Software Engineering, or a related field.

Job Title: Full Stack Developer (MERN + Python)
Location: Bangalore
Job Type: Full-time
Experience: 4–8 years
About Miror
Miror is a pioneering FemTech company transforming how midlife women navigate perimenopause and menopause. We offer medically-backed solutions, expert consultations, community engagement, and wellness products to empower women in their health journey. Join us to make a meaningful difference through technology.
Role Overview
· We are seeking a passionate and experienced Full Stack Developer skilled in MERN stack and Python (Django/Flask) to build and scale high-impact features across our web and mobile platforms. You will collaborate with cross-functional teams to deliver seamless user experiences and robust backend systems.
Key Responsibilities
· Design, develop, and maintain scalable web applications using MySQL/Postgres, MongoDB, Express.js, React.js, and Node.js
· Build and manage RESTful APIs and microservices using Python (Django/Flask/FastAPI); a minimal endpoint sketch follows this list
· Integrate with third-party platforms like OpenAI, WhatsApp APIs (Whapi), Interakt, and Zoho
· Optimize performance across the frontend and backend
· Collaborate with product managers, designers, and other developers to deliver high-quality features
· Ensure security, scalability, and maintainability of code
· Write clean, reusable, and well-documented code
· Contribute to DevOps, CI/CD, and server deployment workflows (AWS/Lightsail)
· Participate in code reviews and mentor junior developers if needed
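As referenced in the list above, a hedged sketch of a small Flask REST endpoint of the kind this role builds (the route, payload shape, and in-memory store are illustrative placeholders):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
consultations = {}  # stand-in for a real MongoDB/Postgres store

@app.post("/api/consultations")
def create_consultation():
    data = request.get_json(silent=True)
    if not data or "user_id" not in data:
        return jsonify(error="user_id is required"), 400
    cid = len(consultations) + 1
    consultations[cid] = {"user_id": data["user_id"], "status": "scheduled"}
    return jsonify(id=cid, **consultations[cid]), 201

if __name__ == "__main__":
    app.run(debug=True)
```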
Required Skills
· Strong experience with MERN Stack: MongoDB, Express.js, React.js, Node.js
· Proficiency in Python and web frameworks like Django, Flask, or FastAPI
· Experience working with REST APIs, JWT/Auth, and WebSockets
· Good understanding of frontend design systems, state management (Redux/Context), and responsive UI
· Familiarity with database design and queries (MongoDB, PostgreSQL/MySQL)
· Experience with Git, Docker, and deployment pipelines
· Comfortable working in Linux-based environments (e.g., Ubuntu on AWS)
Bonus Skills
· Experience with AI integrations (e.g., OpenAI, LangChain)
· Familiarity with WooCommerce, WordPress APIs
· Experience in chatbot development or WhatsApp API integration
Who You Are
· You are a problem-solver with a product-first mindset
· You care about user experience and performance
· You enjoy working in a fast-paced, collaborative environment
· You have a growth mindset and are open to learning new technologies
Why Join Us?
· Work at the intersection of healthcare, community, and technology
· Directly impact the lives of women across India and beyond
· Flexible work environment and collaborative team
· Opportunity to grow with a purpose-driven startup
If you are interested, please apply here and drop me a message on Cutshort.

Job Title : Python Developer (Immediate Joiner)
Experience Required : 3+ Years
Employment Type : Full-time
Location : Gurugram, India (Onsite)
Working Days : 6 Days (5 Days WFO + 1 Day WFH)
Job Summary :
We are seeking a talented and experienced Python Developer with a strong background in Django and a proven ability to build scalable, secure, and high-performance web applications. The ideal candidate will have hands-on experience with RabbitMQ, Redis, Celery, and PostgreSQL, and will play a key role in developing and maintaining robust backend systems. This is an onsite opportunity for immediate joiners.
Mandatory Skills : Python, Django, RabbitMQ, Redis, Celery, PostgreSQL, RESTful APIs, Docker.
Key Responsibilities :
- Design, develop, and maintain Django-based web applications and APIs.
- Implement asynchronous task handling using RabbitMQ and Celery.
- Optimize application performance using Redis for caching and session storage (see the cache-aside sketch after this list).
- Manage database operations, including schema design and query optimization, using PostgreSQL.
- Collaborate with front-end developers, DevOps teams, and stakeholders to deliver full-featured solutions.
- Write clean, modular, and well-documented code aligned with industry best practices.
- Troubleshoot, debug, and resolve issues across the application stack.
- Participate in architecture discussions, code reviews, and sprint planning.
- Ensure the scalability, performance, and security of backend services.
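As referenced in the list above, a sketch of the Redis cache-aside pattern for profile/session data (key names, the 5-minute TTL, and the fetch_profile_from_db helper are illustrative assumptions):

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def fetch_profile_from_db(user_id: int) -> dict:
    return {"id": user_id, "name": "example"}  # placeholder for a PostgreSQL query

def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    profile = fetch_profile_from_db(user_id)   # cache miss: hit the database
    r.setex(key, 300, json.dumps(profile))     # cache for 5 minutes
    return profile

print(get_profile(42))
```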
Required Technical Skills :
- Minimum 3 Years of experience in Python development.
- Strong hands-on experience with the Django framework.
- Proficiency in RabbitMQ for message brokering.
- Practical experience with Redis for caching and session management.
- Experience using Celery for background job/task queue management.
- Solid knowledge of PostgreSQL (database design, indexing, and optimization).
- Understanding of RESTful API development and integration.
- Familiarity with Docker and containerization.
Preferred Skills :
- Exposure to CI/CD tools and pipelines.
- Experience working with cloud platforms such as AWS or GCP.
- Knowledge of Django ORM optimization techniques.
- Basic familiarity with front-end technologies like HTML, CSS, and JavaScript.
Soft Skills :
- Strong analytical and problem-solving capabilities.
- Effective communication and interpersonal skills.
- Ability to thrive in an agile, fast-paced development environment.


About Us
DAITA is a German AI startup revolutionizing the global textile supply chain by digitizing factory-to-brand workflows. We are building cutting-edge AI-powered SaaS and Agentic Systems that automate order management, production tracking, and compliance — making the supply chain smarter, faster, and more transparent.
Fresh off a $500K pre-seed raise, our passionate team is on the ground in India, collaborating directly with factories and brands to build our MVP and create real-world impact. If you’re excited by the intersection of AI, SaaS, and supply chain innovation, join us to help reshape how textiles move from factory floors to global brands.
Role Overview
We’re seeking a versatile Full-Stack Engineer to join our growing engineering team. You’ll be instrumental in designing and building scalable, secure, and high-performance applications that power our AI-driven platform. Working closely with Founders, ML Engineers, and Pilot Customers, you’ll transform complex AI workflows into intuitive, production-ready features.
What You’ll Do
• Design, develop, and deploy backend services, APIs, and microservices powering our platform.
• Build responsive, user-friendly frontend applications tailored for factory and brand users.
• Integrate AI/ML models and agentic workflows into seamless production environments.
• Develop features supporting order parsing, supply chain tracking, compliance, and reporting.
• Collaborate cross-functionally to iterate rapidly, test with users, and deliver impactful releases.
• Optimize applications for performance, scalability, and cost-efficiency on cloud platforms.
• Establish and improve CI/CD pipelines, deployment processes, and engineering best practices.
• Write clear documentation and maintain clean, maintainable code.
Required Skills
• 3–5 years of professional Full-Stack development experience
• Strong backend skills with frameworks like Node.js, Python (FastAPI, Django), Go, or similar
• Frontend experience with React, Vue.js, Next.js, or similar modern frameworks
• Solid knowledge and experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis, Neon)
• Strong API design skills (REST mandatory; GraphQL a plus)
• Containerization expertise with Docker
• Container orchestration and management with Kubernetes (including experience with Helm charts, operators, or custom resource definitions)
• Cloud deployment and infrastructure experience on AWS, GCP or Azure
• Hands-on experience deploying AI/ML models in cloud-native environments (AWS, GCP or Azure) with scalable infrastructure and monitoring.
• Experience with managed AI/ML services like AWS SageMaker, GCP Vertex AI, Azure ML, Together.ai, or similar
• Experience with CI/CD pipelines and DevOps tools such as Jenkins, GitHub Actions, Terraform, Ansible, or ArgoCD
• Familiarity with monitoring, logging, and observability tools like Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), or Helicone
Nice-to-have
• Experience with TypeScript for full-stack AI SaaS development
• Use of modern UI frameworks and tooling like Tailwind CSS
• Familiarity with modern AI-first SaaS concepts viz. vector databases for fast ML data retrieval, prompt engineering for LLM integration, integrating with OpenRouter or similar LLM orchestration frameworks etc.
• Knowledge of MLOps tools like Kubeflow, MLflow, or Seldon for model lifecycle management.
• Background in building data pipelines, real-time analytics, and predictive modeling.
• Knowledge of AI-driven security tools and best practices for SaaS compliance.
• Proficiency in cloud automation, cost optimization, and DevOps for AI workflows.
• Ability to design and implement hyper-personalized, adaptive user experiences.
What We Value
• Ownership: You take full responsibility for your work and ship high-quality solutions quickly.
• Bias for Action: You’re pragmatic, proactive, and focused on delivering results.
• Clear Communication: You articulate ideas, challenges, and solutions effectively across teams.
• Collaborative Spirit: You thrive in a cross-functional, distributed team environment.
• Customer Focus: You build with empathy for end users and real-world usability.
• Curiosity & Adaptability: You embrace learning, experimentation, and pivoting when needed.
• Quality Mindset: You write clean, maintainable, and well-tested code.
Why Join DAITA?
• Be part of a mission-driven startup transforming a $1+ Trillion global industry.
• Work closely with founders and AI experts on cutting-edge technology.
• Directly impact real-world supply chains and sustainability.
• Grow your skills in AI, SaaS, and supply chain tech in a fast-paced environment.

🔍 We’re Hiring: Solution Engineer (Observability)
📍 Location: Remote/Onsite
🏢 Company: Product-Based Client
🕒 Experience: 5+ Years
💼 Employment Type: Full-Time
About the Role:
We are hiring for a Solution Engineer – Observability role with one of our fast-scaling product-based clients. This position is ideal for someone with strong technical acumen and exceptional communication skills who enjoys working at the intersection of engineering and customer success.
As a Solution Engineer, you will lead technical conversations with a range of personas—from DevOps teams to C-suite executives—while delivering innovative observability solutions that showcase real value.
Key Responsibilities:
- 🤝 Collaborate closely with Account Executives on technical sales strategy and execution for complex deals.
- 🎤 Deliver engaging product demos and technical presentations tailored to various stakeholder levels.
- 🛠️ Manage technical sales activities including discovery, sizing, architecture planning, and Proof of Concepts (POCs).
- 🔧 Design and implement custom solutions to bridge product gaps and extend core functionality.
- 💡 Provide expert guidance on observability best practices, tools, and frameworks that align with customer needs.
- 📈 Stay current with industry trends, tools, and the evolving Observability ecosystem.
Requirements:
- ✅ 5+ years in a customer-facing technical role such as Pre-Sales Engineer, Solutions Architect, or Technical Consultant.
- ✅ Strong communication, interpersonal, and presentation skills—able to convey complex topics clearly and persuasively.
- ✅ Proven experience with technical integration, conducting POCs, and building tailored observability solutions.
- ✅ Proficiency in one or more programming languages: Java, Go, or Python.
- ✅ Solid understanding of Observability, Monitoring, Log Management, and SIEM tools and methodologies.
- ✅ Familiarity with observability-related platforms such as APM, RUM, and Log Analytics is desirable.
- ✅ Strong hands-on expertise in:
- Cloud platforms: AWS, Azure, GCP
- Containerization & Orchestration: Docker, Kubernetes
- Monitoring stacks: Prometheus, OpenTelemetry
Bonus Points For:
- 🧠 Previous experience in technical sales within APM, Logging, Monitoring, or SIEM platforms
Why Join?
- Work with a cutting-edge product solving complex observability challenges
- Be a key voice in the pre-sales and solutioning cycle
- Partner with cross-functional teams and engage directly with top-tier clients
- Enjoy a collaborative, high-growth environment focused on innovation and performance
DevOps Engineer
AiSensy
Gurugram, Haryana, India (On-site)
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing
- 400 crore+ WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High Impact as Businesses drive 25-80% Revenues using AiSensy Platform
- Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging.
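For illustration, a hedged sketch of such an internal health-check tool (the endpoints are placeholders; a real setup would feed CloudWatch/Grafana alerts instead of printing):

```python
import urllib.request

SERVICES = {
    "api": "http://localhost:8000/health",
    "web": "http://localhost:3000/health",
}

def check(name: str, url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:  # covers URLError, refused connections, timeouts
        ok = False
    print(f"[{'OK' if ok else 'DOWN'}] {name} -> {url}")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in SERVICES.items()]
    if not all(results):
        raise SystemExit(1)  # non-zero exit lets cron/CI raise an alert
```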
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
Employment type: Contract basis
Key Responsibilities
- Design, develop, and maintain scalable data pipelines using PySpark and distributed computing frameworks.
- Implement ETL processes and integrate data from structured and unstructured sources into cloud data warehouses (a minimal PySpark sketch follows this list).
- Work across Azure or AWS cloud ecosystems to deploy and manage big data workflows.
- Optimize performance of SQL queries and develop stored procedures for data transformation and analytics.
- Collaborate with Data Scientists, Analysts, and Business teams to ensure reliable data availability and quality.
- Maintain documentation and implement best practices for data architecture, governance, and security.
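As referenced in the list above, a minimal PySpark ETL sketch (bucket paths and the event schema are illustrative placeholders): read raw events, clean and aggregate them, and land partitioned Parquet for the warehouse.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

raw = spark.read.json("s3a://example-bucket/raw/events/")  # semi-structured input

daily = (
    raw.filter(F.col("event_type").isNotNull())      # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("events"))
)

# Partitioned Parquet is a common landing format for Snowflake/Redshift loads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/daily_events/"
)
spark.stop()
```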
⚙️ Required Skills
- Programming: Proficient in PySpark, Python, and SQL.
- Cloud Platforms: Hands-on experience with Azure Data Factory, Databricks, or AWS Glue/Redshift.
- Data Engineering Tools: Familiarity with Apache Spark, Kafka, Airflow, or similar tools.
- Data Warehousing: Strong knowledge of designing and working with data warehouses like Snowflake, BigQuery, Synapse, or Redshift.
- Data Modeling: Experience in dimensional modeling, star/snowflake schema, and data lake architecture.
- CI/CD & Version Control: Exposure to Git, Terraform, or other DevOps tools is a plus.
🧰 Preferred Qualifications
- Bachelor's or Master's in Computer Science, Engineering, or related field.
- Certifications in Azure/AWS are highly desirable.
- Knowledge of business intelligence tools (Power BI, Tableau) is a bonus.


· 3 to 5 years of full-stack development experience implementing applications using Python and React.js
· In-depth knowledge of Python: data analytics, NLP, and Flask APIs
· Experience working with SQL databases (MySQL/Postgres, min. 2 years)
· Ability to use Gen AI tools for productivity
· Gen AI for natural language processing use cases, using ChatGPT-4/Gemini Flash or other cutting-edge tools
· Hands-on exposure to messaging systems like RabbitMQ
· Experience with end-to-end and unit testing frameworks (Jest/Cypress)
· Experience working with NoSQL databases like MongoDB
· Understanding of the differences between multiple delivery platforms, such as mobile vs. desktop, and optimizing output to match the specific platform
· Cloud architecture knowledge of Azure
· Proficient understanding of code versioning tools such as Git and SVN
· Knowledge of CI/CD (Jenkins/Hudson)
· Self-organizing, with experience working in an Agile/Scrum culture
Good to have:
· Experience working with Angular, Elasticsearch, and Redis
· Understanding of accessibility and security compliance
· Understanding of UI/UX


We are seeking a passionate and knowledgeable Data Science and Data Analyst Trainer to deliver engaging and industry-relevant training programs. The trainer will be responsible for teaching core concepts in data analytics, machine learning, data visualization, and related tools and technologies. The ideal candidate will have 2-5 years of hands-on experience in the data domain and a flair for teaching and mentoring students or working professionals.


We are looking for a dynamic and skilled Data Science and Data Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in data analytics, data science, and business intelligence tools.


Proficient in Golang, Python, Java, C++, or Ruby (at least one)
Strong grasp of system design, data structures, and algorithms
Experience with RESTful APIs, relational and NoSQL databases
Proven ability to mentor developers and drive quality delivery
Track record of building high-performance, scalable systems
Excellent communication and problem-solving skills
Experience in consulting or contractor roles is a plus

Supply Wisdom: Full Stack Developer
Location: Hybrid Position based in Bangalore
Reporting to: Tech Lead Manager
Supply Wisdom is a global leader in transformative risk intelligence, offering real-time insights to drive business growth, reduce costs, enhance security and compliance, and identify revenue opportunities. Our AI-based SaaS products cover various risk domains, including financial, cyber, operational, ESG, and compliance. With a diverse workforce that is 57% female, our clients include Fortune 100 and Global 2000 firms in sectors like financial services, insurance, healthcare, and technology.
Objective: We are seeking a skilled Full Stack Developer to design and build scalable software solutions. You will be part of a cross-functional team responsible for the full software development life cycle, from conception to deployment.
As a Full Stack Developer, you should be proficient in both front-end and back-end technologies, development frameworks, and third-party libraries. We’re looking for a team player with strong problem-solving abilities, attention to visual design, and a focus on utility. Familiarity with Agile methodologies, including Scrum and Kanban, is essential.
Responsibilities
- Collaborate with the development team and product manager to ideate software solutions.
- Write effective and secure REST APIs.
- Integrate third-party libraries for product enhancement.
- Design and implement client-side and server-side architecture.
- Work with data scientists and analysts to enhance software using RPA and AI/ML techniques.
- Develop and manage well-functioning databases and applications.
- Ensure software responsiveness and efficiency through testing.
- Troubleshoot, debug, and upgrade software as needed.
- Implement security and data protection settings.
- Create features and applications with mobile-responsive design.
- Write clear, maintainable technical documentation.
- Build front-end applications with appealing, responsive visual design.
Requirements
- Degree in Computer Science (or related field) with 4+ years of hands-on experience in Python development, with strong expertise in the Django framework and Django REST Framework (DRF).
- Proven experience in designing and building RESTful APIs, with a solid understanding of API versioning, authentication (JWT/OAuth2), and best practices.
- Experience with relational databases such as PostgreSQL or MySQL; familiarity with query optimization and database migrations.
- Basic front-end development skills using HTML, CSS, and JavaScript; experience with any JavaScript framework (like React or Next.js) is a plus.
- Good understanding of Object-Oriented Programming (OOP) and design patterns in Python.
- Familiarity with Git and collaborative development workflows (e.g., GitHub, GitLab).
- Knowledge of Docker, CI/CD pipelines.
- Hands-on experience with AWS services, Nginx web server, RabbitMQ (or similar message brokers), event handling, and synchronization.
- Familiarity with Postgres, SSO implementation (desirable), and integration of third-party libraries.
- Experience with unit testing, debugging, and code reviews.
- Experience using tools like Jira and Confluence.
- Ability to work in Agile/Scrum teams with good communication and problem-solving skills.
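As an illustration of the Django REST Framework work this role centres on, here is a minimal sketch of a serializer, viewset, and router; the Asset model, app, and field names are hypothetical stand-ins.

```python
# Minimal DRF sketch (assumes Django and djangorestframework are installed,
# and that a hypothetical Asset model exists in assets/models.py).
from rest_framework import permissions, serializers, viewsets
from rest_framework.routers import DefaultRouter

from assets.models import Asset  # hypothetical app/model


class AssetSerializer(serializers.ModelSerializer):
    class Meta:
        model = Asset
        fields = ["id", "name", "risk_score", "updated_at"]


class AssetViewSet(viewsets.ModelViewSet):
    """CRUD endpoints for assets; authentication required on every route."""
    queryset = Asset.objects.all()
    serializer_class = AssetSerializer
    permission_classes = [permissions.IsAuthenticated]


router = DefaultRouter()
router.register(r"assets", AssetViewSet)
# In urls.py: urlpatterns = [path("api/", include(router.urls))]
```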
Our Commitment to You:
We offer a competitive salary and generous benefits. In addition, we offer a vibrant work environment, a global team filled with passionate and fun-loving people coming from diverse cultures and backgrounds.
If you are looking to make an impact in delivering market-leading risk management solutions, empowering our clients, and making the world a better place, then Supply Wisdom is the place for you.
You can learn more at supplywisdom.com and on LinkedIn.
- 2+ years in a DevOps/SRE/System Engineering role
- Hands-on experience with Linux-based systems
- Experience with cloud platforms like AWS, GCP, or Azure
- Proficient in scripting (Bash, Python, or Go)
- Experience with monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.)
- Knowledge of containerization (Docker, Kubernetes)
- Familiarity with Git and CI/CD tools (Jenkins, GitLab CI, etc.)


Job Title: Senior SDET - Test
Location- Bangalore / Hybrid
Desired skills- Java / Python/ Javascript / Go, Docker, Kubernetes, Playwright, Jenkins, Postman, Jmeter, Rest API
Exp range- 10-12 yrs
Your responsibilities:
● Execute manual and automated tests throughout all stages of the product lifecycle.
● Work collaboratively within an Agile team and help define or improve processes as needed.
● Build and maintain automation tools and utilities to improve testing efficiency and coverage.
● Integrate automated tests into the CI/CD pipeline; triage test failures and support in resolving production issues.
● Execute performance tests (load, stress, etc.).
● Continuously enhance your skills and actively support the growth of your teammates.
Key qualifications:
● Minimum 10-12+ years of experience as an SDET.
● Provide subject-matter expertise, best practices, and strategic direction for quality assurance technology solutions in the commercial software/services space.
● Collaborate, architect, and execute on deploying industry-standard quality testing, metrics gathering, and reporting capabilities for all commercially facing products.
● Plan, develop, and execute various quality tests that will shape understanding of, and adjustments to, the application portfolio.
● Ability to perform manual tests, create test documents, and build a regression suite.
● Proven experience designing and implementing UI and API test automation frameworks from scratch.
● Hands-on experience with tools and frameworks such as Playwright, Jenkins, GitHub, Postman, JMeter, etc.
● Proficiency in at least one programming or scripting language, such as Python, Java, Go, or JavaScript.
● Good grasp of OOP and SOLID principles to design reusable, modular, and maintainable automated test code.
● Strong understanding of REST APIs, JSON, OAuth, and related web technologies.
● Ability to work effectively both as an individual contributor and within a collaborative team environment.
● Excellent communication skills, with the ability to collaborate effectively across cross-functional and distributed teams.
● Project management experience is a plus.
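For context on the listed tooling, a minimal Playwright test in Python might look like the sketch below; the URL, selectors, and credentials are hypothetical (requires `pip install playwright` followed by `playwright install`).

```python
from playwright.sync_api import expect, sync_playwright


def test_login_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://app.example.com/login")  # hypothetical URL
        page.fill("#email", "qa@example.com")       # hypothetical selectors
        page.fill("#password", "not-a-real-password")
        page.click("button[type=submit]")
        # Assert the post-login page rendered its heading.
        expect(page.locator("h1")).to_have_text("Dashboard")
        browser.close()
```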

Company Description
I Vision Infotech is a full-fledged IT company delivering high-quality, cost-effective, and reliable web and e-commerce solutions. Founded in 2011 and based in India, the company serves both domestic and international clients from countries including the USA, Malaysia, Australia, Canada, and the United Kingdom. We specialize in web design, development, e-commerce, and mobile app services across various platforms such as Android, iOS, Windows, and Blackberry.
Role Description
This is a full-time role for an AI/ML Developer, based on-site in Ahmedabad. The AI/ML Developer will work on designing and implementing machine learning models and algorithms. Day-to-day tasks include developing software applications, conducting research on pattern recognition and neural networks, and performing tasks related to Natural Language Processing (NLP). The developer will collaborate closely with other team members to ensure the successful deployment and integration of AI/ML solutions.
Key Responsibilities:
- Develop and deploy machine learning models to solve real-world problems.
- Work on data preprocessing, feature engineering, and model optimization.
- Collaborate with cross-functional teams (UI/UX, Backend) to integrate AI/ML solutions.
- Conduct experiments and performance tuning of ML models.
- Stay updated with the latest research and development in AI/ML.
Required Skills & Experience:
- Proficiency in Python and ML libraries (TensorFlow, PyTorch, Scikit-learn).
- Strong knowledge of Supervised and Unsupervised Learning algorithms.
- Experience with data visualization and tools like Pandas, NumPy, Matplotlib.
- Experience in NLP, Computer Vision, or Recommendation Systems is a plus.
- Familiarity with Jupyter, Google Colab, or similar platforms.
- Understanding of model deployment and REST API integration is an advantage.
Qualifications:
- Bachelor’s or Master’s in Computer Science, IT, Data Science, or related field.
- Minimum 2 years of relevant AI/ML development experience.
- Strong problem-solving and analytical thinking.
- Good communication and team collaboration skills.
Perks:
- Fixed working hours
- Friendly team & work culture
- Exposure to real projects
- Growth in AI/ML domain
Note: This is an urgent requirement. Only serious, immediate joiners should apply, and only local candidates will be considered for this role.

Position Overview
We are a UAE-based company looking for a talented AI Applications Engineer based in India on a work-from-home basis to join our team and help build cutting-edge AI-powered applications. You will be responsible for developing, implementing, and optimizing AI solutions using large language models and multi-agent systems to solve real-world business problems.
Key Responsibilities
- Design and develop AI applications using large language models (LLMs)
- Implement and manage AI agent systems using CrewAI framework
- Fine-tune language models for specific use cases and domain requirements
- Build robust APIs and backend services using Python and FastAPI
- Collaborate with cross-functional teams to integrate AI solutions into existing systems
- Optimize model performance and implement best practices for AI application deployment
- Research and evaluate new AI technologies and methodologies
- Monitor and maintain AI systems in production environments
- Document AI workflows, model architectures, and implementation details
Required Skills & Experience
- Strong proficiency in Python programming
- Hands-on experience with large language models (LLMs) and their practical applications
- Experience with model fine-tuning techniques and frameworks
- Knowledge of AI agent systems, particularly CrewAI framework
- Understanding of machine learning workflows and model lifecycle management
- Experience with API development and backend services
- Strong problem-solving skills and ability to work with complex AI systems
- Familiarity with AI/ML libraries and frameworks (transformers, langchain, etc.)
Nice to Have
- Experience with FastAPI for building high-performance APIs
- Docker containerization knowledge for AI application deployment
- Understanding of prompt engineering and optimization techniques
- Experience with vector databases and semantic search
- Knowledge of MLOps practices and tools
- Familiarity with cloud platforms (AWS, GCP, Azure) for AI workloads
- Experience with other AI agent frameworks and orchestration tools
What We Offer
- Opportunity to work on cutting-edge AI technologies
- Collaborative environment with experienced AI practitioners
- Access to latest AI tools and computational resources
- Competitive salary and comprehensive benefits
- Professional development and conference attendance opportunities
- Flexible work arrangements
How to Apply
Please submit your resume along with examples of AI projects you've worked on, particularly those involving LLMs, fine-tuning, or AI agents.


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of Golang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
Requirements
Education- B. Tech (Comp. Sc, IT) or equivalent
Experience- 3+ years of experience developing applications on Django, Angular/React, HTML and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes is a plus.
5. Other: Understanding of Golang is a plus.


Job Title: Data Science Intern
Location: 6th Sector HSR Layout, Bangalore - Work from Office 5.5 Days
Duration: 3 Months | Stipend: Up to ₹12,000 per month
Post-Internship Offer (PPO): Available based on performance
🧑💻 About the Role
We are looking for a passionate and proactive Data Science Assistant Intern who is equally excited about mentoring learners and gaining hands-on experience with real-world data operations.
This is a 50% technical + 50% mentorship role that blends classroom support with practical data work. Ideal for those looking to build a career in EdTech and Applied Data Science.
🚀 What You'll Do
🚀 Technical Responsibilities (50%)
- Create and manage dashboards using Python or BI tools like Power BI/Tableau
- Write and optimize SQL queries to extract and analyze backend data
- Support in data gathering, cleaning, and basic analysis
- Contribute to building data pipelines to assist internal decision-making and analytics
🚀 Mentorship & Support (50%)
- Assist instructors during live Data Science sessions
- Solve doubts related to Python, Machine Learning, and Statistics
- Create and review quizzes, assignments, and other content
- Provide one-on-one academic support and mentoring
- Foster a positive and interactive learning environment
✅ Requirements
- Bachelor’s degree in Data Science, Computer Science, Statistics, or a related field
- Strong knowledge of:
- Python (Data Structures, Functions, OOP, Debugging)
- Pandas, NumPy, Matplotlib
- Machine Learning algorithms (scikit-learn)
- SQL and basic data wrangling
- APIs, Web Scraping, and Time-Series basics
- Advanced Excel: Lookup & reference (VLOOKUP, INDEX+MATCH, XLOOKUP, SUMIF), Logical functions (IF, AND, OR), Statistical & Aggregate Functions: (COUNTIFS, STDEV, PERCENTILE), Text cleanup (TRIM, SUBSTITUTE), Time functions (DATEDIF, NETWORKDAYS), Pivot Tables, Power Query, Conditional Formatting, Data Validation, What-If Analysis, and dynamic dashboards using charts & slicers.
- Excellent communication and interpersonal skills
- Prior mentoring, teaching, or tutoring experience is a big plus
- Passion for helping others learn and grow
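For a flavour of the technical half of this role, here is a minimal sketch of pulling backend data with SQL and shaping it with pandas; the database file, table, and column names are hypothetical.

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect("learners.db")  # hypothetical backend extract

# Summarize learner performance per cohort for a dashboard.
df = pd.read_sql_query(
    """
    SELECT cohort, COUNT(*) AS learners, AVG(quiz_score) AS avg_score
    FROM enrollments
    GROUP BY cohort
    ORDER BY cohort
    """,
    conn,
)
df.to_csv("cohort_summary.csv", index=False)  # feed into Power BI/Tableau
```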


About the role
We are looking for a Senior Automation Engineer to architect and implement automated testing frameworks that validate the runtime behavior of code generated by our AI platform. This role is critical in ensuring that our platform's output performs correctly in production environments. You'll work at the intersection of AI and quality assurance, creating innovative testing solutions that can validate AI-generated applications during actual execution.
What Success Looks Like
- You architect and implement automated testing frameworks that validate the runtime behavior and performance of AI-generated applications
- You develop intelligent test suites that can automatically assess application functionality in production environments
- You create testing frameworks that can validate runtime behavior across multiple languages and frameworks
- You establish quality metrics and testing protocols that measure real-world performance of generated applications
- You build systems to automatically detect and flag runtime issues in deployed applications
- You collaborate with our AI team to improve the platform based on runtime performance data
- You implement automated integration and end-to-end testing that ensures generated applications work as intended in production
- You develop metrics and monitoring systems to track runtime performance across different customer deployments
Areas of Ownership
Our hiring process is designed for you to demonstrate deep expertise in automation testing with a focus on AI-powered systems.
Required Technical Experience:
- 4+ years of experience with Selenium and automated testing frameworks
- Strong expertise in Python (our primary automation language)
- Experience with CI/CD tools (Jenkins, CircleCI, or similar)
- Proficiency in version control systems (Git)
- Experience testing distributed systems
- Understanding of modern software development practices
- Experience working with cloud platforms (GCP preferred)
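For context, a minimal Selenium-plus-pytest sketch in Python (the team's primary automation language) might look like this; the application URL is a hypothetical placeholder.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    opts = webdriver.ChromeOptions()
    opts.add_argument("--headless=new")  # run without a visible browser
    drv = webdriver.Chrome(options=opts)
    yield drv
    drv.quit()


def test_generated_app_homepage(driver):
    # Hypothetical deployment of an AI-generated application.
    driver.get("https://generated-app.example.com")
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert heading.text, "home page should render a heading"
```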
Ways to stand out
- Experience with runtime monitoring and testing of distributed systems
- Knowledge of performance testing and APM (Application Performance Monitoring)
- Experience with end-to-end testing of complex applications
- Background in developing testing systems for enterprise-grade applications
- Understanding of distributed tracing and monitoring
- Experience with chaos engineering and resilience testing
1. Software Development Engineer - Salesforce
What we ask for
We are looking for strong engineers to build best-in-class systems for commercial & wholesale banking at Bank, using Salesforce Service Cloud. We seek experienced developers who bring a deep understanding of Salesforce development practices, patterns, anti-patterns, governor limits, and the sharing & security model, which will allow us to architect and develop robust applications.
You will work closely with business and product teams to build applications which provide end users with an intuitive, clean, minimalist, easy-to-navigate experience.
Develop systems that are scalable, secure, highly resilient, and low-latency by applying software development principles and clean-code practices.
Should be open to working in a start-up environment and have the confidence to deal with complex issues while keeping focus on solutions and project objectives as your guiding North Star.
Technical Skills:
● Strong hands-on frontend development using JavaScript and LWC
● Expertise in backend development using Apex, Flows, Async Apex
● Understanding of database concepts: SOQL, SOSL, and SQL
● Hands-on experience in API integration using SOAP, REST API, GraphQL
● Experience with ETL tools, data migration, and data governance
● Experience with Apex design patterns, integration patterns, and the Apex testing framework
● Follow an agile, iterative execution model using CI/CD tools like Azure DevOps, GitLab, Bitbucket
● Should have worked with at least one programming language – Java, Python, C++ – and have a good understanding of data structures
Preferred qualifications
● Graduate degree in engineering
● Experience developing with India stack
● Experience in fintech or banking domain

About Us:
Heyo & MyOperator are India’s largest conversational platforms, delivering Call + WhatsApp engagement solutions to over 40,000+ businesses. Trusted by brands like Astrotalk, Lenskart, and Caratlane, we power customer engagement at scale. We support a hybrid work model, foster a collaborative environment, and offer fast-track growth opportunities.
Job Overview:
We are looking for a skilled Quality Analyst with 2-4 years of experience in software quality assurance. The ideal candidate should have a strong understanding of testing methodologies, automation tools, and defect tracking to ensure high-quality software products. This is a fully remote role.
Key Responsibilities:
● Develop and execute test plans, test cases, and test scripts for software products.
● Conduct manual and automated testing to ensure reliability and performance.
● Identify, document, and collaborate with developers to resolve defects and issues.
● Report testing progress and results to stakeholders and management.
● Improve automation testing processes for efficiency and accuracy.
● Stay updated with the latest QA trends, tools, and best practices.
Required Skills:
● 2-4 years of experience in software quality assurance.
● Strong understanding of testing methodologies and automated testing.
● Proficiency in Selenium, Rest Assured, Java, and API Testing (mandatory).
● Familiarity with Appium, JMeter, TestNG, defect tracking, and version control tools.
● Strong problem-solving, analytical, and debugging skills.
● Excellent communication and collaboration abilities.
● Detail-oriented with a commitment to delivering high-quality results.
Why Join Us?
● Fully remote work with flexible hours.
● Exposure to industry-leading technologies and practices.
● Collaborative team culture with growth opportunities.
● Work with top brands and innovative projects.

We’re seeking a passionate and skilled Technical Trainer to deliver engaging, hands-on training in HTML, CSS, and Python-based front-end development. You’ll mentor learners, design curriculum, and guide them through real-world projects to build strong foundational and practical skills.


Brandzzy is a forward-thinking technology company dedicated to building innovative and scalable Software-as-a-Service (SaaS) solutions. We are a passionate team focused on creating products that solve real-world problems and deliver exceptional user experiences. Join us as we scale our platform to new heights.
Role Summary:
We are seeking an experienced and visionary Senior Full Stack Developer to lead the technical design and development of our core SaaS platform. In this role, you will be responsible for making critical architectural decisions, mentoring other engineers, and ensuring our application is built for massive scale and high performance. You are not just a coder; you are a technical leader who will shape the future of our product and drive our engineering culture forward.
Key Responsibilities:
- Lead the architecture and design of highly scalable, secure, and resilient full-stack web applications.
- Take ownership of major features and system components, from technical strategy through to deployment and long-term maintenance.
- Mentor and guide junior and mid-level developers, conducting code reviews and fostering a culture of technical excellence.
- Drive technical strategy and make key decisions on technology stacks, frameworks, and infrastructure.
- Engineer and implement solutions specifically for SaaS scalability, including microservices, containerization (Docker, Kubernetes), and efficient cloud resource management.
- Establish and champion best practices for code quality, automated testing, and robust CI/CD pipelines.
- Collaborate with product leadership to translate business requirements into concrete technical roadmaps.
Skills & Qualifications:
- 5+ years of professional experience in full-stack development, with a proven track record of building and launching complex SaaS products.
- Deep expertise in both front-end (React, Angular, Vue.js) and back-end (Node.js, Python, Java, Go) technologies.
- Expert-level knowledge of designing and scaling applications on a major cloud platform (AWS, Azure, or GCP).
- Proven, hands-on experience architecting for scale, including deep knowledge of microservices architecture, message queues, and database scaling strategies (e.g., sharding, replication).
- In-depth understanding of database technologies (both SQL and NoSQL) and how to choose the right one for the job.
- Expertise in implementing and managing CI/CD pipelines and advocating for DevOps principles.
- Strong leadership and communication skills, with the ability to articulate complex technical ideas to both technical and non-technical stakeholders.
- A passion for solving complex problems and a proactive, self-starter attitude.

We're seeking a Software Development Engineer in Test (SDET) to ensure product feature quality through meticulous test design, automation, and result analysis. Collaborate closely with developers to optimize test coverage, resolve bugs, and streamline project delivery.
Responsibilities:
Ensure the quality of product feature development.
Test Design: Understand the necessary functionalities and implementation strategies for straightforward feature development. Inspect code changes, identify key test scenarios and impact areas, and create a thorough test plan.
Test Automation: Work with developers to build reusable test scripts. Review unit/functional test scripts, and aim to maximize test coverage to minimize manual testing, using Python.
Test Execution and Analysis: Monitor test results and identify areas lacking in test coverage. Address these areas by creating additional test scripts and deliver transparent test metrics to the team.
Support & Bug Fixes: Handle issues reported by customers and aid in bug resolution.
Collaboration: Participate in project planning and execution with the team for efficient project delivery.
Requirements:
A Bachelor's degree in computer science, IT, engineering, or a related field, with a genuine interest in software quality assurance, issue detection, and analysis.
2-5 years of solid experience in software testing, with a focus on automation. Proficiency in using a defect-tracking system, code repositories, and IDEs.
A good grasp of programming languages like Python/Java/Javascript. Must be able to understand and write code.
Familiarity with testing frameworks (e.g., Selenium, Appium, JUnit).
Good team player with a proactive approach to continuous learning.
Sound understanding of the Agile software development methodology.
Experience in a SaaS-based product company or a fast-paced startup environment is a plus.


Job Title : Senior Python Developer
Experience : 7+ Years
Location : Remote or Hybrid (Gurgaon / Coimbatore / Hyderabad)
Job Summary :
We are looking for a highly skilled and motivated Senior Python Developer to join our dynamic engineering team.
The ideal candidate will have a strong foundation in web application development using Python and related frameworks. A passion for writing clean, scalable code and solving complex technical challenges is essential for success in this role.
Mandatory Skills : Python (3.x), FastAPI or Flask, PostgreSQL or Oracle, ORM, API Microservices, Agile Methodologies, Clean Code Practices.
Required Skills and Qualifications :
- 7+ Years of hands-on experience in Python (3.x) development.
- Strong proficiency in FastAPI or Flask frameworks.
- Experience with relational databases like PostgreSQL, Oracle, or similar, along with ORM tools.
- Demonstrated experience in building and maintaining API-based microservices.
- Solid grasp of Agile development methodologies and version control practices.
- Strong analytical and problem-solving skills.
- Ability to write clean, maintainable, and well-documented code.
Nice to Have :
- Experience with Google Cloud Platform (GCP) or other cloud providers.
- Exposure to Kubernetes and container orchestration tools.


About Us:
We are a UK-based conveyancing firm dedicated to transforming property transactions through cutting-edge artificial intelligence. We are seeking a talented Machine Learning Engineer with 1–2 years of experience to join our growing AI team. This role offers a unique opportunity to work on scalable ML systems and Generative AI applications in a dynamic and impactful environment.
Responsibilities:
Design, Build, and Deploy Scalable ML Models
You will be responsible for end-to-end development of machine learning and deep learning models that can be scaled to handle real-world data and use cases. This includes training, testing, validating, and deploying models efficiently in production environments.
Develop NLP-Based Automation Solutions
You'll create natural language processing pipelines that automate tasks such as document understanding, text classification, and summarisation, enabling intelligent handling of property-related documents.
Prototype and Implement Generative AI Tools
Work closely with AI researchers and developers to experiment with and implement Generative AI techniques for tasks like content generation, intelligent suggestions, and workflow automation.
Integrate ML Models with APIs and Tools
Integrate machine learning models with external APIs and internal systems to support business operations and enhance customer service workflows.
Maintain CI/CD for ML Features
Collaborate with DevOps teams to manage CI/CD pipelines that automate testing, validation, and deployment of ML features and updates.
Review, Debug, and Optimise Models
Participate in thorough code reviews and model debugging sessions. Continuously monitor and fine-tune deployed models to improve their performance and reliability.
Cross-Team Communication
Communicate technical concepts effectively across teams, translating complex ML ideas into actionable business value.
· Design, build, and deploy scalable ML and deep learning models for real-world applications.
· Develop NLP-based and Gen AI based solutions for automating document understanding, classification, and summarisation.
· Collaborate with AI researchers and developers to prototype and implement Generative AI tools.
· Integrate ML and Gen AI models with APIs and internal tools to support business operations.
· Work with CI/CD pipelines to ensure continuous delivery of ML features and updates.
· Participate in code reviews, debugging, and performance optimisation of deployed models.
· Communicate technical concepts effectively across cross-functional teams.
Essentials From Day 1:
Security and Compliance:
• Ensure ML systems are built with GDPR compliance in mind.
• Adhere to RBAC policies and maintain secure handling of personal and property data.
Sandboxing and Risk Management:
• Use sandboxed environments for testing new ML features.
• Conduct basic risk analysis for model performance and data bias.
Qualifications:
· 1–2 years of professional experience in Machine Learning and Deep Learning projects.
· Proficient in Python, Object-Oriented Programming (OOPs), and Data Structures & Algorithms (DSA).
· Strong understanding of NLP and its real-world applications.
· Exposure to building scalable ML systems and deploying models into production.
· Basic working knowledge of Generative AI techniques and frameworks.
· Familiarity with CI/CD tools and experience with API-based integration.
· Excellent analytical thinking and debugging capabilities.
· Strong interpersonal and communication skills for effective team collaboration.
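As a rough illustration of the document-classification work mentioned above, a minimal scikit-learn sketch follows; the text snippets and labels are invented stand-ins for real conveyancing documents.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled snippets; real training data would be full documents.
texts = [
    "transfer of title deed for the property at ...",
    "mortgage offer issued by the lender ...",
    "local authority search results enclosed ...",
    "deed of covenant between the parties ...",
]
labels = ["title", "mortgage", "search", "covenant"]

# TF-IDF features feeding a linear classifier: a common first baseline
# before moving to transformer-based models.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["please find the lender's mortgage offer attached"]))
```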

Hybrid work mode
(Azure) EDW: experience loading star-schema data warehouses using framework architectures, including loading type 2 dimensions; ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Gen 2 Storage, SQL (expert), Python (intermediate), Azure cloud services knowledge, data analysis (SQL), data warehousing, documentation – BRD, FRD, user story creation.

We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as:
- Dataflow for real-time and batch data processing
- Cloud Functions for lightweight serverless compute
- BigQuery for data warehousing and analytics
- Cloud Composer for orchestration of data workflows (based on Apache Airflow)
- Google Cloud Storage (GCS) for managing data at scale
- IAM for access control and security
- Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
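To give a concrete flavour of the GCP work described above, here is a minimal BigQuery ingestion sketch; the project, dataset, and bucket names are hypothetical placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load a CSV export from GCS into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/exports/orders.csv",
    "example-project.staging.orders",
    job_config=job_config,
)
load_job.result()  # block until the load completes

# Simple data-quality check: confirm the staging table is non-empty.
rows = client.query(
    "SELECT COUNT(*) AS n FROM `example-project.staging.orders`"
).result()
print(f"loaded {next(iter(rows)).n} rows")
```

In a production pipeline this step would typically be wrapped in a Cloud Composer (Airflow) task with alerting on failure.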

Job Description:
- The candidate must possess a strong technology background with advanced knowledge of a Java- and Python-based technology stack.
- Java, JEE, Spring MVC, Python, JPA, Spring Boot, REST API, databases, Playwright, CI/CD pipelines
- At least 3 years of hands-on Java EE and Core Java experience with strong leadership qualities.
- Experience with web service development, REST, and service-oriented architecture.
- Expertise in object-oriented design, design patterns, architecture, and application integration.
- Working knowledge of databases, including design and SQL proficiency.
- Strong experience with frameworks used for development and automated testing, like Spring Boot, JUnit, BDD, etc.
- Experience with the Unix/Linux operating system and basic Linux commands.
- Strong development skills with the ability to understand a technical design and translate it into a workable solution.
- Basic knowledge of Python and hands-on experience with Python scripting.
- Build, deploy, and monitor applications using CI/CD pipelines.
- Experience with agile development methodology.
- Good to have: Elasticsearch index database, MongoDB or other NoSQL databases, Docker deployments, cloud deployments, any AI/ML experience, Snowflake experience.

Job Overview:
We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, AI, Machine Learning, or related field.
- 5+ years of hands-on experience in AI/ML solution development.
- Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT-family) using techniques like LoRA, QLoRA, PEFT.
- Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG).
- Proficient in key AI libraries and frameworks:
- LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers
- NLP: SpaCy, NLTK.
- Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2
- MLOps: MLflow, FastAPI, Docker, Git
- Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation.
- Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure.
- Strong communication skills and ability to convert business problems into technical solutions.
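As an illustration of the LoRA-style fine-tuning named in the requirements, a minimal Hugging Face PEFT setup sketch follows; the base model name and hyperparameters are illustrative assumptions, not a prescription for any specific task.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "mistralai/Mistral-7B-v0.1"  # example open-weights base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                  # low-rank adapter dimension
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only adapter weights are trainable
# ...then train with transformers.Trainer or TRL's SFTTrainer as usual.
```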
Preferred Qualifications:
- Experience building multimodal systems (text + image, etc.)
- Practical experience with agent frameworks for autonomous or goal-directed AI.
- Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment.
Key Responsibilities:
- Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications.
- Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality.
- Build NLP solutions for Q&A, summarization, information extraction, text classification, and more.
- Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks.
- Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features.
- Optimize models for performance, cost-efficiency, and low latency in production.
- Continuously evaluate new AI research, tools, and frameworks and apply them where relevant.
- Mentor junior AI engineers and contribute to internal AI best practices and documentation.


Job Title: Software Engineer (Node.js)
Experience: 4+ Years
Location: Pune
About the Role:
We are looking for a talented and experienced Node.js Developer with a minimum of 4 years of hands-on experience to join our dynamic team. In this role, you will design, develop, and maintain high-performance applications. You should be passionate about writing clean, efficient, and scalable code.
Key Responsibilities:
- Develop and maintain secure, scalable, and high-performance server-side applications.
- Implement authentication, authorization, and data protection measures across the backend.
- Follow backend best practices in code structure, error handling, and system design.
- Stay up to date with backend security trends and evolving best practices.
Mandatory skills:
- Strong hands-on experience in Node.js development (4+ years).
- Knowledge of security best practices in backend development (e.g., input validation and sanitization, secure data storage).
- Familiarity with authentication and authorization methods such as JWT, OAuth2, or session-based auth.
Good to Have Skills:
- Experience with React.js for building dynamic user interfaces.
- Working knowledge of Python for scripting or backend tasks.
Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
Required Soft Skills:
• Verbal Communication
• Written Communication
• Cooperation, Teamwork & Interpersonal Skills
• Customer Focus & Business Acumen
• Critical Thinking
• Initiative, Accountability & Result Orientation
• Learning and Continuous Improvement



Data Scientist
Job Id: QX003
About Us:
QX impact was launched with a mission to make AI accessible and affordable and to deliver AI products/solutions at scale for enterprises by bringing the power of data, AI, and engineering to drive digital transformation. We believe that without insights, businesses will continue to struggle to understand their customers and may even lose them; without insights, businesses won't be able to deliver differentiated products and services; and without insights, businesses can't achieve the new level of "Operational Excellence" that is crucial to remaining competitive, meeting rising customer expectations, expanding markets, and digitalizing.
Position Overview:
We are seeking a collaborative and analytical Data Scientist who can bridge the gap between business needs and data science capabilities. In this role, you will lead and support projects that apply machine learning, AI, and statistical modeling to generate actionable insights and drive business value.
Key Responsibilities:
- Collaborate with stakeholders to define and translate business challenges into data science solutions.
- Conduct in-depth data analysis on structured and unstructured datasets.
- Build, validate, and deploy machine learning models to solve real-world problems.
- Develop clear visualizations and presentations to communicate insights.
- Drive end-to-end project delivery, from exploration to production.
- Contribute to team knowledge sharing and mentorship activities.
Must-Have Skills:
- 3+ years of progressive experience in data science, applied analytics, or a related quantitative role, demonstrating a proven track record of delivering impactful data-driven solutions.
- Exceptional programming proficiency in Python, including extensive experience with core libraries such as Pandas, NumPy, Scikit-learn, NLTK and XGBoost.
- Expert-level SQL skills for complex data extraction, transformation, and analysis from various relational databases.
- Deep understanding and practical application of statistical modeling and machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and dimensionality reduction.
- Proven expertise in end-to-end machine learning model development lifecycle, including robust feature engineering, rigorous model validation and evaluation (e.g., A/B testing), and model deployment strategies.
- Demonstrated ability to translate complex business problems into actionable analytical frameworks and data science solutions, driving measurable business outcomes.
- Proficiency in advanced data analysis techniques, including Exploratory Data Analysis (EDA), customer segmentation (e.g., RFM analysis), and cohort analysis, to uncover actionable insights.
- Experience in designing and implementing data models, including logical and physical data modeling, and developing source-to-target mappings for robust data pipelines.
- Exceptional communication skills, with the ability to clearly articulate complex technical findings, methodologies, and recommendations to diverse business stakeholders (both technical and non-technical audiences).
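As an example of the RFM-style segmentation mentioned above, a minimal pandas sketch follows; the transactions file and column names are hypothetical, and the quartile scoring assumes enough spread in the data.

```python
import pandas as pd

# Hypothetical transaction log with customer_id, order_id, order_date, amount.
tx = pd.read_csv("transactions.csv", parse_dates=["order_date"])
snapshot = tx["order_date"].max() + pd.Timedelta(days=1)

# Recency (days since last order), frequency (distinct orders), monetary (spend).
rfm = tx.groupby("customer_id").agg(
    recency=("order_date", lambda s: (snapshot - s.max()).days),
    frequency=("order_id", "nunique"),
    monetary=("amount", "sum"),
)

# Quartile scores 1-4; lower recency is better, so its labels are reversed.
rfm["r"] = pd.qcut(rfm["recency"], 4, labels=[4, 3, 2, 1]).astype(int)
rfm["f"] = pd.qcut(rfm["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4]).astype(int)
rfm["m"] = pd.qcut(rfm["monetary"], 4, labels=[1, 2, 3, 4]).astype(int)
rfm["segment"] = rfm["r"].astype(str) + rfm["f"].astype(str) + rfm["m"].astype(str)
print(rfm.head())
```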
Good-to-Have Skills:
- Experience with cloud platforms (Azure, AWS, GCP) and specific services like Azure ML, Synapse, Azure Kubernetes and Databricks.
- Familiarity with big data processing tools like Apache Spark or Hadoop.
- Exposure to MLOps tools and practices (e.g., MLflow, Docker, Kubeflow) for model lifecycle management.
- Knowledge of deep learning libraries (TensorFlow, PyTorch) or experience with Generative AI (GenAI) and Large Language Models (LLMs).
- Proficiency with business intelligence and data visualization tools such as Tableau, Power BI, or Plotly.
- Experience working within Agile project delivery methodologies.
Competencies:
· Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.
· Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
· Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
· Customer Focus - Building strong customer relationships and delivering customer-centric solutions.
· Optimizes Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.

Role overview:
As a founding senior software engineer, you will play a key role in shaping our AI-powered visual search engine for fashion and e-commerce. Responsibilities include solving complex deep-tech challenges to build scalable AI/ML solutions, leading backend development for performance and scalability, and architecting and integrating software aligned with product strategy and innovation goals. You will collaborate with cross-functional teams to address real consumer problems and build robust AI/ML pipelines to drive product innovation.
What we’re looking for:
- 3–5 years of Python experience (Golang is a plus), with expertise in concurrency, FastAPI, RESTful APIs, and microservices.
- Proficiency in PostgreSQL/MongoDB, cloud platforms (AWS/GCP/Azure), and containerization tools like Docker/Kubernetes.
- Strong experience in asynchronous programming, CI/CD pipelines, and version control (Git).
- Excellent problem-solving and communication skills are essential.
What we offer:
- Competitive salary and ESOPs, along with hacker-house living: live and work with a Gen-Z team in a 7BHK house on MG Road, Gurgaon.
- Hands-on experience in shipping world-class products, professional development opportunities, flexible hours, and a collaborative, supportive culture.