50+ Python Jobs in Bangalore (Bengaluru)
Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have delivered over 170 automation projects for 65+ global clients, including Fortune 500 enterprises that trust us with mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving complex engineering challenges with precision and reliability.
What We Value
- Ownership: As part of our team, you're responsible for strategy and outcomes, not just completing assigned tasks.
- High Velocity: We move fast and iterate faster, amplifying our impact without compromising quality.
Who We Seek
We are hiring DevOps Engineers (6 months to 1 year of experience) to join our team. You will work on infrastructure automation, CI/CD pipelines, cloud deployments, container orchestration, and system reliability.
This role is ideal for someone who wants to work with modern DevOps tooling and contribute to high-impact engineering decisions.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
CI/CD Pipeline Management
- Design, implement, and maintain efficient CI/CD pipelines using Jenkins, GitLab CI, Azure DevOps, or similar tools.
- Automate build, test, and deployment processes to increase delivery speed and reliability.
Infrastructure as Code (IaC)
- Provision and manage infrastructure on AWS, Azure, or GCP using Terraform, CloudFormation, or Ansible.
- Maintain scalable, secure, and cost-optimized environments.
Containerization & Orchestration
- Build and manage Docker-based environments.
- Deploy and scale workloads using Kubernetes.
Monitoring & Alerting
- Implement monitoring, logging, and alerting systems using Prometheus, Grafana, ELK Stack, Datadog, or similar.
- Develop dashboards and alerts to detect issues proactively.
System Reliability & Performance
- Implement systems for high availability, disaster recovery, and fault tolerance.
- Troubleshoot and optimize infrastructure performance.
Scripting & Automation
- Write automation scripts in Python, Bash, or Shell to streamline operations.
- Automate repetitive workflows to reduce manual intervention.
Collaboration & Best Practices
- Work closely with Development, QA, and Security teams to embed DevOps best practices into the SDLC.
- Follow security standards for deployments and infrastructure.
- Work efficiently with Unix/Linux systems and understand core networking concepts (DNS, DHCP, NAT, VPN, TCP/IP).
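To give a flavor of the scripting and automation duties above, here is a minimal, hypothetical sketch of the kind of Python script this role involves. The report format, host names, and threshold are illustrative assumptions, not taken from the posting:

```python
# Hypothetical example of the "Scripting & Automation" duties:
# parse a disk-usage report and flag hosts that need attention.

def parse_usage_report(report: str) -> dict[str, int]:
    """Parse lines of '<host> <percent_used>%' into a dict."""
    usage = {}
    for line in report.strip().splitlines():
        host, percent = line.split()
        usage[host] = int(percent.rstrip("%"))
    return usage

def hosts_over_threshold(report: str, threshold: int = 80) -> list[str]:
    """Return hosts whose disk usage exceeds the threshold, worst first."""
    usage = parse_usage_report(report)
    flagged = [h for h, pct in usage.items() if pct > threshold]
    return sorted(flagged, key=lambda h: usage[h], reverse=True)

if __name__ == "__main__":
    report = "web-1 83%\ndb-1 91%\ncache-1 42%"
    print(hosts_over_threshold(report))  # ['db-1', 'web-1']
```

In practice a script like this would feed an alerting channel rather than print, but the shape (parse, evaluate, report) is typical of the workflows the role automates.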
What We’re Looking For
- Strong understanding of Linux distributions (Ubuntu, CentOS, RHEL) and Windows environments.
- Proficiency with Git and experience using GitHub, GitLab, or Bitbucket.
- Ability to write automation scripts using Bash/Shell or Python.
- Basic knowledge of relational databases like MySQL or PostgreSQL.
- Familiarity with web servers such as NGINX or Apache2.
- Experience working with AWS, Azure, GCP, or DigitalOcean.
- Foundational understanding of Ansible for configuration management.
- Basic knowledge of Terraform or CloudFormation for IaC.
- Hands-on experience with Jenkins or GitLab CI/CD pipelines.
- Strong knowledge of Docker for containerization.
- Basic exposure to Kubernetes for orchestration.
- Familiarity with at least one programming language (Java, Node.js, or Python).
Benefits
🤝 Work directly with founders and engineering leaders.
💪 Drive projects that create real business impact, not busywork.
💡 Gain practical, industry-relevant skills you won’t learn in college.
🚀 Accelerate your growth by working on meaningful engineering challenges.
📈 Learn continuously with mentorship and structured development opportunities.
🤗 Be part of a collaborative, high-energy workplace that values innovation.
Who We Are
We're a DevOps and Automation company based in Bengaluru, India. We have successfully delivered over 170 automation projects for 65+ global businesses, including Fortune 500 companies that trust us with their mission-critical infrastructure and operations. We're bootstrapped, profitable, and scaling quickly by consistently solving high-impact engineering problems.
What We Value
Ownership: You take accountability for outcomes, not just tasks.
High Velocity: We iterate fast, learn constantly, and deliver with precision.
Who We Seek
We are looking for a DevOps Intern to join our DevOps team and gain hands-on experience working with real infrastructure, automation pipelines, and deployment environments. You will support CI/CD processes, cloud environments, monitoring, and system reliability while learning industry-standard tools and practices.
We’re seeking someone who is technically curious, eager to learn, and driven to build reliable systems in a fast-paced engineering environment.
🌏 Job Location: Bengaluru (Work From Office)
What You Will Be Doing
- Assist in deploying product updates, monitoring system performance, and identifying production issues.
- Contribute to building and improving CI/CD pipelines for automated deployments.
- Support the provisioning, configuration, and maintenance of cloud infrastructure.
- Work with tools like Docker, Jenkins, Git, and monitoring systems to streamline workflows.
- Help automate recurring operational processes using scripting and DevOps tools.
- Participate in backend integrations aligned with product or customer requirements.
- Collaborate with developers, QA, and operations to improve reliability and scalability.
- Gain exposure to containerization, infrastructure-as-code, and cloud platforms.
- Document processes, configurations, and system behaviours to support team efficiency.
- Learn and apply DevOps best practices in real-world environments.
What We’re Looking For
- Hands-on experience or coursework with Docker, Linux, or cloud fundamentals.
- Familiarity with Jenkins, Git, or basic CI/CD concepts.
- Basic understanding of AWS, Azure, or Google Cloud environments.
- Exposure to configuration management tools like Ansible, Puppet, or similar.
- Interest in Kubernetes, Terraform, or infrastructure-as-code practices.
- Ability to write or modify simple shell or Python scripts.
- Strong analytical and troubleshooting mindset.
- Good communication skills with the ability to articulate technical concepts clearly.
- Eagerness to learn, take initiative, and adapt in a fast-moving engineering environment.
- Attention to detail and a commitment to accuracy and reliability.
Benefits
🤝 Work directly with founders and senior engineers.
💪 Contribute to live projects that impact real customers and systems.
💡 Learn tools and practices that engineering programs rarely teach.
🚀 Accelerate your growth through real-world problem solving.
📈 Build a strong DevOps foundation with continuous learning opportunities.
🤗 Thrive in a collaborative environment that encourages experimentation and growth.
As a Python Engineer, you will play a critical role in building and scaling data pipelines, developing prompts for large language models (LLMs), and deploying them as efficient, scalable APIs. You will collaborate closely with data scientists, product managers, and other engineers to ensure seamless integration of data solutions and LLM functionalities. This role requires expertise in Python, API design, data engineering tools, and a strong understanding of LLMs and their applications.
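As a small, hedged illustration of the prompt-development side of this role: a reusable template that turns structured records into an LLM prompt. The template wording and field names are made-up assumptions; real prompts would be tuned with the data science team.

```python
# Hypothetical sketch: render structured records into a prompt string
# that could be sent to an LLM API. Field names are illustrative.
from string import Template

SUMMARY_PROMPT = Template(
    "You are a data analyst. Summarize the following records "
    "in at most $max_words words:\n$records"
)

def build_summary_prompt(records: list[dict], max_words: int = 50) -> str:
    """Render records as bullet lines and fill the prompt template."""
    lines = "\n".join(f"- {r['name']}: {r['value']}" for r in records)
    return SUMMARY_PROMPT.substitute(max_words=max_words, records=lines)
```

Keeping templates in code like this (rather than inline strings scattered through the pipeline) makes prompts versionable and testable, which matters once they are served behind an API.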
Review Criteria:
- Strong Software Engineer fullstack profile using NodeJS / Python and React
- 6+ YOE in Software Development using Python OR NodeJS (For backend) & React (For frontend)
- Must have strong experience in working on Typescript
- Must have experience in message-based systems like Kafka, RabbitMQ, Redis
- Databases - PostgreSQL & NoSQL databases like MongoDB
- Product Companies Only
- Tier 1 Engineering Institutes (IIT, NIT, BITS, IIIT, DTU or equivalent)
Preferred:
- Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
- Experience in mentoring, coaching the team.
Role & Responsibilities:
We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.
The Ideal Candidate Will Be Able To-
- Take ownership of delivering performant, scalable and high-quality cloud-based software, both frontend and backend side.
- Mentor team members to develop in line with product requirements.
- Collaborate with Senior Architect for design and technology choices for product development roadmap.
- Do code reviews.
Ideal Candidate:
- Thorough knowledge of developing cloud-based software including backend APIs and react based frontend.
- Thorough knowledge of scalable design patterns, message-based systems such as Kafka, RabbitMQ, and Redis, and data technologies such as MongoDB, ORMs, and SQL.
- Experience with AWS services such as S3, IAM, Lambda etc.
- Expert level coding skills in Python FastAPI/Django, NodeJs, TypeScript, ReactJs.
- Eye for user responsive designs on the frontend.
Perks, Benefits and Work Culture:
- We prioritize people above all else. While we're recognized for our innovative technology solutions, it's our people who drive our success. That’s why we offer a comprehensive and competitive benefits package designed to support your well-being and growth:
- Medical Insurance with coverage up to INR 8,00,000 for the employee and their family
To lead the design, development, and optimization of high-scale search and discovery systems, leveraging deep expertise in OpenSearch. The Search Staff Engineer will enhance search relevance, query performance, and indexing efficiency by utilizing OpenSearch’s full-text, vector search, and analytics capabilities. This role focuses on building real-time search pipelines, implementing advanced ranking models, and architecting distributed indexing solutions to deliver a high-performance, scalable, and intelligent search experience.
Responsibilities:
• Architect, develop, and maintain a scalable OpenSearch-based search infrastructure for high-traffic applications.
• Optimize indexing strategies, sharding, replication, and query execution to improve search performance and reliability.
• Implement cross-cluster search, multi-tenant search solutions, and real-time search capabilities.
• Ensure efficient log storage, retention policies, and lifecycle management in OpenSearch.
• Monitor and troubleshoot performance bottlenecks, ensuring high availability and resilience.
• Design and implement real-time and batch indexing pipelines for structured and unstructured data.
• Optimize schema design, field mappings, and tokenization strategies for improved search performance.
• Manage custom analyzers, synonyms, stopwords, and stemming filters for multilingual search.
• Ensure search infrastructure adheres to security best practices, including encryption, access control, and audit logging.
• Optimize search for low latency, high throughput, and cost efficiency.
• Collaborate cross-functionally with engineering, product, and operations teams to ensure seamless platform delivery.
• Define and communicate a strategic roadmap for Search initiatives aligned with business goals.
• Work closely with stakeholders to understand database requirements and provide technical solutions.
Requirements:
• 8+ years of experience in search engineering, with at least 3+ years of deep experience in OpenSearch.
• Strong expertise in search indexing, relevance tuning, ranking algorithms, and query parsing.
• Hands-on experience with OpenSearch configurations, APIs, shards, replicas, and cluster scaling.
• Strong programming skills in Node.js and Python, and experience with OpenSearch SDKs.
• Proficiency in REST APIs, OpenSearch DSL queries, and aggregation frameworks.
• Knowledge of observability, logging, and monitoring tools (Prometheus, OpenTelemetry, Grafana).
• Experience managing OpenSearch clusters on AWS OpenSearch, containers, or self-hosted environments.
• Strong understanding of security best practices, role-based access control (RBAC), encryption, and IAM.
• Familiarity with multi-region, distributed search architectures.
• Strong analytical and debugging skills, with a proactive approach to identifying and mitigating risks.
• Exceptional communication skills, with the ability to influence and drive consensus among stakeholders.
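As a concrete flavor of the "OpenSearch DSL queries" requirement, here is a minimal sketch of building a query body as a plain Python dict. The field names, boost values, and tenant filter are hypothetical; the shape (a `multi_match` clause with per-field boosts inside a `bool` query, plus a non-scoring `filter`) is standard OpenSearch query DSL.

```python
# Illustrative sketch of relevance tuning in OpenSearch DSL:
# boost title matches over body, scoped to one tenant.
# Field names and boosts are made up for the example.

def build_search_query(text: str, tenant_id: str, size: int = 10) -> dict:
    """Build an OpenSearch query body with a 3x title boost."""
    return {
        "size": size,
        "query": {
            "bool": {
                "must": {
                    "multi_match": {
                        "query": text,
                        "fields": ["title^3", "body"],
                    }
                },
                # Filter clauses don't affect scoring, only inclusion,
                # which keeps multi-tenant scoping cheap and cacheable.
                "filter": [{"term": {"tenant_id": tenant_id}}],
            }
        },
    }
```

In a real cluster this dict would be passed to an OpenSearch client's search call; building it in one tested function is a common way to keep relevance tuning reviewable.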
Job Overview:
We are looking for a full-time Infrastructure & DevOps Engineer to support and enhance our cloud, server, and network operations. The role involves managing virtualization platforms, container environments, automation tools, and CI/CD workflows while ensuring smooth, secure, and reliable infrastructure performance. The ideal candidate should be proactive, technically strong, and capable of working collaboratively across teams.
Qualifications and Requirements
- Bachelor’s/Master’s degree in Computer Science, Engineering, or related field (B.E/B.Tech/BCA/MCA/M.Tech).
- Strong understanding of cloud platforms (AWS, Azure, GCP), including core services and IT infrastructure concepts.
- Hands-on experience with virtualization tools and concepts, including vCenter, hypervisors, nested virtualization, and bare-metal servers.
- Practical knowledge of Linux and Windows servers, including cron jobs and essential Linux commands.
- Experience working with Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of Terraform and Ansible for infrastructure automation.
- Scripting proficiency in Python and Bash (PowerShell optional).
- Networking fundamentals (IP, routing, subnetting, LAN/WAN/WLAN).
- Experience with firewalls, basic security concepts, and tools like pfSense.
- Familiarity with Git/GitHub for version control and team collaboration.
- Ability to perform API testing using cURL and Postman.
- Strong understanding of the application deployment lifecycle and basic application deployment processes.
- Good problem-solving, analytical thinking, and documentation skills.
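A quick illustration of the networking fundamentals (IP, routing, subnetting) listed above, using only Python's standard library `ipaddress` module. The network address is an arbitrary example:

```python
# Subnetting basics with the stdlib: a /26 holds 64 addresses,
# can be split into two /27s, and membership checks are one-liners.
import ipaddress

net = ipaddress.ip_network("10.0.0.0/26")
print(net.num_addresses)                          # 64
print(net.netmask)                                # 255.255.255.192
print(ipaddress.ip_address("10.0.0.17") in net)   # True

# Split the /26 into two /27 subnets.
subnets = list(net.subnets(prefixlen_diff=1))
print(subnets[0])                                 # 10.0.0.0/27
```

Being able to reason about mask boundaries like this is exactly what the subnetting and routing bullets are probing for in interviews.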
Roles and Responsibilities
- Manage and maintain Linux/Windows servers, virtualization environments, and cloud infrastructure across AWS/Azure/GCP.
- Use Terraform and Ansible to provision, automate, and manage infrastructure components.
- Support the application deployment lifecycle, from build and testing to release and rollout.
- Deploy and maintain Kubernetes clusters and containerized workloads using Docker.
- Develop, enhance, and troubleshoot CI/CD pipelines and integrate DevSecOps practices.
- Write automation scripts using Python/Bash to optimize recurring tasks.
- Conduct API testing using curl and Postman to validate integrations and service functionality.
- Configure and monitor firewalls including pfSense for secure access control.
- Troubleshoot network, server, and application issues using tools like Wireshark, ping, traceroute, and SNMP.
- Maintain Git/GitHub repos, manage branching strategies, and participate in code reviews.
- Prepare clear, detailed documentation including infrastructure diagrams, workflows, SOPs, and configuration records.
Experience: 3+ years
Responsibilities:
- Build, train, and fine-tune ML models.
- Develop features to improve model accuracy and outcomes.
- Deploy models into production using Docker, Kubernetes, and cloud services.
- Proficiency in Python and MLOps, with expertise in data processing and large-scale datasets.
- Hands-on experience with cloud AI/ML services.
- Exposure to RAG architecture.
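As a toy, dependency-free sketch of the model-building responsibilities above, here is one-dimensional k-means clustering written from scratch. In practice this role would use proper libraries (e.g. scikit-learn) and real feature pipelines; the data and k value here are illustrative, and the code assumes k >= 2.

```python
# Toy 1-D k-means (Lloyd's algorithm), no dependencies.
# Illustrative only; real work would use scikit-learn or similar.

def kmeans_1d(points: list[float], k: int = 2, iters: int = 20) -> list[float]:
    """Return k cluster centers for 1-D data (assumes k >= 2)."""
    pts = sorted(points)
    # Initialize centers spread across the sorted data.
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters: list[list[float]] = [[] for _ in range(k)]
        for p in pts:
            # Assign each point to its nearest center.
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

if __name__ == "__main__":
    data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.8]
    print(sorted(kmeans_1d(data)))  # two centers near 1.0 and 9.1
```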
MANDATORY CRITERIA:
- Education: B.Tech / M.Tech in ECE / CSE / IT
- Experience: 10–12 years in hardware board design, system hardware engineering, and full product deployment cycles
- Proven expertise in digital, analog, and power electronic circuit analysis & design
- Strong hands-on experience designing boards with SoCs, FPGAs, CPLDs, and MPSoC architectures
- Deep understanding of signal integrity, EMI/EMC, and high-speed design considerations
- Must have successfully completed at least two hardware product development cycles from high-level design to final deployment
- Ability to independently handle schematic design, design analysis (DC drop, SI), and cross-team design reviews
- Experience in sourcing & procurement of electronic components, PCBs, and mechanical parts for embedded/IoT/industrial hardware
- Strong experience in board bring-up, debugging, issue investigation, and cross-functional triage with firmware/software teams
- Expertise in hardware validation, test planning, test execution, equipment selection, debugging, and report preparation
- Proficiency in Cadence Allegro or Altium EDA tools (mandatory)
- Experience coordinating with layout, mechanical, SI, EMC, manufacturing, and supply chain teams
- Strong understanding of manufacturing services, production pricing models, supply chain, and logistics for electronics/electromechanical components
DESCRIPTION:
COMPANY OVERVIEW:
The company is a semiconductor and embedded system design company with a focus on Embedded, Turnkey ASICs, Mixed Signal IP, Semiconductor & Product Engineering and IoT solutions catering to Aerospace & Defence, Consumer Electronics, Automotive, Medical and Networking & Telecommunications.
REQUIRED SKILLS:
- Extensive experience in hardware board design across multiple product field-deployment cycles.
- Strong foundation and expertise in analyzing digital, analog, and power electronic circuits.
- Proficient with SoC-, FPGA-, CPLD-, and MPSoC-architecture-based board designs.
- Knowledgeable in signal integrity and EMI/EMC concepts for digital and power electronics.
- Completed at least two projects from high-level design to final product-level deployment.
- Capable of independently managing a product's schematics and design analysis (DC drop, signal integrity), and coordinating reviews with peers on the layout, mechanical, SI, and EMC teams.
- Sourcing and procurement of electronic components, PCBs, and mechanical parts for cutting-edge IoT, embedded, and industrial product development.
- Experienced in board bring-up, issue investigation, and triage in collaboration with firmware and software teams.
- Skilled in preparing hardware design documentation and validation test plans, identifying necessary test equipment, and in test development, execution, debugging, and report preparation.
- Effective communication and interpersonal skills for collaborative work with cross-functional teams, including post-silicon bench validation, BIOS, and driver development/QA.
- Hands-on experience with Cadence Allegro/Altium EDA tools is essential.
- Familiarity with programming and scripting languages like Python and Perl, and experience in test automation, is advantageous.
- Excellent exposure to coordinating manufacturing services, production pricing models, supply chain, and logistics for electronics and electromechanical components.
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of Typescript, JavaScript, HTML, CSS, JSON and REST based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Experienced with JS-based build/package tools like Grunt, Gulp, Bower, and Webpack.
Required Skills
• Minimum 3+ years of experience in Python is mandatory.
• You are responsible for growing your team
• You take ownership of the product/service you are writing
• You are able to write clean, pragmatic, and testable code
• Comfortable with basic Unix commands (+ Shell scripting)
• Very Proficient in Git & GitHub
• Have a GitHub & StackOverflow profile
• Proficient in writing test-first code (i.e., writing testable code)
• You stick to sprint timelines
• Must have worked on AWS.
Skills: Python 3.5+, Django 2.0+ or Flask, ORM (Django ORM, SQLAlchemy), Celery, Redis/RabbitMQ, Elasticsearch/Solr, Django REST Framework, GraphQL, Pandas, NumPy, SciPy, Linux, Git, DevOps, Docker, AWS.
Knowledge of front-end technologies is good to have (HTML5, CSS3, SASS/LESS, object-oriented JavaScript, TypeScript).
Knowledge of machine learning/AI concepts, and keen interest/exposure/experience in other languages (Golang, Elixir, Rust), is a huge plus.
Mode Employment – Fulltime and Permanent
Working Location: Bommasandra Industrial Area, Hosur Main Road, Bangalore
Working Days: 5 days
Working Model: Hybrid - 3 days WFO and 2 days Home
Position Overview
As the Lead Software Engineer in our Research & Innovation team, you’ll play a strategic role in establishing and driving the technical vision for industrial AI solutions. Working closely with the Lead AI Engineer, you will form a leadership tandem to define the roadmap for the team, cultivate an innovative culture, and ensure that projects are strategically aligned with the organization’s goals. Your leadership will be crucial in developing, mentoring, and empowering the team as we expand, helping create an environment where innovative ideas can translate seamlessly from research to industry-ready products.
Key Responsibilities:
- Define and drive the technical strategy for embedding AI into industrial automation products, with a focus on scalability, quality, and industry compliance.
- Lead the development of a collaborative, high-performing engineering team, mentoring junior engineers, automation experts, and researchers.
- Establish and oversee processes and standards for agile and DevOps practices, ensuring project alignment with strategic goals.
- Collaborate with stakeholders to align project goals, define priorities, and manage timelines, while driving innovative, research-based solutions.
- Act as a key decision-maker on technical issues, architecture, and system design, ensuring long-term maintainability and scalability of solutions.
- Ensure adherence to industry standards, certifications, and compliance, and advocate for industry best practices within the team.
- Stay updated on software engineering trends and AI applications in embedded systems, incorporating the latest advancements into the team’s strategic planning.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- Extensive experience in software engineering, with a proven track record of leading technical teams, ideally in manufacturing or embedded systems.
- Strong expertise in Python and C++/Rust, Gitlab toolchains, and system architecture for embedded applications.
- Experience in DevOps, CI/CD, and agile methodologies, with an emphasis on setting and maintaining high standards across a team.
- Exceptional communication and collaboration skills in English.
- Willingness to travel as needed.
Preferred:
- Background in driving team culture, agile project management, and experience embedding AI in industrial products.
- Familiarity with sociocratic or consent-based management practices.
- Knowledge in embedded programming is an advantage.
Job Description:
Experience Range: 6 to 10 years
Qualifications:
- Minimum Bachelor's degree in Engineering, Computer Applications, or AI/Data Science.
- Experience at product companies/startups developing, validating, and productionizing AI models in projects within the last 3 years.
- Prior experience with Python, NumPy, scikit-learn, Pandas, ETL/SQL, and BI tools in previous roles preferred.
Required Skills:
- Must Have – Direct hands-on experience working in Python for scripting, automation, analysis, and orchestration.
- Must Have – Experience working with ML libraries such as scikit-learn, TensorFlow, PyTorch, Pandas, and NumPy.
- Must Have – Experience working with models such as random forest, k-means clustering, and BERT.
- Should Have – Exposure to querying warehouses and APIs.
- Should Have – Experience writing moderate to complex SQL queries.
- Should Have – Experience analyzing and presenting data with BI tools or Excel.
- Must Have – Very strong communication skills to work with technical and non-technical stakeholders in a global environment.
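To make the "moderate to complex SQL" expectation concrete, here is a small, runnable example using Python's built-in sqlite3 module. The schema and data are invented; the pattern being shown (aggregate with GROUP BY, a HAVING filter on the aggregate, and explicit ordering) is the point.

```python
# Illustrative SQL: total spend per customer, filtered and ordered.
# Schema and values are made up for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('acme', 120.0), ('acme', 80.0),
        ('globex', 40.0), ('initech', 300.0);
""")

# Customers whose total spend exceeds 100, highest first.
rows = conn.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING total > 100
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('initech', 300.0), ('acme', 200.0)]
```

The same query shape transfers directly to warehouse engines (PostgreSQL, BigQuery, etc.), which is why it shows up so often in analytics screening.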
Roles and Responsibilities:
- Work with Business stakeholders, Business Analysts, Data Analysts to understand various data flows and usage.
- Analyse and present insights about the data and processes to Business Stakeholders
- Validate and test appropriate AI/ML models based on the prioritization and insights developed while working with the Business Stakeholders
- Develop and deploy customized models on Production data sets to generate analytical insights and predictions
- Participate in cross functional team meetings and provide estimates of work as well as progress in assigned tasks.
- Highlight risks and challenges to the relevant stakeholders so that work is delivered in a timely manner.
- Share knowledge and best practices with broader teams to make everyone aware and more productive.
Mode Employment – Fulltime and Permanent
Working Location: Bommasandra Industrial Area, Hosur Main Road, Bangalore
Working Days: 5 days
Working Model: Hybrid - 3 days WFO and 2 days Home
Responsibilities
- Design, implement, and test embedded software for industrial automation products
- Collaborate within an agile team on industrial communication, cybersecurity, and closed-loop control projects
- Assist in developing and enhancing infrastructure for continuous integration and industrial Ethernet
- Integrate and reuse software components from the embedded platform
- Ensure software meets quality, performance, and functional standards
- Troubleshoot, test, and support embedded software throughout the development lifecycle
Qualification
- Degree in Computer Science, Software Engineering, Electrical Engineering, or related field.
- Familiarity with electric and pneumatic systems and willingness to engage with them
- 5 to 8 years of experience in embedded systems and industrial communication software development
- Proficiency in object-oriented design, C++, and Python for scripting and automation
- Knowledge of version control with Git, unit and integration testing, and troubleshooting embedded software
- Familiarity with ARM v7/v8 Cortex-M / Cortex-A microcontrollers and their ecosystems
- Understanding of industrial communication protocols (EtherCAT, Profinet, Modbus, IO-Link) and controllers (Siemens, Beckhoff)
- Experience with modern development tools such as VS Code, LLVM, GitLab, CMake, and Conan
- Awareness of software development processes, architectural design principles, and quality best practices
- Excellent written and spoken English communication skills
Founding Engineer – RetainSure (Bengaluru)
Full-stack • Python/FastAPI • React • Postgres • AWS • Early-stage.
RetainSure is an AI Customer Success Manager that eliminates the need for CSMs for low-touch accounts and reduces headcount by 80% for high-touch teams. We integrate deeply across product, CRM, support, billing, and external signals to build a unified intelligence layer, and deploy specialized AI agents that behave like a human CSM across backstage, in-meeting, and in-product workflows.
We’re now hiring a Founding Engineer to join our core team and help build the next $100M+ AI infra layer for Customer Success.
What You’ll Own
- Architect and build core product components across backend, frontend, infrastructure, and data pipelines.
- Develop scalable services using Python, FastAPI, Postgres, and AWS-native components.
- Build high-quality user-facing experiences in React.
- Design, optimize, and maintain database schemas and high-throughput workflows.
- Work directly with the founders to define product, ship fast, and iterate with customers.
- Own modules end-to-end, from concept → architecture → implementation → monitoring.
- Set engineering standards, improve dev velocity, and lay the foundation for the engineering culture.
What We’re Looking For
Must-haves
- 2+ years of experience as a Full-Stack Engineer in high-velocity environments.
- Strong backend fundamentals: Python, FastAPI, SQL/Postgres, distributed systems basics.
- Solid frontend engineering with React.
- Strong grasp of DSA, algorithms, and database internals.
- Deep comfort with AWS (EC2, S3, Lambda, RDS, VPC, IAM, etc.).
- Bias toward action, ownership, and solving real customer problems.
- Ability to move fast without compromising code quality.
Cherry on Top
- Ambition to start your own company in the future: we love founders-in-the-making.
- Experience building large-scale SaaS systems, data platforms, or AI-driven workflows.
- Worked in early-stage startups or shipped 0→1 products before.
What You’ll Get
- Founding-level ESOPs with outsized ownership.
- Work with founders experienced in building/scaling SaaS (Rubrik, Mailmodo, ClearTax).
- High trust, zero bureaucracy, rapid execution environment.
- Opportunity to architect systems that will scale globally across enterprise customers.
- First-hand exposure to building and scaling a company from scratch.
Salary Range
15-18 LPA + founding engineer level ESOPs
Location
Bengaluru
As a Senior Software Engineer, you’ll be responsible for building and maintaining high-performance web applications across the stack. You’ll collaborate with product managers, designers, and business stakeholders to translate complex business needs into reliable digital systems.
Key Responsibilities
- Design, build, and maintain scalable web applications end-to-end.
- Work closely with product and design teams to deliver user-centric, high-performance interfaces.
- Develop and optimize backend APIs, database queries, and integrations.
- Write clean, maintainable, and testable code following best practices.
- Mentor junior developers and contribute to team-wide tech decisions.
Requirements
- Experience: 5+ years of hands-on full-stack development experience.
- Backend: Proficiency in Python
- Frontend: Experience with React, Angular, or Vue.js.
- Database: Strong knowledge of SQL databases (MySQL, PostgreSQL, or Oracle).
- Communication: Comfortable in English or Hindi.
- Location: Bangalore, 5 days a week (Work from Office).
- Availability: Immediate joiners preferred.
Why Join Us
- Be part of a fast-growing global diamond brand backed by two industry leaders.
- Collaborate with a sharp, experienced tech and product team solving real-world business challenges.
- Work at the intersection of luxury, data, and innovation — building systems that directly impact global operations.
Objectives of this role
- Develop, test and maintain high-quality software using the Python programming language.
- Participate in the entire software development lifecycle, building, testing and delivering high-quality solutions.
- Collaborate with cross-functional teams to identify and solve complex problems.
- Write clean and reusable code that can be easily maintained and scaled.
Your tasks
- Create large-scale data processing pipelines to help developers build and train novel machine learning algorithms.
- Participate in code reviews, ensure code quality and identify areas for improvement to implement practical solutions.
- Debug code when required and troubleshoot any Python-related queries.
- Keep up to date with emerging trends and technologies in Python development.
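The tasks above centre on large-scale data processing pipelines; a minimal generator-based sketch of that pattern (the record format and stage names are hypothetical):

```python
def read_records(lines):
    """Parse raw 'name,value' lines into records (format is hypothetical)."""
    for line in lines:
        name, value = line.strip().split(",")
        yield {"name": name, "value": float(value)}

def clean(records):
    """Drop records with non-positive values."""
    return (r for r in records if r["value"] > 0)

def batched(records, size):
    """Group records into fixed-size batches for downstream processing."""
    batch = []
    for r in records:
        batch.append(r)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch

raw = ["a,1.5", "b,-2.0", "c,3.0", "d,0.5"]
batches = list(batched(clean(read_records(raw)), size=2))
print([[r["name"] for r in b] for b in batches])  # [['a', 'c'], ['d']]
```

Because every stage is a generator, nothing is materialised until the final `list()` call, which is what lets the same shape scale to datasets larger than memory.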
Required skills and qualifications
- 3+ years of experience as a Python Developer with a strong portfolio of projects.
- Hands-on experience with Angular 12+.
- Bachelor's degree in Computer Science, Software Engineering or a related field.
- In-depth understanding of the Python software development stacks, ecosystems, frameworks and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch.
- Experience with front-end development using HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.
Preferred skills and qualifications
- Experience with popular Python frameworks such as Django, Flask or Pyramid.
- Knowledge of Angular
- A working understanding of cloud platforms such as AWS, Google Cloud or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.
An L2 Technical Support Engineer with Python knowledge is responsible for handling escalated, more complex technical issues that the Level 1 (L1) support team cannot resolve. Your primary goal is to perform deep-dive analysis, troubleshooting, and problem resolution to minimize customer downtime and ensure system stability.
Python is a key skill, used for scripting, automation, debugging, and data analysis in this role.
Key Responsibilities
- Advanced Troubleshooting & Incident Management:
- Serve as the escalation point for complex technical issues (often involving software bugs, system integrations, backend services, and APIs) that L1 support cannot resolve.
- Diagnose, analyze, and resolve problems, often requiring in-depth log analysis, code review, and database querying.
- Own the technical resolution of incidents end-to-end, adhering strictly to established Service Level Agreements (SLAs).
- Participate in on-call rotation for critical (P1) incident support outside of regular business hours.
- Python-Specific Tasks:
- Develop and maintain Python scripts for automation of repetitive support tasks, system health checks, and data manipulation.
- Use Python for debugging and troubleshooting by analyzing application code, API responses, or data pipeline issues.
- Write ad-hoc scripts to extract, analyze, or modify data in databases for diagnostic or resolution purposes.
- Potentially apply basic-to-intermediate code fixes in Python applications in collaboration with development teams.
- Collaboration and Escalation:
- Collaborate closely with L3 Support, Software Engineers, DevOps, and Product Teams to report bugs, propose permanent fixes, and provide comprehensive investigation details.
- Escalate issues that require significant product changes or deeper engineering expertise to the L3 team, providing clear, detailed documentation of all steps taken.
- Documentation and Process Improvement:
- Conduct Root Cause Analysis (RCA) for major incidents, documenting the cause, resolution, and preventative actions.
- Create and maintain a Knowledge Base (KB), runbooks, and Standard Operating Procedures (SOPs) for recurring issues to empower L1 and enable customer self-service.
- Proactively identify technical deficiencies in processes and systems and recommend improvements to enhance service quality.
- Customer Communication:
- Maintain professional, clear, and timely communication with customers, explaining complex technical issues and resolutions in an understandable manner.
Required Technical Skills
- Programming/Scripting:
- Strong proficiency in Python (for scripting, automation, debugging, and data manipulation).
- Experience with other scripting languages like Bash or Shell
- Databases:
- Proficiency in SQL for complex querying, debugging data flow issues, and data extraction.
- Application/Web Technologies:
- Understanding of API concepts (RESTful/SOAP) and experience troubleshooting them using tools like Postman or curl.
- Knowledge of application architectures (e.g., microservices, SOA) is a plus.
- Monitoring & Tools:
- Experience with support ticketing systems (e.g., JIRA, ServiceNow).
- Familiarity with log aggregation and monitoring tools (Kibana, Splunk, ELK Stack, Grafana)
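The Python-specific tasks above include health checks and in-depth log analysis; a minimal stdlib sketch of that kind of support script (the log format is hypothetical):

```python
import re
from collections import Counter

LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def summarize(log_lines):
    """Count log levels and collect ERROR messages for escalation."""
    levels = Counter()
    errors = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip malformed lines rather than crash mid-incident
        levels[m["level"]] += 1
        if m["level"] == "ERROR":
            errors.append(m["msg"])
    return levels, errors

logs = [
    "2024-01-01T00:00:01 INFO service started",
    "2024-01-01T00:00:02 ERROR db connection refused",
    "garbage line",
    "2024-01-01T00:00:03 ERROR db connection refused",
]
levels, errors = summarize(logs)
print(levels["ERROR"], errors[0])  # 2 db connection refused
```

In practice the same summary would feed a ticket or an alert; the key habit it illustrates is tolerating malformed input instead of failing during an incident.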
Responsibilities :
- Design and develop user-friendly web interfaces using HTML, CSS, and JavaScript.
- Utilize modern frontend frameworks and libraries such as React, Angular, or Vue.js to build dynamic and responsive web applications.
- Develop and maintain server-side logic using programming languages such as Java, Python, Ruby, Node.js, or PHP.
- Build and manage APIs for seamless communication between the frontend and backend systems.
- Integrate third-party services and APIs to enhance application functionality.
- Implement CI/CD pipelines to automate testing, integration, and deployment processes.
- Monitor and optimize the performance of web applications to ensure a high-quality user experience.
- Stay up-to-date with emerging technologies and industry trends to continuously improve development processes and application performance.
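The responsibilities above include building APIs that connect frontend and backend; as a framework-free sketch of the core routing idea (all route names are hypothetical, and a real service would use Flask, FastAPI, or similar):

```python
import json

ROUTES = {}

def route(method, path):
    """Register a handler for (method, path) — the mechanism web frameworks wrap."""
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/health")
def health():
    return {"status": "ok"}

@route("GET", "/users")
def list_users():
    return {"users": ["alice", "bob"]}

def dispatch(method, path):
    """Look up the handler and serialize its result as a JSON response body."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(handler())

status, body = dispatch("GET", "/health")
print(status, body)  # 200 {"status": "ok"}
```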
Qualifications :
- Bachelor's or Master's degree in Computer Science or related subjects, or hands-on experience demonstrating a working understanding of software applications.
- Knowledge of building applications that can be deployed in a cloud environment or are cloud native applications.
- Strong expertise in building backend applications using Java/C#/Python with demonstrable experience in using frameworks such as Spring/Vertx/.Net/FastAPI.
- Deep understanding of enterprise design patterns, API development and integration and Test-Driven Development (TDD)
- Working knowledge in building applications that leverage databases such as PostgreSQL, MySQL, MongoDB, Neo4J or storage technologies such as AWS S3, Azure Blob Storage.
- Hands-on experience in building enterprise applications adhering to their needs of security and reliability.
- Hands-on experience building applications using one of the major cloud providers (AWS, Azure, GCP).
- Working knowledge of CI/CD tools for application integration and deployment.
- Working knowledge of using reliability tools to monitor the performance of the application.
We are seeking a motivated Data Analyst to support business operations by analyzing data, preparing reports, and delivering meaningful insights. The ideal candidate should be comfortable working with data, identifying patterns, and presenting findings in a clear and actionable way.
Key Responsibilities:
- Collect, clean, and organize data from internal and external sources
- Analyze large datasets to identify trends, patterns, and opportunities
- Prepare regular and ad-hoc reports for business stakeholders
- Create dashboards and visualizations using tools like Power BI or Tableau
- Work closely with cross-functional teams to understand data requirements
- Ensure data accuracy, consistency, and quality across reports
- Document data processes and analysis methods
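The responsibilities above boil down to cleaning data, grouping it, and reporting aggregates; a stdlib sketch of that workflow (the sales data is made up for illustration):

```python
from collections import defaultdict
from statistics import mean

sales = [
    {"region": "South", "amount": 120.0},
    {"region": "North", "amount": 80.0},
    {"region": "South", "amount": 200.0},
    {"region": "North", "amount": None},  # missing value to be cleaned out
]

# Clean: drop rows with missing amounts.
clean = [row for row in sales if row["amount"] is not None]

# Group by region, then aggregate per group.
by_region = defaultdict(list)
for row in clean:
    by_region[row["region"]].append(row["amount"])

report = {region: {"total": sum(v), "avg": mean(v)} for region, v in by_region.items()}
print(report)
```

The same clean/group/aggregate shape maps directly onto SQL `GROUP BY` or a pandas `groupby`, which is what a Power BI or Tableau dashboard would sit on top of.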
Qualification
• Bachelor's or Master's degree in Computer Science, Electrical Engineering, Electronics, or a related field
• 5 – 8 years of hands-on experience in Embedded Systems and AI/ML development
• Strong programming proficiency in Python, C/C++, and embedded development
• Practical experience with embedded ML frameworks such as TensorFlow Lite, ONNX Runtime, Edge Impulse, or similar
• Solid understanding of real-time embedded systems, microcontroller architectures, and communication protocols
• Experience with model optimization techniques (quantization, pruning, hardware-specific acceleration) is a plus
• Familiarity with edge hardware platforms such as ARM Cortex, STM32, ESP32, NVIDIA Jetson, or similar is desirable.
Your job Responsibilities :
• Develop, optimize, and deploy machine learning models for embedded and edge devices
• Ensure real-time performance, memory efficiency, and low-power operation of on-device AI solutions
• Integrate ML algorithms with microcontrollers, embedded controllers, and edge computing platforms
• Implement pipelines for model conversion, quantization, and hardware acceleration
• Collaborate with software, controls, and hardware engineers to deliver production-grade embedded AI systems
• Participate in prototyping, testing, benchmarking, and continuous improvement of embedded AI solutions
• Work with the global research teams to translate concepts into scalable prototypes for industrial automation.
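The responsibilities above mention model conversion and quantization pipelines; as a pure-Python sketch of the underlying idea (frameworks like TensorFlow Lite or ONNX Runtime implement this internally, typically with per-channel scales and calibration), symmetric int8 quantization of a weight vector looks like:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: scale by max |w| so values map into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero weights
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [-1.0, -0.4, 0.0, 0.25, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (plus float tolerance).
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
print(q)  # [-127, -51, 0, 32, 127]
```

Shrinking weights from 32-bit floats to 8-bit integers is what makes models fit microcontroller flash and run on integer-only accelerators.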
Flam is building AI Infrastructure for Brands in Immersive Advertising, spanning all channels: Digital, Broadcast TV, Retail, Communications, Print, OOH, and more.
Vision
The Immersive & Interactive Layer for Every Screen & Surface
Flam aims to redefine how consumers interact with ads and content in every shape and form, from retail aisles to live broadcasts and fan moments, turning content and interfaces into shoppable, shareable experiences that deliver measurable ROI.
Flam has raised a $14 million Series A round led by global technology investor RTP Global with participation from Dovetail and select others bringing the total funding to $22 million.
The next phase of growth is to accelerate R&D on its app-less GenAI infrastructure that lets brands create, publish and measure high-fidelity MR, 3D & Digital experiences in <300 ms on any smartphone—no app download required.
The same infra already powers advertising for Google, Samsung, Emirates and hundreds of global enterprises & agency powerhouses.
Key Focus Areas
Product Roadmap
- Upcoming releases include GenAI-driven 3D asset generation
- Democratising MR deployment at scale
- Enterprise Suite of Products across Industries
- Infrastructure for broadcasters and fan engagement
Geography
- Funds will support new enterprise pods in North America, Europe and the Middle East
- Deepening Asia operations
Partnerships
- Flam will expand its partner program for creative studios and global platforms
- Enabling Fortune 500 brands to move from pilot to rapid global roll-out
Role Overview
We’re looking for a Mobile Automation Engineer (Native iOS & Android) who can write and maintain automated test cases in Swift and Java for our native iOS and Android apps. You’ll help test Mixed Reality (MR) features, backend-integrated flows, and ensure app quality at scale. You’ll collaborate with product, development, and QA teams to enable fast, bug-free releases.
Key Responsibilities
- Write and maintain automation test cases in Swift (XCUITest) for iOS and Java (Espresso/UIAutomator) for Android.
- Build and evolve robust test suites for core app features, including MR interactions, media workflows, and camera/sensor integrations.
- Automate end-to-end flows that include backend APIs (authentication, media processing, analytics, etc.).
- Collaborate with developers to integrate automation into CI/CD pipelines (GitHub Actions, Bitrise).
- Perform regression, smoke, and exploratory testing across a wide range of devices and OS versions.
- Raise and track bugs with detailed reproduction steps and logs.
Required Qualifications
- 2–6 years of experience in mobile test automation.
- Strong programming experience with Java (for Android) and Swift (for iOS).
- Hands-on experience with Espresso, UIAutomator, and XCUITest.
- Proficient in testing REST APIs and using tools like Postman, Charles Proxy, etc.
- Experience working with CI/CD tools and device testing infrastructure such as Firebase Test Lab or BrowserStack.
Nice-to-Have (Bonus)
- Exposure to ARKit (iOS) or ARCore (Android).
- Basic understanding of graphics/rendering stacks such as OpenGL, Metal, or Vulkan.
- Experience testing camera, sensor, or motion-driven features.
- Background in early-stage startup environments or rapid product development cycles.
What We Offer
- Opportunity to work on a cutting-edge MR platform with a highly creative and driven team.
- Full ownership and autonomy of mobile automation strategy.
- Flexible remote work environment with a performance-driven culture.
- Competitive salary and ESOPs.
- A real chance to make an impact in the fast-evolving world of mobile Mixed Reality.

A global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. Its expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value.
- Skills: GenAI, machine learning models, AWS/Azure, Redshift, Python, Apache Airflow, DevOps; minimum 4-5 years of experience as an Architect; should be from a Data Engineering background.
• Strong hands-on background in data engineering, analytics, or data science.
• Expertise in building data platforms using:
o Cloud: AWS (Glue, S3, Redshift), Azure (Data Factory, Synapse), GCP (BigQuery, Dataflow).
o Compute: Spark, Databricks, Flink.
o Data modelling: dimensional, relational, NoSQL, graph.
• Proficiency with Python, SQL, and data pipeline orchestration tools.
• Understanding of ML frameworks and tools: TensorFlow, PyTorch, Scikit-learn, MLflow, etc.
• Experience implementing MLOps, model deployment, monitoring, logging, and versioning.
• 8+ years of experience in data engineering, data science, or architecture roles.
• Experience designing enterprise-grade AI platforms.
• Certification in major cloud platforms (AWS/Azure/GCP).
• Experience with governance tooling (Collibra, Alation) and lineage systems
Job Description:
Required skills and qualifications
- 3+ years of experience as a Python Developer with a strong portfolio of projects.
- Hands-on experience with React.js.
- Bachelor's degree in Computer Science, Software Engineering or a related field.
- In-depth understanding of the Python software development stacks, ecosystems, frameworks and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, scikit-learn and PyTorch.
- Experience with front-end development using Angular.js, HTML, CSS, and JavaScript.
- Familiarity with database technologies such as SQL and NoSQL.
- Excellent problem-solving ability with solid communication and collaboration skills.
Preferred skills and qualifications
- Experience with popular Python frameworks such as Django, Flask or Pyramid.
- Knowledge of data science and machine learning concepts and tools.
- A working understanding of cloud platforms such as AWS, Google Cloud or Azure.
- Contributions to open-source Python projects or active involvement in the Python community.
About Impacto Digifin Technologies
Impacto Digifin Technologies enables enterprises to adopt digital transformation through intelligent, AI-powered solutions. Our platforms reduce manual work, improve accuracy, automate complex workflows, and ensure compliance—empowering organizations to operate with speed, clarity, and confidence.
We combine automation where it’s fastest with human oversight where it matters most. This hybrid approach ensures trust, reliability, and measurable efficiency across fintech and enterprise operations.
Role Overview
We are looking for an AI Engineer Voice with strong applied experience in machine learning, deep learning, NLP, GenAI, and full-stack voice AI systems.
This role requires someone who can design, build, deploy, and optimize end-to-end voice AI pipelines, including speech-to-text, text-to-speech, real-time streaming voice interactions, voice-enabled AI applications, and voice-to-LLM integrations.
You will work across core ML/DL systems, voice models, predictive analytics, banking-domain AI applications, and emerging AGI-aligned frameworks. The ideal candidate is an applied engineer with strong fundamentals, the ability to prototype quickly, and the maturity to contribute to R&D when needed.
This role is collaborative, cross-functional, and hands-on.
Key Responsibilities
Voice AI Engineering
- Build end-to-end voice AI systems, including STT, TTS, VAD, audio processing, and conversational voice pipelines.
- Implement real-time voice pipelines involving streaming interactions with LLMs and AI agents.
- Design and integrate voice calling workflows, bi-directional audio streaming, and voice-based user interactions.
- Develop voice-enabled applications, voice chat systems, and voice-to-AI integrations for enterprise workflows.
- Build and optimize audio preprocessing layers (noise reduction, segmentation, normalization)
- Implement voice understanding modules, speech intent extraction, and context tracking.
Machine Learning & Deep Learning
- Build, deploy, and optimize ML and DL models for prediction, classification, and automation use cases.
- Train and fine-tune neural networks for text, speech, and multimodal tasks.
- Build traditional ML systems where needed (statistical, rule-based, hybrid systems).
- Perform feature engineering, model evaluation, retraining, and continuous learning cycles.
NLP, LLMs & GenAI
- Implement NLP pipelines including tokenization, NER, intent, embeddings, and semantic classification.
- Work with LLM architectures for text + voice workflows
- Build GenAI-based workflows and integrate models into production systems.
- Implement RAG pipelines and agent-based systems for complex automation.
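The RAG pipelines mentioned above hinge on an embedding-retrieval step; a minimal stdlib sketch of it, using toy 3-dimensional vectors (a real system would use model-generated embeddings and a vector database):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, corpus, k=2):
    """Rank documents by similarity to the query embedding and keep the top k."""
    scored = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in scored[:k]]

# Toy "embeddings" — real pipelines would come from an embedding model.
corpus = [
    {"text": "reset your password", "vec": [0.9, 0.1, 0.0]},
    {"text": "quarterly revenue report", "vec": [0.0, 0.2, 0.9]},
    {"text": "account login help", "vec": [0.8, 0.3, 0.1]},
]
query = [1.0, 0.2, 0.0]  # e.g., the embedding of "how do I log in?"
print(retrieve(query, corpus))
```

The retrieved snippets would then be injected into the LLM prompt as grounding context, which is the "augmented generation" half of RAG.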
Fintech & Banking AI
- Work on AI-driven features related to banking, financial risk, compliance automation, fraud patterns, and customer intelligence.
- Understand fintech data structures and constraints while designing AI models.
Engineering, Deployment & Collaboration
- Deploy models on cloud or on-prem (AWS / Azure / GCP / internal infra).
- Build robust APIs and services for voice and ML-based functionalities.
- Collaborate with data engineers, backend developers, and business teams to deliver end-to-end AI solutions.
- Document systems and contribute to internal knowledge bases and R&D.
Security & Compliance
- Follow fundamental best practices for AI security, access control, and safe data handling.
- Awareness of financial compliance standards (plus, not mandatory).
- Follow internal guidelines on PII, audio data, and model privacy.
Primary Skills (Must-Have)
Core AI
- Machine Learning fundamentals
- Deep Learning architectures
- NLP pipelines and transformers
- LLM usage and integration
- GenAI development
- Voice AI (STT, TTS, VAD, real-time pipelines)
- Audio processing fundamentals
- Model building, tuning, and retraining
- RAG systems
- AI Agents (orchestration, multi-step reasoning)
Voice Engineering
- End-to-end voice application development
- Voice calling & telephony integration (framework-agnostic)
- Realtime STT ↔ LLM ↔ TTS interactive flows
- Voice chat system development
- Voice-to-AI model integration for automation
Fintech/Banking Awareness
- High-level understanding of fintech and banking AI use cases
- Data patterns in core banking analytics (advantageous)
Programming & Engineering
- Python (strong competency)
- Cloud deployment understanding (AWS/Azure/GCP)
- API development
- Data processing & pipeline creation
Secondary Skills (Good to Have)
- MLOps & CI/CD for ML systems
- Vector databases
- Prompt engineering
- Model monitoring & evaluation frameworks
- Microservices experience
- Basic UI integration understanding for voice/chat
- Research reading & benchmarking ability
Qualifications
- 2–3 years of practical experience in AI/ML/DL engineering.
- Bachelor’s/Master’s degree in CS, AI, Data Science, or related fields.
- Proven hands-on experience building ML/DL/voice pipelines.
- Experience in fintech or data-intensive domains preferred.
Soft Skills
- Clear communication and requirement understanding
- Curiosity and research mindset
- Self-driven problem solving
- Ability to collaborate cross-functionally
- Strong ownership and delivery discipline
- Ability to explain complex AI concepts simply
Machine Learning Engineer | 3+ Years | Mumbai (Onsite)
Location: Ghansoli, Mumbai
Work Mode: Onsite | 5 days working
Notice Period: Immediate to 30 Days preferred
About the Role
We are hiring a Machine Learning Engineer with 3+ years of experience to build and deploy prediction, classification, and recommendation models. You’ll work on end-to-end ML pipelines and production-grade AI systems.
Must-Have Skills
- 3+ years of hands-on ML experience
- Strong Python (Pandas, NumPy, Scikit-learn, TensorFlow / PyTorch)
- Experience with feature engineering, model training & evaluation
- Hands-on with Azure ML / Azure Storage / Azure Functions
- Knowledge of modern AI concepts (embeddings, transformers, LLMs)
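The must-haves above include feature engineering; as a stdlib sketch of one of its simplest steps, min-max scaling fitted on training data only (in practice scikit-learn's `MinMaxScaler` does this, and the leakage rule is the important part):

```python
def fit_minmax(column):
    """Learn scaling parameters on training data only, to avoid leakage."""
    lo, hi = min(column), max(column)
    span = hi - lo or 1.0  # guard against constant columns
    return lo, span

def transform(column, lo, span):
    return [(x - lo) / span for x in column]

train = [10.0, 20.0, 30.0, 40.0]
holdout = [25.0, 50.0]  # scaled with *train* parameters; may exceed [0, 1]

lo, span = fit_minmax(train)
print(transform(train, lo, span))    # [0.0, 0.333..., 0.666..., 1.0]
print(transform(holdout, lo, span))  # [0.5, 1.333...]
```

Fitting on the full dataset instead would let evaluation data influence the features, silently inflating offline metrics.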
Good to Have
- MLOps tools (MLflow, Docker, CI/CD)
- Time-series forecasting
- Model serving using FastAPI
Why Join Us?
- Work on real-world ML use cases
- Exposure to modern AI & LLM-based systems
- Collaborative engineering environment
- High ownership & learning opportunities
Exp: 7-10 Years
CTC: up to 35 LPA
Skills:
- 6–10 years DevOps / SRE / Cloud Infrastructure experience
- Expert-level Kubernetes (networking, security, scaling, controllers)
- Terraform Infrastructure-as-Code mastery
- Hands-on Kafka production experience
- AWS cloud architecture and networking expertise
- Strong scripting in Python, Go, or Bash
- GitOps and CI/CD tooling experience
Key Responsibilities:
- Design highly available, secure cloud infrastructure supporting distributed microservices at scale
- Lead multi-cluster Kubernetes strategy optimized for GPU and multi-tenant workloads
- Implement Infrastructure-as-Code using Terraform across full infrastructure lifecycle
- Optimize Kafka-based data pipelines for throughput, fault tolerance, and low latency
- Deliver zero-downtime CI/CD pipelines using GitOps-driven deployment models
- Establish SRE practices with SLOs, p95 and p99 monitoring, and FinOps discipline
- Ensure production-ready disaster recovery and business continuity testing
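The SRE practices above call for p95/p99 monitoring; as a sketch of what those numbers mean, here is the nearest-rank percentile over latency samples (monitoring stacks like Prometheus estimate this from histograms instead of raw samples):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample with at least p% of values at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(p * len(ordered) / 100)
    return ordered[max(rank - 1, 0)]

# Hypothetical request latencies in milliseconds.
latencies = list(range(1, 101))  # 1..100 ms
print(percentile(latencies, 95), percentile(latencies, 99))  # 95 99
```

An SLO phrased as "p99 latency under 200 ms" means 99% of requests must complete faster than that threshold; averages hide exactly the tail this measures.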
If interested, kindly share your updated resume at 82008 31681.
Role Description
This is a full-time on-site role in Bengaluru for a Full Stack Python Developer at Euphoric Thought Technologies Pvt. Ltd. The developer will be responsible for back-end and front-end web development, software development, full-stack development, and using Cascading Style Sheets (CSS) to build effective and efficient applications.
Qualifications
- Back-End Web Development and Full-Stack Development skills
- Front-End Development and Software Development skills
- Proficiency in Cascading Style Sheets (CSS)
- Experience with Python, Django, and Flask frameworks
- Strong problem-solving and analytical skills
- Ability to work collaboratively in a team environment
- Bachelor's or Master's degree in Computer Science or relevant field
- Agile Methodologies: Proven experience working in agile teams, demonstrating the application of agile principles with lean thinking.
- Front End: React.js
- Data Engineering: Useful experience blending data engineering with core software engineering.
- Additional Programming Skills: Desirable experience with other programming languages (C++, .NET) and frameworks.
- CI/CD Tools: Familiarity with Github Actions is a plus.
- Cloud Platforms: Experience with cloud platforms (e.g., Azure, AWS) and containerization technologies (e.g., Docker, Kubernetes).
- Code Optimization: Proficient in profiling and optimizing Python code.
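On the profiling and optimization point above, a small `timeit` comparison of a classic Python hotspot, repeated string concatenation versus the idiomatic `str.join` (absolute timings vary by machine and interpreter):

```python
import timeit

def concat_naive(items):
    out = ""
    for s in items:
        out += s  # repeated concatenation can degrade to O(n^2) copying
    return out

def concat_join(items):
    return "".join(items)  # single-pass allocation, the preferred idiom

items = ["x"] * 1000
assert concat_naive(items) == concat_join(items)  # same result, different cost

# Micro-benchmark both variants.
slow = timeit.timeit(lambda: concat_naive(items), number=200)
fast = timeit.timeit(lambda: concat_join(items), number=200)
print(f"naive: {slow:.4f}s  join: {fast:.4f}s")
```

In real work a profiler such as `cProfile` identifies which function dominates first; `timeit` then compares candidate rewrites of just that function.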
AI Agent Builder – Internal Functions and Data Platform Development Tools
About the Role:
We are seeking a forward-thinking AI Agent Builder to lead the design, development, deployment, and usage reporting of Microsoft Copilot and other AI-powered agents across our data platform development tools and internal business functions. This role will be instrumental in driving automation, improving onboarding, and enhancing operational efficiency through intelligent, context-aware assistants.
This role is central to our GenAI transformation strategy. You will help shape the future of how our teams interact with data, reduce administrative burden, and unlock new efficiencies across the organization. Your work will directly contribute to our “Art of the Possible” initiative—demonstrating tangible business value through AI.
You Will:
• Copilot Agent Development: Use Microsoft Copilot Studio and Agent Builder to create, test, and deploy AI agents that automate workflows, answer queries, and support internal teams.
• Data Engineering Enablement: Build agents that assist with data connector scaffolding, pipeline generation, and onboarding support for engineers.
• Knowledge Base Integration: Curate and integrate documentation (e.g., ERDs, connector specs) into Copilot-accessible repositories (SharePoint, Confluence) to support contextual AI responses.
• Prompt Engineering: Design reusable prompt templates and conversational flows to streamline repeated tasks and improve agent usability.
• Tool Evaluation & Integration: Assess and integrate complementary AI tools (e.g., GitLab Duo, Databricks AI, Notebook LM) to extend Copilot capabilities.
• Cross-Functional Collaboration: Partner with product, delivery, PMO, and security teams to identify high-value use cases and scale successful agent implementations.
• Governance & Monitoring: Ensure agents align with Responsible AI principles, monitor performance, and iterate based on feedback and evolving business needs.
• Adoption and Usage Reporting: Use Microsoft Viva Insights and other tools to report on user adoption, usage and business value delivered.
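The prompt-engineering bullet above can be made concrete with a minimal sketch of a reusable prompt template in plain Python. All names here (the template text, `engineer_name`, `connector_type`) are hypothetical illustrations of the pattern, not part of any Copilot Studio API:

```python
from string import Template

# Hypothetical reusable prompt template for an onboarding agent.
# The placeholder names are illustrative, not a real Copilot Studio schema.
ONBOARDING_PROMPT = Template(
    "You are an onboarding assistant for data engineers.\n"
    "Engineer: $engineer_name\n"
    "Task: scaffold a $connector_type connector.\n"
    "Answer using only the linked connector specs; if unsure, say so."
)

def build_prompt(engineer_name: str, connector_type: str) -> str:
    """Fill the template so the same conversational flow can be reused."""
    return ONBOARDING_PROMPT.substitute(
        engineer_name=engineer_name, connector_type=connector_type
    )

print(build_prompt("Asha", "REST API"))
```

Keeping templates as data rather than hard-coded strings is what makes them reusable across agents and easy to version alongside documentation.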
What We're Looking For:
• Proven experience with Microsoft 365 Copilot, Copilot Studio, or similar AI platforms (e.g., ChatGPT, Claude).
• Strong understanding of data engineering workflows, tools (e.g., Git, Databricks, Unity Catalog), and documentation practices.
• Familiarity with SharePoint, Confluence, and Microsoft Graph connectors.
• Experience in prompt engineering and conversational UX design.
• Ability to translate business needs into scalable AI solutions.
• Excellent communication and collaboration skills across technical and non-technical audiences.
Bonus Points:
• Experience with GitLab Duo, Notebook LM, or other AI developer tools.
• Background in enterprise data platforms, ETL pipelines, or internal business systems.
• Exposure to AI governance, security, and compliance frameworks.
• Prior work in a regulated industry (e.g., healthcare, finance) is a plus.
We’re looking for a passionate Data & Automation Engineer to join our team and assist in managing and processing large volumes of structured and unstructured data. You'll work closely with our engineering and product teams to extract, transform, and load (ETL) data, automate data workflows, and format data for different use cases.
Key Responsibilities:
- Write efficient scripts using Python and Node.js to process and manipulate data
- Scrape and extract data from public and private sources (APIs, websites, files)
- Format and clean raw datasets for consistency and usability
- Upload data to various databases, including MongoDB and other storage solutions
- Create and maintain data pipelines and automation scripts
- Document processes, scripts, and schema changes clearly
- Collaborate with backend and product teams to support data-related needs
Skills Required:
- Proficiency in Python (especially for data manipulation using libraries like pandas, requests, etc.)
- Experience with Node.js for backend tasks or scripting
- Familiarity with MongoDB and understanding of NoSQL databases
- Basic knowledge of web scraping tools (e.g., BeautifulSoup, Puppeteer, or Cheerio)
- Understanding of JSON, APIs, and data formatting best practices
- Attention to detail, debugging skills, and a data-driven mindset
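As an illustration of the pandas-based formatting and cleaning work described above, here is a minimal, self-contained sketch. The records and column names are invented for the example:

```python
import pandas as pd

# Hypothetical raw records scraped from an API; columns are illustrative.
raw = pd.DataFrame([
    {"name": "  Acme Corp ", "revenue": "1,200", "city": "bangalore"},
    {"name": "Beta Ltd",     "revenue": None,    "city": "PUNE"},
    {"name": "  Acme Corp ", "revenue": "1,200", "city": "bangalore"},  # duplicate
])

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Format and clean a raw dataset for consistency and usability."""
    out = df.copy()
    out["name"] = out["name"].str.strip()          # trim stray whitespace
    out["city"] = out["city"].str.title()          # normalise casing
    out["revenue"] = (                             # "1,200" -> 1200.0
        out["revenue"].str.replace(",", "", regex=False).astype(float)
    )
    return out.drop_duplicates().dropna(subset=["revenue"]).reset_index(drop=True)

tidy = clean(raw)
```

The same shape (copy, normalise column by column, deduplicate, drop unusable rows) scales from toy frames like this one to real scraped datasets.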
Good to Have:
- Experience with data visualization or reporting tools
- Knowledge of other databases like PostgreSQL or Redis
- Familiarity with version control (Git) and working in agile teams
We are seeking a Senior Data Engineer to design, build, and maintain a robust, scalable on-premise data infrastructure. The role focuses on real-time and batch data processing using technologies such as Apache Pulsar, Apache Flink, MongoDB, ClickHouse, Docker, and Kubernetes.
Ideal candidates have strong systems knowledge, deep backend data experience, and a passion for building efficient, low-latency data pipelines in a non-cloud, on-prem environment.
Key Responsibilities
1. Data Pipeline & Streaming Development
- Design and implement real-time data pipelines using Apache Pulsar and Apache Flink to support mission-critical systems.
- Build high-throughput, low-latency data ingestion and processing workflows across streaming and batch workloads.
- Integrate internal systems and external data sources into a unified on-premise data platform.
2. Data Storage & Modelling
- Design efficient data models for MongoDB, ClickHouse, and other on-prem databases to support analytical and operational use cases.
- Optimize storage formats, indexing strategies, and partitioning schemes for performance and scalability.
3. Infrastructure & Containerization
- Deploy, manage, and monitor containerized data services using Docker and Kubernetes in on-prem environments.
4. Performance, Monitoring & Reliability
- Monitor and fine-tune the performance of streaming jobs and database queries.
- Implement robust logging, metrics, and alerting frameworks to ensure high availability and operational stability.
- Identify pipeline bottlenecks and implement proactive optimizations.
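To make the streaming responsibilities above concrete, here is a plain-Python stand-in for the idea behind a tumbling window, the kind of fixed-interval aggregation Flink jobs perform. The event shape `(timestamp, key, value)` and the window size are assumptions for the sketch, not a Flink API:

```python
from collections import defaultdict
from typing import Iterable

def tumbling_window_sums(
    events: Iterable[tuple[int, str, float]], window_s: int
) -> dict[tuple[int, str], float]:
    """Group (timestamp, key, value) events into fixed windows and sum values."""
    sums: dict[tuple[int, str], float] = defaultdict(float)
    for ts, key, value in events:
        window_start = (ts // window_s) * window_s  # align to window boundary
        sums[(window_start, key)] += value
    return dict(sums)

events = [(0, "sensor-a", 1.0), (3, "sensor-a", 2.0), (7, "sensor-a", 5.0)]
print(tumbling_window_sums(events, window_s=5))
# {(0, 'sensor-a'): 3.0, (5, 'sensor-a'): 5.0}
```

A real Pulsar/Flink pipeline adds event-time semantics, watermarks, and state checkpointing on top of this core grouping idea.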
Required Skills & Experience
- Strong experience in data engineering with a focus on on-premise environments.
- Expertise in streaming technologies such as Apache Pulsar, Apache Flink, or similar platforms.
- Deep hands-on experience with MongoDB, ClickHouse, or other NoSQL/columnar databases.
- Proficient in Python for data processing and backend development.
- Practical experience deploying and managing systems using Docker and Kubernetes.
- Strong understanding of Linux systems, performance tuning, and resource monitoring.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related fields (or equivalent experience).
Additional Responsibilities for Senior-Level Hires
Leadership & Mentorship
- Guide, mentor, and support junior engineers; establish best practices and code quality standards.
System Architecture
- Lead the design and optimization of complex real-time and batch data pipelines for scalability and performance.
Sensor Data Expertise
- Build and optimize sensor-driven data pipelines and stateful stream processing systems for mission-critical domains such as maritime and defense.
End-to-End Ownership
- Take full responsibility for the performance, reliability, and optimization of on-premise data systems.
Job Description -Technical Project Manager
Job Title: Technical Project Manager
Location: Bhopal / Bangalore (On-site)
Experience Required: 7+ Years
Industry: Fintech / SaaS / Software Development
Role Overview
We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.
Key Responsibilities
Project & Team Management
- Manage daily tasks for Android, Frontend, and Backend developers
- Conduct daily stand-ups, weekly planning, and reviews
- Track progress, identify blockers, and ensure timely delivery
- Maintain sprint boards, task estimations, and timelines
Technical Requirement Translation
- Convert business requirements into technical tasks
- Communicate requirements clearly to developers
- Create user stories, flow diagrams, and PRDs
- Ensure requirements are understood and implemented correctly
Quality & Build Review
- Validate build quality, UI/UX flow, functionality
- Check API integrations, errors, performance issues
- Ensure coding practices and architecture guidelines are followed
- Perform preliminary QA before handover to testing or clients
Issue Resolution
- Identify development issues early
- Coordinate with developers to fix bugs
- Escalate major issues to founders with clear insights
Reporting & Documentation
- Daily/weekly reports to management
- Sprint documentation, release notes
- Maintain project documentation & version control processes
Cross-Team Communication
- Act as the single point of contact for management
- Align multiple tech teams with business goals
- Coordinate with HR and operations for resource planning
Required Skills
- Strong understanding of Android, Web (Frontend/React), Backend development flows
- Knowledge of APIs, Git, CI/CD, basic testing
- Experience with Agile/Scrum methodologies
- Ability to review builds and suggest improvements
- Strong documentation skills (Jira, Notion, Trello, Asana)
- Excellent communication & leadership
- Ability to handle pressure and multiple projects
Good to Have
- Prior experience in Fintech projects
- Basic knowledge of UI/UX
- Experience in preparing FSD/BRD/PRD
- QA experience or understanding of test cases
Salary Range: 9 to 12 LPA
Job Title: Full Stack Developer
Location: Bangalore, India
About Us:
Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.
Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.
Role Overview:
As a Full Stack Developer, you will be at the foundation of building this intelligent learning ecosystem by connecting the front-end experience, backend architecture, and AI-driven components that bring the platform to life. You’ll own key systems that power the AI Tutor, Simulation Lab, and learning content delivery, ensuring everything runs smoothly, securely, and at scale. This role is ideal for engineers who love building end-to-end products that blend technology, user experience, and real-time intelligence.
Your Core Impact
- You will build the spine of the platform, ensuring seamless communication between AI models, user interfaces, and data systems.
- You’ll translate learning and AI requirements into tangible, performant product features.
- Your work will directly shape how thousands of students experience physics through our AI Tutor and simulation environment.
Key Responsibilities:
Platform Architecture & Backend Development
- Design and implement robust, scalable APIs that power user authentication, course delivery, and AI Tutor integration.
- Build the data pipelines connecting LLM responses, simulation outputs, and learner analytics.
- Create and maintain backend systems that ensure real-time interaction between the AI layer and the front-end interface.
- Ensure security, uptime, and performance across all services.
Front-End Development & User Experience
- Develop responsive, intuitive UIs (React, Next.js or similar) for learning dashboards, course modules, and simulation interfaces.
- Collaborate with product designers to implement layouts for AI chat, video lessons, and real-time lab interactions.
- Ensure smooth cross-device functionality for students accessing the platform on mobile or desktop.
AI Integration & Support
- Work closely with the AI/ML team to integrate the AI Tutor and Simulation Lab outputs within the platform experience.
- Build APIs that pass context, queries, and results between learners, models, and the backend in real time.
- Optimize for low latency and high reliability, ensuring students experience immediate and natural interactions with the AI Tutor.
Data, Analytics & Reporting
- Build dashboards and data views for educators and product teams to derive insights from learner behavior.
- Implement secure data storage and export pipelines for progress analytics.
Collaboration & Engineering Culture
- Work closely with AI Engineers, Prompt Engineers, and Product Leads to align backend logic with learning outcomes.
- Participate in code reviews, architectural discussions, and system design decisions.
- Help define engineering best practices that balance innovation, maintainability, and performance.
Required Qualifications & Skills
- 3–5 years of professional experience as a Full Stack Developer or Software Engineer.
- Strong proficiency in Python or Node.js for backend services.
- Hands-on experience with React / Next.js or equivalent modern front-end frameworks.
- Familiarity with databases (SQL/NoSQL), REST APIs, and microservices.
- Experience with real-time data systems (WebSockets or event-driven architectures).
- Exposure to AI/ML integrations or data-intensive backends.
- Knowledge of AWS/GCP/Azure and containerized deployment (Docker, Kubernetes).
- Strong problem-solving mindset and attention to detail.
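The real-time, event-driven requirement above can be illustrated with a toy in-process event bus. A production system would sit behind WebSockets or a message broker; the topic name and payload here are invented for the sketch:

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """Minimal publish/subscribe bus illustrating event-driven wiring."""

    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, payload: Any) -> None:
        for handler in self._handlers[topic]:
            handler(payload)

bus = EventBus()
received: list[Any] = []
bus.subscribe("tutor.reply", received.append)   # e.g. push to a UI layer
bus.publish("tutor.reply", {"text": "F = ma"})
```

Decoupling publishers from subscribers like this is what lets an AI layer, a front end, and analytics consume the same events independently.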
Job Title: AI Engineer
Location: Bangalore, India
About Us:
Meraki Labs stands at the forefront of India's deep-tech innovation landscape, operating as a dynamic venture studio established by the visionary entrepreneur Mukesh Bansal. Our core mission revolves around the creation and rapid scaling of AI-first and truly "moonshot" startups, nurturing them from their nascent stages into industry leaders. We achieve this through an intensive, hands-on partnership model, working side-by-side with exceptional founders who possess both groundbreaking ideas and the drive to execute them.
Currently, Meraki Labs is channeling its significant expertise and resources into a particularly ambitious endeavor: a groundbreaking EdTech platform. This initiative is poised to revolutionize the field of education by democratizing access to world-class STEM learning for students globally. Our immediate focus is on fundamentally redefining how physics is taught and experienced, moving beyond traditional methodologies to deliver an immersive, intuitive, and highly effective learning journey that transcends geographical and socioeconomic barriers. Through this platform, we aim to inspire a new generation of scientists, engineers, and innovators, ensuring that cutting-edge educational resources are within reach of every aspiring learner, everywhere.
Role Overview:
As an AI Engineer on the Capacity team, you will design, build, and deploy the intelligent systems that power our AI Tutor and Simulation Lab.
You’ll collaborate closely with prompt engineers, product managers, and full-stack developers to build scalable AI features that connect language, reasoning, and real-world learning. This is not a traditional MLOps role; it’s an opportunity to engineer how intelligence flows across the product, from tutoring interactions to real-time physics reasoning.
Your Core Impact
- Build the AI backbone that drives real-time tutoring, contextual reasoning, and simulation feedback.
- Translate learning logic and educational goals into deployable, scalable AI systems.
- Enable the AI Tutor to think, reason, and respond based on structured academic material and live learner inputs.
Key Responsibilities:
1. AI System Architecture & Development
- Design and develop scalable AI systems that enable chat-based tutoring, concept explainability, and interactive problem solving.
- Implement and maintain model-serving APIs, vector databases, and context pipelines to connect content, learners, and the tutor interface.
- Contribute to the design of the AI reasoning layer that interprets simulation outputs and translates them into learner-friendly explanations.
2. Simulation Lab Intelligence
- Work with the ML team to integrate LLMs with the Simulation Lab, enabling the system to read experiment variables, predict outcomes, and explain results dynamically.
- Create evaluation loops that compare student actions against expected results and generate personalized feedback through the tutor.
- Support the underlying ML logic for physics-based prediction and real-time data flow between lab modules and the tutor layer.
3. Model Integration & Optimization
- Fine-tune, evaluate, and deploy LLMs or smaller domain models that serve specific platform functions.
- Design retrieval and grounding workflows so that all model outputs reference the correct textbook or course material.
- Optimize performance, latency, and scalability for high-traffic, interactive learning environments.
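The retrieval-and-grounding bullet above boils down to ranking passages by similarity to a query embedding and citing the top hits. Here is a minimal sketch with hand-written 2-D vectors standing in for real embeddings; the passages and vectors are invented for illustration:

```python
import math

# Toy corpus: passage text -> pretend embedding vector.
passages = {
    "Newton's second law: F = ma.": [1.0, 0.0],
    "Ohm's law: V = IR.": [0.0, 1.0],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two 2-D vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k passages most similar to the query embedding."""
    ranked = sorted(passages, key=lambda p: cosine(query_vec, passages[p]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.9, 0.1]))  # the Newton passage ranks first
```

In a real pipeline the vectors come from an embedding model and the ranking is done by a vector database (Pinecone, FAISS, Chroma), but the grounding step is the same: answer only from the retrieved passages.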
4. Collaboration & Research
- Partner with Prompt Engineers to ensure reasoning consistency across tutoring and simulations.
- Work with Product and Education teams to define use cases that align AI behavior with learning goals.
- Stay updated with new model capabilities and research advancements in RAG, tool use, and multi-modal learning systems.
5. Data & Infrastructure
- Maintain robust data pipelines for model inputs (textbooks, transcripts, lab data) and evaluation sets.
- Ensure privacy-safe data handling and continuous model performance tracking.
- Deploy and monitor AI workloads using cloud platforms (AWS, GCP, or Azure).
Soft Skills:
- Strong problem-solving and analytical abilities.
- Eagerness to learn, innovate and deliver impactful results.
Required Qualifications & Skills
- 3–4 years of experience in AI engineering, ML integration, or backend systems for AI-driven products.
- Strong proficiency in Python, with experience in frameworks like FastAPI, Flask, or LangChain.
- Familiarity with LLMs, embeddings, RAG systems, and vector databases (Pinecone, FAISS, Chroma, etc.).
- Experience building APIs and integrating with frontend components.
- Working knowledge of cloud platforms (AWS, GCP, Azure) and model deployment environments.
- Understanding of data structures, algorithms, and OOP principles.
Job Description: Applied Scientist
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
About the Role
We are seeking a highly motivated Applied Scientist to join our Data Science team. This individual will play a key role in enhancing and scaling our existing forecasting and pricing systems and developing new capabilities that support our intelligent decision-making platform.
We are looking for team members who:
● Are deeply curious and passionate about applying machine learning to real-world problems.
● Demonstrate strong ownership and the ability to work independently.
● Excel in both technical execution and collaborative teamwork.
● Have a track record of shipping products in complex environments.
What You’ll Do
● Build, train, and deploy machine learning and operations research models for forecasting, pricing, and inventory optimization.
● Work with large-scale, noisy, and temporally complex datasets.
● Collaborate cross-functionally with engineering and product teams to move models from research to production.
● Generate interpretable and trusted outputs to support adoption of AI-driven rate recommendations.
● Contribute to the development of an AI-first platform that redefines hospitality revenue management.
Required Qualifications
● Master’s degree or PhD in Operations Research, Industrial/Systems Engineering, Computer Science, or Applied Mathematics.
● 3-5 years of hands-on experience in a product-centric company, ideally with full model lifecycle exposure.
● Demonstrated ability to apply machine learning and optimization techniques to solve real-world business problems.
● Proficient in Python and machine learning libraries such as PyTorch, statsmodels, LightGBM, scikit-learn, and XGBoost.
● Strong knowledge of Operations Research models (stochastic optimization, dynamic programming) and forecasting models (time-series and ML-based).
● Understanding of machine learning and deep learning foundations.
● Ability to translate research into commercial solutions.
● Strong written and verbal communication skills to explain complex technical concepts clearly to cross-functional teams.
● Ability to work independently and manage projects end-to-end.
Preferred Experience
● Experience in revenue management, pricing systems, or demand forecasting, particularly within the hotel and hospitality domain.
● Applied knowledge of reinforcement learning techniques (e.g., bandits, Q-learning, model-based control).
● Familiarity with causal inference methods (e.g., DAGs, treatment effect estimation).
● Proven experience in collaborative product development environments, working closely with engineering and product teams.
Why LodgIQ?
● Join a fast-growing, mission-driven company transforming the future of hospitality.
● Work on intellectually challenging problems at the intersection of machine learning, decision science, and human behavior.
● Be part of a high-impact, collaborative team with the autonomy to drive initiatives from ideation to production.
● Competitive salary and performance bonuses.
● For more information, visit https://www.lodgiq.com
Job Description: Python-Azure AI Developer
Experience: 5+ years
Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal
Mandatory Skills:
- Python: Expert-level proficiency with FastAPI/Flask
- Azure Services: Hands-on experience integrating Azure cloud services
- Databases: PostgreSQL, Redis
- AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding
Good to Have:
- Workflow automation tools (n8n or similar)
- Experience with LangChain, AutoGen, or other AI agent frameworks
- Azure OpenAI Service knowledge
Key Responsibilities:
- Develop AI-powered applications using Python and Azure
- Build RESTful APIs with FastAPI/Flask
- Integrate Azure services for AI/ML workloads
- Implement agentic AI solutions
- Database optimization and management
- Workflow automation implementation
Job Overview:
As a Technical Lead, you will be responsible for leading the design, development, and deployment of AI-powered Edtech solutions. You will mentor a team of engineers, collaborate with data scientists, and work closely with product managers to build scalable and efficient AI systems. The ideal candidate should have 8-10 years of experience in software development, machine learning, AI use case development, and product creation, along with strong expertise in cloud-based architectures.
Key Responsibilities:
AI Tutor & Simulation Intelligence
- Architect the AI intelligence layer that drives contextual tutoring, retrieval-based reasoning, and fact-grounded explanations.
- Build RAG (retrieval-augmented generation) pipelines and integrate verified academic datasets from textbooks and internal course notes.
- Connect the AI Tutor with the Simulation Lab, enabling dynamic feedback — the system should read experiment results, interpret them, and explain why outcomes occur.
- Ensure AI responses remain transparent, syllabus-aligned, and pedagogically accurate.
Platform & System Architecture
- Lead the development of a modular, full-stack platform unifying courses, explainers, AI chat, and simulation windows in a single environment.
- Design microservice architectures with API bridges across content systems, AI inference, user data, and analytics.
- Drive performance, scalability, and platform stability — every millisecond and every click should feel seamless.
Reliability, Security & Analytics
- Establish system observability and monitoring pipelines (usage, engagement, AI accuracy).
- Build frameworks for ethical AI, ensuring transparency, privacy, and student safety.
- Set up real-time learning analytics to measure comprehension and identify concept gaps.
Leadership & Collaboration
- Mentor and elevate engineers across backend, ML, and front-end teams.
- Collaborate with the academic and product teams to translate physics pedagogy into engineering precision.
- Evaluate and integrate emerging tools — multi-modal AI, agent frameworks, explainable AI — into the product roadmap.
Qualifications & Skills:
- 8–10 years of experience in software engineering, ML systems, or scalable AI product builds.
- Proven success leading cross-functional AI/ML and full-stack teams through 0→1 and scale-up phases.
- Expertise in cloud architecture (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
- Experience designing microservices and API ecosystems for high-concurrency platforms.
- Strong knowledge of LLM fine-tuning, RAG pipelines, and vector databases (Pinecone, Weaviate, etc.).
- Demonstrated ability to work with educational data, content pipelines, and real-time systems.
Bonus Skills (Nice to Have):
- Experience with multi-modal AI models (text, image, audio, video).
- Knowledge of AI safety, ethical AI, and explainability techniques.
- Prior work in AI-powered automation tools or AI-driven SaaS products.

Global digital transformation solutions provider.
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept in ETL tools like Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
Additional Comments:
# of Resources: 22
Role(s): Technical Role
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the 1st week of Dec 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
Design, build, and maintain scalable data pipelines using Databricks and PySpark.
Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
Ensure data quality, performance, and reliability across data workflows.
Participate in code reviews, data architecture discussions, and performance optimization initiatives.
Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
Key Skills:
Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
Excellent problem-solving, communication, and collaboration skills.
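The window-function proficiency listed above can be demonstrated with a small runnable example using Python's bundled `sqlite3` (window functions need SQLite >= 3.25, which recent Python builds ship). The table and column names are made up for illustration:

```python
import sqlite3

# In-memory database with a toy sales table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("south", 100), ("south", 300), ("north", 200)],
)

# Per-region running total: SUM(...) OVER (PARTITION BY ... ORDER BY ...).
rows = con.execute(
    """
    SELECT region, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY amount
                             ROWS UNBOUNDED PRECEDING) AS running_total
    FROM sales
    ORDER BY region, amount
    """
).fetchall()
for row in rows:
    print(row)
```

The same `PARTITION BY` / frame-clause pattern carries over directly to Databricks SQL and Redshift, only at much larger scale.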
Skills: Databricks, PySpark & Python, SQL, AWS Services
Must-Haves
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
Notice period - Immediate to 15 days
Location: Bangalore
We are seeking a hands-on eCommerce Analytics & Insights Lead to help establish and scale our newly launched eCommerce business. The ideal candidate is highly data-savvy, understands eCommerce deeply, and can lead KPI definition, performance tracking, insights generation, and data-driven decision-making.
You will work closely with cross-functional teams—Buying, Marketing, Operations, and Technology—to build dashboards, uncover growth opportunities, and guide the evolution of our online channel.
Key Responsibilities
Define & Monitor eCommerce KPIs
- Set up and track KPIs across the customer journey: traffic, conversion, retention, AOV/basket size, repeat rate, etc.
- Build KPI frameworks aligned with business goals.
Data Tracking & Infrastructure
- Partner with marketing, merchandising, operations, and tech teams to define data tracking requirements.
- Collaborate with eCommerce and data engineering teams to ensure data quality, completeness, and availability.
Dashboards & Reporting
- Build dashboards and automated reports to track:
- Overall site performance
- Category & product performance
- Marketing ROI and acquisition effectiveness
Insights & Performance Diagnosis
Identify trends, opportunities, and root causes of underperformance in areas such as:
- Product availability & stock health
- Pricing & promotions
- Checkout funnel drop-offs
- Customer retention & cohort behavior
- Channel acquisition performance
Conduct:
- Cohort analysis
- Funnel analytics
- Customer segmentation
- Basket analysis
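The funnel analysis mentioned above reduces to counting unique users at each stage and dividing successive counts. A minimal pandas sketch, with invented event data and stage names:

```python
import pandas as pd

# Hypothetical clickstream events: one row per (user, stage) occurrence.
events = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 1, 2, 3, 1],
    "stage":   ["visit", "visit", "visit", "visit",
                "add_to_cart", "add_to_cart", "add_to_cart", "checkout"],
})

funnel_order = ["visit", "add_to_cart", "checkout"]

# Unique users reaching each stage, in funnel order.
counts = events.groupby("stage")["user_id"].nunique().reindex(funnel_order)

# Step-to-step conversion rate (first stage has no predecessor).
conversion = counts / counts.shift(1)

print(counts.to_dict())  # {'visit': 4, 'add_to_cart': 3, 'checkout': 1}
```

From here, drop-off diagnosis is just a matter of slicing `events` by channel, cohort, or device before recomputing the same counts.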
Data-Driven Growth Initiatives
- Propose and evaluate experiments, optimization ideas, and quick wins.
- Help business teams interpret KPIs and take informed decisions.
Required Skills & Experience
- 2–5 years experience in eCommerce analytics (grocery retail experience preferred).
- Strong understanding of eCommerce metrics and analytics frameworks (Traffic → Conversion → Repeat → LTV).
- Proficiency with tools such as:
- Google Analytics / GA4
- Excel
- SQL
- Power BI or Tableau
- Experience working with:
- Digital marketing data
- CRM and customer data
- Product/category performance data
- Ability to convert business questions into analytical tasks and produce clear, actionable insights.
- Familiarity with:
- Customer journey mapping
- Funnel analysis
- Basket and behavioral analysis
- Comfortable working in fast-paced, ambiguous, and build-from-scratch environments.
- Strong communication and stakeholder management skills.
- Strong technical capability in at least one of SQL or PySpark.
Good to Have
- Experience with eCommerce platforms (Shopify, Magento, Salesforce Commerce, etc.).
- Exposure to A/B testing, recommendation engines, or personalization analytics.
- Knowledge of Python/R for deeper analytics (optional).
- Experience with tracking setup (GTM, event tagging, pixel/event instrumentation).
Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems
Criteria:
- Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
- Must be strong in one core backend language: Node.js, Go, Java, or Python.
- Deep understanding of distributed systems, caching, high availability, and microservices architecture.
- Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
- Strong command over system design, data structures, performance tuning, and scalable architecture.
- Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.
Description
What This Role Is All About
We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.
What You’ll Own
● Architect backend systems that handle India-scale traffic without breaking a sweat.
● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.
● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.
● Partner with Product, Data, and Infra to ship features that are reliable and delightful.
● Set high engineering standards—clean architecture, performance, automation, and testing.
● Lead discussions on system design, performance tuning, and infra choices.
● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.
● Identify gaps proactively and push for improvements instead of waiting for fires.
What Makes You a Great Fit
● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.
● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.
● Deep understanding of distributed systems, caching, high-availability, and microservices.
● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.
● You think data structures and system design are not interviews — they’re daily tools.
● You write code that future-you won’t hate.
● Strong communication and a “let’s figure this out” attitude.
Bonus Points If You Have
● Built or scaled consumer apps with millions of DAUs.
● Experimented with event-driven architecture, streaming systems, or real-time pipelines.
● Love startups and don’t mind wearing multiple hats.
● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.
Why This Company Might Be Your Best Move
● Work on products used by real people every single day.
● Ownership from day one—your decisions will shape our core architecture.
● No unnecessary hierarchy; direct access to founders and senior leadership.
● A team that cares about quality, speed, and impact in equal measure.
● Build for Bharat — complex constraints, huge scale, real impact.

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for candidates with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
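A first EDA pass over a single metric can be as small as the sketch below. The daily order counts are invented for illustration, and only the standard library is used; a real pass would typically be SQL or pandas:

```python
import statistics

# Invented daily order counts for illustration.
daily_orders = [120, 135, 128, 300, 132, 127, 131]

summary = {
    "n": len(daily_orders),
    "mean": round(statistics.mean(daily_orders), 1),
    "median": statistics.median(daily_orders),
    "stdev": round(statistics.stdev(daily_orders), 1),
}

# Flag values more than two standard deviations from the mean.
mu = statistics.mean(daily_orders)
sigma = statistics.stdev(daily_orders)
outliers = [x for x in daily_orders if abs(x - mu) > 2 * sigma]

print(summary)
print(outliers)
```

Even this crude check surfaces the spike (300) that would prompt the "why" questions EDA exists to answer.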
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
******
Notice period - 0 to 15 days (Max 30 Days)
Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT / Data Science
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST
To design, build, and optimize scalable data infrastructure and pipelines that enable efficient data collection, transformation, and analysis across the organization. The Senior Data Engineer will play a key role in driving data architecture decisions, ensuring data quality and availability, and empowering analytics, product, and engineering teams with reliable, well-structured data to support business growth and strategic decision-making.
Responsibilities:
• Develop and maintain SQL and NoSQL databases, ensuring high performance, scalability, and reliability.
• Collaborate with the API team and Data Science team to build robust data pipelines and automations.
• Work closely with stakeholders to understand database requirements and provide technical solutions.
• Optimize database queries and tune performance to enhance overall system efficiency.
• Implement and maintain data security measures, including access controls and encryption.
• Monitor database systems and troubleshoot issues proactively to ensure uninterrupted service.
• Develop and enforce data quality standards and processes to maintain data integrity.
• Create and maintain documentation for database architecture, processes, and procedures.
• Stay updated with the latest database technologies and best practices to drive continuous improvement.
• Expertise in SQL queries and stored procedures, with the ability to optimize and fine-tune complex queries for performance and efficiency.
• Experience with monitoring and visualization tools such as Grafana to monitor database performance and health.
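As a concrete illustration of the query-tuning responsibility, the sketch below uses an in-memory SQLite database as a stand-in for a production RDBMS; the table and index names are invented:

```python
import sqlite3

# In-memory SQLite stands in for a production RDBMS; the tuning idea carries over.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

# A covering index on (customer_id, total) answers the query from the
# index alone, avoiding the full scan.
conn.execute("CREATE INDEX idx_orders_cust ON orders (customer_id, total)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()

print(plan_before)
print(plan_after)
```

On MySQL or PostgreSQL the equivalent check is `EXPLAIN` / `EXPLAIN ANALYZE`; the plan should move from a full table scan to an index lookup.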
Requirements:
• 4+ years of experience in data engineering, with a focus on large-scale data systems.
• Proven experience designing data models and access patterns across SQL and NoSQL ecosystems.
• Hands-on experience with technologies like PostgreSQL, DynamoDB, S3, GraphQL, or vector databases.
• Proficient in SQL stored procedures, with extensive expertise in MySQL schema design, query optimization, and resolvers, along with hands-on experience in building and maintaining data warehouses.
• Strong programming skills in Python or JavaScript, with the ability to write efficient, maintainable code.
• Familiarity with distributed systems, data partitioning, and consistency models.
• Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and debugging production bottlenecks.
• Deep understanding of cloud infrastructure (preferably AWS), including networking, IAM, and cost optimization.
• Prior experience building multi-tenant systems with strict performance and isolation guarantees.
• Excellent communication and collaboration skills to influence cross-functional technical decisions.
We are seeking an experienced Data Engineer with a strong background in Databricks, Python, Spark/PySpark and SQL to design, develop, and optimize large-scale data processing applications. The ideal candidate will build scalable, high-performance data engineering solutions and ensure seamless data flow across cloud and on-premise platforms.
Key Responsibilities:
- Design, develop, and maintain scalable data processing applications using Databricks, Python, and PySpark/Spark.
- Write and optimize complex SQL queries for data extraction, transformation, and analysis.
- Collaborate with data engineers, data scientists, and other stakeholders to understand business requirements and deliver high-quality solutions.
- Ensure data integrity, performance, and reliability across all data processing pipelines.
- Perform data analysis and implement data validation to ensure high data quality.
- Implement and manage CI/CD pipelines for automated testing, integration, and deployment.
- Contribute to continuous improvement of data engineering processes and tools.
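A minimal sketch of the data-validation responsibility above, using invented rules and rows; on Databricks the same checks would typically run over a Spark DataFrame rather than plain Python lists:

```python
# Invented sample rows; the comments mark the quality issues each one carries.
rows = [
    {"order_id": 1, "amount": 120.0, "country": "IN"},
    {"order_id": 2, "amount": -5.0,  "country": "IN"},   # fails range check
    {"order_id": 2, "amount": 80.0,  "country": "US"},   # duplicate key
    {"order_id": 3, "amount": None,  "country": "IN"},   # missing value
]

def validate(rows):
    """Return a dict of rule name -> list of offending order_ids."""
    issues = {"null_amount": [], "negative_amount": [], "duplicate_id": []}
    seen = set()
    for r in rows:
        if r["amount"] is None:
            issues["null_amount"].append(r["order_id"])
        elif r["amount"] < 0:
            issues["negative_amount"].append(r["order_id"])
        if r["order_id"] in seen:
            issues["duplicate_id"].append(r["order_id"])
        seen.add(r["order_id"])
    return issues

print(validate(rows))
# -> {'null_amount': [3], 'negative_amount': [2], 'duplicate_id': [2]}
```

In a pipeline, a non-empty issues list would typically fail the run or quarantine the offending rows before downstream consumption.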
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven experience as a Databricks developer, with strong expertise in Python, SQL, and Spark/PySpark.
- Strong proficiency in SQL, including working with relational databases and writing optimized queries.
- Solid programming experience in Python, including data processing and automation.
Position Responsibilities:
- Design and develop integration and automation solutions based on technical specifications.
- Support testing activities, including integration testing, end-to-end (business process) testing, and UAT.
- Be aware of CI/CD, engineering best practices, and the SDLC process.
- Maintain an excellent understanding of all existing integrations and automations.
- Understand product integration requirements and solve them right, with solutions that are scalable, performant, and resilient.
- Develop using the TDD methodology; apply appropriate design methodologies and coding standards.
- Conduct code reviews; be quick at debugging.
- Deconstruct complex issues and resolve them.
- Work with stakeholders/customers, synthesise business requirements, and suggest the best integration approaches (process analyst).
- Suggest, own, and adapt to new technical frameworks/solutions, and implement continuous process improvements for better delivery.
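To illustrate the TDD bullet above: a hypothetical retry helper for flaky integration calls, where the assertions at the bottom play the role of the tests written first, and the function body is the minimal implementation that makes them pass:

```python
# TDD sketch: the assertions below were (notionally) written first and drive
# the implementation of a tiny retry helper for integration calls.
def retry(fn, attempts=3):
    """Call fn, retrying on ConnectionError up to `attempts` times."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except ConnectionError as exc:  # retry only transient errors
            last_exc = exc
    raise last_exc

calls = {"n": 0}
def flaky():
    # Simulated integration endpoint that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

assert retry(flaky, attempts=3) == "ok"  # succeeds on the third call
assert calls["n"] == 3
```

In practice these assertions would live in a pytest or unittest suite and be run in the CI/CD pipeline on every change.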
Qualifications:
- A minimum of 7-9 years of experience in developing integration/automation solutions or related experience
- 3-4 years of experience in a technical architect or lead role
- Strong working experience in Python is preferred
- Good understanding of integration concepts, methodologies, and technologies
- Good communication and presentation skills; strong interpersonal skills, with the ability to convey and relate ideas to others and work collaboratively to get things done.
Job Description – Technical Project Manager
Job Title: Technical Project Manager
Location: Bhopal / Bangalore (On-site)
Experience Required: 7+ Years
Industry: Fintech / SaaS / Software Development
Role Overview
We are looking for a Technical Project Manager (TPM) who can bridge the gap between management and developers. The TPM will manage Android, Frontend, and Backend teams, ensure smooth development processes, track progress, evaluate output quality, resolve technical issues, and deliver timely reports.
Key Responsibilities
Project & Team Management
- Manage daily tasks for Android, Frontend, and Backend developers
- Conduct daily stand-ups, weekly planning, and reviews
- Track progress, identify blockers, and ensure timely delivery
- Maintain sprint boards, task estimations, and timelines
Technical Requirement Translation
- Convert business requirements into technical tasks
- Communicate requirements clearly to developers
- Create user stories, flow diagrams, and PRDs
- Ensure requirements are understood and implemented correctly
Quality & Build Review
- Validate build quality, UI/UX flow, functionality
- Check API integrations, errors, performance issues
- Ensure coding practices and architecture guidelines are followed
- Perform preliminary QA before handover to testing or clients
Issue Resolution
- Identify development issues early
- Coordinate with developers to fix bugs
- Escalate major issues to founders with clear insights
Reporting & Documentation
- Daily/weekly reports to management
- Sprint documentation, release notes
- Maintain project documentation & version control processes
Cross-Team Communication
- Act as the single point of contact for management
- Align multiple tech teams with business goals
- Coordinate with HR and operations for resource planning
Required Skills
- Strong understanding of Android, Web (Frontend/React), Backend development flows
- Knowledge of APIs, Git, CI/CD, basic testing
- Experience with Agile/Scrum methodologies
- Ability to review builds and suggest improvements
- Strong documentation skills (Jira, Notion, Trello, Asana)
- Excellent communication & leadership
- Ability to handle pressure and multiple projects
Good to Have
- Prior experience in Fintech projects
- Basic knowledge of UI/UX
- Experience in preparing FSD/BRD/PRD
- QA experience or understanding of test cases
Salary Range: 9 to 12 LPA
- 5+ years full-stack development
- Proficiency in AWS cloud-native development
- Experience with microservices & async architectures
- Strong TypeScript proficiency
- Strong Python proficiency
- React.js expertise
- Next.js expertise
- PostgreSQL + PostGIS experience
- GraphQL development experience
- Prisma ORM experience
- Experience in B2C product development (Retail/eCommerce)
- Looking for candidates based out of Bangalore only
CTC: up to 40 LPA
If interested, kindly share your updated resume at 82008 31681
ML Intern
Hyperworks Imaging is a cutting-edge technology company based in Bengaluru, India, operating since 2016. Our team uses the latest advances in deep learning and multi-modal machine learning techniques to solve diverse real-world problems. We are rapidly growing, working with multiple companies around the world.
JOB OVERVIEW
We are seeking a talented and results-oriented ML Intern to join our growing team in India. In this role, you will be responsible for developing and implementing new advanced ML algorithms and AI agents for creating assistants of the future.
The ideal candidate will work on the complete ML pipeline, from the extraction, transformation, and analysis of data to the development of novel ML algorithms. The candidate will implement the latest research papers and work closely with various stakeholders to ensure data-driven decisions and integrate the solutions into a robust ML pipeline.
RESPONSIBILITIES:
- Create AI agents using the Model Context Protocol (MCP), Claude Code, DSPy, etc.
- Develop custom evals for AI agents.
- Build and maintain ML pipelines
- Optimize and evaluate ML models to ensure accuracy and performance.
- Define system requirements and integrate ML algorithms into cloud based workflows.
- Write clean, well-documented, and maintainable code following best practices
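A custom eval, at its simplest, is labelled cases plus a scoring loop. The sketch below uses a canned stand-in `agent` (invented here); a real agent would call an LLM or tools:

```python
def agent(question: str) -> str:
    # Canned stand-in for a real AI agent, for illustration only.
    canned = {"2+2": "4", "capital of France": "Paris"}
    return canned.get(question, "unknown")

def run_evals(agent, cases):
    """cases: list of (input, expected). Returns (accuracy, failures)."""
    failures = []
    for question, expected in cases:
        got = agent(question)
        if got != expected:
            failures.append((question, expected, got))
    accuracy = 1 - len(failures) / len(cases)
    return accuracy, failures

cases = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("capital of Japan", "Tokyo"),
]
accuracy, failures = run_evals(agent, cases)
print(accuracy, failures)  # two of the three invented cases pass
```

Real eval suites layer richer scorers (fuzzy match, LLM-as-judge, tool-call checks) on the same run-and-score skeleton.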
REQUIREMENTS:
- 1-3+ years of experience in data science, machine learning, or a similar role.
- Demonstrated expertise with Python, PyTorch, and TensorFlow.
- Graduated/Graduating with B.Tech/M.Tech/PhD degrees in Electrical Engg./Electronics Engg./Computer Science/Maths and Computing/Physics
- Has done coursework in Linear Algebra, Probability, Image Processing, Deep Learning and Machine Learning.
- Has demonstrated experience with the Model Context Protocol (MCP), DSPy, AI agents, MLOps, etc.
WHO CAN APPLY:
Only those candidates will be considered who:
- have relevant skills and interests
- can commit full time
- can show prior work and deployed projects
- can start immediately
Please note that we will reach out to ONLY those applicants who satisfy the criteria listed above.
SALARY DETAILS: Commensurate with experience.
JOINING DATE: Immediate
JOB TYPE: Full-time