PyTorch Jobs in Bangalore (Bengaluru)
Apply to 41+ PyTorch job openings in Bangalore (Bengaluru) on CutShort.io. Explore the latest PyTorch opportunities across top companies like Google, Amazon, and Adobe.
Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Build and integrate GenAI agents.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments (a minimal serving sketch follows this list).
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
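To make the API-serving responsibility above concrete, here is a minimal, hedged sketch of exposing a trained classifier through FastAPI; the artifact path, route, and field names are hypothetical and not taken from the posting.

```python
# Minimal sketch: serving a document classifier as a REST API with FastAPI.
# Assumes a scikit-learn text-classification pipeline saved to "model.joblib"
# (hypothetical path); route and field names are illustrative only.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="document-classifier")
model = joblib.load("model.joblib")  # e.g. a TfidfVectorizer + LogisticRegression pipeline

class Document(BaseModel):
    text: str

@app.post("/classify")
def classify(doc: Document) -> dict:
    # predict() expects an iterable of documents; return the single predicted label
    label = model.predict([doc.text])[0]
    return {"label": str(label)}

# Run locally with: uvicorn serve:app --port 8000
```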
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics.
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production.
- Strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and in building RAG (Retrieval-Augmented Generation) pipelines (an adapter-tuning sketch follows this list).
- Experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
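As a hedged illustration of the LoRA/QLoRA criterion above, the sketch below attaches LoRA adapters to a small causal LM with Hugging Face PEFT; the base checkpoint and target modules are assumptions, and data loading and training are omitted.

```python
# Minimal sketch: attaching LoRA adapters to a causal LM with Hugging Face PEFT.
# The base model name and target_modules are assumptions; training is omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-125m"  # small checkpoint chosen only to keep the sketch lightweight
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    r=8,                                   # low-rank dimension
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (model-specific)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the adapter weights are trainable

# From here, the wrapped model can be passed to transformers.Trainer as usual.
```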
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
Role & Responsibilities
The company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines (a retrieval sketch follows this list)
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
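For the RAG responsibility referenced above, here is a minimal, hedged sketch of the retrieval step using sentence-transformers embeddings and a FAISS index; the corpus, encoder checkpoint, and query are illustrative, and a production pipeline would typically use a managed vector store (e.g., Weaviate or PGVector) plus an LLM for generation.

```python
# Minimal sketch of the retrieval step in a RAG pipeline:
# embed a small corpus, index it with FAISS, and fetch the top-k passages for a query.
# Corpus and model name are illustrative; generation with an LLM is omitted.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "The borrower's annual income is verified against the last two salary slips.",
    "Collateral valuation reports must be no older than six months.",
    "KYC documents include PAN, Aadhaar, and address proof.",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")

emb = encoder.encode(corpus, normalize_embeddings=True)  # shape (n_docs, dim)
index = faiss.IndexFlatIP(emb.shape[1])                  # inner product == cosine on normalized vectors
index.add(np.asarray(emb, dtype="float32"))

query = "Which KYC documents are required?"
q = encoder.encode([query], normalize_embeddings=True)
scores, ids = index.search(np.asarray(q, dtype="float32"), k=2)

context = "\n".join(corpus[i] for i in ids[0])
print(context)  # this retrieved context would be prepended to the LLM prompt
```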
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
Job Details
- Job Title: Lead II – Software Engineering – AI, NLP, Python, Data Science
- Industry: Technology
- Domain: Information Technology (IT)
- Experience Required: 7-9 years
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Job Description:
Role Proficiency:
Act creatively to develop applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and the reuse of proven solutions. Account for others' development activities and assist the Project Manager in day-to-day project execution.
Additional Comments:
Mandatory Skills: Data Science
Skills to Evaluate: AI, Gen AI, RAG, Data Science
Experience: 8 to 10 Years
Location: Bengaluru
Job Description
Job Title: AI Engineer
Mandatory Skills: Artificial Intelligence, Natural Language Processing, Python, Data Science
Position: AI Engineer – LLM & RAG Specialization
Company Name: Sony India Software Centre
About the role:
We are seeking a highly skilled AI Engineer with 8–10 years of experience to join our innovation-driven team. This role focuses on the design, development, and deployment of advanced enterprise-scale Large Language Model (eLLM) and Retrieval-Augmented Generation (RAG) solutions. You will work on end-to-end AI pipelines, from data processing to cloud deployment, delivering impactful solutions that enhance Sony’s products and services.
Key Responsibilities:
- Design, implement, and optimize LLM-powered applications, ensuring high performance and scalability for enterprise use cases.
- Develop and maintain RAG pipelines, including vector database integration (e.g., Pinecone, Weaviate, FAISS) and embedding model optimization.
- Deploy, monitor, and maintain AI/ML models in production, ensuring reliability, security, and compliance.
- Collaborate with product, research, and engineering teams to integrate AI solutions into existing applications and workflows.
- Research and evaluate the latest LLM and AI advancements, recommending tools and architectures for continuous improvement.
- Preprocess, clean, and engineer features from large datasets to improve model accuracy and efficiency.
- Conduct code reviews and enforce AI/ML engineering best practices.
- Document architecture, pipelines, and results; present findings to both technical and business stakeholders.
Requirements:
- 8–10 years of professional experience in AI/ML engineering, with at least 4+ years in LLM development and deployment.
- Proven expertise in RAG architectures, vector databases, and embedding models.
- Strong proficiency in Python; familiarity with Java, R, or other relevant languages is a plus.
- Experience with AI/ML frameworks (PyTorch, TensorFlow, etc.) and relevant deployment tools.
- Hands-on experience with cloud-based AI platforms such as AWS SageMaker, AWS Q Business, AWS Bedrock, or Azure Machine Learning.
- Experience in designing, developing, and deploying agentic AI systems, with a focus on creating autonomous agents that can reason, plan, and execute tasks to achieve specific goals.
- Understanding of security concepts in AI systems, including vulnerabilities and mitigation strategies.
- Solid knowledge of data processing, feature engineering, and working with large-scale datasets.
- Experience in designing and implementing AI-native applications and agentic workflows using the Model Context Protocol (MCP) is nice to have.
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication skills with the ability to explain complex AI concepts to diverse audiences.
Day-to-day responsibilities:
- Design and deploy AI-driven solutions to address specific security challenges, such as threat detection, vulnerability prioritization, and security automation.
- Optimize LLM-based models for various security use cases, including chatbot development for security awareness or automated incident response.
- Implement and manage RAG pipelines for enhanced LLM performance.
- Integrate AI models with existing security tools, including Endpoint Detection and Response (EDR), Threat and Vulnerability Management (TVM) platforms, and Data Science/Analytics platforms; this will involve working with APIs and understanding data flows.
- Develop and implement metrics to evaluate the performance of AI models.
- Monitor deployed models for accuracy and performance, and retrain as needed.
- Adhere to security best practices and ensure that all AI solutions are developed and deployed securely, considering data privacy and compliance requirements.
- Work closely with other team members to understand security requirements and translate them into AI-driven solutions.
- Communicate effectively with stakeholders, including senior management, to present project updates and findings.
- Stay up to date with the latest advancements in AI/ML and security, and identify opportunities to leverage new technologies to improve our security posture.
- Maintain thorough documentation of AI models, code, and processes.
What We Offer:
- Opportunity to work on cutting-edge LLM and RAG projects with global impact.
- A collaborative environment fostering innovation, research, and skill growth.
- Competitive salary, comprehensive benefits, and flexible work arrangements.
- The chance to shape AI-powered features in Sony’s next-generation products.
- Ability to function in an environment where the team is virtual and geographically dispersed.
Education Qualification: Graduate
Skills: AI, NLP, Python, Data science
Must-Haves
Skills
AI, NLP, Python, Data science
Notice Period (NP): Immediate – 30 Days
About ThoughtClan Technologies
ThoughtClan is a niche, technology-focused software company, 100+ people strong, that builds complex enterprise-scale web- and mobile-oriented digitalization and data science-related projects. They operate in both the IT Services and Product Development spaces. They focus on applying technology to enable businesses to function better. ThoughtClan is a team of highly specialized technical people and is growing rapidly.
They have expertise in developing projects related to:
- Data Science — including Image Analytics, Video Analytics, Building AI/ML-based Prediction Models, etc.
- Blockchain — based Cryptocurrency and NFT projects.
- Enterprise-Scale Greenfield Web and Mobile Application Development, Integration, eCommerce, Marketing, and Content Management projects.
We are looking for a Data Scientist to join our fast-growing team.
The candidate must have:
- 3–4 years’ experience in Data Modeling in Python and AI/ML.
- Hands-on experience with Machine Learning and Deep Learning techniques and tools, including RAG, LLMs, Agentic AI, LangChain, LangGraph, PyTorch, OpenCV, Pandas, scikit-learn, CrewAI, Autogen, or AI chatbots (minimum 1–1.5 years of experience or 2 projects). Proven ability to use and create algorithms and run simulations. A technical understanding of microservice architectures is a plus.
- Good knowledge of Azure Platform for deployment.
- Good knowledge of web frameworks such as Flask.
- Hands-on knowledge of a NoSQL database (MariaDB, MongoDB, etc.).
- Experience in Visualization of Data using tools like D3.js, Plotly, Power BI, and Tableau. Experience in visualizing large data is a plus.
- Experience using a variety of data mining/data analysis methods with the ability to drive business results using data-based insights and work with large data sets.
- Comfortable working with a wide range of stakeholders and functional teams.
- Good designing skills and communication skills.
- Good knowledge of front-end technologies (HTML, CSS, etc.) would be an advantage.
Role: Data Scientist (Python + R Expertise)
Exp: 8 -12 Years
CTC: up to 30 LPA
Required Skills & Qualifications:
- 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
- Strong expertise in Python and R for data analysis, modeling, and visualization.
- Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
- Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
- Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
- Experience with NLP, time series forecasting, or deep learning projects.
- Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
- Experience working in product or data-driven organizations.
- Knowledge of MLOps and model lifecycle management is a plus.
If interested kindly share your updated resume on 82008 31681
Role: Sr. Data Scientist
Exp: 4 -8 Years
CTC: up to 28 LPA
Technical Skills:
o Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
o Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
o Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
o Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
o Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
o Exposure to natural language processing (NLP) techniques is a plus.
Cloud & Infrastructure:
o Strong expertise in the Azure cloud ecosystem.
o Experience working in UNIX/Linux environments and using command-line tools for automation and scripting.
If interested kindly share your updated resume at 82008 31681
Role: Sr. Data Scientist
Exp: 4-8 Years
CTC: up to 25 LPA
Technical Skills:
● Strong programming skills in Python, with hands-on experience in deep learning frameworks like TensorFlow, PyTorch, or Keras.
● Familiarity with Databricks notebooks, MLflow, and Delta Lake for scalable machine learning workflows.
● Experience with MLOps best practices, including model versioning, CI/CD pipelines, and automated deployment.
● Proficiency in data preprocessing, augmentation, and handling large-scale image/video datasets.
● Solid understanding of computer vision algorithms, including CNNs, transfer learning, and transformer-based vision models (e.g., ViT).
● Exposure to natural language processing (NLP) techniques is a plus.
Educational Qualifications:
- B.E./B.Tech/M.Tech/MCA in Computer Science, Electronics & Communication, Electrical Engineering, or a related field.
- A master’s degree in Computer Science, Artificial Intelligence, or a specialization in Deep Learning or Computer Vision is highly preferred.
If interested share your resume on 82008 31681
About Rekise Marine
Rekise Marine is a startup focused on sustainably enhancing the utility of oceans through autonomous robotic infrastructure. Our efforts center on developing advanced autonomous technology for the maritime industry, serving both defense and commercial sectors globally. We specialize in creating autonomous vessels, both surface and underwater, as well as autonomous port infrastructure. Currently, we are building the flagship autonomous platform of the Indian Navy.
Key Responsibilities
* Develop AI/ML pipelines for sonar/LiDAR/Radar and camera-based perception.
* Design multi-sensor fusion frameworks for obstacle detection, seabed mapping, and environmental awareness.
* Implement real-time object detection, segmentation, and tracking for underwater missions.
* Enhance robustness of perception under low-light, turbidity, and noisy acoustic conditions.
* Apply model optimization techniques (quantization, pruning, distillation, real-time deployment tuning) to ensure efficiency on embedded and resource-constrained systems.
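As one hedged example of the model-optimization item directly above, the sketch below applies PyTorch post-training dynamic quantization to a toy network; a real perception model and calibration strategy would differ.

```python
# Minimal sketch: post-training dynamic quantization of Linear layers in PyTorch,
# one of several optimization techniques (quantization, pruning, distillation) named above.
# The toy MLP stands in for a real perception model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10)).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # quantize only Linear weights to int8
)

x = torch.randn(1, 256)
with torch.no_grad():
    print(quantized(x).shape)  # same interface, smaller weights, faster CPU inference
```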
Preferred Skills
* Experience with deep learning frameworks (PyTorch/TensorFlow).
* Strong knowledge of signal processing, computer vision, and sensor fusion.
* Proficiency in GPU acceleration, C++/Python, ROS/ROS2.
* Track record of published research or field deployments in underwater perception.
* Demonstrable full-stack experience with ML-based perception (data collection, annotation, training, and edge inference).
Good to Have
* Publications in top-tier robotics/AI conferences or journals (e.g., ICRA, IROS, ICAR, CVPR, ICCV, NeurIPS).
* Hands-on experience with real-world Autonomous Systems (AGV/AUV/UAV), field trials, and deployments.
Why You’ll Love Working With Us
A chance to be part of a leading marine robotics startup in India.
Competitive salary.
Flexible and innovative work environment promoting collaboration.
A role where your contributions make a real difference and drive impact.
Opportunities for travel in relation to customer interactions and field testing
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale distribution, manufacturing, and specialty retail.
About the Role
We are looking for a highly motivated Innovation Engineer to join our CTO Office and drive the exploration, prototyping, and adoption of next-generation technologies. This role offers a unique opportunity to work at the forefront of AI/ML, Generative AI (Gen AI), Large Language Models (LLMs), Vertex AI, MCP, vector databases, AI search, agentic AI, and automation.
As an Innovation Engineer, you will be responsible for identifying emerging technologies, building proof-of-concepts (PoCs), and collaborating with cross-functional teams to define the future of AI-driven solutions. Your work will directly influence the company’s technology strategy and help shape disruptive innovations.
Key Responsibilities
- Research & Implementation: Stay ahead of industry trends, evaluate emerging AI/ML technologies, and prototype novel solutions in areas like Gen AI, vector search, AI agents, Vertex AI, MCP, and automation.
- Proof-of-Concept Development: Rapidly build, test, and iterate PoCs to validate new technologies for potential business impact.
- AI/ML Engineering: Design and develop AI/ML models, AI agents, LLMs, and intelligent search capabilities leveraging vector embeddings.
- Vector & AI Search: Explore vector databases and optimize retrieval-augmented generation (RAG) workflows.
- Automation & AI Agents: Develop autonomous AI agents and automation frameworks to enhance business processes.
- Collaboration & Thought Leadership: Work closely with software developers and product teams to integrate innovations into production-ready solutions.
- Innovation Strategy: Contribute to the technology roadmap, patents, and research papers to establish leadership in emerging domains.
Required Qualifications
- 4–10 years of experience in AI/ML, software engineering, or a related field.
- Strong hands-on expertise in Python, TensorFlow, PyTorch, LangChain, Hugging Face, OpenAI APIs, Claude, Gemini, VertexAI, MCP.
- Experience with LLMs, embeddings, AI search, vector databases (e.g., Pinecone, FAISS, Weaviate, PGVector), MCP, and agentic AI (Vertex AI, Autogen, ADK).
- Familiarity with cloud platforms (AWS, Azure, GCP) and AI/ML infrastructure.
- Strong problem-solving skills and a passion for innovation.
- Ability to communicate complex ideas effectively and work in a fast-paced, experimental environment.
Preferred Qualifications
- Experience with multi-modal AI (text, vision, audio), reinforcement learning, or AI security.
- Knowledge of data pipelines, MLOps, and AI governance.
- Contributions to open-source AI/ML projects or published research papers.
Why Join Us?
- Work on cutting-edge AI/ML innovations with the CTO Office.
- Influence the company’s future AI strategy and shape emerging technologies.
- Competitive compensation, growth opportunities, and a culture of continuous learning.
About Our Benefits
Unilog offers a competitive total rewards package including competitive salary, multiple medical, dental, and vision plans to meet all our employees’ needs, career development, advancement opportunities, annual merit, a generous time-off policy, and a flexible work environment.
Unilog is committed to building the best team and we are committed to fair hiring practices where we hire people for their potential and advocate for diversity, equity, and inclusion. As such, we do not discriminate or make decisions based on your race, color, religion, creed, ancestry, sex, national origin, age, disability, familial status, marital status, military status, veteran status, sexual orientation, gender identity, or expression, or any other protected class.
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX); a minimal export sketch follows this list.
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
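To illustrate the inference-optimization item above, here is a minimal, hedged sketch of exporting a PyTorch model to ONNX, typically the first step before TensorRT or ONNX Runtime deployment; the ResNet-18 backbone and file names are stand-ins rather than anything from the posting.

```python
# Minimal sketch: exporting a PyTorch model to ONNX as a first step toward
# ONNX Runtime / TensorRT deployment. ResNet-18 is a stand-in for a real CV model.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # example input that fixes the graph shapes

torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},  # allow variable batch size
)
print("exported resnet18.onnx")
```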
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.
Technical Expertise
- Advanced proficiency in Python
- Expertise in Deep Learning Frameworks: PyTorch and TensorFlow
- Experience with Computer Vision Models:
- YOLO (Object Detection) — see the sketch after this list
- UNet, Mask R-CNN (Segmentation)
- Deep SORT (Object Tracking)
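As a hedged illustration of the detection item above, the sketch below runs inference with the Ultralytics YOLO API; the checkpoint and image path are placeholders, and a Deep SORT tracker would consume these detections downstream.

```python
# Minimal sketch: running object detection with the Ultralytics YOLO API.
# Weights and image path are placeholders; a tracker would consume these boxes downstream.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # small pretrained checkpoint (downloaded on first use)
results = model("street_scene.jpg")   # hypothetical input image

for r in results:
    for box in r.boxes:
        cls_id = int(box.cls[0])
        conf = float(box.conf[0])
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{model.names[cls_id]} {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```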
Real-Time & Deployment Skills
- Real-time video analytics and inference optimization
- Model pipeline development using:
- Docker
- Git
- MLflow or similar tools
- Image processing proficiency: OpenCV, NumPy
- Deployment experience on Linux-based GPU systems and edge devices (Jetson Nano, Google Coral, etc.)
Professional Background
- 4+ years of experience in AI/ML, with a strong focus on Computer Vision and System-Level Design
- Educational qualification: B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or a related field
- Strong project portfolio or experience in production-level deployments
Key Responsibilities :
- Algorithm Development : Design and optimize computer vision and deep learning algorithms for 3D applications.
- Model Deployment : Set up an end-to-end deep learning pipeline for data ingestion, preparation, model training, validation, and deployment on edge devices, with optimization to meet customer requirements
- Research and Innovation : Prototype new solutions based on the latest advancements in AI, machine learning, and computer vision.
- Cross-Functional Collaboration : Integrate algorithms into 3D rendering systems and work closely with the team to meet project goals. Collaborate with hardware engineers to fine-tune models for power, latency, and throughput constraints
- Documentation : Maintain code quality and document solutions for easy reference.
Qualifications :
- Experience : 3+ years in computer vision, deep learning, and AI.
- Education : Bachelor's or Master's in Computer Science, Data Science, Electrical Engineering, or related field.
Technical Skills :
- Proficiency in C, C++, OpenGL, Objective-C, Swift, Python, PyTorch, TensorFlow, and OpenCV.
- Understanding of the depth and breadth of computer vision and deep learning algorithms.
- Knowledge of algorithms and data structures relevant to computer vision.
- Hands-on experience with NVIDIA platforms (IGX, Jetson, or Xavier) and NVIDIA SDKs (e.g., DeepStream, TensorRT, CUDA, TAO Toolkit).
- Familiarity with parallel architectures on GPUs is an added advantage.
- Knowledge of linear algebra, calculus, and statistics.
- Preferred : Knowledge of AR/VR, 3D vision, AWS (SageMaker, Lambda, EC2, S3, RDS), CI/CD, Terraform, Docker, and Kubernetes.
You will:
- Collaborate with the I-Stem Voice AI team and CEO to design, build and ship new agent capabilities
- Develop, test and refine end-to-end voice agent models (ASR, NLU, dialog management, TTS)
- Stress-test agents in noisy, real-world scenarios and iterate for improved robustness and low latency
- Research and prototype cutting-edge techniques (e.g. robust speech recognition, adaptive language understanding)
- Partner with backend and frontend engineers to seamlessly integrate AI components into live voice products
- Monitor agent performance in production, analyze failure cases, and drive continuous improvement
- Occasionally demo our Voice AI solutions at industry events and user forums
You are:
- An AI/Software Engineer with hands-on experience in speech-centric ML (ASR, NLU or TTS)
- Skilled in building and tuning transformer-based speech models and handling real-time audio pipelines
- Obsessed with reliability: you design experiments to push agents to their limits and root-cause every error
- A clear thinker who deconstructs complex voice interactions from first principles
- Passionate about making voice technology inclusive and accessible for diverse users
- Comfortable moving fast in a small team, yet dogged about code quality, testing and reproducibility
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization, with a mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon–Fri, in-office role with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization, with a mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon–Fri, in-office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js or Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka (a minimal producer sketch follows this list).
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
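For the Kafka pipeline responsibility flagged above, here is a minimal, hedged producer sketch using the kafka-python client; the broker address, topic, and event schema are assumptions and not part of the posting.

```python
# Minimal sketch: publishing JSON events to a Kafka topic with kafka-python.
# Broker address, topic, and event fields are assumptions; consumers would mirror this setup.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # serialize dicts to JSON bytes
)

event = {"user_id": 42, "action": "headset_paired"}
producer.send("device-events", value=event)
producer.flush()  # block until the message is actually delivered
```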
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js or Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java: Spring framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
We are seeking a Senior Data Scientist with hands-on experience in Generative AI (GenAI) and Large Language Models (LLM). The ideal candidate will have expertise in building, fine-tuning, and deploying LLMs, as well as managing the lifecycle of AI models through LLMOps practices. You will play a key role in driving AI innovation, developing advanced algorithms, and optimizing model performance for various business applications.
Key Responsibilities:
- Develop, fine-tune, and deploy Large Language Models (LLM) for various business use cases.
- Implement and manage the operationalization of LLMs using LLMOps best practices.
- Collaborate with cross-functional teams to integrate AI models into production environments.
- Optimize and troubleshoot model performance to ensure high accuracy and scalability.
- Stay updated with the latest advancements in Generative AI and LLM technologies.
Required Skills and Qualifications:
- Strong hands-on experience with Generative AI, LLMs, and NLP techniques.
- Proven expertise in LLMOps, including model deployment, monitoring, and maintenance.
- Proficiency in programming languages like Python and frameworks such as TensorFlow, PyTorch, or Hugging Face.
- Solid understanding of AI/ML algorithms and model optimization.
The Data Scientist is responsible for discovering the information hidden in vast amounts of data and helping us make smarter decisions to deliver better products. Your primary focus will be on applying Machine Learning and Generative AI techniques for data mining and statistical analysis, text analytics using NLP/LLMs, and building high-quality prediction systems integrated with our products. The ideal candidate should have a prior background in Generative AI, NLP (Natural Language Processing), and Computer Vision techniques, as well as experience working with current state-of-the-art Large Language Models (LLMs) and Computer Vision algorithms.
Job Responsibilities:
» Building models using the best available AI/ML technology.
» Leveraging your expertise in Generative AI, Computer Vision, Python, Machine Learning, and Data Science to develop cutting-edge solutions for our products.
» Integrating NLP techniques and utilizing LLMs in our products.
» Training/fine-tuning models with new or modified training datasets.
» Selecting features, building and optimizing classifiers using machine learning techniques.
» Conducting data analysis, curation, preprocessing, modelling, and post-processing to drive data-driven decision-making.
» Enhancing data collection procedures to include information that is relevant for building analytic systems
» Working understanding of cloud platforms (AWS).
» Collaborating with cross-functional teams to design and implement advanced AI models and algorithms.
» Involving in R&D activities to explore the latest advancements in AI technologies, frameworks, and tools.
» Documenting project requirements, methodologies, and outcomes for stakeholders.
Technical skills
Mandatory
» Minimum of 5 years of experience as Machine Learning Researcher or Data Scientist.
» Master's degree or Ph.D. (preferable) in Computer Science, Data Science, or a related field.
» Should have knowledge of and experience working on Deep Learning projects using CNNs, Transformers, and encoder-decoder architectures.
» Working experience with LLMs (Large Language Models) and their applications (e.g., tuning embedding models, data curation, prompt engineering, LoRA, etc.).
» Familiarity with LLM agents and related frameworks.
» Good programming skills in Python and experience with relevant libraries and frameworks (e.g., PyTorch and TensorFlow).
» Good applied statistics skills, such as distributions, statistical testing, regression, etc.
» Excellent understanding of machine learning and computer vision-based techniques and algorithms.
» Strong problem-solving abilities and a proactive attitude towards learning and adopting new technologies.
» Ability to work independently, manage multiple projects simultaneously, and collaborate effectively with diverse stakeholders.
Nice to have
» Exposure to financial research domain
» Experience with JIRA, Confluence
» Understanding of scrum and Agile methodologies
» Basic understanding of NoSQL databases, such as MongoDB, Cassandra
» Experience with data visualization tools, such as Grafana, ggplot, etc.
You will be part of the core engineering team that is working on developing AI/ML models, Algorithms, and Frameworks in the areas of Video Analytics, Business Intelligence, IoT Predictive Analytics.
For more information visit www.gyrus.ai
Candidate must have the following qualifications
- Engineering or Masters degree in CS, EC, EE or related domains
- Proficient in OpenCV
- Proficiency in Python programming
- Exposure to one of the AI platforms like TensorFlow, Caffe, or PyTorch
- Must have trained and deployed at least one fairly big AI model
- Exposure to AI models for Audio/Image/Video Analytics
- Exposure to one of the Cloud Computing platforms AWS/GCP
- Strong mathematical background with special emphasis towards Linear Algebra and Statistics
Are you passionate about pushing the boundaries of Artificial Intelligence and its applications in the software development lifecycle? Are you excited about building AI models that can revolutionize how developers ship, refactor, and onboard to legacy or existing applications faster? If so, Zevo.ai has the perfect opportunity for you!
As an AI Researcher/Engineer at Zevo.ai, you will play a crucial role in developing cutting-edge AI models using CodeBERT and codexGLUE to achieve our goal of providing an AI solution that supports developers throughout the sprint cycle. You will be at the forefront of research and development, harnessing the power of Natural Language Processing (NLP) and Machine Learning (ML) to revolutionize the way software development is approached.
Responsibilities:
- AI Model Development: Design, implement, and refine AI models utilizing CodeBERT and codexGLUE to comprehend codebases, facilitate code understanding, automate code refactoring, and enhance the developer onboarding process.
- Research and Innovation: Stay up-to-date with the latest advancements in NLP and ML research, identifying novel techniques and methodologies that can be applied to Zevo.ai's AI solution. Conduct experiments, perform data analysis, and propose innovative approaches to enhance model performance.
- Data Collection and Preparation: Collaborate with data engineers to identify, collect, and preprocess relevant datasets necessary for training and evaluating AI models. Ensure data quality, correctness, and proper documentation.
- Model Evaluation and Optimization: Develop robust evaluation metrics to measure the performance of AI models accurately. Continuously optimize and fine-tune models to achieve state-of-the-art results.
- Code Integration and Deployment: Work closely with software developers to integrate AI models seamlessly into Zevo.ai's platform. Ensure smooth deployment and monitor the performance of the deployed models.
- Collaboration and Teamwork: Collaborate effectively with cross-functional teams, including data scientists, software engineers, and product managers, to align AI research efforts with overall company objectives.
- Documentation: Maintain detailed and clear documentation of research findings, methodologies, and model implementations to facilitate knowledge sharing and future developments.
- Ethics and Compliance: Ensure compliance with ethical guidelines and legal requirements related to AI model development, data privacy, and security.
Requirements
- Educational Background: Bachelor's/Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A strong academic record with a focus on NLP and ML is highly desirable.
- Technical Expertise: Proficiency in NLP, Deep Learning, and experience with AI model development using frameworks like PyTorch or TensorFlow. Familiarity with CodeBERT and codexGLUE is a significant advantage.
- Programming Skills: Strong programming skills in Python and experience working with large-scale software projects.
- Research Experience: Proven track record of conducting research in NLP, ML, or related fields, demonstrated through publications, conference papers, or open-source contributions.
- Problem-Solving Abilities: Ability to identify and tackle complex problems related to AI model development and software engineering.
- Team Player: Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team-oriented environment.
- Passion for AI: Demonstrated enthusiasm for AI and its potential to transform software development practices.
If you are eager to be at the forefront of AI research, driving innovation and impacting the software development industry, join Zevo.ai's talented team of experts as an AI Researcher/Engineer. Together, we'll shape the future of the sprint cycle and revolutionize how developers approach code understanding, refactoring, and onboarding!
Requirements
Experience
- 5+ years of professional experience in implementing MLOps framework to scale up ML in production.
- Hands-on experience with Kubernetes, Kubeflow, MLflow, Sagemaker, and other ML model experiment management tools including training, inference, and evaluation.
- Experience in ML model serving (TorchServe, TensorFlow Serving, NVIDIA Triton inference server, etc.)
- Proficiency with ML model training frameworks (PyTorch, PyTorch Lightning, TensorFlow, etc.).
- Experience with GPU computing to do data and model training parallelism.
- Solid software engineering skills in developing systems for production.
- Strong expertise in Python.
- Building end-to-end data systems as an ML Engineer, Platform Engineer, or equivalent.
- Experience working with cloud data processing technologies (S3, ECR, Lambda, AWS, Spark, Dask, ElasticSearch, Presto, SQL, etc.).
- Having Geospatial / Remote sensing experience is a plus.
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Create pipelines for data ingestion, data transformation, and building, testing, and deploying machine learning models, as well as monitoring and maintaining the performance of these models in production (an MLflow-based sketch follows this list).
- Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
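One hedged way to realize the pipeline and monitoring responsibilities above with open-source tooling is MLflow tracking plus its model registry, sketched below; the experiment name, metrics, and registered model name are placeholders, and the SQLite backend is used only so the registry works locally.

```python
# Minimal sketch: logging a training run and registering the model with MLflow.
# Experiment name, params/metrics, and registry name are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("sqlite:///mlflow.db")  # local SQLite store so the model registry is usable
mlflow.set_experiment("demo-classifier")

X, y = load_iris(return_X_y=True)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # log_model stores the artifact and can also register it in the MLflow model registry
    mlflow.sklearn.log_model(model, "model", registered_model_name="demo-classifier")
```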
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with E2E MLOps systems like ClearML, Kubeflow, MLFlow etc.
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP
Key responsibilities include, but are not limited to:
Help identify and drive speed, performance, scalability, and reliability optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment on creating, maintaining, monitoring, and automating the overall solution deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments.
Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process, with a heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technologies (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science, or equivalent related field experience.
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building and running systems to drive high availability, performance, and operational improvements.
Excellent written and oral communication skills; the ability to ask pertinent questions and to assess, aggregate, and report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.
Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to multi-task and balance multiple priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.
Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
A leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS degree or equivalent experience in programming on enterprise or department servers or systems.
We at Thena are looking for a Machine Learning Engineer with 2-4 years of industry experience to join our team. The ideal candidate will be passionate about developing and deploying ML models that drive business value and have a strong background in ML Ops.
Responsibilities:
- Develop, fine-tune, and deploy ML models for B2B customer communication and collaboration use cases.
- Collaborate with cross-functional teams to define requirements, design models, and deploy them in production.
- Optimize model performance and accuracy through experimentation, iteration, and testing.
- Build and maintain ML infrastructure and tools to support model development and deployment.
- Stay up-to-date with the latest research and best practices in ML, and share knowledge with the team.
Qualifications:
- 2-4 years of industry experience in machine learning engineering, with a focus on natural language processing (NLP) and text classification models.
- Experience with ML Ops, including deploying and managing ML models in production environments.
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
- Experience with embeddings and building on top of LLMs (a small sketch follows this list).
- Strong problem-solving and analytical skills, with the ability to develop creative solutions to complex problems.
- Strong communication skills, with the ability to collaborate effectively with cross-functional teams.
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
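As a small, hedged illustration of the embeddings item above, the sketch below classifies customer messages by encoding them with sentence-transformers and fitting a linear classifier; the texts, labels, and encoder checkpoint are illustrative only.

```python
# Minimal sketch: embedding-based text classification for customer messages.
# Labels, texts, and the encoder checkpoint are illustrative only.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = [
    "The integration keeps timing out on our staging environment",
    "Can you share the invoice for last month?",
    "We would like to add five more seats to our plan",
]
labels = ["bug", "billing", "expansion"]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)                      # one dense vector per message
clf = LogisticRegression(max_iter=1000).fit(X, labels)

query = encoder.encode(["Please send the latest invoice"])
print(clf.predict(query)[0])                   # expected: "billing"
```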
Data Scientist
Cubera is a data company revolutionizing big data analytics and AdTech through data-share-value principles, wherein users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards Web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and matplotlib.
- Experience in TensorFlow, PyTorch, and/or R modelling.
- Ability to understand a business problem and translate and structure it into a data science problem.
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP, Mumbai, IN
Job Description
GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.
GormalOne is looking for a machine learning engineer to join its team. This collaborative yet dynamic role is suited to candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch or Tensorflow
Basic Qualifications
- B.Tech/B.E. in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience with tools and packages like TensorFlow, MLflow, and Airflow
- Experience with object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
Kindly note: Salary shall be commensurate with qualifications and experience
About the Role:
As a Speech Engineer, you will work on the development of on-device multilingual speech recognition systems.
- Apart from ASR, you will work on speech-focused research problems such as speech enhancement, voice analysis and synthesis, etc.
- You will be responsible for building the complete speech recognition pipeline, from data preparation to deployment on edge devices (see the inference sketch after this list).
- Reading, implementing, and improving baselines reported in leading research papers will be another key part of your daily life at Saarthi.
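For the pipeline item above, here is a minimal, hedged inference sketch using torchaudio's pretrained wav2vec 2.0 bundle with greedy CTC decoding; the audio file is a placeholder, and an on-device multilingual system would use custom models and further optimization.

```python
# Minimal sketch: ASR inference with torchaudio's pretrained wav2vec 2.0 bundle
# and greedy CTC decoding. The audio path is a placeholder; production systems
# would use custom multilingual models optimized for edge devices.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model().eval()
labels = bundle.get_labels()  # CTC vocabulary; index 0 is the blank token "-"

waveform, sr = torchaudio.load("utterance.wav")  # placeholder file
if sr != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)

# Greedy CTC decode: take the best label per frame, collapse repeats, drop blanks.
ids = emissions[0].argmax(dim=-1).tolist()
tokens, prev = [], None
for i in ids:
    if i != prev and labels[i] != "-":
        tokens.append(labels[i])
    prev = i
print("".join(tokens).replace("|", " ").strip())
```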
Requirements:
- 2–3 years of hands-on experience in speech recognition-based projects
- Proven experience as a Speech Engineer or in a similar role
- Should have experience with deployment on edge devices
- Hands-on experience with open-source tools such as Kaldi or PyTorch-Kaldi, and with any of the end-to-end ASR tools such as ESPnet, EESEN, or deepspeech.pytorch
- Prior proven experience in training and deploying deep learning models at scale
- Strong programming experience in Python, C/C++, etc.
- Working experience with PyTorch and TensorFlow
- Experience contributing to research communities including publications at conferences and/or journals
- Strong communication skills
- Strong analytical and problem-solving skills
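For context on what an end-to-end ASR inference step looks like, below is a minimal sketch using a pretrained wav2vec 2.0 bundle from torchaudio with greedy CTC decoding. It is illustrative only: the audio path is a placeholder, the model is English-only and server-side, and a production on-device multilingual system would need a smaller exported model and a proper decoder.

```python
# A minimal sketch of ASR inference with torchaudio's pretrained wav2vec 2.0 bundle
# (illustrative; "clip.wav" is a placeholder input file).
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H   # pretrained English ASR model
model = bundle.get_model().eval()

waveform, sample_rate = torchaudio.load("clip.wav")    # hypothetical audio file
if sample_rate != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)                      # frame-level label scores

# Greedy CTC decoding: best label per frame, collapse repeats, drop blanks ("-").
indices = torch.unique_consecutive(emissions[0].argmax(dim=-1))
labels = bundle.get_labels()
transcript = "".join(labels[i] for i in indices if labels[i] != "-").replace("|", " ")
print(transcript)
```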
Job Description –Sr. Python Developer
Job Brief
The role requires Python experience as well as expertise in AI/ML. The developer is expected to have strong technical skills and to work closely with other team members on developing and managing key projects, with the ability to work on a small team with minimal supervision, and to troubleshoot, test, and maintain the core product software and databases to ensure strong optimization and functionality.
Job Requirement
- 4+ years of relevant Python experience
- Good communication skills and email etiquette
- Quick learner and a team player
- Experience working with Python frameworks
- Experience developing with Python and MySQL on the LAMP/LEMP stack
- Experience developing MVC applications with Python
- Experience with threading, multithreading, and pipelines
- Experience creating RESTful APIs with Python that return JSON or XML (see the sketch after this list)
- Experience designing relational databases using MySQL and writing raw SQL queries
- Experience with GitHub version control
- Ability to write custom Python code
- Excellent working knowledge of AI/ML-based applications
- Experience with OpenCV, TensorFlow, SimpleCV, or PyTorch
- Experience working in an agile software development methodology
- Understanding of the end-to-end ML project lifecycle
- Hands-on working experience with cross-platform operating systems such as Windows, Linux, or UNIX
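As a minimal sketch of the RESTful API requirement above, here is a tiny Flask service exposing a JSON endpoint. The route name and response fields are illustrative, not part of the actual product; a real endpoint would load and call an ML model instead of echoing the payload.

```python
# A minimal JSON REST endpoint with Flask (illustrative; route and fields are placeholders).
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)   # e.g. {"text": "..."}
    # A real service would run a loaded ML model here instead of echoing the input.
    return jsonify({"input": payload, "label": "placeholder"})


if __name__ == "__main__":
    app.run(debug=True)
```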
Responsibilities
- Participate in the entire development lifecycle, from planning through implementation, documentation, testing, and deployment, all the way to monitoring.
- Produce high-quality, maintainable code with great test coverage
- Integrate user-facing elements developed by front-end developers
- Build efficient, testable, and reusable Python/AI/ML modules
- Solve complex performance problems and architectural challenges
- Help with designing and architecting the product
- Design and develop the web application modules or APIs
- Troubleshoot and debug applications.
Introduction
Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body and should not have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial intelligence enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
Synapsica is looking for a Principal AI Researcher to lead and drive AI-based research and development efforts. The ideal candidate should have extensive experience in computer vision and AI research, either through studies or industrial R&D projects, and should be excited to work on advanced exploratory research and development projects in computer vision and machine learning to create the next generation of advanced radiology solutions.
The role involves computer vision tasks including development, customization, and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and would involve going through and implementing existing research papers, deep-diving into problem analysis, frequent review of results, generating new ideas, building new models from scratch, publishing papers, and automating and optimizing key processes. The role will span from real-world data handling to the most advanced methods such as transfer learning, generative models, and reinforcement learning, with a focus on understanding quickly and experimenting even faster. The successful candidate will collaborate closely with the medical research team, software developers, and AI research scientists. The candidate must be creative, ask questions, and be comfortable challenging the status quo. The position is based in our Bangalore office.
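As a small illustration of the transfer-learning approach mentioned above, here is a minimal PyTorch sketch that adapts a pretrained ResNet-18 to a new classification task. It is not Synapsica's actual model: the class count, dummy batch, and frozen-backbone strategy are placeholder assumptions, and it requires torchvision 0.13+ for the weights API.

```python
# A minimal transfer-learning sketch in PyTorch (illustrative; class count and batch are dummies).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                              # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 3)            # hypothetical 3-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random dummy batch (stand-in for real images/labels).
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 3, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(f"dummy-step loss: {loss.item():.4f}")
```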
Primary Responsibilities
- Interface between product managers and engineers to design, build, and deliver AI models and capabilities for our spine products.
- Formulate and design AI capabilities of our stack with special focus on computer vision.
- Strategize end-to-end model training flow including data annotation, model experiments, model optimizations, model deployment and relevant automations
- Lead teams, engineers, and scientists to envision and build new research capabilities and ensure delivery of our product roadmap.
- Organize regular reviews and discussions.
- Keep the team up-to-date with latest industrial and research updates.
- Publish research and clinical validation papers
Requirements
- 6+ years of relevant experience in solving complex real-world problems at scale using computer vision-based deep learning.
- Prior experience in leading and managing a team.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, or Caffe (or similar deep learning frameworks).
- Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection, etc.
- Ability to write custom Convolutional Neural Network architectures in PyTorch (or similar)
- Background in publishing research papers and/or patents
- Computer Vision and AI Research background in medical domain will be a plus
- Experience with GPU/DSP/other multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
- Build, train, and test multiple CNN models.
- Optimize model training and inference by utilizing multiple GPUs and CPU cores.
- Keen interest in life sciences, image processing, genomics, and multi-omics analysis.
- Interested in reading and implementing research papers in the relevant field.
- Strong experience with deep learning frameworks: TensorFlow, Keras, PyTorch.
- Strong programming skills in Python and experience with scikit-learn/NumPy libraries.
- Experience training object detection models like YOLOv3/Mask R-CNN and semantic segmentation models like DeepLab, U-Net, etc.
- Good understanding of image processing and computer vision algorithms like watershed, histogram matching, etc. (see the segmentation sketch after this list).
- Experience with cell segmentation and membrane segmentation using CNNs (optional).
- Individual contributor.
- Experience with image processing.
- Experience required: 2-10 years.
- CTC: 15-40 LPA.
- Good Python programming and algorithmic skills.
- Experience with deep learning model training using any known framework.
- Working knowledge of genomics data in R&D.
- Understanding of one or more omics data types (transcriptomics, metabolomics, proteomics, genomics, epigenomics, etc.).
- Prior work experience as a data scientist, bioinformatician, or computational biologist will be a big plus.
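As a rough illustration of the classical segmentation techniques named above, here is a minimal watershed-based cell segmentation sketch using scikit-image. The image path, thresholding, and seed selection are placeholders; a real pipeline would tune these (or replace them with a CNN such as U-Net) for the data at hand.

```python
# A minimal watershed cell-segmentation sketch with scikit-image (illustrative only;
# "cells.png" and the seed heuristic are placeholders).
from scipy import ndimage as ndi
from skimage import filters, io, measure, segmentation

image = io.imread("cells.png", as_gray=True)               # hypothetical microscopy image
mask = image > filters.threshold_otsu(image)                # crude foreground/background split
distance = ndi.distance_transform_edt(mask)                 # distance to background
markers = measure.label(distance > 0.5 * distance.max())    # crude seeds near cell centres
labels = segmentation.watershed(-distance, markers, mask=mask)
print(f"segmented {labels.max()} candidate cells")
```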
Sizzle is an exciting new startup in the world of gaming. At Sizzle, we’re building AI to automatically create highlights of gaming streamers and esports tournaments.
For this role, we're looking for someone who loves to play and watch games, and is eager to roll up their sleeves and build up a new gaming platform. Specifically, we're looking for a technical program manager: someone who can drive timelines, manage dependencies, and get things done. You will work closely with the founders and the engineering team to iterate and launch new products and features. You will constantly report on status and maintain a dashboard across product, engineering, and user behavior.
You will:
- Be responsible for speedy and timely shipping of all products and features
- Work closely with front end engineers, product managers, and UI/UX teams to understand the product requirements in detail, and map them out to delivery timeframes
- Work closely with backend engineers to understand and map deployment timeframes and integration into pipelines
- Manage the timeline and delivery of numerous A/B tests on the website design, layout, color scheme, button placement, images/videos, and other objects to optimize time on site and conversion
- Keep track of all dependencies between projects and engineers
- Track all projects and tasks across all engineers and address any delays. Ensure tight coordination with management.
You should have the following qualities:
- Strong track record of successful delivery of complex projects and product launches
- 2+ years of software development; 2+ years of program management
- Excellent verbal and written communication skills
- Deep understanding of AI model development and deployment
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Technical program management, ML algorithms, TensorFlow, AWS, Python
Work Experience: 3 years to 10 years
Sizzle is an exciting new startup that's changing the world of gaming. At Sizzle, we're building AI to automate gaming highlights, directly from Twitch and YouTube streams. We're looking for a superstar engineer who is well versed in AI and audio technologies around audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
Developing audio algorithms to detect key moments within popular online games (a minimal feature-extraction sketch follows this list), such as:
Streamer speaking, shouting, etc.
Gunfire, explosions, and other in-game audio events
Speech-to-text and sentiment analysis of the streamer’s narration
Leveraging baseline technologies such as TensorFlow and others -- and building models on top of them
Building neural network architectures for audio analysis as it pertains to popular games
Specifying exact requirements for training data sets, and working with analysts to create the data sets
Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
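To give a flavour of the audio feature extraction this role involves, here is a minimal librosa sketch that computes a log-mel spectrogram and flags unusually loud frames as candidate highlight moments. The file path, sample rate, and loudness threshold are placeholder assumptions, not Sizzle's actual detection pipeline.

```python
# A minimal audio feature-extraction sketch with librosa (illustrative; the clip path
# and the loudness threshold are placeholders).
import librosa
import numpy as np

y, sr = librosa.load("stream_clip.wav", sr=16000)           # hypothetical stream clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel)                          # typical input for a CNN/LSTM

# Crude loudness-based cue for shouting / gunfire-like moments.
rms = librosa.feature.rms(y=y)[0]
loud_frames = np.where(rms > rms.mean() + 2 * rms.std())[0]
times = librosa.frames_to_time(loud_frames, sr=sr)
print("candidate highlight timestamps (s):", times[:10])
```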
You should have the following qualities:
Solid understanding of AI frameworks and algorithms, especially pertaining to audio analysis, speech-to-text, sentiment analysis, and natural language processing
Experience using Python, TensorFlow and other AI tools
Demonstrated understanding of various algorithms for audio analysis, such as CNNs, LSTM for natural language processing, and others
Nice to have: some familiarity with AI-based audio analysis including sentiment analysis
Familiarity with AWS environments
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Skills:
Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Work Experience: 2 years to 10 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world who watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Sizzle is an exciting new startup that's changing the world of gaming. At Sizzle, we're building AI to automate gaming highlights, directly from Twitch and YouTube streams. We're looking for a superstar engineer who is well versed in computer vision and AI technologies around image and video analysis.
You will be responsible for:
- Developing computer vision algorithms to detect key moments within popular online games
- Leveraging baseline technologies such as TensorFlow, OpenCV, and others -- and building models on top of them
- Building neural network (CNN) architectures for image and video analysis, as it pertains to popular games
- Specifying exact requirements for training data sets, and working with analysts to create the data sets
- Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
- Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
You should have the following qualities:
- Solid understanding of computer vision and AI frameworks and algorithms, especially pertaining to image and video analysis
- Experience using Python, TensorFlow, OpenCV and other computer vision tools
- Understanding of common computer vision object detection models in use today, e.g. Inception, R-CNN, YOLO, MobileNet SSD, etc. (see the inference sketch after this list)
- Demonstrated understanding of various algorithms for image and video analysis, such as CNNs, LSTM for motion and inter-frame analysis, and others
- Familiarity with AWS environments
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
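As a minimal sketch of running one of the detection models named above, here is a pretrained Faster R-CNN inference example with torchvision (0.13+ assumed). The input is random noise standing in for a video frame, and the 0.8 score threshold is an arbitrary placeholder.

```python
# A minimal object-detection inference sketch with a pretrained torchvision model
# (illustrative; the "frame" is random noise, not real game footage).
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)                   # stand-in for one video frame in [0, 1]
with torch.inference_mode():
    detections = model([frame])[0]                # dict of boxes, labels, scores

keep = detections["scores"] > 0.8                 # arbitrary confidence cutoff
print(detections["boxes"][keep], detections["labels"][keep])
```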
Skills:
Machine Learning, Computer Vision, Image Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Seniority: We are open to junior or senior engineers. We're more interested in the proper skillsets.
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply. However, if you don't have AI or computer vision experience, please do not apply.
- 4+ years of experience. Solid understanding of Python, Java, and general software development skills (source code management, debugging, testing, deployment, etc.).
- Experience working with Solr and Elasticsearch.
- Experience with NLP technologies and the handling of unstructured text.
- Detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming, POS tagging, etc. (see the sketch after this list).
- Prior experience implementing traditional ML solutions for classification, regression, or clustering problems.
- Expertise in text analytics (Sentiment Analysis, Entity Extraction, Language Modelling) and associated sequence learning models (RNN, LSTM, GRU).
- Comfortable working with deep-learning libraries (e.g. PyTorch)
- Candidates can even be relatively fresh, with 1 or 2 years of experience. IIIT, BITS Pilani, and top-5 local colleges are the preferred colleges and universities.
- A Master's candidate in machine learning.
- Can source candidates from Mu Sigma and Manthan.
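To make the pre-processing bullet above concrete, here is a minimal NLTK sketch covering tokenisation, POS tagging, stemming, and lemmatisation. It assumes the relevant NLTK resources (the punkt tokenizer, the averaged-perceptron tagger, and WordNet) have already been downloaded via nltk.download; exact resource names vary slightly across NLTK versions, and the example sentence is arbitrary.

```python
# A minimal text pre-processing sketch with NLTK (illustrative; assumes the tokenizer,
# tagger, and WordNet resources have already been downloaded).
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

text = "The quick brown foxes were jumping over the lazy dogs"
tokens = nltk.word_tokenize(text)                              # tokenisation
pos_tags = nltk.pos_tag(tokens)                                # POS tagging
stems = [PorterStemmer().stem(t) for t in tokens]              # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]    # lemmatisation
print(pos_tags)
print(stems)
print(lemmas)
```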
Develop state-of-the-art algorithms in the fields of Computer Vision, Machine Learning, and Deep Learning.
Provide software specifications and production code on time to meet project milestones.
Qualifications
BE or Master's with 3+ years of experience
Must have prior knowledge and experience in image processing and video processing
Should have knowledge of object detection and recognition
Must have experience in feature extraction, segmentation, and classification of images
Face detection, alignment, recognition, tracking, and attribute recognition (see the sketch after this list)
Excellent understanding and project/job experience in machine learning, particularly in areas of deep learning: CNN, RNN, TensorFlow, Keras, etc.
Real-world expertise in deep learning applied to computer vision problems
Strong foundation in mathematics
Strong development skills in Python
Must have worked with vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, Keras
Quick learner of new technologies
Ability to work independently as well as part of a team
Knowledge of working closely with version control (Git)
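As a small illustration of the face-detection work mentioned above, here is a minimal OpenCV sketch using the bundled Haar cascade. The image path is a placeholder, and a production system would more likely use a CNN-based detector; this only shows the basic API.

```python
# A minimal face-detection sketch with OpenCV's bundled Haar cascade
# (illustrative; "photo.jpg" is a placeholder input image).
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("photo.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"found {len(faces)} face(s):", faces)
```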
Job Title – Data Scientist (Forecasting)
Anicca Data is seeking a Data Scientist (Forecasting) who is motivated to apply his/her/their skill set to solve complex and challenging problems. The focus of the role will center around applying deep learning models to real-world applications. The candidate should have experience in training and testing deep learning architectures, and is expected to work on existing codebases or write optimized code at Anicca Data. The ideal addition to our team is self-motivated, highly organized, and a team player who thrives in a fast-paced environment, with the ability to learn quickly and work independently.
Job Location: Remote (for time being) and Bangalore, India (post-COVID crisis)
Required Skills:
- At least 3+ years of experience in a Data Scientist role
- Bachelor's/Master's degree in Computer Science, Engineering, Statistics, Mathematics, or a similar quantitative discipline; a Ph.D. will add merit to the application process
- Experience with large data sets, big data, and analytics
- Exposure to statistical modeling, forecasting, and machine learning. Deep theoretical and practical knowledge of deep learning, machine learning, statistics, probability, time series forecasting
- Training Machine Learning (ML) algorithms in areas of forecasting and prediction
- Experience in developing and deploying machine learning solutions in a cloud environment (AWS, Azure, Google Cloud) for production systems
- Research and enhance existing in-house, open-source models, integrate innovative techniques, or create new algorithms to solve complex business problems
- Experience in translating business needs into problem statements, prototypes, and minimum viable products
- Experience managing complex projects including scoping, requirements gathering, resource estimations, sprint planning, and management of internal and external communication and resources
- Write C++ and Python code, along with TensorFlow and PyTorch, to build and enhance the platform used for training ML models
Preferred Experience
- Worked on forecasting projects – both classical and ML models
- Experience training time-series forecasting methods such as Moving Average (MA) and Autoregressive Integrated Moving Average (ARIMA), as well as neural network (NN) models such as feed-forward NNs and nonlinear autoregressive models (see the sketch after this list)
- Strong background in forecasting accuracy drivers
- Experience in Advanced Analytics techniques such as regression, classification, and clustering
- Ability to explain complex topics in simple terms, ability to explain use cases and tell stories
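To ground the classical forecasting methods mentioned above, here is a minimal ARIMA sketch with statsmodels. The synthetic daily series and the (1, 1, 1) order are placeholders; in practice the order would come from diagnostics or a search, and accuracy would be checked against a holdout window.

```python
# A minimal ARIMA forecasting sketch with statsmodels (illustrative; the synthetic
# series and the (1, 1, 1) order are placeholders).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = pd.date_range("2023-01-01", periods=100, freq="D")
y = pd.Series(np.cumsum(np.random.randn(100)) + 50, index=rng)  # dummy demand series

model = ARIMA(y, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=14)               # two-week-ahead forecast
print(forecast.head())
```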
Sr AI Scientist, Bengaluru
Job Description
Introduction
Synapsica is a growth-stage HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective, while being affordable. Every patient has the right to know exactly what is happening in their body and should not have to rely on cryptic two-liners given to them as a diagnosis. Towards this aim, we are building an artificial intelligence enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and The Spinal Kinetics as our partners.
Your Roles and Responsibilities
The role involves computer vision tasks including development, customization and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering etc.) and traditional Image Processing (OpenCV etc.). The role is research focused and would involve going through and implementing existing research papers, deep dive of problem analysis, generating new ideas, automating and optimizing key processes.
Requirements:
- 4+ years of relevant experience in solving complex real-world problems at scale via computer vision-based deep learning.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, or Caffe (or similar deep learning frameworks).
- Extensive understanding of computer vision/image processing applications like object classification, segmentation, object detection, etc.
- Ability to write custom Convolutional Neural Network architectures in PyTorch (or similar)
- Experience with GPU/DSP/other multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
About antuit.ai
Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.
Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.
The Role:
Antuit.ai is interested in hiring a Principal Data Scientist. This person will facilitate standing up a standardization and automation ecosystem for ML product delivery, and will also actively participate in managing the implementation, design, and tuning of the product to meet business needs.
Responsibilities:
Responsibilities include, but are not limited to, the following:
- Manage and provide technical expertise to the delivery team, including recommending solution alternatives, identifying risks, and managing business expectations.
- Design and build reliable, scalable automated processes for large-scale machine learning.
- Use engineering expertise to help design solutions to novel problems in software development, data engineering, and machine learning.
- Collaborate with Business, Technology, and Product teams to stand up the MLOps process.
- Apply your experience in making intelligent, forward-thinking technical decisions to deliver the ML ecosystem, including implementing new standards, architecture designs, and workflow tools.
- Deep dive into complex algorithmic and product issues in production.
- Own metrics and reporting for the delivery team.
- Set a clear vision for the team members and work cohesively to attain it.
- Mentor and coach team members
Qualifications and Skills:
Requirements
- Engineering degree in any stream
- At least 7 years of prior experience building ML-driven products/solutions
- Excellent programming skills in at least one of C++, Python, or Java.
- Hands-on experience with open-source libraries and frameworks: TensorFlow, PyTorch, MLflow, Kubeflow, etc. (see the tracking sketch after this list)
- Has developed and productized large-scale models/algorithms in prior roles
- Can drive fast prototypes/proofs of concept when evaluating various technologies, frameworks, and performance benchmarks.
- Familiar with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools).
- Good verbal, written and presentation skills.
- Ability to learn new skills and technologies.
- 3+ years working with retail or CPG preferred.
- Experience in forecasting and optimization problems, particularly in the CPG / Retail industry preferred.
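As a minimal illustration of the experiment-tracking side of the MLOps tooling named above, here is a tiny MLflow sketch. The experiment name, parameters, and metric value are placeholders, not antuit.ai's actual pipeline; in a real run the model artifact would also be logged and promoted through a model registry.

```python
# A minimal MLflow experiment-tracking sketch (illustrative; names and values are placeholders).
import mlflow

mlflow.set_experiment("demand-forecast-demo")     # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("model", "gradient_boosting")
    mlflow.log_param("learning_rate", 0.05)
    mlflow.log_metric("wmape", 0.12)              # dummy validation metric
    # mlflow.sklearn.log_model(fitted_model, "model")  # would log/register the artifact
```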
Information Security Responsibilities
- Understand and adhere to Information Security policies, guidelines, and procedures, and practice them to protect organizational data and information systems.
- Take part in Information Security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).
EEOC
Antuit.ai is an at-will, equal opportunity employer. We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.