50+ Machine Learning (ML) Jobs in India
Apply to 50+ Machine Learning (ML) Jobs on CutShort.io. Find your next job, effortlessly. Browse Machine Learning (ML) Jobs and apply today!
Job Summary
We are looking for a skilled and detail-oriented Incentive Analyst to join the SmartWinnr team. This role involves ensuring accurate calculation and validation of incentive payouts for our stakeholders, analyzing data to provide actionable insights, and supporting the effective execution of incentive plans. The Incentive Analyst will collaborate with cross-functional teams to align incentive models with client requirements and business objectives.
Responsibilities
- Calculate, validate, and ensure timely and accurate incentive calculations in compliance with established policies.
- Prepare and deliver analytics and insights related to incentive programs for stakeholders.
- Ability to setup, understand, validate SmartWinnr’s Incentive system
- Collaborate with cross-functional teams to ensure the accuracy of incentive data and alignment with goals.
- Create dashboards, reports, and visualizations using tools like Power BI, Tableau, and Excel to highlight key incentive metrics and trends.
- Design, manage, and maintain client-specific incentive models that align with their business strategies and market trends.
- Analyze existing client systems, databases, and processes to identify areas for improvement and propose cost-effective solutions.
- Develop and refine incentive compensation models, ensuring they meet client requirements within defined timelines.
- Conduct research on emerging technologies and trends to enhance incentive program design and execution.
- Partner with internal teams to drive innovation, process automation, and improvements across incentive and data analysis functions.
Required Qualifications
- Bachelor’s degree in MIS, Information Systems Management, Business, Commerce, Computer Science, or a related field.
- 4+ years of experience in incentive compensation, MIS, and/or a similar analytics role, preferably working with companies in the pharma or BFSI sectors. Experience in incentive compensation is a must for this role.
- Advanced Proficiency in tools like Power BI, Tableau, SQL, and Excel for data analysis and reporting.
- Advanced knowledge of data analysis techniques including Data Mining, Data Modeling, and Machine Learning.
- Proven ability to design and manage incentive systems, calculate incentives, and generate actionable insights for stakeholders.
- Strong understanding of financial analysis, business intelligence, and database management.
- Ability to analyze large datasets to uncover trends and propose solutions to enhance efficiency.
- Excellent communication skills to present findings and recommendations effectively to stakeholders.
- Ability to collaborate with diverse teams and adapt to a fast-paced, dynamic environment.
- A strong interest in data analysis, innovation, and leveraging technology to drive process improvements and enhance outcomes.
Job Description:
We are seeking a Senior AI Developer to join our dynamic team and lead AI-driven projects. This role requires a blend of advanced technical expertise in AI and machine learning, exceptional problem-solving abilities, and experience in building and deploying scalable AI solutions.
Key Responsibilities:
- Machine Learning & AI: Expertise in ML algorithms, especially for classification, anomaly detection, and predictive analytics.
- Cloud Platforms: Proficiency in Google Cloud Platform (GCP) services, including Cloud Run, BigQuery, and Vertex AI.
- Programming Languages: Advanced knowledge of Python; experience with libraries like TensorFlow, PyTorch, Scikit-learn, and Pandas.
- Data Processing: Skilled in SQL, data transformations, and ETL processes, especially with BigQuery.
- Anomaly Detection: Proven experience in building and deploying anomaly detection models, particularly for network data.
- Network Data Analysis: Familiarity with analyzing network logs, IP traffic, and related data structures.
- DevOps & CI/CD: Experience with CI/CD pipelines for ML model deployment, particularly with tools like Cloud Build.
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek skilled and experienced data science/machine learning professionals with a strong background in at least one of mathematics, nancial engineering, and electrical engineering, to join our Energy & Utilities team. If you are interested in articial intelligence, excited about solving real business problems in the energy and utilities industry, and keen to contribute to impactful projects, this role is for you!
Work you’ll do
As a data scientist in the energy and utilities industry, you will perform quantitative analysis and build mathematical models to forecast energy demand, supply and strategies of ecient load balancing. You will work on models for short term and long term pricing, improving operational eciency, reducing costs, and ensuring reliable power supply. You’ll work closely with cross-functional teams to deploy these models in solutions that provide insights/ solutions to real-world business problems. You will also be involved in conducting experiments, building POCs and prototypes.
Responsibilities
- Develop and implement quantitative models for load forecasting, energy production and distribution optimization.
- Analyze historical data to identify and predict extreme events, and measure impact of extreme events. Enhance existing pricing and risk management frameworks.
- Develop and implement quantitative models for energy pricing and risk management. Monitor market conditions and adjust models as needed to ensure accuracy and effectiveness.
- Collaborate with engineering and operations teams to provide quantitative support for energy projects. Enhance existing energy management systems and develop new strategies for energy conservation.
- Maintain and improve quantitative tools and software used in energy management.
- Support end-to-end ML/ AI model lifecycle - from data preparation, data analysis and feature engineering to model development, validation and deployment
- Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
- Document methodologies and results, present ndings and communicate insights to non-technical audiences
Skills & Requirements
- Strong background in mathematics, econometrics, electrical engineering, or a related eld.
- Experience data analysis, and quantitative modeling using programming languages such as Python or R.
- Excellent analytical and problem-solving skills.
- Strong understanding and experience with data analysis, statistical and mathematical concepts and ML algorithms
- Proficiency in Python and familiarity with basic Python libraries for data analysis and ML algorithms (such as NumPy, Pandas, ScikitLearn, NLTK).
- Strong communication skills
- Strong collaboration skills, ability to work with engineering and operations teams.
- A continuous learning attitude and a problem solving mind-set
Good to have -
- Knowledge of energy markets, regulations, and utility operation.
- Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
- Broad understanding of data structures and data engineering.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, eciency and high-quality outcomes. We believe the future of work is AI-augmented and boundary less. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, x or improve – anything that isn’t done right, irrespective of who did it. Be selsh about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
Who are we?
Eliminate Fraud. Establish Trust.
IDfy is an Integrated Identity Platform offering products and solutions for KYC, KYB, Background Verifications, Risk Mitigation, Digital Onboarding, and Digital Privacy. We establish trust while delivering a frictionless experience for you, your employees, customers, and partners.
Only IDfy combines enterprise-grade technology with business understanding and has the widest breadth of offerings in the industry. With more than 12+ years of experience and 2 million verifications per day, we are pioneers in this industry.
Our clients include HDFC Bank, Induslnd Bank, Zomato, Amazon, PhonePe, Paytm, HUL, and many others. We have successfully raised $27M from Elev8 Venture Partners, KB Investment, and Tenacity Ventures!
About Team
The machine learning team at IDfy is self-contained and responsible for building models and services that support key workflows. Our models serve as gating criteria for these workflows and are expected to perform accurately and quickly. We use a mix of conventional and hand-crafted deep learning models.
The team comes from diverse backgrounds and experiences and works directly with business and product teams to craft solutions for our customers. We function as a platform, not a services company.
We are the match if you...
- Are a mid-career machine learning engineer (or data scientist) with 4-8 years of experience in data science.
Must-Haves
- Experience in framing and solving problems with machine learning or deep learning models.
- Expertise in either computer vision or natural language processing (NLP), with appropriate production system experience.
- Experience with generative AI, including fine-tuning models and utilizing Retrieval-Augmented Generation (RAG).
- Understanding that modeling is just a part of building and delivering AI solutions, with knowledge of what it takes to keep a high-performance system up and running.
- Proficiency in Python and experience with frameworks like PyTorch, TensorFlow, JAX, HuggingFace, Spacy, etc.
- Enthusiasm and drive to learn and apply state-of-the-art research.
- Ability to write APIs.
- Experience with AI systems and at least one cloud provider: AWS Sagemaker, GCP VertexAI, AzureML.
Good to Haves
- Familiarity with languages like Go, Elixir, or an interest in functional programming.
- Knowledge and experience in MLOps and tooling, particularly Docker and Kubernetes.
- Experience with other platforms, frameworks, and tools.
Here’s what your day would look like...
In this role, you will:
- Work on all aspects of a production machine learning system, including acquiring data, training and building models, deploying models, building API services for exposing these models, and maintaining them in production.
- Focus on performance tuning of models.
- Occasionally support and debug production systems.
- Research and apply the latest technology to build new products and enhance the existing platform.
- Build workflows for training and production systems.
- Contribute to documentation.
While the emphasis will be on researching, building, and deploying models into production, you will also contribute to other aspects of the project.
Why Join Us?
- Innovative, Impactful Projects: Work on cutting-edge AI projects that push the boundaries of technology and positively impact billions of people.
- Collaborative Environment: Join a passionate and talented team dedicated to innovation and excellence. Be part of a diverse and inclusive workplace where your ideas and contributions are valued.
- Mentorship Opportunities: Mentor interns and junior team members, with support and coaching to help you develop leadership skills.
Excited already?
At IDfy, you will work on the entire end-to-end solution rather than just a small part of a larger process. Our culture thrives on collaboration, innovation, and impact.
Job Title : AI/ML Engineer
Experience : 3+ Years
Location : Gurgaon (On-site)
Work Mode : 6 Days Work From Office
Summary :
We are seeking an experienced AI/ML Engineer to join our team in Gurgaon. The ideal candidate will have a strong background in machine learning algorithms, AI model development, and deploying ML solutions into production.
Key Responsibilities:
- Design, develop, and deploy AI/ML models and solutions.
- Work on data preprocessing, feature engineering, and model optimization.
- Collaborate with cross-functional teams to integrate AI/ML capabilities into applications.
- Monitor and improve the performance of deployed models.
- Stay updated on the latest AI/ML advancements and tools.
Requirements:
- Proven experience in AI/ML development with tools like TensorFlow, PyTorch, or Scikit-learn.
- Strong programming skills in Python.
- Familiarity with data preprocessing and feature engineering techniques.
- Experience with ML model deployment (e.g., using Docker, Kubernetes).
- Excellent problem-solving and analytical skills.
Why Join Us?
- Competitive salary and growth opportunities.
- Work on cutting-edge AI/ML projects in a collaborative environment.
First & foremost, this role is not for you if you don’t enjoy solving really deeeep-tech problems with a high surface area, which means that no person would fit in for solving the complete problem (we know that) & there’ll be a lot of things to learn on the way, read further if you’re only interested in something of this sort.
We are building a Social Network for Fashion & E-commerce powered by AI with the vision of redefining how people buy things on the internet! We believe we are one of the very few companies that are here to solve a real consumer problem & not just cash on to the AI hype. We’ve raised $1M in pre-seed funding from top VCs & supported by the best entrepreneurs of India.
As a founding AI-ML Engineer, you will build & train foundation models from scratch, fine-tune existing models as per the use-case, scraping large sums of data, design pipelines for data ingestion, processing & scalability of systems. Not just this, you’ll also be working on recommendation systems, particularly algorithm design & planning the execution from day one.
Now, What we’re looking for & our expectations:
Note: We don’t really care about your qualifications as long as we can see that you have sufficient knowledge required for the role & a strong sense of ownership in whatever you do.
- Design and deploy advanced machine learning models in Computer Vision including object detection & similarity matching
- Implement scalable data pipelines, optimize models for performance and accuracy, and ensure they are production-ready with MLOps
- Take part in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices
- In terms of technical skills, you should have a high proficiency in Python, machine learning frameworks like TensorFlow & PyTorch
- Experience with cloud platforms and knowledge of deploying models in production environments
- Have decent understanding of Reinforcement Learning and some understanding of Agentic AI & LLM-Ops
- First-hand experience with scalable Backend Engineering would put you on the first consideration over other candidates
A few things about us
- Building the next 100 people $100 Billion AI-first Company
- Speed of execution is really important to us
- Delivering exceptional experiences that exceed user expectations
- We embrace a Culture of continuous learning and innovation
- Users are at the heart of everything we do
- We believe in open communication, authenticity, and transparency
- Solving problems through First Principles
Benefits of joining us
- Top of the market Compensation
- Any hardware you need to do your best work
- Open to hybrid work (although we prefer in-person over remote to maximise learning)
- Flexible work timings
- Learning & Development budget
- Flexible allowances with hassle-free reimbursements
- Quarterly team outings covered by us
- First Hand experience of shipping world class products
- Having loads of fun with a GenZ team at a Hacker House
Job Title: AI/ML Engineer
Location: Pune
Experience Level: 1-5 Years
Job Type: Full-Time
About Us
Vijay Sales is one of India’s leading retail brands, offering a wide range of electronics and home appliances across multiple channels. As we continue to expand, we are building advanced technology solutions to optimise operations, improve customer experience, and drive growth. Join us in shaping the future of retail with innovative AI/ML-powered solutions.
Role Overview
We are looking for an AI/ML Engineer to join our technology team and help drive innovation across our business. In this role, you will design, develop, and implement machine learning models for applications like inventory forecasting, pricing automation, customer insights, and operational efficiencies. Collaborating with a cross-functional team, you’ll ensure our AI/ML solutions deliver measurable impact.
Key Responsibilities
- Develop and deploy machine learning models to address business challenges such as inventory forecasting, dynamic pricing, demand prediction, and customer segmentation.
- Preprocess and analyze large volumes of sales and customer data to uncover actionable insights.
- Design algorithms for supervised, unsupervised, and reinforcement learning tailored to retail use cases.
- Implement and manage pipelines to deploy and monitor models in production environments.
- Continuously optimize model performance through retraining, fine-tuning, and feedback loops.
- Work closely with business teams to identify requirements and translate them into AI/ML solutions.
- Stay current with the latest AI/ML advancements and leverage them to enhance Vijay Sales’ technology stack.
Qualifications
Required:
- Bachelor’s/Master’s degree in Computer Science, Data Science, Mathematics, or a related field.
- Proficiency in Python and ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
- Proven experience in developing, training, and deploying machine learning models.
- Strong understanding of data processing, feature engineering, and data pipeline design.
- Knowledge of algorithms for forecasting, classification, clustering, and optimization.
- Experience working with large-scale datasets and databases (SQL/NoSQL).
Preferred:
- Familiarity with retail industry challenges, such as inventory and pricing management.
- Experience with cloud platforms (AWS, GCP, or Azure) for deploying ML solutions.
- Knowledge of MLOps practices for scalable and efficient model management.
- Hands-on experience with time-series analysis and demand forecasting models.
- Understanding of customer analytics and personalization techniques.
Why Join Vijay Sales?
- Work with one of India’s most iconic retail brands as we innovate and grow.
- Be part of a team building transformative AI/ML solutions for the retail industry.
- A collaborative work environment that encourages creativity and learning.
- Competitive salary and benefits package, along with exciting growth opportunities.
Ready to make an impact? Apply now and help shape the future of Vijay Sales with cutting-edge AI/ML technologies!
Mammoth
Mammoth is a data management platform revolutionizing the way people work with data. Our lightweight, self-serve SaaS analytics solution takes care of data ingestion, storage, cleansing, transformation, and exploration, empowering users to derive insights with minimal friction. Based in London, with offices in Portugal and Bangalore, we offer flexibility in work locations, whether you prefer remote or in-person office interactions.
Job Responsibilities
- Collaboratively build robust, scalable, and maintainable software across the stack.
- Design, develop, and maintain APIs and web services.
- Dive into complex challenges around performance, scalability, and concurrency, and deliver quality results.
- Improve code quality through writing unit tests, automation, and conducting code reviews.
- Work closely with the design team to understand user requirements, formulate use cases, and translate them into effective technical solutions.
- Contribute ideas and participate in brainstorming, design, and architecture discussions.
- Engage with modern tools and frameworks to enhance development processes.
Role Focus: This is a Fullstack Developer role. However, the current priorities are more front-end focused. Candidates should also have the ability to independently address minor back-end requirements when necessary, without dependencies.
Skills Required
Frontend:
- Proficiency in Vue.js and a solid understanding of JavaScript.
- Strong grasp of HTML/CSS, with attention to detail.
- Knowledge of TypeScript is a plus.
Backend:
- Strong proficiency in Python and mastery of at least one framework (Django, Flask, Pyramid, FastAPI or litestar).
- Experience with database systems such as PostgreSQL.
- Familiarity with performance trade-offs and best practices for backend systems.
General:
- Solid understanding of fundamental computer science concepts like algorithms, data structures, databases, operating systems, and programming languages.
- Experience in designing and building RESTful APIs and web services.
- Ability to collaborate effectively with cross-functional teams.
- Passion for solving challenging technical problems with innovative solutions.
- AI/ML or Willingness to learn on need basis.
Nice to have:
- Familiarity with DevOps tools like Docker, Kubernetes, and Ansible.
- Understanding of frontend build processes using tools like Vite.
- Demonstrated experience with end-to-end development projects or personal projects.
Job Perks
- Free lunches and stocked kitchen with fruits and juices.
- Game breaks to unwind.
- Work in a spacious and serene office located in Koramangala, Bengaluru.
- Opportunity to contribute to a groundbreaking platform with a passionate and talented team.
If you’re an enthusiastic developer who enjoys working across the stack, thrives on solving complex problems, and is eager to contribute to a fast-growing, mission-driven company, apply now!
Position: ML/AI Engineer (3+ years)
Responsibilities:
Design, develop, and deploy machine learning models with a focus on scalability and performance.
Implement unsupervised algorithms and handle large datasets for inference and analysis.
Collaborate with cross-functional teams to integrate AI/ML solutions into real-world applications.
Work on MLOps frameworks for model lifecycle management (optional but preferred).
Stay updated with the latest advancements in AI/ML technologies.
Requirements:
Proven experience in machine learning model building and deployment.
Hands-on experience with unsupervised learning algorithms (e.g., clustering, anomaly detection).
Proficiency in handling Big Data and related tools/frameworks.
Exposure to MLOps tools like MLflow, Kubeflow, or similar (preferred).
Familiarity with cloud platforms such as AWS, Azure, or GCP.
Location: Baner, Pune
Employment Type: Full-time
Client based at Bangalore location.
Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowing Japanese Language is an additional advantage for the Candidate. Not mandatory.
AI/ML Data Solution Architecture Ability to define data architecture strategy that align with business goals for AI use cases and ensures data availability & data accuracy. Ensuring data quality by implementing and enforcing data governance policies and best practices Technology evaluation & selection Execute Proof of concept and Proof of value for various technology solutions and frameworks Continuous Learning - Staying up to date with emerging technologies and trends in Data Engineering, AI / ML, and GenAI, and making recommendations for their adoption wherever appropriate Team Management Responsible for leading a team of data architects and data engineers, as well as coordinating with vendors and technology partners Collaboration & communication Collaborate and work closely with executives, stakeholder and business teams to effectively communicate architecture strategy & clearly articulate the business value.
Experience : 12 to 16 yrs
Work location : JP Nagar 3rd phase, South Bangalore. (Work from office role, IC role to begin with)
Suitable candidate be able to demonstrate strong experience in the following areas -
Data Engineering
- Hands-on experience with data engineering tools such as Talend (or Informatica or AbInitio), Databricks (or Spark), and HVR (or Attunity or Golden Gate or Equalum).
- Working knowledge of data build tools, Azure Data Factory, continuous integration and continuous delivery (CI/CD), automated testing, data lakes, data warehouses, big data, Collibra, and Unity Catalog
- Basic knowledge of building analytics applications using Azure Purview, Power BI, Spotfire, and Azure Machine Learning.
AI & ML -
- Conceptualize & design end-to-end solution view of sourcing data, pre-processing data, feature stores, model development, evaluation, deployment & governance
- Define model development best practices, create POVs on emerging AI trends, drive the proposal responses, help solving complex analytics problems and strategize the end-to-end implementation frameworks & methodologies
- Thorough understanding of database, streaming & analytics services offered by popular cloud platforms (Azure, AWS, GCP) and hands-on implementation of building machine learning pipeline with at least one of the popular cloud platforms
- Expertise on Large Language Model preferred with exposure to implementing generative AI using ChatGPT / Open AI and other models. Harvesting Models from open source will be an added advantage
- Good understanding of statistical analysis, data analysis and knowledge of data management & visualization techniques
- Exposure to other AI Platforms & products (Kore.ai, expert.ai, Dataiku etc.) desired
- Hands-on development experience in Python/R is a must and additional hands-on experience on few other procedural/object-oriented programming languages (Java, C#, C++) is desirable.
- Leadership skills to drive the AI/ML related conversation amidst CXO, Senior Leadership and making impactful presentations to customer organizations
Stakeholder Management & Communication Skills
- Excellent communication, negotiation, influencing and stakeholder management skills
- Preferred to have experience in project management, particularly in executing projects using Agile delivery frameworks
- Customer focus and excellent problem-solving skills
Qualification
- BE or MTech (BSc or MSc) in engineering, sciences, or equivalent relevant experience required.
- Total 13+ years of experience and 10+ years of experience in building/managing/administrating data and analytics applications is required
- Designing solution architecture and present the architecture in architecture review forums
Additional Qualifications
- Ability to define best practices for data governance, data quality, and data lineage, and to operationalize those practices.
- Proven track record of designing and delivering solutions that comply with industry regulations and legislation such as GxP, SoX, HIPAA, and GDPR.
Building the machine learning production (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop ML pipelines to support
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Develop MLOps components in Machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 3-5 years experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR Equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience( i.e. Jenkins, Git hub action,
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production System(or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-ofthe-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop scalable ML pipelines
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development or Datawarehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production ready ML pipeline.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem solving, project management and communication skills & creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
We are looking to expand our existing Python team across our offices in Surat. This position is for SDE-1 - Junior Software Engineer.
The requirements are as follows:
1) Familiar with the the Django REST API Framework.
2) Experience with the FAST API framework will be a plus
3) Strong grasp of basic python programming concepts ( We do ask a lot of questions on this on our interviews :) )
4) Experience with databases like MongoDB , Postgres , Elasticsearch , REDIS will be a plus
5) Experience with any ML library will be a plus.
6) Familiarity with using git , writing unit test cases for all code written and CI/CD concepts will be a plus as well.
7) Familiar with basic code patterns like MVC.
8) Grasp on basic data structures.
Fynd is India’s largest omnichannel platform and multi-platform tech company with expertise in retail tech and products in AI, ML, big data ops, gaming+crypto, image editing and learning space. Founded in 2012 by 3 IIT Bombay alumni: Farooq Adam, Harsh Shah and Sreeraman MG. We are headquartered in Mumbai and have 1000+ brands under management, more than 10k stores and servicing 23k + pin codes.
We want new ambitious research members to join our Machine Learning Research group. We are looking for people with a proven track record in computer vision research. In this role, we create new models, algorithms and approaches to solve challenging machine learning problems. Some of our problems areas include Image Restoration, Image Enhancement, Generative Models and 3D computer vision. You will work on various state-of-art new techniques to improve and optimize neural networks and also use computer vision approaches to solve various problems. You will be required to have in-depth knowledge of image processing and convolutional neural networks. You will tackle our challenging problems also have the freedom to try out your ideas and pursue research topics of your interest.
What you will do:
- Engage in advanced research to push the boundaries of computer vision technology. Leverage core image processing and computer vision fundamentals along with focus on transformer based models, generative adversarial networks (GANs), diffusion models, and other advanced deep learning methods.
- Develop and enhance models for practical applications, including image processing, object detection, image segmentation, and scene understanding, leveraging recent innovations in the field.
- Apply contemporary methodologies to address complex, real-world challenges and create innovative, deployable solutions that enhance our products and services.
- Collaborate with cross-functional teams to integrate new computer vision technologies and solutions into our product suite.
- Remain current with the latest developments in computer vision and deep learning, including emerging techniques and best practices.
- Document and communicate research findings through publications in top-tier conferences and journals, and contribute to the academic community.
Skills needed:
- A BS, MS, PhD, or equivalent experience in Computer Science, Engineering, Applied Mathematics, or a related quantitative field.
- Extensive research experience in computer vision, with a focus on deep learning techniques such as transformers, GANs, diffusion, and other emerging models.
- Proficiency in Python, with experience in deep learning frameworks like TensorFlow and PyTorch
- Deep understanding of mathematical principles relevant to computer vision, including linear algebra, optimization, and statistics.
- Proven ability to bridge the gap between theoretical research and practical application, effectively translating innovative ideas into deployable solutions.
- Strong problem-solving skills and an innovative mindset to address complex challenges and apply contemporary research effectively.
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow, we reimburse it for you.
Culture
Community and Team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapist for better mental health, improve productivity & work-life balance
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
a leading data & analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage
What are we looking for?
- Bachelor’s degree in analytics related area (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline)
- 7+ years of work experience in data science, analytics, engineering, or product management for a diverse range of projects
- Hands on experience with python, deploying ML models
- Hands on experience with time-wise tracking of model performance, and diagnosis of data/model drift
- Familiarity with Dataiku, or other data-science-enabling tools (Sagemaker, etc)
- Demonstrated familiarity with distributed computing frameworks (snowpark, pyspark)
- Experience working with various types of data (structured / unstructured)
- Deep understanding of all data science phases (eg. Data engineering, EDA, Machine Learning, MLOps, Serving)
- Highly self-motivated to deliver both independently and with strong team collaboration
- Ability to creatively take on new challenges and work outside comfort zone
- Strong English communication skills (written & verbal)
Roles & Responsibilities:
- Clearly articulates expectations, capabilities, and action plans; actively listens with others’ frame of reference in mind; appropriately shares information with team; favorably influences people without direct authority
- Clearly articulates scope and deliverables of projects; breaks complex initiatives into detailed component parts and sequences actions appropriately; develops action plans and monitors progress independently; designs success criteria and uses them to track outcomes; drives implementation of recommendations when appropriate, engages with stakeholders throughout to ensure buy-in
- Manages projects with and through others; shares responsibility and credit; develops self and others through teamwork; comfortable providing guidance and sharing expertise with others to help them develop their skills and perform at their best; helps others take appropriate risks; communicates frequently with team members earning respect and trust of the team
- Experience in translating business priorities and vision into product/platform thinking, set clear directives to a group of team members with diverse skillsets, while providing functional & technical guidance and SME support
- Demonstrated experience interfacing with internal and external teams to develop innovative data science solutions
- Strong business analysis, product design, and product management skills
- Ability to work in a collaborative environment - reviewing peers' code, contributing to problem- solving sessions, ability to communicate technical knowledge to a a variety of audience (such as management, brand teams, data engineering teams, etc.)
- Ability to articulate model performance to a non-technical crowd, and ability to select appropriate evaluation criteria to evaluate hidden confounders and biases within a model
- MLOps frameworks and their use in model tracking and deployment, and automating the model serving pipeline
- Work with all sizes of ML model from linear/logistic regression to other sklearn-like models, to deep learning
- Formulate training schema for unbiased model training e.g. K-fold cross-validation, leave-one- out cross-validation) for parameter searching and model tuning
- Ability to work on machine learning work like recommender systems, end-to-end ML lifecycle)
- Ability to manage ML on largely imbalanced training sets (<5% positive rate)
- Ability to articulate model performance to a non-technical crowd, and ability to select appropriate evaluation criteria to evaluate hidden confounders and biases within a model
- MLOps frameworks and their use in model tracking and deployment, and automating the model serving pipeline
Client based at Bangalore location.
Data Scientist - Healthcare AI
Location: Bangalore, India
Experience: 4+ years
Skills Required: Radiology, visual images, text, classical models, LLM multi-modal
Responsibilities:
· LLM Development and Fine-tuning: Fine-tune and adapt large language models (e.g., GPT, Llama2, Mistral) for specific healthcare applications, such as text classification, named entity recognition, and question answering.
· Data Engineering: Collaborate with data engineers to build robust data pipelines for large-scale text datasets used in LLM training and fine-tuning.
· Model Evaluation and Optimization: Develop rigorous experimentation frameworks to assess model performance, identify areas for improvement, and inform model selection.
· Production Deployment: Work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
· Predictive Model Design: Leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models).
· Cross-functional Collaboration: Partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions.
· Knowledge Sharing: Mentor junior team members and stay up-to-date with the latest advancements in machine learning and LLMs.
Qualifications:
· Doctoral or master's degree in computer science, Data Science, Artificial Intelligence, or related field.
· 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models.
· 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
· Experience working with cloud-based platforms (AWS, GCP, Azure).
Preferred Qualifications:
o Experience working in the healthcare domain, particularly oncology.
o Publications in relevant scientific journals or conferences.
o Degree from a prestigious university or research institution.
We are seeking a motivated AI Engineer who is passionate about exploring the future of artificial intelligence and eager to transition into the IT field. This role is ideal for individuals with a career gap who have undergone professional training in AI and are looking to apply their skills in a dynamic environment.
Job Responsibilities
- Develop AI Models: Design and implement machine learning algorithms and deep learning models to extract insights from large datasets.
- Collaborate with Teams: Work closely with cross-functional teams to identify business needs and integrate AI solutions that enhance operational efficiency.
- Research and Experimentation: Conduct experiments to improve AI system performance and stay updated with the latest advancements in AI technologies.
- Documentation: Maintain comprehensive documentation for AI models, algorithms, and processes.
Qualifications
- Education: A bachelor’s degree in Computer Science, Engineering, or a related field is preferred.
- Training: Completion of professional training programs in AI, machine learning, or data science.
- Programming Skills: Proficiency in programming languages such as Python or R, with experience in data processing techniques.
- Analytical Skills: Strong problem-solving abilities and a keen analytical mindset.
Desired Attributes
- A genuine interest in the evolving landscape of AI technologies.
- Willingness to learn and adapt in a fast-paced environment.
- Excellent communication skills for effective collaboration within teams.
This position offers a unique opportunity for those looking to pivot their careers into IT while contributing to innovative AI projects. If you are ready to embrace this challenge, we encourage you to apply!
Job Overview:
Python Lead responsibilities include developing and maintaining AI pipelines, including data preprocessing, feature extraction, model training, and evaluation.
Responsibilities:
- Designing, developing, and implementing generative AI models and algorithms utilizing state-of-the-art techniques such as GPT, VAE, and GANs.
- Conducting research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques and identify opportunities to integrate them into our products and services.
- 7+ years of Experience in creating rest api using popular python web frameworks like Django, flask or fastapi.
- Knowledge of databases like postgres, elastic, mongo etc.
- Knowledge of working with external integrations like redis, kafka, s3, ec2 etc.
- Some experience in ML integrations will be a plus.
Requirements:
- Work experience as a Python Developer
- Team spirit
- Good problem-solving skills
- Proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
- strong knowledge of data structures, algorithms, and software engineering principles
- Nice to have experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
About the company
DCB Bank is a new generation private sector bank with 442 branches across India.It is a scheduled commercial bank regulated by the Reserve Bank of India. DCB Bank’s business segments are Retail banking, Micro SME, SME, mid-Corporate, Agriculture, Government, Public Sector, Indian Banks, Co-operative Banks and Non-Banking Finance Companies.
Job Description
Department: Risk Analytics
CTC: Max 18 Lacs
Grade: Sr Manager/AVP
Experience: Min 4 years of relevant experience
We are looking for a Data Scientist to join our growing team of Data Science experts and manage the processes and people responsible for accurate data collection, processing, modelling, analysis, implementation, and maintenance.
Responsibilities
- Understand, monitor and maintain existing financial scorecards (ML Based) and make changes to the model when required.
- Perform Statistical analysis in R and assist IT team with deployment of ML model and analytical frameworks in Python.
- Should be able to handle multiple tasks and must know how to prioritize the work.
- Lead cross-functional projects using advanced data modelling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities.
- Develop clear, concise and actionable solutions and recommendations for client’s business needs and actively explore client’s business and formulate solutions/ideas which can help client in terms of efficient cost cutting or in achieving growth/revenue/profitability targets faster.
- Build, develop and maintain data models, reporting systems, data automation systems, dashboards and performance metrics support that support key business decisions.
- Design and build technical processes to address business issues.
- Oversee the design and delivery of reports and insights that analyse business functions and key operations and performance metrics.
- Manage and optimize processes for data intake, validation, mining, and engineering as well as modelling, visualization, and communication deliverables.
- Communicate results and business impacts of insight initiatives to the Management of the company.
Requirements
- Industry knowledge
- 4 years or more of experience in financial services industry particularly retail credit industry is a must.
- Candidate should have either worked in banking sector (banks/ HFC/ NBFC) or consulting organizations serving these clients.
- Experience in credit risk model building such as application scorecards, behaviour scorecards, and/ or collection scorecards.
- Experience in portfolio monitoring, model monitoring, model calibration
- Knowledge of ECL/ Basel preferred.
- Educational qualification: Advanced degree in finance, mathematics, econometrics, or engineering.
- Technical knowledge: Strong data handling skills in databases such as SQL and Hadoop. Knowledge with data visualization tools, such as SAS VI/Tableau/PowerBI is preferred.
- Expertise in either R or Python; SAS knowledge will be plus.
Soft skills:
- Ability to quickly adapt to the analytical tools and development approaches used within DCB Bank
- Ability to multi-task good communication and team working skills.
- Ability to manage day-to-day written and verbal communication with relevant stakeholders.
- Ability to think strategically and make changes to data when required.
Thirumoolar software is seeking talented AI researchers to join our cutting-edge team and help drive innovation in artificial intelligence. As an AI researcher, you will be at the forefront of developing intelligent systems that can solve complex problems and uncover valuable insights from data.
Responsibilities:
Research and Development: Conduct research in AI areas relevant to the company's goals, such as machine learning, natural language processing, computer vision, or recommendation systems. Explore new algorithms and methodologies to solve complex problems.
Algorithm Design and Implementation: Design and implement AI algorithms and models, considering factors such as performance, scalability, and computational efficiency. Use programming languages like Python, Java, or C++ to develop prototype solutions.
Data Analysis: Analyze large datasets to extract meaningful insights and patterns. Preprocess data and engineer features to prepare it for training AI models. Apply statistical methods and machine learning techniques to derive actionable insights.
Experimentation and Evaluation: Design experiments to evaluate the performance of AI algorithms and models. Conduct thorough evaluations and analyze results to identify strengths, weaknesses, and areas for improvement. Iterate on algorithms based on empirical findings.
Collaboration and Communication: Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, to integrate AI solutions into our products and services. Communicate research findings, technical concepts, and project updates effectively to stakeholders.
Preferred Location: Chennai
at Zolvit (formerly Vakilsearch)
Role Overview:
We are looking for a skilled Data Scientist with expertise in data analytics, machine learning, and AI to join our team. The ideal candidate will have a strong command of data tools, programming, and knowledge of LLMs and Generative AI, contributing to the growth and automation of our business processes.
Key Responsibilities:
- Data Analysis & Visualization:
- Develop and manage data pipelines, ensuring data accuracy and integrity.
- Design and implement insightful dashboards using Power BI to help stakeholders make data-driven decisions.
- Extract and analyze complex data sets using SQL to generate actionable insights
2 Machine Learning & AI Models:
- Build and deploy machine learning models to optimize key business functions like discount management, lead qualification, and process automation.
- Apply Natural Language Processing (NLP) techniques for text extraction, analysis, and classification from customer documents.
- Implement and fine-tune Generative AI models and large language models (LLMs) for various business applications, including prompt engineering for automation tasks.
3 Automation & Innovation:
- Use AI to streamline document verification, data extraction, and customer interaction processes.
- Innovate and automate manual processes, creating AI-driven solutions for internal teams and customer-facing systems.
- Stay abreast of the latest advancements in machine learning, NLP, and generative AI, applying them to real-world business challenges.
Qualifications:
- Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, or related field.
- 4-7 years of experience as a Data Scientist, with proficiency in Python, SQL, Power BI, and Excel.
- Expertise in building machine learning models and utilizing NLP techniques for text processing and automation.
- Experience in working with large language models (LLMs) and generative AI to create efficient and scalable solutions.
- Strong problem-solving skills, with the ability to work independently and in teams.
- Excellent communication skills, with the ability to present complex data in a simple, actionable way to non-technical stakeholders.
If you’re excited about leveraging data and AI to solve real-world problems, we’d love to have you on our team!
Client based at Bangalore location.
Data Science:
• Python expert level, Analytical, Different models works, Basic concepts, CPG(Domain).
• Statistical Models & Hypothesis , Testing
• Machine Learning Important
• Business Understanding, visualization in Python.
• Classification, clustering and regression
•
Mandatory Skills
• Data Science, Python, Machine Learning, Statistical Models, Classification, clustering and regression
Job Description: Product Manager for GenAI Applications on Data Products About the Company: We are a forward-thinking technology company specializing in creating innovative data products and AI applications. Our mission is to harness the power of data and AI to drive business growth and efficiency. We are seeking a dynamic and experienced Product Manager to join our team and lead the development of cutting-edge GenAI applications. Role Overview: As a Product Manager for GenAI Applications, you will be responsible for conceptualizing, developing, and managing AI-driven products that leverage our data platforms. You will work closely with cross-functional teams, including engineering, data science, marketing, and sales, to ensure the successful delivery of high-impact AI solutions. Your understanding of business user needs and ability to translate them into effective AI applications will be crucial. Key Responsibilities: - Lead the end-to-end product lifecycle from ideation to launch for GenAI applications. - Collaborate with engineering and data science teams to design, develop, and deploy AI solutions. - Conduct market research and gather user feedback to identify opportunities for new product features and improvements. - Develop detailed product requirements, roadmaps, and user stories to guide development efforts. - Work with business stakeholders to understand their needs and ensure the AI applications meet their requirements. - Drive the product vision and strategy, aligning it with company goals and market demands. - Monitor and analyze product performance, leveraging data to make informed decisions and optimizations. - Coordinate with marketing and sales teams to create go-to-market strategies and support product launches. - Foster a culture of innovation and continuous improvement within the product development team. Qualifications: - Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field. - 3-5 years of experience in product management, specifically in building AI applications. - Proven track record of developing and launching AI-driven products from scratch. - Experience working with data application layers and understanding data architecture. - Strong understanding of the psyche of business users and the ability to translate their needs into technical solutions. - Excellent project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously. - Strong analytical and problem-solving skills, with a data-driven approach to decision making. - Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams. - Passion for AI and a deep understanding of the latest trends and technologies in the field. Benefits: - Competitive salary and benefits package. - Opportunity to work on cutting-edge AI technologies and products. - Collaborative and innovative work environment. - Professional development opportunities and career growth. If you are a passionate Product Manager with a strong background in AI and data products, and you are excited about building transformative AI applications, we would love to hear from you. Apply now to join our dynamic team and make an impact in the world of AI and data.
Client based at Bangalore location.
Data Scientist with LLM and Healthcare Expertise
Keywords: Data Scientist, LLM, Radiology, Healthcare, Machine Learning, Deep Learning, AI, Python, TensorFlow, PyTorch, Scikit-learn, Data Analysis, Medical Imaging, Clinical Data, HIPAA, FDA.
Responsibilities:
· Develop and deploy advanced machine learning models, particularly focusing on Large Language Models (LLMs) and their application in the healthcare domain.
· Leverage your expertise in radiology, visual images, and text data to extract meaningful insights and drive data-driven decision-making.
· Collaborate with cross-functional teams to identify and address complex healthcare challenges.
· Conduct research and explore new techniques to enhance the performance and efficiency of our AI models.
· Stay up-to-date with the latest advancements in machine learning and healthcare technology.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
· 6+ years of hands-on experience in data science and machine learning.
· Strong proficiency in Python and popular data science libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
· Deep understanding of LLM architectures, training methodologies, and applications.
· Expertise in working with radiology images, visual data, and text data.
· Experience in the healthcare domain, particularly in areas such as medical imaging, clinical data analysis, or patient outcomes.
· Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
· PhD in Computer Science, Data Science, or a related field.
· Experience with cloud platforms (e.g., AWS, GCP, Azure).
· Knowledge of healthcare standards and regulations (e.g., HIPAA, FDA).
· Publications in relevant academic journals or conferences.
Company: Optimum Solutions
About the company: Optimum solutions is a leader in a sheet metal industry, provides sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. Starting from tools through software, machines, we are one stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes, Hands-on experience in any SQL queries and Database server wrt managing deployment.
- Implementing automated testing platforms, unit tests, and CICD Pipeline
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, ADO
- Understanding of container platform, such as Docker
Job Description
- We are looking for a good Python Developer with Knowledge of Machine learning and deep learning framework.
- Your primary focus will be working the Product and Usecase delivery team to do various prompting for different Gen-AI use cases
- You will be responsible for prompting and building use case Pipelines
- Perform the Evaluation of all the Gen-AI features and Usecase pipeline
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Take care of entire prompt life cycle like prompt design, prompt template creation, prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt gaurdrails to prevent attacks like prompt injection, jail braking and prompt leaking
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
Skills and Qualifications
- Strong proficiency with Python, DJANGO framework and REGEX
- Good understanding of Machine learning framework Pytorch and Tensorflow
- Knowledge of Generative AI and RAG Pipeline
- Good in microservice design pattern and developing scalable application.
- Ability to build and consume REST API
- Fine tune and perform code optimization for better performance.
- Strong understanding on OOP and design thinking
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding accessibility and security compliance, user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources and databases into one system
- Good knowledge in API Gateways and proxies, such as WSO2, KONG, nginx, Apache HTTP Server.
- Understanding fundamental design principles behind a scalable and distributed application
- Good working knowledge on Microservices architecture, behaviour, dependencies, scalability etc.
- Experience in deploying on Cloud platform like Azure or AWS
- Familiar and working experience with DevOps tools like Azure DEVOPS, Ansible, Jenkins, Terraform
at Cargill Business Services
Job Purpose and Impact:
The Sr. Generative AI Engineer will architect, design and develop new and existing GenAI solutions for the organization. As a Generative AI Engineer, you will be responsible for developing and implementing products using cutting-edge generative AI and RAG to solve complex problems and drive innovation across our organization. You will work closely with data scientists, software engineers, and product managers to design, build, and deploy AI-powered solutions that enhance our products and services in Cargill. You will bring order to ambiguous scenarios and apply in depth and broad knowledge of architectural, engineering and security practices to ensure your solutions are scalable, resilient and robust and will share knowledge on modern practices and technologies to the shared engineering community.
Key Accountabilities:
• Apply software and AI engineering patterns and principles to design, develop, test, integrate, maintain and troubleshoot complex and varied Generative AI software solutions and incorporate security practices in newly developed and maintained applications.
• Collaborate with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals.
• Conduct research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques and identify opportunities to integrate them into our products and services, optimizing existing generative AI models and RAG for improved performance, scalability, and efficiency, developing and maintaining pipelines and RAG solutions including data preprocessing, prompt engineering, benchmarking and fine-tuning.
• Develop clear and concise documentation, including technical specifications, user guides and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
• Participate in the engineering community by maintaining and sharing relevant technical approaches and modern skills in AI.
• Contribute to the establishment of best practices and standards for generative AI development within the organization.
• Independently handle complex issues with minimal supervision, while escalating only the most complex issues to appropriate staff.
Minimum Qualifications:
• Bachelor’s degree in a related field or equivalent experience
• Minimum of five years of related work experience
• You are proficient in Python and have experience with machine learning libraries and frameworks
• Have deep understanding of industry leading Foundation Model capabilities and its application.
• You are familiar with cloud-based Generative AI platforms and services
• Full stack software engineering experience to build products using Foundation Models
• Confirmed experience architecting applications, databases, services or integrations.
product base company based at Bangalore location and working
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral, etc.) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience in other transformer-based NLP models such as BERT, etc. will be an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
Qualifications Required
• Doctoral or master’s degree in computer science, Data Science, Artificial Intelligence, or related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment
Must have:
- 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
- Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
- Good understanding of Generics, OOPs concepts & Design Patterns
- Solid engineering and coding skills. Ability to write high-performance production quality code in Python
- Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
- Strong expertise in time series forecasting using stat, ML, DL and foundation models
- Experience of working on processing time series data employing techniques such as decomposition, clustering, outlier detection & treatment
- Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, Crew AI, Mosaic/Databricks, Azure
- Experience of working with modern data architectures, including data lakes and data warehouses, having leveraged one or more of the frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake,, DBT
- Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
- Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
- Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
Good To Have:
- Experience with MLOps tools and practices for continuous integration and deployment of ML models.
- Has familiarity with deploying applications on Kubernetes
- Knowledge of supply chain management principles and challenges.
- A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
MLSecured(https://www.mlsecured.com/) an AI GRC (Governance, Risk, and Compliance) is Hiring! a Backend Software Engineer 🚀
Are you a passionate Backend Software Engineer with experience in Machine Learning and Open Source projects? Do you have a strong foundation in Python and Object-Oriented Programming (OOP) concepts? Join us at MLSecured.com and be part of our mission to solve AI Security & Compliance challenges! 🔐🤖
What We’re Looking For:
👨💻 1-2 years of professional experience in Backend Development and Open Source projects contribution
🐍 Proficiency in Python and OOP concepts
🤝 Experience with Machine Learning (NLP, GenAI)
🤝 Experience with CI/CD and cloud infra is a plus
💡 A passion for solving complex AI Security & Compliance problems
Why Join Us?
At MLSecured.com, you'll work with a talented team dedicated to pioneering AI security solutions. Be a part of our journey to make AI systems secure and compliant for everyone. 🌟
Perks of Joining a Fast-Paced Startup:
🚀 Rapid career growth and learning opportunities
🌍 Work on cutting-edge AI technologies and innovative projects
🤝 Collaborative and dynamic work environment
🎉 Flexible working hours and full remote work options
📈 Significant impact on the company's direction and success
Internship Opportunity at REDHILL SOFTEC
Position: Software Intern
Duration: 3 Months
Domains: Machine Learning or Full Stack Web Development
Working Hours: 10 AM - 6 PM
About REDHILL SOFTEC:
REDHILL SOFTEC is a dynamic and innovative tech company committed to fostering talent and driving technological advancements. We are excited to announce an exclusive internship opportunity for MCA students passionate about Machine Learning or Full Stack Web Development.
Internship Details:
Duration: The internship spans 3 months, offering a deep dive into either Machine Learning or Full Stack Web Development.
Domains:
Machine Learning: Work on cutting-edge ML projects, learning and applying various algorithms and data analysis techniques.
Full Stack Web Development: Gain hands-on experience in both front-end and back-end development, working with the latest web technologies.
Stipend: This is an unpaid internship designed to provide valuable industry experience and skill development.
Working Hours: Interns are expected to work from 10 AM to 6 PM, ensuring a full-time immersive experience.
What We Offer:
Hands-on Experience: Engage in real-world projects that enhance your technical skills and industry knowledge.
Mentorship: Learn from experienced professionals who will guide and support you throughout the internship.
Skill Development: Acquire practical skills in Machine Learning or Full Stack Web Development, making you industry-ready.
Networking: Connect with industry experts and like-minded peers, building a strong professional network.
Who Should Apply:
MCA/BE/BCA students with a keen interest in Machine Learning or Full Stack Web Development.
Individuals looking to gain practical industry experience and enhance their technical skills.
Self-motivated learners eager to work on real-world projects and solve challenging problems.
How to Apply:
Interested candidates can apply by visiting our website and completing the registration process. Remember, spots are limited, and early applications are encouraged to secure your place in this enriching program.
Join REDHILL SOFTEC and take a significant step towards a successful career in technology!
REDHILL SOFTEC
Vijayanagar Bangalore
www.redhillsoftec.com
We look forward to welcoming passionate and driven interns to our team!
Position Description:
Amity University, Patna campus invites applications for a tenure-track Assistant Professor position in the Department of Computer Science. The successful candidate will demonstrate a strong commitment to teaching, research, and service in the field of computer science.
Responsibilities:
- Teach undergraduate and graduate courses in computer science, with a focus on [insert areas of specialization or interest, e.g., artificial intelligence, machine learning, software engineering, etc.].
- Develop and deliver innovative curriculum that incorporates industry best practices and emerging technologies.
- Advise and mentor undergraduate and graduate students in academic and career development.
- Conduct high-quality research leading to publications in peer-reviewed journals and presentations at conferences.
- Seek external funding to support research activities and contribute to the growth of the department.
- Participate in departmental and institutional service activities, including committee work, academic advising, and community outreach.
- Contribute to the collegial and collaborative atmosphere of the department through active engagement with colleagues and participation in departmental events and initiatives.
Qualifications:
- A Ph.D. in Computer Science or atleast Thesis submitted
- Evidence of excellence in teaching at the undergraduate and/or graduate level.
- A strong record of research productivity, including publications in reputable journals and conferences.
- Demonstrated expertise in [insert areas of specialization].
- Ability to effectively communicate complex concepts to diverse audiences.
- Commitment to fostering an inclusive and equitable learning environment.
- Strong interpersonal skills and the ability to work collaboratively with students, faculty, and staff.
- Potential for leadership and contribution to the academic community.
Preferred Qualifications:
- Experience securing external research funding.
- Experience supervising undergraduate or graduate research projects.
- Experience with curriculum development and assessment.
- Experience with industry collaboration or technology transfer initiatives.
Application Process:
Interested candidates should submit a cover letter, curriculum vitae, statement of teaching philosophy, statement of research interests, evidence of teaching effectiveness (e.g., teaching evaluations), and contact information for three professional references. Review of applications will begin immediately and continue until the position is filled.
Company Name : LMES Academy Private Limited
Website : https://lmes.in/
Linkedin : https://www.linkedin.com/company/lmes-academy/mycompany/
Role : Machine Learning Engineer
Experience: 2 Year to 4 Years
Location: Urapakkam, Chennai, Tamil Nadu.
Job Overview:
We are looking for a Machine Learning Engineer to join our team and help us advance our AI capabilities.
Requirements
• Model Training and Fine-Tuning: Utilize and refine large language models using techniques such as distillation and supervised fine-tuning to enhance performance and efficiency.
• Retrieval-Augmented Generation (RAG): Good understanding on RAG systems to improve the quality and relevance of generated content.
• Vector Databases: Familiar with vector databases to support fast and accurate similarity searches and other ML-driven functionalities.
• API Integration: Good in REST APIs and integrate third-party APIs, including Open AI, Google Vertex, and Cloudflare Workers AI, to extend our AI capabilities.
• Generative AI: Experience with generative AI applications, including text-to-image, speech recognition, and text-to-speech systems.
• Collaboration: Work collaboratively with cross-functional teams, including data scientists, developers, and product managers, to deliver innovative AI solutions.
• Adaptability: Thrive in a fast-paced environment with loosely defined tasks and competing priorities, ensuring timely delivery of high-quality results.
Immediate opening for a Senior Technical Trainer for our company Vy TCDC
(https://www.vytcdc.com/) in Karur
We are back in the office so no work from home.
Role: Technical Trainer
Minimum exp: 0 to 4yrs exp
Job Types: Full-time, Regular / Permanent
Location: Karur
Notice Period: Expecting immediate joiners
Job Description
• Design effective training programs
• Train all candidates
• Conduct seminars, workshops, individual training sessions, etc.
• Monitor candidate performance and response to training
• Strong Knowledge of React Js, Java , React Native, Angular, Node Js,Java , Python, Full Stack Development.
• Conduct seminars, workshops, individual training sessions, etc
• Conduct evaluations to identify areas of improvement
About Springworks
At Springworks, we're on a mission to revolutionize the world of People Operations. With our innovative tools and products, we've already empowered over 500,000+ employees across 15,000+ organizations and 60+ countries in just a few short years.
But what sets us apart? Let us introduce you to our exciting product stack:
- SpringVerify: Our B2B background verification platform
- EngageWith: Spark vibrant cultures! Our recognition platform adds magic to work.
- Trivia: Fun remote team-building! Real-time games for strong bonds.
- SpringRole: Future-proof profiles! Blockchain-backed skill showcase.
- Albus: AI-powered workplace search and knowledge bot for companies
Join us at Springworks and be part of the remote work revolution. Get ready to work, play, and thrive in an environment that's anything but ordinary!
Role Overview
This role is for our Albus team. As a SDE 2 at Springworks, you will be responsible for designing, developing, and maintaining robust, scalable, and efficient web applications. You will work closely with cross-functional teams, turning innovative ideas into tangible, user-friendly products. The ideal candidate has a strong foundation in both front-end and back-end technologies, with a focus on Python, Node.js and ReactJS. Experience in Artificial Intelligence (AI), Machine Learning (ML) and Natural Language Processing (NLP) will be a significant advantage.
Responsibilities:
- Collaborate with product management and design teams to understand user requirements and translate them into technical specifications.
- Develop and maintain server-side logic using Node.js and Python.
- Design and implement user interfaces using React.js with focus on user experience.
- Build reusable and efficient code for future use.
- Implement security and data protection measures.
- Collaborate with other team members and stakeholders to ensure seamless integration of front-end and back-end components.
- Troubleshoot and debug complex issues, identifying root causes and implementing effective solutions.
- Stay up-to-date with the latest industry trends, technologies, and best practices to drive innovation within the team.
- Participate in architectural discussions and contribute to technical decision-making processes.
Goals (not limited to):
1 month into the job:
- Become familiar with the company's products, codebase, development tools, and coding standards. Aim to understand the existing architecture and code structure.
- Ensure that your development environment is fully set up and configured, and you are comfortable with the team's workflow and tools.
- Start contributing to the development process by taking on smaller tasks or bug fixes. Ensure that your code is well-documented and follows the team's coding conventions.
- Begin collaborating effectively with team members, attending daily stand-up meetings, and actively participating in discussions and code reviews.
- Understand the company's culture, values, and long-term vision to align your work with the company's goals.
3 months into the job:
- Be able to independently design, develop, and deliver small to medium-sized features or improvements to the product.
- Demonstrate consistent improvement in writing clean, efficient, and maintainable code. Receive positive feedback on code reviews.
- Continue to actively participate in team meetings, offer suggestions for process improvements, and collaborate effectively with colleagues.
- Start assisting junior team members or interns by sharing knowledge and providing mentorship.
- Seek feedback from colleagues and managers to identify areas for improvement and implement necessary changes.
6 months into the job:
- Take ownership of significant features or projects, from conception to deployment, demonstrating leadership in technical decision-making.
- Identify areas of the codebase that can benefit from refactoring or performance optimizations and work on these improvements.
- Propose and implement process improvements that enhance the team's efficiency and productivity.
- Continue to expand your technical skill set, potentially by exploring new technologies or frameworks that align with the company's needs.
- Strengthen your collaboration with other departments, such as product management or design, to ensure alignment between development and business objectives.
Requirements
- Minimum 4 years of experience working with Python along with machine learning frameworks and NLP technologies.
- Strong understanding of micro-services, messaging systems like SQS.
- Experience in designing and maintaining nosql databases (MongoDB)
- Familiarity with RESTful API design and implementation.
- Knowledge of version control systems (e.g., Git).
- Ability to work collaboratively in a team environment.
- Excellent problem-solving and communication skills, and a passion for learning. Essentially having a builder mindset is a plus.
- Proven ability to work on multiple projects simultaneously.
Nice to Have:
- Experience with containerization (e.g., Docker, Kubernetes).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of agile development methodologies.
- Contributions to open-source projects or a strong GitHub profile.
- Previous experience of working in a startup or fast paced environment.
- Strong understanding of front-end technologies such as HTML, CSS, and JavaScript.
About Company / Benefits
- Work from anywhere effortlessly with our remote setup perk: Rs. 50,000 for furniture and headphones, plus an annual addition of Rs. 5,000.
- We care about your well-being! Our health scheme covers not only your physical health but also your mental and social well-being. We've got you covered from head to toe!
- Say hello to endless possibilities with our learning and growth opportunities. We're here to fuel your curiosity and help you reach new heights.
- Take a breather and enjoy 30 annual paid leave days. It's time to relax, recharge, and make the most of your time off.
- Let's celebrate! We love company outings and celebrations that bring the team together for unforgettable moments and good vibes.
- We'll reimburse your workation trips, turning your travel dreams into reality.
- We've got your lifestyle covered. Treat yourself with our lifestyle allowance, which can be used for food, OTT, health/fitness, and more. Plus, we'll reimburse your internet expenses so you can stay connected wherever you go!
Join our remote team and experience the freedom and flexibility of asynchronous communication. Apply now!
Know more about Springworks:
- Life at Springworks: https://www.springworks.in/blog/category/life-at-springworks/
- Glassdoor Reviews: https://www.glassdoor.co.in/Overview/Working-at-Springworks-EI_IE1013270.11,22.htm
- More about Asynchronous Communication: https://www.springworks.in/blog/asynchronous-communication-remote-work/
at TensorIoT Software Services Private Limited, India
About TensorIoT
- AWS Advanced Consulting Partner (for ML and GenAI solutions)
- Pioneers in IoT and Generative AI products.
- Committed to diversity and inclusion in our teams.
TensorIoT is an AWS Advanced Consulting Partner. We help companies realize the value and efficiency of the AWS ecosystem. From building PoCs and MVPs to production-ready applications, we are tackling complex business problems every day and developing solutions to drive customer success.
TensorIoT's founders helped build world-class IoT and AI platforms at AWS and Google and are now creating solutions to simplify the way enterprises incorporate edge devices and their data into their day-to-day operations. Our mission is to help connect devices and make them intelligent. Our founders firmly believe in the transformative potential of smarter devices to enhance our quality of life, and we're just getting started!
TensorIoT is proud to be an equal-opportunity employer. This means that we are committed to diversity and inclusion and encourage people from all backgrounds to apply. We do not tolerate discrimination or harassment of any kind and make our hiring decisions based solely on qualifications, merit, and business needs at the time.
Job Description
At TensorIoT India team, we look forward to bringing on board senior Machine Learning Engineers / Data Scientists. In this section, we briefly describe the work role, the minimum and the preferred requirements to qualify for the first round of the selection process.
What are the kinds of tasks Data Scientists do at TensorIoT?
As a Data Scientist, the kinds of tasks revolve around the data that we have and the business objectives of the client. The tasks generally involve: Studying, understanding, and analyzing datasets; feature engineering, proposing and solutions, evaluating the solution scientifically, and communicating with the client. Implementing ETL pipelines with database/data lake tools. Conduct and present scientific research/experiments within the team and to the client.
Minimum Requirements:
- Masters + 6 years of work experience in Machine Learning Engineering OR B.Tech (Computer Science or related) + 8 years of work experience in Machine Learning Engineering 3 years of Cloud Experience.
- Experience working with Generative AI (LLM), Prompt Engineering, Fine Tuning of LLMs.
- Hands-on experience in MLOps (model deployment, maintenance)
- Hands-on experience with Docker.
- Clear concepts of the following:
- - Supervised Learning, Unsupervised Learning, Reinforcement Learning
- - Statistical Modelling, Deep Learning
- - Interpretable Machine Learning
- Well-rounded exposure to Computer Vision, Natural Language Processing, and Time-Series Analysis.
- Scientific & Analytical mindset, proactive learning, adaptability to changes.
- Strong interpersonal and language skills in English, to communicate within the team and with the clients.
Preferred Qualifications:
- PhD in the domain of Data Science / Machine Learning
- M.Sc | M.Tech in the domain of Computer Science / Machine Learning
- Some experience in creating cloud-native technologies, and microservices design.
- Published scientific papers in the relevant domain of work.
CV Tips:
Your CV is an integral part of your application process. We would appreciate it if the CV prioritizes the following:
- Focus:
- More focus on technical skills relevant to the job description.
- Less or no focus on your roles and responsibilities as a manager, team lead, etc.
- Less or no focus on the design aspect of the document.
- Regarding the projects you completed in your previous companies,
- Mention the problem statement very briefly.
- Your role and responsibilities in that project.
- Technologies & tools used in the project.
- Always good to mention (if relevant):
- Scientific papers published, Master Thesis, Bachelor Thesis.
- Github link, relevant blog articles.
- Link to LinkedIn profile.
- Mention skills that are relevant to the job description and you could demonstrate during the interview / tasks in the selection process.
We appreciate your interest in the company and look forward to your application.
at Accrete
Responsibilities:
- Collaborating with data scientists and machine learning engineers to understand their requirements and design scalable, reliable, and efficient machine learning platform solutions.
- Building and maintaining the applications and infrastructure to support end-to-end machine learning workflows, including inference and continual training.
- Developing systems for the definition deployment and operation of the different phases of the machine learning and data life cycles.
- Working within Kubernetes to orchestrate and manage containers, ensuring high availability and fault tolerance of applications.
- Documenting the platform's best practices, guidelines, and standard operating procedures and contributing to knowledge sharing within the team.
Requirements:
- 3+ years of hands-on experience in developing and managing machine learning or data platforms
- Proficiency in programming languages commonly used in machine learning and data applications such as Python, Rust, Bash, Go
- Experience with containerization technologies, such as Docker, and container orchestration platforms like Kubernetes.
- Familiarity with CI/CD pipelines for automated model training and deployment. Basic understanding of DevOps principles and practices.
- Knowledge of data storage solutions and database technologies commonly used in machine learning and data workflows.
Responsibilities
- Work on execution and scheduling of all tasks related to assigned projects' deliverable dates
- Optimize and debug existing codes to make them scalable and improve performance
- Design, development, and delivery of tested code and machine learning models into production environments
- Work effectively in teams, managing and leading teams
- Provide effective, constructive feedback to the delivery leader
- Manage client expectations and work with an agile mindset with machine learning and AI technology
- Design and prototype data-driven solutions
Eligibility
- Highly experienced in designing, building, and shipping scalable and production-quality machine learning algorithms in the field of Python applications
- Working knowledge and experience in NLP core components (NER, Entity Disambiguation, etc.)
- In-depth expertise in Data Munging and Storage (Experienced in SQL, NoSQL, MongoDB, Graph Databases)
- Expertise in writing scalable APIs for machine learning models
- Experience with maintaining code logs, task schedulers, and security
- Working knowledge of machine learning techniques, feed-forward, recurrent and convolutional neural networks, entropy models, supervised and unsupervised learning
- Experience with at least one of the following: Keras, Tensorflow, Caffe, or PyTorch
ML Engineer
HackerPulse is a new and growing company. We help software engineers showcase their skills using AI powered profiles. As a Machine Learning Engineer, you will have the opportunity to contribute to the development and implementation of advanced Machine Learning (ML) and Natural Language Processing (NLP) solutions. You will play a crucial role in taking the innovative work done by our research team and turning it into practical solutions for production deployment. By applying to this job you agree to receive communication from us.
*Make sure to fill out the link below*
To speed up the hiring process, kindly complete the following link: https://airtable.com/appcWHN5MIs3DJEj9/shriREagoEMhlfw84
Responsibilities:
- Contribute to the development of software and solutions, emphasizing ML/NLP as a key component, to productize research goals and deployable services.
- Collaborate closely with the frontend team and research team to integrate machine learning models into deployable services.
- Utilize and develop state-of-the-art algorithms and models for NLP/ML, ensuring they align with the product and research objectives.
- Perform thorough analysis to improve existing models, ensuring their efficiency and effectiveness in real-world applications.
- Engage in data engineering tasks to clean, validate, and preprocess data for uniformity and accuracy, supporting the development of robust ML models.
- Stay abreast of new developments in research and engineering in NLP and related fields, incorporating relevant advancements into the product development process.
- Actively participate in agile development methodologies within dynamic research and engineering teams, adapting to evolving project requirements.
- Collaborate effectively within cross-functional teams, fostering open communication and cooperation between research, development, and frontend teams.
- Actively contribute to building an open, transparent, and collaborative engineering culture within the organization.
- Demonstrate strong software engineering skills to ensure the reliability, scalability, and maintainability of deployable ML services.
- Take ownership of the end-to-end deployment process, including the deployment of ML models to production environments.
- Work on continuous improvement of deployment processes and contribute to building a seamless pipeline for deploying and monitoring ML models in real-world applications.
Qualifications:
- Degree in Computer Science or related discipline or equivalent practical experience, with a strong emphasis on machine learning and natural language processing.
- Proven experience and in-depth knowledge of ML techniques, with a focus on implementing deep-learning approaches for NLP tasks in the context of productizing research goals.
- Ability to apply engineering best practices to make architectural and design decisions aligned with functionalities, user experience, performance, reliability, and scalability in the development of deployable ML services.
- Substantial experience in software development using Python, Java, and/or C or C++, with a particular emphasis on integrating machine learning models into production-ready software solutions.
- Demonstrated problem-solving skills, showcasing the ability to address complex situations effectively, especially in the context of improving models, data engineering, and deployment processes.
- Strong interpersonal and communication skills, essential for effective collaboration within cross-functional teams consisting of research, development, and frontend teams.
- Proven time management skills to handle dynamic and agile development situations, ensuring timely delivery of solutions in a fast-paced environment.
- Self-motivated contributor who frequently takes initiative to enhance the codebase and share best practices, contributing to the development of an open, transparent, and collaborative engineering culture.
Who we are looking for
· A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
· Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
· Mentor and coach other team members
· Evaluate the performance of NLP models and ideate on how they can be improved
· Support internal and external NLP-facing APIs
· Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
· Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioural Skills
· Strong analytical and problem-solving capabilities.
· Proven ability to multi-task and deliver results within tight time frames
· Must have strong verbal and written communication skills
· Strong listening skills and eagerness to learn
· Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
· NLP
· Deep Learning
· Machine Learning
· Python
· Bert
Preferred Requirements
· Experience in Computer Vision is preferred
Role: Data Scientist
Industry Type: Banking
Department: Data Science & Analytics
Employment Type: Full Time, Permanent
Role Category: Data Science & Machine Learning
Data engineers:
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.This would also include develop and maintain scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity
Constructing infrastructure for efficient ETL processes from various sources and storage systems.
Collaborating closely with Product Managers and Business Managers to design technical solutions aligned with business requirements.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, utilities, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.
Writes unit/integration tests, contributes towards documentation work
Must have ....
6 to 8 years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
High proficiency in Scala/Java/ Python API frameworks/ Swagger and Spark for applied large-scale data processing.
Expertise with big data technologies, API development (Flask,,including Spark, Data Lake, Delta Lake, and Hive.
Solid understanding of batch and streaming data processing techniques.
Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion.
Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
Experience with RDBMS and OLAP databases like MySQL, Redshift.
Familiarity with Agile methodologies.
Obsession for service observability, instrumentation, monitoring, and alerting.
Knowledge or experience in architectural best practices for building data pipelines.
Good to Have:
Passion for testing strategy, problem-solving, and continuous learning.
Willingness to acquire new skills and knowledge.
Possess a product/engineering mindset to drive impactful data solutions.
Experience working in distributed environments with teams scattered geographically.
Role Overview
We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies.
This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.
Responsibilities
- Lead the development of fintech solutions, focusing on fraud prevention and AML, using Typescript, ReactJs, Python, and SQL databases.
- Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
- Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
- Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
- Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
- Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.
Requirements
- 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
- Expertise in Typescript, ReactJs, and familiarity with Python.
- Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
- Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
- Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
- A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.
Why Join Us?
- Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
- Contribute to a company with ambitious goals to revolutionize software development and make a historical impact.
- Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
- Work in an environment that values innovation, leadership, and the long-term success of its employees.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar Python expert to help develop and deploy our AI pipeline. The main task will be deploying models and algorithms developed by our AI team, and keeping the daily production pipeline running. Our pipeline is centered around several microservices, all written in Python, that coordinate their actions through a database. We’re looking for developers with deep experience in Python including profiling and improving the performance of production code, multiprocessing / multithreading, and managing a pipeline that is constantly running. AI/ML experience is a plus, but not necessary. AWS / docker / CI/CD practices are also a plus. If you are a gamer or streamer, or enjoy watching video games and streams, that is also definitely a plus :-)
You will be responsible for:
- Building Python scripts to deploy our AI components into pipeline and production
- Developing logic to ensure multiple different AI components work together seamlessly through a microservices architecture
- Managing our daily pipeline on both on-premise servers and AWS
- Working closely with the AI engineering, backend and frontend teams
You should have the following qualities:
- Deep expertise in Python including:
- Multiprocessing / multithreaded applications
- Class-based inheritance and modules
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Understanding Python performance bottlenecks, and how to profile and improve the performance of production code including:
- Optimal multithreading / multiprocessing strategies
- Memory bottlenecks and other bottlenecks encountered with large datasets and use of numpy / opencv / image processing
- Experience in creating soft real-time processing tasks is a plus
- Expertise in Docker-based virtualization including:
- Creating & maintaining custom Docker images
- Deployment of Docker images on cloud and on-premise services
- Experience with maintaining cloud applications in AWS environments
- Experience in deploying machine learning algorithms into production (e.g. PyTorch, tensorflow, opencv, etc) is a plus
- Experience with image processing in python is a plus (e.g. openCV, Pillow, etc)
- Experience with running Nvidia GPU / CUDA-based tasks is a plus (Nvidia Triton, MLFlow)
- Knowledge of video file formats (mp4, mov, avi, etc.), encoding, compression, and using ffmpeg to perform common video processing tasks is a plus.
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 4 years to 8 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Sizzle works with thousands of gaming streamers to automatically create highlights and social content for them. Sizzle is available at www.sizzle.gg.
at Tiger Analytics
• Charting learning journeys with knowledge graphs.
• Predicting memory decay based upon an advanced cognitive model.
• Ensure content quality via study behavior anomaly detection.
• Recommend tags using NLP for complex knowledge.
• Auto-associate concept maps from loosely structured data.
• Predict knowledge mastery.
• Search query personalization.
Requirements:
• 6+ years experience in AI/ML with end-to-end implementation.
• Excellent communication and interpersonal skills.
• Expertise in SageMaker, TensorFlow, MXNet, or equivalent.
• Expertise with databases (e. g. NoSQL, Graph).
• Expertise with backend engineering (e. g. AWS Lambda, Node.js ).
• Passionate about solving problems in education
Key Roles/Responsibilities: –
• Develop an understanding of business obstacles, create
• solutions based on advanced analytics and draw implications for
• model development
• Combine, explore and draw insights from data. Often large and
• complex data assets from different parts of the business.
• Design and build explorative, predictive- or prescriptive
• models, utilizing optimization, simulation and machine learning
• techniques
• Prototype and pilot new solutions and be a part of the aim
• of ‘productifying’ those valuable solutions that can have impact at a
• global scale
• Guides and coaches other chapter colleagues to help solve
• data/technical problems at an operational level, and in
• methodologies to help improve development processes
• Identifies and interprets trends and patterns in complex data sets to
• enable the business to take data-driven decisions
We are looking for
A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
Mentor and coach other team members
Evaluate the performance of NLP models and ideate on how they can be improved
Support internal and external NLP-facing APIs
Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
Strong analytical and problem-solving capabilities.
Proven ability to multi-task and deliver results within tight time frames
Must have strong verbal and written communication skills
Strong listening skills and eagerness to learn
Strong attention to detail and the ability to work efficiently in a team as well as individually
Hands-on experience with
NLP
Deep Learning
Machine Learning
Python
Bert
- A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
- Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- Bert
Preferred Requirements
- Experience in Computer Vision is preferred
About the Company:
ConveGenius is a leading Conversational AI company that is democratising educational and knowledge services for the mass market. Their knowledge bots have 35M users today. ConveGenius is building an omniverse on Conversational AI for the developer ecosystem to build together. We are looking for self-driven individuals who love to find innovative solutions and can perform under pressure. An eye for detail and being proud of produced code is the must-have attributes for this job.
About the role:
At ConveGenius, we are committed to creating an inclusive and collaborative work environment that fosters creativity and innovation. Join our AI Team as Senior AI Engineer with the focus on development of cutting-edge technologies, making a real life difference in the lives of millions of users. You will be involved in a wide range of projects, from designing and building AI capabilities for our swiftchat platform, and analysing large conversation datasets. Innovative nature and proactive involvement in the product is taken very seriously at Convegenius, therefore, a major part of your role would involve thinking about new features and new ways to deliver quality learning experience to our learners.
Responsibilities:
- Develop and implement state-of-the-art NLP, Computer Vision, Speech techniques.
- Collaborate with other AI researchers to develop innovative AI solutions and services to be used to improve our platform capabilities.
- Mentor and lead the experiments with other team members.
- Conduct research and experiments in deep learning, machine learning, and related fields to support the development of AI techniques.
Qualifications:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field with experience in machine learning and data analysis.
- 2-4 years of industry experience in machine learning, with a strong focus on NLP and Computer Vision.
- Hands-on experience of ML model development and deploying.
- Experience working with training and fine-tuning LLM is a plus.
- Experience in audio/speech technologies is a plus.
- Strong programming skills in Python, node.js or other relevant programming languages.
- Familiarity with machine learning libraries, such as TensorFlow, Keras, PyTorch, scikit-learn, Conversational frameworks like Rasa, DialogFlow, Lex, etc.
- Strong analytical and problem-solving skills, with the ability to think out of the box.
- Self-starter with the ability to work independently and in a team environment.
at Blue Hex Software Private Limited
In this position, you will play a pivotal role in collaborating with our CFO, CTO, and our dedicated technical team to craft and develop cutting-edge AI-based products.
Role and Responsibilities:
- Develop and maintain Python-based software applications.
- Design and work with databases using SQL.
- Use Django, Streamlit, and front-end frameworks like Node.js and Svelte for web development.
- Create interactive data visualizations with charting libraries.
- Collaborate on scalable architecture and experimental tech. - Work with AI/ML frameworks and data analytics.
- Utilize Git, DevOps basics, and JIRA for project management. Skills and Qualifications:
- Strong Python programming
skills.
- Proficiency in OOP and SQL.
- Experience with Django, Streamlit, Node.js, and Svelte.
- Familiarity with charting libraries.
- Knowledge of AI/ML frameworks.
- Basic Git and DevOps understanding.
- Effective communication and teamwork.
Company details: We are a team of Enterprise Transformation Experts who deliver radically transforming products, solutions, and consultation services to businesses of any size. Our exceptional team of diverse and passionate individuals is united by a common mission to democratize the transformative power of AI.
Website: Blue Hex Software – AI | CRM | CXM & DATA ANALYTICS