50+ Machine Learning (ML) Jobs in India
Apply to 50+ Machine Learning (ML) Jobs on CutShort.io. Find your next job, effortlessly. Browse Machine Learning (ML) Jobs and apply today!




Proven experience as a Data Scientist or in a similar role, with at least 4 years of relevant experience and 6-8 years of total experience.
· Technical expertise in data models, database design and development, data mining, and segmentation techniques
· Strong knowledge of and experience with reporting packages (Business Objects and the like), databases, and programming within ETL frameworks
· Experience with data movement and management in the cloud using Azure or AWS features
· Hands-on experience with data visualization tools – Power BI preferred
· Solid understanding of machine learning
· Knowledge of data management and visualization techniques
· A knack for statistical analysis and predictive modeling
· Good knowledge of Python and MATLAB
· Experience with SQL and NoSQL databases including ability to write complex queries and procedures

We are looking for a skilled AI Engineer to join our organization. Please review the details below and respond if you would like to discuss further.
Company Name: TechWize (A Business unit of Mangalam Information Technologies Pvt Ltd.)
Our Accreditations -
- 25 years of industry presence
- Salesforce Partner
- ISO 27001:2019 certified
- Great Place to Work certified
- HIPAA Compliant
- SOC2 Compliant
- NASSCOM Member
Our EVP (Employee Value Proposition)
- Great Place to Work certified company
- 30 earned leaves per calendar year
- Career progression and continuous Learning & Development (technical, soft skills, communication, leadership)
- Performance bonus and loyalty bonus benefits
- 5-day work week
- Rewards and Recognition programs
- Salary in line with market norms
- Equal career opportunities, no discrimination
- Magnificent and dynamic culture
- Festival celebrations & fun events
Explore more: https://techwize.com/, https://mangalaminfotech.com/
Position: AI/Sr. AI Engineer
Job location: Ahmedabad
Experience: 4+ years
Job Overview:
We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Key Responsibilities:
- Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications.
- Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality.
- Build NLP solutions for Q&A, summarization, information extraction, text classification, and more.
- Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks.
- Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features.
- Optimize models for performance, cost-efficiency, and low latency in production.
- Continuously evaluate new AI research, tools, and frameworks and apply them where relevant.
- Mentor junior AI engineers and contribute to internal AI best practices and documentation.
Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, AI, Machine Learning, or related field.
- 4+ years of hands-on experience in AI/ML solution development.
- Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT family) using techniques like LoRA, QLoRA, and PEFT (see the sketch after this list).
- Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG).
- Proficient in key AI libraries and frameworks:
- LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers
- NLP: SpaCy, NLTK.
- Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2
- MLOps: MLflow, FastAPI, Docker, Git
- Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation.
- Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure.
- Strong communication skills and ability to convert business problems into technical solutions.
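As a rough illustration of the LoRA/QLoRA fine-tuning called out above, here is a minimal sketch using Hugging Face Transformers and PEFT; the base model, dataset file, and hyperparameters are placeholder assumptions, not values from this posting.

```python
# Minimal sketch: LoRA fine-tuning of a causal LM with Hugging Face PEFT.
# Base model, dataset path, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "mistralai/Mistral-7B-v0.1"              # any causal LM would do
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters; only adapter weights are trained.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

ds = load_dataset("json", data_files="train.jsonl")["train"]   # hypothetical dataset
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments("lora-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")        # adapters only, a few MB
```

Because only the low-rank adapter weights are trained and saved, this style of fine-tuning stays practical on modest GPU budgets.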


About the Role
As a Senior Analyst at JustAnswer, you will be a Subject Matter Expert (SME) on Chatbot Strategy, driving long-term growth and incorporating the newest technologies, such as LLMs.
In this role, you will deliver tangible business impact by providing high-quality insights and recommendations, combining strategic thinking and problem-solving with detailed analysis.
This position offers a unique opportunity to collaborate closely with Product Managers and cross-functional teams, uncover valuable business insights, devise optimization strategies, and validate them through experiments.
What You’ll Do
- Collaborate with Product and Analytics leadership to conceive and structure analysis, delivering highly actionable insights from “deep dives” into specific business areas.
- Analyze large volumes of internal & external data to identify growth and optimization opportunities.
- Package and communicate findings and recommendations to a broad audience, including senior leadership.
- Perform both Descriptive & Prescriptive Analytics, including experimentations (A/B, MAB), and build reporting to track trends.
- Perform advanced modeling (NLP, text mining) – preferred.
- Implement and track business metrics to help drive the business.
- Contribute to growth strategy from a marketing and operations perspective.
- Operate independently as a lead analyst to understand the JA audience and guide strategic decisions & executions.
What We’re Looking For
- 5+ years of experience in e-commerce/customer experience products.
- Proficiency in analysis and business modeling using Excel.
- Experience with Google Analytics, BigQuery, Google Ads, PowerBI, and Python / R is a plus.
- Strong SQL skills with the ability to write complex queries.
- Expertise in Descriptive and Inferential Statistical Analysis.
- Strong experience in setting up and analyzing A/B tests or hypothesis tests (see the analysis sketch after this list).
- Ability to translate analytical results into actionable business recommendations.
- Excellent written and verbal communication skills; ability to communicate with all levels of management.
- Advanced English proficiency.
- App-related experience – preferred.
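As a minimal sketch of the A/B-test analysis referenced above (assuming a simple two-variant conversion test; the counts are made up for illustration), a two-proportion z-test with statsmodels might look like this:

```python
# Minimal sketch: analyzing a two-variant A/B test on conversion rate.
# Counts are illustrative, not real experiment data.
from statsmodels.stats.proportion import proportions_ztest, proportion_confint

conversions = [1320, 1415]     # variant A, variant B
visitors    = [24000, 23800]

z_stat, p_value = proportions_ztest(conversions, visitors)
ci_a = proportion_confint(conversions[0], visitors[0], method="wilson")
ci_b = proportion_confint(conversions[1], visitors[1], method="wilson")

print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
print(f"A: {conversions[0] / visitors[0]:.3%}, 95% CI {ci_a}")
print(f"B: {conversions[1] / visitors[1]:.3%}, 95% CI {ci_b}")
# A p-value below the pre-registered alpha (e.g., 0.05) would support shipping B.
```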
About Us
We are a San Francisco-based company founded in 2003 with a simple mission: we help people. We have democratized professional services by connecting customers with verified Experts who provide reliable answers anytime, on any budget.
- 12,000+ Experts across various domains (doctors, lawyers, tech support, mechanics, vets, home repair, and more).
- 10 million customers in 196 countries.
- 16 million+ questions answered in 20 years.
- Investors include Charles Schwab, Crosslink Capital, and Glynn Capital Management.
Why Join the Team
- 1,000+ employees and growing rapidly.
- Hiring criteria: Smart. Fun. Get things done.
- Profitable and fast-growing company.
- We love what we do and celebrate success together.
Our JustAnswer Promise
We strive together to make the world a better place, one answer at a time.
Our values (“The JA Way”):
- Data Driven: Data decides, not egos.
- Courageous: We take risks and challenge the status quo.
- Innovative: Constantly learning, creating, and adapting.
- Lean: Customer-focused, using lean testing to learn and improve.
- Humble: Past success is not a guarantee of future success.
Work Environment
- Remote-first/hybrid model in most locations.
- Optional in-person meetings for collaboration and social events.
- Employee well-being is a top priority.
- Where legally permissible, employees must be fully vaccinated against Covid-19 to attend in-person events.
Our Commitment to Diversity
We embrace workplace diversity, believing it drives richer insights, fuels innovation, and creates better outcomes.
We are committed to attracting and developing an inclusive workforce. Individuals seeking opportunities are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable laws.


About the Role
As an Analyst at JustAnswer, you will be a Subject Matter Expert (SME) on Chatbot Strategy, driving long-term growth and incorporating the newest technologies, such as LLMs.
In this role, you will deliver tangible business impact by providing high-quality insights and recommendations, combining strategic thinking and problem-solving with detailed analysis.
This position offers a unique opportunity to collaborate closely with Product Managers and cross-functional teams, uncover valuable business insights, devise optimization strategies, and validate them through experiments.
What You’ll Do
- Collaborate with Product and Analytics leadership to conceive and structure analysis, delivering highly actionable insights from “deep dives” into specific business areas.
- Analyze large volumes of internal & external data to identify growth and optimization opportunities.
- Package and communicate findings and recommendations to a broad audience, including senior leadership.
- Perform both Descriptive & Prescriptive Analytics, including experimentations (A/B, MAB), and build reporting to track trends.
- Perform advanced modeling (NLP, text mining) – preferred.
- Implement and track business metrics to help drive the business.
- Contribute to growth strategy from a marketing and operations perspective.
- Operate independently as a lead analyst to understand the JA audience and guide strategic decisions & executions.
What We’re Looking For
- 5+ years of experience in e-commerce/customer experience products.
- Proficiency in analysis and business modeling using Excel.
- Experience with Google Analytics, BigQuery, Google Ads, PowerBI, and Python / R is a plus.
- Strong SQL skills with the ability to write complex queries.
- Expertise in Descriptive and Inferential Statistical Analysis.
- Strong experience in setting up and analyzing A/B Testing or Hypothesis Testing.
- Ability to translate analytical results into actionable business recommendations.
- Excellent written and verbal communication skills; ability to communicate with all levels of management.
- Advanced English proficiency.
- App-related experience – preferred.
About Us
We are a San Francisco-based company founded in 2003 with a simple mission: we help people. We have democratized professional services by connecting customers with verified Experts who provide reliable answers anytime, on any budget.
- 12,000+ Experts across various domains (doctors, lawyers, tech support, mechanics, vets, home repair, and more).
- 10 million customers in 196 countries.
- 16 million+ questions answered in 20 years.
- Investors include Charles Schwab, Crosslink Capital, and Glynn Capital Management.
Why Join the Team
- 1,000+ employees and growing rapidly.
- Hiring criteria: Smart. Fun. Get things done.
- Profitable and fast-growing company.
- We love what we do and celebrate success together.
Our JustAnswer Promise
We strive together to make the world a better place, one answer at a time.
Our values (“The JA Way”):
- Data Driven: Data decides, not egos.
- Courageous: We take risks and challenge the status quo.
- Innovative: Constantly learning, creating, and adapting.
- Lean: Customer-focused, using lean testing to learn and improve.
- Humble: Past success is not a guarantee of future success.
Work Environment
- Remote-first/hybrid model in most locations.
- Optional in-person meetings for collaboration and social events.
- Employee well-being is a top priority.
- Where legally permissible, employees must be fully vaccinated against Covid-19 to attend in-person events.
Our Commitment to Diversity
We embrace workplace diversity, believing it drives richer insights, fuels innovation, and creates better outcomes.
We are committed to attracting and developing an inclusive workforce. Individuals seeking opportunities are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable laws.


Job Description :
This is an exciting opportunity for an experienced industry professional with strong expertise in artificial intelligence and machine learning to join and add value to a dedicated and friendly team. We are looking for an AIML Engineer who is passionate about developing AI-driven technology solutions. As a core member of the Development Team, the candidate will take ownership of AI/ML projects by working independently with little supervision.
The ideal candidate is a highly resourceful and skilled professional with experience in applying AI to practical and comprehensive technology solutions. You must also possess expertise in machine learning, deep learning, TensorFlow, Python, and NLP, along with a strong understanding of algorithms, functional design principles, and best practices.
You will be responsible for leading AI/ML initiatives, ensuring scalable and optimized solutions, and integrating AI capabilities into applications. Additionally, you will work on REST API development, NoSQL database design, and RDBMS optimizations.
Key Responsibilities :
- Develop and implement AI/ML models to solve real-world problems.
- Utilize machine learning, deep learning, TensorFlow, and NLP techniques.
- Lead AI-driven projects with program leadership, governance, and change enablement.
- Apply best practices in algorithm development, object-oriented and functional design principles.
- Design and optimize NoSQL and RDBMS databases for AI applications.
- Develop and integrate AI-powered REST APIs for seamless application functionality.
- Collaborate with cross-functional teams to deliver AI-driven solutions.
- Stay updated with the latest advancements in AI and ML technologies.
Required Qualifications :
- Bachelor's or Master's degree in Computer Science or a related field.
- Minimum 2 years of experience in AI/ML application development.
- Strong expertise in machine learning, deep learning, TensorFlow, Python, and NLP.
- Experience in program leadership, governance, and change enablement.
- Knowledge of basic algorithms, object-oriented and functional design principles.
- Experience in REST API development, NoSQL database design, and RDBMS optimization.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Location: Patna, Bihar
Why make a career at Plus91:
At Plus91, we believe we make a better world together. We value the diversity, creativity, and experience of our people. And it's your ideas that help us improve our products and customer experiences and create value for the world of healthcare.
We help our people become better professionals, as well as human beings. We are a hands-on company, and our team is all about getting things done. We nurture experiential learning. Bring passion and dedication to your job, and there's no telling what you can accomplish at Plus91.
We are always on the lookout for bright and innovative people to help us reach our business goals and your personal goals. If this role with us fits your career goals and you think you can fit into our hands-on and go-getting culture, do apply.

About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX); a minimal export sketch follows this list.
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research the latest AI/CV advancements and bring them into production.
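As a minimal sketch of the kind of inference optimization mentioned above, the snippet below exports a stand-in PyTorch model to ONNX and times it with ONNX Runtime; the ResNet-18 backbone and input shape are placeholders, not HelloRamp's actual pipeline.

```python
# Minimal sketch: export a PyTorch model to ONNX and time inference with ONNX Runtime.
# The backbone and input shape are stand-ins for a real pipeline model.
import time
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"],
                  dynamic_axes={"input": {0: "batch"}})

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = dummy.numpy()
t0 = time.perf_counter()
for _ in range(50):
    sess.run(None, {"input": x})
print(f"avg latency: {(time.perf_counter() - t0) / 50 * 1e3:.2f} ms")
```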
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.


Remote Work | Type: Freelance / Project-based
Preferred Location: Candidates from Gujarat will be preferred
What are we looking for?
- 3 to 4 years of experience in React Native development
- Minimum 1 year of experience in AI/ML integrations
- Good understanding of mobile app architecture
- Ability to work independently and meet deadlines
- Strong communication and problem-solving skills
What will you do at Digicorp?
- Build and maintain mobile apps using React Native
- Integrate AI-based features like chatbots, recommendations, etc.
- Collaborate with design and backend teams
- Test, debug, and optimize app performance
- Stay updated with the latest tools and best practices
Skills Required:
- React Native, Redux, JavaScript/TypeScript
- AI/ML tools or APIs (OpenAI, TensorFlow, etc.)
- REST API integration
- Knowledge of Firebase or AWS (added advantage)


Job Title: Data Science Trainer
Location: Coimbatore
Employment Type: Full-time
Job Summary:
We are seeking an experienced and passionate Data Science Trainer to deliver engaging and practical training sessions to students or professionals. The trainer will design curriculum, prepare materials, and guide learners in developing data science skills through real-world projects and case studies.
Key Responsibilities:
- Develop and deliver comprehensive training programs on Data Science concepts, tools, and techniques.
- Teach topics including but not limited to:
- Data Analysis (NumPy, Pandas, Data Cleaning, EDA)
- Data Visualization (Matplotlib, Seaborn, Plotly)
- Statistics & Probability (Descriptive, Inferential)
- Machine Learning (Supervised & Unsupervised Algorithms, Model Evaluation)
- Deep Learning (Neural Networks, TensorFlow, PyTorch – if applicable)
- Big Data Tools (Hadoop, Spark – if applicable)
- Provide hands-on training with industry-relevant datasets and projects (see the sample walkthrough after this list).
- Prepare lesson plans, assignments, assessments, and evaluation reports.
- Mentor learners individually to ensure conceptual clarity and practical application.
- Keep training content updated with the latest industry trends and technologies.
- Collaborate with the academic or corporate training team for curriculum improvement.
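As an example of the hands-on walkthroughs referenced above, a trainer might demo a compact supervised-learning flow like the following; the bundled scikit-learn dataset and the choice of model are purely illustrative.

```python
# Minimal teaching sketch: load data, split, fit, evaluate.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```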
Required Skills & Qualifications:
- Bachelor’s/Master’s degree in Computer Science, Statistics, Mathematics, or related field.
- Proven experience in Data Science, Machine Learning, and Data Analytics.
- Proficiency in Python (NumPy, Pandas, Scikit-learn, etc.) and SQL.
- Strong understanding of statistical concepts and their application in data analysis.
- Experience with visualization tools like Tableau/Power BI is a plus.
- Excellent communication and presentation skills.
- Ability to simplify complex concepts for learners with varied backgrounds.
- Prior teaching/training experience preferred.

WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, Motive Partners and a who’s who of the financial service industry.
We are creating engaging wealth experiences to better financial lives through AI and investment intelligence-powered personalization. We are working to change the world of wealth in ways that personalization has changed the world of movies, music and more but with the added responsibility of delivering better wealth outcomes.
We use design and behavioral thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES: Go with your GUT
● Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. With self-awareness and integrity we strive to be the best we can possibly be. No excuses.
● Understanding through Listening and Speaking the Truth. We value transparency. We communicate with radical candor, authenticity and precision to create a shared understanding. We challenge, but once a decision is made, commit fully.
● I Win for Teamwin. We believe in staying within our genius zones to succeed and we take full ownership of our work. We inspire each other with our energy and attitude. We fly in formation to win together.
Key responsibilities:
● Prototype Development: Build and validate prototypes by designing datasets and performing fine-tuning to demonstrate model feasibility and effectiveness.
● Model Fine-Tuning: Lead the fine-tuning process for large language models, leveraging open-source models (e.g., LLaMA 2/3) with techniques like LoRA, as well as proprietary models like GPT-3.5/4.0, to address specific business needs.
● Retrieval-Augmented Generation (RAG): Design and optimize advanced RAG systems, implementing efficient methods for chunking, indexing, and managing complex document formats, such as PDFs (see the sketch after this list).
● Agentic Workflows: Develop and apply agentic workflows to create advanced conversation agents tailored for various use cases.
● Prompt Engineering: Craft effective prompts to enhance the performance of large language models, such as GPT-4 and Claude 3, ensuring solutions align with business objectives.
● Inference Frameworks: Utilize inference frameworks like vLLM and advanced pipelines to improve scalability and streamline the deployment of machine learning models.
● ML Model Deployment: Lead the deployment and productionization of machine learning models, collaborating closely with software engineers to ensure seamless integration into products.
● Conversational Experiences: Work with cross-functional teams, including design and product, to craft engaging and personalized conversational AI experiences for end-users.
● Independent Execution: Demonstrate the ability to independently manage projects, deliver features, and achieve outcomes in a dynamic, fast-paced startup environment.
● Data Personalization: Leverage and analyze existing datasets to enhance model performance and create tailored user experiences, fine-tuning models to align with unique business cases.
● Startup Mindset: Take initiative in setting up workflows, tools, and systems from scratch, showcasing flexibility and adaptability in a growing organization.
● Leadership and Collaboration: Act as a leader and independent contributor, effectively collaborating across diverse teams while managing evolving project priorities.
● Research and Innovation: Drive innovative research aligned with business goals, exploring advancements in Natural Language Understanding (NLU) and conversational AI applications.
● Knowledge Sharing: Publish and present groundbreaking research in top-tier journals and conferences, contributing to the broader scientific community and enhancing organizational credibility.
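As a minimal sketch of the chunk-index-retrieve loop behind the RAG responsibility above (assuming FAISS and a small sentence-transformers embedder; the model name, chunk sizes, corpus, and query are illustrative assumptions):

```python
# Minimal RAG sketch: chunk documents, embed them, index with FAISS, retrieve context,
# and assemble a prompt for the downstream LLM.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text, size=500, overlap=50):
    # Fixed-size character chunking; production systems often chunk by tokens or layout.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

docs = ["...extracted PDF text...", "...another document..."]    # placeholder corpus
chunks = [c for d in docs for c in chunk(d)]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
vecs = embedder.encode(chunks, normalize_embeddings=True)

index = faiss.IndexFlatIP(vecs.shape[1])          # inner product = cosine on unit vectors
index.add(np.asarray(vecs, dtype="float32"))

query = "What is the early-withdrawal penalty?"
q = embedder.encode([query], normalize_embeddings=True)
_, ids = index.search(np.asarray(q, dtype="float32"), 3)

context = "\n\n".join(chunks[i] for i in ids[0])
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
# `prompt` would then be sent to the chosen LLM (hosted or via an API).
```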
Requirements:
● 8+ years of experience
● Experience working with LLMs and Generative AI
● Experience building conversational bots
● 2+ years of experience fine-tuning models
● Experience using RAG-based approaches
● Understanding of financial concepts and investing would be a big plus but not required


Job Requirement :
- 3-5 Years of experience in Data Science
- Strong expertise in statistical modeling, machine learning, deep learning, data warehousing, ETL, and reporting tools.
- Bachelor's/Master's degree in Data Science, Statistics, Computer Science, or Business Intelligence.
- Experience with relevant programming languages and tools such as Python, R, SQL, Spark, Tableau, Power BI.
- Experience with machine learning frameworks like TensorFlow, PyTorch, or Scikit-learn
- Ability to think strategically and translate data insights into actionable business recommendations.
- Excellent problem-solving and analytical skills
- Adaptability and openness towards changing environment and nature of work
- This is a startup environment with evolving systems and procedures; the ideal candidate will be comfortable working in a fast-paced, dynamic environment and will have a strong desire to make a significant impact on the business.
Job Roles & Responsibilities:
- Conduct in-depth analysis of large-scale datasets to uncover insights and trends.
- Build and deploy predictive and prescriptive machine learning models for various applications.
- Design and execute A/B tests to evaluate the effectiveness of different strategies.
- Collaborate with product managers, engineers, and other stakeholders to drive data-driven decision-making.
- Stay up-to-date with the latest advancements in data science and machine learning.

LLM Engineer
3-6 Years
Bengaluru/Mumbai/Remote
WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane, Franklin Templeton, Motive Partners and a who’s who of the financial service industry. We are creating engaging wealth experiences to better financial lives through AI and investment intelligence-powered personalization. We are working to change the world of wealth in ways that personalization has changed the world of movies, music and more but with the added responsibility of delivering better wealth outcomes.
We use design and behavioral thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes. In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES: Go with your GUT
● Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. With self-awareness and integrity we strive to be the best we can possibly be. No excuses.
● Understanding through Listening and Speaking the Truth. We value transparency. We communicate with radical candor, authenticity and precision to create a shared understanding. We challenge, but once a decision is made, commit fully.
● I Win for Teamwin. We believe in staying within our genius zones to succeed and we take full ownership of our work. We inspire each other with our energy and attitude. We fly in formation to win together.
Key responsibilities:
- Work closely with the design and product teams to help craft conversational experiences for our users
- Ability to work independently to deliver features that the team design
- You need to own outcomes without someone watching over you
- Understand the data we have and identify how we could leverage it to create more personalised experiences for our users
- Fine tune models with data specific to our use case
- Use different RAG approaches to augment the data that the LLM already has
- Ability to be a leader and independent contributor (we’re a startup, so be ready to do whatever it takes)
- Set up new workflows, systems, and tools from the ground up (you will have support, but we’re building things from scratch)
Requirements:
- 3+ years of experience
- Experience working with LLMs and Generative AI
- Experience building conversational bots
- Experience fine-tuning models
- Experience using RAG-based approaches
- Understanding of financial concepts and investing would be a big plus but is not required

Request for Proposal (RFP): AI Receptionist for Helvetica Incoming Broker Calls
Company Overview
Helvetica Group is a direct commercial real estate lender and investment bank, specializing in providing alternative financing solutions for complex real estate transactions. We focus on fast, innovative, and common-sense underwriting to serve brokers, investors, and borrowers nationwide.
Website: helveticagroup.com
Project Overview
We are seeking an experienced AI Engineer to design and implement an AI Receptionist using Retell AI and Make (formerly Integromat). This AI Receptionist will handle incoming broker calls, collect loan request details, integrate with Salesforce, and automate follow-up communications.
Scope of Work
The solution should:
- Receive Incoming Calls
- Integrate with our phone system and Retell AI to answer broker calls professionally.
- Detect caller intent and initiate a structured loan intake conversation.
- Prompt Loan Request Details
- Use natural language conversation to gather key loan scenario data:
- Borrower and broker information
- Property type, location, value
- Loan amount requested, purpose, and timeline
- Any special circumstances (e.g., foreclosure bailout, BK buyout, etc.)
- Data Handling & Automation
- Send collected call details to Salesforce as a new Lead (see the integration sketch after this section).
- Email a summary of the loan request to our broker group.
- Send an automated confirmation email to the caller.
- Send an SMS confirmation text to the caller with a thank-you and next steps.
- Integration Requirements
- Use Retell AI for call handling and voice interactions.
- Use Make for workflow automation (Salesforce, email, SMS).
- Ensure all data flows securely and complies with applicable regulations.
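As a minimal sketch of the Salesforce step in the scope above, the snippet below creates a Lead through the standard Salesforce REST API. In the proposed solution this logic would typically live inside a Make (Integromat) scenario; the instance URL, token handling, and field mapping beyond the required LastName/Company are assumptions for illustration.

```python
# Minimal sketch: push collected call details into Salesforce as a new Lead
# via the standard REST API. Field mapping beyond LastName/Company is assumed.
import requests

INSTANCE_URL = "https://yourcompany.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "..."                                     # obtained via OAuth

def create_lead(call: dict) -> str:
    payload = {
        "LastName": call["broker_name"],     # Lead requires LastName and Company
        "Company": call["brokerage"],
        "Phone": call["phone"],
        "Description": (
            f"Property: {call['property_type']} in {call['location']}, "
            f"value {call['property_value']}. Loan requested: {call['loan_amount']} "
            f"for {call['purpose']}, timeline {call['timeline']}."
        ),
        "LeadSource": "AI Receptionist",
    }
    resp = requests.post(
        f"{INSTANCE_URL}/services/data/v59.0/sobjects/Lead",
        json=payload,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["id"]                 # Salesforce record id of the new Lead
```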
Deliverables
- Fully functional AI Receptionist integrated with our systems
- Retell AI call flow design and scripts tailored to Helvetica's lending guidelines
- Make (Integromat) scenarios for:
- Salesforce lead creation
- Email notifications (internal and external)
- SMS confirmation to caller
- Documentation for setup, maintenance, and future scaling
- Optional: Dashboard for call analytics and performance monitoring
Skills Required
- Experience with Retell AI (or similar conversational AI voice platforms)
- Expertise in Make (Integromat) or similar workflow automation tools
- Strong knowledge of Salesforce API integrations
- Experience with Twilio or other SMS/email APIs is a plus
- Understanding of secure data handling and compliance (preferred)
Proposal Requirements
Please include in your proposal:
- Relevant experience with Retell AI, Make, and Salesforce integrations
- Portfolio of similar AI automation projects (voice + CRM)
- Estimated timeline for project completion
- Proposed budget and pricing structure
- Any additional recommendations to enhance this solution
Timeline
- Proposal Submission Deadline: [Insert Date]
- Project Kickoff: [Insert Date]
- Expected Completion: Within 4-6 weeks from kickoff
Budget
We are open to proposals with fixed price or hourly rates. Please provide a clear breakdown of your pricing.
How to Apply
Submit your proposal directly through Upwork, including all requested details.


Job Title: Software Engineer Consultant/Expert 34192
Location: Chennai
Work Type: Onsite
Notice Period: Immediate joiners only, or candidates serving notice of up to 30 days.
Position Description:
- Candidate with strong Python experience.
- Full-stack development in GCP with end-to-end deployment and MLOps; a software engineer with hands-on experience in front end, back end, and MLOps.
- This is a Tech Anchor role.
Experience Required:
- 7+ years

Job description
We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role — you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure.
This role involves building, training, deploying, and maintaining ML models at scale, integrating them with business applications. Basic model prototyping won't qualify — we’re seeking hands-on expertise in building scalable machine learning pipelines.
Key Responsibilities
Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation.
Implement GenAI use cases using LLMs
Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets.
Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems.
Work on impactful AI/ML use cases in Search & Personalization, SEO Optimization, Marketing Analytics, Supply Chain Forecasting, and Customer Experience.
Implement real-time model deployment and monitoring using tools like Kubeflow, Vertex AI, Airflow, PySpark, etc. (a minimal serving sketch follows below).
Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production-readiness.
Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning.
Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning.
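As a minimal sketch of production-style serving (as opposed to notebook-only work), the snippet below wraps a trained model in a FastAPI endpoint; the model artifact, service name, and feature schema are placeholders, not details from this posting.

```python
# Minimal serving sketch: a trained model behind a FastAPI endpoint, ready to be
# containerized with Docker. Artifact path, service name, and schema are placeholders.
from typing import List

import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="forecast-service")        # hypothetical service name
model = joblib.load("model.joblib")            # artifact produced by the training pipeline

class Features(BaseModel):
    values: List[float]                        # one flat feature vector

@app.post("/predict")
def predict(features: Features) -> dict:
    x = np.asarray(features.values, dtype=float).reshape(1, -1)
    return {"prediction": float(model.predict(x)[0])}

# Run locally:  uvicorn serve:app --host 0.0.0.0 --port 8080
# then build a Docker image around it for Vertex AI / Cloud Run style deployment.
```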
Technical Skills Required
Core ML & AI Skills (Must-Have):
Strong hands-on ML engineering (70% of the role) — supervised/unsupervised learning, clustering, regression, optimization.
Experience with real-world model deployment and scaling, not just notebooks or prototypes.
Good understanding of ML Ops, model lifecycle, and pipeline orchestration.
Strong with Python 3, Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc.
SQL proficiency and experience querying large datasets.
Deep understanding of linear algebra, probability/statistics, Big-O, and scientific experimentation.
Cloud experience in GCP (preferred), AWS, or Azure.
Cloud & Big Data Stack
Hands-on experience with:
GCP tools – Vertex AI, Kubeflow, BigQuery, GCS
Or equivalent AWS/Azure ML stacks
Familiar with Airflow, PySpark, or other pipeline orchestration tools.
Experience reading/writing data from/to cloud services.
Qualifications
Bachelor's/Master’s/Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or related quantitative field.
4+ years of experience in data analytics and machine learning roles.
2+ years of experience in Python or similar programming languages (Java, Scala, Rust).
Must have experience deploying and scaling ML models in production.
Nice to Have
Experience with LLM fine-tuning, Graph Algorithms, or custom deep learning architectures.
Background in taking academic research to production applications.
Building APIs and monitoring production ML models.
Familiarity with advanced math – Graph Theory, PDEs, Optimization Theory.
Communication & Collaboration
Strong ability to explain complex models and insights to both technical and non-technical stakeholders.
Ask the right questions, clarify objectives, and align analytics with business goals.
Comfortable working cross-functionally in agile and collaborative teams.
Important Note:
This is a Data Science-heavy role — 70% of responsibilities involve building, training, deploying, and scaling AI/ML models.
Cloud experience is mandatory (GCP preferred, AWS/Azure acceptable).
Only candidates with hands-on experience in deploying ML models into production (not just notebooks) will be considered.


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference and scoring
- Oversight on team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct a productive peer review and proper code structure in Github
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


Research Intern
Position Overview
We are seeking a motivated Research Intern to join our AI research team, focusing on Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) technologies. The intern will play a crucial role in evaluating our proprietary models against industry benchmarks, analyzing competitive voice agent platforms, and contributing to cutting-edge research in speech AI technologies.
Key Responsibilities
Model Evaluation & Benchmarking
- Conduct comprehensive evaluation of our TTS and ASR models against existing state-of-the-art models
- Design and implement evaluation metrics and frameworks for speech quality assessment (see the WER sketch after this list)
- Perform comparative analysis of model performance across different datasets and use cases
- Generate detailed reports on model strengths, weaknesses, and improvement opportunities
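As a minimal sketch of one standard ASR benchmarking metric, the snippet below computes corpus-level word error rate (WER) with the jiwer library; the reference/hypothesis pairs are made up for illustration.

```python
# Minimal ASR benchmarking sketch: corpus-level word error rate (WER) with jiwer.
import jiwer

references = [
    "thanks for calling how can i help you today",
    "the meeting has been moved to thursday afternoon",
]
hypotheses = [
    "thanks for calling how can i help you to day",
    "the meeting has been moved to thursday afternoon",
]

print(f"corpus WER: {jiwer.wer(references, hypotheses):.2%}")
```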
Competitive Analysis
- Evaluate and compare our voice agent platform with existing solutions (Vapi, Bland AI, and other competitors)
- Analyze feature sets, performance metrics, and user experience across different voice agent platforms
- Conduct technical deep-dives into competitive architectures and methodologies
- Provide strategic recommendations based on competitive landscape analysis
Research & Innovation
- Monitor and analyze emerging trends in ASR, TTS, and voice AI technologies
- Research novel approaches to improve ASR and TTS model performance
- Investigate new architectures, training techniques, and optimization methods
- Stay current with academic literature and industry developments in speech AI
Model Development & Training
- Assist in training TTS and ASR models on various datasets
- Implement and experiment with different model architectures and configurations
- Perform model fine-tuning for specific use cases and domains
- Optimize models for different deployment scenarios (edge, cloud, real-time)
- Conduct data preprocessing and augmentation for training datasets
Documentation & Reporting
- Maintain detailed documentation of experiments, methodologies, and results
- Prepare technical reports and presentations for internal stakeholders
- Contribute to research publications and technical blog posts
- Create visualization and analysis tools for model performance tracking
Required Qualifications
Education & Experience
- Currently pursuing or recently completed Bachelor's/Master's degree in Computer Science, Electrical Engineering, Machine Learning, or related field
- Strong academic background in machine learning, deep learning, and signal processing
- Previous experience with speech processing, NLP, or audio ML projects (academic or professional)
Technical Skills
- Programming Languages: Proficiency in Python; experience with PyTorch, TensorFlow
- Speech AI Frameworks: Experience with libraries like librosa, torchaudio, speechbrain, or similar
- Machine Learning: Strong understanding of deep learning architectures, training procedures, and evaluation methods
- Data Processing: Experience with audio data preprocessing, feature extraction, and dataset management
- Tools & Platforms: Familiarity with Colab or Jupyter notebooks, Git, Docker, and cloud platforms (AWS/GCP/Azure)
Preferred Qualifications
- Experience with transformer architectures, attention mechanisms, and sequence-to-sequence models
- Knowledge of speech synthesis techniques (WaveNet, Tacotron, FastSpeech, etc.)
- Understanding of ASR architectures (Wav2Vec, Whisper, Conformer, etc.)
- Experience with model optimization techniques (quantization, pruning, distillation)
- Familiarity with MLOps tools and model deployment pipelines
- Previous work with voice AI applications or conversational AI systems
Skills & Competencies
Technical Competencies
- Strong analytical and problem-solving abilities
- Ability to design and conduct rigorous experiments
- Experience with statistical analysis and performance metrics
- Understanding of audio signal processing fundamentals
- Knowledge of distributed training and large-scale model development
Soft Skills
- Excellent written and verbal communication skills
- Ability to work independently and manage multiple projects
- Strong attention to detail and commitment to reproducible research
- Collaborative mindset and ability to work in cross-functional teams
- Curiosity and passion for staying current with AI research trends
What You'll Gain
Learning Opportunities
- Hands-on experience with state-of-the-art speech AI technologies
- Exposure to full model development lifecycle from research to deployment
- Mentorship from experienced AI researchers and engineers
- Opportunity to contribute to cutting-edge research projects
Professional Development
- Experience with industry-standard tools and methodologies
- Opportunity to present research findings to technical and business stakeholders
- Potential for research publication and conference presentations
- Networking opportunities within the AI research community

We at CLOUDSUFI are seeking a skilled and motivated AI/ML Engineer with a background in Natural Language Processing (NLP), speech technologies, and generative AI. The ideal candidate will have hands-on experience building AI projects from conception to deployment, including fine-tuning large language models (LLMs), developing conversational agents, and implementing machine learning pipelines. You will play a key role in building and enhancing our AI-powered products and services.
Key Responsibilities
• Design, develop, and deploy advanced conversational AI systems, including customer onboarding agents and support chatbots for platforms like WhatsApp.
• Process, transcribe, and diarize audio conversations to create high-quality datasets for fine-tuning large language models (LLMs); a minimal transcription sketch follows this list.
• Develop and maintain robust, scalable infrastructure for AI model serving, utilizing technologies like FastAPI, Docker, and cloud platforms (e.g., Google Cloud Platform).
• Integrate and leverage knowledge graphs and contextual information systems to create more personalized, empathetic, and goal-oriented dialogues.
• Engineer and implement retrieval-augmented generation (RAG) systems to enable natural language querying of internal company documents, optimizing for efficiency and informativeness.
• Fine-tune and deploy generative models like Stable Diffusion for custom asset creation, with a focus on improving precision and reducing generative artifacts (FID score).
• Collaborate with cross-functional teams, including product managers and designers, to build user-friendly interfaces and tools that enhance productivity and user experience.
• Contribute to the research and publication of novel AI systems and models.
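As a minimal sketch of the transcription step in the dataset-building responsibility above, the snippet below uses openai-whisper to produce timestamped segments; speaker diarization (e.g., with pyannote.audio) would be layered on top and is omitted here, and the file names are placeholders.

```python
# Minimal transcription sketch for dataset preparation with openai-whisper.
# Audio path and output file are placeholders; diarization is out of scope here.
import json
import whisper

model = whisper.load_model("base")
result = model.transcribe("call_0001.wav")          # hypothetical recording

segments = [
    {"start": round(s["start"], 2), "end": round(s["end"], 2), "text": s["text"].strip()}
    for s in result["segments"]
]

with open("call_0001.json", "w") as f:
    json.dump({"language": result["language"], "segments": segments}, f, indent=2)
```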
Qualifications and Skills
• Education: Bachelor of Engineering (B.E.) in Computer Science or a related field.
• Experience: 3+ years of professional experience as a Machine Learning Engineer or in a similar role.
• Programming Languages: Expertise in Python and a strong command of SQL, and JavaScript.
• Machine Learning Libraries: Hands-on experience with PyTorch, Scikit-learn, Hugging Face Transformers, Diffusers, Librosa, LangGraph, and the OpenAI API.
• Software and Tools: Proficiency with Docker and various databases including PostgreSQL, MongoDB, Redis, and Elasticsearch.
• Core Competencies:
o Experience developing end-to-end AI pipelines, from data processing and model training to API deployment and integration.
o Familiarity with MLOps principles and tools for building and maintaining production-level machine learning systems.
o A portfolio of projects, publications, or open-source contributions in the fields of NLP, Computer Vision, or Speech Analysis is a plus.
• Excellent problem-solving skills and the ability to think strategically to deliver optimized and efficient solutions.


About the Role
We are seeking an innovative Data Scientist specializing in Natural Language Processing (NLP) to join our technology team in Bangalore. The ideal candidate will harness the power of language models and document extraction techniques to transform legal information into accessible, actionable insights for our clients.
Responsibilities
- Develop and implement NLP solutions to automate legal document analysis and extraction (see the extraction sketch after this list)
- Create and optimize prompt engineering strategies for large language models
- Design search functionality leveraging semantic understanding of legal documents
- Build document extraction pipelines to process unstructured legal text data
- Develop data visualizations using PowerBI and Tableau to communicate insights
- Collaborate with product and legal teams to enhance our tech-enabled services
- Continuously improve model performance and user experience
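As a minimal sketch of prompt-driven extraction of the kind described above, the snippet below asks an LLM via the OpenAI API to return structured fields from a clause; the model name, field list, and clause text are illustrative assumptions, not the company's actual prompts.

```python
# Minimal sketch: prompt an LLM to extract structured fields from a legal clause.
# Model name, fields, and clause text are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()        # reads OPENAI_API_KEY from the environment

clause = "This Agreement shall commence on 1 April 2024 and remain in force for 24 months."

prompt = (
    "Extract the following fields from the clause and reply with JSON only: "
    "start_date, term_months.\n\nClause:\n" + clause
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},
    temperature=0,
)
print(json.loads(resp.choices[0].message.content))
```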
Requirements
- Bachelor's degree in relevant field
- 1-5 years of professional experience in data science, with focus on NLP applications
- Demonstrated experience working with LLM APIs (e.g., OpenAI, Anthropic)
- Proficiency in prompt engineering and optimization techniques
- Experience with document extraction and information retrieval systems
- Strong skills in data visualization tools, particularly PowerBI and Tableau
- Excellent programming skills in Python and familiarity with NLP libraries
- Strong understanding of legal terminology and document structures (preferred)
- Excellent communication skills in English
What We Offer
- Competitive salary and benefits package
- Opportunity to work at India's largest legal tech company
- Professional growth in the fast-evolving legal technology sector
- Collaborative work environment with industry experts
- Modern office located in Bangalore
- Flexible work arrangements
Qualified candidates are encouraged to apply with a resume highlighting relevant experience with NLP, prompt engineering, and data visualization tools.
Location: Bangalore, India

Senior Machine Learning Engineer
📍 Location: Remote
💼 Type: Full-Time
💰 Salary: $800 - $1,000 USD / month
Apply at: https://forms.gle/Fwti67UeTEkx2Kkn6
About Us
At Momenta, we're committed to creating a safer digital world by protecting individuals and businesses from voice-based fraud and scams. Through innovative AI technology and community collaboration, we're building a future where communication is secure and trustworthy.
Position Overview
We’re hiring a Senior Machine Learning Engineer with deep expertise in audio signal processing and neural network-based detection. The selected engineer will be responsible for delivering a production-grade, real-time deepfake detection pipeline as part of a time-sensitive, high-stakes 3-month pilot deployment.
Key Responsibilities
📌 Design and Deliver Core Detection Pipeline
Lead the development of a robust, modular deepfake detection pipeline capable of ingesting, processing, and classifying real-time audio streams with high accuracy and low latency. Architect the system to operate under telecom-grade conditions with configurable interfaces and scalable deployment strategies.
📌 Model Strategy, Development, and Optimization
Own the experimentation and refinement of state-of-the-art deep learning models for voice fraud detection. Evaluate multiple model families, benchmark performance across datasets, and strategically select or ensemble models that balance precision, robustness, and compute efficiency for real-world deployment.
📌 Latency-Conscious Production Readiness
Ensure the entire detection stack meets strict performance targets, including sub-20ms inference latency. Apply industry best practices in model compression, preprocessing optimization, and system-level integration to support high-throughput inference on both CPU and GPU environments.
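As a minimal sketch of how one might sanity-check an inference budget like the sub-20 ms target above, the snippet below times log-mel feature extraction plus a tiny stand-in CNN with torchaudio and PyTorch; the model is a placeholder, not Momenta's detection stack.

```python
# Minimal latency sketch: log-mel features plus a small placeholder CNN, timed
# the way one might sanity-check an inference budget.
import time
import torch
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

class TinyDetector(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(16, 2),                  # real vs. spoofed
        )
    def forward(self, x):
        return self.net(x)

model = TinyDetector().eval()
wave = torch.randn(1, 16000)                         # 1 s of 16 kHz audio (stand-in)

with torch.inference_mode():
    feats = to_db(mel(wave)).unsqueeze(0)            # (batch, channel, mels, frames)
    t0 = time.perf_counter()
    for _ in range(100):
        model(feats)
    print(f"avg forward pass: {(time.perf_counter() - t0) / 100 * 1e3:.2f} ms")
```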
📌 Evaluation Framework and Continuous Testing
Design and implement a comprehensive evaluation suite to validate model accuracy, false positive rates, and environmental robustness. Conduct rigorous testing across domains, including cross-corpus validation, telephony channel effects, adversarial scenarios, and environmental noise conditions.
📌 Deployment Engineering and API Integration
Deliver a fully containerized, production-ready inference service with REST/gRPC endpoints. Build CI/CD pipelines, integration tests, and monitoring hooks to ensure system integrity, traceability, and ease of deployment across environments.
Required Skills & Qualifications
🎯 Technical Skills:
ML Frameworks: PyTorch, TensorFlow, ONNX, OpenVINO, TorchScript
Audio Libraries: Librosa, Torchaudio, FFmpeg
Model Development: CNNs, Transformers, Wav2Vec/WavLM, AASIST, RawNet
Signal Processing: VAD, noise reduction, band-pass filtering, codec simulation
Optimization: Quantization, pruning, GPU acceleration
DevOps: Git, Docker, CI/CD, FastAPI or Flask, REST/gRPC
🎯 Preferred Experience:
Prior work on audio deepfake detection or telephony speech processing
Experience with real-time ML model deployment
Understanding of adversarial robustness and domain adaptation
Familiarity with call center environments or telecom-grade constraints
Compensation & Career Path:
Competitive pay based on experience and capability. ($800 - $1,000 USD / month)
Full-time with potential for conversion to a core team role.
Opportunity to lead future research and production deployments as part of our AI division.
Why Join Momenta?
Solve a global security crisis with cutting-edge AI.
Own a deliverable that will ship into production at scale.
Join a fast-growing team with seasoned founders and engineers.
Fully remote, high-autonomy environment focused on deep work.
🚀 Apply now and help shape the future of voice security


Job Title : AI Architect
Location : Pune (On-site | 3 Days WFO)
Experience : 6+ Years
Shift : US or flexible shifts
Job Summary :
We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.
The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).
Key Responsibilities :
- Define AI strategy and identify business use cases
- Design scalable AI/ML architectures
- Collaborate on data preparation, model development & deployment
- Ensure data quality, governance, and ethical AI practices
- Integrate AI into existing systems and monitor performance
Must-Have Skills :
- Machine Learning, Deep Learning, NLP, Computer Vision
- Data Engineering, Model Deployment (CI/CD, MLOps)
- Python Programming, Cloud (AWS/Azure/GCP)
- Distributed Systems, Data Governance
- Strong communication & stakeholder collaboration
Good to Have :
- AI certifications (Azure/GCP/AWS)
- Experience in big data and analytics

3+ years of experience in cybersecurity, with a focus on application and cloud security.
· Proficiency in security tools such as Burp Suite, Metasploit, Nessus, OWASP ZAP, and SonarQube.
· Familiarity with data privacy regulations (GDPR, CCPA) and best practices.
· Basic knowledge of AI/ML security frameworks and tools.



Remote Job Opportunity
Job Title: Data Scientist
Contract Duration: 6 months+
Location: offshore India
Work Time: 3 pm to 12 am
Must have 4+ Years of relevant experience.
Job Summary:
We are seeking an AI Data Scientist with a strong foundation in machine learning, deep learning, and statistical modeling to design, develop, and deploy cutting-edge AI solutions.
The ideal candidate will have expertise in building and optimizing AI models, with a deep understanding of both statistical theory and modern AI techniques. You will work on high-impact projects, from prototyping to production, collaborating with engineers, researchers, and business stakeholders to solve complex problems using AI.
Key Responsibilities:
Research, design, and implement machine learning and deep learning models for predictive and generative AI applications.
Apply advanced statistical methods to improve model robustness and interpretability.
Optimize model performance through hyperparameter tuning, feature engineering, and ensemble techniques.
Perform large-scale data analysis to identify patterns, biases, and opportunities for AI-driven automation.
Work closely with ML engineers to validate, train, and deploy the models.
Stay updated with the latest research and developments in AI and machine learning to ensure innovative and cutting-edge solutions.
Qualifications & Skills:
Education: PhD or Master's degree in Statistics, Mathematics, Computer Science, or a related field.
Experience:
4+ years of experience in machine learning and deep learning, with expertise in algorithm development and optimization.
Proficiency in SQL, Python, and visualization tools (Power BI).
Experience in developing mathematical models for business applications, preferably in finance, trading, image-based AI, biomedical modeling, or recommender systems.
Strong communication skills to interact effectively with both technical and non-technical stakeholders.
Excellent problem-solving skills with the ability to work independently and as part of a team.


Software Development Intern
About This Role
We're building next-generation browser agents that combine accuracy, security, and advanced task learning capabilities. We're looking for self-driven, independent interns who thrive on exploration and problem-solving to help us push the boundaries of what's possible with intelligent web automation.
This isn't a traditional learning internship—we want builders who have already proven they can ship projects and tackle challenges autonomously. You'll work across our full tech stack, from backend APIs to frontend interfaces, with access to cutting-edge AI-powered development tools while contributing to the future of browser automation.
What You'll Do
- Develop intelligent browser agents with advanced task learning and execution capabilities
- Build secure automation systems that can navigate complex web environments accurately
- Create robust AI-powered workflows using LangChain and modern ML frameworks
- Design and implement security measures for safe browser automation
- Create comprehensive test environments for agent validation and performance testing
- Debug and fix application bugs across the full stack to ensure reliable agent operation
- Solve complex problems independently using AI code assistants (Cursor, v0.dev, etc.)
- Explore and experiment with new technologies in AI agent development
- Own projects end-to-end from conception to deployment
- Work across the full stack as needed—no rigid role boundaries
Our Tech Stack
Backend:
- Python with FastAPI
- LangChain for AI/ML workflows
- Google Cloud Platform (GCP) services
- Supabase for database and authentication
Frontend:
- JavaScript/TypeScript
- React for web interfaces
- Electron for desktop applications
Development Tools:
- Cursor IDE with AI assistance
- v0.dev for rapid prototyping
- Modern DevOps and CI/CD pipelines
Flexibility:
- Choose your own tech stack when needed - We're open to new tools and frameworks that solve problems better
- Experiment with cutting-edge technologies - If you find a better solution, we're all ears
What We're Looking For
Required Experience
- Proven project portfolio - Show us what you've built, not what you've learned
- Full-stack development experience with Python and JavaScript
- Independent problem-solving skills - You research, experiment, and find solutions
- Experience with modern frameworks (FastAPI, React, or similar)
- Cloud platform familiarity (GCP, AWS, or Azure)
Ideal Candidates Have
- Built and deployed real applications (personal projects, hackathons, open source)
- Experience with browser automation (Selenium, Playwright, Puppeteer, or similar)
- AI/ML model integration experience (LangChain, OpenAI APIs, agent frameworks)
- Security-focused development with an understanding of web security principles
- Task learning and reinforcement learning familiarity
- Testing and debugging experience with automated systems and complex applications
- Test environment setup and CI/CD pipeline experience
- Database design and optimization experience
- Desktop application development (Electron or similar)
- DevOps and infrastructure automation knowledge
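As a rough idea of the browser-automation work involved, here is a minimal sketch assuming the Playwright Python package (one of the tools named above); the URL and function name are placeholders.
```python
# Hedged sketch: open a page headlessly and read its title with Playwright.
# Assumes: pip install playwright && playwright install
from playwright.sync_api import sync_playwright

def fetch_page_title(url: str) -> str:
    """Open a page in headless Chromium and return its title."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)
        title = page.title()
        browser.close()
        return title

if __name__ == "__main__":
    print(fetch_page_title("https://example.com"))
```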
What We Offer
- Work on cutting-edge browser agent technology - Shape the future of intelligent web automation
- Cutting-edge AI development tools - Full access to Cursor, v0.dev, and other AI assistants
- Technology freedom - Choose the best tools for the job, not just what's already in the stack
- Real project ownership - Your work will directly impact our next-gen browser agents
- Flexible exploration time - Dedicate time to experiment with new AI/ML approaches
- Mentorship from experienced developers - When you need it, not constant hand-holding
- Remote-first environment with flexible working hours
- Competitive internship compensation
What Makes You Stand Out
- Self-starter mentality - You don't wait for detailed instructions
- Curiosity-driven exploration - You love diving into new technologies
- Problem-solving resilience - You debug, research, and iterate until it works
- Quality-focused delivery - You ship polished, well-tested code
- Open source contributions or active GitHub presence
- Technology adaptability - You can evaluate and adopt new tools when they solve problems better
Application Requirements
- Portfolio/GitHub - Show us your best projects with live demos
- Brief cover letter - Tell us about a challenging problem you solved independently
- Technical challenge - We'll provide a small project to assess your problem-solving approach
Not a Good Fit If You
- Need constant guidance and structured learning paths
- Prefer working on assigned tasks without creative input
- Haven't built substantial projects outside of coursework
- Are looking primarily for resume building rather than real contribution
Ready to build something amazing? Send us your portfolio and let's see what you can create with unlimited access to AI development tools and real-world challenges.
We're an equal opportunity employer committed to diversity and inclusion.



We are seeking a detail-oriented and analytical Data Analyst to collect, process, and analyze data to help drive informed business decisions. The ideal candidate will have strong technical skills, business acumen, and the ability to communicate insights effectively.

Job Title: Generative AI Engineer
Experience: 6–9 years
Job description:
We are seeking a Generative AI Engineer with 6–9 years of experience who can independently explore, prototype, and present the art of the possible using LLMs, agentic frameworks, and emerging Gen AI techniques. This role combines deep technical hands-on development with non-technical influence and presentation skills.
You will contribute to key Gen AI innovation initiatives, help define new protocols (like MCP and A2A), and deliver fully functional prototypes that push the boundaries of enterprise AI — not just in Jupyter notebooks, but as real applications ready for production exploration.
Key Responsibilities:
· LLM Applications & Agentic Frameworks
· Design and implement end-to-end LLM applications using OpenAI, Claude, Mistral, Gemini, or LLaMA on AWS, Databricks, Azure, or GCP.
· Build intelligent, autonomous agents using LangGraph, AutoGen, LlamaIndex, Crew.ai, or custom frameworks.
· Develop multi-model, multi-agent, Retrieval-Augmented Generation (RAG) applications with secure context embedding and tracing with reports.
· Rapidly explore and showcase the art of the possible through functional, demonstrable POCs.
· Advanced AI Experimentation
· Fine-tune LLMs and Small Language Models (SLMs) for domain-specific use.
· Create and leverage synthetic datasets to simulate edge cases and scale training.
· Evaluate agents using custom agent evaluation frameworks (success rates, latency, reliability).
· Evaluate emerging agent communication standards — A2A (Agent-to-Agent) and MCP (Model Context Protocol).
· Business Alignment & Cross-Team Collaboration
· Translate ambiguous requirements into structured, AI-enabled solutions.
· Clearly communicate and present ideas, outcomes, and system behaviors to technical and non-technical stakeholders.
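To make the RAG responsibility above concrete, here is a minimal retrieval-augmented generation sketch, assuming the openai and numpy packages and an OPENAI_API_KEY in the environment; the model names, documents, and question are illustrative, not this team's actual stack.
```python
# Hedged sketch: embed a tiny corpus, retrieve the closest document, and ground the answer on it.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

docs = ["Invoices are payable within 30 days.", "Refunds require a signed approval form."]
doc_vecs = embed(docs)

def answer(question: str) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity to pick the most relevant document as context.
    sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = docs[int(np.argmax(sims))]
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"}],
    )
    return chat.choices[0].message.content

print(answer("When are invoices due?"))
```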
Good-To-Have:
· Microsoft Copilot Studio
· DevRev
· Codium
· Cursor
· Atlassian AI
· Databricks Mosaic AI
Qualifications:
· 6–9 years of experience in software development or AI/ML engineering.
· At least 3 years working with LLMs, GenAI applications, or agentic frameworks.
· Proficient in AI/ML, MLOps concepts, Python, embeddings, prompt engineering, and model orchestration.
· Proven track record of developing functional AI prototypes beyond notebooks.
· Strong presentation and storytelling skills to clearly convey GenAI concepts and value.

Role Overview:
Zolvit is looking for a highly skilled and self-driven Lead Machine Learning Engineer / Lead Data Scientist to lead the design and development of scalable, production-grade ML systems. This role is ideal for someone who thrives on solving complex problems using data, is deeply passionate about machine learning, and has a strong understanding of both classical techniques and modern AI systems like Large Language Models (LLMs).
You will work closely with engineering, product, and business teams to identify impactful ML use cases, build data pipelines, design training workflows, and ensure the deployment of robust, high-performance models at scale.
Key Responsibilities:
● Design and implement scalable ML systems, from experimentation to deployment.
● Build and maintain end-to-end data pipelines for data ingestion, preprocessing, feature engineering, and monitoring.
● Lead the development and deployment of ML models across a variety of use cases — including classical ML and LLM-based applications like summarization, classification, document understanding, and more.
● Define model training and evaluation pipelines, ensuring reproducibility and performance tracking.
● Apply statistical methods to interpret data, validate assumptions, and inform modeling decisions.
● Collaborate cross-functionally with engineers, data analysts, and product managers to solve high-impact business problems using ML.
● Ensure proper MLOps practices are in place for model versioning, monitoring, retraining, and performance management.
● Keep up-to-date with the latest advancements in AI/ML, and actively evaluate and incorporate LLM capabilities and frameworks into solutions.
● Mentor junior ML engineers and data scientists, and help scale the ML function across the organization.
Required Qualifications:
● 7+ years of hands-on experience in ML/AI, building real-world ML systems at scale.
● Proven experience with classical ML algorithms (e.g., regression, classification, clustering, ensemble models).
● Deep expertise in modern LLM frameworks (e.g., OpenAI, HuggingFace, LangChain) and their integration into production workflows.
● Strong experience with Python, and frameworks such as Scikit-learn, TensorFlow, PyTorch, or equivalent.
● Solid background in statistics and the ability to apply statistical thinking to real-world problems.
● Experience with data engineering tools and platforms (e.g., Spark, Airflow, SQL, Pandas, AWS Glue, etc.).
● Familiarity with cloud services (AWS preferred) and containerization tools (Docker, Kubernetes) is a plus.
● Strong communication and leadership skills, with experience mentoring and guiding junior team members.
● Self-starter attitude with a bias for action and ability to thrive in fast-paced environments.
● Master’s degree in Machine Learning, Artificial Intelligence, Statistics, or a related field is preferred.
Preferred Qualifications:
● Experience deploying ML systems in microservices or event-driven architectures.
● Hands-on experience with vector databases, embeddings, and retrieval-augmented generation (RAG) systems.
● Understanding of Responsible AI principles and practices.
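As one possible shape of the MLOps practices mentioned above (model versioning and performance tracking), here is a minimal experiment-tracking sketch assuming the mlflow and scikit-learn packages; the model, parameter, and metric are placeholders.
```python
# Hedged sketch: log a run's hyperparameter and metric to a local MLflow store (./mlruns).
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-logreg"):
    C = 1.0
    model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("C", C)            # hyperparameter for this run
    mlflow.log_metric("accuracy", acc)  # evaluation metric, tracked per run
```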
Why Join Us?
● Lead the ML charter in a mission-driven company solving real-world challenges.
● Work on cutting-edge LLM use cases and platformize ML capabilities for scale.
● Collaborate with a passionate and technically strong team in a high-impact environment.
● Competitive compensation, flexible working model, and ample growth opportunities.


Job Description : Quantitative R&D Engineer
As a Quantitative R&D Engineer, you’ll explore data and design logic that becomes live trading strategies. You’ll bridge the gap between raw research and deployed, autonomous capital systems.
What You’ll Work On
- Analyze on-chain and market data to identify inefficiencies and behavioral patterns.
- Develop and prototype systematic trading strategies using statistical and ML-based techniques.
- Contribute to signal research, backtesting infrastructure, and strategy evaluation frameworks.
- Monitor and interpret DeFi protocol mechanics (AMMs, perps, lending markets) for alpha generation.
- Collaborate with engineers to turn research into production-grade, automated trading systems.
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language
- Understanding of probability, statistics, or ML concepts.
- Self-driven and comfortable with ambiguity, iteration, and fast learning cycles.
- Strong interest in markets, trading, or algorithmic systems.
Bonus Points For
- Experience with backtesting or feature engineering.
- Exposure to crypto primitives (AMMs, perps, mempools, etc.)
- Projects involving alpha signals, strategy testing, or DeFi bots.
- Participation in quant contests, hackathons, or open-source work.
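For candidates new to backtesting, here is a toy moving-average crossover backtest, assuming pandas and numpy; the price series is synthetic and the strategy is illustrative only, not something that is actually traded.
```python
# Hedged sketch: compare buy-and-hold against a simple crossover signal on synthetic prices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))  # synthetic price path

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
signal = (fast > slow).astype(int).shift(1).fillna(0)  # long when the fast MA is above the slow MA

returns = prices.pct_change().fillna(0)
strategy_returns = signal * returns
print("buy-and-hold:", (1 + returns).prod() - 1)
print("crossover strategy:", (1 + strategy_returns).prod() - 1)
```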
What You’ll Gain:
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.


A Concise Glimpse into the Role
We’re on the hunt for young, energetic, and hustling talent ready to bring fresh ideas and unstoppable drive to the table.
This isn’t just another role—it’s a launchpad for change-makers. If you’re driven to disrupt, innovate, and challenge the norm, we want you to make your mark with us.
Are you ready to redefine the future?
Apply now and step into a career where your ideas power the impossible!
Your Time Will Be Invested In
· AI/ML Model Innovation and Research
Are you ready to lead transformative projects at the cutting edge of AI and machine learning? We're looking for a visionary mind with a passion for building groundbreaking solutions that redefine the possible.
What You'll Own
Pioneering AI/ML Model Innovation
· Take ownership of designing, developing, and deploying sophisticated AI and ML models that push boundaries.
· Spearhead the creation of generative AI applications that revolutionize real-world experiences.
· Drive end-to-end implementation of AI-driven products with a focus on measurable impact.
Data Engineering and Advanced Development
· Architect robust pipelines for data collection, pre-processing, and analysis, ensuring precision at every stage.
· Deliver clean, scalable, and high-performance Python code that empowers our AI systems to excel.
Trailblazing Research and Strategic Collaboration
· Dive into the latest research to stay ahead of AI/ML trends, identifying opportunities to integrate state-of-the-art techniques.
· Foster innovation by brainstorming with a dynamic team to conceptualize novel AI solutions.
· Elevate the team's expertise by preparing insightful technical documentation and presenting actionable findings.
What We Want You to Have
· 1–2 years of hands-on experience with live AI projects, from conceptualization to real-world deployment.
· Foundational knowledge in AI, ML, and generative AI applications.
· Proficient in Python and familiar with libraries like TensorFlow, PyTorch, Scikit-learn.
· Experience working with structured & unstructured data, as well as predictive analytics.
· Basic understanding of Deep Learning Techniques.
· Knowledge of AutoGen for building scalable multi-agent AI systems & familiarity with LangChain or similar frameworks for building AI Agents.
· Knowledge of using AI tools like VS Copilot.
· Proficient in working with vector databases for managing and retrieving data.
· Understanding of AI/ML deployment tools such as Docker, Kubernetes.
· Understanding of JavaScript and TypeScript with React and Tailwind.
· Proficiency in Prompt Engineering for various use cases, including content generation and data extraction.
· Ability to work independently and as part of a collaborative team.
· Excellent communication skills and a strong willingness to learn.
Nice to Have
· Prior project or coursework experience in AI/ML.
· Background in Big Data technologies (Spark, Hadoop, Databricks).
· Experience with containerization and deployment tools.
· Proficiency in SQL & NoSQL databases.
· Familiarity with Data Visualization tools (e.g., Matplotlib, Seaborn).
Soft Skills
· Strong problem-solving and analytical capabilities.
· Excellent teamwork and interpersonal communication.
· Ability to thrive in a fast-paced and innovation-driven environment.


Lead Data Scientist role
Work Location- Remote
Exp-7+ Years Relevant
Notice Period- Immediate
Job Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with expertise in Machine Learning (ML), Natural Language Processing (NLP), Generative AI (GenAI) and Deep Learning (DL).
Mandatory Skills:
• 5+ years of work experience in writing code in Python
• Experience in using various Python libraries like Pandas, NumPy
• Experience in writing good-quality code in Python and code refactoring techniques (e.g., IDEs – PyCharm, Visual Studio Code; libraries – Pylint, pycodestyle, pydocstyle, Black)
• Strong experience with AI-assisted coding.
• AI-assisted coding in existing IDEs such as VS Code.
• Has experimented with multiple AI-assisted tools and researched them.
• Deep understanding of data structures, algorithms, and excellent problem-solving skills
• Experience in Python, Exploratory Data Analysis (EDA), Feature Engineering, Data Visualisation
• Machine Learning libraries like Scikit-learn, XGBoost
• Experience in CV, NLP or Time Series.
• Experience in building models for ML tasks (Regression, Classification)
• Should have experience with LLMs, LLM Fine-Tuning, Chatbots, RAG Pipeline Chatbots, LLM Solutions, Multi-Modal LLM Solutions, GPT, Prompts, Prompt Engineering, Tokens, Context Windows, Attention Mechanisms, Embeddings
• Experience of model training and serving on any of the cloud environments (AWS, GCP, Azure)
• Experience in distributed training of models on Nvidia GPUs
• Familiarity with Dockerizing models and creating model endpoints (REST or gRPC); a minimal serving sketch appears at the end of this posting
• Strong working knowledge of source code control tools such as Git, Bitbucket
• Prior experience of designing, developing, and maintaining a Machine Learning solution through its life cycle is highly advantageous
• Strong drive to learn and master new technologies and techniques
• Strong communication and collaboration skills
• Good attitude and self-motivated
Mandatory Skills- *Strong Python coding, Machine Learning, Software Engineering, Deep Learning, Generative AI, LLM, AI Assisted coding tools.*
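As referenced in the model-endpoint bullet above, here is a minimal sketch of serving a model behind a REST endpoint, assuming the fastapi, uvicorn, and scikit-learn packages; the toy model and field names are placeholders.
```python
# Hedged sketch: a FastAPI prediction endpoint around a toy scikit-learn model.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Train a toy model at startup; in practice you would load a versioned artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

class Features(BaseModel):
    measurements: list[float]  # four iris measurements, purely illustrative

@app.post("/predict")
def predict(features: Features):
    pred = model.predict([features.measurements])[0]
    return {"class": int(pred)}

# Run with: uvicorn app:app --port 8000
```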


We are looking for a dynamic and skilled Business Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in Business Analysis, Power BI, Tableau, and Machine Learning.


About Us
DAITA is a German AI startup revolutionizing the global textile supply chain by digitizing factory-to-brand workflows. We are building cutting-edge AI-powered SaaS and Agentic Systems that automate order management, production tracking, and compliance — making the supply chain smarter, faster, and more transparent.
Fresh off a $500K pre-seed raise, our passionate team is on the ground in India, collaborating directly with factories and brands to build our MVP and create real-world impact. If you’re excited by the intersection of AI, SaaS, and supply chain innovation, join us to help reshape how textiles move from factory floors to global brands.
Role Overview
We’re seeking a versatile Full-Stack Engineer to join our growing engineering team. You’ll be instrumental in designing and building scalable, secure, and high-performance applications that power our AI-driven platform. Working closely with Founders, ML Engineers, and Pilot Customers, you’ll transform complex AI workflows into intuitive, production-ready features.
What You’ll Do
• Design, develop, and deploy backend services, APIs, and microservices powering our platform.
• Build responsive, user-friendly frontend applications tailored for factory and brand users.
• Integrate AI/ML models and agentic workflows into seamless production environments.
• Develop features supporting order parsing, supply chain tracking, compliance, and reporting.
• Collaborate cross-functionally to iterate rapidly, test with users, and deliver impactful releases.
• Optimize applications for performance, scalability, and cost-efficiency on cloud platforms.
• Establish and improve CI/CD pipelines, deployment processes, and engineering best practices.
• Write clear documentation and maintain clean, maintainable code.
Required Skills
• 3–5 years of professional Full-Stack development experience
• Strong backend skills with frameworks like Node.js, Python (FastAPI, Django), Go, or similar
• Frontend experience with React, Vue.js, Next.js, or similar modern frameworks
• Solid knowledge and experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis, Neon)
• Strong API design skills (REST mandatory; GraphQL a plus)
• Containerization expertise with Docker
• Container orchestration and management with Kubernetes (including experience with Helm charts, operators, or custom resource definitions)
• Cloud deployment and infrastructure experience on AWS, GCP or Azure
• Hands-on experience deploying AI/ML models in cloud-native environments (AWS, GCP or Azure) with scalable infrastructure and monitoring.
• Experience with managed AI/ML services like AWS SageMaker, GCP Vertex AI, Azure ML, Together.ai, or similar
• Experience with CI/CD pipelines and DevOps tools such as Jenkins, GitHub Actions, Terraform, Ansible, or ArgoCD
• Familiarity with monitoring, logging, and observability tools like Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), or Helicone
Nice-to-have
• Experience with TypeScript for full-stack AI SaaS development
• Use of modern UI frameworks and tooling like Tailwind CSS
• Familiarity with modern AI-first SaaS concepts viz. vector databases for fast ML data retrieval, prompt engineering for LLM integration, integrating with OpenRouter or similar LLM orchestration frameworks etc.
• Knowledge of MLOps tools like Kubeflow, MLflow, or Seldon for model lifecycle management.
• Background in building data pipelines, real-time analytics, and predictive modeling.
• Knowledge of AI-driven security tools and best practices for SaaS compliance.
• Proficiency in cloud automation, cost optimization, and DevOps for AI workflows.
• Ability to design and implement hyper-personalized, adaptive user experiences.
What We Value
• Ownership: You take full responsibility for your work and ship high-quality solutions quickly.
• Bias for Action: You’re pragmatic, proactive, and focused on delivering results.
• Clear Communication: You articulate ideas, challenges, and solutions effectively across teams.
• Collaborative Spirit: You thrive in a cross-functional, distributed team environment.
• Customer Focus: You build with empathy for end users and real-world usability.
• Curiosity & Adaptability: You embrace learning, experimentation, and pivoting when needed.
• Quality Mindset: You write clean, maintainable, and well-tested code.
Why Join DAITA?
• Be part of a mission-driven startup transforming a $1+ Trillion global industry.
• Work closely with founders and AI experts on cutting-edge technology.
• Directly impact real-world supply chains and sustainability.
• Grow your skills in AI, SaaS, and supply chain tech in a fast-paced environment.


We are seeking a passionate and knowledgeable Data Science and Data Analyst Trainer to deliver engaging and industry-relevant training programs. The trainer will be responsible for teaching core concepts in data analytics, machine learning, data visualization, and related tools and technologies. The ideal candidate will have 2–5 years of hands-on experience in the data domain and a flair for teaching and mentoring students or working professionals.


We are looking for a dynamic and skilled Data Science and Data Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in data analytics, data science, and business intelligence tools.


We are seeking a dynamic and experienced Data Analytics and Data Science Trainer to deliver high-quality training sessions, mentor learners, and design engaging course content. The ideal candidate will have a strong foundation in statistics, programming, and data visualization tools, and should be passionate about teaching and guiding aspiring professionals.


About Us:
We are a UK-based conveyancing firm dedicated to transforming property transactions through cutting-edge artificial intelligence. We are seeking a talented Machine Learning Engineer with 1–2 years of experience to join our growing AI team. This role offers a unique opportunity to work on scalable ML systems and Generative AI applications in a dynamic and impactful environment.
Responsibilities:
Design, Build, and Deploy Scalable ML Models
You will be responsible for end-to-end development of machine learning and deep learning models that can be scaled to handle real-world data and use cases. This includes training, testing, validating, and deploying models efficiently in production environments.
Develop NLP-Based Automation Solutions
You'll create natural language processing pipelines that automate tasks such as document understanding, text classification, and summarisation, enabling intelligent handling of property-related documents.
Prototype and Implement Generative AI Tools
Work closely with AI researchers and developers to experiment with and implement Generative AI techniques for tasks like content generation, intelligent suggestions, and workflow automation.
Integrate ML Models with APIs and Tools
Integrate machine learning models with external APIs and internal systems to support business operations and enhance customer service workflows.
Maintain CI/CD for ML Features
Collaborate with DevOps teams to manage CI/CD pipelines that automate testing, validation, and deployment of ML features and updates.
Review, Debug, and Optimise Models
Participate in thorough code reviews and model debugging sessions. Continuously monitor and fine-tune deployed models to improve their performance and reliability.
Cross-Team Communication
Communicate technical concepts effectively across teams, translating complex ML ideas into actionable business value.
· Design, build, and deploy scalable ML and deep learning models for real-world applications.
· Develop NLP-based and Gen AI based solutions for automating document understanding, classification, and summarisation.
· Collaborate with AI researchers and developers to prototype and implement Generative AI tools.
· Integrate ML and Gen AI models with APIs and internal tools to support business operations.
· Work with CI/CD pipelines to ensure continuous delivery of ML features and updates.
· Participate in code reviews, debugging, and performance optimisation of deployed models.
· Communicate technical concepts effectively across cross-functional teams.
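To give a flavour of the NLP document-classification work described above, here is a minimal sketch assuming scikit-learn; the example documents and labels are invented, not real conveyancing data.
```python
# Hedged sketch: TF-IDF features plus logistic regression for classifying document snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "Transfer of title for 12 Oak Lane",
    "Mortgage deed signed by the lender",
    "Local authority search results enclosed",
]
labels = ["title", "mortgage", "search"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(docs, labels)
print(clf.predict(["Please find the signed mortgage deed attached"]))  # likely ['mortgage']
```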
Essentials From Day 1:
Security and Compliance:
• Ensure ML systems are built with GDPR compliance in mind.
• Adhere to RBAC policies and maintain secure handling of personal and property data.
Sandboxing and Risk Management:
• Use sandboxed environments for testing new ML features.
• Conduct basic risk analysis for model performance and data bias.
• Use sandboxed environments for testing and development.
• Evaluate and mitigate potential risks in model behavior and data pipelines
Qualifications:
· 1–2 years of professional experience in Machine Learning and Deep Learning projects.
· Proficient in Python, Object-Oriented Programming (OOPs), and Data Structures & Algorithms (DSA).
· Strong understanding of NLP and its real-world applications.
· Exposure to building scalable ML systems and deploying models into production.
· Basic working knowledge of Generative AI techniques and frameworks.
· Familiarity with CI/CD tools and experience with API-based integration.
· Excellent analytical thinking and debugging capabilities.
· Strong interpersonal and communication skills for effective team collaboration.

Job Overview:
We are seeking a highly experienced and innovative Senior AI Engineer with a strong background in Generative AI, including LLM fine-tuning and prompt engineering. This role requires hands-on expertise across NLP, Computer Vision, and AI agent-based systems, with the ability to build, deploy, and optimize scalable AI solutions using modern tools and frameworks.
Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, AI, Machine Learning, or related field.
- 5+ years of hands-on experience in AI/ML solution development.
- Proven expertise in fine-tuning LLMs (e.g., LLaMA, Mistral, Falcon, GPT-family) using techniques like LoRA, QLoRA, PEFT.
- Deep experience in prompt engineering, including zero-shot, few-shot, and retrieval-augmented generation (RAG).
- Proficient in key AI libraries and frameworks:
- LLMs & GenAI: Hugging Face Transformers, LangChain, LlamaIndex, OpenAI API, Diffusers
- NLP: SpaCy, NLTK.
- Vision: OpenCV, MMDetection, YOLOv5/v8, Detectron2
- MLOps: MLflow, FastAPI, Docker, Git
- Familiarity with vector databases (Pinecone, FAISS, Weaviate) and embedding generation.
- Experience with cloud platforms like AWS, GCP, or Azure, and deployment on in-house GPU-backed infrastructure.
- Strong communication skills and ability to convert business problems into technical solutions.
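For orientation, here is a minimal LoRA fine-tuning setup in the spirit of the LoRA/QLoRA/PEFT requirement above, assuming the transformers and peft packages; the base model and hyperparameters are illustrative, not a recommendation.
```python
# Hedged sketch: wrap a small causal LM with a LoRA adapter so only low-rank weights train.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-125m"  # small model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the adapter parameters are trainable
```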
Preferred Qualifications:
- Experience building multimodal systems (text + image, etc.)
- Practical experience with agent frameworks for autonomous or goal-directed AI.
- Familiarity with quantization, distillation, or knowledge transfer for efficient model deployment.
Key Responsibilities:
- Design, fine-tune, and deploy generative AI models (LLMs, diffusion models, etc.) for real-world applications.
- Develop and maintain prompt engineering workflows, including prompt chaining, optimization, and evaluation for consistent output quality.
- Build NLP solutions for Q&A, summarization, information extraction, text classification, and more.
- Develop and integrate Computer Vision models for image processing, object detection, OCR, and multimodal tasks.
- Architect and implement AI agents using frameworks such as LangChain, AutoGen, CrewAI, or custom pipelines.
- Collaborate with cross-functional teams to gather requirements and deliver tailored AI-driven features.
- Optimize models for performance, cost-efficiency, and low latency in production.
- Continuously evaluate new AI research, tools, and frameworks and apply them where relevant.
- Mentor junior AI engineers and contribute to internal AI best practices and documentation.



Data Scientist
Job Id: QX003
About Us:
QX impact was launched with a mission to make AI accessible and affordable and to deliver AI products and solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will continue to struggle to understand their customers and may even lose them; without insights, they won't be able to deliver differentiated products and services; and without insights, they can't achieve the new level of “Operational Excellence” that is crucial for remaining competitive, meeting rising customer expectations, expanding into new markets, and digitalizing.
Position Overview:
We are seeking a collaborative and analytical Data Scientist who can bridge the gap between business needs and data science capabilities. In this role, you will lead and support projects that apply machine learning, AI, and statistical modeling to generate actionable insights and drive business value.
Key Responsibilities:
- Collaborate with stakeholders to define and translate business challenges into data science solutions.
- Conduct in-depth data analysis on structured and unstructured datasets.
- Build, validate, and deploy machine learning models to solve real-world problems.
- Develop clear visualizations and presentations to communicate insights.
- Drive end-to-end project delivery, from exploration to production.
- Contribute to team knowledge sharing and mentorship activities.
Must-Have Skills:
- 3+ years of progressive experience in data science, applied analytics, or a related quantitative role, demonstrating a proven track record of delivering impactful data-driven solutions.
- Exceptional programming proficiency in Python, including extensive experience with core libraries such as Pandas, NumPy, Scikit-learn, NLTK and XGBoost.
- Expert-level SQL skills for complex data extraction, transformation, and analysis from various relational databases.
- Deep understanding and practical application of statistical modeling and machine learning techniques, including but not limited to regression, classification, clustering, time series analysis, and dimensionality reduction.
- Proven expertise in end-to-end machine learning model development lifecycle, including robust feature engineering, rigorous model validation and evaluation (e.g., A/B testing), and model deployment strategies.
- Demonstrated ability to translate complex business problems into actionable analytical frameworks and data science solutions, driving measurable business outcomes.
- Proficiency in advanced data analysis techniques, including Exploratory Data Analysis (EDA), customer segmentation (e.g., RFM analysis), and cohort analysis, to uncover actionable insights.
- Experience in designing and implementing data models, including logical and physical data modeling, and developing source-to-target mappings for robust data pipelines.
- Exceptional communication skills, with the ability to clearly articulate complex technical findings, methodologies, and recommendations to diverse business stakeholders (both technical and non-technical audiences).
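As a small illustration of the RFM-style segmentation mentioned above, here is a pandas sketch on an invented transactions table; the column names and values are placeholders.
```python
# Hedged sketch: compute recency, frequency, and monetary value per customer.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "order_date": pd.to_datetime(["2024-01-05", "2024-03-01", "2024-02-10",
                                  "2024-01-20", "2024-02-15", "2024-03-10"]),
    "amount": [120.0, 80.0, 45.0, 200.0, 150.0, 90.0],
})

snapshot = tx["order_date"].max() + pd.Timedelta(days=1)
rfm = tx.groupby("customer_id").agg(
    recency=("order_date", lambda d: (snapshot - d.max()).days),  # days since last order
    frequency=("order_date", "count"),                            # number of orders
    monetary=("amount", "sum"),                                   # total spend
)
print(rfm)
```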
Good-to-Have Skills:
- Experience with cloud platforms (Azure, AWS, GCP) and specific services like Azure ML, Synapse, Azure Kubernetes and Databricks.
- Familiarity with big data processing tools like Apache Spark or Hadoop.
- Exposure to MLOps tools and practices (e.g., MLflow, Docker, Kubeflow) for model lifecycle management.
- Knowledge of deep learning libraries (TensorFlow, PyTorch) or experience with Generative AI (GenAI) and Large Language Models (LLMs).
- Proficiency with business intelligence and data visualization tools such as Tableau, Power BI, or Plotly.
- Experience working within Agile project delivery methodologies.
Competencies:
· Tech Savvy - Anticipating and adopting innovations in business-building digital and technology applications.
· Self-Development - Actively seeking new ways to grow and be challenged using both formal and informal development channels.
· Action Oriented - Taking on new opportunities and tough challenges with a sense of urgency, high energy, and enthusiasm.
· Customer Focus - Building strong customer relationships and delivering customer-centric solutions.
· Optimizes Work Processes - Knowing the most effective and efficient processes to get things done, with a focus on continuous improvement.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.



Role : AIML Engineer
Location : Madurai
Experience : 5 to 10 Yrs
Mandatory Skills : AIML, Python, SQL, ML Models, PyTorch, Pandas, Docker, AWS
Language: Python
DBs : SQL
Core Libraries:
Time Series & Forecasting: pmdarima, statsmodels, Prophet, GluonTS, NeuralProphet
SOTA ML: ML models, boosting & ensemble models, etc.
Explainability: SHAP / LIME
Required skills:
- Deep Learning: PyTorch, PyTorch Forecasting,
- Data Processing: Pandas, NumPy, Polars (optional), PySpark
- Hyperparameter Tuning: Optuna, Amazon SageMaker Automatic Model Tuning
- Deployment & MLOps: Batch & Realtime with API endpoints, MLFlow
- Serving: TorchServe, Sagemaker endpoints / batch
- Containerization: Docker
- Orchestration & Pipelines: AWS Step Functions, AWS SageMaker Pipelines
AWS Services:
- SageMaker (Training, Inference, Tuning)
- S3 (Data Storage)
- CloudWatch (Monitoring)
- Lambda (Trigger-based Inference)
- ECR, ECS or Fargate (Container Hosting)
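For context on the forecasting stack listed above, here is a small sketch using statsmodels' ARIMA on a synthetic daily series; the model order and data are illustrative only.
```python
# Hedged sketch: fit an ARIMA(1,1,1) on a synthetic trend-plus-noise series and forecast a week ahead.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = pd.date_range("2024-01-01", periods=60, freq="D")
y = pd.Series(10 + 0.2 * np.arange(60) + np.random.default_rng(0).normal(0, 1, 60), index=rng)

model = ARIMA(y, order=(1, 1, 1)).fit()
print(model.forecast(steps=7))  # seven-day-ahead forecast
```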


We are building an advanced, AI-driven multi-agent software system designed to revolutionize task automation and code generation. This is a futuristic AI platform capable of:
✅ Real-time self-coding based on tasks
✅ Autonomous multi-agent collaboration
✅ AI-powered decision-making
✅ Cross-platform compatibility (Desktop, Web, Mobile)
We are hiring a highly skilled **AI Engineer & Full-Stack Developer** based in India, with a strong background in AI/ML, multi-agent architecture, and scalable, production-grade software development.
### Responsibilities:
- Build and maintain a multi-agent AI system (AutoGPT, BabyAGI, MetaGPT concepts)
- Integrate large language models (GPT-4o, Claude, open-source LLMs)
- Develop full-stack components (Backend: Python, FastAPI/Flask, Frontend: React/Next.js)
- Work on real-time task execution pipelines
- Build cross-platform apps using Electron or Flutter
- Implement Redis, Vector databases, scalable APIs
- Guide the architecture of autonomous, self-coding AI systems
### Must-Have Skills:
- Python (advanced, AI applications)
- AI/ML experience, including multi-agent orchestration
- LLM integration knowledge
- Full-stack development: React or Next.js
- Redis, Vector Databases (e.g., Pinecone, FAISS)
- Real-time applications (websockets, event-driven)
- Cloud deployment (AWS, GCP)
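As a quick illustration of the vector-database requirement above, here is a tiny exact nearest-neighbour search sketch assuming the faiss-cpu and numpy packages; the vectors are random stand-ins for real embeddings.
```python
# Hedged sketch: build a flat L2 index and query it.
import faiss
import numpy as np

d = 64                                                              # embedding dimension
xb = np.random.default_rng(0).random((100, d)).astype("float32")    # "stored" embeddings
xq = xb[:1]                                                         # query with the first vector

index = faiss.IndexFlatL2(d)  # exact L2 search
index.add(xb)
distances, ids = index.search(xq, 3)
print(ids[0])  # nearest-neighbour ids; the top hit is the query vector itself
```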
### Good to Have:
- Experience with code-generation AI models (Codex, GPT-4o coding abilities)
- Microservices and secure system design
- Knowledge of AI for workflow automation and productivity tools
Join us to work on cutting-edge AI technology that builds the future of autonomous software.


AccioJob is conducting a Walk-In Hiring Drive with Atomic Loops for the position of AI/ML Developer Intern.
To apply, register, and select your slot here: https://go.acciojob.com/E8wPb8
Required Skills: Python, AI, Prompting, ML understanding
Eligibility: ALL
Degree: ALL
Branch: ALL
Graduation Year: 2019, 2020, 2021, 2022, 2023, 2024, 2025, 2026
Work Details:
- Work Location: Pune (Onsite)
- CTC: 4 LPA to 5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Pune Centre
Further Rounds (for shortlisted candidates only):
Profile & Background Screening Round
Company Side Process: 2 rounds for the intern role and 3 rounds for the full-time role (Virtual or Face-to-Face)
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/E8wPb8


About Us
Alfred Capital is a next-generation on-chain proprietary quantitative trading technology provider, pioneering fully autonomous algorithmic systems that reshape trading and capital allocation in decentralized finance.
As a sister company of Deqode — a 400+ person blockchain innovation powerhouse — we operate at the cutting edge of quant research, distributed infrastructure, and high-frequency execution.
What We Build
- Alpha Discovery via On‑Chain Intelligence — Developing trading signals using blockchain data, CEX/DEX markets, and protocol mechanics.
- DeFi-Native Execution Agents — Automated systems that execute trades across decentralized platforms.
- ML-Augmented Infrastructure — Machine learning pipelines for real-time prediction, execution heuristics, and anomaly detection.
- High-Throughput Systems — Resilient, low-latency engines that operate 24/7 across EVM and non-EVM chains tuned for high-frequency trading (HFT) and real-time response
- Data-Driven MEV Analysis & Strategy — We analyze mempools, order flow, and validator behaviors to identify and capture MEV opportunities ethically—powering strategies that interact deeply with the mechanics of block production and inclusion.
Evaluation Process
- HR Discussion – A brief conversation to understand your motivation and alignment with the role.
- Initial Technical Interview – A quick round focused on fundamentals and problem-solving approach.
- Take-Home Assignment – Assesses research ability, learning agility, and structured thinking.
- Assignment Presentation – Deep-dive into your solution, design choices, and technical reasoning.
- Final Interview – A concluding round to explore your background, interests, and team fit in depth.
- Optional Interview – In specific cases, an additional round may be scheduled to clarify certain aspects or conduct further assessment before making a final decision.
Job Description : Blockchain Data & ML Engineer
As a Blockchain Data & ML Engineer, you’ll work on ingesting and modelling on-chain behaviour, building scalable data pipelines, and designing systems that support intelligent, autonomous market interaction.
What You’ll Work On
- Build and maintain ETL pipelines for ingesting and processing blockchain data.
- Assist in designing, training, and validating machine learning models for prediction and anomaly detection.
- Evaluate model performance, tune hyperparameters, and document experimental results.
- Develop monitoring tools to track model accuracy, data drift, and system health.
- Collaborate with infrastructure and execution teams to integrate ML components into production systems.
- Design and maintain databases and storage systems to efficiently manage large-scale datasets.
Ideal Traits
- Strong in data structures, algorithms, and core CS fundamentals.
- Proficiency in any programming language
- Familiarity with backend systems, APIs, and database design, along with a basic understanding of machine learning and blockchain fundamentals.
- Curiosity about how blockchain systems and crypto markets work under the hood.
- Self-motivated, eager to experiment and learn in a dynamic environment.
Bonus Points For
- Hands-on experience with pandas, numpy, scikit-learn, or PyTorch.
- Side projects involving automated ML workflows, ETL pipelines, or crypto protocols.
- Participation in hackathons or open-source contributions.
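To make the anomaly-detection work above concrete, here is a small sketch with scikit-learn's IsolationForest on synthetic data; the sizes and contamination rate are illustrative.
```python
# Hedged sketch: flag unusual points in a synthetic 2-D activity dataset.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # typical activity
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))   # unusual spikes
X = np.vstack([normal, outliers])

clf = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = clf.predict(X)                                  # -1 = anomaly, 1 = normal
print((labels == -1).sum(), "points flagged as anomalies")
```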
What You’ll Gain
- Cutting-Edge Tech Stack: You'll work on modern infrastructure and stay up to date with the latest trends in technology.
- Idea-Driven Culture: We welcome and encourage fresh ideas. Your input is valued, and you're empowered to make an impact from day one.
- Ownership & Autonomy: You’ll have end-to-end ownership of projects. We trust our team and give them the freedom to make meaningful decisions.
- Impact-Focused: Your work won’t be buried under bureaucracy. You’ll see it go live and make a difference in days, not quarters
What We Value:
- Craftsmanship over shortcuts: We appreciate engineers who take the time to understand the problem deeply and build durable solutions—not just quick fixes.
- Depth over haste: If you're the kind of person who enjoys going one level deeper to really "get" how something works, you'll thrive here.
- Invested mindset: We're looking for people who don't just punch tickets, but care about the long-term success of the systems they build.
- Curiosity with follow-through: We admire those who take the time to explore and validate new ideas, not just skim the surface.
Compensation:
- INR 6 - 12 LPA
- Performance Bonuses: Linked to contribution, delivery, and impact.


Brudite is an IT Training and Services company shaping the future of technology with Fortune 500 clients. We specialize in empowering young engineers to achieve their dreams through cutting-edge training, innovative products, and comprehensive services.
Proudly registered with iStart Rajasthan and Startup India, we are supported by industry leaders like NVIDIA and AWS.
Roles and Responsibilities -
- A can-do attitude to new challenges.
- Strong understanding of computer science fundamentals, including operating systems, Databases, and Networking.
- Knowledge of Python or any other programming language.
- Basic knowledge of Cloud Computing(AWS/Azure/GCP) will be a Plus.
- Basic Knowledge of Any Front-end Framework will be a Plus.
- We operate in a fast-paced, startup-like environment, so the ability to work in a dynamic, agile environment is essential.
- Strong written and verbal communication skills are essential for this role. You'll need to communicate with clients, team members, and stakeholders.
- Ability to learn and adapt to new technology trends and a curiosity to learn are essential

Technical Skills – Must have
Lead the design and development of AI-driven test automation frameworks and solutions.
Collaborate with stakeholders (e.g., product managers, developers, data scientists) to understand testing requirements and identify areas where AI automation can be effectively implemented.
Develop and implement test automation strategies for AI-based systems, encompassing various aspects like data generation, model testing, and performance evaluation.
Evaluate and select appropriate tools and technologies for AI test automation, including AI frameworks, testing tools, and automation platforms.
Define and implement best practices for AI test automation, covering areas like code standards, test case design, test data management, and ethical considerations.
Lead and mentor a team of test automation engineers in designing, developing, and executing AI test automation solutions.
Collaborate with development teams to ensure the testability of AI models and systems, providing guidance and feedback throughout the development lifecycle.
Analyze test results and identify areas for improvement in the AI automation process, continuously optimizing testing effectiveness and efficiency.
Stay up-to-date with the latest advancements and trends in AI and automation technologies, actively adapting and implementing new knowledge to enhance testing capabilities.
Knowledge in Generative AI and Conversational AI for implementation in test automation strategies is highly desirable.
Proficiency in programming languages commonly used in AI, such as Python, Java, or R.
Knowledge of AI frameworks and libraries, such as TensorFlow, PyTorch, or scikit-learn.
Familiarity with testing methodologies and practices, including Agile and DevOps.
Working experience with Python/Java and Selenium, as well as knowledge of prompt engineering.

Senior Data Engineer Job Description
Overview
The Senior Data Engineer will design, develop, and maintain scalable data pipelines and infrastructure to support data-driven decision-making and advanced analytics. This role requires deep expertise in data engineering, strong problem-solving skills, and the ability to collaborate with cross-functional teams to deliver robust data solutions.
Key Responsibilities
Data Pipeline Development: Design, build, and optimize scalable, secure, and reliable data pipelines to ingest, process, and transform large volumes of structured and unstructured data.
Data Architecture: Architect and maintain data storage solutions, including data lakes, data warehouses, and databases, ensuring performance, scalability, and cost-efficiency.
Data Integration: Integrate data from diverse sources, including APIs, third-party systems, and streaming platforms, ensuring data quality and consistency.
Performance Optimization: Monitor and optimize data systems for performance, scalability, and cost, implementing best practices for partitioning, indexing, and caching.
Collaboration: Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions that enable advanced analytics, machine learning, and reporting.
Data Governance: Implement data governance policies, ensuring compliance with data security, privacy regulations (e.g., GDPR, CCPA), and internal standards.
Automation: Develop automated processes for data ingestion, transformation, and validation to improve efficiency and reduce manual intervention.
Mentorship: Guide and mentor junior data engineers, fostering a culture of technical excellence and continuous learning.
Troubleshooting: Diagnose and resolve complex data-related issues, ensuring high availability and reliability of data systems.
Required Qualifications
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
Experience: 5+ years of experience in data engineering or a related role, with a proven track record of building scalable data pipelines and infrastructure.
Technical Skills:
Proficiency in programming languages such as Python, Java, or Scala.
Expertise in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra).
Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., Redshift, BigQuery, Snowflake).
Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, Talend, Informatica) and data integration frameworks.
Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) and distributed systems.
Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
Soft Skills:
Excellent problem-solving and analytical skills.
Strong communication and collaboration abilities.
Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
Certifications (optional but preferred): Cloud certifications (e.g., AWS Certified Data Analytics, Google Professional Data Engineer) or relevant data engineering certifications.
Preferred Qualifications
Experience with real-time data processing and streaming architectures.
Familiarity with machine learning pipelines and MLOps practices.
Knowledge of data visualization tools (e.g., Tableau, Power BI) and their integration with data pipelines.
Experience in industries with high data complexity, such as finance, healthcare, or e-commerce.
Work Environment
Location: Hybrid/Remote/On-site (depending on company policy).
Team: Collaborative, cross-functional team environment with data scientists, analysts, and business stakeholders.
Hours: Full-time, with occasional on-call responsibilities for critical data systems.
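As one concrete shape of the ETL pipelines described above, here is a minimal Airflow DAG sketch, assuming Apache Airflow 2.4+; the DAG id, schedule, and task bodies are placeholders.
```python
# Hedged sketch: a two-step extract -> transform DAG with PythonOperator tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw records from the source system")

def transform():
    print("cleaning and reshaping records")

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform only after extract succeeds
```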


Job description
Brief Description
One of our clients is looking for a Lead Engineer in Bhopal with 5–10 years of experience. Candidates must have strong expertise in Python. Additional experience in AI/ML, MERN Stack, or Full Stack Development is a plus.
Job Description
We are seeking a highly skilled and experienced Lead Engineer – Python AI to join our dynamic team. The ideal candidate will have a strong background in AI technologies, MERN stack, and Python full stack development, with a passion for building scalable and intelligent systems. This role involves leading development efforts, mentoring junior engineers, and collaborating with cross-functional teams to deliver cutting-edge AI-driven solutions.
Key Responsibilities:
- Lead the design, development, and deployment of AI-powered applications using Python and MERN stack.
- Architect scalable and maintainable full-stack solutions integrating AI models and data pipelines.
- Collaborate with data scientists and product teams to integrate machine learning models into production systems.
- Ensure code quality, performance, and security across all layers of the application.
- Mentor and guide junior developers, fostering a culture of technical excellence.
- Stay updated with emerging technologies in AI, data engineering, and full-stack development.
- Participate in code reviews, sprint planning, and technical discussions.
Required Skills:
- 5+ years of experience in software development with a strong focus on Python full stack and MERN stack.
- Hands-on experience with AI technologies, machine learning frameworks (e.g., TensorFlow, PyTorch), and data processing tools.
- Proficiency in MongoDB, Express.js, React.js, Node.js.
- Strong understanding of RESTful APIs, microservices architecture, and cloud platforms (AWS, Azure, GCP).
- Experience with CI/CD pipelines, containerization (Docker), and version control (Git).
- Excellent problem-solving skills and ability to work in a fast-paced environment.
Education Qualification:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Certifications in AI/ML or Full Stack Development are a plus.



Position – Python Developer
Location – Navi Mumbai
Who are we
Based out of IIT Bombay, HaystackAnalytics is a HealthTech company creating clinical genomics products, which enable diagnostic labs and hospitals to offer accurate and personalized diagnostics. Supported by India's most respected science agencies (DST, BIRAC, DBT), we created and launched a portfolio of products to offer genomics in infectious diseases. Our genomics-based diagnostic solution for Tuberculosis was recognized as one of the top innovations supported by BIRAC in the past 10 years, and was launched by the Prime Minister of India in the BIRAC Showcase event in Delhi, 2022.
Objectives of this Role:
- Design and implement efficient, scalable backend services using Python.
- Work closely with healthcare domain experts to create innovative and accurate diagnostics solutions.
- Build APIs, services, and scripts to support data processing pipelines and front-end applications.
- Automate recurring tasks and ensure robust integration with cloud services.
- Maintain high standards of software quality and performance using clean coding principles and testing practices.
- Collaborate within the team to upskill and unblock each other for faster and better outcomes.
Primary Skills – Python Development
- Proficient in Python 3 and its ecosystem
- Frameworks: Flask / Django / FastAPI
- RESTful API development
- Understanding of OOPs and SOLID design principles
- Asynchronous programming (asyncio, aiohttp)
- Experience with task queues (Celery, RQ)
- Rust programming experience for systems-level or performance-critical components
Testing & Automation
- Unit Testing: PyTest / unittest
- Automation tools: Ansible / Terraform (good to have)
- CI/CD pipelines
DevOps & Cloud
- Docker, Kubernetes (basic knowledge expected)
- Cloud platforms: AWS / Azure / GCP
- GIT and GitOps workflows
- Familiarity with containerized deployment & serverless architecture
Bonus Skills
- Data handling libraries: Pandas / NumPy
- Experience with scripting: Bash / PowerShell
- Functional programming concepts
- Familiarity with front-end integration (REST API usage, JSON handling)
Other Skills
- Innovation and thought leadership
- Interest in learning new tools, languages, workflows
- Strong communication and collaboration skills
- Basic understanding of UI/UX principles
To know more about us – https://haystackanalytics.in

- 3+ years owning ML/LLM services in production on Azure (AKS, Azure OpenAI/Azure ML) or another major cloud.
- Strong Python plus hands-on work with a modern deep-learning stack (PyTorch / TensorFlow / HF Transformers).
- Built features with LLM toolchains: prompt engineering, function calling / tools, vector stores (FAISS, Pinecone, etc.).
- Familiar with agentic AI patterns (LangChain / LangGraph, eval harnesses, guardrails) and strategies to tame LLM non-determinism.
- Comfortable with containerization & CI/CD (Docker, Kubernetes, Git-based workflows); can monitor, scale and troubleshoot live services.
Nice-to-Haves
- Experience in billing, collections, fintech, or professional-services SaaS.
- Knowledge of email deliverability, templating engines, or CRM systems.
- Exposure to compliance frameworks (SOC 2, ISO 27001) or secure handling of financial data.

Job Title: Node.js / AI Engineer
Department: Technology
Location: Remote
Company: Mercer Talent Enterprise
Company Overview: Mercer Talent Enterprise is a leading provider of talent management solutions, dedicated to helping organizations optimize their workforce. We foster a collaborative and innovative work environment where our team members can thrive and contribute to our mission of enhancing talent strategies for our clients.
Position Overview: We are looking for a skilled Node.js / AI Engineer to join our Lighthouse Tech Team. This role is focused on application development, where you will be responsible for designing, developing, and deploying intelligent, AI-powered applications. You will leverage your expertise in Node.js and modern AI technologies to build sophisticated systems that feature Large Language Models (LLMs), AI Agents, and Retrieval-Augmented Generation (RAG) pipelines.
Key Responsibilities:
- Develop and maintain robust and scalable backend services and APIs using Node.js.
- Design, build, and integrate AI-powered features into our core applications.
- Implement and optimize Retrieval-Augmented Generation (RAG) systems to ensure accurate and context-aware responses.
- Develop and orchestrate autonomous AI agents to automate complex tasks and workflows.
- Work with third-party LLM APIs (e.g., OpenAI, Anthropic) and open-source models, fine-tuning and adapting them for specific use cases.
- Collaborate with product managers and developers to define application requirements and deliver high-quality, AI-driven solutions.
- Ensure the performance, quality, and responsiveness of AI-powered applications.
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of professional experience in backend application development with a strong focus on Node.js.
- 2+ years of hands-on experience in AI-related development, including building applications that integrate with Large Language Models (LLMs).
- Demonstrable experience developing AI agents and implementing RAG patterns.
- Familiarity with AI/ML frameworks and libraries relevant to application development (e.g., LangChain, LlamaIndex).
- Experience with vector databases (e.g., Pinecone, Chroma, Weaviate) is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
Benefits:
- Competitive salary and performance-based bonuses.
- Professional development opportunities.

Key Responsibilities:
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems (a minimal serving sketch follows this list).
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.
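To illustrate the "integrate AI/ML models behind an API" responsibility, here is a minimal sketch that serves a Hugging Face sentiment pipeline from a small Flask endpoint; the same pattern would sit behind a DRF view in a Django project. The route, port, and model name are illustrative choices, not a specific team's service.

```python
# Hedged sketch: serving a Hugging Face sentiment pipeline from a small Flask API.
# The model name, route, and port are illustrative placeholders.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)

# Load the model once at startup so each request does not reload the weights.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)


@app.post("/predict")
def predict():
    text = request.get_json(force=True).get("text", "")
    result = classifier(text)[0]  # e.g. {"label": "POSITIVE", "score": 0.99}
    return jsonify(label=result["label"], score=float(result["score"]))


if __name__ == "__main__":
    app.run(port=8000)
```

It can be exercised with, for example, curl -X POST http://localhost:8000/predict -H "Content-Type: application/json" -d '{"text": "great product"}'.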
Required Skills and Qualifications:
● Bachelor’s degree in Computer Science, Engineering, or a related field.
● 2+ years of professional experience as a Python Developer.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.

We are seeking a highly skilled Senior/Lead Data Scientist with deep expertise in AI/ML/Gen AI, including Deep Learning, Computer Vision, and NLP. The ideal candidate will bring strong hands-on experience, particularly in building, fine-tuning, and deploying models, and will work directly with customers with minimal supervision.
This role requires someone who can not only lead and execute technical projects but also actively contribute to business development through customer interaction, proposal building, and RFP responses. You will be expected to take ownership of AI project execution and team leadership, while helping Tekdi expand its AI footprint.
Key Responsibilities:
- Contribute to AI business growth by working on RFPs, proposals, and solutioning activities.
- Lead the team in delivering customer requirements, ensuring quality and timely execution.
- Develop and fine-tune advanced AI/ML models using deep learning and generative AI techniques.
- Fine-tune and optimize Large Language Models (LLMs) such as GPT, BERT, T5, and LLaMA; a minimal LoRA fine-tuning sketch follows this list.
- Interact directly with customers to understand their business needs and provide AI-driven solutions.
- Work with Deep Learning architectures including CNNs, RNNs, and Transformer-based models.
- Leverage NLP techniques such as summarization, NER, sentiment analysis, and embeddings.
- Implement MLOps pipelines and deploy scalable AI solutions in cloud environments (AWS, GCP, Azure).
- Collaborate with cross-functional teams to integrate AI into business applications.
- Stay updated with AI/ML research and integrate new techniques into projects.
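As a hedged illustration of the LLM fine-tuning responsibility above, the sketch below applies parameter-efficient LoRA adapters to a small open causal language model using Hugging Face transformers, peft, and datasets. The base model, data file, and hyperparameters are placeholders, not the employer's actual setup.

```python
# Hedged sketch of parameter-efficient LLM fine-tuning with LoRA adapters.
# The base model, data file, and hyperparameters are placeholders only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "facebook/opt-350m"  # placeholder: any open causal LM with q_proj/v_proj layers
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters so only a small fraction of weights train.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Tokenize a plain-text corpus (train.txt is a placeholder file).
data = load_dataset("text", data_files={"train": "train.txt"})["train"]
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # saves only the adapter weights
```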
Required Skills & Qualifications:
- Minimum 6 years of experience in AI/ML/Gen AI, with at least 3 years in Deep Learning/Computer Vision.
- Strong proficiency in Python and popular AI/ML frameworks (TensorFlow, PyTorch, Hugging Face, Scikit-learn).
- Hands-on experience with LLMs and generative models (e.g., GPT, Stable Diffusion).
- Experience with data preprocessing, feature engineering, and performance evaluation (a minimal scikit-learn sketch follows this list).
- Exposure to containerization and cloud deployment using Docker, Kubernetes.
- Experience with vector databases and RAG-based architectures.
- Ability to lead teams and projects, and work independently with minimal guidance.
- Experience with customer-facing roles, proposals, and solutioning.
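To ground the preprocessing, feature-engineering, and evaluation requirement, here is a minimal scikit-learn sketch that builds a ColumnTransformer-based pipeline and cross-validates it on synthetic data; the column names and data are invented purely for the example.

```python
# Hedged sketch of a preprocessing + feature-engineering + evaluation loop
# with scikit-learn. The synthetic data and column names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "amount": rng.normal(100, 25, 500),
    "channel": rng.choice(["web", "mobile", "store"], 500),
    "churned": rng.integers(0, 2, 500),
})

# Scale numeric features and one-hot encode categoricals in one transformer.
features = ColumnTransformer([
    ("num", StandardScaler(), ["amount"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["channel"]),
])

model = Pipeline([
    ("features", features),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Evaluate with 5-fold cross-validation instead of a single split.
scores = cross_val_score(model, df[["amount", "channel"]], df["churned"],
                         cv=5, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```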
Educational Requirements:
- Bachelor’s, Master’s, or PhD in Computer Science, Artificial Intelligence, Information Technology, or related field.
Preferred Skills (Good to Have):
- Knowledge of Reinforcement Learning (e.g., RLHF), multi-modal AI, or time-series forecasting.
- Familiarity with Graph Neural Networks (GNNs).
- Exposure to Responsible AI (RAI), AI Ethics, or AutoML platforms.
- Contributions to open-source AI projects or publications in peer-reviewed journals.