AI Systems Engineer
Posted by Utkarsh Apoorva
5 - 12 yrs
₹30L - ₹60L / yr
Bengaluru (Bangalore), Pune
Skills
Large Language Models (LLM)
LLMops
Generative AI
Large Language Models (LLM) tuning

SDE 2 / SDE 3 – AI Infrastructure & LLM Systems Engineer

Location: Pune / Bangalore (India)

Experience: 4–8 years

Compensation: no bar for the right candidate

Bonus: Up to 10% of base


About the Company

AbleCredit builds production-grade AI systems for BFSI enterprises, reducing OPEX by up to 70% across onboarding, credit, collections, and claims.

We run our own LLMs on GPUs, operate high-concurrency inference systems, and build AI workflows that must scale reliably under real enterprise traffic.

Role Summary (What We’re Really Hiring For)

We are looking for a strong backend / systems engineer who can:

  • Deploy AI models on GPUs
  • Expose them via APIs
  • Scale inference under high parallel load using async systems and queues

This is not a prompt-engineering or UI-AI role.



Core Responsibilities

  • Deploy and operate LLMs on GPU infrastructure (cloud or on-prem).
  • Run inference servers such as vLLM / TGI / SGLang / Triton or equivalents.
  • Build FastAPI / gRPC APIs on top of AI models.
  • Design async, queue-based execution for AI workflows (fan-out, retries, backpressure).
  • Plan and reason about capacity & scaling:
      • GPU count vs RPS
      • batching vs latency
      • cost vs throughput
  • Add observability around latency, GPU usage, queue depth, failures.
  • Work closely with AI researchers to productionize models safely.
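The queue-based execution pattern named above (fan-out, retries, backpressure) can be sketched with stdlib asyncio. This is a minimal illustration, not AbleCredit's actual stack: the `infer` stub stands in for a real inference-server call, and all sizes and delays are assumed.

```python
import asyncio

async def infer(prompt: str) -> str:
    """Stand-in for a GPU inference call (e.g. an HTTP request to a vLLM server)."""
    await asyncio.sleep(0.01)  # simulated model latency
    return f"result:{prompt}"

async def worker(queue: asyncio.Queue, results: dict, max_retries: int = 3) -> None:
    while True:
        prompt = await queue.get()
        for attempt in range(max_retries):
            try:
                results[prompt] = await infer(prompt)
                break
            except Exception:
                await asyncio.sleep(2 ** attempt)  # exponential backoff before retry
        queue.task_done()

async def run(prompts, concurrency: int = 4, queue_size: int = 8) -> dict:
    # A bounded queue makes producers block when workers fall behind: backpressure.
    queue: asyncio.Queue = asyncio.Queue(maxsize=queue_size)
    results: dict = {}
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(concurrency)]
    for p in prompts:       # fan-out across the worker pool
        await queue.put(p)  # blocks here if the queue is full
    await queue.join()      # wait until every request has been processed
    for w in workers:
        w.cancel()
    return results

results = asyncio.run(run([f"req-{i}" for i in range(10)]))
print(len(results))  # 10
```

The bounded `maxsize` is the key design choice: it converts overload into upstream waiting instead of unbounded memory growth.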



Must-Have Skills

  • Strong backend engineering fundamentals (distributed systems, async workflows).
  • Hands-on experience running GPU workloads in production.
  • Proficiency in Python (Golang acceptable).
  • Experience with Docker + Kubernetes (or equivalent).
  • Practical knowledge of queues / workers (Redis, Kafka, SQS, Celery, Temporal, etc.).
  • Ability to reason quantitatively about performance, reliability, and cost.
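The kind of quantitative reasoning the last bullet asks for is mostly back-of-the-envelope arithmetic. A sketch, with every number an illustrative assumption (not a vendor benchmark) to be replaced by measured figures:

```python
import math

# Rough GPU-count estimate for a target request rate.
tokens_per_request = 600       # avg prompt + completion tokens, assumed
tokens_per_sec_per_gpu = 2500  # throughput with continuous batching, assumed
target_rps = 40                # peak requests per second, assumed
headroom = 0.7                 # run at ~70% utilization to protect latency

tokens_per_sec_needed = target_rps * tokens_per_request  # 24,000 tok/s
effective_per_gpu = tokens_per_sec_per_gpu * headroom    # 1,750 tok/s
gpus_needed = math.ceil(tokens_per_sec_needed / effective_per_gpu)
print(gpus_needed)  # 14
```

The same three-line calculation, run in reverse, answers the batching-vs-latency and cost-vs-throughput bullets: raising utilization lowers cost per token but erodes the latency headroom.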



Strong Signals (Recruiter Screening Clues)

Look for candidates who have:

  • Personally deployed models on GPUs
  • Debugged GPU memory / latency / throughput issues
  • Scaled compute-heavy backends under load
  • Designed async systems instead of blocking APIs



Nice to Have

  • Familiarity with LangChain / LlamaIndex (as infra layers, not just usage).
  • Experience with vector DBs (Qdrant, Pinecone, Weaviate).
  • Prior work on multi-tenant enterprise systems.



Not a Fit If

  • Only experience is calling OpenAI / Anthropic APIs.
  • Primarily a prompt engineer or frontend-focused AI dev.
  • No hands-on ownership of infra, scaling, or production reliability.


Users love Cutshort
Read about what our users have to say about finding their next opportunity on Cutshort.

Shubham Vishwakarma

Full Stack Developer - Averlon
I had an amazing experience. It was a delight getting interviewed via Cutshort. The entire end to end process was amazing. I would like to mention Reshika, she was just amazing wrt guiding me through the process. Thank you team.

About AbleCredit

Founded: 2023
Type: Product
Size: 0-20
Stage: Raised funding

About

AI that writes Credit Reports on its own!

AbleCredit is a friendly, supportive credit assistant. It generates Credit Appraisal Memos based on your policies, without any human intervention.


Similar jobs

PGAGI
Posted by Javeriya Shaik
Remote only
0 - 0.6 yrs
₹2L - ₹2L / yr
Python
Large Language Models (LLM)
Natural Language Processing (NLP)
Deep Learning
FastAPI
+1 more

We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.


Duration: 6 months


Perks:

- Hands-on experience with real AI projects.

- Mentoring from industry experts.

- A collaborative, innovative and flexible work environment

After completing the internship, there is a chance of a full-time offer as an AI/ML Engineer (up to 12 LPA).


Compensation:

- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.

- Stipend: Base stipend is INR 8,000 and can increase up to INR 20,000 depending on performance.

Key Responsibilities

  • Work hands-on with Python, LLMs, deep learning, NLP, etc.
  • Utilize GitHub for version control, including pushing and pulling code updates.
  • Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
  • Engage in prompt engineering and the fine-tuning process of AI models.
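Of the responsibilities above, prompt engineering is the easiest to illustrate without any external service. A minimal few-shot template sketch; the classification task and field names are made up for illustration and not tied to any provider's SDK:

```python
# A reusable prompt template with few-shot examples, rendered to the final
# string that would be sent to an LLM API.
TEMPLATE = """You are a sentiment classifier.

{examples}Review: {review}
Sentiment:"""

def build_prompt(review: str, examples: list[tuple[str, str]]) -> str:
    # Each (review, label) pair becomes one in-context example.
    shots = "".join(f"Review: {r}\nSentiment: {s}\n\n" for r, s in examples)
    return TEMPLATE.format(examples=shots, review=review)

prompt = build_prompt(
    "The battery dies in an hour.",
    [("Loved it!", "positive"), ("Total waste of money.", "negative")],
)
print(prompt.splitlines()[0])  # You are a sentiment classifier.
```

Ending the template at "Sentiment:" steers the model to complete with just a label, which makes the output easy to parse.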

Requirements

  • Proficiency in Python programming.
  • Experience with GitHub and version control workflows.
  • Familiarity with AI platforms such as Hugging Face and OpenAI.
  • Understanding of prompt engineering and model fine-tuning.
  • Excellent problem-solving abilities and a keen interest in AI technology.


To apply, click the link below and submit the assignment:

https://pgagi.in/jobs/28df1e98-f0c3-4d58-9509-d5b1a4ea9754

Deltek
Posted by Puja Rana
Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
SQL Server
C#
.NET
Artificial Intelligence (AI)
Generative AI
  • Curiosity, passion, teamwork, and initiative
  • Extensive experience with SQL Server (T-SQL, query optimization, performance tuning, schema design)
  • Strong proficiency in C# and .NET Core for enterprise application development and integration with complex data models
  • Experience with Azure cloud services (e.g., Azure SQL, App Services, Storage)
  • Ability to leverage agentic AI as a development support tool, with a critical thinking approach
  • Solid understanding of agile methodologies, DevOps, and CI/CD practices
  • Ability to work independently and collaboratively in a fast-paced, distributed team environment
  • Excellent problem-solving, analytical, and communication skills
  • Master's degree in Computer Science or equivalent; 5+ years of relevant work experience
  • Experience with ERP systems or other complex business applications is a strong plus
Asha Health (YC F24)
Posted by Asha Health
Bengaluru (Bangalore)
2 - 6 yrs
₹20L - ₹60L / yr
Generative AI
Large Language Models (LLM)

About Asha Health

Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator, General Catalyst, 186 Ventures, Reach Capital and many more. We recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.


About the Role

We're looking for an all-rounder backend software engineer who has incredibly strong product-thinking skills to join our Bangalore team in person.


As part of this FDE role, you will work closely with our mid-market and enterprise customers to understand their pain points, dream up new solutions, and then bring them to life for all of our customers.


You need to be ready to work closely with customers to understand their pain points and come up with ideas that drive real ROI.


Your day to day will involve building new AI agents, with a high degree of reliability, and ensuring that customers see real measurable value from them. Interfacing with customers and learning from them first hand will be one of the best facets of this role.


We pay well above market for the country's best talent and provide a number of excellent perks.


Requirements

You do not need AI experience to apply to this role, although we do prefer it.


We prefer candidates who have worked as a founding engineer at an early stage startup (Seed or Preseed) or a Senior Software Engineer at a Series A or B startup.


Perks of Working at Asha Health

#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).

#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).

#3 Rocketship trajectory: we get more customer interest than we have time to onboard, it's a good problem to have :)

Wissen Technology
Posted by Janane Mohanasankaran
Pune
7 - 13 yrs
Best in industry
Python
Django
RESTful APIs
Microservices
Generative AI
+2 more

7+ years of experience in Python development.

Good experience in microservices and API development.

Must have exposure to large-scale data.

Good to have: Gen AI experience.

Code versioning and collaboration (Git).

Knowledge of libraries for extracting data from websites (web scraping).

Knowledge of SQL and NoSQL databases.

Familiarity with RESTful APIs.

Familiarity with cloud (Azure/AWS) technologies.


About Wissen Technology:


• The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.

• Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.

• Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.

• Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.

• Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.

• We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.

• Wissen Technology has been certified as a Great Place to Work®.

• Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.

• Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy, including the likes of Morgan Stanley, MSCI, State Street, Flipkart, Swiggy, Trafigura, and GE, to name a few.

Website : www.wissen.com


JK Technosoft Ltd
Posted by Akanksh Gupta
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad
6 - 10 yrs
₹30L - ₹42L / yr
Generative AI
GenAI
Python
Flask
FastAPI
+3 more

We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.


Key Responsibilities :


- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.


- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.


- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance.


- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.


- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.


- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.


- Implement inter-service communication using gRPC and REST.


- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.


Required Skills & Qualifications :


- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.


- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.


- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).


- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.


- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.


- Proven experience with system architecture, distributed systems, and microservices.


- Strong familiarity with any cloud infrastructure and deployment practices.


- Data Engineering or Analytics expertise preferred, e.g. Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, or data warehouse development.


McKinley Rice
Pune, Noida
5 - 15 yrs
₹5L - ₹25L / yr
MongoDB
Node.js
Generative AI
Express
DevOps
+2 more

Company Overview 

McKinley Rice is not just a company; it's a dynamic community, the next evolutionary step in professional development. Spiritually, we're a hub where individuals and companies converge to unleash their full potential. Organizationally, we are a conglomerate composed of various entities, each contributing to the larger narrative of global excellence.

Redrob by McKinley Rice: Redefining Prospecting in the Modern Sales Era


Backed by a $40 million Series A funding from leading Korean & US VCs, Redrob is building the next frontier in global outbound sales. We’re not just another database—we’re a platform designed to eliminate the chaos of traditional prospecting. In a world where sales leaders chase meetings and deals through outdated CRMs, fragmented tools, and costly lead-gen platforms, Redrob provides a unified solution that brings everything under one roof.

Inspired by the breakthroughs of Salesforce, LinkedIn, and HubSpot, we’re creating a future where anyone, not just enterprise giants, can access real-time, high-quality data on 700M+ decision-makers, all in just a few clicks.

At Redrob, we believe the way businesses find and engage prospects is broken. Sales teams deserve better than recycled data, clunky workflows, and opaque credit-based systems. That’s why we’ve built a seamless engine for:

  • Precision prospecting
  • Intent-based targeting
  • Data enrichment from 16+ premium sources
  • AI-driven workflows to book more meetings, faster

We’re not just streamlining outbound—we’re making it smarter, scalable, and accessible. Whether you’re an ambitious startup or a scaled SaaS company, Redrob is your growth copilot for unlocking warm conversations with the right people, globally.



EXPERIENCE



Duties you'll be entrusted with:


  • Develop and execute scalable APIs and applications using the Node.js or Nest.js framework
  • Writing efficient, reusable, testable, and scalable code.
  • Understanding, analyzing, and implementing – Business needs, feature modification requests, and conversion into software components
  • Integration of user-oriented elements into different applications, data storage solutions
  • Developing – Backend components to enhance performance and responsiveness, server-side logic, and platform, statistical learning models, highly responsive web applications
  • Designing and implementing – High availability and low latency applications, data protection and security features
  • Performance tuning and automation of applications and enhancing the functionalities of current software systems.
  • Keeping abreast with the latest technology and trends.


Expectations from you:


Basic Requirements


  • Minimum qualification: Bachelor’s degree or more in Computer Science, Software Engineering, Artificial Intelligence, or a related field.
  • Experience with Cloud platforms (AWS, Azure, GCP).
  • Strong understanding of monitoring, logging, and observability practices.
  • Experience with event-driven architectures (e.g., Kafka, RabbitMQ).
  • Expertise in designing, implementing, and optimizing Elasticsearch.
  • Work with modern tools including Jira, Slack, GitHub, Google Docs, etc.
  • Experience in Integrating Generative AI APIs.
  • Working experience with high user concurrency.
  • Experience with scaled databases handling millions of records (indexing, retrieval, etc.).


Technical Skills


  • Demonstrable experience in web application development with expertise in Node.js or Nest.js.
  • Knowledge of database technologies and agile development methodologies.
  • Experience working with databases, such as MySQL or MongoDB.
  • Familiarity with web development frameworks, such as Express.js.
  • Understanding of microservices architecture and DevOps principles.
  • Well-versed with AWS and serverless architecture.



Soft Skills


  • A quick and critical thinker with the ability to come up with a number of ideas about a topic and bring fresh and innovative ideas to the table to enhance the visual impact of our content.
  • Potential to apply innovative and exciting ideas, concepts, and technologies.
  • Stay up-to-date with the latest design trends, animation techniques, and software advancements.
  • Multi-tasking and time-management skills, with the ability to prioritize tasks.


THRIVE


Some of the extensive benefits of being part of our team:


  • We offer skill enhancement and educational reimbursement opportunities to help you further develop your expertise.
  • The Member Reward Program provides an opportunity for you to earn up to INR 85,000 as an annual Performance Bonus.
  • The McKinley Cares Program has a wide range of benefits:
  • The wellness program covers sessions for mental wellness and fitness, and offers health insurance.
  • In-house benefits have a referral bonus window and sponsored social functions.
  • An Expanded Leave Basket including paid Maternity and Paternity Leaves and rejuvenation Leaves apart from the regular 20 leaves per annum. 
  • Our Family Support benefits not only include maternity and paternity leaves but also extend to provide childcare benefits.
  • In addition to the retention bonus, our McKinley Retention Benefits program also includes a Leave Travel Allowance program.
  • We also offer an exclusive McKinley Loan Program designed to assist our employees during challenging times and alleviate financial burdens.


Synorus
Posted by Synorus Admin
Remote only
0 - 1 yrs
₹0.2L - ₹1L / yr
Google Colab
Retrieval Augmented Generation (RAG)
Large Language Models (LLM) tuning
Python
PyTorch
+3 more

About Synorus

Synorus is building a next-generation ecosystem of AI-first products. Our flagship legal-AI platform LexVault is redefining legal research, drafting, knowledge retrieval, and case intelligence using domain-tuned LLMs, private RAG pipelines, and secure reasoning systems.

If you are passionate about AI, legaltech, and training high-performance models — this internship will put you on the front line of innovation.


Role Overview

We are seeking passionate AI/LLM Engineering Interns who can:

  • Fine-tune LLMs for legal domain use-cases
  • Train and experiment with open-source foundation models
  • Work with large datasets efficiently
  • Build RAG pipelines and text-processing frameworks
  • Run model training workflows on Google Colab / Kaggle / Cloud GPUs

This is a hands-on engineering and research internship — you will work directly with senior founders & technical leadership.

Key Responsibilities

  • Fine-tune transformer-based models (Llama, Mistral, Gemma, etc.)
  • Build and preprocess legal datasets at scale
  • Develop efficient inference & training pipelines
  • Evaluate models for accuracy, hallucinations, and trustworthiness
  • Implement RAG architectures (vector DBs + embeddings)
  • Work with GPU environments (Colab/Kaggle/Cloud)
  • Contribute to model improvements, prompt engineering & safety tuning
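The retrieval half of a RAG architecture reduces to ranking document embeddings by similarity to a query embedding. A toy sketch in which hand-made 3-dimensional vectors stand in for a real embedding model and vector DB, and the statute names are purely illustrative:

```python
import math

# Toy "embeddings": in a real pipeline these come from an embedding model
# and live in a vector DB (e.g. Chroma/Qdrant); here they are hand-made.
docs = {
    "sec-138": ("Dishonour of cheque for insufficiency of funds", [0.9, 0.1, 0.0]),
    "sec-420": ("Cheating and dishonestly inducing delivery of property", [0.2, 0.8, 0.1]),
    "sec-302": ("Punishment for murder", [0.0, 0.1, 0.9]),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, k=1):
    """Core RAG retrieval step: rank documents by similarity to the query."""
    ranked = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1][1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# A query about bounced cheques should land nearest sec-138's vector.
query = [0.85, 0.15, 0.05]
print(retrieve(query, k=1))  # ['sec-138']
```

The retrieved passages are then stuffed into the LLM prompt as context; everything past this ranking step is prompt construction.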

Must-Have Skills

  • Strong knowledge of Python & PyTorch
  • Understanding of LLMs, Transformers, Tokenization
  • Hands-on experience with HuggingFace Transformers
  • Familiarity with LoRA/QLoRA, PEFT training
  • Data wrangling: Pandas, NumPy, tokenizers
  • Ability to handle multi-GB datasets efficiently
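The appeal of LoRA/QLoRA over full fine-tuning is easiest to see in parameter counts. A sketch with assumed Llama-like layer dimensions (the sizes are illustrative, not tied to any specific checkpoint):

```python
# LoRA trains two low-rank factors B (d_out x r) and A (r x d_in) instead of
# updating the full d_out x d_in weight matrix. Sizes below are assumptions.
d_out, d_in, r = 4096, 4096, 8

full_params = d_out * d_in          # 16,777,216 trainable params per layer
lora_params = d_out * r + r * d_in  # 65,536 trainable params per layer
print(f"{lora_params / full_params:.2%}")  # 0.39% of the full matrix
```

That roughly 250x reduction in trainable parameters is what makes fine-tuning feasible on Colab/Kaggle-class GPUs; QLoRA pushes further by quantizing the frozen base weights.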

Bonus Skills

(Not mandatory — but a strong plus)

  • Experience with RAG / vector DBs (Chroma, Qdrant, LanceDB)
  • Familiarity with vLLM, llama.cpp, GGUF
  • Worked on summarization, Q&A or document-AI projects
  • Knowledge of legal texts (Indian laws/case-law/statutes)
  • Open-source contributions or research work

What You Will Gain

  • Real-world training on LLM fine-tuning & legal AI
  • Exposure to production-grade AI pipelines
  • Direct mentorship from engineering leadership
  • Research + industry project portfolio
  • Letter of experience + potential full-time offer

Ideal Candidate

  • You experiment with models on weekends
  • You love pushing GPUs to their limits
  • You prefer research + implementation over theory alone
  • You want to build AI that matters — not just demos


Location - Remote

Stipend - 5K - 10K

OnActive
Posted by Mansi Gupta
Remote only
5 - 8 yrs
₹8L - ₹10L / yr
Python
Artificial Intelligence (AI)
API
Large Language Models (LLM)
User Interface (UI) Design

About the Role

We are seeking an experienced Python Data Engineer with a strong foundation in API and basic UI development. This role is essential for advancing our analytics capabilities for AI products, helping us gain deeper insights into product performance and driving data-backed improvements. If you have a background in AI/ML, familiarity with large language models (LLMs), and a solid grasp of Python libraries for AI, we’d like to connect!


Key Responsibilities

•   Develop Analytics Framework: Build a comprehensive analytics framework to evaluate and monitor AI product performance and business value.

•   Define KPIs with Stakeholders: Collaborate with key stakeholders to establish and measure KPIs that gauge AI product maturity and impact.

•   Data Analysis for Actionable Insights: Dive into complex data sets to identify patterns and provide actionable insights to support product improvements.

•   Data Collection & Processing: Lead data collection, cleaning, and processing to ensure high-quality, actionable data for analysis.

•   Clear Reporting of Findings: Present findings to stakeholders in a clear, concise manner, emphasizing actionable insights.
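As a sketch of the kind of KPI computation this framework would produce, here is success rate and p95 latency over toy AI-call events; the event fields and numbers are assumed for illustration:

```python
# Minimal product-analytics KPIs from raw AI-call events.
events = [
    {"latency_ms": 120, "ok": True},
    {"latency_ms": 340, "ok": True},
    {"latency_ms": 95,  "ok": False},
    {"latency_ms": 210, "ok": True},
]

def p95(values):
    """Naive 95th-percentile: index into the sorted values."""
    ordered = sorted(values)
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[idx]

success_rate = sum(e["ok"] for e in events) / len(events)
print(success_rate)                            # 0.75
print(p95([e["latency_ms"] for e in events]))  # 340
```

In practice these aggregates would be computed over a warehouse table and tracked per release, so stakeholders can see whether each product change moves the agreed KPIs.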


Required Skills

•   Technical Skills:

o   Proficiency in Python, including experience with key AI/ML libraries.

o   Basic knowledge of UI and API development.

o   Understanding of large language models (LLMs) and experience using them effectively.


•   Analytical & Communication Skills:

o   Strong problem-solving skills to address complex, ambiguous challenges.

o   Ability to translate data insights into understandable reports for non-technical stakeholders.

o   Knowledge of machine learning algorithms and frameworks to assess AI product effectiveness.

o   Experience in statistical methods to interpret data and build metrics frameworks.

o   Skilled in quantitative analysis to drive actionable insights.



Noodle.ai
Posted by Ankita Ghosh
Remote only
8 - 15 yrs
₹20L - ₹70L / yr
TensorFlow
pandas
Python
Artificial Intelligence (AI)
Machine Learning (ML)
+2 more

Must have:

  • 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
  • Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
  • Good understanding of generics, OOP concepts & design patterns
  • Solid engineering and coding skills. Ability to write high-performance production quality code in Python
  • Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
  • Strong expertise in time series forecasting using stat, ML, DL and foundation models
  • Experience working on processing time-series data, employing techniques such as decomposition, clustering, outlier detection & treatment
  • Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, Crew AI, Mosaic/Databricks, Azure
  • Experience working with modern data architectures, including data lakes and data warehouses, having leveraged one or more frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake, DBT
  • Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
  • Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
  • Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
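The time-series techniques named above (decomposition, outlier detection & treatment) can be sketched in a few lines of stdlib Python; real work would use statsmodels or similar, and the series here is synthetic with one injected outlier:

```python
import statistics

series = [10, 12, 11, 13, 12, 14, 13, 40, 14, 15]  # 40 is the injected outlier

def moving_average(xs, w=3):
    """Trailing moving average as a crude trend estimate."""
    return [sum(xs[max(0, i - w + 1):i + 1]) / len(xs[max(0, i - w + 1):i + 1])
            for i in range(len(xs))]

# Decompose into trend + residual, then flag residuals beyond 2 sigma.
trend = moving_average(series)
residuals = [x - t for x, t in zip(series, trend)]
mu, sigma = statistics.mean(residuals), statistics.stdev(residuals)
outliers = [i for i, r in enumerate(residuals) if abs(r - mu) > 2 * sigma]
print(outliers)  # [7]
```

Treatment then follows the same decomposition: the flagged point can be winsorized or replaced by its trend value before the series is fed to a forecasting model.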


Good To Have:

  • Experience with MLOps tools and practices for continuous integration and deployment of ML models.
  • Has familiarity with deploying applications on Kubernetes
  • Knowledge of supply chain management principles and challenges.
  • A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
Recro
Posted by Nehlata Pandey
Bengaluru (Bangalore)
5 - 8 yrs
₹5L - ₹10L / yr
Python
Django
Generative AI
Large Language Models (LLM) tuning

What you’ll be doing

We are much more than our job descriptions, but here is where you will begin:

As a Senior Software Engineer (Data & ML) you’ll be:

 ● Architecting, designing, testing, implementing, deploying, monitoring and maintaining end-to-end backend services. You build it, you own it.

 ● Working with people from other teams and departments on a day-to-day basis to ensure efficient project execution, with a focus on delivering value to our members.

 ● Regularly aligning your team’s vision and roadmap with the target architecture within your domain to ensure the success of complex multi-domain initiatives.

 ● Integrating already-trained ML and GenAI models (preferably on GCP) into services.

ROLE:

What you’ll need:

Like us, you’ll be deeply committed to delivering impactful outcomes for customers.

What Makes You a Great Fit

 ● 5 years of proven work experience as a Backend Python Engineer

 ● Understanding of software engineering fundamentals (OOP, SOLID, etc.)

 ● Hands-on experience with Python libraries like Pandas, NumPy, Scikit-learn, LangChain/LlamaIndex, etc.

 ● Experience with machine learning frameworks such as PyTorch, TensorFlow or Keras, and proficiency in Python

 ● Hands-on experience with frameworks such as Django, FastAPI or Flask

 ● Hands-on experience with MySQL, MongoDB, Redis and BigQuery (or equivalents)

 ● Extensive experience integrating with or creating REST APIs

 ● Experience with creating and maintaining CI/CD pipelines (we use GitHub Actions)

 ● Experience with event-driven architectures like Kafka, RabbitMQ or equivalents

 ● Knowledge about:

 o LLMs

 o Vector stores/databases

 o Prompt engineering

 o Embeddings and their implementations

 ● Some hands-on experience implementing the above ML/AI will be preferred

 ● Experience with GCP/AWS services

 ● You are curious about and motivated by future trends in data, AI/ML and analytics