
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
After completing the internship, there is a chance of a full-time offer as an AI/ML Engineer (up to 12 LPA).
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base of INR 8,000/-, which can increase up to INR 20,000/- depending on performance metrics.
Key Responsibilities
- Work hands-on with Python, LLMs, deep learning, NLP, and related technologies.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and fine-tuning of AI models (a brief sketch follows this list).
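For a sense of the tooling involved, here is a minimal sketch of loading an open-source model from Hugging Face and running a prompt through it; the model name and prompt are only placeholders:

```python
# Minimal sketch: run a prompt through a small open-source Hugging Face model.
# distilgpt2 is only an example; real work would use a larger instruction-tuned model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summarize in one sentence: our chatbot answers customer billing questions."
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```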
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
To apply, click the link below and submit the assignment.
The requirements are as follows:
1) Familiarity with the Django REST Framework.
2) Experience with the FastAPI framework is a plus (a minimal sketch follows this list).
3) Strong grasp of basic Python programming concepts (we ask a lot of questions about this in our interviews :) ).
4) Experience with databases such as MongoDB, Postgres, Elasticsearch, and Redis is a plus.
5) Experience with any ML library is a plus.
6) Familiarity with Git, writing unit tests for all code, and CI/CD concepts is also a plus.
7) Familiarity with basic design patterns such as MVC.
8) Grasp of basic data structures.
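To give a feel for point 2, here is a minimal FastAPI sketch (the endpoint and fields are hypothetical); the same idea maps onto Django REST Framework serializers and views:

```python
# Minimal FastAPI sketch: a single POST endpoint with request validation.
# Run with: uvicorn main:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

@app.post("/items")
def create_item(item: Item):
    # echo the validated payload back; a real handler would persist it
    return {"name": item.name, "price": item.price}
```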
You can contact me at 9316120132.
We're looking for AI/ML enthusiasts who build, not just study. If you've implemented transformers from scratch, fine-tuned LLMs, or created innovative ML solutions, we want to see your work!
Before applying, make sure (GitHub profile required):
1. Your GitHub must include:
- At least one substantial ML/DL project with documented results
- Code demonstrating PyTorch/TensorFlow implementation skills
- Clear documentation and experiment tracking
- Bonus: Contributions to ML open-source projects
2. Pin your best projects that showcase:
- LLM fine-tuning and evaluation
- Data preprocessing pipelines
- Model training and optimization (see the training-loop sketch after this list)
- Practical applications of AI/ML
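For illustration only, a minimal PyTorch training loop along these lines (toy data, arbitrary hyperparameters) is the kind of building block such projects are made of:

```python
# Minimal PyTorch training loop on toy regression data.
import torch
import torch.nn as nn

X = torch.randn(256, 10)                                  # toy features
y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)    # toy targets

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final MSE: {loss.item():.4f}")
```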
Technical Requirements:
- Solid understanding of deep learning fundamentals
- Python + PyTorch/TensorFlow expertise
- Experience with Hugging Face transformers
- Hands-on with large dataset processing
- NLP/Computer Vision project experience
Education:
- Completed/Pursuing Bachelor's in Computer Science or related field
- Strong foundation in ML theory and practice
Apply if:
- You have built projects using GenAI, machine learning, or deep learning.
- You have strong Python coding experience.
- You are available to start immediately at our Hyderabad office.
- You are always hungry to learn something new and aim to grow at a fast pace.
We value quality implementations and thorough documentation over quantity. Show us how you think through problems and implement solutions!
Job Title: AI Engineer - NLP/LLM Data Product Engineer
Location: Chennai, TN - Hybrid
Duration: Full time
Job Summary:
About the Role:
We are growing our Data Science and Data Engineering team and are looking for an experienced AI Engineer specializing in creating GenAI LLM solutions. This position involves collaborating with clients and their teams, discovering gaps for automation using AI, designing customized AI solutions, and implementing technologies to streamline data entry processes within the healthcare sector.
Responsibilities:
· Conduct detailed consultations with clients' functional teams to understand their requirements; one use case involves handwritten medical records.
· Analyze existing data entry workflows and propose automation opportunities.
Design:
· Design tailored AI-driven solutions for the extraction and digitization of information from handwritten medical records.
· Collaborate with clients to define project scopes and objectives.
Technology Selection:
· Evaluate and recommend AI technologies, focusing on NLP, LLMs, and machine learning.
· Ensure seamless integration with existing systems and workflows.
Prototyping and Testing:
· Develop prototypes and proof-of-concept models to demonstrate the feasibility of proposed solutions.
· Conduct rigorous testing to ensure accuracy and reliability.
Implementation and Integration:
· Work closely with clients and IT teams to integrate AI solutions effectively.
· Provide technical support during the implementation phase.
Training and Documentation:
· Develop training materials for end-users and support staff.
· Create comprehensive documentation for implemented solutions.
Continuous Improvement:
· Monitor and optimize the performance of deployed solutions.
· Identify opportunities for further automation and improvement.
Qualifications:
· Advanced degree in Computer Science, Artificial Intelligence, or a related field (Master's or PhD required).
· Proven experience in developing and implementing AI solutions for data entry automation.
· Expertise in NLP, LLMs, and other machine-learning techniques.
· Strong programming skills, especially in Python.
· Familiarity with healthcare data privacy and regulatory requirements.
Additional Qualifications (great to have):
The ideal candidate will have expertise in current LLM/NLP models, particularly for extracting data from clinical reports, lab reports, and radiology reports, along with a deep understanding of EMR/EHR applications and patient-related data.
Who we are
We’re Fluxon, a product development team founded by ex-Googlers and startup founders. We offer full-cycle software development, from ideation and design to build and go-to-market. We partner with visionary companies, ranging from fast-growing startups to tech leaders like Google and Stripe, to turn bold ideas into products with the power to transform the world.
About the role
As an AI Engineer at Fluxon, you’ll take the lead in designing, building and deploying AI-powered applications for our clients.
You'll be responsible for:
- System Architecture: Design and implement end-to-end AI systems and their parts, including data ingestion, preprocessing, model inference, and output structuring
- Generative AI Development: Build and optimize RAG (Retrieval-Augmented Generation) systems and agentic workflows using frameworks such as LangChain, LangGraph, ADK (Agent Development Kit), and Genkit (a minimal RAG sketch follows this list)
- Production Engineering: Deploy models to production environments (AWS/GCP/Azure) using Docker and Kubernetes, ensuring high availability and scalability
- Evaluation & Monitoring: Implement feedback loops to evaluate model performance (accuracy, hallucinations, relevance) and set up monitoring for drift in production
- Collaboration: Work closely with other engineers to integrate AI endpoints into the core product and with product managers to define AI capabilities
- Model Optimization: Fine-tune open-source models (e.g., Llama, Mistral) for specific domain tasks and optimize them for latency and cost
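For illustration only, here is a minimal RAG sketch using the OpenAI Python client and plain cosine similarity; the model names, documents, and question are placeholder assumptions, and production systems would typically use a vector database and a framework such as LangChain:

```python
# Minimal RAG sketch: embed a few documents, retrieve the closest one for a
# question, and pass it to the model as context.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "Fluxon partners with startups and enterprises on product development.",
    "RAG combines retrieval over a document store with LLM generation.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

question = "What is RAG?"
q_vec = embed([question])[0]

# cosine similarity between the question and each document
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = docs[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```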
You'll work with technologies including:
Languages
- Python (Preferred)
- Java / C++ / Scala / R / JavaScript
AI / ML
- LangChain
- LangGraph
- Google ADK
- Genkit
- OpenAI API
- LLMs (Large Language Models)
- Vertex AI
Cloud & Infrastructure
- Platforms: Google Cloud Platform (GCP) or Amazon Web Services (AWS)
- Storage: Google Cloud Storage (GCS) or AWS S3
- Orchestration: Temporal, Kubernetes
- Data Stores: PostgreSQL, Firestore, MongoDB
Monitoring & Observability
- GCP Cloud Monitoring Suite
Qualifications
- 5+ years of industry experience in software engineering roles
- Strong proficiency in Python or another preferred AI programming language such as Scala, JavaScript, or Java
- Strong understanding of Transformer architectures, embeddings, and vector similarity search
- Experience integrating with LLM provider APIs (OpenAI, Anthropic, Google Vertex AI)
- Hands-on experience with agent frameworks such as LangChain and LangGraph
- Experience with Vector Databases and traditional SQL / NoSQL databases
- Familiarity with cloud platforms, preferably GCP or AWS
- Understanding of patterns like RAG (Retrieval-Augmented Generation), few-shot prompting, and Fine-Tuning
- Solid understanding of software development practices including version control (Git) and CI/CD
Nice to have:
- Experience with Google Cloud Platform (GCP) services, specifically Vertex AI, Firestore, and Cloud Functions
- Knowledge of prompt engineering techniques (Chain-of-Thought, ReAct, Tree of Thoughts); a brief illustration follows this list
- Experience building "Agentic" workflows where AI can execute tools or API calls autonomously
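As a quick illustration of Chain-of-Thought prompting (the question and wording are arbitrary), the difference is simply in what the prompt asks the model to produce:

```python
# A direct prompt vs. a Chain-of-Thought prompt for the same question.
question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

direct_prompt = f"Q: {question}\nA: Answer with a single number."

# Chain-of-Thought: ask for intermediate reasoning before the final answer,
# which tends to improve accuracy on multi-step problems.
cot_prompt = (
    f"Q: {question}\n"
    "A: Let's think step by step. Convert the time to hours, divide the distance "
    "by the time, then end with 'Final answer: <number>'."
)
```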
What we offer
- Exposure to high-profile SV startups and enterprise companies
- Competitive salary
- Fully remote work with flexible hours
- Flexible paid time off
- Profit-sharing program
- Healthcare
- Parental leave, including adoption and fostering
- Gym membership and tuition reimbursement
- Hands-on career development

- Experience building and managing large-scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem-solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter: someone who thrives in fast-paced environments with minimal ‘management’.
- Have exposure to and working knowledge of AI environments, with machine learning experience.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, and Elasticsearch.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog, etc.
6-8 years of experience
Fully Remote position
Max compensation: 45 LPA (full in-hand)
Key Responsibilities
- Design, implement, and maintain software to the demanding standards of a real-time, highly concurrent distributed system.
- Working in conjunction with the rest of the development team, you will architect and build highly performant, scalable and extensible external APIs
- Collaborate with customers and internal stakeholders, at all levels, to continuously improve our product in a measured data-driven approach
- Learn quickly, adapt, and invent based on changing company needs and priorities
- Contribute to code reviews, tech talks, innovation drives and patents
Minimum Qualifications
- Excellent problem solving skills
- Bachelor's in computer science or an equivalent field
- Proficiency in deploying production systems using a major programming language such as Java, Python, or Node.js
- Excellent command of object-oriented design and system design
- Experience building distributed systems and scaling them with high availability
- Ability to exercise autonomy rather than needing detailed direction and proactively get things done
Preferred Qualifications
- Experience in customer facing software development
- Proficiency building unit and performance tests to ensure reliability and scalability
- Experience in Artificial Intelligence, Machine Learning (ML) models, Natural Language Processing or Deep Learning is a plus
- Experience with cloud infrastructure such as AWS or GCP is a plus
Why work with us
- A small, collaborative, and enthusiastic team
- We value autonomy, allowing you to choose the configuration that makes you most productive
- Ability to work remotely from anywhere in the Indian Standard Time zone
- Continuous learning and up-skill opportunities
- We love ideas, innovation and experiments!
- Competitive salary
1. Ability to read documentation and perform third-party API integrations (a minimal sketch follows this list)
2. Experience with Postgres and Redis
3. Experience with AWS - EC2, RDS, DynamoDB, etc
4. Experience with Python
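For point 1, a third-party integration usually boils down to something like this minimal `requests` sketch (the endpoint, parameters, and token are hypothetical; the real details come from the provider's documentation):

```python
# Minimal sketch of calling a third-party REST API with authentication.
import requests

resp = requests.get(
    "https://api.example.com/v1/users",           # hypothetical endpoint
    params={"page": 1},
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=10,
)
resp.raise_for_status()  # fail loudly on 4xx/5xx responses
users = resp.json()
```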
Dashboard
Skills
1. Experience with Django
a. Django Web
b. Django REST
c. Django Channels
d. Django Celery (Queues and Brokers; a minimal sketch follows this list)
2. Experience creating a dashboard with login, user profiles, roles and permissions, reports
3. Experience with Facebook and Twitter OAuth
4. Experience with handling database migrations
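For item 1d, a minimal Django-style Celery setup looks roughly like this; the Redis URLs and task are placeholder assumptions:

```python
# Minimal Celery sketch: Redis as broker and result backend, one background task.
from celery import Celery

app = Celery(
    "dashboard",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@app.task
def generate_report(user_id: int) -> str:
    # placeholder for a long-running report job moved off the request cycle
    return f"report-for-user-{user_id}"

# From a Django view, enqueue it with: generate_report.delay(42)
```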
Chatbot
Skills
1. Basic understanding of NLP - intents and entities
2. Strong understanding of dialogue management and conversation flow
3. Integrations with Facebook and Twitter APIs.
4. Creating and managing Facebook and Twitter apps
5. Understanding of webhooks and REST APIs (a minimal webhook sketch follows).
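To illustrate item 5, here is a minimal webhook sketch in FastAPI modeled on the Facebook Messenger handshake (the verify token and payload handling are placeholder assumptions; a Django view would follow the same flow):

```python
# Minimal webhook sketch: GET for the platform's verification handshake,
# POST for incoming events.
from fastapi import FastAPI, Query, Request, Response

app = FastAPI()
VERIFY_TOKEN = "my-secret-token"  # placeholder; configure via settings in practice

@app.get("/webhook")
def verify(
    mode: str = Query(..., alias="hub.mode"),
    token: str = Query(..., alias="hub.verify_token"),
    challenge: str = Query(..., alias="hub.challenge"),
):
    # echo the challenge back only if the token matches
    if mode == "subscribe" and token == VERIFY_TOKEN:
        return Response(content=challenge, media_type="text/plain")
    return Response(status_code=403)

@app.post("/webhook")
async def receive(request: Request):
    payload = await request.json()
    # inspect the payload for incoming messages and route them to the bot
    return {"status": "received"}
```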