Research Scientist - Machine Learning/Artificial Intelligence


About Intellinet Systems


- Design, develop, and maintain data pipelines and ETL workflows on the AWS platform
- Work with AWS services such as S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics (a small Athena sketch follows this list)
- Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
- Optimize data workflows for performance, scalability, and reliability
- Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
- Write efficient SQL queries and automate data processing tasks
- Implement data security and compliance best practices
- Maintain technical documentation and data pipeline monitoring dashboards
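To give a concrete flavor of the automation this role involves, here is a minimal Python sketch that launches an Athena query with boto3. The region, database, table, and results bucket are hypothetical placeholders; a real pipeline would also poll for completion and fetch the results.

```python
import boto3

# All names below (region, database, table, results bucket) are
# hypothetical placeholders for illustration only.
athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events "
                "FROM clickstream GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-query-results/"},
)
print("Started query:", response["QueryExecutionId"])
```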
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools such as TensorFlow/PyTorch and Hugging Face, and techniques such as quantization, LLM fine-tuning using PEFT (see the sketch after this list), RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
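As one concrete example of the PEFT fine-tuning mentioned above, here is a minimal LoRA sketch using the Hugging Face transformers and peft libraries. The base model and target modules are illustrative choices, not a prescribed setup.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; swap in whichever checkpoint fits your use case.
model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the updates
    target_modules=["c_attn"],  # attention projection layers in GPT-2
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```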


Project titled: “Machine Learning Models to Predict MIC in Indian Priority Pathogens & Identification of Novel Antimicrobial Resistance Mechanisms”
Project Code: GAP-0242
Position: Project Research Scientist-II (Non-Medical), 01 post
Essential Qualification: 1. Postgraduate degree, including integrated PG degrees, with three years of post-qualification experience, or a Ph.D. 2. For Engineering/IT/CS: a four-year graduate degree with three years of post-qualification experience.
Desirable: Experience in machine learning and AI methods; prior experience in next-generation sequencing data analysis is expected. Hands-on wet-lab experience (basic microbiology/molecular biology) is beneficial but not mandatory.
Upper age limit: 40 years
Monthly Emoluments: Rs. 67,000/- + HRA, as admissible.

Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment (a minimal training-and-evaluation sketch follows this list).
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
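The sketch below illustrates the train-and-evaluate slice of that lifecycle with scikit-learn. Synthetic data stands in for a real product dataset; a production workflow would add preprocessing, hyperparameter tuning, and deployment steps.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real product dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```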
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.


Job Description
A job where you increase the depth of your expertise in computer vision.
A job where you learn and implement the SOTA papers.
A job where you write vectorized code that runs in seconds, not in minutes.
A job where models learn to see and understand the world around them.
A job where models run real-time because you optimize every byte.
A job where you keep the career promises that you made to yourself.
A job where you keep the learning promises that you made to yourself.
If this scares you, stop reading.
If this excites you, we might love you.
Role
We are looking for a passionate Machine Learning Engineer to join our team.
The ideal candidate will be an enthusiastic developer with 2-5 years of experience in the field of Computer Vision and Artificial Intelligence.
If building things and writing code excite you, this is the startup where you belong.
Key Technologies
Must be an expert in Python and NumPy (see the vectorization sketch after this list).
Experience with TensorFlow/Keras/PyTorch is required.
Insatiable hunger for writing beautiful code.
Knowledge of Python design patterns.
Some experience with C++ is preferred.
Experience working with version control (Git).
Excellent communication skills and the ability to work independently.
Strong problem-solving and coding skills override everything else written above.
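To make the "vectorized code that runs in seconds, not in minutes" promise concrete, here is a small, self-contained comparison of a Python loop against the equivalent NumPy one-liner; the exact speedup depends on your hardware.

```python
import time
import numpy as np

x = np.random.rand(1_000_000)

# Naive Python loop over one million elements.
start = time.perf_counter()
slow = [v * v + 3.0 for v in x]
loop_time = time.perf_counter() - start

# Vectorized: one pass through contiguous memory in compiled code.
start = time.perf_counter()
fast = x * x + 3.0
vec_time = time.perf_counter() - start

assert np.allclose(slow, fast)
print(f"loop: {loop_time:.3f}s  vectorized: {vec_time:.4f}s")
```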
About AIMonk
Run by IIT Kanpur alumni, AIMonk is a deep tech startup.
We build beautiful and scalable software using computer vision and deep learning.
We pride ourselves on solving problems nobody else can in this space.


• Charting learning journeys with knowledge graphs.
• Predicting memory decay based on an advanced cognitive model (see the sketch after this list).
• Ensuring content quality via study-behavior anomaly detection.
• Recommending tags for complex knowledge using NLP.
• Auto-associating concept maps from loosely structured data.
• Predicting knowledge mastery.
• Personalizing search queries.
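As a toy illustration of the memory-decay idea (not our production cognitive model), an Ebbinghaus-style exponential forgetting curve predicts recall probability from elapsed time and a stability parameter:

```python
import numpy as np

def recall_probability(hours_elapsed: np.ndarray, stability: float) -> np.ndarray:
    """Ebbinghaus-style forgetting curve: R = exp(-t / S).

    `stability` (S) measures how resistant a memory is to decay;
    each successful review would typically increase it.
    """
    return np.exp(-hours_elapsed / stability)

t = np.array([1.0, 24.0, 72.0, 168.0])  # 1 hour, 1 day, 3 days, 1 week
print(recall_probability(t, stability=48.0))
```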
Requirements:
• 6+ years of experience in AI/ML with end-to-end implementation.
• Excellent communication and interpersonal skills.
• Expertise in SageMaker, TensorFlow, MXNet, or equivalent.
• Expertise with databases (e.g., NoSQL, graph).
• Expertise with backend engineering (e.g., AWS Lambda, Node.js).
• Passionate about solving problems in education.
- Design and develop a framework, internal tools, and scripts for testing large-scale data systems, machine learning algorithms, and responsive User Interfaces.
- Create repeatability in testing through automation
- Participate in code reviews, design reviews, and architecture discussions.
- Performance testing and benchmarking of Bidgely product suites
- Drive the adoption of best practices around coding, design, quality, and performance in your team.
- Lead the team on all technical aspects and own the quality of your team's deliverables
- Understand requirements, design exhaustive test scenarios, execute manual and automated test cases, dig deeper into issues, identify root causes, and articulate defects clearly.
- Strive for excellence in quality by looking beyond obvious scenarios and stated requirements and by keeping end-user needs in mind.
- Debug automation, product, deployment, and production issues and work with stakeholders/team on quick resolution
- Deliver a high-quality robust product in a fast-paced start-up environment.
- Collaborate with the engineering team and product management to elicit & understand their requirements and develop potential solutions.
- Stay current with the latest technology, tools, and methodologies; share knowledge by clearly articulating results and ideas to key decision-makers.
Requirements
- BS/MS in Computer Science, Electrical Engineering, or equivalent
- 6+ years of experience in designing automation frameworks and tools
- Strong object-oriented design skills, knowledge of design patterns, and an uncanny ability to design intuitive module- and class-level interfaces
- Deep understanding of design patterns and optimizations
- Experience leading multi-engineer projects and mentoring junior engineers
- Good understanding of data structures and algorithms and their space and time complexities. Strong technical aptitude and a good knowledge of CS fundamentals
- Experience in non-functional testing and performance benchmarking
- Knowledge of Test-Driven Development and experience implementing CI/CD
- Strong hands-on and practical working experience with at least one programming language: Java/Python/C++
- Strong analytical, problem solving, and debugging skills.
- Strong experience in API automation using Jersey/Rest Assured (a Python sketch follows this list).
- Fluency in automation tools and frameworks such as Selenium, TestNG, JMeter, JUnit, Jersey, etc.
- Exposure to distributed systems or web applications
- Proficiency with RDBMS or large-scale data systems such as Hadoop, Cassandra, etc.
- Hands-on experience with build tools like Maven/Gradle & Jenkins
- Experience in testing on various browsers and devices.
- Strong communication and collaboration skills.
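To give a flavor of the API-automation work, here is a minimal Python sketch using requests in a pytest-style test; the endpoint and response shape are hypothetical, and the same pattern maps directly onto Rest Assured in Java.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_get_user_returns_expected_record():
    # Call the (hypothetical) endpoint and assert on status and payload.
    resp = requests.get(f"{BASE_URL}/users/1", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert body["id"] == 1
    assert "email" in body
```

Run it with pytest, which discovers and executes `test_`-prefixed functions automatically.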



Job Description: Data Scientist
At Propellor.ai, we derive insights that allow our clients to make scientific decisions. We believe in demanding more from the fields of Mathematics, Computer Science, and Business Logic. Combining these, we show our clients a 360-degree view of their business. In this role, the Data Scientist will be expected to work on procurement problems along with a team based across the globe.
We are a Remote-First Company.
Read more about us here: https://www.propellor.ai/consulting
What will help you be successful in this role
- Articulate
- High Energy
- Passion to learn
- High sense of ownership
- Ability to work in a fast-paced and deadline-driven environment
- Loves technology
- Highly skilled at Data Interpretation
- Problem solver
- Ability to narrate the story to the business stakeholders
- Generate insights and the ability to turn them into actions and decisions
Skills to work in a challenging, complex project environment
- Need you to be naturally curious and have a passion for understanding consumer behavior
- A high level of motivation, passion, and high sense of ownership
- Excellent communication skills needed to manage an incredibly diverse slate of work, clients, and team personalities
- Flexibility to work on multiple projects in a deadline-driven, fast-paced environment
- Ability to work with ambiguity and manage the chaos
Key Responsibilities
- Analyze data to unlock insights: Identify relevant insights and actions from data. Use regression, cluster analysis, time series, etc. to explore relationships and trends in response to stakeholder questions and business challenges (a clustering sketch follows this list).
- Bring in experience for AI and ML: Bring in Industry experience and apply the same to build efficient and optimal Machine Learning solutions.
- Exploratory Data Analysis (EDA) and insight generation: Analyze internal and external datasets using analytical techniques, tools, and visualization methods. Ensure pre-processing/cleansing of data, and evaluate data points across the enterprise landscape and/or external data points that can be leveraged in machine learning models to generate insights.
- DS and ML model identification and training: Identify, test, and train machine learning models to be leveraged for business use cases. Evaluate models based on interpretability, performance, and accuracy as required. Experiment with and identify features from datasets that will help influence model outputs. Determine which models need to be deployed and which data points need to be fed into them, and aid in the deployment and maintenance of models.
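The sketch below shows the cluster-analysis step in miniature with scikit-learn's KMeans; two synthetic segments stand in for real stakeholder data.

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic customer segments stand in for real client data.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 1, size=(100, 2)),
    rng.normal(5, 1, size=(100, 2)),
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("segment centers:", km.cluster_centers_)
```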
Technical Skills
An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of them; we are open to promising candidates who are passionate about their work, are fast learners, and are team players.
- Strong experience with machine learning and AI including regression, forecasting, time series, cluster analysis, classification, Image recognition, NLP, Text Analytics and Computer Vision.
- Strong experience with advanced analytics tools for Object-oriented/object function scripting using languages such as Python, or similar.
- Strong experience with popular database programming languages including SQL.
- Strong experience in Spark/PySpark (see the sketch after this list)
- Experience working in Databricks
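Here is a minimal PySpark sketch of the kind of aggregation this work involves, assuming a local Spark session; the tiny in-memory frame stands in for a real client dataset (on Databricks the session is provided for you).

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("spend-by-category").getOrCreate()

# Tiny in-memory frame stands in for a real client dataset.
df = spark.createDataFrame(
    [("snacks", 10.0), ("snacks", 12.5), ("beverages", 7.0)],
    ["category", "spend"],
)

df.groupBy("category").agg(F.avg("spend").alias("avg_spend")).show()
```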
What company benefits do you get when you join us?
- Permanent Work from Home Opportunity
- Opportunity to work with Business Decision Makers and an internationally based team
- A work environment that offers limitless learning
- A culture devoid of bureaucracy and hierarchy
- A culture of being open, direct, and with mutual respect
- A fun, high-caliber team that trusts you and provides the support and mentorship to help you grow
- The opportunity to work on high-impact business problems that are already defining the future of Marketing and improving real lives
To know more about how we work: https://bit.ly/3Oy6WlE
Whom will you work with?
You will closely work with other Senior Data Scientists and Data Engineers.
Candidates who can join immediately or within 15 days will be preferred.



Required skills
- Around 6 to 8.5 years of overall experience, with around 4+ years in the AI/machine learning space
- Extensive experience designing large-scale machine learning solutions, handling large-scale deployments, and establishing continuous automated improvement/retraining frameworks
- Strong experience in Python and Java is required
- Hands-on experience with Scikit-learn, Pandas, and NLTK
- Experience handling time-series data and associated techniques such as Prophet and LSTM (see the sketch after this list)
- Experience with regression, clustering, and classification algorithms
- Extensive experience building traditional machine learning models (SVM, XGBoost, decision trees) and deep neural network models (RNN, feedforward) is required
- Experience with AutoML tools such as TPOT or similar
- Must have strong hands-on experience with deep learning frameworks such as Keras, TensorFlow, or PyTorch
- Knowledge of capsule networks, reinforcement learning, or SageMaker is desirable
- Understanding of the financial domain is desirable
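As a small illustration of the time-series techniques listed above, here is a minimal Prophet sketch on synthetic daily data; a real workflow would load historical data and validate the forecast against a holdout window.

```python
import pandas as pd
from prophet import Prophet

# Synthetic daily series; a real pipeline would load historical data instead.
df = pd.DataFrame({
    "ds": pd.date_range("2023-01-01", periods=120, freq="D"),
    "y": [i + (i % 7) * 2 for i in range(120)],  # trend plus a weekly wiggle
})

model = Prophet()
model.fit(df)

future = model.make_future_dataframe(periods=30)  # forecast 30 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat"]].tail())
```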
Responsibilities
- Design and implement solutions for ML use cases
- Productionize systems and maintain them
- Lead and implement the data acquisition process for ML work
- Learn new methods and models quickly and apply them to solve use cases




