Research Scientist - Machine Learning/Artificial Intelligence

Position Overview
- Drive the cloud financial management (FinOps) lifecycle at Deltek.
- Optimize cloud spend, forecasting, and financial performance.
- Collaborate across teams to deliver actionable financial insights.
Key Responsibilities
- Deliver monthly cloud spend reports and forecasts to stakeholders.
- Manage cloud financial tools and consumption platforms.
- Apply financial principles for contract optimization and spend control.
- Present technical and financial insights in a clear, actionable format.
- Partner with engineering/product teams to identify cost drivers and savings.
- Lead optimization efforts (e.g., savings plans, reserved instances, right-sizing).
- Ensure tagging strategies and alerting frameworks are scalable.
- Support forecasting through demand planning and capacity management.
- Build financial models and forecasting tools for cloud spend.
- Automate FinOps workflows using Python, SQL, and cloud-native services.
- Develop AI-powered cost optimization tools using GenAI APIs.
- Create data pipelines for real-time cost visibility and alerting.
- Onboard stakeholders to FinOps tools and practices.
- Maintain KPIs, dashboards, and scorecards for spend tracking.
- Promote cost-conscious architecture and automation-driven optimization.
- Analyze usage patterns and provide technical cost-saving recommendations.
- Build self-service FinOps portals and GenAI chatbots for cost insights.
- Use GenAI to enhance automation, reporting, and productivity.
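The alerting and anomaly-detection responsibilities above could be sketched, at their simplest, as a z-score check over daily spend. This is a minimal illustration in plain Python; the figures and threshold are made up and do not reflect Deltek's actual tooling:

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, z_threshold=2.0):
    """Flag days whose spend deviates more than z_threshold
    standard deviations from the overall mean."""
    mu = mean(daily_spend)
    sigma = stdev(daily_spend)
    return [
        (day, cost)
        for day, cost in enumerate(daily_spend)
        if sigma > 0 and abs(cost - mu) / sigma > z_threshold
    ]

# Illustrative data: a sudden spike on day 6 gets flagged.
spend = [100.0, 98.0, 103.0, 101.0, 99.0, 102.0, 400.0]
print(spend_anomalies(spend))  # → [(6, 400.0)]
```

A production version would read from the provider's billing export and feed an alerting framework rather than print, but the core logic is this small.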
Qualifications
- Bachelor’s in finance, CS, IS, or related field.
- 4+ years in Cloud FinOps or IT Financial Management.
- 6–8 years in Cloud Infra, DevOps, or Software Development.
- Experience with AWS, Azure, OCI cost tools and FinOps platforms.
- Proven FinOps experience in enterprise environments.
- Skilled in building reports, dashboards, and financial insights.
- Strong grasp of cloud pricing, billing, and optimisation strategies.
- Experience with tagging, anomaly detection, and lifecycle policies.
- Proficient in Python, SQL, and automation for FinOps.
- Hands-on with GenAI APIs and prompt engineering.
- Experience with IaC (Terraform) and CI/CD pipelines.
- Familiarity with Power BI and data visualisation libraries.
- Knowledge of ML/AI frameworks and chatbot development is a plus.
- FinOps Certified Practitioner; cloud certifications preferred.
🌍 We’re Hiring: Senior Field AI Engineer | Remote | Full-time
Are you passionate about pioneering enterprise AI solutions and shaping the future of agentic AI?
Do you thrive in strategic technical leadership roles where you bridge advanced AI engineering with enterprise business impact?
We’re looking for a Senior Field AI Engineer to serve as the technical architect and trusted advisor for enterprise AI initiatives. You’ll translate ambitious business visions into production-ready applied AI systems, implementing agentic AI solutions for large enterprises.
What You’ll Do:
🔹 Design and deliver custom agentic AI solutions for mid-to-large enterprises
🔹 Build and integrate intelligent agent systems using frameworks like LangChain, LangGraph, CrewAI
🔹 Develop advanced RAG pipelines and production-grade LLM solutions
🔹 Serve as the primary technical expert for enterprise accounts and build long-term customer relationships
🔹 Collaborate with Solutions Architects, Engineering, and Product teams to drive innovation
🔹 Represent technical capabilities at industry conferences and client reviews
What We’re Looking For:
✔️ 7+ years of experience in AI/ML engineering with production deployment expertise
✔️ Deep expertise in agentic AI frameworks and multi-agent system design
✔️ Advanced Python programming and scalable backend service development
✔️ Hands-on experience with LLM platforms (GPT, Gemini, Claude) and prompt engineering
✔️ Experience with vector databases (Pinecone, Weaviate, FAISS) and modern ML infrastructure
✔️ Cloud platform expertise (AWS, Azure, GCP) and MLOps/CI-CD knowledge
✔️ Strategic thinker able to balance technical vision with hands-on delivery in fast-paced environments
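The retrieval step at the heart of the RAG pipelines mentioned above can be reduced to a cosine-similarity ranking over embedded documents. A toy sketch follows; the three-dimensional vectors stand in for the output of a real embedding model, and the document texts are invented:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, corpus, top_k=2):
    """Rank documents by similarity to the query vector."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:top_k]]

# Toy "embeddings" standing in for a real embedding model.
docs = [
    {"text": "cloud cost policy", "vec": [0.9, 0.1, 0.0]},
    {"text": "agent orchestration guide", "vec": [0.1, 0.9, 0.2]},
    {"text": "holiday schedule", "vec": [0.0, 0.1, 0.9]},
]
print(retrieve([0.2, 0.95, 0.1], docs, top_k=1))  # → ['agent orchestration guide']
```

Vector databases such as Pinecone or FAISS replace the `sorted` call with approximate nearest-neighbor search, but the ranking principle is the same.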
✨ Why Join Us:
- Drive enterprise AI transformation for global clients
- Work with a category-defining AI platform bridging agents and experts
- High-impact, customer-facing role with strategic influence
- Competitive benefits: medical, vision, dental insurance, 401(k)
Responsibilities:
- Help build model monitoring systems across model types such as regression and classification, with structured and unstructured datasets.
- Build high-performance multi-tenant deployment architecture that works across frameworks like PyTorch, sklearn, TensorFlow.
- Help realize the product vision: Production-ready machine learning models with monitoring within moments, not months.
- Help companies deploy their machine learning models at scale across a wide range of use-cases and sectors.
- Write maintainable, scalable, performant Python code; build high-volume, high-availability analytics systems; and push the state of the art in MLOps to move the industry ahead.
Requirements:
- 3+ years of work experience with production-grade Python, preferably with contributions to open-source repos.
- Prior experience with ML monitoring, observability, and explainability systems, and familiarity with data science tools like pandas, notebooks, NumPy, SciPy, etc.
- Comfortable working in a Linux environment; experience with one relational and one non-relational database is preferred.
- Some working experience with TensorFlow or PyTorch is preferred.
- Experience with model analysis and experimentation frameworks like MLflow, Weights & Biases (W&B), or TFMA is preferred.
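The drift side of the monitoring work described above often starts with a Population Stability Index over score distributions. This is a minimal sketch in plain Python; the bins and samples are illustrative, not a specific product's implementation:

```python
from math import log

def psi(expected, actual, bins):
    """Population Stability Index between two score samples,
    a common drift signal in model monitoring."""
    def frac(sample, lo, hi):
        n = sum(1 for x in sample if lo <= x < hi)
        return max(n / len(sample), 1e-6)  # avoid log(0)

    total = 0.0
    for lo, hi in bins:
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * log(a / e)
    return total

bins = [(0.0, 0.5), (0.5, 1.01)]
baseline = [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]
drifted = [0.6, 0.7, 0.8, 0.9, 0.9, 0.95]
print(psi(baseline, baseline, bins))  # → 0.0 (identical distributions)
```

A common rule of thumb treats PSI above roughly 0.25 as significant drift; running `psi(baseline, drifted, bins)` on the samples above clears that bar easily.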
From building entire infrastructures or platforms to solving complex IT challenges, Cambridge Technology helps businesses accelerate their digital transformation and become AI-first businesses. With over 20 years of expertise as a technology services company, we enable our customers to stay ahead of the curve by helping them figure out the perfect approach, solutions, and ecosystem for their business. Our experts help customers leverage the right AI, big data, cloud solutions, and intelligent platforms that will help them become and stay relevant in a rapidly changing world.
No. of Positions: 1
Skills required:
- The ideal candidate will have a bachelor’s degree in data science, statistics, or a related discipline with 4-6 years of experience, or a master’s degree with 4-6 years of experience. A strong candidate will also possess many of the following characteristics:
- Strong problem-solving skills with an emphasis on achieving proof-of-concept
- Knowledge of statistical techniques and concepts (regression, statistical tests, etc.)
- Knowledge of machine learning and deep learning fundamentals
- Experience with Python implementations to build ML and deep learning algorithms (e.g., pandas, NumPy, scikit-learn, statsmodels, Keras, PyTorch, etc.)
- Experience writing and debugging code in an IDE
- Experience using managed web services (e.g., AWS, GCP, etc.)
- Strong analytical and communication skills
- Curiosity, flexibility, creativity, and a strong tolerance for ambiguity
- Ability to learn new tools from documentation and internet resources.
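One of the statistical techniques named above, regression, has a closed-form solution in the one-variable case. A sketch in plain Python with illustrative data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Points lying exactly on y = 2x + 1 recover that line.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # → 2.0 1.0
```

In practice this is one call to `statsmodels` or `scikit-learn`, but knowing the closed form is exactly the kind of fundamentals the list above asks for.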
Roles and responsibilities:
- You will work on a small, core team alongside other engineers and business leaders throughout Cambridge with the following responsibilities:
- Collaborate with client-facing teams to design and build operational AI solutions for client engagements.
- Identify relevant data sources for data wrangling and EDA
- Identify model architectures to use for client business needs.
- Build full-stack data science solutions up to MVP that can be deployed into existing client business processes or scaled up based on clear documentation.
- Present findings to teammates and key stakeholders in a clear and repeatable manner.
Experience:
2 - 14 Yrs
Duties and Responsibilities:
Research and develop innovative use cases, solutions, and quantitative models
Quantitative models in video and image recognition and signal processing for cloudbloom's cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment)
Design, implement, and demonstrate proof-of-concepts and working prototypes
Provide R&D support to productize research prototypes
Explore emerging tools, techniques, and technologies, and work with academia on cutting-edge solutions
Collaborate with cross-functional teams and ecosystem partners for mutual business benefit
Team Management Skills
Academic Qualification
7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments
Mandatory requirement: Bachelor's degree with a STEM background (Science, Technology, Engineering, and Mathematics) and a strong quantitative flavour
Innovative and creative in data analysis, problem solving, and presentation of solutions
Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment
Strong experience in handling multinational client engagements
Good verbal, writing, and presentation skills
Core Expertise
Excellent understanding of the basics of mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis)
Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL
Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, and decision forests
Strong math skills (multivariable calculus and linear algebra) - understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of many predictive-performance and algorithm-optimization techniques
Deep learning: CNNs, neural networks, RNNs, TensorFlow, PyTorch, computer vision
Large-scale data extraction/mining, data cleansing, diagnostics, and preparation for modeling
Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, multivariate techniques, and predictive modeling: cluster analysis, discriminant analysis, CHAID, logistic and multiple regression analysis
Experience with data visualization tools like Tableau, Power BI, and Qlik Sense that help to visually encode data
Excellent communication skills - it is incredibly important to describe findings to technical and non-technical audiences
Capability for continuous learning and knowledge acquisition
Mentor colleagues for growth and success
Strong software engineering background
Hands-on experience with data science tools
Ability to adapt to evolving culture and operating norms as a result of the fast-paced nature of a new, high-growth organization.
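k-Nearest Neighbors, one of the machine learning methods listed above, can be sketched from scratch in a few lines: classify a point by majority vote among its nearest training points. The training data below is a toy example:

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify query by majority vote among its k nearest
    training points (Euclidean distance)."""
    nearest = sorted(train, key=lambda p: dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated toy clusters.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (0.5, 0.5)))  # → a
```

A library implementation (e.g., scikit-learn's `KNeighborsClassifier`) adds tree-based indexing for speed, but the decision rule is exactly this vote.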
• 7+ years of industry experience primarily related to unstructured text data and NLP (PhD work and internships will be considered in lieu of industry experience if they relate to unstructured text, but not more than 2 years will be counted towards industry experience)
• Develop natural-language comprehension products for medical/healthcare documents to support Health business objectives and products, improve processing efficiency, and reduce overall healthcare costs
• Gather external data sets; build synthetic data and label data sets as needed for NLP/NLR/NLU
• Apply expert software engineering skills to build natural-language products that improve automation and user experiences, leveraging unstructured data storage, entity recognition, POS tagging, ontologies, taxonomies, data mining, information retrieval techniques, machine learning approaches, and distributed and cloud computing platforms
• Own the natural-language and text-mining products - from platforms to systems for training, versioning, deploying, storing, and testing models, creating real-time feedback loops to fully automated services
• Work closely and collaborate with data scientists, machine learning engineers, IT teams, and business stakeholders spread across various locations in the US and India to achieve business goals
• Provide mentoring to other data scientists and machine learning engineers
• Strong understanding of mathematical concepts including but not limited to linear algebra, advanced calculus, partial differential equations, and statistics, including Bayesian approaches
• Strong programming experience, including understanding of concepts in data structures, algorithms, compression techniques, high-performance computing, distributed computing, and various computer architectures
• Good understanding of and experience with traditional data science approaches like sampling techniques, feature engineering, classification and regression, SVMs, trees, and model evaluation
• Additional coursework, projects, research participation, and/or publications in natural language processing, reasoning and understanding, information retrieval, text mining, search, computational linguistics, ontologies, and semantics
• Experience developing and deploying products in production, with experience in two or more of the following languages (Python, C++, Java, Scala)
• Strong Unix/Linux background and 2+ years of experience with at least one of the cloud vendors AWS, Azure, or Google
• Hands-on experience with one or more high-performance and distributed computing tools like Spark, Dask, Hadoop, or CUDA distributed GPU (2+ years)
• Thorough understanding of deep learning architectures and hands-on experience with one or more frameworks like TensorFlow, PyTorch, or Keras (2+ years)
• Hands-on experience with libraries and tools like spaCy, NLTK, Stanford CoreNLP, Gensim, or John Snow Labs for 5+ years
• Understand business use cases and be able to translate them to the team with a vision of how to implement them
• Identify enhancements and build best practices that can help improve the productivity of the team.
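Entity recognition, mentioned above, is normally done with a trained model (spaCy, Stanford CoreNLP, etc.). Purely as an illustration of the task, here is a toy rule-based extractor that pulls multi-word capitalized spans while skipping sentence-initial words; the pattern is a stand-in, not a production approach:

```python
import re

# Toy pattern: runs of two or more capitalized words, not at the
# start of the text or immediately after sentence punctuation.
CAP_SPAN = re.compile(r"(?<!^)(?<![.!?] )([A-Z][a-z]+(?: [A-Z][a-z]+)+)")

def extract_entities(text):
    """Return candidate named-entity spans from text."""
    return CAP_SPAN.findall(text)

print(extract_entities("The claim was filed by John Smith at General Hospital."))
# → ['John Smith', 'General Hospital']
```

A real pipeline would replace this with a statistical tagger and map spans to a healthcare ontology, but the input/output shape is the same.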
- You will be responsible for the design, development, and testing of products
- Contribute to all phases of the development lifecycle
- Write well-designed, testable, efficient code
- Ensure designs are in compliance with specifications
- Prepare and produce releases of software components
- Support continuous improvement by investigating alternative technologies and presenting them for architectural review
- Some of the technologies you will be working on: Core Java, Solr, Hadoop, Spark, Elasticsearch, clustering, text mining, NLP, Mahout, Lucene, etc.
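Search and text-mining stacks like Solr and Lucene, listed above, rank documents with term weighting. A minimal TF-IDF sketch in plain Python over toy documents (the classic scheme, not Lucene's exact current scoring formula):

```python
from math import log

def tf_idf(docs):
    """Per-document TF-IDF weights: rare terms score higher
    than terms that appear across the whole corpus."""
    n = len(docs)
    df = {}  # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            w[term] = tf * log(n / df[term])
        weights.append(w)
    return weights

# Toy tokenized corpus: "store" is rarer than "index".
docs = [["search", "index", "search"], ["index", "store"], ["query", "search"]]
w = tf_idf(docs)
```

In document 1, `store` (appearing in one document) outweighs `index` (appearing in two), which is the behavior a search engine's relevance ranking relies on.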
• Excellent understanding of machine learning techniques and algorithms, such as SVM, decision forests, k-NN, Naive Bayes, etc.
• Experience in selecting features and building and optimizing classifiers using machine learning techniques.
• Prior experience with data visualization tools, such as D3.js, ggplot, etc.
• Good knowledge of statistics, such as distributions, statistical testing, regression, etc.
• Adequate presentation and communication skills to explain results and methodologies to non-technical stakeholders.
• Basic understanding of the banking industry is a value-add.
• Develop, process, cleanse, and enhance data collection procedures from multiple data sources.
• Conduct and deliver experiments and proofs of concept to validate business ideas and potential value.
• Test, troubleshoot, and enhance the developed models in distributed environments to improve their accuracy.
• Work closely with product teams to implement algorithms with Python and/or R.
• Design and implement scalable predictive models and classifiers leveraging machine learning and data regression.
• Facilitate integration with enterprise applications using APIs to enrich implementations.
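Testing and troubleshooting classifiers, as the duties above describe, usually starts with precision and recall on held-out labels. A minimal sketch with illustrative labels:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for a binary classifier's predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative labels: 2 true positives, 1 false positive, 1 false negative.
p, r = precision_recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
print(p, r)
```

Libraries like scikit-learn (`precision_score`, `recall_score`) provide the same metrics; writing them once by hand makes the trade-off between the two easy to explain to non-technical stakeholders.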








