50+ Machine Learning (ML) Jobs in India
Apply to 50+ Machine Learning (ML) Jobs on CutShort.io. Find your next job, effortlessly. Browse Machine Learning (ML) Jobs and apply today!
Thirumoolar Software is seeking talented AI researchers to join our cutting-edge team and help drive innovation in artificial intelligence. As an AI researcher, you will be at the forefront of developing intelligent systems that can solve complex problems and uncover valuable insights from data.
Responsibilities:
Research and Development: Conduct research in AI areas relevant to the company's goals, such as machine learning, natural language processing, computer vision, or recommendation systems. Explore new algorithms and methodologies to solve complex problems.
Algorithm Design and Implementation: Design and implement AI algorithms and models, considering factors such as performance, scalability, and computational efficiency. Use programming languages like Python, Java, or C++ to develop prototype solutions.
Data Analysis: Analyze large datasets to extract meaningful insights and patterns. Preprocess data and engineer features to prepare it for training AI models. Apply statistical methods and machine learning techniques to derive actionable insights.
Experimentation and Evaluation: Design experiments to evaluate the performance of AI algorithms and models. Conduct thorough evaluations and analyze results to identify strengths, weaknesses, and areas for improvement. Iterate on algorithms based on empirical findings.
Collaboration and Communication: Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, to integrate AI solutions into our products and services. Communicate research findings, technical concepts, and project updates effectively to stakeholders.
Preferred Location: Chennai
at Zolvit (formerly Vakilsearch)
Role Overview:
We are looking for a skilled Data Scientist with expertise in data analytics, machine learning, and AI to join our team. The ideal candidate will have a strong command of data tools, programming, and knowledge of LLMs and Generative AI, contributing to the growth and automation of our business processes.
Key Responsibilities:
1. Data Analysis & Visualization:
- Develop and manage data pipelines, ensuring data accuracy and integrity.
- Design and implement insightful dashboards using Power BI to help stakeholders make data-driven decisions.
- Extract and analyze complex data sets using SQL to generate actionable insights.
2. Machine Learning & AI Models:
- Build and deploy machine learning models to optimize key business functions like discount management, lead qualification, and process automation.
- Apply Natural Language Processing (NLP) techniques for text extraction, analysis, and classification from customer documents (a small illustration follows this list).
- Implement and fine-tune Generative AI models and large language models (LLMs) for various business applications, including prompt engineering for automation tasks.
3. Automation & Innovation:
- Use AI to streamline document verification, data extraction, and customer interaction processes.
- Innovate and automate manual processes, creating AI-driven solutions for internal teams and customer-facing systems.
- Stay abreast of the latest advancements in machine learning, NLP, and generative AI, applying them to real-world business challenges.
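The document-classification work described in item 2 can be pictured with a small, self-contained sketch. This is an illustrative baseline only: the scikit-learn pipeline, the labels, and the sample customer-document snippets are assumptions, not part of the role description.
```python
# Minimal illustration: classifying customer-document snippets with scikit-learn.
# The labels and sample texts below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

docs = [
    "Please find attached the GST registration certificate.",
    "I would like to cancel my subscription and request a refund.",
    "Kindly verify the incorporation documents for my new company.",
    "The invoice amount charged to my card appears to be wrong.",
]
labels = ["document_verification", "refund_request",
          "document_verification", "billing_issue"]

# TF-IDF features feeding a linear classifier: a common baseline before
# moving to transformer- or LLM-based classifiers.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(docs, labels)

print(clf.predict(["Refund the duplicate charge on my invoice, please."]))
```
In practice such a baseline would be benchmarked against transformer- or LLM-based classifiers before deployment.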
Qualifications:
- Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, or related field.
- 4-7 years of experience as a Data Scientist, with proficiency in Python, SQL, Power BI, and Excel.
- Expertise in building machine learning models and utilizing NLP techniques for text processing and automation.
- Experience in working with large language models (LLMs) and generative AI to create efficient and scalable solutions.
- Strong problem-solving skills, with the ability to work independently and in teams.
- Excellent communication skills, with the ability to present complex data in a simple, actionable way to non-technical stakeholders.
If you’re excited about leveraging data and AI to solve real-world problems, we’d love to have you on our team!
AI/ML Data Solution Architecture - Ability to define a data architecture strategy that aligns with business goals for AI use cases and ensures data availability and accuracy.
Data Quality & Governance - Ensuring data quality by implementing and enforcing data governance policies and best practices.
Technology Evaluation & Selection - Execute proofs of concept and proofs of value for various technology solutions and frameworks.
Continuous Learning - Staying up to date with emerging technologies and trends in Data Engineering, AI/ML, and GenAI, and making recommendations for their adoption wherever appropriate.
Team Management - Responsible for leading a team of data architects and data engineers, as well as coordinating with vendors and technology partners.
Collaboration & Communication - Collaborate and work closely with executives, stakeholders, and business teams to effectively communicate the architecture strategy and clearly articulate its business value.
Experience: 12 to 16 years
Work location: JP Nagar 3rd Phase, South Bangalore (work-from-office role; IC role to begin with)
The suitable candidate should be able to demonstrate strong experience in the following areas:
Data Engineering
- Hands-on experience with data engineering tools such as Talend (or Informatica or AbInitio), Databricks (or Spark), and HVR (or Attunity or Golden Gate or Equalum).
- Working knowledge of dbt (data build tool), Azure Data Factory, continuous integration and continuous delivery (CI/CD), automated testing, data lakes, data warehouses, big data, Collibra, and Unity Catalog
- Basic knowledge of building analytics applications using Azure Purview, Power BI, Spotfire, and Azure Machine Learning.
AI & ML
- Conceptualize & design end-to-end solution view of sourcing data, pre-processing data, feature stores, model development, evaluation, deployment & governance
- Define model development best practices, create POVs on emerging AI trends, drive proposal responses, help solve complex analytics problems, and strategize the end-to-end implementation frameworks & methodologies
- Thorough understanding of the database, streaming & analytics services offered by popular cloud platforms (Azure, AWS, GCP), and hands-on experience building machine learning pipelines on at least one of them
- Expertise in Large Language Models preferred, with exposure to implementing generative AI using ChatGPT / OpenAI and other models; harvesting models from open source will be an added advantage
- Good understanding of statistical analysis and data analysis, and knowledge of data management & visualization techniques
- Exposure to other AI platforms & products (Kore.ai, expert.ai, Dataiku, etc.) desired
- Hands-on development experience in Python/R is a must, and additional hands-on experience in a few other procedural/object-oriented programming languages (Java, C#, C++) is desirable.
- Leadership skills to drive AI/ML-related conversations with CXOs and senior leadership, and to make impactful presentations to customer organizations
Stakeholder Management & Communication Skills
- Excellent communication, negotiation, influencing and stakeholder management skills
- Preferred to have experience in project management, particularly in executing projects using Agile delivery frameworks
- Customer focus and excellent problem-solving skills
Qualification
- BE or MTech (BSc or MSc) in engineering, sciences, or equivalent relevant experience required.
- Total of 13+ years of experience, including 10+ years of experience in building/managing/administering data and analytics applications, is required
- Experience designing solution architectures and presenting them in architecture review forums
Additional Qualifications
- Ability to define best practices for data governance, data quality, and data lineage, and to operationalize those practices.
- Proven track record of designing and delivering solutions that comply with industry regulations and legislation such as GxP, SoX, HIPAA, and GDPR.
Founded by IIT Delhi alumni, Convin is a conversation intelligence platform that helps organisations improve sales/collections and elevate customer experience while automating quality assurance and coaching for reps, backed by deep business insights for leaders.
At Convin, we are leveraging AI/ML to achieve these larger business goals while focusing on bringing efficiency and reducing cost. We are already helping leaders across health-tech, ed-tech, fintech, e-commerce, and consumer services, including Treebo, SOTC, Thomas Cook, Aakash, MediBuddy, and PlanetSpark.
If you love AI, understand SaaS, love selling, and are looking to join a ship bound to fly, then Convin is the place for you!
We are seeking a talented and motivated Core Machine Learning Engineer with a passion for the audio domain. As a member of our dynamic team, you will play a crucial role in developing state-of-the-art solutions in speech-to-text, speaker separation, diarization, and related areas.
Responsibilities
- Collaborate with cross-functional teams to design, develop, and implement machine learning models and algorithms in the audio domain.
- Contribute to the research, prototyping, and deployment of speech-to-text, speaker separation, and diarization solutions (a minimal transcription sketch follows this list).
- Explore and experiment with various techniques to improve the accuracy and efficiency of audio processing models.
- Work closely with senior engineers to optimize and integrate machine learning components into our products.
- Participate in code reviews, provide constructive feedback, and adhere to coding standards and best practices.
- Communicate effectively with team members, sharing insights and progress updates.
- Stay updated with the latest developments in machine learning, AI, NLP, and signal processing, and apply relevant advancements to our projects.
- Collaborate on the development of end-to-end systems that involve speech and language technologies.
- Assist in building and training large-scale language models like ChatGPT, LLaMA, Falcon, etc., leveraging their capabilities as required.
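As referenced in the responsibilities above, the core speech-to-text step can be sketched minimally with the open-source openai-whisper package. The model size and file name are illustrative assumptions; speaker separation and diarization, which would typically be layered on top with a dedicated toolkit, are not shown here.
```python
# Minimal sketch: transcribing a call recording with the openai-whisper package.
# "sample_call.wav" is a placeholder file name.
import whisper

model = whisper.load_model("base")           # small, general-purpose model
result = model.transcribe("sample_call.wav")

print(result["text"])                        # full transcript
for seg in result["segments"]:               # per-segment timestamps
    print(f"[{seg['start']:.1f}s - {seg['end']:.1f}s] {seg['text']}")
```
The segment timestamps are what a diarization stage would later align with speaker labels.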
Requirements
- Bachelor's or Master's degree in Computer Science or a related field from a reputed institution.
- 5+ years of hands-on experience in Machine Learning, Artificial Intelligence, Natural Language Processing, or signal processing.
- Strong programming skills in languages such as Python, and familiarity with relevant libraries and frameworks (e.g., TensorFlow, PyTorch).
- Knowledge of speech-to-text, text-to-speech, speaker separation, and diarization techniques is a plus.
- Solid understanding of machine learning fundamentals and algorithms.
- Excellent problem-solving skills and the ability to learn quickly.
- Strong communication skills to collaborate effectively within a team environment.
- Enthusiasm for staying updated with the latest trends and technologies in the field.
- Familiarity with large language models like ChatGPT, LLaMA, Falcon, etc., is advantageous.
Client based in Bangalore.
Data Science:
• Expert-level Python, strong analytical skills, experience working with different model types, solid grasp of the basic concepts, and CPG domain knowledge.
• Statistical models and hypothesis testing
• Machine learning (essential)
• Business understanding and visualization in Python
• Classification, clustering, and regression
Mandatory Skills
• Data Science, Python, Machine Learning, Statistical Models, Classification, clustering and regression
Role
You will develop and maintain the key backend code and infrastructure of the company stack. You will implement AI solutions such as LLMs for tasks like voice-based interactive systems, chatbots, and AI web apps. You should be able to see projects through from start to finish, with good organizational skills and attention to detail. This is a perfect role for someone who likes to build state-of-the-art AI products and work with cutting-edge AI technologies like GPT, LLaMA, etc.
Qualifications
- Basic: BS in Computer Science or relevant field. Preferred: MS in AI/ML
- 5+ years of experience in backend software development or an AI role
- Must be able to design high-throughput, scalable backend systems
- Knowledge/experience of ML and deep learning: fine-tuning AI models (especially LLMs), policy optimization, creating embeddings, transformer architectures, prompt engineering
- Knowledge of / experience with TensorFlow, PyTorch, LangChain
- Experience using LLMs such as LLaMA or GPT in apps (see the sketch after this list)
- Nice to have: computer vision knowledge such as VLMs and CNNs
- Proficiency in a programming language such as Python, Node.js, or Go
- Experience with cloud computing platforms (AWS, GCP) and technologies like Docker
- Knowledge of REST APIs and databases (MySQL, MongoDB, vector databases)
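As mentioned in the list above, a minimal sketch of serving an LLM from a backend service follows. FastAPI is an assumed framework choice and call_llm is a hypothetical placeholder for a real GPT/LLaMA client; the point is only the shape of the endpoint, not a production implementation.
```python
# Minimal sketch: exposing an LLM-backed chat endpoint from a backend service.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real client (hosted API or locally served model).
    return f"(model reply to: {prompt})"

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    """Single-turn chat endpoint wrapping the LLM call."""
    return {"reply": call_llm(req.message)}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```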
Job Summary:
We are looking for a highly skilled and experienced Generative AI Developer to join our team. The ideal candidate will have a strong understanding of generative AI models, frameworks, and techniques, and will be able to apply them to develop innovative solutions. The successful candidate will also have a proven track record of working in cloud computing environments.
Responsibilities:
- Design, develop, and implement generative AI models using state-of-the-art techniques.
- Collaborate with cross-functional teams to define project goals, research requirements, and develop innovative solutions.
- Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques.
- Stay up-to-date on the latest advancements in generative AI, deep learning, and related fields, and incorporate new techniques and methods into the team's workflow.
- Develop and maintain clear and concise documentation of generative AI models, processes, and results.
- Communicate complex concepts and results to both technical and non-technical stakeholders.
- Provide support and guidance to other team members, and contribute to a positive, collaborative working environment.
Qualifications:
Bachelor's degree in computer science, artificial intelligence, or a related field.
1-2 years of experience in generative AI development.
Job Description: Product Manager for GenAI Applications on Data Products
About the Company:
We are a forward-thinking technology company specializing in creating innovative data products and AI applications. Our mission is to harness the power of data and AI to drive business growth and efficiency. We are seeking a dynamic and experienced Product Manager to join our team and lead the development of cutting-edge GenAI applications.
Role Overview:
As a Product Manager for GenAI Applications, you will be responsible for conceptualizing, developing, and managing AI-driven products that leverage our data platforms. You will work closely with cross-functional teams, including engineering, data science, marketing, and sales, to ensure the successful delivery of high-impact AI solutions. Your understanding of business user needs and ability to translate them into effective AI applications will be crucial.
Key Responsibilities:
- Lead the end-to-end product lifecycle from ideation to launch for GenAI applications.
- Collaborate with engineering and data science teams to design, develop, and deploy AI solutions.
- Conduct market research and gather user feedback to identify opportunities for new product features and improvements.
- Develop detailed product requirements, roadmaps, and user stories to guide development efforts.
- Work with business stakeholders to understand their needs and ensure the AI applications meet their requirements.
- Drive the product vision and strategy, aligning it with company goals and market demands.
- Monitor and analyze product performance, leveraging data to make informed decisions and optimizations.
- Coordinate with marketing and sales teams to create go-to-market strategies and support product launches.
- Foster a culture of innovation and continuous improvement within the product development team.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Business, or a related field.
- 3-5 years of experience in product management, specifically in building AI applications.
- Proven track record of developing and launching AI-driven products from scratch.
- Experience working with data application layers and understanding data architecture.
- Strong understanding of the psyche of business users and the ability to translate their needs into technical solutions.
- Excellent project management skills, with the ability to prioritize tasks and manage multiple projects simultaneously.
- Strong analytical and problem-solving skills, with a data-driven approach to decision making.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Passion for AI and a deep understanding of the latest trends and technologies in the field.
Benefits:
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge AI technologies and products.
- Collaborative and innovative work environment.
- Professional development opportunities and career growth.
If you are a passionate Product Manager with a strong background in AI and data products, and you are excited about building transformative AI applications, we would love to hear from you. Apply now to join our dynamic team and make an impact in the world of AI and data.
Client based in Bangalore.
Data Scientist with LLM and Healthcare Expertise
Keywords: Data Scientist, LLM, Radiology, Healthcare, Machine Learning, Deep Learning, AI, Python, TensorFlow, PyTorch, Scikit-learn, Data Analysis, Medical Imaging, Clinical Data, HIPAA, FDA.
Responsibilities:
· Develop and deploy advanced machine learning models, particularly focusing on Large Language Models (LLMs) and their application in the healthcare domain.
· Leverage your expertise in radiology, visual images, and text data to extract meaningful insights and drive data-driven decision-making.
· Collaborate with cross-functional teams to identify and address complex healthcare challenges.
· Conduct research and explore new techniques to enhance the performance and efficiency of our AI models.
· Stay up-to-date with the latest advancements in machine learning and healthcare technology.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
· 6+ years of hands-on experience in data science and machine learning.
· Strong proficiency in Python and popular data science libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
· Deep understanding of LLM architectures, training methodologies, and applications.
· Expertise in working with radiology images, visual data, and text data.
· Experience in the healthcare domain, particularly in areas such as medical imaging, clinical data analysis, or patient outcomes.
· Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
· PhD in Computer Science, Data Science, or a related field.
· Experience with cloud platforms (e.g., AWS, GCP, Azure).
· Knowledge of healthcare standards and regulations (e.g., HIPAA, FDA).
· Publications in relevant academic journals or conferences.
Company: Optimum Solutions
About the company: Optimum Solutions is a leader in the sheet metal industry, providing sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. From tools through software to machines, we are a one-stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes; hands-on experience with SQL queries and database server deployment management.
- Implementing automated testing platforms, unit tests, and CI/CD pipelines
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, and ADO
- Understanding of container platforms, such as Docker
Job Description
- We are looking for a good Python Developer with knowledge of machine learning and deep learning frameworks.
- Your primary focus will be working with the Product and Use-case Delivery team on prompting for different Gen-AI use cases.
- You will be responsible for prompting and building use-case pipelines.
- Perform evaluation of all Gen-AI features and use-case pipelines.
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Own the entire prompt life cycle: prompt design, prompt template creation, and prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt guardrails to prevent attacks like prompt injection, jailbreaking, and prompt leaking (a minimal guardrail sketch follows this list)
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
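As referenced above, a minimal sketch of a prompt template plus a regex-based guardrail check follows. The template text and the patterns are assumptions for illustration; a production guardrail would combine such pattern checks with model-based classifiers and output filtering.
```python
# Minimal sketch: a prompt template plus a naive regex guardrail that flags
# common prompt-injection phrasing before the prompt reaches a Gen-AI model.
import re

PROMPT_TEMPLATE = (
    "You are a support assistant. Answer using only the context below.\n"
    "Context: {context}\n"
    "Question: {question}\n"
)

INJECTION_PATTERNS = [
    r"ignore (all|any|previous) (instructions|prompts)",
    r"reveal (the|your) (system prompt|instructions)",
    r"jailbreak",
]

def is_suspicious(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    return any(re.search(p, user_input, re.IGNORECASE) for p in INJECTION_PATTERNS)

def build_prompt(context: str, question: str) -> str:
    if is_suspicious(question):
        raise ValueError("Potential prompt injection detected")
    return PROMPT_TEMPLATE.format(context=context, question=question)

print(build_prompt("Order #123 shipped on Monday.", "When did my order ship?"))
```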
Skills and Qualifications
- Strong proficiency with Python, the Django framework, and regular expressions (regex)
- Good understanding of the machine learning frameworks PyTorch and TensorFlow
- Knowledge of Generative AI and RAG pipelines
- Good grasp of microservice design patterns and developing scalable applications
- Ability to build and consume REST APIs
- Fine-tune and optimize code for better performance
- Strong understanding of OOP and design thinking
- Understanding of the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding of accessibility and security compliance, and of user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources, and databases into one system
- Good knowledge of API gateways and proxies, such as WSO2, Kong, nginx, and Apache HTTP Server
- Understanding of the fundamental design principles behind scalable and distributed applications
- Good working knowledge of microservices architecture: behaviour, dependencies, scalability, etc.
- Experience deploying on cloud platforms like Azure or AWS
- Familiarity and working experience with DevOps tools like Azure DevOps, Ansible, Jenkins, and Terraform
at Cargill Business Services
Job Purpose and Impact:
The Sr. Generative AI Engineer will architect, design and develop new and existing GenAI solutions for the organization. As a Generative AI Engineer, you will be responsible for developing and implementing products using cutting-edge generative AI and RAG to solve complex problems and drive innovation across our organization. You will work closely with data scientists, software engineers, and product managers to design, build, and deploy AI-powered solutions that enhance our products and services in Cargill. You will bring order to ambiguous scenarios and apply in depth and broad knowledge of architectural, engineering and security practices to ensure your solutions are scalable, resilient and robust and will share knowledge on modern practices and technologies to the shared engineering community.
Key Accountabilities:
• Apply software and AI engineering patterns and principles to design, develop, test, integrate, maintain and troubleshoot complex and varied Generative AI software solutions and incorporate security practices in newly developed and maintained applications.
• Collaborate with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals.
• Conduct research to stay up-to-date with the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services; optimize existing generative AI models and RAG for improved performance, scalability, and efficiency; develop and maintain pipelines and RAG solutions, including data preprocessing, prompt engineering, benchmarking, and fine-tuning.
• Develop clear and concise documentation, including technical specifications, user guides and presentations, to communicate complex AI concepts to both technical and non-technical stakeholders.
• Participate in the engineering community by maintaining and sharing relevant technical approaches and modern skills in AI.
• Contribute to the establishment of best practices and standards for generative AI development within the organization.
• Independently handle complex issues with minimal supervision, while escalating only the most complex issues to appropriate staff.
Minimum Qualifications:
• Bachelor’s degree in a related field or equivalent experience
• Minimum of five years of related work experience
• You are proficient in Python and have experience with machine learning libraries and frameworks
• You have a deep understanding of industry-leading Foundation Model capabilities and their applications.
• You are familiar with cloud-based Generative AI platforms and services
• Full stack software engineering experience to build products using Foundation Models
• Confirmed experience architecting applications, databases, services or integrations.
Product-based company located in Bangalore.
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama 2, Mistral) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering (a minimal fine-tuning sketch follows this list). Experience with other transformer-based NLP models such as BERT will be an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
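The fine-tuning workflow referenced above can be illustrated with a deliberately small stand-in: fine-tuning a BERT-style model for binary text classification using the Hugging Face Trainer API. The model name, labels, and two-example dataset are assumptions; real LLM fine-tuning (GPT, Llama 2, Mistral) follows the same shape at far larger scale, typically with parameter-efficient methods.
```python
# Minimal sketch: supervised fine-tuning of a small transformer classifier.
# Model, labels, and the tiny in-memory dataset are illustrative placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

texts = ["Patient shows no sign of relapse.", "Tumour size increased since last scan."]
labels = [0, 1]  # hypothetical binary outcome labels
ds = Dataset.from_dict({"text": texts, "label": labels})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True,
                                padding="max_length", max_length=64), batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()
```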
Qualifications Required
• Doctoral or master’s degree in computer science, Data Science, Artificial Intelligence, or related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment
Must have:
- 8+ years of experience with a significant focus on developing, deploying & supporting AI solutions in production environments.
- Proven experience in building enterprise software products for B2B businesses, particularly in the supply chain domain.
- Good understanding of Generics, OOPs concepts & Design Patterns
- Solid engineering and coding skills. Ability to write high-performance production quality code in Python
- Proficiency with ML libraries and frameworks (e.g., Pandas, TensorFlow, PyTorch, scikit-learn).
- Strong expertise in time series forecasting using statistical, ML, DL, and foundation models
- Experience processing time series data using techniques such as decomposition, clustering, and outlier detection & treatment (see the sketch after this list)
- Exposure to generative AI models and agent architectures on platforms such as AWS Bedrock, Crew AI, Mosaic/Databricks, and Azure
- Experience working with modern data architectures, including data lakes and data warehouses, having leveraged one or more frameworks such as Airbyte, Airflow, Dagster, AWS Glue, Snowflake, and dbt
- Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and deploying ML models in cloud environments.
- Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment.
- Effective communication skills, with the ability to convey complex technical concepts to non-technical stakeholders
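As noted in the list above, a minimal sketch of classical time-series handling follows: seasonal decomposition of a synthetic monthly demand series and a simple ARIMA forecast. The series, period, and model order are illustrative assumptions, not a recommended configuration.
```python
# Minimal sketch: decomposition and a simple statistical forecast with statsmodels.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly demand: trend + yearly seasonality + noise.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
demand = pd.Series(100 + np.arange(48) * 2
                   + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
                   + np.random.normal(0, 3, 48), index=idx)

decomp = seasonal_decompose(demand, model="additive", period=12)
print(decomp.trend.dropna().tail(3))   # inspect the extracted trend

forecast = ARIMA(demand, order=(1, 1, 1)).fit().forecast(steps=6)
print(forecast)                        # next six months of demand
```
In a production forecasting stack this baseline would sit alongside ML/DL and foundation-model approaches for comparison.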
Good To Have:
- Experience with MLOps tools and practices for continuous integration and deployment of ML models.
- Familiarity with deploying applications on Kubernetes
- Knowledge of supply chain management principles and challenges.
- A Master's or Ph.D. in Computer Science, Machine Learning, Data Science, or a related field is preferred
MLSecured (https://www.mlsecured.com/), an AI GRC (Governance, Risk, and Compliance) company, is hiring a Backend Software Engineer! 🚀
Are you a passionate Backend Software Engineer with experience in Machine Learning and Open Source projects? Do you have a strong foundation in Python and Object-Oriented Programming (OOP) concepts? Join us at MLSecured.com and be part of our mission to solve AI Security & Compliance challenges! 🔐🤖
What We’re Looking For:
👨💻 1-2 years of professional experience in Backend Development and Open Source projects contribution
🐍 Proficiency in Python and OOP concepts
🤝 Experience with Machine Learning (NLP, GenAI)
🤝 Experience with CI/CD and cloud infra is a plus
💡 A passion for solving complex AI Security & Compliance problems
Why Join Us?
At MLSecured.com, you'll work with a talented team dedicated to pioneering AI security solutions. Be a part of our journey to make AI systems secure and compliant for everyone. 🌟
Perks of Joining a Fast-Paced Startup:
🚀 Rapid career growth and learning opportunities
🌍 Work on cutting-edge AI technologies and innovative projects
🤝 Collaborative and dynamic work environment
🎉 Flexible working hours and full remote work options
📈 Significant impact on the company's direction and success
Internship Opportunity at REDHILL SOFTEC
Position: Software Intern
Duration: 3 Months
Domains: Machine Learning or Full Stack Web Development
Working Hours: 10 AM - 6 PM
About REDHILL SOFTEC:
REDHILL SOFTEC is a dynamic and innovative tech company committed to fostering talent and driving technological advancements. We are excited to announce an exclusive internship opportunity for MCA students passionate about Machine Learning or Full Stack Web Development.
Internship Details:
Duration: The internship spans 3 months, offering a deep dive into either Machine Learning or Full Stack Web Development.
Domains:
Machine Learning: Work on cutting-edge ML projects, learning and applying various algorithms and data analysis techniques.
Full Stack Web Development: Gain hands-on experience in both front-end and back-end development, working with the latest web technologies.
Stipend: This is an unpaid internship designed to provide valuable industry experience and skill development.
Working Hours: Interns are expected to work from 10 AM to 6 PM, ensuring a full-time immersive experience.
What We Offer:
Hands-on Experience: Engage in real-world projects that enhance your technical skills and industry knowledge.
Mentorship: Learn from experienced professionals who will guide and support you throughout the internship.
Skill Development: Acquire practical skills in Machine Learning or Full Stack Web Development, making you industry-ready.
Networking: Connect with industry experts and like-minded peers, building a strong professional network.
Who Should Apply:
MCA/BE/BCA students with a keen interest in Machine Learning or Full Stack Web Development.
Individuals looking to gain practical industry experience and enhance their technical skills.
Self-motivated learners eager to work on real-world projects and solve challenging problems.
How to Apply:
Interested candidates can apply by visiting our website and completing the registration process. Remember, spots are limited, and early applications are encouraged to secure your place in this enriching program.
Join REDHILL SOFTEC and take a significant step towards a successful career in technology!
REDHILL SOFTEC
Vijayanagar Bangalore
www.redhillsoftec.com
We look forward to welcoming passionate and driven interns to our team!
Principal Engineer
Work at the intersection of Energy, Weather & Climate Sciences and Artificial Intelligence
Responsibilities:
- Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.
- Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.
- Product Development - Contribute towards new product development through engineering solutions to business and market requirements. Interact with cross-functional teams to bring forward a technology perspective.
- Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted service delivery.
Requirements:
- Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent
- Proficient in python programming skills and expertise with data engineering and machine learning deployment
- Experience in databases including MySQL and NoSQL
- Experience in developing and maintaining critical and high availability systems will be given strong preference
- Experience working with AWS cloud platform.
- At Least 3 years experience leading a team of engineers and analysts
- Strong analytical and data driven approach to problem solving
Experience: 5 - 7 years
Location: Bangalore
Job Description
Data scientist with a strong background in data mining, machine learning, recommendation systems, and statistics. Should possess the signature strengths of a qualified mathematician, with the ability to apply concepts of mathematics and applied statistics, and specialisation in one or more of NLP, Computer Vision, Speech, or Data Mining, to develop models that provide effective solutions. A strong data engineering background with hands-on coding capabilities is needed to own and deliver outcomes.
A Master's or PhD degree in a highly quantitative field (Computer Science, Machine Learning, Operational Research, Statistics, Mathematics, etc.) or equivalent experience is required, along with 7+ years of industry experience in predictive modelling, data science, and analysis, prior experience in an ML or data scientist role, and a track record of building ML or DL models.
Responsibilities and skills:
- Work with our customers to deliver a ML / DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building & validating predictive models, and deploying completed models to deliver business impact to the organisation.
- Selecting features, building and optimising classifiers using ML techniques.
- Data mining using state-of-the-art methods; creating text mining pipelines to clean and process large unstructured datasets to reveal high-quality information and hidden insights using machine learning techniques.
- Should be able to appreciate and work on:
Computer Vision problems: for example, extracting rich information from images to categorise and process visual data; developing machine learning algorithms for object and image classification; experience using DBSCAN, PCA, Random Forests, and Multinomial Logistic Regression to select the best features for classifying objects.
OR
Deep understanding of NLP such as fundamentals of information retrieval, deep learning approaches, transformers, attention models, text summarisation, attribute extraction, etc. Preferable experience in one or more of the following areas: recommender systems, moderation of user generated content, sentiment analysis, etc.
OR
Experience of having worked in these areas: speech recognition, speech-to-text and vice versa, understanding of NLP and IR, text summarisation, and statistical and deep learning approaches to text processing.
- Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc. (a small example follows this list). Needs to appreciate deep learning frameworks like MXNet, Caffe2, Keras, and TensorFlow.
- Experience in working with GPUs to develop models and in handling terabyte-scale datasets.
- Experience with common data science toolkits, such as R, Weka, NumPy, MATLAB, mlr, MLlib, Scikit-learn, caret, etc.; excellence in at least one of these is highly desirable.
- Should be able to work hands-on in Python, R, etc. Should closely collaborate and work with engineering teams to iteratively analyse data using Scala, Spark, Hadoop, Kafka, Storm, etc.
- Experience with NoSQL databases and familiarity with data visualisation tools will be of great advantage.
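As flagged above, here is a small, self-contained comparison of the named classical algorithms on a toy dataset using scikit-learn; the dataset and hyperparameters are placeholders for illustration only.
```python
# Minimal sketch: comparing k-NN, Naive Bayes, and SVM classifiers on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Naive Bayes", GaussianNB()),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: {acc:.3f}")
```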
at StarApps Studio
We are building tools to help e-commerce merchants sell better. Our software powers more than 20,000 merchants worldwide, and we're just getting started.
At StarApps, we ship on quality and time. Our team deploys new code daily, and our production scale is massive. Your code will improve the shopping experience of more than 50 million shoppers daily – a demanding but incredibly rewarding responsibility.
We're looking for Software Developers with a passion for solving challenging problems with performant code, the ability to learn new technology quickly, and the ability to work in a team environment. This role is based in Pune, India. You will be closely working with our Merchant Success team to improve products. You'll have the creative freedom to make a real difference in the world of commerce, the support to bring your authentic self to work, and the chance to work with the best in the business.
Requirements for the role:
- A product-minded developer who cares about the "Why" - Why build this feature? How will we measure impact?
- A generalist (or a Full Stack developer) rather than a specialist, excited by problems that require a mix of frontend and backend skills, and unblocking anything that stands in the way of success.
- Experience in writing automated tests as part of your development workflow.
- The curiosity and passion to constantly learn new things; e-commerce is changing fast, and we need the people who work here to be able to change and learn fast.
- Familiarity with Ruby and/or Ruby on Rails, or the desire to learn quickly
- Strong logical mindset and ability to break problems into simple logical steps
Tech stack:
- React
- Ruby on Rails, Python
- Postgres
- AWS
It would be great if you have:
- A history of contributing to the developer community through code, documentation, mentoring, teaching, speaking or organizing events
- A passion for helping growing development teams and making others better
- Familiarity with development on a leading cloud provider (GCP, AWS, Azure, Aliyun, Tencent, etc.)
- A commitment and drive for quality, excellence, and results
- If some of this tech is new to you, that's ok! We know not everyone will come fully familiar with this stack, and we provide support to learn on the job.
What we offer:
- We care about you; therefore, you'll be offered a competitive salary.
- Opportunities to travel and work from exquisite locations with the team.
- Opportunities to go around the world to connect with our partners and to make new partnerships.
- We'll support your professional development however you need, whether with equipment, courses, books, or conferences.
- We offer one annual day off for charitable work and match your donations to a charity.
- Flexible holiday policy to help you plan your vacations better.
We will be conducting the assessment, interview, and onboarding of new members through video calls or in-person meetings. We would like you to join ASAP and relocate close to our office in Pune.
Position Description:
Amity University, Patna campus invites applications for a tenure-track Assistant Professor position in the Department of Computer Science. The successful candidate will demonstrate a strong commitment to teaching, research, and service in the field of computer science.
Responsibilities:
- Teach undergraduate and graduate courses in computer science, with a focus on [insert areas of specialization or interest, e.g., artificial intelligence, machine learning, software engineering, etc.].
- Develop and deliver innovative curriculum that incorporates industry best practices and emerging technologies.
- Advise and mentor undergraduate and graduate students in academic and career development.
- Conduct high-quality research leading to publications in peer-reviewed journals and presentations at conferences.
- Seek external funding to support research activities and contribute to the growth of the department.
- Participate in departmental and institutional service activities, including committee work, academic advising, and community outreach.
- Contribute to the collegial and collaborative atmosphere of the department through active engagement with colleagues and participation in departmental events and initiatives.
Qualifications:
- A Ph.D. in Computer Science, or at least thesis submitted
- Evidence of excellence in teaching at the undergraduate and/or graduate level.
- A strong record of research productivity, including publications in reputable journals and conferences.
- Demonstrated expertise in [insert areas of specialization].
- Ability to effectively communicate complex concepts to diverse audiences.
- Commitment to fostering an inclusive and equitable learning environment.
- Strong interpersonal skills and the ability to work collaboratively with students, faculty, and staff.
- Potential for leadership and contribution to the academic community.
Preferred Qualifications:
- Experience securing external research funding.
- Experience supervising undergraduate or graduate research projects.
- Experience with curriculum development and assessment.
- Experience with industry collaboration or technology transfer initiatives.
Application Process:
Interested candidates should submit a cover letter, curriculum vitae, statement of teaching philosophy, statement of research interests, evidence of teaching effectiveness (e.g., teaching evaluations), and contact information for three professional references. Review of applications will begin immediately and continue until the position is filled.
Company Name : LMES Academy Private Limited
Website : https://lmes.in/
Linkedin : https://www.linkedin.com/company/lmes-academy/mycompany/
Role : Machine Learning Engineer
Experience: 2 Year to 4 Years
Location: Urapakkam, Chennai, Tamil Nadu.
Job Overview:
We are looking for a Machine Learning Engineer to join our team and help us advance our AI capabilities.
Requirements
• Model Training and Fine-Tuning: Utilize and refine large language models using techniques such as distillation and supervised fine-tuning to enhance performance and efficiency.
• Retrieval-Augmented Generation (RAG): Good understanding of RAG systems to improve the quality and relevance of generated content (a minimal retrieval sketch follows this list).
• Vector Databases: Familiarity with vector databases to support fast and accurate similarity searches and other ML-driven functionalities.
• API Integration: Proficiency with REST APIs and integrating third-party APIs, including OpenAI, Google Vertex AI, and Cloudflare Workers AI, to extend our AI capabilities.
• Generative AI: Experience with generative AI applications, including text-to-image, speech recognition, and text-to-speech systems.
• Collaboration: Work collaboratively with cross-functional teams, including data scientists, developers, and product managers, to deliver innovative AI solutions.
• Adaptability: Thrive in a fast-paced environment with loosely defined tasks and competing priorities, ensuring timely delivery of high-quality results.
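A minimal sketch of the RAG retrieval step referenced above: embed a handful of documents, embed the query, and select the closest passage to place in the generation prompt. The sentence-transformers model and the sample documents are assumptions; a vector database would replace the brute-force search at scale.
```python
# Minimal sketch: the retrieval half of a RAG pipeline with dense embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within 7 working days.",
    "Courses can be rescheduled up to 24 hours before the session.",
    "Certificates are issued after completing all assignments.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

query = "How long does a refund take?"
query_vec = encoder.encode([query], normalize_embeddings=True)[0]

# Vectors are normalized, so a dot product is cosine similarity; a vector
# database would handle this lookup at scale.
scores = doc_vecs @ query_vec
best = docs[int(np.argmax(scores))]
print(best)  # passage that would be placed in the LLM prompt
```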
Immediate opening for a Senior Technical Trainer for our company Vy TCDC
(https://www.vytcdc.com/) in Karur
We are back in the office so no work from home.
Role: Technical Trainer
Minimum experience: 0 to 4 years
Job Types: Full-time, Regular / Permanent
Location: Karur
Notice Period: Expecting immediate joiners
Job Description
• Design effective training programs
• Train all candidates
• Conduct seminars, workshops, individual training sessions, etc.
• Monitor candidate performance and response to training
• Strong knowledge of React.js, Java, React Native, Angular, Node.js, Python, and full-stack development.
• Conduct evaluations to identify areas of improvement
The world's first real-time marketing automation platform, built on an intelligent and secure Customer Data Platform, orchestrating 1-to-1 personalization and cross-channel customer journeys at scale to increase conversion, retention, and growth for enterprises.
Experience Required:
- 6+ years of data science experience.
- Demonstrated experience in leading programs.
- Prior experience in customer data platforms/finance domain is a plus.
- Demonstrated ability in developing and deploying data-driven products.
- Experience of working with large datasets and developing scalable algorithms.
- Hands-on experience of working with tech, product, and operation teams.
Key Responsibilities:
Technical Skills:
- Deep understanding and hands-on experience of Machine learning and Deep learning algorithms. Good understanding of NLP and LLM concepts and fair experience in developing NLU and NLG solutions.
- Experience with Keras/TensorFlow/PyTorch deep learning frameworks.
- Proficient in scripting languages (Python/Shell), SQL.
- Good knowledge of Statistics.
- Experience with big data, cloud, and MLOps.
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent presentation and communication skills.
- Ability to work independently and deal with ambiguity.
Continuous Learning:
- Stay up to date with emerging technologies.
Qualifications:
A degree (e.g., B.Tech) in Computer Science, Statistics, Applied Mathematics, Machine Learning, or any related field.
About Springworks
At Springworks, we're on a mission to revolutionize the world of People Operations. With our innovative tools and products, we've already empowered over 500,000+ employees across 15,000+ organizations and 60+ countries in just a few short years.
But what sets us apart? Let us introduce you to our exciting product stack:
- SpringVerify: Our B2B background verification platform
- EngageWith: Spark vibrant cultures! Our recognition platform adds magic to work.
- Trivia: Fun remote team-building! Real-time games for strong bonds.
- SpringRole: Future-proof profiles! Blockchain-backed skill showcase.
- Albus: AI-powered workplace search and knowledge bot for companies
Join us at Springworks and be part of the remote work revolution. Get ready to work, play, and thrive in an environment that's anything but ordinary!
Role Overview
This role is for our Albus team. As an SDE 2 at Springworks, you will be responsible for designing, developing, and maintaining robust, scalable, and efficient web applications. You will work closely with cross-functional teams, turning innovative ideas into tangible, user-friendly products. The ideal candidate has a strong foundation in both front-end and back-end technologies, with a focus on Python, Node.js, and React.js. Experience in Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Processing (NLP) will be a significant advantage.
Responsibilities:
- Collaborate with product management and design teams to understand user requirements and translate them into technical specifications.
- Develop and maintain server-side logic using Node.js and Python.
- Design and implement user interfaces using React.js with focus on user experience.
- Build reusable and efficient code for future use.
- Implement security and data protection measures.
- Collaborate with other team members and stakeholders to ensure seamless integration of front-end and back-end components.
- Troubleshoot and debug complex issues, identifying root causes and implementing effective solutions.
- Stay up-to-date with the latest industry trends, technologies, and best practices to drive innovation within the team.
- Participate in architectural discussions and contribute to technical decision-making processes.
Goals (not limited to):
1 month into the job:
- Become familiar with the company's products, codebase, development tools, and coding standards. Aim to understand the existing architecture and code structure.
- Ensure that your development environment is fully set up and configured, and you are comfortable with the team's workflow and tools.
- Start contributing to the development process by taking on smaller tasks or bug fixes. Ensure that your code is well-documented and follows the team's coding conventions.
- Begin collaborating effectively with team members, attending daily stand-up meetings, and actively participating in discussions and code reviews.
- Understand the company's culture, values, and long-term vision to align your work with the company's goals.
3 months into the job:
- Be able to independently design, develop, and deliver small to medium-sized features or improvements to the product.
- Demonstrate consistent improvement in writing clean, efficient, and maintainable code. Receive positive feedback on code reviews.
- Continue to actively participate in team meetings, offer suggestions for process improvements, and collaborate effectively with colleagues.
- Start assisting junior team members or interns by sharing knowledge and providing mentorship.
- Seek feedback from colleagues and managers to identify areas for improvement and implement necessary changes.
6 months into the job:
- Take ownership of significant features or projects, from conception to deployment, demonstrating leadership in technical decision-making.
- Identify areas of the codebase that can benefit from refactoring or performance optimizations and work on these improvements.
- Propose and implement process improvements that enhance the team's efficiency and productivity.
- Continue to expand your technical skill set, potentially by exploring new technologies or frameworks that align with the company's needs.
- Strengthen your collaboration with other departments, such as product management or design, to ensure alignment between development and business objectives.
Requirements
- Minimum 4 years of experience working with Python along with machine learning frameworks and NLP technologies.
- Strong understanding of microservices and messaging systems like SQS.
- Experience in designing and maintaining NoSQL databases (MongoDB)
- Familiarity with RESTful API design and implementation.
- Knowledge of version control systems (e.g., Git).
- Ability to work collaboratively in a team environment.
- Excellent problem-solving and communication skills, and a passion for learning. Essentially having a builder mindset is a plus.
- Proven ability to work on multiple projects simultaneously.
Nice to Have:
- Experience with containerization (e.g., Docker, Kubernetes).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).
- Knowledge of agile development methodologies.
- Contributions to open-source projects or a strong GitHub profile.
- Previous experience of working in a startup or fast paced environment.
- Strong understanding of front-end technologies such as HTML, CSS, and JavaScript.
About Company / Benefits
- Work from anywhere effortlessly with our remote setup perk: Rs. 50,000 for furniture and headphones, plus an annual addition of Rs. 5,000.
- We care about your well-being! Our health scheme covers not only your physical health but also your mental and social well-being. We've got you covered from head to toe!
- Say hello to endless possibilities with our learning and growth opportunities. We're here to fuel your curiosity and help you reach new heights.
- Take a breather and enjoy 30 annual paid leave days. It's time to relax, recharge, and make the most of your time off.
- Let's celebrate! We love company outings and celebrations that bring the team together for unforgettable moments and good vibes.
- We'll reimburse your workation trips, turning your travel dreams into reality.
- We've got your lifestyle covered. Treat yourself with our lifestyle allowance, which can be used for food, OTT, health/fitness, and more. Plus, we'll reimburse your internet expenses so you can stay connected wherever you go!
Join our remote team and experience the freedom and flexibility of asynchronous communication. Apply now!
Know more about Springworks:
- Life at Springworks: https://www.springworks.in/blog/category/life-at-springworks/
- Glassdoor Reviews: https://www.glassdoor.co.in/Overview/Working-at-Springworks-EI_IE1013270.11,22.htm
- More about Asynchronous Communication: https://www.springworks.in/blog/asynchronous-communication-remote-work/
at TensorIoT Software Services Private Limited, India
About TensorIoT
- AWS Advanced Consulting Partner (for ML and GenAI solutions)
- Pioneers in IoT and Generative AI products.
- Committed to diversity and inclusion in our teams.
TensorIoT is an AWS Advanced Consulting Partner. We help companies realize the value and efficiency of the AWS ecosystem. From building PoCs and MVPs to production-ready applications, we are tackling complex business problems every day and developing solutions to drive customer success.
TensorIoT's founders helped build world-class IoT and AI platforms at AWS and Google and are now creating solutions to simplify the way enterprises incorporate edge devices and their data into their day-to-day operations. Our mission is to help connect devices and make them intelligent. Our founders firmly believe in the transformative potential of smarter devices to enhance our quality of life, and we're just getting started!
TensorIoT is proud to be an equal-opportunity employer. This means that we are committed to diversity and inclusion and encourage people from all backgrounds to apply. We do not tolerate discrimination or harassment of any kind and make our hiring decisions based solely on qualifications, merit, and business needs at the time.
Job Description
At TensorIoT India team, we look forward to bringing on board senior Machine Learning Engineers / Data Scientists. In this section, we briefly describe the work role, the minimum and the preferred requirements to qualify for the first round of the selection process.
What are the kinds of tasks Data Scientists do at TensorIoT?
As a Data Scientist, the kinds of tasks revolve around the data that we have and the business objectives of the client. The tasks generally involve: studying, understanding, and analyzing datasets; feature engineering; proposing solutions; evaluating the solutions scientifically; and communicating with the client. Implementing ETL pipelines with database/data lake tools. Conducting and presenting scientific research/experiments within the team and to the client.
Minimum Requirements:
- Master's + 6 years of work experience in Machine Learning Engineering, OR B.Tech (Computer Science or related) + 8 years of work experience in Machine Learning Engineering, along with 3 years of cloud experience.
- Experience working with Generative AI (LLM), Prompt Engineering, Fine Tuning of LLMs.
- Hands-on experience in MLOps (model deployment, maintenance)
- Hands-on experience with Docker.
- Clear concepts of the following:
- - Supervised Learning, Unsupervised Learning, Reinforcement Learning
- - Statistical Modelling, Deep Learning
- - Interpretable Machine Learning
- Well-rounded exposure to Computer Vision, Natural Language Processing, and Time-Series Analysis.
- Scientific & Analytical mindset, proactive learning, adaptability to changes.
- Strong interpersonal and language skills in English, to communicate within the team and with the clients.
Preferred Qualifications:
- PhD in the domain of Data Science / Machine Learning
- M.Sc | M.Tech in the domain of Computer Science / Machine Learning
- Some experience in creating cloud-native technologies, and microservices design.
- Published scientific papers in the relevant domain of work.
CV Tips:
Your CV is an integral part of your application process. We would appreciate it if the CV prioritizes the following:
- Focus:
- More focus on technical skills relevant to the job description.
- Less or no focus on your roles and responsibilities as a manager, team lead, etc.
- Less or no focus on the design aspect of the document.
- Regarding the projects you completed in your previous companies,
- Mention the problem statement very briefly.
- Your role and responsibilities in that project.
- Technologies & tools used in the project.
- Always good to mention (if relevant):
- Scientific papers published, Master Thesis, Bachelor Thesis.
- Github link, relevant blog articles.
- Link to LinkedIn profile.
- Mention skills that are relevant to the job description and that you can demonstrate during the interview / tasks in the selection process.
We appreciate your interest in the company and look forward to your application.
at Accrete
Responsibilities:
- Collaborating with data scientists and machine learning engineers to understand their requirements and design scalable, reliable, and efficient machine learning platform solutions.
- Building and maintaining the applications and infrastructure to support end-to-end machine learning workflows, including inference and continual training.
- Developing systems for the definition, deployment, and operation of the different phases of the machine learning and data life cycles.
- Working within Kubernetes to orchestrate and manage containers, ensuring high availability and fault tolerance of applications.
- Documenting the platform's best practices, guidelines, and standard operating procedures and contributing to knowledge sharing within the team.
Requirements:
- 3+ years of hands-on experience in developing and managing machine learning or data platforms
- Proficiency in programming languages commonly used in machine learning and data applications such as Python, Rust, Bash, Go
- Experience with containerization technologies, such as Docker, and container orchestration platforms like Kubernetes.
- Familiarity with CI/CD pipelines for automated model training and deployment. Basic understanding of DevOps principles and practices.
- Knowledge of data storage solutions and database technologies commonly used in machine learning and data workflows.
Responsibilities
- Work on execution and scheduling of all tasks related to assigned projects' deliverable dates
- Optimize and debug existing codes to make them scalable and improve performance
- Design, development, and delivery of tested code and machine learning models into production environments
- Work effectively in teams, managing and leading teams
- Provide effective, constructive feedback to the delivery leader
- Manage client expectations and work with an agile mindset with machine learning and AI technology
- Design and prototype data-driven solutions
Eligibility
- Highly experienced in designing, building, and shipping scalable and production-quality machine learning algorithms in the field of Python applications
- Working knowledge and experience in NLP core components (NER, Entity Disambiguation, etc.)
- In-depth expertise in Data Munging and Storage (Experienced in SQL, NoSQL, MongoDB, Graph Databases)
- Expertise in writing scalable APIs for machine learning models
- Experience with maintaining code logs, task schedulers, and security
- Working knowledge of machine learning techniques, feed-forward, recurrent and convolutional neural networks, entropy models, supervised and unsupervised learning
- Experience with at least one of the following: Keras, Tensorflow, Caffe, or PyTorch
ML Engineer
HackerPulse is a new and growing company. We help software engineers showcase their skills using AI-powered profiles. As a Machine Learning Engineer, you will have the opportunity to contribute to the development and implementation of advanced Machine Learning (ML) and Natural Language Processing (NLP) solutions. You will play a crucial role in taking the innovative work done by our research team and turning it into practical solutions for production deployment. By applying to this job, you agree to receive communication from us.
*Make sure to fill out the link below*
To speed up the hiring process, kindly complete the following link: https://airtable.com/appcWHN5MIs3DJEj9/shriREagoEMhlfw84
Responsibilities:
- Contribute to the development of software and solutions, emphasizing ML/NLP as a key component, to productize research goals and deployable services.
- Collaborate closely with the frontend team and research team to integrate machine learning models into deployable services.
- Utilize and develop state-of-the-art algorithms and models for NLP/ML, ensuring they align with the product and research objectives.
- Perform thorough analysis to improve existing models, ensuring their efficiency and effectiveness in real-world applications.
- Engage in data engineering tasks to clean, validate, and preprocess data for uniformity and accuracy, supporting the development of robust ML models.
- Stay abreast of new developments in research and engineering in NLP and related fields, incorporating relevant advancements into the product development process.
- Actively participate in agile development methodologies within dynamic research and engineering teams, adapting to evolving project requirements.
- Collaborate effectively within cross-functional teams, fostering open communication and cooperation between research, development, and frontend teams.
- Actively contribute to building an open, transparent, and collaborative engineering culture within the organization.
- Demonstrate strong software engineering skills to ensure the reliability, scalability, and maintainability of deployable ML services.
- Take ownership of the end-to-end deployment process, including the deployment of ML models to production environments.
- Work on continuous improvement of deployment processes and contribute to building a seamless pipeline for deploying and monitoring ML models in real-world applications.
Qualifications:
- Degree in Computer Science or related discipline or equivalent practical experience, with a strong emphasis on machine learning and natural language processing.
- Proven experience and in-depth knowledge of ML techniques, with a focus on implementing deep-learning approaches for NLP tasks in the context of productizing research goals.
- Ability to apply engineering best practices to make architectural and design decisions aligned with functionalities, user experience, performance, reliability, and scalability in the development of deployable ML services.
- Substantial experience in software development using Python, Java, and/or C or C++, with a particular emphasis on integrating machine learning models into production-ready software solutions.
- Demonstrated problem-solving skills, showcasing the ability to address complex situations effectively, especially in the context of improving models, data engineering, and deployment processes.
- Strong interpersonal and communication skills, essential for effective collaboration within cross-functional teams consisting of research, development, and frontend teams.
- Proven time management skills to handle dynamic and agile development situations, ensuring timely delivery of solutions in a fast-paced environment.
- Self-motivated contributor who frequently takes initiative to enhance the codebase and share best practices, contributing to the development of an open, transparent, and collaborative engineering culture.
Who we are looking for
· A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
· Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
· Mentor and coach other team members
· Evaluate the performance of NLP models and ideate on how they can be improved
· Support internal and external NLP-facing APIs
· Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
· Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioural Skills
· Strong analytical and problem-solving capabilities.
· Proven ability to multi-task and deliver results within tight time frames
· Must have strong verbal and written communication skills
· Strong listening skills and eagerness to learn
· Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
· NLP
· Deep Learning
· Machine Learning
· Python
· BERT
Preferred Requirements
· Experience in Computer Vision is preferred
Role: Data Scientist
Industry Type: Banking
Department: Data Science & Analytics
Employment Type: Full Time, Permanent
Role Category: Data Science & Machine Learning
Data engineers:
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. This also includes developing and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity.
Constructing infrastructure for efficient ETL processes from various sources and storage systems (a minimal sketch follows this list).
Collaborating closely with Product Managers and Business Managers to design technical solutions aligned with business requirements.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, utilities, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.
Writing unit/integration tests and contributing to documentation.
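As referenced above, here is a minimal batch ETL sketch in PySpark of the kind of pipeline this role describes; the S3 paths, schema, and aggregation logic are invented for illustration and are not a specification:

```python
# Illustrative only: a minimal batch ETL job in PySpark.
# The S3 paths, columns, and aggregation are invented for this sketch.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")          # assumed source
clean = (raw
         .dropna(subset=["user_id", "event_ts", "amount"])        # basic validation
         .withColumn("event_date", F.to_date("event_ts")))

daily = (clean.groupBy("event_date", "user_id")
              .agg(F.count("user_id").alias("events"),
                   F.sum("amount").alias("total_amount")))

(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_user_activity/"))  # assumed sink
spark.stop()
```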
Must have:
6 to 8 years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
High proficiency in Scala/Java/Python, API frameworks/Swagger, and Spark for applied large-scale data processing.
Expertise with big data technologies and API development (e.g. Flask), including Spark, Data Lake, Delta Lake, and Hive.
Solid understanding of batch and streaming data processing techniques.
Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion.
Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
Experience with RDBMS and OLAP databases like MySQL, Redshift.
Familiarity with Agile methodologies.
Obsession for service observability, instrumentation, monitoring, and alerting.
Knowledge or experience in architectural best practices for building data pipelines.
Good to Have:
Passion for testing strategy, problem-solving, and continuous learning.
Willingness to acquire new skills and knowledge.
Possess a product/engineering mindset to drive impactful data solutions.
Experience working in distributed environments with teams scattered geographically.
Role Overview
We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies.
This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.
Responsibilities
- Lead the development of fintech solutions, focusing on fraud prevention and AML, using Typescript, ReactJs, Python, and SQL databases.
- Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
- Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
- Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
- Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
- Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.
Requirements
- 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
- Expertise in Typescript, ReactJs, and familiarity with Python.
- Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
- Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
- Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
- A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.
Why Join Us?
- Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
- Contribute to a company with ambitious goals to revolutionize software development and make a historical impact.
- Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
- Work in an environment that values innovation, leadership, and the long-term success of its employees.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar Python expert to help develop and deploy our AI pipeline. The main task will be deploying models and algorithms developed by our AI team, and keeping the daily production pipeline running. Our pipeline is centered around several microservices, all written in Python, that coordinate their actions through a database. We’re looking for developers with deep experience in Python including profiling and improving the performance of production code, multiprocessing / multithreading, and managing a pipeline that is constantly running. AI/ML experience is a plus, but not necessary. AWS / docker / CI/CD practices are also a plus. If you are a gamer or streamer, or enjoy watching video games and streams, that is also definitely a plus :-)
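For context only, here is a rough sketch of the general pattern described above: a Python worker that claims jobs from a shared database table and fans work out across processes. The table name, job states, SQL, and connection string are assumptions for illustration, not Sizzle's actual schema or code:

```python
# Hypothetical worker sketch: a Python microservice that claims jobs from a
# shared database table and processes them; names and schema are invented.
import time
from multiprocessing import Pool

from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@localhost/pipeline")  # assumed DSN

def claim_next_job():
    """Atomically claim one pending job so multiple workers don't collide."""
    with engine.begin() as conn:
        row = conn.execute(text(
            "UPDATE jobs SET status = 'running' "
            "WHERE id = (SELECT id FROM jobs WHERE status = 'pending' "
            "            ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED) "
            "RETURNING id, payload"
        )).fetchone()
    return row

def process(payload):
    # Placeholder for the actual AI component (e.g. highlight detection).
    time.sleep(1)
    return len(payload or "")

if __name__ == "__main__":
    with Pool(processes=4) as pool:           # CPU-bound work -> multiprocessing
        while True:
            job = claim_next_job()
            if job is None:
                time.sleep(5)                 # nothing pending; poll again later
                continue
            pool.apply_async(process, (job.payload,))
```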
You will be responsible for:
- Building Python scripts to deploy our AI components into pipeline and production
- Developing logic to ensure multiple different AI components work together seamlessly through a microservices architecture
- Managing our daily pipeline on both on-premise servers and AWS
- Working closely with the AI engineering, backend and frontend teams
You should have the following qualities:
- Deep expertise in Python including:
- Multiprocessing / multithreaded applications
- Class-based inheritance and modules
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Understanding Python performance bottlenecks, and how to profile and improve the performance of production code including:
- Optimal multithreading / multiprocessing strategies
- Memory bottlenecks and other bottlenecks encountered with large datasets and use of numpy / opencv / image processing
- Experience in creating soft real-time processing tasks is a plus
- Expertise in Docker-based virtualization including:
- Creating & maintaining custom Docker images
- Deployment of Docker images on cloud and on-premise services
- Experience with maintaining cloud applications in AWS environments
- Experience in deploying machine learning algorithms into production (e.g. PyTorch, tensorflow, opencv, etc) is a plus
- Experience with image processing in python is a plus (e.g. openCV, Pillow, etc)
- Experience with running Nvidia GPU / CUDA-based tasks is a plus (Nvidia Triton, MLFlow)
- Knowledge of video file formats (mp4, mov, avi, etc.), encoding, compression, and using ffmpeg to perform common video processing tasks is a plus.
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 4 years to 8 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Sizzle works with thousands of gaming streamers to automatically create highlights and social content for them. Sizzle is available at www.sizzle.gg.
at Tiger Analytics
• Charting learning journeys with knowledge graphs.
• Predicting memory decay based upon an advanced cognitive model.
• Ensuring content quality via study-behavior anomaly detection.
• Recommending tags using NLP for complex knowledge.
• Auto-associating concept maps from loosely structured data.
• Predicting knowledge mastery.
• Personalizing search queries.
Requirements:
• 6+ years experience in AI/ML with end-to-end implementation.
• Excellent communication and interpersonal skills.
• Expertise in SageMaker, TensorFlow, MXNet, or equivalent.
• Expertise with databases (e.g. NoSQL, Graph).
• Expertise with backend engineering (e.g. AWS Lambda, Node.js).
• Passionate about solving problems in education.
Key Roles/Responsibilities:
• Develop an understanding of business obstacles, create solutions based on advanced analytics and draw implications for model development.
• Combine, explore and draw insights from data, often large and complex data assets from different parts of the business.
• Design and build explorative, predictive or prescriptive models, utilizing optimization, simulation and machine learning techniques.
• Prototype and pilot new solutions and be a part of the aim of ‘productifying’ those valuable solutions that can have impact at a global scale.
• Guide and coach other chapter colleagues to help solve data/technical problems at an operational level, and in methodologies to help improve development processes.
• Identify and interpret trends and patterns in complex data sets to enable the business to take data-driven decisions.
We are looking for
A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
Mentor and coach other team members
Evaluate the performance of NLP models and ideate on how they can be improved
Support internal and external NLP-facing APIs
Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
Strong analytical and problem-solving capabilities.
Proven ability to multi-task and deliver results within tight time frames
Must have strong verbal and written communication skills
Strong listening skills and eagerness to learn
Strong attention to detail and the ability to work efficiently in a team as well as individually
Hands-on experience with
NLP
Deep Learning
Machine Learning
Python
BERT
Title:
Software Engineer – Backend (Python)
About the Role:
Our team is responsible for building the backend components of the MLOps platform on AWS.
The backend components we build are the fundamental blocks for feature engineering, feature serving, model deployment and model inference in both batch and online modes.
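As a generic illustration of an online-inference component of this kind (not the platform's actual code), a FastAPI endpoint with an async feature lookup might look like the sketch below; the route, request model, and stubbed feature store are assumptions:

```python
# Hypothetical online-inference endpoint sketch with FastAPI + AsyncIO.
# The feature-store lookup and model are stubbed; all names are invented.
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScoreRequest(BaseModel):
    entity_id: str

async def fetch_features(entity_id: str) -> list[float]:
    await asyncio.sleep(0.01)          # stand-in for an async feature-store call
    return [0.1, 0.2, 0.3]

def predict(features: list[float]) -> float:
    return sum(features)               # stand-in for a real model's predict()

@app.post("/score")
async def score(req: ScoreRequest):
    features = await fetch_features(req.entity_id)
    return {"entity_id": req.entity_id, "score": predict(features)}

# Run locally with an ASGI server, e.g.: uvicorn app:app --reload
```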
What you’ll do here
• Design & build backend components of our MLOps platform on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in on-call rotation with the rest of the team to handle production incidents.
What you’ll need to succeed
Must have skills:
• At least 3+ years of professional backend web development experience with Python.
• Experience with web development frameworks such as Flask, Django or FastAPI.
• Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn etc.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with unit and functional testing frameworks.
• Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.
Nice to have skills:
• Experience with MLOps platforms such as AWS Sagemaker, Kubeflow or MLflow.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with DevOps & IaC tools such as Terraform, Jenkins etc.
• Experience with various Python packaging options such as Wheel, PEX or Conda.
Data-driven decision-making is core to advertising technology at AdElement. We are looking for sharp, disciplined, and highly quantitative machine learning / artificial intelligence engineers with big data experience and a passion for digital marketing to help drive informed decision-making. You will work with top talent and cutting-edge technology and have a unique opportunity to turn your insights into products influencing billions. The ideal candidate will have an extensive background in distributed training frameworks, experience deploying machine learning models end to end, and some experience in data-driven decision making for machine learning infrastructure enhancement. This is your chance to leave your legacy and be part of a highly successful and growing company.
Required Skills
- 3+ years of industry experience with Java/ Python in a programming intensive role
- 3+ years of experience with one or more of the following machine learning topics: classification, clustering, optimization, recommendation system, graph mining, deep learning
- 3+ years of industry experience with distributed computing frameworks such as Hadoop/Spark, Kubernetes ecosystem, etc
- 3+ years of industry experience with popular deep learning frameworks such as Spark MLlib, Keras, Tensorflow, PyTorch, etc
- 3+ years of industry experience with major cloud computing services
- An effective communicator with the ability to explain technical concepts to a non-technical audience
- (Preferred) Prior experience with ads product development (e.g., DSP/ad-exchange/SSP)
- Able to lead a small team of AI/ML Engineers to achieve business objectives
Responsibilities
- Collaborate across multiple teams - Data Science, Operations & Engineering on unique machine learning system challenges at scale
- Leverage distributed training systems to build scalable machine learning pipelines including ETL, model training and deployments in Real-Time Bidding space.
- Design and implement solutions to optimize distributed training execution in terms of model hyperparameter optimization, model training/inference latency and system-level bottlenecks
- Research state-of-the-art machine learning infrastructures to improve data healthiness, model quality and state management during the lifecycle of ML models refresh.
- Optimize integration between popular machine learning libraries and cloud ML and data processing frameworks.
- Build Deep Learning models and algorithms with optimal parallelism and performance on CPUs/ GPUs.
- Work with top management on defining team goals and objectives.
Education
- MTech or Ph.D. in Computer Science, Software Engineering, Mathematics or related fields
- A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
- Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- BERT
Preferred Requirements
- Experience in Computer Vision is preferred
About the Company:
ConveGenius is a leading Conversational AI company that is democratising educational and knowledge services for the mass market. Our knowledge bots have 35M users today. ConveGenius is building an omniverse on Conversational AI for the developer ecosystem to build together. We are looking for self-driven individuals who love to find innovative solutions and can perform under pressure. An eye for detail and pride in the code you produce are the must-have attributes for this job.
About the role:
At ConveGenius, we are committed to creating an inclusive and collaborative work environment that fosters creativity and innovation. Join our AI team as a Senior AI Engineer with a focus on developing cutting-edge technologies that make a real-life difference in the lives of millions of users. You will be involved in a wide range of projects, from designing and building AI capabilities for our SwiftChat platform to analysing large conversation datasets. An innovative nature and proactive involvement in the product are taken very seriously at ConveGenius; therefore, a major part of your role will involve thinking about new features and new ways to deliver a quality learning experience to our learners.
Responsibilities:
- Develop and implement state-of-the-art NLP, Computer Vision, Speech techniques.
- Collaborate with other AI researchers to develop innovative AI solutions and services to be used to improve our platform capabilities.
- Mentor and lead the experiments with other team members.
- Conduct research and experiments in deep learning, machine learning, and related fields to support the development of AI techniques.
Qualifications:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field with experience in machine learning and data analysis.
- 2-4 years of industry experience in machine learning, with a strong focus on NLP and Computer Vision.
- Hands-on experience with ML model development and deployment.
- Experience with training and fine-tuning LLMs is a plus.
- Experience in audio/speech technologies is a plus.
- Strong programming skills in Python, node.js or other relevant programming languages.
- Familiarity with machine learning libraries, such as TensorFlow, Keras, PyTorch, scikit-learn, Conversational frameworks like Rasa, DialogFlow, Lex, etc.
- Strong analytical and problem-solving skills, with the ability to think out of the box.
- Self-starter with the ability to work independently and in a team environment.
at Blue Hex Software Private Limited
In this position, you will play a pivotal role in collaborating with our CFO, CTO, and our dedicated technical team to craft and develop cutting-edge AI-based products.
Role and Responsibilities:
- Develop and maintain Python-based software applications.
- Design and work with databases using SQL.
- Use Django, Streamlit, and front-end frameworks like Node.js and Svelte for web development.
- Create interactive data visualizations with charting libraries.
- Collaborate on scalable architecture and experimental tech.
- Work with AI/ML frameworks and data analytics.
- Utilize Git, DevOps basics, and JIRA for project management.
Skills and Qualifications:
- Strong Python programming skills.
- Proficiency in OOP and SQL.
- Experience with Django, Streamlit, Node.js, and Svelte.
- Familiarity with charting libraries.
- Knowledge of AI/ML frameworks.
- Basic Git and DevOps understanding.
- Effective communication and teamwork.
Company details: We are a team of Enterprise Transformation Experts who deliver radically transforming products, solutions, and consultation services to businesses of any size. Our exceptional team of diverse and passionate individuals is united by a common mission to democratize the transformative power of AI.
Website: Blue Hex Software – AI | CRM | CXM & DATA ANALYTICS
Job Title: AI/ML Engineer
Experience: 5+ Years
Location: Remote
Responsibilities
Algorithm Development:
• Design and develop cutting-edge algorithms for solving complex business problems.
• Collaborate with cross-functional teams to understand business requirements and translate them into effective AI/ML solutions.
Content Customization:
• Implement content customization strategies to enhance user experience and engagement.
• Work closely with stakeholders to tailor AI/ML models for specific business needs.
Data Collection and Preprocessing:
• Develop robust data collection strategies to ensure the availability of high-quality datasets.
• Implement preprocessing techniques to clean and prepare data for model training.
Model Training and Evaluation:
• Utilize machine learning frameworks such as TensorFlow, Pytorch, and Scikit Learn for model training.
• Conduct rigorous evaluation of models to ensure accuracy and reliability.
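Purely as a generic example of the training-and-evaluation loop referred to above (synthetic data, not a prescribed approach for the role), a minimal PyTorch version might look like:

```python
# Hypothetical sketch: a minimal PyTorch training-and-evaluation loop on synthetic data.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)          # synthetic binary labels
X_train, X_val = X[:400], X[400:]
y_train, y_val = y[:400], y[400:]

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):                               # simple training loop
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()

with torch.no_grad():                                 # held-out evaluation
    preds = (torch.sigmoid(model(X_val)) > 0.5).float()
    acc = (preds == y_val).float().mean().item()
print(f"validation accuracy: {acc:.3f}")
```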
Monitoring and Optimization:
• Implement monitoring systems to track the performance of deployed models.
• Continuously optimize models for better efficiency and accuracy.
Data Analytics and Reporting:
• Leverage statistical modelling techniques to derive meaningful insights from data.
• Generate reports and dashboards to communicate findings to stakeholders.
Documentation:
• Create comprehensive documentation for algorithms, models, and implementation details.
• Provide documentation for training, deployment, and maintenance procedures.
Skills:
- 6+ years of experience in AI/ML engineering.
- Proficiency in Python and machine learning frameworks (TensorFlow, Pytorch, Scikit Learn).
- Experience in Natural Language Processing (NLP) and Computer Vision.
- Expertise in Generative AI techniques.
- Familiarity with cloud platforms such as Azure and AWS.
- Strong background in statistical modelling and content customization.
- Excellent problem-solving skills and ability to work independently.
- Strong communication and collaboration skills
Experience: 1- 5 Years
Job Location: WFH
No. of Position: Multiple
Qualifications: Ph.D. (must have)
Work Timings: 1:30 PM IST to 10:30 PM IST
Functional Area: Data Science
NextGen Invent is currently searching for a Data Scientist. This role will report directly to the VP, Data Science in the Data Science Practice. The person will work on data science use cases for the enterprise and must have deep expertise in supervised and unsupervised machine learning, modeling, and algorithms, with a strong focus on delivering use cases and solutions at speed and scale to solve business problems.
Job Responsibilities:
- Leverage AI/ML modeling and algorithms to deliver on use cases
- Build modeling solutions at speed and scale to solve business problems
- Develop data science solutions that can be tested and deployed in Agile delivery model
- Implement and scale-up high-availability models and algorithms for various business and corporate functions
- Investigate and create experimental prototypes that work on specific domains and verticals
- Analyze large, complex data sets to reveal underlying patterns, and trends
- Support and enhance existing models to ensure better performance
- Set up and conduct large-scale experiments to test hypotheses and delivery of models
Skills, Knowledge, Experience:
- Must have Ph.D. in an analytical or technical field (e.g. applied mathematics, computer science)
- Strong knowledge of statistical and machine learning methods
- Hands-on experience building models at speed and scale
- Ability to work in a collaborative, transparent style with cross-functional stakeholders across the organization to lead and deliver results
- Strong skills in oral and written communication
- Ability to lead a high-functioning team and develop and train people
- Must have programming experience in SQL, Python and R
- Experience conceiving, implementing and continually improving machine learning projects
- Strong familiarity with higher level trends in artificial intelligence and open-source platforms
- Experience working with AWS, Azure, or similar cloud platform
- Familiarity with visualization techniques and software
- Healthcare experience is a plus
- Experience with Kafka, chatbots, and blockchain is a plus.
Job Title:- Head of Analytics
Job Location:- Bangalore - On - site
About Qrata:
Qrata matches top talent with global career opportunities from the world’s leading digital companies, including some of the world’s fastest-growing start-ups, using Qrata’s talent marketplaces. To sign up, please visit Qrata Talent Sign-Up.
We are currently scouting for Head of Analytics
Our Client Story:
Founded by a team of seasoned bankers with over 120 years of collective experience in banking, financial services and cards, encompassing strategy, operation, marketing, risk & technology, both in India and internationally.
We offer a credit card processing solution that helps you effectively manage your credit card portfolio end-to-end. These solutions are customized to meet the unique strategic, operational, and compliance requirements of each bank.
1. Card Programs built for Everyone: Limit assignment based on customer risk assessment & credit profiles, including secured cards
2. Cards that can be used Everywhere: Through POS machines, UPI, and e-commerce websites
3. A Card for Everything: Enable customer purchases, both large and small
4. Customized Card configurations: Restrict usage based on merchant codes, location, amount limits, etc.
5. End-to-End Support: We undertake the complete customer life cycle management, right from KYC checks, onboarding, risk profiling, fraud control, billing and collections
6. Rewards Program Management: We will manage the entire card rewards and customer loyalty programs for you
What you will do:
We are seeking an experienced individual for the role of Head of Analytics. As the Head of Analytics, you will be responsible for driving data-driven decision-making, implementing advanced analytics strategies, and providing valuable insights to optimize our credit card business operations, sales and marketing, risk management & customer experience. Your expertise in statistical analysis, predictive modelling, and data visualization will be instrumental in driving growth and enhancing the overall performance of our credit card business
Qualification:
- Bachelor's or master’s degree in Technology, Mathematics, Statistics, Economics, Computer Science, or a related field
- Proven experience (7+ years) in leading analytics teams in the credit card industry
- Strong expertise in statistical analysis, predictive modelling, data mining, and segmentation techniques
- Proficiency in data manipulation and analysis using programming languages such as Python, R, or SQL
- Experience with analytics tools such as SAS, SPSS, or Tableau
- Excellent leadership and team management skills, with a track record of building and developing high-performing teams
- Strong knowledge of credit card business and understanding of credit card industry dynamics, including risk management, marketing, and customer lifecycle
- Exceptional communication and presentation skills, with the ability to effectively communicate complex information to a varied audience
What you can expect:
1. Develop and implement Analytics Strategy:
o Define the analytics roadmap for the credit card business, aligning it with overall business objectives
o Identify key performance indicators (KPIs) and metrics to track the performance of the credit card business
o Collaborate with senior management and cross-functional teams to prioritize and execute analytics initiatives
2. Lead Data Analysis and Insights:
o Conduct in-depth analysis of credit card data, customer behaviour, and market trends to identify opportunities for business growth and risk mitigation
o Develop predictive models and algorithms to assess credit risk, customer segmentation, acquisition, retention, and upsell opportunities
o Generate actionable insights and recommendations based on data analysis to optimize credit card product offerings, pricing, and marketing strategies
o Regularly present findings and recommendations to senior leadership, using data visualization techniques to effectively communicate complex information
3. Drive Data Governance and Quality:
o Oversee data governance initiatives, ensuring data accuracy, consistency, and integrity across relevant systems and platforms
o Collaborate with IT teams to optimize data collection, integration, and storage processes to support advanced analytics capabilities
o Establish and enforce data privacy and security protocols to comply with regulatory requirements
4. Team Leadership and Collaboration:
o Build and manage a high-performing analytics team, fostering a culture of innovation, collaboration, and continuous learning
o Provide guidance and mentorship to the team, promoting professional growth and development
o Collaborate with stakeholders across departments, including Marketing, Risk Management, and Finance, to align analytics initiatives with business objectives
5. Stay Updated on Industry Trends:
o Keep abreast of emerging trends, techniques, and technologies in analytics, credit card business, and the financial industry
o Leverage industry best practices to drive innovation and continuous improvement in analytics methodologies and tools
For more Opportunities Visit: Qrata Opportunities.
Job Summary:
We are seeking a highly skilled Enterprise Architect with expertise in Artificial Intelligence (AI), Microservices, and a background in insurance and healthcare to lead our organization's AI strategy, design AI solutions, and ensure alignment with business objectives. The ideal candidate will have a deep understanding of AI technologies, data analytics, cloud computing, software architecture, microservices, as well as experience in the insurance and healthcare sectors. They should be able to translate these concepts into practical solutions that drive innovation and efficiency within our enterprise. Additionally, this role will involve setting up monitoring systems to ensure the performance and reliability of our AI and microservices solutions.
Responsibilities:
AI Strategy Development:
- Collaborate with senior management to define and refine the AI strategy that aligns with the organization's goals and objectives.
- Identify opportunities to leverage AI and machine learning technologies to enhance business processes and create value.
Solution Design:
- Architect AI-driven solutions that meet business requirements, ensuring scalability, reliability, and security.
- Collaborate with cross-functional teams to define system architecture and design principles for AI applications and microservices.
- Evaluate and select appropriate AI technologies, microservices architectures, and frameworks for specific projects.
Data Management:
- Oversee data strategy, including data acquisition, preparation, and governance, to support AI and microservices initiatives.
- Design data pipelines and workflows to ensure high-quality, accessible data for AI models and microservices.
AI Model Development:
- Lead the development and deployment of AI and machine learning models.
- Implement best practices for model training, testing, and validation.
- Monitor and optimize model performance to ensure accuracy and efficiency.
Microservices Architecture:
- Define and implement microservices architecture patterns and best practices.
- Ensure that microservices are designed for scalability, flexibility, and resilience.
- Collaborate with development teams to build and deploy microservices-based applications.
Monitoring Systems:
- Set up monitoring systems for AI and microservices solutions to ensure performance, reliability, and security.
- Implement proactive alerting and reporting mechanisms to identify and address issues promptly.
Integration and Deployment:
- Work with IT teams to integrate AI solutions and microservices into existing systems and applications.
- Ensure seamless deployment and monitoring of AI and microservices solutions in production environments.
Compliance and Security:
- Ensure that AI solutions, microservices, and monitoring systems comply with relevant regulations and data privacy standards.
- Implement security measures to protect AI models, data, and microservices.
Stakeholder Communication:
- Collaborate with business stakeholders to gather requirements and provide regular updates on AI and microservices project progress.
- Translate technical concepts into non-technical language for various audiences.
Research and Innovation:
- Stay up-to-date with emerging AI, microservices, and cloud computing trends, technologies, and best practices.
- Identify opportunities for innovation and propose new AI and microservices initiatives to drive business growth.
Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Proven experience (X+ years) as an Enterprise Architect with a focus on AI, Microservices, and machine learning.
- Strong knowledge of AI technologies, including deep learning, natural language processing, computer vision, and reinforcement learning.
- Proficiency in data analytics, cloud computing platforms (e.g., AWS, Azure, GCP), big data technologies, and microservices architecture.
- Experience with AI model development, deployment, and monitoring, as well as microservices design and implementation.
- Excellent communication and interpersonal skills.
- Ability to lead cross-functional teams and drive innovation.
- Strong problem-solving and critical-thinking abilities.
- Knowledge of regulatory requirements and data privacy standards related to AI and microservices.
Preferred Qualifications:
- AI-related certifications (e.g., AWS Certified Machine Learning – Specialty, Google Cloud Professional Machine Learning Engineer, etc.).
- Experience in industries such as insurance and healthcare, with a deep understanding of their specific challenges and requirements.
- Previous experience with enterprise architecture frameworks (e.g., TOGAF).
This Enterprise Architect with AI, Microservices, Insurance, and Healthcare Experience role offers an exciting opportunity to shape the AI strategy, microservices architecture, and drive innovation within our organization, particularly in the insurance and healthcare sectors. Additionally, you will play a key role in setting up monitoring systems to ensure the performance and reliability of our AI and microservices solutions. If you have a passion for AI, microservices, and a strong background in these industries, we encourage you to apply and be part of our dynamic team.
Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending company’s data with third party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constant tracking of their performance
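As an illustration of the anomaly-detection responsibility above (a generic sketch on synthetic data, not the company's system), scikit-learn's IsolationForest can flag outliers like this:

```python
# Hypothetical anomaly-detection sketch using IsolationForest on synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))     # typical behaviour
outliers = rng.uniform(low=6, high=8, size=(10, 3))        # injected anomalies
data = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=0).fit(data)
labels = detector.predict(data)                            # -1 = anomaly, 1 = normal
print("flagged anomalies:", int((labels == -1).sum()))
```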
Key Skills
- Hands-on experience with analysis tools like R and advanced Python
- Must-have knowledge of statistical techniques and machine learning algorithms
- Artificial Intelligence
- Understanding of text analysis / Natural Language Processing (NLP)
- Knowledge of Google Cloud Platform
- Advanced Excel and PowerPoint skills
- Advanced communication (written and oral) and strong interpersonal skills
- Ability to work cross-culturally
- Good to have: Deep Learning
- VBA and visualization tools like Tableau, Power BI, Qlik Sense, and QlikView will be an added advantage
· Qualification: bachelor’s or master’s degree in computer science, Engineering, or related field.
· Proven 4+ years of experience in full stack development, including proficiency with React, Tailwind CSS, Python, and Flask.
· Strong understanding of GraphQL concepts and hands-on experience with Ariadne or similar frameworks.
· Familiarity with containerization using Docker for application deployment.
· Demonstrated expertise in machine learning model integration, preferably using the Hugging Face framework.
· Experience deploying machine learning models on cloud platforms such as Replicate and AWS is a strong plus.
· Excellent problem-solving skills and a proactive attitude towards learning and adapting to new technologies.
· Should have experience with: Backend: Python, Flask framework, LangChain MLOps framework, Ariadne GraphQL framework, Docker. Frontend: React, Tailwind. ML model development: Hugging Face model training. ML deployment: onto Replicate or AWS.
Lifespark is looking for individuals with a passion for impacting real lives through technology. Lifespark is one of the most promising startups in the Assistive Tech space in India, and has been honoured with several National and International awards. Our mission is to create seamless, persistent and affordable healthcare solutions. If you are someone who is driven to make a real impact in this world, we are your people.
Lifespark is currently building solutions for Parkinson’s Disease, and we are looking for an ML lead to join our growing team. You will be working directly with the founders on high-impact problems in the Neurology domain. You will be solving some of the most fundamental and exciting challenges in the industry and will have the ability to see your insights turned into real products every day.
Essential experience and requirements:
1. Advanced knowledge in the domains of computer vision, deep learning
2. Solid understanding of Statistical / Computational concepts like Hypothesis Testing, Statistical Inference, Design of Experiments, and production-level ML system design
3. Experienced with proper project workflow
4. Good at collating multiple datasets (potentially from different sources)
5. Good understanding of setting up production level data pipelines
6. Ability to independently develop and deploy ML systems to various platforms (local and cloud)
7. Fundamentally strong with time-series data analysis, cleaning, featurization and visualisation
8. Fundamental understanding of model and system explainability
9. Proactive at constantly unlearning and relearning
10. Documentation ninja - can understand others' documentation as well as create good documentation
Responsibilities :
1. Develop and deploy ML based systems built upon healthcare data in the Neurological domain
2. Maintain deployed systems and upgrade them through online learning
3. Develop and deploy advanced online data pipelines
Are you passionate about pushing the boundaries of Artificial Intelligence and its applications in the software development lifecycle? Are you excited about building AI models that can revolutionize how developers ship, refactor, and onboard to legacy or existing applications faster? If so, Zevo.ai has the perfect opportunity for you!
As an AI Researcher/Engineer at Zevo.ai, you will play a crucial role in developing cutting-edge AI models using CodeBERT and codexGLUE to achieve our goal of providing an AI solution that supports developers throughout the sprint cycle. You will be at the forefront of research and development, harnessing the power of Natural Language Processing (NLP) and Machine Learning (ML) to revolutionize the way software development is approached.
Responsibilities:
- AI Model Development: Design, implement, and refine AI models utilizing CodeBERT and codexGLUE to comprehend codebases, facilitate code understanding, automate code refactoring, and enhance the developer onboarding process (a generic code-embedding sketch follows this list).
- Research and Innovation: Stay up-to-date with the latest advancements in NLP and ML research, identifying novel techniques and methodologies that can be applied to Zevo.ai's AI solution. Conduct experiments, perform data analysis, and propose innovative approaches to enhance model performance.
- Data Collection and Preparation: Collaborate with data engineers to identify, collect, and preprocess relevant datasets necessary for training and evaluating AI models. Ensure data quality, correctness, and proper documentation.
- Model Evaluation and Optimization: Develop robust evaluation metrics to measure the performance of AI models accurately. Continuously optimize and fine-tune models to achieve state-of-the-art results.
- Code Integration and Deployment: Work closely with software developers to integrate AI models seamlessly into Zevo.ai's platform. Ensure smooth deployment and monitor the performance of the deployed models.
- Collaboration and Teamwork: Collaborate effectively with cross-functional teams, including data scientists, software engineers, and product managers, to align AI research efforts with overall company objectives.
- Documentation: Maintain detailed and clear documentation of research findings, methodologies, and model implementations to facilitate knowledge sharing and future developments.
- Ethics and Compliance: Ensure compliance with ethical guidelines and legal requirements related to AI model development, data privacy, and security.
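As referenced in the model-development bullet above, a minimal, generic sketch of extracting code embeddings with the publicly available microsoft/codebert-base checkpoint is shown below; this is illustrative only and not Zevo.ai's pipeline:

```python
# Hypothetical sketch: embed a code snippet with the public CodeBERT checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into one vector representing the snippet.
embedding = outputs.last_hidden_state.mean(dim=1).squeeze(0)
print(embedding.shape)   # torch.Size([768])
```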
Requirements
- Educational Background: Bachelor's/Master's or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related field. A strong academic record with a focus on NLP and ML is highly desirable.
- Technical Expertise: Proficiency in NLP, Deep Learning, and experience with AI model development using frameworks like PyTorch or TensorFlow. Familiarity with CodeBERT and codexGLUE is a significant advantage.
- Programming Skills: Strong programming skills in Python and experience working with large-scale software projects.
- Research Experience: Proven track record of conducting research in NLP, ML, or related fields, demonstrated through publications, conference papers, or open-source contributions.
- Problem-Solving Abilities: Ability to identify and tackle complex problems related to AI model development and software engineering.
- Team Player: Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team-oriented environment.
- Passion for AI: Demonstrated enthusiasm for AI and its potential to transform software development practices.
If you are eager to be at the forefront of AI research, driving innovation and impacting the software development industry, join Zevo.ai's talented team of experts as an AI Researcher/Engineer. Together, we'll shape the future of the sprint cycle and revolutionize how developers approach code understanding, refactoring, and onboarding!
We are seeking a talented and motivated AI Verification Engineer to join our team. The ideal candidate will be responsible for the validation of our AI and Machine Learning systems, ensuring that they meet all necessary quality assurance requirements and work reliably and optimally in real-world scenarios. The role requires strong analytical skills, a good understanding of AI and ML technologies, and a dedication to achieving excellence in the production of state-of-the-art systems.
Key Responsibilities:
- Develop and execute validation strategies and test plans for AI and ML systems, during development and on production environments.
- Work closely with AI/ML engineers and data scientists in understanding system requirements and capabilities and coming up with key metrics for system efficacy.
- Evaluate the system performance under various operating conditions, data variety, and scenarios.
- Perform functional, stress, system, and other testing types to ensure our systems' reliability and robustness.
- Create automated test procedures and systems for regular verification and validation processes, and detect anomalies in usage (see the sketch after this list).
- Report and track defects, providing detailed information to facilitate problem resolution.
- Lead the continuous review and improvement of validation and testing methodologies, procedures, and tools.
- Provide detailed reports and documentation on system performance, issues, and validation results.
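As referenced above, here is a generic sketch of one kind of automated validation check; the dataset, model, and 0.90 threshold are placeholders for illustration, not this team's actual criteria:

```python
# Hypothetical pytest sketch: gate a model behind a minimum-accuracy check.
# The dataset, model, and 0.90 threshold are placeholders for illustration.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_candidate_model():
    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    return model, X_te, y_te

def test_model_meets_accuracy_threshold():
    model, X_te, y_te = train_candidate_model()
    acc = accuracy_score(y_te, model.predict(X_te))
    assert acc >= 0.90, f"accuracy {acc:.3f} fell below the 0.90 release threshold"
```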
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven experience in the testing and validation of AI/ML systems or equivalent complex systems.
- Good knowledge and understanding of AI and ML concepts, tools, and frameworks.
- Proficient in scripting and programming languages such as Python, shell scripts etc.
- Experience with AI/ML platforms and libraries such as TensorFlow, PyTorch, Keras, or Scikit-Learn.
- Excellent problem-solving abilities and attention to detail.
- Strong communication skills, with the ability to document and explain complex technical concepts clearly.
- Ability to work in a fast-paced, collaborative environment.
Preferred Skills & Qualifications:
- A good understanding of various large language models, image models, and their comparative strengths and weaknesses.
- Knowledge of CI/CD pipelines and experience with tools such as Jenkins, Git, Docker.
- Experience with cloud platforms like AWS, Google Cloud, or Azure.
- Understanding of Data Analysis and Visualization tools and techniques.
The role is with a Fintech Credit Card company based in Pune within the Decision Science team. (OneCard )
About
Credit cards haven't changed much for over half a century so our team of seasoned bankers, technologists, and designers set out to redefine the credit card for you - the consumer. The result is OneCard - a credit card reimagined for the mobile generation. OneCard is India's best metal credit card built with full-stack tech. It is backed by the principles of simplicity, transparency, and giving back control to the user.
The Engineering Challenge
“Re-imaging credit and payments from First Principles”
Payments is an interesting engineering challenge in itself with requirements of low latency, transactional guarantees, security, and high scalability. When we add credit and engagement into the mix, the challenge becomes even more interesting with underwriting and recommendation algorithms working on large data sets. We have eliminated the current call center, sales agent, and SMS-based processes with a mobile app that puts the customers in complete control. To stay agile, the entire stack is built on the cloud with modern technologies.
Purpose of Role :
- Develop and implement the collection analytics and strategy function for the credit cards. Use analysis and customer insights to develop optimum strategy.
CANDIDATE PROFILE :
- Successful candidates will have in-depth knowledge of statistical modelling / data analysis tools and techniques (Python, R, etc.). They will be adept communicators with good interpersonal skills, able to work with senior stakeholders in India to grow revenue, primarily through identifying, delivering, and creating new, profitable analytics solutions.
We are looking for someone who:
- Has a proven track record in collection and risk analytics, preferably in the Indian BFSI industry (this is a must).
- Can identify and deliver appropriate analytics solutions.
- Is experienced in managing analytics teams.
Essential Duties and Responsibilities :
- Responsible for delivering high-quality analytical and value-added services.
- Responsible for automating insights and proactive actions on them to mitigate collection risk (see the roll-rate sketch after this list).
- Work closely with the internal team members to deliver the solution
- Engage business/technical consultants and delivery teams appropriately so that there is a shared understanding of, and agreement on, how to deliver the proposed solution.
- Use analysis and customer insights to develop value propositions for customers
- Maintain and enhance the suite of suitable analytics products.
- Actively seek to share knowledge within the team
- Share findings with peers from other teams and management where required
- Actively contribute to setting best practice processes.
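To make the "automating insights" duty concrete, here is a minimal sketch, assuming a pandas DataFrame of account-level delinquency snapshots. The column names, bucket coding, and alert threshold are illustrative assumptions, not the actual OneCard pipeline.

```python
# Minimal sketch of an automated collection insight: month-on-month roll rates
# by delinquency bucket, with an alert flag when a bucket deteriorates.
import pandas as pd

ROLL_RATE_ALERT = 0.30  # example threshold agreed with the collections team

def roll_rate_report(snapshots: pd.DataFrame) -> pd.DataFrame:
    """Share of accounts in each delinquency bucket that rolled into a deeper
    bucket between two month-end snapshots."""
    rolled = snapshots["bucket_next_month"] > snapshots["bucket_this_month"]
    report = (snapshots.assign(rolled_forward=rolled)
                        .groupby("bucket_this_month")["rolled_forward"]
                        .mean()
                        .rename("roll_rate")
                        .reset_index())
    report["alert"] = report["roll_rate"] > ROLL_RATE_ALERT
    return report

if __name__ == "__main__":
    demo = pd.DataFrame({
        "bucket_this_month": [0, 0, 1, 1, 1, 2, 2],   # 0 = current, 1 = 1-30 DPD, ...
        "bucket_next_month": [0, 1, 1, 2, 2, 2, 3],
    })
    print(roll_rate_report(demo))
```

A report like this could be scheduled to run after each month-end close, with flagged segments routed to the collections team automatically.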
Knowledge, Experience and Qualifications :
Knowledge :
- Good understanding of collection analytics, preferably in the retail lending industry.
- Knowledge of statistical modelling and data analysis tools (Python, R, etc.), techniques, and market trends.
- Knowledge of different modelling frameworks such as linear regression, logistic regression, multiple regression, LOGIT, PROBIT, time-series modelling, CHAID, CART, etc.
- Knowledge of machine learning and AI algorithms such as gradient boosting, KNN, etc. (a brief scoring sketch follows this list).
- Understanding of decisioning and portfolio management in banking and financial services would be an added advantage.
- Understanding of credit bureaus would be an added advantage.
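As a rough illustration of the modelling frameworks listed above, the sketch below compares a logistic regression baseline with a gradient-boosting model for a collection risk target. The synthetic data, feature names, and coefficients are assumptions for illustration only.

```python
# Minimal sketch: logistic regression vs gradient boosting for predicting the
# probability that a delinquent account rolls forward. Data is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "days_past_due": rng.integers(1, 90, n),
    "utilisation": rng.uniform(0, 1.2, n),
    "bureau_score": rng.normal(700, 60, n),
    "num_prior_delinquencies": rng.poisson(0.8, n),
})
# Synthetic target: higher DPD, utilisation, and prior delinquencies raise risk.
logit = (0.03 * df["days_past_due"] + 1.5 * df["utilisation"]
         + 0.5 * df["num_prior_delinquencies"]
         - 0.01 * (df["bureau_score"] - 700) - 3.0)
df["rolled_forward"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X, y = df.drop(columns="rolled_forward"), df["rolled_forward"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("gbm", GradientBoostingClassifier(random_state=42))]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: hold-out AUC = {auc:.3f}")
```

In practice the comparison would also weigh interpretability and monitoring cost, since simpler scorecard-style models are often easier to defend in a regulated lending context.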
Experience :
- 4 to 8 years of work experience in the core analytics function of a large bank or consulting firm.
- Experience working on collection analytics is a must.
- Experience handling large data volumes using data analysis tools and generating strong data insights.
- Demonstrated ability to communicate ideas and analysis results effectively both verbally and in writing to technical and non-technical audiences
- Excellent communication, presentation, and writing skills; strong interpersonal skills.
- Motivated to meet and exceed stretch targets
- Ability to make the right judgments in the face of complexity and uncertainty
- Excellent relationship and networking skills across our different businesses and geographies.
Qualifications :
- Master's degree in Statistics, Mathematics, Economics, Business Management, or Engineering from a reputed college.
Relevant Experience:
• 3-5 years
Required Skills
• In-depth experience with Python and OO concepts
• Experience using NumPy, Pandas, or similar data/ML libraries
• Deep understanding of multi-process architecture and the threading limitations of Python (see the sketch after this skills list)
• Expert knowledge of Python and related frameworks including Django and Flask.
• Experience in Machine Learning and Artificial Intelligence
• Good understanding of relational data modeling concepts and comfortable with SQL databases
• Good understanding of AWS / Azure cloud
Nice to have:
• Experience or exposure with chatbots or voice bots
• Experience with AWS Lambda functions or Azure Functions
• Exposure to the financial domain
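The following is a minimal sketch of why the multi-process requirement above matters: for CPU-bound work the GIL prevents Python threads from executing bytecode in parallel, so a process pool is the usual workaround. The workload and worker counts are illustrative only.

```python
# Minimal sketch: CPU-bound work with a thread pool (serialised by the GIL)
# versus a process pool (true parallelism). Run it to compare wall-clock times.
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def cpu_bound(n: int) -> int:
    # Deliberately heavy pure-Python loop that holds the GIL while it runs.
    return sum(i * i for i in range(n))

def timed(executor_cls, label: str) -> None:
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(cpu_bound, [2_000_000] * 4))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    timed(ThreadPoolExecutor, "threads (serialised by the GIL)")
    timed(ProcessPoolExecutor, "processes (parallel across cores)")
```

For I/O-bound work (API calls, database queries) threads or asyncio remain appropriate, since the GIL is released while waiting on I/O.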
Additional Skillsets Desired:
• Ability to work with clients and key stakeholders to ensure requirements are met
• Strong problem-solving and debugging skills
• Passion for working in a start-up environment and readiness to take on challenging problems
• Ability to work in both a solo and team environment
• Proven ability to communicate technical information coherently, both verbally and in writing, to internal teams and external customers, while maintaining a customer-focused approach