36+ Machine Learning (ML) Jobs in Chennai | Machine Learning (ML) Job openings in Chennai
Apply to 36+ Machine Learning (ML) Jobs in Chennai on CutShort.io. Explore the latest Machine Learning (ML) Job opportunities across top companies like Google, Amazon & Adobe.
Building the machine learning production (or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-of-the-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop ML pipelines to support
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Develop MLOps components in Machine learning development life cycle using Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 3-5 years experience building production-quality software.
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR Equivalent
- Strong experience in System Integration, Application Development or Data Warehouse projects across technologies used in the enterprise space
- Knowledge of MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- CI/CD experience( i.e. Jenkins, Git hub action,
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Building the machine learning production System(or MLOps) is the biggest challenge most large companies currently have in making the transition to becoming an AI-driven organization. This position is an opportunity for an experienced, server-side developer to build expertise in this exciting new frontier. You will be part of a team deploying state-ofthe-art AI solutions for Fractal clients.
Responsibilities
As MLOps Engineer, you will work collaboratively with Data Scientists and Data engineers to deploy and operate advanced analytics machine learning models. You’ll help automate and streamline Model development and Model operations. You’ll build and maintain tools for deployment, monitoring, and operations. You’ll also troubleshoot and resolve issues in development, testing, and production environments.
- Enable Model tracking, model experimentation, Model automation
- Develop scalable ML pipelines
- Develop MLOps components in Machine learning development life cycle using Model Repository (either of): MLFlow, Kubeflow Model Registry
- Machine Learning Services (either of): Kubeflow, DataRobot, HopsWorks, Dataiku or any relevant ML E2E PaaS/SaaS
- Work across all phases of Model development life cycle to build MLOPS components
- Build the knowledge base required to deliver increasingly complex MLOPS projects on Azure
- Be an integral part of client business development and delivery engagements across multiple domains
Required Qualifications
- 5.5-9 years experience building production-quality software
- B.E/B.Tech/M.Tech in Computer Science or related technical degree OR equivalent
- Strong experience in System Integration, Application Development or Datawarehouse projects across technologies used in the enterprise space
- Expertise in MLOps, machine learning and docker
- Object-oriented languages (e.g. Python, PySpark, Java, C#, C++)
- Experience developing CI/CD components for production ready ML pipeline.
- Database programming using any flavors of SQL
- Knowledge of Git for Source code management
- Ability to collaborate effectively with highly technical resources in a fast-paced environment
- Ability to solve complex challenges/problems and rapidly deliver innovative solutions
- Team handling, problem solving, project management and communication skills & creative thinking
- Foundational Knowledge of Cloud Computing on Azure
- Hunger and passion for learning new skills
Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment.
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
at Zolvit (formerly Vakilsearch)
Role Overview:
We are looking for a skilled Data Scientist with expertise in data analytics, machine learning, and AI to join our team. The ideal candidate will have a strong command of data tools, programming, and knowledge of LLMs and Generative AI, contributing to the growth and automation of our business processes.
Key Responsibilities:
- Data Analysis & Visualization:
- Develop and manage data pipelines, ensuring data accuracy and integrity.
- Design and implement insightful dashboards using Power BI to help stakeholders make data-driven decisions.
- Extract and analyze complex data sets using SQL to generate actionable insights
2 Machine Learning & AI Models:
- Build and deploy machine learning models to optimize key business functions like discount management, lead qualification, and process automation.
- Apply Natural Language Processing (NLP) techniques for text extraction, analysis, and classification from customer documents.
- Implement and fine-tune Generative AI models and large language models (LLMs) for various business applications, including prompt engineering for automation tasks.
3 Automation & Innovation:
- Use AI to streamline document verification, data extraction, and customer interaction processes.
- Innovate and automate manual processes, creating AI-driven solutions for internal teams and customer-facing systems.
- Stay abreast of the latest advancements in machine learning, NLP, and generative AI, applying them to real-world business challenges.
Qualifications:
- Bachelor's or Master’s degree in Computer Science, Data Science, Statistics, or related field.
- 4-7 years of experience as a Data Scientist, with proficiency in Python, SQL, Power BI, and Excel.
- Expertise in building machine learning models and utilizing NLP techniques for text processing and automation.
- Experience in working with large language models (LLMs) and generative AI to create efficient and scalable solutions.
- Strong problem-solving skills, with the ability to work independently and in teams.
- Excellent communication skills, with the ability to present complex data in a simple, actionable way to non-technical stakeholders.
If you’re excited about leveraging data and AI to solve real-world problems, we’d love to have you on our team!
Company: Optimum Solutions
About the company: Optimum solutions is a leader in a sheet metal industry, provides sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. Starting from tools through software, machines, we are one stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes, Hands-on experience in any SQL queries and Database server wrt managing deployment.
- Implementing automated testing platforms, unit tests, and CICD Pipeline
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, ADO
- Understanding of container platform, such as Docker
Job Description
- We are looking for a good Python Developer with Knowledge of Machine learning and deep learning framework.
- Your primary focus will be working the Product and Usecase delivery team to do various prompting for different Gen-AI use cases
- You will be responsible for prompting and building use case Pipelines
- Perform the Evaluation of all the Gen-AI features and Usecase pipeline
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Take care of entire prompt life cycle like prompt design, prompt template creation, prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt gaurdrails to prevent attacks like prompt injection, jail braking and prompt leaking
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
Skills and Qualifications
- Strong proficiency with Python, DJANGO framework and REGEX
- Good understanding of Machine learning framework Pytorch and Tensorflow
- Knowledge of Generative AI and RAG Pipeline
- Good in microservice design pattern and developing scalable application.
- Ability to build and consume REST API
- Fine tune and perform code optimization for better performance.
- Strong understanding on OOP and design thinking
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding accessibility and security compliance, user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources and databases into one system
- Good knowledge in API Gateways and proxies, such as WSO2, KONG, nginx, Apache HTTP Server.
- Understanding fundamental design principles behind a scalable and distributed application
- Good working knowledge on Microservices architecture, behaviour, dependencies, scalability etc.
- Experience in deploying on Cloud platform like Azure or AWS
- Familiar and working experience with DevOps tools like Azure DEVOPS, Ansible, Jenkins, Terraform
Company Name : LMES Academy Private Limited
Website : https://lmes.in/
Linkedin : https://www.linkedin.com/company/lmes-academy/mycompany/
Role : Machine Learning Engineer
Experience: 2 Year to 4 Years
Location: Urapakkam, Chennai, Tamil Nadu.
Job Overview:
We are looking for a Machine Learning Engineer to join our team and help us advance our AI capabilities.
Requirements
• Model Training and Fine-Tuning: Utilize and refine large language models using techniques such as distillation and supervised fine-tuning to enhance performance and efficiency.
• Retrieval-Augmented Generation (RAG): Good understanding on RAG systems to improve the quality and relevance of generated content.
• Vector Databases: Familiar with vector databases to support fast and accurate similarity searches and other ML-driven functionalities.
• API Integration: Good in REST APIs and integrate third-party APIs, including Open AI, Google Vertex, and Cloudflare Workers AI, to extend our AI capabilities.
• Generative AI: Experience with generative AI applications, including text-to-image, speech recognition, and text-to-speech systems.
• Collaboration: Work collaboratively with cross-functional teams, including data scientists, developers, and product managers, to deliver innovative AI solutions.
• Adaptability: Thrive in a fast-paced environment with loosely defined tasks and competing priorities, ensuring timely delivery of high-quality results.
at Tiger Analytics
• Charting learning journeys with knowledge graphs.
• Predicting memory decay based upon an advanced cognitive model.
• Ensure content quality via study behavior anomaly detection.
• Recommend tags using NLP for complex knowledge.
• Auto-associate concept maps from loosely structured data.
• Predict knowledge mastery.
• Search query personalization.
Requirements:
• 6+ years experience in AI/ML with end-to-end implementation.
• Excellent communication and interpersonal skills.
• Expertise in SageMaker, TensorFlow, MXNet, or equivalent.
• Expertise with databases (e. g. NoSQL, Graph).
• Expertise with backend engineering (e. g. AWS Lambda, Node.js ).
• Passionate about solving problems in education
- A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
- Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- Bert
Preferred Requirements
- Experience in Computer Vision is preferred
About the company: https://www.hectar.in/
Hectar Global is a financial technology company that provides a disruptive cross-border trading platform for the agricultural commodities market. Our platform utilizes machine learning and data analysis to provide insights and improve the trading experience for farmers, traders, and other stakeholders in the agricultural industry. Our mission is to bring transparency and efficiency to the agricultural commodities market, which has traditionally been fragmented and opaque. We are committed to driving innovation in the industry and providing a user-friendly and accessible platform for our customers.
Job Overview:
We are seeking a highly skilled Head of Engineering to lead the development of our innovative and disruptive cross-border trading platform at Hectar Global. This person will be responsible for spearheading all technical aspects of the project, from architecture and infrastructure to data and machine learning.
Responsibilities:
- Lead and manage a team of engineers, including hiring, training, and mentoring
- Design and implement the technical architecture of the cross-border trading platform
- Ensure the platform is scalable, efficient, and secure
- Oversee the development of data models and machine learning algorithms
- Collaborate with cross-functional teams, including product and design, to ensure the platform meets user needs and is visually appealing and easy to use
- Work with stakeholders to identify business requirements and translate them into technical solutions
- Develop and maintain technical documentation, including system specifications, design documents, and user manuals
- Keep up to date with emerging trends and technologies in software engineering, machine learning, and data science
Requirements:
- Bachelor's or Master's degree in Computer Science or related field
- 10+ years of experience in software engineering, with a focus on web applications and machine learning
- Proven track record of leading and managing a team of engineers
- Expertise in software architecture and design patterns
- Experience with data modeling and machine learning techniques
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams
- Experience working in an agile development environment
- Strong knowledge of front-end technologies, such as HTML, CSS, and JavaScript
- Familiarity with modern web frameworks, such as React or Angular
Preferred qualifications:
- Experience with cloud computing platforms, such as AWS or Azure
- Familiarity with data visualization tools, such as D3.js or Tableau
- Experience with containerization and orchestration tools, such as Docker and Kubernetes
- Understanding of financial markets and trading platforms
If you are a passionate leader with a proven track record of building innovative and disruptive products and teams, we would love to hear from you.
FURIOUS FOX is looking for Embedded Developers with strong coding skills in C & C++ as well as experience with Embedded Linux.
Experience : (Minimum 7-10 yrs)
• Experienced in edge processing for connected building / industrial / consumer
appliances / automotive ECU
• Have a good understanding of IoT platforms and architecture
• Deep experience in operating systems eg: Linux, freeRTOS / kernel development/device drivers.
/ sensor drivers
• Have experience with various low-level communication protocols, memory devices, messaging
framework etc.
• Have a deep understanding of design principles, design patterns, container preparations
• Have developed hardware, OS abstraction layers, and sensor handlers services to manage various BSP, os standards
• Have experience with Python edge packages.
• Have a good understanding about IoT databases for edge computing
• Good understanding of connectivity application protocols and connectivity SDK for Wi-Fi and BT / BLE
• Experienced in arm architecture, peripheral devices and hardware board configurations
• Able to set up debuggers, configure build environments, and compilers and optimize code and performance.
Skills / Tools:
• Expert at object-oriented programming
• Modular programming
• C / C++ / JavaScript / Python
• Eclipse framework
• Target deployment techniques
• IoT framework
• Test framework
Highlights :
• Having AI / ML knowledge in applications
• Have worked on wireless protocols
• Ethernet / Wi-Fi / Bluetooth / BLE
• Highly exploratory attitude
• willing to venture in and learn new
technologies.
• Have done passionate projects based on self-interest.
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions and maintaining & strengthening client base
- Work with team to define business requirements, come up with analytical solution and deliver the solution with specific focus on Big Picture to drive robustness of the solution
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as expert in Data Science, build framework to develop Production level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relation, at the same time adding value to the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines ANOVA , Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages SQL, Python/ R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in Statistical Modelling & Algorithms E.g. Hypothesis testing, Sample size estimation, A/B testing
- Knowledge in Mathematical programming – Linear Programming, Mixed Integer Programming etc , Stochastic Modelling – Markov chains, Monte Carlo, Stochastic Simulation, Queuing Models.
- Experience with Optimization Solvers (Gurobi, Cplex) and Algebraic programming Languages(PulP)
- Knowledge in GPU code optimization, Spark MLlib Optimization.
- Familiarity to deploy and monitor ML models in production, delivering data products to end-users.
- Experience with ML CI/CD pipelines.
THE IDEAL CANDIDATE WILL
- Engage with executive level stakeholders from client's team to translate business problems to high level solution approach
- Partner closely with practice, and technical teams to craft well-structured comprehensive proposals/ RFP responses clearly highlighting Tredence’s competitive strengths relevant to Client's selection criteria
- Actively explore the client’s business and formulate solution ideas that can improve process efficiency and cut cost, or achieve growth/revenue/profitability targets faster
- Work hands-on across various MLOps problems and provide thought leadership
- Grow and manage large teams with diverse skillsets
- Collaborate, coach, and learn with a growing team of experienced Machine Learning Engineers and Data Scientists
ELIGIBILITY CRITERIA
- BE/BTech/MTech (Specialization/courses in ML/DS)
- At-least 7+ years of Consulting services delivery experience
- Very strong problem-solving skills & work ethics
- Possesses strong analytical/logical thinking, storyboarding and executive communication skills
- 5+ years of experience in Python/R, SQL
- 5+ years of experience in NLP algorithms, Regression & Classification Modelling, Time Series Forecasting
- Hands on work experience in DevOps
- Should have good knowledge in different deployment type like PaaS, SaaS, IaaS
- Exposure on cloud technologies like Azure, AWS or GCP
- Knowledge in python and packages for data analysis (scikit-learn, scipy, numpy, pandas, matplotlib).
- Knowledge of Deep Learning frameworks: Keras, Tensorflow, PyTorch, etc
- Experience with one or more Container-ecosystem (Docker, Kubernetes)
- Experience in building orchestration pipeline to convert plain python models into a deployable API/RESTful endpoint.
- Good understanding of OOP & Data Structures concepts
Nice to Have:
- Exposure to deployment strategies like: Blue/Green, Canary, AB Testing, Multi-arm Bandit
- Experience in Helm is a plus
- Strong understanding of data infrastructure, data warehouse, or data engineering
You can expect to –
- Work with world’ biggest retailers and help them solve some of their most critical problems. Tredence is a preferred analytics vendor for some of the largest Retailers across the globe
- Create multi-million Dollar business opportunities by leveraging impact mindset, cutting edge solutions and industry best practices.
- Work in a diverse environment that keeps evolving
- Hone your entrepreneurial skills as you contribute to growth of the organization
Job Description – Data Science
Basic Qualification:
- ME/MS from premier institute with a background in Mechanical/Industrial/Chemical/Materials engineering.
- Strong Analytical skills and application of Statistical techniques to problem solving
- Expertise in algorithms, data structures and performance optimization techniques
- Proven track record of demonstrating end to end ownership involving taking an idea from incubator to market
- Minimum years of experience in data analysis (2+), statistical analysis, data mining, algorithms for optimization.
Responsibilities
The Data Engineer/Analyst will
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Clear interaction with Business teams including product planning, sales, marketing, finance for defining the projects, objectives.
- Mine and analyze data from company databases to drive optimization and improvement of product and process development, marketing techniques and business strategies
- Coordinate with different R&D and Business teams to implement models and monitor outcomes.
- Mentor team members towards developing quick solutions for business impact.
- Skilled at all stages of the analysis process including defining key business questions, recommending measures, data sources, methodology and study design, dataset creation, analysis execution, interpretation and presentation and publication of results.
- 4+ years’ experience in MNC environment with projects involving ML, DL and/or DS
- Experience in Machine Learning, Data Mining or Machine Intelligence (Artificial Intelligence)
- Knowledge on Microsoft Azure will be desired.
- Expertise in machine learning such as Classification, Data/Text Mining, NLP, Image Processing, Decision Trees, Random Forest, Neural Networks, Deep Learning Algorithms
- Proficient in Python and its various libraries such as Numpy, MatPlotLib, Pandas
- Superior verbal and written communication skills, ability to convey rigorous mathematical concepts and considerations to Business Teams.
- Experience in infra development / building platforms is highly desired.
- A drive to learn and master new technologies and techniques.
Top Management Consulting Company
We are looking out for a technically driven "ML OPS Engineer" for one of our premium client
COMPANY DESCRIPTION:
Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP) with strong knowledge of cloud services integration, and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, Gitlab, Argo)
• Experience with modern development methods and tooling: Containers (e.g., docker) and
container orchestration (K8s), CI/CD tools (e.g., Circle CI, Jenkins, GitHub actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAGs tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on coding skills Python 3 (e.g., API including automated testing frameworks and libraries
(e.g., pytest) and Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g.,
deployments, operators, helm charts)
• Experience setting up at least one contemporary MLOps tooling (e.g., experiment tracking,
model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
https://www.ynos.in/" target="_blank">YNOS is a next-generation funded startup founded by IIT Madras faculty and incubated at IIT Madras Incubation Cell. It is a digital platform for Entrepreneurs, Investors, Innovators and Eco-system enablers, providing actionable insights on the startup and investment landscape in India. We are passionate about solving tough problems using technology and data, making a difference
The Opening
We are presently seeking for our next enthusiastic, talented, and driven Python Backend Engineer to start right away. We'd want you to
- Be excited about building a next-generation intelligence platform
- Possess a can-do attitude and be open to new challenges
- Value working with a great team - self-assured, creative, and insightful individuals who work together to achieve amazing things
- Be willing to explore, learn, and contribute new ideas to the platform, thereby improving it
- Be high on self-belief and enthusiasm to work in a startup culture - small team, fast-paced work environment
If this is you, we'd love to hear from you!
As the Python Backend Engineer in https://www.ynos.in/" target="_blank">YNOS, you will
- Create reusable optimised code and libraries
- Deploy task management systems and automate routine tasks
- Build performant apps that adhere to the best practices, therefore increasing latency, performance, and scalability
- Improve the existing codebase while reducing technical debt
- Take charge of all elements of the application, including architecture, quality, and efficiency
Requirements
- Proficient understanding of Python language
- Expertise in developing web apps and APIs using Python frameworks like Flask with an overall grasp of client-server interactions
- Familiarity with task management systems and process automation
- Comfortable with using Command Line and Linux systems
- Experience and understanding of version control systems like git, svn, etc
- Knowledge of NoSQL databases such as MongoDB
Good to have
- Expertise in other backend frameworks viz. Django, NodeJS, Go, Rust etc
- Knowledge of data-modelling, data-wrangling & data-mining techniques
- Experience with data visualisation tools & libraries such as Plotly, Seaborn etc
- Exposure to Statistical and Machine Learning (ML) techniques, particularly in the field of Natural Language Processing (NLP)
- Familiarity with front-end tools & frameworks such as HTML, CSS, JS, React, Vue, and others
Work location, Job type & Salary
- Our office is located at https://respark.iitm.ac.in/" target="_blank">IIT Madras Research Park, Chennai, Tamil Nadu, surrounded by the beautiful IITM Campus!
- This is a full-time position and we’d like you to relocate to Chennai
- Expected salary range ₹6L - ₹8L per annum
Do you want to help build real technology for a meaningful purpose? Do you want to contribute to making the world more sustainable, advanced and accomplished extraordinary precision in Analytics?
What is your role?
As a Computer Vision & Machine Learning Engineer at Datasee.AI, you’ll be core to the development of our robotic harvesting system’s visual intelligence. You’ll bring deep computer vision, machine learning, and software expertise while also thriving in a fast-paced, flexible, and energized startup environment. As an early team member, you’ll directly build our success, growth, and culture. You’ll hold a significant role and are excited to grow your role as Datasee.AI grows.
What you’ll do
- You will be working with the core R&D team which drives the computer vision and image processing development.
- Build deep learning model for our data and object detection on large scale images.
- Design and implement real-time algorithms for object detection, classification, tracking, and segmentation
- Coordinate and communicate within computer vision, software, and hardware teams to design and execute commercial engineering solutions.
- Automate the workflow process between the fast-paced data delivery systems.
What we are looking for
- 1 to 3+ years of professional experience in computer vision and machine learning.
- Extensive use of Python
- Experience in python libraries such as OpenCV, Tensorflow and Numpy
- Familiarity with a deep learning library such as Keras and PyTorch
- Worked on different CNN architectures such as FCN, R-CNN, Fast R-CNN and YOLO
- Experienced in hyperparameter tuning, data augmentation, data wrangling, model optimization and model deployment
- B.E./M.E/M.Sc. Computer Science/Engineering or relevant degree
- Dockerization, AWS modules and Production level modelling
- Basic knowledge of the Fundamentals of GIS would be added advantage
Prefered Requirements
- Experience with Qt, Desktop application development, Desktop Automation
- Knowledge on Satellite image processing, Geo-Information System, GDAL, Qgis and ArcGIS
About Datasee.AI:
Datasee>AI, Inc. is an AI driven Image Analytics company offering Asset Management solutions for industries in the sectors of Renewable Energy, Infrastructure, Utilities & Agriculture. With core expertise in Image processing, Computer Vision & Machine Learning, Takvaviya’s solution provides value across the enterprise for all the stakeholders through a data driven approach.
With Sales & Operations based out of US, Europe & India, Datasee.AI is a team of 32 people located across different geographies and with varied domain expertise and interests.
A focused and happy bunch of people who take tasks head-on and build scalable platforms and products.
client of peoplefirst consultants
Skills: Machine Learning,Deep Learning,Artificial Intelligence,python.
Location:Chennai
Domain knowledge: Data cleaning, modelling, analytics, statistics, machine learning, AI
Requirements:
· To be part of Digital Manufacturing and Industrie 4.0 projects across Saint Gobain group of companies
· Design and develop AI//ML models to be deployed across SG factories
· Knowledge on Hadoop, Apache Spark, MapReduce, Scala, Python programming, SQL and NoSQL databases is required
· Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks
· Prior experience in developing AI and ML models is required
· Experience with data from the Manufacturing Industry would be a plus
Roles and Responsibilities:
· Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics
· Multitasking, good communication necessary
· Entrepreneurial attitude.
Job Title: Engineering Manager
Job Location: Chennai, Bangalore
Job Summary
The Engineering Org is looking for a proficient Engineering Manager to join a team that is building exciting
and futuristic Data Products at Condé Nast to enable both internal and external marketers to target
audiences in real time. As an Engineering Manager, you will drive the day-to-day execution of technical
and architectural decisions. EM will own engineering deliverables inclusive of solving dependencies
such as architecture, solutions, sequencing, and working with other engineering delivery teams.This role
is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will
influence how users interact with Conde Nast’s industry-leading journalism.
● Primary Responsibilities
● Manage a high performing team of Software and Data Engineers within the Data & ML
Engineering team part of Engineering Data Organization.
● Provide leadership and guidance to the team in Data Discovery, Data Ingestion, Transformation
and Storage
● Utilizing product mindset to build, scale and deploy holistic data products after successful
prototyping and drive their engineering implementation
● Provide technical coaching and lead direct reports and other members of adjacent support teams
to the highest level of performance..
● Evaluate performance of direct reports and offer career development guidance.
● Meeting hiring and retention targets of the team & building a high-performance culture
● Handle escalations from internal stakeholders and manage critical issues to resolution.
● Collaborate with Architects, Product Manager, Project Manager and other teams to deliver high
quality products.
● Identify recurring system and application issues and enable engineers to work with release teams,
infra teams, product development, vendors and other stakeholders in investigating and resolving
the cause.
● Required Skills
● 4+ years of managing Software Development teams, preferably in ML and Data
Engineering teams.
● 4+ years of Agile Software development practices
● 12+ years of Software Development experience.
● Excellent Problem Solving and System Design skill
● Hands on: Writing and Reviewing code primarily in Spark, Python and/or Java
● Hand on: Architect & Design end to end Data Pipeline (noSQL databases, Job Schedulers, Big
Data Development preferably on Databricks / Cloud)
● Experience with SOA & Microservice architecture
● Knowledge of Software Engineering best practices with experience on implementing CI/CD,
Log aggregation/Monitoring/alerting for production system
● Working Knowledge of cloud and devops skills (AWS will be preferred)
● Strong verbal and written communication skills.
● Experience in evaluating team member performance and offering career development
guidance.
● Experience in providing technical coaching to direct reports.
● Experience in architecting highly scalable products.
● Experience in collaborating with global stakeholder teams.
● Experience in working on highly available production systems.
● Strong knowledge of software release process and release pipeline.
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right move to
invest heavily in understanding this data and formed a whole new Data team entirely dedicated to
data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel
process innovation, further content enrichment, and increase market revenue. The Data team
aimed to create a company culture where data was the common language and facilitate an
environment where insights shared in real-time could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at
Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost
online revenue. We are broadly divided into four groups, Data Intelligence, Data Engineering, Data
Science, and Operations (including Product and Marketing Ops, Client Services) along with Data
Strategy and monetization. The teams built capabilities and products to create data-driven solutions
for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of
self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We
are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Job Location: Chennai
Job Summary
The Engineering Org is looking for a proficient Engineering Manager to join a team that is building exciting
and futuristic Data Products at Condé Nast to enable both internal and external marketers to target
audiences in real time. As an Engineering Manager, you will drive the day-to-day execution of technical
and architectural decisions. EM will own engineering deliverables inclusive of solving dependencies
such as architecture, solutions, sequencing, and working with other engineering delivery teams.This role
is also responsible for driving innovation, prototyping, and recommending solutions. Above all, you will
influence how users interact with Conde Nast’s industry-leading journalism.
● Primary Responsibilities
● Manage a high performing team of Software and Data Engineers within the Data & ML
Engineering team part of Engineering Data Organization.
● Provide leadership and guidance to the team in Data Discovery, Data Ingestion, Transformation
and Storage
● Utilizing product mindset to build, scale and deploy holistic data products after successful
prototyping and drive their engineering implementation
● Provide technical coaching and lead direct reports and other members of adjacent support teams
to the highest level of performance..
● Evaluate performance of direct reports and offer career development guidance.
● Meeting hiring and retention targets of the team & building a high-performance culture
● Handle escalations from internal stakeholders and manage critical issues to resolution.
● Collaborate with Architects, Product Manager, Project Manager and other teams to deliver high
quality products.
● Identify recurring system and application issues and enable engineers to work with release teams,
infra teams, product development, vendors and other stakeholders in investigating and resolving
the cause.
● Required Skills
● 4+ years of managing Software Development teams, preferably in ML and Data
Engineering teams.
● 4+ years of Agile Software development practices
● 12+ years of Software Development experience.
● Excellent Problem Solving and System Design skill
● Hands on: Writing and Reviewing code primarily in Spark, Python and/or Java
● Hand on: Architect & Design end to end Data Pipeline (noSQL databases, Job Schedulers, Big
Data Development preferably on Databricks / Cloud)
● Experience with SOA & Microservice architecture
● Knowledge of Software Engineering best practices with experience on implementing CI/CD,
Log aggregation/Monitoring/alerting for production system
● Working Knowledge of cloud and devops skills (AWS will be preferred)
● Strong verbal and written communication skills.
● Experience in evaluating team member performance and offering career development
guidance.
● Experience in providing technical coaching to direct reports.
● Experience in architecting highly scalable products.
● Experience in collaborating with global stakeholder teams.
● Experience in working on highly available production systems.
● Strong knowledge of software release process and release pipeline.
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right move to
invest heavily in understanding this data and formed a whole new Data team entirely dedicated to
data processing, engineering, analytics, and visualization. This team helps drive engagement, fuel
process innovation, further content enrichment, and increase market revenue. The Data team
aimed to create a company culture where data was the common language and facilitate an
environment where insights shared in real-time could improve performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team at
Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and boost
online revenue. We are broadly divided into four groups, Data Intelligence, Data Engineering, Data
Science, and Operations (including Product and Marketing Ops, Client Services) along with Data
Strategy and monetization. The teams built capabilities and products to create data-driven solutions
for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse forms of
self-expression. At Condé Nast, we encourage the imaginative and celebrate the extraordinary. We
are a media company for the future, with a remarkable past. We are Condé Nast, and It Starts Here.
Job Location: India
Job Summary
We at CondeNast are looking for a data science manager for the content intelligence
workstream primarily, although there might be some overlap with other workstreams. The
position is based out of Chennai and shall report to the head of the data science team, Chennai
Responsibilities:
1. Ideate new opportunities within the content intelligence workstream where data Science can
be applied to increase user engagement
2. Partner with business and translate business and analytics strategies into multiple short-term
and long-term projects
3. Lead data science teams to build quick prototypes to check feasibility and value to business
and present to business
4. Formulate the business problem into an machine learning/AI problem
5. Review & validate models & help improve the accuracy of model
6. Socialize & present the model insights in a manner that business can understand
7. Lead & own the entire value chain of a project/initiative life cycle - Interface with business,
understand the requirements/specifications, gather data, prepare it, train,validate, test the
model, create business presentations to communicate insights, monitor/track the performance
of the solution and suggest improvements
8. Work closely with ML engineering teams to deploy models to production
9. Work closely with data engineering/services/BI teams to help develop data stores, intuitive
visualizations for the products
10. Setup career paths & learning goals for reportees & mentor them
Required Skills:
1. 5+ years of experience in leading Data Science & Advanced analytics projects with a focus on
building recommender systems and 10-12 years of overall experience
2. Experience in leading data science teams to implement recommender systems using content
based, collaborative filtering, embedding techniques
3. Experience in building propensity models, churn prediction, NLP - language models,
embeddings, recommendation engine etc
4. Master’s degree with an emphasis in a quantitative discipline such as statistics, engineering,
economics or mathematics/ Degree programs in data science/ machine learning/ artificial
intelligence
5. Exceptional Communication Skills - verbal and written
6. Moderate level proficiency in SQL, Python
7. Needs to have demonstrated continuous learning through external certifications, degree
programs in machine learning & artificial intelligence
8. Knowledge of Machine learning algorithms & understanding of how they work
9. Knowledge of Reinforcement Learning
Preferred Qualifications
1. Expertise in libraries for data science - pyspark(Databricks), scikit-learn, pandas, numpy,
matplotlib, pytorch/tensorflow/keras etc
2. Working Knowledge of deep learning models
3. Experience in ETL/ data engineering
4. Prior experience in e-commerce, media & publishing domain is a plus
5. Experience in digital advertising is a plus
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms - in other words, a staggering amount of user data. Condé Nast made the right move
to invest heavily in understanding this data and formed a whole new Data team entirely
dedicated to data processing, engineering, analytics, and visualization. This team helps drive
engagement, fuel process innovation, further content enrichment, and increase market
revenue. The Data team aimed to create a company culture where data was the common
language and facilitate an environment where insights shared in real-time could improve
performance.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The team
at Condé Nast Chennai works extensively with data to amplify its brands' digital capabilities and
boost online revenue. We are broadly divided into four groups, Data Intelligence, Data
Engineering, Data Science, and Operations (including Product and Marketing Ops, Client
Services) along with Data Strategy and monetization. The teams built capabilities and products
to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are Condé
Nast, and It Starts Here.
at Solinas Integrity Private Limited
Solinas Integrity (www.solinas.in) is a leading water & sanitation robotics start-up founded by IIT Madras Alumni & professors to develop cutting edge solutions to solve the problems in water pipelines and sewer lines\septic tanks, thereby improving the lives of millions of people. Our core values start with trust, and respect for everyone and along with strong collaboration and communication. We believe in giving agency to our teammates and strongly pushing them towards developing a growth mindset.
Duties and Responsibilities:
- To develop and improve signal processing algorithms for analysis of acoustic signals with up-to-date knowledge on processing methods.
- Understand key acoustic algorithm functions, develop efficient code, verify performance and functionality.
- Exposure to all phases of software development life cycle (concept, design, implementation, test, and production).
- Propose innovations to improve performance, quality, etc.
- Work with peers to develop excellent, structured code, well-optimized and easily maintainable.
Basic Qualifications:
● Experience programming in either Python, C++, or MATLAB
● MS/PhD degree in Electrical/Electronics Engineering/ Signal processing
● At least 1 year of signal processing or related area
● Good analytical and problem-solving skills
● Good knowledge of signal processing techniques, basic knowledge of ML algorithms and good visualisation skills.
A global business process management company
B1 – Data Scientist - Kofax Accredited Developers
Requirement – 3
Mandatory –
- Accreditation of Kofax KTA / KTM
- Experience in Kofax Total Agility Development – 2-3 years minimum
- Ability to develop and translate functional requirements to design
- Experience in requirement gathering, analysis, development, testing, documentation, version control, SDLC, Implementation and process orchestration
- Experience in Kofax Customization, writing Custom Workflow Agents, Custom Modules, Release Scripts
- Application development using Kofax and KTM modules
- Good/Advance understanding of Machine Learning /NLP/ Statistics
- Exposure to or understanding of RPA/OCR/Cognitive Capture tools like Appian/UI Path/Automation Anywhere etc
- Excellent communication skills and collaborative attitude
- Work with multiple teams and stakeholders within like Analytics, RPA, Technology and Project management teams
- Good understanding of compliance, data governance and risk control processes
Total Experience – 7-10 Years in BPO/KPO/ ITES/BFSI/Retail/Travel/Utilities/Service Industry
Good to have
- Previous experience of working on Agile & Hybrid delivery environment
- Knowledge of VB.Net, C#( C-Sharp ), SQL Server , Web services
Qualification -
- Masters in Statistics/Mathematics/Economics/Econometrics Or BE/B-Tech, MCA or MBA
Work Location : Chennai
Experience Level : 5+yrs
Package : Upto 18 LPA
Notice Period : Immediate Joiners
It's a full-time opportunity with our client.
Mandatory Skills:Machine Learning,Python,Tableau & SQL
Job Requirements:
--2+ years of industry experience in predictive modeling, data science, and Analysis.
--Experience with ML models including but not limited to Regression, Random Forests, XGBoost.
--Experience in an ML engineer or data scientist role building and deploying ML models or hands on experience developing deep learning models.
--Experience writing code in Python and SQL with documentation for reproducibility.
--Strong Proficiency in Tableau.
--Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools, writing SQL.
--Experience writing and speaking about technical concepts to business, technical, and lay audiences and giving data-driven presentations.
--AWS Sagemaker experience is a plus not required.
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouse and Data Lakes for an Organization. This role would closely collaborate with the Data Science team and assist the team build and deploy machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional / non-functional business requirements and fostering data-driven decision making across the organization.
- Responsible to design and develop distributed, high volume, high velocity multi-threaded event processing systems.
- Implement processes and systems to validate data, monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Provide high operational excellence guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and assist the team build and deploy machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/No-SQL, schema design and dimensional data modeling.
- Strong understanding of Hadoop Architecture, HDFS ecosystem and eexperience with Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Should have Strong skills in PySpark (Python & SPARK). Ability to create, manage and manipulate Spark Dataframes. Expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Should have knowledge on Shell Scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premise and cloud-based infrastructure.
- Having a good understanding of machine learning landscape and concepts.
Qualifications and Experience:
Engineering and post graduate candidates, preferably in Computer Science, from premier institutions with proven work experience as a Big Data Engineer or a similar role for 3-5 years.
Certifications:
Good to have at least one of the Certifications listed here:
AZ 900 - Azure Fundamentals
DP 200, DP 201, DP 203, AZ 204 - Data Engineering
AZ 400 - Devops Certification
We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers. A candidate with an adaptable and productive working style which fits in a fast-moving environment.
Skills:
- 5+ years deploying Machine Learning pipelines in large enterprise production systems.
- Experience developing end to end ML solutions from business hypothesis to deployment / understanding the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience using source control management (CI/CD).
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training / re-training, model management, model deployment, model experimentation/development, alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like lambda, azure functions, and/or cloud functions.
- Orchestration services like data factory, data pipeline, and/or data flow.
- Data science workbench/managed services like azure machine learning, sagemaker, and/or AI platform.
- Data warehouse services like snowflake, redshift, bigquery, azure sql dw, AWS Redshift.
- Distributed computing services like Pyspark, EMR, Databricks.
- Data storage services like cloud storage, S3, blob, S3 Glacier.
- Data visualization tools like Power BI, Tableau, Quicksight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neuralnets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc.
Roles and Responsibilities:
Deploying ML models into production, and scaling them to serve millions of customers.
Technical solutioning skills with deep understanding of technical API integrations, AI / Data Science, BigData and public cloud architectures / deployments in a SaaS environment.
Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills with the ability to build and maintain strong relationships with both business, operations and technology teams internally and externally.
Provide software design and programming support to projects.
Qualifications & Experience:
Engineering and post graduate candidates, preferably in Computer Science, from premier institutions with proven work experience as a Machine Learning Architect (Deployments) or a similar role for 5-7 years.
- Partnering with internal business owners (product, marketing, edit, etc.) to understand needs and develop custom analysis to optimize for user engagement and retention
- Good understanding of the underlying business and workings of cross functional teams for successful execution
- Design and develop analyses based on business requirement needs and challenges.
- Leveraging statistical analysis on consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, predictive modeling, etc.
- Providing statistical analysis on custom research projects and consult on A/B testing and other statistical analysis as needed. Other reports and custom analysis as required.
- Identify and use appropriate investigative and analytical technologies to interpret and verify results.
- Apply and learn a wide variety of tools and languages to achieve results
- Use best practices to develop statistical and/ or machine learning techniques to build models that address business needs.
Requirements
- 2 - 4 years of relevant experience in Data science.
- Preferred education: Bachelor's degree in a technical field or equivalent experience.
- Experience in advanced analytics, model building, statistical modeling, optimization, and machine learning algorithms.
- Machine Learning Algorithms: Crystal clear understanding, coding, implementation, error analysis, model tuning knowledge on Linear Regression, Logistic Regression, SVM, shallow Neural Networks, clustering, Decision Trees, Random forest, XGBoost, Recommender Systems, ARIMA and Anomaly Detection. Feature selection, hyper parameters tuning, model selection and error analysis, boosting and ensemble methods.
- Strong with programming languages like Python and data processing using SQL or equivalent and ability to experiment with newer open source tools.
- Experience in normalizing data to ensure it is homogeneous and consistently formatted to enable sorting, query and analysis.
- Experience designing, developing, implementing and maintaining a database and programs to manage data analysis efforts.
- Experience with big data and cloud computing viz. Spark, Hadoop (MapReduce, PIG, HIVE).
- Experience in risk and credit score domains preferred.
• Solid technical / data-mining skills and ability to work with large volumes of data; extract
and manipulate large datasets using common tools such as Python and SQL other
programming/scripting languages to translate data into business decisions/results
• Be data-driven and outcome-focused
• Must have good business judgment with demonstrated ability to think creatively and
strategically
• Must be an intuitive, organized analytical thinker, with the ability to perform detailed
analysis
• Takes personal ownership; Self-starter; Ability to drive projects with minimal guidance
and focus on high impact work
• Learns continuously; Seeks out knowledge, ideas and feedback.
• Looks for opportunities to build owns skills, knowledge and expertise.
• Experience with big data and cloud computing viz. Spark, Hadoop (MapReduce, PIG,
HIVE)
• Experience in risk and credit score domains preferred
• Comfortable with ambiguity and frequent context-switching in a fast-paced
environment
- Must have 6+ years of experience in C/C++ programming language.
- Knowledge of Go programming language and Python programming language is a big plus.
- Strong background in L4-L7 Internet Protocols TCP, HTTP, HTTP2, GRPC and HTTPS/SSL/TLS.
- Background in Internet security related products such as Web Application Firewalls, API Security Gateways, Reverse Proxies and Forward Proxies
- Proven knowledge of Linux kernel internals (process scheduler, memory management, etc.)
- Experience with eBPF is a plus.
- Hands-on experience in cloud architectures (SaaS, PaaS, IaaS, distributed systems) with continuous delivery
- Familiar with containerization solutions like Docker/Kubernetes etc.
- Familiar with serverless technologies such as AWS Lambda.
- Exposure to machine learning technologies and distributed systems is a plus
- B.E/B.Tech/MS degree in Computer Science, or equivalent
- 3+ years experience in practical implementation and deployment of ML based systems preferred.
- BE/B Tech or M Tech (preferred) in CS/Engineering with strong mathematical/statistical background
- Strong mathematical and analytical skills, especially statistical and ML techniques, with familiarity with different supervised and unsupervised learning algorithms
- Implementation experiences and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimisation
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
About Toyota Connected
If you want to change the way the world works, transform the automotive industry and positively impact others on a global scale, then Toyota Connected is the right place for you! Within our collaborative, fast-paced environment we focus on continual improvement and work in a highly iterative way to deliver exceptional value in the form of connected products and services that wow and delight our customers and the world around us.
About the Team
Toyota Connected India is hiring talented engineers at Chennai to use Deep Learning, Computer vision, Big data, high performance cloud-based services and other cutting-edge technologies to transform the customer experience with their vehicle. Come help us re-imagine what mobility can be today and for years to come!
Job Description
The Toyota Connected team is looking for a Senior ML Engineer (Computer Vision) to be a part of a highly talented engineering team to help create new products and services from the ground up for the next generation connected vehicles. We are looking for team members that are required to be creative in solving problems, excited to work in new technology areas and be ready to wear multiple hats to get things done in a highly energized, fast-paced, innovative and collaborative startup environment.
What you will do
• Develop solutions using Machine Learning/Deep Learning and other advanced technologies to solve a variety of problems
• Develop image analysis algorithms and deep learning architectures to solve Computer vision related problems.
• Implement cutting edge machine learning techniques in image classification, object detection, semantic segmentation, sequence modeling, etc. using frameworks such as OpenCV, TensorFlow and Pytorch.
• Translate user stories and business requirements to technical solutions by building quick prototypes or proof of concepts with several business and technical stakeholder groups in both internal and external organizations
• Partner with leaders in the area and have insights to select off the shelf components vs building from the scratch
• Convert the proof of concepts to production-grade solutions that can scale for hundreds of thousands of users
• Be hands-on where required and lead from the front in following best practices in development and CI/CD methods
• Own delivery of features from top to bottom, from concept to code to production
• Develop tools and libraries that will enable rapid and scalable development in the future
You are a successful candidate if
• You are smart and can demonstrate it
• You have 8+ years of experience as software engineer with minimum 3 years hands-on experience delivering products or solutions that utilized Computer Vision
• Strong experience in deploying solutions to production, with hands-on experience in any public cloud environment (AWS, GCP or Azure)
• Excellent proficiency in Open CV or related computer vision frameworks and libraries
• Mathematical understanding of a variety of statistical learning algorithms (Reinforcement Learning, Supervised/Unsupervised, Graphical Models)
• Expertise in a variety of Deep Learning architectures including Residual Networks, RNN/CNN, Transformer, and Transfer Learning. And experience in delivering value using these in real production environments for real customers
• You have deep proficiency in Python and at least one other major programming language (C++, Java, Golang)
• You are very fluent in one or more ML tools/libraries like Tensorflow, Pytorch, Caffe, and/or Theano and have solved several real-life problems using these
• We think the knowledge acquired earning a degree in Computer Science or Math would be of great value in this position, but if you're smart and have the experience that backs up your abilities, for us, talent trumps degree every time
What is in it for you?
• Top of the line compensation!
• You'll be treated like the professional we know you are and left to manage your own time and work load.
• Yearly gym membership reimbursement. & Free catered lunches.
• No dress code! We trust you are responsible enough to choose what’s appropriate to wear for the day.
• Opportunity to build products that improves the safety and convenience of millions of customers.
Our Core Values
- Empathetic: We begin making decisions by looking at the world from the perspective of our customers, teammates, and partners.
- Passionate: We are here to build something great, not just for the money. We are always looking to improve the experience of our millions of customers
- Innovative: We experiment with ideas to get to the best solution. Any constraint is a challenge, and we love looking for creative ways to solve them.
- Collaborative: When it comes to people, we think the whole is greater than its parts and that everyone has a role to play in the success!
Location: Chennai
Notice Period: Immediate joiners or less than 30 days
Compensation: 35-40 LPA
Experience : 10-18 years of total experience
Job Requirements
- Experience in core JAVA technologies
- Experience with RESTful services
- Experience with relational DBs like MySQL
- Experience working within an Agile/Scrum and CI/CD environment.
- Experience working with version control using GIT/BitBucket.
- Managed 50+ engineers.
Tiger Analytics is a global AI & analytics consulting firm. With data and technology at the core of our solutions, we are solving some of the toughest problems out there. Our culture is modeled around expertise and mutual respect with a team first mindset. Working at Tiger, you’ll be at the heart of this AI revolution. You’ll work with teams that push the boundaries of what-is-possible and build solutions that energize and inspire.
We are headquartered in the Silicon Valley and have our delivery centres across the globe. The below role is for our Chennai or Bangalore office, or you can choose to work remotely.
About the Role:
As an Associate Director - Data Science at Tiger Analytics, you will lead data science aspects of endto-end client AI & analytics programs. Your role will be a combination of hands-on contribution, technical team management, and client interaction.
• Work closely with internal teams and client stakeholders to design analytical approaches to
solve business problems
• Develop and enhance a broad range of cutting-edge data analytics and machine learning
problems across a variety of industries.
• Work on various aspects of the ML ecosystem – model building, ML pipelines, logging &
versioning, documentation, scaling, deployment, monitoring and maintenance etc.
• Lead a team of data scientists and engineers to embed AI and analytics into the client
business decision processes.
Desired Skills:
• High level of proficiency in a structured programming language, e.g. Python, R.
• Experience designing data science solutions to business problems
• Deep understanding of ML algorithms for common use cases in both structured and
unstructured data ecosystems.
• Comfortable with large scale data processing and distributed computing
• Excellent written and verbal communication skills
• 10+ years exp of which 8 years of relevant data science experience including hands-on
programming.
Designation will be commensurate with expertise/experience. Compensation packages among the best in the industry.
GREETINGS FROM CODEMANTRA !!!
EXCELLENT OPPORTUNITY FOR DATA SCIENCE/AI AND ML ARCHITECT !!!
Skills and Qualifications
*Strong Hands-on experience in Python Programming
*** Working experience with Computer Vision models - Object Detection Model, Image Classification
* Good experience in feature extraction, feature selection techniques and transfer learning
* Working Experience in building deep learning NLP Models for text classification, image analytics-CNN,RNN,LSTM.
* Working Experience in any of the AWS/GCP cloud platforms, exposure in fetching data from various sources.
* Good experience in exploratory data analysis, data visualisation, and other data pre-processing techniques.
* Knowledge in any one of the DL frameworks like Tensorflow, Pytorch, Keras, Caffe Good knowledge in statistics, distribution of data and in supervised and unsupervised machine learning algorithms.
* Exposure to OpenCV Familiarity with GPUs + CUDA Experience with NVIDIA software for cluster management and provisioning such as nvsm, dcgm and DeepOps.
* We are looking for a candidate with 9+ years of relevant experience , who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools: *Experience with big data tools: Hadoop, Spark, Kafka, etc.
*Experience with AWS cloud services: EC2, RDS, AWS-Sagemaker(Added advantage)
*Experience with object-oriented/object function scripting languages in any: Python, Java, C++, Scala, etc.
Responsibilities
*Selecting features, building and optimizing classifiers using machine learning techniques
*Data mining using state-of-the-art methods
*Enhancing data collection procedures to include information that is relevant for building analytic systems
*Processing, cleansing, and verifying the integrity of data used for analysis
*Creating automated anomaly detection systems and constant tracking of its performance
*Assemble large, complex data sets that meet functional / non-functional business requirements.
*Secure and manage when needed GPU cluster resources for events
*Write comprehensive internal feedback reports and find opportunities for improvements
*Manage GPU instances/machines to increase the performance and efficiency of the ML/DL model
Regards
Ranjith PR