50+ Computer Vision Jobs in Bangalore (Bengaluru)
The DataFlow Group is a leading global provider of specialized Primary Source Verification (PSV) solutions, background screening, and immigration compliance services. The DataFlow Group partners with clients across the public and private sectors to assist them in mitigating potential risk by exposing fraudulent documents and credentials.
Key Responsibilities:
● Model Selection & Training:
○ Lead the selection and training of state-of-the-art machine learning models tailored to specific computer vision tasks, including object detection, image segmentation, and anomaly detection.
○ Apply advanced fine-tuning and optimization techniques to achieve the best performance across applications.
○ Leverage your expertise in YOLO, ViT (Vision Transformers), and other popular architectures to develop high-performing models.
● Computer Vision & Image Processing:
○ Develop and implement computer vision algorithms for tasks such as object detection, image segmentation, and anomaly detection.
○ Use tools like OpenCV for image pre-processing, augmentation, and other transformations needed for robust model training.
● Model Deployment & Serving:
○ Deploy machine learning models in production environments using TensorFlow Serving or PyTorch Serving to create highly scalable and performant inference pipelines.
○ Ensure smooth integration of models with production systems and optimise them for latency, throughput, and memory efficiency.
● MLOps & Data Pipelines:
○ Build and maintain end-to-end machine learning pipelines, from data ingestion and processing to model inference and monitoring.
○ Apply MLOps best practices for versioning, experiment tracking, workflow automation, and deployment management.
○ Work with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes) for model serving.
● SQL & Data Management:
○ Write optimised, efficient SQL queries to extract and manipulate large datasets for model training and evaluation.
○ Analyse structured and unstructured data to generate insights and features for model development.
● Model Evaluation & Performance Metrics:
○ Use statistical methods and metrics to evaluate model performance, ensuring high accuracy, precision, recall, F1 score, etc., on real-world tasks.
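Since the evaluation bullet above names precision, recall, and F1, here is a minimal sketch of how those metrics fall out of the confusion-matrix counts (the labels and predictions below are invented purely for illustration):

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and F1 for a binary task, from raw counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy labels: 1 = defect present, as in an anomaly-detection task
p, r, f1 = classification_metrics([1, 1, 1, 0, 0, 1], [1, 0, 1, 0, 1, 1])
print(p, r, f1)  # 0.75 0.75 0.75
```

In practice a library such as scikit-learn would compute these, but the arithmetic is exactly this.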
Skills/Qualifications:
1. Qualifications:
a. Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field.
b. 5+ years of hands-on experience in data science and machine learning, with a focus on computer vision.
2. Technical know-how:
a. Strong expertise in computer vision tasks, including object detection (YOLO), image segmentation, and image classification.
b. Experience with annotation tools like LabelMe for creating the image annotations required for training-data labelling.
c. In-depth knowledge of deep learning frameworks such as TensorFlow, PyTorch, and Keras.
d. Familiarity with Vision Transformer (ViT) models and their applications.
e. Proficiency with OpenCV for image pre-processing, augmentation, and other related tasks.
f. Experience in building and deploying machine learning models using TensorFlow Serving, PyTorch Serving, or similar tools.
g. Strong MLOps experience, with hands-on knowledge of creating and maintaining data pipelines, automating workflows, and managing model deployments using tools like Docker, Kubernetes, and cloud platforms (AWS, GCP, Azure).
3. Data Handling:
a. Expertise in writing optimised SQL queries for large-scale data processing and analysis.
b. Ability to work with large, complex datasets and efficiently query, clean, and process them for training and inference purposes.
4. Model Evaluation:
a. Strong understanding of statistical methods and metrics to evaluate machine learning model performance.
b. Experience with A/B testing, model validation techniques, and performance benchmarking.
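Object-detection experience of the kind listed above (YOLO and friends) is typically benchmarked with IoU-based metrics; as a self-contained sketch, intersection-over-union for two axis-aligned boxes in (x1, y1, x2, y2) form reduces to a few lines (the box coordinates here are illustrative):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 patch: intersection 25, union 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

A detection usually counts as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.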
Good to have:
● Experience with large-scale distributed machine learning systems.
● Knowledge of advanced techniques such as self-supervised learning, transfer learning, or reinforcement learning.
● Familiarity with data versioning and experiment tracking tools like MLflow or DVC.
● Experience in integrating machine learning models into production environments for real-time inference.
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
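The quantization skill listed above amounts to mapping float weights onto a small integer range; a minimal sketch of symmetric int8 round-trip quantization (toy weight values, no real model involved) looks like this:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: scale floats into [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction of the original floats."""
    return [qi * scale for qi in q]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q)      # integer codes, one per weight
print(w_hat)  # close to w, within quantization error
```

Production schemes (per-channel scales, asymmetric zero-points, GPTQ-style methods) are more involved, but this is the core idea.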
About Us
We offer AI-powered biomechanics analysis to optimize peak athletic performance. We enhance player development through accessible data and cutting-edge technology, providing clear insights for athletes and their coaches. Our vision is to create a future where AI serves as a powerful tool for athletes of all levels. By providing data-driven insights into biomechanics and facilitating the development of strong technical skills, we aim to unlock potential, reduce injury risk and revolutionize sports training.
Job Description:
We are looking for an experienced Flutter Developer to design and develop a high-quality mobile application. The ideal candidate should have a minimum of 2 years' experience in mobile development, proficiency in Flutter, good knowledge of C++, and ideally an understanding of Computer Vision (CV).
Responsibilities:
- Design, build, and maintain efficient, reusable, and reliable Flutter code.
- Ensure the best possible performance, quality, and responsiveness of applications.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Identify and correct bottlenecks and fix bugs.
- Help maintain code quality, organization, and automation.
- Stay up-to-date with the latest industry trends and technologies.
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 2-3 years of professional experience in mobile app development.
- Proficiency in Flutter and Dart.
- Strong knowledge of C++ or Java.
- Exposure to Computer Vision (CV) technologies and frameworks.
- Familiarity with RESTful APIs to connect mobile applications to back-end services.
- Solid understanding of the full mobile development life cycle.
- Experience with version control systems, such as Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
Preferred Skills:
- Experience with Agile/Scrum development methodologies.
- Knowledge of other mobile development frameworks and technologies.
- Familiarity with cloud services and integration (e.g., AWS, Firebase).
- Experience with CI/CD pipelines.
What We Offer:
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge technologies.
- Collaborative and innovative work environment.
- Professional growth and development opportunities.
About Digit88
Digit88 empowers digital transformation for innovative and high growth B2B and B2C SaaS companies as their trusted offshore software product engineering partner!
We are a lean mid-stage software company, with a team of 75+ fantastic technologists, backed by executives with deep understanding of and extensive experience in consumer and enterprise product development across large corporations and startups. We build highly efficient and effective engineering teams that solve real and complex problems for our partners.
With more than 50+ years of collective experience in areas ranging from B2B and B2C SaaS, web and mobile apps, e-commerce platforms and solutions, custom enterprise SaaS platforms and domains spread across Conversational AI, Chatbots, IoT, Health-tech, ESG/Energy Analytics, Data Engineering, the founding team thrives in a fast paced and challenging environment that allows us to showcase our best.
The Vision: To be the most trusted technology partner to innovative software product companies world-wide
The Opportunity:
As the Data Science Lead, you will own and drive the company's data initiatives and manage a team of data scientists.
As a Data Scientist II, you will be part of the Product Risk Operations team that owns the development of AI-powered technology that helps our clients make communities safer and more resilient. Our technology helps our customers reduce carbon emissions, reduce infrastructure risk and avoid fatalities, something we pride ourselves on.
As a part of the Risk Operations team, you will be working on a wide range of activities related to end-to-end Machine Learning (ML) deployments, research and development of ML product features to support internal and external stakeholders. This is a great role for someone who enjoys variety and is also looking to expand their skill set in a structured fashion.
What You’ll Do
- Become a subject matter expert on products, including understanding how AI can be used in the utilities industry to enable our clients' desired outcomes.
- Work closely with cross-functional teams to identify opportunities, design experiments and deploy repeatable and scalable machine learning solutions that drive business impact.
- Lead design of experiments and hypothesis tests related to machine learning product features development.
- Lead design, implementation, and deployment of machine learning models to support existing as well as new customers.
- Communicate findings and recommendations to both technical and non-technical stakeholders through clear visual presentations.
- Monitor and analyze machine learning model performance and data accuracy.
- Mentor junior staff members.
- Stay current with best practices in data science, machine learning and AI.
Who You Are
- 3-5 years of experience building and deploying machine learning models
- Master's or PhD in statistics, mathematics, computer science or another quantitative field
- Strong problem-solving skills with an emphasis on product development.
- Well-versed in programming languages such as R or Python, with experience using libraries such as pandas, scikit-learn, and TensorFlow.
- Experience with SQL and relational databases for data extraction and manipulation.
- Experience using a variety of Machine Learning techniques for Predictive Modeling, Classification, Natural Language Processing (including Large Language Models), Content Recommendation Systems, Time Series Techniques
- Passionate about being up-to-date with the latest developments in Machine Learning
- Strong organizational, time management, and communication skills
- High degree of accountability
- Utility, Infrastructure or Energy related field experience is a plus
Benefits/Culture @ Digit88:
- Comprehensive Insurance (Life, Health, Accident)
- Flexible Work Model
- Accelerated learning & non-linear growth
- Flat organisation structure driven by ownership and accountability.
- Opportunity to own and be a part of some of the most innovative and promising AI/SaaS product companies in North America and around the world.
- Accomplished Global Peers - Working with some of the best engineers/professionals globally from the likes of Apple, Amazon, IBM Research, Adobe and other innovative product companies
- Ability to make a global impact with your work, leading innovations in Conversational AI, Energy/Utilities, ESG, HealthTech, IoT, PLM and more.
You will work with a founding team of serial entrepreneurs with multiple successful exits to their credit. The learning will be immense just as will the challenges.
This is the right time to join us and partner in our growth!
Join Our Team
● We are seeking a highly skilled and experienced Senior Python Developer to join our dynamic team.
● The ideal candidate will be an expert in developing applications and chatbots utilizing OpenAI and LangChain technologies.
● This role requires proficiency in working with vector stores, interfacing with databases (especially PostgreSQL), and integrating website content and documents (including PDFs) seamlessly into websites.
● The successful candidate will play a pivotal role in driving our AI initiatives, enhancing our digital interfaces, and ensuring a seamless user experience.
Key Responsibilities:
● Design and develop advanced applications and chatbots using OpenAI, LangChain, and related technologies.
● Implement and maintain vector stores to optimize AI model performance and data retrieval.
● Interface with PostgreSQL databases, ensuring efficient data storage, retrieval, and management.
● Integrate dynamic website content and documents (e.g., PDFs) into web applications, enhancing functionality and user experience.
● Collaborate with cross-functional teams (UI/UX designers, web developers, project managers) to deliver high-quality software solutions.
● Lead the technical design, development, and deployment of AI-powered features, ensuring scalability and reliability.
● Stay abreast of emerging technologies and industry trends to incorporate best practices and innovations into our projects.
● Provide technical mentorship to junior developers and contribute to the team's knowledge-sharing efforts.
Qualifications:
● Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
● 10+ years of professional experience in Python development, with a strong portfolio of projects demonstrating expertise in AI and chatbot development.
● Proficiency in OpenAI and LangChain technologies is essential.
● Experience with vector stores (e.g., Pinecone, Weaviate) and their integration into AI applications.
● Solid understanding of database management, particularly PostgreSQL, including schema design, query optimization, and connection pooling.
● Demonstrated ability to integrate website content and documents (PDFs) into web applications.
● Familiarity with web development frameworks (e.g., Django, Flask) and front-end technologies (JavaScript, HTML5, CSS) is a plus.
● Excellent problem-solving skills, with a creative and analytical approach to overcoming challenges.
● Strong communication and collaboration abilities, with a track record of working effectively in team environments.
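Under the hood, vector stores such as Pinecone or Weaviate boil down to nearest-neighbour search over embeddings. As a dependency-free sketch of the retrieval step (the 3-dimensional vectors and document names below are invented stand-ins for real embedding output):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    """Return the k doc ids whose embeddings are most similar to the query.
    store: list of (doc_id, embedding) pairs."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

store = [
    ("pricing.pdf", [0.9, 0.1, 0.0]),
    ("faq.html",    [0.0, 1.0, 0.1]),
    ("manual.pdf",  [0.8, 0.2, 0.1]),
]
print(top_k([1.0, 0.0, 0.0], store))  # ['pricing.pdf', 'manual.pdf']
```

Real stores replace the linear scan with approximate-nearest-neighbour indexes, and a framework like LangChain feeds the retrieved documents into the LLM prompt.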
Job Description:
1. Be a hands-on problem solver with a consultative approach, who can apply Machine Learning & Deep Learning algorithms to solve business challenges
a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
b. Improve model accuracy to deliver greater business impact
c. Estimate the business impact of model deployment
2. Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge
3. Work with tools and scripts for pre-processing data & feature engineering for model development – Python / R / SQL / cloud data pipelines
4. Design, develop & deploy Deep Learning models using TensorFlow / PyTorch
5. Experience in using Deep Learning models with text, speech, image and video data
a. Design & develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
b. Design and develop image recognition & video analysis models using Deep Learning algorithms and open-source tools like OpenCV
c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes & for model validation, e.g. using Power BI / Tableau
8. Work with application teams on deploying models on cloud as a service or on-prem
a. Deployment of models in a test/control framework for tracking
b. Build CI/CD pipelines for ML model deployment
9. Integrate AI & ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay current with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.
· Technology/Subject Matter Expertise
- Sufficient expertise in machine learning, mathematical and statistical sciences
- Use of versioning & collaboration tools like Git / GitHub
- Good understanding of landscape of AI solutions – cloud, GPU based compute, data security and privacy, API gateways, microservices based architecture, big data ingestion, storage and processing, CUDA Programming
- Develop prototype level ideas into a solution that can scale to industrial grade strength
- Ability to quantify & estimate the impact of ML models.
· Soft Skills Profile
- Curiosity to think in fresh and unique ways with the intent of breaking new ground.
- Must have the ability to share, explain and “sell” their thoughts, processes, ideas and opinions, even outside their own span of control
- Ability to think ahead, and anticipate the needs for solving the problem will be important
· Ability to communicate key messages effectively, and articulate strong opinions in large forums
· Desirable Experience:
- Keen contributor to open source communities, and communities like Kaggle
- Ability to process huge amounts of data using PySpark/Hadoop
- Development & Application of Reinforcement Learning
- Knowledge of Optimization/Genetic Algorithms
- Operationalizing Deep learning model for a customer and understanding nuances of scaling such models in real scenarios
- Understanding of stream data processing, RPA, edge computing, AR/VR etc
- Appreciation of digital ethics, data privacy will be important
- Experience of working with AI & Cognitive services platforms like Azure ML, IBM Watson, AWS Sagemaker, Google Cloud will all be a big plus
- Experience in platforms like DataRobot, CognitiveScale, H2O.ai etc. will be a big plus
at Tiger Analytics
• Charting learning journeys with knowledge graphs.
• Predicting memory decay based on an advanced cognitive model.
• Ensuring content quality via study-behavior anomaly detection.
• Recommending tags using NLP for complex knowledge.
• Auto-associating concept maps from loosely structured data.
• Predicting knowledge mastery.
• Personalizing search queries.
Requirements:
• 6+ years experience in AI/ML with end-to-end implementation.
• Excellent communication and interpersonal skills.
• Expertise in SageMaker, TensorFlow, MXNet, or equivalent.
• Expertise with databases (e.g. NoSQL, graph).
• Expertise with backend engineering (e.g. AWS Lambda, Node.js).
• Passionate about solving problems in education.
at Tiger Analytics
Job brief
We are looking for a Lead Data Scientist to lead a technical team and help us gain useful insight out of raw data.
Lead Data Scientist responsibilities include managing the data science team, planning projects and building analytics models. You should have a strong problem-solving ability and a knack for statistical analysis. If you're also able to align our data products with our business goals, we'd like to meet you.
Your ultimate goal will be to help improve our products and business decisions by making the most out of our data.
Responsibilities
● Conceive, plan and prioritize data projects
● Ensure data quality and integrity
● Interpret and analyze data problems
● Build analytic systems and predictive models
● Align data projects with organizational goals
● Lead data mining and collection procedures
● Test performance of data-driven products
● Visualize data and create reports
● Build and manage a team of data scientists and data engineers
Requirements
● Proven experience as a Data Scientist or similar role
● Solid understanding of machine learning
● Knowledge of data management and visualization techniques
● A knack for statistical analysis and predictive modeling
● Good knowledge of R, Python and MATLAB
● Experience with SQL and NoSQL databases
● Strong organizational and leadership skills
● Excellent communication skills
● A business mindset
● Degree in Computer Science, Data Science, Mathematics or similar field
● Familiar with emerging/cutting-edge open-source data science/machine learning libraries and big data platforms
Responsibilities:
- Identify relevant data sources and combine them to make the data useful.
- Automate the collection processes.
- Pre-process structured and unstructured data.
- Handle large amounts of information to create inputs to analytical models.
- Build predictive models and machine-learning algorithms; innovate on machine-learning and deep-learning algorithms.
- Build network graphs, NLP and forecasting models; build data pipelines for end-to-end solutions.
- Propose solutions and strategies to business challenges. Collaborate with product development teams and communicate with senior leadership teams.
- Participate in problem-solving sessions.
Requirements:
- Bachelor's degree in a highly quantitative field (e.g. Computer Science, Engineering, Physics, Math, Operations Research, etc.) or equivalent experience.
- Extensive machine learning and algorithmic background with a deep understanding of at least one of the following areas: supervised and unsupervised learning methods, reinforcement learning, deep learning, Bayesian inference, network graphs, Natural Language Processing.
- Analytical mind and business acumen.
- Strong math skills (e.g. statistics, algebra).
- Problem-solving aptitude.
- Excellent communication skills with the ability to communicate technical information.
- Fluency with at least one data science/analytics programming language (e.g. Python, R, Julia).
- Start-up experience is a plus; ideally 5-8 years of advanced analytics experience in startups/marquee companies.
Required Skills:
Machine Learning, Deep Learning, Algorithms, Computer Science, Engineering, Operations Research, Math Skills, Communication Skills, SAAS Product, IT Services, Artificial Intelligence, ERP, Product Management, Automation, Analytical Models, Predictive Models, NLP, Forecasting Models, Product Development, Leadership, Problem Solving, Unsupervised Learning, Reinforcement Learning, Natural Language Processing, Algebra, Data Science, Programming Language, Python, Julia
Prismforce (www.prismforce.com) is a US-headquartered vertical SaaS product company with development teams in India. We are a Series-A funded venture, backed by a Tier 1 VC and targeted towards the tech/IT services industry and tech talent organizations in enterprises, solving their most critical sector-specific problems in the Talent Supply Chain. The product suite is powered by artificial intelligence designed to accelerate business impact, e.g. improved profitability and agility, by digitizing core vertical workflows underserved by custom applications and typical ERP offerings.
We are looking for Data Scientists to build data products at the core of a SaaS company disrupting the skill market. In this role you should be highly analytical, with a keen understanding of data, machine learning, deep learning, analysis, algorithms, products, maths, and statistics. This hands-on individual would play the multiple roles of Data Scientist, Data Engineer, Data Analyst, efficient coder and, above all, problem solver.
Location: Mumbai / Bangalore / Pune / Kolkata
Requirement understanding and elicitation, analysis, data/workflows, and contribution to product projects and proofs of concept (POC).
Contribute to preparing design documents and effort estimations.
Develop AI/ML models using best-in-class ML models.
Build, test, and deploy AI/ML solutions.
Work with Business Analysts and Product Managers to assist with defining functional user stories.
Ensure deliverables across teams are of high quality and documented.
Recommend best ML practices/industry standards for any ML use case.
Proactively take up R&D and recommend solution options for any ML use case.
Requirements
Experience:
- 5-8 years of working experience in ML/data science, preferably from a remote sensing background.
- Experience leading a team of both data scientists and machine learning engineers to solve challenging problems, preferably in the Infrastructure and Utilities domain using geospatial & remote sensing data.
- Statistical knowledge along with great proficiency in Python.
- Strong understanding and implementation experience of predictive modeling algorithms such as regressions, time series, neural networks, clustering, decision trees and heuristic models, with familiarity dealing with tradeoffs between model performance and business needs
- Experience combining user research and data science methodologies across multiple products within the business unit.
- Must have delivered multiple data science product(s) in production.
Minimum qualification:
- Advanced degree (MSc/PhD) in Computer Science, Economics, Engineering, Operations Research, Physics or Mathematics/Statistics preferred.
Competencies:
- Fantastic communication skills that enable you to work cross-functionally with business folks, product managers, technical experts, building solid relationships with a diverse set of stakeholders.
- The ability to convey complex solutions to a less technical person.
- Vast analytical problem-solving capabilities & experience.
- Bias for action.
Requirements
Experience
- 5+ years of professional experience in implementing MLOps framework to scale up ML in production.
- Hands-on experience with Kubernetes, Kubeflow, MLflow, Sagemaker, and other ML model experiment management tools including training, inference, and evaluation.
- Experience in ML model serving (TorchServe, TensorFlow Serving, NVIDIA Triton inference server, etc.)
- Proficiency with ML model training frameworks (PyTorch, PyTorch Lightning, TensorFlow, etc.).
- Experience with GPU computing to do data and model training parallelism.
- Solid software engineering skills in developing systems for production.
- Strong expertise in Python.
- Building end-to-end data systems as an ML Engineer, Platform Engineer, or equivalent.
- Experience working with cloud data processing technologies (S3, ECR, Lambda, AWS, Spark, Dask, ElasticSearch, Presto, SQL, etc.).
- Having Geospatial / Remote sensing experience is a plus.
Roles and Responsibilities:
- Design, develop, and maintain the end-to-end MLOps infrastructure from the ground up, leveraging open-source systems across the entire MLOps landscape.
- Create pipelines for data ingestion, data transformation, and building, testing, and deploying machine learning models, as well as monitoring and maintaining the performance of these models in production.
- Manage the MLOps stack, including version control systems, continuous integration and deployment tools, containerization, orchestration, and monitoring systems.
- Ensure that the MLOps stack is scalable, reliable, and secure.
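The version-control and deployment-management duties above can be pictured with a toy model registry. This is a stripped-down, in-memory sketch only; a real stack would use MLflow's model registry or similar, and the model names and artifact URIs below are invented:

```python
class ModelRegistry:
    """Toy in-memory registry: each model name maps to versioned entries."""

    def __init__(self):
        self._models = {}

    def register(self, name, artifact_uri, metrics):
        """Add a new version in the 'staging' stage; return its number."""
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1
        versions.append({"version": version, "uri": artifact_uri,
                         "metrics": metrics, "stage": "staging"})
        return version

    def promote(self, name, version):
        """Move one version to production, archiving any current one."""
        for entry in self._models[name]:
            if entry["stage"] == "production":
                entry["stage"] = "archived"
        self._models[name][version - 1]["stage"] = "production"

    def production_uri(self, name):
        """URI that a serving layer would load, or None if nothing is live."""
        for entry in self._models[name]:
            if entry["stage"] == "production":
                return entry["uri"]
        return None

registry = ModelRegistry()
registry.register("detector", "s3://models/detector/1", {"mAP": 0.61})
v2 = registry.register("detector", "s3://models/detector/2", {"mAP": 0.67})
registry.promote("detector", v2)
print(registry.production_uri("detector"))  # s3://models/detector/2
```

The staging/production/archived lifecycle mirrors how registries like MLflow's gate which artifact the serving layer picks up.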
Skills Required:
- 3-6 years of MLOps experience
- Preferably worked in the startup ecosystem
Primary Skills:
- Experience with E2E MLOps systems like ClearML, Kubeflow, MLflow, etc.
- Technical expertise in MLOps: Should have a deep understanding of the MLOps landscape and be able to leverage open-source systems to build scalable, reliable, and secure MLOps infrastructure.
- Programming skills: Proficient in at least one programming language, such as Python, and have experience with data science libraries, such as TensorFlow, PyTorch, or Scikit-learn.
- DevOps experience: Should have experience with DevOps tools and practices, such as Git, Docker, Kubernetes, and Jenkins.
Secondary Skills:
- Version Control Systems (VCS) tools like Git and Subversion
- Containerization technologies like Docker and Kubernetes
- Cloud Platforms like AWS, Azure, and Google Cloud Platform
- Data Preparation and Management tools like Apache Spark, Apache Hadoop, and SQL databases like PostgreSQL and MySQL
- Machine Learning Frameworks like TensorFlow, PyTorch, and Scikit-learn
- Monitoring and Logging tools like Prometheus, Grafana, and Elasticsearch
- Continuous Integration and Continuous Deployment (CI/CD) tools like Jenkins, GitLab CI, and CircleCI
- Explainability and interpretability tools like LIME and SHAP
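Explainability tools such as LIME and SHAP work by perturbing inputs and watching predictions move. A simpler relative of the same idea is ablation/permutation importance: neutralize one feature and measure the accuracy drop. A self-contained toy version (synthetic data, and a trivial hand-written "model" standing in for a trained one):

```python
def accuracy(model, X, y):
    return sum(model(row) == t for row, t in zip(X, y)) / len(y)

def ablation_importance(model, X, y):
    """Importance of feature i = accuracy drop when feature i is replaced
    everywhere by its column mean (a deterministic cousin of permutation
    importance)."""
    base = accuracy(model, X, y)
    importances = []
    for i in range(len(X[0])):
        mean_i = sum(row[i] for row in X) / len(X)
        X_abl = [row[:i] + [mean_i] + row[i + 1:] for row in X]
        importances.append(base - accuracy(model, X_abl, y))
    return importances

# Toy "model": predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
def model(row):
    return int(row[0] > 0.5)

X = [[0.9, 5.0], [0.1, 3.0], [0.8, 1.0], [0.2, 9.0]]
y = [1, 0, 1, 0]
imp = ablation_importance(model, X, y)
print(imp)  # feature 0 shows a large drop, feature 1 shows none
```

LIME and SHAP refine this with local surrogate models and Shapley-value weighting, but the perturb-and-measure principle is the same.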
About Us
Mindtickle provides a comprehensive, data-driven solution for sales readiness and enablement that fuels revenue growth and brand value for dozens of Fortune 500 and Global 2000 companies and hundreds of the world’s most recognized companies across technology, life sciences, financial services, manufacturing, and service sectors.
With purpose-built applications, proven methodologies, and best practices designed to drive effective sales onboarding and ongoing readiness, Mindtickle enables company leaders and sellers to continually assess, diagnose and develop the knowledge, skills, and behaviors required to engage customers and drive growth effectively. We are funded by great investors such as SoftBank, Canaan Partners, NEA, Accel Partners, and others.
Job Brief
We are looking for a rockstar researcher at the Center of Excellence for Machine Learning. You will be responsible for thinking outside the box, crafting new algorithms, developing end-to-end artificial intelligence-based solutions, and selecting the most appropriate architecture for the system(s) so that it suits the business needs and achieves the desired results under given constraints.
Credibility:
- You must have a proven track record in research and development with adequate publication/patenting and/or academic credentials in data science.
- You can connect business problems directly to research problems and to the latest emerging technologies.
Strategic Responsibility:
- To understand problem statements, connect the dots between high-level business statements and deep technology algorithms, and craft new systems and methods in the space of structured data mining, natural language processing, computer vision, speech technologies, robotics, or the Internet of Things.
- To be responsible for end-to-end production level coding with data science and machine learning algorithms, unit and integration testing, deployment, optimization and fine-tuning of models on cloud, desktop, mobile or edge etc.
- To learn continuously, upgrade and upskill, publish novel articles in journals and conference proceedings and/or file patents, and be involved in evangelism activities and ecosystem development.
- To share knowledge, mentor colleagues, partners, and customers, take sessions on artificial intelligence topics both online and in person, and participate in workshops, conferences, and seminars/webinars as a speaker, instructor, demonstrator, or jury member.
- To design and develop high-volume, low-latency applications for mission-critical systems and deliver high availability and performance.
- To collaborate within the product streams and team to bring best practices and leverage world-class tech stack.
- To set up all essentials (tracking/alerting) to make sure the infrastructure/software built is working as expected.
- To search, collect, and clean data for analysis, and to set up efficient storage and retrieval pipelines.
Personality:
- Requires excellent communication skills – written, verbal, and presentation.
- You should be a team player.
- You should be positive towards problem-solving and have a very structured thought process to solve problems.
- You should be agile enough to learn new technology if needed.
Qualifications:
- B Tech / BS / BE / M Tech / MS / ME in CS or equivalent from Tier I / II or Top Tier Engineering Colleges and Universities.
- 6+ years of strong software (application or infrastructure) development experience and software engineering skills (Python, R, C, C++ / Java / Scala / Golang).
- Deep expertise and practical knowledge of operating systems, MySQL, and NoSQL databases (Redis, Couchbase, MongoDB, Elasticsearch, or any graph DB).
- Good understanding of Machine Learning Algorithms, Linear Algebra and Statistics.
- Working knowledge of Amazon Web Services (AWS).
- Experience with Docker and Kubernetes will be a plus.
- Experience with Natural Language Processing, Recommendation Systems, or Search Engines.
Our Culture
As an organization, it’s our priority to create a highly engaging and rewarding workplace. We offer tons of awesome perks, great learning opportunities & growth.
Our culture reflects the globally diverse backgrounds of our employees along with our commitment to our customers, each other, and a passion for excellence.
To know more about us, feel free to go through these videos:
1. Sales Readiness Explained: https://www.youtube.com/watch?v=XyMJj9AlNww&t=6s
2. What We Do: https://www.youtube.com/watch?v=jv3Q2XgnkBY
3. Ready to Close More Deals, Faster: https://www.youtube.com/watch?v=nB0exreVU-s
To view more videos, please access the below-mentioned link:
https://www.youtube.com/c/mindtickle/videos
Mindtickle is proud to be an Equal Opportunity Employer
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.
Your Right to Work - In compliance with applicable laws, all persons hired will be required to verify identity and eligibility to work in the respective work locations and to complete the required employment eligibility verification document form upon hire.
We are a stealth startup focusing on AI in healthcare and are looking for software engineers to join our PoC (Proof of Concept) team. You will be a core member of the company with equity options. If you are ambitious, excited about next-generation tech and have a constant hunger to learn, we encourage you to apply.
Your responsibilities:
- Design and develop full-stack web applications for PoC.
- Implement computer vision and NLP based deep learning models.
- Participate in client meetings and refine product capabilities.
Essential requirements:
- Interested in working at startups.
- Ability to work independently.
- Experienced in developing full-stack web applications.
- Strong command over React and Python.
- Good experience with Data Science.
- Comfortable with ambiguity and frequent changes to project scope in an innovation environment.
Data Scientist
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, and matplotlib.
- Experience with TensorFlow, PyTorch, and/or R modelling
- Ability to understand a business problem and translate and structure it into a data science problem.
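The first item above ("write a SQL query to pull the data you need") can be sketched end to end with Python's built-in SQLite driver; the campaign table and column names below are hypothetical, chosen to resemble Adtech data:

```python
import sqlite3

# Hypothetical impressions table; all names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE impressions (campaign TEXT, clicks INTEGER, views INTEGER)")
conn.executemany(
    "INSERT INTO impressions VALUES (?, ?, ?)",
    [("a", 30, 1000), ("a", 20, 500), ("b", 5, 1000)],
)

# Pull click-through rate per campaign, highest first.
rows = conn.execute(
    """
    SELECT campaign, 1.0 * SUM(clicks) / SUM(views) AS ctr
    FROM impressions
    GROUP BY campaign
    ORDER BY ctr DESC
    """
).fetchall()
print(rows)  # [('a', 0.0333...), ('b', 0.005)]
```

The `1.0 *` factor forces floating-point division, a common SQLite gotcha when both columns are integers.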
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.
- Work with the team to define business requirements, design the analytical solution, and deliver it with a specific focus on the big picture to drive robustness of the solution.
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment, with a culture where making mistakes and learning from them is part of life.
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as an expert in Data Science; build frameworks to develop production-level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relation, at the same time adding value to the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines, ANOVA, Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages SQL, Python/ R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in statistical modelling & algorithms, e.g. hypothesis testing, sample size estimation, A/B testing
- Knowledge of mathematical programming – Linear Programming, Mixed Integer Programming, etc. – and stochastic modelling – Markov chains, Monte Carlo, stochastic simulation, queuing models.
- Experience with optimization solvers (Gurobi, CPLEX) and algebraic modelling languages (PuLP)
- Knowledge in GPU code optimization, Spark MLlib Optimization.
- Familiarity with deploying and monitoring ML models in production, delivering data products to end-users.
- Experience with ML CI/CD pipelines.
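The hypothesis-testing and A/B-testing expertise listed above can be illustrated with a minimal two-proportion z-test in plain Python (the conversion counts below are made up for the example):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; p-value = 2 * P(Z > |z|)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 6.5% vs 5.0% for A on 2,400 users each.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

With these (fabricated) numbers the lift is significant at the usual 5% level; sample size estimation works the same formula backwards from a target detectable lift.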
As an Associate Manager - Senior Data Scientist, you will solve some of the most impactful business problems for our clients using a variety of AI and ML technologies. You will collaborate with business partners and domain experts to design and develop innovative solutions on the data to achieve predefined outcomes.
• Engage with clients to understand current and future business goals and translate business problems into analytical frameworks
• Develop custom models based on an in-depth understanding of underlying data, data structures, and business problems to ensure deliverables meet client needs
• Create repeatable, interpretable and scalable models
• Effectively communicate the analytics approach and insights to a larger business audience
• Collaborate with team members, peers and leadership at Tredence and client companies
Qualification:
1. Bachelor's or Master's degree in a quantitative field (CS, machine learning, mathematics, statistics) or equivalent experience.
2. 5+ years of experience in data science, building hands-on ML models.
3. Experience leading the end-to-end design, development, and deployment of predictive modeling solutions.
4. Excellent programming skills in Python. Strong working knowledge of Python's numerical, data analysis, or AI frameworks such as NumPy, Pandas, Scikit-learn, Jupyter, etc.
5. Advanced SQL skills, with SQL Server and Spark experience.
6. Knowledge of predictive/prescriptive analytics, including machine learning algorithms (supervised and unsupervised), deep learning algorithms, and artificial neural networks.
7. Experience with Natural Language Processing (NLTK) and text analytics for information extraction, parsing, and topic modeling.
8. Excellent verbal and written communication. Strong troubleshooting and problem-solving skills. Thrives in a fast-paced, innovative environment.
9. Experience with data visualization tools — Power BI, Tableau, R Shiny, etc. — preferred.
10. Experience with cloud platforms such as Azure and AWS preferred but not required.
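The information-extraction and topic-modeling work in item 7 typically starts from simple term statistics before reaching for NLTK. A standard-library-only sketch of crude keyword extraction (the stopword list and sample document are invented for illustration):

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "and", "to", "is", "in", "from", "come", "need"}

def top_terms(text, k=3):
    """Crude keyword extraction: most frequent non-stopword tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

doc = ("The model predicts churn. Churn models need churn labels, "
       "and labels come from the billing system.")
print(top_terms(doc))
```

Real pipelines replace raw counts with TF-IDF weighting and the regex tokenizer with a proper NLP tokenizer, but the shape of the computation is the same.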
Top Management Consulting Company
We are looking for a Machine Learning engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:
Python, PySpark, and the Python scientific stack; MLflow, Grafana, and Prometheus for machine learning pipeline management and monitoring; SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, and Dask/RAPIDS; Django, GraphQL, and ReactJS for horizontal product development; container technologies such as Docker and Kubernetes; CircleCI/Jenkins for CI/CD; cloud solutions such as AWS, GCP, and Azure, as well as Terraform and CloudFormation for deployment
Expert in Machine Learning (ML) & Natural Language Processing (NLP).
Expert in Python, Pytorch and Data Structures.
Experience in ML model life cycle (Data preparation, Model training and Testing and ML Ops).
Strong experience in NLP and NLU using transformers and deep learning.
Experience in federated learning is a plus.
Experience with knowledge graphs and ontologies.
Responsible for developing, enhancing, modifying, optimizing and/or maintaining applications, pipelines and codebase in order to enhance the overall solution.
Experience working with scalable, highly-interactive, high-performance systems/projects (ML).
Design, code, test, debug and document programs as well as support activities for the corporate systems architecture.
Work closely with business partners in defining requirements for ML applications and the advancement of solutions.
Engage in creating comprehensive technical specification documents.
Experience / Knowledge in designing enterprise grade system architecture for solving complex problems with a sound understanding of object-oriented programming and Design Patterns.
Experience in Test Driven Development & Agile methodologies.
Good communication skills - client facing environment.
Hunger for learning; a self-starter with a drive to technically mentor a cohort of developers.
Good to have: working experience in Knowledge Graph based ML product development, and AWS/GCP based ML services.
Duties and Responsibilities:
Research and develop innovative use cases, solutions, and quantitative models.
Quantitative models in video and image recognition and signal processing for cloudbloom's cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment).
Design, implement, and demonstrate proof-of-concepts and working prototypes.
Provide R&D support to productize research prototypes.
Explore emerging tools, techniques, and technologies, and work with academia for cutting-edge solutions.
Collaborate with cross-functional teams and ecosystem partners for mutual business benefit.
Team management skills.
Team Management Skills
Academic Qualification
7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments.
Mandatory requirement: Bachelor's degree with a STEM background (Science, Technology, Engineering and Mathematics) with a strong quantitative flavour.
Innovative and creative in data analysis, problem solving, and presentation of solutions.
Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment.
Strong experience in handling multi-national client engagements.
Good verbal, writing & presentation skills.
Core Expertise
Excellent understanding of basics in mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis).
Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL.
Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, Decision Forests.
Strong math skills (multivariable calculus and linear algebra) – understanding the fundamentals of multivariable calculus and linear algebra is important, as they form the basis of a lot of predictive-performance and algorithm-optimization techniques.
Deep learning: CNNs, neural networks, RNNs, TensorFlow, PyTorch, computer vision.
Large-scale data extraction/mining, data cleansing, diagnostics, preparation for modeling.
Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators; multivariate techniques & predictive modeling: cluster analysis, discriminant analysis, CHAID, logistic & multiple regression analysis.
Experience with data visualization tools like Tableau, Power BI, and Qlik Sense that help to visually encode data.
Excellent communication skills – it is incredibly important to describe findings to technical and non-technical audiences.
Capability for continuous learning and knowledge acquisition.
Mentor colleagues for growth and success
Strong Software Engineering Background
Hands-on experience with data science tools
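One method from the list above, k-Nearest Neighbors, is small enough to sketch in plain Python; the training points and labels below are made up for illustration:

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest labelled points."""
    nearest = sorted(train, key=lambda point: dist(point[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated clusters of labelled 2-D points.
train = [((0.0, 0.0), "blue"), ((0.1, 0.2), "blue"), ((0.2, 0.1), "blue"),
         ((1.0, 1.0), "red"), ((0.9, 1.1), "red"), ((1.1, 0.9), "red")]

print(knn_predict(train, (0.15, 0.15)))  # blue
print(knn_predict(train, (1.0, 0.95)))   # red
```

Production implementations (e.g. in scikit-learn) replace the linear scan with spatial indexes such as k-d trees, but the voting logic is identical.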
XressBees – a logistics company started in 2015 – is amongst the fastest growing companies in its sector. Our vision to evolve into a strong full-service logistics organization reflects itself in our various lines of business, like B2C logistics 3PL, B2B Xpress, Hyperlocal, and Cross-border Logistics.
Our strong domain expertise and constant focus on innovation have helped us rapidly evolve as the most trusted logistics partner of India. XB has progressively carved its way towards best-in-class technology platforms, an extensive logistics network reach, and a seamless last-mile management system.
While on this aggressive growth path, we seek to become the one-stop shop for end-to-end logistics solutions. Our big focus areas for the very near future include strengthening our presence as the service provider of choice and leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees would enrich and scale its end-to-end logistics solutions at a high pace. This is a great opportunity to join the team working on forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In this role, you will build high-performance AI/ML solutions using groundbreaking AI/ML and Big Data technologies. You will need to understand business requirements and convert them into a solvable data science problem statement. You will be involved in end-to-end AI/ML projects, starting from smaller-scale POCs all the way to full-scale ML pipelines in production.
Seasoned AI/ML Engineers would own the implementation and productionization of cutting-edge AI-driven algorithmic components for search, recommendation, and insights to improve the efficiencies of the logistics supply chain and serve the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact on the organization while solving challenging problems in the areas of AI, ML, Data Analytics, and Computer Science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer would apply themselves in the areas of -
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning - Logistic Regression, Decision Trees, Random Forests, XGBoost, etc.
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs without needing to build AI expertise in each team – Decision Support, NLP, and Computer Vision for public clouds, and Enterprise in NLU, Vision, and Conversational AI. The candidate is adept at working with large data sets to find opportunities for product and process optimization and at using models to test the effectiveness of different courses of action. They must have knowledge of a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms, and creating/running simulations. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and deployment of ML models.
● Build cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection, Risk prevention, Predictive analytics), computer vision, natural language processing (NLP), and speech that work out of the box.
● Brainstorm and design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively communicating your needs and understanding theirs, and address external and internal stakeholders' product challenges.
● Build the core of Artificial Intelligence and AI Services such as Decision Support, Vision, Speech, Text, NLP, NLU, and others.
● Leverage cloud technology – AWS, GCP, Azure.
● Experiment with ML models in Python using machine learning libraries (PyTorch, TensorFlow), Big Data, Hadoop, HBase, Spark, etc.
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experiences, supply chain metrics, and other business outcomes.
● Develop the company A/B testing framework and test model quality.
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects with data science techniques and associated libraries such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques include a good to phenomenal understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning, or related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets, and architectural patterns for successful delivery.
What is required of you?
You will get an opportunity to build and operate a suite of massive-scale, integrated data/ML platforms in a broadly distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as classification, information retrieval, clustering, knowledge graphs, semi-supervised learning, and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc.
● Knowledge of using web services: Redshift, S3, Spark, Digital Ocean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from 3rd-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using: QuickSight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for machine learning and complete feedback loops.
● Knowledge of the machine learning lifecycle and experience working with data scientists
● Experience with relational and NoSQL databases
● Experience with workflow scheduling/orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or mathematical models
● Strong data engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stacks (e.g. Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially Deep Learning) models to bring down serving latency.
● Exposure to ML model productionization stacks (e.g. MLflow, Docker)
● Excellent exploratory data analysis skills to slice & dice data at scale using SQL in Redshift/BigQuery.
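The Map/Reduce item in the list above can be sketched locally in a few lines of Python; this is a toy word count with explicit map, shuffle, and reduce phases (the sample documents are invented):

```python
from collections import defaultdict

docs = ["ship the parcel", "scan the parcel", "ship fast"]

# Map: emit (word, 1) pairs from each document.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: group emitted values by key.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: sum each group's values.
counts = {word: sum(values) for word, values in groups.items()}
print(counts)  # {'ship': 2, 'the': 2, 'parcel': 2, 'scan': 1, 'fast': 1}
```

Frameworks like Hadoop and Spark distribute exactly these three phases across machines; the shuffle is the expensive network step that pipeline design usually tries to minimize.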
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP. Mumbai IN
Job Description
GormalOne is a social-impact agri-tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly for the majority of farmers, who are digitally naive. We are looking for people who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction, amongst others.
GormalOne is looking for a machine learning engineer to join us. This collaborative yet dynamic role is suited to candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch or Tensorflow
Basic Qualifications
- Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience with tools and packages like TensorFlow, MLflow, and Airflow
- Experience with object detection techniques like YOLO
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
Kindly note: Salary shall be commensurate with qualifications and experience
Key deliverables for the Data Science Engineer would be to help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.
What will you do?
- You will be building and deploying ML models to solve specific business problems related to NLP, computer vision, and fraud detection.
- You will be constantly assessing and improving the model using techniques like Transfer learning
- You will identify valuable data sources and automate collection processes along with undertaking pre-processing of structured and unstructured data
- You will own the complete ML pipeline - data gathering/labeling, cleaning, storage, modeling, training/testing, and deployment.
- Assessing the effectiveness and accuracy of new data sources and data gathering techniques.
- Building predictive models and machine-learning algorithms to apply to data sets.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Presenting information using data visualization techniques and proposing solutions and strategies to business challenges
We would love to hear from you if :
- You have 2+ years of experience as a software engineer at a SaaS or technology company
- Demonstrable hands-on programming experience with Python/R Data Science Stack
- Ability to design and implement workflows of Linear and Logistic Regression, Ensemble Models (Random Forest, Boosting) using R/Python
- Familiarity with Big Data platforms (Databricks, Hadoop, Hive) and AWS services (SageMaker, IAM, S3, Lambda, Redshift, Elasticsearch)
- Experience in Probability and Statistics, ability to use ideas of Data Distributions, Hypothesis Testing and other Statistical Tests.
- Demonstrable competency in Data Visualisation using the Python/R Data Science Stack.
- Preferably experienced in web crawling and data scraping
- Strong experience in NLP. Worked on libraries such as NLTK, Spacy, Pattern, Gensim etc.
- Experience with text mining, pattern matching and fuzzy matching
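For the fuzzy-matching item above, Python's standard library already covers the basic case; a small sketch using difflib.SequenceMatcher (the merchant names and threshold are made up for illustration):

```python
from difflib import SequenceMatcher

def best_match(query, candidates, threshold=0.6):
    """Return the candidate most similar to `query`, or None if all score below threshold."""
    scored = [(SequenceMatcher(None, query.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

merchants = ["Acme Corporation", "Acme Corp Pvt Ltd", "Zenith Traders"]
print(best_match("acme corp", merchants))  # Acme Corporation
print(best_match("zzzz", merchants))       # None
```

`ratio()` returns 2*M/T, where M is the number of matched characters and T the total length of both strings; dedicated libraries add faster token-based scorers, but this is the core idea behind fuzzy record linkage.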
Why Tartan?
- Brand new Macbook
- Stock Options
- Health Insurance
- Unlimited Sick Leaves
- Passion Fund (Invest in yourself or your passion project)
- Wind Down
Job brief
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Requirements
1. 2 to 4 years of relevant industry experience
2. Experience in linear algebra, probability, and statistics (e.g., distributions), plus deep learning and machine learning
3. Strong mathematical and statistics background is a must
4. Experience in machine learning frameworks such as TensorFlow, Caffe, PyTorch, or MXNet
5. Strong industry experience in using design patterns, algorithms and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands-on development experience in Python and packages such as NumPy, scikit-learn and Matplotlib
8. Experience in model building and hyperparameter tuning
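The model building and performance tuning asked for above might look like this with scikit-learn's GridSearchCV; the dataset is synthetic and the parameter grid is illustrative:

```python
# Hypothetical hyperparameter-tuning sketch with scikit-learn's GridSearchCV.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=8, random_state=42)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, None],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=3,                  # 3-fold cross-validation per candidate
    scoring="accuracy",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```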
Introduction
Synapsica is a growth-stage HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective, while being affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on the cryptic two-liners given to them as a diagnosis. Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and Spinal Kinetics as our partners.
Your Roles and Responsibilities
The role involves computer vision tasks including development, customization and training of Convolutional Neural Networks (CNNs); application of ML techniques (SVM, regression, clustering etc.) and traditional Image Processing (OpenCV etc.). The role is research focused and would involve going through and implementing existing research papers, deep dive of problem analysis, generating new ideas, automating and optimizing key processes.
Requirements:
- Strong problem-solving ability
- Prior experience with Python, cuDNN, Tensorflow, PyTorch, Keras, Caffe (or similar Deep Learning frameworks).
- Extensive understanding of computer vision/image processing applications such as object classification, segmentation, and object detection
- Ability to write custom convolutional neural network architectures in PyTorch (or a similar framework)
- Experience of GPU/DSP/other Multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
- End to end deployment of deep learning models.
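The convolution at the heart of the CNN work described above can be sketched in plain NumPy; real models use optimised framework ops, so this is purely illustrative:

```python
# A minimal "valid" 2-D convolution in NumPy — the core operation behind CNNs.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    # Slide the flipped kernel over every valid position.
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])  # horizontal gradient kernel
print(conv2d(img, edge))        # right-minus-left differences
```

On the ramp image above, the gradient kernel produces a constant difference of 1.0 at every position, which is a quick sanity check for the implementation.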
1+ years of proven experience in ML/AI with Python
Work with the manager through the entire analytical and machine learning model life cycle:
⮚ Define the problem statement
⮚ Build and clean datasets
⮚ Exploratory data analysis
⮚ Feature engineering
⮚ Apply ML algorithms and assess the performance
⮚ Codify for deployment
⮚ Test and troubleshoot the code
⮚ Communicate analysis to stakeholders
Technical Skills
⮚ Proven experience in usage of Python and SQL
⮚ Excellent programming and statistics skills
⮚ Working knowledge of tools and utilities - AWS, DevOps with Git, Selenium, Postman, Airflow, PySpark
Global Banking Client
Work Location - Bangalore
The Data Analytics Senior Analyst is a seasoned professional role. Applies in-depth disciplinary knowledge, contributing to the development of new techniques and the improvement of processes and work-flow for the area or function. Integrates subject matter and industry expertise within a defined area. Requires in-depth understanding of how areas collectively integrate within the sub-function as well as coordinate and contribute to the objectives of the function and overall business. Evaluates moderately complex and variable issues with substantial potential impact, where development of an approach/taking of an action involves weighing various alternatives and balancing potentially conflicting situations using multiple sources of information. Requires good analytical skills in order to filter, prioritize and validate potentially complex and dynamic material from multiple sources. Strong communication and diplomacy skills are required. Regularly assumes informal/formal leadership role within teams. Involved in coaching and training of new recruits. Significant impact in terms of project size, geography, etc. by influencing decisions through advice, counsel and/or facilitating services to others in area of specialization. Work and performance of all teams in the area are directly affected by the performance of the individual.
Responsibilities:
- Build and enhance the software stack for modelling and data analytics
- Incorporate relevant data related algorithms in the products to solve business problems and improve them over time
- Automate repetitive data modelling and analytics tasks
- Keep up to date with available relevant technologies, to solve business problems
- Become a subject matter expert and closely work with analytics users to understand their need & provide recommendations/solutions
- Help define/share best practices for the business users and enforce/monitor that best practices are being incorporated for better efficiency (speed to market & system performance)
- Share daily/weekly progress made by the team
- Work with senior stakeholders & drive the discussions independently
- Mentor and lead a team of software developers on analytics related product development practices
Qualifications:
- 10-13 years of data engineering experience.
- Experience in working on machine-learning model deployment/scoring, model lifecycle management and model performance measurement.
- In-depth understanding of statistics and probability distributions, with experience of applying it in big-data software products for solving business problems
- Hands-on programming experience with big-data and analytics related product development using Python, Spark and Kafka to provide solutions for business needs.
- Intuitive with good interpersonal-skills, time-management and task-prioritization
- Ability to lead a technical team of software developers and mentor them on good software development practices.
- Ability to quickly grasp the business problem and nuances when put forth.
- Ability to quickly put together an execution plan and see it through till closure.
- Strong communication, presentation and influencing skills.
Education:
- Bachelor’s/University degree or equivalent experience
- Data Science or Analytics specialization preferred
Purpose of Job:
Responsible for leading a team of analysts to build and deploy predictive models that infuse core business functions with deep analytical insights. The Senior Data Scientist will also work closely with the Kinara management team to investigate strategically important business questions.
Job Responsibilities:
Lead a team through the entire analytical and machine learning model life cycle:
Define the problem statement
Build and clean datasets
Exploratory data analysis
Feature engineering
Apply ML algorithms and assess the performance
Code for deployment
Code testing and troubleshooting
Communicate Analysis to Stakeholders
Manage Data Analysts and Data Scientists
Qualifications:
Education: MS/MTech/BTech graduates or equivalent, with a focus on data science and quantitative fields (CS, Engineering, Mathematics, Economics)
Work Experience: 5+ years in a professional role with 3+ years in ML/AI
Other Requirements: ⮚ Domain knowledge in Financial Services is a big plus
Skills & Competencies
Technical Skills
⮚ Aptitude in Math and Stats
⮚ Proven experience in the use of Python, SQL, DevOps
⮚ Excellent in programming (Python), stats tools, and SQL
⮚ Working knowledge of tools and utilities - AWS, Git, Selenium, Postman,Prefect, Airflow, PySpark
Soft Skills
⮚ Deep Curiosity and Humility
⮚ Strong communication skills, verbal and written
Company Name: Curl Tech
Location: Bangalore
Website: www.curl.tech
Company Profile: Curl Tech is a deep-tech firm, based out of Bengaluru, India. Curl works on developing Products & Solutions leveraging emerging technologies such as Machine Learning, Blockchain (DLT) & IoT. We work on domains such as Commodity Trading, Banking & Financial Services, Healthcare, Logistics & Retail.
Curl has been founded by technology enthusiasts with rich industry experience. Products and solutions that have been developed at Curl, have gone on to have considerable success and have in turn become separate companies (focused on that product / solution).
If you are looking for a job that will challenge you, and you want to work with an organization that disrupts the entire value chain, Curl is the right one for you!
Designation: Data Scientist or Junior Data Scientist (according to experience)
Job Description:
Strong in machine learning and deep learning, and good with programming and maths.
Details: The candidate will be working on many image analytics/numerical data analytics projects. The work involves data collection, building machine learning models, deployment, client interaction, and publishing academic papers.
Responsibilities:
- The candidate will be working on many image analytics/numerical data projects.
- The candidate will be building various machine learning models depending upon the requirements.
- The candidate would be responsible for deployment of the machine learning models.
- The candidate would be the face of the company in front of clients and will have regular client interactions to understand the client requirements.
What we are looking for in candidates:
- Basic understanding of statistics, time series, machine learning, and deep learning, including their fundamentals and mathematical underpinnings.
- Proven code proficiency in Python, C/C++, or any other AI language of choice.
- Strong algorithmic thinking, creative problem solving, and the ability to take ownership and do independent research.
- Understanding how things work internally in ML and DL models is a must.
- Understanding of the fundamentals of computer vision and image processing techniques would be a plus.
- Expertise in OpenCV, ML/neural network technologies, and frameworks such as PyTorch or TensorFlow would be a plus.
- Educational background in any quantitative field (Computer Science / Mathematics / Computational Sciences and related disciplines) will be given preference.
Education: BE/ BTech/ B.Sc.(Physics or Mathematics)/Masters in Mathematics, Physics or related branches.
Conversational AI- Product Development company
Chatbot Developer
We are a Conversational AI product development company with offices in the USA and Bangalore.
We are looking for a Senior Chatbot /Javascript Developer to join the Avaamo PSG(delivery) team.
Responsibilities:
- Independent team member for analyzing requirements, designing, coding, and implementing Conversation AI products.
- As a product expert, work closely with IT Managers and Business Groups to gather requirements and translate them into the required technical solution.
- Drive solution implementation using the Conversational design approach.
- Develop, deploy and maintain customized extensions to the Avaamo platform-specific to customer requirements.
- Conduct training and technical guidance sessions for partner and customer development teams.
- Evaluate reported defects and correct prioritized defects.
- Travel onsite to customer locations for close support.
- Document how to's and implement best practices for Avaamo product solutions.
Requirements:
- Strong programming experience in javascript, HTML/CSS.
- Experience of creating and consuming REST APIs and SOAP services.
- Strong knowledge and awareness of Web Technologies and current web trends.
- Working knowledge of Security in Web applications and services.
- Experience in using Node.js with a good understanding of the underlying architecture.
- Experience deploying web applications on Linux servers in a production environment.
- Excellent communication skills.
Good to haves:
- Full stack experience UI and UX design experience or insights
- Working knowledge of AI, ML and NLP.
- Experience of enterprise systems integration like MS Dynamics CRM, Salesforce, ServiceNow, MS Active Directory etc.
- Experience of building Single Sign On in web/mobile applications.
- Ability to learn latest technologies and handle small engineering teams.
About us: Nexopay helps transform digital payments and enables instant financing for parents, across schools and colleges worldwide.
Responsibilities:
- Work with stakeholders throughout the organisation and across entities to identify opportunities for leveraging internal and external data to drive business impact
- Mine and analyze data to improve and optimise performance, capture meaningful insights and turn them into business advantages
- Assess the effectiveness and accuracy of new data sources and data gathering techniques
- Develop custom data models and algorithms to apply to data sets
- Use predictive modeling to predict outcomes and identify key drivers
- Coordinate with different functional teams to implement models and monitor outcomes
- Develop processes and tools to monitor and analyze model performance and data accuracy
Requirements:
- Experience in solving business problems using descriptive analytics and statistical modelling / machine learning
- 2+ years of strong working knowledge of SQL language
- Experience with visualization tools, e.g., Tableau, Power BI
- Working knowledge of handling analytical projects end to end using industry-standard tools (e.g., R, Python)
- Strong presentation and communication skills
- Experience in education sector is a plus
- Fluency in English
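The SQL fluency asked for above can be illustrated with the standard library's sqlite3; the schema and values are hypothetical:

```python
# Stdlib-only SQL sketch with sqlite3 (hypothetical payments schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (school TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO payments VALUES (?, ?)",
    [("A", 100.0), ("A", 50.0), ("B", 75.0)],
)
# Aggregate total collections per school, largest first
rows = conn.execute(
    "SELECT school, SUM(amount) FROM payments "
    "GROUP BY school ORDER BY SUM(amount) DESC"
).fetchall()
print(rows)  # → [('A', 150.0), ('B', 75.0)]
conn.close()
```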
Machine Learning & Deep Learning – strong
Experienced in TensorFlow, PyTorch, ONNX; object detection with pretrained models like YOLO, SSD, Faster R-CNN, etc.
Python – strong
NumPy, Pandas, OpenCV
Problem solving – strong
C++ – average
Working experience with C++ in any domain would be good to have
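Detectors like the YOLO/SSD/Faster R-CNN models listed above are usually evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal sketch:

```python
# Sketch of intersection-over-union (IoU), the standard box-overlap metric
# used to evaluate object detection models.
def iou(box_a, box_b):
    """Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # → 0.333...
```

A detection is commonly counted as correct when IoU with a ground-truth box exceeds a threshold such as 0.5.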
Note: Looking for candidates with an immediate to 30-day notice period.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with AI and audio technologies around audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
Developing audio algorithms to detect key moments within popular online games, such as:
Streamer speaking, shouting, etc.
Gunfire, explosions, and other in-game audio events
Speech-to-text and sentiment analysis of the streamer’s narration
Leveraging baseline technologies such as TensorFlow and others -- and building models on top of them
Building neural network architectures for audio analysis as it pertains to popular games
Specifying exact requirements for training data sets, and working with analysts to create the data sets
Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
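A naive version of the loud-moment detection described above (shouting, gunfire) can be sketched as a short-window RMS energy threshold; the window size and threshold here are placeholder values, and a production system would use trained models instead:

```python
# Illustrative audio event detection: flag windows whose RMS energy is high.
import numpy as np

def loud_windows(signal: np.ndarray, win: int, threshold: float) -> list[int]:
    """Return start indices of non-overlapping windows whose RMS exceeds `threshold`."""
    hits = []
    for start in range(0, len(signal) - win + 1, win):
        rms = np.sqrt(np.mean(signal[start:start + win] ** 2))
        if rms > threshold:
            hits.append(start)
    return hits

# Quiet synthetic signal with one loud burst in the middle
sig = np.zeros(1000)
sig[400:500] = 0.9
print(loud_windows(sig, win=100, threshold=0.5))  # → [400]
```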
You should have the following qualities:
Solid understanding of AI frameworks and algorithms, especially pertaining to audio analysis, speech-to-text, sentiment analysis, and natural language processing
Experience using Python, TensorFlow and other AI tools
Demonstrated understanding of various algorithms for audio analysis, such as CNNs, LSTM for natural language processing, and others
Nice to have: some familiarity with AI-based audio analysis including sentiment analysis
Familiarity with AWS environments
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Skills:
Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Work Experience: 2 years to 10 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world that watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with computer vision and AI technologies around image and video analysis.
You will be responsible for:
- Developing computer vision algorithms to detect key moments within popular online games
- Leveraging baseline technologies such as TensorFlow, OpenCV, and others -- and building models on top of them
- Building neural network (CNN) architectures for image and video analysis, as it pertains to popular games
- Specifying exact requirements for training data sets, and working with analysts to create the data sets
- Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
- Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
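The data augmentation mentioned in the training step above can be sketched on a NumPy image; the flip-plus-brightness transform here is a deliberately minimal example of the technique:

```python
# Minimal data-augmentation sketch: horizontal flip + brightness shift.
import numpy as np

def augment(img: np.ndarray, brightness: float = 10.0) -> np.ndarray:
    flipped = img[:, ::-1]                        # mirror left-right
    return np.clip(flipped + brightness, 0, 255)  # shift brightness, stay in 8-bit range

img = np.array([[0, 50], [100, 250]], dtype=float)
print(augment(img))
```

Frameworks apply pipelines of such random transforms on the fly during training so the model sees a slightly different image each epoch.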
You should have the following qualities:
- Solid understanding of computer vision and AI frameworks and algorithms, especially pertaining to image and video analysis
- Experience using Python, TensorFlow, OpenCV and other computer vision tools
- Understanding of common computer vision object detection models in use today, e.g., Inception, R-CNN, YOLO, MobileNet SSD
- Demonstrated understanding of various algorithms for image and video analysis, such as CNNs, LSTM for motion and inter-frame analysis, and others
- Familiarity with AWS environments
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Machine Learning, Computer Vision, Image Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Seniority: We are open to junior or senior engineers. We're more interested in the proper skillsets.
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply. However, if you don't have AI or computer vision experience, please do not apply.
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams.
For this role, we're looking for someone that ideally loves to watch video gaming content on Twitch and YouTube. Specifically, you will help generate training data for all the AI we are building. This will include gathering screenshots, clips and other data from gaming videos on Twitch and YouTube. You will then be responsible for labeling and annotating them. You will work very closely with our AI engineers.
You will:
- Gather training data as specified by the management and engineering team
- Label and annotate all the training data
- Ensure all data is prepped and ready to feed into the AI models
- Revise the training data as specified by the engineering team
- Test the output of the AI models and update training data needs
You should have the following qualities:
- Willingness to work hard and hit deadlines
- Work well with people
- Be able to work remotely (if not in Bangalore)
- Interested in learning about AI and computer vision
- Willingness to learn rapidly on the job
- Ideally a gamer or someone interested in watching gaming content online
Skills:
Data labeling, annotation, AI, computer vision, gaming
Work Experience: 0 years to 3 years
About Sizzle
Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, there are over 700 million fans around the world that watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
Job Description
Do you have a passion for computer vision and deep learning problems? We are looking for someone who thrives on collaboration and wants to push the boundaries of what is possible today! Material Depot (materialdepot.in) is on a mission to be India’s largest tech company in the Architecture, Engineering and Construction space by democratizing the construction ecosystem and bringing stakeholders onto a common digital platform. Our engineering team is responsible for developing Computer Vision and Machine Learning tools to enable digitization across the construction ecosystem. The founding team includes people from top management consulting firms and top colleges in India (like BCG and IITB), has worked extensively in the construction space globally, and is funded by top Indian VCs.
Our team empowers Architectural and Design Businesses to effectively manage their day to day operations. We are seeking an experienced, talented Data Scientist to join our team. You’ll be bringing your talents and expertise to continue building and evolving our highly available and distributed platform.
Our solutions need complex problem solving in computer vision that require robust, efficient, well tested, and clean solutions. The ideal candidate will possess the self-motivation, curiosity, and initiative to achieve those goals. Analogously, the candidate is a lifelong learner who passionately seeks to improve themselves and the quality of their work. You will work together with similar minds in a unique team where your skills and expertise can be used to influence future user experiences that will be used by millions.
In this role, you will:
- Extensive knowledge in machine learning and deep learning techniques
- Solid background in image processing/computer vision
- Experience in building datasets for computer vision tasks
- Experience working with and creating data structures / architectures
- Proficiency in at least one major machine learning framework
- Experience visualizing data to stakeholders
- Ability to analyze and debug complex algorithms
- Good understanding and applied experience in classic 2D image processing and segmentation
- Robust semantic object detection under different lighting conditions
- Segmentation of non-rigid contours in challenging/low contrast scenarios
- Sub-pixel accurate refinement of contours and features
- Experience in image quality assessment
- Experience with in depth failure analysis of algorithms
- Highly skilled in at least one scripting language such as Python or Matlab and solid experience in C++
- Creativity and curiosity for solving highly complex problems
- Excellent communication and collaboration skills
- Mentor and support other technical team members in the organization
- Create, improve, and refine workflows and processes for delivering quality software on time and with carefully calculated debt
- Work closely with product managers, customer support representatives, and account executives to help the business move fast and efficiently through relentless automation.
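The classic 2D image processing and segmentation mentioned above can be sketched with Otsu's global threshold in NumPy; OpenCV's `cv2.threshold` with `THRESH_OTSU` is the usual production route, so this is purely illustrative:

```python
# Sketch of classic 2-D segmentation via Otsu's global threshold, in NumPy.
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Pick the gray level maximising between-class variance (8-bit images)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic bimodal image: dark background (20) and bright foreground (200)
img = np.concatenate([np.full(100, 20), np.full(100, 200)]).reshape(10, 20).astype(np.uint8)
t = otsu_threshold(img)
mask = img > t  # foreground segmentation
print(t, mask.sum())
```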
How you will do this:
- You’re part of an agile, multidisciplinary team.
- You bring your own unique skill set to the table and collaborate with others to accomplish your team’s goals.
- You prioritize your work with the team and its product owner, weighing both the business and technical value of each task.
- You experiment, test, try, fail, and learn continuously.
- You don’t do things just because they were always done that way, you bring your experience and expertise with you and help the team make the best decisions.
For this role, you must have:
- Strong knowledge of and experience with the functional programming paradigm.
- Experience conducting code reviews, providing feedback to other engineers.
- Great communication skills and a proven ability to work as part of a tight-knit team.
Responsibilities Description:
Responsible for the development and implementation of machine learning algorithms and techniques to solve business problems and optimize member experiences. Primary duties may include, but are not limited to: designing machine learning projects to address specific business problems determined in consultation with business partners; working with datasets of varying size and complexity, including both structured and unstructured data; piping and processing massive data streams in distributed computing environments such as Hadoop to facilitate analysis; implementing batch and real-time model scoring to drive actions; developing machine learning algorithms to build customized solutions that go beyond standard industry tools and lead to innovative solutions; and developing sophisticated visualizations of analysis output for business users.
Experience Requirements:
BS/MA/MS/PhD in Statistics, Computer Science, Mathematics, Machine Learning, Econometrics, Physics, Biostatistics or related Quantitative disciplines. 2-4 years of experience in predictive analytics and advanced expertise with software such as Python, or any combination of education and experience which would provide an equivalent background. Experience in the healthcare sector. Experience in Deep Learning strongly preferred.
Required Technical Skill Set:
- Full cycle of building machine learning solutions,
o Understanding of wide range of algorithms and their corresponding problems to solve
o Data preparation and analysis
o Model training and validation
o Model application to the problem
- Experience using the full suite of open-source programming tools and utilities
- Experience in working in end-to-end data science project implementation.
- 2+ years of experience with development and deployment of Machine Learning applications
- 2+ years of experience with NLP approaches in a production setting
- Experience in building models using bagging and boosting algorithms
- Exposure/experience in building Deep Learning models for NLP/Computer Vision use cases preferred
- Ability to write efficient code with good understanding of core Data Structures/algorithms is critical
- Strong python skills following software engineering best practices
- Experience in using code versioning tools like Git and Bitbucket
- Experience in working in Agile projects
- Comfort & familiarity with SQL and the Hadoop ecosystem of tools, including Spark
- Experience managing big data with efficient query programs is good to have
- Good to have: experience in training ML models in tools like SageMaker, Kubeflow, etc.
- Good to have: experience with model-interpretability frameworks using libraries like LIME, SHAP, etc.
- Experience with Health care sector is preferred
- MS/M.Tech or PhD is a plus
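The bagging and boosting families mentioned in the requirements above can be contrasted in a few lines of scikit-learn; the dataset is synthetic and the hyperparameters are illustrative:

```python
# Hedged sketch contrasting bagging and boosting ensembles with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

models = {
    # Bagging: trains estimators independently on bootstrap samples (variance reduction)
    "bagging": BaggingClassifier(n_estimators=50, random_state=0),
    # Boosting: trains estimators sequentially on residual errors (bias reduction)
    "boosting": GradientBoostingClassifier(n_estimators=50, random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=3).mean() for name, m in models.items()}
print(scores)
```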
Develop state of the art algorithms in the fields of Computer Vision, Machine Learning and Deep Learning.
Provide software specifications and production code on time to meet project milestones.
Qualifications
BE or Master's with 3+ years of experience
Must have prior knowledge and experience in image processing and video processing
Should have knowledge of object detection and recognition
Must have experience in feature extraction, segmentation and classification of images
Face detection, alignment, recognition, tracking & attribute recognition
Excellent understanding and project/job experience in machine learning, particularly in areas of deep learning – CNN, RNN, TensorFlow, Keras, etc.
Real-world expertise in deep learning applied to computer vision problems
Strong foundation in mathematics
Strong development skills in Python
Must have worked with vision and deep learning libraries and frameworks such as OpenCV, TensorFlow, PyTorch, Keras
Quick learner of new technologies
Ability to work independently as well as part of a team
Knowledge of working closely with version control (Git)
Job Description: Senior Software Developer (Exp.2-6 years)
Location: Bangalore
What you need:
* Bachelor’s/Master’s degree is preferred in computer science or related field (such as computer engineering, software engineering, biomedical engineering, or mathematical sciences) from premier institutes.
* 1-3 years of industry experience in professional software development.
* Strong C++ knowledge.
* Knowledge of ITK / VTK / OpenCV / Robots / Qt Framework is plus.
* Required technical competencies: algorithms and data structures, object-oriented design and analysis.
* Expertise in Design Patterns & C++ programming concepts; Linear Algebra, Computer Vision, Software design, development and verification methodologies would be preferred.
* Should be open to work in fast growing medical devices start-up making cutting edge computer assisted & robotic assisted surgery products in India for the world.
* Should have willingness to develop something great from India.
What you will do:
* Work with program manager to understand business requirement and translate that into technical design.
* Create and own leading edge reusable algorithm solutions.
* Create and own cross-platform SDKs.
* Research cutting-edge algorithms and techniques.
* Lead technical design and implementation of a feature.
* Implement high quality code with comprehensive unit testing.
* Troubleshoot issues raised from production and resolve customer problems.
* Evaluate and adopt technologies which improve the team efficiency and platform capability.
* Review peer developers' code and provide constructive feedback to ensure consistency and quality of code.
* Be a part of core R&D team for developing Surgical Robots.
* Ensure the integrity and security of company intellectual property and confidential data.
Company Profile
Happy Reliable Surgeries Pvt Ltd (HRS Navigation) started in 2015. It is India's first and only company to develop high-tech surgical navigation systems for highly complex brain & spine surgeries. Our products directly compete with the world's biggest medical device companies. We are proudly one of the few global companies with the capability to develop computer-assisted and robotic-assisted surgery products. The R&D centre is based in Bangalore. The company was started by former employees of a global medical device company and has been incubated & mentored by IIM Calcutta.
Why Do We Exist: Currently, all hi-tech medical devices are imported, which leads to very high healthcare costs and puts them out of reach for common people. Our purpose is to make premium healthcare affordable & accessible through continuous innovation for our people. We want to make India proud.
Our Achievements: -
- 10 Best start-up in Medical Devices (Insight Success Magazine)
- Won Top 50 emerging product start-up NASSCOM 2017 (National Award)
- Winners in the Economic Power of Ideas award 2018, IIM Ahmedabad (National Award)
- Winners in Smart Fifty competition conducted by IIM Calcutta (National Award)
- Won Elevate 100, Karnataka top 100 company (State Award from Karnataka Government)
- Grown Exponential even in 2020 financial Year. (No Salary Cuts / Firings)
- 3000+ successful Surgeries performed by our products.
Our client company is in the telecommunications industry. (SY1)
- Participate in the full machine learning lifecycle, from data collection, cleaning, and preprocessing to training models and deploying them to production.
- Discover data sources, get access to them, ingest them, clean them up, and make them “machine learning ready”.
- Work with data scientists to create and refine features from the underlying data and build pipelines to train and deploy models.
- Partner with data scientists to understand and implement machine learning algorithms.
- Support A/B tests, gather data, perform analysis, draw conclusions on the impact of your models.
- Work cross-functionally with product managers, data scientists, and product engineers, and communicate results to peers and leaders.
- Mentor junior team members
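The lifecycle described above, from ingesting data through preparing features to training, evaluating, and promoting a model, can be sketched in a few lines. This is an illustrative example only, with synthetic data standing in for a real source; the feature and model choices are assumptions, not part of the role.

```python
# Minimal sketch of an ML lifecycle: ingest -> split -> prepare features ->
# train -> evaluate. Synthetic data stands in for a real data source.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. "Ingest" raw data (here: synthetic features with a simple linear signal).
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# 2. Split: hold out data for evaluation before any fitting happens.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 3. Feature preparation and training live in one pipeline, so the exact same
#    transformations are applied again at inference time.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression())])
pipe.fit(X_train, y_train)

# 4. Evaluate on held-out data before promoting the model to production.
accuracy = pipe.score(X_test, y_test)
```

Bundling preprocessing and the model into a single `Pipeline` is what makes the artifact "machine learning ready" for deployment: the serving path calls one object and cannot drift from the training-time feature logic.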
Who we have in mind:
- Graduate in Computer Science or related field, or equivalent practical experience.
- 4+ years of experience in software engineering with 2+ years of direct experience in the machine learning field.
- Proficiency with SQL, Python, Spark, and basic libraries such as Scikit-learn, NumPy, Pandas.
- Familiarity with deep learning frameworks such as TensorFlow or Keras
- Experience with Computer Vision (OpenCV) and NLP frameworks (NLTK, spaCy, BERT).
- Basic knowledge of machine learning techniques (e.g. classification, regression, and clustering).
- Understanding of machine learning principles (training, validation, etc.)
- Strong hands-on knowledge of data query and data processing tools (e.g. SQL)
- Software engineering fundamentals: version control systems (e.g. Git, GitHub) and workflows, and the ability to write production-ready code.
- Experience deploying highly scalable software supporting millions or more users
- Experience building applications on cloud (AWS or Azure)
- Experience working in scrum teams with Agile tools like JIRA
- Strong oral and written communication skills. Ability to explain complex concepts and technical material to non-technical users
Global Internet of Things connected solutions provider (H1)
- Work individually or as part of a team on data science projects, collaborating closely with lines of business to understand business problems and translate them into identifiable machine learning problems that can be delivered as technical solutions.
- Build quick prototypes to check feasibility and value to the business.
- Design, train, and deploy neural networks for computer vision and machine learning problems.
- Perform various complex activities related to statistical/machine learning.
- Coordinate with business teams to provide analytical support for developing, evaluating, implementing, monitoring, and executing models.
- Collaborate with technology teams to deploy the models to production.
Key Criteria:
- 2+ years of experience in solving complex business problems using machine learning.
- Understanding and modeling experience in supervised, unsupervised, and deep learning models; hands-on knowledge of data wrangling, data cleaning/ preparation, dimensionality reduction is required.
- Experience in Computer Vision/Image Processing/Pattern Recognition, Machine Learning, Deep Learning, or Artificial Intelligence.
- Understanding of deep learning architectures such as InceptionNet, VGGNet, FaceNet, YOLO, SSD, R-CNN, Mask R-CNN, and ResNet.
- Experience with one or more deep learning frameworks e.g., TensorFlow, PyTorch.
- Knowledge of vector algebra, statistical and probabilistic modeling is desirable.
- Proficiency in programming skills involving Python, C/C++, and Python Data Science Stack (NumPy, SciPy, Pandas, Scikit-learn, Jupyter, IPython).
- Experience working with Amazon SageMaker or Azure ML Studio for deployments is a plus.
- Experience with data visualization software such as Tableau, ELK, etc. is a plus.
- Strong analytical, critical thinking, and problem-solving skills.
- B.E/ B.Tech./ M. E/ M. Tech in Computer Science, Applied Mathematics, Statistics, Data Science, or related Engineering field.
- Minimum 60% in Graduation or Post-Graduation
- Great interpersonal and communication skills
● Research and develop advanced statistical and machine learning models for analysis of large-scale, high-dimensional data.
● Dig deeper into data, understand its characteristics, evaluate alternate models, and validate hypotheses through theoretical and empirical approaches.
● Productize proven or working models into production-quality code.
● Collaborate with product management, marketing, and engineering teams in Business Units to elicit and understand their requirements and challenges, and develop potential solutions.
● Stay current with the latest research and technology ideas; share knowledge by clearly articulating results and ideas to key decision-makers.
● File patents for innovative solutions that add to the company's IP portfolio.
Requirements
● 4 to 6 years of strong experience in data mining, machine learning, and statistical analysis.
● BS/MS/Ph.D. in Computer Science, Statistics, Applied Math, or related areas from premier institutes (only IITs / IISc / BITS / top NITs or top US universities should apply).
● Experience in productizing models to code in a fast-paced start-up environment.
● Fluency in analytical tools such as MATLAB, R, Weka, etc.
● Strong intuition for data and a keen aptitude for large-scale data analysis.
● Strong communication and collaboration skills.
at Synapsica Technologies Pvt Ltd
Sr AI Scientist, Bengaluru
Job Description
Introduction
Synapsica is a growth-stage HealthTech startup founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, and they shouldn't have to rely on the cryptic two-liners given to them as a diagnosis. Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and Spinal Kinetics as our partners.
Your Roles and Responsibilities
The role involves computer vision tasks including the development, customization, and training of Convolutional Neural Networks (CNNs); the application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and involves reading and implementing existing research papers, deep-dive problem analysis, generating new ideas, and automating and optimizing key processes.
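The traditional image-processing side of this role typically means steps like normalisation and augmentation before CNN training. The sketch below illustrates two such steps in plain NumPy so it is self-contained; in practice OpenCV calls such as `cv2.resize` and `cv2.flip` would do this work, and the normalisation constants shown are the commonly used ImageNet values, assumed here purely for illustration.

```python
# Illustrative pre-processing/augmentation steps of the kind done with OpenCV
# before CNN training, written in NumPy to stay self-contained.
import numpy as np

def preprocess(img, mean, std):
    """Scale pixel values to [0, 1] and normalise per channel."""
    x = img.astype(np.float32) / 255.0
    return (x - mean) / std

def augment_hflip(img):
    """Horizontal-flip augmentation (the cv2.flip(img, 1) equivalent)."""
    return img[:, ::-1, :]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # fake RGB image

# Commonly used ImageNet channel statistics (an assumption for this sketch).
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

x = preprocess(img, mean, std)
flipped = augment_hflip(img)
```

Flipping is involutive, so applying the augmentation twice recovers the original image, which is a cheap sanity check for any augmentation pipeline.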
Requirements:
- 4+ years of relevant experience in solving complex real-world problems at scale via computer vision based deep learning.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, Caffe (or similar deep learning frameworks).
- Extensive understanding of computer vision/image processing applications such as object classification, segmentation, object detection, etc.
- Ability to write custom Convolutional Neural Network architectures in PyTorch (or similar).
- Experience with GPU/DSP/other multi-core architecture programming.
- Effective communication with other project members and project stakeholders
- Detail-oriented and eager to learn and acquire new skills.
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
The role involves computer vision tasks including the development, customization, and training of Convolutional Neural Networks (CNNs); the application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and involves reading and implementing existing research papers, deep-dive problem analysis, generating new ideas, and automating and optimizing key processes.
Requirements:
- 2-4 years of relevant experience in solving complex real-world problems at scale via deep learning, computer vision, or AI.
- Python, cuDNN, TensorFlow/PyTorch/Keras (or similar deep learning frameworks).
- CNNs, RNNs, transfer learning (for image classification, segmentation, object detection, etc.).
- Image processing techniques using OpenCV or other white-box image feature extraction algorithms.
- End-to-end deployment of deep learning models.
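"End-to-end deployment" in practice means serializing the trained model so a separate serving process can load it and run inference only. Deep learning stacks have dedicated export paths for this (TorchScript for PyTorch, SavedModel for TensorFlow); the sketch below shows the same serialize-then-serve pattern with scikit-learn and `pickle` only to keep the example self-contained and lightweight, so the model and data are illustrative assumptions.

```python
# Serialize-then-serve pattern behind model deployment: train once, persist
# the artifact, and let a fresh "serving" process load it for inference only.
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny toy training set: inputs below ~1.5 are class 0, above are class 1.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# "Export": persist the trained model exactly as the serving side will load it.
blob = pickle.dumps(model)

# "Serve": a deployment process unpickles the artifact and runs inference only.
served = pickle.loads(blob)
preds = served.predict(np.array([[0.5], [2.5]]))
```

The key property being demonstrated is that the served model reproduces the trained model's predictions bit-for-bit; production pipelines add versioning and latency/throughput optimization on top of this same pattern.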
Ganit has flipped the data science value chain: we do not start with a technique; for us, consumption comes first. With this philosophy, we have successfully scaled from a small start-up to a 200-person company with clients in the US, Singapore, Africa, the UAE, and India.
We are looking for experienced data enthusiasts who can make the data talk to them.
You will:
- Understand business problems and translate business requirements into technical requirements.
- Conduct complex data analysis to ensure data quality & reliability i.e., make the data talk by extracting, preparing, and transforming it.
- Identify, develop and implement statistical techniques and algorithms to address business challenges and add value to the organization.
- Gather requirements and communicate findings in the form of a meaningful story with the stakeholders
- Build & implement data models using predictive modelling techniques. Interact with clients and provide support for queries and delivery adoption.
- Lead and mentor data analysts.
We are looking for someone who has:
- Apart from your love for data and your ability to code even while sleeping, you will need the following.
- A minimum of 2 years of experience in designing and delivering data science solutions.
- You should have successful retail/BFSI/FMCG/Manufacturing/QSR projects in your kitty to show off.
- Deep understanding of various statistical techniques, mathematical models, and algorithms to start the conversation with the data in hand.
- Ability to choose the right model for the data and translate that into a code using R, Python, VBA, SQL, etc.
- Bachelor's/Master's degree in Engineering/Technology, an MBA from a Tier-1 B-school, or an MSc in Statistics or Mathematics.
Skillset Required:
- Regression
- Classification
- Predictive Modelling
- Prescriptive Modelling
- Python
- R
- Descriptive Modelling
- Time Series
- Clustering
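Two of the listed skills, regression and clustering, can be illustrated in a few lines on synthetic data. This is a toy sketch only; the data, the chosen models, and all numbers are assumptions made for the example.

```python
# Toy illustration of two items from the skill list: regression recovers a
# known linear relationship, and clustering separates two obvious groups.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Regression: fit y = 3x + 2 exactly (no noise, so coefficients are recovered).
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 2
reg = LinearRegression().fit(X, y)

# Clustering: two well-separated blobs, so KMeans finds the obvious split.
blobs = np.vstack([rng.normal(0, 0.5, size=(50, 2)),
                   rng.normal(10, 0.5, size=(50, 2))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(blobs)
```

Choosing the right model for the data, as the description puts it, starts with checks like these: confirming a linear fit recovers known structure, and that cluster assignments line up with the groups you expect.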
What is in it for you:
- Be a part of building the biggest brand in Data science.
- An opportunity to be a part of a young and energetic team with a strong pedigree.
- Work on awesome projects across industries and learn from the best in the industry, while growing at a hyper rate.
Please Note:
At Ganit, we are looking for people who love problem solving. You are encouraged to apply even if your experience does not precisely match the job description above. Your passion and skills will stand out and set you apart, especially if your career has taken some extraordinary twists and turns over the years. We welcome diverse perspectives and people who think rigorously and are not afraid to challenge assumptions in a problem. Join us and punch above your weight!
Ganit is an equal opportunity employer and is committed to providing a work environment that is free from harassment and discrimination.
All recruitment, selection procedures and decisions will reflect Ganit’s commitment to providing equal opportunity. All potential candidates will be assessed according to their skills, knowledge, qualifications, and capabilities. No regard will be given to factors such as age, gender, marital status, race, religion, physical impairment, or political opinions.