Computer Vision Jobs
Company Overview:
At Codvo, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.
Required Skills (Technical):
- Advanced knowledge of statistical techniques, NLP, machine learning algorithms, and deep learning frameworks like TensorFlow, Theano, Keras, PyTorch.
- Proficiency with modern statistical modelling (regression, boosting trees, random forests, etc.), machine learning (text mining, neural networks, NLP, etc.), and optimization (linear optimization, nonlinear optimization, stochastic optimization, etc.) methodologies.
- Build complex predictive models using ML and DL techniques with production-quality code, and jointly own complex data science workflows with the Data Engineering team.
- Familiarity with modern data analytics architecture and data engineering technologies (SQL and No-SQL databases).
- Knowledge of REST APIs and Web Services
- Experience with Python, R, sh/bash
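To give a concrete flavour of the modelling skills listed above, here is a minimal, hypothetical scikit-learn sketch; the synthetic data and parameters are illustrative assumptions, not part of the role:

```python
# Minimal sketch: train and evaluate a random forest regressor on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))                                  # 5 synthetic features
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```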
Required Skills (Non-Technical):
- Fluent in English communication (spoken and written)
- Should be a team player
- Should have a learning aptitude
- Detail-oriented and analytical
- Extremely organized with strong time-management skills
- Problem Solving & Critical Thinking
Xuriti is an anchor-led B2B sales enablement platform. We use anchor-sponsored credit for retailers to create better engagement and bring a better understanding of the complete sales channel.
We are looking for a dynamic, energetic Data Science Intern who is eager to learn about underwriting and risk modeling in the B2B fintech space.
Responsibilities
· Undertake data collection, preprocessing, and analysis
· Build models to address business problems
· Undertake preprocessing of structured and unstructured data
· Analyze large amounts of information to discover trends and patterns
· Build predictive models and machine-learning algorithms
· Combine models through ensemble modeling
· Present information using data visualization techniques
· Propose solutions and strategies to business challenges
· Collaborate with engineering and product development teams
Qualifications
· Understanding of machine-learning and operations research
· Knowledge of R, SQL, Python, and MongoDB
· Analytical mind and business acumen
· Strong math skills (e.g. statistics, algebra)
· Problem-solving aptitude
· Excellent communication and presentation skills
· BSc/BTech in Computer Science, Engineering, or a relevant field; a graduate degree in Data Science or another quantitative field is preferred
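As a rough illustration of the preprocessing work an intern would undertake, here is a minimal pandas sketch; the column names and values are hypothetical:

```python
# Minimal preprocessing sketch with pandas: impute missing values and
# one-hot encode a categorical column. All data is made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "monthly_purchases": [12, None, 7, 3],
    "region": ["north", "south", None, "north"],
})

df["monthly_purchases"] = df["monthly_purchases"].fillna(df["monthly_purchases"].median())
df["region"] = df["region"].fillna("unknown")
df = pd.get_dummies(df, columns=["region"])    # one-hot encode the categorical column

print(df.head())
```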
Required:
- 3+ years’ experience designing, developing, and implementing ML/AI algorithms
- Strong programming skills in Python, including fundamental software engineering principles and machine learning design patterns
- Experience with at least one deep learning framework such as TensorFlow, PyTorch, or Keras, or an ML framework such as scikit-learn
- Demonstrated understanding of AI/ML algorithms, learning models, graphical models, overfitting, regularization, etc.
- Experience in building ML pipelines for processing large datasets
- Understanding of state-of-the-art approaches to computer vision
- Experience in working with medical images (MRI/CT/X-ray, etc.)
- Refereed publications in premier computer vision and/or bioinformatics venues
- Deep knowledge of math (linear algebra), probability, statistics, and algorithms
- Outstanding analytical and problem-solving skills
- Works well in a fast-moving, small team environment
- Demonstrate ability in NLP/ML/DL project solutions and architectures.
- Strong ability in developing NLP tools and end-to-end solutions.
- Minimum 1 year of experience in text cleaning, data wrangling, and text mining.
- Good understanding of Rule-based, statistical, and probabilistic NLP techniques.
- Collaborate with analytics team members to design, implement, and develop enterprise-level NLP capabilities, including data engineering, technology platforms, and algorithms.
- Good knowledge of NLP approaches and concepts like topic modelling, text summarization, semantic modelling, Named Entity recognition, etc.
- Evaluate and benchmark the performance of different NLP systems and provide guidance on metrics and best practices.
- Test and deploy promising solutions quickly, managing deadlines and deliverables while applying latest research and techniques.
- Collaborate with business stakeholders to effectively integrate and communicate analysis findings across NLP solutions.
Key Technical Skills:
- Hands-on experience in building NLP models using different NLP libraries and toolkits like NLTK, Stanford NLP, TextBlob, OCR, etc.
- Strong programming skills in Python
- Good to have: programming skills in Java/Scala/C/C++.
- Strong problem-solving, logical, and communication skills.
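For a concrete sense of the NLTK-based text cleaning mentioned above, here is a minimal sketch; the sample sentence is made up and the calls assume the standard NLTK corpora downloads:

```python
# Minimal NLTK text-cleaning sketch: tokenize, lowercase, drop stopwords, lemmatize.
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

text = "The shipments were delayed because the warehouses ran out of capacity."
stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

tokens = [t.lower() for t in word_tokenize(text) if t.isalpha()]
cleaned = [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]
print(cleaned)
```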
Tags: Natural Language Processing (NLP), Artificial Intelligence (AI), Machine Learning (ML), and Natural Language Toolkit (NLTK), Analytics
WHO WE ARE:
TIFIN is a fintech platform backed by industry leaders including JP Morgan, Morningstar, Broadridge, Hamilton Lane and a who’s who of the financial service industry. We are creating engaging wealth experiences to better financial lives through AI and investment intelligence powered personalization. We are working to change the world of wealth in ways that personalization has changed the world of movies, music and more but with the added responsibility of delivering better wealth outcomes.
We use design and behavioural thinking to enable engaging experiences through software and application programming interfaces (APIs). We use investment science and intelligence to build algorithmic engines inside the software and APIs to enable better investor outcomes.
In a world where every individual is unique, we match them to financial advice and investments with a recognition of their distinct needs and goals across our investment marketplace and our advice and planning divisions.
OUR VALUES:
- Shared Understanding through Listening and Speaking the Truth. We communicate with radical candor, precision and compassion to create a shared understanding. We challenge, but once a decision is made, commit fully. We listen attentively, speak candidly.
- Teamwork for Teamwin. We believe in winning together and learning together. We fly in formation. We cover each other’s backs. We inspire each other with our energy and attitude.
- Make Magic for our Users. We center around the voice of the customer. With deep empathy for our clients, we create technology that transforms investor experiences.
- Grow at the Edge. We are driven by personal growth. We get out of our comfort zone and keep egos aside to find our genius zones. We strive to be the best we can possibly be. No excuses.
- Innovate with Creative Solutions. We believe that disruptive innovation begins with curiosity and creativity. We challenge the status quo and problem solve to find new answers.
WHAT YOU'LL BE DOING:
We are looking for an experienced quantitative professional to develop, implement, test, and maintain the core algorithms and R&D framework for our Investment and investment advisory platform. The ideal candidate for this role has successfully implemented and maintained quantitative and statistical modules using modular software design constructs. The candidate needs to be a responsible product owner, a problem solver and a team player looking to make a significant impact on a fast-growing company. The successful candidate will directly report to the Head of Quant Research & Development.
Responsibilities:
- The end-to-end research, development, and maintenance of the investment platform, data, and algorithms
- Take part in building out the R&D backtesting and simulation engines
- Thoroughly vet investment algorithmic results
- Contribute to the research data platform design
- Investigate datasets for use in new or existing algorithms
- Participate in agile development practices
- Liaise with stakeholders to gather & understand the functional requirements
- Take part in code reviews, ensuring quality meets the highest standards
- Develop software using high quality standards and best practices, conduct thorough end-to-end unit testing, and provide support during testing and post go-live
- Support research innovation through creative and aggressive experimentation with cutting-edge hardware, software, processes, procedures, and methods
- Collaborate with technology teams to ensure appropriate requirements, standards, and integration
Qualifications / Skillsets:
- Experience in a quant research & development role
- Proficient in Python, Git and Jira
- Knowledge in SQL and database development (PostgreSQL is a plus)
- Understanding of R and RMarkdown is a plus
- Bachelor’s degree in computer science, computational mathematics, or financial engineering
- Master’s degree or advanced training is a strong plus
- Excellent mathematical foundation and hands-on experience working in the finance industry
- Proficient in quantitative, statistical, and ML/AI techniques and their implementation using Python modules such as Pandas, NumPy, SciPy, SciKit-Learn, etc.
- Strong communication (written and oral) and analytical problem-solving skills
- Strong sense of attention to detail, pride in delivering high quality work and willingness to learn
- An understanding of or exposure to financial capital markets, various financial instruments (such as stocks, ETFs, Mutual Funds, etc.), and financial tools (such as Bloomberg, Reuters, etc.)
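As a loose illustration of the backtesting work described in the responsibilities, here is a toy moving-average crossover sketch in pandas/NumPy; it is not TIFIN's actual engine, and the synthetic prices are illustrative only:

```python
# Toy backtest: go long when the fast moving average is above the slow one.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

fast = prices.rolling(10).mean()
slow = prices.rolling(50).mean()
position = (fast > slow).astype(int).shift(1).fillna(0)   # yesterday's signal drives today's position

daily_returns = prices.pct_change().fillna(0)
strategy_returns = position * daily_returns
print("Cumulative return:", (1 + strategy_returns).prod() - 1)
```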
COMPENSATION AND BENEFITS PACKAGE:
Compensation: Competitive and commensurate to experience + discretionary annual bonus.
TIFIN offers a competitive benefits package that includes:
- Performance linked variable compensation
- Medical insurance
- Remote work flexibility and other company benefits
TIFIN is proud to be an equal opportunity workplace and values the multitude of talents and perspectives that a diverse workforce brings. All qualified applicants will receive consideration for employment without discrimination of any kind.
Graas uses predictive AI to turbo-charge growth for eCommerce businesses. We are “Growth-as-a-Service”. Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics - all of which impact a brand’s bottom line, driving profitable growth.
Location – Pune
Job Responsibilities
- Work closely with data scientists and data analysts to build models and continuous data monitoring workflows.
- Implement algorithms / models within the company's recommendation engine framework.
- Own the MLOps life-cycle: build and own the ML model life-cycle management process, from coding to building robust model monitoring workflows.
- Consult with product and business teams to build prototypes and then deploy holistic machine learning solutions.
- Recommend and implement architecture to deploy machine learning pipelines and CI/CD processes at scale
Skills Needed
- Minimum 3 years of experience as Machine Learning Engineer
- Knowledge of machine learning and statistics
- Strong experience working in the areas of time series analysis, reinforcement learning, NLP, optimization and heuristics based implementation to solve real-world problems
- Experienced in architecting solutions with Continuous Integration and Continuous Delivery in mind
- Strong knowledge of coding in Python and libraries such as Pandas, Numpy, Scikit-Learn, PyTorch, etc.
- Experience handling Big Data leveraging technologies like Snowflake, Spark. Ability to work in a big data ecosystem - expert in SQL and ability to work in distributed databases.
- Able to refactor data science code and has collaborated with data scientists in developing ML solutions.
- Experience playing the role of full-stack data scientist and taking solutions to production.
- Educational qualifications should be preferably in Computer Science, Statistics, Engineering or a related area.
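To illustrate the kind of model-monitoring workflow mentioned above, here is a minimal drift-check sketch using a two-sample Kolmogorov-Smirnov test; the distributions and threshold are assumptions for illustration:

```python
# Flag feature drift between training data and live data with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # reference distribution
live_feature = rng.normal(loc=0.3, scale=1.0, size=1000)    # incoming production data

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.4f}) - consider retraining")
else:
    print("No significant drift detected")
```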
Data Scientist
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What you’ll do?
- Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase Ad Traffic and Campaign Efficacy.
- Collaborate with other data scientists, data engineers, product managers, and business stakeholders to build well-crafted, pragmatic data products.
- Actively take on new projects and constantly try to improve the existing models and infrastructure necessary for offline and online experimentation and iteration.
- Work with your team on ambiguous problem areas in existing or new ML initiatives
What are we looking for?
- Ability to write a SQL query to pull the data you need.
- Fluency in Python and familiarity with its scientific stack, such as NumPy, pandas, scikit-learn, matplotlib.
- Experience in TensorFlow and/or R modelling and/or PyTorch
- Ability to understand a business problem and translate and structure it into a data science problem.
Job Category: Data Science
Job Type: Full Time
Job Location: Bangalore
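As a small illustration of the SQL-plus-Python workflow expected in this role, the sketch below pulls campaign metrics from an in-memory SQLite table into pandas; the table and columns are hypothetical:

```python
# Pull aggregated ad metrics with SQL into a pandas DataFrame for analysis.
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE impressions (campaign_id INTEGER, clicks INTEGER, cost REAL);
    INSERT INTO impressions VALUES (1, 120, 45.0), (1, 90, 30.0), (2, 10, 12.5);
""")

query = """
    SELECT campaign_id, SUM(clicks) AS clicks, SUM(cost) AS cost,
           SUM(cost) / SUM(clicks) AS cost_per_click
    FROM impressions
    GROUP BY campaign_id
"""
df = pd.read_sql(query, conn)
print(df)
```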
Designing and developing NLP applications
Using effective text representation techniques and classification algorithms
Proven experience as an NLP Engineer or similar role
Understanding of NLP techniques for text representation, semantic extraction techniques, data structures and modeling
Working knowledge of pretrained NLP models (ULMFiT, Transformer, BERT, etc.)
Ability to effectively design software architecture
Deep understanding of text representation techniques (such as n-grams, bag of words, sentiment analysis etc), statistics and classification algorithms
Proficiency with a deep learning framework (such as TensorFlow, Keras, or PyTorch) and libraries (like scikit-learn, Pandas and NumPy)
Proficiency with Python and basic libraries for machine learning.
Proficiency with IDEs – Jupyter Notebook, Spyder, Anaconda environments.
Familiarity with Linux (CentOS and Ubuntu)
Ability to select hardware to run an ML model with the required latency
Excellent communication skills
Ability to work in a team
Outstanding analytical and problem-solving skills
Develop machine learning pipeline
Select appropriate datasets and data representation methods
Run machine learning tests and experiments
Perform statistical analysis and fine-tuning using test results
Train and retrain systems when necessary
Experience with Git based version control
Extend existing ML libraries and frameworks
Keep abreast of developments in the field
Research and implement MLOps tools, frameworks and platforms for our Data Science projects.
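For a concrete example of the text representation and classification techniques listed above, here is a minimal bag-of-words sketch with scikit-learn; the tiny labelled dataset is made up:

```python
# Bag-of-words (unigrams + bigrams) features feeding a logistic regression classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works perfectly", "terrible support, very slow",
         "love the new update", "the app keeps crashing"]
labels = [1, 0, 1, 0]   # 1 = positive, 0 = negative

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["the update is great"]))
```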
Job Summary
As a Data Science Lead, you will manage multiple consulting projects of varying complexity and ensure on-time and on-budget delivery for clients. You will lead a team of data scientists and collaborate across cross-functional groups, while contributing to new business development, supporting strategic business decisions, and maintaining and strengthening the client base.
- Work with the team to define business requirements, come up with an analytical solution, and deliver the solution with a specific focus on the big picture to drive robustness of the solution
- Work with teams of smart collaborators. Be responsible for their appraisals and career development.
- Participate and lead executive presentations with client leadership stakeholders.
- Be part of an inclusive and open environment. A culture where making mistakes and learning from them is part of life
- See how your work contributes to building an organization and be able to drive Org level initiatives that will challenge and grow your capabilities.
Role & Responsibilities
- Serve as an expert in Data Science; build frameworks to develop production-level DS/AI models.
- Apply AI research and ML models to accelerate business innovation and solve impactful business problems for our clients.
- Lead multiple teams across clients ensuring quality and timely outcomes on all projects.
- Lead and manage the onsite-offshore relationship while adding value for the client.
- Partner with business and technical stakeholders to translate challenging business problems into state-of-the-art data science solutions.
- Build a winning team focused on client success. Help team members build lasting career in data science and create a constant learning/development environment.
- Present results, insights, and recommendations to senior management with an emphasis on the business impact.
- Build engaging rapport with client leadership through relevant conversations and genuine business recommendations that impact the growth and profitability of the organization.
- Lead or contribute to org level initiatives to build the Tredence of tomorrow.
Qualification & Experience
- Bachelor's /Master's /PhD degree in a quantitative field (CS, Machine learning, Mathematics, Statistics, Data Science) or equivalent experience.
- 6-10+ years of experience in data science, building hands-on ML models
- Expertise in ML – Regression, Classification, Clustering, Time Series Modeling, Graph Network, Recommender System, Bayesian modeling, Deep learning, Computer Vision, NLP/NLU, Reinforcement learning, Federated Learning, Meta Learning.
- Proficient in some or all of the following techniques: Linear & Logistic Regression, Decision Trees, Random Forests, K-Nearest Neighbors, Support Vector Machines, ANOVA, Principal Component Analysis, Gradient Boosted Trees, ANN, CNN, RNN, Transformers.
- Knowledge of programming languages: SQL, Python/R, Spark.
- Expertise in ML frameworks and libraries (TensorFlow, Keras, PyTorch).
- Experience with cloud computing services (AWS, GCP or Azure)
- Expert in Statistical Modelling & Algorithms, e.g. hypothesis testing, sample size estimation, A/B testing
- Knowledge in Mathematical Programming – Linear Programming, Mixed Integer Programming, etc.; Stochastic Modelling – Markov chains, Monte Carlo, Stochastic Simulation, Queuing Models.
- Experience with Optimization Solvers (Gurobi, CPLEX) and algebraic modelling languages (PuLP)
- Knowledge in GPU code optimization, Spark MLlib Optimization.
- Familiarity with deploying and monitoring ML models in production, delivering data products to end-users.
- Experience with ML CI/CD pipelines.
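As an illustration of the mathematical programming skills mentioned above, here is a minimal linear programming sketch with PuLP; the products, coefficients, and constraints are invented for the example:

```python
# Maximize profit subject to capacity constraints with PuLP's default CBC solver.
from pulp import LpMaximize, LpProblem, LpVariable, value

prob = LpProblem("toy_production_plan", LpMaximize)
x = LpVariable("units_product_a", lowBound=0)
y = LpVariable("units_product_b", lowBound=0)

prob += 40 * x + 30 * y, "profit"          # objective
prob += 2 * x + y <= 100, "machine_hours"  # capacity constraint
prob += x + 3 * y <= 90, "labour_hours"

prob.solve()
print("Optimal profit:", value(prob.objective), "a:", x.value(), "b:", y.value())
```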
About Us
We are an AI-Powered CX Cloud that enables enterprises to transform customer experience and boost revenue with our APIs by automating and analyzing customer interactions at scale. We assist across multiple voice and non-voice channels in 30+ languages whilst coaching and training agents with minimal costs.
The problem we are solving
In comparison to worldwide norms, customer support in traditional contact centers is quite appalling. Due to a high number of queries, insufficient agent capacity, and inane customer support systems, businesses struggle with a multi-fold rise in customer discontent and bounce rate, resulting in connectivity failure points between them and their customers. To address this issue, IITian couple Manish and Rashi Gupta founded Rezo's AI-Powered CX Cloud for Enterprises in 2018 to help businesses avoid customer churn and boost revenue without incurring financial costs, by providing 24x7 real-time responses to customer inquiries with minimal human interaction.
Roles and Responsibilities :
- Speech Recognition model development across multiple languages.
- Solve critical real-world scenarios - Noisy channel ASR performance, Multi speaker detection, etc.
- Implement and deliver PoC/UAT products on the Rezo platform.
- Responsible for product performance, robustness and reliability.
Requirements:
- 2+ years' experience with a Bachelor's/Master's degree with a focus on CS, Machine Learning, and Signal Processing.
- Strong knowledge of various ML concepts/algorithms and hands-on experience in relevant projects.
- Experience in machine learning platforms such as TensorFlow and PyTorch, and solid programming development skills (Python, C, C++, etc.).
- Ability to learn new tools, languages and frameworks quickly.
- Familiarity with databases, data transformation techniques, and ability to work with unstructured data like OCR/ speech/text data.
- Previous experience with working in Conversational AI is a plus.
- A Git portfolio will be helpful.
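To give a flavour of the signal-processing side of this role, here is a minimal NumPy sketch that computes log-magnitude spectrogram frames for a synthetic tone; the frame sizes and signal are illustrative assumptions, not Rezo's actual front-end:

```python
# Frame a synthetic audio signal and compute log-magnitude FFT features.
import numpy as np

sr = 16000                                    # sample rate (Hz)
t = np.arange(0, 1.0, 1 / sr)
signal = np.sin(2 * np.pi * 440 * t)          # synthetic 440 Hz tone

frame_len, hop = 400, 160                     # 25 ms frames, 10 ms hop
frames = [signal[i:i + frame_len] * np.hanning(frame_len)
          for i in range(0, len(signal) - frame_len, hop)]
spectrogram = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-8)
print(spectrogram.shape)                      # (num_frames, frame_len // 2 + 1)
```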
Life at Rezo.AI
- We take transparency very seriously. Along with a full view of team goals, get a top-level view across the board with our regular town hall meetings.
- A highly inclusive work culture that promotes a relaxed, creative, and productive environment.
- Practice autonomy, open communication, and growth opportunities, while maintaining a perfect work-life balance.
- Go on company-sponsored offsites, and blow off steam with your work buddies.
Perks & Benefits
Learning is a way of life. Unlock your full potential backed by cutting-edge tools and mentorship.
Get the best in class medical insurance, programs for taking care of your mental health, and a Contemporary Leave Policy (beyond sick leaves)
Why Us?
We are a fast-paced start-up with some of the best talents from diverse backgrounds, working together to solve customer service problems. We believe a diverse workforce is a powerful multiplier of innovation and growth, which is key to providing our clients with the best possible service and our employees with the best possible career. Diversity makes us smarter, more competitive, and more innovative.
Explore more here
www.rezo.ai
We are seeking a dedicated Machine Learning Engineer to join our growing company.
You will collaborate with software engineers and product managers to create efficient artificial intelligence algorithms. As an ML Engineer, we hope you can put your passion for AI engineering towards solving amazing problems through AI.
Roles and Responsibility
- Develop Machine Learning (ML) models using various neural network architectures and implement the model using Python.
- Understand the problem by interacting with domain experts and design/implement various training algorithms and feature detectors.
- Train models using various datasets and optimize the inference architecture for performance.
- Continuously work to improve the recall, accuracy, and precision metrics for ML models.
- Design and implement event-driven pipelines using Kafka, Python, Keras, PyTorch, and TensorFlow.
- Perform data clean-up and guide the labelling team to create labelled datasets.
- Work with different engineers to implement inference graphs, infographics and automated report/alert generation.
- Debug, build, test and release complete software products under SaaS model.
Bonus points for -
- Experience developing and consuming REST APIs.
- Knowledge of developing dockerized microservices-based architecture to ensure scalability.
Job Qualifications and Skill Sets
- 1-2 years of relevant experience.
- Proven experience as a software developer with knowledge about software development lifecycle (SDLC), from design to implementation.
- Knowledge of scripting languages (e.g. Python)
- Experience with deep learning frameworks (e.g., PyTorch, Tensorflow etc) and software stack (e.g., TensorRT, TVM, etc)
- Experience with model optimization techniques like pruning, quantization, NAS, etc.
- Experience with ML accelerators and hardware architecture, e.g., GPUs, TPUs, NNAs, MLAs. Experience with modern parallel programming: GPU programming (CUDA, OpenCL), SIMD (AVX, Neon/SVE), multi-process and multi-threaded designs.
- Familiarity with HW vendors' deep learning stacks (e.g., cuDNN, cuBLAS, AMD MIOpen, TensorRT, OpenVino, ARM Compute Library, etc)
- Experience with version control systems such as Git and offerings such as GitHub, BitBucket etc.
Bonus points for -
- Familiarity with databases (e.g. MySQL, MongoDB, Cassandra), web servers (e.g. Apache, NGINX), UI/UX design.
- Exposure to edge/mobile-based ML is a plus.
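As a small example of the model optimization techniques mentioned above (here, quantization), the sketch below applies post-training dynamic quantization to a toy PyTorch model; the architecture is purely illustrative:

```python
# Convert Linear-layer weights to int8 for smaller size and faster CPU inference.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)   # torch.Size([1, 10])
```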
- 3+ years of experience applying AI/ML/NLP/deep learning/data-driven statistical analysis & modelling solutions.
- Programming skills in Python, knowledge in Statistics.
- Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forest, neural networks, feature selection/reduction, clustering, parameter tuning, etc.). Familiarity with reinforcement learning is highly desirable.
- Experience in the financial domain and familiarity with financial models are highly desirable.
- Experience in image processing and computer vision.
- Experience working with building data pipelines.
- Good understanding of Data preparation, Model planning, Model training, Model validation, Model deployment and performance tuning.
- Should have hands-on experience with some of these methods: Regression, Decision Trees, CART, Random Forest, Boosting, Evolutionary Programming, Neural Networks, Support Vector Machines, Ensemble Methods, Association Rules, Principal Component Analysis, Clustering, Artificial Intelligence
- Should have experience in using large data sets with a Postgres database.
Job Description:
- Understanding of the depth and breadth of computer vision and deep learning algorithms.
- At least 2 years of experience in computer vision and/or deep learning for object detection and tracking, along with semantic or instance segmentation, either in the academic or industrial domain.
- Experience with machine/deep learning frameworks like TensorFlow, Keras, scikit-learn, and PyTorch.
- Experience in training models through GPU computing using NVIDIA CUDA or on the cloud.
- Ability to transform research articles into working solutions to solve real-world problems.
- Strong experience in using both basic and advanced image processing algorithms for feature engineering.
- Proficiency in Python and related packages like numpy, scikit-image, PIL, opencv, matplotlib, seaborn, etc.
- Excellent written and verbal communication skills for effectively communicating with the team, and the ability to present information to varied technical and non-technical audiences.
- Must be able to produce solutions independently in an organized manner and also be able to work in a team when required.
- Must have good Object-Oriented Programming & logical analysis skills in Python.
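To make the classical image-processing requirement more concrete, here is a minimal OpenCV sketch that runs edge detection and contour extraction on a synthetic image; the image and thresholds are illustrative:

```python
# Canny edge detection plus contour extraction on a synthetic image (OpenCV 4.x API).
import cv2
import numpy as np

img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 50), (150, 150), 255, -1)   # draw a filled white square

edges = cv2.Canny(img, 100, 200)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print("contours found:", len(contours), "area of first:", cv2.contourArea(contours[0]))
```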
As an Associate Manager - Senior Data Scientist, you will solve some of the most impactful business problems for our clients using a variety of AI and ML technologies. You will collaborate with business partners and domain experts to design and develop innovative solutions on the data to achieve
predefined outcomes.
• Engage with clients to understand current and future business goals and translate business
problems into analytical frameworks
• Develop custom models based on an in-depth understanding of underlying data, data structures,
and business problems to ensure deliverables meet client needs
• Create repeatable, interpretable and scalable models
• Effectively communicate the analytics approach and insights to a larger business audience
• Collaborate with team members, peers and leadership at Tredence and client companies
Qualification:
1. Bachelor's or Master's degree in a quantitative field (CS, machine learning, mathematics,
statistics) or equivalent experience.
2. 5+ years of experience in data science, building hands-on ML models
3. Experience leading the end-to-end design, development, and deployment of predictive
modeling solutions.
4. Excellent programming skills in Python. Strong working knowledge of Python’s numerical, data
analysis, or AI frameworks such as NumPy, Pandas, Scikit-learn, Jupyter, etc.
5. Advanced SQL skills with SQL Server and Spark experience.
6. Knowledge of predictive/prescriptive analytics including Machine Learning algorithms
(Supervised and Unsupervised) and deep learning algorithms and Artificial Neural Networks
7. Experience with Natural Language Processing (NLTK) and text analytics for information
extraction, parsing and topic modeling.
8. Excellent verbal and written communication. Strong troubleshooting and problem-solving skills.
Thrive in a fast-paced, innovative environment
9. Experience with data visualization tools — PowerBI, Tableau, R Shiny, etc. preferred
10. Experience with cloud platforms such as Azure, AWS is preferred but not required
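As a rough illustration of the topic-modeling work referenced above, here is a minimal latent Dirichlet allocation sketch with scikit-learn; the corpus and topic count are assumptions for the example:

```python
# Fit LDA over a tiny corpus and print the top terms per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["interest rates and inflation outlook", "quarterly earnings beat expectations",
        "central bank raises interest rates", "stock earnings and revenue growth"]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-3:]]
    print(f"topic {i}: {top}")
```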
- Architecting end-to-end prediction pipelines and managing them
- Scoping projects and mentoring 2-4 people
- Owning parts of the AI and data infrastructure of the organization
- Develop state-of-the-art deep learning/classical models
- Continuously learn new skills and technologies and implement them when relevant
- Contribute to the community through open-source, blogs, etc.
- Make a number of high-quality decisions about infrastructure, pipelines, and internal tooling.
What are we looking for
- Deep understanding of core concepts
- Broader knowledge of different types of problem statements and approaches
- Strong command of Python and the standard library
- Knowledge of industry-standard tools like scikit-learn, TensorFlow/PyTorch, etc.
- Experience with at least one of Computer Vision, Forecasting, NLP, or Recommendation Systems is a must
- A get shit done attitude
- A research mindset and a creative caliber to utilize previous work to your advantage.
- A helping/mentoring first approach towards work
Position: Senior Speech Recognition Engineer
- Experience : 6+ Years
- Salary : As per market standards
- Location: Bangalore
- Work Mode: Only WFO
- Notice Period: immediate to 30 Days of notice
Job Description – Data Science
Basic Qualification:
- ME/MS from premier institute with a background in Mechanical/Industrial/Chemical/Materials engineering.
- Strong Analytical skills and application of Statistical techniques to problem solving
- Expertise in algorithms, data structures and performance optimization techniques
- Proven track record of demonstrating end to end ownership involving taking an idea from incubator to market
- Minimum of 2+ years of experience in data analysis, statistical analysis, data mining, and algorithms for optimization.
Responsibilities
The Data Engineer/Analyst will
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Interact clearly with business teams, including product planning, sales, marketing, and finance, to define projects and objectives.
- Mine and analyze data from company databases to drive optimization and improvement of product and process development, marketing techniques and business strategies
- Coordinate with different R&D and Business teams to implement models and monitor outcomes.
- Mentor team members towards developing quick solutions for business impact.
- Skilled at all stages of the analysis process including defining key business questions, recommending measures, data sources, methodology and study design, dataset creation, analysis execution, interpretation and presentation and publication of results.
- 4+ years’ experience in an MNC environment with projects involving ML, DL and/or DS
- Experience in Machine Learning, Data Mining or Machine Intelligence (Artificial Intelligence)
- Knowledge of Microsoft Azure is desired.
- Expertise in machine learning such as Classification, Data/Text Mining, NLP, Image Processing, Decision Trees, Random Forest, Neural Networks, Deep Learning Algorithms
- Proficient in Python and its various libraries such as NumPy, Matplotlib, Pandas
- Superior verbal and written communication skills, ability to convey rigorous mathematical concepts and considerations to Business Teams.
- Experience in infra development / building platforms is highly desired.
- A drive to learn and master new technologies and techniques.
Key Skills
- Must have experience building apps using Flutter and good knowledge of Dart. Preferably has built and deployed a couple of apps on both iOS and Android
- Good knowledge of integrating or developing app functions using Firebase
- Must have ported at least 2 applications to iOS and Android platforms. Exposure to the entire build pipeline is a must.
- Has good knowledge of how native Android and iOS apps work. Must have experience building native iOS apps
- Has an understanding of computer vision and basic deep learning concepts. Any prototypes or proofs of concept are a big plus.
What do they need to do?
- Code - write or learn to write clean code, understand design patterns and develop a quick turnaround time to ship updates
- UI/UX - understand design guidelines and implement them in the project
- Communication - communicate project goals clearly to the team. Understand and convert project requirements into actionable steps
- Documentation - maintain documentation to support product development
What is HomeGround?
- HomeGround is an AI platform that helps aspiring cricketers and coaches level up their training. We help our users get an equal opportunity to play, train for and achieve their goal by providing professional-level training analytics right on the smartphone, no special equipment or sensors required.
- HomeGround is one of the Top 10 early-stage sports tech startups selected by Startupbootcamp Australia
- Our team shares experience in product growth, marketing and deep learning. We are passionate about our work and are committed 100% to our startup.
About TensorIoT
TensorIoT is an AWS Advanced Consulting Partner. We help companies realize the value and efficiency of the AWS ecosystem. From building PoCs and MVPs to production-ready applications, we are tackling complex business problems every day and developing solutions to drive customer success.
TensorIoT's founders helped build world-class IoT and AI platforms at AWS and Google and are now creating solutions to simplify the way enterprises incorporate edge devices and their data into their day-to-day operations. Our mission is to help connect devices and make them intelligent. Our founders firmly believe in the transformative potential of smarter devices to enhance our quality of life, and we're just getting started!
TensorIoT is proud to be an equal-opportunity employer. This means that we are committed to diversity and inclusion and encourage people from all backgrounds to apply. We do not tolerate discrimination or harassment of any kind and make our hiring decisions based solely on qualifications, merit, and business needs at the time.
You will:
- Study and transform data science prototypes.
- Design machine-learning systems
- Research and implement appropriate ML algorithms and tools.
- Develop machine-learning applications according to requirements.
- Select appropriate datasets and data representation methods.
- Run machine-learning tests and experiments.
- Perform statistical analysis and fine-tuning using test results.
- Train and retrain systems when necessary.
- Extend existing ML libraries and frameworks.
- Keep abreast of developments in the field.
Machine Learning Engineer responsibilities include:
- Designing and developing machine learning and deep learning systems
- Running machine learning tests and experiments
- Implementing appropriate ML algorithms
Must have/Requirements.
- Proven experience as a Machine Learning Engineer or similar role
- Must have experience with integrating applications and platforms with cloud technologies (i.e., AWS)
- Must have experience with Docker containers.
- Experience with GPU acceleration (i.e., CUDA and cuDNN)
- Create feature engineering pipelines to process high-volume, multi-dimensional, unstructured (audio, video, NLP) data at scale.
- Knowledge of server-less architectures (e.g., Lambda, Kinesis, Glue).
- Understanding of end-to-end ML project lifecycle.
- Must have experience with Data Science tools and frameworks (e.g., Python, scikit-learn, NLTK, NumPy, Pandas, TensorFlow, Keras, R, Spark, PyTorch).
- Experience with cloud-native technologies, microservices design, and REST APIs.
- Knowledge of data query and data processing tools (i.e., SQL)
- Deep knowledge of Math, Probability, Statistics, and Algorithms
- Strong understanding of image recognition & computer vision.
- Must have 4-8 years of experience.
- Excellent communication skills
- Ability to work in a team.
- BSc in Computer Science, Mathematics, or a similar field; a Master’s degree is a plus.
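For a flavour of the serverless deployment pattern referenced above, here is a hedged sketch of an AWS Lambda-style inference handler; the model file and feature names are hypothetical placeholders:

```python
# Serverless inference entry point (API Gateway proxy-style event assumed).
import json
import joblib

model = joblib.load("model.joblib")          # hypothetical pre-trained scikit-learn model

def lambda_handler(event, context):
    """Parse the request body, run the model, and return a JSON response."""
    body = json.loads(event["body"])
    features = [[body["feature_1"], body["feature_2"], body["feature_3"]]]
    prediction = model.predict(features)[0]
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": float(prediction)}),
    }
```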
Responsibilities
- Work on execution and scheduling of all tasks related to assigned projects' deliverable dates
- Optimize and debug existing codes to make them scalable and improve performance
- Design, development, and delivery of tested code and machine learning models into production environments
- Work effectively in teams, managing and leading teams
- Provide effective, constructive feedback to the delivery leader
- Manage client expectations and work with an agile mindset with machine learning and AI technology
- Design and prototype data-driven solutions
Eligibility
- Highly experienced in designing, building, and shipping scalable and production-quality machine learning algorithms in the field of Python applications
- Working knowledge and experience in NLP core components (NER, Entity Disambiguation, etc.)
- In-depth expertise in Data Munging and Storage (Experienced in SQL, NoSQL, MongoDB, Graph Databases)
- Expertise in writing scalable APIs for machine learning models
- Experience with maintaining code logs, task schedulers, and security
- Working knowledge of machine learning techniques, feed-forward, recurrent and convolutional neural networks, entropy models, supervised and unsupervised learning
- Experience with at least one of the following: Keras, Tensorflow, Caffe, or PyTorch
Job Profile: The job profile requires an inter-disciplinary approach to solving technical problems whilst also understanding the business requirements. Knowledge of the power and energy sector and a strong interest in solving national and global energy challenges is a must. This position will involve:
- Developing and maintaining SaaS products for the power and energy sector.
- Working with team members to handle day to day operations and product management tasks.
- Applying Machine Learning techniques to power and energy related data sets.
- Identification and characterization of complex relationships between multiple weather parameters and renewable energy generation.
- Providing inputs for new product development and improvements to existing products and services.
Requirements:
- Educational Qualification: B.Tech/B.E. Electrical and Electronics Engineering/Software Engineering/Mathematics and Computer Science.
- 1-2 years experience writing code independently in any object-oriented programming language. Knowledge of Python and its packages is highly preferable (NumPy, pandas, Keras, OpenCV, scikit-learn).
- Demonstrable experience of project work (academic/internship/job)
- Ability to work on broad objectives and go from business requirements to code independently.
- Ability to work independently under minimal supervision and in a team;
- Preferable to have some understanding of power systems concepts (Power Generation Technologies, Transmission and Distribution, Load Flow, Renewable Energy Generation, etc.)
- Experience working with large data sets and manipulating data to find relationships between variables.
We are looking for a Machine Learning engineer for one of our premium clients.
Experience: 2-9 years
Location: Gurgaon/Bangalore
Tech Stack:
Python, PySpark, the Python Scientific Stack; MLFlow, Grafana, Prometheus for machine learning pipeline management and monitoring; SQL, Airflow, Databricks, our own open-source data pipelining framework called Kedro, Dask/RAPIDS; Django, GraphQL and ReactJS for horizontal product development; container technologies such as Docker and Kubernetes, CircleCI/Jenkins for CI/CD, cloud solutions such as AWS, GCP, and Azure as well as Terraform and Cloudformation for deployment
Expert in Machine Learning (ML) & Natural Language Processing (NLP).
Expert in Python, Pytorch and Data Structures.
Experience in ML model life cycle (Data preparation, Model training and Testing and ML Ops).
Strong experience in NLP and NLU using transformers & deep learning.
Experience in federated learning is a plus
Experience with knowledge graphs and ontology.
Responsible for developing, enhancing, modifying, optimizing and/or maintaining applications, pipelines and codebase in order to enhance the overall solution.
Experience working with scalable, highly-interactive, high-performance systems/projects (ML).
Design, code, test, debug and document programs as well as support activities for the corporate systems architecture.
Working closely with business partners in defining requirements for ML applications and advancements of solution.
Engage in specifications in creating comprehensive technical documents.
Experience / Knowledge in designing enterprise grade system architecture for solving complex problems with a sound understanding of object-oriented programming and Design Patterns.
Experience in Test Driven Development & Agile methodologies.
Good communication skills - client facing environment.
Hunger for learning; a self-starter with a drive to technically mentor a cohort of developers. Good to have working experience in Knowledge Graph based ML products development and AWS/GCP based ML services.
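As a small illustration of transformer-based NLP work of the kind described above, here is a minimal Hugging Face pipeline sketch; it downloads a default pretrained model and the sample sentence is made up:

```python
# Token-level named entity recognition with the default transformers NER pipeline.
from transformers import pipeline

ner = pipeline("ner")   # downloads a default pretrained token-classification model
text = "Apple is opening a new office in Bangalore next year."
for entity in ner(text):
    print(entity["word"], entity["entity"], round(entity["score"], 3))
```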
Duties and Responsibilities:
- Research and develop innovative use cases, solutions, and quantitative models in video and image recognition and signal processing for cloudbloom’s cross-industry business (e.g., Retail, Energy, Industry, Mobility, Smart Life and Entertainment).
- Design, implement and demonstrate proof-of-concept and working prototypes.
- Provide R&D support to productize research prototypes.
- Explore emerging tools, techniques, and technologies, and work with academia for cutting-edge solutions.
- Collaborate with cross-functional teams and eco-system partners for mutual business benefit.
- Team management skills
Academic Qualification
- 7+ years of professional hands-on work experience in data science, statistical modelling, data engineering, and predictive analytics assignments
- Mandatory requirement: Bachelor’s degree with a STEM background (Science, Technology, Engineering and Mathematics) with a strong quantitative flavour
- Innovative and creative in data analysis, problem solving and presentation of solutions
- Ability to establish effective cross-functional partnerships and relationships at all levels in a highly collaborative environment
- Strong experience in handling multi-national client engagements
- Good verbal, writing & presentation skills
Core Expertise
- Excellent understanding of basics in mathematics and statistics (such as differential equations, linear algebra, matrices, combinatorics, probability, Bayesian statistics, eigenvectors, Markov models, Fourier analysis).
- Building data analytics models using Python, ML libraries, Jupyter/Anaconda, and knowledge of database query languages like SQL
- Good knowledge of machine learning methods like k-Nearest Neighbors, Naive Bayes, SVM, Decision Forests.
- Strong math skills (Multivariable Calculus and Linear Algebra) - understanding the fundamentals of multivariable calculus and linear algebra is important as they form the basis of a lot of predictive performance and algorithm optimization techniques.
- Deep learning: CNN, neural networks, RNN, TensorFlow, PyTorch, computer vision
- Large-scale data extraction/mining, data cleansing, diagnostics, preparation for modeling
- Good applied statistical skills, including knowledge of statistical tests, distributions, regression, maximum likelihood estimators, multivariate techniques & predictive modeling: cluster analysis, discriminant analysis, CHAID, logistic & multiple regression analysis
- Experience with data visualization tools like Tableau, Power BI, Qlik Sense that help to visually encode data
- Excellent communication skills – it is incredibly important to describe findings to a technical and non-technical audience
- Capability for continuous learning and knowledge acquisition
- Mentor colleagues for growth and success
- Strong software engineering background
- Hands-on experience with data science tools
● Proficient in Python and using packages like NLTK, NumPy, Pandas
● Should have worked on deep learning frameworks (like TensorFlow, Keras, PyTorch, etc.)
● Hands-on experience in Natural Language Processing, Sequence, and RNN-based models
● Mathematical intuition of ML and DL algorithms
● Should be able to perform thorough model evaluation by creating hypotheses on the basis of statistical analyses
● Should be comfortable in going through open-source code and reading research papers.
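To make the RNN-based modelling requirement more concrete, here is a minimal Keras sketch of an embedding-plus-LSTM text classifier; the vocabulary size, sequence length, and data are placeholders:

```python
# Toy embedding + LSTM binary classifier trained on fake tokenized sequences.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size, seq_len = 1000, 20
X = np.random.randint(0, vocab_size, size=(64, seq_len))   # fake tokenized sequences
y = np.random.randint(0, 2, size=(64,))                    # fake binary labels

model = Sequential([
    Embedding(vocab_size, 32),
    LSTM(16),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=1, verbose=0)
print(model.predict(X[:2]).ravel())
```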
Outplay is building the future of sales engagement, a solution that helps sales teams personalize at scale while consistently staying on message and on task, through true multi-channel outreach including email, phone, SMS, chat and social media. Outplay is the only tool your sales team will ever need to crush their goals. Funded by Sequoia - Headquartered in the US. Sequoia not only led a $2 million seed round in Outplay early this year, but also followed with $7.3 million Series - A recently. The team is spread remotely all over the globe.
Perks of being an Outplayer :
• Fully remote job - You can be on the mountains or at the beach, and still work with us. Outplay is a 100% remote company.
• Flexible work hours - We believe mental health is way more important than a 9-5 job.
• Health Insurance - We are a family, and we take care of each other - we provide medical insurance coverage to all employees and their family members. We also provide an additional benefit of doctor consultation along with the insurance plan.
• Annual company retreat - we work hard, and we party harder.
• Best tools - we buy you the best tools of the trade
• Celebrations - No, we never forget your birthday or anniversary (be it work or wedding) and we never leave an opportunity to celebrate milestones and wins.
• Safe space to innovate and experiment
• Steady career growth and job security
About the Role:
We are looking for a Senior Data Scientist to help research, develop and advance the charter of AI at Outplay and push the threshold of conversational intelligence.
Job description :
• Lead AI initiatives that dissect data for creating new feature prototypes and minimum viable products
• Conduct product research in natural language processing, conversation intelligence, and virtual assistant technologies
• Use independent judgment to enhance product by using existing data and building AI/ML models
• Collaborate with teams, provide technical guidance to colleagues and come up with new ideas for rapid prototyping. Convert prototypes into scalable and efficient products.
• Work closely with multiple teams on projects using textual and voice data to build conversational intelligence
• Prototype and demonstrate AI augmented capabilities in the product for customers
• Conduct experiments to assess the precision and recall of language processing modules and study the effect of such experiments on different application areas of sales
• Assist business development teams in the expansion and enhancement of a feature pipeline to support short and long-range growth plans
• Identify new business opportunities and prioritize pursuits of AI for different areas of conversational intelligence
• Build reusable and scalable solutions for use across a varied customer base
• Participate in long range strategic planning activities designed to meet the company’s objectives and revenue goals
Required Skills :
• Bachelors or Masters in a quantitative field such as Computer Science, Statistics, Mathematics, Operations Research or related field with focus on applied Machine Learning, AI, NLP and data-driven statistical analysis & modelling.
• 4+ years of experience applying AI/ML/NLP/Deep Learning/ data-driven statistical analysis & modelling solutions to multiple domains. Experience in the Sales and Marketing domain is a plus.
• Experience in building Natural Language Processing (NLP), Conversational Intelligence, and Virtual Assistants based features.
• Excellent grasp on programming languages like Python. Experience in GoLang would be a plus.
• Proficient in analysis using python packages like Pandas, Plotly, Numpy, Scipy, etc.
• Strong and proven programming skills in machine learning and deep learning with experience in frameworks such as TensorFlow/Keras, Pytorch, Transformers, Spark etc
• Excellent communication skills to explain complex solutions to stakeholders across multiple disciplines.
• Experience in SQL, RDBMS, Data Management and Cloud Computing (AWS and/or Azure) is a plus.
• Extensive experience of training and deploying different Machine Learning models
• Experience in monitoring deployed models to proactively capture data drifts, low performing models, etc.
• Exposure to Deep Learning, Neural Networks or related fields
• Passion for solving AI/ML problems for both textual and voice data.
• Fast learner, with great written and verbal communication skills, and able to work independently as well as in a team environment
XressBees – a logistics company started in 2015 – is amongst the fastest growing companies of its sector. Our
vision to evolve into a strong full-service logistics organization reflects itself in the various lines of business like B2C
logistics 3PL, B2B Xpress, Hyperlocal and Cross border Logistics.
Our strong domain expertise and constant focus on innovation has helped us rapidly evolve as the most trusted
logistics partner of India. XB has progressively carved our way towards best-in-class technology platforms, an
extensive logistics network reach, and a seamless last mile management system.
While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our
big focus areas for the very near future include strengthening our presence as service providers of choice and
leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees would enrich and scale its end-to-end logistics solutions at a high pace. This is a great opportunity to join
the team working on forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning
and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In your role, you
will build high performance AI/ML solutions using groundbreaking AI/ML and BigData technologies. You will need to
understand business requirements and convert them to a solvable data science problem statement. You will be
involved in end to end AI/ML projects, starting from smaller scale POCs all the way to full scale ML pipelines in
production.
Seasoned AI/ML Engineers would own the implementation and productionization of cutting-edge AI driven algorithmic
components for search, recommendation and insights to improve the efficiencies of the logistics supply chain and
serve the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact to
the organization while solving challenging problems in the areas of AI, ML , Data Analytics and Computer Science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer would apply himself/herself in the areas of -
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning - Logistic Regression, Decision Trees, Random Forests, XGBoost, etc..
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
The AI Engineering team enables internal teams to add AI capabilities to their Apps and Workflows easily via APIs
without needing to build AI expertise in each team – Decision Support, NLP, Computer Vision, for Public Clouds and
Enterprise in NLU, Vision and Conversational AI. The candidate is adept at working with large data sets to find
opportunities for product and process optimization and at using models to test the effectiveness of different courses of
action. They must have knowledge of a variety of data mining/data analysis methods and data tools,
building and implementing models, using/creating algorithms, and creating/running simulations. They must be
comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion
for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and
deployment of ML models.
● Building cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection,
Risk prevention, Predictive analytics), computer vision, natural language processing (NLP) and speech that
work out of the box.
● Brainstorm and Design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively
communicating your needs and understanding theirs, and address external and internal stakeholders'
product challenges.
● Build core of Artificial Intelligence and AI Services such as Decision Support, Vision, Speech, Text, NLP, NLU,
and others.
● Leverage Cloud technology –AWS, GCP, Azure
● Experiment with ML models in Python using machine learning libraries (Pytorch, Tensorflow), Big Data,
Hadoop, HBase, Spark, etc
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to
drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product
development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experience, supply chain metrics, and other business outcomes.
● Develop the company A/B testing framework and test model quality (a minimal sketch follows this list).
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects with data science techniques and associated libraries such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques include a strong understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning, and related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for
successful delivery.
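For the A/B testing item above, here is a minimal sketch of the kind of model-quality check such a framework might wrap: a two-proportion z-test comparing conversion rates between a control and a treatment group. The counts, sample sizes, and helper name are illustrative placeholders, not part of any framework described here.
```python
# Illustrative two-proportion z-test for an A/B experiment; the conversion
# counts, sample sizes, and function name are made-up placeholders.
import math
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return the z-statistic and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 2 * norm.sf(abs(z))                            # two-sided p-value

z, p = two_proportion_ztest(conv_a=480, n_a=10000, conv_b=540, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")                           # "significant" if p < 0.05
```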
What is required of you?
You will get an opportunity to build and operate a suite of massive scale, integrated data/ML platforms in a broadly
distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science, Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as
classification, information retrieval, clustering, knowledge graph, semi-supervised learning and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest,
Boosting, Trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, DigitalOcean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Knowledge of analyzing data from third-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using: QuickSight, Periscope, Business Objects, D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for Machine learning and complete feedback loops.
● Knowledge of Machine Learning lifecycle and experience working with data scientists
● Experience with Relational databases and NoSQL databases
● Experience with workflow scheduling / orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or
mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g.
Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially Deep Learning) models to bring down serving latency.
● Exposure to the ML model productionization stack (e.g. MLflow, Docker); a minimal logging sketch follows this list.
● Excellent exploratory data analysis skills to slice and dice data at scale using SQL in Redshift/BigQuery.
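As a rough illustration of the productionization stack mentioned above, the sketch below logs a model run with MLflow; the experiment name, dataset, and metric are illustrative assumptions rather than any team's actual pipeline.
```python
# Illustrative MLflow run: train a placeholder classifier, log a parameter,
# a metric, and the fitted model as a versioned artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)   # synthetic data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-fraud-model")                     # hypothetical experiment name
with mlflow.start_run():
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("auc", auc)
    mlflow.sklearn.log_model(clf, "model")                    # store the model artifact
```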
About FarMart
At FarMart, we are building the world's first OS powering food value chains. By digitizing and incentivizing the rural agri-retailer, FarMart has created one-stop hubs for farmers to buy inputs and sell output in close proximity to their farms. This alternative, asset-light food value chain eliminates considerable transportation costs, spillage, and time and effort for both the producer and the end-buyer. Are you passionate about the intersection of tech and food?
Role: Data Scientist II
Experience: 2-4 years
About You
Are you a beginner whose eyes light up when you see the progress bar of your model training or are you an experienced data professional whose heart sinks as the model loss starts to climb up? If that’s you, we like you already. Do you think about problem statements when you are on a cab ride or do you open up blog articles to entertain and enlighten you in boring meetings? If that’s you, we like you more now. All in all, you must have an insatiable hunger for knowledge and a team player attitude!
Key Responsibilities
- Understand and optimize the data infrastructure
- Develop visualization dashboards for the business and operations teams
- Set up and own data acquisition for several external sources
- Manage and clean the data for use by several systems
- Develop state-of-the-art Deep Learning/Classical models
- Deploy and Maintain production services
- Contribute to the community through open-source, blogs, etc.
What are we looking for
- Deep understanding of core concepts
- Broader knowledge of different types of problem statements and approaches
- Excellent hold on Python and the standard library
- Knowledge of industry-standard tools like scikit-learn, TensorFlow/PyTorch, etc.
- Experience with Computer Vision, Forecasting, and NLP will come in handy.
- A get shit done attitude
- A research mindset and a creative caliber to utilize previous work to your advantage.
- Work closely with your business to identify issues and use data to propose solutions for effective decision making
- Build algorithms and design experiments to merge, manage, interrogate and extract data to supply tailored reports to colleagues, customers or the wider organisation.
- Creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc
- Querying databases and using statistical computer languages: R, Python, SQL, etc.
- Visualizing/presenting data through various dashboards for data analysis, using Python Dash, Flask, etc.
- Test data mining models to select the most appropriate ones for use on a project
- Work in a POSIX/UNIX environment to run/deploy applications
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
- Assess the effectiveness of data sources and data-gathering techniques and improve data collection methods
- Horizon scan to stay up to date with the latest technology, techniques and methods
- Coordinate with different functional teams to implement models and monitor outcomes.
- Stay curious and enthusiastic about using algorithms to solve problems and enthuse others to see the benefit of your work.
General Expectations:
- Able to create algorithms to extract information from large data sets
- Strong knowledge of Python, R, Java or another scripting/statistical languages to automate data retrieval, manipulation and analysis.
- Experience with extracting and aggregating data from large data sets using SQL or other tools
- Strong understanding of various NLP and NLU techniques like Named Entity Recognition, Summarization, Topic Modeling, Text Classification, Lemmatization, and Stemming (a minimal text-classification sketch follows this list).
- Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, etc.
- Experience with Python libraries such as Pandas, NumPy, SciPy, Scikit-Learn
- Experience with Jupyter / Pandas / Numpy to manipulate and analyse data
- Knowledge of Machine Learning techniques and their respective pros and cons
- Strong Knowledge of various Data Science Visualization Tools like Tableau, PowerBI, D3, Plotly, etc.
- Experience using web services: Redshift, AWS, S3, Spark, DigitalOcean, etc.
- Proficiency in using query languages, such as SQL, Spark DataFrame API, etc.
- Hands-on experience in HTML, CSS, Bootstrap, JavaScript, AJAX, jQuery and Prototyping.
- Hands-on experience on C#, Javascript, .Net
- Experience in understanding and analyzing data using statistical software (e.g., Python, R, KDB+ and other relevant libraries)
- Experienced in building applications that meet enterprise needs – secure, scalable, loosely coupled design
- Strong knowledge of computer science, algorithms, and design patterns
- Strong oral and written communication, and other soft skills critical to collaborating and engaging with teams
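For the text-classification technique listed above, here is a minimal, self-contained baseline using a TF-IDF vectorizer and logistic regression in scikit-learn; the documents, labels, and predicted output are made-up placeholders.
```python
# Minimal text-classification baseline: TF-IDF features + logistic regression.
# The toy documents and labels are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "invoice overdue please pay",
    "team lunch next friday",
    "payment reminder final notice",
    "project kickoff agenda attached",
]
labels = ["finance", "general", "finance", "general"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(docs, labels)
print(clf.predict(["second payment reminder"]))   # expected: ['finance'] (illustrative)
```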
About Us
Censius is a US-based product company that is enabling AI at scale for enterprises. We are unlocking MLOps scalability by building the world's fastest way to deploy models and are amongst the earliest companies to tackle Model Performance Management. At Censius, you will get to solve difficult problems in a very nascent, but rapidly growing, area.
About the role
In this role, you will design and implement a generic ML platform that helps monitor models across modalities in production. You will collaborate with the research and development teams to build robust ML and big data monitoring platforms.
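As a rough sketch of the kind of check a model-monitoring platform performs, the example below flags data drift on a single numeric feature by comparing a production sample against a training-time baseline with a two-sample Kolmogorov–Smirnov test. The arrays, feature, and 0.05 threshold are illustrative assumptions, not Censius internals.
```python
# Hedged sketch: flag data drift on one numeric feature with a two-sample KS test.
# The baseline/production samples and the 0.05 threshold are illustrative choices.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)      # training-time feature values
production = rng.normal(loc=0.3, scale=1.0, size=5000)    # recent live-traffic values

stat, p_value = ks_2samp(baseline, production)
if p_value < 0.05:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.3g})")
else:
    print("No significant drift")
```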
Responsibilities
* Work on large-scale machine learning challenges that impact millions of people around the globe
* Research and implement cutting-edge algorithms and build pipelines that work with massive data sets in real time
* Implement fast, scalable solutions with optimal performance, day in and day out
* Machine learning is the core of our business, so this role is responsible for all phases of the product development lifecycle
* Evaluate and validate the analyses with statistical methods and explain them to people unfamiliar with the domain of data science
* Write specifications for algorithms, reports on data analysis, and documentation of algorithms, and collaborate with product teams
Skills and attributes for success – we'll want you to have:
* Strong programming skills in Python.
* Working experience with a variety of ML techniques (decision trees, clustering, boosting, bagging, neural networks, etc.)
* Working experience with advanced statistical concepts (outliers, distance, regression, distributions, statistical tests, etc.)
* Hands-on experience with one or more machine learning frameworks – PyTorch, Keras, TensorFlow, XGBoost – and libraries – Pandas, NumPy, Scikit-learn
* Familiarity with ML platforms like MLflow, Weights & Biases, Kubeflow, and AWS SageMaker.
It'd be nice if you have
* Passion for developing data products from scratch and a high level of proactiveness
* Knowledge of Reinforcement learning and Optimisation problems on a large scale is a big plus
* Some experience in project management and mentoring is also a plus.
* Knowledge and experience in deploying large-scale systems using distributed and cloud-based systems (Hadoop, Amazon EC2) is a big plus.
You will excel in this role if
* You are scrappy, take ownership, and follow through to the very end
* You enjoy wearing multiple hats
* A sincere desire to learn and grow - we're quite small, so the desire to learn and grow as the company grows is essential!
Benefits
- Competitive Salary 💸
- Work Remotely 🌎
- Health insurance 🏥
- Unlimited Time Off ⏰
- Support for continual learning (free books and online courses) 📚
- Reimbursement for streaming services (think Netflix) 🎥
- Reimbursement for gym or physical activity of your choice 🏋🏽♀️
- Flex hours 💪
- Leveling Up Opportunities 🌱
DATA SCIENTIST-MACHINE LEARNING
GormalOne LLP. Mumbai IN
Job Description
GormalOne is a social impact Agri tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology will be highly user-friendly, for the majority of farmers, who are digitally naive. We are looking for people, who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction amongst others.
GormalOne is looking for a machine learning engineer to join our team. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating ML Ops best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of ML Ops best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.
Location: Bangalore
Roles & Responsibilities
- Individual contributor
- Developing and maintaining an end-to-end data science project
- Deploying scalable applications on different platforms
- Ability to analyze and enhance the efficiency of existing products
What are we looking for?
- 3 to 5 Years of experience as a Data Scientist
- Skilled in Data Analysis, EDA, Model Building, and Analysis.
- Basic coding skills in Python
- Decent knowledge of Statistics
- Creating pipelines for ETL and ML models.
- Experience in the operationalization of ML models
- Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM.
- Hands-on experience in Keras, PyTorch or Tensorflow
Basic Qualifications
- Tech/BE in Computer Science or Information Technology
- Certification in AI, ML, or Data Science is preferred.
- Master/Ph.D. in a relevant field is preferred.
Preferred Requirements
- Experience with tools and packages like TensorFlow, MLflow, and Airflow
- Experience with object detection techniques like YOLO (a minimal inference sketch follows this list)
- Exposure to cloud technologies
- Operationalization of ML models
- Good understanding and exposure to MLOps
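For the YOLO item above, a minimal inference sketch is shown below using the ultralytics package as one possible toolchain; the weights file and image path are placeholders, and this is not tied to GormalOne's actual stack.
```python
# Illustrative YOLO inference sketch (assumes the `ultralytics` package is installed);
# "yolov8n.pt" and "cow.jpg" are placeholder weight/image names.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")              # small pretrained detector
results = model("cow.jpg")              # run inference on a single image
for box in results[0].boxes:            # iterate over detected bounding boxes
    cls_id = int(box.cls[0])
    conf = float(box.conf[0])
    print(model.names[cls_id], f"{conf:.2f}", box.xyxy[0].tolist())
```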
Kindly note: Salary shall be commensurate with qualifications and experience
- Role: Machine Learning Lead
- Experience: 5+ Years
- Employee strength: 80+
- Remuneration: Most competitive in the market
Programming Language:
• Advanced knowledge of Python.
• Object Oriented Programming skills.
Conceptual:
• Mathematical understanding of machine learning and deep learning algorithms.
• Thorough grasp on statistical terminologies.
Applied:
• Libraries: Tensorflow, Keras, Pytorch, Statsmodels, Scikit-learn, SciPy, Numpy, Pandas, Matplotlib, Seaborn, Plotly
• Algorithms: Ensemble Algorithms, Artificial Neural Networks and Deep Learning, Clustering Algorithms, Decision Tree Algorithms, Dimensionality Reduction Algorithms, etc.
• MySQL, MongoDB, ElasticSearch or other NoSQL database implementations.
If interested, kindly share your CV at tanya@tigihr.com
Location: Ahmedabad / Pune
Team: Technology
Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, having a branch office in Pune.
We have worked on / are working on AI projects / algorithms-heavy projects with applications ranging in finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this is based on the core concepts of data science,
computer vision, machine learning (with emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems and Internet of Things.
PRIMARY RESPONSIBILITIES:
● Applying machine learning, deep learning, and signal processing on large datasets (Audio, sensors, images, videos, text) to develop models.
● Architecting large scale data analytics / modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework / pipeline.
● Analyzing data collected from various sources,
● Evaluate and validate the analysis with statistical methods. Also presenting this in a lucid form to people not familiar with the domain of data science / computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adopting them for our
purposes.
● Feature engineering to add new features that improve model
performance.
KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing with at least 3 years of professional work experience working on real-world data.
● Strong programming background, e.g. Python, C/C++, R, Java, and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries TensorFlow, JAX, Keras, scikit-learn, PyTorch. Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment in defined time frames. Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.
EDUCATION:
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/Ph.D., preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, is preferred.
Key deliverables for the Data Science Engineer would be to help us discover the information hidden in vast amounts of data, and help us make smarter decisions to deliver even better products. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products.
What will you do?
- You will be building and deploying ML models to solve specific business problems related to NLP, computer vision, and fraud detection.
- You will be constantly assessing and improving the model using techniques like Transfer learning
- You will identify valuable data sources and automate collection processes along with undertaking pre-processing of structured and unstructured data
- You will own the complete ML pipeline - data gathering/labeling, cleaning, storage, modeling, training/testing, and deployment.
- Assessing the effectiveness and accuracy of new data sources and data gathering techniques.
- Building predictive models and machine-learning algorithms to apply to data sets.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Presenting information using data visualization techniques and proposing solutions and strategies to business challenges
We would love to hear from you if :
- You have 2+ years of experience as a software engineer at a SaaS or technology company
- Demonstrable hands-on programming experience with Python/R Data Science Stack
- Ability to design and implement workflows of Linear and Logistic Regression, Ensemble Models (Random Forest, Boosting) using R/Python
- Familiarity with Big Data platforms (Databricks, Hadoop, Hive) and AWS services (SageMaker, IAM, S3, Lambda Functions, Redshift, Elasticsearch)
- Experience in Probability and Statistics, ability to use ideas of Data Distributions, Hypothesis Testing and other Statistical Tests.
- Demonstrable competency in Data Visualisation using the Python/R Data Science Stack.
- Preferable: experience in web crawling and data scraping
- Strong experience in NLP; has worked with libraries such as NLTK, spaCy, Pattern, Gensim, etc. (a small NER sketch follows this list)
- Experience with text mining, pattern matching and fuzzy matching
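For the NLP libraries listed above, the sketch below runs named-entity recognition with spaCy, one of several options mentioned; the model name and example sentence are made-up placeholders.
```python
# Illustrative spaCy NER sketch (assumes the small English model has been
# downloaded via `python -m spacy download en_core_web_sm`); the sentence is made up.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tartan raised funding from investors in Bangalore last March.")
for ent in doc.ents:
    print(ent.text, ent.label_)        # e.g. "Bangalore" GPE, "last March" DATE
```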
Why Tartan?
- Brand new Macbook
- Stock Options
- Health Insurance
- Unlimited Sick Leaves
- Passion Fund (Invest in yourself or your passion project)
- Wind Down
Location: Ahmedabad / Pune
Team: Technology
Company Profile
InFoCusp is a company working in the broad field of Computer Science, Software Engineering, and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, having a branch office in Pune.
We have worked on / are working on AI projects / algorithms-heavy projects with applications ranging in finance, healthcare, e-commerce, legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this is based on the core concepts of data science,
computer vision, machine learning (with emphasis on deep learning), cloud computing, biomedical signal processing, text and natural language processing, distributed systems, embedded systems and the Internet of Things.
PRIMARY RESPONSIBILITIES:
● Applying machine learning, deep learning, and signal processing on large datasets (Audio, sensors, images, videos, text) to develop models.
● Architecting large scale data analytics/modeling systems.
● Designing and programming machine learning methods and integrating them into our ML framework/pipeline.
● Analyzing data collected from various sources,
● Evaluate and validate the analysis with statistical methods. Also presenting this in a lucid form to people not familiar with the domain of data science/computer science.
● Writing specifications for algorithms, reports on data analysis, and documentation of algorithms.
● Evaluating new machine learning methods and adapting them for our
purposes.
● Feature engineering to add new features that improve model
performance.
KNOWLEDGE AND SKILL REQUIREMENTS:
● Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing with at least 3 years of professional work experience working on real-world data.
● Strong programming background, e.g. Python, C/C++, R, Java, and knowledge of software engineering concepts (OOP, design patterns).
● Knowledge of machine learning libraries Tensorflow, Jax, Keras, scikit-learn, pyTorch. Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts
● Ability to perform both independent and collaborative research.
● Excellent written and spoken communication skills.
● A proven ability to work in a cross-discipline environment in defined time frames. Knowledge and experience of deploying large-scale systems using distributed and cloud-based systems (Hadoop, Spark, Amazon EC2, Dataflow) is a big plus.
● Knowledge of systems engineering is a big plus.
● Some experience in project management and mentoring is also a big plus.
EDUCATION:
- B.E./B.Tech/B.S. candidates with significant prior experience in the aforementioned fields will be considered.
- M.E./M.S./M.Tech/Ph.D., preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, is preferred.
About Us :
Docsumo is Document AI software that helps enterprises capture data and analyze customer documents. We convert documents such as invoices, ID cards, and bank statements into actionable data. We work with clients such as PayU, Arbor, and Hitachi, and we are backed by Sequoia, Barclays, Techstars, and Better Capital.
As a Senior Machine Learning Engineer, you will work directly with the CTO to develop end-to-end API products for the US market in the information extraction domain.
Responsibilities :
- You will be designing and building systems that help Docsumo process visual data, i.e. PDFs and images of documents.
- You'll work in our Machine Intelligence team, a close-knit group of scientists and engineers who incubate new capabilities from whiteboard sketches all the way to finished apps.
- You will get to learn the ins and outs of building core capabilities & API products that can scale globally.
- Should have hands-on experience applying advanced statistical learning techniques to different types of data.
- Should be able to design, build, and work with RESTful Web Services in JSON and XML formats (Flask preferred); a minimal sketch follows this list.
- Should follow Agile principles and processes including (but not limited to) standup meetings, sprints and retrospectives.
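As a rough illustration of the Flask-based RESTful services referenced above, the sketch below exposes a placeholder model behind a JSON /predict endpoint; the model file, field names, and port are assumptions, not Docsumo's actual API.
```python
# Minimal Flask JSON prediction service (sketch); "model.joblib" and the
# "features" field are placeholders.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")          # any pre-trained scikit-learn estimator

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload["features"]           # e.g. [[0.1, 2.3, ...]]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```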
Skills / Requirements :
- Minimum 3+ years experience working in machine learning, text processing, data science, information retrieval, deep learning, natural language processing, text mining, regression, classification, etc.
- Must have a full-time degree in Computer Science or similar (Statistics/Mathematics)
- Working with OpenCV, TensorFlow and Keras
- Working with Python: NumPy, Scikit-learn, Matplotlib, Pandas
- Familiarity with Version Control tools such as Git
- Theoretical and practical knowledge of SQL / NoSQL databases with hands-on experience in at least one database system.
- Must be self-motivated, flexible, collaborative, with an eagerness to learn
We are a 20-year-old IT services company from Kolkata working in India and abroad. We primarily work as an SSP (Software Solutions Partner) and serve some of the leading business houses in the country on various software project implementations, especially on the SAP and Oracle platforms, and we also work on government and semi-government projects as an outsourcing partner across India.
Location: Can be anywhere in India (Mumbai, Pune, and Kolkata are preferable)
JD
Machine Learning/Deep Learning experience above 3 years
- Clear and structured thinking and communication
Keywords: Machine Learning, Deep Learning, AI, Regression, Classification, Clustering, NLP, CNN, RNN, LSTM, AutoML, k-NN, Naive Bayes, SVM, Decision Forests
- Understand granular requirements and the underlying business problem and convert them to a low-level design
- Develop the analytic process chain with pre-processing, training, testing, boosting, etc.
- Develop the technical deliverable in mcube (Python/Spark ML/R, H2O/Tensorflow) as per design
- Ensure quality of deliverables (coding standards, data quality, data reconciliation)
- Proactively raise risks to the Technical Lead
- Machine Learning, Deep Learning, Regression, Classification, Clustering, NLP, CNN, RNN
- Expertise in data analysis and analytic programming (Python/R/Spark ML/Tensorflow)
- Experience in multiple data processing technologies (preferably Pentaho or Spark)
- Basic knowledge of effort estimation; clear and structured thinking and communication
- Expertise in testing the accuracy of deliverables (models)
- Exposure to Data Modelling and Analysis
- Exposure to information delivery (model outcome communication)
Qualification:
M.S. / M.Tech / B.Tech / B.E. (in this order of preference)
- Master's course in Data Science after a technical (engineering/science) degree
We are establishing infrastructure for internal and external reporting using Tableau and are looking for someone with experience building visualizations and dashboards in Tableau and using Tableau Server to deliver them to internal and external users.
Required Experience
- Implementation of interactive visualizations using Tableau Desktop
- Integration with Tableau Server and support of production dashboards and embedded reports
- Writing and optimization of SQL queries
- Proficient in Python including the use of Pandas and numpy libraries to perform data exploration and analysis
- 3 years of experience working as a Software Engineer / Senior Software Engineer
- Bachelor's in Engineering – can be Electronics and Communication, Computer Science, or IT
- Well versed in basic data structures, algorithms, and system design
- Should be capable of working well in a team and should possess very good communication skills
- Self-motivated, organized, and fun to work with
- Productive and efficient working remotely
- Test driven mindset with a knack for finding issues and problems at earlier stages of development
- Interest in learning and picking up a wide range of cutting edge technologies
- Should be curious and interested in learning some Data science related concepts and domain knowledge
- Work alongside other engineers on the team to elevate technology and consistently apply best practices
Highly Desirable
- Data Analytics
- Experience in AWS cloud or any cloud technologies
- Experience in Big Data and streaming technologies like PySpark and Kafka is a big plus
- Shell scripting
- Preferred tech stack – Python, REST APIs, microservices, Flask/FastAPI, pandas, numpy, Linux, shell scripting, Airflow, PySpark
- Strong backend experience – has worked with microservices and REST APIs (Flask, FastAPI) and with relational and non-relational databases
Changing the way cataloging is done across the globe. Our vision is to empower the smallest of sellers, situated in the farthest of corners, to create superior product images and videos without the need for any external professional help. Imagine 30M+ merchants shooting product images or videos using their smartphones, and then choosing filters for Amazon, Asos, Airbnb, Doordash, etc. to instantly compose high-quality, "tuned-in" product visuals.
We build AI photo-editing software to capture and process beautiful product images for online selling. We are also fortunate and proud to be backed by the biggest names in the investment community, including the likes of Accel Partners, AngelList, and prominent founders and internet company operators, who believe that there is a more intelligent and efficient way of doing digital production than how the world operates currently.
Job Description :
- We are looking for a seasoned Computer Vision Engineer with AI/ML/CV and Deep Learning skills to
play a senior leadership role in our Product & Technology Research Team.
- You will be leading a team of CV researchers to build models that automatically transform millions of raw e-commerce, automobile, food, and real-estate images into processed final images.
- You will be responsible for researching the latest art of the possible in the field of computer vision, designing the solution architecture for our offerings, and leading the Computer Vision teams to build the core algorithmic models and deploy them on cloud infrastructure.
- Working with the Data team to ensure your data pipelines are well set up and
models are being constantly trained and updated
- Working alongside the product team to ensure that AI capabilities are built as democratized tools that allow internal as well as external stakeholders to innovate on top of them and make our customers successful
- You will work closely with the Product & Engineering teams to convert the models into beautiful products
that will be used by thousands of Businesses everyday to transform their images and videos.
Job Requirements:
- 4-5 years of experience overall.
- Looking for strong Computer Vision experience and knowledge.
- BS/MS/PhD degree in Computer Science, Engineering, or a related subject from an Ivy League institute
- Exposure to Deep Learning techniques, TensorFlow/PyTorch
- Prior expertise in building image processing applications using GANs, CNNs, and diffusion models
- Expertise with Image Processing Python libraries like OpenCV, etc.
- Good hands-on experience on Python, Flask or Django framework
- Authored publications at peer-reviewed AI conferences (e.g. NeurIPS, CVPR, ICML, ICLR, ICCV, ACL)
- Prior experience of managing teams and building large scale AI / CV projects is a big plus
- Great interpersonal and communication skills
- Critical thinker and problem-solving skills
Who Are We
A research-oriented company with expertise in computer vision and artificial intelligence at its core, Orbo is a comprehensive platform built on an AI-based visual enhancement stack. This way, companies can find a suitable product for their needs, where deep-learning-powered technology can automatically improve their imagery.
ORBO's solutions are helping the BFSI, beauty and personal care, and e-commerce image retouching industries with digital transformation in multiple ways.
WHY US
- Join top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
Description:
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This would be a fast-paced role and the person will get an opportunity to develop an industrial grade solution from concept to deployment.
Responsibilities:
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Setup end-end Deep Learning pipeline for data ingestion, preparation, model training, validation and deployment
- Tune the models to achieve high accuracy rates and minimum latency
- Deploying developed computer vision models on edge devices after optimization to meet customer requirements
Requirements:
- Bachelor’s degree
- Understanding about depth and breadth of computer vision and deep learning algorithms.
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNX Runtime, and TensorRT (a minimal ONNX Runtime sketch follows this list)
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with machine/deep learning frameworks like TensorFlow and PyTorch.
- Proficient understanding of code versioning tools, such as Git
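For the deployment item above, here is a minimal sketch of running an exported model with ONNX Runtime, one way to reduce serving latency on CPU or edge targets; the model file, input name, and input shape are illustrative assumptions.
```python
# Hedged sketch: run an exported model with ONNX Runtime.
# "detector.onnx" and the 1x3x640x640 input shape are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("detector.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)    # NCHW image tensor
outputs = session.run(None, {input_name: dummy})             # list of output arrays
print([o.shape for o in outputs])
```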
Our perfect candidate is someone that:
- is proactive and an independent problem solver
- is a constant learner. We are a fast growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
What You’ll Do:
- Accurate translation of business needs into a conceptual and technical architecture design of AI models and solutions
- Collaboration with developers and engineering teams resolving challenging tasks and ensuring proposed design is properly implemented
- Strategy for managing the changes to the AI models (new business needs, technology changes, model retraining, etc.)
- Collaborate with business partners and clients for AI solutioning and use cases. Provide recommendations to drive alignment with business teams
- Define and implement evaluation strategies for each model, demonstrate applicability and performance of the model, and identify its limits
- Design complex system integrations of AI technologies with API-driven platforms, using best practices for security and performance
- Experience in languages, tools & technologies such as Python, Tensorflow, Pytorch, Kubernetes, Docker, etc
- Experience with MLOps tools (like TFx, Tensorflow Serving, KubeFlow, etc.) and methodologies for CI/CD of ML models
- Proactively identify and address technical strengths, weaknesses, and opportunities across the AI and ML domain
- Strategic direction for maximizing simplification and re-use lowering overall TCO
What You’ll Bring:
- Minimum 10 years of hands-on experience in the IT field, with at least 6+ years in Data Science / ML / AI implementation-based products and solutions
- Experience with Computer Vision – Vision AI & Document AI
- Must be hands-on with the Python programming language, MLOps, TensorFlow, PyTorch, Keras, scikit-learn, etc.
- Well versed with deep learning concepts, computer vision, image processing, document processing, convolutional neural networks and data ontology applications
- Proven track record at execution of projects in agile & cross-functional teams
- Published research papers, representation at reputable AI conferences, and the ability to lead and drive a research mindset across the team
- Good to have experience with GCP / Microsoft Azure / Amazon Web Services
- Ph.D. or Masters in a quantitative field such as Computer Science, IT, Stats/Maths
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover upto two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of upto Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Google Cloud, Amazon & others.
Job Description
We are looking for a highly capable machine learning engineer to optimize our deep learning systems. You will be evaluating existing deep learning (DL) processes, doing hyperparameter tuning, performing statistical analysis (logging and evaluating model performance) to resolve data set problems, and enhancing the accuracy of our AI software's predictive automation capabilities.
You will be working with technologies like AWS SageMaker, TensorFlow JS, and TensorFlow/Keras/TensorBoard to create the deep learning backends that power our application.
To ensure success as a machine learning engineer, you should demonstrate solid data science knowledge and experience in a deep learning role. A first-class machine learning engineer will be someone whose expertise translates into the enhanced performance of predictive automation software. To do this job successfully, you need exceptional skills in DL and programming.
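As a generic illustration of the TensorFlow/Keras/TensorBoard workflow described above, the sketch below trains a tiny model on synthetic data and logs metrics with a TensorBoard callback; the architecture, data, and log directory are placeholders, not this team's actual backend.
```python
# Illustrative Keras training sketch with TensorBoard logging; the tiny MLP,
# synthetic data, and "logs/run1" directory are placeholders.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")            # synthetic binary target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

tensorboard_cb = tf.keras.callbacks.TensorBoard(log_dir="logs/run1")
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32,
          callbacks=[tensorboard_cb])                  # inspect with: tensorboard --logdir logs
```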
Responsibilities
- Consulting with managers to determine and refine machine learning objectives.
- Designing deep learning systems and self-running artificial intelligence (AI) software to automate predictive models.
- Transforming data science prototypes and applying appropriate ML algorithms and tools.
- Carrying out data engineering subtasks such as defining data requirements, and collecting, labeling, inspecting, cleaning, augmenting, and moving data.
- Carrying out modeling subtasks such as training deep learning models, defining evaluation metrics, searching hyperparameters, and reading research papers.
- Carrying out deployment subtasks such as converting prototyped code into production code, working in depth with AWS services to set up the cloud environment for training, improving response times, and saving bandwidth.
- Ensuring that algorithms generate robust and accurate results.
- Running tests, performing analysis, and interpreting test results.
- Documenting machine learning processes.
- Keeping abreast of developments in machine learning.
Requirements
- Proven experience as a Machine Learning Engineer or in a similar role.
- Should have in-depth knowledge of AWS SageMaker and related services (like S3).
- Extensive knowledge of ML frameworks, libraries, algorithms, data structures, data modeling, software architecture, and math & statistics.
- Ability to write robust code in Python & JavaScript (TensorFlow JS).
- Experience with Git and GitHub.
- Superb analytical and problem-solving abilities.
- Excellent troubleshooting skills.
- Good project management skills.
- Great communication and collaboration skills.
- Excellent time management and organizational abilities.
- Bachelor's degree in computer science, data science, mathematics, or a related field; a Master's degree is a plus.
Who Are We
A research-oriented company with expertise in computer vision and artificial intelligence at its core, Orbo is a comprehensive platform built on an AI-based visual enhancement stack. This way, companies can find a suitable product for their needs, where deep-learning-powered technology can automatically improve their imagery.
ORBO's solutions are helping the BFSI, beauty and personal care, and e-commerce image retouching industries with digital transformation in multiple ways.
WHY US
- Join top AI company
- Grow with your best companions
- Continuous pursuit of excellence, equality, respect
- Competitive compensation and benefits
You'll be a part of the core team and will be working directly with the founders in building and iterating upon the core products that make cameras intelligent and images more informative.
To learn more about how we work, please check out
Description:
We are looking for a computer vision engineer to lead our team in developing a factory floor analytics SaaS product. This would be a fast-paced role and the person will get an opportunity to develop an industrial grade solution from concept to deployment.
Responsibilities:
- Research and develop computer vision solutions for industries (BFSI, Beauty and personal care, E-commerce, Defence etc.)
- Lead a team of ML engineers in developing an industrial AI product from scratch
- Setup end-end Deep Learning pipeline for data ingestion, preparation, model training, validation and deployment
- Tune the models to achieve high accuracy rates and minimum latency
- Deploying developed computer vision models on edge devices after optimization to meet customer requirements
Requirements:
- Bachelor’s degree
- Understanding about depth and breadth of computer vision and deep learning algorithms.
- 4+ years of industrial experience in computer vision and/or deep learning
- Experience in taking an AI product from scratch to commercial deployment.
- Experience in Image enhancement, object detection, image segmentation, image classification algorithms
- Experience in deployment with OpenVINO, ONNXruntime and TensorRT
- Experience in deploying computer vision solutions on edge devices such as Intel Movidius and Nvidia Jetson
- Experience with any machine/deep learning frameworks like Tensorflow, and PyTorch.
- Proficient understanding of code versioning tools, such as Git
Our perfect candidate is someone that:
- is proactive and an independent problem solver
- is a constant learner. We are a fast growing start-up. We want you to grow with us!
- is a team player and good communicator
What We Offer:
- You will have fun working with a fast-paced team on a product that can impact the business model of E-commerce and BFSI industries. As the team is small, you will easily be able to see a direct impact of what you build on our customers (Trust us - it is extremely fulfilling!)
- You will be in charge of what you build and be an integral part of the product development process
- Technical and financial growth!
- B.E. in Computer Science or equivalent.
- In-depth knowledge of machine learning algorithms and their applications, including practical experience with and theoretical understanding of algorithms for classification, regression, and clustering.
- Hands-on experience in computer vision and deep learning projects to solve real-world problems involving vision tasks such as object detection, object tracking, instance segmentation, activity detection, depth estimation, optical flow, multi-view geometry, domain adaptation, etc.
- Strong understanding of modern and traditional computer vision algorithms.
- Experience in one of the deep learning frameworks / networks: PyTorch, TensorFlow, Darknet (YOLO v4/v5), U-Net, Mask R-CNN, EfficientDet, BERT, etc.
- Proficiency with CNN architectures such as ResNet, VGG, UNet, MobileNet, pix2pix, and CycleGAN (a small classification sketch follows this list).
- Experienced user of libraries such as OpenCV, scikit-learn, matplotlib, and pandas.
- Ability to transform research articles into working solutions to solve real-world problems.
- High proficiency in Python programming.
- Familiar with software development practices/pipelines (DevOps – Kubernetes, Docker containers, CI/CD tools).
- Strong communication skills.
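For the CNN-architecture item above, here is a small sketch that loads a pretrained ResNet-50 from torchvision and classifies a single image; the image path is a placeholder and the example is not tied to any specific project in this listing.
```python
# Hedged sketch: classify one image with a pretrained torchvision ResNet-50.
# "sample.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                 # matching resize/crop/normalize pipeline

img = preprocess(Image.open("sample.jpg").convert("RGB")).unsqueeze(0)   # 1x3xHxW
with torch.no_grad():
    probs = model(img).softmax(dim=1)
top = probs.argmax(dim=1).item()
print(weights.meta["categories"][top], float(probs[0, top]))
```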
Lead Machine Learning Engineer
About IDfy
IDfy is ranked amongst the World's Top 100 Regulatory Technology companies for the last two years. IDfy's AI-powered technology solutions help real people unlock real opportunities. We create the confidence required for people and businesses to engage with each other in the digital world. If you have used any major payment wallet, digitally opened a bank account, used a self-drive car, played a real-money online game, or hosted people through Airbnb, it's quite likely that your identity has been verified through IDfy at some point.
About the team
- The machine learning team is a closely knit team responsible for building models and services that support key workflows for IDfy.
- Our models are critical for these workflows and as such are expected to perform accurately and with low latency. We use a mix of conventional and hand-crafted deep learning models.
- The team comes from diverse backgrounds and experience. We respect opinions and believe in honest, open communication.
- We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as a platform and not a services company.
About the role
In this role you will:
- Work on all aspects of a production machine learning platform: acquiring data, training and building models, deploying models, building API services for exposing these models, maintaining them in production, and more.
- Work on performance tuning of models
- From time to time work on support and debugging of these production systems
- Work on researching the latest technology in the areas of our interest and applying it to build newer products and enhancement of the existing platform.
- Building workflows for training and production systems
- Contribute to documentation
While the emphasis will be on researching, building and deploying models into production, you will be expected to contribute to aspects mentioned above.
About you
- You are a seasoned machine learning engineer (or data scientist). Our ideal candidate is someone with 8+ years of experience in production machine learning.
Must Haves
- You should be experienced in framing and solving complex problems with the application of machine learning or deep learning models.
- Deep expertise in computer vision or NLP with the experience of putting it into production at scale.
- You have experienced that and understand that modelling is only a small part of building and delivering AI solutions and know what it takes to keep a high-performance system up and running.
- Managing a large scale production ML system for at least a couple of years
- Optimization and tuning of models for deployment at scale
- Monitoring and debugging of production ML systems
- An enthusiasm and drive to learn, assimilate and disseminate the state of the art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
- Past experience of mentoring junior colleagues
- Knowledge of and experience in ML Ops and tooling for efficient machine learning processes
Good to Have
- Our stack also includes languages like Go and Elixir. We would love it if you know any of these or take interest in functional programming.
- We use Docker and Kubernetes for deploying our services, so an understanding of this would be useful to have.
- Experience in using any other platform, frameworks, tools.
Other things to keep in mind
- Our goal is to help a significant part of the world’s population unlock real opportunities. This is an opportunity to make a positive impact here, and we hope you like it as much as we do.
Life At IDfy
People at IDfy care about creating value. We take pride in the strong collaborative culture that we have built, and our love for solving challenging problems. Life at IDfy is not always what you’d expect at a tech start-up that’s growing exponentially every quarter. There’s still time and space for balance.
We host regular talks, events and performances around Life, Art, Sports, and Technology; continuously sparking creative neurons in our people to keep their intellectual juices flowing. There’s never a dull day at IDfy. The office environment is casual and it goes beyond just the dress code. We have no conventional hierarchies and believe in an open-door policy where everyone is approachable.
About IDfy
IDfy is ranked amongst the World's Top 100 Regulatory Technology companies for the last two years. IDfy's AI-powered technology solutions help real people unlock real opportunities. We create the confidence required for people and businesses to engage with each other in the digital world. If you have used any major payment wallet, digitally opened a bank account, used a self-drive car, played a real-money online game, or hosted people through Airbnb, it's quite likely that your identity has been verified through IDfy at some point.
About the team
- The machine learning team is a closely knit team responsible for building models and services that support key workflows for IDfy.
- Our models are critical for these workflows and as such are expected to perform accurately and with low latency. We use a mix of conventional and hand-crafted deep learning models.
- The team comes from diverse backgrounds and experience. We respect opinions and believe in honest, open communication.
- We work directly with business and product teams to craft solutions for our customers. We know that we are, and function as a platform and not a services company.
About the role
In this role you will:
- Work on all aspects of a production machine learning platform: acquiring data, training and building models, deploying models, building API services for exposing these models, maintaining them in production, and more.
- Work on performance tuning of models
- From time to time work on support and debugging of these production systems
- Work on researching the latest technology in the areas of our interest and applying it to build newer products and enhancement of the existing platform.
- Building workflows for training and production systems
- Contribute to documentation
While the emphasis will be on researching, building and deploying models into production, you will be expected to contribute to aspects mentioned above.
About you
You are a seasoned machine learning engineer (or data scientist). Our ideal candidate is someone with 5+ years of experience in production machine learning.
Must Haves
- You should be experienced in framing and solving complex problems with the application of machine learning or deep learning models.
- Deep expertise in computer vision or NLP with the experience of putting it into production at scale.
- You have experienced that and understand that modelling is only a small part of building and delivering AI solutions and know what it takes to keep a high-performance system up and running.
- Managing a large scale production ML system for at least a couple of years
- Optimization and tuning of models for deployment at scale
- Monitoring and debugging of production ML systems
- An enthusiasm and drive to learn, assimilate and disseminate the state of the art research. A lot of what we are building will require innovative approaches using newly researched models and applications.
- Past experience of mentoring junior colleagues
- Knowledge of and experience in ML Ops and tooling for efficient machine learning processes
Good to Have
- Our stack also includes languages like Go and Elixir. We would love it if you know any of these or take interest in functional programming.
- We use Docker and Kubernetes for deploying our services, so an understanding of this would be useful to have.
- Experience in using any other platform, frameworks, tools.
Other things to keep in mind
- Our goal is to help a significant part of the world’s population unlock real opportunities. This is an opportunity to make a positive impact here, and we hope you like it as much as we do.
Life At IDfy
People at IDfy care about creating value. We take pride in the strong collaborative culture that we have built, and our love for solving challenging problems. Life at IDfy is not always what you’d expect at a tech start-up that’s growing exponentially every quarter. There’s still time and space for balance.
We host regular talks, events and performances around Life, Art, Sports, and Technology; continuously sparking creative neurons in our people to keep their intellectual juices flowing. There’s never a dull day at IDfy. The office environment is casual and it goes beyond just the dress code. We have no conventional hierarchies and believe in an open-door policy where everyone is approachable.