6+ Decision trees Jobs in India
Job Description – Data Science
Basic Qualification:
- ME/MS from premier institute with a background in Mechanical/Industrial/Chemical/Materials engineering.
- Strong Analytical skills and application of Statistical techniques to problem solving
- Expertise in algorithms, data structures and performance optimization techniques
- Proven track record of end-to-end ownership, taking an idea from incubation to market
- Minimum 2+ years of experience in data analysis, statistical analysis, data mining, and optimization algorithms
Responsibilities
The Data Engineer/Analyst will
- Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
- Interact clearly with business teams, including product planning, sales, marketing, and finance, to define projects and objectives.
- Mine and analyze data from company databases to drive optimization and improvement of product and process development, marketing techniques and business strategies
- Coordinate with different R&D and Business teams to implement models and monitor outcomes.
- Mentor team members towards developing quick solutions for business impact.
- Skilled at all stages of the analysis process, including defining key business questions; recommending measures, data sources, methodology, and study design; dataset creation; analysis execution; and interpretation, presentation, and publication of results.
- 4+ years’ experience in an MNC environment with projects involving ML, DL, and/or DS
- Experience in Machine Learning, Data Mining or Machine Intelligence (Artificial Intelligence)
- Knowledge of Microsoft Azure is desired.
- Expertise in machine learning techniques such as classification, data/text mining, NLP, image processing, decision trees, random forests, neural networks, and deep learning algorithms
- Proficient in Python and its libraries such as NumPy, Matplotlib, and Pandas (see the example after this list)
- Superior verbal and written communication skills, ability to convey rigorous mathematical concepts and considerations to Business Teams.
- Experience in infra development / building platforms is highly desired.
- A drive to learn and master new technologies and techniques.
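The analysis work described above typically starts with exploratory data work in Pandas. Below is a minimal sketch of that kind of workflow; the file name and column names are illustrative assumptions, not part of this role description.

```python
import pandas as pd

# Load a hypothetical extract of process data pulled from a company database.
df = pd.read_csv("process_measurements.csv", parse_dates=["timestamp"])

# Summary statistics per production line, to spot optimization candidates.
summary = (
    df.groupby("production_line")["cycle_time_sec"]
      .agg(["count", "mean", "std"])
      .sort_values("mean", ascending=False)
)
print(summary.head())
```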
XpressBees – a logistics company started in 2015 – is amongst the fastest-growing companies in its sector. Our
vision to evolve into a strong full-service logistics organization is reflected in our various lines of business, such as B2C
logistics, 3PL, B2B Xpress, Hyperlocal, and cross-border logistics.
Our strong domain expertise and constant focus on innovation have helped us rapidly evolve into the most trusted
logistics partner in India. XB has progressively carved its way towards best-in-class technology platforms, an
extensive logistics network reach, and a seamless last-mile management system.
While on this aggressive growth path, we seek to become the one-stop-shop for end-to-end logistics solutions. Our
big focus areas for the very near future include strengthening our presence as service providers of choice and
leveraging the power of technology to drive supply chain efficiencies.
Job Overview
XpressBees is enriching and scaling its end-to-end logistics solutions at a high pace. This is a great opportunity to join
the team working on forming and delivering the operational strategy behind Artificial Intelligence / Machine Learning
and Data Engineering, leading projects and teams of AI Engineers collaborating with Data Scientists. In your role, you
will build high performance AI/ML solutions using groundbreaking AI/ML and BigData technologies. You will need to
understand business requirements and convert them to a solvable data science problem statement. You will be
involved in end to end AI/ML projects, starting from smaller scale POCs all the way to full scale ML pipelines in
production.
Seasoned AI/ML Engineers will own the implementation and productionization of cutting-edge AI-driven algorithmic
components for search, recommendation and insights to improve the efficiencies of the logistics supply chain and
serve the customer better.
You will apply innovative ML tools and concepts to deliver value to our teams and customers and make an impact on
the organization while solving challenging problems in the areas of AI, ML, data analytics, and computer science.
Opportunities for application:
- Route Optimization
- Address / Geo-Coding Engine
- Anomaly detection, Computer Vision (e.g. loading / unloading)
- Fraud Detection (fake delivery attempts)
- Promise Recommendation Engine etc.
- Customer & Tech support solutions, e.g. chat bots.
- Breach detection / prediction
An Artificial Intelligence Engineer will work in areas such as:
- Deep Learning, NLP, Reinforcement Learning
- Machine Learning: logistic regression, decision trees, random forests, XGBoost, etc. (see the sketch after this list)
- Driving Optimization via LPs, MILPs, Stochastic Programs, and MDPs
- Operations Research, Supply Chain Optimization, and Data Analytics/Visualization
- Computer Vision and OCR technologies
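As a rough illustration of the tree-based methods listed above, here is a minimal scikit-learn sketch on synthetic data; it is an assumption-laden example, not production code for any XpressBees system.

```python
# Train a decision tree and a random forest on synthetic classification data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(max_depth=5, random_state=0),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, round(model.score(X_test, y_test), 3))
```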
The AI Engineering team enables internal teams to add AI capabilities to their apps and workflows easily via APIs,
without needing to build AI expertise in each team, covering Decision Support, NLP, and Computer Vision for public
clouds and the enterprise in NLU, Vision, and Conversational AI. The candidate is adept at working with large data
sets to find opportunities for product and process optimization and at using models to test the effectiveness of
different courses of action. They must have experience using a variety of data mining/data analysis methods and
data tools, building and implementing models, using/creating algorithms, and creating/running simulations. They
must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have
a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business
outcomes.
Roles & Responsibilities
● Develop scalable infrastructure, including microservices and backend, that automates training and
deployment of ML models.
● Building cloud services in Decision Support (Anomaly Detection, Time series forecasting, Fraud detection,
Risk prevention, Predictive analytics), computer vision, natural language processing (NLP) and speech that
work out of the box.
● Brainstorm and design various POCs using ML/DL/NLP solutions for new or existing enterprise problems.
● Work with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively
communicating your needs and understanding theirs, and address external and internal stakeholders'
product challenges.
● Build the core of Artificial Intelligence and AI services such as Decision Support, Vision, Speech, Text, NLP, NLU,
and others.
● Leverage cloud technology: AWS, GCP, Azure.
● Experiment with ML models in Python using machine learning libraries (PyTorch, TensorFlow) and big data
technologies such as Hadoop, HBase, Spark, etc.
● Work with stakeholders throughout the organization to identify opportunities for leveraging company data to
drive business solutions.
● Mine and analyze data from company databases to drive optimization and improvement of product
development, marketing techniques and business strategies.
● Assess the effectiveness and accuracy of new data sources and data gathering techniques.
● Develop custom data models and algorithms to apply to data sets.
● Use predictive modeling to increase and optimize customer experience, supply chain metrics, and other
business outcomes.
● Develop the company's A/B testing framework and test model quality (a minimal comparison sketch follows
this list).
● Coordinate with different functional teams to implement models and monitor outcomes.
● Develop processes and tools to monitor and analyze model performance and data accuracy.
● Deliver machine learning and data science projects using data science techniques and associated libraries,
such as AI/ML or equivalent NLP (Natural Language Processing) packages. Such techniques require a strong
understanding of statistical models, probabilistic algorithms, classification, clustering, deep learning, and
related approaches as they apply to financial applications.
● The role will encourage you to learn a wide array of capabilities, toolsets and architectural patterns for
successful delivery.
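For the A/B testing responsibility mentioned above, one simple way to compare two model variants is a two-proportion z-test. The sketch below uses statsmodels with made-up counts; the numbers and variant labels are purely illustrative assumptions.

```python
# Compare success rates of two model variants with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

successes = [412, 380]    # e.g. successful outcomes under variant A and variant B
trials    = [5000, 5000]  # traffic routed to each variant

stat, p_value = proportions_ztest(count=successes, nobs=trials)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between variants is statistically significant.")
```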
What is required of you?
You will get an opportunity to build and operate a suite of massive scale, integrated data/ML platforms in a broadly
distributed, multi-tenant cloud environment.
● B.S., M.S., or Ph.D. in Computer Science or Computer Engineering
● Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
● Experience with building high-performance, resilient, scalable, and well-engineered systems
● Experience in CI/CD and development best practices, instrumentation, logging systems
● Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights
from large data sets.
● Experience working with and creating data architectures.
● Good understanding of various machine learning and natural language processing technologies, such as
classification, information retrieval, clustering, knowledge graph, semi-supervised learning and ranking.
● Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest,
Boosting, Trees, text mining, social network analysis, etc.
● Knowledge of web services: Redshift, S3, Spark, DigitalOcean, etc.
● Knowledge of creating and using advanced machine learning algorithms and statistics: regression,
simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
● Experience analyzing data from third-party providers: Google Analytics, Site Catalyst, Coremetrics,
AdWords, Crimson Hexagon, Facebook Insights, etc.
● Knowledge of distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, MySQL, Kafka, etc.
● Knowledge of visualizing/presenting data for stakeholders using Quicksight, Periscope, Business Objects,
D3, ggplot, Tableau, etc.
● Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
● Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests, and proper usage, etc.) and experience with applications.
● Experience building data pipelines that prep data for Machine learning and complete feedback loops.
● Knowledge of Machine Learning lifecycle and experience working with data scientists
● Experience with Relational databases and NoSQL databases
● Experience with workflow scheduling / orchestration such as Airflow or Oozie
● Working knowledge of current techniques and approaches in machine learning and statistical or
mathematical models
● Strong Data Engineering & ETL skills to build scalable data pipelines. Exposure to data streaming stack (e.g.
Kafka)
● Relevant experience in fine-tuning and optimizing ML (especially Deep Learning) models to bring down
serving latency.
● Exposure to the ML model productionization stack (e.g. MLflow, Docker); see the tracking sketch after this list.
● Excellent exploratory data analysis skills to slice & dice data at scale using SQL in Redshift/BigQuery.
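As a rough idea of what the productionization stack above can look like in practice, here is a minimal MLflow experiment-tracking sketch; the model, parameters, and metric values are placeholder assumptions rather than any team's actual setup.

```python
# Track a toy training run with MLflow (assumes a local MLflow installation).
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1_000, random_state=0)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")
```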
- Opportunity to work in a fast-paced environment aimed at creating a very high impact
- Exposure to different business functions in a fintech organization, helping each function to become successful with your work
- Opportunity to work with a diverse team of smart and hardworking professionals from various backgrounds
- Amazing work culture with endless opportunities for personal and professional growth
- Independence to put your thoughts/ideas into executable business projects
- Ability to directly implement solutions and see them in action; critical partner for all decision-making in the organization
- Interact with people from various backgrounds
- A mix of Statisticians, Consultants, Business and Programmers
- Flat organization structure with open and direct culture
- Merit-based fast-growth environment
- Market-leading compensation and benefits
Responsibilities
- As a Fraud Investigator, you will lead multiple complex fraud investigations simultaneously in your area of responsibility, write clear and concise reports on investigation findings, and report outcomes to Avail Finance's management. You will also conduct audit activities (prepare, execute, report) and customer screening in line with our Internal Audit procedures and Due Diligence Program.
- Identify risk trends in your area and translate them into learning and solutions, sharing best practices and facilitating the risk owner and business manager to make decisions.
- Actively anticipate trends and requirements and provide support for Avail finance's global anti-fraud procedures and standards for fraud identification and management of investigations to be the benchmark.
- Act as a representative, interacting and engaging with senior stakeholders in the area.
- Participate in, provide input, and lead initiatives for the FICS Knowledge Practice.
- Able to translate Area and Global risk trends into proactive actions.
- Document findings and conclusions in line with the prescribed tools and methods.
- Responsible for the entire documentation and reporting of fraud/audit/customer screening in line with Signify Internal Audit procedures and systems
- Conduct customer screenings as part of the Due Diligence Program.
- Conduct compliance audit activities and contribute to fraud prevention within Avail Finance.
- Work experience in NBFCs / Fraud Analytics in Banks / Fintech / Lending organizations in India would be a big plus.
About You :
- You put the Customer First by identifying risk trends in your area of expertise and translate these into learnings and solutions.
- You are Greater Together by collaborating across teams to build on our strengths and diversity and work towards our shared goal.
- You are a Game Changer by innovating. Together we set ourselves apart and continue to lead in the market.
- You have a Passion for Results by working smarter and faster to deliver excellence
Advanced degree in computer science, math, statistics, or a related discipline (a master's degree is required)
Extensive data modeling and data architecture skills
Programming experience in Python, R
Background in machine learning frameworks such as TensorFlow or Keras
Knowledge of Hadoop or other distributed computing systems
Experience working in an Agile environment
Advanced math skills (important): linear algebra, discrete math, differential equations (ODEs and numerical), theory of statistics 1, numerical analysis 1 (numerical linear algebra) and 2 (quadrature), abstract algebra, number theory, real analysis, complex analysis, and intermediate analysis (point-set topology)
Strong written and verbal communications
Hands on experience on NLP and NLG
Experience in advanced statistical techniques and concepts (GLM/regression, random forest, boosting, trees, text mining) and experience with their application
Job Description
We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.
What You’ll Do will include (But not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipate and build solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale.
- Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1); see the metrics sketch after this list
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
- Developing, testing, and evaluating tools for machine learning models deployment, monitoring, retraining.
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based and classical ML techniques to the latest deep learning systems.
- Partnering with cross-functional team members to bring large scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
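To make the ML-performance metrics above concrete, here is a small scikit-learn sketch; the label arrays are placeholder values, not output from any real model.

```python
# Compute precision, recall, and F1 for a set of binary predictions.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
```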
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially on time-series forecasting, analysis, and causal inference.
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series, such as clustering, classification, ARIMA, and decision trees, is preferred (see the anomaly-detection sketch after this list).
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
- Solid English skills to effectively communicate with other team members
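As a loose illustration of the time-series anomaly detection mentioned above, the sketch below flags points with an extreme rolling z-score on synthetic data; it is an assumed, simplified approach, not the team's actual method.

```python
# Flag time-series points whose rolling z-score exceeds a threshold.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
values = pd.Series(rng.normal(100, 5, 500))
values.iloc[250] += 40  # inject one anomaly for illustration

rolling_mean = values.rolling(window=30, min_periods=30).mean()
rolling_std = values.rolling(window=30, min_periods=30).std()
z_score = (values - rolling_mean) / rolling_std

anomalies = values[z_score.abs() > 4]
print(anomalies)
```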
Due to the nature of the role, it would be nice if you have also:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and Graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of Java, Scala, or Go programming languages
- Familiarity with KubeFlow
- Experience with transformers, for example the Hugging Face libraries
- Experience with OpenCV
About Egnyte
In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com
#LI-Remote
About Turing:
Turing enables U.S. companies to hire the world’s best remote software engineers. 100+ companies including those backed by Sequoia, Andreessen, Google Ventures, Benchmark, Founders Fund, Kleiner, Lightspeed, and Bessemer have hired Turing engineers. For more than 180,000 engineers across 140 countries, we are the preferred platform for finding remote U.S. software engineering roles. We offer a wide range of full-time remote opportunities for full-stack, backend, frontend, DevOps, mobile, and AI/ML engineers.
We are growing fast (our revenue 15x’d in the past 12 months and is accelerating), and we have raised $14M in seed funding (https://tcrn.ch/3lNKbM9, one of the largest in Silicon Valley) from:
- Facebook’s 1st CTO and Quora’s Co-Founder (Adam D’Angelo)
- Executives from Google, Facebook, Square, Amazon, and Twitter
- Foundation Capital (investors in Uber, Netflix, Chegg, Lending Club, etc.)
- Cyan Banister
- Founder of Upwork (Beerud Sheth)
We also raised a much larger round of funding in October 2020 that we will be publicly announcing over the coming month.
Some articles about Turing:
- TechCrunch: Turing raises $14M seed to help source, vet, place, and manage remote developers (https://techcrunch.com/2020/08/25/turing-raises-14m-to-help-source-vet-place-and-manage-remote-developers-in-tech-jobs/)
- The Information: Six Startups Prospering During Coronavirus (https://www.theinformation.com/articles/six-startups-prospering-during-coronavirus)
- Cyan Banister: Turing Helps the World Level Up (https://medium.com/@cyanbanister/turing-helps-the-world-level-up-ff44b4e6415d)
- Jonathan Siddharth (Turing CEO): The Future of Work is Remote (https://turing.com/boundarylessblog/2019/10/the-future-of-work-is-remote/the-future-of-work/)
Turing is led by successful repeat founders Jonathan Siddharth and Vijay Krishnan, whose last A.I. company leveraged elite remote talent and had a successful acquisition (TechCrunch story: https://techcrunch.com/2017/02/23/revcontent-acquires-rover/). Turing’s leadership team is composed of ex-Engineering and Sales leadership from Facebook, Google, Uber, and Capgemini.
About the role:
Software developers from all over the world have taken 200,000+ tests and interviews on Turing. Turing has also recommended thousands of developers to its customers and received customer feedback in the form of interview pass/fail data and data on the success of each collaboration with a U.S. customer. This generates a massive proprietary dataset with a rich feature set comprising resume and test/interview features and labels in the form of actual customer feedback. Continuing rapid growth in our business creates an ever-increasing data advantage for us.
We are looking for a Machine Learning Scientist who can help solve a whole range of exciting and valuable machine learning problems at Turing. Turing collects a lot of valuable heterogeneous signals about software developers including their resume, GitHub profile and associated code and a lot of fine-grained signals from Turing’s own screening tests and interviews (that span various areas including Computer Science fundamentals, project ownership and collaboration, communication skills, proactivity and tech stack skills), their history of successful collaboration with different companies on Turing, etc.
A machine learning scientist at Turing will help create deep developer profiles that are a good representation of a developer’s strengths and weaknesses as it relates to their probability of getting successfully matched to one of Turing’s partner companies and having a fruitful long-term collaboration. The ML scientist will build models that are able to rank developers for different jobs based on their probability of success at the job.
You will also help make Turing’s tests more efficient by assessing their ability to predict the probability of a successful match of a developer with at least one company. The prior probability of a registered developer getting matched with a customer is about 1%. We want our tests to adaptively reduce perplexity as steeply as possible and move this probability estimate rapidly toward either 0% or 100%; in other words, to maximize expected information gain per unit time.
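To make the information-gain framing above concrete, here is a small illustrative calculation of how much a single binary screening-test outcome could reduce uncertainty about a match; all probabilities are hypothetical and chosen only to stay consistent with a roughly 1% prior.

```python
# Expected information gain (entropy reduction) from one binary test outcome.
from math import log2

def entropy(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

prior = 0.01                                   # prior match probability (~1%)
p_pass = 0.20                                  # assumed probability of passing the test
posterior_pass, posterior_fail = 0.04, 0.0025  # assumed posteriors (average back to the prior)

expected_posterior_entropy = (
    p_pass * entropy(posterior_pass) + (1 - p_pass) * entropy(posterior_fail)
)
info_gain = entropy(prior) - expected_posterior_entropy
print(f"expected information gain: {info_gain:.4f} bits")
```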
As an ML Scientist on the team, you will have a unique opportunity to make an impact by advancing ML models and systems, as well as uncovering new opportunities to apply machine learning concepts to Turing product(s).
This role will directly report to Turing’s founder and CTO, https://www.linkedin.com/in/vijay0/">Vijay Krishnan. This is his https://scholar.google.com/citations?user=uCRc7DgAAAAJ&hl=en">Google Scholar profile.
Responsibilities:
- Enhance our existing machine learning systems using your core coding skills and ML knowledge.
- Take end-to-end ownership of machine learning systems, from data pipelines, feature engineering, candidate extraction, and model training to integration into our production systems.
- Utilize state-of-the-art ML modeling techniques to predict user interactions and the direct impact on the company’s top-line metrics.
- Design features and build large-scale recommendation systems to improve targeting and engagement.
- Identify new opportunities to apply machine learning to different parts of our product(s) to drive value for our customers.
Minimum Requirements:
- BS, MS, or Ph.D. in Computer Science or a relevant technical field (AI/ML preferred).
- Extensive experience building scalable machine learning systems and data-driven products working with cross-functional teams
- Expertise in machine learning fundamentals applicable to search: learning to rank, deep learning, tree-based models, recommendation systems, relevance and data mining, along with an understanding of NLP approaches like W2V or BERT.
- 2+ years of experience applying machine learning methods in settings like recommender systems, search, user modeling, graph representation learning, natural language processing.
- Strong understanding of neural network/deep learning, feature engineering, feature selection, optimization algorithms. Proven ability to dig deep into practical problems and choose the right ML method to solve them.
- Strong programming skills in Python and fluency in data manipulation (SQL, Spark, Pandas) and machine learning (scikit-learn, XGBoost, Keras/Tensorflow) tools.
- Good understanding of mathematical foundations of machine learning algorithms.
- Ability to be available for meetings and communication during Turing's "coordination hours" (Mon - Fri: 8 am to 12 pm PST).
Other Nice-to-have Requirements:
- First author publications in ICML, ICLR, NeurIPS, KDD, SIGIR, and related conferences/journals.
- Strong performance in Kaggle competitions.
- 5+ years of industry experience or a Ph.D. with 3+ years of industry experience in applied machine learning in similar problems e.g. ranking, recommendation, ads, etc.
- Strong communication skills.
- Experienced in leading large-scale multi-engineering projects.
- Flexible, and a positive team player with outstanding interpersonal skills.