45+ Remote Data Science Jobs in India
We are seeking a highly skilled and experienced Senior Data Analyst to join our team. In this role, you will be responsible for analyzing complex datasets, generating actionable insights, and supporting strategic decision-making processes. You will collaborate with cross-functional teams, develop data models, and create compelling reports and visualizations to communicate findings effectively.
Key Responsibilities:
- Data Analysis: Perform in-depth analysis of large datasets to identify trends, patterns, and correlations. Utilize statistical methods and data mining techniques to extract actionable insights.
- Reporting & Visualization: Develop and deliver comprehensive reports and interactive dashboards that effectively communicate data insights and business performance to stakeholders.
- Data Modeling: Build and maintain data models that support business operations and decision-making processes. Conduct data validation and ensure accuracy and consistency.
- Business Intelligence: Collaborate with business leaders and teams to understand their data needs and translate them into analytical requirements. Provide data-driven recommendations to support strategic initiatives.
About the Role
We are actively seeking talented Senior Python Developers to join our ambitious team dedicated to pushing the frontiers of AI technology. This opportunity is tailored for professionals who thrive on developing innovative solutions and aspire to be at the forefront of AI advancements. You will work with different companies in the US that are looking to develop both commercial and research AI solutions.
Required Skills:
- Write effective Python code to tackle complex issues
- Use business sense and analytical abilities to glean valuable insights from public databases
- Clearly express the reasoning and logic when writing code in Jupyter notebooks or other suitable mediums
- Extensive experience working with Python
- Proficiency with the language's syntax and conventions
- Previous experience tackling algorithmic problems
- Prior Software Quality Assurance and Test Planning experience is nice to have
- Excellent spoken and written English communication skills
The ideal candidates should be able to:
- Clearly explain their strategies for problem-solving.
- Design practical solutions in code.
- Develop test cases to validate their solutions.
- Debug and refine their solutions for improvement.
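To illustrate the expected solve-test-debug workflow (a hypothetical exercise, not part of the role description), a candidate might implement a classic algorithm and then validate it with test cases:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Test cases validating the solution, including edge cases.
assert binary_search([1, 3, 5, 7, 9], 7) == 3
assert binary_search([1, 3, 5, 7, 9], 4) == -1
assert binary_search([], 1) == -1
```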
About the Company:
Nextgen Ai Technologies is at the forefront of innovation in artificial intelligence, specializing in developing cutting-edge AI solutions that transform industries. We are committed to pushing the boundaries of AI technology to solve complex challenges and drive business success.
Currently offering a 2-month Data Science Internship.
Data science projects the interns will work on:
Project 01: Image Caption Generator Project in Python
Project 02: Credit Card Fraud Detection Project
Project 03: Movie Recommendation System
Project 04: Customer Segmentation
Project 05: Brain Tumor Detection with Data Science
Eligibility
A PC or Laptop with decent internet speed.
Good understanding of English language.
Any graduate with a desire to become a web developer. Freshers are welcome.
Knowledge of HTML, CSS and JavaScript is a plus but NOT mandatory.
You will also receive proper training, so don't hesitate to apply even if you don't have a coding background.
Please note that this is an internship, not a job.
We recruit permanent employees exclusively from among our interns (if needed).
Duration: 2 months
Mode: Work From Home (Online)
Responsibilities
Manage reports and sales leads in the Salesforce.com CRM.
Develop content, manage design, and user access to SharePoint sites for customers and employees.
Build data-driven reports, stored procedures, and optimized queries using SQL and PL/SQL.
Learn the essentials of C++ and Java to refine code and build the presentation layer of web pages.
Configure and load XML data for the BVT tests.
Set up a GitHub page.
Develop Spark scripts using the Scala shell as per requirements.
Develop and A/B test improvements to business survey questions on iOS.
Deploy statistical models to various company data streams using Linux shells.
Create monthly performance-based client billing reports using MySQL and NoSQL databases.
Utilize Hadoop and MapReduce to generate dynamic queries and extract data from HDFS.
Create source code utilizing JavaScript and PHP language to make web pages functional.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Effective communication skills to convey complex technical concepts.
Benefits
Internship Certificate
Letter of recommendation
Stipend: performance-based
Part-time work from home (2-3 hours per day)
5 days a week, Fully Flexible Shift
We are working on AI for medical images. We need someone who can run pre-trained models and also train new ones.
at Blue Hex Software Private Limited
In this position, you will play a pivotal role in collaborating with our CFO, CTO, and our dedicated technical team to craft and develop cutting-edge AI-based products.
Role and Responsibilities:
- Develop and maintain Python-based software applications.
- Design and work with databases using SQL.
- Use Django, Streamlit, and front-end frameworks like Node.js and Svelte for web development.
- Create interactive data visualizations with charting libraries.
- Collaborate on scalable architecture and experimental tech.
- Work with AI/ML frameworks and data analytics.
- Utilize Git, DevOps basics, and JIRA for project management.
Skills and Qualifications:
- Strong Python programming skills.
- Proficiency in OOP and SQL.
- Experience with Django, Streamlit, Node.js, and Svelte.
- Familiarity with charting libraries.
- Knowledge of AI/ML frameworks.
- Basic Git and DevOps understanding.
- Effective communication and teamwork.
Company details: We are a team of Enterprise Transformation Experts who deliver radically transforming products, solutions, and consultation services to businesses of any size. Our exceptional team of diverse and passionate individuals is united by a common mission to democratize the transformative power of AI.
Website: Blue Hex Software – AI | CRM | CXM & DATA ANALYTICS
Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company’s data with third-party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Performing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
Key Skills
- Hands-on experience with analysis tools like R and advanced Python
- Must-have knowledge of statistical techniques and machine learning algorithms
- Artificial Intelligence
- Understanding of text analysis and Natural Language Processing (NLP)
- Knowledge of Google Cloud Platform
- Advanced Excel and PowerPoint skills
- Advanced communication (written and oral) and strong interpersonal skills
- Ability to work cross-culturally
- Good to have: deep learning
- VBA and visualization tools like Tableau, Power BI, QlikSense, and QlikView will be an added advantage
An 8-year-old IT services and consulting company.
CTC Budget: 35-55LPA
Location: Hyderabad (Remote after 3 months WFO)
Company Overview:
An 8-year-old IT services and consulting company based in Hyderabad, providing services that maximize product value while delivering rapid incremental innovation. The company has extensive SaaS M&A experience, including 20+ closed transactions on both the buy and sell sides. They have over 100 employees and are looking to grow the team.
- 6 plus years of experience as a Python developer.
- Experience in web development using Python and Django Framework.
- Experience in Data Analysis and Data Science using Pandas, NumPy and Scikit-Learn (GTH)
- Experience in developing User Interface using HTML, JavaScript, CSS.
- Experience with server-side templating languages including Jinja2 and Mako
- Knowledge of Kafka and RabbitMQ (GTH)
- Experience with Docker, Git and AWS
- Ability to integrate multiple data sources into a single system.
- Ability to collaborate on projects and work independently when required.
- Databases (MySQL, PostgreSQL, SQL)
Selection Process: 2-3 Interview rounds (Tech, VP, Client)
The world’s first real-time opportunity engine.
● Statistics - Always makes data-driven decisions using tools from statistics, such as: populations and sampling, the normal distribution and central limit theorem, mean, median, mode, variance, standard deviation, covariance, correlation, p-value, expected value, conditional probability, and Bayes' theorem
● Machine Learning
○ Solid grasp of attention mechanisms, transformers, convolutions, optimisers, loss functions, LSTMs, forget gates, and activation functions.
○ Can implement all of these from scratch in PyTorch, TensorFlow, or NumPy.
○ Comfortable defining own model architectures, custom layers, and loss functions.
● Modelling
○ Comfortable using all the major ML frameworks (PyTorch, TensorFlow, sklearn, etc.) and NLP models (not essential). Able to pick the right library and framework for the job.
○ Capable of turning research and papers into operational execution and functionality delivery.
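As a sketch of what "from scratch" can mean here, scaled dot-product attention (the core of the attention mechanism) fits in a few lines of NumPy; the shapes and variable names below are illustrative:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # scores: (n_queries, n_keys), scaled by sqrt(d_k) as in the transformer paper.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))   # 4 queries of dimension 8
K = rng.standard_normal((6, 8))   # 6 keys of dimension 8
V = rng.standard_normal((6, 16))  # 6 values of dimension 16
out, weights = scaled_dot_product_attention(Q, K, V)
```

Each row of `weights` is a probability distribution over the keys, which is what makes the attention weights interpretable.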
- Work closely with your business to identify issues and use data to propose solutions for effective decision making
- Build algorithms and design experiments to merge, manage, interrogate and extract data to supply tailored reports to colleagues, customers or the wider organisation.
- Creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
- Querying databases and using statistical computer languages: R, Python, SQL, etc.
- Visualizing/presenting data through various dashboards for data analysis, using Python Dash, Flask, etc.
- Test data mining models to select the most appropriate ones for use on a project
- Work in a POSIX/UNIX environment to run/deploy applications
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
- Develop custom data models and algorithms to apply to data sets.
- Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
- Assess the effectiveness of data sources and data-gathering techniques and improve data collection methods
- Horizon scan to stay up to date with the latest technology, techniques and methods
- Coordinate with different functional teams to implement models and monitor outcomes.
- Stay curious and enthusiastic about using algorithms to solve problems and enthuse others to see the benefit of your work.
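A minimal example of the regression work mentioned above, using NumPy's least-squares solver on synthetic data (real projects would add diagnostics, validation, and domain features):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 2))              # two explanatory variables
true_w, true_b = np.array([2.0, -1.0]), 3.0
y = X @ true_w + true_b + 0.01 * rng.standard_normal(200)  # small noise

# Append a column of ones so the intercept is estimated jointly.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w_hat, b_hat = coef[:2], coef[2]
```

With this much signal and so little noise, the fitted coefficients land very close to the true values (2.0, -1.0, 3.0).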
General Expectations:
- Able to create algorithms to extract information from large data sets
- Strong knowledge of Python, R, Java or other scripting/statistical languages to automate data retrieval, manipulation and analysis.
- Experience with extracting and aggregating data from large data sets using SQL or other tools
- Strong understanding of various NLP and NLU techniques like Named Entity Recognition, Summarization, Topic Modeling, Text Classification, Lemmatization and Stemming.
- Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, etc.
- Experience with Python libraries such as Pandas, NumPy, SciPy, Scikit-Learn
- Experience with Jupyter / Pandas / Numpy to manipulate and analyse data
- Knowledge of Machine Learning techniques and their respective pros and cons
- Strong knowledge of various data science visualization tools like Tableau, Power BI, D3, Plotly, etc.
- Experience using web services: Redshift, AWS, S3, Spark, DigitalOcean, etc.
- Proficiency in using query languages, such as SQL, Spark DataFrame API, etc.
- Hands-on experience in HTML, CSS, Bootstrap, JavaScript, AJAX, jQuery and Prototyping.
- Hands-on experience with C#, JavaScript, .NET
- Experience in understanding and analyzing data using statistical software (e.g., Python, R, KDB+ and other relevant libraries)
- Experienced in building applications that meet enterprise needs – secure, scalable, loosely coupled design
- Strong knowledge of computer science, algorithms, and design patterns
- Strong oral and written communication, and other soft skills critical to collaborating and engaging with teams
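As a toy illustration of the stemming technique listed above (production work would use an established stemmer such as NLTK's Porter stemmer; this naive suffix-stripper is only a sketch):

```python
def naive_stem(word):
    """Strip a few common English suffixes; a crude stand-in for a real stemmer."""
    for suffix in ("ization", "ations", "ation", "ingly", "ings", "ing",
                   "edly", "ed", "es", "s"):
        # Only strip when a reasonable stem (>= 3 chars) would remain.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

tokens = ["tokenization", "running", "parsed", "models", "cats"]
stems = [naive_stem(t) for t in tokens]
```

Its limits show immediately: "running" becomes "runn" rather than "run", which is exactly the kind of low-level behaviour a real stemmer's rule set addresses.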
Role :
- Understand and translate statistics and analytics to address business problems
- Responsible for helping in data preparation and data pull, which is the first step in machine learning
- Should be able to cut and slice data to extract interesting insights from the data
- Model development for better customer engagement and retention
- Hands on experience in relevant tools like SQL(expert), Excel, R/Python
- Working on strategy development to increase business revenue
Requirements:
- Hands on experience in relevant tools like SQL(expert), Excel, R/Python
- Statistics: Strong knowledge of statistics
- Should be able to do data scraping and data mining
- Be self-driven, and show ability to deliver on ambiguous projects
- An ability and interest in working in a fast-paced, ambiguous and rapidly-changing environment
- Should have worked on business projects for an organization, e.g. customer acquisition, customer retention.
Client of peoplefirst consultants
Skills: Machine Learning, Deep Learning, Artificial Intelligence, Python
Location: Chennai
Domain knowledge: Data cleaning, modelling, analytics, statistics, machine learning, AI
Requirements:
· To be part of Digital Manufacturing and Industrie 4.0 projects across Saint Gobain group of companies
· Design and develop AI/ML models to be deployed across SG factories
· Knowledge on Hadoop, Apache Spark, MapReduce, Scala, Python programming, SQL and NoSQL databases is required
· Should be strong in statistics, data analysis, data modelling, machine learning techniques and Neural Networks
· Prior experience in developing AI and ML models is required
· Experience with data from the Manufacturing Industry would be a plus
Roles and Responsibilities:
· Develop AI and ML models for the Manufacturing Industry with a focus on Energy, Asset Performance Optimization and Logistics
· Multitasking and good communication skills are necessary
· Entrepreneurial attitude
2. Build large datasets that will be used to train the models
3. Empirically evaluate related research works
4. Train and evaluate deep learning architectures on multiple large scale datasets
5. Collaborate with the rest of the research team to produce high-quality research
Job Responsibilities:
- Develop robust, scalable and maintainable machine learning models to answer business problems against large data sets.
- Build methods for document clustering, topic modeling, text classification, named entity recognition, sentiment analysis, and POS tagging.
- Perform elements of data cleaning, feature selection and feature engineering and organize experiments in conjunction with best practices.
- Benchmark, apply, and test algorithms against success metrics. Interpret the results in terms of relating those metrics to the business process.
- Work with development teams to ensure models can be implemented as part of a delivered solution replicable across many clients.
- Knowledge of Machine Learning, NLP, Document Classification, Topic Modeling and Information Extraction with a proven track record of applying them to real problems.
- Experience working with big data systems and big data concepts.
- Ability to provide clear and concise communication both with other technical teams and non-technical domain specialists.
- Strong team player; ability to provide both a strong individual contribution but also work as a team and contribute to wider goals is a must in this dynamic environment.
- Experience with noisy and/or unstructured textual data.
- Knowledge graphs and NLP, including summarization, topic modelling, etc.
- Strong coding ability with statistical analysis tools in Python or R, and general software development skills (source code management, debugging, testing, deployment, etc.)
- Working knowledge of various text mining algorithms and their use-cases such as keyword extraction, PLSA, LDA, HMM, CRF, deep learning & recurrent ANN, word2vec/doc2vec, Bayesian modeling.
- Strong understanding of text pre-processing and normalization techniques, such as tokenization, POS tagging, and parsing, and how they work at a low level.
- Excellent problem-solving skills.
- Strong verbal and written communication skills
- Masters or higher in data mining or machine learning; or equivalent practical analytics / modelling experience
- Practical experience in using NLP related techniques and algorithms
- Experience in open source coding and communities desirable.
- Able to containerize models and associated modules and work in a microservices environment
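One of the listed techniques, keyword extraction via TF-IDF, can be sketched from first principles; the corpus and whitespace tokenization below are deliberately simplistic:

```python
import math
from collections import Counter

corpus = [
    "topic models uncover latent themes in documents".split(),
    "word embeddings map words to dense vectors".split(),
    "latent dirichlet allocation is a topic model".split(),
]

def tfidf_scores(docs):
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    # TF-IDF: term frequency weighted by inverse document frequency.
    return [
        {t: (cnt / len(doc)) * math.log(n / df[t]) for t, cnt in Counter(doc).items()}
        for doc in docs
    ]

scores = tfidf_scores(corpus)
```

Terms unique to a document (like "models" in the first one) score higher than terms shared across documents (like "topic"), which is the whole point of the IDF weighting.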
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!
We are looking for a Data Lead - someone who works at the intersection of data science, GIS, and engineering. We want a leader who not only understands environmental data but someone who can quickly assemble large scale datasets that are crucial to the well being of our planet. Come save the planet with us!
Your Role
Manage: As a leadership position, this requires long-term strategic thinking. You will be in charge of the daily operations of the data team. This includes running team standups, planning the execution of data generation, and ensuring the algorithms are put into production. You will also be the person in charge of breaking the data science down for the rest of us who do not know what it means.
Love and Live Data: You will also be taking all the responsibility of ensuring that the data we generate is accurate, clean, and is ready to use for our clients. This would entail that you understand what the market needs, calculate feasibilities and build data pipelines. You should understand the algorithms that we use or need to use and take decisions on what would serve the needs of our clients well. We also want our Data Lead to be constantly probing for newer and optimized ways of generating datasets. It would help if they were abreast of all the latest developments in the data science and environmental worlds. The Data Lead also has to be able to work with our Platform team on integrating the data on our platform and API portal.
Collaboration: We use Clubhouse to track and manage our projects across our organization - this will require you to collaborate with the team and follow up with members on a regular basis. About 50% of the work needs to track the pulse of the platform team. You'll collaborate closely with peers from other functions (Design, Product, Marketing, Sales, and Support, to name a few) on our overall product roadmap, on product launches, and on ongoing operations. You will find yourself working with the product management team to define and execute the feature roadmap. You will be expected to work closely with the CTO, reporting on daily operations and development. We don't believe in a top-down hierarchical approach and are transparent with everyone. This means honest and mutual feedback and the ability to adapt.
Teaching: Not exactly in the traditional sense. You'll recruit, coach, and develop engineers while ensuring that they are regularly receiving feedback and making rapid progress on personal and professional goals.
Humble and cool: Look we will be upfront with you about one thing - our team is fairly young and is always buzzing with work. In this fast-paced setting, we are looking for someone who can stay cool, is humble, and is willing to learn. You are adaptable, can skill up fast, and are fearless at trying new methods. After all, you're in the business of saving the planet!
Requirements
- A minimum of 5 years of industry experience.
- Hyper-curious!
- Exceptional at Remote Sensing Data, GIS, Data Science.
- Must have big data & data analytics experience
- Very good in documentation & speccing datasets
- Experience with AWS Cloud, Linux, Infra as Code & Docker (containers) is a must
- Coordinate with cross-functional teams (DevOPS, QA, Design etc.) on planning and execution
- Lead, mentor and manage deliverables of a team of talented and highly motivated team of developers
- Must have experience in building, managing, growing & hiring data teams. Has built large-scale datasets from scratch
- Manage work on the team's Clubhouse and follow up with the team; about 50% of the work needs to track the pulse of the platform team
- Exceptional communication skills & ability to abstract away problems & build systems. Should be able to explain to the management anything & everything
- Quality control - you'll be responsible for maintaining a high quality bar for everything your team ships. This includes documentation and data quality
- Experience of having led smaller teams, would be a plus.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!
We are looking for a data scientist to join its growing team. This position will require you to think and act on the geospatial architecture and data needs (specifically geospatial data) of the company. This position is strategic and will also require you to collaborate closely with data engineers, data scientists, software developers and even colleagues from other business functions. Come save the planet with us!
Your Role
Manage: It goes without saying that you will be handling large amounts of image and location datasets. You will develop dataframes and automated pipelines of data from multiple sources. You are expected to know how to visualize them and use machine learning algorithms to be able to make predictions. You will be working across teams to get the job done.
Analyze: You will curate and analyze vast amounts of geospatial datasets like satellite imagery, elevation data, meteorological datasets, openstreetmaps, demographic data, socio-econometric data and topography to extract useful insights about the events happening on our planet.
Develop: You will be required to develop processes and tools to monitor and analyze data and its accuracy. You will develop innovative algorithms which will be useful in tracking global environmental problems like depleting water levels, illegal tree logging, and even tracking of oil-spills.
Demonstrate: Familiarity with geospatial libraries such as GDAL/Rasterio for reading and writing data, and with QGIS for making visualizations. This also extends to using advanced statistical techniques, applying concepts like regression and properties of distributions, and conducting other statistical tests.
Produce: With all the hard work being put into data creation and management, it has to be used! You will be able to produce maps showing (but not limited to) spatial distribution of various kinds of data, including emission statistics and pollution hotspots. In addition, you will produce reports that contain maps, visualizations and other resources developed over the course of managing these datasets.
Requirements
These are must have skill-sets that we are looking for:
- Excellent coding skills in Python (including deep familiarity with NumPy, SciPy, pandas).
- Significant experience with git, GitHub, SQL, AWS (S3 and EC2).
- Experience with GIS and familiarity with geospatial libraries such as GDAL and rasterio to read/write data, a GIS software such as QGIS for visualisation and query, and basic machine learning algorithms to make predictions.
- Demonstrable experience implementing efficient neural network models and deploying them in a production environment.
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
- Capable of writing clear and lucid reports and demystifying data for the rest of us.
- Be curious and care about the planet!
- Minimum 2 years of demonstrable industry experience working with large and noisy datasets.
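To make the geospatial requirement concrete, here is the numeric core of NDVI, a common satellite-derived vegetation index, in NumPy. In practice the red and near-infrared bands would be read from imagery with rasterio; the arrays below are synthetic reflectance values:

```python
import numpy as np

# Synthetic 2x2 reflectance "bands"; real ones would come from a rasterio dataset.
red = np.array([[0.10, 0.20], [0.30, 0.40]])
nir = np.array([[0.50, 0.60], [0.30, 0.80]])

# NDVI = (NIR - Red) / (NIR + Red); a tiny epsilon guards against division by zero.
ndvi = (nir - red) / (nir + red + 1e-9)
```

NDVI values fall in [-1, 1]; dense vegetation reflects strongly in the near-infrared, so vegetated pixels score close to 1.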
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
About LodgIQ
LodgIQ is led by a team of experienced hospitality technology experts, data scientists and product domain experts. Seed funded by Highgate Ventures, a venture capital platform focused on early stage technology investments in the hospitality industry and Trilantic Capital Partners, a global private equity firm, LodgIQ has made a significant investment in advanced machine learning platforms and data science.
Title : Data Scientist
Job Description:
- Apply Data Science and Machine Learning to a REAL-LIFE problem - “Predict Guest Arrivals and Determine Best Prices for Hotels”
- Apply advanced analytics in a BIG Data Environment – AWS, MongoDB, SKLearn
- Help scale up the product in a global offering across 100+ global markets
Qualifications:
- Minimum 3 years of experience with advanced data analytic techniques, including data mining, machine learning, statistical analysis, and optimization. Student projects are acceptable.
- At least 1 year of experience with Python / NumPy / Pandas / SciPy / Matplotlib / Scikit-Learn
- Experience in working with massive data sets, including structured and unstructured with at least 1 prior engagement involving data gathering, data cleaning, data mining, and data visualization
- Solid grasp of optimization techniques
- Master's or PhD degree in Business Analytics, Data Science, Statistics or Mathematics
- Ability to show a track record of solving large, complex problems
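As a toy illustration of the pricing-optimization flavour of the problem (the actual LodgIQ models are far richer), revenue under a hypothetical linear demand curve can be maximized over a price grid:

```python
# Hypothetical linear demand: bookings fall as price rises. Parameters are made up.
A_DEMAND, B_SLOPE = 100.0, 0.5

def expected_revenue(price):
    demand = max(A_DEMAND - B_SLOPE * price, 0.0)
    return price * demand

# Grid search; for this curve the closed-form optimum is price = A / (2B) = 100.
prices = [p / 10 for p in range(0, 3001)]  # 0.0 to 300.0 in 0.1 steps
best_price = max(prices, key=expected_revenue)
```

The grid search recovers the analytical optimum, which is a handy sanity check before moving to demand curves with no closed form.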
Cloud Transformation products, frameworks and services Org
Senior Data Scientist
- 6+ years Experienced in building data pipelines and deployment pipelines for machine learning models
- 4+ years’ experience with ML/AI toolkits such as TensorFlow, Keras, AWS SageMaker, MXNet, H2O, etc.
- 4+ years’ experience developing ML/AI models in Python/R
- Must have the leadership skills to lead and deliver projects: be proactive, take ownership, interface with business, represent the team, and spread knowledge.
- Strong knowledge of statistical data analysis and machine learning techniques (e.g., Bayesian, regression, classification, clustering, time series, deep learning).
- Should be able to help deploy various models and tune them for better performance.
- Working knowledge of operationalizing models in production using model repositories, APIs and data pipelines.
- Experience with machine learning and computational statistics packages.
- Experience with Databricks, Data Lake.
- Experience with Dremio, Tableau, Power BI.
- Experience working with Spark ML and Spark DL with PySpark would be a big plus!
- Working knowledge of relational database systems like SQL Server, Oracle.
- Knowledge of deploying models in platforms like PCF, AWS, Kubernetes.
- Good knowledge in Continuous integration suites like Jenkins.
- Good knowledge in web servers (Apache, NGINX).
- Good knowledge in Git, Github, Bitbucket.
- Java, R, and Python programming experience.
- Should be very familiar with (MS SQL, Teradata, Oracle, DB2).
- Big Data – Hadoop.
- Expert knowledge using BI tools, e.g. Tableau
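One of the listed techniques, clustering, can be sketched as a bare-bones k-means loop in NumPy (a library implementation such as scikit-learn's KMeans would be used in practice; the data here is synthetic):

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers at k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center, then recompute the centers.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(1)
blob_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))   # cluster near (0, 0)
blob_b = rng.normal(loc=5.0, scale=0.3, size=(50, 2))   # cluster near (5, 5)
X = np.vstack([blob_a, blob_b])
labels, centers = kmeans(X, k=2)
```

With two well-separated blobs the loop converges to the obvious partition; real datasets need multiple restarts and a way to choose k.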
Responsibilities:
- Improve robustness of Leena AI's current NLP stack
- Increase zero-shot learning capability of Leena AI's current NLP stack
- Opportunity to add/build new NLP architectures based on requirements
- Manage the end-to-end lifecycle of the data in the system until it achieves more than 90% accuracy
- Manage an NLP team
Requirements:
- Strong understanding of linear algebra, optimisation, probability, statistics
- Experience in the data science methodology from exploratory data analysis, feature engineering, model selection, deployment of the model at scale and model evaluation
- Experience in deploying NLP architectures in production
- Understanding of latest NLP architectures like transformers is good to have
- Experience in adversarial attacks/robustness of DNN is good to have
- Experience with a Python web framework (Django) and analytics and machine learning frameworks like TensorFlow, Keras, or PyTorch.
PriceLabs (https://www.chicagobusiness.com/innovators/what-if-you-could-adjust-prices-meet-demand) is cloud-based software that helps vacation and short-term rentals dynamically manage prices just the way large hotels and airlines do! Our mission is to help small businesses in the travel and tourism industry by giving them access to advanced analytical systems that are often restricted to large companies.
We're looking for someone with strong analytical capabilities who wants to understand how our current architecture and algorithms work, and who can help us design and develop long-lasting solutions to improve them. Depending on the needs of the day, the role will come with a good mix of teamwork, following our best practices, introducing us to industry best practices, independent thinking, and ownership of your work.
Responsibilities:
- Design, develop and enhance our pricing algorithms to enable new capabilities.
- Process, analyze, model, and visualize findings from our market level supply and demand data.
- Build and enhance internal and customer facing dashboards to better track metrics and trends that help customers use PriceLabs in a better way.
- Take ownership of product ideas and design discussions.
- Occasional travel to conferences to interact with prospective users and partners, and learn where the industry is headed.
Requirements:
- Bachelor's, Master's, or Ph.D. in Operations Research, Industrial Engineering, Statistics, Computer Science, or other quantitative/engineering fields.
- Strong understanding of analysis of algorithms, data structures and statistics.
- Solid programming experience, including the ability to quickly prototype an idea and test it out.
- Strong communication skills, including the ability and willingness to explain complicated algorithms and concepts in simple terms.
- Experience with relational databases and strong knowledge of SQL.
- Experience building data heavy analytical models in the travel industry.
- Experience in the vacation rental industry.
- Experience developing dynamic pricing models.
- Prior experience working in a fast-paced environment.
- Willingness to wear many hats.
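As a toy illustration of the kind of dynamic pricing logic this role involves, the sketch below adjusts a nightly rate from a demand signal. The occupancy thresholds and the +/-20% band are invented for the example; this is not PriceLabs' actual algorithm:

```python
def dynamic_price(base_price: float, occupancy: float,
                  low: float = 0.4, high: float = 0.8) -> float:
    """Adjust a nightly base price from forecast occupancy (0.0-1.0).

    Below `low` occupancy we discount to stimulate demand; above
    `high` we raise the price to capture it. The thresholds and the
    20% adjustment band are arbitrary illustration values.
    """
    if occupancy < low:
        return round(base_price * 0.8, 2)   # discount 20%
    if occupancy > high:
        return round(base_price * 1.2, 2)   # premium 20%
    return round(base_price, 2)

# Example: a $100 base rate under three demand scenarios
print(dynamic_price(100, 0.25))  # low demand
print(dynamic_price(100, 0.60))  # normal demand
print(dynamic_price(100, 0.95))  # high demand
```

A production system would of course derive the adjustment from market supply/demand data rather than fixed thresholds, but the shape of the computation is the same.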
Job Description:
We are looking for an exceptional Data Scientist Lead / Manager who is passionate about data and motivated to build large-scale machine learning solutions that make our data products shine. This person will contribute to analytics for insight discovery and develop machine learning pipelines that support modeling of terabytes of daily data for various use cases.
Location: Pune (initially remote due to COVID-19)
Note: Looking for someone who can start immediately or within a month. Hands-on experience in Python programming (minimum 5 years) is a must.
About the Organisation :
- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, United States, Germany, United Kingdom and India.
- You will gain work experience in a global environment. We speak over 20 different languages, come from more than 16 different nationalities, and over 42% of our staff are multilingual.
Qualifications:
• 8+ years relevant working experience
• Master's or Bachelor's degree in computer science or engineering
• Working knowledge of Python and SQL
• Experience in time series data, data manipulation, analytics, and visualization
• Experience working with large-scale data
• Proficiency in various ML algorithms for supervised and unsupervised learning
• Experience working in Agile/Lean model
• Experience with Java and Golang is a plus
• Experience with BI toolkit such as Tableau, Superset, Quicksight, etc is a plus
• Exposure to building large-scale ML models using modern tools and libraries such as AWS SageMaker, Spark MLlib, Dask, TensorFlow, PyTorch, Keras, or the GCP ML stack
• Exposure to modern big data tech such as Cassandra/Scylla, Kafka, Ceph, Hadoop, Spark
• Exposure to IaaS platforms such as AWS, GCP, Azure
Typical persona: Data Science Manager/Architect
Experience: 8+ years programming/engineering experience (with at least last 4 years in Data science in a Product development company)
Type: Hands-on candidate only
Must:
a. Hands-on Python: pandas, scikit-learn
b. Working knowledge of Kafka
c. Able to carry out own tasks and help the team in resolving problems - logical or technical (25% of job)
d. Good on analytical & debugging skills
e. Strong communication skills
Desired (in order of priority):
a. Go (strong advantage)
b. Airflow (strong advantage)
c. Familiarity and working experience with more than one type of database: relational, object, columnar, graph, and other unstructured databases
d. Data structures, Algorithms
e. Experience with multi-threaded and thread sync concepts
f. AWS Sagemaker
g. Keras
Senior Full Stack Python Engineer at Silicon Valley AI company
Company: Photon Commerce (www.photoncommerce.com)

Title: Senior software development engineer

Commitment: Full-time. Remote (San Francisco, CA). Starting as soon as possible.

About the company: Photon empowers financial services and eCommerce businesses with computer vision to automate invoices and receipts. Photon turns paper and PDF invoices, receipts, POs, and packing slips into a modern collaboration platform for commerce, preventing problems before they become costly. Backed by the Nasdaq Entrepreneurial Center and Village Global, its leadership brings experience from eCommerce and SaaS unicorns, Google AI, Y Combinator, Stanford, and 4 exits and an IPO.

What you will do:
- Build a Python/Django/Flask app that processes PDFs, images, invoices, and receipts for QuickBooks and Shopify
- Create integrations and APIs for the captured structured data
- Build a document and invoice collaboration app with embedded chat, like Slack for documents/invoices
- Deploy the app into production

Experience:
- 6-10 years of experience having launched apps in production with real users in React, Node, JavaScript, Python, Django/Flask, SQL, AWS, and full-stack web development
- Repos and/or demos to show your previous work
- Knowledge of and/or passion for enterprise SaaS and AWS DevOps
- Excellent communication, critical thinking, problem-solving, and team skills

What you’ll receive:
- Join an exciting, high-growth startup led by serial entrepreneurs and backed by top-tier Silicon Valley investors and billionaire industry leaders in tech
- Competitive compensation commensurate with experience and performance
Why us?
We at Wow Labz are always striving to look for exciting problems to solve. Whether we’re creating new products or helping a small startup extend its reach, we build from our heart. We’re entrepreneurial and we love new ideas. Fun culture with a team that cares about your development and growth.
What are we looking for?
We are looking for an expert in machine learning to help us extract maximum value from our data. You will be leading all the processes from data collection, cleaning, and preprocessing to training models and deploying them to production. In this role, you should be highly analytical, with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Role & Responsibilities:
- Identify valuable data sources and automate collection processes
- Study and transform data science prototypes
- Research and Implement appropriate ML algorithms and tools
- Develop machine learning applications according to requirements
- Extend existing ML libraries and frameworks
- Cross-validate models to ensure their generalizability
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
- Guide and mentor the respective teams
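The "cross-validate models" responsibility above can be illustrated with a minimal k-fold split. A real project would typically use scikit-learn's `KFold`; this stdlib sketch shows only the idea (index generation, no model training):

```python
import random

def kfold_indices(n: int, k: int, seed: int = 0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)   # shuffle once, deterministically
    fold = n // k
    for i in range(k):
        # last fold absorbs the remainder when n is not divisible by k
        val = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[i * fold:]
        held_out = set(val)
        train = [j for j in idx if j not in held_out]
        yield train, val

# 10 samples, 5 folds: every sample is validated exactly once
folds = list(kfold_indices(10, 5))
print(len(folds))                                   # 5
print(sorted(i for _, val in folds for i in val))   # [0, ..., 9]
```

Averaging a model's score across the k validation folds is what gives the generalizability estimate the responsibility refers to.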
Desired Skills and Experience:
- Proven experience as a Machine Learning Engineer or similar role
- Demonstrable history of devising and overseeing data-centered projects
- Understanding of data structures, data modeling and software architecture
- Deep knowledge of math, probability, statistics and algorithms
- Experience with cloud platforms like AWS/Azure/GCP
- Knowledge of server configurations and maintenance.
- Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset
- Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn)
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Ability to select hardware to run an ML model with the required latency
- Excellent communication skills
- Ability to work in a team
- Outstanding analytical and problem-solving skills
Must have:
- Inclination towards Mathematics and statistics to understand the algorithms at a deeper level
- Strong OOPs concepts (python preferable)
- Hands on experience with Flask or Django
- Ability to learn latest deployed models and understand their core architecture to gain breadth of expertise
Persona of the kind of people who would be a culture fit:
- You are curious and aware of the latest tech trends
- You are self-driven
- You get a kick out of leading a solution towards its completion.
- You have the capacity to foster a healthy, stimulating work environment that frequently harnesses teamwork
- You are fun to hang out with!
We are looking for an experienced data engineer to join our team.
- The preprocessing involves ETL tasks using PySpark and AWS Glue, staging data in Parquet format on S3, and querying it with Athena
To succeed in this data engineering position, you should care about well-documented, testable code and data integrity. We have DevOps engineers who can help with AWS permissions.
We would like to build up a consistent data lake with staged, ready-to-use data, and to build up various scripts that will serve as blueprints for various additional data ingestion and transforms.
If you enjoy setting up something which many others will rely on, and have the relevant ETL expertise, we’d like to work with you.
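The posting's stack (PySpark, AWS Glue, Parquet on S3, Athena) cannot be reproduced in a few lines, but the clean-and-stage step at the heart of such ETL can be sketched with the standard library. The column names (`sku`, `price`) and cleaning rules are invented for the example:

```python
import csv
import io

def clean_rows(raw_csv: str):
    """Minimal ETL: parse CSV, normalize types, drop rows with a missing price."""
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row["price"]:
            continue                      # drop incomplete records
        rows.append({"sku": row["sku"].strip().upper(),  # normalize keys
                     "price": float(row["price"])})       # cast to numeric
    return rows

raw = "sku,price\n a1,10.5\nb2,\nc3,7\n"
staged = clean_rows(raw)
print(staged)  # [{'sku': 'A1', 'price': 10.5}, {'sku': 'C3', 'price': 7.0}]
```

In the real pipeline the same transform would run inside a Glue/PySpark job and the cleaned records would be written as Parquet partitions on S3 for Athena to query.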
Responsibilities
- Analyze and organize raw data
- Build data pipelines
- Prepare data for predictive modeling
- Explore ways to enhance data quality and reliability
- Potentially, collaborate with data scientists to support various experiments
Requirements
- Previous experience as a data engineer with the above technologies
Job Description
We are looking for an experienced engineer to join our data science team, who will help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their deployment into production in a controlled manner, while providing appropriate means to monitor their performance and stability after deployment.
What You’ll Do will include (But not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipate and build solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale.
- Defining and implementing metrics to evaluate the performance of the models, both for computing performance (such as CPU & memory usage) and for ML performance (such as precision, recall, and F1)
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployments
- Developing, testing, and evaluating tools for machine learning models deployment, monitoring, retraining.
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based systems and classical ML techniques to the latest deep learning systems.
- Partnering with cross-functional team members to bring large scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
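The ML-performance metrics named above (precision, recall, and F1) reduce to a few lines given confusion-matrix counts. A minimal sketch, with illustrative counts:

```python
def prf1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 from confusion-matrix counts.

    Returns 0.0 for a metric whose denominator is zero.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative counts: 80 true positives, 20 false positives, 20 false negatives
print(prf1(80, 20, 20))
```

In a monitoring setup these would be computed per deployment or per A/B arm and tracked alongside the computing-performance metrics (CPU, memory) the posting mentions.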
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or Cloud environment, with hands-on knowledge of model creation and deployments in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially on time-series forecasting, analysis, and causal inference.
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series such as clustering, classification, ARIMA, and decision trees is preferred.
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models with appropriate visualization techniques such as PCA.
- Solid English skills to effectively communicate with other team members
Due to the nature of the role, it would be nice if you have also:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of the Java, Scala, or Go programming languages
- Familiarity with KubeFlow
- Experience with transformers, for example the Hugging Face libraries
- Experience with OpenCV
About Egnyte
In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com.
- Modeling complex problems, discovering insights, and identifying opportunities through the use of statistical, algorithmic, mining, and visualization techniques
- Experience working with the business to understand requirements, create problem statements, and build scalable and dependable analytical solutions
- Must have hands-on and strong experience in Python
- Broad knowledge of fundamentals and state-of-the-art in NLP and machine learning
- Strong analytical & algorithm development skills
- Deep knowledge of techniques such as linear regression, gradient descent, logistic regression, forecasting, cluster analysis, decision trees, linear optimization, text mining, etc.
- Ability to collaborate across teams and strong interpersonal skills
Skills
- Sound theoretical knowledge of ML algorithms and their application
- Hands-on experience in statistical modeling tools such as R, Python, and SQL
- Hands-on experience in Machine learning/data science
- Strong knowledge of statistics
- Experience in advanced analytics / Statistical techniques – Regression, Decision trees, Ensemble machine learning algorithms, etc
- Experience in Natural Language Processing & Deep Learning techniques
- Pandas, NLTK, scikit-learn, spaCy, TensorFlow
- Conducting advanced statistical analysis to provide actionable insights, identify trends, and measure performance
- Performing data exploration, cleaning, preparation, and feature engineering, in addition to executing tasks such as building a POC and validation/A-B testing
- Collaborating with data engineers & architects to implement and deploy scalable solutions
- Communicating results to diverse audiences with effective writing and visualizations
- Identifying and executing high-impact projects, triaging external requests, and ensuring timely completion so the results remain useful
- Providing thought leadership by researching best practices, conducting experiments, and collaborating with industry leaders
What you need to have:
- 2-4 years of experience in machine learning algorithms, predictive analytics, and demand forecasting in real-world projects
- Strong statistical background in descriptive and inferential statistics, regression, and forecasting techniques
- Strong programming background in Python (including packages like TensorFlow), R, D3.js, Tableau, Spark, SQL, MongoDB
- Preferred exposure to Optimization & Meta-heuristic algorithm and related applications
- Background in a highly quantitative field like Data Science, Computer Science, Statistics, Applied Mathematics, Operations Research, Industrial Engineering, or similar fields.
- Should have 2-4 years of experience in Data Science algorithm design and implementation, data analysis in different applied problems.
- Mandatory DS skills: Python, R, SQL, deep learning, predictive analysis, applied statistics
Responsibilities:
- Design and develop strong analytics systems and predictive models
- Managing a team of data scientists, machine learning engineers, and big data specialists
- Identify valuable data sources and automate data collection processes
- Undertake pre-processing of structured and unstructured data
- Analyze large amounts of information to discover trends and patterns
- Build predictive models and machine-learning algorithms
- Combine models through ensemble modeling
- Present information using data visualization techniques
- Propose solutions and strategies to business challenges
- Collaborate with engineering and product development teams
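The "combine models through ensemble modeling" step above can be illustrated with the simplest ensemble, a majority vote over per-model predictions. A stdlib sketch with invented toy predictions:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by majority vote.

    `predictions` is a list of prediction lists, one per model.
    Ties go to the label of the earliest-listed model (Counter
    preserves first-encountered order for equal counts).
    """
    combined = []
    for labels in zip(*predictions):      # one tuple per sample
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Three hypothetical classifiers voting on four samples
model_a = [1, 0, 1, 1]
model_b = [1, 1, 0, 1]
model_c = [0, 0, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # [1, 0, 1, 1]
```

Weighted voting or stacking (training a meta-model on the base predictions) are the usual next steps beyond this plain vote.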
Requirements:
- Proven experience as a seasoned Data Scientist
- Good experience with data mining processes
- Understanding of machine learning and Knowledge of operations research is a value addition
- Strong understanding of and experience in R, SQL, and Python; knowledge of Scala, Java, or C++ is an asset
- Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
- Strong math skills (e.g. statistics, algebra)
- Problem-solving aptitude
- Excellent communication and presentation skills
- Experience in Natural Language Processing (NLP)
- Strong competitive coding skills
- BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
One of the leading banks with a global presence.
High-Level Scope of Work:
- Work with the AI/analytics team to prioritize identified machine learning use cases for development and rollout
- Meet and understand current retail/marketing requirements and how the AI/ML solution will address and automate the decision process
- Develop AI/ML programs using the Dataiku solution and Python or open-source tech, with a focus on delivering high-quality and accurate ML prediction models
- Gather additional and external data sources to support the AI/ML model as desired
- Support the ML model and fine-tune it to ensure high accuracy at all times
- Example use cases: customer segmentation, product recommendation, price optimization, retail customer personalization offers, next best location for business establishment, CCTV computer vision, NLP and voice recognition solutions
Required technology expertise:
- Deep knowledge and understanding of machine learning algorithms (supervised/unsupervised learning, deep learning models)
- At least 5+ years of hands-on experience with the Python and R statistical programming languages
- Strong database development knowledge using SQL and PL/SQL
- Must have experience using a commercial data science solution, particularly Dataiku (Alteryx, SAS, Azure ML, Google ML, Oracle ML is a plus)
- Strong hands-on experience with big data solution architecture and optimization for AI/ML workloads
- Hands-on experience with data analytics and BI tools, particularly Oracle OBIEE and Power BI
- Have implemented and developed at least 3 successful AI/ML projects with tangible business outcomes in the retail-focused industry
- At least 5+ years of experience in the retail industry and customer-focused business
- Ability to communicate with business owners and stakeholders to understand their current issues and provide machine learning solutions accordingly
Qualifications
- Bachelor's or Master's degree in Data Science, Artificial Intelligence, or Computer Science
- Certified as a data scientist or machine learning expert
- Adept at machine learning techniques and algorithms: feature selection, dimensionality reduction, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Doing ad-hoc analysis and presenting results
- Proficiency in using query languages such as N1QL, SQL
- Experience with data visualization tools, such as D3.js, ggplot, Plotly, PyPlot, etc.
- Creating automated anomaly detection systems and constant tracking of their performance
- Strong in Python is a must
- Strong in data analysis and mining is a must
- Deep learning, neural networks, CNN, image processing (must)
- Building analytic systems: data collection, cleansing, and integration
- Experience with NoSQL databases, such as Couchbase, MongoDB, Cassandra, HBase
We are a nascent quantitative hedge fund led by an MIT PhD and Math Olympiad medallist, offering opportunities to grow with us as we build out the team. Our fund has world-class investors and big data experts as part of the GP, top-notch ML experts as advisers to the fund, plus equity funding to grow the team, license data, and scale the data processing.
We are interested in researching and taking in live a variety of quantitative strategies based on historic and live market data, alternative datasets, social media data (both audio and video) and stock fundamental data.
You would join and, if qualified, lead a growing team of data scientists and researchers, and be responsible for the complete lifecycle of quantitative strategy implementation and trading.
Requirements:
- At least 3 years of relevant ML experience
- Graduation date: 2018 or earlier
- 3-5 years of experience in high-level Python programming
- Master's degree (or Ph.D.) in quantitative disciplines such as Statistics, Mathematics, Physics, or Computer Science from top universities
- Good knowledge of applied and theoretical statistics, linear algebra and machine learning techniques.
- Ability to leverage financial and statistical insights to research, explore and harness a large collection of quantitative strategies and financial datasets in order to build strong predictive models.
- Should take ownership of the research, design, development, and implementation of strategy development, and effectively communicate with other teammates
- Prior experience and good knowledge of lifecycle and pitfalls of algorithmic strategy development and modelling.
- Good practical knowledge in understanding financial statements, value investing, portfolio and risk management techniques.
- A proven ability to lead and drive innovation to solve challenges and roadblocks in project completion
- A valid GitHub profile with some activity in it
Bonus to have:
- Experience in storing and retrieving data from large and complex time series databases
- Very good practical knowledge of time-series modelling and forecasting (ARIMA, ARCH, and stochastic modelling)
- Prior experience in optimizing and backtesting quantitative strategies, doing return and risk attribution, and feature/factor evaluation
- Knowledge of the AWS/cloud ecosystem is an added plus (EC2, Lambda, EKS, SageMaker, etc.)
- Knowledge of REST APIs and data extraction and cleaning techniques
- Good to have: experience in PySpark or other big data programming/parallel computing
- Familiarity with derivatives; knowledge of multiple asset classes along with equities
- Any progress towards CFA or FRM is a bonus
- Average tenure of at least 1.5 years in a company
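As a minimal illustration of the time-series modelling mentioned above, the sketch below fits an AR(1) recurrence x[t] = a + b*x[t-1] by least squares and iterates it forward. A real pipeline would use a proper ARIMA implementation (e.g. statsmodels); the price series here is invented:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = a + b * x[t-1] (a minimal AR(1))."""
    x, y = series[:-1], series[1:]        # lagged pairs
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def forecast(series, steps, a, b):
    """Iterate the fitted recurrence forward `steps` periods."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

# Invented toy price series
prices = [100, 102, 101, 103, 104, 103, 105]
a, b = fit_ar1(prices)
print([round(p, 1) for p in forecast(prices, 3, a, b)])
```

Full ARIMA adds differencing and a moving-average term on top of this autoregressive core, and backtesting would then compare such forecasts against held-out history.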
- 3-5 years of practical DS experience working with varied data sets; working with retail banking is preferred but not necessary
- Need to be strong in the concepts of statistical modelling, particularly practical knowledge learnt from work experience (should be able to give "rule of thumb" answers)
- Strong problem-solving skills and the ability to articulate ideas really well
- Ideally, the data scientist should have interfaced with data engineering and model deployment teams to bring models / solutions to "live" in production.
- Strong working knowledge of python ML stack is very important here.
- Willing to work on diverse range of tasks in building ML related capability on the Corridor Platform as well as client work.
- Someone with strong interest in data engineering aspect of ML is highly preferred, i.e. can play dual role of Data Scientist as well as someone who can code a module on our Corridor Platform writing robust code.
Structured ML techniques for candidates:
- GBM
- XGBoost
- Random Forest
- Neural Net
- Logistic Regression
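Of the techniques listed above, logistic regression is the simplest to sketch from scratch. The toy below trains a one-feature model by per-sample gradient descent on a tiny invented dataset; in practice scikit-learn or XGBoost would be used:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=1000):
    """Plain per-sample gradient-descent logistic regression, one feature."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
            w -= lr * (p - y) * x     # gradient of log-loss w.r.t. w
            b -= lr * (p - y)         # ... and w.r.t. b
    return w, b

# Invented, separable toy data: label 1 when the feature exceeds ~2.5
xs = [1.0, 2.0, 3.0, 4.0]
ys = [0, 0, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

print([predict(x) for x in xs])
```

GBM, XGBoost, and random forests differ in how the decision function is built (ensembles of trees rather than a single linear score), but the fit/predict shape of the workflow is the same.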
Job Description
We are looking for applicants who have a demonstrated research background in machine learning, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within Ideapoke, in particular with researchers in data collection and development teams, to develop advanced data analytics solutions and work with massive amounts of data collected from various sources.
- 4 to 5 years of academic or professional experience in Artificial Intelligence and Data Analytics, Machine Learning, Natural Language Processing/Text Mining, or a related field
- Technical ability and hands-on expertise in Python, R, XML parsing, Big Data, NoSQL, and SQL
- Identifying valuable data sources and automate collection processes
- Undertaking preprocessing of structured and unstructured data
- Analyzing large amounts of information to discover trends and patterns
- Building predictive models and machine-learning algorithms
- Combining models through ensemble modeling
- Presenting information using data visualization techniques
- Proposing solutions and strategies to business challenges
- Collaborating with engineering and product development teams
What you need to have:
- Data Scientist with a minimum of 3 years of experience in Analytics or Data Science, preferably in Pricing or the Polymer Market
- Experience using scripting languages like Python (preferred) or R is a must.
- Experience with SQL, Tableau is good to have
- Strong numerical, problem solving and analytical aptitude
- Being able to make data based decisions
- Ability to present/communicate analytics driven insights.
- Critical and Analytical thinking skills
Our client provides data solutions for fraud prevention.
- Manage individual project priorities, deadlines, and deliverables
- Gather and process raw data at scale from the web/internet (including writing scripts, web scraping, calling/creating APIs, etc.)
- Develop frameworks for automating and maintaining a constant flow of data from multiple sources
- Identify, analyze, design, and implement internal process improvements
- Design and implement tooling upgrades to increase stability and data quality
- Help the team fix issues that occur in test and production environments
- Automate software development processes, including build, deploy, and test
- Manage and guide the team members
REQUIRED QUALIFICATIONS:
Tiger Analytics is a global AI & analytics consulting firm. With data and technology at the core of our solutions, we are solving some of the toughest problems out there. Our culture is modeled around expertise and mutual respect with a team first mindset. Working at Tiger, you’ll be at the heart of this AI revolution. You’ll work with teams that push the boundaries of what-is-possible and build solutions that energize and inspire.
We are headquartered in Silicon Valley and have delivery centres across the globe. The role below is for our Chennai or Bangalore office, or you can choose to work remotely.
About the Role:
As an Associate Director - Data Science at Tiger Analytics, you will lead the data science aspects of end-to-end client AI & analytics programs. Your role will be a combination of hands-on contribution, technical team management, and client interaction.
• Work closely with internal teams and client stakeholders to design analytical approaches to solve business problems
• Develop and enhance a broad range of cutting-edge data analytics and machine learning solutions across a variety of industries
• Work on various aspects of the ML ecosystem: model building, ML pipelines, logging & versioning, documentation, scaling, deployment, monitoring, maintenance, etc.
• Lead a team of data scientists and engineers to embed AI and analytics into the client's business decision processes
Desired Skills:
• High level of proficiency in a structured programming language, e.g. Python, R
• Experience designing data science solutions to business problems
• Deep understanding of ML algorithms for common use cases in both structured and unstructured data ecosystems
• Comfortable with large-scale data processing and distributed computing
• Excellent written and verbal communication skills
• 10+ years of experience, of which 8 years are relevant data science experience including hands-on programming
Designation will be commensurate with expertise/experience. Compensation packages are among the best in the industry.
Responsibilities
- Research and test novel machine learning approaches for analysing large-scale distributed computing applications.
- Develop production-ready implementations of proposed solutions across different AI and ML models and algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness
- Work closely with other functional teams to integrate implemented systems into the SaaS platform
- Suggest innovative and creative concepts and ideas that would improve the overall platform
Qualifications
The ideal candidate must have the following qualifications:
- 5+ years of experience in the practical implementation and deployment of large customer-facing ML-based systems
- MS or M.Tech (preferred) in applied mathematics/statistics; CS or engineering disciplines are acceptable but must come with strong quantitative and applied mathematical skills
- In-depth familiarity, beyond coursework, with classical and current ML techniques, both supervised and unsupervised learning techniques and algorithms
- Implementation experience and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming and Optimization
- Experience in working on modeling graph structures related to spatiotemporal systems
- Programming skills in Python are a must
- Experience in developing and deploying on cloud (AWS or Google or Azure)
- Good verbal and written communication skills
- Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow
Most importantly, you should be someone who is passionate about building new and innovative products that solve tough real-world problems.
Location
Chennai, India
1) Machine learning development using Python or Scala Spark
2) Knowledge of multiple ML algorithms like random forest, XGBoost, RNN, CNN, transfer learning, etc.
3) Awareness of typical challenges in machine learning implementation and the respective applications
Good to have:
1) Stack development or DevOps team experience
2) Cloud services (AWS, Cloudera), SaaS, PaaS
3) Big data tools and frameworks
4) SQL experience
AI-driven platform designed for the education and e-commerce sectors
The Job
The Architect, Machine Learning and Artificial Intelligence (including Computer Vision) will grow and lead a team of talented machine learning (ML), computer vision (CV), and artificial intelligence (AI) researchers and engineers to develop innovative machine learning algorithms, scalable ML systems, and AI applications for Racetrack. This role will focus on developing and deploying personalization and recommender systems, search, experimentation, audience, and content AI solutions to drive user experience and growth.
The Daily
- Develop innovative data science solutions that utilize machine learning and deep learning algorithms, statistical and quantitative modelling approaches to support product, engineering, content, and marketing initiatives.
- Build and lead a world-class team of ML and AI scientists and engineers.
- Be a hands-on leader who mentors the team in the latest machine learning and deep learning approaches and introduces new technologies and processes. Single-handedly manage the MVPs and PoCs.
- Work with ML engineers to design solution architecture and develop scalable machine learning system to accelerate learning cycle.
- Identify data science opportunities that deliver business value.
- Develop ML/AI/CV roadmap and educate both internal and external stakeholders at all levels to drive implementation and measurement.
- Hands-on experience in image processing for the auto industry
- BFSI domain knowledge is a plus
- Provide thought leadership to enable ML/AI applications.
- Manage product priorities and ensure timely delivery.
- Develop and evangelize best practices for scoping, building, validating, deploying, and monitoring ML/AI products.
- Prepare and present ML modelling results and analytical insights that help drive the business to senior leadership.
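As a hypothetical sketch of the recommender-system work described above (not the platform's actual method), item-based collaborative filtering can be expressed in a few lines of NumPy; the rating matrix below is a toy stand-in:

```python
# Minimal sketch: item-based collaborative filtering via cosine similarity.
import numpy as np

# Toy user-item rating matrix (rows = users, cols = items); 0 = unrated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / np.outer(norms, norms)

# Score unrated items for user 0 as a similarity-weighted sum of their ratings.
user = ratings[0]
scores = sim @ user
scores[user > 0] = -np.inf  # exclude items the user has already rated
print("recommended item:", int(np.argmax(scores)))
```

Production recommenders add implicit feedback, matrix factorization, or deep retrieval models on top, but the neighborhood idea above is the usual starting baseline.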
The Essentials
- 8+ years of work experience in Machine Learning, AI, and Data Science with a proven track record of driving innovation and business impact
- 4+ years of experience managing a team of data scientists and ML/AI researchers and engineers
- Strong machine learning, deep learning, and statistical modelling expertise, such as causal inference modelling, ensembles, neural networks, reinforcement learning, NLP, and computer vision
- Advanced knowledge of SQL and experience with big data platforms (AWS, Snowflake, Spark, Google Cloud, etc.)
- Proficiency in machine learning and deep learning languages and platforms (Python, R, TensorFlow, Keras, PyTorch, MXNet etc.)
- Experience in deploying machine learning algorithms and advanced modelling solutions
- Experience in developing advanced analytics and ML infrastructure and systems
- Self-starter and self-motivated with the proven ability to deliver results in a fast-paced, high-energy environment
- Strong communication skills and the ability to explain complex analysis and algorithms to non-technical audiences
- Work effectively with cross-functional teams to build trusted partnerships
- Working experience in the digital media and entertainment industry preferred
- Experience with Agile methodologies preferred
My client is a US based Product development company.
Responsibilities:
- Identify complex business problems and work towards building analytical solutions in order to create large business impact.
- Demonstrate leadership through innovation in software and data products from ideation/conception through design, development and ongoing enhancement, leveraging user research techniques, traditional data tools, and techniques from the data science toolkit such as predictive modelling, NLP, statistical analysis, vector space modelling, machine learning etc.
- Collaborate and ideate with cross-functional teams to identify strategic questions for the business that can be solved and champion the effectiveness of utilizing data, analytics, and insights to shape business.
- Contribute to company growth efforts, increasing revenue and supporting other key business outcomes using analytics techniques.
- Focus on driving operational efficiencies by use of data and analytics to impact cost and employee efficiency.
- Baseline the current analytics capability, ensure optimum utilization, and drive continued advancement to stay abreast of industry developments.
- Establish self as a strategic partner with stakeholders, focused on full innovation system and fully supportive of initiatives from early stages to activation.
- Review stakeholder objectives and team's recommendations to ensure alignment and understanding.
- Drive analytics thought leadership and effectively contribute towards transformational initiatives.
- Ensure accuracy of data and deliverables from reporting employees through comprehensive policies and processes.
We are looking for a Python Developer to join our engineering team. Python Developer responsibilities include writing and testing code and debugging programs.
Responsibilities:
1) Understand the business objectives, formulate hypotheses, and collect the relevant data using SQL/R/Python. Analyse bureau, customer, and lending performance data on a periodic basis to generate insights. Present complex information and data in an uncomplicated, easy-to-understand way to drive action.
2) Independently build and refit robust models to achieve game-changing growth while managing risk.
3) Identify and implement new analytical/modelling techniques to improve model performance across the customer lifecycle (acquisitions, management, fraud, collections, etc.).
4) Help define the data infrastructure strategy for the Indian subsidiary.
a. Monitor data quality and quantity.
b. Define a strategy for acquisition, storage, retention, and retrieval of data elements, e.g. identify new data types and collaborate with technology teams to capture them.
c. Build a culture of strong automation and monitoring.
d. Stay connected to analytics industry trends (data, techniques, technology, etc.) and leverage them to continuously evolve data science standards at Credit Saison.
Required Skills & Qualifications:
1) 3+ years working in data science domains with experience in building risk models. Fintech/Financial analysis experience is required.
2) Expert level proficiency in Analytical tools and languages such as SQL, Python, R/SAS, VBA etc.
3) Experience with building models using common modelling techniques (Logistic and linear regressions, decision trees, etc.)
4) Strong familiarity with Tableau/Power BI/Qlik Sense or other data visualization tools
5) Tier 1 college graduate (IIT/IIM/NIT/BITS preferred).
6) Demonstrated autonomy, thought leadership, and learning agility.
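For illustration only — not Credit Saison's actual methodology — a risk model of the kind listed above (logistic regression on lending-style features) could be sketched with scikit-learn on synthetic data; every feature name below is hypothetical:

```python
# Minimal sketch: a logistic-regression risk model on synthetic lending-style data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: bureau score, utilisation ratio, months on book.
X = np.column_stack([
    rng.normal(650, 50, n),    # bureau score
    rng.uniform(0, 1, n),      # utilisation ratio
    rng.integers(1, 60, n),    # months on book
])
# Synthetic default flag: higher utilisation and a lower score raise risk.
logit = -4 + 3 * X[:, 1] - 0.01 * (X[:, 0] - 650)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```

A real risk model would add bureau-feature engineering, out-of-time validation, and calibration checks, but the fit/score loop is the same shape.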
at iNurture education solution pvt ltd
Professor:
To drive the campus academic operations in close coordination with: a) Heads of Departments, b) faculty members, and c) students
To create an 'IT centre of excellence' at the campus
Should have driven the academic function of the IT department
Should have experience of handling academic operations
Associate / Assistant Professor:
Adhering to the university timeline (other than for new courses)
To ensure coverage of Syllabus as per university standards
Learning Outcome
Examination Result ( Includes both Internal & Main Exams)
Knowledge Improvement Program
Innovation & Development
Technical Skills:
- General: C++, Java, operating systems, RDBMS, software engineering, data structures, etc.
- Niche technologies such as cloud, mobility, information security, data science, IoT, and artificial intelligence, to name a few
Behavioural Competencies:
- Strong Leadership Qualities
- Excellent Communication skills
- Strong interpersonal skills to work with diverse teams
- Strong Presentation Skills
Qualification: B.Tech, M.Tech & Ph.D - Completed / Pursuing
Year of Exp:
- Professor: 12+ years of experience post-PG and 5+ years post-Ph.D
- Associate Professor: 10+ years of experience post-PG and 3+ years post-Ph.D
- Assistant Professor: 2+ years of experience
- Write, test, debug and ship code and gather feedback on the scale, performance, security to incorporate back into the platform.
- Work with the founders to identify complex technical problems and solve them.
- Work with the product design and client experience development team to support them with scalable services
- Feed into the overall mission and vision of the eParchi platform over the coming months and years.
- An ability to perform well in a fast-paced environment
- Excellent analytical and multitasking skills.