empower healthcare payers, providers, and members to quickly process medical data, make informed decisions, and reduce healthcare costs. You will focus on research, development, strategy, operations, people management, and thought leadership for team members based out of India. You should have professional healthcare experience using both structured and unstructured data to build applications. These applications include, but are not limited to, machine learning, artificial intelligence, optical character recognition, natural language processing, and integration of these processes into the overall AI pipeline to mine healthcare and medical information with high recall and other relevant metrics. The results serve a dual purpose: driving real-time operational processes with both automated and human-based decision making, and reducing healthcare administrative costs. We work with the offerings of all major cloud and big data vendors (Azure, AWS, Google, IBM, etc.) to achieve our goals in healthcare.
The Director, Data Science will have the opportunity to build a team and shape its culture and operating norms, given the fast-paced nature of a new, high-growth organization.
• Strong communication and presentation skills to convey progress to a diverse group of stakeholders
• Strong expertise in data science, data engineering, software engineering, cloud vendors, big data technologies, real-time streaming applications, DevOps and product delivery
• Experience building stakeholder trust and confidence in deployed models, especially through attention to algorithmic bias, interpretable machine learning, data integrity, data quality, reproducible research, and reliable engineering for 24x7x365 product availability and scalability
• Expertise in healthcare privacy, federated learning, continuous integration and deployment, DevOps support
• Mentor data scientists and machine learning engineers and support their career development
• Meet regularly with project team members on individual needs related to project/product deliverables
• Provide training and guidance for team members when required
• Provide performance feedback when required by leadership
The Experience You’ll Need (Required):
• MS/M.Tech degree or PhD in Computer Science, Mathematics, Physics or related STEM fields
• Significant healthcare data experience including but not limited to usage of claims data
• Delivered multiple data science and machine learning projects over 8+ years, with values exceeding $10 million, on platforms serving more than 10 million member lives
• 9+ years of industry experience in data science, machine learning, and artificial intelligence
• Strong expertise in data science, data engineering, software engineering, cloud vendors, big data technologies, real time streaming applications, DevOps, and product delivery
• Knows how to solve real artificial intelligence and data science problems and launch products, while managing and coordinating business process change and IT/cloud operations and meeting production-level code standards
• Ownership of key workflows in the data science life cycle, such as data acquisition, data quality, and results delivery
• Experience building stakeholder trust and confidence in deployed models, especially through attention to algorithmic bias, interpretable machine learning, data integrity, data quality, reproducible research, and reliable engineering for 24x7x365 product availability and scalability
• Expertise in healthcare privacy, federated learning, continuous integration and deployment, DevOps support
• 3+ years of experience directly managing five (5) or more senior-level data scientists and machine learning engineers with advanced degrees, including making direct staffing decisions
• Very strong understanding of mathematical concepts including, but not limited to, linear algebra, advanced calculus, partial differential equations, and statistics (including Bayesian approaches) at the master’s degree level and above
• 6+ years of programming experience in C++, Java, or Scala and in data science languages such as Python and R, including a strong understanding of data structures, algorithms, compression techniques, high-performance computing, distributed computing, and computer architectures
• Very strong understanding of and experience with traditional data science approaches such as sampling techniques, feature engineering, classification, regression, SVMs, tree-based models, and model evaluation, applied across several projects over 3+ years (see the sketch after this list)
• Very strong understanding of and experience in natural language processing, reasoning and understanding, information retrieval, text mining, and search, with 3+ years of hands-on experience
• Experience developing and deploying several products in production, with experience in two or more of the following languages: Python, C++, Java, Scala
• Strong Unix/Linux background and experience with at least one of the major cloud vendors (AWS, Azure, or Google)
• 3+ years of hands-on experience with a MapR/Cloudera/Databricks big data platform, including Spark, Hive, Kafka, etc.
• 3+ years of experience with high-performance computing, e.g., Dask, CUDA, distributed GPUs, TPUs, etc.
• Presented at major conferences and/or published materials
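For illustration, a minimal sketch of the classical workflow named above (feature scaling, classification with an SVM and a tree-based model, and cross-validated model evaluation), assuming scikit-learn; the synthetic dataset stands in for real claims features and all names are hypothetical:

# Minimal sketch: classical classifiers with cross-validated evaluation.
# Assumes scikit-learn; the synthetic data is a stand-in for claims features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    # Recall is scored here to mirror the posting's emphasis on high-recall mining.
    scores = cross_val_score(model, X, y, cv=5, scoring="recall")
    print(f"{name}: mean recall = {scores.mean():.3f}")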
Similar jobs
Senior Data Scientist
- 6+ years of experience building data pipelines and deployment pipelines for machine learning models
- 4+ years’ experience with ML/AI toolkits such as TensorFlow, Keras, AWS SageMaker, MXNet, H2O, etc.
- 4+ years’ experience developing ML/AI models in Python/R
- Must have leadership skills to lead and deliver projects: be proactive, take ownership, interface with the business, represent the team, and spread knowledge.
- Strong knowledge of statistical data analysis and machine learning techniques (e.g., Bayesian, regression, classification, clustering, time series, deep learning).
- Should be able to help deploy various models and tune them for better performance.
- Working knowledge of operationalizing models in production using model repositories, APIs, and data pipelines (a minimal sketch follows this list).
- Experience with machine learning and computational statistics packages.
- Experience with Databricks and Data Lake.
- Experience with Dremio, Tableau, Power BI.
- Experience working with Spark ML and Spark DL with PySpark would be a big plus!
- Working knowledge of relational database systems like SQL Server, Oracle.
- Knowledge of deploying models in platforms like PCF, AWS, Kubernetes.
- Good knowledge of continuous integration suites like Jenkins.
- Good knowledge of web servers (Apache, NGINX).
- Good knowledge of Git, GitHub, Bitbucket.
- Java, R, and Python programming experience.
- Should be very familiar with MS SQL, Teradata, Oracle, and DB2.
- Big Data – Hadoop.
- Expert knowledge of BI tools, e.g., Tableau.
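For illustration, a minimal sketch of operationalizing a model via a persisted artifact and an API, assuming scikit-learn, joblib, and Flask; the route, file name, and payload shape are hypothetical:

# Minimal sketch: persist a trained pipeline and serve it behind a REST API.
# Assumes scikit-learn, joblib, and Flask; paths and routes are hypothetical.
import joblib
from flask import Flask, jsonify, request
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Train once and store the artifact (a stand-in for a model repository).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
joblib.dump(model, "model.joblib")

app = Flask(__name__)
model = joblib.load("model.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[0.1, 0.2, 0.3, 0.4]]}.
    features = request.get_json()["features"]
    return jsonify(predictions=model.predict(features).tolist())

if __name__ == "__main__":
    app.run(port=8080)

In a real deployment, a model repository and data pipeline would replace the local joblib file.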
Capabilities & Insights Analyst
at Top Management Consulting Firm
Company Profile:
The company is the world's No. 1 global management consulting firm.
Job Qualifications
Graduate or postgraduate degree in statistics, economics, econometrics, computer science,
engineering, or mathematics
2-5 years of relevant experience
Adept in forecasting, regression analysis, and segmentation work
Understanding of modeling techniques, specifically logistic regression, linear regression, cluster
analysis, CHAID, etc. (see the sketch after this list)
Statistical programming software experience in R & Python, comfortable working with large data
sets; SAS & SQL are also preferred
Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify
root causes and recommend solutions
Excellent time management skills
Good written and verbal communication skills; understanding of both written and spoken English
Strong interpersonal skills
Ability to act autonomously, bringing structure and organization to work
Creative and action-oriented mindset
Ability to interact in a fluid, demanding and unstructured environment where priorities evolve
constantly and methodologies are regularly challenged
Ability to work under pressure and deliver on tight deadlines
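For illustration, a minimal sketch of the kind of interpretable logistic regression used in consulting-style driver analyses, assuming statsmodels; the churn outcome and driver features are hypothetical, synthetic data:

# Minimal sketch: interpretable logistic regression for driver analysis.
# Assumes statsmodels; the outcome and drivers are synthetic and hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical drivers (e.g., price sensitivity, tenure); outcome: churned (0/1).
X = rng.normal(size=(500, 2))
logit_p = 0.8 * X[:, 0] - 1.2 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())  # coefficients, p-values, confidence intervals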
Python + Data scientist
at A leading global information technology and business process
Python + Data scientist:
• Build data-driven models to understand the characteristics of engineering systems
• Train, tune, validate, and monitor predictive models
• Sound knowledge of statistics
• Experience developing data processing tasks using PySpark, such as reading, merging, enriching, and loading data from external systems to target data destinations (see the sketch after this list)
• Working knowledge of Big Data and/or Hadoop environments
• Experience creating CI/CD pipelines using Jenkins or similar tools
• Practiced in eXtreme Programming (XP) disciplines
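For illustration, a minimal sketch of the PySpark read, merge, enrich, and load flow described above; the paths, columns, and join key are hypothetical:

# Minimal PySpark sketch: read, merge, enrich, and load data.
# Paths, columns, and the join key ("id") are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read from two external sources.
readings = spark.read.csv("s3://source/readings.csv", header=True, inferSchema=True)
systems = spark.read.parquet("s3://source/systems.parquet")

# Merge and enrich: join on a shared key and derive a new column.
enriched = (
    readings.join(systems, on="id", how="left")
            .withColumn("load_date", F.current_date())
)

# Load into the target destination.
enriched.write.mode("overwrite").parquet("s3://target/enriched/")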
What You’ll Do:
- Accurate translation of business needs into a conceptual and technical architecture design of AI models and solutions
- Collaboration with developers and engineering teams resolving challenging tasks and ensuring proposed design is properly implemented
- Strategy for managing the changes to the AI models (new business needs, technology changes, model retraining, etc.)
- Collaborate with business partners and clients for AI solutioning and use cases. Provide recommendations to drive alignment with business teams
- Define and implement evaluation strategies for each model, demonstrate applicability and performance of the model, and identify its limits
- Design complex system integrations of AI technologies with API-driven platforms, using best practices for security and performance
- Experience in languages, tools, and technologies such as Python, TensorFlow, PyTorch, Kubernetes, Docker, etc.
- Experience with MLOps tools (such as TFX, TensorFlow Serving, Kubeflow, etc.) and methodologies for CI/CD of ML models
- Proactively identify and address technical strengths, weaknesses, and opportunities across the AI and ML domain
- Provide strategic direction for maximizing simplification and re-use, lowering overall TCO
What You’ll Bring:
- Minimum 10 years of hands-on experience in the IT field, with at least 6 years in data science/ML/AI implementation-based products and solutions
- Experience with Computer Vision - Vision AI & Document AI
- Must be hands-on with the Python programming language, MLOps, TensorFlow, PyTorch, Keras, scikit-learn, etc.
- Well-versed in deep learning concepts, computer vision, image processing, document processing, convolutional neural networks, and data ontology applications
- Proven track record of executing projects in agile, cross-functional teams
- Published research papers and presented at reputable AI conferences, with the ability to lead and drive a research mindset across the team
- Good to have experience with GCP / Microsoft Azure / Amazon Web Services
- Ph.D. or Master's in a quantitative field such as Computer Science, IT, or Statistics/Mathematics
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance)
- Sum Insured: INR 25,00,000/- per employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Google Cloud, Amazon & others.
• Experience with Advanced SQL
• Experience with Azure Data Factory and Databricks
• Experience with Azure IoT, Cosmos DB, Blob Storage
• API management and FHIR API development
• Proficient with Git and CI/CD best practices
• Experience working with Snowflake is a plus
Senior Data Scientist (Health Metrics)
at Biostrap
Introduction
The Biostrap platform extracts many metrics related to health, sleep, and activity. Its algorithms are designed through research and often based on scientific literature; in some cases they are augmented with, or entirely designed using, machine learning techniques. Biostrap is seeking a Data Scientist to design, develop, and implement algorithms to improve existing metrics and measure new ones.
Job Description
As a Data Scientist at Biostrap, you will take on projects to improve or develop algorithms to measure health metrics, including:
- Research: search the literature for starting points for the algorithm.
- Design: decide on the general approach, in particular whether to use machine learning, mathematical techniques, or something else.
- Implement: program the algorithm in Python and help deploy it.
The algorithms and their implementation will have to be accurate, efficient, and well-documented.
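For illustration, a minimal sketch of the "Implement" step for a literature-style metric: estimating heart rate from a pulse waveform by peak detection. This assumes NumPy/SciPy, uses a synthetic signal, and is not Biostrap's actual algorithm:

# Minimal sketch (not Biostrap's actual algorithm): estimate heart rate from
# a synthetic pulse waveform via peak detection. Assumes NumPy and SciPy.
import numpy as np
from scipy.signal import find_peaks

FS = 100  # sampling rate in Hz (hypothetical)
t = np.arange(0, 30, 1 / FS)  # 30 seconds of signal
# Synthetic ~72 bpm pulse wave plus noise, as a stand-in for sensor data.
signal = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)

# Require peaks at least 0.4 s apart (i.e., below 150 bpm).
peaks, _ = find_peaks(signal, distance=0.4 * FS)
bpm = 60 * FS / np.diff(peaks).mean()
print(f"Estimated heart rate: {bpm:.0f} bpm")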
Requirements
- A Master’s degree in a computational field, with a strong mathematical background.
- Strong knowledge of, and experience with, different machine learning techniques, including their theoretical background.
- Strong experience with Python
- Experience with Keras/TensorFlow, and preferably also with RNNs
- Experience with AWS or similar services for data pipelining and machine learning.
- Ability and drive to work independently on an open problem.
- Fluency in English.
Location: Bengaluru
Department: Engineering
Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data sets. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.
● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipeline in Bidgely Data Lake
● Collaborate with product management and engineering teams to elicit & understand their requirements & challenges and develop potential solutions
● Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.
● 3-5 years of strong experience in data analytics and in developing data pipelines.
● Very good expertise in Looker
● Strong in data modeling, developing SQL queries and optimizing queries.
● Good knowledge of data warehouse (Amazon Redshift, BigQuery, Snowflake, Hive).
● Good understanding of Big data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera)
● Attention to detail. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
Data Scientist
- Adept at machine learning techniques and algorithms: feature selection, dimensionality reduction, building and optimizing classifiers
- Data mining using state-of-the-art methods
- Doing ad-hoc analysis and presenting results
- Proficiency in query languages such as N1QL and SQL
- Experience with data visualization tools such as D3.js, ggplot, Plotly, PyPlot, etc.
- Creating automated anomaly detection systems and constantly tracking their performance (see the sketch after this list)
- Strong in Python is a must
- Strong in data analysis and mining is a must
- Deep learning, neural networks, CNNs, image processing (must)
- Building analytic systems: data collection, cleansing, and integration
- Experience with NoSQL databases such as Couchbase, MongoDB, Cassandra, HBase
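For illustration, a minimal sketch of an automated anomaly detection system with simple performance tracking, assuming scikit-learn; the data and alert threshold are synthetic and hypothetical:

# Minimal sketch: automated anomaly detection with simple rate tracking.
# Assumes scikit-learn; data is synthetic and the threshold is hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1000, 3))    # baseline observations
anomalies = rng.normal(6, 1, size=(10, 3))   # injected outliers
X = np.vstack([normal, anomalies])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(X)  # -1 = anomaly, 1 = normal

# Constant tracking: alert if the anomaly rate drifts past a threshold.
anomaly_rate = (flags == -1).mean()
if anomaly_rate > 0.05:
    print(f"ALERT: anomaly rate {anomaly_rate:.2%} exceeds threshold")
else:
    print(f"Anomaly rate {anomaly_rate:.2%} within expected range")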
● Ability to do exploratory analysis: fetch data from systems and analyze trends.
● Developing customer segmentation models to improve the efficiency of marketing and product campaigns (see the sketch after this list).
● Establishing mechanisms for cross-functional teams to consume customer insights to improve engagement along the customer life cycle.
● Gather requirements for dashboards from business, marketing, and operations stakeholders.
● Preparing internal reports for executive leadership and supporting their decision making.
● Analyse data, derive insights, and embed them into business actions.
● Work with cross-functional teams.
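For illustration, a minimal sketch of a customer segmentation model using k-means clustering, assuming scikit-learn; the RFM-style features are synthetic and hypothetical:

# Minimal sketch: customer segmentation via k-means clustering.
# Assumes scikit-learn; the RFM-style features are synthetic and hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic features: recency (days), purchase frequency, monetary spend.
customers = np.column_stack([
    rng.integers(1, 365, 500),
    rng.poisson(5, 500),
    rng.gamma(2.0, 50.0, 500),
])

X = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Per-segment sizes (and, in practice, profiles) feed campaign targeting.
for seg in range(4):
    print(f"segment {seg}: {np.mean(segments == seg):.1%} of customers")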
Skills Required
• Data Analytics Visionary.
• Strong in SQL & Excel and good to have experience in Tableau.
• Experience in the field of Data Analysis, Data Visualization.
• Strong in analysing data and creating dashboards.
• Strong in communication, presentation and business intelligence.
• Multi-Dimensional, "Growth Hacker" Skill Set with strong sense of ownership for work.
• Aggressive “Take no prisoners” approach.
Role and Responsibilities
- Build a low latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
- Build robust RESTful APIs that serve data and insights to DataWeave and other products (a minimal sketch follows this list)
- Design user interaction workflows on our products and integrate them with data APIs
- Help stabilize and scale our existing systems. Help design the next generation systems.
- Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
- Work closely with the Head of Products and UX designers to understand the product vision and design philosophy
- Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and interns.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope.
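For illustration, a minimal sketch of a low-latency serving pattern for such APIs: a REST endpoint with cache-aside Redis lookups, assuming Flask and redis-py; the route, cache keys, and TTL are hypothetical:

# Minimal sketch: REST endpoint with cache-aside Redis lookups for low latency.
# Assumes Flask and redis-py; routes, keys, and TTL are hypothetical.
import json
from flask import Flask, jsonify
import redis

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379)

def compute_report(report_id: str) -> dict:
    # Stand-in for an expensive analytics query against the data pipeline.
    return {"report_id": report_id, "rows": 42}

@app.route("/reports/<report_id>")
def get_report(report_id):
    cached = cache.get(f"report:{report_id}")
    if cached is not None:
        return jsonify(json.loads(cached))  # cache hit: fast path
    report = compute_report(report_id)      # cache miss: compute and store
    cache.setex(f"report:{report_id}", 300, json.dumps(report))
    return jsonify(report)

if __name__ == "__main__":
    app.run(port=8000)

The cache-aside pattern keeps the hot path to a single Redis read while bounding staleness with the TTL.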
Skills and Requirements
- 8-15 years of experience building and scaling APIs and web applications.
- Experience building and managing large scale data/analytics systems.
- Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
- Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
- Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
- Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
- Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, Elastic.
- Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, Datadog etc.
- Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
- Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.