Principal AI Researcher

at Synapsica Healthcare

Posted by Human Resources
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹45L / yr
Full time
Skills
SVM
OpenCV
Machine Learning (ML)
Deep Learning
Artificial Intelligence (AI)
Python
PyTorch
Keras
Graphics Processing Unit (GPU)
DSP
CNN

Introduction

Synapsica is a Series A funded HealthTech startup founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, rather than having to rely on the cryptic two-liners they are often given as a diagnosis.

Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endia Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls

 

Your Roles and Responsibilities

Synapsica is looking for a Principal AI Researcher to lead and drive AI-based research and development efforts. The ideal candidate has extensive experience in computer vision and AI research, gained through academic study or industrial R&D projects, and is excited to work on advanced exploratory research and development in computer vision and machine learning to create the next generation of advanced radiology solutions.

The role involves computer vision tasks including the development, customization, and training of Convolutional Neural Networks (CNNs); the application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and involves reading and implementing existing research papers, deep problem analysis, frequent review of results, generating new ideas, building new models from scratch, publishing papers, and automating and optimizing key processes. The role spans real-world data handling to the most advanced methods such as transfer learning, generative models, and reinforcement learning, with a focus on understanding quickly and experimenting even faster. The successful candidate will collaborate closely with the medical research team, software developers, and AI research scientists. The candidate must be creative, ask questions, and be comfortable challenging the status quo. The position is based in our Bangalore office.
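As a purely illustrative sketch of the kind of CNN customization and transfer learning described above (the class name, backbone choice, input shape, and hyperparameters are assumptions, not Synapsica's actual models), a custom classifier in PyTorch might look like this:

```python
# Minimal sketch: transfer learning with a custom classification head in PyTorch.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models  # assumes torchvision >= 0.13 for the weights API


class SpineClassifier(nn.Module):
    """Hypothetical binary classifier: pretrained ResNet backbone + custom head."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        # Adapt the first conv layer to single-channel (grayscale) radiology images.
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()  # drop the ImageNet classification head
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))


if __name__ == "__main__":
    model = SpineClassifier()
    dummy = torch.randn(4, 1, 224, 224)  # batch of 4 single-channel 224x224 images
    logits = model(dummy)
    print(logits.shape)  # torch.Size([4, 2])
```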

 

 

Primary Responsibilities

  • Interface between product managers and engineers to design, build, and deliver AI models and capabilities for our spine products.
  • Formulate and design the AI capabilities of our stack, with a special focus on computer vision.
  • Strategize the end-to-end model training flow, including data annotation, model experiments, model optimization, model deployment, and relevant automation.
  • Lead teams, engineers, and scientists to envision and build new research capabilities and ensure delivery of our product roadmap.
  • Organize regular reviews and discussions.
  • Keep the team up to date with the latest industrial and research developments.
  • Publish research and clinical validation papers.

 

Requirements

  • 6+ years of relevant experience solving complex real-world problems at scale using computer-vision-based deep learning.
  • Prior experience in leading and managing a team.
  • Strong problem-solving ability.
  • Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, Caffe, or similar deep learning frameworks.
  • Extensive understanding of computer vision / image processing applications such as object classification, segmentation, and object detection.
  • Ability to write custom Convolutional Neural Network architectures in PyTorch (or a similar framework).
  • Background in publishing research papers and/or patents.
  • A computer vision and AI research background in the medical domain is a plus.
  • Experience with GPU/DSP/other multi-core architecture programming.
  • Effective communication with other project members and project stakeholders.
  • Detail-oriented, eager to learn and acquire new skills.
  • Prior project management and team leadership experience.
  • Ability to plan work and meet deadlines.

About Synapsica Healthcare

At Synapsica, we are creating an AI-first PACS and radiology workflow solution that is fast and secure and automates reporting tasks, helping radiologists create high-quality reports more quickly.

Our goal is to enable radiologists with fast, easy-to-use AI technology that helps generate high-quality, evidence-based reports to significantly improve patient care.

Synapsica is a growth-stage HealthTech startup founded by alumni of AIIMS, IIT-KGP, and IIM-A with a vision to increase the accessibility of diagnostic services globally. We are addressing the shortage of radiologists by bringing automation to various stages of the radiology reporting process to reduce reporting time, costs, and error rates, and to deliver efficiencies for better patient care.

We are deploying computer-vision-based neural network models to identify biomarkers of pathologies in radiology scans. These models not only identify pathologies but also provide a detailed characterization of what exactly constitutes each pathology.


RADIOLens is an intuitive, AI-enabled, cloud-based RIS/PACS solution that makes diagnostic radiology workflows smoother. It provides faster image uploading with zero loss in image quality. RADIOLens helps automate mundane reporting tasks so radiologists can focus on clinical correlations for their patients.

Spindle for MRI Spine helps with the reporting of age-related degeneration of the spine. It automatically identifies key spinal measurements and provides a detailed, standardized report with annotated images of the various abnormal spinal elements. It quickly identifies variations and pathologies such as spinal deformations, degeneration, and listhesis.

SpindleX, for stress X-rays of the spine, takes automation a step further by generating one-click automated reports for all XR cases. These reports quantify all abnormalities and features of injury or early degeneration, such as spinal instability and abnormal intersegmental motion. Illustrative graphs and tables comparing the extent of injury against standards are automatically included in the report, making it easier to generate qualitative reports with visual evidence.

Crescent segregates normal, abnormal, and bad-quality captures in chest X-rays. It identifies bad-quality scans that need re-capture, and for good scans it localizes and characterizes common lesions and abnormalities. With Crescent, standard pre-filled reports are generated for normal radiographs, and critical studies are prioritized in the worklist.

We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and The Spinal Kinetics as our partners.

Here’s a small sample of what we’re building: https://youtu.be/MtWSF-x2sxY


Join us if you find this as exciting as we do!

 
 
 
 
Founded
2019
Type
Product
Size
20-100 employees
Stage
Raised funding

Similar jobs

Analytics Head

at Brand Manufacturer for Bearded Men

Agency job
via Qrata
Analytics
Business Intelligence (BI)
Business Analysis
Python
SQL
Relational Database (RDBMS)
Data architecture
Ahmedabad
3 - 10 yrs
₹15L - ₹30L / yr
Analytics Head

Technical must haves:

● Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool, e.g. Informatica/Talend
● At least one data query language: SQL/Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
● A technical degree such as BE/B.Tech is a must

Technical Ideal to have:

● Exposure to our tech stack – PHP
● Microsoft workflows knowledge

Behavioural Pen Portrait:

● Must Have: Enthusiastic, aggressive, vigorous, high achievement orientation, strong command
over spoken and written English
● Ideal: Ability to Collaborate

The preferred location is Ahmedabad; however, for exemplary talent we are open to a remote working model (to be discussed).
Job posted by
Prajakta Kulkarni

Data Engineer

at Information solution provider company

Agency job
via Jobdost
Spark
Hadoop
Big Data
Data engineering
PySpark
Machine Learning (ML)
Scala
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
2 - 4 yrs
₹5L - ₹9L / yr

Data Engineer 

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform (a minimal PySpark sketch follows this list).
  • Driving optimization, testing, and tooling to improve quality.
  • Reviewing and approving high-level and detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users.
  • Understanding various data security standards and using data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with data scientists and the business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.
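Purely as an illustration of the kind of batch pipeline work described in the first responsibility above (the paths, columns, and aggregation are hypothetical, not the company's actual data), a small PySpark job might look like this:

```python
# Minimal PySpark batch-pipeline sketch; paths, columns and aggregations are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-daily-aggregation").getOrCreate()

# Read raw events from HDFS (hypothetical location and schema).
events = spark.read.parquet("hdfs:///data/raw/events/")

daily = (
    events
    .filter(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(
        F.count("*").alias("event_count"),
        F.countDistinct("user_id").alias("unique_users"),
    )
)

# Write a partitioned, query-friendly table for downstream ML/analytics.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/events_daily/"))

spark.stop()
```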

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform
  • Expert in building and optimizing ‘big data’ data/ML pipelines, architectures and data sets.
  • Knowledge in modelling unstructured to structured data design.
  • Experience in Big Data access and storage techniques.
  • Experience in doing cost estimation based on the design and development.
  • Excellent debugging skills for the technical stack mentioned above which even includes analyzing server logs and application logs.
  • Highly organized, self-motivated, proactive, and ability to propose best design solutions.
  • Good time management and multitasking skills to work to deadlines by working independently and as a part of a team.
Job posted by
Saida Jabbar

Quality Assurance Engineer

at Disruptive Electronic Accessories Brand

Agency job
via Unnati
quality assurance
Test Automation (QA)
testing
Quality control
Unit testing
Test cases
Python
TWS
Bluetooth
Software Testing (QA)
Bug tracking
Test Planning
wearables
wifi
Bengaluru (Bangalore)
1 - 3 yrs
₹3L - ₹5L / yr
Here is a chance to work for a Consumer Electronics Brand, where you get to deal with some of the best channels and work with some excellent experienced minds. Read on.

 

Started in 2015, this lifestyle and accessories startup has taken over the consumer electronics sector in India. Our client has a product range that includes an extensive catalog of headphones, speakers, travel accessories, and modern earphones. It believes in providing cutting edge electronic products stamped with durability and affordability.

The brand is associated with some of the major icons across categories and tie-ups with industries covering fashion, sports, and music, of course. The founders are Marketing grads, with vast experience in the consumer lifestyle products and other major brands. With their vigorous efforts toward quality and marketing, they have been able to strike a chord with major E-commerce brands and even consumers.
 
As a Quality Assurance Engineer, you will collaborate cross-functionally to understand support / field issues and drive solutions.
 
What you will do:
  • Creating test plans, writing test cases, and preparing other QA documents based on platform software requirements and specifications
  • Performing hands-on testing of platform/embedded software
  • Automating test cases and processes using Python (see the sketch after this list)
  • Filing and tracking bugs from opening to closure after verification
  • Reporting defects and following through as necessary to complete the testing cycle
  • Providing timely status updates and assisting the team in making decisions about release-readiness
  • Testing of Alexa audio products and Wi-Fi and Bluetooth products
  • Testing of TWS and wearable products (fitness bands, watches)
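As a small, hypothetical illustration of the Python test automation mentioned above (the `DeviceUnderTest` class is an illustrative stand-in, not a real vendor API), a pytest-style check might look like this:

```python
# Hypothetical pytest sketch for automating a Bluetooth pairing check.
# DeviceUnderTest is an illustrative stand-in for a real TWS/wearable test harness.
import pytest


class DeviceUnderTest:
    """Fake device object standing in for a real device test harness."""

    def __init__(self, mac_address: str):
        self.mac_address = mac_address
        self.paired = False

    def pair(self) -> bool:
        self.paired = True  # a real harness would talk to the device here
        return self.paired


@pytest.fixture
def device():
    return DeviceUnderTest(mac_address="AA:BB:CC:DD:EE:FF")


def test_bluetooth_pairing_succeeds(device):
    assert device.pair() is True
    assert device.paired


def test_mac_address_format(device):
    octets = device.mac_address.split(":")
    assert len(octets) == 6
    assert all(len(octet) == 2 for octet in octets)
```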
 

What you need to have:

  • BE / BTech; Specialisation in Electronics engineering or Computer Engineering
  • Experience of 1 to 3 years in Quality Testing of embedded software
  • Knowledge of Python, Test automation and bug tracking
  • Excellent understanding of Alexa, Wifi, Bluetooth and wearables testing
Job posted by
Astha Bharadwaj

Data Scientist

at Innovative Brand Design Studio

Agency job
via Unnati
Data Science
Data Scientist
Python
Tableau
R Programming
Spark
SQL
NOSQL Databases
Machine Learning (ML)
Statistical Modeling
toluna
question pro
Mumbai
2 - 5 yrs
₹8L - ₹15L / yr
Come work with a growing consumer market research team that is currently serving one of the biggest FMCG companies in the world.
 
Our client works with global brands and creates projects that are user-centric. They build cost-effective and compelling product stories that help their clients gain a competitive edge and grow their brand image. Their team of academicians, designers, startup specialists, and experts works for clients across 12 countries, targeting new markets and solutions with an excellent understanding of end-users.
 
They work with global brands from the FMCG, beauty, and hospitality sectors, namely Unilever, Lipton, Lakme, Loreal, AXE, etc., who have chosen them for long-term relationships based on their insights, consumer research, storytelling, and content experience. The founder is a design and product activation expert with over 10 years of impact and over 300 completed projects across India, the UK, South Asia, and the USA.
 
As a Data Scientist, you will help deliver quantitative consumer primary market research through surveys.
 
What you will do:
  • Handling the survey scripting process using survey software platforms such as Toluna, QuestionPro, or Decipher
  • Mining large and complex data sets using SQL, Hadoop, NoSQL, or Spark
  • Delivering complex consumer data analysis using software such as R, Python, and Excel
  • Performing basic statistical analysis such as t-tests and correlation
  • Performing more complex data analysis using machine learning techniques such as classification, regression, clustering, text analysis, and neural networks (a small scikit-learn sketch follows this list)
  • Creating interactive dashboards using software such as Tableau, or any other tool you are able to use
  • Working on statistical and mathematical modelling and the application of ML and AI algorithms
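Purely as an illustration of the classification and clustering techniques listed above (the data is synthetic, not real survey responses), a small scikit-learn sketch might look like this:

```python
# Minimal scikit-learn sketch of classification and clustering on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a coded survey data set.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Classification: e.g. predicting a binary respondent segment.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("classification accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# Clustering: e.g. grouping respondents into segments without labels.
segments = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
print("cluster sizes:", [int((segments == k).sum()) for k in range(3)])
```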

 

What you need to have:
  • Bachelor's or Master's degree in a highly quantitative field (CS, machine learning, mathematics, statistics, economics) or equivalent experience.
  • This is an opportunity for someone eager to prove their data analysis skills with one of the biggest FMCG market players.

 

Job posted by
Astha Bharadwaj

Survey Analytics Analyst

at Leading Management Consulting Firm

R Programming
SPSS
Python
Surveying
Data Analytics
Gurugram
1 - 5 yrs
₹6L - ₹10L / yr
Desired Skills & Mindset:

We are looking for candidates who have demonstrated both a strong business sense and deep understanding of the quantitative foundations of modelling.

• Excellent analytical and problem-solving skills, including the ability to disaggregate issues, identify root causes and recommend solutions
• Experience with statistical programming software such as SPSS and comfort working with large data sets
• R, Python, SAS, and SQL are preferred but not mandatory
• Excellent time management skills
• Good written and verbal communication skills; understanding of both written and spoken English
• Strong interpersonal skills
• Ability to act autonomously, bringing structure and organization to work
• Creative and action-oriented mindset
• Ability to interact in a fluid, demanding and unstructured environment where priorities evolve constantly, and methodologies are regularly challenged
• Ability to work under pressure and deliver on tight deadlines

Qualifications and Experience:

• Graduate degree in Statistics/Economics/Econometrics/Computer Science/Engineering/Mathematics/MBA (with a strong quantitative background) or equivalent
• Strong track record of work experience in the field of business intelligence, market research, and/or advanced analytics
• Knowledge of data collection methods (focus groups, surveys, etc.)
• Knowledge of statistical packages (SPSS, SAS, R, Python, or similar), databases, and MS Office (Excel, PowerPoint, Word)
• Strong analytical and critical thinking skills
• Industry experience in Consumer Experience/Healthcare is a plus
Job posted by
Jayaraj E

Subject Matter Expert - Computer Science

at Tutorbin

Founded 2017  •  Services  •  20-100 employees  •  Bootstrapped
"Computer Science"
Java
dbms
Artificial Intelligence (AI)
Programming
Gurugram
0 - 1.5 yrs
₹3L - ₹6.4L / yr
Education:

B.E. / B.Tech/ M.Tech in Computer Science

Skills 

● Proper understanding of the different programming software/subjects related to undergraduate courses in the Computer Science domain
● Knowledge of Artificial Intelligence and Machine Learning would be a plus
● Excellent interpersonal and communication skills
● Problem-solving attitude with a good command of logical reasoning skills
● Ability to work independently with minimal supervision
● Keen to learn about new tools and technologies for use in changing situations
● Comfortable working in a fast-paced environment with great efficiency

Roles and Responsibilities

● Solving questions from students across the globe on the TutorBin board
● Working on tasks involving various subjects/software related to undergraduate/postgraduate courses in Computer Science
● Reviewing the work completed by tutors on our platform and providing the necessary instructions for rectification as required
● Ensuring the overall quality of work provided to students from our platform
● Managing the tutors on our platform, including their onboarding and performance reviews
● Planning and implementing the training of new tutors on our platform
Job posted by
VashpBuddy PvtLtd

Data Engineer (Fresher)

at Fragma Data Systems

Founded 2015  •  Products & Services  •  employees  •  Profitable
SQL
Data engineering
Data Engineer
Python
Big Data
PySpark
Remote, Bengaluru (Bangalore), Hyderabad
0 - 1 yrs
₹3L - ₹3.5L / yr
Strong programmer with expertise in Python and SQL

● Hands-on work experience in SQL/PLSQL
● Expertise in at least one popular Python framework, such as Django, Flask, or Pyramid (a minimal Flask sketch follows this list)
● Knowledge of object-relational mapping (ORM)
● Familiarity with front-end technologies (like JavaScript and HTML5)
● Willingness to learn and upgrade to Big Data and cloud technologies like PySpark, Azure, etc.
● Team spirit
● Good problem-solving skills
● Ability to write effective, scalable code
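As a minimal, illustrative example of the Python framework experience asked for above (the endpoint and its behaviour are hypothetical), a bare-bones Flask service looks like this:

```python
# Minimal Flask sketch, purely illustrative of the framework experience asked for above.
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/health")
def health():
    """Simple health-check endpoint returning a JSON status."""
    return jsonify(status="ok")


if __name__ == "__main__":
    app.run(debug=True)  # development server only
```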
Job posted by
Evelyn Charles

Sr. Data Engineer ( a Fintech product company )

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Data engineering
Data Engineer
Big Data
Big Data Engineer
Python
Data Visualization
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Data-flow analysis
Amazon Web Services (AWS)
PL/SQL
NOSQL Databases
PostgreSQL
ETL
data pipelining
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies such as NodeJS, Ruby on Rails, reactive programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume the data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse (a simplified transformation sketch follows this list).

  • Implement Data warehouse entities with common re-usable data model designs with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
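dbt models themselves are written in SQL/Jinja; purely as a language-neutral illustration of the same extract-transform-load idea referenced above (the connection string, table, and column names are hypothetical), a small pandas/SQLAlchemy sketch might look like this:

```python
# Simplified ELT-style transformation sketch (pandas + SQLAlchemy); in practice this
# step would typically live in a dbt SQL model. All names here are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/warehouse")

# Extract: pull raw transaction rows loaded by the ingestion pipelines.
raw = pd.read_sql("SELECT account_id, amount, created_at FROM raw.transactions", engine)

# Transform: daily cash-flow aggregate per account.
raw["txn_date"] = pd.to_datetime(raw["created_at"]).dt.date
daily_cashflow = (
    raw.groupby(["account_id", "txn_date"], as_index=False)["amount"]
       .sum()
       .rename(columns={"amount": "net_cashflow"})
)

# Load: write the modelled table back into the warehouse schema.
daily_cashflow.to_sql("daily_cashflow", engine, schema="analytics",
                      if_exists="replace", index=False)
```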

 

What To Bring

  • 3+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 2+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced with the formulation of ideas; building proof-of-concept (POC) and converting them to production-ready projects

  • Experience building and deploying applications on on-premise infrastructure and on AWS or Google Cloud.

  • Basic understanding of Kubernetes & docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

 

 

Job posted by
chinnapareddy S

Data Warehousing Engineer

at DataToBiz

Founded 2018  •  Services  •  20-100 employees  •  Bootstrapped
Datawarehousing
Amazon Redshift
Analytics
Python
Amazon Web Services (AWS)
SQL server
Data engineering
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
Job Responsibilities :  
As a Data Warehouse Engineer in our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
Develops or modifies procedures to solve complex database design problems, including performance, scalability, security and integration issues for various clients (on-site and off-site).
Design, develop, test, and support the data warehouse solution.
Adapt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
Design and implement formal data warehouse testing strategies and plans, including unit testing, functional testing, integration testing, performance testing, and validation testing.
Evaluate all existing hardware and software according to the required standards, with the ability to configure hardware clusters as per the scale of data.
Data integration using enterprise development tool-sets (e.g. ETL, MDM, CDC, Data Masking, Quality).
Maintain and develop all logical and physical data models for enterprise data warehouse (EDW).
Contributes to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
Interact with end users/clients and translate business language into technical requirements.
Acts independently to expose and resolve problems.  
Participate in data warehouse health monitoring and performance optimizations as well as quality documentation.

Job Requirements :  
2+ years of experience working in software development and data warehouse development for enterprise analytics.
2+ years of working with Python, with major experience in Redshift as a must and exposure to other warehousing tools.
Deep expertise in data warehousing and dimensional modelling, and the ability to bring best practices in data management, ETL, API integrations, and data governance.
Experience working with data retrieval and manipulation tools for various data sources, such as relational databases (MySQL, PostgreSQL, Oracle) and cloud-based storage.
Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience with the AWS cloud stack (S3, Glue, Redshift, Lake Formation).
Experience with various DevOps practices, helping the client deploy and scale systems as per requirements.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of the Logistics and/or Transportation domain is a plus.
Ability to handle/ingest very large data sets (both real-time and batched data) in an efficient manner.
Job posted by
PS Dhillon

Python Developer

at Intentbase

Founded 2015  •  Product  •  100-500 employees  •  Profitable
Pandas
Numpy
Bash
Structured Query Language
Python
Big Data
NOSQL Databases
Pune
2 - 5 yrs
₹5L - ₹10L / yr
We are an early-stage startup working in analytics, big data, machine learning, data visualization on multiple platforms, and SaaS. We have offices in Palo Alto and WTC, Kharadi, Pune, and count some marquee names among our customers.

We are looking for a really good Python programmer who must have scientific programming experience. Hands-on experience with NumPy and the Python scientific stack is a must, along with a demonstrated ability to track and work with hundreds to thousands of files and GB-TB of data, exposure to ML and data mining algorithms, and comfort working in a Unix environment and with SQL.

You will be required to do the following (a small NumPy/pandas sketch follows):
  • Use command-line tools to perform data conversion and analysis
  • Support other team members in retrieving and archiving experimental results
  • Quickly write scripts to automate routine analysis tasks
  • Create insightful, simple graphics to represent complex trends
  • Explore, design, and invent new tools and design patterns to solve complex big data problems

Experience working on a long-term, lab-based project is desirable (academic experience acceptable).
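Purely as an illustration of the kind of routine analysis-automation script described above (the file layout and column names are hypothetical), a small NumPy/pandas sketch might look like this:

```python
# Small NumPy/pandas sketch of a routine analysis-automation script; the file layout
# and column names are hypothetical.
import glob

import numpy as np
import pandas as pd

# Gather a batch of experiment result files (hypothetical naming convention).
frames = [pd.read_csv(path) for path in sorted(glob.glob("results/experiment_*.csv"))]
results = pd.concat(frames, ignore_index=True)

# Summarize a measurement column per experimental condition.
summary = results.groupby("condition")["measurement"].agg(
    mean="mean",
    std="std",
    p95=lambda s: np.percentile(s, 95),
)
summary.to_csv("results/summary.csv")
print(summary)
```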
Job posted by
Nischal Vohra