Synapsica is a Series A-funded HealthTech startup founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, and they should not have to rely on a cryptic two-liner handed to them as a diagnosis.
Towards this aim, we are building an AI-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
Synapsica is looking for a Principal AI Researcher to lead and drive AI-based research and development efforts. The ideal candidate should have extensive experience in computer vision and AI research, whether through academic study or industrial R&D projects, and should be excited to work on advanced exploratory research and development in computer vision and machine learning to create the next generation of advanced radiology solutions.
The role involves computer vision tasks, including the development, customization, and training of Convolutional Neural Networks (CNNs); the application of ML techniques such as SVMs, regression, and clustering; and traditional image processing (OpenCV, etc.). The role is research-focused and involves reading and implementing existing research papers, deep problem analysis, frequent review of results, generating new ideas, building new models from scratch, publishing papers, and automating and optimizing key processes. The work spans from real-world data handling to advanced methods such as transfer learning, generative models, and reinforcement learning, with a focus on understanding quickly and experimenting even faster. The candidate will collaborate closely with the medical research team, software developers, and AI research scientists, and must be creative, ask questions, and be comfortable challenging the status quo. The position is based in our Bangalore office.
- Interface between product managers and engineers to design, build, and deliver AI models and capabilities for our spine products.
- Formulate and design AI capabilities of our stack with special focus on computer vision.
- Strategize the end-to-end model training flow, including data annotation, model experiments, model optimization, model deployment, and relevant automation.
- Lead teams, engineers, and scientists to envision and build new research capabilities and ensure delivery of our product roadmap.
- Organize regular reviews and discussions.
- Keep the team up-to-date with latest industrial and research updates.
- Publish research and clinical validation papers.
Requirements
- 6+ years of relevant experience in solving complex real-world problems at scale using computer vision-based deep learning.
- Prior experience in leading and managing a team.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, Caffe, or similar deep learning frameworks.
- Extensive understanding of computer vision/image processing applications such as object classification, segmentation, and object detection.
- Ability to write custom Convolutional Neural Network architectures in PyTorch or a similar framework (a minimal sketch follows this list).
- Background in publishing research papers and/or patents
- A Computer Vision and AI research background in the medical domain is a plus
- Experience with GPU/DSP/other multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn and acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
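For a flavor of the custom-architecture work mentioned above, here is a minimal sketch of a small CNN classifier in PyTorch. The layer sizes, single-channel input, and two-class output are illustrative assumptions, not Synapsica's actual architecture:

```python
import torch
import torch.nn as nn

class SmallSpineCNN(nn.Module):
    """Minimal custom CNN for single-channel (e.g., MRI slice) classification.
    All sizes here are illustrative assumptions."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # grayscale input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global average pooling
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SmallSpineCNN()
    dummy = torch.randn(4, 1, 224, 224)  # batch of 4 fake grayscale scans
    print(model(dummy).shape)            # torch.Size([4, 2])
```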
About Synapsica Healthcare
At Synapsica, we are creating an AI-first PACS and radiology workflow solution that is fast and secure and automates reporting tasks, helping radiologists create high-quality reports more quickly.
Our goal is to equip radiologists with fast, easy-to-use AI technology that helps generate high-quality, evidence-based reports to significantly improve patient care.
Synapsica is a growth-stage HealthTech startup founded by alumni of AIIMS, IIT-KGP, and IIM-A with a vision to increase the accessibility of diagnostic services globally. We are addressing the shortage of radiologists by bringing automation to various stages of the radiology reporting process to reduce reporting time, costs, and error rates, and to deliver efficiencies for better patient care.
We are deploying computer-vision-based neural network models to identify biomarkers of pathologies in radiology scans. These models not only identify pathologies but also provide a detailed characterization of what exactly constitutes each pathology.
RADIOLens is an intuitive, AI-enabled, cloud-based RIS/PACS solution that makes diagnostic radiology workflows smoother. It provides faster image uploading with zero loss in image quality. RADIOLens helps automate mundane reporting tasks so radiologists can focus on clinical correlations for their patients.
Spindle for MRI Spine helps with the reporting of age-related degeneration of the spine. It automatically identifies key spinal measurements and provides a detailed, standardized report with annotated images of the various abnormal spinal elements. It quickly identifies variations and pathologies such as spinal deformities, degeneration, and listhesis.
SpindleX for stress X-rays of the spine takes automation a step further by generating one-click automated reports for all XR cases. These reports quantify all abnormalities and features of injury or early degeneration, such as spinal instability and abnormal intersegmental motion. Illustrative graphs and tables comparing the extent of injury against standards are automatically included in the report, making it easier to generate high-quality reports with visual evidence.
Crescent segregates normal, abnormal, and bad-quality captures in chest X-rays. It identifies bad-quality scans that need re-capture, and for good-quality scans it localizes and characterizes common lesions and abnormalities. With Crescent, standard pre-filled reports are generated for normal radiographs, and critical studies are prioritized in the worklist.
We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and The Spinal Kinetics as our partners.
Here’s a small sample of what we’re building: https://youtu.be/MtWSF-x2sxY
Join us if you find this as exciting as we do!
We are looking for a skilled Senior/Lead Big Data Engineer to join our team. The role is part of the research and development team, where, with your enthusiasm and knowledge, you will be a technical evangelist for the development of our inspection technology and products.
At Elop we are developing product lines for sustainable infrastructure management using our own patented ultrasound scanner technology, combining it with other data sources to give a holistic overview of concrete structures. At Elop you will have world-class colleagues who are highly motivated to position the company as the international standard for structural health monitoring. If you have the right character, you will be professionally challenged and developed.
This position requires travel to Norway.
Elop is a sister company of Simplifai, and the two are co-located in all geographic locations.
Roles and Responsibilities
- Define technical scope and objectives through research and participation in requirements gathering and definition of processes
- Ingest and process data from data sources (Elop Scanner) in raw format into the Big Data ecosystem
- Process real-time data feeds using the Big Data ecosystem (a minimal sketch follows this list)
- Design, review, implement and optimize data transformation processes in Big Data ecosystem
- Test and prototype new data integration/processing tools, techniques and methodologies
- Convert MATLAB code into Python/C/C++ (see the example after this list).
- Participate in overall test planning for application integrations, functional areas, and projects.
- Work with cross-functional teams in an Agile/Scrum environment to ensure a quality product is delivered.
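To make the real-time processing item above concrete, here is a minimal sketch using Spark Structured Streaming with a Kafka source. The topic name, message schema, servers, and output paths are hypothetical, and it assumes the spark-sql-kafka connector package is available:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, DoubleType, StringType

spark = SparkSession.builder.appName("scanner-ingest").getOrCreate()

# Hypothetical schema for JSON messages coming off the scanner feed.
schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("amplitude", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "elop-scanner")   # hypothetical topic name
       .load())

# Kafka delivers bytes; cast to string and parse the JSON payload.
parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("m"))
             .select("m.*"))

# Land the parsed feed as Parquet for downstream batch processing.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/scanner")                     # hypothetical path
         .option("checkpointLocation", "/data/checkpoints")   # for fault tolerance
         .start())
query.awaitTermination()
```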
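And for the MATLAB conversion item, the work typically amounts to translating vectorized MATLAB into NumPy/SciPy. A hypothetical, ultrasound-flavored example (the routine itself is invented for illustration):

```python
# MATLAB original:
#   env = abs(hilbert(signal));          % envelope of an A-scan
#   env_db = 20 * log10(env / max(env)); % log compression
import numpy as np
from scipy.signal import hilbert

def envelope_db(signal: np.ndarray) -> np.ndarray:
    """Log-compressed signal envelope, a direct translation of the MATLAB above."""
    env = np.abs(hilbert(signal))        # analytic-signal magnitude = envelope
    return 20 * np.log10(env / env.max())

if __name__ == "__main__":
    t = np.linspace(0, 1, 1000)
    pulse = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)  # decaying tone burst
    print(envelope_db(pulse)[:5])
```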
Desired Candidate Profile
- Bachelor's degree in Statistics, Computer Science, or equivalent
- 7+ years of experience in the Big Data ecosystem, especially Spark, Kafka, Hadoop, and HBase.
- 7+ years of hands-on experience in Python/Scala is a must.
- Experience architecting big data applications is required.
- Excellent analytical and problem solving skills
- Strong understanding of data analytics and data visualization, with the ability to help the development team visualize data.
- Experience with signal processing is a plus.
- Experience working with client-server architectures is a plus.
- Knowledge of database technologies such as RDBMS, graph DBs, document DBs, Apache Cassandra, and OpenTSDB
- Good communication skills, written and oral, in English
We can Offer
- An everyday life of exciting and challenging tasks, developing socially beneficial solutions
- Being part of the company's research and development team, creating unique and innovative products
- Colleagues with world-class expertise, in an organization that has ambitions and is highly motivated to position the company as an international player in maintenance support and monitoring of critical infrastructure
- A good working environment with skilled and committed colleagues, in an organization with short decision paths
- Professional challenges and development
- You will be part of the data delivery team and will have the opportunity to develop a deep understanding of the domain/function.
- You will design and drive the work plan for the optimization/automation and standardization of the processes incorporating best practices to achieve efficiency gains.
- You will run data engineering pipelines, link raw client data with the data model, conduct data assessments, perform data quality checks, and transform data using ETL tools.
- You will perform data transformations, modeling, and validation activities, as well as configure applications to the client context. You will also develop scripts to validate, transform, and load raw data using programming languages such as Python and/or PySpark (a minimal sketch follows this list).
- In this role, you will determine database structural requirements by analyzing client operations, applications, and programming.
- You will develop cross-site relationships to enhance idea generation, and manage stakeholders.
- Lastly, you will collaborate with the team to support ongoing business processes by delivering high-quality end products on-time and perform quality checks wherever required.
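As a concrete illustration of the validate-transform-load work described above, here is a minimal PySpark sketch. The file paths, column names, and the 1% null-rate threshold are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("client-etl").getOrCreate()

# Extract: read raw client data (hypothetical path and columns).
raw = spark.read.option("header", True).csv("/landing/client_orders.csv")

# Data-quality check: fail fast if too many order_ids are missing.
total = raw.count()
nulls = raw.filter(F.col("order_id").isNull()).count()
if total and nulls / total > 0.01:
    raise ValueError(f"order_id null rate {nulls / total:.2%} exceeds 1% threshold")

# Transform: type casting, a derived partition column, and deduplication.
clean = (raw.withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("order_date", F.to_date("order_ts"))
            .withColumn("amount", F.col("amount").cast("double"))
            .dropDuplicates(["order_id"]))

# Load: write to the curated zone, partitioned for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders")
```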
- Bachelor’s degree in Engineering or Computer Science; a Master’s degree is a plus
- 3+ years of professional work experience with a reputed analytics firm
- Expertise in handling large amounts of data with Python or PySpark
- Ability to conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools
- Experience deploying ETL/data pipelines and workflows on cloud platforms such as Azure and Amazon Web Services will be valued
- Comfort with data modelling principles (e.g., database structure, entity relationships, UIDs) and software development principles (e.g., modularization, testing, refactoring)
- A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
- Strong problem-solving, requirement-gathering, and leadership skills
- Track record of completing projects successfully on time, within budget, and within scope
- Must have 5-8 years of experience in handling data
- Must have the ability to interpret large amounts of data and to multi-task
- Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting, and databases (SQL, etc.)
- Must have strong analytical and critical thinking to resolve business problems using data and tech
- Must have familiarity with and interest in cloud technologies (Google Cloud Platform, Microsoft Azure, Amazon AWS), open-source technologies, and enterprise technologies
- Must have the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
- Must have good communication skills
- Working knowledge of or exposure to Elasticsearch, PostgreSQL, Athena, PrestoDB, and Jupyter Notebook
Solve problems in the speech and NLP domain using advanced deep learning and machine learning techniques. A few examples of these problems:
* Limited-resource speaker diarization on mono-channel recordings in noisy environments.
* Speech enhancement to improve the accuracy of downstream speech analytics tasks.
* Automatic speech recognition for accent-heavy audio with noisy backgrounds.
* Speech analytics tasks, including emotion detection, empathy detection, and keyword extraction.
* Text analytics tasks, including topic modeling, entity and intent extraction, opinion mining, text classification, and sentiment detection on multilingual data (a minimal sketch follows this list).
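As a minimal illustration of one of the text analytics tasks above, here is a TF-IDF plus logistic-regression sentiment baseline in scikit-learn; the tiny inline dataset is purely illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy call-transcript snippets with sentiment labels (1 = positive, 0 = negative).
texts = ["the agent was very helpful", "terrible call, nothing resolved",
         "great support, thank you", "worst experience ever"]
labels = [1, 0, 1, 0]

# Word and bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["helpful and quick resolution"]))  # expected: [1]
```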
A typical day at work
You will work closely with the product team to own a business problem. You will then translate the business problem into a machine learning problem. Next, you will review the literature to identify approaches to solve it, test these approaches, identify the best one, add your own insights to improve performance, and ship it to production!
What should you know?
* Solid understanding of Classical Machine Learning and Deep Learning concepts and algorithms.
* Experience with literature review either in academia or industry.
* Proficiency in at least one programming language such as Python, C, C++, Java, etc.
* Proficiency in Machine Learning tools such as TensorFlow, Keras, Caffe, Torch/PyTorch or Theano.
* An advanced degree in Computer Science, Electrical Engineering, Machine Learning, Mathematics, Statistics, Physics, or Computational Linguistics.
What do we offer?
* You’ll learn insanely fast here.
* ESOPs and competitive compensation.
* Opportunities and encouragement to publish research at top conferences, with paid trips to attend the workshops and conferences where you have published.
* Independent work, flexible timings and sense of ownership of your work.
* Mentorship from distinguished researchers and professors.
We are looking for applicants with a strong background in Analytics and Data Mining (web, social, and big data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inference, Information Retrieval, Large-Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, or Human-Computer Interaction and Information Visualization. Applicants with backgrounds in other quantitative areas are also encouraged to apply.
We are looking for someone who can create and implement AI solutions. If you have built a product like IBM Watson in the past, rather than just using Watson to build applications, this could be the perfect role for you.
All successful candidates are expected to dive deep into problem areas of Zycus' interest and invent technology solutions that not only advance the current products but also generate new product options that can give the organization a strategic advantage.
- Experience in predictive modelling and predictive software development
- Skilled in Java, C++, Perl/Python (or similar scripting language)
- Experience using R, MATLAB, or other statistical software
- Experience in mentoring junior team members, and guiding them on machine learning and data modelling applications
- Strong communication and data presentation skills
- Classification (SVM, decision trees, random forests, neural networks)
- Regression (linear, polynomial, logistic, etc.)
- Classical optimization (gradient descent, Newton-Raphson, etc.; a worked sketch appears at the end of this section)
- Graph theory (network analytics)
- Heuristic optimization (genetic algorithms, swarm intelligence)
- Deep learning (LSTMs, convolutional NNs, recurrent NNs)
- Experience: 3-9 years
- The ideal candidate must have proven expertise in Artificial Intelligence (including deep learning algorithms), Machine Learning, and/or NLP
- The candidate must also have expertise in programming traditional machine learning algorithms, and in algorithm design and usage
- Experience with large data sets and distributed computing in the Hadoop ecosystem is preferred
- Fluency with databases
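As a worked sketch of the classical optimization item listed earlier, here is batch gradient descent fitting a least-squares linear regression on synthetic data:

```python
import numpy as np

# Synthetic regression problem: y = X @ [2, -3] + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -3.0]) + 0.1 * rng.normal(size=100)

w = np.zeros(2)   # initial weights
lr = 0.1          # learning rate
for _ in range(500):
    residual = X @ w - y                  # prediction error
    grad = 2.0 / len(y) * X.T @ residual  # gradient of mean squared error
    w -= lr * grad                        # descent step

print(w)  # converges close to [2.0, -3.0]
```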