Sr AI Scientist, Bengaluru
Synapsica is a growth-stage HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on a cryptic two-line diagnosis. Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and Spinal Kinetics as our partners.
Your Roles and Responsibilities
The role involves computer vision tasks including the development, customization, and training of Convolutional Neural Networks (CNNs); the application of ML techniques (SVM, regression, clustering, etc.); and traditional image processing (OpenCV, etc.). The role is research-focused and involves reviewing and implementing existing research papers, deep problem analysis, generating new ideas, and automating and optimizing key processes.
- 4+ years of relevant experience in solving complex real-world problems at scale via computer vision based deep learning.
- Strong problem-solving ability
- Prior experience with Python, cuDNN, TensorFlow, PyTorch, Keras, Caffe (or similar deep learning frameworks).
- Extensive understanding of computer vision/image processing applications such as object classification, segmentation, object detection, etc.
- Ability to write custom Convolutional Neural Network architectures in PyTorch (or similar)
- Experience with GPU/DSP/other multi-core architecture programming
- Effective communication with other project members and project stakeholders
- Detail-oriented, eager to learn, acquire new skills
- Prior Project Management and Team Leadership experience
- Ability to plan work and meet deadlines
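As a small illustration of the traditional image-processing side of this role, here is a minimal 2D convolution in plain NumPy applying a Sobel kernel to a synthetic image; the image and kernel are invented for illustration, and in practice OpenCV's optimized routines would be used instead.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# Sobel kernel that responds to vertical edges
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Synthetic image: dark left half, bright right half -> one vertical edge
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = conv2d(img, sobel_x)
```

The same sliding-window operation, with learned rather than hand-crafted kernels, is the building block of the CNN architectures mentioned above.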
About Synapsica Healthcare
At Synapsica, we are creating an AI-first PACS and radiology workflow solution that is fast and secure and automates reporting tasks, helping radiologists create high-quality reports more quickly.
Our goal is to equip radiologists with fast, easy-to-use AI technology that helps generate high-quality, evidence-based reports to significantly improve patient care.
Synapsica is a growth-stage HealthTech startup founded by alumni from AIIMS, IIT-KGP & IIM-A with a vision to increase the accessibility of diagnostic services globally. We are solving the shortage of radiologists by bringing automation to various stages of the radiology reporting process to reduce reporting time, costs & error rates, and deliver efficiencies for better patient care.
We are deploying computer-vision-based neural network models to identify biomarkers of pathologies in radiology scans. These models not only identify pathologies but also provide a detailed characterization of what exactly constitutes each pathology.
RADIOLens is an intuitive, AI-enabled, cloud-based RIS/PACS solution that makes diagnostic radiology workflows smoother. It provides faster image uploading with zero loss in image quality. RADIOLens helps automate mundane reporting tasks so radiologists can focus on clinical correlations for their patients.
Spindle for MRI Spine helps with reporting of age-related degeneration of the spine. It automatically identifies key spinal measurements and provides a detailed, standardized report with annotated images of the various abnormal spinal elements. It quickly identifies variations and pathologies such as spinal deformations, degeneration, listhesis, etc.
SpindleX for stress X-rays of the spine takes automation a step further by generating one-click automated reports for all XR cases. These reports quantify all abnormalities and features of injury or early degeneration, such as spinal instability, abnormal intersegmental motion, etc. Illustrative graphs and tables comparing the extent of injury against standards are automatically included in the report, making it easier to generate quantitative reports with visual evidence.
Crescent segregates normal, abnormal, and poor-quality captures in chest X-rays. It identifies poor-quality scans that need re-capture, and for good scans it localizes and characterizes common lesions & abnormalities. With Crescent, standard, pre-filled reports are generated for normal radiographs and critical studies are prioritized in the worklist.
We are backed by Y Combinator and other investors from India, the US, and Japan. We are proud to have GE, AIIMS, and Spinal Kinetics as our partners.
Here’s a small sample of what we’re building: https://youtu.be/MtWSF-x2sxY
Join us, if you find this as exciting as we do!
At Anarock Tech, we are building a modern technology platform with automated analytics and reporting tools. This offers timely solutions to our real estate clients while delivering financially favourable and efficient results.
If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ANAROCK is the place for you.
We are looking for a Machine Learning (ML) Engineer to help us create artificial intelligence products. The ML Engineer's responsibilities include creating machine learning models and retraining systems. To do this job successfully, you need exceptional skills in statistics and programming. If you also have knowledge of data science and software engineering, we'd like to meet you. Your ultimate goal will be to shape and build efficient self-learning applications.
Key job responsibilities
- Designing and developing machine learning and deep learning systems
- Running machine learning tests and experiments
- Implementing appropriate ML algorithms
- Good understanding of algorithms and data structures
- Experience in designing scalable ML systems is a plus
- Expert programming experience in at least one general-purpose language (Ruby, Python, Java, Elixir, C/C++); strong OO skills preferred
- Experience with ML/DL frameworks
- Experience with pandas, NumPy, etc.
Skills that will help you build a success story with us
- Worked in a start-up environment with high levels of ownership and full dedication.
- Experience with NoSQL datastores like Redis, MongoDB, CouchDB, etc., with an understanding of the underlying sharding and scaling techniques
- Experience in building highly scalable business applications, which involve implementing large complex business flows and dealing with a huge amount of data
Experience: 1 - 4 years
Locations: Bangalore or Mumbai or Gurgaon
Anarock Ethos - Values Over Value:
Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.
We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.
- Convert machine learning models into application programming interfaces (APIs) so that other applications can use them
- Build AI models from scratch and help the different components of the organization (such as product managers and stakeholders) understand what results they gain from the model
- Build data ingestion and data transformation infrastructure
- Automate infrastructure that the data science team uses
- Perform statistical analysis and tune the results so that the organization can make better-informed decisions
- Set up and manage AI development and product infrastructure
- Be a good team player, as coordinating with others is a must
Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade.
With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?
What’s the job?
As a Data Engineer at Kwalee you will be connecting data generated by our games, and external sources, to the range of products and services which the Data Science team provide for Kwalee.
What you will be doing
- Building efficient, scalable data pipelines that are the backbone of the valuable products and services Kwalee relies on.
- Leveraging AWS cloud technologies to deal with huge volumes of data.
- Collaborating with all members of the Data Science team and other key stakeholders to understand the strengths and weaknesses of each process and service.
- Applying a deep knowledge of data warehousing to maximise the productivity of analysts and ETL processes.
- Adopting new or different technologies into the existing data stack.
How you will be doing this
- You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.
- You'll think creatively, be motivated by challenges, and constantly strive for the best.
- You’ll work with cutting edge technology, if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!
Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.
Skills and Requirements
- Exceptional knowledge of SQL, particularly Redshift.
- Working knowledge of Python for the purposes of automation and data manipulation.
- Proven record of successfully working with big data.
- Varied experience with a range of AWS services such as Kinesis, Redshift, Glue, Lambda, Spectrum and S3.
- Eagerness to learn new technologies and apply them to your work.
- Excellent problem solving skills.
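The SQL-plus-Python combination above can be sketched with a toy ETL aggregation step. This example uses an in-memory SQLite database purely as a stand-in for a warehouse like Redshift; the events table, its columns, and the sample rows are all invented for illustration.

```python
import sqlite3

# In-memory database standing in for a warehouse table (Redshift in practice).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (player_id TEXT, game TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("p1", "Draw It", 0.99), ("p2", "Draw It", 1.99), ("p1", "Traffic Cop 3D", 0.0)],
)

# A typical aggregation step in an ETL job: total revenue per game
rows = conn.execute(
    "SELECT game, ROUND(SUM(revenue), 2) FROM events GROUP BY game ORDER BY game"
).fetchall()
```

In production the same pattern would run against much larger tables, with Python handling orchestration and the database doing the heavy lifting.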
- We want everyone involved in our games to share our success; that's why we have a generous team profit-sharing scheme from day 1 of employment
- In addition to a competitive salary we also offer private medical cover and life assurance
- Creative Wednesdays! (Design and make your own games every Wednesday)
- 20 days of paid holidays plus bank holidays
- Hybrid model available depending on the department and the role
- Relocation support available
- Great work-life balance with flexible working hours
- Quarterly team building days - work hard, play hard!
- Monthly employee awards
- Free snacks, fruit and drinks
We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.
Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.
We are an emerging artificial-intelligence startup catering to the needs of industries that employ cutting-edge technologies for their operations. Currently, we provide services to disruptive sectors such as drone tech, video surveillance, human-computer interaction, etc. In general, we believe that AI has the ability to shape the future of humanity, and we aim to spearhead this transition.
About the role:
We are looking for a highly motivated data engineer with a strong algorithmic mindset and problem-solving propensity.
Since we are operating in a highly competitive market, every opportunity to increase efficiency and cut costs is critical, and the candidate should have an eye for such opportunities. We are constantly innovating – working on novel hardware and software – so a high level of flexibility and speed in learning is expected.
- Analyzing and organizing raw data
- Building MLOps pipelines to support development, experimentation, continuous integration, continuous delivery, verification/validation, and monitoring of AI/ML models.
- Evaluating business needs and objectives
- Interpreting trends and patterns
- Preparing data for prescriptive and predictive modeling
- Building algorithms and prototypes.
- Collaborating with data scientists and architects on projects
- Take offline models that data scientists build and turn them into real machine learning production systems.
- Explore promising new technologies and implement them to create awesome stuff.
- Continuously evaluate the latest packages and frameworks in the ML ecosystem
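The "analyzing and organizing raw data" and "preparing data for modeling" steps above can be sketched as a minimal NumPy pipeline; the array, the NaN-drop cleaning rule, and min-max scaling are illustrative choices, not a prescribed stack.

```python
import numpy as np

def clean_and_scale(raw):
    """Drop rows containing NaNs, then min-max scale each column to [0, 1]."""
    data = raw[~np.isnan(raw).any(axis=1)]       # data cleaning: remove corrupt rows
    lo, hi = data.min(axis=0), data.max(axis=0)  # per-feature range
    span = np.where(hi > lo, hi - lo, 1.0)       # guard against constant columns
    return (data - lo) / span

raw = np.array([[1.0, 10.0],
                [np.nan, 20.0],   # corrupt row, removed by cleaning
                [3.0, 30.0]])
prepared = clean_and_scale(raw)
```

In a real MLOps pipeline each step would be a tracked, monitored stage in an orchestration framework rather than a single function.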
- Must have:
- SQL, NoSQL Databases
- Good understanding of ML and AI concepts. Hands-on experience in ML model development.
- Data-oriented workflow orchestration frameworks (Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, MLflow, etc.)
- AWS/GCP/Azure experience
- Good to have:
- ETL tool experience
- Data warehousing solutions experience
- Own the design, development, testing, deployment, and craftsmanship of the team’s infrastructure and systems capable of handling massive amounts of requests with high reliability and scalability
- Leverage the deep and broad technical expertise to mentor engineers and provide leadership on resolving complex technology issues
- Entrepreneurial and out-of-box thinking essential for a technology startup
- Guide the team for unit-test code for robustness, including edge cases, usability, and general reliability
- In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers
- Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction
- Ability to understand, optimize and debug imaging algorithms
- Understanding of and experience with the OpenCV library
- Fundamental understanding of mathematical techniques involved in ML and DL schemas (Instance-based methods, Boosting methods, PGM, Neural Networks etc.)
- Thorough understanding of state-of-the-art DL concepts (sequence modeling, attention, convolution, etc.) along with a knack for imagining new schemas that work for the given data.
- Understanding of engineering principles and a clear understanding of data structures and algorithms
- Experience in writing production-level code in either C++ or Java
- Experience with technologies/libraries such as Python pandas, NumPy, SciPy
- Experience with TensorFlow and scikit-learn.
Solve problems in the speech and NLP domain using advanced deep learning and machine learning techniques. A few examples of the problems are:
* Limited-resource speaker diarization on mono-channel recordings in noisy environments.
* Speech enhancement to improve the accuracy of downstream speech analytics tasks.
* Automatic speech recognition for accent-heavy audio with a noisy background.
* Speech analytics tasks, including emotion detection, empathy detection, and keyword extraction.
* Text analytics tasks, including topic modeling, entity and intent extraction, opinion mining, text classification, and sentiment detection on multilingual data.
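As a toy illustration of one of these text-analytics tasks, here is a bag-of-words sentiment classifier trained with the classic perceptron rule in plain NumPy. The corpus, labels, and vocabulary are invented for illustration; real systems would use far richer features and models.

```python
import numpy as np

# Toy corpus and labels (invented): 1 = positive sentiment, 0 = negative
texts = ["great call very helpful", "helpful agent great support",
         "bad call not helpful", "terrible support bad experience"]
labels = np.array([1, 1, 0, 0])

vocab = sorted({w for t in texts for w in t.split()})
index = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    """Bag-of-words count vector over the toy vocabulary."""
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1
    return v

X = np.array([vectorize(t) for t in texts])

# Perceptron training: update weights on each misclassified example
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(100):
    for x, y in zip(X, labels):
        pred = 1 if x @ w + b > 0 else 0
        w += (y - pred) * x
        b += (y - pred)

def predict(text):
    return 1 if vectorize(text) @ w + b > 0 else 0
```

The same vectorize-then-classify structure carries over to the multilingual classification and sentiment tasks listed above, just with stronger representations.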
A typical day at work
You will work closely with the product team to own a business problem. You will then model the business problem as a machine learning problem. Next, you will do a literature review to identify approaches to solve the problem, test these approaches, identify the best one, add your own insights to improve performance, and ship it to production!
What should you know?
* Solid understanding of Classical Machine Learning and Deep Learning concepts and algorithms.
* Experience with literature review either in academia or industry.
* Proficiency in at least one programming language such as Python, C, C++, Java, etc.
* Proficiency in Machine Learning tools such as TensorFlow, Keras, Caffe, Torch/PyTorch or Theano.
* Advanced degree in Computer Science, Electrical Engineering, Machine Learning, Mathematics, Statistics, Physics, or Computational Linguistics
* You’ll learn insanely fast here.
* Esops and competitive compensation.
* Opportunity and encouragement for publishing research at top conferences, paid trips to attend workshop and conferences where you have published.
* Independent work, flexible timings and sense of ownership of your work.
* Mentorship from distinguished researchers and professors.
- Understanding business objectives and developing models that achieve them, along with metrics to track their progress
- Managing available resources such as hardware, data, and personnel so that deadlines are met
- Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Verifying data quality, and/or ensuring it via data cleaning
- Supervising the data acquisition process if more data is needed
- Defining validation strategies
- Defining the pre-processing or feature engineering to be done on a given dataset
- Defining data augmentation pipelines
- Training models and tuning their hyperparameters
- Analysing the errors of the model and designing strategies to overcome them
- Deploying models to production
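The validation-strategy and hyperparameter-tuning steps above can be sketched as a minimal grid search over a ridge-regression penalty in NumPy; the synthetic data, holdout split, and penalty grid are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 2*x0 - x1 + noise
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)

# Validation strategy: a simple holdout split
X_tr, X_val, y_tr, y_val = X[:150], X[150:], y[:150], y[150:]

def fit_ridge(X, y, lam):
    """Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Hyperparameter tuning: pick the penalty with the lowest validation error
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best_lam = min(grid, key=lambda lam: mse(fit_ridge(X_tr, y_tr, lam), X_val, y_val))
w_best = fit_ridge(X_tr, y_tr, best_lam)
```

Error analysis and deployment would follow the same pattern at larger scale: inspect where the chosen model fails on held-out data, then ship the retrained model to production.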