
Job brief
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
Requirements
1. 2 to 4 years of relevant industry experience
2. Experience with linear algebra, statistics, and probability (e.g., distributions), as well as machine learning and deep learning
3. A strong mathematical and statistical background is a must
4. Experience with machine learning frameworks such as TensorFlow, Caffe, PyTorch, or MXNet
5. Strong industry experience in using design patterns, algorithms and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands-on development experience in Python and packages such as NumPy, scikit-learn, and Matplotlib
8. Experience in model building and hyperparameter tuning
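By way of illustration for items 7 and 8, a minimal scikit-learn sketch of model building with hyperparameter tuning; the synthetic data and the parameter grid are placeholders, not anything the posting specifies.

```python
# Minimal sketch: model building plus hyperparameter tuning with scikit-learn.
# The synthetic data and the parameter grid below are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # synthetic feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # synthetic binary labels

# Grid-search over the regularization strength C with 5-fold cross-validation.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```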

About Dori AI
At Dori, we develop platforms and services that enable artificial-intelligence-centered application development for mobile edge devices, embedded IoT devices, on-premise servers, and cloud platforms. The company provides a turnkey solution to add intelligence to applications by simplifying model development and deployment.
We have developed an AI-as-a-service platform that provides prebuilt and custom engines to evaluate, deploy, and monitor artificial intelligence systems for consumer and enterprise applications. Application developers can rapidly develop and deploy AI-enabled applications for multiple operating systems, hardware architectures, and cloud infrastructures.
Similar jobs
Job Title: Data Scientist
Job Duties
- Data Scientist responsibilities include planning projects and building analytics models.
- You should have a strong problem-solving ability and a knack for statistical analysis.
- If you're also able to align our data products with our business goals, we'd like to meet you. Your ultimate goal will be to help improve our products and business decisions by making the most out of our data.
Responsibilities
Own end-to-end business problems and metrics, build and implement ML solutions using cutting-edge technology.
Create scalable solutions to business problems using statistical techniques, machine learning, and NLP.
Design, experiment with, and evaluate highly innovative models for predictive learning
Work closely with software engineering teams to drive real-time model experiments, implementations, and new feature creations
Establish scalable, efficient, and automated processes for large-scale data analysis, model development, deployment, experimentation, and evaluation.
Research and implement novel machine learning and statistical approaches.
Requirements
2-5 years of experience in data science.
In-depth understanding of modern machine learning techniques and their mathematical underpinnings.
Demonstrated ability to build PoCs for complex, ambiguous problems and scale them up.
Strong programming skills (Python, Java)
High proficiency in at least one of the following broad areas: machine learning, statistical modelling/inference, information retrieval, data mining, NLP
Experience with SQL and NoSQL databases
Strong organizational and leadership skills
Excellent communication skills
Data Engineer - Senior
Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.
What are you going to do?
Design and develop high-performance, scalable solutions that meet the needs of our customers.
Work closely with Product Management, Architects, and cross-functional teams.
Build and deploy large-scale systems in Java/Python.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.
Follow best practices that can be adopted in the Big Data stack.
Use your engineering experience and technical skills to drive features and mentor engineers.
What are we looking for (Competencies):
Bachelor’s degree in computer science, computer engineering, or related technical discipline.
Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.
Data handling frameworks: Should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.
Data infrastructure: Should have experience in building, deploying, and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.
Data store: Must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.
Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.
Ability to work with distributed teams in a collaborative and productive manner.
Benefits:
Competitive Salary Packages and benefits.
A collaborative, lively, and upbeat work environment with young professionals.
Job Category: Development
Job Type: Full Time
Job Location: Bangalore
Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (see the sketch after this list)
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformations and aggregations
• Good grasp of ELT architecture: business-rules processing and data extraction from the data lake into data streams for business consumption
• Good customer communication skills
• Good analytical skills
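A minimal sketch of the PySpark work described above; the app name, input path, and column names are hypothetical, not taken from the posting.

```python
# Minimal sketch: DataFrame core functions and Spark SQL side by side.
# The app name, input path, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-agg").getOrCreate()
orders = spark.read.parquet("s3://example-bucket/orders/")   # hypothetical path

# DataFrame API: filter, group, aggregate.
daily = (orders
         .filter(F.col("status") == "COMPLETED")
         .groupBy("order_date")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("customer_id").alias("buyers")))

# The same aggregation expressed in Spark SQL.
orders.createOrReplaceTempView("orders")
daily_sql = spark.sql("""
    SELECT order_date,
           SUM(amount)                 AS revenue,
           COUNT(DISTINCT customer_id) AS buyers
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date
""")
daily.show()
```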
Ganit has flipped the data science value chain: we do not start with a technique; for us, consumption comes first. With this philosophy, we have successfully scaled from a small start-up to a 200-person company with clients in the US, Singapore, Africa, the UAE, and India.
We are looking for experienced data enthusiasts who can make the data talk to them.
You will:
- Understand business problems and translate business requirements into technical requirements.
- Conduct complex data analysis to ensure data quality and reliability, i.e., make the data talk by extracting, preparing, and transforming it.
- Identify, develop and implement statistical techniques and algorithms to address business challenges and add value to the organization.
- Gather requirements and communicate findings in the form of a meaningful story to the stakeholders
- Build & implement data models using predictive modelling techniques. Interact with clients and provide support for queries and delivery adoption.
- Lead and mentor data analysts.
We are looking for someone who has:
- Apart from your love for data and an ability to code even in your sleep, you will need the following:
- Minimum of 2 years of experience in designing and delivering data science solutions.
- Successful retail/BFSI/FMCG/manufacturing/QSR projects in your kitty to show off.
- Deep understanding of various statistical techniques, mathematical models, and algorithms to start the conversation with the data in hand.
- Ability to choose the right model for the data and translate that into code using R, Python, VBA, SQL, etc.
- Bachelor's/Master's degree in Engineering/Technology, an MBA from a Tier-1 B-school, or an MSc in Statistics or Mathematics
Skillset Required:
- Regression
- Classification
- Predictive Modelling
- Prescriptive Modelling
- Python
- R
- Descriptive Modelling
- Time Series
- Clustering
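Taking clustering as one example from the skillset above, a minimal scikit-learn sketch; the synthetic 2-D data and the choice of k = 3 are placeholders.

```python
# Minimal clustering sketch with scikit-learn's KMeans.
# The synthetic 2-D blobs and k = 3 are placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0, 3, 6)])

km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print(km.cluster_centers_)   # one centre should land near each blob
```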
What is in it for you:
- Be a part of building the biggest brand in Data science.
- An opportunity to be a part of a young and energetic team with a strong pedigree.
- Work on awesome projects across industries and learn from the best in the industry, while growing at a hyper rate.
Please Note:
At Ganit, we are looking for people who love problem solving. You are encouraged to apply even if your experience does not precisely match the job description above. Your passion and skills will stand out and set you apart—especially if your career has taken some extraordinary twists and turns over the years. We welcome diverse perspectives, people who think rigorously and are not afraid to challenge assumptions in a problem. Join us and punch above your weight!
Ganit is an equal opportunity employer and is committed to providing a work environment that is free from harassment and discrimination.
All recruitment, selection procedures and decisions will reflect Ganit’s commitment to providing equal opportunity. All potential candidates will be assessed according to their skills, knowledge, qualifications, and capabilities. No regard will be given to factors such as age, gender, marital status, race, religion, physical impairment, or political opinions.
Introduction
Synapsica (http://www.synapsica.com/) is a Series A-funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key team member in the decision-making, implementation, development, and advancement of ML operations for the core AI platform.
Roles and Responsibilities:
- Work closely with a cross functional team to serve business goals and objectives.
- Develop, implement, and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring, and retraining of models
- Design and build application containerisation and orchestration with Docker and Kubernetes on the AWS platform
- Build and maintain code, tools, and packages in the cloud
Requirements:
- At least 2 years of experience in data engineering
- At least 3 years of experience in Python, with familiarity with popular ML libraries
- At least 2 years of experience in model serving and pipelines
- Working knowledge of containerisation and orchestration with Docker and Kubernetes in AWS
- Experience designing distributed systems deployments at scale
- Hands-on experience in coding and scripting
- Ability to write effective, scalable, and modular code
- Familiarity with Git workflows, CI/CD, and NoSQL databases such as MongoDB
- Familiarity with Airflow, DVC, and MLflow is a plus
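Since MLflow is named above as a plus, a minimal experiment-tracking sketch; the run name, parameter, and metric value are purely illustrative.

```python
# Minimal sketch: logging one run to MLflow's local tracking store.
# The run name, parameter, and metric value are illustrative only.
import mlflow

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "logistic_regression")
    mlflow.log_metric("val_auc", 0.87)   # placeholder metric value
```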
Role:
- Understand and translate statistics and analytics to address business problems
- Help with data preparation and data pulls, the first step in machine learning
- Should be able to cut and slice data to extract interesting insights (see the sketch after the requirements below)
- Model development for better customer engagement and retention
- Hands-on experience with relevant tools like SQL (expert), Excel, and R/Python
- Working on strategy development to increase business revenue
Requirements:
- Hands-on experience with relevant tools like SQL (expert), Excel, and R/Python
- Statistics: Strong knowledge of statistics
- Should be able to do data scraping and data mining
- Be self-driven and show the ability to deliver on ambiguous projects
- An ability and interest in working in a fast-paced, ambiguous and rapidly-changing environment
- Should have worked on business projects for an organization, e.g., customer acquisition or customer retention.
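A minimal pandas sketch of the kind of "cut and slice" analysis the role above calls for; the frame and its column names are hypothetical.

```python
# Minimal sketch: slicing a hypothetical transactions frame by segment.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "segment": ["new", "new", "returning", "returning", "returning", "returning"],
    "amount": [120.0, 80.0, 200.0, 50.0, 75.0, 60.0],
})

# Revenue and order counts per segment.
summary = (df.groupby("segment")["amount"]
             .agg(revenue="sum", orders="count")
             .reset_index())
print(summary)
```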
- We are looking for a Data Engineer to build next-generation mobile applications for our world-class fintech product.
- The candidate will be responsible for expanding and optimising our data and data pipeline architecture, as well as optimising data flow and collection for cross-functional teams.
- The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimising data systems and building them from the ground up.
- Looking for a person with a strong ability to analyse and provide valuable insights to the product and business team to solve daily business problems.
- You should be able to work in a high-volume environment and have outstanding planning and organisational skills.
Qualifications for Data Engineer
- Working SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases.
- Experience building and optimising ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills for working with unstructured datasets; ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Looking for a candidate with 2-3 years of experience in a Data Engineer role who is a CS graduate or has equivalent experience.
What we're looking for?
- Experience with big data tools: Hadoop, Spark, Kafka, and alternatives.
- Experience with relational SQL and NoSQL databases, including MySQL/Postgres and MongoDB.
- Experience with data pipeline and workflow management tools: Luigi, Airflow (a minimal Airflow sketch follows below).
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with stream-processing systems: Storm, Spark Streaming.
- Experience with object-oriented/object-function scripting languages: Python, Java, Scala.
Required: Python, R.
Experience in handling large-scale data engineering pipelines.
Excellent verbal and written communication skills.
Proficient in PowerPoint or other presentation tools.
Ability to work quickly and accurately on multiple projects.
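As a rough sketch of the pipeline/workflow tooling named above, a minimal Airflow DAG; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal sketch: a two-task daily Airflow DAG.
# The DAG id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events")        # placeholder extract step

def transform():
    print("clean and aggregate")    # placeholder transform step

with DAG(
    dag_id="example_events_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```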
We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.
Responsibilities:
- You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
- You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
- You will be working with cutting-edge technologies and tools for stream processing using Java, NodeJS and Python, using frameworks like Spring, RxJS etc.
- You will be leveraging big data technologies like Kafka, Elasticsearch, and Spark, processing more than 10 billion events per day to build a maintainable system at scale (a minimal consumer sketch follows this list).
- You will be building Domain Driven APIs as part of a micro-service architecture.
- You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
- You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.
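A minimal stream-consumption sketch; the posting names Kafka but not a client library, so the kafka-python client used here, along with the topic name and broker address, is an assumption.

```python
# Minimal sketch: consuming JSON events from a Kafka topic with kafka-python.
# The topic name and broker address are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "machine-events",                    # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for event in consumer:    # blocks, yielding events as they arrive
    print(event.value)    # placeholder processing step
```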
Requirements:
- Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
- 2 to 5 years of product development experience.
- Experience building applications using Java, NodeJS, or Python.
- Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
- Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies.
- Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
- Experience using NoSQL databases like MongoDB or Elasticsearch.
- Prior experience with container orchestrators like Kubernetes is a plus.
We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.
Please visit https://govimana.com/ to learn more about what we do.
Why Explore a Career at VIMANA
- We recognize that our dedicated team members make us successful and we offer competitive salaries.
- We are a workplace that values work-life balance, provides flexible working hours, and offers full-time remote work options.
- You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
- Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!
VIMANA Interview Process
We usually aim to complete all interviews within a week and provide prompt feedback to the candidate. As of now, all interviews are conducted online due to the COVID situation.
1. Telephonic screening (30 min)
A 30 minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company.
Clarify any queries regarding the job/company.
Give an overview of further interview rounds
2. Technical Rounds
This would be a deep technical round evaluating the candidate's technical capability for the job role.
3. HR Round
The candidate's team and cultural fit will be evaluated during this round.
We would proceed with releasing the offer if the candidate clears all the above rounds.
Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
2. Should understand the importance, and have the know-how, of taking a machine-learning-based solution to the consumer.
3. Hands-on experience with statistical and machine-learning tools and techniques.
4. Good exposure to deep learning libraries like TensorFlow and PyTorch (a minimal PyTorch sketch follows this list).
5. Experience in implementing deep learning techniques for computer vision and NLP. The candidate should be able to develop the solution from scratch, with exposure to relevant GitHub codebases.
6. Should be able to read research papers and pick up ideas to quickly reproduce research in the most comfortable deep learning library.
7. Should be strong in data structures and algorithms. Should be able to perform code complexity analysis and optimization for smooth delivery to production.
8. Expert-level coding experience in Python.
9. Technologies: Backend - Python.
10. Should have the ability to think in terms of long-term solutions, modularity, and reusability of components.
11. Should be able to work collaboratively, open to learning from peers as well as constantly bringing new ideas to the table.
12. Self-driven. Open to peer criticism and feedback, able to take it positively, and ready to be held accountable for the responsibilities undertaken.
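A minimal PyTorch sketch of the from-scratch model development described above; the tiny architecture and its dimensions are illustrative only.

```python
# Minimal sketch: a tiny feed-forward classifier built from scratch in PyTorch.
# The architecture and dimensions are illustrative only.
import torch
from torch import nn

class TinyClassifier(nn.Module):
    def __init__(self, in_dim: int = 64, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier()
logits = model(torch.randn(8, 64))   # a batch of 8 random feature vectors
print(logits.shape)                  # torch.Size([8, 3])
```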

