Data Scientist

at Simplifai Cognitive Solutions Pvt Ltd

Posted by Vipul Tiwari
Pune
3 - 8 yrs
₹5L - ₹30L / yr (ESOP available)
Full time
Skills
Data Science
Machine Learning (ML)
Python
Big Data
SQL
Natural Language Processing (NLP)
Deep Learning
chatbots
Job Description for Data Scientist/ NLP Engineer

Responsibilities for Data Scientist/ NLP Engineer

• Work with customers to identify opportunities for leveraging their data to drive business solutions.
• Develop custom data models and algorithms to apply to data sets.
• Perform basic data cleaning and annotation for incoming raw data.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop the company's A/B testing framework and test model quality.
• Deploy ML models in production.
Qualifications for Junior Data Scientist/ NLP Engineer

• BS or MS in Computer Science, Engineering, or a related discipline.
• 3+ years of experience in Data Science/Machine Learning.
• Experience with the Python programming language.
• Familiarity with at least one database query language, such as SQL.
• Knowledge of text classification and clustering, question answering and query understanding, and search indexing and fuzzy matching.
• Excellent written and verbal communication skills for coordinating across teams.
• Willingness to learn and master new technologies and techniques.
• Knowledge of and experience with statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, NLP, etc.
• Experience with chatbots would be a bonus but is not required.
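The fuzzy-matching skill listed above can be illustrated with a minimal sketch using Python's standard-library difflib; the candidate list, function name, and threshold below are illustrative assumptions, not part of the job description.

```python
from difflib import SequenceMatcher

def fuzzy_match(query: str, candidates: list[str], threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return candidates whose similarity ratio to `query` meets the threshold, best first."""
    scored = [(c, SequenceMatcher(None, query.lower(), c.lower()).ratio()) for c in candidates]
    # Keep only sufficiently similar candidates, sorted by descending score.
    return sorted([(c, s) for c, s in scored if s >= threshold], key=lambda x: -x[1])

companies = ["Simplifai Cognitive Solutions", "Simplify AI Corp", "DataWorks"]
matches = fuzzy_match("simplifai", companies, threshold=0.4)
```

In practice a production system would index candidates first (e.g. with n-grams) rather than scoring every string, but the ratio-and-threshold idea is the same.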

About Simplifai Cognitive Solutions Pvt Ltd

Bård and Erik wanted to bring AI to the people, and they wanted to do it simply. The growth of artificial intelligence accelerated these ambitions: machine learning made it possible for projects to get smaller, solutions smarter, and automation more efficient.

Simplifai was founded in 2017 and has grown considerably since then. Today we work globally and have offices in Norway, India, and Ukraine. We have built a global, diverse organization that is well prepared for further growth.

Founded
2017
Type
Product
Size
100-500 employees
Stage
Bootstrapped

Similar jobs

Data Engineers

at JMAN group

Founded 2007  •  Product  •  100-500 employees  •  Profitable
SQL
Python
Java
ADF
Snowflake schema
Amazon Web Services (AWS)
DBT
Machine Learning (ML)
Chennai
3 - 7 yrs
₹5L - ₹14L / yr
To all the #Dataengineers, we have an immediate requirement for the #Chennai location.

Hiring developers with multiple skill combinations and 3 to 6 years of experience.

Hands-on experience with SQL, Java, and Python.

Knowledge of the tools DBT, ADF, Snowflake, or Databricks would be an added advantage for our current project.

ML and AWS would be a plus.

We need people who can work from our Chennai branch.

Do share your profile to gayathrirajagopalan @jmangroup.com
Job posted by
JMAN Digital Services P Ltd

MicroStrategy

at Response Informatics

Founded 2018  •  Services  •  Bootstrapped
MicroStrategy
SQL
Business Intelligence (BI)
Remote, Bengaluru (Bangalore)
5 - 10 yrs
₹5L - ₹25L / yr
Experience with MicroStrategy, knowledge of databases such as SQL, familiarity with BI concepts, and strong communication skills.
Job posted by
Swagatika Sahoo

Data Engineer- Integration

at SteelEye

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
ETL
Informatica
Data Warehouse (DWH)
Python
pandas
Bengaluru (Bangalore)
4 - 8 yrs
₹20L - ₹30L / yr

About us

SteelEye is the only regulatory compliance technology and data analytics firm that offers transaction reporting, record keeping, trade reconstruction, best execution and data insight in one comprehensive solution. The firm’s scalable secure data storage platform offers encryption at rest and in flight and best-in-class analytics to help financial firms meet regulatory obligations and gain competitive advantage.

The company has a highly experienced management team and a strong board, who have decades of technology and management experience and worked in senior positions at many leading international financial businesses. We are a young company that shares a commitment to learning, being smart, working hard and being honest in all we do and striving to do that better each day. We value all our colleagues equally and everyone should feel able to speak up, propose an idea, point out a mistake and feel safe, happy and be themselves at work.

Being part of a start-up can be as exciting as it is challenging. You will be part of the SteelEye team not just because of your talent but also because of your entrepreneurial flair, which we thrive on at SteelEye. This means we want you to be curious, contribute, ask questions, and share ideas. We encourage you to get involved in helping shape our business.

What you will do

  • Deliver plugins for our Python-based ETL pipelines.
  • Deliver Python services for provisioning and managing cloud infrastructure.
  • Design, develop, unit test, and support code in production.
  • Deal with challenges associated with large volumes of data.
  • Manage expectations with internal stakeholders and context-switch between multiple deliverables as priorities change.
  • Thrive in an environment that uses AWS and Elasticsearch extensively.
  • Keep abreast of technology and contribute to the evolution of the product.
  • Champion best practices and provide mentorship.

What we're looking for

  • Python 3.
  • Python data libraries (such as pandas and NumPy).
  • AWS.
  • Elasticsearch.
  • Performance tuning.
  • Object-oriented design and modelling.
  • Delivering complex software, ideally in a FinTech setting.
  • CI/CD tools.
  • Knowledge of design patterns.
  • Sharp analytical and problem-solving skills.
  • Strong sense of ownership.
  • Demonstrable desire to learn and grow.
  • Excellent written and oral communication skills.
  • Mature collaboration and mentoring abilities.

What will you get?

  • This is an individual contributor role. So, if you are someone who loves to code, solve complex problems, and build amazing products without worrying about anything else, this is the role for you.
  • You will have the chance to learn from the best in the business, who have worked across the world and are technology geeks.
  • A company that always appreciates ownership and initiative. If you are someone who is full of ideas, this role is for you.
Job posted by
akanksha rajput
ETL
Data Warehouse (DWH)
ETL Developer
Relational Database (RDBMS)
Spark
Hadoop
SQL Server
SSIS
ADF
Python
Java
Talend
Azure Data Factory
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹13L / yr

Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools.

Experience with Data Management & data warehouse development

Star schemas, Data Vaults, RDBMS, and ODS

Change Data capture

Slowly changing dimensions

Data governance

Data quality

Partitioning and tuning

Data Stewardship

Survivorship

Fuzzy Matching

Concurrency

Vertical and horizontal scaling

ELT, ETL

Spark, Hadoop, MPP, RDBMS

Experience with DevOps architecture, implementation, and operation

Hands-on working knowledge of Unix/Linux

Building complex SQL queries. Expert SQL and data analysis skills; ability to debug and fix data issues.

Complex ETL program design and coding

Experience in shell scripting and batch scripting.

Good communication (oral & written) and interpersonal skills

Work closely with business teams to understand their business needs and participate in requirements gathering, while creating artifacts and seeking business approval.

Help the business define new requirements, participate in end-user meetings to derive and define business requirements, propose cost-effective solutions for data analytics, and familiarize the team with customer needs, specifications, design targets, and techniques to support task performance and delivery.

Propose sound designs and solutions and adhere to best design and standards practices.

Review and propose industry-best tools and technologies for ever-changing business rules and data sets. Conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks.

Prepare the plan, design and document the architecture, high-level topology design, and functional design; review these with customer IT managers; and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards, and techniques.

Review code developed by other programmers; mentor, guide, and monitor their work, ensuring adherence to programming and documentation policies.

Work with functional business analysts to ensure that application programs are functioning as defined.

Capture user feedback/comments on the delivered systems and document them for the client and project manager's review. Review all deliverables before final delivery to the client for quality adherence.

Technologies (Select based on requirement)

Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift

Tools – Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory

Utilities for bulk loading and extracting

Languages – SQL, PL-SQL, T-SQL, Python, Java, or Scala

JDBC/ODBC, JSON

Data Virtualization Data services development

Service Delivery - REST, Web Services

Data Virtualization Delivery – Denodo

 

ELT, ETL

Cloud certification Azure

Complex SQL Queries

 

Data Ingestion, Data Modeling (Domain), Consumption (RDBMS)
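Of the warehousing concepts listed above, a Type 2 slowly changing dimension is easy to sketch: instead of overwriting an attribute, the current row is expired and a new version appended. A minimal in-memory sketch in Python follows; the record layout and field names are illustrative assumptions, not part of the posting.

```python
from datetime import date

def scd2_upsert(dimension: list[dict], incoming: dict, today: date) -> None:
    """Apply a Type 2 slowly-changing-dimension update in place:
    close out the current row for the key and append a new version."""
    for row in dimension:
        if row["customer_id"] == incoming["customer_id"] and row["is_current"]:
            if row["city"] == incoming["city"]:
                return  # no attribute change; nothing to do
            row["is_current"] = False  # expire the old version
            row["valid_to"] = today
    dimension.append({**incoming, "valid_from": today,
                      "valid_to": None, "is_current": True})

dim = [{"customer_id": 1, "city": "Pune", "valid_from": date(2020, 1, 1),
        "valid_to": None, "is_current": True}]
scd2_upsert(dim, {"customer_id": 1, "city": "Chennai"}, date(2022, 6, 1))
```

In a real warehouse the same close-and-append logic would be expressed as a MERGE or as ETL-tool transformations, with surrogate keys rather than natural keys.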
Job posted by
Jerrin Thomas
PySpark
Python
Spark
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
Roles and Responsibilities:

• Responsible for developing and maintaining applications with PySpark
• Contribute to the overall design and architecture of the applications developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement projects based on functional specifications.

Must-Have Skills:

• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity.
• Excellent experience in big data programming for data transformations and aggregations
• Good at ETL architecture: business rules processing and data extraction from a data lake into data streams for business consumption.
• Good customer communication.
• Good analytical skills
Job posted by
Priyanka U

Sr. Data Scientist

at www.claimgenius

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Data Science
Deep Learning
Python
Image Processing
CNN
Convolutional Neural Networks
OpenCV
Nagpur, Hyderabad
3 - 10 yrs
₹5L - ₹25L / yr

Responsibilities: 
 

  • The Machine & Deep Machine Learning Software Engineer (Expertise in Computer Vision) will be an early member of a growing team with responsibilities for designing and developing highly scalable machine learning solutions that impact many areas of our business. 
  • The individual in this role will help in the design and development of Neural Network (especially Convolution Neural Networks) & ML solutions based on our reference architecture which is underpinned by big data & cloud technology, micro-service architecture and high performing compute infrastructure. 
  • Typical daily activities include contributing to all phases of algorithm development, including ideation, prototyping, design, development, and production implementation. 


Required Skills: 
 

  • An ideal candidate will have a background in software engineering and data science with expertise in machine learning algorithms, statistical analysis tools, and distributed systems. 
  • Experience in building machine learning applications, and broad knowledge of machine learning APIs, tools, and open-source libraries 
  • Strong coding skills and fundamentals in data structures, predictive modeling, and big data concepts 
  • Experience in designing full stack ML solutions in a distributed computing environment 
  • Experience working with Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, and AWS GPU instances
  • Excellent communication skills with multiple levels of the organization 
  • Experience with image CNNs, image processing, Mask R-CNN, and Faster R-CNN is a must.
Job posted by
KalyaniMuley

Data Scientist - IV

at Glance

Founded 2018  •  Product  •  500-1000 employees  •  Raised funding
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Python
Deep Learning
R Programming
Statistical Analysis
Natural Language Processing (NLP)
Databases
Mathematical modeling
Mathematics
Bengaluru (Bangalore)
5 - 10 yrs
₹50L - ₹80L / yr

Glance – An InMobi Group Company:

Glance is an AI-first Screen Zero content discovery platform, and it has scaled massively in the last few months into one of the largest platforms in India. Glance is a lock-screen-first mobile content platform set up within InMobi. The average mobile phone user unlocks their phone more than 150 times a day. Glance aims to be there, providing visually rich, easy-to-consume content to entertain and inform mobile users, one unlock at a time. Glance is live on more than 80 million mobile phones in India already, and we are only getting started on this journey! We are now into phase 2 of the Glance story: we are going global!

Roposo is part of the Glance family. It is a short-video entertainment platform. All the videos created here are user-generated (via upload or Roposo's in-camera creation tools), and many communities create these videos on various themes we call channels. Around 4 million videos are created every month on Roposo and power Roposo channels; some of the channels are HaHa TV (for comedy videos), News, and Beats (for singing/dance performances), along with For You (personalized for a user) and Your Feed (for videos of people a user follows).

 

What’s the Glance family like?

Consistently featured among the “Great Places to Work” in India since 2017, our culture is our true north, enabling us to think big, solve complex challenges and grow with new opportunities. Glanciers are passionate and driven, creative and fun-loving, take ownership and are results-focused. We invite you to free yourself, dream big and chase your passion.

 

What can we promise? 

We offer an opportunity to have an immediate impact on the company and our products. The work that you shall do will be mission critical for Glance and will be critical for optimizing tech operations, working with highly capable and ambitious peer groups. At Glance, you get food for your body, soul, and mind with daily meals, gym, and yoga classes, cutting-edge training and tools, cocktails at drink cart Thursdays and fun at work on Funky Fridays. We even promise to let you bring your kids and pets to work. 

 

What you will be doing

Glance is looking for a Data Scientist who will design and develop processes and systems to analyze high-volume, diverse "big data" sources using advanced mathematical, statistical, querying, and reporting methods. You will use machine learning techniques and statistical analysis to predict outcomes and behaviors, interact with business partners to identify questions for data analysis and experiments, identify meaningful insights from large data and metadata sources, and interpret and communicate those insights and the outputs of analyses and experiments to business partners.

You will be working with Product leadership, taking high-level objectives and developing solutions that fulfil these requirements. Stakeholder management across Eng, Product and Business teams will be required.

 

Basic Qualifications:

  • Five+ years of experience working in a Data Science role
  • Extensive experience developing and deploying ML models in real-world environments
  • Bachelor's degree in Computer Science, Mathematics, Statistics, or other analytical fields
  • Exceptional familiarity with Python, Java, Spark, or other open-source software with data science libraries
  • Experience in advanced math and statistics
  • Excellent familiarity with the command-line Linux environment
  • Ability to understand various data structures and common methods of data transformation
  • Experience deploying machine learning models and measuring their impact
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
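One of the techniques named above, clustering, can be sketched as a naive one-dimensional k-means in plain Python; the data, function name, and parameters below are toy values chosen for illustration, not anything specified in the posting.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Naive 1-D k-means: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize centroids from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assignment step: nearest centroid wins
            i = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[i].append(p)
        for i, c in enumerate(clusters):  # update step: recompute means
            if c:
                centroids[i] = sum(c) / len(c)
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers = kmeans(data, k=2)
```

Production work would use a vectorized implementation (e.g. in a numerical library) with k-means++ initialization, but the two alternating steps are the whole algorithm.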

 

Preferred Qualifications

  • Experience developing recommendation systems
  • Experience developing and deploying deep learning models
  • Bachelor's or Master's degree or PhD that included coursework in statistics, machine learning, or data analysis
  • Five+ years of experience working with Hadoop, a NoSQL database, or other big data infrastructure
  • Experience being actively engaged in data science or another research-oriented position
  • Comfort collaborating with cross-functional teams.
  • An active personal GitHub account.
Job posted by
Sandeep Shadankar

Business Intelligence Developer

at Kaleidofin

Founded 2018  •  Products & Services  •  100-1000 employees  •  Profitable
PowerBI
Business Intelligence (BI)
Python
Tableau
SQL
Data modeling
Chennai, Bengaluru (Bangalore)
2 - 4 yrs
Best in industry
We are looking for a developer to design and deliver strategic data-centric insights leveraging next-generation analytics and BI technologies. We want someone who is data-centric and insight-centric, not merely report-centric. We are looking for someone wishing to make an impact by enabling innovation and growth; someone with passion for what they do and a vision for the future.

Responsibilities:
  • Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
  • Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
  • Become an expert on data and trends, both internal and external to Kaleidofin.
  • Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
  • Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
  • Automate scheduling and distribution of reports and support auditing and value realization.
  • Partner with enterprise architects to define and ensure that proposed Business Intelligence solutions adhere to an enterprise reference architecture.
  • Design robust data-centric solutions and architecture that incorporates technology and strong BI solutions to scale up and eliminate repetitive tasks.
 Requirements:
  • Experience leading development efforts through all phases of SDLC.
  • 2+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
  • Experience with Quicksight, PowerBI, Tableau and Qlik is a plus.
  • Hands on experience in SQL, data management, and scripting (preferably Python).
  • Strong data visualisation design skills, data modeling and inference skills.
  • Hands-on experience managing small teams.
  • Financial services experience preferred, but not mandatory.
  • Strong knowledge of architectural principles, tools, frameworks, and best practices.
  • Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
  • Candidates with less than a 30-day notice period preferred.
Job posted by
Poornima B
PySpark
Amazon Web Services (AWS)
Python
Chennai, Hyderabad
4 - 6 yrs
₹10L - ₹20L / yr
  • Hands-on experience in development
  • 4-6 years of hands-on experience with Python scripts
  • 2-3 years of hands-on experience in PySpark coding; worked with the Spark cluster computing technology.
  • 3-4 years of hands-on, end-to-end data pipeline experience working in AWS environments
  • 3-4 years of hands-on experience working with AWS services: Glue, Lambda, Step Functions, EC2, RDS, SES, SNS, DMS, CloudWatch, etc.
  • 2-3 years of hands-on experience working with AWS Redshift
  • 6+ years of hands-on experience writing Unix shell scripts
  • Good communication skills
Job posted by
Rakesh Kumar

Machine Learning Engineer

at Zycus

Founded 1998  •  Product  •  500-1000 employees  •  Profitable
Machine Learning (ML)
Natural Language Processing (NLP)
Image Processing
Artificial Intelligence (AI)
Deep Learning
Bengaluru (Bangalore)
2 - 13 yrs
₹5L - ₹15L / yr

We are looking for applicants with a strong background in Analytics and Data mining (Web, Social and Big data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inferencing, Information Retrieval, Large Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, Human Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply.

We are looking for someone who can create and implement AI solutions. If you have built a product like IBM WATSON in the past and not just used WATSON to build applications, this could be the perfect role for you.

All successful candidates are expected to dive deep into problem areas of Zycus’ interest and invent technology solutions to not only advance the current products, but also to generate new product options that can strategically advantage the organization.

Skills:

  • Experience in predictive modelling and predictive software development
  • Skilled in Java, C++, Perl/Python (or similar scripting language)
  • Experience in using R, Matlab, or any other statistical software
  • Experience in mentoring junior team members, and guiding them on machine learning and data modelling applications
  • Strong communication and data presentation skills
  • Classification (SVM, decision trees, random forests, neural networks)
  • Regression (linear, polynomial, logistic, etc.)
  • Classical optimization (gradient descent, Newton-Raphson, etc.)
  • Graph theory (network analytics)
  • Heuristic optimization (genetic algorithms, swarm theory)
  • Deep learning (LSTMs, convolutional NNs, recurrent NNs)
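As a small illustration of one item on this list, classical optimization by gradient descent can be sketched in plain Python; the quadratic objective is a toy assumption chosen so the true minimum is known.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a differentiable function by stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move downhill along the gradient
    return x

# Toy objective f(x) = (x - 3)^2, whose gradient is 2*(x - 3);
# the minimum is at x = 3.
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The same update rule, applied to a loss gradient averaged over data, is the core of the regression and neural-network training mentioned above.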

Must Have:

  • Experience: 3-9 years
  • The ideal candidate must have proven expertise in Artificial Intelligence (including deep learning algorithms), Machine Learning and/or NLP
  • The candidate must also have expertise in programming traditional machine learning algorithms, algorithm design & usage
  • Preferred experience with large data sets & distributed computing in Hadoop ecosystem
  • Fluency with databases

Job posted by
madhavi JR