Assistant Data Scientist
Ascendeum
Posted by Swezelle Esteves
1 - 5 yrs
₹8L - ₹10L / yr
Remote only
Skills
Python
Data Analytics
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
MySQL
Apache Kafka
Hadoop
PySpark

Job Responsibilities: 

 

  • Identify valuable data sources and automate collection processes
  • Preprocess structured and unstructured data
  • Analyze large volumes of information to discover trends and patterns
  • Help develop reports and analyses
  • Present information using data visualization techniques (see the sketch after this list)
  • Assess tests, implement new or upgraded software, and assist with strategic decisions on new systems
  • Evaluate changes and updates to source production systems
  • Develop, implement, and maintain leading-edge analytic systems, turning complicated problems into simple frameworks
  • Provide technical expertise in data storage structures, data mining, and data cleansing
  • Propose solutions and strategies to business challenges
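A minimal, illustrative Python sketch (not part of the original posting) of the kind of preprocessing, trend analysis, and visualization these responsibilities describe; the CSV path, column names, and resampling window are assumptions made for the example.

```python
# Minimal illustrative sketch: assumes a CSV with "date" and "revenue" columns;
# the file path and column names are placeholders.
import pandas as pd
import matplotlib.pyplot as plt

# Preprocess: parse dates, drop duplicates, fill missing values
df = pd.read_csv("collected_data.csv", parse_dates=["date"])
df = df.drop_duplicates().sort_values("date")
df["revenue"] = df["revenue"].fillna(df["revenue"].median())

# Discover a trend: monthly totals smoothed with a 3-month rolling average
monthly = df.set_index("date")["revenue"].resample("M").sum()
trend = monthly.rolling(window=3).mean()

# Visualize and save the result
monthly.plot(label="monthly revenue")
trend.plot(label="3-month rolling average")
plt.legend()
plt.savefig("revenue_trend.png")
```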

 

Desired Skills and Experience: 

 

  • At least 1 year of experience in data analysis
  • Complete understanding of Operations Research, data modelling, ML, and AI concepts
  • Knowledge of Python is mandatory; familiarity with MySQL, SQL, Scala, Java, or C++ is an asset
  • Experience using visualization tools (e.g. Jupyter Notebook) and data frameworks (e.g. Hadoop)
  • Analytical mind and business acumen
  • Strong math skills (e.g. statistics, algebra)
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • Bachelor's/Master's degree in Computer Science, Engineering, Data Science, or another quantitative or relevant field is preferred

About Ascendeum

Founded: 2015
Size: 20-100
Stage: Profitable
Connect with the team: Sonali Jain, Swezelle Esteves, Mayur Upadhyay

Similar jobs

Curl Tech
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
3 - 10 yrs
₹15L - ₹25L / yr
Deep Learning
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
+1 more

Company Name: Curl Tech

Location: Bangalore

Website: www.curl.tech

Company Profile: Curl Tech is a deep-tech firm based out of Bengaluru, India. Curl develops products and solutions leveraging emerging technologies such as Machine Learning, Blockchain (DLT) & IoT, and works on domains such as Commodity Trading, Banking & Financial Services, Healthcare, Logistics & Retail.

Curl was founded by technology enthusiasts with rich industry experience. Products and solutions developed at Curl have gone on to considerable success and have in turn become separate companies focused on those products/solutions.

If you are looking for a job that will challenge you and want to work with an organization that disrupts the entire value chain, Curl is the right one for you!

Designation: Data Scientist or Junior Data Scientist (depending on experience)

Job Description:

Strong in Machine Learning and Deep Learning, with good programming and maths skills.

Details: The candidate will work on many image analytics and numerical data analytics projects. The work involves data collection, building machine learning models, deployment, client interaction, and publishing academic papers.

Responsibilities:

  • The candidate will work on many image analytics/numerical data projects.
  • The candidate will build various machine learning models depending on the requirements.
  • The candidate will be responsible for deployment of the machine learning models.
  • The candidate will be the face of the company in front of clients and will have regular client interactions to understand client requirements.

What we are looking for in candidates:

  • Basic understanding of Statistics, Time Series, Machine Learning, Deep Learning, and their fundamentals and mathematical underpinnings.
  • Proven code proficiency in Python, C/C++, or any other AI language of choice.
  • Strong algorithmic thinking, creative problem solving, and the ability to take ownership and do independent research.
  • Understanding how things work internally in ML and DL models is a must.
  • Understanding of the fundamentals of Computer Vision and Image Processing techniques would be a plus (see the sketch after this list).
  • Expertise in OpenCV, ML/neural network technologies, and frameworks such as PyTorch and TensorFlow would be a plus.
  • Educational background in any quantitative field (Computer Science / Mathematics / Computational Sciences and related disciplines) will be given preference.
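As referenced above, a brief illustrative sketch (not part of the original posting) of a basic OpenCV image-processing step of the kind this role involves; the file names are placeholders.

```python
# Illustrative OpenCV sketch; "sample.jpg" and "edges.jpg" are placeholder paths.
import cv2

img = cv2.imread("sample.jpg")                       # load an image (BGR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # convert to grayscale
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # reduce noise
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # detect edges
cv2.imwrite("edges.jpg", edges)                      # save the result
```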

Education: BE/BTech/B.Sc. (Physics or Mathematics)/Master's in Mathematics, Physics, or related branches.

EnterpriseMinds
Posted by phani kalyan
Bengaluru (Bangalore)
3 - 7.5 yrs
₹10L - ₹25L / yr
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Spark
Software deployment
+1 more
Job ID: ZS0701

Hi,

We are hiring a Data Scientist for Bangalore.

Req Skills:

  • NLP 
  • ML programming
  • Spark
  • Model Deployment
  • Experience processing unstructured data and building NLP models (see the sketch after this list)
  • Experience with big data tools such as PySpark
  • Pipeline orchestration using Airflow and model deployment experience are preferred
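As referenced above, a minimal PySpark text-classification sketch (not part of the original posting) showing the kind of NLP model building and Spark usage this list describes; the tiny in-memory dataset and column names are assumptions.

```python
# Illustrative Spark ML text pipeline; the toy dataset and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("nlp_sketch").getOrCreate()

train = spark.createDataFrame(
    [("spark makes big data processing simple", 1.0),
     ("the report was late and incomplete", 0.0)],
    ["text", "label"],
)

# Tokenize -> hashed term frequencies -> IDF weighting -> logistic regression
pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf", numFeatures=1 << 10),
    IDF(inputCol="tf", outputCol="features"),
    LogisticRegression(maxIter=10),
])

model = pipeline.fit(train)
model.transform(train).select("text", "prediction").show()
```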
dataeaze systems
Posted by Ankita Kale
Pune
1 - 5 yrs
₹3L - ₹10L / yr
ETL
Hadoop
Apache Hive
Java
Spark
+2 more
  • Core Java: advanced-level competency; should have worked on projects with core Java development.
  • Linux shell: advanced-level competency; work experience with Linux shell scripting and knowledge of important shell commands.
  • RDBMS, SQL: advanced-level competency; should have expertise in SQL query syntax and be well versed with aggregations and joins.
  • Data structures and problem solving: should have the ability to use appropriate data structures.
  • AWS cloud: good to have experience with the AWS serverless toolset along with AWS infrastructure.
  • Data engineering ecosystem: good to have experience and knowledge of data engineering, ETL, and data warehousing (any toolset).
  • Hadoop, HDFS, YARN: should have an introduction to the internal workings of these toolsets.
  • Hive, MapReduce, Spark: good to have experience developing transformations using Hive queries, MapReduce job implementation, and Spark job implementation; Spark implementation in Scala is a plus (see the sketch after this list).
  • Airflow, Oozie, Sqoop, Zookeeper, Kafka: good to have knowledge of the purpose and working of these toolsets; working experience is a plus.
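As referenced in the Hive/Spark item above, a minimal PySpark sketch (not part of the original posting) of the kind of join and aggregation the SQL and Spark items describe; the table and column names are placeholders.

```python
# Illustrative PySpark join + aggregation; tables and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql_sketch").getOrCreate()

orders = spark.createDataFrame(
    [(1, "c1", 120.0), (2, "c1", 80.0), (3, "c2", 50.0)],
    ["order_id", "customer_id", "amount"],
)
customers = spark.createDataFrame(
    [("c1", "Asha"), ("c2", "Ravi")],
    ["customer_id", "name"],
)

# Join the two tables, then aggregate total spend per customer
totals = (
    orders.join(customers, on="customer_id", how="inner")
          .groupBy("customer_id", "name")
          .agg(F.sum("amount").alias("total_amount"))
)
totals.show()
```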

 

liquiloans
Posted by Dhirendra Singh
Mumbai
4 - 8 yrs
₹20L - ₹25L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+4 more
Role: AVP Analytics

The AVP Analytics will have the following responsibilities:

  • Analyze data across all verticals in the organization to come up with actionable insights and recommendations
  • Conceptualize and define key metrics to track for all functions within the organization
  • Create strategic roadmaps and business cases to enter new products/lines of business
  • Build, develop, and maintain dashboards and performance metrics that support key business decisions
  • Lead cross-functional projects using advanced data analytics to discover insights that guide strategic decisions and uncover optimization opportunities
  • Organize and drive successful completion of data insight initiatives through effective management of analyst and data employees and effective collaboration with stakeholders
  • Communicate results and business impacts of insight initiatives to stakeholders within and outside the company
  • Recruit, train, develop, and supervise analyst-level employees
  • Anticipate future demands of initiatives related to people, technology, budget, and business within your department and design/implement solutions to meet these needs
 
Founding Team
1. Achal Mittal (https://www.linkedin.com/in/achal-mittal-8a95993a/): NMIMS alum, ex-ICICI and HSBC, co-founded Rentomojo
2. Gautam Adukia (https://www.linkedin.com/in/gautam-adukia-415a0650/): IIM alum, ex-IIFL

Funding: Raised seed round funding from Matrix Partners
Inviz Ai Solutions Private Limited
Posted by Shridhar Nayak
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more

InViz is a Bangalore-based startup helping enterprises simplify the search and discovery experiences for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information/concepts from data in different formats (text, images, videos) and make them easily discoverable through simple, human-friendly touchpoints.

 

TSDE - Data 

Data Engineer:

 

  • Should have a total of 3-6 years of experience in Data Engineering.
  • Should have experience coding data pipelines on GCP.
  • Prior experience on Hadoop systems is ideal, as the candidate may not have extensive GCP experience.
  • Strong in programming languages such as Scala, Python, Java.
  • Good understanding of various data storage formats and their advantages.
  • Should have exposure to GCP tools to develop end-to-end data pipelines for various scenarios (including ingesting data from traditional databases as well as integration of API-based data sources).
  • Should have a business mindset to understand data and how it will be used for BI and analytics purposes.
  • Data Engineer certification preferred.

 

Experience working with GCP tools such as:

Store: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore (see the sketch below)
Ingest: Stackdriver, Pub/Sub, App Engine, Kubernetes Engine, Kafka, Dataprep, microservices
Schedule: Cloud Composer
Processing: Cloud Dataproc, Cloud Dataflow, Cloud Dataprep
CI/CD: Bitbucket + Jenkins / GitLab
Atlassian Suite
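As referenced in the Store line, a minimal sketch (not part of the original posting) of querying BigQuery from Python; the project, dataset, and table names are placeholders, and credentials are assumed to come from Application Default Credentials.

```python
# Illustrative BigQuery query; project/dataset/table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

sql = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `my-gcp-project.sales.orders`
    GROUP BY customer_id
    ORDER BY total_amount DESC
    LIMIT 10
"""

# Run the query and iterate over the result rows
for row in client.query(sql).result():
    print(row["customer_id"], row["total_amount"])
```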
British Telecom
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Data warehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: Resolving production issues to fix service P1-P4 issues, problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and is experienced with a wide range of programming concepts and is aware of and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand and evaluate both obvious and subtle commercial risks, especially in relation to a programme.

Experience you would be expected to have:
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but learning is essential
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, having worked with data (ETL)
• DevOps: understands how to work in a DevOps and agile way / versioning / automation / defect management – mandatory
• Agile methodology: knowledge of Jira
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹15L / yr
Statistical Analysis
PowerBI
Data Analytics
AzureML
Data Science

In this role, we are looking for:

  • A problem-solving mindset with the ability to understand business challenges and how to apply your analytics expertise to solve them.
  • The unique person who can present complex mathematical solutions in a simple manner that most will understand, using data visualization techniques to tell a story with data.
  • An individual excited by innovation and new technology and eager to find ways to employ these innovations in practice.
  • A team mentality, empowered by the ability to work with a diverse set of individuals.
  • A passion for data, with a particular emphasis on data visualization.

 

Basic Qualifications

 

  • A Bachelor's degree in Data Science, Math, Statistics, Computer Science, or a related field with an emphasis on data analytics.
  • 5+ years of professional experience, preferably in a data analyst/data scientist role or similar, with proven results in a data analyst role.
  • 3+ years of professional experience in a leadership role guiding high-performing, data-focused teams, with a track record of building and developing talent.
  • Proficiency in your statistics/analytics/visualization tool of choice, preferably the Microsoft Azure suite, including PowerBI and/or AzureML.
DFCS Technologies
Agency job
via dfcs Technologies by SheikDawood Ali
Remote, Chennai, Anywhere India
1 - 5 yrs
₹9L - ₹14L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
    • Big data tools: Hadoop, Spark, Kafka, etc.
    • Relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (see the sketch after this list)
    • AWS cloud services: EC2, EMR, RDS, Redshift.
    • Stream-processing systems: Storm, Spark Streaming, etc.
    • Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
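As referenced in the workflow-management item above, a minimal Airflow DAG sketch (not part of the original posting); the DAG id, schedule, and extract/transform/load callables are placeholders.

```python
# Illustrative Airflow 2.x DAG; dag_id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from a source system")


def transform():
    print("clean and reshape the extracted data")


def load():
    print("write the transformed data to a warehouse")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run extract, then transform, then load
    extract_task >> transform_task >> load_task
```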
Networking & Cybersecurity Solutions
Bengaluru (Bangalore)
4 - 8 yrs
₹40L - ₹60L / yr
Data Science
Data Scientist
R Programming
Python
Amazon Web Services (AWS)
+2 more
  • Research and develop statistical learning models for data analysis
  • Collaborate with product management and engineering departments to understand company needs and devise possible solutions
  • Keep up-to-date with latest technology trends
  • Communicate results and ideas to key decision makers
  • Implement new statistical or other mathematical methodologies as needed for specific models or analysis
  • Optimize joint development efforts through appropriate database use and project design

Qualifications/Requirements:

  • Master's or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math, or equivalent fields, with a strong mathematical background
  • Excellent understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, etc. (see the sketch after this list)
  • 3+ years of experience building data-science-driven solutions, including data collection, feature selection, model training, and post-deployment validation
  • Strong hands-on coding skills (preferably in Python) processing large-scale data sets and developing machine learning models
  • Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow
  • Good team worker with excellent written, verbal, and presentation communication skills
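As referenced above, a minimal scikit-learn sketch (not part of the original posting) of the kind of anomaly detection listed; the synthetic data and the 5% contamination rate are assumptions.

```python
# Illustrative anomaly detection with scikit-learn; data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))    # typical points
outliers = rng.uniform(low=-6.0, high=6.0, size=(10, 2))  # scattered anomalies
X = np.vstack([normal, outliers])

# Fit an Isolation Forest and flag anomalies (-1) vs normal points (1)
model = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = model.predict(X)
print("anomalies flagged:", int((labels == -1).sum()))
```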

Desired Experience:

  • Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search
  • Knowledge and experience with NLP technology
  • Previous work in a start-up environment
Agency job
via SecureKloud by sharmila padmanaban
Chennai
1 - 8 yrs
₹2L - ₹20L / yr
Data Warehouse (DWH)
Informatica
ETL
Python
DevOps
+2 more
We are a cloud-based company working on security projects.

We are looking for good Python developers / Data Engineers / DevOps engineers.
Experience: 1-8 years
Work location: Chennai / remote support