Data Scientist
at Angel One

Posted by Vineeta Singh
Remote, Mumbai
3 - 7 yrs
₹5L - ₹15L / yr
Full time
Skills
Data Science
Data Scientist
Python
SQL
R Language
Data mining

Role:

  • Understand and translate statistics and analytics to address business problems
  • Help with data preparation and data pulls, the first step in any machine-learning work
  • Cut and slice data to extract interesting insights (a minimal sketch follows this list)
  • Develop models to improve customer engagement and retention
  • Apply hands-on expertise in relevant tools such as SQL (expert), Excel, and R/Python
  • Work on strategy development to increase business revenue
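
For illustration only, here is a minimal sketch of the kind of data pull and cut-and-slice analysis this role describes, using Python (pandas with the standard-library sqlite3 driver). The database file, table, and column names are hypothetical, not Angel One's actual schema.

```python
import sqlite3

import pandas as pd

# Hypothetical data pull: database, table and columns are illustrative only.
conn = sqlite3.connect("analytics.db")
query = """
SELECT client_id, segment, trade_date, trade_value
FROM trades
WHERE trade_date >= date('now', '-90 days')
"""
df = pd.read_sql_query(query, conn)

# Cut and slice: engagement and revenue by customer segment for the last quarter.
summary = (
    df.groupby("segment")
      .agg(active_clients=("client_id", "nunique"),
           total_value=("trade_value", "sum"))
      .sort_values("total_value", ascending=False)
)
print(summary)
```

The same slicing could equally be done in SQL or R; the point is the workflow of pulling, aggregating, and ranking.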

 


Requirements:

  • Hands-on experience with relevant tools such as SQL (expert), Excel, and R/Python
  • Strong knowledge of statistics
  • Able to do data scraping and data mining (a hedged scraping sketch follows this list)
  • Self-driven, with the ability to deliver on ambiguous projects
  • An ability and interest in working in a fast-paced, ambiguous and rapidly changing environment
  • Prior work on business projects for an organization, e.g. customer acquisition or customer retention
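
A purely illustrative sketch of the data-scraping skill listed above (the URL and CSS selectors are hypothetical, and any real scraping should respect the target site's terms of use and robots.txt), assuming requests and BeautifulSoup are available:

```python
import pandas as pd
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/market-news"  # hypothetical page

resp = requests.get(URL, timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for item in soup.select("article.news-item"):  # hypothetical selector
    rows.append({
        "title": item.select_one("h2").get_text(strip=True),
        "published": item.select_one("time").get("datetime"),
    })

news = pd.DataFrame(rows)
print(news.head())
```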

About Angel One

Founded: 1987
Stage: Profitable
About

We are Angel One (formerly known as Angel Broking), India's most trusted fintech company and an all-in-one financial house. Founded in 1996, Angel One offers a world-class experience across all digital channels, including web, trading software and mobile applications, to help millions of Indians make informed investment decisions.


Certified as a Great Place To Work for six consecutive years, we are driven by technology and a mission to become the No. 1 fintech organization in India. With a registered client base of 9.2 million+ and more than 18 million app downloads, we are onboarding more than 400,000 new users every month. We are working to build personalized financial journeys for customers via a single app, powered by new-age engineering tech and Machine Learning.


We are a group of self-driven, motivated individuals who enjoy taking ownership and believe in providing the best value for money to investors through innovative products and investment strategies. We apply and amplify design thinking in our products and solutions.


We have a flat structure, with ample opportunity to showcase your talent and a growth path for engineers to the very top. We are remote-first, with people spread across Bangalore, Mumbai and the UAE. Here are some of the perks you'll enjoy as an Angelite:


  • Work with a world-class peer group from leading organizations
  • An exciting, dynamic and agile work environment
  • Freedom to ideate, innovate, express, solve and create customer experiences through #Fintech & #ConsumerTech
  • Cutting-edge technology and products/digital platforms of the future
  • Continuous learning interventions and upskilling
  • An open, collaborative culture where failing fast is encouraged as a way to invent new methods; join our Failure Club to experience it
  • Certified six times as a Great Place To Work
  • Highly competitive pay structures, among the best in the industry


Come say Hello to ideas and goodbye to hierarchies at Angel One!

Connect with the team

Andleeb Mujeeb
Shriya Tak
Ananda Pandey
Vineeta Singh

Similar jobs

Slintel
Agency job
via Qrata by Prajakta Kulkarni
Bengaluru (Bangalore)
4 - 9 yrs
₹20L - ₹28L / yr
Big Data
ETL
Apache Spark
Spark
Data engineer
+5 more
Responsibilities
  • Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse.
  • Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs.
  • Assemble large, complex data sets from third-party vendors to meet business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology.
  • Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems.

Requirements
  • 5+ years of experience in a Data Engineer role.
  • Proficiency in Linux.
  • Strong SQL knowledge and experience with relational databases and query authoring, as well as familiarity with databases including MySQL, MongoDB, Cassandra, and Athena.
  • Must have experience with Python/Scala.
  • Must have experience with Big Data technologies like Apache Spark.
  • Must have experience with Apache Airflow (a minimal DAG sketch follows this list).
  • Experience with data pipeline and ETL tools like AWS Glue.
  • Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
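
A minimal sketch of the kind of Airflow orchestration these requirements point to, assuming Airflow 2.x, the AWS CLI, and a hypothetical spark-submit job script; it is not this team's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily ETL: pull vendor files from S3, then transform with Spark.
with DAG(
    dag_id="daily_vendor_etl",          # hypothetical DAG name
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_from_s3",
        bash_command="aws s3 cp s3://example-bucket/vendor/ /tmp/vendor/ --recursive",
    )
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit /opt/jobs/transform_vendor.py --input /tmp/vendor/",
    )
    extract >> transform
```
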
buy now & pay later
Agency job
via Qrata by Rayal Rajan
Mumbai
2 - 7 yrs
₹5L - ₹10L / yr
Python
SQL
Data Analytics
Communication Skills
Data management

DATA ANALYST

About:

 

We allow customers to "buy now and pay later" for goods and services purchased through online and offline portals. It's a rapidly growing organization opening up new avenues of payments for online and offline customers.

 

Role:

 

  • Define and continuously refine the analytics roadmap.
  • Build, deploy and maintain the data infrastructure that supports all analysis, including the data warehouse and various data marts.
  • Build, deploy and maintain the predictive models and scoring infrastructure that power critical decision-management systems (a minimal scoring sketch follows this list).
  • Strive to devise ways to gather more alternate data and build increasingly enhanced predictive models.
  • Partner with business teams to systematically design experiments that continuously improve customer acquisition, minimize churn, reduce delinquency and improve profitability.
  • Provide data insights to all business teams through automated queries, MIS, etc.
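
A minimal, hypothetical sketch of a scoring model of the sort described above, using scikit-learn; the features and data are invented for illustration and do not reflect the company's actual models.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented behavioural features for a toy delinquency score.
history = pd.DataFrame({
    "num_orders":        [3, 12, 1, 7, 25, 2, 9, 15],
    "avg_order_value":   [40, 25, 90, 35, 20, 70, 30, 22],
    "days_since_signup": [30, 200, 10, 90, 400, 15, 120, 300],
    "delinquent":        [1, 0, 1, 0, 0, 1, 0, 0],   # 1 = missed a repayment
})

X, y = history.drop(columns="delinquent"), history["delinquent"]
model = LogisticRegression().fit(X, y)

# Score new customers (hypothetical feature values) for decision management.
new_customers = pd.DataFrame({
    "num_orders": [4, 20],
    "avg_order_value": [60, 24],
    "days_since_signup": [20, 350],
})
print(model.predict_proba(new_customers)[:, 1])   # estimated probability of delinquency
```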

 

Requirements:

  • 4+ years of deep, hands-on analytics experience in a management consulting, start-up, financial services, or fintech company.
  • Strong knowledge of SQL and Python.
  • Deep knowledge of problem-solving approaches using analytical frameworks.
  • Deep knowledge of frameworks for data management, deployment, and monitoring of performance metrics.
  • Hands-on exposure to delivering improvements through test-and-learn methodologies.
  • Excellent communication and interpersonal skills, with the ability to be pleasantly persistent.

 

 

Location: Mumbai

at My Yoga Teacher
3 recruiters
Posted by MYT HR
Bengaluru (Bangalore)
3 - 6 yrs
₹18L - ₹25L / yr
MySQL
MySQL DBA
Javascript
Amazon Redshift
XML
+7 more

Data Scientist


We are a growing startup in the healthcare space; our business model has been mostly unexplored, and that is exciting!


Our company decisions are heavily guided by insights we get from data, so this position is key to our business growth and you can make a real impact. We are looking for a data scientist who is passionate about contributing to the growth of MyYogaTeacher and propelling the company to new heights.


We encourage you to spend some time browsing through content on our website  myyogateacher.com  and maybe even sign up for our service and try it out! 


As a Data Scientist, you’ll

  • Help collect data from a variety of sources - decipher and address data quality, filter and cleanse data, identify missing data
  • Help measure, transform and organize data into readily usable formats for reporting and further analysis
  • Develop and implement analytical databases and data collection systems
  • Analyze data in meaningful ways, using statistical methods and data mining algorithms to generate useful insights and reports
  • Develop recommendation engines in a variety of areas (a minimal sketch follows this list)
  • Identify and recommend new ways to optimize and streamline data collection processes
  • Collaborate with programmers, engineers, and organizational leaders to identify opportunities for process improvements, recommend system modifications, and develop policies for data governance
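
A minimal sketch of one way a recommendation engine could work, using item-item cosine similarity over a toy engagement matrix; the class types and numbers are invented for illustration and this is not MyYogaTeacher's actual recommender.

```python
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# Toy user x class-type engagement matrix (e.g. sessions attended) - invented data.
ratings = pd.DataFrame(
    {"hatha": [5, 0, 3, 0], "vinyasa": [4, 2, 0, 1], "yin": [0, 5, 4, 0], "prenatal": [0, 0, 0, 4]},
    index=["u1", "u2", "u3", "u4"],
)

# Item-item similarity learned from co-engagement.
sim = pd.DataFrame(cosine_similarity(ratings.T), index=ratings.columns, columns=ratings.columns)

def recommend(user: str, top_n: int = 2) -> pd.Series:
    """Score class types the user has not tried by similarity to what they already attend."""
    seen = ratings.loc[user]
    weighted = sim.mul(seen, axis=0).sum()
    norm = sim.mul(seen.gt(0), axis=0).sum() + 1e-9
    scores = weighted / norm
    return scores[seen.eq(0)].sort_values(ascending=False).head(top_n)

print(recommend("u1"))   # suggests class types similar to what u1 already attends
```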

You are qualified if:

  • You hold a Bachelor's and/or Master's degree in Mathematics, Statistics, Computer Engineering, Data Science, Data Analytics or Data Mining
  • 3+ years of experience in a data analyst role
  • 3+ years of data mining and machine learning experience
  • Great understanding of databases such as MySQL and Amazon Redshift, and very adept at SQL
  • Good knowledge of NoSQL databases such as MongoDB and ClickHouse
  • Understanding of XML, JavaScript, and JSON
  • Knowledge of programming languages like SQL, Oracle, R, and MATLAB; proficient in Python and shell scripting
  • Understanding of ETL frameworks and ETL tools
  • Proficiency in statistical packages like Excel, SPSS, and SAS for analyzing data sets
  • Knowledge of how to create and apply the most appropriate algorithms to datasets to find solutions

Would be nice if you also have 

  • Experience with data visualization tools such as Tableau, Business Objects, PowerBI or Qlik
  • Adept at using data processing platforms like Hadoop and Apache Spark
  • Experience handling unstructured data such as text, audio and video and extracting features from those 

You have 

  • Excellent analytical skills - the ability to identify trends, patterns and insights from data. You love numbers
  • Strong attention to detail
  • Great communication and presentation skills - the ability to write and speak clearly and to communicate complex ideas in a way that is easy to understand
  • Effective stakeholder management and great problem-solving skills
  • A keen desire to take ownership and get things done; you follow through on commitments and live up to verbal and written agreements, regardless of personal cost
  • You are a quick learner of new technologies and easily adapt to change
  • The ability to collaborate effectively and work as part of a team
  • Enthusiasm: you exhibit passion, excitement and positive energy in your work

Here are a couple of articles about us from our CEO, Jitendra:

Why we started MyYogaTeacher: https://www.myyogateacher.com/articles/why-i-started-myyogateacher

Our mission and culture: https://www.myyogateacher.com/articles/company-mission-culture

We look forward to hearing from you!





A global business process management company
Agency job
via Jobdost by Saida Jabbar
Gurugram, Pune, Mumbai, Bengaluru (Bangalore), Chennai, Nashik
4 - 12 yrs
₹12L - ₹15L / yr
Data engineering
Data modeling
data pipeline
Data integration
Data Warehouse (DWH)
+12 more

 

 

Designation – Deputy Manager - TS


Job Description

  1. Total of 8/9 years of development experience in Data Engineering; B1/BII role.
  2. Minimum of 4/5 years in AWS data integrations, with very good data modelling skills.
  3. Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
  4. Should have delivered at least 4 data warehouse or data lake solutions on AWS.
  5. Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc. (a minimal Lambda-to-Glue sketch follows this list).
  6. Strong Python skills.
  7. Should be an expert in cloud design principles, performance tuning and cost modelling; AWS certifications are an added advantage.
  8. Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
  9. A Life Science & Healthcare domain background will be a plus.
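
For illustration of the Glue/Lambda stack mentioned above, a minimal sketch of an AWS Lambda handler that starts a Glue job when a file lands in S3; the job name and argument keys are hypothetical. In practice the same hand-off could also be modelled as a Step Functions state machine.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    # Triggered by an S3 put event: pick up the bucket and key that arrived.
    record = event["Records"][0]["s3"]
    source_path = f"s3://{record['bucket']['name']}/{record['object']['key']}"

    # Start a hypothetical Glue ETL job, passing the new file as an argument.
    response = glue.start_job_run(
        JobName="curate-incoming-data",               # hypothetical Glue job name
        Arguments={"--source_path": source_path},
    )
    return {"glue_job_run_id": response["JobRunId"]}
```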

Qualifications

BE/BTech/ME/MTech

 

at Oneture Technologies
1 recruiter
Posted by Ravi Mevcha
Mumbai, Navi Mumbai
2 - 4 yrs
₹8L - ₹12L / yr
Spark
Big Data
ETL
Data engineering
ADF
+4 more

Job Overview


We are looking for a Data Engineer to join our data team to solve data-driven critical business problems. The hire will be responsible for expanding and optimizing the existing end-to-end architecture, including the data pipeline architecture. The Data Engineer will collaborate with software developers, database architects, data analysts, data scientists and the platform team on data initiatives, and will ensure that an optimal data delivery architecture is consistent throughout ongoing projects. The right candidate should have hands-on experience developing a hybrid set of data pipelines depending on the business requirements.

Responsibilities

  • Develop, construct, test and maintain existing and new data-driven architectures.
  • Align architecture with business requirements and provide solutions that best solve the business problems.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure 'big data' technologies.
  • Acquire data from multiple sources across the organization.
  • Use programming languages and tools efficiently to collate the data.
  • Identify ways to improve data reliability, efficiency and quality.
  • Use data to discover tasks that can be automated.
  • Deliver updates to stakeholders based on analytics.
  • Set up practices for data reporting and continuous monitoring.

Required Technical Skills

  • Graduate in Computer Science or a similar quantitative area.
  • 1+ years of relevant work experience as a Data Engineer or in a similar role.
  • Advanced SQL knowledge, data modelling, and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
  • Must have strong big-data core knowledge and experience programming in Spark with Python/Scala (a minimal PySpark ETL sketch follows this list).
  • Experience with an orchestration tool like Airflow or similar.
  • Experience with Azure Data Factory is good to have.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Good understanding of Git workflow and test-case-driven development; experience using CI/CD is good to have.
  • Some understanding of Delta tables is good to have.

It would be an advantage if the candidate also has experience with the following software/tools:

  • Big data tools: Hadoop, Spark, Hive, etc.
  • Relational SQL and NoSQL databases
  • Cloud data services
  • Object-oriented/object-function scripting languages: Python, Scala, etc.
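
A minimal, hypothetical PySpark ETL sketch of the kind of pipeline described above; paths and column names are illustrative only (on Azure this could read from an abfss:// location), not a real project's.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSVs (the path is illustrative).
raw = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("/data/raw/orders/")
)

# Transform: de-duplicate, derive a date column, aggregate per day and region.
daily = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

# Load: write curated output as partitioned Parquet.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_orders/")
```
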
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer- Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

  • Design and develop high-performance, scalable solutions that meet the needs of our customers.
  • Work closely with Product Management, Architects and cross-functional teams.
  • Build and deploy large-scale systems in Java/Python.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.
  • Follow best practices that can be adopted in the Big Data stack.
  • Use your engineering experience and technical skills to drive features and mentor the engineers.

What are we looking for (Competencies):

  • Bachelor's degree in computer science, computer engineering, or a related technical discipline.
  • Overall 5 to 8 years of programming experience in Java/Python, including object-oriented design.
  • Data handling frameworks: working knowledge of one or more frameworks such as Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.
  • Data infrastructure: experience in building, deploying and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.
  • Data stores: expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc. (a minimal document-store sketch follows this list).
  • Strong sense of ownership, and focus on quality, responsiveness, efficiency, and innovation.
  • Ability to work with distributed teams in a collaborative and productive manner.
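
For the NoSQL data-store point above, a minimal sketch of document-store access using pymongo; the connection string, database, and collection names are hypothetical.

```python
from pymongo import MongoClient

# Hypothetical connection and collection names.
client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["ad_events"]

# Store an event, then query recent impressions for a campaign.
events.insert_one({"campaign_id": "c-42", "type": "impression", "ts": 1700000000})
for doc in events.find({"campaign_id": "c-42", "type": "impression"}).limit(10):
    print(doc)
```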

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

AI-powered cloud-based SaaS solution
Agency job
via wrackle by Naveen Taalanki
Bengaluru (Bangalore)
2 - 10 yrs
₹15L - ₹50L / yr
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
+18 more
Responsibilities

● Able to contribute to the gathering of functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release patches
● Work cross-functionally with various Bidgely teams, including product management, QA/QE, various product lines, and/or business units, to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years' experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, Zookeeper (a minimal Kafka-to-Spark sketch follows this list)
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Strong leadership experience: leading meetings, presenting if required
● Excellent communication skills: demonstrated ability to explain complex technical issues to both technical and non-technical audiences
● Expertise in the software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience with Cloud or AWS is preferable
● A good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements
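
As a minimal sketch of the Kafka-plus-Spark portion of that stack (the topic, broker, and schema are hypothetical, and it assumes the spark-sql-kafka connector is on the classpath):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event_stream").getOrCreate()

# Read a hypothetical Kafka topic as a streaming DataFrame.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "meter-readings")              # hypothetical topic
    .load()
)

# Kafka keys/values arrive as bytes; decode them and keep a running count per key.
counts = (
    stream.select(F.col("key").cast("string"), F.col("value").cast("string"))
          .groupBy("key")
          .count()
)

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```
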
at Fragma Data Systems
8 recruiters
Posted by Evelyn Charles
Remote only
1.5 - 5 yrs
₹8L - ₹15L / yr
PySpark
SQL
• Responsible for developing and maintaining applications with PySpark 
• Contribute to the overall design and architecture of the application developed and deployed.
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues.
• Implement Projects based on functional specifications.

Must Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL (a minimal sketch follows this list)
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregations
• Good at ELT architecture: business-rules processing and data extraction from a data lake into data streams for business consumption
• Good customer communication
• Good analytical skills
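
A minimal sketch of PySpark DataFrame core functions alongside the equivalent Spark SQL mentioned above; the data is invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pyspark_basics").getOrCreate()

# Invented sample data.
df = spark.createDataFrame(
    [("acct-1", "credit", 120.0), ("acct-1", "debit", 45.5), ("acct-2", "credit", 300.0)],
    ["account_id", "txn_type", "amount"],
)

# DataFrame API: filter, group, aggregate.
df.filter(F.col("txn_type") == "credit") \
  .groupBy("account_id") \
  .agg(F.sum("amount").alias("total_credits")) \
  .show()

# The same logic expressed in Spark SQL.
df.createOrReplaceTempView("transactions")
spark.sql("""
    SELECT account_id, SUM(amount) AS total_credits
    FROM transactions
    WHERE txn_type = 'credit'
    GROUP BY account_id
""").show()
```
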
at Greenway Health
2 recruiters
Agency job
via VIPSA TALENT SOLUTIONS by Prashma S R
Bengaluru (Bangalore)
6 - 8 yrs
₹8L - ₹15L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+5 more
6-8 years of experience as a data engineer
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
AWS Lambda
SQL
Kafka

Product Based MNC
Agency job
via I Squaresoft by Madhusudhan R
Remote, Bengaluru (Bangalore)
5 - 9 yrs
₹5L - ₹20L / yr
Apache Spark
Python
Amazon Web Services (AWS)
SQL

 

Job Description

The role requires experience with AWS as well as programming experience in Python and Spark.

Roles & Responsibilities

You Will:

  • Translate functional requirements into technical design
  • Interact with clients and internal stakeholders to understand the data and platform requirements in detail, and determine the core cloud services needed to fulfil the technical design
  • Design, develop and deliver data integration interfaces in AWS
  • Design, develop and deliver data provisioning interfaces to fulfil consumption needs
  • Deliver data models on a cloud platform, for example on AWS Redshift or SQL
  • Design, develop and deliver data integration interfaces at scale using Python/Spark
  • Automate core activities to minimize delivery lead times and improve overall quality
  • Optimize platform cost by selecting the right platform services and architecting the solution in a cost-effective manner
  • Manage code and deploy DevOps and CI/CD processes
  • Deploy logging and monitoring across the different integration points for critical alerts (a minimal metric-publishing sketch follows this list)
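
A minimal sketch of the logging-and-monitoring point above: publishing a custom CloudWatch metric with boto3 so an alarm can fire on failures. The namespace, metric, and dimension names are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def report_load_result(interface_name: str, rows_loaded: int, failed: bool) -> None:
    """Publish hypothetical custom metrics for one integration run."""
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Integrations",        # hypothetical namespace
        MetricData=[
            {"MetricName": "RowsLoaded", "Value": float(rows_loaded), "Unit": "Count",
             "Dimensions": [{"Name": "Interface", "Value": interface_name}]},
            {"MetricName": "LoadFailed", "Value": 1.0 if failed else 0.0, "Unit": "Count",
             "Dimensions": [{"Name": "Interface", "Value": interface_name}]},
        ],
    )

report_load_result("orders_to_warehouse", rows_loaded=15000, failed=False)
```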

You Have:

  • Minimum 5 years of software development experience
  • Bachelor's and/or Master’s degree in computer science
  • Strong Consulting skills in data management including data governance, data quality, security, data integration, processing and provisioning
  • Delivered data management projects on AWS
  • Translated complex analytical requirements into technical design including data models, ETLs and Dashboards / Reports
  • Experience deploying dashboards and self-service analytics solutions on both relational and non-relational databases
  • Experience with different computing paradigms in databases such as In-Memory, Distributed, Massively Parallel Processing
  • Successfully delivered large scale data management initiatives covering Plan, Design, Build and Deploy phases leveraging different delivery methodologies including Agile
  • Strong knowledge of continuous integration, static code analysis and test-driven development
  • Experience in delivering projects in a highly collaborative delivery model with teams at onsite and offshore
  • Excellent analytical and problem-solving skills
  • Delivered change management initiatives focused on driving data platforms adoption across the enterprise
  • Strong verbal and written communication skills are a must, as well as the ability to work effectively across internal and external organizations

 
