Data Architect

at Hypersonix Inc

Posted by Gowshini Maheswaran
Bengaluru (Bangalore)
10 - 15 yrs
₹15L - ₹20L / yr
Full time
Skills
Big Data
Data Warehouse (DWH)
Apache Kafka
Spark
Hadoop
Data engineering
Artificial Intelligence (AI)
Machine Learning (ML)
Data Structures
Data modeling
Data wrangling
Data integration
Data-driven testing
Database performance tuning
Apache Storm
Python
Scala
SQL
Amazon Web Services (AWS)
SQL Azure
Databricks
Flink
Druid
Airflow
Luigi
NiFi
Talend
Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in Restaurants, Hospitality and other industry verticals.

Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand the data sources, acquire data and drive product success by delivering insights based on customer needs.

Primary Responsibilities :

- Lead and deliver complete application lifecycle design, development, deployment, and support for actionable BI and Advanced Analytics solutions

- Design and develop data models and ETL process for structured and unstructured data that is distributed across multiple Cloud platforms

- Develop and deliver solutions with data streaming capabilities for a large volume of data

- Design, code and maintain parts of the product and drive customer adoption

- Build data acquisition strategy to onboard customer data with speed and accuracy

- Work both independently and with team members to develop, refine, implement, and scale ETL processes

- Provide ongoing support and maintenance for live clients' data and analytics needs

- Define the data automation architecture to drive self-service data load capabilities

Required Qualifications :

- Bachelor's/Master's/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning or related disciplines

- 10+ years of experience guiding the development and implementation of Data architecture in structured, unstructured, and semi-structured data environments.

- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning

- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop

- Strong hands-on programming and scripting skills for the Big Data ecosystem (Python, Scala, Spark, etc.)

- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc

- Familiarity with cloud-based platforms like AWS, Azure or GCP

- Experience with cloud data warehouses like Redshift and Snowflake

- Proficient in writing complex SQL queries.

- Excellent communication skills and prior experience of working closely with customers

- Data savvy: loves understanding large-scale data trends and is obsessed with data analysis

- Desire to learn about, explore, and invent new tools for solving real-world problems using data
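
The workflow tools named above (Airflow, Luigi, NiFi) all share one core abstraction: a pipeline is a directed acyclic graph (DAG) of tasks, and each task runs only after its upstream dependencies finish. A minimal illustrative sketch of that idea in plain Python follows — all task names are hypothetical, and real Airflow DAGs use Airflow's own operator API rather than this toy runner:

```python
# Toy sketch of the DAG-scheduling idea behind tools like Airflow/Luigi.
# Not a real scheduler: no retries, backfills, or cycle detection.

def run_pipeline(tasks, deps):
    """tasks: {name: callable}; deps: {name: [upstream names]}.
    Runs each task after its dependencies; returns the execution order."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)          # run upstream tasks first
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
etl = {
    "extract": lambda: log.append("pulled raw data"),
    "transform": lambda: log.append("cleaned data"),
    "load": lambda: log.append("wrote to warehouse"),
}
order = run_pipeline(etl, {"transform": ["extract"], "load": ["transform"]})
```

Production schedulers layer retries, scheduling intervals, and distributed execution on top of exactly this dependency-ordering guarantee.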

Desired Qualifications :

- Cloud computing experience, particularly with Amazon Web Services (AWS)

- Prior experience in Data Warehousing concepts, multi-dimensional data models

- Full command of analytics concepts, including dimensions, KPIs, reports, and dashboards

- Prior experience in managing client implementation of Analytics projects

- Knowledge and prior experience of using machine learning tools

About Hypersonix Inc

Hypersonix offers a unified, AI-Powered Intelligent Enterprise Platform designed to help leading enterprises drive profitable revenue growth.
Founded: 2018
Type: Product
Size: 100-500 employees
Stage: Profitable

Similar jobs

Data Driven Product Manager

at SoStronk

Founded 2014  •  Products & Services  •  20-100 employees  •  Raised funding
Data Analytics
Data Science
Product Strategy
Product Management
KPI management
Game Design
Gamification
SQL
Remote only
3 - 8 yrs
₹20L - ₹30L / yr

About SoStronk

 

SoStronk is a 30 person strong team of gamers, storytellers, engineers, designers and trailblazers who are disrupting gaming and esports at scale. We have built market leading platforms in the esports space for over 5 years and are now positioned for meteoric growth with our latest social gaming platform - IMBR (I am Battle Ready). 

 

Funded by some of the leading strategics in the space - Dream11 (Dream Capital) & Nodwin Gaming (Krafton), IMBR is well on its way to becoming a category creating social gaming platform.    

 

Who you are:

 

We’re looking for an experienced data scientist who has had success working in cross-functional teams of Product Managers, Engineers, Designers, and more. A data-driven individual who is just as passionate about data visualization and analytics as they are interested in deconstructing apps, human psychology and gamification.


Our ideal squad-mate is data obsessed, understands the difference between being data informed vs being data blind and is excited by navigating the unknown, collaborating, succeeding and building amazing, industry-leading products: someone who can work iteratively but also knows when it’s time to dream big and shake things up. You are or have


  • 3+ years as a Data Scientist:  creating measurable impact on features and products. You have a demonstrated track record of scoping and executing complex data projects with high business impact. Startup experience is preferred, ideally in gaming/B2C space. 
  • Strong programming skills: in Python and SQL and experience with & knowledge of: Cloud platforms such as AWS or Google Cloud; Big data using Hive, Spark, EMR etc
  • Full stack data scientist: you are a wizard with the entire pipeline of data analytics, familiarity with analytics platforms such as Amplitude, Branch among others.
  • Impact-driven: You understand KPI’s and north star metrics. You get excited by measuring impact of features and taking data-informed decisions with regards to engagement, retention and monetisation. 
  • Storyteller: You are a storyteller who can convert your data insights into stories with precision and empathy; in doing so get buy-ins from your fellow peers to create impact.
  • User-obsessed: You are deeply empathetic, constantly putting yourself in the shoes of the end users, be their psychologist by understanding their internal and external characteristics, be their personal trainer by figuring out what makes them tick so that they achieve their goals.
  • Renaissance (wo)man. You’re curious. You’re always learning. You’re as comfortable communicating and empathizing with an end user as you are with a designer or engineer
  • Good helicopter. You can alternate between 10,000 foot and 10 foot views easily, and at the right time. Big picture product objectives to microscopic feature alignments all in a day’s work.  
  • High agency leader. You’re kind, charismatic and humble. Teams want to be in the trenches with you, and to build something great by your side. You are self-driven with regards to constantly analyzing your dashboards to find even the smallest opportunities to optimize for KPIs. 



What You’ll Do:


  • Create and manage dashboards: Build trust among stakeholders by collaborating with fellow leads and peers on a constantly evolving data pipeline, with company and product milestones in mind.  
  • Become a domain Leader: Maintain expertise of our domain and market trends, competitor analysis as well as being on the pulse of which mid-core games to support next using platforms such as data.ai (formerly app annie). 
  • Be a metric driver: Define metrics for everything we should be focusing on building, measure it and drive it. Use data-informed decision making to get buy-in from all your fellow squad members.
  • Drive experimentation culture: Figure out ideal A/B tests to increase KPI impact, execute them, and continue to evolve an experimentation culture with little to no approval overhead.
  • Be the glue. Cross-functionally collaborate with growth, design and engineering to drive the product KPIs further. Driving engagement, retention and monetisation strategies as part of a cross-functional team.
  • Be the co-pilot. Work closely alongside the founders in driving the product vision, in particular by being the co-pilot of the CPO. 


You're also excited by the prospect of rolling up your sleeves to tackle meaningful problems each and every day. You’re a kind, passionate and collaborative problem-solver who seeks and gives candid feedback, and values the chance to make an important impact.

If this sounds like you, you'll fit right in at SoStronk.

Job posted by
Ajaikumar Ravichandran

Lead Data Engineer

at Top 3 Fintech Startup

Agency job
via Jobdost
SQL
Amazon Web Services (AWS)
Spark
PySpark
Apache Hive
Bengaluru (Bangalore)
6 - 9 yrs
₹16L - ₹24L / yr

We are looking for an exceptionally talented Lead Data Engineer who has experience implementing AWS services to build data pipelines, API integrations and data warehouse designs. A candidate with both hands-on and leadership capabilities will be ideal for this position.

 

Qualification: At least a bachelor’s degree in Science, Engineering, or Applied Mathematics; a master’s degree is preferred.

 

Job Responsibilities:

• Total 6+ years of experience as a Data Engineer and 2+ years of experience in managing a team

• Have minimum 3 years of AWS Cloud experience.

• Well versed in languages such as Python, PySpark, SQL, NodeJS etc

• Has extensive experience in the Spark ecosystem and has worked on both real-time and batch processing

• Have experience in AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step functions, Airflow, RDS, Aurora etc.

• Experience with modern Database systems such as Redshift, Presto, Hive etc.

• Worked on building data lakes in the past on S3 or Apache Hudi

• Solid understanding of Data Warehousing Concepts

• Good to have experience on tools such as Kafka or Kinesis

• Good to have AWS Developer Associate or Solutions Architect Associate Certification

• Have experience in managing a team
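
As an illustration of the warehouse-style SQL this role calls for, here is a window-function query of the kind Redshift, Presto and Hive all support — shown here running against an in-memory SQLite database; the table and data are invented for the example:

```python
import sqlite3

# Analytical query sketch: rank each customer's orders by amount within
# the customer (a window function), then keep only the top order each.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, amount INTEGER);
INSERT INTO orders VALUES
  ('alice', 120), ('alice', 300), ('bob', 50), ('bob', 75);
""")
top_orders = conn.execute("""
    SELECT customer, amount FROM (
        SELECT customer, amount,
               ROW_NUMBER() OVER (PARTITION BY customer
                                  ORDER BY amount DESC) AS rn
        FROM orders
    ) WHERE rn = 1
    ORDER BY customer
""").fetchall()
```

The same `ROW_NUMBER() OVER (PARTITION BY ...)` pattern works across the major warehouse engines, with dialect differences at the margins.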

Job posted by
Shalaka ZawarRathi

Data Engineer

at Inviz Ai Solutions Private Limited

Founded 2019  •  Products & Services  •  20-100 employees  •  Profitable
Python
PySpark
Scala
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
Apache Hive
Hadoop
Relational Database (RDBMS)
SQL
Big Data
Apache Kafka
JSON
Data flow
Bengaluru (Bangalore)
2 - 8 yrs
₹15L - ₹40L / yr

InViz is a Bangalore-based startup helping enterprises simplify the search and discovery experience for both their end customers and their internal users. We use state-of-the-art technologies in Computer Vision, Natural Language Processing, Text Mining, and other ML techniques to extract information/concepts from data of different formats - text, images, videos - and make them easily discoverable through simple human-friendly touchpoints. 

 

Experience: 2-8 years 

Responsibility: 

  • The person will be responsible for leading the development and implementing advanced analytical approaches across a variety of projects, domain and solutions. 
  • Should have a mix of analytical and technical skills, and be able to take business requirements and develop them into a usable and scalable solution. 
  • One should fully understand the value proposition of data mining and analytical methods. 
  • Should be able to oversee the maintenance and enhancements of existing models, algorithms and processes as well as oversee the development and maintenance of code and process documentation. 

Required Skillset: 

  • Good hands-on experience on Kafka, Hive, Airflow, Shell scripting, No-SQL database 
  • Good exposure to RDBMS and SQL. 
  • Should have skills in data ingestion, transformation, staging and storing of data, analysis of data from Parquet, Avro, JSON, and other formats. 
  • Experience in building and optimizing “big data” data pipelines, architectures and data sets. 
  • Good hands-on experience on Python/Pyspark/Scala-spark. Having exposure to data science libraries is a plus. 
  • Good experience with BigQuery, Dataflow, Pub/Sub, Composer, and Cloud Functions.
  • Create custom software components (e.g., specialized UDFs) and analytics applications. 
  • Hands-on experience with statistical methods such as regression, logistic regression, decision trees, random forests, and other segmentation and clustering methods. 
  • Build high-performance algorithms, prototypes, predictive models and proof of concepts.
  • Experience in the e-commerce domain is a plus.
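
As a concrete instance of the statistical methods in the list above, simple linear regression has a closed-form least-squares solution that fits in a few lines of Python — the data points below are invented purely for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope*x + intercept (one feature)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Closed form: slope = cov(x, y) / var(x)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Noisy points near y = 2x + 1
slope, intercept = fit_line([0, 1, 2, 3], [1.0, 3.1, 4.9, 7.0])
```

Libraries such as scikit-learn generalize this to many features, but the single-feature closed form is the same calculation.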
Job posted by
Shridhar Nayak

MLOps Lead Engineer

at Hiring for an MNC company

Agency job
via Response Informatics
Natural Language Processing (NLP)
Machine Learning (ML)
Computer Vision
Deep Learning
Google Cloud Platform (GCP)
Artificial Intelligence (AI)
TensorFlow
Hyderabad, Bengaluru (Bangalore), Chennai, Pune
5 - 7 yrs
₹5L - ₹25L / yr

Job roles and responsibilities:

  • Design, develop, test, deploy, maintain and improve ML models/infrastructure and software that uses these models
  • Experience writing software in one or more languages such as Python, Scala, R, or similar with strong competencies in data structures, algorithms, and software design
  • Experience working with recommendation engines, data pipelines, or distributed machine learning
  • Experience working with deep learning frameworks (such as TensorFlow, Keras, Torch, Caffe, Theano)
  • Knowledge of data analytics concepts, including big data, data warehouse technical architectures, ETL and reporting/analytic tools and environments
  • Participate in cutting edge research in artificial intelligence and machine learning applications
  • Contribute to engineering efforts from planning and organization to execution and delivery to solve complex, real world engineering problems
  • Working knowledge of different algorithms and machine learning techniques such as Linear & Logistic Regression analysis, Segmentation, Decision trees, Cluster analysis and factor analysis, Time Series Analysis, K-Nearest Neighbour, K-Means algorithm, Random Forests algorithm, NLP (Natural Language Processing), Sentiment analysis, various Artificial Neural Networks, Convolutional Neural Nets (CNN), Bidirectional Recurrent Neural Networks (BRNN)
  • Demonstrated excellent communication, presentation, and problem-solving skills
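
Of the classical techniques listed above, k-nearest-neighbour is simple enough to sketch from scratch. A toy pure-Python version — the training points and labels are invented for the example, and real work would use scikit-learn's implementation:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    train: list of ((x, y), label) pairs."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Two well-separated clusters, labelled "a" and "b"
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
```

Querying a point near either cluster returns that cluster's label; the only model "training" is storing the data, which is why KNN is often the baseline in interviews and early prototypes.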

Technical Skills Required:

  • GCP-native AI/ML services like Vision, NLP, Document AI, Dialogflow, CCAI, BQ, etc.
  • Proficiency with a deep learning framework such as TensorFlow or Keras
  • Proficiency with Python and basic machine learning libraries such as scikit-learn and pandas, plus Jupyter notebooks
  • Expertise in visualizing and manipulating big datasets
  • Ability to select hardware to run an ML model with the required latency
  • Good to have MLOps and Kubeflow knowledge
  • GCP ML Engineer Certification
Job posted by
Swagatika Sahoo

Machine Learning Engineer

at Carsome

Founded 2015  •  Product  •  1000-5000 employees  •  Raised funding
Python
Amazon Web Services (AWS)
Django
Flask
TensorFlow
Big Data
Athena
Remote, Kuala Lumpur
2 - 5 yrs
₹20L - ₹30L / yr
Carsome is a growing startup that is utilising data to improve the experience of second hand car shoppers. This involves developing, deploying & maintaining machine learning models that are used to improve our customers' experience. We are looking for candidates who are aware of the machine learning project lifecycle and can help managing ML deployments.

Responsibilities:

  • Write and maintain production-level code in Python for deploying machine learning models
  • Create and maintain deployment pipelines through CI/CD tools (preferably GitLab CI)
  • Implement alerts and monitoring for prediction accuracy and data drift detection
  • Implement automated pipelines for training and replacing models
  • Work closely with the data science team to deploy new models to production

Required Qualifications:

  • Degree in Computer Science, Data Science, IT or a related discipline
  • 2+ years of experience in software engineering or data engineering
  • Programming experience in Python
  • Experience in data profiling, ETL development, testing and implementation
  • Experience in deploying machine learning models

Good to have:

  • Experience in AWS resources for ML and data engineering (SageMaker, Glue, Athena, Redshift, S3)
  • Experience in deploying TensorFlow models
  • Experience in deploying and managing MLflow
Job posted by
Piyush Palkar

Data Scientist

at TVS Credit Services Ltd

Founded 2009  •  Services  •  100-1000 employees  •  Profitable
Data Science
R Programming
Python
Machine Learning (ML)
Hadoop
SQL server
Linear regression
Predictive modelling
Chennai
4 - 10 yrs
₹10L - ₹20L / yr
Job Description:

  • Be responsible for scaling our analytics capability across all internal disciplines and guide our strategic direction in regards to analytics
  • Organize and analyze large, diverse data sets across multiple platforms
  • Identify key insights and leverage them to inform and influence product strategy
  • Technical interactions with vendors or partners in a technical capacity for scope/approach & deliverables
  • Develop proofs of concept to prove or disprove the validity of a concept
  • Work with all parts of the business to identify analytical requirements and formalize an approach for reliable, relevant, accurate, efficient reporting on those requirements
  • Design and implement advanced statistical testing for customized problem solving
  • Deliver concise verbal and written explanations of analyses to senior management that elevate findings into strategic recommendations

Desired Candidate Profile:

  • MTech / BE / BTech / MSc in CS or Stats or Maths, Operations Research, Statistics, Econometrics or any quantitative field
  • Experience in using Python, R, SAS
  • Experience in working with large data sets and big data systems (SQL, Hadoop, Hive, etc.)
  • Keen aptitude for large-scale data analysis with a passion for identifying key insights from data
  • Expert working knowledge of various machine learning algorithms such as XGBoost, SVM, etc.

We are looking for candidates with the following backgrounds:

  • Experience in Unsecured Loans & SME Loans analytics (cards, installment loans) - risk-based pricing analytics
  • Experience in differential pricing / selection analytics (retail, airlines / travel etc.)
  • Experience in digital product companies or digital eCommerce with a product mindset
  • Experience in Fraud / Risk from Banks, NBFC / Fintech / Credit Bureau
  • Experience in online media with knowledge of media, online ads & sales (agencies) - knowledge of DMP, DFP, Adobe/Omniture tools, Cloud
  • Experience in Consumer Durable Loans lending companies (experience in Credit Cards, Personal Loan - optional)
  • Experience in Tractor Loans lending companies (experience in Farm)
  • Experience in Recovery, Collections analytics
  • Experience in Marketing Analytics with Digital Marketing, Market Mix modelling, Advertising Technology
Job posted by
Vinodhkumar Panneerselvam

Data Engineer

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Apache Hive
Architecture
Data engineering
Java
Python
Scala
ETL
Mumbai
5 - 12 yrs
₹10L - ₹20L / yr
JD of Data Engineer
As a Data Engineer, you are a full-stack data engineer that loves solving business problems.
You work with business leads, analysts and data scientists to understand the business domain
and engage with fellow engineers to build data products that empower better decision making.
You are passionate about data quality of our business metrics and flexibility of your solution that
scales to respond to broader business questions.
If you love to solve problems using your skills, then come join the Team Searce. We have a
casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses
on productivity and creativity, and allows you to be part of a world-class team while still being
yourself.

What You’ll Do
● Understand the business problem and translate these to data services and engineering
outcomes
● Explore new technologies and learn new techniques to solve business problems
creatively
● Think big! and drive the strategy for better data quality for the customers
● Collaborate with many teams - engineering and business, to build better data products

What We’re Looking For
● 1-3 years of experience with:
○ Hands-on experience of any one programming language (Python, Java, Scala)
○ Understanding of SQL is a must
○ Big data (Hadoop, Hive, Yarn, Sqoop)
○ MPP platforms (Spark, Pig, Presto)
○ Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL and application development
Job posted by
Reena Bandekar
Data steward
MDM
Tamr
Reltio
Data engineering
Python
ETL
SQL
Windows Azure
SAS
DM Studio
Profisee
NCR (Delhi | Gurgaon | Noida), Bengaluru (Bangalore), Mumbai, Pune
7 - 8 yrs
₹15L - ₹16L / yr
1. Data Steward:

The Data Steward will collaborate and work closely within the group's software engineering and business divisions. The Data Steward has overall accountability for the group's/division's overall data and reporting posture by responsibly managing data assets, data lineage, and data access, supporting sound data analysis. This role requires focus on data strategy, execution, and support for projects, programs, application enhancements, and production data fixes; makes well-thought-out decisions on complex or ambiguous data issues; and establishes the data stewardship and information management strategy and direction for the group. Effectively communicates with individuals at various levels of the technical and business communities. This individual will become part of the corporate Data Quality and Data Management/entity resolution team supporting various systems across the board.

 

Primary Responsibilities:

 

  • Responsible for data quality and data accuracy across all group/division delivery initiatives.
  • Responsible for data analysis, data profiling, data modeling, and data mapping capabilities.
  • Responsible for reviewing and governing data queries and DML.
  • Accountable for the assessment, delivery, quality, accuracy, and tracking of any production data fixes.
  • Accountable for the performance, quality, and alignment to requirements for all data query design and development.
  • Responsible for defining standards and best practices for data analysis, modeling, and queries.
  • Responsible for understanding end-to-end data flows and identifying data dependencies in support of delivery, release, and change management.
  • Responsible for the development and maintenance of an enterprise data dictionary that is aligned to data assets and the business glossary for the group responsible for the definition and maintenance of the group's data landscape including overlays with the technology landscape, end-to-end data flow/transformations, and data lineage.
  • Responsible for rationalizing the group's reporting posture through the definition and maintenance of a reporting strategy and roadmap.
  • Partners with the data governance team to ensure data solutions adhere to the organization’s data principles and guidelines.
  • Owns group's data assets including reports, data warehouse, etc.
  • Understand customer business use cases and be able to translate them to technical specifications and vision on how to implement a solution.
  • Accountable for defining the performance tuning needs for all group data assets and managing the implementation of those requirements within the context of group initiatives as well as steady-state production.
  • Partners with others in test data management and masking strategies and the creation of a reusable test data repository.
  • Responsible for solving data-related issues and communicating resolutions with other solution domains.
  • Actively and consistently support all efforts to simplify and enhance the Clinical Trial Prediction use cases.
  • Apply knowledge in analytic and statistical algorithms to help customers explore methods to improve their business.
  • Contribute toward analytical research projects through all stages including concept formulation, determination of appropriate statistical methodology, data manipulation, research evaluation, and final research report.
  • Visualize and report data findings creatively in a variety of visual formats that appropriately provide insight to the stakeholders.
  • Achieve defined project goals within customer deadlines; proactively communicate status and escalate issues as needed.

 

Additional Responsibilities:

 

  • Strong understanding of the Software Development Life Cycle (SDLC) with Agile Methodologies
  • Knowledge and understanding of industry-standard/best practices requirements gathering methodologies.
  • Knowledge and understanding of Information Technology systems and software development.
  • Experience with data modeling and test data management tools.
  • Experience in data integration projects
  • Good problem solving & decision-making skills
  • Good communication skills within the team, site, and with the customer

 

Knowledge, Skills and Abilities

 

  • Technical expertise in data architecture principles and design aspects of various DBMS and reporting concepts.
  • Solid understanding of key DBMS platforms like SQL Server, Azure SQL
  • Results-oriented, diligent, and works with a sense of urgency. Assertive, responsible for his/her own work (self-directed), have a strong affinity for defining work in deliverables, and be willing to commit to deadlines.
  • Experience in MDM tools like MS DQ, SAS DM Studio, Tamr, Profisee, Reltio etc.
  • Experience in Report and Dashboard development
  • Statistical and Machine Learning models
  • Python (sklearn, numpy, pandas, gensim)
  • Nice to Have:
  • 1yr of ETL experience
  • Natural Language Processing
  • Neural networks and Deep learning
  • Experience with the Keras, TensorFlow, spaCy, NLTK, and LightGBM Python libraries

 

Interaction :  Frequently interacts with subordinate supervisors.

Education : Bachelor’s degree, preferably in Computer Science, B.E or other quantitative field related to the area of assignment. Professional certification related to the area of assignment may be required

Experience :  7 years of Pharmaceutical /Biotech/life sciences experience, 5 years of Clinical Trials experience and knowledge, Excellent Documentation, Communication, and Presentation Skills including PowerPoint

 

Job posted by
RAHUL BATTA

Data Scientist

at WyngCommerce

Founded 2017  •  Product  •  20-100 employees  •  Raised funding
Data Science
Python
R Programming
Supply Chain Management (SCM)
Bengaluru (Bangalore)
1 - 4 yrs
₹9L - ₹15L / yr
WyngCommerce is building state of the art AI software for the Global Consumer Brands & Retailers to enable best-in-class customer experiences. Our vision is to democratize machine learning algorithms for our customers and help them realize dramatic improvements in speed, cost and flexibility. Backed by a clutch of prominent angel investors & having some of the category leaders in the retail industry as clients, we are looking to hire for our data science team. The data science team at WyngCommerce is on a mission to challenge the norms and re-imagine how retail business should be run across the world. As a Junior Data Scientist in the team, you will be driving and owning the thought leadership and impact on one of our core data science problems. You will work collaboratively with the founders, clients and engineering team to formulate complex problems, run Exploratory Data Analysis and test hypotheses, implement ML-based solutions and fine tune them with more data. This is a high impact role with goals that directly impact our business. 
Your Role & Responsibilities:

  • Implement data-driven solutions based on advanced ML and optimization algorithms to address business problems
  • Research, experiment, and innovate ML/statistical approaches in various application areas of interest and contribute to IP
  • Partner with engineering teams to build scalable, efficient, automated ML-based pipelines (training/evaluation/monitoring)
  • Deploy, maintain, and debug ML/decision models in production environments
  • Analyze and assess data to ensure high data quality and correctness of downstream processes
  • Communicate results to stakeholders and present data/insights to participate in and drive decision making

Desired Skills & Experiences:

  • Bachelors or Masters in a quantitative field from a top tier college
  • 1-2 years experience in a data science / analytics role in a technology / analytics company
  • Solid mathematical background (especially in linear algebra & probability theory)
  • Familiarity with theoretical aspects of common ML techniques (generalized linear models, ensembles, SVMs, clustering algos, graphical models, etc.), statistical tests/metrics, experiment design, and evaluation methodologies
  • Demonstrable track record of dealing with ambiguity, prioritizing needs, bias for iterative learning, and delivering results in a dynamic environment with minimal guidance
  • Hands-on experience in at least one of the following: (a) Anomaly Detection, (b) Time Series Analysis, (c) Product Clustering, (d) Demand Forecasting, (e) Intertemporal Optimization
  • Good programming skills (fluent in Java/Python/SQL) with experience of using common ML toolkits (e.g., sklearn, TensorFlow, Keras, NLTK) to build models for real world problems
  • Computational thinking and familiarity with practical application requirements (e.g., latency, memory, processing time)
  • Excellent written and verbal communication skills for both technical and non-technical audiences
  • (Plus point) Experience of applying ML / other techniques in the domain of supply chain - and particularly in retail - for inventory optimization, demand forecasting, assortment planning, and other such problems
  • (Nice to have) Research experience and publications in top ML/Data science conferences
Job posted by
Ankit Jain

Analytics Scientist - Risk Analytics

at market-leading fintech company dedicated to providing credit

Analytics
Predictive analytics
Linear regression
Logistic regression
Python
R Programming
Noida, NCR (Delhi | Gurgaon | Noida)
1 - 4 yrs
₹8L - ₹18L / yr
Job Description:

Role: Analytics Scientist - Risk Analytics
Experience Range: 1 to 4 Years
Job Location: Noida

Key responsibilities include:

  • Building models to predict risk and other key metrics
  • Coming up with data-driven solutions to control risk
  • Finding opportunities to acquire more customers by modifying/optimizing existing rules
  • Doing periodic upgrades of the underwriting strategy based on business requirements
  • Evaluating 3rd party solutions for predicting/controlling risk of the portfolio
  • Running periodic controlled tests to optimize underwriting
  • Monitoring key portfolio metrics and taking data-driven actions based on the performance

Business Knowledge: Develop an understanding of the domain/function. Manage business processes in the work area. The individual is expected to develop domain expertise in his/her work area.

Teamwork: Develop cross-site relationships to enhance leverage of ideas. Set and manage partner expectations. Drive implementation of projects with the Engineering team while partnering seamlessly with cross-site team members.

Communication: Responsibly perform end-to-end project communication across the various levels in the organization.

Candidate Specification:

Skills:

  • Knowledge of an analytical tool - R or Python
  • Established competency in Predictive Analytics (Logistic & Regression)
  • Experience in handling complex data sources
  • Dexterity with MySQL, MS Excel is good to have
  • Strong analytical aptitude and logical reasoning ability
  • Strong presentation and communication skills

Preferred:

  • 1 - 3 years of experience in the Financial Services/Analytics industry
  • Understanding of the financial services business
  • Experience in working on advanced machine learning techniques

If interested, please send your updated profile in Word format with the below details for further discussion at the earliest:

1. Current Company
2. Current Designation
3. Total Experience
4. Current CTC (Fixed & Variable)
5. Expected CTC
6. Notice Period
7. Current Location
8. Reason for Change
9. Availability for face-to-face interview on weekdays
10. Education Degree

Thanks & Regards,
Hema
Talent Socio
Job posted by
Hema Latha N