What you will do:
- Identifying alternative data sources beyond financial statements and implementing them as part of the assessment criteria
- Automating appraisal mechanisms for all newly launched products and revisiting them for existing products
- Back-testing investment appraisal models at regular intervals to improve them
- Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
- Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
- Identifying relevant sub-sector criteria to score and rate investment opportunities internally
Desired Candidate Profile
What you need to have:
- Bachelor's degree plus CA/MBA, with at least 3 years of relevant work experience (mandatory)
- Experience working in a lending/investing fintech (mandatory)
- Strong Excel skills (mandatory)
- Previous experience in credit rating or credit scoring or investment analysis (preferred)
- Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
- Proficiency in data analysis (preferred)
- Good verbal and written communication skills
About us:
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers across various industry verticals.
Roles and Responsibilities:
- Collaborate with cross-functional teams to design, develop, and deploy machine learning models and algorithms in the supply chain domain.
- Research, experiment with, and implement state-of-the-art machine learning techniques and frameworks to solve challenging problems.
- Develop and optimize deep learning models for different tasks such as NER, matching, image recognition, and generative content creation.
- Stay up to date with the latest advancements in ML and AI research and integrate them into our projects.
- Collaborate with data scientists, software engineers, and product teams to integrate machine learning solutions into production environments.
- Document research findings, methodologies, and codebase to facilitate knowledge sharing and team collaboration.
- Troubleshoot and resolve issues in production environments to ensure the seamless operation of our systems.
- Perform root cause analysis of product defects and implement effective solutions.
- Design, code, and maintain parts of the product and drive customer adoption.
- Apply multiple data science methodologies to solve complex business problems.
Qualification/Requirements:
- Strong problem-solving skills and the ability to work on complex, open-ended challenges.
- Motivated self-starter with a strong work ethic and the ability to work independently and as part of a team.
- Proven experience working on NLP and deep learning with a strong portfolio of projects.
- Strong programming skills in Python and the ability to write efficient and maintainable code.
- Proficiency in machine learning libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Experience with cloud-based AI services and infrastructure (e.g., AWS).
- Must have experience in API development and API integration.
- Experience working in production environments, ensuring system stability and performance.
SKILLS:
Mandatory Skills:
- Advanced understanding of Adobe Analytics, at the Architect level or at least the Business Practitioner level.
- Excellent analytical skills to understand the intricacies of data and visualization.
- Advanced MS Excel skills.
- Good communication skills.
- Team building and management.
Desired Skills:
- Hands-on experience with visualization tools like Tableau and Power BI.
- Working understanding of Adobe DTM and Launch (tag management systems).
- Basic SQL.
What you’ll be responsible for:
- Monitor site performance on a daily basis.
- Understand user behaviour and create actionable dashboards.
- Create and maintain daily/weekly/monthly reports.
- Analyse and audit data to generate dashboards.
- Measure and report the performance of marketing campaigns, gaining insights and assessing results against goals.
- Apply hands-on knowledge of user behaviour tracking systems and methodologies.
- Identify key needs or gaps and provide leadership to close them.
Challenges you’ll be facing (and also be responsible for) in the role:
- Understand client requirements and create ad hoc reports/recommendations.
- Provide meaningful insights from data to support operational and strategic decisions for clients.
- Build a strong understanding of the various technologies used in the digital transformation space.
- Apply data visualization skills to help build interactive reports.
- Work as an independent and proactive self-starter.
We are looking for a technically driven "Full-Stack Engineer" for one of our premium clients.
COMPANY DESCRIPTION:
Qualifications
• Bachelor's degree in computer science or related field; Master's degree is a plus
• 3+ years of relevant work experience
• Meaningful experience with at least two of the following technologies: Python, Scala, Java
• Strong, proven experience with distributed processing frameworks (Spark, Hadoop, EMR) and SQL is very much expected
• Commercial client-facing project experience is helpful, including working in close-knit teams
• Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
• Proven ability to clearly communicate complex solutions
• Understanding of information security principles to ensure compliant handling and management of client data
• Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
• Extraordinary attention to detail
- 4 to 7 years of relevant experience as a Big Data Engineer
- Hands-on experience in Scala or Python
- Hands-on experience with major components of the Hadoop ecosystem such as HDFS, MapReduce, Hive, and Impala.
- Strong programming experience building applications/platforms using Scala or Python.
- Experience implementing Spark RDD transformations and actions to support business analysis.
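To illustrate the kind of Spark work referenced above, here is a minimal, hypothetical PySpark sketch of RDD transformations (filter, map, reduceByKey) followed by an action (collect); the records and their layout are invented for the example.

```python
# Minimal PySpark sketch: RDD transformations and an action.
# The (order_id, region, amount) records below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-sketch").getOrCreate()
sc = spark.sparkContext

orders = sc.parallelize([
    ("o1", "north", 120.0),
    ("o2", "south", 80.0),
    ("o3", "north", 45.5),
])

# Transformations are lazy: keep orders of 50 or more, re-key by region,
# then sum amounts per region.
totals_by_region = (
    orders.filter(lambda rec: rec[2] >= 50.0)
          .map(lambda rec: (rec[1], rec[2]))
          .reduceByKey(lambda a, b: a + b)
)

# Action: triggers execution and returns the results to the driver.
print(totals_by_region.collect())

spark.stop()
```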
We specialize in productizing solutions built on new technology.
Our vision is to build engineers with entrepreneurial and leadership mindsets who can create highly impactful products and solutions using technology to deliver immense value to our clients.
We strive to bring innovation and passion to everything we do, whether it is services, products, or solutions.
We’re hiring a talented Data Engineer and Big Data enthusiast to work on our platform and help ensure that our data quality is flawless. As a company, we have millions of new data points coming into our system every day. You will be working with a passionate team of engineers to solve challenging problems and ensure that we can deliver the best data to our customers, on time. You will be using the latest cloud data warehouse technology to build robust and reliable data pipelines.
Duties/Responsibilities Include:
Requirements:
Exceptional candidates will have:
- You hold an MS/Ph.D. degree in a STEM domain and have 5+ years in a relevant position
- You share your ideas and continuously improve yourself and the team around you.
- You have experience building and scaling data teams across multiple locations and domains.
- You have a good understanding of evolving an organization’s culture based on analytics and data insights
- You are a natural and comfortable leader with excellent problem-solving, organizational, and analytical skills
- You are passionate about using data to improve business and engineering practices like continuous delivery, traceability, and observability
- You have strong communication skills, high integrity, and great attention to detail
You’ll get to work with:
- Consumer-facing, as well as core platform, finance, and distribution business units
- Marketing and product teams, through to our engineering teams
- Modern infrastructure (Kubernetes, AWS, GCP)
What we offer
- We offer you the chance to be part of a truly amazing journey in a company that sets very high targets and works hard to achieve them. You will work with smart, motivated, and engaged co-workers from all over the world in an intense and very energetic environment, and you will have a tangible impact on the way we operate and expand our business.
Some of the highlights of the package include:
- Strong technical culture of continuous innovation and improvement
- Chance to become a shareholder of Gelato!
- Flexible festive holidays: swap days off according to your values and beliefs.
- Work at one of our hub city offices or even remotely
- And much more!
Basic Qualifications:
- Bachelor's in Computer Science/Mathematics with research exposure (machine learning, deep learning, statistics, data mining, game theory, or core mathematical areas) from Tier-1 tech institutes.
- 3+ years of relevant experience building large-scale machine learning or deep learning models and/or systems.
- 1 year or more of experience specifically with deep learning (CNN, RNN, LSTM, RBM, etc.).
- Strong working knowledge of deep learning, machine learning, and statistics.
- Deep domain understanding of Personalization, Search, and Visual.
- Strong math skills with statistical modeling/machine learning.
- Hands-on experience building models with deep learning frameworks like MXNet or TensorFlow (a minimal sketch follows this list).
- Experience using Python and statistical/machine learning libraries.
- Ability to think creatively and solve problems.
- Data presentation skills.
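As an illustration of the framework experience mentioned above, here is a small, hypothetical TensorFlow/Keras sketch that builds and trains a toy classifier on synthetic data; it is a sketch under assumed shapes and labels, not a production model.

```python
# Hypothetical TensorFlow/Keras sketch: a toy binary classifier on synthetic data.
import numpy as np
import tensorflow as tf

# Synthetic data: 256 samples, 20 features; label is 1 when the feature sum exceeds 10.
X = np.random.rand(256, 20).astype("float32")
y = (X.sum(axis=1) > 10.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the training data
```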
Preferred:
- MS/Ph.D. (machine learning, deep learning, statistics, data mining, game theory, or core mathematical areas) from IISc or other top global universities.
- Or publications in highly accredited journals (if available, please share links to your published work).
- Or a history of scaling ML/deep learning algorithms to massive scale.
JD:
Required Skills:
- Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala.
- Strong practical knowledge of SQL.
- Hands-on experience with Spark/Spark SQL
- Strong grounding in data structures and algorithms
- Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications built on Big Data technologies
- Experience with Big Data tools such as Hadoop, MapReduce, Spark, etc.
- Experience with NoSQL databases such as HBase
- Experience with Linux OS environments (shell scripting, AWK, sed)
- Intermediate RDBMS skills; able to write SQL queries with complex joins on top of a large RDBMS (100+ tables)
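As an illustration of the multi-table SQL work described above, here is a minimal, hypothetical Spark SQL sketch: two small DataFrames are registered as temporary views and joined with an aggregation. The table and column names are invented for the example.

```python
# Hypothetical Spark SQL sketch: register two small views and run a join with aggregation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

customers = spark.createDataFrame(
    [(1, "Asha"), (2, "Ravi")], ["customer_id", "name"]
)
orders = spark.createDataFrame(
    [(1, 250.0), (1, 99.0), (2, 410.0)], ["customer_id", "amount"]
)
customers.createOrReplaceTempView("customers")
orders.createOrReplaceTempView("orders")

# A join plus aggregation expressed as SQL over the registered views.
result = spark.sql("""
    SELECT c.name, COUNT(*) AS order_count, SUM(o.amount) AS total_amount
    FROM customers c
    JOIN orders o ON c.customer_id = o.customer_id
    GROUP BY c.name
""")
result.show()

spark.stop()
```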