About the Startup!
BharatX is a startup changing how 250 million middle-class Indians get access to credit. We provide Credit-as-a-Feature to the customers of other consumer-facing apps and platforms through a simple, plug-and-play integration of our APIs. Our offerings enable journeys like Postpaid on Uber/Ola, Pay after Trial on Lenskart/Meesho, Pay in 3 on Flipkart/boAt, and Credit Line on PhonePe/GPay, all in a white-labelled and embedded manner!
Who We Are:
A team of young, ambitious, and bold people who love to dedicate their life’s work to something meaningful for India and the world. We love to have a shit ton of fun and cut the bullshit corporate culture! We are not colleagues; we are a family, in it for the long run!
Folks who believe in us:
We have been fortunate to have many global VCs, founders, clients, angels, and industry veterans back us in our journey. We also have many mentors across the industry globally who work with us day in, day out on building BharatX. Some of our investors include:
Global VCs | Angels
A special shout-out to the clients of BharatX who have also chosen to back us; their vote of confidence in our product and vision is the most valuable to us.
What you will impact:
Data is the backbone of everything we do at BharatX, and you will be leading that effort. Your role will revolve largely around gathering insights from the data points that the various systems at BharatX generate.
Data points you could handle include:
- Device metrics of different cohorts of users
- User behavioural data
- User interaction data (events triggered based on how a user interacts with various interfaces)
- Many more
And of course you are welcome to introduce more data sources that we might have overlooked.
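For a concrete picture, a single user-interaction event could look something like the minimal sketch below; every field name is a hypothetical illustration, not BharatX's actual schema.

```python
# A hypothetical user-interaction event; all field names are illustrative,
# not BharatX's actual schema.
interaction_event = {
    "user_id": "u_1024",
    "event": "repayment_screen_viewed",              # user interaction data
    "timestamp": "2024-03-01T10:15:30Z",
    "session_id": "s_88",
    "device": {"os": "Android 13", "ram_mb": 4096},  # device metrics
}
print(interaction_event["event"])
```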
What you will learn:
How to get stuff done! You will solve real-world challenges that no prior experience or training can prepare you for; only your grit and passion for solving the problem will help you figure out how to deal with them. You will learn to think from both a technical and a product point of view.
Key Responsibilities:
- Providing insights to optimise existing processes or design new ones
- Providing insights to gauge the performance of various products
- Drafting policies or algorithms that can add to our underwriting stack
What we look for:
We need people who are quick learners and bold enough to suggest crazy ideas. On the technical front, we need people who have:
- Demonstrable experience with SQL and Python/R
- Good knowledge of analytical concepts
- The ability to interact with all stakeholders and understand business requirements
- Startup work experience (preferred)
We also encourage you to apply even if you have taken a break from professional life, for whatever reason. For us, people and their skills matter more than their resumes.
What you get:
We don’t seek employees; we seek friends. If you are looking for an environment where the smart people around you help you achieve your goals, where there is no corporate bureaucracy, where you are encouraged to make mistakes, and where you get complete ownership of your work, then this is the place for you! Here are some side perks that come with this job:
- Be a part of our 0 to 1 ride!
- Attractive compensation with the best-in-class ESOP Structure. You take care of doing your best work, and we will take care of making sure you never have to worry about money.
- Insurance for your entire family (including in-laws/parents).
- Choose your own device (and keep it if you stay more than 24 months).
- A COVID budget for setting up anything you need to work remotely.
- Unlimited paid time off with no questions asked.
- Encouragement to take time for your mental health and personal life.
- Maternity and Paternity Leave.
- A tight-knit and brutally honest team with no politics or hierarchy :D
Similar jobs
Education:
Qualification: any engineering graduate with STRONG programming and logical reasoning skills.
Minimum years of experience: 2–5 years
Required skills:
- Previous experience as a Data Engineer or in a similar role.
- Technical expertise with data models, data mining, and segmentation techniques.
- Knowledge of programming languages (e.g. Java and Python).
- Hands-on experience with SQL programming.
- Hands-on experience with Python programming.
- Knowledge of DBT, ADF, Snowflake, and Databricks would be an added advantage for our current project.
- Strong numerical and analytical skills.
- Experience in dealing directly with customers and internal sales organizations.
- Strong written and verbal communication, including technical writing skills.
Good to have:
- Hands-on experience with cloud services.
- Knowledge of ML.
- Data warehouse builds (DB, SQL, ETL, reporting tools like Power BI, etc.).
Do share your profile with gayathrirajagopalan@jmangroup.com
Your responsibilities:
- Build, improve and extend NLP capabilities
- Research and evaluate different approaches to NLP problems
- Must be able to write code that is well designed and produces deliverable results
- Write code that scales and can be deployed to production
- A solid grasp of the fundamentals of statistical methods is a must
- Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data, and neural networks (RNN, LSTM); see the sketch after this list
- A solid foundation in Python, data structures, algorithms, and general software development skills.
- Ability to apply machine learning to problems that deal with language
- Engineering ability to build robustly scalable pipelines
- Ability to work in a multi-disciplinary team with a strong product focus
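As a rough illustration of these tasks, here is a minimal sketch using spaCy; the library choice is an assumption (the posting does not prescribe one), and it requires the en_core_web_sm model (`python -m spacy download en_core_web_sm`).

```python
# Minimal sketch of POS tagging, lemmatization, and named entity recognition.
# spaCy is an illustrative library choice, not one prescribed by the role.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in Bengaluru next year.")

for token in doc:
    print(token.text, token.pos_, token.lemma_)  # POS tag and lemma per token

for ent in doc.ents:
    print(ent.text, ent.label_)  # named entities, e.g. ORG, GPE
```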
We are looking for a Python developer who has a passion for driving more solar and clean energy in the world by working with us. Our software helps anyone understand how much solar could be put on a rooftop, and calculates how many units of clean energy the solar PV system would generate along with how much the homeowner would save. This is a crucial step in educating people who want to go solar but aren’t completely convinced of solar's value proposition. If you are interested in bringing the latest technologies to the fast-growing solar industry and want to help society transition to a more sustainable future, we would love to hear from you!
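To make that calculation concrete, here is a back-of-the-envelope sketch; every constant (panel rating, sun hours, system losses, tariff) is an illustrative assumption, not the company's actual model.

```python
# Rough rooftop-solar estimate; all constants are illustrative assumptions,
# not the company's actual model.
PANEL_WATT_PEAK = 400        # rated output of one panel, watts
PANEL_AREA_M2 = 2.0          # rooftop footprint of one panel, square metres
SUN_HOURS_PER_DAY = 4.5      # average equivalent full-sun hours per day
SYSTEM_EFFICIENCY = 0.80     # losses from inverter, wiring, dust, heat
TARIFF_PER_KWH = 8.0         # grid price per unit, in local currency

def estimate(roof_area_m2: float) -> tuple[float, float]:
    """Return (annual kWh generated, annual savings) for a given rooftop."""
    panels = int(roof_area_m2 // PANEL_AREA_M2)
    kw_peak = panels * PANEL_WATT_PEAK / 1000
    annual_kwh = kw_peak * SUN_HOURS_PER_DAY * 365 * SYSTEM_EFFICIENCY
    return annual_kwh, annual_kwh * TARIFF_PER_KWH

kwh, savings = estimate(roof_area_m2=40)
print(f"~{kwh:.0f} units/year, saving ~{savings:.0f} per year")
```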
You will -
- Be an early employee at a growing startup and help shape the team culture
- Safeguard code quality on your team, reviewing others’ code with an eye to performance and maintainability
- Be trusted to take point on complex product initiatives
- Work in an ownership-driven, micromanagement-free environment
You should have:
- Strong programming fundamentals (if you don’t officially have a CS degree but know programming, that’s fine with us!)
- A strong problem-solving attitude
- Experience with solar or electrical modelling is a plus, although not required.
What you will do:
- Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
- Automating appraisal mechanisms for all newly launched products and revisiting them for existing products
- Back-testing investment appraisal models at regular intervals to improve them
- Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
- Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
- Identifying relevant sub-sector criteria to score and rate investment opportunities internally
Desired Candidate Profile
What you need to have:
- Bachelor’s degree with relevant work experience of at least 3 years with CA/MBA (mandatory)
- Experience in working in lending/investing fintech (mandatory)
- Strong Excel skills (mandatory)
- Previous experience in credit rating or credit scoring or investment analysis (preferred)
- Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
- Proficiency in data analysis (preferred)
- Good verbal and written communication skills
- 5+ years of industry experience in administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka Streams and PySpark, and streaming databases like Druid or equivalents like Hive
- Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
- Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and Managed Kafka
- 5+ years of industry experience in Python
- Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
- Experience with scripting languages (Python experience highly desirable) and in API development using Swagger
- Experience implementing automated testing platforms and unit tests
- Proficient understanding of code versioning tools such as Git
- Familiarity with continuous integration (e.g. Jenkins)
Responsibilities
- Architect, design, and implement large-scale data processing pipelines using Kafka Streams, PySpark, Fluentd, and Druid (see the sketch after this list)
- Create custom Operators for Kubernetes, Kubeflow
- Develop data ingestion processes and ETLs
- Assist in DevOps operations
- Design and Implement APIs
- Identify performance bottlenecks and bugs, and devise solutions to these problems
- Help maintain code quality, organization, and documentation
- Communicate with stakeholders regarding various aspects of the solution
- Mentor team members on best practices
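As a rough sketch of such a pipeline, the snippet below reads events from Kafka with PySpark Structured Streaming and counts them per minute; the topic, server, and app names are placeholders, and it assumes the spark-sql-kafka connector package is available.

```python
# Minimal streaming pipeline sketch: Kafka -> per-minute event counts.
# Topic and bootstrap-server values are placeholders, not real endpoints.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-counts").getOrCreate()

# Read raw events from a Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
    .selectExpr("CAST(value AS STRING) AS value", "timestamp")
)

# Count events per one-minute window; the watermark bounds retained state.
counts = (
    events.withWatermark("timestamp", "2 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Stream the running counts to the console for demonstration purposes.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```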
● Good communication and collaboration skills with 4-7 years of experience.
● Ability to code and script, with a strong grasp of CS fundamentals and excellent problem-solving abilities.
● Comfort with frequent, incremental code testing and deployment; data management skills.
● Good understanding of RDBMS.
● Experience in building data pipelines and processing large datasets.
● Knowledge of web scraping and data mining is a plus.
● Working knowledge of open-source tools such as MySQL, Solr, Elasticsearch, and Cassandra (data stores) would be a plus.
● Expert in Python programming.
Role and responsibilities
● Inclined towards working in a start-up environment.
● Design and build robust, scalable data engineering solutions for structured and unstructured data, for delivering business insights, reporting, and analytics.
● Expertise in troubleshooting, debugging, data completeness and quality issues, and scaling overall system performance.
● Build robust APIs that power our delivery points (dashboards, visualizations, and other integrations).
• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical, and physical data models for the implementation teams
• A stronghold in SQL is a must: advanced SQL working knowledge and experience working with a variety of relational databases, including SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, and Kafka
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, and Elasticsearch
• Ability to use a major programming language (e.g. Python/Java) to process data for modelling
ML ARCHITECT
Job Overview
We are looking for an ML Architect to help us discover the information hidden in vast amounts of data and make smarter decisions to deliver even better products. Your primary focus will be applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products. You must have strong experience using a variety of data mining and data analysis methods, building and implementing models, using and creating algorithms, and creating and running simulations. You must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes. The role includes automating the identification of textual data, along with its properties and structure, from various types of documents.
Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Creating automated anomaly detection systems and constantly tracking their performance (see the sketch after this list)
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Secure and manage GPU cluster resources for events when needed
- Write comprehensive internal feedback reports and find opportunities for improvements
- Manage GPU instances/machines to increase the performance and efficiency of the ML/DL model.
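A minimal sketch of such a detector on synthetic data, using scikit-learn's IsolationForest; the features and contamination rate are illustrative assumptions, not production settings.

```python
# Toy anomaly detector; the synthetic data and contamination rate are
# illustrative assumptions, not production settings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(1000, 3))   # routine observations
spikes = rng.normal(6, 1, size=(10, 3))     # injected anomalies
X = np.vstack([normal, spikes])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)                    # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(flags == -1)} of {len(X)} points")
```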
Skills and Qualifications
- Strong hands-on experience in Python programming
- Working experience with computer vision models: object detection and image classification
- Good experience in feature extraction, feature selection techniques, and transfer learning
- Working experience in building deep learning NLP models for text classification and image analytics (CNN, RNN, LSTM)
- Working experience with any of the AWS/GCP cloud platforms, with exposure to fetching data from various sources
- Good experience in exploratory data analysis, data visualisation, and other data preprocessing techniques
- Knowledge of at least one DL framework, such as TensorFlow, PyTorch, Keras, or Caffe
- Good knowledge of statistics, distribution of data, and supervised and unsupervised machine learning algorithms
- Exposure to OpenCV; familiarity with GPUs and CUDA
- Experience with NVIDIA software for cluster management and provisioning, such as NVSM, DCGM, and DeepOps
- We are looking for a candidate with 14+ years of experience who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with AWS cloud services: EC2, RDS, AWS SageMaker (an added advantage)
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
- Managing available resources such as hardware, data, and personnel so that deadlines are met
- Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Verifying data quality, and/or ensuring it via data cleaning
- Supervising the data acquisition process if more data is needed
- Defining validation strategies
- Defining the pre-processing or feature engineering to be done on a given dataset
- Defining data augmentation pipelines
- Training models and tuning their hyperparameters (see the sketch below)
- Analysing the errors of the model and designing strategies to overcome them
- Deploying models to production
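A minimal sketch of the training-and-tuning step, assuming scikit-learn; the dataset and parameter grid are illustrative choices, not the employer's stack.

```python
# Train a model and tune its hyperparameters with cross-validated grid
# search; the dataset and grid are illustrative, not the employer's stack.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 8]},
    cv=5,  # validation strategy: 5-fold cross-validation
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```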