Xuriti is an anchor-led B2B sales enablement platform. We use anchor-sponsored credit for retailers to create better engagement and build a fuller understanding of the complete sales channel.
We are looking for a dynamic, energetic Data Science Intern who is eager to learn about underwriting and risk modeling in the B2B fintech space.
Responsibilities
· Undertake data collection, preprocessing, and analysis of structured and unstructured data
· Analyze large amounts of information to discover trends and patterns
· Build predictive models and machine-learning algorithms
· Combine models through ensemble modeling (see the sketch after this list)
· Present information using data visualization techniques
· Propose solutions and strategies to business challenges
· Collaborate with engineering and product development teams
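For illustration only: a minimal sketch of what "combining models through ensemble modeling" can look like in practice, using scikit-learn soft voting on synthetic data. The library choice, base models, and dataset are assumptions, not part of the role description.

```python
# Minimal ensemble-modeling sketch: combine two classifiers with soft voting.
# scikit-learn and the synthetic dataset are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Soft voting averages the predicted class probabilities of the base models.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print(f"Held-out accuracy: {ensemble.score(X_test, y_test):.3f}")
```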
Qualifications
· Understanding of machine learning and operations research
· Knowledge of R, SQL, Python, and MongoDB
· Analytical mind and business acumen
· Strong math skills (e.g. statistics, algebra)
· Problem-solving aptitude
· Excellent communication and presentation skills
· BSc/BTech in Computer Science, Engineering, or a relevant field; a graduate degree in Data Science or another quantitative field is preferred
Similar jobs
- Collecting and analyzing large sets of data using statistical techniques and software tools.
- Identifying patterns and trends in data to inform business decisions.
- Creating reports, charts, and visualizations to present data insights to stakeholders.
- Collaborating with cross-functional teams to identify opportunities for data-driven innovation.
- Continuously monitoring and analyzing data to identify areas for improvement.
- Utilizing programming languages such as Python, R, or SQL to manipulate and extract data from various sources (a short SQL-from-Python sketch follows this list).
- Communicating complex data insights to non-technical stakeholders in an easy-to-understand format.
- Staying current with new data analysis tools and technologies to provide the best solutions.
- Participating in data governance, data quality and data security activities.
- Bachelor's degree in a quantitative field, such as statistics, math, computer science, or operations research is usually required.
- Bachelor's or Master's degree in a quantitative field (such as Engineering, Statistics, Mathematics, Sciences, or Computer Science with Modeling/Data Science), preferably with more than 2 years of relevant work experience.
- Must have in-depth knowledge and understanding of Heap Analytics and Google Analytics.
- Must have in-depth knowledge of creating and maintaining complex queries with SQL.
- Must have the ability to work with data visualization tools such as Grafana.
- Experience with data warehousing, transformation, and processing is a plus. Work experience with real data for customer insights, business, and market analysis will be advantageous.
- Familiarity with R and statistical packages is preferred.
- Ability to manage simultaneous tasks in a fast-paced, technology-oriented environment.
- Experience with text analytics, data mining and web analytics.
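As a hedged illustration of the SQL-plus-Python skills the listing above asks for, here is a minimal sketch that runs an aggregate query into a pandas DataFrame. SQLite and the `orders` table are assumptions chosen only to keep the example self-contained; a real setup would query MySQL, Postgres, or a warehouse instead.

```python
# Hypothetical example of pulling data into pandas with SQL.
# SQLite and the 'orders' schema are assumptions for self-containment.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (retailer TEXT, amount REAL, order_date TEXT);
    INSERT INTO orders VALUES
        ('Acme', 120.0, '2024-01-05'),
        ('Acme', 80.0,  '2024-02-11'),
        ('Bolt', 200.0, '2024-01-20');
""")

# Aggregate order value per retailer, then continue the analysis in pandas.
df = pd.read_sql_query(
    "SELECT retailer, SUM(amount) AS total_amount, COUNT(*) AS n_orders "
    "FROM orders GROUP BY retailer ORDER BY total_amount DESC",
    conn,
)
print(df)
```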
- Sound knowledge of MongoDB as a primary skill (a brief pymongo sketch follows this list).
- Hands-on experience with MySQL as a secondary skill is sufficient.
- Experience with replication, sharding, and scaling.
- Design, install, and maintain highly available systems (including monitoring, security, backup, and performance tuning).
- Implement secure database and server installations (privileged-access methodology / role-based access).
- Help the application team with query writing, performance tuning, and other day-to-day issues.
- Deploy automation techniques for day-to-day operations.
- Must possess good analytical and problem-solving skills.
- Must be willing to work flexible hours as needed.
- Scripting experience is a plus.
- Ability to work independently and as a member of a team.
- Good verbal and written communication skills.
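For concreteness on the MongoDB-first requirement above, a minimal pymongo sketch might look like this. The connection string, database, and collection names are assumptions, and a MongoDB server must be running locally for it to execute.

```python
# Minimal pymongo sketch; assumes a MongoDB server on localhost:27017.
# The 'sales' database and 'retailers' collection are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["sales"]

# Create an index, insert a document, then query it back with a filter.
db.retailers.create_index("region")
db.retailers.insert_one({"name": "Acme", "region": "south", "credit_limit": 50000})
for doc in db.retailers.find({"region": "south"}, {"_id": 0}):
    print(doc)
```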
- Cloud: GCP
- Must have: BigQuery, Python, Vertex AI (a short BigQuery sketch follows this list)
- Nice-to-have services: Dataplex
- Experience level: 5-10 years
- Preferred industry (nice to have): Manufacturing – B2B sales
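A minimal sketch of querying BigQuery from Python, matching the must-have stack above. It assumes google-cloud-bigquery is installed and Application Default Credentials are configured; the public dataset is an arbitrary illustration.

```python
# Minimal BigQuery sketch using the official Python client.
# Assumes `pip install google-cloud-bigquery` and Application Default
# Credentials (e.g. `gcloud auth application-default login`).
from google.cloud import bigquery

client = bigquery.Client()  # project is inferred from the environment

# Query a BigQuery public dataset; the table choice is illustrative only.
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE state = 'TX'
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)
```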
Expert in Machine Learning (ML) and Natural Language Processing (NLP).
Expert in Python, PyTorch, and data structures (a short transformers sketch follows this section).
Experience across the ML model life cycle (data preparation, model training and testing, and MLOps).
Strong experience in NLP, NLU, and NLG using transformers and deep learning.
Experience in federated learning is a plus.
Experience with knowledge graphs and ontologies.
Responsible for developing, enhancing, modifying, optimizing, and/or maintaining applications, pipelines, and the codebase to enhance the overall solution.
Experience working with scalable, highly interactive, high-performance ML systems/projects.
Design, code, test, debug, and document programs, and support activities for the corporate systems architecture.
Work closely with business partners to define requirements for ML applications and advancements of the solution.
Contribute to specifications and create comprehensive technical documents.
Experience/knowledge in designing enterprise-grade system architecture for solving complex problems, with a sound understanding of object-oriented programming and design patterns.
Experience in Test-Driven Development and Agile methodologies.
Good communication skills for a client-facing environment.
Hunger for learning; a self-starter with a drive to technically mentor a cohort of developers. Good to have: working experience in Knowledge Graph-based ML product development and AWS/GCP-based ML services.
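As a rough illustration of the transformer-based NLP work described above, a minimal sketch with the Hugging Face transformers library (PyTorch backend) might look like this. The model checkpoint is an assumption and downloads on first use.

```python
# Minimal transformer-based NLP sketch with Hugging Face transformers.
# The checkpoint is an illustrative assumption, fetched on first run.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Anchor-sponsored credit improved retailer engagement."))
# -> [{'label': 'POSITIVE', 'score': ...}]
```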
What we look for:
We are looking for an associate who will crunch data from various sources and surface the key points from it. The associate will also help us improve existing pipelines and build new ones as requests come in, visualize the data where required, and find flaws in our existing algorithms.
Responsibilities:
- Work with multiple stakeholders to gather requirements for data or analysis and act on them.
- Write new data pipelines and maintain the existing ones.
- Gather data from various DBs and derive the required metrics from them.
Required Skills:
- Experience with Python and libraries such as Pandas and NumPy.
- Experience in SQL and an understanding of NoSQL DBs.
- Hands-on experience in Data engineering.
- Must have good analytical skills and knowledge of statistics.
- Understanding of Data Science concepts.
- Bachelor's degree in Computer Science or a related field.
- Problem-solving skills and ability to work under pressure.
Nice to have:
- Experience in MongoDB or any other NoSQL DB.
- Experience in ElasticSearch.
- Knowledge of Tableau, Power BI or any other visualization tool.
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiarity with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc. (a minimal Airflow sketch follows this list).
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
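A minimal Apache Airflow sketch of the kind of pipeline work mentioned above. The DAG id, task names, and daily schedule are assumptions for illustration; it requires Airflow 2.4+ for the `schedule` argument.

```python
# Minimal Apache Airflow DAG sketch; requires `pip install apache-airflow`.
# DAG id, task names, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling data from source DBs")


def transform():
    print("computing required metrics")


with DAG(
    dag_id="example_metrics_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform only after extract succeeds
```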
Preference will be given to candidates working in tech product companies.
SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus.
JD
- Strong analytical skills with the ability to collect, organize, analyze, and interpret trends or patterns in complex data sets and provide reports and visualizations.
- Work with management to prioritize business KPIs and information needs; locate and define new process-improvement opportunities.
- Technical expertise with data models, database design and development, data mining, and segmentation techniques.
- Proven success in a collaborative, team-oriented environment
- Working experience with geospatial data will be a plus.