What you will be doing:
As part of the Global Credit Risk and Data Analytics team, you will be responsible for analytical initiatives such as the following:
- Dive into the data and identify patterns
- Develop end-to-end credit models and credit policy for our existing credit products
- Leverage alternate data to develop best-in-class underwriting models
- Work on big data to develop risk analytics solutions
- Develop fraud models and a fraud rule engine
- Collaborate with various stakeholders (e.g. tech, product) to understand and design the best implementable solutions
- Work with cutting-edge techniques, e.g. machine learning and deep learning models
Examples of past projects:
- Lazypay credit risk model using the CatBoost modelling technique; an end-to-end pipeline for feature engineering and model deployment in production using Python (a minimal sketch follows this list)
- Fraud model development, deployment and rules for EMEA region
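By way of illustration, here is a minimal sketch of a CatBoost-based credit risk model of the kind mentioned above. The data file, column names, and hyperparameters are hypothetical; the actual Lazypay pipeline is not public.

```python
# Minimal CatBoost credit-risk sketch; file path, columns, and
# hyperparameters are illustrative assumptions, not the real pipeline.
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("loans.csv")  # hypothetical training data
cat_features = ["employment_type", "city_tier"]  # assumed categorical columns
X = df.drop(columns=["defaulted"])
y = df["defaulted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = CatBoostClassifier(iterations=500, learning_rate=0.05,
                           cat_features=cat_features, verbose=False)
model.fit(X_train, y_train, eval_set=(X_test, y_test))

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```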
Basic Requirements:
- 1-3 years of work experience as a data scientist in the credit domain
- 2016 or 2017 batch from a premier college (e.g. B.Tech. from IITs or NITs, Economics from DSE/ISI, etc.)
- Strong problem-solving skills and the ability to understand and execute complex analyses
- Experience in at least one of R/Python/SAS, plus SQL
- Experience in the credit industry (fintech/bank)
- Familiarity with the best practices of Data Science
Add-on Skills:
- Experience in working with big data
- Solid coding practices
- Passion for building new tools/algorithms
- Experience in developing Machine Learning models
Who We Are:
DeepIntent is leading the healthcare advertising industry with data-driven solutions built for the future. From day one, our mission has been to improve patient outcomes through the artful use of advertising, data science, and real-world clinical data.
What You’ll Do:
We are looking for a Senior Software Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Analytics organization and will require integration and partnership with the Engineering organization. The ideal candidate is an inquisitive self-starter who is not afraid to take on and learn from challenges and will constantly seek to improve the facets of the business they manage. They will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the engineering interface between the Analytics and Engineering teams
- Develop and standardize all interface points for analysts to retrieve and analyze data, with a focus on research methodologies and data-based decisioning
- Optimize queries and data-access efficiency; serve as the expert on how to most efficiently obtain desired data points
- Build “mastered” versions of the data for Analytics-specific querying use cases (see the sketch after this list)
- Help with data ETL and table performance optimization
- Establish a formal data practice for the Analytics organization in conjunction with the rest of DeepIntent
- Build and operate scalable and robust data architectures
- Interpret analytics methodology requirements and apply them to the data architecture to create standardized queries and operations for use by analytics teams
- Implement DataOps practices
- Master existing and new data pipelines and develop appropriate queries to meet analytics-specific objectives
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts
- Operate between engineers and analysts to unify both practices for analytics insight creation
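To make the "mastered" data idea concrete, here is a minimal sketch using the google-cloud-bigquery client to publish an analytics-facing view. The project, dataset, table, and column names are hypothetical illustrations.

```python
# Hypothetical sketch: publish a "mastered" analytics view in BigQuery.
# Project, dataset, table, and column names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="deepintent-analytics")  # hypothetical project

sql = """
CREATE OR REPLACE VIEW analytics.mastered_campaign_daily AS
SELECT campaign_id,
       DATE(event_ts)   AS event_date,
       COUNT(*)         AS impressions,
       COUNTIF(clicked) AS clicks
FROM raw.ad_events
GROUP BY campaign_id, event_date
"""
client.query(sql).result()  # blocks until the DDL statement completes
```

Analysts then query the stable view rather than the raw event table, which is the unifying interface point described above.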
Who You Are:
- Adept in market research methodologies and in using data to deliver representative insights
- Inquisitive and curious; understands how to query complicated data sets and how to move and combine data between databases
- Deep SQL experience is a must
- Exceptional communication skills, with the ability to collaborate and translate between technical and non-technical needs
- English-language fluency and proven success working with teams in the U.S.
- Experience in designing, developing, and operating configurable data pipelines serving high-volume, high-velocity data
- Experience working with public clouds like GCP/AWS
- Good understanding of software engineering, DataOps, data architecture, and Agile and DevOps methodologies
- Experience building data architectures that optimize performance and cost, whether the components are prepackaged or homegrown
- Proficient with SQL, Python or a JVM-based language, and Bash
- Experience with Apache open-source projects such as Spark, Druid, Beam, and Airflow, and with big-data databases like BigQuery, ClickHouse, etc. (an Airflow sketch follows this list)
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, and learn and be curious
- Comfortable working in the EST time zone
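For the Airflow point above, a minimal sketch of a configurable pipeline DAG; the DAG name and the extract/load logic are hypothetical placeholders, not a production pipeline.

```python
# Hypothetical Airflow DAG sketch; task logic is a placeholder.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from source")        # placeholder logic

def load():
    print("write curated table for analysts")   # placeholder logic

with DAG(dag_id="daily_media_metrics",          # hypothetical DAG name
         start_date=datetime(2024, 1, 1),
         schedule_interval="@daily",
         catchup=False) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```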
- Partner with business stakeholders to translate business objectives into clearly defined analytical projects.
- Identify opportunities for text analytics and NLP to enhance the core product platform, select the best machine learning techniques for the specific business problem, and then build the models that solve the problem.
- Own the end-to-end process, from recognizing the problem to implementing the solution.
- Define the variables and their inter-relationships and extract the data from our data repositories, leveraging infrastructure including Cloud computing solutions and relational database environments.
- Build predictive models that are accurate and robust and that help our customers to utilize the core platform to the maximum extent.
Skills and Qualifications
- 12 to 15 years of experience.
- An advanced degree in predictive analytics, machine learning, or artificial intelligence; or a degree in programming and significant experience with text analytics/NLP. The candidate should have a strong background in machine learning (unsupervised and supervised techniques), in particular an excellent understanding of machine learning techniques and algorithms such as k-NN, Naive Bayes, SVM, decision forests, logistic regression, MLPs, RNNs, etc.
- Experience with text mining, parsing, and classification using state-of-the-art techniques.
- Experience with information retrieval, Natural Language Processing, Natural Language Understanding, and Neural Language Modeling.
- Ability to evaluate the quality of ML models and to define the right performance metrics for models in accordance with the requirements of the core platform.
- Experience with the Python data science ecosystem: Pandas, NumPy, SciPy, scikit-learn, NLTK, Gensim, etc. (a short text-classification sketch follows this list)
- Excellent verbal and written communication skills, particularly the ability to communicate technical results and recommendations to both technical and non-technical audiences.
- Ability to perform high-level work both independently and collaboratively as a project member or leader on multiple projects.
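To make the text-analytics expectation concrete, here is a minimal scikit-learn sketch of a TF-IDF plus logistic regression text classifier. The toy corpus and labels are purely illustrative.

```python
# Minimal text-classification sketch; the toy corpus is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = ["refund not processed", "great product, works well",
         "app crashes on login", "love the new update"]
labels = ["complaint", "praise", "complaint", "praise"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # unigrams + bigrams
    ("lr", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)
print(clf.predict(["checkout keeps failing"]))  # -> likely "complaint"
```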
- Bring in industry best practices around creating and maintaining robust data pipelines for complex data projects with/without an AI component
- Programmatically ingest data from several static and real-time sources (incl. web scraping)
- Render results through dynamic interfaces, incl. web/mobile/dashboard, with the ability to log usage and granular user feedback
- Tune performance and optimally implement complex Python scripts (using Spark), SQL (using stored procedures, Hive), and NoSQL queries in a production environment (a PySpark sketch follows this list)
- Industrialize ML/DL solutions, deploy and manage production services, and proactively handle data issues arising in live apps
- Perform ETL on large and complex datasets for AI applications; work closely with data scientists on performance optimization of large-scale ML/DL model training
- Build data tools to facilitate fast data cleaning and statistical analysis
- Ensure data architecture is secure and compliant
- Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
- Work closely with APAC CDO and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
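A minimal PySpark sketch of the kind of ETL step described above; the storage paths and column names are hypothetical illustrations.

```python
# Hypothetical PySpark ETL sketch; paths and columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

events = spark.read.parquet("s3://bucket/raw/events/")  # hypothetical path

# Aggregate raw purchase events into a daily spend table per user.
daily = (events
         .filter(F.col("event_type") == "purchase")
         .groupBy("user_id", F.to_date("event_ts").alias("event_date"))
         .agg(F.sum("amount").alias("daily_spend")))

daily.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3://bucket/curated/daily_spend/")
```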
You should be:
- Expert in structured and unstructured data in traditional and big data environments – Oracle/SQL Server, MongoDB, Hive/Pig, BigQuery, and Spark
- Fluent in Python programming, in both traditional and distributed models (PySpark)
- Expert in shell scripting and writing schedulers
- Hands-on with cloud: experience deploying complex data solutions in hybrid cloud/on-premise environments, both for data extraction/storage and computation
- Hands-on in deploying production apps using large volumes of data with state-of-the-art technologies like Docker, Kubernetes, and Kafka
- Strong knowledge of data security best practices
- 5+ years experience in a data engineering role
- Science / Engineering graduate from a Tier-1 university in the country
- And most importantly, you must be a passionate coder who really cares about building apps that help people do things better, smarter, and faster, even while they sleep
Company Description
Miratech is an IT services and outsourcing company that provides services to multinational organizations all over the world. Our highly professional team achieves success with 99% of IT projects in the financial, telecommunications, and technology domains. Founded in 1989, Miratech is headquartered in New York, USA, with R&D centers in Poland, the Philippines, Slovakia, Spain, and Ukraine. Technical complexity is our passion, stability is our standard, and a friendly work environment is our style. We empower our employees to grow together with the company, to achieve ambitious goals, and to be part of a relentless international team that helps visionaries change the world.
Job Description
We are looking for a Bot Developer to join our team, who will help us work on solutions and implement technologies.
The ideal candidate will have strong knowledge of the technologies and programming languages with which conversational chatbots are developed. A good understanding of dialog systems, and of developing and programming conversational chatbots using the Microsoft framework, is required.
Responsibilities:
- Design and implement voice and chat bots
- Troubleshoot and resolve issues related to voice/chat bots
- Assist in planning and estimating development projects/sprints
- Take part in code reviews and contribute to team knowledge sharing
- Provide technical guidance and support to other team members
- Work in an agile environment, using methodologies like Scrum or Kanban
Qualifications
- 2-3 years of experience in bot development using Node.js
- Strong experience in developing bots using the Azure Bot Framework
- Experience with ML-based conversational AI using Azure Cognitive Services, including building conversational bots with LUIS (a minimal echo-bot sketch follows this list)
- Experience in working with REST API calls, JSON, and systems integration
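Although this role is Node.js-based, the same Bot Framework concepts can be sketched with the Python SDK (botbuilder-core, assumed installed). A minimal echo-style handler; a production bot would route utterances to LUIS for intent recognition rather than echoing.

```python
# Minimal echo-bot sketch using the Python Bot Framework SDK
# (botbuilder-core); a production bot would call LUIS for intents.
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext

class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        text = turn_context.activity.text
        # Echo the utterance back; replace with intent routing in practice.
        await turn_context.send_activity(MessageFactory.text(f"You said: {text}"))
```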
Secondary Skills
- Ability to work with business and technology teams to build and deploy an analytical solution as per client needs.
- Ability to multi-task, solve problems and think strategically.
- Strong communication and collaboration skills
About Us
We are an AI-Powered CX Cloud that enables enterprises to transform customer experience and boost revenue with our APIs by automating and analyzing customer interactions at scale. We assist across multiple voice and non-voice channels in 30+ languages, while coaching and training agents at minimal cost.
The problem we are solving
Compared to worldwide norms, customer support in traditional contact centers is quite poor. With a high volume of queries, insufficient agent capacity, and inadequate customer support systems, businesses struggle with a multi-fold rise in customer discontent and bounce rate, resulting in connectivity failure points between them and their customers. To address this issue, IITian couple Manish and Rashi Gupta founded Rezo's AI-Powered CX Cloud for Enterprises in 2018 to help businesses avoid customer churn and boost revenue without incurring financial costs, by providing 24x7 real-time responses to customer inquiries with minimal human interaction.
Roles and Responsibilities :
- Speech recognition model development across multiple languages (an ASR sketch follows this list).
- Solve critical real-world scenarios: noisy-channel ASR performance, multi-speaker detection, etc.
- Implement and deliver PoCs/UATs on the Rezo platform.
- Responsible for product performance, robustness and reliability.
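For a flavour of the ASR work, a minimal sketch using torchaudio's bundled Wav2Vec2 pipeline with greedy CTC decoding. The audio file is a hypothetical input; production systems would use language-specific models and proper beam-search decoders.

```python
# Minimal ASR sketch: torchaudio's pretrained Wav2Vec2 + greedy CTC decode.
# "sample.wav" is a hypothetical input file.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()
labels = bundle.get_labels()  # CTC vocabulary; index 0 is the blank token "-"

waveform, sr = torchaudio.load("sample.wav")
if sr != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)

# Greedy decode: best token per frame, collapse repeats, drop blanks.
indices = torch.argmax(emissions[0], dim=-1).tolist()
tokens, prev = [], None
for i in indices:
    if i != prev and labels[i] != "-":
        tokens.append(labels[i])
    prev = i
print("".join(tokens).replace("|", " "))  # "|" is the word separator
```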
Requirements:
- 2+ years of experience, with a Bachelor's/Master's degree focused on CS, Machine Learning, or Signal Processing.
- Strong knowledge of various ML concepts/algorithms and hands-on experience in relevant projects.
- Experience with machine learning platforms such as TensorFlow and PyTorch, and solid programming skills (Python, C, C++, etc.).
- Ability to learn new tools, languages and frameworks quickly.
- Familiarity with databases, data transformation techniques, and ability to work with unstructured data like OCR/ speech/text data.
- Previous experience with working in Conversational AI is a plus.
- A Git portfolio will be helpful.
Life at Rezo.AI
- We take transparency very seriously. Along with a full view of team goals, get a top-level view across the board with our regular town hall meetings.
- A highly inclusive work culture that promotes a relaxed, creative, and productive environment.
- Practice autonomy, open communication, and growth opportunities, while maintaining a perfect work-life balance.
- Go on company-sponsored offsites, and blow off steam with your work buddies.
Perks & Benefits
Learning is a way of life. Unlock your full potential backed with cutting-edge tools and mentorship.
Get best-in-class medical insurance, programs for taking care of your mental health, and a contemporary leave policy (beyond sick leave).
Why Us?
We are a fast-paced start-up with some of the best talent from diverse backgrounds, working together to solve customer service problems. We believe a diverse workforce is a powerful multiplier of innovation and growth, which is key to providing our clients with the best possible service and our employees with the best possible career. Diversity makes us smarter, more competitive, and more innovative.
Explore more here
www.rezo.ai
- Strong knowledge of statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, etc. (a brief scikit-learn sketch follows this list)
- Sound knowledge of querying databases and of using statistical computer languages: R, Python, SQL, etc.
- Strong understanding of creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
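A brief scikit-learn sketch of the random-forest technique named above, using synthetic data purely for illustration.

```python
# Random-forest sketch on synthetic data; purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```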
We have a requirement for a Collibra Developer.
Experience required: 5-12 years
Experience in data governance and data quality management
Ganit Inc. is the fastest growing Data Science & AI company in Chennai.
Founded in 2017 by three industry experts, alumni of IITs/SPJIMR, each with 17+ years of experience in the field of analytics.
We are in the business of maximising Decision Making Power (DMP) for companies by providing solutions at the intersection of hypothesis-based analytics, discovery-based AI, and IoT. Our solutions are a combination of customised services and a functional product suite.
We primarily operate as a US-based start-up with clients across the US, Asia-Pacific, and the Middle East, and have offices in the USA (New Jersey) and India (Chennai).
Having started with 3 people, the company is growing fast and now has 100+ employees.
1. What do we expect from you:
- Should possess a minimum of 2 years of experience in data analytics model development and deployment
- Skills relating to core statistics and mathematics
- A huge interest in handling numbers
- Ability to understand all domains in businesses across various sectors
- A natural passion for numbers, business, coding, and visualisation
2. Necessary skill set:
- Proficient in R/Python, Advanced Excel, and SQL
- Should have worked on Retail/FMCG/CPG projects, solving analytical problems in Sales/Marketing/Supply Chain functions
- Very good understanding of algorithms, mathematical models, statistical techniques, and data mining, e.g. regression models, clustering/segmentation, time series forecasting, decision trees/random forests, etc. (a short forecasting sketch follows this list)
- Ability to choose the right model for the right data and translate it into code in R, Python, or VBA (proven capabilities)
- Should have handled large datasets, with a thorough understanding of SQL
- Ability to lead a team of data analysts
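As a concrete example of the time series forecasting skill above, a minimal Holt-Winters sketch with statsmodels; the synthetic monthly sales series is an illustrative assumption.

```python
# Holt-Winters forecasting sketch on a synthetic monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Four years of synthetic monthly sales with trend + seasonality.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
values = 100 + np.arange(48) * 2 + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
series = pd.Series(values, index=idx)

model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
print(model.forecast(6))  # forecast the next six months
```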
3. Good to have skill set:
- Microsoft PowerBI / Tableau / Qlik View / Spotfire
4. Job Responsibilities:
- Translate business requirements into technical requirements
- Data extraction, preparation and transformation
- Identify, develop, and implement statistical techniques and algorithms that address business challenges and add value to the organisation
- Create and implement data models
- Interact with clients to handle queries and drive delivery adoption
5. Screening Methodology
- Problem Solving round (Telephonic Conversation)
- Technical discussion round (Telephonic Conversation)
- Final fitment discussion (Video Round)