- Partnering with internal business owners (product, marketing, editorial, etc.) to understand needs and develop custom analyses that optimize user engagement and retention.
- Building a good understanding of the underlying business and the workings of cross-functional teams for successful execution.
- Designing and developing analyses based on business requirements and challenges.
- Leveraging statistical analysis on consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, predictive modeling, etc.
- Providing statistical analysis on custom research projects, consulting on A/B testing and other statistical analyses as needed, and producing other reports and custom analyses as required.
- Identifying and using appropriate investigative and analytical technologies to interpret and verify results.
- Applying and learning a wide variety of tools and languages to achieve results.
- Using best practices to develop statistical and/or machine learning techniques to build models that address business needs.
- 2-4 years of relevant experience in data science.
- Preferred education: Bachelor's degree in a technical field or equivalent experience.
- Experience in advanced analytics, model building, statistical modeling, optimization, and machine learning algorithms.
- Machine learning algorithms: a clear understanding of, and hands-on experience with, coding, implementing, tuning, and error analysis for linear regression, logistic regression, SVMs, shallow neural networks, clustering, decision trees, random forests, XGBoost, recommender systems, ARIMA, and anomaly detection; plus feature selection, hyperparameter tuning, model selection, boosting, and ensemble methods.
- Strong programming skills in Python, data processing with SQL or equivalent, and the ability to experiment with newer open-source tools.
- Experience normalizing data to ensure it is homogeneous and consistently formatted, enabling sorting, querying, and analysis.
- Experience designing, developing, implementing, and maintaining databases and programs to manage data analysis efforts.
- Experience with big data and cloud computing, e.g. Spark and Hadoop (MapReduce, Pig, Hive).
- Experience in the risk and credit score domains preferred.
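The first algorithm family listed above can be illustrated end to end. Below is a minimal, dependency-free sketch of logistic regression trained by batch gradient descent; the toy dataset, learning rate, and epoch count are illustrative assumptions, not a production recipe:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    """Fit y ~ sigmoid(w*x + b) with batch gradient descent on log-loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)   # predicted probability
            grad_w += (p - y) * x    # dLoss/dw for this example
            grad_b += (p - y)        # dLoss/db for this example
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Toy, linearly separable data: label is 1 for positive x.
xs = [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0
```

In practice a library implementation (e.g. scikit-learn) with proper feature scaling and regularization would replace this, but the gradient update shown is the core of the method.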
Kaleidofin is a neobank for the informal sector, which provides solutions tailored to the customer’s goals and are intuitive to use. We are working towards creating fair and transparent financial solutions that can target millions of customers and enterprises in India that don’t have easy access to formal financial planning. Our name “kaleidofin” is inspired by the power of financial solutions to enable beautiful possibilities of a future life for each customer.
We believe that everyone deserves and requires access to financial solutions that are intuitive and easy to use, flexible and personalised to real goals that can make financial progress and financial freedom possible for everyone.
We believe financial solutions can provide customers powerful tools that solve their real life goals and challenges. For too long, the financial services industry has been a manufacturer producing products and fitting customers to their products. At kaleidofin, we want to flip this around, keep the customer at the centre and provide mass tailored solutions that are best suited to meet the customer’s own goals/challenges.
The demand for financial services is, therefore, a demand derived from an underlying goal and aspiration of an individual. There is an urgent need to make financial services and solutions intuitive for customers and embed them in their everyday life.
kaleidofin will leverage the full India stack, existing networks, analytics, structuring, and user-centred design to drive outcomes for customers; in the process, we will also help enrich each customer's digital asset.
Our approach is:
- To combine distinct financial products (credit, investment, insurance, savings) to form a solution that actually both resonates with and works for the customer.
- To build customer profiling, underwriting, solution design and machine learning suitability algorithms to solve this gigantic customer problem.
- To leverage networks such as agents, cooperatives, self-help groups, temp agencies, MFIs to deliver suitable solutions at enormous scale.
In a very short time span, global investors such as Oiko Credit, Flourish, Omidyar Network and Blume Ventures have supported Kaleidofin's well-thought-out business model with $8 million in seed and Series A funding.
The company won the Amazon AI Conclave award for Fintech, was one of only ten startups chosen for the Google LaunchPad Accelerator program in 2019, was recognized as India's Most Innovative Wealth, Asset and Investment Management Service/Product by the Internet & Mobile Association of India (IAMAI), and was selected to present at a United Nations General Assembly Special Task Force event.
With its focus on harnessing mobile technology to deliver a paperless experience, and on harnessing analytics to predict the right product for the right customer, Kaleidofin aims to become a leading FinTech player bringing financial solutions to everyone.
We are currently seeking a talented and highly motivated Data Analyst to lead the development of our discovery and support platform. The successful candidate will join a small, global team of data-focused associates that has successfully built and maintained a best-in-class traditional data warehouse (Kimball-modelled, built on SQL Server) and Qlik Sense BI dashboards. The successful candidate will lead the management of our master data set and the development of reports and analytics dashboards.
To do well in this role you need a very fine eye for detail, experience as a data analyst, and a deep understanding of popular data analysis tools and databases.
Specific responsibilities will be to:
- Managing master data, including creation, updates, and deletion.
- Managing users and user roles.
- Providing quality assurance of imported data, working with quality assurance analysts if necessary.
- Commissioning and decommissioning of data sets.
- Processing confidential data and information in line with applicable compliance requirements.
- Helping develop reports and analysis.
- Managing and designing the reporting environment, including data sources, security, and metadata.
- Supporting the data warehouse in identifying and revising reporting requirements.
- Supporting initiatives for data integrity and normalization.
- Assessing and testing new or upgraded software and assisting with strategic decisions on new systems.
- Generating reports from single or multiple systems.
- Troubleshooting the reporting database environment and reports.
- Evaluating changes and updates to source production systems.
- Training end-users on new reports and dashboards.
- Providing technical expertise in data storage structures, data mining, and data cleansing.
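Several of the responsibilities above, in particular generating reports that combine data from multiple systems, can be sketched with an in-memory SQLite database. The table and column names below are made-up illustrations, not a real schema:

```python
import sqlite3

# Two "systems": a CRM table and a billing table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE crm_customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE billing_invoices (customer_id INTEGER, amount REAL);
INSERT INTO crm_customers VALUES (1, 'Asha', 'South'), (2, 'Ravi', 'North');
INSERT INTO billing_invoices VALUES (1, 120.0), (1, 80.0), (2, 50.0);
""")

# A simple cross-system report: total billed per customer.
report = conn.execute("""
SELECT c.name, c.region, SUM(i.amount) AS total_billed
FROM crm_customers c
JOIN billing_invoices i ON i.customer_id = c.id
GROUP BY c.id
ORDER BY total_billed DESC
""").fetchall()
```

The same join-and-aggregate pattern carries over to a production warehouse; only the connection and schema change.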
- Master’s Degree (or equivalent experience) in computer science, data science or a scientific field that has relevance to healthcare in the United States.
- Work experience as a data analyst or in a related field for more than 5 years.
- Proficiency in statistics, data analysis, data visualization and research methods.
- Strong SQL and Excel skills with ability to learn other analytic tools.
- Experience with BI dashboard tools like Qlik Sense, Tableau, Power BI.
- Experience with AWS services like EC2, S3, Athena and QuickSight.
- Ability to work with stakeholders to assess potential risks.
- Ability to analyze existing tools and databases and provide software solution recommendations.
- Ability to translate business requirements into non-technical, lay terms.
- High-level experience in methodologies and processes for managing large-scale databases.
- Demonstrated experience in handling large data sets and relational databases.
- Understanding of addressing and metadata standards.
Type: Permanent, Full time
- A Bachelor's degree in computer science, computer engineering, another technical discipline, or equivalent work experience
- 4+ years of software development experience
- 4+ years of experience with programming languages and frameworks: Python, Spark, Scala, Hadoop, Hive
- Demonstrated experience with Agile or other rapid application development methods
- Demonstrated experience with object-oriented design and coding.
Please mail your resume to poornimak (at) intraedge (dot) com along with your notice period, how soon you can join, expected CTC, availability for interview, and location.
- Role: Machine Learning Lead
- Experience: 5+ Years
- Employee strength: 80+
- Remuneration: Most competitive in the market
• Advanced knowledge of Python.
• Object-oriented programming skills.
• Mathematical understanding of machine learning and deep learning algorithms.
• Thorough grasp of statistical terminology.
• Libraries: TensorFlow, Keras, PyTorch, statsmodels, scikit-learn, SciPy, NumPy, pandas, Matplotlib, Seaborn, Plotly.
• Algorithms: ensemble algorithms, artificial neural networks and deep learning, clustering algorithms, decision tree algorithms, dimensionality reduction algorithms, etc.
• MySQL, MongoDB, Elasticsearch, or other SQL/NoSQL database implementations.
If interested, kindly share your CV at tanya@tigihr.com
Who is IDfy?
IDfy is the Fintech ScaleUp of the Year 2021. We build technology products that identify people accurately. This helps businesses prevent fraud and engage with the genuine with the least amount of friction. If you have opened an account with HDFC Bank, ordered from Amazon or Zomato, transacted through Paytm or BharatPe, or played on Dream11 or MPL, you might have already experienced IDfy. Without even knowing it. Well…that's just how we roll.

Global credit rating giant TransUnion is an investor in IDfy. So are international venture capitalists like MegaDelta Capital, BEENEXT, and Dream Incubator. Blume Ventures is an early investor and continues to place its faith in us.

We have kept our 500 clients safe from fraud while helping the honest get the opportunities they deserve. Our 350-person-strong family works and plays out of our offices in suburban Mumbai. IDfy has run verifications on 100 million people. In the next 2 years, we want to touch a billion users. If you wish to be part of this journey filled with lots of action and learning, we welcome you to be part of the team!
What are we looking for?
As a senior software engineer in the Data Fabric POD, you would be responsible for producing and implementing functional software solutions. You will work with upper management to define software requirements and take the lead on operational and technical projects. You would be working with a data management and science platform which provides Data as a Service (DaaS) and Insight as a Service (IaaS) to internal employees and external stakeholders.
You are an eager-to-learn, technology-agnostic engineer who loves working with data and drawing insights from it. You have excellent organization and problem-solving skills and are looking to build the tools of the future. You have exceptional communication and leadership skills and the ability to make quick decisions.
YOE: 3 - 10 yrs
Position: Sr. Software Engineer/Module Lead/Technical Lead
- Breaking down work and orchestrating the development of components for each sprint.
- Identifying risks and forming contingency plans to mitigate them.
- Liaising with team members, management, and clients to ensure projects are completed to standard.
- Inventing new approaches to detecting existing fraud. You will also stay ahead of the game by predicting future fraud techniques and building solutions to prevent them.
- Developing zero-defect software that is secure, instrumented, and resilient.
- Creating design artifacts before implementation.
- Developing Test Cases before or in parallel with implementation.
- Ensuring software developed passes static code analysis, performance, and load test.
- Developing various kinds of components (such as UI components, APIs, business components, image processing, etc.) that define the IDfy platforms which drive cutting-edge fraud detection and analytics.
- Developing software using Agile Methodology and tools that support the same.
- Apache Beam, ClickHouse, Grafana, InfluxDB, Elixir, BigQuery, Logstash.
- An understanding of Product Development Methodologies.
- Strong understanding of relational databases especially SQL and hands-on experience with OLAP.
- Experience creating data ingestion and ETL pipelines (Apache Beam or Apache Airflow experience is a plus).
- Strong design skills in defining API Data Contracts / OOAD / Microservices / Data Models.
Good to have:
- Experience with TimeSeries DBs (we use InfluxDB) and Alerting / Anomaly Detection Frameworks.
- Visualization Layers: Metabase, PowerBI, Tableau.
- Experience in developing software in the Cloud such as GCP / AWS.
- A passion to explore new technologies and express yourself through technical blogs.
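The ingestion and ETL experience asked for above can be sketched without any framework; a real pipeline would typically express the same extract/transform/load stages as Apache Beam transforms or Airflow tasks. All records and field names below are invented for illustration:

```python
# A minimal extract-transform-load sketch (framework-free; Beam/Airflow
# would express the same stages as a pipeline or DAG).
import sqlite3

def extract():
    # Stand-in for reading from an upstream system or file.
    return [
        {"user": "a1", "event": "login", "ms": 130},
        {"user": "a1", "event": "login", "ms": 90},
        {"user": "b2", "event": "login", "ms": 210},
    ]

def transform(rows):
    # Normalize units (ms -> seconds) and keep only what the sink needs.
    return [(r["user"], r["event"], r["ms"] / 1000.0) for r in rows]

def load(rows, conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user TEXT, event TEXT, seconds REAL)"
    )
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
count, total = conn.execute("SELECT COUNT(*), SUM(seconds) FROM events").fetchone()
```

Keeping the three stages as separate functions makes each one independently testable, which is the property the pipeline frameworks formalize.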
Specialism: Advanced Analytics, Data Science, regression, forecasting, analytics, SQL, R, Python, decision trees, random forests, SAS, clustering, classification
Senior Analytics Consultant - Responsibilities
- Understand the business problem and requirements by building domain knowledge, and translate them into a data science problem
- Conceptualize and design a cutting-edge data science solution to that problem, applying design-thinking concepts
- Identify the right algorithms, tech stack, and sample outputs required to efficiently address the end need
- Prototype and experiment with the solution to successfully demonstrate its value
- Independently, or with support from the team, execute the conceptualized solution as per plan, following project management guidelines
- Present results to internal and client stakeholders in an easy-to-understand manner with great storytelling, storyboarding, insights, and visualization
- Help build overall data science capability for eClerx through support in pilots, pre-sales pitches, product development, and practice development initiatives
Skills: Informatica Big Data Management
1. Minimum 6 to 8 years of experience in Informatica BDM development
2. Experience working with Spark/SQL
3. Develops Informatica mappings/SQL
Experience: 3+ yrs
We are looking for a MySQL DBA who will be responsible for ensuring the performance, availability, and security of clusters of MySQL instances. You will also be responsible for database design and architecture, and for orchestrating upgrades, backups, and provisioning of database instances. You will work in tandem with the other teams, preparing documentation and specifications as required.
Database design and data architecture
Provision MySQL instances, both in clustered and non-clustered configurations
Ensure performance, security, and availability of databases
Prepare documentation and specifications
Handle common database procedures, such as upgrade, backup, recovery, migration, etc.
Profile server resource usage, optimize and tweak as necessary
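One of the routine procedures above, a logical backup, is commonly scripted. The sketch below only assembles standard mysqldump/mysql command lines using documented flags (--single-transaction, --routines); the host, user, database, and file names are placeholders, and a scheduler such as cron would actually run the commands:

```python
import shlex

def backup_cmd(host, user, database, out_file):
    """Build a logical-backup command line using standard mysqldump flags."""
    return (
        f"mysqldump --host={shlex.quote(host)} --user={shlex.quote(user)} "
        # --single-transaction gives a consistent snapshot for InnoDB tables
        # without locking; --routines includes stored procedures/functions.
        f"--single-transaction --routines {shlex.quote(database)} "
        f"> {shlex.quote(out_file)}"
    )

def restore_cmd(host, user, database, dump_file):
    """Build the matching restore command line (mysql reads the dump)."""
    return (
        f"mysql --host={shlex.quote(host)} --user={shlex.quote(user)} "
        f"{shlex.quote(database)} < {shlex.quote(dump_file)}"
    )

cmd = backup_cmd("db1.internal", "backup", "appdb", "/backups/appdb.sql")
```

shlex.quote keeps the generated commands safe if a database or file name ever contains shell metacharacters.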
Skills and Qualifications:
Proven expertise in database design and data architecture for large scale systems
Strong proficiency in MySQL database management
Decent experience with recent versions of MySQL
Understanding of MySQL's underlying storage engines, such as InnoDB and MyISAM
Experience with replication configuration in MySQL
Knowledge of de-facto standards and best practices in MySQL
Proficient in writing and optimizing SQL statements
Knowledge of MySQL features, such as its event scheduler
Ability to plan resource requirements from high level specifications
Familiarity with other SQL/NoSQL databases such as Cassandra, MongoDB, etc.
Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
Data Scientist - Applied AI
Who we are?
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our Clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What do we believe?
- Best practices are overrated
- Implementing best practices can only make one average.
- Honesty and Transparency
- We believe in naked truth. We do what we tell and tell what we do.
- Client Partnership
- Client - Vendor relationship: No. We partner with clients instead.
- And our sales team comprises 100% of our clients.
How do we work?
It's all about being Happier first. And the rest follows. Searce's work culture is defined by HAPPIER.
- Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
- Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
- Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or Die. We love to challenge the status quo.
- Experimental: We encourage curiosity & making mistakes.
- Responsible: Driven. Self motivated. Self governing teams. We own it.
So, what are we hunting for?
As a Data Scientist, you will help develop and enhance the algorithms and technology that power our unique system. This role covers a wide range of challenges, from developing new models to using pre-existing components to make current systems more intelligent. You should be able to train models on existing data and use them in the most creative manner to deliver the smartest experience to customers. You will develop multiple AI applications that push the threshold of intelligence in machines.
Working on multiple projects at a time, you will have to maintain a consistently high level of attention to detail while finding creative ways to provide analytical insights. You will also have to thrive in a fast, high-energy environment and should be able to balance multiple projects in real time. The thrill of the next big challenge should drive you, and when faced with an obstacle, you should be able to find clever solutions. You must have the ability and interest to work on a range of different types of projects and business processes, and must have a background that demonstrates this ability.
Your bucket of Undertakings :
- Collaborate with team members to develop new models to be used for classification problems
- Work on software profiling, performance tuning and analysis, and other general software engineering tasks
- Use independent judgment to take existing data and build new models from it
- Collaborate, provide technical guidance, and come up with new ideas; rapidly prototype and convert prototypes into scalable products
- Conduct experiments to assess the accuracy and recall of language processing modules and to study the effects of such experiments
- Lead AI R&D initiatives to include prototypes and minimum viable products
- Work closely with multiple teams on projects like Visual quality inspection, ML Ops, Conversational banking, Demand forecasting, Anomaly detection etc.
- Build reusable and scalable solutions for use across the customer base
- Prototype and demonstrate AI related products and solutions for customers
- Assist business development teams in the expansion and enhancement of a pipeline to support short- and long-range growth plans
- Identify new business opportunities and prioritize pursuits for AI
- Participate in long range strategic planning activities designed to meet the Company’s objectives and to increase its enterprise value and revenue goals
Education & Experience :
- BE/B.Tech/Master's in a quantitative field such as CS, EE, information sciences, statistics, mathematics, economics, operations research, or a related field, with a focus on applied and foundational machine learning, AI, NLP, and/or data-driven statistical analysis and modelling
- 3+ years of experience applying AI/ML/NLP/deep learning/data-driven statistical analysis and modelling solutions to multiple domains; financial engineering and financial processes a plus
- Strong, proven programming skills with machine learning, deep learning, and big data frameworks, including TensorFlow, Caffe, Spark, and Hadoop, with experience writing complex programs and implementing custom algorithms in these and other environments
- Experience beyond using open source tools as-is, and writing custom code on top of, or in addition to, existing open source frameworks
- Proven capability in demonstrating successful advanced technology solutions (either prototypes, POCs, well-cited research publications, and/or products) using ML/AI/NLP/data science in one or more domains
- Ability to research and implement novel machine learning and statistical approaches
- Experience in data management, data analytics middleware, platforms and infrastructure, cloud and fog computing is a plus
- Excellent communication skills (oral and written) to explain complex algorithms, solutions to stakeholders across multiple disciplines, and ability to work in a diverse team
- Extensive experience with Hadoop and Machine learning algorithms
- Exposure to Deep Learning, Neural Networks, or related fields and a strong interest and desire to pursue them
- Experience in Natural Language Processing, Computer Vision, Machine Learning or Machine Intelligence (Artificial Intelligence)
- Passion for solving NLP problems
- Experience with specialized tools and projects for working with natural language processing
- Knowledge of machine learning frameworks like TensorFlow and PyTorch
- Experience with version control systems like Git/GitHub
- A fast learner, able to work independently as well as in a team environment, with good written and verbal communication skills
Nactus is at the forefront of education reinvention, helping educators and the learner community at large through innovative solutions in the digital era. We are looking for an experienced AI specialist to join our revolution using deep learning and artificial intelligence. This is an excellent opportunity to take advantage of emerging trends and technologies to make a real-world difference.
Role and Responsibilities
- Manage and direct research and development (R&D) and processes to meet the needs of our AI strategy.
- Understand company and client challenges and how integrating AI capabilities can help create educational solutions.
- Analyse and explain AI and machine learning (ML) solutions while setting and maintaining high ethical standards.
- Knowledge of algorithms, object-oriented and functional design principles
- Demonstrated artificial intelligence, machine learning, mathematical and statistical modelling knowledge and skills.
- Well-developed programming skills, specifically in SAS or SQL and other packages with statistical and machine learning applications, e.g. R, Python
- Experience with machine learning fundamentals, parallel computing and distributed systems fundamentals, or data structure fundamentals
- Experience with C, C++, or Python programming
- Experience with debugging and building AI applications.
- Analyse conclusions for robustness and productivity.
- Develop a human-machine speech interface.
- Verify, evaluate, and demonstrate implemented work.
- Proven experience with ML, deep learning, TensorFlow, Python
- Understanding business objectives and developing models that help achieve them, along with metrics to track their progress
- Managing available resources such as hardware, data, and personnel so that deadlines are met
- Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
- Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
- Verifying data quality, and/or ensuring it via data cleaning
- Supervising the data acquisition process if more data is needed
- Defining validation strategies
- Defining the pre-processing or feature engineering to be done on a given dataset
- Defining data augmentation pipelines
- Training models and tuning their hyperparameters
- Analysing the errors of the model and designing strategies to overcome them
- Deploying models to production
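The middle of the workflow above (validation strategy, training, hyperparameter tuning, error analysis) can be compressed into a tiny, dependency-free sketch; the synthetic data, the closed-form 1-D ridge model, and the λ grid are all illustrative assumptions:

```python
# A compressed sketch of the workflow: holdout split, train, then tune a
# hyperparameter against the validation split. Data is synthetic.

def fit_ridge_1d(xs, ys, lam):
    """Closed-form 1-D ridge regression (no intercept): w = Σxy / (Σx² + λ)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(w, xs, ys):
    """Mean squared error of the linear predictor w*x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Synthetic data: y ≈ 3x with a small alternating perturbation baked in.
data = [(x, 3.0 * x + (0.1 if x % 2 else -0.1)) for x in range(1, 21)]
train, valid = data[:14], data[14:]          # validation strategy: holdout split
xs_t, ys_t = zip(*train)
xs_v, ys_v = zip(*valid)

# Hyperparameter tuning: pick the λ with the lowest validation error.
best_lam = min([0.0, 0.1, 1.0, 10.0, 100.0],
               key=lambda lam: mse(fit_ridge_1d(xs_t, ys_t, lam), xs_v, ys_v))
w = fit_ridge_1d(xs_t, ys_t, best_lam)
```

With real datasets the same structure holds; only the model family, the split policy (e.g. cross-validation), and the search grid grow more elaborate.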