Role & Responsibilities:
1) Understand the business objectives, formulate hypotheses and collect the relevant data using SQL/R/Python. Analyse bureau, customer and lending performance data on a periodic basis to generate insights. Present complex information and data in an uncomplicated, easy-to-understand way to drive action.
2) Independently build and refit robust models for achieving game-changing growth while managing risk.
3) Identify and implement new analytical/modelling techniques to improve model performance across the customer lifecycle (acquisitions, management, fraud, collections, etc.).
4) Help define the data infrastructure strategy for the Indian subsidiary.
a. Monitor data quality and quantity.
b. Define a strategy for acquisition, storage, retention, and retrieval of data elements, e.g. identify new data types and collaborate with technology teams to capture them.
c. Build a culture of strong automation and monitoring.
d. Stay connected to Analytics industry trends (data, techniques, technology, etc.) and leverage them to continuously evolve data science standards at Credit Saison.
Required Skills & Qualifications:
1) 3+ years working in data science domains with experience in building risk models. Fintech/financial analysis experience is required.
2) Expert-level proficiency in analytical tools and languages such as SQL, Python, R/SAS, VBA, etc.
3) Experience building models using common modelling techniques (logistic and linear regression, decision trees, etc.).
4) Strong familiarity with Tableau/Power BI/Qlik Sense or other data visualization tools.
5) Tier 1 college graduate (IIT/IIM/NIT/BITS preferred).
6) Demonstrated autonomy, thought leadership, and learning agility.
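The modelling techniques this role names can be illustrated with a minimal sketch: a logistic-regression default-risk model trained by plain gradient descent. The feature names and toy data below are invented for illustration; a real risk model would use actual bureau data and typically a library such as scikit-learn.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights and an intercept with plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j, xj in enumerate(xi):
                w[j] -= lr * err * xj
            b -= lr * err
    return w, b

# Toy data: [credit_utilisation, past_delinquencies]; label 1 = default.
X = [[0.2, 0], [0.9, 3], [0.4, 1], [0.95, 4], [0.1, 0], [0.8, 2]]
y = [0, 1, 0, 1, 0, 1]
w, b = train_logistic(X, y)
# Probability of default for a new, high-utilisation applicant.
score = sigmoid(sum(wj * xj for wj, xj in zip(w, [0.85, 3])) + b)
```

The model emits a probability, which a lender would then threshold or map onto a score band.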
About the Role
If you are interested in building large-scale data pipelines that impact how Uber makes decisions about the Rider lifecycle and experience, join the Rider Data Platform team. Uber collects petabyte-scale analytics data from its different ride-booking apps. Help us build the software systems and data models that enable data scientists to reason about user behavior and build models for consumption by different rider-facing program teams.
What You'll Do
- Identify unified data models in collaboration with Data Science teams
- Streamline data processing of the original event sources and consolidate them into source-of-truth event logs
- Build and maintain real-time/batch data pipelines that can consolidate and clean up usage analytics
- Build systems that monitor data losses from the mobile sources
- Devise strategies to consolidate and compensate for data losses by correlating different sources
- Solve challenging data problems with cutting-edge design and algorithms
What You'll Need
- 4+ years' experience in a competitive engineering environment
- Design: Knowledge of data structures and an eye for design. You can discuss the trade-offs between design choices, both on a theoretical level and on an applied level.
- Strong coding/debugging abilities: You have advanced knowledge of at least one programming language, and are happy to learn more. Our core languages are Java, Python, and Scala.
- Big data: Experience with distributed systems such as Hadoop, Hive, Spark, and Kafka is preferred.
- Data pipeline: Strong understanding of SQL and databases. Experience in building data pipelines is a great plus. Love getting your hands dirty with the data, implementing custom ETLs to shape it into information.
- A team player: You believe that you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: You understand requirements beyond the written word.
Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.
About the Team
The Rider Data Platform team is a relatively new team tasked with shaping the future architecture of Uber's Rider Data Stack. We are a group of engineers passionate about helping Uber grow by focusing our energy on building the next-gen data platform that provides insights into global Rider data in the most optimal manner. This will be instrumental in identifying gaps in the current implementation as well as formulating the key strategies for the overall Rider experience.
Uber
At Uber, we ignite opportunity by setting the world in motion. We take on big problems to help drivers, riders, delivery partners, and eaters get moving in more than 600 cities around the world. We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.
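The "source-of-truth event log" idea from this posting can be sketched in miniature: merge rider events from several sources, deduplicate by event id, and keep a time-ordered log. This is a hedged illustration only (pure Python, invented field names), not Uber's actual pipeline, which runs on distributed systems like Spark and Kafka.

```python
from operator import itemgetter

def consolidate(*sources):
    """Merge event streams into one log, deduping by event_id.
    When the same event arrives from multiple sources, keep the
    earliest-timestamped copy, then return the log in time order."""
    seen = {}
    for source in sources:
        for event in source:
            eid = event["event_id"]
            if eid not in seen or event["ts"] < seen[eid]["ts"]:
                seen[eid] = event
    return sorted(seen.values(), key=itemgetter("ts"))

mobile = [{"event_id": "e1", "ts": 100, "type": "app_open"},
          {"event_id": "e2", "ts": 105, "type": "request_ride"}]
server = [{"event_id": "e2", "ts": 104, "type": "request_ride"},  # earlier copy of e2
          {"event_id": "e3", "ts": 110, "type": "trip_start"}]
log = consolidate(mobile, server)  # e1, e2 (ts=104), e3
```

Correlating the mobile and server copies of the same event is also how a real system would detect and compensate for mobile-side data loss.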
• Strong problem-solving skills with an emphasis on product development.
• Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
• Experience in building ML pipelines with Apache Spark and Python
• Proficiency in implementing the end-to-end data science lifecycle
• Experience in model fine-tuning and advanced grid search techniques
• Experience working with and creating data architectures.
• Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
• Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
• Excellent written and verbal communication skills for coordinating across teams.
• A drive to learn and master new technologies and techniques.
• Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
• Develop custom data models and algorithms to apply to data sets.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
• Develop the company A/B testing framework and test model quality.
• Coordinate with different functional teams to implement models and monitor outcomes.
• Develop processes and tools to monitor and analyze model performance and data accuracy.
Key skills:
● Strong knowledge of data science pipelines with Python
● Object-oriented programming
● A/B testing framework and model fine-tuning
● Proficiency in using the scikit-learn, NumPy and pandas packages in Python
Nice to have:
● Ability to work with containerised solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, test-driven development practice
● DevOps, continuous integration/continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
● Deep learning knowledge
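One core piece of the A/B testing framework this role asks for is deciding whether a lift is statistically significant. A minimal sketch, using a standard two-proportion z-test on conversion counts (the function name and numbers are ours for illustration):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control converts 200/2000 (10%), variant converts 260/2000 (13%).
z = two_proportion_z(200, 2000, 260, 2000)
significant = abs(z) > 1.96  # ~95% two-sided threshold
```

A production framework would add sequential-testing corrections and guardrail metrics on top of this primitive.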
You are expected to build deep learning models to process unstructured data such as images and scanned financial documents and convert them into meaningful tables with optical character recognition technologies. You are also expected to build machine learning and deep learning models for facial recognition and object comparison. The ideal candidate will be passionate about artificial intelligence and stay up to date with the latest developments in the field.
• Strong problem-solving skills with an emphasis on deep learning product development.
• Experience in computer vision, optical character recognition, facial recognition and deep learning
• Strong implementation knowledge of 2D and 3D array data structures
• Strong knowledge of image processing, CNN deep learning architectures, and image augmentation with TensorFlow and PyTorch
• Experience using machine learning computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
• Experience in building ML pipelines with Apache Spark and Python
• Proficiency in implementing the end-to-end data science lifecycle
• Experience in model fine-tuning and advanced grid search techniques
• Experience working with and creating data architectures.
• Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
• Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
• Excellent written and verbal communication skills for coordinating across teams.
• A drive to learn and master new technologies and techniques.
• Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
• Develop custom data models and algorithms to apply to data sets.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
• Develop the company A/B testing framework and test model quality.
• Coordinate with different functional teams to implement models and monitor outcomes.
• Develop processes and tools to monitor and analyze model performance and data accuracy.
Key skills:
● Strong knowledge of data science pipelines with Python and deep learning architectures
● OCR with Tesseract, OpenCV, scikit-learn, pandas, NumPy, PyTorch and TensorFlow
● Object-oriented programming
● A/B testing framework and model fine-tuning
Nice to have:
● Ability to work with containerised solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, test-driven development practice
● DevOps, continuous integration/continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
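The "2D array" image-processing skills this posting asks for can be shown with two tiny operations: binarising a grayscale image (a typical OCR pre-processing step) and a horizontal flip (a basic augmentation). This is a stdlib-only sketch with a made-up image; in practice OpenCV/NumPy would do this on real pixel data.

```python
def binarize(img, threshold=128):
    """Map a grayscale 2D list to 0/1 pixels, as OCR pre-processing often does."""
    return [[1 if px >= threshold else 0 for px in row] for row in img]

def hflip(img):
    """Horizontal flip: a simple image-augmentation transform."""
    return [row[::-1] for row in img]

img = [[10, 200, 130],
       [250, 90, 40]]
binary = binarize(img)   # [[0, 1, 1], [1, 0, 0]]
flipped = hflip(binary)  # [[1, 1, 0], [0, 0, 1]]
```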
Learngram is a Singapore-based, AI-driven EdTech startup with a vision to empower institutions to offer their educational content to learners through a tech platform that helps learners master the content. At Learngram, we are building an AI-powered e-learning platform. Our tech team is based out of Bengaluru. We are building a high-quality team of result-focused and innovative problem solvers.
Founders:
Vikas Goel - CEO
Raman Kishore - CTO
We are looking for an early team member to join our startup: someone who is passionate about being at the helm of building a company, who can build products from the ground up and is enthusiastic to be a very early member of a startup. This is a great opportunity to solve real-life problems using AI and transform the way education is delivered by helping students/learners in their learning journey.
An ideal candidate:
- is skilled in ML/deep learning, NLP, image processing, speech processing and AI deployment
- is passionate about solving real-life problems
- has a continuous learning attitude, high ownership, a startup mindset and optimism
Website: https://www.learngram.ai/
Description
About Zycus: Founded in 1998 and headquartered in Princeton, U.S., Zycus has grown every day to be established as a leading global provider of a complete Source-to-Pay suite of procurement performance solutions. We develop cloud-based (SaaS) Source-to-Pay solutions for large global enterprises, and have successfully deployed about 200 solutions to over 1000 global clients. We are proud to have as our clients some of the best-of-breed companies across verticals like Manufacturing, Automotive, Banking and Finance, Oil and Gas, Food Processing, Electronics, Telecommunications, Chemicals, Health and Pharma, Education and more. With a team of 1000+ employees, we are present in India with 3 development centers at Bengaluru, Mumbai & Pune, and offices in the U.S., U.K., Australia, Dubai, Netherlands and Singapore. Know more about the LEADER of Gartner's 2013, 2015 & 2017 Magic Quadrant for Strategic Sourcing Application Suites and The Forrester Wave™: eProcurement, Q2 2017.
We are in the process of launching Merlin A.I. Studio™. The artificial intelligence (AI)-based platform will allow procurement teams to build and deploy bots across the source-to-pay process. The bots will be used by firms leveraging more than 1,100 APIs from Zycus' solution suite. "By deploying the intelligent bots from Merlin A.I.
Studio™, procurement can put themselves in cruise-control mode as the bots work towards accomplishing tasks with zero human intervention," the Fortune 500-serving firm explained in its press release. "Be it running an RFI event, discovering contract risks, negotiating with suppliers or transactional procurement, all one needs to do is launch the bot and see the magic unfold." "It will empower procurement to transform their routine, repetitive & mundane procurement tasks, so that time, effort & resources can be optimized towards more strategic initiatives."
Exp: 1 to 10 years
Role: Data Scientist
Location: Bangalore
Education: Any engineering degree from IIT, NIT, IIIT, VIT, BITS Pilani
Please carry your original ID proof along with a hard copy of your resume.
Requirements
We are especially looking for applicants with a strong background in Analytics and Data Mining (web, social and big data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inference, Information Retrieval, Large-Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, Human-Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply. If you are passionate about research and developing innovative technologies of interest to Zycus and the research community at large, the BigData Experience Lab may be the right place for you. All successful candidates are expected to dive deep into problem areas of Zycus's interest and invent technology solutions to not only advance the current products, but also to generate new product options that can strategically advantage Zycus.
Skills
Master's or Ph.D. in statistics, mathematics, or computer science, from Tier 1 colleges only
Experience using statistical computer languages such as R, Python, SQL, etc.
Experience in statistical and data mining techniques, including generalized linear models/regression, random forests, boosting, trees, text mining, and social network analysis
Experience working with and creating data architectures
Knowledge of machine learning techniques such as clustering, decision tree learning, and artificial neural networks
Knowledge of advanced statistical techniques and concepts, including regression, properties of distributions, and statistical tests
2-10 years of experience manipulating data sets and building statistical models
Experience using web services: Redshift, S3, Spark, DigitalOcean, etc.
Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
Experience visualizing/presenting data.
The Data Scientist will report to the Director of Engineering - Data Science, and the roles & responsibilities are as below:
Work as the data strategist, identifying and integrating new datasets that can be leveraged through our product capabilities, and work closely with the engineering team to strategize and execute the development of data products
Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
Analyze data for trends and patterns, and interpret data with a clear objective in mind
Implement analytical models into production by collaborating with software developers and machine learning engineers.
Communicate analytic solutions to stakeholders and implement improvements as needed to operational systems
Benefits: Along with a competitive compensation structure, Zycus believes in an open-culture learning environment, where everyone gets a chance to share their ideas and deliver excellence. Here's a sneak peek into our life at Zycus.
• Model design, feature planning, system infrastructure, production setup and monitoring, and release management.
• Excellent understanding of machine learning techniques and algorithms, such as SVM, decision forests, k-NN, Naive Bayes, etc.
• Experience in selecting features, and building and optimizing classifiers using machine learning techniques.
• Prior experience with data visualization tools, such as D3.js, ggplot, etc.
• Good knowledge of statistics, such as distributions, statistical testing, regression, etc.
• Adequate presentation and communication skills to explain results and methodologies to non-technical stakeholders.
• A basic understanding of the banking industry is a value-add.
• Develop, process, cleanse and enhance data collection procedures from multiple data sources.
• Conduct and deliver experiments and proofs of concept to validate business ideas and potential value.
• Test, troubleshoot and enhance the developed models in distributed environments to improve their accuracy.
• Work closely with product teams to implement algorithms with Python and/or R.
• Design and implement scalable predictive models and classifiers leveraging machine learning and data regression.
• Facilitate integration with enterprise applications using APIs to enrich implementations
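Of the algorithms listed above, k-NN is compact enough to sketch in full. A hedged, pure-Python illustration with made-up data (a real classifier would use scikit-learn's `KNeighborsClassifier`):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs.
    Returns the majority label among the k nearest neighbours of query."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Invented 2-D feature vectors for two classes.
train = [((1.0, 1.0), "low_risk"), ((1.2, 0.8), "low_risk"),
         ((1.1, 1.3), "low_risk"), ((4.0, 4.2), "high_risk"),
         ((4.5, 3.9), "high_risk")]
label = knn_predict(train, (4.2, 4.0))  # "high_risk"
```

The choice of k trades off noise sensitivity (small k) against over-smoothing (large k), which is exactly the kind of classifier-tuning work the bullet points describe.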
We are changing the way enterprises manage and consume data and insights by making our data and insights platform smarter and better. As part of Cynepia Technologies, customer-facing Data Scientists are critical to making our customers successful. An ideal candidate should have strong fundamentals of applied data science in a business setting and should enjoy communicating and evangelizing data science solutions to business stakeholders. The candidate must possess an understanding of the following:
A) Product/Customer/People Skills:
1. Someone who has been working in a fast-paced startup environment as an individual contributor and is focused on understanding customer problems/requirements and solving them using Cynepia products/solutions.
2. Executing data engineering/science workflows for customers.
3. Conducting and managing data science projects with the customer's vision of success in mind.
4. Engaging and collaborating with various customer teams and managing the experience.
5. Great oral and written communication and a hunger to create and build are not negotiable.
6. Strong customer interaction, management and organizational skills.
B) Technical Skills:
1. Experience with dataset preparation using Python/R.
2. Experience building and optimizing data pipelines, architectures and data sets.
3. Hands-on experience building predictive models and preparing data for the same.
4. Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases. Understand and write code that adheres to and ensures serviceability, performance, reliability, availability and scalability of the architecture in a large enterprise.
5. Prior experience working with the adoption of enterprise data products/use cases in a fast-paced startup environment desired.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Skill Set: SQL, Python, NumPy, pandas. Knowledge of Hive and data warehousing concepts will be a plus.
JD:
- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.
- Work with management to prioritise business KPIs and information needs; locate and define new process improvement opportunities.
- Technical expertise with data models, database design and development, data mining and segmentation techniques
- Proven success in a collaborative, team-oriented environment
- Working experience with geospatial data will be a plus.
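The SQL-plus-Python pairing in this skill set can be sketched with the stdlib `sqlite3` module: load some rows, then aggregate a trend with a GROUP BY. The table and column names are invented for illustration; in a real stack this query would run against Hive or a warehouse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (city TEXT, month TEXT, revenue REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("Pune", "2023-01", 120.0), ("Pune", "2023-02", 150.0),
    ("Delhi", "2023-01", 300.0), ("Delhi", "2023-02", 280.0),
])
# Total revenue per city, the kind of KPI roll-up the JD describes.
rows = conn.execute(
    "SELECT city, SUM(revenue) FROM orders GROUP BY city ORDER BY city"
).fetchall()
# rows == [('Delhi', 580.0), ('Pune', 270.0)]
```

From here the result set would typically be handed to pandas for reporting or visualisation.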
About the Role
● Solving application security problems using ML by building proofs of concept and productionising ML pipelines
● Conceptualising, designing and implementing ML platforms/pipelines for solving ML problems at scale, in a quick iterative manner, at massive data scale
● Strong collaboration with Data Scientists and other stakeholders
Qualifications
● Master's degree in computer science from top universities (Tier 1 only). Specialisation in ML is preferred. A PhD from top universities in the US or IISc in India is a strong plus.
● Minimum 5 years of work experience in developing ML pipelines and products using Java and Python
● Strong understanding of ML concepts and use of common ML algorithms applied to discrete sequences and high-dimensional categorical data
● Strong software development skills, including familiarity with data structures & algorithms and software design practices
● Strong understanding of various ML frameworks
● Experience working on anomaly detection and security products will be a strong plus
● Experience working in SaaS enterprise companies will be a strong plus
● Demonstration of knowledge in applied machine learning via participation in Kaggle-like challenges is an added bonus
As a founding engineer, you can expect top compensation, good startup equity to create mid/long-term wealth, the opportunity to create a top-notch product/platform bottom-up, and the opportunity to work with some exceptional people in a strong workplace culture.
About us
DataWeave provides Retailers and Brands with "Competitive Intelligence as a Service" that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How we work
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web, at serious scale!
What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast-paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale.
Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities. We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision, with a Master's degree (PhD preferred).
Key problem areas
- Preprocessing and feature extraction from noisy and unstructured data -- both text and images.
- Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all of the above problems using multiple text- and image-based techniques.
Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills in multiple programming languages, with experience building production-grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc.).
- Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including NumPy, matplotlib, scikit-learn, Keras, PyTorch, and TensorFlow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter: someone who thrives in fast-paced environments with minimal 'management'.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.
Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models, in an iterative manner, that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end-to-end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
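The clustering work this role describes can be sketched with a tiny k-means on 2-D points. This is a rough, pure-Python illustration with naive initialisation and made-up points; production work would use scikit-learn on real feature vectors.

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: assign points to the nearest center,
    then move each center to its cluster's mean. Naive init: first k points."""
    centers = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [
            tuple(sum(vals) / len(c) for vals in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated blobs of invented points.
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(points, 2)  # centers near (0.1, 0.1) and (5.03, 5.0)
```

k-means assumes roughly spherical clusters and a known k; for messy web data one would also consider density-based methods.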
Write, test, debug and ship code, and gather feedback on scale, performance and security to incorporate back into the platform. Work with the founders to identify complex technical problems and solve them. Work with the product design and client experience development teams to support them with scalable services. Feed into the overall mission and vision of eParchi's platform over the coming months and years. An ability to perform well in a fast-paced environment. Excellent analytical and multitasking skills.
● Will have to dedicate full time until the college opens, after which the candidate needs to dedicate 6-7 hours on weekdays and full time on weekends.
At Entropik Technologies, we build systems that measure and analyze human emotions at an unprecedented scale, with accuracy, speed, and mission-critical availability. We work with some of the leading brands and agencies across the globe, who utilize our platform to improve overall customer experience and understand consumer behavior and subconscious responses. The Data Science team at Entropik is a high-profile team that is a center of innovation for the company and a major contributor to the company's core products. The types of challenges we solve have attracted people from industry and academia with diverse backgrounds. We're passionate about maintaining an open and collaborative environment, where team members bring their own unique style of thinking and tools to the table.
Responsibilities:
Work on challenging fundamental data science problems in affective computing
Propose and develop solutions independently and work with other data scientists
Drive the collection of new data and the refinement of existing data sources
Continuously focus on enhancing the current models, with the overall goal of improving accuracy across different emotion touchpoints
Prepare white papers, scientific publications and conference presentations
Work closely with product and engineering teams to identify and answer important product questions
Communicate findings to product managers and engineers
Analyze and interpret the results
Develop best practices for instrumentation and experimentation and communicate them to product engineering teams
Requirements:
Masters or Ph.D. in a relevant technical field (deep learning, machine learning, computer science, physics, mathematics, statistics, or a related field), or 4+ years' experience in a relevant role
Extensive experience solving analytical problems with quantitative approaches using machine learning methods
Should be experienced in computer vision and visual feature extraction.
Experienced with deep learning libraries like TensorFlow and PyTorch, and architectures like CNN and R-CNN
Track record of using advanced statistical methods, information retrieval, and data mining techniques
Comfort manipulating and analyzing complex, high-volume, high-dimensional data from varying sources
A strong passion for empirical research and for answering hard questions with data
A flexible analytic approach that allows for results at varying levels of precision
Fluency with at least one scripting language, such as Python
Experience with at least some of the following machine learning libraries: scikit-learn, H2O, SparkML, etc.
Experience with practical data science: source control workflows, deploying machine learning models in production, real-time machine learning.
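A common primitive behind the visual feature extraction work mentioned above is comparing embedding vectors by cosine similarity. A hedged sketch with made-up vectors (real embeddings would come from a CNN):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented embeddings: two views of the same object vs. a different object.
same = cosine([0.9, 0.1, 0.4], [0.85, 0.15, 0.38])  # close to 1.0
diff = cosine([0.9, 0.1, 0.4], [0.1, 0.9, 0.2])     # much lower
```

Thresholding this score is how matching or retrieval over visual features is typically decided.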