Hi, we are looking for a part-time trainer for Data Science, Hadoop, etc. at our new Marathahalli branch, Bangalore.

Experience:
1. Good domain knowledge of data science, with a minimum of 3-4 years of experience.
2. Good knowledge of Hadoop, Statistics, R programming, SQL, Python, Machine Learning, Deep Learning, NLP, TensorFlow, etc.
3. Able to take CLASSROOM or ONLINE daily/weekend classes at the Marathahalli branch, Bangalore.
4. Need to handle 2-3 batches per month (including online and classroom).

A chance to earn around 50,000 to 2,00,000 per month by working as a part-timer for us. Thanks.
Job Brief and Requirements
• We are looking for a Machine Learning / Natural Language Processing Engineer to help us improve our NLP products and create new NLP applications.
• Experience applying different NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional and deep learning techniques.
• NLP skills/tools: HMM, CRF, LDA, Word2Vec, Seq2Seq, spaCy, NLTK, Gensim, CoreNLP, NLU, NLG, etc.
• Ability to design and develop a practical analytical approach, keeping in mind data quality and availability, feasibility, scalability, and turnaround time.
• Create language models from text data. These language models draw heavily on recent statistical, deep learning, and rule-based research on building taggers, parsers, knowledge-graph-based dictionaries, etc.
• Understanding of data creation. Develop highly scalable classifiers and tools leveraging machine learning and rule-based models.
• Work closely with product teams to implement algorithms that power user- and developer-facing products.
• Perform user research and evaluate user feedback.
About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web, helping businesses develop data-driven strategies and make smarter decisions.

Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities. We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision.

Key problem areas
- Preprocessing and feature extraction from noisy and unstructured data -- both text and images.
- Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all of the above problems using multiple text- and image-based techniques.

Relevant set of skills
- A strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms, and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills in multiple programming languages, with experience building production-grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including NumPy, Matplotlib, scikit-learn, Keras, PyTorch, and TensorFlow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter -- someone who thrives in fast-paced environments with minimal ‘management’.
- It's a huge bonus if you have personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.

Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Run experiments. Quickly build throwaway prototypes to solve problems in the Retail domain.
- Build robust clustering and classification models, in an iterative manner, that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end-to-end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take the initiative to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
About Vedantu
---------------------------
If you have ever dreamed about being in the driver’s seat of a revolution, THIS is the place for you. Vedantu is an ed-tech startup in the live online tutoring space that recently raised Series B funding of $11M.

Job Description
We are looking for a Data Scientist who will support our product, sales, leadership, and marketing teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product, sales, and process optimization, and at using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data analysis methods, building and implementing models, and using/creating appropriate algorithms.

Desired Skills
1. Experience using statistical computing languages (R, Python, etc.) to manipulate data and draw insights from large data sets.
2. Ability to process, cleanse, and verify the integrity of data used for analysis.
3. Comfortable manipulating and analyzing complex, high-volume, high-dimensionality data from varying, heterogeneous sources.
4. Experience with messy real-world data -- handling missing, incomplete, and inaccurate data.
5. Understanding of a broad set of algorithms and applied math.
6. Strong problem-solving skills, a grounding in probability and statistics, knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage), and experience with applications.
7. Knowledge of data scraping is preferable.
8. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks) and their real-world advantages/drawbacks.
9. Experience with big data tools (Hadoop, Hive, MapReduce) is a plus.