• Using statistical and machine learning techniques to analyse large-scale user data, including text data and chat logs
• Applying machine learning techniques for text mining and information extraction based on structured, semi-structured and unstructured data
• Contributing to services like chatbots, voice portals and dialogue systems
• Contributing your own ideas to improve existing processes, services and products
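As a toy illustration of the information-extraction duty above, a minimal sketch that pulls structured identifiers out of a chat log (the `ORD-…` pattern and the log text are purely hypothetical):

```python
import re

def extract_order_ids(chat_log):
    """Pull hypothetical order IDs (e.g. 'ORD-12345') out of free text."""
    return re.findall(r"\bORD-\d{5}\b", chat_log)

log = "user: my order ORD-10234 never arrived. agent: checking ORD-10234 now"
ids = extract_order_ids(log)
```

Real information extraction on chat data would go well beyond regular expressions (e.g. sequence labelling models), but the input/output shape is the same: unstructured text in, structured records out.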
● Knowledge of NLP frameworks like NLTK
● Strong understanding of linear algebra, optimization, probability and statistics
● Experience with the data science methodology, from exploratory data analysis and feature engineering to model selection, evaluation and deployment at scale
● Knowledge of various machine learning paradigms: supervised, unsupervised and reinforcement learning
● Familiarity with data pre-processing techniques and their impact on a model's accuracy, precision and recall
● Good understanding of deep learning, e.g. Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and Long Short-Term Memory networks (LSTM)
● Experience working on conversational/dialogue systems in NLP
● Knowledge and experience of at least one bot framework or API, such as Dialogflow, ChatterOn, Botpress or Azure Bot Service
● Practical knowledge of formal syntax, formal semantics, corpus analysis and dialogue management
● Understanding of morphology, lexicology, phonetics, pragmatics, and automated speech analysis and synthesis
● English communication skills, mainly written; verbal English will be highly valued
● Experience developing applications for messengers and mobile devices
● Experience with a Python web framework (Django)
● Experience with a machine learning framework: TensorFlow, Theano or PyTorch
● Good knowledge of distributed deep learning systems
● Experience deploying production-level deep learning systems
● Good knowledge of hyperparameter tuning in machine learning architectures
● Any live project involving deep learning architectures is good to have
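The accuracy, precision and recall mentioned above can be made concrete with a small pure-Python sketch (the labels are hypothetical; no external libraries assumed):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute accuracy, precision and recall for a binary task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of predicted positives, how many were right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of actual positives, how many were found
    return accuracy, precision, recall

# Hypothetical gold labels and model predictions
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
acc, prec, rec = classification_metrics(y_true, y_pred)
```

Pre-processing choices (tokenisation, stop-word removal, stemming) change `y_pred` and therefore move these three numbers in different directions, which is why the trade-off matters.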
Profile Brief / Responsibilities
• Keep up to date with the latest technology trends.
• Work closely with Project/Business/Research teams to identify the best model for a given problem.
• Research and build highly efficient, state-of-the-art models.
• Select features, then build and optimize models using machine learning techniques.
Requirements
• 2-5 years of relevant industrial experience in Machine Learning and Deep Learning, with strong working knowledge of Python, C, C++ and Linux.
• Excellent understanding of machine learning techniques and algorithms.
• Excellent understanding of text analytics concepts and methodologies: Named Entity Recognition, Text Classification, Event Detection, Sentiment Analysis, POS Tagging, Bag of Words.
• Hands-on experience with neural networks (CNN, RNN, DNN, BNN, LSTM, SSD, etc.), Support Vector Machines, Conditional Random Fields, etc.
• Experience with GPU/DSP/ISP/SoC architecture and system software.
• Python, TensorFlow/Caffe/CUDA/Keras.
• Ability to see the big picture, think innovatively and suggest out-of-the-box solutions.
• Ability to write high-performance, structured code.
• Exposure to recent developments in the deep learning domain.
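Of the text analytics methodologies listed above, Bag of Words is the simplest to show directly; a minimal sketch over a toy three-document corpus (no external libraries assumed):

```python
from collections import Counter

def bag_of_words(doc, vocabulary):
    """Represent a document as term counts over a fixed vocabulary."""
    counts = Counter(doc.lower().split())
    return [counts.get(term, 0) for term in vocabulary]

corpus = ["the service was great", "great food great staff", "the staff was rude"]
# Vocabulary: sorted unique tokens across the corpus
vocabulary = sorted({tok for doc in corpus for tok in doc.lower().split()})
vectors = [bag_of_words(doc, vocabulary) for doc in corpus]
```

These fixed-length count vectors are what classical text classifiers (e.g. an SVM for sentiment analysis) consume; word order is deliberately discarded.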
Machine Learning Data Engineer
Engineering | Gurgaon, Haryana, India

Job Description

Who are we?
BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, which enable large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London with offices in New York, Bangalore, and Gurgaon.

BlueOptima's technology is deployed with global enterprises driving value from their software development activities. For example, we work with seven of the world's top ten universal banks (by revenue) and three of the world's top ten telecommunications companies (by revenue, excl. China). Our technology is pushing the limits of complex analytics on large datasets, with more than 15 billion static source code metric observations of software engineers working in an enterprise software development environment. BlueOptima is an Equal Opportunities employer.

Whom are we looking for?
BlueOptima has a truly unique collection of vast datasets relating to the changes that software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, Machine Learning and Data Engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of Machine Learning is a growing internal initiative, and we have a large range of opportunities to expand the value that we deliver to our clients.

What does the role involve?
As a Data Engineer you will take problems and ideas from our onsite Data Scientists, analyse what is involved, and spec and build intelligent solutions using our data.
You will take responsibility for the end-to-end process. Further to this, you are encouraged to identify new ideas, metrics and opportunities within our dataset, and to identify and report when an idea or approach isn't being successful and should be stopped. You will use tools ranging from advanced Machine Learning algorithms to statistical approaches, and will be able to select the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.

Responsibilities include:
- Solve problems using Machine Learning and advanced statistical techniques based on business needs.
- Identify opportunities to add value and solve problems using Machine Learning across the business.
- Develop tools to help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insights they reveal to support decision-making.
- Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R, Python and/or similar statistical tools.
- Produce ad hoc or bespoke analysis and reports.
- Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value.
- Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates.
- Resolve issues, find improvements to existing Machine Learning solutions, and explain their impacts.

ESSENTIAL SKILLS / EXPERIENCE REQUIRED:
- Minimum Bachelor's degree in Computer Science, Statistics, Mathematics or equivalent.
- Minimum of 3 years' experience developing solutions using Machine Learning algorithms.
- Strong analytical skills demonstrated through data engineering or similar experience.
- Strong fundamentals in statistical analysis using R or a similar programming language.
- Experience applying Machine Learning algorithms and techniques to solve problems on structured and unstructured data.
- An in-depth understanding of a wide range of Machine Learning techniques, and an understanding of which algorithms are suited to which problems.
- A drive not only to identify a solution to a technical problem but to see it all the way through to inclusion in a product.
- Strong written and verbal communication skills.
- Strong interpersonal and time management skills.

DESIRABLE SKILLS / EXPERIENCE:
- Experience automating basic tasks to maximise time for more important problems.
- Experience with PostgreSQL or a similar relational database.
- Experience with MongoDB or a similar NoSQL database.
- Experience with data visualisation tools (Tableau, QlikView, SAS BI or similar) is preferable.
- Experience using task tracking systems (e.g. Jira) and distributed version control systems (e.g. Git).
- Comfort explaining very technical concepts to non-experts.
- Experience of project management and designing processes to deliver successful outcomes.

Why work for us?
- Work with a unique and truly vast collection of datasets
- Above-market remuneration
- Stimulating challenges that fully utilise your skills
- Work on real-world technical problems to which solutions cannot simply be found on the internet
- Working alongside other passionate, talented engineers
- Hardware of your choice
- Our fast-growing company offers the potential for rapid career progression
Job Requirement
• Installation, configuration and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
• Capable of processing large sets of structured, semi-structured and unstructured data
• Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design and review
• Familiar with data architecture: data ingestion pipeline design, Hadoop information architecture, data modelling and data mining, machine learning and advanced data processing
• Optional: visual communicator, able to convert and present data in easily comprehensible visualisations using tools like D3.js or Tableau
• Enjoy being challenged and solving complex problems on a daily basis
• Proficient in executing efficient and robust ETL workflows
• Able to work in teams and collaborate with others to clarify requirements
• Able to tune Hadoop solutions to improve performance and end-user experience
• Strong coordination and project management skills to handle complex projects
• Engineering background
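The source-to-target mapping and ETL steps listed above can be sketched in plain Python (the field names are hypothetical; a production pipeline would use Spark or a similar engine rather than in-memory lists):

```python
def extract(rows):
    """Extract: yield raw source records (here, an in-memory list)."""
    yield from rows

def transform(record):
    """Transform: apply a simple source-to-target field mapping."""
    return {
        "user_id": int(record["id"]),          # rename + cast to int
        "country": record["country"].upper(),  # normalise case
    }

def load(records, target):
    """Load: append transformed records to the target store."""
    target.extend(records)

source = [{"id": "1", "country": "in"}, {"id": "2", "country": "us"}]
warehouse = []
load((transform(r) for r in extract(source)), warehouse)
```

The three-stage shape is the same at any scale; robustness work (schema validation, bad-record handling, idempotent loads) lives inside each stage.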
- Strong grasp of Python and a basic understanding of matrix algebra
- Understanding of modern deep learning techniques such as CNN, Attention and LSTM
- Experience with TensorFlow and Keras
- Experience with Computer Vision and domain-specific tools like OpenCV
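The CNN building block mentioned above is, at its core, a 2D convolution; a minimal pure-Python sketch ('valid' padding, stride 1, no deep learning framework assumed) shows how the matrix algebra and the vision use case connect:

```python
def conv2d(image, kernel):
    """2D convolution (cross-correlation), 'valid' padding, stride 1."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Dot product of the kernel with the image patch at (i, j)
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical edge detector on a toy 4x4 image (dark left half, bright right half)
image = [[0, 0, 1, 1]] * 4
kernel = [[1, -1]] * 2   # responds where intensity changes left-to-right
edges = conv2d(image, kernel)
```

In a CNN these kernels are not hand-written but learned; frameworks like TensorFlow/Keras provide this operation as a layer (e.g. `Conv2D`) with batching, channels and GPU support.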