Machine Learning Data Engineer
Engineering | Gurgaon, Haryana, India

Job Description

Who are we?
BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, which enable large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London with offices in New York, Bangalore, and Gurgaon. BlueOptima's technology is deployed with global enterprises to drive value from their software development activities. For example, we work with seven of the world's top ten Universal Banks (by revenue) and three of the world's top ten telecommunications companies (by revenue, excl. China). Our technology is pushing the limits of complex analytics on large datasets, with more than 15 billion static source code metric observations of software engineers working in enterprise software development environments. BlueOptima is an Equal Opportunities employer.

Whom are we looking for?
BlueOptima has a truly unique collection of vast datasets relating to the changes that software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, Machine Learning, and Data Engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of Machine Learning is a growing internal initiative, and we have a large range of opportunities to expand the value that we deliver to our clients.

What does the role involve?
As a Data Engineer you will take problems and ideas from our onsite Data Scientists, analyse what is involved, and spec and build intelligent solutions using our data.
You will take responsibility for the end-to-end process. Further, you are encouraged to identify new ideas, metrics, and opportunities within our dataset, and to recognise and report when an idea or approach isn't succeeding and should be stopped. You will use tools ranging from advanced Machine Learning algorithms to statistical approaches, and will be able to select the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.

Responsibilities include:
- Solve problems using Machine Learning and advanced statistical techniques based on business needs.
- Identify opportunities to add value and solve problems using Machine Learning across the business.
- Develop tools to help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insights they reveal to support decision-making.
- Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R, Python, and/or similar statistical tools.
- Produce ad hoc or bespoke analysis and reports.
- Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value.
- Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates.
- Resolve issues, find improvements to existing Machine Learning solutions, and explain their impacts.

ESSENTIAL SKILLS / EXPERIENCE REQUIRED:
- Minimum Bachelor's degree in Computer Science, Statistics, Mathematics, or equivalent.
- 3+ years' experience in developing solutions using Machine Learning algorithms.
- Strong analytical skills demonstrated through data engineering or similar experience.
- Strong fundamentals in statistical analysis using R or a similar programming language.
- Experience applying Machine Learning algorithms and techniques to solve problems on structured and unstructured data.
- An in-depth understanding of a wide range of Machine Learning techniques, and an understanding of which algorithms are suited to which problems.
- A drive not only to identify a solution to a technical problem but to see it all the way through to inclusion in a product.
- Strong written and verbal communication skills.
- Strong interpersonal and time management skills.

DESIRABLE SKILLS / EXPERIENCE:
- Experience automating basic tasks to maximise time for more important problems.
- Experience with PostgreSQL or a similar relational database.
- Experience with MongoDB or a similar NoSQL database.
- Experience with data visualisation (via Tableau, QlikView, SAS BI, or similar) is preferable.
- Experience using task tracking systems (e.g. Jira) and distributed version control systems (e.g. Git).
- Comfortable explaining very technical concepts to non-experts.
- Experience of project management and designing processes to deliver successful outcomes.

Why work for us?
- Work with a truly unique and vast collection of datasets
- Above-market remuneration
- Stimulating challenges that fully utilise your skills
- Work on real-world technical problems whose solutions cannot simply be found on the internet
- Work alongside other passionate, talented engineers
- Hardware of your choice
- Our fast-growing company offers the potential for rapid career progression
Job Requirements
- Installation, configuration, and administration of Big Data components (including Hadoop/Spark) for batch and real-time analytics and data hubs
- Capable of processing large sets of structured, semi-structured, and unstructured data
- Able to assess business rules, collaborate with stakeholders, and perform source-to-target data mapping, design, and review
- Familiar with data architecture: data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning, and advanced data processing
- Optional: a visual communicator able to convert and present data in easily comprehensible visualizations using tools like D3.js and Tableau
- Enjoys being challenged and solving complex problems on a daily basis
- Proficient in executing efficient and robust ETL workflows
- Able to work in teams and collaborate with others to clarify requirements
- Able to tune Hadoop solutions to improve performance and end-user experience
- Strong coordination and project management skills to handle complex projects
- Engineering background
- Strong grasp of Python and a basic understanding of matrix algebra
- Understanding of modern deep learning techniques such as CNNs, attention, and LSTMs
- Experience with TensorFlow and Keras
- Experience with computer vision and domain-specific tools like OpenCV
About the job:
- You will architect, code, and deploy ML models (from scratch) to predict credit risk.
- You will design, run, and analyze A/B and multivariate tests to test hypotheses aimed at optimizing user experience and portfolio risk.
- You will perform data exploration and build statistical models of user behavior to discover opportunities for decreasing user defaults. And you must truly be excited about this part.
- You'll use behavioral and social data to gain insights into how humans make financial choices.
- You will spend a lot of time building out predictive features from super-sparse data sources.
- You'll continually acquire new data sources to develop a rich dataset that characterizes risk.
- You will code, drink, breathe, and live Python, sklearn, and pandas. It's good to have experience in these but not a necessity, as long as you're super comfortable in a language of your choice.

About you:
- You have strong computer science fundamentals.
- You have a strong understanding of ML algorithms.
- Ideally, you have 2+ years of experience using ML in an industry environment.
- You know how to run tests and understand their results from a statistical perspective.
- You love freedom and hate being micromanaged. You own products end to end.
- You have a strong desire to learn and use the latest machine learning algorithms.
- It will be great if you have one of the following to share: a Kaggle or a GitHub profile.
- Degree in statistics/quant/engineering from Tier-1 institutes.
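Understanding A/B test results "from a statistical perspective" usually comes down to something like a two-proportion z-test on conversion counts. A minimal stdlib-only sketch (the function name is ours, and the 1.96 cutoff is the standard two-sided 5% threshold; this is an illustration, not the company's actual tooling):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    conv_a/n_a and conv_b/n_b are conversions and sample sizes for
    the two variants. Returns the z statistic; |z| > 1.96 indicates
    significance at the 5% level (two-sided).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

For example, 100/1000 vs 150/1000 conversions yields z ≈ 3.4, comfortably past the threshold, while identical rates yield z = 0.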
About the job:
- You will work with data scientists to architect, code, and deploy ML models.
- You will solve problems of storing and analyzing large-scale data in milliseconds.
- You will architect and develop data processing and warehouse systems.
- You will code, drink, breathe, and live Python, sklearn, and pandas. It's good to have experience in these but not a necessity, as long as you're super comfortable in a language of your choice.
- You will develop tools and products that give analysts ready access to the data.

About you:
- Strong CS fundamentals.
- You have strong experience working with production environments.
- You write code that is clean, readable, and tested.
- Instead of doing it a second time, you automate it.
- You have worked with some of the commonly used databases and computing frameworks (PostgreSQL, S3, Hadoop, Hive, Presto, Spark, etc.).
- It will be great if you have one of the following to share: a Kaggle or a GitHub profile.
- You are an expert in one or more programming languages (Python preferred). Experience with Python-based application development and data science libraries is also good to have.
- Ideally, you have 2+ years of experience in tech and/or data.
- Degree in CS/Maths from Tier-1 institutes.
JOB DESCRIPTION
We're looking for a Head of Machine Learning (3+ years' experience) for our company, Spotmentor Technologies. Right now our Technology team has 5 members; this is a head team member role and carries significant equity with it. We need someone who can lead the Machine Learning function with both vision and hands-on work, and who is excited to use this area to develop B2B products for enterprise productivity.

RESPONSIBILITIES
• Collaborate with cross-functional team members to develop software libraries, tools, and methodologies as critical components of our computation platforms.
• Take responsibility for software profiling, performance tuning and analysis, and other general software engineering tasks.
• Use independent judgment to take existing code, understand its function, and change/enhance it as needed.
• Work as a team leader rather than a member.

REQUIREMENTS
• Proficient in Python, with sound knowledge of machine learning libraries such as scikit-learn, NumPy, pandas, and NLTK.
• Experience with deep learning tools like TensorFlow, Keras, and PyTorch, and with integrating open-source learning platforms, is required.
• Prior experience building fully functional machine learning solutions for text analysis and multi-class classification, with promising results.
• Expert data scientist with proficiency in text classification, text analytics, regression, and other machine learning algorithms.
• Solid grasp of the mathematical principles behind machine learning algorithms.
• Proficient in using version control tools (Git, Mercurial, etc.).
• Prior experience using big data technologies like Hadoop and Spark.
• Semantic Web experience is a big plus.
• Should be from Tier 1 colleges (IITs/NITs and BITS).
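As a rough illustration of the multi-class text classification mentioned above, here is a minimal multinomial Naive Bayes classifier written from scratch in pure Python. The class and method names are our own invention, not Spotmentor's code; in practice one would reach for scikit-learn's `MultinomialNB`:

```python
import math
from collections import Counter

class NaiveBayesText:
    """Minimal multinomial Naive Bayes for multi-class text
    classification, with add-one (Laplace) smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, c in zip(docs, labels):
            tokens = doc.lower().split()
            self.word_counts[c].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        tokens = doc.lower().split()
        n_docs = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            # Log prior plus smoothed log likelihood of each token.
            lp = math.log(self.class_counts[c] / n_docs)
            for t in tokens:
                lp += math.log((self.word_counts[c][t] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Trained on a handful of labelled snippets, it picks the class whose word distribution best explains a new document.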
Requirements:
- Minimum 4 years' work experience in building, managing, and maintaining analytics applications
- B.Tech/BE in CS/IT from Tier 1/2 institutes
- Strong fundamentals of data structures and algorithms
- Good analytical and problem-solving skills
- Strong hands-on experience in Python
- In-depth knowledge of queueing systems (Kafka/ActiveMQ/RabbitMQ)
- Experience in building data pipelines and real-time analytics systems
- Experience in SQL (MySQL) and NoSQL (Mongo/Cassandra) databases is a plus
- Understanding of Service-Oriented Architecture
- A track record of delivering high-quality work with significant contributions
- Expert in Git, unit tests, technical documentation, and other development best practices
- Experience handling small teams
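The queueing-systems requirement boils down to the producer/consumer pattern that brokers like Kafka or RabbitMQ implement at scale. A toy stdlib-only sketch of that pattern (the function name and event values are ours, for illustration only):

```python
import queue
import threading
from collections import Counter

def run_pipeline(events):
    """Toy real-time analytics pipeline: a producer enqueues events
    while a consumer thread aggregates counts, mimicking a
    broker-backed (Kafka/RabbitMQ-style) pipeline in miniature."""
    q = queue.Queue()
    counts = Counter()

    def consumer():
        while True:
            item = q.get()
            if item is None:  # sentinel: shut down cleanly
                break
            counts[item] += 1
            q.task_done()

    t = threading.Thread(target=consumer)
    t.start()
    for e in events:  # producer side
        q.put(e)
    q.put(None)
    t.join()
    return dict(counts)
```

A real deployment replaces the in-process `queue.Queue` with a durable broker topic and runs producers and consumers in separate services.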
We are looking for a Machine Learning Developer who possesses a passion for machine learning technology and big data, and who will work on our next-generation universal IoT platform.

Responsibilities:
• Design and build machines that learn from, predict, and analyze data.
• Build and enhance tools to mine data at scale.
• Enable the integration of Machine Learning models in the Chariot IoT Platform.
• Ensure the scalability of Machine Learning analytics across millions of networked sensors.
• Work with other engineering teams to integrate our streaming, batch, or ad hoc analysis algorithms into Chariot IoT's suite of applications.
• Develop generalizable APIs so other engineers can use our work without needing to be machine learning experts.
Precily AI: automatic summarization, shortening a business document or book with our AI to create a summary of the major points of the original document. The AI can produce a coherent summary that takes into account variables such as length, writing style, and syntax. We are also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks to process data and provide solutions for industries such as Enterprise, Healthcare, and Legal.
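The simplest form of the summarization described here is frequency-based extractive summarization: score sentences by how often their words occur and keep the top scorers. A stdlib-only sketch (not Precily's actual model, which the text says uses NLP and neural networks; function name and scoring are our own):

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Naive extractive summary: score each sentence by the average
    corpus frequency of its words, keep top scorers in original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    return ' '.join(s for s in sentences if s in top)
```

Sentences packed with frequent terms win; abstractive systems instead generate new phrasing, which is where neural models come in.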
Are you passionate about discovering insights hidden in vast amounts of data? Does your curiosity lead you to play with data to decipher an observed pattern? Do you constantly think about how machine learning, statistical modelling, and optimization methodologies can help create a top-notch consumer experience? If yes, the Data Science team at Shuttl is a perfect fit for you.

Key Deliverables:
- Identifying and modeling the key drivers of consumer behavior using internal and external data.
- Developing prediction models to deliver key business outcomes; problems include demand forecasting, demand-driven supply planning, etc.
- Developing optimized transportation network designs for Shuttl operations to drive strategic growth.
- Developing algorithms to predict static and dynamic schedules of Shuttl arrival times.
- Presenting clear and actionable recommendations to Product and Engineering leadership.
- Relentless pursuit of business goals using the power of data.

Requirements:
- 4+ years of experience.
- Ability to crisply define a business problem and translate it into an analytical/empirical problem.
- Experience in building ML or statistical models in R or Python.
- Knowledge of the mathematical foundations underlying statistical inference, forecasting, and optimization.
- Prior experience with network optimization problems is preferred.
- Proficiency in Python, CPLEX, SQL, NoSQL, and Cassandra.
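The demand-forecasting deliverable can be illustrated with single exponential smoothing, one of the simplest baseline forecasters (a sketch under our own naming; Shuttl's actual models are unspecified, and `alpha=0.5` is just an example smoothing weight):

```python
def exp_smooth_forecast(series, alpha=0.5):
    """Single exponential smoothing: the one-step-ahead forecast is a
    running blend of the latest observation and the previous level.

    alpha in (0, 1] controls how fast the forecast reacts to new data.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level
```

A flat series forecasts its constant value, while after a jump the forecast lands partway between old and new levels, weighted by `alpha`.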
FarmGuide is a data-driven tech startup aiming to digitize existing periodic processes and bring information symmetry to the agriculture supply chain through transparent, dynamic, and interactive software solutions. We at FarmGuide (https://angel.co/farmguide) help the Government make relevant and efficient policy by ensuring a seamless flow of information between stakeholders.

Job Description:
We are looking for individuals who want to help us design cutting-edge scalable products to meet our rapidly growing business. We are building out the data science team and looking to hire across levels. The work includes:
- Solving complex problems in the agri-tech sector, which are long-standing open problems at the national level.
- Applying computer vision techniques to satellite imagery to deduce artefacts of interest.
- Applying various machine learning techniques to digitize the existing physical corpus of knowledge in the sector.

Key Responsibilities:
- Develop computer vision algorithms for production use on satellite and aerial imagery.
- Implement models and data pipelines to analyse terabytes of data.
- Deploy built models in a production environment.
- Develop tools to assess algorithm accuracy.
- Implement algorithms at scale in the commercial cloud.

Skills Required:
- B.Tech/M.Tech in CS or other related fields such as EE or MCA, from IIT/NIT/BITS, but not compulsory.
- Demonstrable interest in Machine Learning and Computer Vision, such as coursework, open-source contributions, etc.
- Experience with digital image processing techniques.
- Familiarity/experience with geospatial, planetary, or astronomical datasets is valuable.
- Experience writing algorithms to manipulate geospatial data.
- Hands-on knowledge of GDAL or open-source GIS tools is a plus.
- Familiarity with cloud systems (AWS/Google Cloud) and cloud infrastructure is a plus.
- Experience with high-performance or large-scale computing infrastructure might be helpful.
- Coding ability in R or Python.
- Self-directed team player who thrives in a continually changing environment.

What is on offer:
- High-impact role in a young startup with colleagues from IITs and other Tier 1 colleges.
- The chance to work on the cutting edge of ML (yes, we do train neural nets on GPUs).
- Lots of freedom in terms of the work you do and how you do it.
- Flexible timings.
- Best startup salary in the industry, with additional tax benefits.
1. The candidate will work on building RNN-based speech recognition systems.
2. The candidate will work on building an NLP-based solution suite for speech data analytics.
3. Ad hoc Python coding will be required on a regular basis to support the application and data science teams.
We are looking for freelance online Data Scientist trainers who can work with us part-time with the following skills:
- Experience using statistical computing languages (R, Python, etc.) to manipulate data and draw insights from large data sets.
- Strong programming skills and good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. A few examples are k-NN, Naive Bayes, support vector machines, decision forests/random forests, and principal component analysis.
- Experience using natural language processing techniques and deep learning with TensorFlow will be a plus.
- Minimum 5 years of relevant work experience.
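Of the example techniques listed, k-NN is the quickest to demonstrate from scratch, which makes it a common training exercise. A stdlib-only sketch (the function name and toy data are ours):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points (Euclidean distance). `train` is a list of (point, label)."""
    nearest = sorted(train, key=lambda pl: math.dist(pl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

Its main real-world drawback, which the posting asks trainers to articulate, is that prediction cost grows with the training set, since every query scans all stored points.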
Job Title: Machine Learning Engineer
1) Minimum 1 year's experience in Data Science, Natural Language Processing, and Machine Learning.
2) Good grasp of languages including, but not limited to, Java and C++.
3) Good grasp of Python.
4) Familiar with various ML libraries, including but not limited to SciPy, NumPy, and TensorFlow.
5) Has worked previously on NLP and/or ML/Deep Learning projects.
Experience/qualification requirements:
- 0 to 3 years of experience.
- MS/PhD/BTech in Computer Science or Applied Mathematics with a focus on Artificial Intelligence, Machine Learning, or chat applications.
- Has published research papers in the Artificial Intelligence domain, or contributed to research that was implemented commercially.
- Proficiency in at least one of Python, .NET, or C++.
- Strong data structures and algorithmic skills.
- Self-motivated and passionate about Artificial General Intelligence.
Note: We are looking for full-time candidates.
We're looking for a passionate technology leader to join our fast-growing social media startup. You should be experienced in managing and scaling an engineering team, familiar with integrating third-party APIs, and familiar with big data analytics. While much of your job will be architecturally focused, you must be comfortable getting your hands dirty and pushing out product releases when necessary.

Responsibilities will include, but are not limited to:
- Defining the software technology strategy, architecture, and roadmap
- Developing the backend software (hands-on)
- Leading, onboarding, and managing our development team
- Vetting and continuing to build our development team
- Evangelizing the product and its technology
- Innovation and optimisation R&D

Current stack:
- Frontend: native Java
- Backend: MongoDB & Python

We have to build iOS and Android apps, so proficiency in all of these would be extremely beneficial. We've created high demand for the fully integrated software we are building. If you believe you are the one to help us deliver and grow our product, please apply.
We are building a Startup Intelligence platform to help startups and investors with market research, competitive analysis, sales leads, etc.
Transporter is an AI-enabled location stack, built for the next generation of online commerce, that helps companies improve their commerce, engagement, or operations through their mobile apps.