BlueOptima’s SaaS technology objectively shows where Coding Effort is invested throughout the software development lifecycle.
Job Description
Do you want to join an innovative team of scientists who use machine learning, NLP, and statistical techniques to provide the best customer experience on Earth? Do you want to change the way people work with customer experience? Our team wants to lead the technical innovation in these spaces and set the bar for every other company. We love data, and we have lots of it. We're looking for a business intelligence engineer to own end-to-end business problems and metrics that have a direct impact on the bottom line of our business while improving customer experience. If you see how big data and cutting-edge technology can be used to improve customer experience, if you love to innovate, if you love to discover knowledge in big structured and unstructured data, and if you deliver results, then we want you on our team.

Major responsibilities
- Analyze and extract relevant information from large amounts of both structured and unstructured data to help automate and optimize key processes
- Design structured, multi-source data solutions to deliver the dashboards and reports that make data actionable
- Drive the collection of new data and the refinement of existing data sources to continually improve data quality
- Support data analysts and product managers by turning business requirements into functional specifications and then executing delivery
- Lead the technical lifecycle of data presentation, from data sourcing to transformation into user-facing metrics
- Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation, and model implementation

Basic Qualifications
- Bachelor's or Master's degree in Computer Science, Systems Analysis, or a related field
- 3+ years' experience in data modeling, ETL development, and data warehousing
- 3+ years' experience with BI/DW/ETL projects
- Strong background in data relationships, modeling, and mining
- Expert in SQL and deeply proficient in at least one of Python, Spark, Scala, or Julia
- Strong communication and data presentation skills
- Strong problem-solving ability

Preferred Qualifications
- Experience with large-scale data warehousing and analytics projects, including AWS technologies such as S3, EC2, Data Pipeline, and other big data technologies
- Distributed programming experience is highly recommended
- 2+ years of industry experience in predictive modeling and analysis
- Technically deep and business-savvy enough to interface with all levels and disciplines within the organization
Job Description: Software Engineer (Tact AI)

About The Opportunity
We’re looking for an experienced Machine Learning/Deep Learning/NLP engineer to make an immediate impact on Tact’s Artificial Intelligence (AI) Platform team. Tact is a well-funded, early-stage startup with a world-class product, team, and a growing customer base. We are funded by Accel, Redpoint, Upfront & Microsoft Ventures.

About Us
Led by former Salesforce and Siebel executive Chuck Ganapathi, Tact is on a mission to make enterprise software more human-friendly. Tact is the world’s first mobile sales productivity suite. It combines Salesforce and everyday sales tools into one app that works offline. Tact’s device-native, conversational platform is used by Fortune 500 companies to transform the daily sales experience in the field and maximize the value of their CRM investment.

What You’ll Be Doing
• Work with Tact’s Artificial Intelligence (AI) Platform team to design and deliver a world-class enterprise conversational platform.
• Work with small cross-functional teams to deliver product features on time and with high quality.
• Explore, create, and optimize intents, utterances, entities, and other signals to improve the interactive conversational experience.
• Participate in design and code reviews.

What You Should Have
• 3+ years of software development experience, preferably in Java.
• 2+ years of experience in Machine Learning/Deep Learning/Natural Language Processing (NLP).
• Familiarity with conversational flow development using voice and text-based NLP channels such as Amazon Alexa, Slack, and Cortana.
• BS or MS in Computer Science or a related field.
• Excellent oral and written communication skills and a desire to work with small teams in a dynamic startup culture.
• Ownership of features and projects, and the self-drive to deliver them on time.
Position Description
Demonstrates up-to-date expertise in software engineering and applies it to the development, execution, and improvement of action plans. Models compliance with company policies and procedures and supports the company's mission, values, and standards of ethics and integrity. Provides and supports the implementation of business solutions. Troubleshoots business and production issues and provides on-call support.

Minimum Qualifications
- BS/MS in Computer Science or a related field
- 5+ years' experience building web applications
- Solid understanding of computer science principles and excellent soft skills
- Understanding of major algorithms, such as searching and sorting
- Strong skills in writing clean code using languages like Java and J2EE technologies
- Understanding of how to engineer RESTful services and microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
- Deep knowledge of web technologies such as HTML5, CSS, and JSON
- Good understanding of continuous integration tools and frameworks like Jenkins
- Experience working in Agile environments, such as Scrum and Kanban
- Experience with performance tuning for very large-scale apps
- Experience writing scripts in Perl, Python, and shell
- Experience writing jobs using open-source cluster computing frameworks like Spark
- Database design experience: relational (MySQL, Oracle), search (Solr), and NoSQL (Cassandra, MongoDB, Hive)
- Aptitude for writing clean, succinct, and efficient code
- Attitude to thrive in a fun, fast-paced, startup-like environment
We're looking for a Senior NLP Engineer (2+ years' experience) for our company, Spotmentor Technologies. Our Technology team currently has 5 members; this role is for an early team member and carries significant ESOPs. We need someone who can lead the NLP function with both vision and hands-on work, and who is excited to use this area to develop B2B products for enterprise productivity.

RESPONSIBILITIES
• Collaborate with cross-functional team members to develop software libraries, tools, and methodologies as critical components of our computation platforms.
• Use independent judgment to take existing code, understand its function, and change/enhance it as needed.
• Work as a team leader rather than just a member.
• Add valuable input to existing algorithms by reading NLP research papers.

REQUIREMENTS
• Proficient in Python, with sound knowledge of the data science libraries, namely NumPy, pandas, and NLTK/spaCy.
• Prior experience building a fully functional NLP-based machine learning model with good results (NER, classification, or topic modeling).
• Expert data scientist with professional experience in text classification, feature engineering, embeddings, part-of-speech tagging, etc.
• Knowledge of writing database queries (SQL/NoSQL).
• Background in information retrieval systems is a big plus.
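For a sense of the text-classification experience this posting asks about, a minimal scikit-learn sketch (TF-IDF features plus a Naive Bayes classifier) might look like the following; the tiny corpus and its labels are invented purely for illustration, not Spotmentor data.

```python
# Illustrative text classification: TF-IDF features + Multinomial Naive Bayes.
# The documents and labels below are made up for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "invoice payment overdue",
    "please pay the attached invoice",
    "team lunch on friday",
    "company picnic next week",
]
labels = ["finance", "finance", "social", "social"]

# Pipeline: vectorize the raw text, then fit the classifier on the vectors.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Classify an unseen document; shared terms pull it toward "finance".
prediction = model.predict(["overdue invoice reminder"])[0]
```

In a real NER or topic-modeling setting the same pipeline idea applies, with the vectorizer and model swapped for task-appropriate components.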
Spotmentor is focussed on using intelligence-age tools and technologies like AI and text analytics to create HR technology products that go beyond compliance and ERPs, giving HR the power to become strategic, improve business results, and increase competitiveness. HR and People departments have long sought to become strategic partners to the business. We are focussed on taking this concept out of boardroom meetings and making it a reality, and you can be a part of this journey. At the end of it, you will be able to claim that there was an inflection point in history that changed how business was transacted, and that you made it happen. Our first product is a learning and skill development platform that helps organisations acquire critical capabilities by helping employees reach their best potential through learning opportunities. Spotmentor was started by 4 IIT Kharagpur alumni with experience in creating technology products and in management consulting.

We are looking for a Data Scientist who will help discover the information hidden in vast amounts of data, and help us make smarter decisions that benefit the employees of our customer organisations. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems using structured and unstructured data.

Technical Responsibilities:
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending existing data sets with third-party sources of information
- Processing, cleansing, and verifying the integrity of data used for analysis
- Building recommendation systems
- Automating document scoring using machine learning techniques

Salary: This is a founding team member role with a salary of 20 to 30 lakhs per year and a meaningful ESOP component.
Location: Gurgaon

We believe in making Spotmentor the best place for the pursuit of excellence, and diversity of opinion is an important tool for achieving that. Although as a startup our primary objective is growth, Spotmentor is focussed on creating a diverse and inclusive workplace where everyone can attain their best potential, and we welcome applications from women, minority, and specially abled candidates.
We are looking for a Machine Learning Engineer with 3+ years of experience, a background in statistics, and hands-on experience in the Python ecosystem using sound software engineering practices.

Skills & Knowledge:
- Formal knowledge of the fundamentals of probability and statistics, with the ability to apply basic statistical analysis methods such as hypothesis testing, t-tests, and ANOVA
- Hands-on knowledge of data formats, data extraction, loading, wrangling, transformation, pre-processing, and analysis
- Thorough understanding of data-modeling and machine-learning concepts
- Complete understanding of, and the ability to apply, implement, and adapt, standard implementations of machine learning algorithms
- Good understanding of, and the ability to apply and adapt, neural networks and deep learning, including common high-level architectures such as CNNs and RNNs
- Fundamentals of computer science and programming, especially data structures (such as multi-dimensional arrays, trees, and graphs) and algorithms (such as searching, sorting, and dynamic programming)
- Fundamentals of software engineering and system design, such as requirements analysis, REST APIs, database queries, system and library calls, and version control

Languages and Libraries:
- Hands-on experience with Python and Python libraries for data analysis and machine learning, especially scikit-learn, TensorFlow, pandas, NumPy, statsmodels, and SciPy
- Experience with R and its ecosystem is a plus
- Knowledge of other open-source machine learning and data modeling frameworks, such as Spark MLlib, H2O, etc., is a plus
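As a small illustration of the hypothesis-testing skills listed above, a two-sample t-test with SciPy could look like this; the sample measurements are made up for the example.

```python
# Illustrative two-sample (independent) t-test with SciPy.
# The two groups of measurements below are invented for this sketch.
from scipy import stats

group_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
group_b = [12.8, 13.1, 12.9, 13.3, 12.7, 13.0]

# Null hypothesis: the two groups have equal means.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# A small p-value is evidence against the null hypothesis.
significant = p_value < 0.05
```

ANOVA follows the same pattern with `stats.f_oneway` when comparing more than two groups.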
Job Description

Who are we?
BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, which enable large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London, with offices in New York, Bangalore, and Gurgaon. BlueOptima’s technology is deployed with global enterprises driving value from their software development activities. For example, we work with seven of the world’s top ten universal banks (by revenue) and three of the world’s top ten telecommunications companies (by revenue, excl. China). Our technology is pushing the limits of complex analytics on large datasets, with more than 15 billion static source code metric observations of software engineers working in enterprise software development environments. BlueOptima is an Equal Opportunities employer.

Whom are we looking for?
BlueOptima has a truly unique collection of vast datasets relating to the changes that software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, machine learning, and data engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of machine learning is a growing internal initiative, and we have a large range of opportunities to expand the value that we deliver to our clients.

What does the role involve?
As a Data Engineer you will take problems and ideas from our onsite Data Scientists, analyse what is involved, and spec and build intelligent solutions using our data. You will take responsibility for the end-to-end process.
Further to this, you are encouraged to identify new ideas, metrics, and opportunities within our dataset, and to recognise and report when an idea or approach isn't succeeding and should be stopped. You will use tools ranging from advanced machine learning algorithms to statistical approaches, and will be able to select the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.

Responsibilities include:
- Solve problems using machine learning and advanced statistical techniques based on business needs.
- Identify opportunities to add value and solve problems using machine learning across the business.
- Develop tools that help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insights they reveal to support decision-making.
- Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R, Python, and/or similar statistical tools.
- Produce ad hoc or bespoke analysis and reports.
- Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value.
- Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates.
- Resolve issues, find improvements to existing machine learning solutions, and explain their impact.

ESSENTIAL SKILLS / EXPERIENCE REQUIRED:
- Minimum Bachelor's degree in Computer Science, Statistics, Mathematics, or equivalent.
- Minimum of 3 years' experience developing solutions using machine learning algorithms.
- Strong analytical skills demonstrated through data engineering or similar experience.
- Strong fundamentals in statistical analysis using R or a similar programming language.
- Experience applying machine learning algorithms and techniques to solve problems on structured and unstructured data.
- An in-depth understanding of a wide range of machine learning techniques, and an understanding of which algorithms are suited to which problems.
- A drive not only to identify a solution to a technical problem but to see it all the way through to inclusion in a product.
- Strong written and verbal communication skills.
- Strong interpersonal and time management skills.

DESIRABLE SKILLS / EXPERIENCE:
- Experience automating basic tasks to maximise time for more important problems.
- Experience with PostgreSQL or a similar relational database.
- Experience with MongoDB or a similar NoSQL database.
- Experience with data visualisation (via Tableau, QlikView, SAS BI, or similar) is preferable.
- Experience using task tracking systems (e.g. Jira) and distributed version control systems (e.g. Git).
- Comfort explaining very technical concepts to non-experts.
- Experience of project management and designing processes to deliver successful outcomes.

Why work for us?
- Work with a truly unique and vast collection of datasets
- Above-market remuneration
- Stimulating challenges that fully utilise your skills
- Work on real-world technical problems whose solutions cannot simply be found on the internet
- Work alongside other passionate, talented engineers
- Hardware of your choice
- Our fast-growing company offers the potential for rapid career progression
About us:
GreyAtom is a Mumbai-based ed-tech company specializing in upskilling tech professionals by harnessing the power of data science. We are a turnkey solution for upgrading your skill set and career prospects. The Data Science team at GreyAtom is pursuing a mission of building systemic intelligence across the GreyAtom product and ecosystem. We are looking for a team player who understands software engineering and data science and has a good grasp of business.

Some of the problems we are currently focusing on:
- What is a learner's competency across various modules?
- How does a learner compare against other people in the ecosystem?
- Does my learning behaviour match that of people who got jobs in Data Science?
- Attrition alerts for students
- Personalization of the learning path for each student
- Factors that predict a learner's success or drop-out risk
- Mapping the skill and competency matrix for each learner

At GreyAtom, data scientists are embedded with the engineering and product team for the problem they are working on. This ensures that data science solutions are envisioned along with product delivery. We have a very flat structure within the Data Science team, which enables us to focus on excellence and create a deep sense of ownership. Also, being a young team, we are able to democratize the process of problem selection. Our techniques span classification, clustering, matrix factorization, graphical models, networks and graph algorithms, topic modeling, image processing, deep learning, and NLP, each exercised at a fairly large scale. If you want to challenge the state of the art and impact the wide-open landscape in India, the GreyAtom Data Science team is the place for you. We want a passionate data scientist with experience executing and evangelizing machine learning or AI technologies to solve business problems, resulting in an uncompromised user experience, cost savings, and business insights elicited from big data.
Your Impact towards CA
In this role, you'll help support the GreyAtom charter to build Dataware by:
- Communicating with scientists as well as engineers
- Bringing about significant innovation and solving complex problems in analytics-based projects
- Possibly having indirect reports and managing a small project team
- Mentoring, training, developing, and serving as a knowledge resource for less experienced software engineers and data professionals
- Working and collaborating with the Product team to build data science into Commit.Live
- Conceptualising, designing, and delivering high-quality solutions and insightful analysis
- Conducting research and prototyping innovations; gathering data and requirements; scoping solutions and architecture

Skills Required:
- Typically, 3 or more years of experience executing on projects as a lead, plus analytic computing experience
- Mathematical skills, including statistics fundamentals, statistical modelling, regression analysis, time series, decision trees, correlation, clustering, association rules, and k-nearest neighbours
- Analytical skills, including data analytics, data modelling, machine learning, text mining, and optimization
- Simulation skills (genetic algorithms, Monte Carlo simulations, linear programming, quadratic programming, etc.)
- Data-driven problem solving and data munging
- Papers published in journals in the ML/AI area are an added advantage
- Hands-on experience with Python
- Understanding and manipulation of unstructured data
- Experience with one or more cloud or DevOps services, such as AWS or Docker
- Good business acumen in any vertical, preferably ed-tech/learning analytics
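As a tiny illustration of the Monte Carlo simulation skills listed above, here is the classic estimate of pi by random sampling; the sample count and seed are arbitrary choices for the sketch.

```python
# Illustrative Monte Carlo simulation: estimate pi by sampling random
# points in the unit square and counting those inside the quarter circle.
import random

random.seed(0)  # fixed seed so the run is reproducible
n = 100_000     # number of random samples (arbitrary)

inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)

# The fraction inside the quarter circle approximates pi / 4.
pi_estimate = 4 * inside / n
```

More samples shrink the error roughly like 1/sqrt(n), which is the usual trade-off in Monte Carlo work.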
- Strong grasp of Python and a basic understanding of matrix algebra
- Understanding of modern deep learning techniques such as CNNs, attention, and LSTMs
- Experience with TensorFlow and Keras
- Experience with computer vision and domain-specific tools like OpenCV
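To connect the matrix-algebra requirement with the CNNs mentioned above, here is a plain-NumPy sketch of the 2-D "valid" convolution (strictly, cross-correlation) that a CNN layer computes; the image and kernel values are arbitrary examples.

```python
# Illustrative 2-D convolution ("valid" mode, no padding) in plain NumPy,
# the core operation inside a CNN layer. Values below are arbitrary.
import numpy as np

def conv2d_valid(image, kernel):
    """Slide the kernel over the image and sum elementwise products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])  # simple diagonal-difference filter
feature_map = conv2d_valid(image, kernel)  # shape (3, 3)
```

Frameworks like TensorFlow/Keras implement the same operation (batched, multi-channel, and heavily optimized) in layers such as `Conv2D`.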
- Selecting features, building and optimizing models using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company's data with third-party sources of information when needed
- Enhancing data collection procedures to include information relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
- Adopting new research methodologies, including deep learning (CNNs, LSTMs), on projects
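One common way to build the kind of automated anomaly detection system described above is scikit-learn's IsolationForest; this is a minimal sketch on synthetic data, not a production detector, and the contamination rate is an assumed parameter.

```python
# Illustrative anomaly detection with IsolationForest on synthetic data.
# Two obvious outliers are appended to a cloud of normally distributed points.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # typical points
outliers = np.array([[8.0, 8.0], [-9.0, 7.5]])          # clear anomalies
X = np.vstack([normal, outliers])

# contamination = expected fraction of anomalies (assumed here: ~1%).
detector = IsolationForest(contamination=0.01, random_state=42)
labels = detector.fit_predict(X)  # -1 = anomaly, 1 = normal
```

In a live system the fitted detector would score incoming data with `detector.predict`, and its flag rate would be tracked over time, matching the "constant tracking" responsibility above.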