We are looking for applicants with a strong background in Analytics and Data Mining (Web, Social and Big Data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inference, Information Retrieval, Large-Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, Human-Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply. We are looking for someone who can create and implement AI solutions. If you have built a product like IBM Watson in the past, rather than merely used Watson to build applications, this could be the perfect role for you. All successful candidates are expected to dive deep into problem areas of Zycus' interest and invent technology solutions that not only advance the current products but also generate new product options that can strategically advantage the organization.
Skills:
Experience in predictive modelling and predictive software development
Skilled in Java, C++, Perl/Python (or a similar scripting language)
Experience using R, MATLAB, or other statistical software
Experience mentoring junior team members and guiding them on machine learning and data modelling applications
Strong communication and data presentation skills
Classification (SVM, decision tree, random forest, neural network)
Regression (linear, polynomial, logistic, etc.)
Classical optimization (gradient descent, Newton-Raphson, etc.)
Graph theory (network analytics)
Heuristic optimization (genetic algorithms, swarm theory)
Deep learning (LSTM, convolutional NN, recurrent NN)
Must Have:
Experience: 3-9 years
Proven expertise in Artificial Intelligence (including deep learning algorithms), Machine Learning and/or NLP
Expertise in programming traditional machine learning algorithms, and in algorithm design and usage
Preferred: experience with large data sets and distributed computing in the Hadoop ecosystem
Fluency with databases
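As an aside on the classical optimization item above: the gradient descent the posting names can be sketched in a few lines of plain Python. This is a minimal illustration on a made-up one-dimensional objective, not anything from the role itself.

```python
# Minimal gradient descent on a toy objective f(w) = (w - 3)^2.
# The target minimum (w = 3), learning rate and step count are all
# arbitrary illustrative choices.

def gradient_descent(grad, w0, lr=0.1, steps=200):
    """Repeatedly step against the gradient, starting from w0."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# f(w) = (w - 3)^2  =>  f'(w) = 2 * (w - 3)
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_star, 4))  # converges toward 3.0
```

With this learning rate the error shrinks by a constant factor each step, which is why a few hundred iterations suffice on a quadratic.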
Our current operations research product is deployed at some of the largest organizations in the world. This role will be responsible for re-architecting the existing solution and adding components of machine learning and intelligence to its logic. We are looking for a passionate individual who loves technology and is willing to design and create a flexible, long-lasting product architecture.
Job Description
The ideal candidate will have experience in the latest techniques in Artificial Intelligence, Natural Language Processing, and Machine Learning (including Deep Learning approaches). Experience in real-time collaboration is a plus. We're looking for someone with specialized knowledge and general skills: someone who can architect new features one day and produce optimized code the next. The ideal candidate has a strong mix of education and practical experience.
Responsibilities
- Deliver a commercially deployable platform that provides an intuitive collaboration solution targeted at enterprise customers
- Use NLP and ML techniques to bring order to unstructured data
- Experience in extracting signal from noise in large unstructured datasets is a plus
- Work within the Engineering Team to design, code, train, test, deploy and iterate on enterprise-scale machine learning systems
- Work alongside an excellent, cross-functional team across Engineering, Product and Design
Required Skills
- Experience in applying machine learning techniques, Natural Language Processing or Computer Vision
- Strong analytical and problem-solving skills
- Solid software engineering skills across multiple languages, including but not limited to Java or Python, C/C++
- Problem solver: able to work independently, and comfortable with deadlines and milestones
- Deep understanding of ML techniques such as classification, clustering, deep learning, optimization methods, and supervised and unsupervised techniques
- Strong communication skills and an easygoing attitude
- Proven ability to apply, debug, and develop machine learning models for real-world applications
- Previous industry work experience required
Preferred Skills
- Familiarity with Git, MongoDB, RabbitMQ, Spark, NLTK, TensorFlow
- Functional Programming
- Experience with Natural Language Processing (NLP)
- Experience building applications for the enterprise customer
- Previous early-stage company experience is a plus
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar DevOps and production deployment engineer who is well versed in Python, Docker and Amazon Web Services (AWS) to get our AI components deployed into production. AI/ML experience is a plus, but not necessary. The main task will be deploying models and algorithms developed by our AI team, and keeping the daily production pipeline running.
You will be responsible for:
Building Python scripts to deploy our AI components into the pipeline and production
Developing logic to ensure multiple different AI components work together seamlessly
Managing our daily pipeline on both on-premise servers and AWS
Working closely with the AI engineering, backend and frontend teams
You should have the following qualities:
Deep expertise in Python, including: multiprocessing/multithreaded applications; class-based inheritance and modules; DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
Deep expertise in Docker-based virtualization, including: creating and maintaining custom Docker images; automated building and deployment (CI/CD)
Experience maintaining cloud applications in AWS environments
Experience deploying machine learning algorithms into production (e.g. TensorFlow, Keras, OpenCV) is a plus
Experience running Nvidia GPU / CUDA-based tasks is a plus
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Skills: Required: Python, AWS, Docker, multiprocessing/multithreaded programming, pymongo, sqlalchemy.
Optional: AI, Machine Learning, TensorFlow, deploying Nvidia GPU / CUDA programs
Seniority: We are looking for a mid-level engineer.
Work Experience: 2 to 7 years
Salary: Will be commensurate with experience.
Who Should Apply: If you have the right experience, regardless of your seniority, please apply.
About Sizzle: Sizzle is building AI to automate gaming highlights, directly from Twitch and YouTube videos. Presently, a billion fans around the world watch gaming videos on Twitch and YouTube. Sizzle is creating a new highlights experience for these fans, so they can catch up on their favorite streamers and esports leagues. Sizzle is available at www.sizzle.gg.
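The multiprocessing expertise this posting asks for can be illustrated with a tiny stdlib sketch: fanning a CPU-bound per-item step of a pipeline out across worker processes. The `process_frame` workload here is hypothetical, standing in for real per-frame AI work.

```python
# Sketch of parallelizing a CPU-bound pipeline step with multiprocessing.
# `process_frame` is a placeholder workload (it just squares its input).
from multiprocessing import Pool

def process_frame(frame_id):
    """Stand-in for per-frame analysis work."""
    return frame_id * frame_id

if __name__ == "__main__":
    # Pool.map distributes items across worker processes and
    # returns results in input order.
    with Pool(processes=4) as pool:
        results = pool.map(process_frame, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The `if __name__ == "__main__":` guard matters: on platforms that spawn rather than fork, workers re-import the module, and unguarded pool creation would recurse.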
About UpGrad: UpGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. UpGrad currently offers programs in Data Analytics, Product Management, Digital Marketing, and Entrepreneurship, and was rated one of the top 10 most innovative companies in India for 2017 - https://www.fastcompany.com/most-innovative-companies/2017/sectors/india . We plan to launch 6 more programs in technology and management education. UpGrad is co-founded by 3 IITD alumni, and the 4th co-founder is serial entrepreneur Ronnie Screwvala. UpGrad has a committed capital of 100Cr and, in its first year of operations, built the largest revenue-generating online program in India (PG Diploma in Data Analytics) and the online program with the largest enrolment in India (Startup India learning program). UpGrad is looking for people passionate about management and education to help design learning programs for working professionals, helping them stay sharp and relevant while we build the careers of tomorrow.
Position: Senior Data Scientist
Position Type: Full Time
Location: Mumbai
Job Description: Are you excited by the challenge and the opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating, creating data products from scratch, and pushing them live into production systems? Do you want to work with a team of highly motivated members who are on a mission to empower individuals through education? If this is you, come join us and become a part of the UpGrad technology team.
At UpGrad, the technology team enables all facets of the business - from bringing efficiency to our marketing and sales initiatives, to enhancing our student learning experience, to empowering our content, delivery and student success teams, to aiding our students in achieving their desired career outcomes. We bring together data and tech to solve the business problems and opportunities at hand. We are looking for a highly skilled, experienced and passionate data scientist who can come on board and help create the next generation of data-powered education tech products. The ideal candidate has worked in a data science role before, is comfortable working with unknowns, can evaluate data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications. We want someone with a strong math, stats and data-science background, comfortable handling data (structured and unstructured), with strong engineering know-how to implement and support such data products in a production environment. Ours is a highly iterative and fast-paced environment, so flexibility, clear communication and attention to detail are very important too. The ideal candidate should be passionate about customer impact and comfortable working with multiple stakeholders across the company.
Basic Qualifications:
3+ years of experience in analytics, data science, machine learning or a comparable role
Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or a related discipline
Experience building and deploying machine learning models in production systems
Strong analytical skills: ability to make sense of a variety of data and its relation and applicability to the business problem or opportunity at hand
Strong programming skills: comfortable with Python - pandas, numpy, scipy, matplotlib; databases - SQL and NoSQL
Strong communication skills: able both to formulate and understand the business problem at hand, and to discuss it with stakeholders who lack a data-science background
Comfortable dealing with ambiguity and competing objectives
Preferred Qualifications:
Experience in Text Analytics, Natural Language Processing
Advanced degree in Data Science/Data Analytics or Math/Statistics
Comfortable with data-visualization tools and techniques
Knowledge of AWS and Data Warehousing
Passion for building data products for production systems - a strong desire to impact the product through data-science techniques
Description
About Zycus: Founded in 1998 and headquartered in Princeton, U.S., Zycus has grown into a leading global provider of a complete Source-to-Pay suite of procurement performance solutions. We develop cloud-based (SaaS) Source-to-Pay solutions for large global enterprises, and have successfully deployed about 200 solutions to over 1000 global clients. We are proud to have as our clients some of the best-of-breed companies across verticals like Manufacturing, Automotive, Banking and Finance, Oil and Gas, Food Processing, Electronics, Telecommunications, Chemicals, Health and Pharma, Education and more. With a team of 1000+ employees, we are present in India with 3 development centers at Bengaluru, Mumbai and Pune, and offices in the U.S., U.K., Australia, Dubai, Netherlands and Singapore. Know more about the LEADER of Gartner’s 2013, 2015 & 2017 Magic Quadrant for Strategic Sourcing Application Suites and The Forrester Wave™: eProcurement, Q2 2017.
We are in the process of launching Merlin A.I. Studio™. The artificial intelligence (AI)-based platform will allow procurement teams to build and deploy bots across the source-to-pay process. The bots will be used by firms leveraging more than 1,100 APIs from Zycus’ solution suite. “By deploying the intelligent bots from Merlin A.I. Studio™, procurement can put themselves in cruise control mode as the bots work towards accomplishing tasks with zero human intervention,” the Fortune 500-serving firm explained in its press release. “Be it running an RFI event, discovering contract risks, negotiating with suppliers or transactional procurement; all one needs to do is launch the bot and see the magic unfold.” “It will empower procurement to transform their routine, repetitive & mundane procurement tasks, so that time, effort & resources can be optimized towards more strategic initiatives.”
Exp: 1 to 10 Years
Role: Data Scientist
Location: Bangalore
Education: Any Engineering from IIT, NIT, IIIT, VIT, BITS Pilani
Please carry your original ID proof along with a hard copy of your resume.
Requirements
We are especially looking for applicants with a strong background in Analytics and Data Mining (Web, Social and Big Data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inference, Information Retrieval, Large-Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, Human-Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply. If you are passionate about research and developing innovative technologies of interest to Zycus and the research community at large, the BigData Experience Lab may be the right place for you. All successful candidates are expected to dive deep into problem areas of Zycus's interest and invent technology solutions that not only advance the current products but also generate new product options that can strategically advantage Zycus.
Skills
Master's or Ph.D. in statistics, mathematics, or computer science (Tier 1 colleges only)
Experience using statistical computer languages such as R, Python, SQL, etc.
Experience in statistical and data mining techniques, including generalized linear models/regression, random forests, boosting, trees, text mining, social network analysis
Experience working with and creating data architectures
Knowledge of machine learning techniques such as clustering, decision tree learning, and artificial neural networks
Knowledge of advanced statistical techniques and concepts, including regression, properties of distributions, and statistical tests
2-10 years of experience manipulating data sets and building statistical models
Experience using web services: Redshift, S3, Spark, DigitalOcean, etc.
Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
Experience visualizing and presenting data
The Data Scientist will report to the Director of Engineering - Data Science, and the roles and responsibilities are as below:
Work as the data strategist, identifying and integrating new datasets that can be leveraged through our product capabilities, and work closely with the engineering team to strategize and execute the development of data products
Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
Analyze data for trends and patterns, and interpret data with a clear objective in mind
Implement analytical models into production by collaborating with software developers and machine learning engineers
Communicate analytic solutions to stakeholders and implement improvements as needed to operational systems
Benefits: Along with a competitive compensation structure, Zycus believes in an open-culture learning environment, where everyone gets a chance to share their ideas and deliver excellence. Here's a sneak peek into life at Zycus.
• Model design, feature planning, system infrastructure, production setup and monitoring, and release management.
• Excellent understanding of machine learning techniques and algorithms, such as SVM, decision forests, k-NN, Naive Bayes, etc.
• Experience in selecting features, and in building and optimizing classifiers using machine learning techniques.
• Prior experience with data visualization tools, such as D3.js, ggplot, etc.
• Good knowledge of statistics, such as distributions, statistical testing, regression, etc.
• Adequate presentation and communication skills to explain results and methodologies to non-technical stakeholders.
• Basic understanding of the banking industry is a value add.
• Develop, process, cleanse and enhance data collection procedures from multiple data sources.
• Conduct and deliver experiments and proofs of concept to validate business ideas and potential value.
• Test, troubleshoot and enhance the developed models in distributed environments to improve their accuracy.
• Work closely with product teams to implement algorithms with Python and/or R.
• Design and implement scalable predictive models and classifiers leveraging machine learning and data regression.
• Facilitate integration with enterprise applications using APIs to enrich implementations.
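Of the classifier families listed above, k-NN is simple enough to sketch from scratch in plain Python. The toy 2-D points and labels below are invented for illustration; a real project would use a library implementation such as scikit-learn's.

```python
# Minimal k-nearest-neighbours classifier on toy 2-D data.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label). Return the majority label
    among the k training points nearest to `query`."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two well-separated toy clusters.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))  # a
print(knn_predict(train, (5.5, 5.5)))  # b
```

The choice of k trades off noise sensitivity (small k) against over-smoothing (large k); odd k avoids ties in two-class problems.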
About Role
● Solving application security problems using ML by building a proof of concept and productionising ML pipelines
● Conceptualising, designing and implementing ML platforms/pipelines for solving ML problems at scale, in a quick, iterative manner, at massive data scale
● Strong collaboration with data scientists and other stakeholders
Qualifications
● Master's degree in computer science from top universities (tier 1 only). Specialisation in ML is preferred. A PhD from top universities in the US, or from IISc in India, is a strong plus.
● Minimum 5 years of work experience in developing ML pipelines and products using Java and Python
● Strong understanding of ML concepts and use of common ML algorithms applied to discrete sequences and high-dimensional categorical data
● Strong software development skills, including familiarity with data structures & algorithms and software design practices
● Strong understanding of various ML frameworks
● Experience working on anomaly detection and security products is a strong plus
● Experience working in SaaS enterprise companies is a strong plus
● Demonstrated knowledge of applied machine learning via participation in Kaggle-like challenges is an added bonus
As a founding engineer, you can expect top compensation, good startup equity to create mid/long-term wealth, the opportunity to create a top-notch product/platform bottom-up, and the opportunity to work with some exceptional people in a strong workplace culture.
We are looking for a Node.js Developer who is proficient in writing APIs, working with data, using AWS, and applying algorithms, mainly machine-learning-based, to solve problems and create or modify features for our students. Your primary focus will be the development of all server-side logic, the definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the front-end. You will also be responsible for integrating the front-end elements built by your co-workers into the application; therefore, a basic understanding of front-end technologies is necessary as well.
Responsibilities
Integration of user-facing elements developed by front-end developers with server-side logic
Writing reusable, testable, and efficient code
Design and implementation of low-latency, high-availability, and performant applications
Implementation of security and data protection
Use of algorithms to drive data analytics and features
Ability to use AWS to solve scale issues
Apply only if you can attend a face-to-face interview in Bangalore.
Recruitment has been a weird problem. While companies complain they can't get good talent, there are hordes of talented professionals who are unable to easily find their next big opportunity. At CutShort, we are building an intelligent, tech-enabled platform that removes noise and connects these two sides seamlessly. More than 4000 companies have used our platform to hire 3x more people in 1/3rd the time, and professionals get a great experience that just works. As we take CutShort into its next growth phase, we want to make it more intelligent. A big initiative is to use our data to generate better results for our users. We should talk if:
1. You have at least 1 year of full-time experience in using ML (text mining, NLU, classification models, recommendation models) on real data to get real results.
2. Beyond the tools, you have a sound understanding of the underlying mathematical models.
3. You want to work in a fast-growing startup where you will need to work on everything from getting data, to cleaning it, to creating models, to integrating them, to continuously improving them. Will you be okay with that?
4. You want to see your work actually making an impact on our users' lives.
Interested? Let's talk!
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer who is well versed in AI and audio technologies around audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
Developing audio algorithms to detect key moments within popular online games, such as: the streamer speaking, shouting, etc.; gunfire, explosions, and other in-game audio events
Speech-to-text and sentiment analysis of the streamer’s narration
Leveraging baseline technologies such as TensorFlow and others, and building models on top of them
Building neural network architectures for audio analysis as it pertains to popular games
Specifying exact requirements for training data sets, and working with analysts to create the data sets
Training final models, including techniques such as transfer learning, data augmentation, etc., to optimize models for use in a production environment
Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
You should have the following qualities:
Solid understanding of AI frameworks and algorithms, especially pertaining to audio analysis, speech-to-text, sentiment analysis, and natural language processing
Experience using Python, TensorFlow and other AI tools
Demonstrated understanding of various algorithms for audio analysis, such as CNNs, LSTMs for natural language processing, and others
Nice to have: some familiarity with AI-based audio analysis, including sentiment analysis
Familiarity with AWS environments
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Skills: Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
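Before any neural model, audio event detection often starts from a primitive the posting's "detect key moments" hints at: frame the signal and flag windows whose RMS energy spikes. The synthetic samples, window size and threshold below are all illustrative inventions.

```python
# Toy energy-based event detection: flag windows whose RMS exceeds
# a threshold, mimicking shout/gunfire spikes in an audio stream.
import math

def rms(frame):
    """Root-mean-square amplitude of a list of samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def loud_windows(samples, window=4, threshold=0.5):
    """Return start indices of non-overlapping windows whose RMS
    exceeds the threshold."""
    hits = []
    for start in range(0, len(samples) - window + 1, window):
        if rms(samples[start:start + window]) > threshold:
            hits.append(start)
    return hits

quiet = [0.01, -0.02, 0.015, -0.01]
loud = [0.9, -0.8, 0.85, -0.95]
signal = quiet + loud + quiet
print(loud_windows(signal))  # [4]: only the loud middle window fires
```

Real systems replace raw RMS with spectral features (e.g. mel spectrograms) fed to the CNN/LSTM models the posting mentions, but the framing step survives intact.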
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer who is well versed in computer vision and AI technologies around image and video analysis.
You will be responsible for:
Developing computer vision algorithms to detect key moments within popular online games
Leveraging baseline technologies such as TensorFlow, OpenCV, and others, and building models on top of them
Building neural network (CNN) architectures for image and video analysis, as it pertains to popular games
Specifying exact requirements for training data sets, and working with analysts to create the data sets
Training final models, including techniques such as transfer learning, data augmentation, etc., to optimize models for use in a production environment
Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
You should have the following qualities:
Solid understanding of computer vision and AI frameworks and algorithms, especially pertaining to image and video analysis
Experience using Python, TensorFlow, OpenCV and other computer vision tools
Understanding of common computer vision object detection models in use today, e.g. Inception, R-CNN, YOLO, MobileNet SSD, etc.
Demonstrated understanding of various algorithms for image and video analysis, such as CNNs, LSTMs for motion and inter-frame analysis, and others
Familiarity with AWS environments
Excited about working in a fast-changing startup environment
Willingness to learn rapidly on the job, try different things, and deliver results
Ideally a gamer or someone interested in watching gaming content online
Skills: Machine Learning, Computer Vision, Image Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Seniority: We are open to junior or senior engineers; we're more interested in the proper skillsets.
Salary: Will be commensurate with experience.
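The CNN architectures this role centres on are built from one operation worth understanding from scratch: 2-D convolution (strictly, cross-correlation). The tiny integer "image" and gradient kernel below are made up for illustration.

```python
# From-scratch valid-mode 2-D cross-correlation over nested lists.

def conv2d(image, kernel):
    """Slide `kernel` over `image` (no padding, stride 1) and
    return the map of elementwise-product sums."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(acc)
        out.append(row)
    return out

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1]]  # horizontal gradient: responds at the 0 -> 1 edge
print(conv2d(image, kernel))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

A CNN learns its kernels instead of hand-picking them, but each layer still computes exactly this sliding dot product, just over many channels at once.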
About us
DataWeave provides retailers and brands with “Competitive Intelligence as a Service” that enables them to make key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How we work
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale!
What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not least, competitive salary packages and fast-paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale.
Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities. We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision, with a Master's degree (PhD preferred).
Key problem areas
- Preprocessing and feature extraction on noisy and unstructured data, both text and images.
- Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text- and image-based techniques.
Relevant set of skills
- A strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills in multiple programming languages, with experience building production-grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc.).
- Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, TensorFlow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter: someone who thrives in fast-paced environments with minimal ‘management’.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.
Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models, in an iterative manner, that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end-to-end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiative to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
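The document clustering and classification work described above ultimately rests on a similarity measure between texts. A minimal version, bag-of-words vectors compared by cosine similarity, fits in a few lines; the toy product-listing corpus is invented, and real systems would use TF-IDF weighting or learned embeddings.

```python
# Bag-of-words cosine similarity between short documents.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = ["red shoes for sale", "blue shoes on sale", "laptop with fast cpu"]
bags = [Counter(d.split()) for d in docs]

# The two shoe listings share words; the laptop listing shares none.
print(cosine(bags[0], bags[1]))  # 0.5
print(cosine(bags[0], bags[2]))  # 0.0
```

Plugging a measure like this into k-means or hierarchical clustering is the standard route from raw text to the document clusters the role calls for.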