Requirements:
• Model design, feature planning, system infrastructure, production setup and monitoring, and release management.
• Excellent understanding of machine learning techniques and algorithms, such as SVM, Decision Forests, k-NN, Naive Bayes, etc.
• Experience in selecting features and in building and optimizing classifiers using machine learning techniques.
• Prior experience with data visualization tools, such as D3.js, ggplot, etc.
• Good knowledge of statistics, such as distributions, statistical testing, regression, etc.
• Adequate presentation and communication skills to explain results and methodologies to non-technical stakeholders.
• Basic understanding of the banking industry is a value add.

Responsibilities:
• Develop, process, cleanse and enhance data collection procedures from multiple data sources.
• Conduct and deliver experiments and proofs of concept to validate business ideas and potential value.
• Test, troubleshoot and enhance the developed models in distributed environments to improve their accuracy.
• Work closely with product teams to implement algorithms with Python and/or R.
• Design and implement scalable predictive models and classifiers leveraging machine learning and regression techniques.
• Facilitate integration with enterprise applications using APIs to enrich implementations.
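The classifier families named above (SVM, decision forests, k-NN, Naive Bayes) can be compared in a few lines. The following is a minimal sketch, assuming scikit-learn; the dataset is synthetic and exists purely for illustration:

```python
# Compare the classifier families named in the requirements on a toy,
# synthetically generated dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "Decision Forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}
# Fit each model and record its held-out accuracy.
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

In practice, feature selection and hyperparameter tuning (the other skills listed) would sit around this loop.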
We are changing the way enterprises manage and consume data and insights by making our data and insights platform smarter and better. As part of Cynepia Technologies, Customer Facing Data Scientists are critical to making our customers successful. An ideal candidate has strong fundamentals of applied data science in a business setting and enjoys communicating and evangelizing data science solutions to business stakeholders. The candidate must possess an understanding of the following:

A) Product/Customer/People Skills:
1. Experience working in a fast-paced startup environment as an individual contributor, focused on understanding customer problems/requirements and solving them using Cynepia products/solutions.
2. Executing data engineering/science workflows for customers.
3. Conducting and managing data science projects with the customer's vision of success in mind.
4. Engaging and collaborating with various customer teams and managing the customer experience.
5. Great oral and written communication skills and a hunger to create and build are non-negotiable.
6. Strong customer interaction, management and organizational skills.

B) Technical Skills:
1. Experience with dataset preparation using Python/R.
2. Experience building and optimizing data pipelines, architectures and data sets.
3. Hands-on experience building predictive models and preparing data for them.
4. Advanced working SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of databases. Ability to understand and write code that ensures serviceability, performance, reliability, availability and scalability of the architecture in a large enterprise.
5. Prior experience with the adoption of enterprise data products/use cases in a fast-paced startup environment is desired.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Skill Set: SQL, Python, NumPy, Pandas. Knowledge of Hive and data warehousing concepts will be a plus.
JD:
- Strong analytical skills with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.
- Work with management to prioritise business KPIs and information needs; locate and define new process improvement opportunities.
- Technical expertise with data models, database design and development, data mining and segmentation techniques.
- Proven success in a collaborative, team-oriented environment.
- Working experience with geospatial data will be a plus.
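As a small illustration of the analysis described above, organising and interpreting a pattern in a data set with Pandas. The column names and numbers below are invented for the example:

```python
# Toy example: summarising a trend in a small data set with pandas.
# The "monthly_orders" data is invented purely for illustration.
import pandas as pd

monthly_orders = pd.DataFrame({
    "month":  ["Jan", "Jan", "Feb", "Feb", "Mar", "Mar"],
    "region": ["North", "South", "North", "South", "North", "South"],
    "orders": [120, 90, 150, 95, 180, 100],
})

# Pivot to a report: one row per month, one column per region.
report = monthly_orders.pivot_table(index="month", columns="region",
                                    values="orders", aggfunc="sum")
growth = report.loc["Mar"] - report.loc["Jan"]  # simple Jan-to-Mar change
print(report)
print(growth)
```

The same rollup could equally be expressed as a SQL `GROUP BY`, which is why the posting lists both skills together.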
About the Role
If you are interested in building large-scale data pipelines that shape how Uber makes decisions about the Rider lifecycle and experience, join the Rider Data Platform team. Uber collects petabyte-scale analytics data from its different ride-booking apps. Help us build the software systems and data models that will enable data scientists to reason about user behavior and build models for consumption by different rider-facing program teams.

What You'll Do
- Identify unified data models, collaborating with Data Science teams
- Streamline data processing of the original event sources and consolidate them into source-of-truth event logs
- Build and maintain real-time/batch data pipelines that can consolidate and clean up usage analytics
- Build systems that monitor data losses from the mobile sources
- Devise strategies to consolidate and compensate for data losses by correlating different sources
- Solve challenging data problems with cutting-edge design and algorithms

What You'll Need
- 4+ years of experience in a competitive engineering environment
- Design: knowledge of data structures and an eye for design. You can discuss the tradeoffs between design choices, both on a theoretical level and on an applied level.
- Strong coding/debugging abilities: you have advanced knowledge of at least one programming language, and are happy to learn more. Our core languages are Java, Python, and Scala.
- Big data: experience with distributed systems such as Hadoop, Hive, Spark, and Kafka is preferred.
- Data pipeline: a strong understanding of SQL and databases. Experience in building data pipelines is a great plus. Love getting your hands dirty with the data, implementing custom ETLs to shape it into information.
- A team player: you believe you can achieve more on a team, that the whole is greater than the sum of its parts. You rely on others' candid feedback for continuous improvement.
- Business acumen: you understand requirements beyond the written word.
Whether you're working on an API used by other developers, an internal tool consumed by our operations teams, or a feature used by millions of customers, your attention to detail leads to a delightful user experience.

About the Team
The Rider Data Platform team is a relatively new team tasked with shaping the future architecture of Uber's Rider Data Stack. We are a bunch of engineers passionate about helping Uber grow by focusing our energy on building the next-gen data platform to provide insights into global Rider data in the most optimal manner. This work will be instrumental in identifying gaps in the current implementation as well as formulating the key strategies for the overall Rider experience.

Uber
At Uber, we ignite opportunity by setting the world in motion. We take on big problems to help drivers, riders, delivery partners, and eaters get moving in more than 600 cities around the world. We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together.
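One task described above, consolidating raw event streams into a deduplicated source-of-truth log, can be sketched in a few lines. This is purely illustrative; the event shape and field names are assumptions, not Uber's actual schema:

```python
# Toy consolidation step: merge raw event batches from several sources,
# keep one copy per event id (the earliest), and order by timestamp to
# form a single "source of truth" log. Field names are invented.
def consolidate(*sources):
    seen = {}
    for source in sources:
        for event in source:
            eid = event["id"]
            if eid not in seen or event["ts"] < seen[eid]["ts"]:
                seen[eid] = event
    return sorted(seen.values(), key=lambda e: e["ts"])

mobile = [{"id": "a", "ts": 3}, {"id": "b", "ts": 1}]
backend = [{"id": "a", "ts": 2}, {"id": "c", "ts": 4}]
log = consolidate(mobile, backend)
print([e["id"] for e in log])  # ordered by timestamp, no duplicates
```

A production pipeline would do the same thing with Spark or Kafka Streams over far larger batches, but the reconciliation logic is the same in spirit.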
About the Role
● Solving application security problems using ML by building proofs of concept and productionising ML pipelines
● Conceptualising, designing and implementing ML platforms/pipelines for solving ML problems at scale, in a quick, iterative manner, at massive data scale
● Strong collaboration with Data Scientists and other stakeholders

Qualifications
● Master's degree in computer science from top universities (tier 1 only). Specialisation in ML is preferred. A PhD from top universities in the US or from IISc in India is a strong plus.
● Minimum 5 years of work experience in developing ML pipelines and products using Java and Python
● Strong understanding of ML concepts and use of common ML algorithms applied to discrete sequences and high-dimensional categorical data
● Strong software development skills, including familiarity with data structures & algorithms and software design practices
● Strong understanding of various ML frameworks
● Experience working on anomaly detection and security products will be a strong plus
● Experience working in SaaS enterprise companies will be a strong plus
● Demonstration of knowledge in applied machine learning via participation in Kaggle-like challenges is an added bonus

As a founding engineer, you can expect top compensation, good startup equity to create mid/long-term wealth, the opportunity to create a top-notch product/platform bottom-up, and the opportunity to work with some exceptional people in a strong workplace culture.
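Anomaly detection, which the posting names as a plus, can be illustrated with the simplest possible rule: flag points far from the mean in standard-deviation units. The data and threshold below are invented for the sketch; real security products use far richer models:

```python
# Toy z-score anomaly detector: flag values whose distance from the mean
# exceeds `threshold` standard deviations (population std).
def zscore_anomalies(values, threshold=3.0):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    if std == 0:
        return []
    return [v for v in values if abs(v - mean) / std > threshold]

# With only 10 points, a single outlier's z-score is capped near 3,
# so we use a slightly lower threshold for the demo.
readings = [10, 11, 9, 10, 12, 10, 11, 9, 10, 100]  # 100 is the outlier
print(zscore_anomalies(readings, threshold=2.5))
```

The same idea generalises to discrete sequences and high-dimensional data via learned density or reconstruction-error models.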
Job Brief and Requirements
• We are looking for a Machine Learning / Natural Language Processing Engineer to help us improve our NLP products and create new NLP applications.
• Experience in applying different NLP techniques to problems such as text classification, text summarization, question answering, information retrieval, knowledge extraction, and conversational bot design, potentially with both traditional and Deep Learning techniques.
• NLP skills/tools: NLP, HMM, CRF, LDA, Word2Vec, Seq2Seq, spaCy, NLTK, Gensim, CoreNLP, NLU, NLG, etc.
• Ability to design and develop a practical analytical approach, keeping in mind data quality and availability, feasibility, scalability, and turnaround time.
• Create language models from text data. These language models draw heavily from statistical, deep learning and rule-based research in recent times around building taggers, parsers, knowledge-graph-based dictionaries, etc.
• Understanding of data creation. Develop highly scalable classifiers and tools leveraging machine learning and rule-based models.
• Work closely with product teams to implement algorithms that power user- and developer-facing products.
• Perform user research and evaluate user feedback.
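Of the problems listed, text classification with a traditional technique is the easiest to sketch: TF-IDF features feeding a Naive Bayes classifier. The tiny corpus and labels below are invented for the example, and a real system would use one of the listed toolkits at scale:

```python
# Traditional text classification: TF-IDF features + Multinomial Naive
# Bayes, on an invented six-document corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["refund my order", "payment failed again", "great app, love it",
         "amazing support team", "cannot log in", "five stars"]
labels = ["complaint", "complaint", "praise",
          "praise", "complaint", "praise"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)
pred = clf.predict(["payment refund"])[0]
print(pred)
```

Deep-learning variants (Seq2Seq, transformer encoders) replace the TF-IDF step with learned representations but keep the same fit/predict shape.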
Qualifications for Big Data Engineer
We are looking for a candidate with 2+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Relational SQL and NoSQL databases
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, etc.
- Object-oriented / object function scripting languages: Java, Scala, Python, etc.
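The core service that workflow managers like Azkaban, Luigi, and Airflow provide is running tasks in dependency order. A toy sketch of that idea with the standard library (the task names are invented):

```python
# What a workflow manager does at its core: topologically order a DAG
# of tasks so each runs after its dependencies. Python 3.9+ stdlib.
from graphlib import TopologicalSorter

# task -> set of upstream tasks it depends on
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)
```

Real schedulers add retries, backfills, cron-style triggers, and distributed execution on top of this ordering.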
About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How we work
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities. We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision, with a Master's degree (PhD preferred).

Key problem areas
- Preprocessing and feature extraction on noisy and unstructured data, both text and images.
- Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all of the above problems using multiple text- and image-based techniques.

Relevant set of skills
- A strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills in multiple programming languages with experience building production-grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, and Tensorflow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter who thrives in fast-paced environments with minimal 'management'.
- It's a huge bonus if you have personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.

Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end-to-end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiative to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
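Document clustering, one of the problem areas listed, can be sketched with TF-IDF features and k-means from scikit-learn. The four documents below are invented retail-flavoured snippets for the example:

```python
# Toy document clustering: TF-IDF vectors + k-means with k=2.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["cheap running shoes on sale", "running shoes discount offer",
        "smartphone with great camera", "new smartphone camera review"]

X = TfidfVectorizer().fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # shoe docs land together, phone docs land together
```

At DataWeave-style scale, the vectorizer and clustering step would run distributed (e.g. on Spark), but the modelling pattern is the same.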
Write, test, debug and ship code, and gather feedback on scale, performance and security to incorporate back into the platform. Work with the founders to identify complex technical problems and solve them. Work with the product design and client experience development teams to support them with scalable services. Feed into the overall mission and vision of the eParchi platform over the coming months and years. Required: an ability to perform well in a fast-paced environment, and excellent analytical and multitasking skills. ● The intern will have to dedicate full time until college opens, after which they will need to dedicate 6-7 hours on weekdays and full time on weekends.
At Entropik Technologies, we build systems that measure and analyze human emotions at an unprecedented scale, with accuracy, speed, and mission-critical availability. We work with some of the leading brands and agencies across the globe, which utilize our platform to improve overall customer experience and understand consumer behavior and subconscious responses. The Data Science team at Entropik is a high-profile team that is a center of innovation for the company and a major contributor to the company's core products. The types of challenges we solve have attracted people from industry and academia with diverse backgrounds. We're passionate about maintaining an open and collaborative environment, where team members bring their own unique style of thinking and tools to the table.

Responsibilities:
- Work on challenging fundamental data science problems in affective computing
- Propose and develop solutions independently and work with other data scientists
- Drive the collection of new data and the refinement of existing data sources
- Continuously focus on enhancing the current models, with the overall goal of improving accuracy across different emotion touch points
- Prepare white papers, scientific publications and conference presentations
- Work closely with product and engineering teams to identify and answer important product questions
- Communicate findings to product managers and engineers
- Analyze and interpret the results
- Develop best practices for instrumentation and experimentation and communicate those to product engineering teams

Requirements:
- Masters or Ph.D. in a relevant technical field (deep learning, machine learning, computer science, physics, mathematics, statistics, or related), or 4+ years' experience in a relevant role
- Extensive experience solving analytical problems with quantitative approaches and machine learning methods
- Experience in Computer Vision and visual feature extraction
- Experience with deep learning libraries like TensorFlow and PyTorch, and architectures like CNN and R-CNN
- Track record of using advanced statistical methods, information retrieval, and data mining techniques
- Comfort manipulating and analyzing complex, high-volume, high-dimensional data from varying sources
- A strong passion for empirical research and for answering hard questions with data
- A flexible analytic approach that allows for results at varying levels of precision
- Fluency with at least one scripting language, such as Python
- Experience with at least some of the following machine learning libraries: scikit-learn, H2O, SparkML, etc.
- Experience with practical data science: source control workflows, deploying machine learning models in production, real-time machine learning
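The posting mentions CNN architectures for visual feature extraction. As a toy illustration of the operation at their core, here is a 2-D convolution (strictly, cross-correlation, as most deep learning libraries compute it) written directly with numpy; the image and kernel values are invented:

```python
# Direct 2-D convolution (cross-correlation) over a tiny image.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # "valid" output height
    ow = image.shape[1] - kw + 1  # "valid" output width
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of the kernel with each image patch.
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
edge = np.array([[1., -1.],
                 [1., -1.]])  # a simple vertical-edge detector
print(conv2d(image, edge))
```

A CNN layer in TensorFlow or PyTorch applies banks of such learned kernels, followed by nonlinearities and pooling.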
The duration of this internship is _1___ months. Igeeks Technologies provides internships for final-year engineering students in Bangalore with training, a mini project, and report guidance. It is a strong option for carrying out final-year internships in Bangalore, Karnataka. As per university standards, Igeeks Technologies offers internships for B.E. students (ECE / TCE / EEE / CSE / ISE / MECH). Registrations have already started for Intern in Embedded with IoT, Intern in IoT, Intern in Java, and Intern in Python ML. Interested students can contact us. Internships are conducted as per industry standards and consist of completely hands-on training.
INTERNSHIP / INDUSTRIAL TRAINING. Attention, BE 6th and 7th semester students: final-year internships for CS/EC/TCE/IS/MECH/IT/EEE/MCA/DIPLOMA/BCA on Big Data Hadoop, Python Data Science ML, Big Data Spark, Embedded Systems, IoT, Arduino, and Raspberry Pi. Duration: 4 weeks. Internship cum training program schedule: 5th July 2019 to 5th August 2019. Benefits: training from industry experts, work on real-time projects, company certification, hands-on experience, project report materials, mini project, and 8th-semester IEEE project assistance.
(Senior) Data Scientist Job Description

About us
DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science @ DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How we work
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
● Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
● Last but not the least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities. We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision. If you have 4+ years of relevant experience with a Master's degree (PhD preferred), you will be considered for a senior role.

Key problem areas
● Preprocessing and feature extraction on noisy and unstructured data, both text and images.
● Keyphrase extraction, sequence labeling, and entity relationship mining from texts in different domains.
● Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
● Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
● Ensemble approaches for all of the above problems using multiple text- and image-based techniques.

Relevant set of skills
● A strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
● Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
● Excellent coding skills in multiple programming languages with experience building production-grade systems. Prior experience with Python is a bonus.
● Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
● Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with Retail domain data is a bonus.
● Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
● Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, and Tensorflow.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Be a self-starter who thrives in fast-paced environments with minimal 'management'.
● It's a huge bonus if you have personal projects (including open source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.

Role and responsibilities
● Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
● Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
● Build robust clustering and classification models in an iterative manner that can be used in production.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Take end-to-end ownership of the projects you are working on. Work with minimal supervision.
● Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
● Take initiative to build new capabilities. Develop business awareness. Explore productization opportunities.
● Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
● Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
We at artivatic are seeking a passionate, talented and research-focused natural language processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience modeling and developing NLP tools, and experience working with machine learning/deep learning algorithms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Mathematics or a related field, with specialization in natural language processing, machine learning or deep learning.
- A publication record in conferences/journals is a plus.
- 2+ years of working/research experience building NLP-based solutions is preferred.

Required Skills:
- Hands-on experience building NLP models using different NLP libraries and toolkits, like NLTK, Stanford NLP, etc.
- Good understanding of rule-based, statistical and probabilistic NLP techniques.
- Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, named entity recognition, etc.
- Good understanding of machine learning and deep learning algorithms.
- Good knowledge of data structures and algorithms.
- Strong programming skills in Python/Java/Scala/C/C++.
- Strong problem-solving and logical skills.
- A go-getter attitude with a willingness to learn new technologies.
- Well versed in software design paradigms and good development practices.

Responsibilities:
- Developing novel algorithms and modeling techniques to advance the state of the art in Natural Language Processing.
- Developing NLP-based tools and solutions end to end.
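The rule-based NLP techniques the posting mentions can be illustrated with a tiny pattern-based named-entity tagger. The patterns and sentence below are invented for the sketch; production systems use toolkits like NLTK or Stanford NLP, or statistical sequence models (CRFs), instead:

```python
# Toy rule-based named-entity recognition with regular expressions.
import re

PATTERNS = [
    ("DATE", re.compile(
        r"\b\d{1,2} (?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
        r"\w* \d{4}\b")),
    ("MONEY", re.compile(r"\bRs\.? ?\d[\d,]*\b")),
    ("ORG", re.compile(r"\b[A-Z][a-z]+ (?:Inc|Ltd|Corp)\b")),
]

def tag_entities(text):
    found = []
    for label, pat in PATTERNS:
        for m in pat.finditer(text):
            found.append((m.group(), label))
    return found

sentence = "Acme Ltd paid Rs 5,000 on 3 March 2019."
print(tag_entities(sentence))
```

Statistical and probabilistic approaches replace these hand-written patterns with learned features, trading precision on known patterns for recall on unseen ones.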
About Us
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, Entrepreneurship, and more. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and relevant, and to help build the careers of tomorrow.
- upGrad was awarded Best Tech for Education by IAMAI for 2018-19.
- upGrad was ranked as one of the LinkedIn Top Startups 2018: the 25 most sought-after startups in India.
- upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany, and we were covered by the Financial Times along with other disruptors in Ed-Tech.
- upGrad is also the official education partner for the Government of India's Startup India program.
- Our program with IIIT-B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning.

Job Description
We're looking for a hands-on technical leader to head the Data Science and Data Engineering vertical within the Technology team, someone who can work alongside business stakeholders and product managers to drive the creation of data-driven products/platforms.
Responsibilities:
- Ownership of end-to-end initiatives across all products and services at upGrad in the data domain: data engineering, data science, and product analytics.
- Lead, mentor and guide our team of Data Analysts, Data Engineers and Data Scientists.
- Establish and execute the tech data roadmap, balancing short-term and long-term needs to ensure that the architecture can scale and evolve.
- Data Engineering entails envisioning and extending our Redshift data warehouse; architecting robust data-ingestion pipelines from internal and external sources; domain schema creation and democratizing data for all stakeholders; monitoring and administration of all data services and the tech that goes with them; and mentoring and guiding our data engineers to execute on these initiatives.
- Data Science entails ideating, developing and delivering data-science-backed products and services across the entire upGrad domain (Learn, Career, Sales and Marketing); understanding the business domain and ideating/brainstorming potential data science solutions; collaborating with Product and business stakeholders on feasibility, impact, execution, delivery and adoption; and hands-on architecting, reviewing and mentoring the data science team to execute on these projects/initiatives on both the data and tech side and deploy them into the production environment.
- Data Analytics entails envisioning and supporting all product analytics initiatives: ease of access to the right data points and visualizations to enable teams to make strong data-backed decisions across all aspects of upGrad; guiding/mentoring our data analysts on the approach and data sources available for delivering the required analytics; understanding the nitty-gritty of the product and end-to-end user delivery to be able to deliver holistic analytics; collaborating with product and business stakeholders to make sure their data needs are fulfilled as per need and feasibility; and owning the administration of the various data analytics tools and services.
- Overall, the role calls for a data evangelist who constantly pushes the team to collect, organize, share and utilize data in meaningful ways to better the product and its services, and who, at their core, is serious about data integrity, data security, and the ethical sourcing and utilization of data.

Qualifications:
- A suitable candidate for this position will be a strategic/innovative thinker who is proactive and self-driven, requires minimal supervision, and is able to build a strong data-focused team.
- A tech data expert: the go-to person for all things data, software and technology.
- 6+ years of experience in leading and building strong data-powered products and services.
- Master's or higher education in the field of Stats, Data, or Computer Science preferred.
- Hands-on experience in building data warehouses and ETL pipelines from scratch and supporting them in production environments.
- Hands-on experience in building and deploying machine learning / natural language processing based data products.
- A strong drive to understand the domain, and sharp analytical thinking to ideate on how data-driven products/services can help drive the business.
- Strong programming skills in Python and SQL.
- Strong understanding of relational and non-relational databases.
- Strong understanding of the various technologies/tools/libraries available and upcoming in this area, with the ability to drive their adoption as per business needs and future growth.
- Ability to handle multiple simultaneous projects, prioritize, and meet tight impact-based deadlines, while staying organized and calm when dealing with many uncertainties and unknowns.
- Excellent interpersonal and communication skills.
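The ETL-to-warehouse flow the role describes can be sketched end to end with the standard library, using sqlite3 as a stand-in for a real warehouse like Redshift. The table, columns, and events below are assumptions invented for illustration:

```python
# Minimal ETL sketch: load raw events into a (stand-in) warehouse table,
# then transform them into a rollup for product analytics.
import sqlite3

raw_events = [
    ("u1", "course_view", "2019-05-01"),
    ("u1", "enroll",      "2019-05-02"),
    ("u2", "course_view", "2019-05-02"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, action TEXT, day TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", raw_events)  # load

# Transform inside the warehouse: per-action counts for analytics.
rollup = conn.execute(
    "SELECT action, COUNT(*) FROM events GROUP BY action ORDER BY action"
).fetchall()
print(rollup)
conn.close()
```

In a real pipeline the load step would be an ingestion job (e.g. into Redshift) and the rollup a scheduled materialization, but the SQL shape is the same.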