About UpGrad: UpGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience that delivers tangible career impact to individuals at scale. UpGrad currently offers programs in Data Analytics, Product Management, Digital Marketing, and Entrepreneurship, and was rated one of the top 10 most innovative companies in India for 2017 (https://www.fastcompany.com/most-innovative-companies/2017/sectors/india). We plan to launch 6 more programs in technology and management education. UpGrad was co-founded by 3 IIT Delhi alumni, with serial entrepreneur Ronnie Screwvala as the 4th co-founder. UpGrad has committed capital of 100 Cr and, in its first year of operations, built the largest revenue-generating online program in India (PG Diploma in Data Analytics) and the online program with the largest enrolment in India (Startup India learning program). UpGrad is looking for people passionate about management and education to help design learning programs that help working professionals stay sharp and stay relevant, and help build the careers of tomorrow.

Position: Senior Data Scientist
Position Type: Full Time
Location: Mumbai

Job Description: Are you excited by the challenge and the opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating and creating data products from scratch and pushing them live into production systems? Do you want to work with a team of highly motivated members who are on a mission to empower individuals through education? If this is you, come join us and become part of the UpGrad technology team.
At UpGrad, the technology team enables all facets of the business - from bringing efficiency to our marketing and sales initiatives, to enhancing our student learning experience, to empowering our content, delivery and student success teams, to aiding our students in achieving their desired career outcomes. We play the part of bringing together data & tech to solve the business problems and opportunities at hand. We are looking for a highly skilled, experienced and passionate data scientist who can come on board and help create the next generation of data-powered education tech products. The ideal candidate is someone who has worked in a data science role before, is comfortable working with unknowns, evaluating data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications - someone with a strong math, stats and data-science background, comfortable handling data (structured + unstructured), with the engineering know-how to implement and support such data products in a production environment. Ours is a highly iterative and fast-paced environment, so being flexible, communicating well and attention to detail are very important too. The ideal candidate should be passionate about customer impact and comfortable working with multiple stakeholders across the company.
Basic Qualifications:
- 3+ years of experience in analytics, data science, machine learning or a comparable role
- Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or a related discipline
- Experience building and deploying machine learning models in production systems
- Strong analytical skills: ability to make sense of a variety of data and its relation/applicability to the business problem or opportunity at hand
- Strong programming skills: comfortable with Python (pandas, numpy, scipy, matplotlib) and with databases, both SQL and NoSQL
- Strong communication skills: ability to formulate/understand the business problem at hand, and to discuss it with stakeholders from a non-data-science background
- Comfortable dealing with ambiguity and competing objectives

Preferred Qualifications:
- Experience in Text Analytics, Natural Language Processing
- Advanced degree in Data Science/Data Analytics or Math/Statistics
- Comfortable with data-visualization tools and techniques
- Knowledge of AWS and Data Warehousing
- Passion for building data products for production systems: a strong desire to impact the product through data-science techniques
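As a purely illustrative sketch of the "SQL" half of the programming requirement (hypothetical table and figures, not from the posting), using Python's built-in sqlite3 module:

```python
import sqlite3

# Hypothetical enrolment data, invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrolments (program TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO enrolments VALUES (?, ?)",
    [("Data Analytics", 120.0), ("Data Analytics", 80.0), ("Product Mgmt", 50.0)],
)
# Aggregate revenue per program, highest first.
rows = conn.execute(
    "SELECT program, SUM(revenue) FROM enrolments GROUP BY program ORDER BY 2 DESC"
).fetchall()
# rows -> [("Data Analytics", 200.0), ("Product Mgmt", 50.0)]
```

The same GROUP BY/aggregate shape carries over directly to pandas (`df.groupby("program")["revenue"].sum()`).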
About Zycus: Founded in 1998 and headquartered in Princeton, U.S., Zycus has grown every day into a leading global provider of a complete Source-to-Pay suite of procurement performance solutions. We develop cloud-based (SaaS) Source-to-Pay solutions for large global enterprises, and have successfully deployed about 200 solutions to over 1000 global clients. We are proud to have as our clients some of the best-of-breed companies across verticals like Manufacturing, Automotive, Banking and Finance, Oil and Gas, Food Processing, Electronics, Telecommunications, Chemicals, Health and Pharma, Education and more. With a team of 1000+ employees, we are present in India with 3 development centers at Bengaluru, Mumbai & Pune, and offices in the U.S., U.K., Australia, Dubai, Netherlands and Singapore. Know more about the LEADER of Gartner's 2013, 2015 & 2017 Magic Quadrant for Strategic Sourcing Application Suites and The Forrester Wave™: eProcurement, Q2 2017. We are in the process of launching Merlin A.I. Studio™. The artificial intelligence (AI)-based platform will allow procurement teams to build and deploy bots across the source-to-pay process. The bots will be used by firms leveraging more than 1,100 APIs from Zycus' solution suite. "By deploying the intelligent bots from Merlin A.I. 
Studio™, procurement can put themselves in cruise-control mode as the bots work towards accomplishing tasks with zero human intervention," the Fortune 500-serving firm explained in its press release. "Be it running an RFI event, discovering contract risks, negotiating with suppliers or transnational procurement; all one needs to do is launch the bot and see the magic unfold." "It will empower procurement to transform their routine, repetitive & mundane procurement tasks, so that time, effort & resources can be optimized towards more strategic initiatives."

Exp: 1 to 10 Years
Role: Data Scientist
Location: Bangalore
Education: Any Engineering degree from IIT, NIT, IIIT, VIT, BITS Pilani
Please carry your original ID proof along with a hard copy of your resume.

Requirements: We are especially looking for applicants with a strong background in Analytics and Data Mining (Web, Social and Big Data), Machine Learning and Pattern Recognition, Natural Language Processing and Computational Linguistics, Statistical Modelling and Inference, Information Retrieval, Large-Scale Distributed Systems and Cloud Computing, Econometrics and Quantitative Marketing, Applied Game Theory and Mechanism Design, Operations Research and Optimization, Human-Computer Interaction and Information Visualization. Applicants with a background in other quantitative areas are also encouraged to apply. If you are passionate about research and developing innovative technologies of interest to Zycus and the research community at large, the BigData Experience Lab may be the right place for you. All successful candidates are expected to dive deep into problem areas of Zycus's interest and invent technology solutions to not only advance the current products, but also to generate new product options that can strategically advantage Zycus.

Skills:
- Master's or Ph.D. in statistics, mathematics, or computer science (Tier 1 colleges only)
- Experience using statistical computer languages such as R, Python, SQL, etc. 
- Experience in statistical and data mining techniques, including generalized linear models/regression, random forests, boosting, trees, text mining, social network analysis
- Experience working with and creating data architectures
- Knowledge of machine learning techniques such as clustering, decision tree learning, and artificial neural networks
- Knowledge of advanced statistical techniques and concepts, including regression, properties of distributions, and statistical tests
- 2-10 years of experience manipulating data sets and building statistical models
- Experience using web services: Redshift, S3, Spark, DigitalOcean, etc.
- Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
- Experience visualizing/presenting data

The Data Scientist will report to the Director Engineering - Data Scientist; the roles & responsibilities are as below:
- Work as the data strategist, identifying and integrating new datasets that can be leveraged through our product capabilities, and work closely with the engineering team to strategize and execute the development of data products
- Execute analytical experiments methodically to help solve various problems and make a true impact across various domains and industries
- Identify relevant data sources and sets to mine for client business needs, and collect large structured and unstructured datasets and variables
- Devise and utilize algorithms and models to mine big data stores, perform data and error analysis to improve models, and clean and validate data for uniformity and accuracy
- Analyze data for trends and patterns, and interpret data with a clear objective in mind
- Implement analytical models into production by collaborating with software developers and machine learning engineers
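For the regression item above, a minimal illustration of what fitting a model means: ordinary least squares for a single predictor, in plain Python on toy data invented for illustration (in practice one would reach for R, statsmodels or scikit-learn):

```python
# Ordinary least squares for y ≈ slope * x + intercept.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x); intercept makes the line
    # pass through the point of means (mx, my).
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    intercept = my - slope * mx
    return slope, intercept

# Toy data lying exactly on y = 2x + 1, so OLS recovers it exactly.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

Generalized linear models extend this by passing the linear predictor through a link function; the estimation idea is the same.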
- Communicate analytic solutions to stakeholders and implement improvements as needed to operational systems

Benefits: Along with a competitive compensation structure, Zycus believes in an open-culture learning environment, where everyone gets a chance to share their ideas and deliver par excellence. Here's a sneak peek into our life at Zycus.
• Model design, feature planning, system infrastructure, production setup and monitoring, and release management.
• Excellent understanding of machine learning techniques and algorithms, such as SVM, Decision Forests, k-NN, Naive Bayes, etc.
• Experience in selecting features, and building and optimizing classifiers using machine learning techniques.
• Prior experience with data visualization tools, such as D3.js, ggplot, etc.
• Good knowledge of statistics, such as distributions, statistical testing, regression, etc.
• Adequate presentation and communication skills to explain results and methodologies to non-technical stakeholders.
• Basic understanding of the banking industry is a value add.
• Develop, process, cleanse and enhance data collection procedures from multiple data sources.
• Conduct & deliver experiments and proofs of concept to validate business ideas and potential value.
• Test, troubleshoot and enhance the developed models in distributed environments to improve their accuracy.
• Work closely with product teams to implement algorithms with Python and/or R.
• Design and implement scalable predictive models and classifiers leveraging machine learning and data regression.
• Facilitate integration with enterprise applications using APIs to enrich implementations.
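The algorithm list above names k-NN; as a hedged sketch of the underlying idea, here is a from-scratch nearest-neighbour classifier on toy data (illustrative only - a production classifier would come from a library such as scikit-learn):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; query: a feature tuple."""
    # Squared Euclidean distance is enough for ranking neighbours.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    # Majority vote among the k nearest neighbours.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Two well-separated toy clusters, invented for illustration.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
knn_predict(train, (0.5, 0.5))  # lands in the "a" cluster
```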
We are changing the way enterprises manage and consume data and insights by making data and insights platforms smarter and better. As part of Cynepia Technologies, Customer Facing Data Scientists are critical to making our customers successful. An ideal candidate should have strong fundamentals of applied data science in a business setting and should enjoy communicating and evangelizing data science solutions to business stakeholders. The candidate must possess the following:

A) Product/Customer/People Skills:
1. Experience working in a fast-paced startup environment in the capacity of an individual contributor, focused on understanding customer problems/requirements and solving them using Cynepia products/solutions.
2. Executing data engineering/science workflows for customers.
3. Conducting and managing data science projects with the customer's vision of success in mind.
4. Engaging and collaborating with various customer teams and managing the customer experience.
5. Great oral and written communication, and a hunger to create and build, are non-negotiable.
6. Strong customer interaction, management and organizational skills.

B) Technical Skills:
1. Experience with dataset preparation using Python/R.
2. Experience building and optimizing data pipelines, architectures and data sets.
3. Hands-on experience building predictive models and preparing data for the same.
4. Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases. The ability to understand and write code that adheres to and ensures serviceability, performance, reliability, availability and scalability of the architecture in a large enterprise.
5. Prior experience working with the adoption of enterprise data products/use cases in a fast-paced startup environment is desired.
6. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Skill Set: SQL, Python, NumPy, Pandas; knowledge of Hive and data warehousing concepts will be a plus point.
JD:
- Strong analytical skills, with the ability to collect, organise, analyse and interpret trends or patterns in complex data sets and provide reports & visualisations.
- Work with management to prioritise business KPIs and information needs; locate and define new process improvement opportunities.
- Technical expertise with data models, database design and development, data mining and segmentation techniques.
- Proven success in a collaborative, team-oriented environment.
- Working experience with geospatial data will be a plus.
About the Role:
● Solving application security problems using ML by building proofs of concept and productionising ML pipelines
● Conceptualising, designing and implementing ML platforms/pipelines for solving ML problems at scale, in a quick, iterative manner and at massive data scale
● Strong collaboration with data scientists and other stakeholders

Qualifications:
● Master's degree in computer science from top universities (tier 1 only); specialisation in ML is preferred. A PhD from top universities in the US, or from IISc in India, is a strong plus.
● Minimum 5 years of work experience developing ML pipelines and products using Java and Python
● Strong understanding of ML concepts and the use of common ML algorithms applied to discrete sequences and high-dimensional categorical data
● Strong software development skills, including familiarity with data structures & algorithms and software design practices
● Strong understanding of various ML frameworks
● Experience working on anomaly detection and security products is a strong plus
● Experience working in SaaS enterprise companies is a strong plus
● Demonstrated knowledge of applied machine learning via participation in Kaggle-like challenges is an added bonus

As a founding engineer, you can expect top compensation, good startup equity to create mid/long-term wealth, the opportunity to create a top-notch product/platform bottom-up, and the opportunity to work with some exceptional people in a strong workplace culture.
Qualifications for Big Data Engineer: We are looking for a candidate with 2+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Big data tools: Hadoop, Spark, Kafka, Hive, etc.
- Relational SQL and NoSQL databases.
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift.
- Stream-processing systems: Storm, Spark Streaming, etc.
- Object-oriented/object function scripting languages: Java, Scala, Python, etc.
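Hadoop and Spark both build on the map -> shuffle -> reduce programming model; as a hedged, single-machine sketch of that model (word counting on invented input, illustrative only):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (key, value) pairs - here, (word, 1) per occurrence.
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key (the framework does this across nodes).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values - here, summing the counts.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big pipelines"])))
# counts -> {"big": 2, "data": 1, "pipelines": 1}
```

The distributed frameworks add partitioning, fault tolerance and disk/network I/O around this same three-stage shape.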
About us: DataWeave provides Retailers and Brands with "Competitive Intelligence as a Service" that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science @ DataWeave: We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How we work: It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for? The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. 
Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities. We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision, with a Master's degree (PhD preferred).

Key problem areas:
- Preprocessing and feature extraction of noisy and unstructured data -- both text as well as images.
- Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.

Relevant set of skills:
- A strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills in multiple programming languages, with experience building production-grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc.). 
- Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter, someone who thrives in fast-paced environments with minimal 'management'.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.

Role and responsibilities:
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end-to-end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiative to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
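The "keyphrase extraction" problem area above can be illustrated, very crudely, by frequency-based keyword ranking (hypothetical stopword list and input, invented for illustration; production systems use far richer statistical and neural models):

```python
from collections import Counter

# Tiny illustrative stopword list - a real system would use a full one.
STOPWORDS = {"the", "of", "and", "a", "to", "in", "on"}

def top_keywords(text, n=2):
    # Drop stopwords, then rank remaining words by raw frequency.
    words = [w for w in text.lower().split() if w not in STOPWORDS]
    return [word for word, _ in Counter(words).most_common(n)]

top_keywords("the price of the product and the price history of the product")
# -> ["price", "product"]
```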
Write, test, debug and ship code, and gather feedback on scale, performance and security to incorporate back into the platform. Work with the founders to identify complex technical problems and solve them. Work with the product design and client experience development teams to support them with scalable services. Feed into the overall mission and vision of eParchi's platform over the coming months and years. An ability to perform well in a fast-paced environment. Excellent analytical and multitasking skills.
● The candidate will have to dedicate full time until the college opens, after which they will need to dedicate 6-7 hours on weekdays and full time on weekends.
At Entropik Technologies, we build systems that measure and analyze human emotions at an unprecedented scale, with accuracy, speed, and mission-critical availability. We work with some of the leading brands and agencies across the globe, who utilize our platform to improve overall customer experience and understand consumer behavior and subconscious responses. The Data Science team at Entropik is a high-profile team that is a center of innovation for the company and a major contributor to the company's core products. The types of challenges we solve have attracted people from industry and academia with diverse backgrounds. We're passionate about maintaining an open and collaborative environment, where team members bring their own unique style of thinking and tools to the table.

Responsibilities:
- Work on challenging fundamental data science problems in affective computing
- Propose and develop solutions independently and work with other data scientists
- Drive the collection of new data and the refinement of existing data sources
- Continuously focus on enhancing the current models, with the overall goal of improving the accuracy of different emotion touch points
- Prepare white papers, scientific publications and conference presentations
- Work closely with product and engineering teams to identify and answer important product questions
- Communicate findings to product managers and engineers
- Analyze and interpret the results
- Develop best practices for instrumentation and experimentation and communicate those to product engineering teams

Requirements:
- Masters or Ph.D. in a relevant technical field (deep learning, machine learning, computer science, physics, mathematics, statistics, or a related field), or 4+ years' experience in a relevant role
- Extensive experience solving analytical problems with quantitative approaches using machine learning methods
- Experience in Computer Vision and visual feature extraction 
- Experience with deep learning libraries like Tensorflow and Pytorch, and architectures like CNN and RCNN
- Track record of using advanced statistical methods, information retrieval, and data mining techniques
- Comfort manipulating and analyzing complex, high-volume, high-dimensional data from varying sources
- A strong passion for empirical research and for answering hard questions with data
- A flexible analytic approach that allows for results at varying levels of precision
- Fluency with at least one scripting language such as Python
- Experience with at least some of the following machine learning libraries: scikit-learn, H2O, SparkML, etc.
- Experience with practical data science: source control workflows, deploying machine learning models in production, real-time machine learning.
The duration of this internship is 1 month. Igeeks Technologies provides internships for final-year engineering students in Bangalore with training, a mini project and report guidance - the best place for carrying out final-year internships in Bangalore, Karnataka. As per university standards, Igeeks Technologies offers internships for B.E. students (ECE / TCE / EEE / CSE / ISE / MECH). Registrations have already started for internships in Embedded with IoT, IoT, Java, and Python ML. Interested students can contact us. The internship will be conducted as per industry standards and will be completely hands-on training.

INTERNSHIP / INDUSTRIAL TRAINING - ATTENTION BE 6TH AND 7TH SEMESTER STUDENTS: final-year internship for CS/EC/TCE/IS/MECH/IT/EEE/MCA/Diploma/BCA on Big Data Hadoop, Python Data Science ML, Big Data Spark, Embedded Systems, IoT, Arduino, Raspberry Pi. Duration: 4 weeks. Internship-cum-training program schedule: 5th July 2019 to 5th August 2019. Benefits: training from industry experts, work on real-time projects, company certification, hands-on experience, project report materials, mini project, 8th sem IEEE project assistance.
(Senior) Data Scientist Job Description

About us: DataWeave provides Retailers and Brands with "Competitive Intelligence as a Service" that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.

Data Science @ DataWeave: We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and structured domain knowledge needed to deliver insights through our data products. Our underpinnings are: innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.

How we work: It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!

What do we offer?
● Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
● Ability to see the impact of your work and the value you're adding to our customers almost immediately.
● Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
● A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
● Learning opportunities with courses and tech conferences. 
Mentorship from seniors in the team.
● Last but not the least, competitive salary packages and fast-paced growth opportunities.

Who are we looking for? The ideal candidate is a strong software developer or a researcher with experience building and shipping production-grade data science applications at scale. Such a candidate has a keen interest in liaising with the business and product teams to understand a business problem and translate it into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities. We are looking for someone with a Master's degree and 1+ years of experience working on problems in NLP or Computer Vision. If you have 4+ years of relevant experience with a Master's degree (PhD preferred), you will be considered for a senior role.

Key problem areas:
● Preprocessing and feature extraction of noisy and unstructured data -- both text as well as images.
● Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
● Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
● Image-based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
● Ensemble approaches for all the above problems using multiple text and image based techniques.

Relevant set of skills:
● A strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
● Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
● Excellent coding skills in multiple programming languages, with experience building production-grade systems. 
Prior experience with Python is a bonus.
● Experience building and shipping machine learning models that solve real-world engineering problems. Prior experience with deep learning is a bonus.
● Experience building robust clustering and classification models on unstructured data (text, images, etc.). Experience working with Retail domain data is a bonus.
● Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
● Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Be a self-starter, someone who thrives in fast-paced environments with minimal 'management'.
● It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of the projects you have hosted on GitHub.

Role and responsibilities:
● Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
● Conduct research. Do experiments. Quickly build throwaway prototypes to solve problems pertaining to the Retail domain.
● Build robust clustering and classification models in an iterative manner that can be used in production.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Take end-to-end ownership of the projects you are working on. Work with minimal supervision.
● Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
● Take initiative to build new capabilities. Develop business awareness. Explore productization opportunities.
● Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. 
Be a mentor to junior members of the team.
● Stay on top of the latest research in deep learning, NLP, Computer Vision, and other relevant areas.
We at artivatic are seeking a passionate, talented and research-focused natural language processing engineer with a strong machine learning and mathematics background to help build industry-leading technology. The ideal candidate will have research/implementation experience modeling and developing NLP tools, and experience working with machine learning/deep learning algorithms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Mathematics or a related field, with specialization in natural language processing, machine learning or deep learning.
- A publication record in conferences/journals is a plus.
- 2+ years of working/research experience building NLP-based solutions is preferred.

Required Skills:
- Hands-on experience building NLP models using different NLP libraries and toolkits, like NLTK, Stanford NLP etc.
- Good understanding of rule-based, statistical and probabilistic NLP techniques.
- Good knowledge of NLP approaches and concepts like topic modeling, text summarization, semantic modeling, named entity recognition etc.
- Good understanding of machine learning and deep learning algorithms.
- Good knowledge of data structures and algorithms.
- Strong programming skills in Python/Java/Scala/C/C++.
- Strong problem-solving and logical skills.
- A go-getter attitude with a willingness to learn new technologies.
- Well versed with software design paradigms and good development practices.

Responsibilities:
- Developing novel algorithms and modeling techniques to advance the state of the art in Natural Language Processing.
- Developing NLP-based tools and solutions end to end.
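The "rule-based NLP" and "named entity recognition" items above can be illustrated with a deliberately naive capitalised-sequence extractor (a toy heuristic on an invented sentence; libraries like NLTK or Stanford NLP use trained statistical models instead):

```python
import re

def naive_entities(text):
    # Rule: two or more consecutive capitalised tokens form a candidate
    # named entity. Crude, but shows the shape of a rule-based approach.
    return re.findall(r"(?:[A-Z][a-z]+\s)+[A-Z][a-z]+", text)

naive_entities("Ravi met the team from Artivatic Data Labs in New Delhi yesterday.")
# -> ["Artivatic Data Labs", "New Delhi"]
```

Note what the rule misses: single-token names ("Ravi"), lowercase entities, and sentence-initial capitalisation - exactly the gaps statistical NER models are trained to close.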