Job Description – Data Scientist

About Company Profile
Precily is a startup headquartered in Noida, IN. Precily is currently working with leading consulting & law firms, research firms & technology companies. Aura (Precily AI) is a data-analysis platform for enterprises that increases the efficiency of the workforce by providing AI-based solutions.

Responsibilities & Skills Required:
The role requires deep knowledge in designing, planning, testing and deploying analytics solutions, including the following:
• Natural Language Processing (NLP), Neural Networks, Text Clustering, Topic Modelling, Information Extraction, Information Retrieval, Deep Learning, Machine Learning, cognitive science and analytics.
• Proven experience implementing and deploying advanced AI solutions using R/Python.
• Applying machine learning algorithms, statistical data analysis, text clustering and summarization, and extracting insights from multiple data points.
• Excellent understanding of analytics concepts and methodologies, including machine learning (unsupervised and supervised).
• Hands-on experience handling large amounts of structured and unstructured data.
• Measure, interpret, and derive learning from results of analysis that will lead to improvements in document processing.

Skills Required:
• Python, R, NLP, NLG, Machine Learning, Deep Learning & Neural Networks
• Word Vectorizers
• Word Embeddings (word2vec & GloVe)
• RNN (CNN vs RNN)
• LSTM & GRU (LSTM vs GRU)
• Pretrained Embeddings (implementation in RNN)
• Unsupervised Learning
• Supervised Learning
• Deep Neural Networks
• Framework: Keras/TensorFlow
• Keras Embedding Layer output

Please reach out to us: firstname.lastname@example.org
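As context for the "Keras Embedding Layer output" item above: an embedding layer is essentially a lookup table mapping integer token IDs to dense vectors, so a sequence of token IDs of length `seq_len` produces an output of shape `(seq_len, embedding_dim)`. A minimal pure-Python sketch of that lookup (the vocabulary size, dimension, and token IDs below are illustrative, not from the posting):

```python
import random

def build_embedding(vocab_size, dim, seed=0):
    """Create a (vocab_size x dim) lookup table of small random vectors,
    mimicking an untrained embedding layer's weight matrix."""
    rng = random.Random(seed)
    return [[rng.uniform(-0.05, 0.05) for _ in range(dim)]
            for _ in range(vocab_size)]

def embed(table, token_ids):
    """Look up each token ID: (seq_len,) ints -> (seq_len, dim) vectors."""
    return [table[t] for t in token_ids]

table = build_embedding(vocab_size=100, dim=8)
out = embed(table, [4, 17, 4])   # a 3-token sequence; output shape (3, 8)
# repeated token IDs map to the same vector
```

In Keras itself the same idea would be `Embedding(input_dim=100, output_dim=8)`, and pretrained word2vec/GloVe vectors can be loaded into that weight matrix before training an RNN on top.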
Data may be the new oil, but it takes just as much refining as crude oil. Our data intelligence platform is used by over 150 leaders to make better, data-driven decisions on everything from tracking national programs to pinpointing where to open the next school, health centre or even ice cream store.

However, garbage in is garbage out! Our platform runs on data, but one of our biggest challenges is the incoming data's lack of structure and the absence of internal standard practices around such diverse datasets. Every partner has a new data format and a set of new data challenges to tackle. It's critical that we move beyond these challenges so we can extract key insights from the data and deliver them to our partners.

A Data Associate at SocialCops will be responsible for data modelling, cleaning, structuring, and handling both primary and secondary datasets under the mentorship of our data scientists and economists. As a Data Associate, you will play a key role in converting a variety of messy datasets into clean and structured datasets by creating quality metadata files, running scalable R/Python scripts to model the data, and performing data validations. You will also carry out data analysis, create data visualizations, and build data models to make sense of data and power critical decisions. As a Data Associate, you will play a critical role in ensuring the sanity and quality of decisions made on our platform. With great power comes great responsibility :)

If you are more interested in the Research & Analytics work at SocialCops, click here to submit your email and we'll be in touch.
About the company:
GyanDhan is India's first education loan marketplace. We're on a mission to ensure that financial obstacles never stand in the way of someone's educational dreams. By leveraging data science, we provide awesome education loan options to students pursuing higher studies. Launched in Apr 2016, we have already received a phenomenal response with over INR 150 crores in sanctions (as of Sep 2017)! We count Sundaram Finance as an investor. We have been featured in TOI, Eenadu, Malayala Manorama and several startup newsletters such as YourStory, VCCircle etc. Want to be a part of the fight against inequality in education? Join us!

Job description:
Reporting to the CEO, the senior software developer will be responsible for leading the efforts on product development for GyanDhan. We are on the lookout for a candidate who will be part of the leadership team at GyanDhan in the future. The ideal candidate is not afraid to get his hands dirty, and is equally comfortable leading a team of developers when needed.

Responsibilities
- Build a truly online interface that reduces the frictions in the current loan application process
- Focus on customer experience, both for the borrower and the lender side of the platform
- Partner with the decision science lead to ensure seamless integration with data providers and lending partners

Desired profile / qualities
- Full-stack development experience
- Self-starter: someone who can develop independently
- Passionate about problem solving, with the ability to see the big picture
- Realizes the value of 80-20

Good to have:
- Prior startup experience / experience in setting up a tech team
- Prior experience at a bank / fin-tech / non-banking financial player in a technical role
- GitHub projects that we can look at
- Regular participation on Codechef, Topcoder etc. (If yes, please cite your rank and pass on a link to your profile!)

FYI, our tech stack comprises: Ruby on Rails, Postgres, AWS, jQuery. ESOPs will be over and above the base salary.
- Selecting features, building and optimizing models using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company's data with third-party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
- Adopting new research methodologies, including deep learning (CNNs, LSTMs), on projects
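On the "automated anomaly detection" item above: a common baseline flags points whose z-score exceeds a threshold. A minimal self-contained sketch (the threshold and readings below are illustrative, not from the posting):

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Return the indices of values whose |z-score| exceeds the threshold."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    if std == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

readings = [10, 11, 9, 10, 12, 10, 95, 11, 10]    # one obvious spike
print(zscore_anomalies(readings, threshold=2.5))  # -> [6]
```

Production systems typically replace the global mean/std with a rolling window so the detector adapts as the data drifts, which is what "constantly tracking performance" usually entails.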
About the job:
- You will architect, code and deploy ML models (from scratch) to predict credit risk.
- You will design, run, and analyze A/B and multivariate tests to test hypotheses aimed at optimizing user experience and portfolio risk.
- You will perform data exploration and build statistical models of user behavior to discover opportunities for decreasing user defaults. And you must truly be excited about this part.
- You'll use behavioral and social data to gain insights into how humans make financial choices.
- You will spend a lot of time building out predictive features from super sparse data sources.
- You'll continually acquire new data sources to develop a rich dataset that characterizes risk.
- You will code, drink, breathe and live Python, sklearn and pandas. Experience with these is good to have but not a necessity - as long as you're super comfortable in a language of your choice.

About you:
- You have strong computer science fundamentals.
- You have a strong understanding of ML algorithms.
- Ideally, you have 2+ years of experience using ML in an industry environment.
- You know how to run tests and understand their results from a statistical perspective.
- You love freedom and hate being micromanaged. You own products end to end.
- You have a strong desire to learn and use the latest machine learning algorithms.
- It will be great if you have one of the following to share: a Kaggle or a GitHub profile.
- Degree in statistics/quant/engineering from Tier-1 institutes.
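As a flavor of the "analyze A/B tests from a statistical perspective" work above: comparing conversion (or default) rates between two variants often reduces to a two-proportion z-test. A self-contained sketch (the counts below are made-up illustrations, not company data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for H0: both variants share one underlying rate.
    conv_* are conversion counts, n_* are sample sizes."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
# |z| > 1.96 would reject H0 at the 5% level (two-sided)
```

In practice a library routine (e.g. a proportions z-test from statsmodels) would replace the hand-rolled formula, but the statistic it computes is the same.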
About Us:
Paysense is a Nexus- and Jungle-funded startup focussed on improving the financial lives of the Indian consumer. In an exclusive partnership with India Infoline (IIFL, ~$600 mn net worth), we are reinventing credit and access to it for the next billion Indian consumers. Traditional banks and lending institutions in India are serving less than 5% of all credit requirements. Primary sources of data on consumer spending and credit behaviour either do not exist or are fundamentally broken. We saw this as a challenging opportunity to build a large-scale credit system from scratch. Our team comprises engineers, data scientists and product leaders from Stanford, IIT and UCLA, and we are looking for mission-driven teammates who are ambitious, work hard, are super smart and yet humble. If this sounds like you, we'd love to have a chat.

About the job:
- Backend development at Paysense means writing web systems that receive and process millions of requests per minute.
- You will write code that stores and analyzes large-scale data in milliseconds: architect and develop data processing and warehouse systems.
- Develop internal and external platforms that power the next generation of credit lending systems.
- Design tech to build the graph/network connecting users.
- Write systems that are integrated with all leading banks, bureaus and financial institutions.

About you:
- 2+ years of experience with one of the following is a must: Python/Java/Node.js.
- Strong experience with Postgres or another SQL RDBMS.
- Working knowledge of AWS or another cloud service.
- Experience working in the finance domain is a plus.
- Experience with Python, Django, or data science toolkits (pandas, scikit, SparkML) is a plus.
- Strong CS fundamentals - algorithms, database and systems design.
- Degree in CS/Maths from tier-1 engineering institutes.
JOB DESCRIPTION
We're looking for a Head of Machine Learning (3+ years experience) for our company - Spotmentor Technologies. Right now our Technology team has 5 members; this is a head team member role and carries significant equity with it. We need someone who can lead the Machine Learning function with both vision and hands-on work and is excited to use this area to develop B2B products for enterprise productivity.

RESPONSIBILITIES
• Collaborate with cross-functional team members to develop software libraries, tools, and methodologies as critical components of our computation platforms.
• Also responsible for software profiling, performance tuning and analysis, and other general software engineering tasks.
• Use independent judgment to take existing code, understand its function, and change/enhance it as needed.
• Work as a team leader rather than a member.

REQUIREMENTS
• Proficient in Python with sound knowledge of machine learning libraries, namely Scikit-learn, NumPy, Pandas, NLTK etc.
• Experience with deep learning tools like TensorFlow, Keras, PyTorch etc. and integration using open-source learning platforms is required.
• Prior experience in building a fully functional machine learning algorithm for text analysis and multi-class classification with promising results.
• Expert data scientist with professional experience in text classification, text analytics, regression and other machine learning algorithms.
• Solid grasp of the mathematical principles behind machine learning algorithms.
• Proficient in using version control tools (Git, Mercurial etc.).
• Prior experience of using big data technologies like Hadoop, Spark etc.
• Semantic Web experience is a big plus.
• Should be from tier 1 colleges (IITs / NITs and BITS).
We're looking for a Senior NLP Engineer (2+ years experience) for our company - Spotmentor Technologies. Right now our Technology team has 5 members; this role is for an early team member and carries significant ESOPs with it. We need someone who can lead the NLP function with both vision and hands-on work and is excited to use this area to develop B2B products for enterprise productivity.

RESPONSIBILITIES
• Collaborate with cross-functional team members to develop software libraries, tools, and methodologies as critical components of our computation platforms.
• Use independent judgment to take existing code, understand its function, and change/enhance it as needed.
• Work as a team leader rather than a member.
• Capable of adding valuable inputs to existing algorithms by reading NLP research papers.

REQUIREMENTS
• Proficient in Python with sound knowledge of data science libraries, namely NumPy, Pandas, NLTK / spaCy etc.
• Prior experience in building a fully functional NLP-based machine learning model with good results (NER/Classification/Topic Modeling).
• Expert data scientist with professional experience in text classification, feature engineering, using embeddings, part-of-speech tagging etc.
• Knowledge of writing database queries (SQL/NoSQL).
• Some background in information retrieval systems is a big plus.
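On the "feature engineering" requirement above: a classic starting point for text classification features is TF-IDF weighting, where each term's count is scaled down by how many documents contain it. A minimal sketch over toy pre-tokenized documents (the example corpus is illustrative, not from the posting):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document TF-IDF weights: tf(term) * log(N / df(term)),
    where df counts the documents containing the term."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["loan", "credit", "bank"], ["credit", "score"], ["bank", "loan", "loan"]]
w = tf_idf(docs)
# "score" appears in only one document, so it gets the largest idf boost
```

In real pipelines this is usually delegated to scikit-learn's `TfidfVectorizer` (which also handles tokenization, normalization and smoothing), but the underlying weighting is the one sketched here.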
We are looking for a Machine Learning Developer who possesses a passion for machine technology & big data and will work with our next-generation Universal IoT platform.

Responsibilities:
• Design and build machine learning systems that learn from, predict on and analyze data.
• Build and enhance tools to mine data at scale.
• Enable the integration of Machine Learning models in the Chariot IoT Platform.
• Ensure the scalability of Machine Learning analytics across millions of networked sensors.
• Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications.
• Develop generalizable APIs so other engineers can use our work without needing to be a machine learning expert.
Precily AI: Automatic summarization - shortening a business document or book with our AI. Create a summary of the major points of the original document. The AI can produce a coherent summary taking into account variables such as length, writing style, and syntax. We're also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and Neural Networks in processing data to provide solutions for various industries such as Enterprise, Healthcare and Legal.
Are you passionate about discovering insights hidden in vast amounts of data? Does your curiosity lead you to play with data to decipher an observed pattern? Do you constantly think about how machine learning, statistical modelling and optimization methodologies can help create a top-notch consumer experience? If yes, the Data Science team at Shuttl is a perfect fit for you.

Key Deliverables:
- Identifying and modeling the key drivers of consumer behavior using internal and external data.
- Developing prediction models to deliver key business outcomes. Problems include demand forecasting, demand-driven supply planning etc.
- Developing optimized transportation network designs for Shuttl operations to drive strategic growth.
- Developing algorithms to predict static and dynamic schedules of Shuttl arrival times.
- Presenting clear and actionable recommendations to Product and Engineering leadership.
- Relentless pursuit of business goals using the power of data.

Requirements:
- 4+ years of experience.
- Ability to crisply define a business problem and translate it into an analytical/empirical problem.
- Experience in building ML models or statistical models in R or Python.
- Knowledge of the underlying mathematical foundations of statistical inference / forecasting / optimization.
- Prior experience with network optimization problems is preferred.
- Proficiency in Python, CPLEX, SQL, NoSQL, Cassandra.
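As a taste of the demand-forecasting work mentioned above: one of the simplest baselines is single exponential smoothing, where each forecast blends the latest observation with the previous forecast. A self-contained sketch (the demand numbers and smoothing factor are illustrative, not Shuttl data):

```python
def exponential_smoothing(series, alpha=0.3):
    """Single exponential smoothing: s[t] = alpha*y[t] + (1-alpha)*s[t-1],
    seeded with the first observation."""
    smoothed = [series[0]]
    for y in series[1:]:
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [120, 130, 125, 140, 150]            # e.g. daily ride bookings
f = exponential_smoothing(demand, alpha=0.5)
# the next-period forecast is the final smoothed value, f[-1]
```

Higher `alpha` tracks recent demand more aggressively; lower `alpha` damps noise. Production forecasting would layer trend/seasonality handling (e.g. Holt-Winters) on the same idea.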
FarmGuide is a data-driven tech startup aiming to digitize the periodic processes in place and bring information symmetry to the agriculture supply chain through transparent, dynamic & interactive software solutions. We, at FarmGuide (https://angel.co/farmguide), help the Government in relevant and efficient policy making by ensuring a seamless flow of information between stakeholders.

Job Description:
We are looking for individuals who want to help us design cutting-edge scalable products to meet our rapidly growing business. We are building out the data science team and looking to hire across levels.
- Solving complex problems in the agri-tech sector, which are long-standing open problems at the national level.
- Applying computer vision techniques to satellite imagery to deduce artefacts of interest.
- Applying various machine learning techniques to digitize the existing physical corpus of knowledge in the sector.

Key Responsibilities:
- Develop computer vision algorithms for production use on satellite and aerial imagery.
- Implement models and data pipelines to analyse terabytes of data.
- Deploy built models in a production environment.
- Develop tools to assess algorithm accuracy.
- Implement algorithms at scale in the commercial cloud.

Skills Required:
- B.Tech/M.Tech in CS or other related fields such as EE, or MCA, from IIT/NIT/BITS (but not compulsory).
- Demonstrable interest in Machine Learning and Computer Vision, such as coursework, open-source contributions, etc.
- Experience with digital image processing techniques.
- Familiarity/experience with geospatial, planetary, or astronomical datasets is valuable.
- Experience in writing algorithms to manipulate geospatial data.
- Hands-on knowledge of GDAL or open-source GIS tools is a plus.
- Familiarity with cloud systems (AWS/Google Cloud) and cloud infrastructure is a plus.
- Experience with high-performance or large-scale computing infrastructure might be helpful.
- Coding ability in R or Python.
- Self-directed team player who thrives in a continually changing environment.

What is on offer:
- High-impact role in a young start-up with colleagues from IITs and other Tier 1 colleges.
- Chance to work on the cutting edge of ML (yes, we do train neural nets on GPUs).
- Lots of freedom in terms of the work you do and how you do it.
- Flexible timings.
- Best start-up salary in the industry, with additional tax benefits.
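As an example of the satellite-imagery "artefacts of interest" mentioned in the FarmGuide posting: one of the simplest crop-health signals is a vegetation index such as NDVI, computed per pixel from the near-infrared and red bands. A minimal sketch over toy 2x2 band rasters (in practice GDAL would supply the real band arrays; the reflectance values here are made up):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index per pixel:
    (NIR - Red) / (NIR + Red). Values near 1 suggest dense
    vegetation; values near 0 suggest bare soil."""
    return [
        [(n - r) / (n + r) if (n + r) else 0.0
         for n, r in zip(nir_row, red_row)]
        for nir_row, red_row in zip(nir, red)
    ]

# toy 2x2 reflectance rasters for the NIR and red bands
nir = [[0.8, 0.6], [0.5, 0.3]]
red = [[0.2, 0.3], [0.5, 0.3]]
v = ndvi(nir, red)
```

With GDAL the two input rasters would come from `band.ReadAsArray()` on the relevant bands of a GeoTIFF, and the arithmetic would be vectorized with NumPy rather than nested lists.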
1. The candidate will work on building RNN-based speech recognition systems.
2. The candidate will work on building an NLP-based solution suite for speech data analytics.
3. Ad-hoc Python coding will be required on a regular basis to support the application and data science teams.
Drive the creation of new models and capabilities that will leapfrog the traditional bureau-based modelling done at most major financial institutions, leveraging a wide array of data from both traditional and non-traditional sources.
● Acquire a deep understanding of the MSME lending domain.
● Perform data integration, manipulation, and querying for purposes of reporting and more sophisticated analytics.
● Engage in regular problem-solving sessions with the overall leadership team to present findings and refine the analytics plan.
● Manage data analysis to develop fact-based recommendations for innovation projects.
● Mine Big Data and other unstructured data to tap untouched data sources and deliver insight into new and emerging solutions.
● Remain current on new developments in data analytics, Big Data, predictive analytics, machine learning and technology.
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About My Company:
SIMPLILEARN is a company which has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider providing PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.

Onboarding Process
• An updated CV needs to be sent to my email id, with relevant certificate copies.
• Sample e-learning access will be shared with a 15-day trial post your registration on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation - duration 15 to 20 minutes.
• Commercial discussion.
• We will register you to our ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.

Payment Process:
• Once a workshop / the last day of training for the batch is completed, you have to share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy; the 15 days are counted from the date the invoice is received. Please share your updated CV to proceed to the next step of the on-boarding process.
Transporter is an AI-enabled location stack that helps companies improve their commerce, engagement or operations through their mobile apps for the next generation of online commerce.