The role is for a large insurance engagement; the individual would work with the client to identify, comprehend and solve problems using statistical approaches in RStudio. The role involves data analytics and statistical modelling. Tools involved: R and SQL.
We are building a complex Big Data product that aims to revolutionize sales prospecting by crunching billions of data points and applying sophisticated machine learning and AI algorithms. We are therefore looking for a Data Governance Analyst to ensure that the quality of this product stays top-notch and world class. If you love working with data, have an eye for detail and a strong adherence to quality, we'd love to hear from you. The Data Governance Analyst is a senior-level position; the analyst will work under the general direction of the Chief Data Scientist and senior staff in the Data Management team. The primary responsibility is to treat data as an asset and become the expert source for questions on data standards and policy.
RESPONSIBILITIES:
- Manage the creation, deployment, and maturity of data governance processes and technology, including master data, metadata, and data quality initiatives
- Identify opportunities to ensure transparent, high-quality data across multiple sources and platforms
- Review, clean and add business and formatting rules to every taxonomy/hierarchy in the product database in support of long-term data governance
- Develop objects within data quality tools for cleansing, de-duplication, and other data preparation
- Collaborate with various teams to standardize data and ensure adherence to data ingestion and governance standards
- Conduct root cause analysis and propose improvement solutions
- Leverage subject matter expertise to ensure data products are understood by business users
This position requires proficient analytical and programming capabilities, experience defining requirements and developing and/or maintaining computer applications/systems, and the ability to meet business needs within deadlines.
REQUIREMENTS:
- 3-6 years in data analysis, data quality and data governance
- Proficiency working with data (profiling, cleansing and transforming) in multiple databases
- Experience in setting up and managing master data, metadata, and overall data quality
- Strong experience of data analysis and data management using a variety of open source tools
- Experience navigating unstructured, complex data environments
- Detail-oriented and results-driven, with the ability to manage multiple requirements in a dynamically changing environment
- Experience working with a complex big data product would be an added bonus
- Ability to pick up new software skills in short time frames
- Self-motivated and able to handle tasks with minimal supervision or questions
- Ability to write technical documents such as requirement specs or data standards
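For candidates wondering what the cleansing and de-duplication responsibilities above look like in practice, here is a minimal sketch. The records and the normalization rules are invented for illustration; real data-quality tooling applies far richer rule sets.

```python
# Illustrative sketch of rule-based cleansing and de-duplication.
# The records and normalization rules (case-folding, whitespace
# collapsing) are invented for demonstration purposes.
import re

def normalize(record):
    """Apply simple formatting rules and return a comparable key."""
    name = re.sub(r"\s+", " ", record["name"]).strip().lower()
    email = record["email"].strip().lower()
    return (name, email)

def deduplicate(records):
    """Keep the first occurrence of each normalized (name, email) key."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

rows = [
    {"name": "Acme  Corp", "email": "info@acme.com"},
    {"name": "acme corp",  "email": "INFO@ACME.COM"},  # duplicate after cleansing
    {"name": "Beta LLC",   "email": "hello@beta.io"},
]
clean = deduplicate(rows)
print(len(clean))  # → 2
```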
• Identifying and modeling the key drivers of consumer behavior using internal and external data
• Developing prediction models to deliver key business outcomes
• Working on problems such as demand forecasting and demand-driven supply planning
• Developing an optimized transportation network design for Shuttl operations to drive strategic growth
• Developing algorithms to predict static and dynamic schedules of Shuttl arrival times
• Presenting clear and actionable recommendations to Products and Engineering leadership
• Relentless pursuit of business goals using the power of data
1. Product:
- Identify important questions with the product team and answer them with data
- Create a culture of measurement and metrics across the company by developing key success metrics for the product
- Work with company leaders to understand the product roadmap and influence/improve it using quantitative research
- Be our source of bitter truth against all product assumptions
2. Sales:
- Work with the sales team to automate internal dashboards which help company leaders make strategic decisions
- Develop reports/research based on our data that help reduce sales time
- Work with our product team to build and maintain integrations with other business systems
SmartNomad is a revolutionary product that connects the world of travel using best-in-class data science. SmartNomad understands the traveler and, using advanced analytical methods, creates one-click-bookable intelligent itineraries with flights, hotels and activities tailored specifically to the traveler's preferences. Using cutting-edge machine learning, SmartNomad serves as the traveler's go-to virtual assistant during the trip. A truly superior customer experience lies at the heart of SmartNomad: it does all the heavy lifting so that nomads can explore the world carefree. We are looking for smart folks who can work with us in solving end-to-end data science problems. The work involves a lot of research, tapping into cutting-edge research and solving problems like extracting insights from text and building recommendations based on user preferences and their historical behaviour.
Precily AI: automatic summarization, i.e. shortening a business document or book with our AI to create a summary of the major points of the original document. The AI can produce a coherent summary that takes into account variables such as length, writing style, and syntax. We are also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks in processing the data to provide solutions for industries such as Enterprise, Healthcare, and Legal.
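As a rough illustration of the summarization task described above, here is a toy frequency-based extractive summarizer. This is a generic textbook approach with an invented example document, not Precily's actual model (the posting mentions NLP and neural networks).

```python
# Toy frequency-based extractive summarizer: score each sentence by the
# average frequency of its words and keep the k top-scoring sentences in
# their original order. A generic textbook sketch, not Precily's model.
import re
from collections import Counter

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"\w+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:k])
    # Emit the selected sentences in their original document order.
    return " ".join(s for s in sentences if s in top)

doc = ("Cats sleep a lot. Cats eat fish. "
      "Dogs bark loudly at night sometimes.")
print(summarize(doc, k=2))
```

Real summarizers replace the frequency score with learned models, but the select-and-reorder skeleton is the same.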
Being a small team, this role is for a sound developer who is in love with data: whether it's working with huge amounts of data, using data to drive insights, or finding innovative ways to pull data. Let us know if you'd like to explore this role with us further.
Responsibilities (experience: 3-5 years)
- Build a strong and scalable crawler system for leveraging external user and content data sources from Facebook, YouTube and other internet products or services; get top trending keywords and topics from social media.
- Design and build the initial version of a real-time analytics product, independently, using machine learning models to recommend video content in real time to 10M+ user profiles.
- Architect and build Big Data infrastructure using Java, Kafka, Storm, Hadoop, Spark and other related frameworks; experience with Elasticsearch is a plus.
- Excellent analytical, research and problem-solving skills; in-depth knowledge of data structures.
Desired Skills and Experience
- B.S./M.S. degree in computer science, mathematics, statistics or a similar quantitative field from a good college
- 3+ years of work experience in a relevant field (Data Engineer, R&D Engineer, etc.)
- Experience in Machine Learning and prediction and recommendation techniques
- Experience with Hadoop/MapReduce/Elastic Stack (ELK) and Big Data querying tools such as Pig, Hive, and Impala
- Proficiency in a major programming language (e.g. Java/C/Scala) and/or a scripting language (Python)
- Experience with one or more NoSQL databases, such as MongoDB, Cassandra, HBase, Hive, Vertica, Elasticsearch
- Experience with cloud solutions/AWS; strong knowledge of Linux and Apache
- Experience with MapReduce, Spark or EMR
- Experience building reports and/or data visualizations
- Strong communication skills and the ability to discuss the product with PMs and business owners
Key Skills:
· Ability to balance the demands of multiple clients concurrently.
· Ability to convey complex data science concepts to a non-technical audience.
· Proficiency in MySQL or similar relational database software.
· Extensive Microsoft Excel and Access knowledge and skills.
· Proficient in STATA or R, and Python.
· Familiarity with R or Pandas for data analysis.
· Able to craft complex visualizations of data and system architecture; familiar with the creation of operational data models (program data ecosystem mapping and interpretation).
Experience:
· 3-7 years' experience in an analytics-intensive position: data scientist, statistician, economist, or comparable.
· 3-7 years' experience with data and/or enterprise software implementation in the field: benchmarking of technology and systems, data management and architecture, program implementation and cycle analysis.
· Experience performing data management exercises inside large organizations (bonus points for international public sector entities).
· Prior experience with complex data set merging, cleaning, and aggregation.
Skills Required: SQL, Python Data Science Stack (strongly preferred), Machine Learning and Statistics, Data Visualization, A/B Testing, bandit problems, recommendation systems, reinforcement learning
Roles & Responsibilities
· Ability to work independently, applying technical skills to solve a business objective
· Exploratory data analysis on large data sets, using visualization and basic statistical techniques to develop intuitions and identify key variables in data science projects
· Data extraction and joining of data from multiple sources; SQL is a must; Python preferred
· Strong familiarity and experience with Machine Learning algorithms, ideally via the Python Data Science stack (e.g. sklearn): regression, clustering, SVMs, decision trees, DNNs, recommendation algorithms, etc.
· Specific experience sought: real-world A/B testing, multi-armed bandit algorithms, causal inference
· Ability to read and apply, with guidance, cutting-edge Machine Learning research papers to solve business problems
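Since the posting specifically asks for multi-armed bandit experience, here is a minimal epsilon-greedy bandit sketch. The per-arm reward probabilities are invented illustration values, not anything from the role.

```python
# Minimal epsilon-greedy multi-armed bandit. The reward probabilities are
# invented for illustration; with enough steps, the estimated value of the
# best arm should rise above the others.
import random

def epsilon_greedy(true_probs, steps=5000, eps=0.1, seed=42):
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms      # pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:                         # explore
            arm = rng.randrange(n_arms)
        else:                                          # exploit best estimate
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return counts, values

counts, values = epsilon_greedy([0.05, 0.12, 0.30])
print(values)  # estimates should roughly track the true probabilities
```

In a real A/B or ranking setting, each "arm" would be a product variant and the reward a conversion signal; more sophisticated policies (UCB, Thompson sampling) replace the epsilon-greedy choice.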
Do apply if any of this sounds familiar!
o You have expertise in NLP, Machine Learning, Information Retrieval and Data Mining.
o You have experience building systems based on machine learning and/or deep learning methods.
o You have expertise in graphical models such as HMMs and CRFs.
o You are familiar with learning to rank, matrix factorization and recommendation systems.
o You are familiar with the latest data science trends, tools and packages.
o You have strong technical and programming skills, and you are familiar with relevant technologies and languages (e.g. Python, Java, Scala).
o You have knowledge of Lucene-based search engines like Elasticsearch and Solr, and NoSQL DBs like Neo4j and MongoDB.
o You are really smart and you have some way of proving it (e.g. you hold an MS/M.Tech or PhD in Computer Science, Machine Learning, Mathematics, Statistics or a related field).
o There is at least one project on your resume that you are extremely proud to present.
o You have at least 4 years' experience driving projects, tackling roadblocks and navigating solutions/projects through to completion.
o Execution: ability to manage your own time and work effectively with others on projects.
o Communication: excellent verbal and written communication skills, and the ability to communicate technical topics to non-technical individuals.
Good to have:
o Experience in a data-driven environment: leveraging analytics and large amounts of (streaming) data to drive significant business impact.
o Knowledge of MapReduce, Hadoop, Spark, etc.
o Experience creating compelling data visualizations.
SquadRun is a profitable SaaS startup that leverages the best of machines and humans to automate digital operations/business processes for enterprises. We have offices in Noida and San Francisco; this position is primarily based in Noida. We combine customised workflows, workflow automation and a distributed human talent pool of stay-at-home moms and college students, delivering guaranteed SLAs of high-quality output, speed and scale at great cost efficiency. Please see this overview deck for more context. In every business, there are digital operations/business processes that need to be executed. We are disrupting the business process outsourcing industry and, in the process, solving some of the most exciting data science problems. You would need to apply the best machine learning techniques to various aspects of business process automation and help businesses perform operational work with 10X the efficiency (cost, speed, quality) of existing alternatives: for example, catalog management for commerce, content moderation for social businesses, training for AI algorithms, customer onboarding for banking and insurance, etc. One of the biggest challenges lies in shaping a product with flexible modules that can come together to scale most major workflows. We're looking for a growth-hungry early member who would like to apply data science to a real product right from the ground up. Roles & Responsibilities You would work closely with the Data Science Lead on building state-of-the-art work automation pipelines for our platform product: workforce management to acquire, qualify, match, train and verify / quality control; a workflow engine to replicate each business process smartly with different configurations; work automation (SquadAI) to help contractors by automating parts of the workflows; and humans-in-the-loop machine learning to automate cognitive decisions once we have enough quality training data.
Facilitate data-driven experiments to derive product insights. Participate in research on artificial intelligence and machine learning applications from time to time. Background & Key traits
- 1+ years' experience working with data-intensive problems at scale, from inception to business impact.
- Experience with modeling techniques such as generalized linear models, cluster analysis, random forests, boosting, decision trees, time series, neural networks, deep learning, etc.
- Experience using programming languages (Python/R/Scala) and SQL in analytical contexts.
- Experience with distributed machine learning and computing frameworks would be a plus (Spark, Mahout or equivalent).
Why should you consider this seriously? We're one of the few applications of AI/data science that actually has a massive market and a business model to create a long-term valuable business rather than a short-term acquisition play! In the last one year, we have built a product, solved problems for some of the largest brands in the world and tested the platform at scale (processed 50+ million units of data). Our customers include Uber, Sephora, Teespring, Snapdeal, and the Tata and Flipkart Groups, amongst others, and we have plans to grow 10x in the next 1.5 years. We are a well-balanced team of experienced entrepreneurs and are backed by top investors across India and Silicon Valley. The platform empowers college students, stay-at-home mothers and grey-collared workers with a stable source of income for working on their smartphone (Android App Link). Our contractors earn 3x what a typical back-office BPO employee makes. For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! Compensation INR 8-12L cash + ESOP up to 4L. To Apply Download our app, go through our website, social media pages, LinkedIn profiles, blogs, etc.
Read about our hiring framework here (must read!). Mail your cover letter and resume to firstname.lastname@example.org with the subject "Jr. Data Scientist @ SquadRun Inc". In your cover letter, tell us why you are a good fit for this role!
SquadRun is a profitable SaaS startup that leverages the best of machines and humans to automate digital operations/business processes for enterprises. We have offices in Noida and San Francisco; this position is primarily based in Noida, with some travel. We combine customised workflows, workflow automation and a distributed human talent pool of stay-at-home moms and college students, delivering guaranteed SLAs of high-quality output, speed and scale at great cost efficiency. In every business, there are digital operations/business processes that need to be executed. We are disrupting the business process outsourcing industry and, in the process, solving some of the most exciting data science problems. We're looking for a senior candidate to own and build our data science and platform product roadmap right from the ground up. You would need to apply the best machine learning techniques to various aspects of business process automation and help businesses perform operational work with 10X the efficiency (cost, speed, quality) of existing alternatives: for example, catalog management for commerce, content moderation for social businesses, training for AI algorithms, customer onboarding for banking and insurance, etc. One of the biggest challenges lies in shaping a product with flexible modules that can come together to scale most major workflows. We are looking for an early member who is not only a data science wizard but also has strong product chops. Sample Workflow: Roles & Responsibilities You would work closely on building state-of-the-art work automation pipelines for our platform product: workforce management to acquire, qualify, match, train and verify / quality control; a workflow engine to replicate each business process smartly with different configurations; work automation (SquadAI) to help contractors by automating parts of the workflows; and humans-in-the-loop machine learning to automate cognitive decisions once we have enough quality training data.
- Architect and build the work automation product layer from scratch, working closely with the platform engineering team.
- Define the long-term data science platform product roadmap, with a focus on a strong platform-layer foundation.
- Collaborate closely with the engineering, business operations and product teams, leveraging your expertise in devising appropriate measurements and metrics, designing randomized controlled experiments, architecting business intelligence tooling and tackling hard, open-ended problems.
- Facilitate data-driven experiments to derive product insights.
- Identify new opportunities to apply data science to different parts of our platform.
- Build a sharp data science team and culture.
Background & Key traits
- 4+ years' experience working with data-intensive problems at scale, from inception to business impact.
- Experience with modeling techniques such as generalized linear models, cluster analysis, random forests, boosting, decision trees, time series, neural networks and deep learning.
- Strong communication and documentation skills.
- Experience using programming languages (Python/R/Scala) and SQL in analytical contexts.
- Experience dealing with large datasets.
- Advanced degree in a relevant field (preferred).
- Experience with distributed machine learning and computing frameworks would be a plus (Spark, Mahout or equivalent).
Why should you consider this seriously? We're one of the few applications of AI/data science that actually has a massive market and a business model to create a long-term valuable business rather than a short-term acquisition play! In the last one year, we have built a product, solved problems for some of the largest brands in the world and tested the platform at scale (processed 50+ million units of data). Our customers include Uber, Sephora, Teespring, Snapdeal, and the Tata and Flipkart Groups, amongst others, and we have plans to grow 10x in the next 1.5 years.
We are a well-balanced team of experienced entrepreneurs and are backed by top investors across India and Silicon Valley. The platform empowers college students, stay-at-home mothers and grey-collared workers with a stable source of income for working on their smartphone (Android App Link). Our contractors earn 3x what a typical back-office BPO employee makes. For us, this is truly impactful. Every day, we see success stories such as this one - how a single mother is sustaining herself. Empowering our contractors to be financially independent is a strong part of our vision! Compensation INR 20-28L cash + ESOP up to 30L. To Apply Download our app, go through our website, social media pages, LinkedIn profiles, blogs, etc. Read about our hiring framework here (must read!). Mail your cover letter and resume to email@example.com with the subject "Data Science & Product Lead @ SquadRun". In your cover letter, tell us why you are a good fit for this role!
To introduce myself, I head Global Faculty Acquisition for Simplilearn. About My Company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com. If you're interested in teaching, interacting, sharing real-life experiences and have a passion to transform careers, please join hands with us.
Onboarding Process
• Send your updated CV to my email id, with copies of relevant certificates.
• Sample e-learning access will be shared, with a 15-day trial after your registration on our website.
• My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation (duration 15 to 20 minutes).
• Commercial discussion.
• We will register you for one of our ongoing online sessions to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.
Payment Process:
• Once a workshop (or the last day of training for the batch) is completed, you share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy, counted from the date the invoice is received. Please share your updated CV to proceed to the next step of the onboarding process.
Factspan is a pure-play analytics company. We provide the expertise and services that organizations need to establish their own analytics centre of excellence. We understand the complexities of running a business. Our dedicated team of data scientists and consultants partners with clients to provide the analytics tools and solutions organizations need every day to take data-driven decisions. Our management leaders are former Directors and Sr. Managers from premier organizations like the University of Michigan, Fidelity, Amazon and Hotwire. With offices in Seattle, USA and Bangalore, India, we use a global delivery model to service our clients. Our clients include Fortune 500 companies in the Retail, Financial Services, Hospitality and Technology sectors. For more details and updates, please refer to: www.factspan.com
Job Role
- Develop and refine algorithms for machine learning from large datasets.
- Write offline as well as efficient runtime programs for meaning extraction and real-time response systems.
- Develop and improve ad targeting based on criteria such as demographics, location, user interests and many more.
- Design and develop techniques for handling real-time budget and campaign updates.
- Be open to learning new technologies.
- Collaborate with team members in building products.
Skills Required
- MS/PhD in Computer Science or another highly quantitative field
- Minimum 8-10 years of hands-on experience with different machine-learning techniques
- Strong expertise in big data processing (you should be familiar with a combination of Kafka, Storm, Logstash, Elasticsearch, Hadoop, Spark)
- Strong coding skills in at least one object-oriented programming language (e.g. Java, Python)
- Strong problem-solving and analytical ability
- 3+ years of prior experience in advertising technology is preferred
We are looking for a Data Scientist at India's fastest growing coupons and deals website, to harness the power of analytics to drive the understanding, progression and user engagement of our product. We currently have 60L uniques, growing every month. The ideal candidate will have a background in Maths, Statistics, Computer Science and/or related quantitative fields, experience working with large data stores and statistical modeling, and some experience building software.
Responsibilities:
- Apply your expertise in quantitative analysis, data mining, and the presentation of data to see beyond numbers and understand how users interact with products
- Partner with the Product team to solve problems, improve the product based on data, and identify trends and opportunities
- Design and implement reporting and metrics that track and monitor the performance of our products, the quality of our data, and the overall health of the business
- Build data sets to empower operational and exploratory analysis
- Automate analyses
- Design and evaluate experiments monitoring product metrics, and understand the root causes of changes in metrics
- Build and analyze dashboards and reports
- Influence product teams through presentation of findings
- Communicate the state of the business, experiment results, etc. to product and management teams
- Understand ecosystems, user behaviors, and long-term trends
- Identify levers to move key metrics
- Evaluate and define metrics
- Build models of user behaviors for analysis and for powering production systems
- Identify and correct any biases or errors in our data sets
If you're looking for your most challenging role yet, this is truly the place for you. Your new home - http://blog.grabon.in/grabon-new-office/ Visit us at - http://www.grabon.in/
Transporter is an AI-enabled location stack that helps companies improve their commerce, engagement or operations through their mobile apps, for the next generation of online commerce.
JSM is a data sciences company, founded in 2009 with the purpose of helping clients make data-driven decisions. JSM specifically focuses on unstructured data. It is estimated that 90% of all data generated is unstructured, and it remains mostly under-utilized for actionable insights, largely due to the high costs involved in speedily mining such large volumes of data. JSM is committed to creating cost-effective, innovative solutions in pursuit of highly actionable, easy-to-consume insights with a clearly defined ROI.
FlyNava Technologies is a start-up organization whose vision is to create the finest airline software for distinct competitive advantages in revenue generation and cost management. The software products have been designed and created by veterans of the airline and airline IT industry to meet the needs of this special customer segment. The software will take an innovative approach to age-old practices of pricing, hedging and aircraft induction, and will be path-breaking to use, encouraging users to rely and depend on its capabilities. We will leverage our competitive edge by incorporating new technology, big data models, operations research and predictive analytics into the software products, as a means of creating interest and creativity while using the software. This interest and creativity will increase potential revenues or reduce costs considerably, thereby creating a distinct competitive differentiation. FlyNava is convinced that when airline users create that differentiation easily, their alignment to the products will be self-motivated rather than mandated. A high level of competitive advantage will also flow from the following: all products, solutions and services will be copyrighted, and FlyNava, as sole owner, will benefit from high IPR value, including its base thesis/research; existing product companies are investing in other core areas, while our business areas remain predominantly manual processes; solutions are based on master's theses, which need 2-3 years to complete and more time to make relevant for software development; and experts in these areas are few and far between. Responsible for collecting, cataloguing and filtering data, and benchmarking solutions.
- Contribute to model-related Data Analytics and Reporting.
- Contribute to Secured Software Release activities.
Education & Experience:
- B.E/B.Tech or M.Tech/MCA in Computer Science / Information Science / Electronics & Communication
- 3-6 years of experience
Must Have:
- Strong in data analytics via Pyomo (for optimization), Scikit-learn (for small-data ML algorithms) and MLlib (Apache Spark big-data ML algorithms)
- Strong in representing metrics and reports via JSON
- Strong in scripting with Python
- Familiar with machine learning and pattern recognition algorithms
- Familiar with the Software Development Life Cycle
- Effective interpersonal skills
Good to have:
- Social analytics
- Big data
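As a small illustration of two of the skills this posting asks for, small-data modeling and reporting metrics via JSON, here is a stdlib-only sketch. The data points are invented, and a hand-rolled ordinary least squares fit stands in for a Scikit-learn workflow.

```python
# Stdlib-only sketch: fit a small-data model (ordinary least squares,
# standing in for a Scikit-learn workflow) and report its metrics as JSON.
# The data points below are invented for illustration.
import json

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x
a, b = fit_line(xs, ys)
mse = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(ys)
report = json.dumps({"slope": round(a, 3), "intercept": round(b, 3),
                     "mse": round(mse, 5)})
print(report)
```

In practice the fit would come from Scikit-learn or MLlib, but the shape of the deliverable (model plus a JSON metrics report) is the same.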
We are a startup looking to redefine how education is perceived, and as such we believe in a strong independent motivation to learn new things; we look for the same in our employees. The job of the senior developer will bleed into all aspects of the product cycle, not limited to his/her area of expertise, and he/she will be expected to adapt and learn rapidly on very short iterations of product cycles. Since we are in the early stages, we will offer exciting ESOPs to our early employees as well. Cover letters are so old-fashioned. Show us who you are with a cover-letter video. Upload your video to YouTube or Vimeo. We look forward to meeting you.