Woovly, an early-stage startup, is about awakening the interests, hobbies, and bucket lists of individuals. We at Woovly believe that every individual has a passion for some activity which, when pursued and accomplished, gives them immense happiness. Woovly connects all such individuals based on their common passions. We are in the final stage of building the online platform that enables social networking based on common interests.
VIA.com is the biggest B2B travel company in Asia. We are looking for someone with good experience in data analysis in the online retail domain. SQL, Excel, and Google Analytics skills are a must.
Do apply if any of this sounds familiar!
o You have expertise in NLP, machine learning, information retrieval, and data mining.
o You have experience building systems based on machine learning and/or deep learning methods.
o You have expertise in graphical models such as HMMs and CRFs.
o You are familiar with learning to rank, matrix factorization, and recommendation systems.
o You are familiar with the latest data science trends, tools, and packages.
o You have strong technical and programming skills and are familiar with relevant technologies and languages (e.g. Python, Java, Scala).
o You have knowledge of Lucene-based search engines such as Elasticsearch and Solr, and of NoSQL DBs such as Neo4j and MongoDB.
o You are really smart and have some way of proving it (e.g. you hold an MS/M.Tech or PhD in Computer Science, Machine Learning, Mathematics, Statistics, or a related field).
o There is at least one project on your resume that you are extremely proud to present.
o You have at least 4 years' experience driving projects, tackling roadblocks, and navigating solutions/projects through to completion.
o Execution - ability to manage your own time and work effectively with others on projects.
o Communication - excellent verbal and written communication skills, with the ability to communicate technical topics to non-technical individuals.
Good to have:
o Experience in a data-driven environment: leveraging analytics and large amounts of (streaming) data to drive significant business impact.
o Knowledge of MapReduce, Hadoop, Spark, etc.
o Experience in creating compelling data visualizations.
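The matrix factorization and recommendation-system skills listed above can be sketched minimally as follows. This is an illustrative example only, not part of the posting: the toy ratings matrix, latent dimension `k`, and SGD hyperparameters are all invented for the demonstration.

```python
import numpy as np

# Toy user-item rating matrix; 0 marks an unobserved entry (values are invented).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Factor R ~= U @ V.T over the observed entries via regularized SGD."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    observed = np.argwhere(R > 0)
    for _ in range(steps):
        for i, j in observed:
            err = R[i, j] - U[i] @ V[j]
            u_old = U[i].copy()                      # update with pre-step value
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * u_old - reg * V[j])
    return U, V

U, V = factorize(R)
pred = U @ V.T
# Scores in `pred` at unobserved positions can be used to rank candidate items.
```

The predicted scores for the zero cells are what a recommender would rank; a production system would add biases, early stopping, and a held-out evaluation split.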
Greedygame is looking for a data scientist who will help us make sense of the vast amount of available data in order to make smarter decisions and develop high-quality products. Your primary focus will be using data mining techniques, statistical analysis, and machine learning to build high-quality prediction systems and strong consumer engagement profiles.
Responsibilities:
- Build the statistical models and heuristics needed to predict, optimize, and guide various aspects of our business based on available data
- Interact with product and operations teams to identify gaps, questions, and issues for data analysis and experiments
- Develop and code software programs and algorithms, and create automated processes that cleanse, integrate, and evaluate large datasets from multiple sources
- Create systems that use data from user behavior to identify actionable insights, and convey these insights to product and operations teams from time to time
- Help redefine the ad-viewing experience for consumers on a global scale
Skills required:
- Coding experience in Python, MySQL, and NoSQL, and in building prototypes for algorithms
- Comfortable with and willing to learn any machine learning algorithm, reading research papers and delving deep into the maths
- Passionate and curious to learn the latest trends, methods, and technologies in this field
What's in it for you?
- Opportunity to be a part of the big disruption we are creating in the ad-tech space
- Work with complete autonomy, and take on multiple responsibilities
- Work in a fast-paced environment, with uncapped opportunities to learn and grow
- Office in one of the most happening places in India
- Amazing colleagues, weekly lunches, and beer on Fridays!
What we are building: GreedyGame is a platform that enables blending of ads within the mobile gaming experience using assets like backgrounds, characters, and power-ups.
It helps advertisers engage audiences while they are playing games, empowers game developers to monetize their game development efforts through non-intrusive advertising, and allows gamers to enjoy gaming content without having to deal with distracting advertising.
Couture.ai provides Artificial Intelligence as a SaaS offering for global online retailers and fashion brands. We use our state-of-the-art deep learning technology to predict the behavior of newly acquired users, which helps retailers tailor experiences for each customer even without any prior interactions with them. After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversion go up to 3x. A credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with machine learning libraries, and is hands-on with RDBMS/NoSQL DBs, big data analytics, forming insights based on data, handling Unix and production servers, etc. A Tier-1 college (BE from the IITs, BITS Pilani, top NITs, or IIITs, or an MS from Stanford, Berkeley, CMU, or UW-Madison) is a must; exceptionally bright engineers are also welcome. Let us know if this interests you to explore the profile further.
- Predicting upcoming fashion trends from market intelligence,
- Implicit identification of personal style from users' images, social media, and interactions,
- Personalization for user purchase points and size,
- Virtual styling, and
- Intelligent screenless interfaces.
Couture.ai provides Artificial Intelligence as a SaaS offering for global online retailers and fashion brands. We use our state-of-the-art deep learning technology to predict the behavior of newly acquired users, which helps retailers tailor experiences for each customer even without any prior interactions with them. After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversion go up to 3x. We are looking for a research-driven expert to join our AI & Data Science team and take our innovations to the next horizon. A credible display of innovation in past projects (or academia) and expertise with machine learning are a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with machine learning libraries (TensorFlow, MLlib, SciPy, NumPy, etc.), and has experience with key deep learning algorithms. A Tier-1 college (BE from the IITs, BITS Pilani, top NITs, or IIITs, or an MS from Stanford, Berkeley, CMU, or UW-Madison) is a must. Let us know if this interests you to explore the profile further.
To introduce myself, I head Global Faculty Acquisition for Simplilearn.
About my company: Simplilearn has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com
If you're interested in teaching, interacting, sharing real-life experiences, and have a passion to transform careers, please join hands with us.
Onboarding process:
• An updated CV needs to be sent to my email id, with copies of relevant certificates.
• Sample e-learning access will be shared, with a 15-day trial after your registration on our website.
• My subject matter expert will evaluate you on your areas of expertise over a telephonic conversation - duration 15 to 20 minutes.
• Commercial discussion.
• We will register you for our ongoing online sessions to introduce you to our course content and the Simplilearn style of teaching.
• A demo will be conducted to check your training style and internet connectivity.
• Freelancer Master Service Agreement.
Payment process:
• Once a workshop/the last day of training for the batch is completed, you have to share your invoice.
• An automated tracking ID will be shared from our automated ticketing system.
• Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
• Payment will be processed within 15 working days as per policy; the 15 days count from the date the invoice is received.
Please share your updated CV to move to the next step of the onboarding process.
Why - You are interested in creating impact for the over 150 million users registered with us. It's that simple!
Where - Kickass studio office, Bangalore, India.
What - You will be responsible for creating a world-class recommendation engine for our brand new product! You will partner with various teams to create enormous impact through the use of the latest analytical tools and techniques. At Meaww, a high focus on impact and ownership gives everyone the freedom to experiment and innovate. The ability to see what your contribution does to the business is a rare experience; add to that the fact that its impact is felt by your friends and family.
We are looking for people with experience in machine learning. We work on DNNs, and if you are deeply interested, you can spend time on training as well (given you are very well versed in other ML concepts). The work is exciting and we have an awesome team on board already!
At Swiggy, we are looking for talented data scientists who can help unlock the true potential of the unique data that we collect every day. Some of our current focus areas include:
- SLA prediction
- Batching/route optimization
- Recommender systems
- Personalization
- Cross-selling/up-selling
- Fraud modeling
- Inventory stock prediction
An ideal candidate should have strong depth and breadth of knowledge in different areas of machine learning and statistics, as well as reasonable programming skills to be able to fetch data, process it, and create a prototype model for the problem.
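As an illustration of the "prototype model" skill the posting asks for, here is a minimal SLA (delivery-time) prediction sketch using ordinary least squares. The feature columns (`distance_km`, `items`) and the synthetic data below are assumptions made for the example; they are not Swiggy's actual features or model.

```python
import numpy as np

# Synthetic order features: [distance_km, items_in_order] (invented columns).
X = np.array([
    [1.0, 2], [2.5, 1], [4.0, 3], [0.5, 1],
    [3.0, 2], [5.0, 4], [2.0, 2], [1.5, 1],
], dtype=float)
# Delivery time in minutes, generated as 10 + 5*distance + 2*items.
y = np.array([19.0, 24.5, 36.0, 14.5, 29.0, 43.0, 24.0, 19.5])

# Add an intercept column and fit ordinary least squares.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_sla(distance_km, items):
    """Predicted delivery time in minutes for a hypothetical order."""
    return coef[0] + coef[1] * distance_km + coef[2] * items
```

A real prototype would pull features from a warehouse, validate on held-out orders, and likely use a nonlinear model; this only shows the fetch-process-model shape of the task.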
Who and what of Razorpay
Razorpay is a year-old gathering of folks from various walks of life. We have suave designers, magical engineers, sales hustlers, calm-as-rock customer champions, and then some. Together we are trying to bring modern technology into the backbone of the internet: online payments. We believe that someone is doing his/her job all wrong if customers have to tear their hair out over accepting payments online. In one year we have gone from nothing to a full-fledged, modern, and robust online payments system. Our customers swear by our tech and UX, and we are ecstatic to see people enjoy the results of our fanatic focus on making online payments simple and accessible. We provide the smoothest payment experience that money can buy right now in India. With a marquee list of investors, cash in the bank, and a low burn rate, we are here for years to stay, to lead, and to demonstrate how online payments should be.
And you are?
Someone who thinks technology is the answer to life, the universe, and everything. You don't want to hide behind the mundanity of a corporate job, a cog in a bigger scheme that's unknown. You want to own what you do and drive it to completion, with a touch of perfection befitting your zeal for your work. You swear by version control and you bow to the church of test-driven development. You love your tech, and automation is your religion.
Now that you know us and we know you, let's talk about the culture you can expect at Razorpay.
All things culture
We treat all our employees as adults, even if they are legally not. We think that you can do your best with the freedom to experiment, a sense of ownership of your work, a licence to fail, and peers who care just as much as you about their art. We default to transparency because everyone can contribute to anything, and that's only possible if you know what's going on. We prefer to mingle with the community through meetups and conferences on topics as broad as daylight.
And at the end of the day, we accept that we are humans, fallible to error, learning through errors together. After all, there's comedy in error too. We believe in having the best tools for the job. We are happy paying customers of GitHub, Clearbit, Zendesk, and Slack, some of the most important tools in our workflow. Our people select the laptops on which they think they can do their best work. We also provide free books for constant learning, free VMs for side projects, and an in-office gym to keep you at your best, always.
Hey developer, what will you get to work on?
As a data scientist, you will get to:
- Work closely with our business and product teams to identify key questions and come up with data solutions
- Apply statistical and econometric models on large data sets to identify impacts, predict future performance, measure results, etc.
- Set up our data science and machine learning team
- Design, analyse, and interpret the results of experiments
- Work on exciting challenges to improve payment flows
What do we expect from you?
As a data scientist, we expect you to have:
- 2+ years of experience working with and analysing large data sets to solve problems
- The ability to understand and tackle problems around data quality, clustering, dimensionality, etc.
- Programming - data-crunching languages like R or Python would be a plus
- Prior experience with distributed-data tools like Scalding, Hadoop, etc.
- A willingness to learn new technology, whatever lets you deliver the best product
Apart from these, we also expect the following, but we accept that you can be an absolutely great developer without fulfilling them. So go ahead and apply even if the following aren't applicable:
- Strong knowledge of statistics and experimental design
- A few weekend side projects up on GitHub
- Contributions to an open source project
- Working knowledge of multiple languages
FlyNava Technologies is a start-up whose vision is to create the finest airline software for distinct competitive advantages in revenue generation and cost management. The software products have been designed and created by veterans of the airline and airline IT industry to meet the needs of this special customer segment. The software takes an innovative approach to age-old practices of pricing, hedging, and aircraft induction, and will be path-breaking to use, encouraging users to rely and depend on its capabilities. We will leverage our competitive edge by incorporating new technology, big data models, operations research, and predictive analytics into the software products, as a means of creating interest and creativity while using the software. This interest and creativity will increase potential revenues or reduce costs considerably, thereby creating a distinct competitive differentiation. FlyNava is convinced that when airline users create that differentiation easily, their alignment to the products will be self-motivated rather than mandated. A high level of competitive advantage will also flow from the following:
- All products, solutions, and services will be copyrighted. FlyNava will benefit from high IPR value, including its base thesis/research, as the sole owner.
- Existing product companies are investing in other core areas, while our business areas are predominantly manual processes.
- Solutions are based on master's theses, which need 2-3 years to complete and more time to make relevant for software development. Expertise in these areas is far and few between.
Responsibilities:
- Collecting, cataloguing, and filtering data, and benchmarking solutions
- Contribute to model-related data analytics and reporting
- Contribute to secured software release activities
Education & Experience:
- B.E/B.Tech or M.Tech/MCA in Computer Science / Information Science / Electronics & Communication
- 3-6 years of experience
Must have:
- Strong in data analytics via Pyomo (for optimization), Scikit-learn (for small-data ML algorithms), and MLlib (Apache Spark big-data ML algorithms)
- Strong in representing metrics and reports via JSON
- Strong in scripting with Python
- Familiar with machine learning and pattern recognition algorithms
- Familiar with the Software Development Life Cycle
- Effective interpersonal skills
Good to have:
- Social analytics
- Big data
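The "metrics and reports via JSON" requirement above can be sketched with the Python standard library alone. The model name and the prediction/actual numbers below are invented for illustration:

```python
import json

# Hypothetical predictions vs. actuals for a small regression model.
actual = [3.0, 5.0, 2.5, 7.0]
predicted = [2.8, 5.4, 2.0, 7.1]

n = len(actual)
errors = [p - a for p, a in zip(predicted, actual)]
report = {
    "model": "demand_forecast_v1",  # illustrative model name
    "n_samples": n,
    "mae": sum(abs(e) for e in errors) / n,               # mean absolute error
    "rmse": (sum(e * e for e in errors) / n) ** 0.5,      # root mean squared error
}
print(json.dumps(report, indent=2))
```

Emitting metrics as JSON like this keeps reports machine-readable, so downstream dashboards or Spark jobs can consume them without parsing free-form text.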