Founded in 2012, Factspan Analytics is a profitable company based in Bengaluru (Bangalore). It currently has 6-50 employees and works in the IT consultancy domain.
Skill development & recruitment are among the most fundamental problems in the world, and in this era of artificial intelligence & virtual reality they remain relatively unsolved. Internshala is a technology company on a mission to equip students with relevant skills through online trainings and internships and to help them build their careers. If done right, we believe our work will fundamentally change the way training & recruitment happen. It will benefit thousands of companies and millions of students, uplifting the entire industry-academia ecosystem. If your calling is to solve complex, real-world problems through elegant digital products & experiences, this role is for you. As a Product Manager, you will take on the challenge of building world-class internship and online training platforms. You will wear multiple hats & work with different teams to deliver high-quality products.

Your responsibilities would include -
1. Interacting with customers & internal teams to deeply understand user requirements & pain points
2. Developing the product roadmap - why, what, when & how to build
3. Designing elegant & thorough solutions to problems & getting them implemented along with developers
4. Tracking and analyzing the impact of those solutions in achieving the desired objectives
5. Getting a lot of stuff implemented in a short period of time through process & hustle

You will get -
1. A shot at building something truly disruptive; there is no revolutionary product for education/recruitment the way there is Uber for transportation or Amazon for e-commerce
2. A chance to really make a difference by creating products that will change millions of lives for the better
3. A lot of creative freedom - you'll get to take strategic calls on the product
4. Awesome colleagues & a great work environment

You fit the bill if -
1. You can demonstrate a track record of being a good hustler; you should be able to meticulously and relentlessly drive projects to completion
2. You are immensely passionate about building beautiful, high-impact products
3. You are good with numbers; you should be capable of pulling the data you require and generating insights from it
4. You have a creative bent of mind; you can come up with lots of ideas when faced with a problem
5. You have strong communication skills

Location - Gurgaon (Address)
Compensation - INR 4.5 LPA for fresh graduates. For candidates with experience, it would be as per your experience and current compensation. This is an important position for us and we intend to make a competitive offer.
Start date - Immediately
We at Faasos are looking for a Data Scientist who will help us discover the information hidden in our ocean of data and help us make smarter decisions to delight our customers. Your primary focus will be on applying data mining techniques, doing statistical analysis, and building high-quality prediction systems integrated with our products. Requirements would include building recommendation systems, automating customer scoring using machine learning techniques, developing internal A/B testing procedures, building automated fraud detection, creating optimal price prediction models, etc.

Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state-of-the-art methods
- Extending the company's data with third-party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad-hoc analysis and presenting results in a clear manner

Skills and Qualifications
- Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, etc.
- Experience with common data science toolkits, such as Python, Pandas, Jupyter, Spark, Hadoop, etc. Excellence in at least one of these is highly desirable
- Great communication skills
- Experience with data visualisation tools, such as D3.js, ggplot, etc.
- Proficiency in using query languages such as SQL, Hive, Pig
- Experience with NoSQL databases, such as MongoDB, Elasticsearch, etc.
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.
- Good scripting and programming skills
- A data-oriented personality

Interested candidates can share their profiles.
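To illustrate the kind of classifier-building work described above, here is a minimal sketch of one of the listed algorithms, k-NN, in plain Python. The toy data and the choice of k are invented for demonstration; in practice you would use a toolkit such as the ones listed:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points, using Euclidean distance. `train` is a list of
    (feature_tuple, label) pairs."""
    neighbours = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in neighbours[:k])
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]
print(knn_predict(train, (0.05, 0.1)))   # "a" - near the first cluster
print(knn_predict(train, (1.05, 0.95)))  # "b" - near the second cluster
```

The same vote-among-neighbours logic is what library implementations optimize with spatial indexes instead of the full sort used here.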
The Last Mile Analytics & Quality Team in Hyderabad is looking for a Transportation Quality Specialist who will act as first-level support for address, geocode and static route management in Last Mile across multiple Transportation services, along with other operational issues and activities related to the Transportation process and optimization. Your solutions will impact our customers directly! This job requires you to constantly hit the ground running, and your ability to learn quickly and work on disparate and overlapping tasks will define your success. High-impact production issues often require coordination between multiple Development, Operations and IT Support groups, so you get to experience a breadth of impact with various groups. Primary responsibilities include troubleshooting, diagnosing and fixing static route issues, developing monitoring solutions, performing software maintenance and configuration, implementing fixes for internally developed code, performing minor SQL queries, and updating, tracking and resolving technical challenges. Responsibilities also include working alongside development on Amazon Corporate and Divisional Software projects, updating/enhancing our current tools, automating support processes and documenting our systems. The ideal candidate must be detail-oriented, have superior verbal and written communication skills and strong organizational skills, be able to juggle multiple tasks at once, work independently and maintain professionalism under pressure. You must be able to identify problems before they happen and implement solutions that detect and prevent outages. You must be able to accurately prioritize projects, make sound judgments, work to improve the customer experience, and get the right things done.
Basic qualifications
- Bachelor's degree in Computer Science or Engineering
- Good communication skills, both verbal and written
- Demonstrated ability to work in a team
- Proficiency in MS Office, SQL, Excel

Preferred qualifications
- Experience working with relational databases
- Experience with Linux
- Debugging and troubleshooting skills, with an enthusiastic attitude toward supporting and resolving customer problems
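As a sketch of the "minor SQL queries" this role mentions, the snippet below uses Python's built-in sqlite3 as a stand-in database; the table and column names are hypothetical, invented purely to illustrate the kind of check a specialist might run (e.g. finding static routes with missing geocodes):

```python
import sqlite3

# In-memory stand-in for a routing database (table/columns are hypothetical)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE routes (route_id TEXT, station TEXT, geocode TEXT)")
conn.executemany(
    "INSERT INTO routes VALUES (?, ?, ?)",
    [("R1", "HYD1", "17.38,78.48"), ("R2", "HYD1", None), ("R3", "HYD2", None)],
)

# Flag routes that cannot be planned because their geocode is missing
missing = conn.execute(
    "SELECT route_id FROM routes WHERE geocode IS NULL ORDER BY route_id"
).fetchall()
print(missing)  # [('R2',), ('R3',)]
```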
Job Title: Distributed Systems Engineer - SDET
Job Location: Pune, India

Job Description: Are you looking to put your computer science skills to use? Are you looking to work for one of the hottest start-ups in Silicon Valley? Are you looking to define the next-generation data management platform based on Apache Spark? Are you excited by the idea of being a Spark committer? If you answered yes to all of the questions above, we definitely want to talk to you. We are looking to add highly motivated engineers to work as QE software engineers in our product development team in Pune. We work on cutting-edge data management products that transform the way businesses operate. As a distributed systems engineer (if you are good), you will get to work on defining key elements of our real-time analytics platform, including
1. Distributed in-memory data management
2. OLTP and OLAP querying in a single platform
3. Approximate query processing over large data sets
4. Online machine learning algorithms applied to streaming data sets
5. Streaming and continuous querying

Requirements:
1. Experience in testing modern SQL and NewSQL products is highly desirable
2. Experience with the SQL language, JDBC, and end-to-end testing of databases
3. Hands-on experience in writing SQL queries
4. Experience with database performance benchmarks like TPC-H, TPC-C and TPC-E is a plus
5. Prior experience in benchmarking against Cassandra or MemSQL is a big plus
6. You should be able to program in Java or have some exposure to functional programming in Scala
7. You should care about performance, and by that we mean performance optimizations in a JVM
8. You should be self-motivated and driven to succeed
9. If you are an open source committer on any project, especially an Apache project, you will fit right in
10. Experience working with Spark, Spark SQL, Spark Streaming is a BIG plus
11. Plans and authors test plans, and ensures testability is considered by development in all stages of the life cycle
12. Plans, schedules and tracks the creation of test plans / automation scripts using defined methodologies for manual and/or automated tests
13. Works as a QE team member in troubleshooting, isolating, reproducing and tracking bugs and verifying fixes
14. Analyzes test results to ensure existing functionality and recommends corrective action. Documents test results, and manages and maintains defect & test case databases to assist in process improvement and estimation of future releases
15. Performs the assessment and planning of test efforts required for automation of new functions/features under development. Influences design changes to improve quality and feature testability
16. If you have solved big, complex problems, we want to talk to you
17. If you are a math geek with a background in statistics and mathematics and you know what a linear regression is, this just might be the place for you
18. Exposure to stream data processing with Storm or Samza is a plus

Open source contributors: Send us your GitHub id

Product: SnappyData is a new real-time analytics platform that combines probabilistic data structures, approximate query processing and in-memory distributed data management to deliver powerful analytic querying and alerting capabilities on Apache Spark at a fraction of the cost of traditional big data analytics platforms. SnappyData fuses the Spark computational engine with a highly available, multi-tenanted in-memory database to execute OLAP and OLTP queries on streaming data. Further, SnappyData can store data in a variety of synopsis data structures to provide extremely fast responses using fewer resources. Finally, applications can either submit Spark programs or connect using JDBC/ODBC to run interactive or continuous SQL queries.

Skills: 1. Distributed systems, 2. Scala, 3. Apache Spark, 4. Spark SQL, 5. Spark Streaming, 6. Java, 7. YARN/Mesos

What's in it for you:
1. Cutting-edge work that is ultra meaningful
2. Colleagues who are the best of the best
3. Meaningful startup equity
4. Competitive base salary
5. Full benefits
6. Casual, fun office

Company Overview: SnappyData is a Silicon Valley-funded startup founded by engineers who pioneered the distributed in-memory data business. It is advised by some of the legends of the computing industry who have been instrumental in creating multiple disruptions that have defined computing over the past 40 years. The engineering team that powers SnappyData built GemFire, one of the industry-leading in-memory data grids, which is used worldwide in mission-critical applications ranging from finance to retail.
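The approximate query processing mentioned in the product description can be sketched with a uniform-sampling estimator: answer an aggregate from a small random sample and scale up, trading exactness for speed. This is a toy pure-Python illustration with invented data; real systems use far more sophisticated synopsis structures:

```python
import random

def approx_sum(values, fraction=0.01, seed=7):
    """Estimate sum(values) from a uniform random sample, scaled by the
    inverse sampling rate -- the basic idea behind sampling-based
    approximate query processing."""
    rng = random.Random(seed)
    k = max(1, int(len(values) * fraction))
    sample = rng.sample(values, k)
    return sum(sample) * len(values) / k

data = list(range(1, 100_001))  # exact sum is 5_000_050_000
estimate = approx_sum(data)
error = abs(estimate - sum(data)) / sum(data)
print(f"relative error: {error:.3%}")  # typically within a few percent
```

The estimate reads 1% of the data instead of all of it; the standard error shrinks as the sample grows, which is the knob such platforms expose to the user.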
It is my pleasure to introduce you to IDEAS2IT Technologies, Chennai. If you are looking for a challenging position as a Big Data engineer, solving complex business problems by applying the latest in Data Science, Machine Learning and AI, read on. Ideas2IT is a high-end product engineering firm that rolls out its own products and also helps Silicon Valley firms with their product engineering. We are looking for super smart techies to be part of our Data Science Lab. You will be working on projects like:
- An AI platform built using Google TensorFlow for a predictive hiring product
- A betting odds platform that matches offered odds to leverage spreads
- A PPO platform for predictive pricing and promotions for enterprise eCommerce

Part of your toolset will be Google TensorFlow, Python ML frameworks, Apache Spark, R, Google BigQuery, Scala/Octave, Kafka and so on. If you have any relevant experience, great! If not, it doesn't matter. We believe in hiring people with high IQ and the right attitude over ready-made skills. As long as you are passionate about building world-class enterprise products and understand in depth whatever technology you are working on, we will bring you up to speed on all the technologies we use. Oh, BTW, did we mention that you need to be super smart? Sounds interesting? Ideas2IT is a high-end product firm. Started by an ex-Googler, Murali Vivekanandan, we count Siemens, Motorola, eBay, Microsoft and Zynga among our clients. We solve some very interesting problems in the USA startup ecosystem and have created great products in the process. When we build, we build great! We actively contribute to open source projects. We've built our own frameworks. We're betting the house on Big Data, and with a Stanford grad leading the team, we're sure to win. We rolled out two of our products as separate companies last year and raised institutional funds - Pipecandy and Idearx.
Being a small team, this role is for a sound developer who is in love with data, whether that means working with huge amounts of data, using data to drive insights, or finding innovative ways to pull data. Let us know if you'd like to explore this role with us further.
We are a FinTech startup that has a great opportunity to disrupt the financial services sector and has global outreach. We are working on exciting technologies, from Big Data, Bayesian algorithms and Machine Learning to NLP and high-computation models. We plan to launch the product in multiple countries. We have a large vision and a great opportunity to be a game changer in the industry. We are not looking for iterations in pursuit of a business model; our business model is our IP. We intend to create new algorithms to mine data, create a customer segment of one, and add real value to our customers. Our work would entail the creation of new intellectual properties and many opportunities for patents. If you are excited to join a startup that holistically transforms the banking sector in India, please reach out to me - email@example.com
For the same FinTech venture, we are looking for a smart CTO who is keen to ride this journey along with us. Key traits we are looking for:

Experience:
- Prior startup experience, preferably in FinTech
- Willing to get their hands dirty and be smart about the approach

Technology:
- Strong engineering discipline
- Knowledge of the full stack
- Exposure to and understanding of Machine Learning
- Not looking for a specific technology area, as I expect the CTO to have the ability to learn new technologies as required

Leadership/soft skills:
- Strategic thinking (more importantly, thinking big)
- Innovative - thinks out of the box
- Good communication skills
- Strong customer focus

We are currently in the product development stage and are keen to bring on an outstanding CTO - one who contributes not only from a technology perspective but also from a business perspective.
Core Skills
- Python
- Data science and analytical background
- Interacting with technical and non-technical stakeholders
- Machine learning and exploratory data analysis

Desirable Technical Skills
- Data Science and Engineering: PySpark, PySpark ML, Python, Hive, Postgres, Sk-learn
- Machine Learning: Collaborative filtering, NLP, TF-IDF, Decision trees, Regression, Clustering
- AWS: EC2, RDS, EMR, Kinesis, S3
- Cloud Providers: AWS (primary), Google Cloud, Microsoft Cognitive Services, Watson APIs
- Visualization & UI: Tableau, Plotly, Python Flask, Zeppelin
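Since TF-IDF appears in the desirable skills above, here is a minimal pure-Python sketch of the weighting scheme. The toy documents are invented for illustration; in practice Sk-learn's TfidfVectorizer would be used, which applies extra smoothing and normalization:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.
    TF = term count / document length; IDF = ln(N / number of docs
    containing the term). Terms present in every document score zero."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["spark", "hive", "python"], ["python", "flask"], ["spark", "spark", "sql"]]
w = tf_idf(docs)
print(round(w[1]["flask"], 3))  # 0.549 - rare in the corpus, so weighted high
```

"flask" appears in only one of the three documents, so it outweighs "python", which appears in two and carries less discriminating power.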