Shuttl is a bus aggregation platform offering a shuttle bus service to commuters in cities like Noida and Gurgaon. The aim is to address the daily commute problem faced by office-goers. It's a mobile-based minibus service designed to make your daily commute more convenient: the vehicles are air-conditioned and run at high frequency on fixed routes, freeing you from the hassles of existing public transport options at a very economical price point.
We are looking for passionate problem solvers to work with our talented team of analytics professionals to build algorithms and models. If you are curious and crazy about optimised solutions, then you are the one we are looking for. The right candidate:
1. Should have a good understanding of Machine Learning models, in order to optimise the current models (better training, better approaches, new algorithm designs, etc.)
2. Should have hands-on experience in statistical modelling
3. Must have hands-on experience working with Microsoft Azure Machine Learning
Looking for an enthusiastic Tableau Developer who can make sense of most of our real-time data.
We are looking for a driven and passionate machine learning engineer with strong empathy for our users (students and teachers).
Key responsibilities:
- Decide what data are needed to answer specific questions, and determine appropriate methods for finding and collecting those data.
- Design experiments to validate collected data, and analyse, interpret, and discover patterns in the data.
- Interact cross-functionally with a wide variety of people, including students, teachers, and engineers.
Requirements:
- Feel a personal stake in the product.
- Strong attention to detail and excellent analytical capabilities.
- Experience designing and implementing various machine learning models.
- Deep understanding of core Machine Learning concepts.
Position Description
Demonstrates up-to-date expertise in Software Engineering and applies it to the development, execution, and improvement of action plans. Models compliance with company policies and procedures and supports the company mission, values, and standards of ethics and integrity. Provides and supports the implementation of business solutions. Provides support to the business. Troubleshoots business and production issues and provides on-call support.
Minimum Qualifications
- BS/MS in Computer Science or a related field
- 5+ years' experience building web applications
- Solid understanding of computer science principles
- Excellent soft skills
- Understanding of major algorithms such as searching and sorting
- Strong skills in writing clean code using languages like Java and J2EE technologies
- Understanding of how to engineer RESTful services and microservices, and knowledge of major software patterns such as MVC, Singleton, Facade, and Business Delegate
- Deep knowledge of web technologies such as HTML5, CSS, and JSON
- Good understanding of continuous integration tools and frameworks like Jenkins
- Experience working in Agile environments such as Scrum and Kanban
- Experience with performance tuning for very large-scale apps
- Experience writing scripts in Perl, Python, and shell
- Experience writing jobs using open-source cluster computing frameworks like Spark
- Database design experience: relational (MySQL, Oracle), Solr, and NoSQL stores such as Cassandra, MongoDB, and Hive
- Aptitude for writing clean, succinct, and efficient code
- Attitude to thrive in a fun, fast-paced, start-up-like environment
We're an early-stage film-tech startup with a mission to empower filmmakers and independent content creators with data-driven decision-making tools. We're looking for a data person to join the core team. Please get in touch if you would be excited to join us on this journey of disrupting the film production and distribution business. We are currently collaborating with Rana Daggubati's Suresh Productions and work out of their studio in Hyderabad, so there will be plenty of exposure to, and opportunities to work on, real issues faced by the media industry.
Transporter is an AI-enabled location stack that helps companies improve their commerce, engagement, or operations through their mobile apps, built for the next generation of online commerce.
Job Description: If you have good programming skills and want to solve complex real-world problems using artificial intelligence, machine learning, and computer vision while learning these on the job, READ ON. Run by IIT Kanpur alumni, AIMonk is a computer vision startup in stealth mode. We are building a ubiquitous platform for computer vision using artificial intelligence. We are looking for an entry-level (0-2 years of professional experience) programmer with a deep interest in software engineering. This is a machine learning engineer position, but no machine learning experience is required. What we are looking for is a sharp, curious mind that gets its high from solving problems. Willingness to work in an early-stage start-up, and humility, are the other required traits. College pedigree doesn't matter, though it can be a good indicator of your skill level; people who went to NIT or BITS are encouraged to apply. However, there is a programming challenge below: if you have the skills to solve it, it doesn't matter where you went to college or what degree you have. Be careful: once you have worked with us, ordinary jobs will no longer interest you, as they won't be challenging enough. The good news is that you will learn more than you need to land those top 0.01% of interesting jobs. Job Perks: the opportunity to work with some of the smartest people in the country on artificial intelligence and computer vision; learning, tons of it; autonomy, respect, and the freedom to set your own work hours; the opportunity to fail and learn. And of course, free beer and pizza once in a while. Problem statement: https://s3-ap-southeast-1.amazonaws.com/aimonk/SDE1-problem+statement.pdf
Machine Learning Data Engineer
Engineering, Gurgaon, Haryana, India
Job Description
Who are we? BlueOptima provides industry-leading objective metrics in software development using its proprietary Coding Effort Analytics, enabling large organisations to deliver better software, faster, and at lower cost. Founded in 2007, BlueOptima is a profitable, independent, high-growth software vendor commercialising technology initially devised in seminal research carried out at Cambridge University. We are headquartered in London, with offices in New York, Bangalore, and Gurgaon. BlueOptima's technology is deployed with global enterprises driving value from their software development activities. For example, we work with seven of the world's top ten universal banks (by revenue) and three of the world's top ten telecommunications companies (by revenue, excl. China). Our technology is pushing the limits of complex analytics on large datasets, with more than 15 billion static source code metric observations of software engineers working in enterprise software development environments. BlueOptima is an Equal Opportunities employer.
Whom are we looking for? BlueOptima has a truly unique collection of vast datasets relating to the changes software developers make in source code when working in an enterprise software development environment. We are looking for analytically minded individuals with expertise in statistical analysis, Machine Learning, and Data Engineering, who will work on real-world problems unique to our data and develop new algorithms and tools to solve them. The use of Machine Learning is a growing internal initiative, and we have a large range of opportunities to expand the value we deliver to our clients.
What does the role involve? As a Data Engineer you will take problems and ideas from our onsite Data Scientists, analyse what is involved, and spec and build intelligent solutions using our data, taking responsibility for the end-to-end process. Beyond this, you are encouraged to identify new ideas, metrics, and opportunities within our dataset, and to identify and report when an idea or approach isn't succeeding and should be stopped. You will use tools ranging from advanced Machine Learning algorithms to statistical approaches, and will be able to select the best tool for the job. Finally, you will support and identify improvements to our existing algorithms and approaches.
Responsibilities include:
- Solve problems using Machine Learning and advanced statistical techniques based on business needs.
- Identify opportunities to add value and solve problems using Machine Learning across the business.
- Develop tools that help senior managers identify actionable information based on metrics like BlueOptima Coding Effort, and explain the insights they reveal to support decision-making.
- Develop additional and supporting metrics for the BlueOptima product and data, predominantly using R, Python, and/or similar statistical tools.
- Produce ad hoc or bespoke analyses and reports.
- Coordinate with both engineers and client-side data scientists to understand requirements and opportunities to add value.
- Spec the requirements to solve a problem, identify the critical path and timelines, and give clear estimates.
- Resolve issues in, and find improvements to, existing Machine Learning solutions, and explain their impact.
ESSENTIAL SKILLS / EXPERIENCE REQUIRED:
- Minimum Bachelor's degree in Computer Science, Statistics, Mathematics, or equivalent.
- Minimum of 3+ years' experience developing solutions using Machine Learning algorithms.
- Strong analytical skills demonstrated through data engineering or similar experience.
- Strong fundamentals in statistical analysis using R or a similar programming language.
- Experience applying Machine Learning algorithms and techniques to solve problems on structured and unstructured data.
- An in-depth understanding of a wide range of Machine Learning techniques, and an understanding of which algorithms suit which problems.
- A drive not only to identify a solution to a technical problem, but to see it all the way through to inclusion in a product.
- Strong written and verbal communication skills.
- Strong interpersonal and time management skills.
DESIRABLE SKILLS / EXPERIENCE:
- Experience automating basic tasks to maximise time for more important problems.
- Experience with PostgreSQL or a similar relational database.
- Experience with MongoDB or a similar NoSQL database.
- Data visualisation experience (via Tableau, QlikView, SAS BI, or similar) is preferable.
- Experience using task-tracking systems (e.g. Jira) and distributed version control systems (e.g. Git).
- Comfort explaining very technical concepts to non-experts.
- Experience with project management and designing processes to deliver successful outcomes.
Why work for us?
- Work with a unique and truly vast collection of datasets
- Above-market remuneration
- Stimulating challenges that fully utilise your skills
- Work on real-world technical problems whose solutions cannot simply be found on the internet
- Work alongside other passionate, talented engineers
- Hardware of your choice
- Our fast-growing company offers the potential for rapid career progression
We are looking for a Machine Learning Developer who possesses a passion for machine technology and big data, and who will work on our next-generation universal IoT platform.
Responsibilities:
- Design and build machines that learn from, predict, and analyse data.
- Build and enhance tools to mine data at scale.
- Enable the integration of Machine Learning models into the Chariot IoT Platform.
- Ensure the scalability of Machine Learning analytics across millions of networked sensors.
- Work with other engineering teams to integrate our streaming, batch, or ad-hoc analysis algorithms into Chariot IoT's suite of applications.
- Develop generalizable APIs so other engineers can use our work without needing to be machine learning experts (see the sketch below for the flavour of this idea).
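For illustration only: a minimal sketch of the kind of "generalizable API" the last responsibility describes, where a model is hidden behind a plain method so callers need no ML expertise. Nothing here comes from Chariot IoT; the class, the method names, the anomaly-detection task, and the model choice (scikit-learn's IsolationForest) are all hypothetical stand-ins.

# Hypothetical sketch, not Chariot IoT code: wrap a model behind a plain API.
from sklearn.ensemble import IsolationForest

class SensorAnomalyDetector:
    """Flags unusual sensor readings; callers only pass rows of numbers."""

    def __init__(self):
        self._model = IsolationForest(random_state=0)

    def fit(self, readings):
        # readings: list of [value, ...] rows collected from networked sensors
        self._model.fit(readings)
        return self

    def is_anomalous(self, reading):
        # IsolationForest labels outliers -1 and inliers 1; hide that detail.
        return self._model.predict([reading])[0] == -1

# Hypothetical usage with synthetic temperature readings around 20-30 degrees.
detector = SensorAnomalyDetector().fit([[20.0 + 0.1 * i] for i in range(100)])
print(detector.is_anomalous([100.0]))  # an extreme reading should print True

The point of the design is that the caller sees fit/is_anomalous and never touches model internals, which is one common way to make ML work reusable by non-expert engineers.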
Should be experienced in building machine learning pipelines. Should be proficient in Python and scientific packages like pandas, NumPy, scikit-learn, and matplotlib. Also required:
- Experience with techniques such as data mining, distributed computing, applied mathematics and algorithms, and probability and statistics
- Strong problem-solving and conceptual-thinking abilities
- Hands-on experience in model building
- Building highly customised and optimised data pipelines integrating third-party APIs and in-house data sources
- Extracting features from text data using tools like spaCy
- Deep learning for NLP using any modern framework
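For illustration only (nothing below is from the posting): a minimal scikit-learn text-classification pipeline of the sort the requirements describe, chaining feature extraction and a model into a single object. The toy corpus and the choice of TF-IDF plus logistic regression are hypothetical stand-ins; in practice a tool like spaCy could supply richer linguistic features at the extraction step.

# Hypothetical sketch of an ML pipeline using the packages the posting names.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus and labels; a real pipeline would pull from third-party APIs
# and in-house data sources as the posting describes.
texts = ["great product", "terrible service", "loved it",
         "would not recommend", "awful experience", "works perfectly"]
labels = [1, 0, 1, 0, 0, 1]

# Chain feature extraction and the classifier so the whole flow is one
# estimator that can be fit, evaluated, and deployed as a unit.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression()),
])
pipeline.fit(texts, labels)
print(pipeline.predict(["really great service"]))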