
Data Scientist - Analytics
Posted by Gowshini Maheswaran

Remote, Bengaluru (Bangalore)
6 - 12 years
₹15L - ₹40L per annum

Skills

Data Science
R Programming
Python
Amazon Web Services (AWS)
Analytics
Machine Learning (ML)
SQL
Clustering
Redshift

Job description

About Hypersonix Inc

Hypersonix offers a unified, AI-Powered Intelligent Enterprise Platform designed to help leading enterprises drive profitable revenue growth.

Founded: 2018
Type: Product
Size: 51-250 employees

Why apply to jobs via CutShort

No long forms
Personalized job matches
Stop wasting time. Get matched with jobs that meet your skills, aspirations and preferences.
Discover employers in your network
Verified hiring teams
See actual hiring teams, find common social connections or connect with them directly. No 3rd party agencies here.
Make your network count
Move faster with AI
We use AI to get you faster responses, recommendations and unmatched user experience.
2,101,133 matches delivered
3,712,187 network size
6,212 companies hiring

Similar jobs

Data Engineer

Founded 2020
via GoKwik
Location: Remote only
Experience: 1 - 5 years
Salary: ₹5L - ₹20L per annum

Founded by experienced founders and funded by Tier-1 VCs, GoKwik is a solution for democratizing the shopping experience on e-commerce platforms. Our aim is to provide a superior shopping experience for all our partners and improve both customer satisfaction and their GMV. Being an early-stage company, we are looking for self-driven, motivated people who want to build something exciting and are always looking out for the next big thing. We plan to build this company remotely, which brings freedom but also an added sense of responsibility.

Data sits at the heart of GoKwik and plays a uniquely crucial role in what we do. With data we build intelligent systems to enhance customer experiences, tackle e-commerce fraud and personalize our product. Fundamentally, data underpins all operations at GoKwik, and being part of the team gives you the chance to have a major impact across the Indian e-commerce ecosystem.

What you'll be doing:
- Maintaining and enhancing our core data infrastructure and ETL framework.
- Developing and owning core data engineering frameworks and methodologies.
- Developing with an emphasis on scale, reusability, and simplicity.
- Complementing our data scientists by providing a reliable, secure, and maintainable modelling framework.

What skills do you need:
- You are fluent in Python. We primarily use Python 3.7.
- You are deeply knowledgeable in some flavour of SQL.
- You have experience with real-time data frameworks and related services such as Kafka or Flink (see the sketch below).
- You have experience with CI/CD; Docker experience is essential.
- You are comfortable on the command line and working with servers.
- You have worked with cloud-based infrastructure, predominantly AWS.
- Being Postgres-native, experience with an MPP or columnar database, and extensive use of Docker or other orchestration-related tools are pluses.

Experience: 1-3 years

Objectives:
- Enable incremental data for data science models; ingest one syndicated data source; deploy data for all merchants who have provided data for models.
- Test and deploy the latest version of models in production for all merchants, with zero errors in model deployment.
- Build the ETL / pipeline for models; make features available for use in models in less than one hour.
- Manage, scale and deploy the model(s) in production, with zero errors in production models.
- Manage the availability of models across all RTO touchpoints.

What will make you succeed:
- You are self-disciplined. We hate micromanagement and assume you do too.
- You can communicate concisely and effectively.
- You're a team player. Working in our team is a continuous process of learning and teaching.
- You are pragmatically lazy. If it can be automated, it will be automated.
- You are proactive. Taking initiative is hard, but we always recognise and reward it.
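
As a rough sketch of the kind of real-time ingestion step this role describes (not GoKwik's actual pipeline), the snippet below consumes events from Kafka with the kafka-python client and batches them for a downstream load. The topic name, broker address, and batch size are illustrative assumptions.

```python
# Minimal sketch: consume JSON click-stream events from Kafka and batch them
# for a downstream load step. Topic, broker, and batch size are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                 # hypothetical topic
    bootstrap_servers="localhost:9092",   # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,
)

BATCH_SIZE = 500
batch = []

for message in consumer:
    batch.append(message.value)
    if len(batch) >= BATCH_SIZE:
        # In a real pipeline this is where you would write to Postgres/S3/Redshift.
        print(f"flushing {len(batch)} events")
        batch.clear()
        consumer.commit()  # commit offsets only after a successful flush
```

Committing offsets only after the flush (rather than auto-committing) is what keeps a batch from being silently lost if the load step fails.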

Job posted by Yash Bhati

Software Developer - Data Engineering / Java / Golang

Founded 2010
Location: Remote only
Experience: 4 - 8 years
Salary: ₹15L - ₹45L per annum

We are looking for an exceptional Software Developer for our Data Engineering India team who can contribute to building a world-class big data engineering stack that will be used to fuel our Analytics and Machine Learning products. This person will contribute to the architecture, operation, and enhancement of:
- Our petabyte-scale data platform, with a key focus on finding solutions that can support the Analytics and Machine Learning product roadmap. Every day, terabytes of ingested data need to be processed and made available for querying and insight extraction for various use cases.
- Our bespoke Machine Learning pipelines. This will also provide opportunities to contribute to the prototyping, building, and deployment of Machine Learning models.

Position: Software Developer, Data Engineering team
Location: Pune (initially 100% remote due to Covid-19 for the coming year)

About the organisation:
- It provides a dynamic, fun workplace filled with passionate individuals. We are at the cutting edge of advertising technology and there is never a dull moment at work.
- We have a truly global footprint, with our headquarters in Singapore and offices in Australia, the United States, Germany, the United Kingdom, and India.
- You will gain work experience in a global environment. We speak over 20 different languages from more than 16 different nationalities, and over 42% of our staff are multilingual.

You:
- Have at least 4 years of experience.
- Have a deep technical understanding of Java or Golang. Production experience with Python is a big plus and an extremely valuable supporting skill for us.
- Have exposure to modern big data tech: Cassandra/Scylla, Kafka, Ceph, the Hadoop stack, Spark, Flume, Hive, Druid etc., while at the same time understanding that certain problems may require completely novel solutions. (A small Spark sketch follows this posting.)
- Exposure to one or more modern ML tech stacks (Spark MLlib, TensorFlow, Keras, GCP ML stack, AWS SageMaker) is a plus.
- Have experience working in an Agile/Lean model.
- Have experience supporting and troubleshooting large systems.
- Have exposure to configuration management tools such as Ansible or Salt.
- Have exposure to IaaS platforms such as AWS, GCP, Azure.
- Good addition: experience working with large-scale data.
- Good addition: experience architecting, developing, and operating data warehouses, big data analytics platforms, and high-velocity data pipelines.

Note: Not looking for a Big Data Developer / Hadoop Developer.
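
Since the stack above names Spark, here is a purely illustrative batch aggregation in PySpark (Python is used for all sketches on this page and is called out as a plus for this role); the paths and column names are hypothetical, and this is not the team's actual code.

```python
# Illustrative PySpark batch job: roll up raw events by day and event type.
# Paths and column names are made-up placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

events = spark.read.json("s3a://example-bucket/raw/events/")  # hypothetical input path

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))   # hypothetical column
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

daily_counts.write.mode("overwrite").parquet("s3a://example-bucket/rollups/daily/")
spark.stop()
```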

Job posted by Biswadeep RS

Principal Architect

Location: Bengaluru (Bangalore)
Experience: 10 - 16 years
Salary: ₹50L - ₹60L per annum

12+ years of industry experience and a minimum of 5-6 years of relevant experience.

Responsibilities:
- Development of an ASR engine using frameworks like DeepSpeech, Kaldi, wav2letter, PyTorch-Kaldi, CMU Sphinx.
- Assist in defining the technology required for Speech-to-Text services beyond the core engine, and design the integration of these technologies.
- Work on improving model accuracy and guide the team with best practices.
- Lead a team of 3-5 members.

Desired experience:
- Good understanding of machine learning (ML) tools.
- Should be well versed in classical speech processing methodologies such as hidden Markov models (HMMs), Gaussian mixture models (GMMs), artificial neural networks (ANNs), language modeling, etc.
- Hands-on experience with current deep learning (DL) techniques used for speech processing, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), and connectionist temporal classification (CTC), is essential (see the CTC sketch below).
- Hands-on experience with open-source tools such as Kaldi and PyTorch-Kaldi. Familiarity with any of the end-to-end ASR tools such as ESPnet, EESEN, or Deep Speech PyTorch is desirable.
- Good understanding of WFSTs as implemented in OpenFst and Kaldi, and the ability to modify the WFST decoder as per the application requirements.
- Experience with techniques used for resolving issues related to accuracy, noise, confidence scoring, etc.
- Ability to implement recipes using scripting languages like Bash and Perl.
- Ability to develop applications using Python, C++, Java.
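
For context on the CTC objective named above, here is a minimal PyTorch sketch using the built-in CTCLoss; shapes, batch size, and vocabulary are invented for illustration and this is not the hiring company's code.

```python
# Minimal CTC training-objective sketch with torch.nn.CTCLoss.
# T = time steps, N = batch size, C = classes (27 labels + blank), S = max target length.
import torch
import torch.nn as nn

T, N, C, S = 50, 4, 28, 12

# Stand-in for acoustic-model output: log-probabilities over classes per frame.
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)
targets = torch.randint(1, C, (N, S), dtype=torch.long)        # label indices; 0 is the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # in training, gradients would flow back into the acoustic model
print(loss.item())
```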

Job posted by Ankita Ghosh

Sr. Informatica Developer

Founded 2009
Location: Chennai, Bengaluru (Bangalore), Hyderabad
Experience: 4 - 10 years
Salary: ₹9L - ₹20L per annum

- Should have good hands-on experience in Informatica MDM Customer 360, Data Integration (ETL) using PowerCenter, and Data Quality.
- Must have strong skills in data analysis, data mapping for ETL processes, and data modeling.
- Experience with the SIF framework, including real-time integration.
- Should have experience in building C360 Insights using Informatica.
- Should have good experience in creating performant designs using Mapplets, Mappings, and Workflows for Data Quality (cleansing) and ETL.
- Should have experience in building different data warehouse architectures such as Enterprise, Federated, and Multi-Tier architecture.
- Should have experience in configuring Informatica Data Director with reference to the data governance of users, IT managers, and data stewards.
- Should have good knowledge of developing complex PL/SQL queries.
- Should have working experience with UNIX and shell scripting to run the Informatica workflows and control the ETL flow.
- Should know about Informatica server installation and the Administration console.
- Working experience with Developer in addition to Administration is a plus.
- Working experience with Amazon Web Services (AWS) is an added advantage, particularly AWS S3, Data Pipeline, Lambda, Kinesis, DynamoDB, and EMR.
- Should be responsible for the creation of automated BI solutions, including requirements, design, development, testing, and deployment.

Job posted by Ramya D

Team Lead - Computer Vision

Founded 2017
Location: Hyderabad
Experience: 5 - 9 years
Salary: ₹5L - ₹20L per annum

Responsibilities:
- Build and mentor the computer vision team at TransPacks.
- Drive to productionize algorithms (at an industrial level) developed through hard-core research.
- Own the design, development, testing, deployment, and craftsmanship of the team's infrastructure and systems capable of handling massive amounts of requests with high reliability and scalability.
- Leverage deep and broad technical expertise to mentor engineers and provide leadership in resolving complex technology issues.
- Entrepreneurial and out-of-the-box thinking essential for a technology startup.
- Guide the team to unit-test code for robustness, including edge cases, usability, and general reliability.

Eligibility:
- B.Tech in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in Image Processing/Computer Vision (courses, projects etc.) and 6-8 years of experience.
- M.Tech in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in Image Processing/Computer Vision (thesis work) and 4-7 years of experience.
- Ph.D in Computer Science and Engineering/Electronics/Electrical Engineering, with demonstrated interest in Image Processing/Computer Vision (Ph.D. dissertation) and an inclination to work in industry to provide innovative solutions to practical problems.

Requirements:
- In-depth understanding of image processing algorithms, pattern recognition methods, and rule-based classifiers.
- Experience in feature extraction, object recognition and tracking, image registration, noise reduction, image calibration, and correction (see the OpenCV sketch below).
- Ability to understand, optimize and debug imaging algorithms.
- Understanding of and experience with the OpenCV library.
- Fundamental understanding of mathematical techniques involved in ML and DL schemas (instance-based methods, boosting methods, PGMs, neural networks etc.).
- Thorough understanding of state-of-the-art DL concepts (sequence modeling, attention, convolution etc.) along with a knack for imagining new schemas that work for the given data.
- Understanding of engineering principles and a clear understanding of data structures and algorithms.
- Experience in writing production-level code using either C++ or Java.
- Experience with technologies/libraries such as Python pandas, NumPy, SciPy.
- Experience with TensorFlow and scikit-learn.
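
As an illustration of the feature-extraction and matching building block mentioned above (not TransPacks code), the sketch below runs ORB keypoint detection and brute-force matching with OpenCV; the image file names are placeholders.

```python
# ORB feature extraction and matching with OpenCV; file names are placeholders.
import cv2

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching is the standard pairing for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} matches; best distance: {matches[0].distance if matches else 'n/a'}")
```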

Job posted by Pranav Asthana

Artificial Intelligence Engineer

Founded 2015
Location: Remote, Mumbai
Experience: 3 - 5 years
Salary: ₹5L - ₹7L per annum

Nactus is at the forefront of education reinvention, helping educators and the learner community at large through innovative solutions in the digital era. We are looking for an experienced AI specialist to join our revolution using deep learning and artificial intelligence. This is an excellent opportunity to take advantage of emerging trends and technologies to make a real-world difference.

Role and responsibilities:
- Manage and direct research and development (R&D) and processes to meet the needs of our AI strategy.
- Understand company and client challenges and how integrating AI capabilities can help create educational solutions.
- Analyse and explain AI and machine learning (ML) solutions while setting and maintaining high ethical standards.

Skills required:
- Knowledge of algorithms and of object-oriented and functional design principles.
- Demonstrated artificial intelligence, machine learning, mathematical and statistical modelling knowledge and skills.
- Well-developed programming skills, specifically in SAS or SQL and other packages with statistical and machine learning applications, e.g. R, Python.
- Experience with machine learning fundamentals, parallel computing and distributed systems fundamentals, or data structure fundamentals.
- Experience with C, C++, or Python programming.
- Experience with debugging and building AI applications.
- Analyse conclusions for robustness and productivity.
- Develop a human-machine speech interface.
- Verify, evaluate, and demonstrate implemented work.
- Proven experience with ML, deep learning, TensorFlow, Python (see the sketch below).
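
As a generic illustration of the "ML + TensorFlow + Python" stack listed above (not Nactus's model), here is a minimal Keras classifier trained on synthetic data; the input dimensionality and labels are made up.

```python
# Minimal Keras binary classifier on synthetic data.
import numpy as np
import tensorflow as tf

# Toy data: 1,000 samples with 20 features and a binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```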

Job posted by Rohit Dusad

Senior Data Scientist

Founded 2001
Location: Bengaluru (Bangalore)
Experience: 10 - 15 years
Salary: Best in industry

About the opportunity

We are looking for a trailblazer and practitioner Data Scientist to lead the initiatives we're taking to improve the product offerings of Blackhawk Network, from the perspective of risk modelling and business forecasting (prescriptive and predictive). As a Senior Data Scientist, you will own the research charter for Data & Decision Science, enabling business stakeholders to be data-driven and deterministic by providing insights into the decisions at hand as well as roadmap planning. You will be the senior member in a team of Data Scientists, providing mentorship and enabling a culture of 360-degree analysis of the business, with ownership of the modelling environments and risk engines. You will get the support to evangelise sound practices for prototyping concepts, to fail fast and/or maintain a continuum of persistent research.

You will collaborate with multi-disciplinary teams of engineers, product owners and business stakeholders to solve complex and ambiguous problems in the domains of:
- Gift cards
- E-commerce (B2B & B2C)
- Forecasting of inventory, traffic and breakage
- Risk modelling (scorecards) (see the sketch below)
- Fraud detection and prevention
- Loyalty and rewards programs modelling

Requirements:
- A strong background in advanced mathematics, in particular statistics and probability theory, data mining, and machine learning.
- 10+ years of overall professional experience, with 5+ years in data science doing exploratory data analysis, testing hypotheses, and building prescriptive and predictive models.
- Masters (or equivalent) degree in a quantitative discipline (Statistics, Operations Research, Data Science, Mathematics, Physics, Engineering etc.).
- Proficiency in a programming language of your choice (Python, R, Matlab, etc.), and previous experience efficiently conducting research and creating on-demand reports.
- Strong communication: the ability to articulate clearly and to navigate and adapt across different seniority levels.
- Ability to use statistical, algorithmic, data mining, and visualization techniques to model complex problems, find opportunities, discover solutions, and deliver actionable business insights.
- An excellent ability to learn new programming languages quickly and optimally.
- Passion for collaborating daily with your team and other groups while working via a distributed, multi-geo, global operating model.
- Eagerness to help your teammates, share your knowledge with them, and learn from them.
- Ability to work with large, complex data sets to produce insights and results.
- Big pluses include significant experience managing or shipping a product, leading a team, and working on open-source projects.
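
As a toy illustration of the scorecard-style risk modelling listed above (purely illustrative, not Blackhawk's models), the sketch below fits a logistic regression on synthetic features and reports AUC; all feature names and data are invented.

```python
# Toy scorecard-style risk model: logistic regression on synthetic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
X = np.column_stack([
    rng.normal(size=n),            # e.g. normalized order value (hypothetical)
    rng.normal(size=n),            # e.g. account-age signal (hypothetical)
    rng.integers(0, 2, size=n),    # e.g. first-time-buyer flag (hypothetical)
]).astype(float)

# Synthetic "risk" label loosely driven by the features plus noise.
logits = 0.8 * X[:, 0] - 1.2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]   # probability of the risky class
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```

In a real scorecard, the fitted coefficients would be binned and scaled into point values, but the probability-of-risk output above is the core of the approach.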

Job posted by Sachin Lala

Data Engineer

Founded 2011
Location: Bengaluru (Bangalore)
Experience: 2 - 4 years
Salary: ₹6L - ₹7L per annum

Rorko is looking for a Data Visualization Engineer with up to 1 year of experience in relevant fields. The candidate should have the ability to represent data in a manner that non-technical people can understand, and should be able to create dynamic data visualizations that help our clients make meaningful decisions in an interactive, web-based format (see the sketch below).
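
As one possible illustration of an interactive, web-based visualization (not Rorko's product), the sketch below uses Plotly Express to export a standalone HTML chart; the bundled iris dataset stands in for client data.

```python
# Interactive, web-based chart with Plotly Express, exported as standalone HTML.
import plotly.express as px

df = px.data.iris()  # bundled sample dataset, stand-in for client data

fig = px.scatter(
    df,
    x="sepal_width",
    y="sepal_length",
    color="species",
    hover_data=["petal_length", "petal_width"],  # extra context on hover
    title="Sample interactive scatter plot",
)
fig.write_html("sample_dashboard.html")  # open in any browser; fully interactive
```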

Job posted by Shilpa Singh

Data Engineer

Founded 2018
via Rely
Location: Bengaluru (Bangalore)
Experience: 2 - 10 years
Salary: ₹8L - ₹35L per annum

Intro

Our data and risk team is the core pillar of our business that harnesses alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia to be effortlessly in control of their spending and make better decisions.

What will you do

The data engineer is focused on making data correct and accessible, and on building scalable systems to access and process it. Another major responsibility is helping AI/ML engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources such as click stream, transactional and many other sources.
- Create and maintain optimal data pipeline architecture and ETL processes (see the workflow sketch below).
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Develop data pipelines and infrastructure to support real-time decisions.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.

What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses.
- Experience dealing with data at large scale; proficiency in writing and debugging complex SQL.
- Experience working with AWS big data tools.
- Ability to lead the project and implement best data practices and technology.
- Data pipelining: a strong command of building and optimizing data pipelines, architectures and data sets; a strong command of relational SQL and NoSQL databases including Postgres; data pipeline and workflow management tools such as Azkaban, Luigi, Airflow, etc.
- Big data: strong experience with big data tools and applications (Hadoop, Spark, HDFS etc.); AWS cloud services: EC2, EMR, RDS, Redshift; stream-processing systems: Storm, Spark Streaming, Flink etc.; message queuing: RabbitMQ, Spark etc.
- Software development and debugging: strong experience with object-oriented programming / object function scripting languages (Python, Java, C++, Scala, etc.); a strong hold on data structures and algorithms.

What would be a bonus
- Prior experience working in a fast-growth startup.
- Prior experience in payments, fraud, lending, or advertising companies dealing with large-scale data.
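
Since Airflow is named among the workflow tools above, here is a minimal sketch of an extract-transform-load DAG; the DAG id, schedule, and stubbed task bodies are illustrative assumptions (it also assumes Airflow 2.4+ for the `schedule` argument), not Rely's actual pipeline.

```python
# Minimal Airflow DAG sketch for a daily extract -> transform -> load flow.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from source systems")    # stub

def transform():
    print("clean and aggregate the extracted data") # stub

def load():
    print("load curated tables into the warehouse") # stub

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2021, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load   # linear dependency chain
```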

Job posted by Hizam Ismail

Aspirant - Data Science & AI

Founded 2012
Location: Bengaluru (Bangalore)
Experience: 0 - 10 years
Salary: ₹3L - ₹9L per annum

APPLY LINK: http://bit.ly/2yipqSE

Go through the entire job post thoroughly before pressing Apply. There is an eleven-character French word v*n*i*r*t*e mentioned somewhere in the whole text which is irrelevant to the context. You will be required to enter this word while applying, else the application won't be considered submitted.

Aspirant - Data Science & AI
Team: Sciences
Full-Time, Trainee
Bengaluru, India
Relevant Exp: 0 - 10 Years
Background: Top-tier institute
Compensation: Above standards

Busigence is a Decision Intelligence Company. We create decision intelligence products for real people by combining data, technology, business, and behavior, enabling strengthened decisions. We are a scaling, established startup founded by IIT alumni, innovating in and disrupting the marketing domain through artificial intelligence. We bring on board people who are dedicated to delivering wisdom to humanity by solving the world's most pressing problems differently, thereby significantly impacting thousands of souls, every day.

We are a deep-rooted organization with a six-year success story, having worked with folks from top-tier backgrounds (IIT, NSIT, DCE, BITS, IIITs, NITs, IIMs, ISI etc.) while maintaining an awesome culture with a common vision to build great data products. In the past we have served fifty-five customers and are presently developing our second product, Robonate. The first was emmoQ, an emotion intelligence platform. The third offering, H2HData, is an innovation lab where we solve hard problems through data, science, and design. We work extensively and intensely on big data, data science, machine learning, deep learning, reinforcement learning, data analytics, natural language processing, cognitive computing, and business intelligence.

First and foremost

Before you dive in exploring this opportunity and press Apply, we wish you to evaluate yourself. We are looking for the right candidate, not the best candidate. We love to work with someone who genuinely gels with our vision, beliefs, thoughts, methods, and values, which are aligned with what can be expected in a true startup with ambitious goals. Skills are always secondary to us. Primarily, you must be someone who is not essentially looking for a job or career, but rather starving for a challenge, probably without knowing since when. A book could be written on what an applicant must have before joining a startup. For brevity, in a nutshell, we need these three in you:

1. You must be [super sharp] (Just an analogue, but Irodov, Mensa, Feynman, Polya, ACM, NIPS, ICAAC, BattleCode, DOTA etc. should have been your done stuff. Can you relate solution 1 to problem 2, or do you get confused even when you have solved a similar problem in the past? Are you able to grasp a problem statement in one go, or do you get stuck?)
2. You must be [extremely energetic] (Do you raise eyebrows when asked to stretch your limits, both in terms of complexity or extra hours to put in? What comes first in your mind: let's finish it today, or this can be done tomorrow too? It's Friday 10 PM at work. Tired?)
3. You must be [honourably honest] (Do you tell others what you think, or what they want to hear? The latter is good for a sales team and their customers, not for this role. Are you honest with your work, and intrinsically with yourself first?)

You know yourself the best. If not, ask your loved ones and then decide. We clearly need exceedingly motivated people with entrepreneurial traits, not an employee mindset, not at all.

This is an immediate requirement. We shall have an accelerated interview process for fast closure; you would be required to be proactive and responsive.

Real ROLE

We are looking for students, graduates, and experienced folks with a real passion for algorithms, computing, and analysis. You would be required to work with our sciences team on complex cases from data science, machine learning, and business analytics.

Mandatory
R1. Must know functional programming (https://docs.python.org/2/howto/functional.html) in Python inside-out, with a strong flair for data structures, linear algebra, and algorithms implementation. OOP alone will not be accepted.
R2. Must have soiled hands on methods, functions, and workarounds in NumPy, Pandas, Scikit-learn, SciPy, Statsmodels: collectively you should have implemented at least 100 different techniques (we averaged out this figure with our past aspirants who have worked in this role).
R3. Must have implemented complex mathematical logic through a functional map-reduce framework in Python (a small functional-style sketch follows this posting).
R4. Must have an understanding of the EDA cycle, machine learning algorithms, hyper-parameter optimization, ensemble learning, regularization, predictions, clustering, and associations, at an essential level.
R5. Must have solved at least five problems through data science and machine learning. Mere Coursera learning and/or offline Kaggle attempts will not be accepted.

Preferred
R6. Good to have the calibre required to learn PySpark within four weeks of joining us.
R7. Good to have the calibre required to grasp the underlying business of a problem to be solved.
R8. Good to have an understanding of CNNs, RNNs, MLPs, and auto-encoders, at a basic level.
R9. Good to have solved at least three problems through deep learning. Mere Coursera learning and/or offline Kaggle attempts will not be accepted.
R10. Good to have worked on pre-processing techniques for images, audio, and text: OpenCV, Librosa, NLTK.
R11. Good to have used pre-trained models: VGGNet, Inception, ResNet, WaveNet, Word2Vec.

Ideal YOU
Y1. Degree in engineering, or any other data-heavy field, at Bachelors level or above from a top-tier institute.
Y2. Relevant experience of 0 - 10 years working on real-world problems in a reputed company or a proven startup.
Y3. You are a fanatical implementer who loves to spend time with content, code and workarounds, more than with your loved ones.
Y4. You are a true believer that human intelligence can be augmented through computer science and mathematics, and your survival vinaigrette depends on getting the most from the data.
Y5. You have an entrepreneurial mindset with ownership, intellectuality, and creativity as your way of working. These are not fancy words; we mean it.

Actual WE
W1. Real startup with meaningful products.
W2. Revolutionary, not just disruptive.
W3. Rule creators, not followers.
W4. Small teams with real brains, not a herd of blockheads.
W5. Completely trust us and be trusted back.

Why Us

In addition to the regular stuff which every good startup offers (lots of learning, food, parties, open culture, flexible working hours, and what not), we offer you: you shall be working on our revolutionary products, which are pioneers in their respective categories. This is a fact. We try real hard to hire fun-loving, crazy folks who are driven by more than a paycheck. You shall be working with the creamiest talent on extremely challenging problems at the most happening workplace.

How to Apply

You should apply online by clicking "Apply Now". For queries regarding an open position, please write to careers@busigence.com. For more information, visit http://www.busigence.com
Careers: http://careers.busigence.com
Research: http://research.busigence.com
Jobs: http://careers.busigence.com/jobs/data-science

If you feel you are the right fit for the position, mandatorily attach a PDF resume highlighting your:
A. Key skills
B. Knowledge inputs
C. Major accomplishments
D. Problems solved
E. Submissions: GitHub / StackOverflow / Kaggle / Project Euler etc. (if applicable)

If you don't see an open position that interests you, join our Talent Pool and let us know how you can make a difference here. Referrals are more than welcome. Keep us in the loop.
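
As a tiny, generic illustration of the functional map-reduce style referenced in R1/R3 (not Busigence code), the snippet below computes a sum of squared deviations with map and reduce instead of explicit loops or mutation.

```python
# Functional map-reduce style in Python: sum of squared deviations without loops.
from functools import reduce

values = [4.0, 7.5, 1.25, 9.0, 3.5]

mean = reduce(lambda acc, x: acc + x, values, 0.0) / len(values)

sum_sq_dev = reduce(
    lambda acc, sq: acc + sq,                  # reduce: fold squared deviations into a total
    map(lambda x: (x - mean) ** 2, values),    # map: squared deviation per element
    0.0,
)

print(mean, sum_sq_dev)
```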

Job posted by Seema Verma