
Locations

Navi Mumbai, Mumbai

Experience

1 - 5 years

Salary

INR 2L - 6L

Skills

BigTable
Python
Algorithms
Elastic Search
Hadoop
MapReduce
Natural Language Processing (NLP)

Job description

Snapwork Technologies is a results-driven design, development and products company located in Navi Mumbai. We have been the mobile partner of choice for leading financial and insurance companies, and we work with clients to plan a robust mobile strategy that helps them reap high dividends from their mobility efforts. At Snapwork it is really about simplicity: our solutions combine user-centered research, user interface design, process engineering, and interface/mobile development with clients' business objectives to create user delight. Our teams work in a collaborative environment that promotes best practices and personal growth. We have won impressive awards for our mobile apps from Google, along with excellence awards from IBM, Intel, Sequoia and YourStory.

Snapwork is looking for a Machine Learning enthusiast who can use their skills to research, build and implement solutions in the field of natural language processing and semantic knowledge extraction from structured data and unstructured text. You should have a deep love for machine learning and natural language processing, and a strong desire to solve challenging problems.

Responsibilities:
• Use NLP and machine learning techniques to create scalable solutions.
• Apply statistical natural language processing to mine unstructured data and create insights; analyze and model structured data using advanced statistical methods and implement the algorithms and software needed to perform the analyses.
• Research and implement novel approaches to solve real-world problems.
• Work closely with the engineering teams to drive real-time model implementations and new feature creation.

Requirements:
• An excellent problem solver with a research-oriented approach.
• Previous experience designing and developing real-world solutions using machine learning techniques.
• Deep understanding of NLP and statistical models.
• Algorithms geek and researcher in the field of data science.
• Experience capturing, managing and processing Big Data is an added advantage.
• Familiarity with Elastic MapReduce, Hadoop, AWS and BigTable is a plus.
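For illustration only, here is a minimal sketch of the kind of statistical NLP work the posting describes (mining unstructured text and grouping it into insights). The sample texts, the library choice (scikit-learn) and the cluster count are assumptions of the sketch, not part of Snapwork's stack.

```python
# Hypothetical sketch: mining unstructured text with TF-IDF features and clustering.
# The corpus, cluster count and use of scikit-learn are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = [
    "Customer reported a failed card payment in the mobile app",
    "Insurance claim form could not be uploaded from the app",
    "Payment gateway timeout while paying the premium",
    "App crashes when opening the claims history screen",
]

# Turn raw text into sparse TF-IDF vectors (unigrams and bigrams).
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
features = vectorizer.fit_transform(documents)

# Group similar complaints so they can be routed or analyzed together.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
for text, label in zip(documents, model.labels_):
    print(label, text)
```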

About the company

Founded in 2006, Snapwork Technologies is a profitable company based in Navi Mumbai. It currently has 51-250 employees and works in the fintech domain.

Founded

2006

Type

Products & Services

Size

51-250 employees

Stage

Profitable

Similar jobs

Software Developer

Java
Data Structures
Algorithms
Python
HTML/CSS
Linux/Unix
Databases
Google Cloud Storage
Location
Bengaluru (Bangalore)
Experience
0 - 2 years

About Achira: Achira's cutting-edge microfluidics technology empowers patients and doctors with convenient and timely access to accurate medical testing. We develop a proprietary lab-on-chip platform to perform rapid, quantitative and multiplexed immunoassays at a low cost.

Core values:
• Translate cutting-edge research into products that meet market demand
• Constantly innovate to build integrated solutions for healthcare needs
• Ethical and professional standards and behavior
• Encourage employee creativity, promoting merit and commitment

Job description: The selected candidate needs the skills to do the following:
• Design and test a user interface that articulates user perception for medical diagnostic equipment
• Create a backend database to acquire data from our instrument and a front-end application to visualize it
• Possess strong knowledge of data structures and algorithms, as varying amounts of data and data types are handled
• Be very good with software version control and the use of Git tools
• Prior experience with Python app development and cloud databases is an advantage
• Strong knowledge of Linux commands and the Linux operating system is required

The candidate should work with interdisciplinary teams to achieve the end goals. Delivering projects on competitive timelines is a quality we expect in every candidate.

Experience: 0-2 years

Who can apply?
• Candidates who are ready to put their software skills to use creating quality medical products that build a healthier world
• Candidates with a Bachelor's degree in Computer Science, Software Engineering, Information Technology, or another engineering or technical discipline

Other requirements
Must-have skills: Python, Java, C/C++ programming, OOP concepts, data structures, HTML/CSS.
Good-to-have skills: basic Android app development, AngularJS.
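As a rough illustration of the second responsibility (a backend store for instrument data that a front-end application could visualize), here is a minimal sketch. The table schema, analyte name and database file are hypothetical and not Achira's actual data model.

```python
# Hypothetical sketch: persisting instrument readings in a small backend store
# that a front-end visualization could query. Schema and sample values are
# illustrative assumptions.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("assay_readings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " chip_id TEXT NOT NULL,"
    " analyte TEXT NOT NULL,"
    " value REAL NOT NULL,"
    " recorded_at TEXT NOT NULL)"
)

def store_reading(chip_id: str, analyte: str, value: float) -> None:
    """Insert one measurement acquired from the instrument."""
    conn.execute(
        "INSERT INTO readings (chip_id, analyte, value, recorded_at) VALUES (?, ?, ?, ?)",
        (chip_id, analyte, value, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

store_reading("CHIP-001", "CRP", 4.2)

# A front-end application would run queries like this to drive its charts.
for row in conn.execute("SELECT analyte, value, recorded_at FROM readings"):
    print(row)
```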

Job posted by
Gokul R

Data Scientist

Founded 2013
Product
6-50 employees
Raised funding
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Mumbai
Experience
3 - 7 years

Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.

Responsibilities:
- Data mining using methods like associations, correlations, inferences, clustering and graph analysis.
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume.
- Design and implement machine learning, information extraction and probabilistic matching algorithms and models.
- Care about designing the full machine learning pipeline.
- Extend the company's data with third-party sources.
- Enhance data collection procedures.
- Process, clean and verify the data collected.
- Run ad hoc analyses of the data and present clear results.
- Create advanced analytics products that provide actionable insights.

The individual: We are looking for a candidate with the following skills, experience and attributes.

Required:
- 2+ years of work experience in machine learning.
- Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
- Knowledge of machine learning techniques and algorithms.
- Knowledge of languages and toolkits like Python, R and NumPy.
- Knowledge of data visualization tools like D3.js and ggplot2.
- Knowledge of query languages like SQL, Hive and Pig.
- Familiarity with Big Data architecture and tools like Hadoop, Spark and MapReduce.
- Familiarity with NoSQL databases like MongoDB, Cassandra and HBase.
- Good applied statistics skills: distributions, statistical testing, regression, etc.

Compensation & logistics: This is a full-time opportunity. Compensation will be in line with startup norms and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
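A minimal sketch of an item-based recommender of the kind the posting asks for, assuming a toy user-item rating matrix and plain cosine similarity; m.Paani's actual pipeline is not described in the listing.

```python
# Hypothetical sketch of an item-based collaborative-filtering recommender.
# The toy user-item matrix and the similarity measure are illustrative assumptions.
import numpy as np

# Rows = users, columns = items (e.g. local retail offers); values = ratings.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0, keepdims=True)
norms[norms == 0] = 1.0
item_sim = (ratings / norms).T @ (ratings / norms)

def recommend(user_idx: int, top_n: int = 2) -> list[int]:
    """Score unseen items by similarity to items the user already rated."""
    user = ratings[user_idx]
    scores = item_sim @ user
    scores[user > 0] = -np.inf          # do not re-recommend seen items
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(1))
```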

Job posted by
Julie K

Database Architect

Founded 2017
Products and services
6-50 employees
Raised funding
ETL
Data Warehouse (DWH)
DWH Cloud
Hadoop
Apache Hive
Spark
MongoDB
PostgreSQL
Location
Bengaluru (Bangalore)
Experience
5 - 10 years

The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key responsibilities:
- Create a GRAND Data Lake and warehouse that pools the data from GRAND's different regions and stores in GCC.
- Ensure source data quality measurement, and enrichment and reporting of data quality.
- Manage all ETL and data model update routines.
- Integrate new data sources into the DWH.
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure.

Skills needed:
- Very strong in SQL. Demonstrated experience with RDBMS; Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB).
- Experience with UNIX and comfortable working with the shell (bash or Korn preferred).
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce.
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
- Work with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users.
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios and Cloudera Manager Enterprise.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screen Hadoop cluster job performance and do capacity planning.
- Monitor Hadoop cluster connectivity and security.
- File system management and monitoring; HDFS support and maintenance.
- Collaborate with application teams to install operating system and Hadoop updates, patches and version upgrades when required.
- Define, develop, document and maintain Hive-based ETL mappings and scripts.
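As a rough sketch of one ETL step into a data lake of the kind described (extract raw store transactions, transform them, load partitioned columnar files that Hive or Spark SQL can query), assuming PySpark and an illustrative schema and output path rather than GRAND's actual model:

```python
# Hypothetical ETL sketch: ingest raw store transactions, standardize them, and
# write partitioned Parquet into a data lake path. Requires pyspark; schema,
# column names and output path are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("store-sales-etl").getOrCreate()

raw = spark.createDataFrame(
    [("DXB-01", "2024-01-05", "SKU-9", 3, 14.50),
     ("AUH-02", "2024-01-05", "SKU-2", 1, 99.00)],
    ["store_id", "sale_date", "sku", "qty", "unit_price"],
)

# Transform: enforce types, derive revenue and a partition column.
clean = (raw
         .withColumn("sale_date", F.to_date("sale_date"))
         .withColumn("revenue", F.col("qty") * F.col("unit_price"))
         .withColumn("sale_month", F.date_format("sale_date", "yyyy-MM")))

# Load: partitioned columnar files that Hive/Spark SQL can query in the lake.
clean.write.mode("overwrite").partitionBy("sale_month").parquet("/tmp/data_lake/sales")

spark.stop()
```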

Job posted by
Rahul Malani

DevOps Engineer
Elastic Search
Docker
MongoDB
servers
Location
Bengaluru (Bangalore)
Experience
2 - 5 years

If you want to work in a company that is changing life as you know it, then this is the place to be. We are creating Artificial Intelligence (AI) based agents that allow machines, businesses and customers to communicate with each other instantly with the help of AI. We are currently looking for a DevOps Engineer to work from our Bangalore location. Below is the detailed requirement.

Requirements - the candidate should have 2-5 years of experience in:
1. Deploying and managing multiple servers.
2. Hands-on management of database technologies like MongoDB, Redis and Elasticsearch.
3. Containerization, ideally using Docker and Kubernetes.
4. Working with real-time streaming technologies like Kafka and Kinesis.
5. Experience with big data technologies like Hadoop, HDFS and Spark is preferred.
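A minimal sketch of the kind of operational check implied by point 2 (managing MongoDB, Redis and Elasticsearch), assuming local default ports and the standard Python clients (pymongo, redis, elasticsearch); hostnames and timeouts are illustrative.

```python
# Hypothetical ops sketch: ping each backing store and report reachability.
# Hosts, ports and client libraries are illustrative assumptions about a local setup.
from pymongo import MongoClient
from redis import Redis
from elasticsearch import Elasticsearch

def check_services() -> dict[str, bool]:
    """Return a reachability flag for each data store."""
    status = {}

    try:
        MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=2000).admin.command("ping")
        status["mongodb"] = True
    except Exception:
        status["mongodb"] = False

    try:
        status["redis"] = Redis(host="localhost", port=6379, socket_timeout=2).ping()
    except Exception:
        status["redis"] = False

    try:
        status["elasticsearch"] = Elasticsearch("http://localhost:9200").ping()
    except Exception:
        status["elasticsearch"] = False

    return status

if __name__ == "__main__":
    print(check_services())
```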

Job posted by
Shweta Singh

Senior Data Scientist

Founded 2016
Product
1-5 employees
Raised funding
Python
Statistical Analysis
Machine Learning
Natural Language Processing (NLP)
Location
Bengaluru (Bangalore)
Experience
3 - 12 years

This is one of those exceptional roles where, unlike most data science roles that shape marketing or optimization efforts, data science here helps build a consumer-facing product from the ground up.

Key skills expected:
• Strong understanding of, and experience in, building efficient search and recommendation algorithms; experience in machine/deep learning would be beneficial
• Driving semantics through NLP of unstructured and semi-structured data
• Able to independently build large-scale crawlers, clean and analyze that information, and use it as an input feed to ranking/recommendation algorithms
• Excellent back-end coding skills
• Strong knowledge of hosting web services on platforms like AWS
• Experience with Python/Django would be a plus

We are looking for self-starters who want to solve genuinely hard problems.
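As a rough sketch of the crawl-clean-feed loop described above, assuming requests and BeautifulSoup and a placeholder seed URL; the politeness delay and page limit are illustrative, not the company's actual crawler.

```python
# Hypothetical crawler sketch: fetch pages, reduce them to cleaned text and links,
# and emit records a ranking/recommendation step could consume.
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def crawl(seed: str, max_pages: int = 5, delay: float = 1.0):
    seen, queue, records = set(), [seed], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        # Cleaned text plus discovered links become the downstream input feed.
        title = soup.title.string if soup.title and soup.title.string else ""
        records.append({"url": url, "title": title,
                        "text": " ".join(soup.get_text().split())})
        queue.extend(urljoin(url, a["href"]) for a in soup.find_all("a", href=True))
        time.sleep(delay)  # be polite to the host
    return records

print(crawl("https://example.com", max_pages=2))
```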

Job posted by
Varun Gupta

Data Scientist

Founded 2015
Services
6-50 employees
Profitable
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
Hyderabad
Experience
6 - 10 years

It is one of the largest communication technology companies in the world. It operates America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.

Job posted by
Sangita Deka

Data Scientist

Founded 2002
Products and services
6-50 employees
Profitable
Data Science
Machine Learning
R Programming
Python
Chatbot
Natural Language Processing (NLP)
Location
NCR (Delhi | Gurgaon | Noida), Goa, Bengaluru (Bangalore)
Experience
3 - 7 years

Founded in 2002, Srijan is today the largest pure-play Drupal agency in Asia. Srijan specializes in building high-traffic websites and complex web applications in Drupal and has been serving clients across the USA, Asia, Europe, Australia and the Middle East. Srijan has also expanded its technology portfolio to better serve its clients, adding development in JavaScript and Machine Learning/Data Science. We deploy dedicated engineering teams to enable continuous development and delivery of valuable software for our clients. Srijan uses agile and lean processes to help large enterprises, such as a global office retail company and one of the world's top management consulting firms, build and deliver on their digital strategies using Drupal. It also helps startups such as OnCorps and TheRecordXchange build online products, and helps e-commerce, retail and marketplace companies streamline their product marketing platforms, including delivering on an omnichannel strategy with Akeneo and Drupal Commerce.

In this role, the candidate will be responsible for building, testing and delivering cutting-edge technology applications, including scalable web interfaces (Node.js, React or AngularJS) and web-based chatbot (virtual assistant) solutions. Candidates are expected to play a consultative role, with a comprehensive understanding of intelligent virtual assistants.

Roles & responsibilities:
• Has successfully led and worked on developing a chatbot/virtual assistant for a large enterprise.
• Has played an active part in developing a Google/Amazon/Microsoft/Apple virtual assistant.
• Experienced in ingesting large sets of unstructured text data, including chat logs and voice logs, and modeling them in a graph DB using semantic annotation.
• Experience in machine learning, data and text mining, and predictive analysis.
• Able to understand business requirements and deliver quick prototypes; strong communication skills.
• Experienced in modeling and coding NLP algorithms for conversational bots in production.
• Experience with NoSQL databases like Cassandra.
• Experience with distributed caching frameworks like Hazelcast, Ignite and Redis.

The candidate must possess a passion for producing high-quality software, be ready to jump in and solve complex problems, and be able to mentor junior engineers and perform code reviews.
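A minimal sketch of the core of a conversational-bot NLP step (mapping a user utterance to an intent the bot can act on), assuming a tiny hand-written training set and a scikit-learn pipeline; Srijan's production approach is not specified in the posting.

```python
# Hypothetical intent-classification sketch for a virtual assistant.
# Training utterances, intent labels and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "where is my order", "track my shipment",
    "I want to return this item", "how do I send it back",
    "talk to a human", "connect me to support",
]
intents = [
    "order_status", "order_status",
    "return_item", "return_item",
    "handoff", "handoff",
]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(utterances, intents)

def reply(text: str) -> str:
    """Route the utterance to a canned response by predicted intent."""
    intent = model.predict([text])[0]
    responses = {
        "order_status": "Let me look up your order.",
        "return_item": "I can start a return for you.",
        "handoff": "Connecting you to a support agent.",
    }
    return responses[intent]

print(reply("can you tell me when my package arrives"))
```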

Job posted by
Ashish Rao

Software Engineer

Founded 2010
Products and services
250+ employees
Raised funding
Java
Data Structures
Algorithms
C/C++
J2EE
Location
Bengaluru (Bangalore)
Experience
0 - 1 years

Chai Point is a popular, fast-growing F&B brand in India powered by technology at its core. Chai Point has connected its stores and suppliers using Shark, an in-house cloud-based automation platform. Chai Point also pioneered a cloud-based platform, boxC.in, which uses the Internet of Things to efficiently manage tea, coffee and other beverages for its corporate clients. Chai Point has been an early adopter of serverless technologies using AWS Lambda, which allows the team to build highly scalable microservices that are easy to maintain.

We are looking to hire a Trainee Software Engineer to play a critical role in further enhancing our technology innovations and taking them to the next level. You will play a pivotal role in building Chai Point's next-generation systems, which will allow all business units and functions in Chai Point to work efficiently and give the best retail/online experience to Chai Point users. These systems will power the world's largest and fastest-growing chai retail chain.

Stream: IT / Computer Science
Degree: BE / BTech / MTech / MS

Skills: Good understanding of data structures and algorithms. Sound understanding of operating systems, database management systems and related technologies. You should have good hands-on programming experience with Java/JEE or another object-oriented language (C++, C#, etc.) from your college projects. Good exposure to OOAD concepts and OOP principles.

Expectations: As a Trainee Software Engineer at Chai Point you will be expected to:
• Adapt to a dynamic work environment.
• Study and understand product specifications thoroughly to design appropriate software solutions.
• Be keen to learn new technologies for solving interesting business problems.
• Develop code using industry best practices with good time and space complexity wherever applicable; your code should be readable and easily understood by your peers.
• Develop JUnit test cases with good code coverage.
• Optimize code and database queries to meet scaling needs.
• Work with leading technologies like IoT, Spring Framework, AWS Lambda, AWS API Gateway, MySQL, AWS CloudFormation, AWS DynamoDB, AWS ElastiCache, Git, Jira and Jenkins, among many others.
• Work with independence and show ownership of tasks.
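As a rough sketch of a serverless microservice endpoint of the sort described (AWS Lambda behind API Gateway): the posting describes a Java-based stack, so Python is used here only for brevity, and the route, menu data and payload shape are hypothetical.

```python
# Hypothetical AWS Lambda sketch behind an API Gateway proxy integration.
# The menu data and query parameter are illustrative assumptions; a real service
# would read from DynamoDB or ElastiCache rather than an in-memory dict.
import json

MENU = {"masala-chai": 20, "ginger-chai": 25, "filter-coffee": 30}

def lambda_handler(event, context):
    """Handle GET /price?item=<menu item> and return a JSON response."""
    item = (event.get("queryStringParameters") or {}).get("item", "")
    if item not in MENU:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown item"})}
    return {"statusCode": 200, "body": json.dumps({"item": item, "price_inr": MENU[item]})}

# Local smoke test (no AWS needed):
if __name__ == "__main__":
    print(lambda_handler({"queryStringParameters": {"item": "masala-chai"}}, None))
```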

Job posted by
Anurina .

Data Scientist

Founded 2011
Services
6-50 employees
Raised funding
Big Data
Data Science
Machine Learning
R Programming
Python
Haskell
Hadoop
Location
NCR (Delhi | Gurgaon | Noida)
Experience
2 - 4 years

URGENT! My client is looking for a Data Scientist with an M.Tech or PhD from a Tier 1 institute (such as IIT/IISc), a minimum of 2-3 years of experience, and skills in R, Python and Machine Learning. The position is with a very successful product development startup in the field of Artificial Intelligence and Big Data Analytics, based in Gurgaon. Send your resume to info@bidbox.in.

Job posted by
Suchit Aggarwal

Fullstack Developer

Founded 2017
Products and services
1-5 employees
Raised funding
Javascript
Python
NodeJS (Node.js)
MongoDB
MySQL
Django
Elastic Search
Location
Hyderabad
Experience
1 - 7 years

Full Stack Developer for integrating deep learning applications into the web.

Job posted by
Saurabh Arora