Founded in 2006, Snapwork Technologies is a profitable company based in Navi Mumbai. It currently has 51-250 employees and operates in the fintech domain.
About Achira: Achira's cutting-edge microfluidics technology empowers patients and doctors with convenient and timely access to accurate medical testing. We develop a proprietary lab-on-chip platform to perform rapid, quantitative and multiplexed immunoassays at a low cost.
Core Values:
• Translate cutting-edge research into products that meet market demand
• Constantly innovate to build integrated solutions for healthcare needs
• Uphold ethical and professional standards and behavior
• Encourage employee creativity; promote merit and commitment
Job Description: The selected candidate needs to possess the skills to do the following tasks:
• Design and test a user interface for a medical diagnostic instrument that reflects how users perceive and operate it
• Create a backend database to acquire data from our instrument and build a front-end application to visualize it (a minimal sketch follows this posting)
• Strong knowledge of data structures and algorithms is expected, as varying amounts of data and data types are handled
• Very good with software version control and Git tooling
• Prior experience with Python app development and cloud databases is an advantage
• Strong knowledge of Linux commands and the Linux operating system is required
The candidate will work with interdisciplinary teams to achieve the end goals. Delivering projects on competitive timelines is a quality we expect in every candidate.
Experience: 0-2 years
Who can apply?
• Candidates who are ready to put their software skills to use by creating quality medical products to build a healthier world
• Have a Bachelor's degree in Computer Science, Software Engineering, Information Technology, or another engineering or technical discipline
Other requirements
Skill(s) Must have: Python, Java, C/C++ programming, OOP concepts, Data Structures, HTML/CSS
Skill(s) Good to have: Basic Android app development, AngularJS
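The database-plus-visualization task above invites a concrete illustration. Below is a minimal Python sketch of one way to persist instrument readings in SQLite and query them back for a front end; the table layout and the read_instrument() stub are hypothetical stand-ins, not Achira's actual data model.

```python
# Minimal sketch: persist instrument readings in SQLite and query them back.
# The schema and read_instrument() stub are hypothetical illustrations.
import sqlite3
import time

def read_instrument():
    # Stand-in for real device I/O (e.g., over a serial port).
    return {"assay": "CRP", "value": 4.2, "unit": "mg/L"}

conn = sqlite3.connect("readings.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS readings (
           ts REAL, assay TEXT, value REAL, unit TEXT)"""
)

sample = read_instrument()
conn.execute(
    "INSERT INTO readings VALUES (?, ?, ?, ?)",
    (time.time(), sample["assay"], sample["value"], sample["unit"]),
)
conn.commit()

# A front-end application would run queries like this to visualize results.
for row in conn.execute("SELECT ts, assay, value, unit FROM readings"):
    print(row)
```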
Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.
Responsibilities:
- Data mining using methods like associations, correlations, inference, clustering, graph analysis, etc. (a minimal clustering sketch follows this posting)
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Design and implement machine learning, information extraction, and probabilistic matching algorithms and models
- Care about designing the full machine learning pipeline
- Extend the company's data with 3rd-party sources
- Enhance data collection procedures
- Process, clean and verify collected data
- Perform ad hoc analysis of the data and present clear results
- Create advanced analytics products that provide actionable insights
The Individual: We are looking for a candidate with the following skills, experience and attributes:
Required:
- 2+ years of work experience in machine learning
- Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
- Knowledge of machine learning techniques and algorithms
- Knowledge of languages and toolkits like Python, R, NumPy
- Knowledge of data visualization tools like D3.js, ggplot2
- Knowledge of query languages like SQL, Hive, Pig
- Familiarity with Big Data architecture and tools like Hadoop, Spark, MapReduce
- Familiarity with NoSQL databases like MongoDB, Cassandra, HBase
- Good applied statistics skills: distributions, statistical testing, regression, etc.
Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with startup norms and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
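As referenced in the responsibilities above, here is a minimal sketch of one stage of such a pipeline: clustering users with scikit-learn's KMeans. The toy spend matrix is illustrative only, not m.Paani data.

```python
# Minimal sketch of user clustering with scikit-learn; the toy purchase
# matrix below is a stand-in for real user data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows = users, columns = spend per product category (toy data).
X = np.array([[120, 3, 0], [130, 2, 1], [5, 90, 40], [2, 80, 55]])

# Scale features so no single category dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)  # e.g. [0 0 1 1] -> two user segments to recommend against
```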
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.
Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure
Skills Needed:
- Very strong in SQL; demonstrated experience with RDBMS and Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB, etc.)
- Experience with UNIX and comfortable working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Align with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Work with data delivery teams to set up new Hadoop users; this includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance as well as creation and removal of nodes using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and other tools
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines (a minimal mapper sketch follows this posting)
- Screen Hadoop cluster job performance and do capacity planning
- Monitor Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborate with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Define, develop, document and maintain Hive-based ETL mappings and scripts
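As one concrete illustration of the MapReduce routines named above, here is a minimal Hadoop Streaming mapper in Python; the tab-separated input layout (store_id, date, amount) is an assumption for illustration, and a companion reducer would sum the amounts per key.

```python
# Minimal Hadoop Streaming mapper (run via the hadoop-streaming jar together
# with a reducer). Input columns store_id/date/amount are assumed here.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) >= 3:
        store_id, _date, amount = fields[0], fields[1], fields[2]
        # Emit key<TAB>value; the reducer aggregates amounts per store.
        print(f"{store_id}\t{amount}")
```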
If you want to work in a company that is changing life as you know it, this is the place to be. We are creating Artificial Intelligence (AI) based agents that allow machines, businesses, and customers to communicate with each other instantly with the help of AI. We are currently looking for a DevOps Engineer to work from our Bangalore location. Below is the detailed requirement:
Requirement: The candidate should have 2-5 years of experience in:
1. Deploying and managing multiple servers
2. Hands-on experience managing DB technologies like Mongo/Redis/Elasticsearch
3. Containerization, ideally using Docker and Kubernetes
4. Working with real-time streaming technologies like Kafka, Kinesis, etc. (a minimal consumer sketch follows this posting)
5. Experience with big data technologies like Hadoop, HDFS, Spark, etc. is preferred
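As mentioned in item 4 above, here is a minimal sketch of consuming such a stream with the kafka-python client; the broker address and the topic name "events" are assumptions.

```python
# Minimal sketch of a Kafka consumer using kafka-python; broker address and
# topic name are assumed for illustration.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # Each message is one event from the stream; hand it to downstream logic.
    print(message.value)
```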
This is one of those exceptional roles where, unlike most data science roles that shape marketing/optimization efforts, data science helps build a consumer-facing product from the ground up.
Key Skills Expected:
• Strong understanding of and experience in building efficient search & recommendation algorithms; experience in machine/deep learning would be beneficial (a minimal ranking sketch follows this posting)
• Driving semantics through NLP of unstructured and semi-structured data
• Should be able to independently build large-scale crawlers, clean and analyze that information, and use it as an input feed to ranking/recommendation algorithms
• Should have excellent back-end coding skills
• Strong knowledge of hosting web services on platforms like AWS
• Experience with Python-Django would be a plus
We are looking for self-starters who want to solve genuinely hard problems.
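For the search/ranking work in the first bullet, below is a minimal TF-IDF ranking sketch with scikit-learn, one simple baseline among many; the documents are toy data standing in for crawled text.

```python
# Minimal TF-IDF search ranker: score documents against a query by cosine
# similarity. The documents are toy stand-ins for crawled content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "budget smartphones with long battery life",
    "flagship phone camera comparison",
    "best laptops for machine learning",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)

query_vec = vectorizer.transform(["phone with good battery"])
scores = cosine_similarity(query_vec, doc_matrix).ravel()
ranking = scores.argsort()[::-1]
print([docs[i] for i in ranking])  # documents ordered by relevance
```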
It is one of the largest communication technology companies in the world. It operates America's largest 4G LTE wireless network and the nation's premier all-fiber broadband network.
Chai Point is a popular, fast-growing F&B brand in India powered by technology at its core. Chai Point has connected its stores and suppliers using Shark, an in-house-developed, cloud-based automation platform. Chai Point pioneered a cloud-based platform, boxC.in, which uses the Internet of Things to efficiently manage tea, coffee and other beverages at its corporate clients. Chai Point has been an early adopter of serverless technologies using AWS Lambda, which allows the team to build highly scalable microservices that are easy to maintain (a minimal handler sketch follows this posting). We are looking to hire a Trainee Software Engineer to play a critical role in further enhancing our technology innovations and taking them to the next level. You will play a pivotal role in building Chai Point's next-generation systems, which will allow all business units and functions in Chai Point to work efficiently and give the best retail/online experience to Chai Point users. These systems will power the world's largest and fastest-growing chai retail chain.
Stream: IT / Computer Science
Degree: BE / BTech / MTech / MS
Skills: Good understanding of data structures and algorithms. Sound understanding of operating systems, database management systems and related technologies. You should have good hands-on programming experience with Java/JEE or any other object-oriented language (C++, C#, etc.) from your college projects. Good exposure to OOAD concepts and OOP principles.
Expectations: As a Trainee Software Engineer at Chai Point you will be expected to:
- Adapt to a dynamic work environment.
- Study and understand product specifications thoroughly to design appropriate software solutions.
- Be keen to learn new technologies for solving interesting business problems.
- Develop code using industry best practices, with good time and space complexity wherever applicable. Your code should be readable and easily understandable by your peers.
- Develop JUnit test cases with good code coverage.
- Optimize code and database queries to meet scaling needs.
- Work with leading technologies like IoT, Spring Framework, AWS Lambda, AWS API Gateway, MySQL, AWS CloudFormation, AWS DynamoDB, AWS ElastiCache, Git, Jira and Jenkins, among many others.
- Work with independence and show ownership of tasks.
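The AWS Lambda / API Gateway pattern named above can be illustrated with a minimal handler sketch. The posting's stack is Java, but the handler is shown in Python here purely to keep the sketch short; the query parameter and response shape are assumptions for an API Gateway proxy integration.

```python
# Minimal AWS Lambda handler behind API Gateway (proxy integration).
# The "name" query parameter and JSON response shape are illustrative.
import json

def handler(event, context):
    # API Gateway delivers the HTTP request as the `event` dict.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```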
URGENT! My client is looking for a Data Scientist: M.Tech / PhD from Tier 1 institutes (such as IIT / IISc), with a minimum of 2-3 years of experience and skills in R, Python and machine learning. The position is with a very successful product development startup in the field of Artificial Intelligence and Big Data Analytics, based in Gurgaon. Send in your resumes at email@example.com
Full Stack Developer for integrating deep learning applications into the web.