Founded by experienced founders and backed by Tier-1 VCs, GoKwik is on a mission to democratize the shopping experience on e-commerce platforms. Our aim is to provide a superior shopping experience for all our partners and to improve both customer satisfaction and our partners' GMV.
Being an early-stage company, we are looking for self-driven, motivated people who want to build something exciting and are always looking out for the next big thing. We plan to build this company remotely, which brings freedom but also an added sense of responsibility.
Data sits at the heart of GoKwik and plays a uniquely crucial role in what we do. With data, we build intelligent systems to enhance customer experiences, tackle e-commerce fraud, and personalize our product.
Fundamentally, data underpins all operations at GoKwik, and being part of the team gives you the chance to have a major impact across the Indian e-commerce ecosystem.
What you'll be doing:
What skills do you need:
Experience: 1-3 years
What will make you succeed:
Job roles and responsibilities:
Technical Skills Required:
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer who is well versed in AI and audio technologies, including audio detection, speech-to-text, interpretation, and sentiment analysis.
You will be responsible for:
You should have the following qualities:
Machine Learning, Audio Analysis, Sentiment Analysis, Speech-To-Text, Natural Language Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Only a solid grounding in computer engineering, Unix, and data structures and algorithms will enable you to meet this challenge.
7+ years of experience architecting, developing, releasing, and maintaining large-scale big data platforms on AWS or GCP
Understanding of how Big Data tech and NoSQL stores like MongoDB, HBase/HDFS, ElasticSearch synergize to power applications in analytics, AI and knowledge graphs
Understanding of how data processing models, data locality patterns, disk I/O, network I/O, and shuffling affect large-scale text processing, such as feature extraction and searching
Expertise with a variety of data processing systems, including streaming, event, and batch (Spark, Hadoop/MapReduce)
5+ years proficiency in configuring and deploying applications on Linux-based systems
5+ years of experience with Spark, especially PySpark, for transforming large unstructured text data and creating highly optimized pipelines
Experience with RDBMSs, ETL techniques and frameworks (Sqoop, Flume), and big data querying tools (Pig, Hive)
A stickler for world-class best practices, uncompromising on engineering quality, familiar with standards and reference architectures, and deeply versed in the Unix philosophy, with an appreciation of big data design patterns, orthogonal code design, and functional computation models
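To illustrate the data processing model referenced above, here is a toy word count written in the MapReduce style in plain Python (all names hypothetical); this is a single-process sketch only — real Spark or Hadoop jobs distribute these same phases across a cluster, where the shuffle step moves intermediate pairs over the network and often dominates I/O cost:

```python
from collections import defaultdict


def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word, as in classic word count.
    for line in lines:
        for word in line.lower().split():
            yield word, 1


def shuffle_phase(pairs):
    # Shuffle: group values by key. In a distributed job this is the step
    # that transfers data between nodes (network I/O).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups


def reduce_phase(groups):
    # Reducer: aggregate the grouped values, here by summing counts.
    return {word: sum(counts) for word, counts in groups.items()}


lines = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle_phase(map_phase(lines)))
print(counts["big"], counts["data"], counts["pipelines"])  # 2 2 2
```

In PySpark the same computation collapses to something like `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`, with the shuffle happening implicitly inside `reduceByKey`.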