Experienced in writing complex SQL SELECT queries, including window functions and CTEs
Will work as an individual contributor for the initial few months; based on project movement, a team will be aligned
Strong in querying logic and data interpretation
Solid communication and articulation skills
Able to handle stakeholders independently, with minimal intervention from the reporting manager
Develop strategies to solve problems in logical yet creative ways
Create custom reports and presentations accompanied by strong data visualization and storytelling
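The window functions and CTEs mentioned above can be illustrated with a minimal sketch. This uses Python's built-in sqlite3 module (SQLite 3.25+ supports window functions); the `orders` table and its data are hypothetical, purely for illustration.

```python
import sqlite3

# In-memory database with a hypothetical orders table (illustrative data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
  ('alice', '2024-01-05', 120.0),
  ('alice', '2024-02-10', 80.0),
  ('bob',   '2024-01-20', 200.0),
  ('bob',   '2024-03-01', 50.0);
""")

# A CTE computes per-customer totals; a window function ranks each
# customer's orders by amount.
query = """
WITH customer_totals AS (
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
)
SELECT o.customer,
       o.amount,
       t.total,
       RANK() OVER (PARTITION BY o.customer ORDER BY o.amount DESC) AS rnk
FROM orders o
JOIN customer_totals t ON t.customer = o.customer
ORDER BY o.customer, rnk;
"""
rows = conn.execute(query).fetchall()
```

The CTE keeps the aggregation readable and reusable, while the window function ranks rows without collapsing them, which is exactly the combination such roles tend to exercise.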
About Indium Software
- Data Engineer
Required skill set: AWS Glue, AWS Lambda, AWS SNS/SQS, AWS Athena, Spark, Snowflake, Python
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lake
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices.
- Scripting languages: Python and PySpark
- Create and manage cloud resources in AWS
- Ingest data from diverse sources that expose data through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
- Process and transform data using technologies such as Spark and cloud services; understand the relevant business logic and implement it in the language supported by the base data platform
- Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring best industry practices
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of at least one language: Java, Scala, Python, or C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
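The automated data quality checks mentioned in the responsibilities above can be sketched minimally. The field names and validation rules here are illustrative assumptions, not a specific schema from this role:

```python
# Minimal data-quality gate: validate records before they enter the platform.
# Field names and rules are illustrative assumptions, not a real schema.
def validate_record(record):
    """Return a list of rule violations for one record (empty = clean)."""
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    if record.get("currency") not in {"USD", "EUR", "INR"}:
        errors.append("unknown currency")
    return errors

def partition_records(records):
    """Split records into (clean, rejected) so only clean rows load."""
    clean, rejected = [], []
    for rec in records:
        errs = validate_record(rec)
        (clean if not errs else rejected).append((rec, errs))
    return [r for r, _ in clean], rejected

good, bad = partition_records([
    {"id": "a1", "amount": 10.5, "currency": "USD"},
    {"id": "", "amount": -3, "currency": "XYZ"},
])
```

In a Glue/Spark pipeline the same rule set would typically run as a filter stage before writing to the lake, with rejected rows routed to a quarantine location for review.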
Sizzle is an exciting new startup that’s changing the world of gaming. At Sizzle, we’re building AI to automate gaming highlights, directly from Twitch and YouTube streams. We’re looking for a superstar engineer that is well versed with computer vision and AI technologies around image and video analysis.
You will be responsible for:
- Developing computer vision algorithms to detect key moments within popular online games
- Leveraging baseline technologies such as TensorFlow, OpenCV, and others -- and building models on top of them
- Building neural network (CNN) architectures for image and video analysis, as it pertains to popular games
- Specifying exact requirements for training data sets, and working with analysts to create the data sets
- Training final models, including techniques such as transfer learning, data augmentation, etc. to optimize models for use in a production environment
- Working with back-end engineers to get all of the detection algorithms into production, to automate the highlight creation
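The data augmentation mentioned above can be shown with a toy sketch. In practice this would be done with TensorFlow or OpenCV on real frames; this pure-Python version just illustrates the idea of enlarging a training set with transformed copies:

```python
# Toy illustration of one augmentation step (horizontal flip) on an
# image represented as a 2D list of pixel values. Real pipelines would
# use TensorFlow/OpenCV; this only demonstrates the concept.
def hflip(image):
    """Mirror an image left-to-right."""
    return [row[::-1] for row in image]

def augment(images):
    """Double a training set by adding flipped copies of each image."""
    return images + [hflip(img) for img in images]

frame = [[1, 2, 3],
         [4, 5, 6]]
augmented = augment([frame])
```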
You should have the following qualities:
- Solid understanding of computer vision and AI frameworks and algorithms, especially pertaining to image and video analysis
- Experience using Python, TensorFlow, OpenCV and other computer vision tools
- Understanding of common computer vision object detection models in use today, e.g. Inception, R-CNN, YOLO, MobileNet SSD
- Demonstrated understanding of various algorithms for image and video analysis, such as CNNs, LSTM for motion and inter-frame analysis, and others
- Familiarity with AWS environments
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Ideally a gamer or someone interested in watching gaming content online
Machine Learning, Computer Vision, Image Processing, Neural Networks, TensorFlow, OpenCV, AWS, Python
Seniority: We are open to junior or senior engineers. We're more interested in the proper skillsets.
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply. However, if you don't have AI or computer vision experience, please do not apply.
- 9+ years of total experience, preferably in the big data space.
- Experience creating Spark applications using Scala to process data.
- Experience in scheduling and troubleshooting/debugging Spark jobs in steps.
- Experience in Spark job performance tuning and optimization.
- Should have experience in processing data using Kafka/Python.
- Should have experience and understanding of configuring Kafka topics to optimize performance.
- Should be proficient in writing SQL queries to process data in Data Warehouse.
- Hands on experience in working with Linux commands to troubleshoot/debug issues and creating shell scripts to automate tasks.
- Experience with AWS services such as EMR.
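Spark performance tuning, as called for above, usually starts with submission-time configuration. A sketch of the common knobs (the values here are placeholders, not recommendations; they must be tuned against the actual job and cluster):

```shell
# Illustrative spark-submit tuning flags (placeholder values):
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.shuffle.partitions=200 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  my_job.py
```

Shuffle partition count and executor sizing are typically the first levers for throughput; Kryo serialization and dynamic allocation help with serialization cost and cluster utilization respectively.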
Vahak (https://www.vahak.in) is India’s largest & most trusted online transport marketplace & directory for road transport businesses and individual commercial vehicle (Trucks, Trailers, Containers, Hyva, LCVs) owners for online truck and load booking, transport business branding and transport business network expansion. Lorry owners can find intercity and intracity loads from all over India and connect with other businesses to find trusted transporters and best deals in the Indian logistics services market. With the Vahak app, users can book loads and lorries from a live transport marketplace with over 5 Lakh + Transporters and Lorry owners in over 10,000+ locations for daily transport requirements.
Vahak has raised a capital of $5 Million in a “Pre Series A” round from RTP Global along with participation from Luxor Capital and Leo Capital. The other marquee angel investors include Kunal Shah, Founder and CEO, CRED; Jitendra Gupta, Founder and CEO, Jupiter; Vidit Aatrey and Sanjeev Barnwal, Co-founders, Meesho; Mohd Farid, Co-founder, Sharechat; Amrish Rau, CEO, Pine Labs; Harsimarbir Singh, Co-founder, Pristyn Care; Rohit and Kunal Bahl, Co-founders, Snapdeal; and Ravish Naresh, Co-founder and CEO, Khatabook.
As a Senior Product Analyst, you will play a key role in defining & growing our mobile & web applications. You will collaborate with Product Managers, Designers, Growth Marketers, Engineers, and Customer Success Managers. You will provide data-backed insights, dashboards & visualization capabilities to help teams assess product & business successes/failures & make data-driven decisions while planning and prioritizing.
Our ideal candidate will have strong data analysis & data visualization skills, will have led or managed analytics teams, and will have an understanding of data setups/infrastructure. We love people who are humble and collaborative, with a hunger for excellence.
Improve Dashboards & Data Visualization Capabilities:
- Create & manage dashboards on various mobile & web tools like Clevertap, Branch, Google Analytics, Zoho Page Sense, Firebase and other data tools that marketing and product teams use on a daily basis.
- Create richer & holistic (collating multiple data sources/platforms) dashboards and reports on Tableau & Metabase for Product, Marketing & Business teams.
- Improve dashboards & set up alert systems on Sentry, Firebase, and GA for product & technology teams to catch fatal crashes and errors in a timely manner.
Deliver Data Backed Insights & Periodic Analysis Reports:
- Perform deep data-analysis to identify issues, trends, opportunities and improvements possible in Acquisition, Activation, Transactions, Conversion Ratios & other product-driving metrics. Present findings to product, marketing and business teams.
- Perform analysis to assess the success of ongoing product initiatives, provide insights to product managers & designers to iterate product features and flows.
- Create & track product and marketing funnels to identify growth opportunities and problem areas. Collaborate with PMs and designers to discover underlying reasons for drop points. Analyze Funnel, Cohort, Trends, LTV, DAU, MAU, Retention & user behaviour using data & help teams make data-driven decisions.
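The funnel tracking described above boils down to measuring step-to-step conversion. A minimal sketch in pure Python; the step names and event data are illustrative assumptions, not this product's actual funnel:

```python
# Minimal funnel sketch: compute step-to-step conversion from per-user
# event sets. Step names and data are illustrative assumptions.
FUNNEL = ["install", "signup", "post_load", "transaction"]

def funnel_counts(user_events, steps=FUNNEL):
    """Count users who completed each step and every step before it."""
    counts = []
    survivors = list(user_events.values())
    for step in steps:
        survivors = [ev for ev in survivors if step in ev]
        counts.append(len(survivors))
    return counts

def conversion_ratios(counts):
    """Step-to-step conversion; None where the previous step had no users."""
    return [round(b / a, 3) if a else None
            for a, b in zip(counts, counts[1:])]

events = {
    "u1": {"install", "signup", "post_load", "transaction"},
    "u2": {"install", "signup"},
    "u3": {"install", "signup", "post_load"},
    "u4": {"install"},
}
counts = funnel_counts(events)      # [4, 3, 2, 1]
ratios = conversion_ratios(counts)  # [0.75, 0.667, 0.5]
```

The sharpest drop in the ratio list is the drop point to investigate with PMs and designers; tools like Clevertap and GA compute the same quantity from tracked events.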
Define Measurement & Success Criteria for Experiments, New Features and A/B Tests:
- Collaborate with Product Manager to design, execute and analyze A/B tests, define target KPIs & control segments.
- Help Product Managers define success criteria and measurement standards for new features and product offerings.
- Ensure the technical setup is in place to monitor important KPIs and collect custom-event data. Define custom events & properties to be tracked on Clevertap, GA, and Zoho Page Sense.
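Analyzing an A/B test, as described above, typically means testing whether two conversion rates differ significantly. A minimal two-proportion z-test sketch using only the standard library; the experiment numbers are hypothetical:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    Returns (z, p_value), using the pooled-proportion standard error.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal tail via erfc.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical experiment: control converts 120/2400, variant 156/2400.
z, p = two_proportion_ztest(120, 2400, 156, 2400)
significant = p < 0.05
```

The same calculation underlies the significance readouts in most experimentation tools; defining the target KPI and control segment up front is what makes the resulting p-value interpretable.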
Drive increased engagement & retention using Clevertap:
- Define AARRR KPIs and optimize them with Clevertap multi-channel journeys by continuously performing A/B tests.
- Create detailed and diverse user-segments to identify high, medium & low activity cohorts. Create strategies to increase the size of high-activity cohorts.
- Analyze website & mobile traffic, create user journeys & campaigns to enable cross-channel product experience and marketing.
Manage, Mentor & lead Product Analytics:
- Act as the analytics subject-matter expert in the team. Train, hire, and manage a team of analysts. Set up collaborative processes & management practices to help teams achieve important analytics goals.
- Help product stakeholders make data-driven decisions to achieve short- and long-term goals relating to product growth.
- Collaborate & hire analysts to improve the overall product analysis capabilities.
- Research and be updated with upcoming technologies and trends in digital product analytics.
- Bachelor’s Degree in Computer Science, Engineering or any other related field.
- 4+ years of experience in mobile/web application analytics or business analytics; must have led or managed a team before.
- In-depth knowledge & hands on experience of digital analytics software like Zoho Page Sense, Google Analytics, CleverTap, Branch, Firebase, Facebook Analytics.
- Expertise in dash-boarding tools like Tableau, Metabase and Excel.
- Experience with data warehousing concepts and ETL development and tools
- Fetching and manipulating large data sets using SQL / Hive.
- Analysing and visualizing large data sets: R / python.
- Ability to prioritize and manage multiple milestones and projects efficiently.
- Team spirit, good time-management skills, strong communication skills to collaborate with various stakeholders.
- Detail oriented, advanced problem solving skills with customer-centric approach.
- Preliminary Interview With the Product Head
- Technical Interview
- Operational Interview With The Product Head
- Final Interview With The CEO
- Architect and design for our customers' data-driven applications and solutions and own back-end technology
- Develop architectures that are inherently secure, robust, scalable, modular, and API-centric
- Build distributed backend systems serving real-time analytics and machine learning features at scale
- Own the scalability, performance, and performance metrics of complex distributed systems.
- Apply architecture best practices that help increase execution velocity
- Collaborate with the key stakeholders, like business, product, and other technology teams
- Mentor junior members in the team
- Excellent Academic Background (MS/B.Tech from a top tier university)
- 6-10 years of experience in backend architecture and development with large data volumes
- Extensive hands-on experience in the Big Data ecosystem (like Hadoop, Spark, Presto, Hive), databases (like MySQL, PostgreSQL), NoSQL (like MongoDB, Cassandra), and data warehousing like Redshift
- Experience in cloud-based technology solutions with scale and robustness
- Strong data management and migration experience including proficiency in data warehousing, data quality, and analysis.
- Experience in the development of microservices/REST APIs
- Experience with Agile and DevOps development methodology and tools like Jira, Confluence
- Understanding/exposure to complete product development cycle
- Strong Python Coding skills and OOP skills
- Should have worked on Big Data product Architecture
- Should have worked with any one of the SQL-based databases (like MySQL, PostgreSQL) and any one of the NoSQL-based databases (such as Cassandra, Elasticsearch, etc.)
- Hands on experience on frameworks like Spark RDD, DataFrame, Dataset
- Experience on development of ETL for data product
- Working knowledge of performance optimization, optimal resource utilization, parallelism, and tuning of Spark jobs
- Working knowledge of file formats: CSV, JSON, XML, Parquet, ORC, Avro
- Good to have working knowledge with any one of the Analytical Databases like Druid, MongoDB, Apache Hive etc.
- Experience handling real-time data feeds (working knowledge of Apache Kafka or a similar tool is good to have)
- Python and Scala (Optional), Spark / PySpark, Parallel programming
- Desire to explore new technology and break new ground.
- Are passionate about Open Source technology, continuous learning, and innovation.
- Have the problem-solving skills, grit, and commitment to complete challenging work assignments and meet deadlines.
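The ETL and file-format work listed above can be sketched at its smallest scale: read one format, transform, emit another. This stdlib-only example converts CSV rows to JSON lines; the column names are illustrative assumptions (real pipelines would use Spark DataFrames for Parquet/ORC/Avro at scale):

```python
import csv
import io
import json

# Tiny ETL sketch: extract CSV, transform a field's type, load as JSON
# lines. Column names are illustrative, not a real product schema.
raw_csv = """id,amount,currency
1,10.50,USD
2,3.25,EUR
"""

def csv_to_json_lines(text):
    """Transform CSV rows to JSON lines, casting amount to float."""
    out = []
    for row in csv.DictReader(io.StringIO(text)):
        row["amount"] = float(row["amount"])
        out.append(json.dumps(row, sort_keys=True))
    return out

lines = csv_to_json_lines(raw_csv)
```

The same extract-transform-load shape holds in Spark, where the reader/writer and the row-level transform are swapped for DataFrame operations over distributed files.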
- Engineer enterprise-class, large-scale deployments, and deliver Cloud-based Serverless solutions to our customers.
- You will work in a fast-paced environment with leading microservice and cloud technologies, and continue to develop your all-around technical skills.
- Participate in code reviews and provide meaningful feedback to other team members.
- Create technical documentation.
- Develop thorough Unit Tests to ensure code quality.
Skills and Experience
- Advanced skills in troubleshooting and tuning AWS Lambda functions developed with Java and/or Python.
- Experience with event-driven architecture design patterns and practices
- Experience in database design and architecture principles and strong SQL abilities
- Message brokers like Kafka and Kinesis
- Experience with Hadoop, Hive, and Spark (either PySpark or Scala)
- Demonstrated experience owning enterprise-class applications and delivering highly available distributed, fault-tolerant, globally accessible services at scale.
- Good understanding of distributed systems.
- Candidates will be self-motivated and display initiative, ownership, and flexibility.
- AWS Lambda function development experience with Java and/or Python.
- Lambda triggers such as SNS, SES, or cron.
- Cloud development experience with AWS services, including:
- AWS CLI
- API Gateway
- Java 8 or higher
- ETL data pipeline building
- Data Lake Experience
- MongoDB or similar NoSQL DB.
- Relational Databases (e.g., MySQL, PostgreSQL, Oracle, etc.).
- Gradle and/or Maven.
- Experience with Unix and/or macOS.
- Immediate Joiners
Nice to have:
- AWS / GCP / Azure Certification.
- Cloud development experience with Google Cloud or Azure
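The SNS-triggered Lambda development listed above can be sketched with a minimal handler. The event shape follows the standard SNS-to-Lambda record format; the business logic (an `order_id` field) is a hypothetical placeholder:

```python
import json

# Minimal AWS Lambda handler for an SNS trigger. The event follows the
# standard SNS record format; the processing is a hypothetical placeholder.
def handler(event, context):
    """Process each SNS record and report what was handled."""
    processed = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        # Placeholder business logic: collect the order id, if any.
        processed.append(message.get("order_id"))
    return {"statusCode": 200, "processed": processed}

# Local invocation with a fake SNS event (no AWS needed for a smoke test).
fake_event = {
    "Records": [
        {"Sns": {"Message": json.dumps({"order_id": "o-123"})}}
    ]
}
result = handler(fake_event, None)
```

Keeping the handler callable with a plain dict, as here, is what makes unit testing Lambda functions cheap before any deployment.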
We are building a global content marketplace that brings companies and content creators together to scale up content creation processes across 50+ content verticals and 150+ industries. Over the past 2.5 years, we’ve worked with companies like India Today, Amazon India, Adobe, Swiggy, Dunzo, Businessworld, Paisabazaar, IndiGo Airlines, Apollo Hospitals, Infoedge, Times Group, Digit, BookMyShow, UpGrad, Yulu, YourStory, and 350+ other brands.
Our mission is to become the world’s largest content creation and distribution platform for all kinds of content creators and brands.
We are a 25+ member company and are scaling up rapidly in both team size and ambition.
If we were to define the kind of people and the culture we have, it would be -
a) Individuals with an Extreme Sense of Passion About Work
b) Individuals with Strong Customer and Creator Obsession
c) Individuals with Extraordinary Hustle, Perseverance & Ambition
We are on the lookout for individuals who are always open to going the extra mile and thrive in a fast-paced environment. We are strong believers in building a great, enduring company that can outlast its builders and create a massive impact on the lives of our employees, creators, and customers alike.
We are fortunate to be backed by some of the industry’s most prolific angel investors - Kunal Bahl and Rohit Bansal (Snapdeal founders); YourStory Media (Shradha Sharma); Dr. Saurabh Srivastava, Co-founder of IAN and NASSCOM; Slideshare co-founder Amit Ranjan; Indifi's Co-founder and CEO Alok Mittal; Sidharth Rao, Chairman of Dentsu Webchutney; Ritesh Malik, Co-founder and CEO of Innov8; Sanjay Tripathy, former CMO, HDFC Life, and CEO of Agilio Labs; Manan Maheshwari, Co-founder of WYSH; and Hemanshu Jain, Co-founder of Diabeto.
Backed by Lightspeed Venture Partners
● Design, develop, test, deploy, maintain and improve ML models
● Implement novel learning algorithms and recommendation engines
● Apply Data Science concepts to solve routine problems of target users
● Translate business analysis needs into well-defined machine learning problems, and select appropriate models and algorithms
● Create an architecture; implement, maintain, and monitor various data source pipelines that can be used across different types of data sources
● Monitor performance of the architecture and conduct optimization
● Produce clean, efficient code based on specifications
● Verify and deploy programs and systems
● Troubleshoot, debug and upgrade existing applications
● Guide junior engineers for productive contribution to the development
ML and NLP Engineer
The ideal candidate must have -
● 4 or more years of experience in ML Engineering
● Proven experience in NLP
● Familiarity with generative language models such as GPT-3
● Ability to write robust code in Python
● Familiarity with ML frameworks and libraries
● Hands-on experience with AWS services like SageMaker and Personalize
● Exposure to state of the art techniques in ML and NLP
● Understanding of data structures, data modeling, and software architecture
● Outstanding analytical and problem-solving skills
● Team player, with the ability to work cooperatively with other engineers.
● Ability to make quick decisions in high-pressure environments with limited information.
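The NLP fundamentals behind the role above can be illustrated with the simplest possible language model: a bigram counter. Production work would use frameworks and models like GPT-3; this stdlib-only toy just shows the counting machinery underneath, with a made-up corpus:

```python
from collections import Counter, defaultdict

# Toy bigram language model: counts next-word frequencies, the simplest
# ancestor of modern generative models. Corpus is made up for illustration.
def train_bigrams(corpus):
    """Count next-word frequencies for each word in the corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def most_likely_next(model, word):
    """Return the highest-frequency continuation of `word`, or None."""
    nxt = model.get(word.lower())
    return nxt.most_common(1)[0][0] if nxt else None

corpus = [
    "the cat sat on the mat",
    "the cat ate the fish",
]
model = train_bigrams(corpus)
```

Modern models replace the counts with learned distributions over far longer contexts, but the interface, predicting the next token given what came before, is the same.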