What You’ll Do
- Synthesize data to provide actionable insights to the leadership and the product team to influence strategy and growth.
- Build dashboards and tools, craft analysis, and tell stories with data to help our teams make better decisions.
- Track user performance data and map it to key learning metrics for deeper insights.
- Monitor the performance of new features and recommend iterations based on your findings.
- Partner with other stakeholders to scope features that solve user problems.
- Own end-to-end analytics for the business unit's performance metrics: plan, structure, and identify the key reports to be built, and automate their delivery.
- Monitor and measure launched initiatives (A/B testing; a minimal significance-check sketch follows this list) and feed learnings back into the development process.
- Prepare reports for management describing trends, patterns, and predictions drawn from relevant data.
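To give a flavor of the A/B-testing work mentioned above, here is a minimal, hypothetical significance check in Python using scipy; the variant names and counts are made up for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical experiment results: [converted, not converted] per variant.
control = [120, 880]  # 12.0% conversion
variant = [150, 850]  # 15.0% conversion

# Chi-squared test of independence on the 2x2 contingency table.
chi2, p_value, dof, expected = chi2_contingency([control, variant])
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant; consider shipping the variant.")
else:
    print("No significant difference detected; keep iterating.")
```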
Required Skills
- Bachelor's degree in mathematics, finance, statistics, economics, computer science, or information technology. Knowledge of EdTech and learning science is an added advantage.
- Strong statistical skills to collect, measure, organize, and analyze data.
- 1-4 years of work experience in an EdTech product startup or analytics firm.
- Experience in data visualization, Excel, and CleverTap, and strong MongoDB database management skills (see the aggregation sketch after this list).
- Ability to learn new tech tools and software quickly.
- Passion for working with data for a larger social impact.
- Familiarity with coding (specifically the Node.js environment) is a plus.
- Ability to transform numbers into stories.
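For illustration, a minimal, hypothetical MongoDB aggregation in Python (pymongo) that maps user activity to a learning metric; the connection string, database, collection, and field names are all assumptions.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed connection string
db = client["learning_app"]                        # assumed database name

# Average quiz score and attempt count per course -- a simple learning metric.
pipeline = [
    {"$match": {"event": "quiz_completed"}},
    {"$group": {
        "_id": "$course_id",
        "avg_score": {"$avg": "$score"},
        "attempts": {"$sum": 1},
    }},
    {"$sort": {"avg_score": -1}},
]
for row in db["events"].aggregate(pipeline):
    print(row["_id"], round(row["avg_score"], 1), row["attempts"])
```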
About Humanitus Learning Science and Consultancy Pvt Limited
Job Responsibilities:
1. Develop/debug applications using Python.
2. Improve code quality and code coverage for existing and new programs.
3. Deploy and integrate machine learning models (see the serving sketch after this list).
4. Test and validate the deployments.
5. Support MLOps functions.
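As a flavor of deploying a model behind an API (point 3 above), here is a minimal, hypothetical Flask sketch; the model file and feature names are assumptions, and production code would add input validation and error handling.

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # assumed pre-trained scikit-learn model


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    # Assumed feature names and order -- validate inputs properly in production.
    features = [[payload["f1"], payload["f2"], payload["f3"]]]
    prediction = model.predict(features)[0]
    return jsonify({"prediction": float(prediction)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```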
Technical Skills
1. Graduate in Engineering or Technology with strong academic credentials
2. 4 to 8 years of experience as a Python developer.
3. Excellent understanding of SDLC processes
4. Strong knowledge of unit testing and code-quality improvement.
5. Cloud-based deployment and integration of applications/microservices.
6. Experience with NoSQL databases, such as MongoDB and Cassandra.
7. Strong applied statistics skills.
8. Knowledge of creating CI/CD pipelines and touchless deployment.
9. Knowledge of APIs and data engineering techniques.
10. Experience with AWS.
11. Knowledge of machine learning and large language models.
Nice to Have
1. Exposure to financial research domain
2. Experience with JIRA, Confluence
3. Understanding of scrum and Agile methodologies
4. Experience with data visualization tools, such as Grafana, ggplot, etc.
Senior Database Developer (PL/SQL)
Work Experience: 8+ Years
Number of Vacancies: 2
Location:
CTC: As per industry standards
Job Position: Oracle PL/SQL Developer
Required: Oracle Certified Database Developer
Key Skills:
- Must have basic knowledge of SQL queries, joins, DDL, DML, TCL, types, objects, and collections. Basic Oracle PL/SQL programming experience (procedures, packages, functions, exceptions).
- Develop, implement, and optimize stored procedures and functions using PL/SQL.
- Writing basic queries, packages, procedures, functions, triggers, and ref cursors using Oracle 11g to 19c features, and designing database objects (stored procedures, functions, packages, tables, views, triggers, indexes, constraints, collections, bulk collects, etc.).
- Must have basic knowledge of the PL/SQL Developer tool.
- Basic knowledge of MySQL and MongoDB administration.
- Strong communication skills
- Good interpersonal and teamwork skills
- PL/SQL stored procedures, functions, and triggers
- Bulk collection (BULK COLLECT)
- UTL_FILE
- Materialized views
- Performance tuning
- Use of hints in queries
- JSON (JSON_OBJECT, JSON_TABLE, JSON queries)
- BLOB/CLOB concepts
- External tables
- Dynamic SQL (see the client-side sketch after this list)
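As an illustration of the client side of these skills, a minimal, hypothetical Python sketch using the python-oracledb driver: it executes parameterized dynamic SQL and calls a stored procedure that returns a SYS_REFCURSOR. The credentials, table, and procedure names are assumptions for the example.

```python
import oracledb  # python-oracledb, the successor to cx_Oracle

# Hypothetical connection details -- replace with real credentials.
conn = oracledb.connect(user="app", password="secret", dsn="localhost/XEPDB1")
cur = conn.cursor()

# Dynamic SQL with bind variables (never string-concatenate user input).
cur.execute(
    "SELECT order_id, status FROM orders WHERE status = :status",
    status="OPEN",
)
for order_id, status in cur.fetchmany(10):
    print(order_id, status)

# Calling a hypothetical stored procedure that returns a SYS_REFCURSOR.
ref_cursor = conn.cursor()
cur.callproc("get_open_orders", [ref_cursor])
for row in ref_cursor:
    print(row)

conn.close()
```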
What is the role?
You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SaaS product. You will work closely with the Product Managers and Technical team to define and implement data pipelines for customer-facing and internal reports.
Key Responsibilities
- Design and develop resilient data pipelines.
- Write efficient queries to fetch data from the report database.
- Work closely with application backend engineers on data requirements for their stories.
- Design and develop report APIs for the front end to consume.
- Focus on building highly available, fault-tolerant report systems.
- Constantly improve the architecture of the application by clearing the technical backlog.
- Adopt a culture of learning and development to constantly keep pace with and adopt new technologies.
What are we looking for?
An enthusiastic individual with the following skills. Please do not hesitate to apply even if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.
- Education - BE/MCA or equivalent
- Overall 8+ years of experience
- Expert level understanding of database concepts and BI.
- Well versed in databases such as MySQL and MongoDB, with hands-on experience in creating data models.
- Must have designed and implemented low-latency data warehouse systems.
- Must have a strong understanding of Kafka and related systems (a minimal consumer sketch follows this list).
- Experience with the ClickHouse database is preferred.
- Must have good knowledge of APIs and should be able to build interfaces for frontend engineers.
- Should be innovative and communicative in approach
- Will be responsible for functional/technical track of a project
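To give a flavor of the Kafka work involved, here is a minimal, hypothetical consumer sketch using the kafka-python library; the topic name, broker address, and group id are assumptions for illustration.

```python
import json

from kafka import KafkaConsumer  # kafka-python

# Hypothetical topic and broker -- replace with real values.
consumer = KafkaConsumer(
    "report-events",
    bootstrap_servers="localhost:9092",
    group_id="report-pipeline",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Each message could be validated and written to the report database.
for message in consumer:
    event = message.value
    print(event.get("event_type"), event.get("user_id"))
```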
Whom will you work with?
You will work with a top-notch tech team, working closely with the CTO and product team.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining the quality of your work, interact and share your ideas, and learn a great deal on the job. Work with a team of highly talented young professionals and enjoy the benefits.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. We work with over 1,000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners, or consumers for better business results.
What we look for:
We are looking for an associate who will crunch data from various sources and surface the key points in it, help us improve and build new pipelines as requested, visualize the data where required, and find flaws in our existing algorithms.
Responsibilities:
- Work with multiple stakeholders to gather data and analysis requirements and act on them.
- Write new data pipelines and maintain the existing pipelines.
- Gather data from various databases and derive the required metrics from it (see the sketch after this list).
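For illustration, a minimal, hypothetical sketch of this metric-crunching work using pandas; the connection string, table, and column names are assumptions.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical database and table -- replace with real values.
engine = create_engine("postgresql://user:pass@localhost:5432/app")
orders = pd.read_sql("SELECT user_id, amount, created_at FROM orders", engine)

# Derive simple metrics: monthly revenue and active users.
orders["month"] = pd.to_datetime(orders["created_at"]).dt.to_period("M")
metrics = orders.groupby("month").agg(
    revenue=("amount", "sum"),
    active_users=("user_id", "nunique"),
)
print(metrics)
```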
Required Skills:
- Experience with Python and libraries like pandas and NumPy.
- Experience in SQL and an understanding of NoSQL databases.
- Hands-on experience in data engineering.
- Must have good analytical skills and knowledge of statistics.
- Understanding of Data Science concepts.
- Bachelor's degree in Computer Science or a related field.
- Problem-solving skills and ability to work under pressure.
Nice to have:
- Experience in MongoDB or any other NoSQL database.
- Experience in ElasticSearch.
- Knowledge of Tableau, Power BI or any other visualization tool.
In 2018-19, the mobile games market in India generated over $600 million in revenues. With close to 450 people in its Mumbai and Bangalore offices, Games24x7 is India's largest mobile games business today and is very well positioned to become the 800-pound gorilla of what will be a $2 billion market by 2022. While Games24x7 continues to invest aggressively in its India-centric mobile games, it is also diversifying its business by investing in international gaming and other tech opportunities.
Summary of Role
Position/Role Description:
The candidate will be part of a team managing databases (MySQL, MongoDB, Cassandra) and will be involved in designing, configuring and maintaining databases.
Job Responsibilities:
• Complete involvement in the database requirement starting from the design phase for every project.
• Deploying required database assets on production (DDL, DML)
• Good understanding of MySQL replication (master-slave, master-master, GTID-based); a minimal health-check sketch follows this list.
• Understanding of MySQL partitioning.
• A good understanding of MySQL logs and configuration.
• Ways to schedule backups and restoration.
• Good understanding of MySQL versions and their features.
• Good understanding of the InnoDB engine.
• Exploring ways to optimize the current environment and also lay a good platform for new projects.
• Able to understand and resolve any database-related production outages.
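Since the role pairs database work with Python scripting, here is a minimal, hypothetical sketch that checks replica health with mysql-connector-python; the host and credentials are assumptions.

```python
import mysql.connector  # mysql-connector-python

# Hypothetical credentials -- replace with real values.
conn = mysql.connector.connect(host="replica-1", user="monitor", password="secret")
cur = conn.cursor(dictionary=True)

# SHOW REPLICA STATUS on MySQL 8.0.22+; older versions use SHOW SLAVE STATUS.
cur.execute("SHOW REPLICA STATUS")
status = cur.fetchone()

if status is None:
    print("Not configured as a replica")
else:
    print("IO thread:", status["Replica_IO_Running"])
    print("SQL thread:", status["Replica_SQL_Running"])
    print("Lag (s):", status["Seconds_Behind_Source"])

conn.close()
```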
Job Requirements:
• BE/B.Tech from a reputed institute
• Experience in Python scripting.
• Experience in shell scripting.
• General understanding of system hardware.
• Experience in MySQL is a must.
• Experience in MongoDB, Cassandra, or graph databases will be preferred.
• Experience with Percona MySQL tools.
• 6 - 8 years of experience.
Job Location: Bengaluru
Company Profile:
Easebuzz is a payment solutions company (a fintech organisation) which enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We are focusing on building plug-and-play products, including the payment infrastructure, to solve complete business problems. It is definitely a wonderful place where all the action related to payments, lending, subscriptions, and eKYC is happening at the same time.
We have been consistently profitable and are constantly developing new innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a fundraise of $4M in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz's corporate culture is tied into the vision of building a workplace that breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.
Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.
Salary: As per company standards.
Designation: Data Engineer
Location: Pune
- Experience with ETL, data modeling, and data architecture.
- Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.
- Experience with AWS cloud data lakes for development of real-time or near-real-time use cases.
- Experience with messaging systems such as Kafka/Kinesis for real-time data ingestion and processing (a minimal producer sketch follows this list).
- Build data pipeline frameworks to automate high-volume and real-time data delivery.
- Create prototypes and proofs of concept for iterative development.
- Experience with NoSQL databases, such as DynamoDB and MongoDB.
- Create and maintain optimal data pipeline architecture.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the executive, product, data, and design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Evangelize a very high standard of quality, reliability, and performance for data models and algorithms that can be streamlined into the engineering and science workflows.
- Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
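To give a flavor of the Kinesis ingestion mentioned above, a minimal, hypothetical producer sketch using boto3; the region, stream name, and record shape are assumptions.

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Hypothetical clickstream event -- replace with a real payload.
event = {"user_id": "u-123", "action": "checkout", "amount": 499}

kinesis.put_record(
    StreamName="clickstream",          # assumed stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],     # keeps a user's events on one shard
)
```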
Employment Type
Full-time
Position: Big Data Engineer
What You'll Do
Punchh is seeking to hire a Big Data Engineer at either a senior or tech-lead level. Reporting to the Director of Big Data, this engineer will play a critical role in leading Punchh's big data innovations. By leveraging prior industry experience in big data, they will help create cutting-edge data and analytics products for Punchh's business partners.
This role requires close collaboration with the data, engineering, and product organizations. The job functions include:
- Work with large data sets and implement sophisticated data pipelines with both structured and unstructured data.
- Collaborate with stakeholders to design scalable solutions.
- Manage and optimize our internal data pipeline that supports marketing, customer success and data science to name a few.
- Serve as a technical leader of Punchh's big data platform that supports AI and BI products.
- Work with the infra and operations teams to monitor and optimize existing infrastructure.
- Occasional business travels are required.
What You'll Need
- 5+ years of experience as a Big Data engineering professional, developing scalable big data solutions.
- Advanced degree in computer science, engineering or other related fields.
- Demonstrated strength in data modeling, data warehousing and SQL.
- Extensive knowledge of cloud technologies, e.g. AWS and Azure.
- Excellent software engineering background. High familiarity with the software development life cycle. Familiarity with GitHub/Airflow (a minimal DAG sketch follows this list).
- Advanced knowledge of big data technologies, such as programming languages (Python, Java), relational databases (Postgres, MySQL), NoSQL (MongoDB), Hadoop (EMR), and streaming (Kafka, Spark).
- Strong problem-solving skills with demonstrated rigor in building and maintaining complex data pipelines.
- Exceptional communication skills and ability to articulate a complex concept with thoughtful, actionable recommendations.
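As a flavor of the Airflow work named above, a minimal, hypothetical DAG sketch; the DAG id, task bodies, and schedule are assumptions for illustration.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder for pulling raw events from a source system.
    print("extracting")


def load():
    # Placeholder for loading transformed data into the warehouse.
    print("loading")


with DAG(
    dag_id="daily_events_pipeline",  # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # extract runs before load
```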
Our client is an innovative fintech company that is revolutionizing the business of short-term finance. The company is an online lending startup driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers. Its founders are IIT and ISB alumni with deep experience in the fintech industry from earlier working with organizations like Axis Bank, Aditya Birla Group, Fractal Analytics, and Housing.com. It has raised funds of Rs. 100 crore from finance industry stalwarts and is growing by leaps and bounds.
- Ensuring ease of data availability, with relevant dimensions, using Business Intelligence tools.
- Providing strong reporting and analytical information support to the management team.
- Transforming raw data into essential metrics based on the needs of relevant stakeholders.
- Performing data analysis for generating reports on a periodic basis.
- Converting essential data into easy-to-reference visuals using data visualization tools (Power BI, Metabase).
- Providing recommendations to update current MIS to improve reporting efficiency and consistency.
- Bringing fresh ideas to the table and keenly observing trends in the analytics and financial services industry.
What you need to have:
- B.Tech/B.E., MBA, or PGDM, with 3+ years of work experience.
- Experience in reporting, data management (SQL, MongoDB), and visualization (Power BI, Metabase, Data Studio).
- Work experience in financial services (Indian banks'/NBFCs' in-house analytics units, or fintech/analytics start-ups) would be a plus.
- Skilled at writing and optimizing large, complicated SQL queries and MongoDB scripts.
- Strong knowledge of Banking/ Financial Services domain
- Experience with some of the modern relational databases
- Self-driven, with the ability to work on multiple projects of a different nature.
- Liaise with cross-functional teams to resolve data issues and build strong reports
We are looking for a Data Engineer who will be responsible for collecting, storing, processing, and analyzing huge sets of data coming from different sources.
Responsibilities
- Working with big data tools and frameworks to provide requested capabilities
- Identifying development needs in order to improve and streamline operations
- Developing and managing BI solutions
- Implementing ETL processes and data warehousing
- Monitoring performance and managing infrastructure
Skills
- Proficient understanding of distributed computing principles
- Proficiency with Hadoop and Spark
- Experience with building stream-processing systems using solutions such as Kafka and Spark Streaming (a minimal structured-streaming sketch follows this list)
- Good knowledge of data querying tools such as SQL and Hive
- Knowledge of various ETL techniques and frameworks
- Experience with Python/Java/Scala (at least one)
- Experience with cloud services such as AWS or GCP
- Experience with NoSQL databases, such as DynamoDB and MongoDB, will be an advantage
- Excellent written and verbal communication skills
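As an illustration of the stream-processing stack named above, a minimal, hypothetical PySpark Structured Streaming sketch that reads from Kafka; the topic and broker are assumptions, and the job needs the Spark Kafka connector package on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Hypothetical Kafka topic and broker -- replace with real values.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string.
decoded = events.select(col("value").cast("string").alias("raw_event"))

# Write the stream to the console for inspection; a real job would use a
# durable sink such as a data lake path.
query = (
    decoded.writeStream
    .format("console")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```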