● Perform exploratory analysis: fetch data from source systems and analyze trends.
● Develop customer segmentation models to improve the efficiency of marketing and product campaigns.
● Establish mechanisms for cross-functional teams to consume customer insights and improve engagement across the customer lifecycle.
● Gather dashboard requirements from business, marketing, and operations stakeholders.
● Prepare internal reports for executive leadership to support their decision-making.
● Analyse data, derive insights, and embed them into business actions.
● Work with cross-functional teams.
Skills Required
• Data Analytics Visionary.
• Strong in SQL and Excel; Tableau experience is a plus.
• Experience in data analysis and data visualization.
• Strong at analysing data and creating dashboards.
• Strong communication, presentation, and business intelligence skills.
• Multi-dimensional "growth hacker" skill set with a strong sense of ownership of one's work.
• Aggressive, "take no prisoners" approach.
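The SQL-and-dashboards work described above (trend analysis, customer segmentation) can be sketched with the standard library's sqlite3 module; the table, values, and spend threshold below are hypothetical stand-ins, not part of the role description.

```python
import sqlite3

# Toy orders table standing in for a real source system (illustrative values only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("alice", "2024-01", 120.0),
        ("alice", "2024-02", 180.0),
        ("bob",   "2024-01", 40.0),
        ("bob",   "2024-02", 35.0),
        ("carol", "2024-02", 300.0),
    ],
)

# Trend analysis: total revenue per month.
trend = conn.execute(
    "SELECT month, SUM(amount) FROM orders GROUP BY month ORDER BY month"
).fetchall()

# Segmentation: bucket customers by lifetime spend (threshold is made up).
segments = conn.execute(
    """
    SELECT customer,
           CASE WHEN SUM(amount) >= 200 THEN 'high-value'
                ELSE 'standard' END AS segment
    FROM orders
    GROUP BY customer
    ORDER BY customer
    """
).fetchall()

print(trend)     # [('2024-01', 160.0), ('2024-02', 515.0)]
print(segments)
```

The same GROUP BY / CASE pattern carries over directly to a warehouse behind a Tableau dashboard.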
About Catalyst IQ
Our deep bench of CXOs will not only help cover the entire value chain of your core business but also help you grow professionally in your career.
Similar jobs
Senior Director – Strategy (Data & Analytics)
The client is a leading media agency that drives a more strategic, creative, and integrated media approach, established to deliver better business results. Their team is made up of best-in-class digital, offline, and integrated media experts who work together to enhance media's contribution to their clients' business.
Responsibilities of the role:
We are looking for a talented Director to join our leadership team and manage the delivery of data engineering projects. In this role, you will manage a team of highly skilled engineers, scientists, and analysts to deliver data-driven solutions that exceed client goals.
As the Director, Data and Analytics, you will work with the agency and client to enable the data and tech solutions that lay the foundation for executing brand & performance marketing.
- Set the vision, strategy, and roadmap to deliver cutting-edge data, technology, and analytics solutions across a portfolio of clients.
- Become a trusted advisor to clients, consulting on growth opportunities and identifying new ways to use data to solve business challenges and support growth.
- Support development and deployment of products and services across multiple cloud environments
- Collaborate with a cross-functional team of client leads, application developers, operations engineers, and architects to translate complex product requirements into technical specs and design requirements
- Act as a consultant and subject matter expert for internal stakeholders in GroupM Data & Analytics, GroupM Engineering and agency data science and tech leads
- Build and manage a team of experts in data strategy, measurement/business analysis, advanced analytics, and marketing technology to deliver best-in-class audience and measurement strategy in collaboration with Client, Strategy, and Investment teams.
What you will need:
- 7+ years of experience in data analytics / quantitative research
- Proven thought leadership and a deep understanding of the data and technology issues facing the marketing industry
- Experience building, managing, and inspiring large, diverse teams of data and technology specialists
- A proven track record of working with a diverse array of clients to solve complex problems and deliver demonstrable business success, including (but not limited to) the development of compelling and sophisticated data strategies and AdTech/martech strategies to enable marketing objectives
- Master's or Bachelor's degree in Statistics, Mathematics, or a related quantitative field
About UpSolve
We build and deliver complex AI solutions that help drive business decisions faster and more accurately. We are an AI company with a range of solutions developed on video, image, and text.
What you will do
- Stay informed on new technologies and implement cautiously
- Maintain necessary documentation for the project
- Fix the issues reported by application users
- Plan, build, and design solutions with future requirements in mind
- Coordinate with the development team to manage fixes, code changes, and merging
Location: Mumbai
Working Mode: Remote
What are we looking for
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Minimum 2 years of professional experience in software development, with a focus on machine learning and full stack development.
- Strong proficiency in Python programming language and its machine learning libraries such as TensorFlow, PyTorch, or scikit-learn.
- Experience in developing and deploying machine learning models in production environments.
- Proficiency in web development technologies including HTML, CSS, JavaScript, and front-end frameworks such as React, Angular, or Vue.js.
- Experience in designing and developing RESTful APIs and backend services using frameworks like Flask or Django.
- Knowledge of databases and SQL for data storage and retrieval.
- Familiarity with version control systems such as Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work effectively in a fast-paced and dynamic team environment.
- Good to have Cloud Exposure
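The requirements above combine ML with backend API work (Flask or Django in practice). As a minimal, framework-free sketch using only the standard library, here is a toy "model" served behind a /predict endpoint; the weights, route, and payload shape are all hypothetical stand-ins for a real TensorFlow/PyTorch/scikit-learn model behind a production framework.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in "model": a hand-set linear scorer (weights are made up).
WEIGHTS = [0.5, 0.25]

def predict(features):
    return sum(w * x for w, x in zip(WEIGHTS, features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"score": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [1.0, 2.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result)  # {'score': 1.0}
```

In Flask or Django the handler class collapses to a single decorated view function, but the request/response contract is the same.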
Role Description
This is a full-time hybrid role as a GCP Data Engineer. You will be responsible for managing large sets of structured and unstructured data and for developing processes that convert data into insights, information, and knowledge.
Skill Name: GCP Data Engineer
Experience: 7-10 years
Notice Period: 0-15 days
Location: Pune
If you have a passion for data engineering and possess the following, we would love to hear from you:
🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)
🔹 4+ years of experience on Google Cloud Platform, with a focus on BigQuery
🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API Scripting
🔹 Experience in the Finance/Revenue domain would be considered an added advantage
🔹 Familiarity with GCP Migration activities and the DBT Tool would also be beneficial
You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.
Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.
Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.
TOP 3 SKILLS
Python (language)
Spark framework
Spark Streaming
Also required: Docker / Jenkins / Spinnaker, AWS, Hive queries.
He/she should be a good coder.
Preferred: Airflow
Must-have experience:
Python
Spark framework and streaming
Exposure to the machine learning lifecycle (mandatory)
Project:
This is a search-domain project. For any search activity happening on the website, this team builds the model: data scientists create sorting/scoring models for each search. The team works mostly on the streaming side of the data, so the candidate would work extensively with Spark Streaming, and there will be a lot of machine learning work.
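As a hedged, pure-Python sketch of the search-scoring idea described above: score each event in a batch and return results best-first. In the real pipeline this logic would run per micro-batch under Spark Structured Streaming, and the features and weights here are invented for illustration.

```python
# Hypothetical feature weights for a search ranking model; in practice these
# would come from the data scientists' trained model.
WEIGHTS = {"text_match": 2.0, "popularity": 1.0, "recency": 0.5}

def score(event):
    """Linear relevance score over an event's features (missing features count as 0)."""
    return sum(WEIGHTS[k] * event["features"].get(k, 0.0) for k in WEIGHTS)

def rank_batch(events):
    """Score one micro-batch of search events and return results best-first."""
    return sorted(
        ({"doc": e["doc"], "score": score(e)} for e in events),
        key=lambda r: r["score"],
        reverse=True,
    )

# One toy micro-batch of search candidates.
batch = [
    {"doc": "a", "features": {"text_match": 1.0, "popularity": 0.0}},
    {"doc": "b", "features": {"text_match": 0.5, "popularity": 2.0}},
]
ranked = rank_batch(batch)
print(ranked)  # doc "b" first (score 3.0), then doc "a" (score 2.0)
```

Under Spark, `rank_batch` would become the per-batch function applied by the streaming query rather than a plain call.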
INTERVIEW INFORMATION
3-4 rounds.
1st round: data engineering batching experience.
2nd round: data engineering streaming experience.
3rd round: ML lifecycle (the 3rd round can be a techno-functional round based on feedback from the earlier rounds; otherwise, a 4th, functional round will be held if required).
Senior Data Engineer
Responsibilities:
● Clean, prepare and optimize data at scale for ingestion and consumption by machine learning models
● Drive the implementation of new data management projects and re-structure of the current data architecture
● Implement complex automated workflows and routines using workflow scheduling tools
● Build continuous integration, test-driven development and production deployment frameworks
● Drive collaborative reviews of design, code, test plans and dataset implementation performed by other data engineers in support of maintaining data engineering standards
● Anticipate, identify and solve issues concerning data management to improve data quality
● Design and build reusable components, frameworks and libraries at scale to support machine learning products
● Design and implement product features in collaboration with business and technology stakeholders
● Analyze and profile data for the purpose of designing scalable solutions
● Troubleshoot complex data issues and perform root cause analysis to proactively resolve product and operational issues
● Mentor and develop other data engineers in adopting best practices
● Influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
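The "complex automated workflows using workflow scheduling tools" responsibility above is usually handled by a scheduler such as Airflow, which models a pipeline as a DAG of tasks run in dependency order. Here is a minimal pure-Python sketch of that idea using the standard library's graphlib; the task names are illustrative, not from any real pipeline.

```python
from graphlib import TopologicalSorter

log = []  # records the order in which tasks actually ran

def make_task(name):
    def task():
        log.append(name)  # a real task would extract, transform, or load data
    return task

tasks = {name: make_task(name) for name in ["extract", "clean", "train", "report"]}

# Each key maps a task to the set of upstream tasks it depends on.
dag = {
    "clean": {"extract"},
    "train": {"clean"},
    "report": {"clean"},
}

# Run every task only after all of its dependencies have finished.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()

print(log)  # 'extract' first, 'clean' before both 'train' and 'report'
```

Airflow adds scheduling, retries, and distributed execution on top, but the dependency-ordered core is the same.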
Qualifications:
● 8+ years of experience developing scalable Big Data applications or solutions on distributed platforms
● Experience with Google Cloud Platform (GCP); experience with other cloud platforms is a plus
● Experience working with data warehousing tools, including DynamoDB, SQL, and Snowflake
● Experience architecting data products in streaming, serverless, and microservices architectures and platforms
● Experience with Spark (Scala/Python/Java) and Kafka
● Work experience with Databricks (Data Engineering and Delta Lake components)
● Experience working with Big Data platforms, including Dataproc, Databricks, etc.
● Experience working with distributed technology tools, including Spark, Presto, Databricks, and Airflow
● Working knowledge of data warehousing and data modeling
● Experience working in Agile and Scrum development process
● Bachelor's degree in Computer Science, Information Systems, Business, or other relevant subject area
Role: Senior Data Engineer
Total No. of Years: 8+ years of relevant experience
To be onboarded by: Immediate
Notice Period:
Skills (Mandatory/Desirable, with min-max years of project experience):
GCP exposure: Mandatory, 3 to 7
BigQuery, Dataflow, Dataproc, AI Building Blocks, Looker, Cloud Data Fusion, Dataprep, Spark and PySpark: Mandatory, 5 to 9
Relational SQL: Mandatory, 4 to 8
Shell scripting: Mandatory, 4 to 8
Python/Scala: Mandatory, 4 to 8
Airflow/Kubeflow workflow scheduling tool: Mandatory, 3 to 7
Kubernetes: Desirable, 1 to 6
Scala: Mandatory, 2 to 6
Databricks: Desirable, 1 to 6
Google Cloud Functions: Mandatory, 2 to 6
GitHub source control tool: Mandatory, 4 to 8
Machine Learning: Desirable, 1 to 6
Deep Learning: Desirable, 1 to 6
Data structures and algorithms: Mandatory, 4 to 8