Seeking Data Analytics Trainer with Power BI and Tableau Expertise
Experience Required: Minimum 3 Years
Location: Indore
Part-Time / Full-Time Availability
We are actively seeking a qualified candidate to join our team as a Data Analytics Trainer, with a strong focus on Power BI and Tableau expertise. The ideal candidate should possess the following qualifications:
A track record of 3 to 6 years in delivering technical training and mentoring.
Profound understanding of Data Analytics concepts.
Strong proficiency in Excel and Advanced Excel.
Demonstrated hands-on experience and effective training skills in Python, data visualization, and R programming, along with an in-depth understanding of both Power BI and Tableau.
Responsibilities
- Work with large and complex blockchain data sets and derive investment relevant metrics in close partnership with financial analysts and blockchain engineers.
- Apply knowledge of statistics, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to the development of fundamental metrics needed to evaluate various crypto assets.
- Build a strong understanding of existing metrics used to value various decentralized applications and protocols.
- Build customer facing metrics and dashboards.
- Work closely with analysts, engineers, and product managers, and provide feedback as we develop our data analytics and research platform.
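The responsibilities above centre on deriving fundamental metrics from raw on-chain data. As a purely illustrative pandas sketch (the file name and column names are assumptions, not part of this posting), a daily active-address metric could be computed like this:

```python
import pandas as pd

# Illustrative only: assumes a flat export of on-chain transactions with these columns.
txns = pd.read_csv(
    "transactions.csv",            # hypothetical transactions extract
    parse_dates=["block_time"],
    usecols=["block_time", "from_address", "value_usd"],
)

daily = (
    txns.assign(day=txns["block_time"].dt.floor("D"))
        .groupby("day")
        .agg(
            active_addresses=("from_address", "nunique"),  # unique senders per day
            volume_usd=("value_usd", "sum"),               # total transfer volume per day
        )
        .reset_index()
)

# A simple fundamental ratio an analyst might track (purely illustrative).
daily["usd_per_active_address"] = daily["volume_usd"] / daily["active_addresses"]
print(daily.tail())
```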
Qualifications
- Bachelor's degree in Mathematics, Statistics, or another analytical or technical field (e.g. Computer Science, Engineering, Operations Research, Management Science), or equivalent practical experience
- 3+ years of experience with data analysis and metrics development
- 3+ years of experience analyzing and interpreting data, drawing conclusions, defining recommended actions, and reporting results across stakeholders
- 2+ years of experience writing SQL queries
- 2+ years of experience scripting in Python
- Demonstrated curiosity in and excitement for Web3/blockchain technologies
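The SQL and Python qualifications above typically come together in metrics work. A minimal, self-contained sketch using Python's built-in sqlite3 module (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")          # throwaway in-memory database for the example
conn.executescript("""
    CREATE TABLE transfers (day TEXT, wallet TEXT, amount REAL);
    INSERT INTO transfers VALUES
        ('2024-01-01', 'a', 10.0),
        ('2024-01-01', 'b', 5.0),
        ('2024-01-02', 'a', 7.5);
""")

# Metric query: unique wallets and total volume per day.
rows = conn.execute("""
    SELECT day, COUNT(DISTINCT wallet) AS active_wallets, SUM(amount) AS volume
    FROM transfers
    GROUP BY day
    ORDER BY day
""").fetchall()

for day, active_wallets, volume in rows:
    print(day, active_wallets, volume)
```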
Key Responsibilities:
- Collaborate with business stakeholders and data analysts to understand reporting requirements and translate them into effective Power BI solutions.
- Design and develop interactive and visually compelling dashboards, reports, and visualizations using Microsoft Power BI.
- Ensure data accuracy and consistency in the reports by working closely with data engineers and data architects.
- Optimize and streamline existing Power BI reports and dashboards for better performance and user experience.
- Develop and maintain data models and data connections to various data sources, ensuring seamless data integration.
- Implement security measures and data access controls to protect sensitive information in Power BI reports.
- Troubleshoot and resolve issues related to Power BI reports, data refresh, and connectivity problems.
- Stay updated with the latest Power BI features and capabilities, and evaluate their potential use in improving existing solutions.
- Conduct training sessions and workshops for end-users to promote self-service BI capabilities and enable them to create their own reports.
- Collaborate with the wider data and analytics team to identify opportunities for using Power BI to enhance business processes and decision-making.
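One way the data-connection responsibilities above can be met is Power BI Desktop's built-in Python script data source, which loads any pandas DataFrame defined in the script as a table. The sketch below is illustrative only; the file name, the columns, and the sales_summary name are assumptions, not part of this posting:

```python
# Power BI's "Python script" source runs this script and exposes any pandas
# DataFrame it defines (here, sales_summary) to the data model.
import pandas as pd

raw = pd.read_csv("sales_extract.csv", parse_dates=["order_date"])  # hypothetical extract

sales_summary = (
    raw.groupby([raw["order_date"].dt.to_period("M").astype(str), "region"])
       .agg(total_sales=("amount", "sum"), orders=("order_id", "nunique"))
       .reset_index()
       .rename(columns={"order_date": "month"})
)
```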
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Power BI Developer or similar role, with a strong portfolio showcasing previous Power BI projects.
- Proficient in Microsoft Power BI, DAX (Data Analysis Expressions), and M (Power Query) to manipulate and analyze data effectively.
- Solid understanding of data visualization best practices and design principles to create engaging and intuitive dashboards.
- Strong SQL skills and experience with data modeling and database design concepts.
- Knowledge of data warehousing concepts and ETL (Extract, Transform, Load) processes.
- Ability to work with various data sources, including relational databases, APIs, and cloud-based platforms.
- Excellent problem-solving skills and a proactive approach to identifying and addressing issues in Power BI reports.
- Familiarity with data security and governance practices in the context of Power BI development.
- Strong communication and interpersonal skills to collaborate effectively with cross-functional teams and business stakeholders.
- Experience with other BI tools (e.g., Tableau, QlikView) is a plus.
The role of a Power BI Developer is critical in enabling data-driven decision-making and empowering business users to gain valuable insights from data. The successful candidate will have a passion for data visualization and analytics, along with the ability to adapt to new technologies and drive continuous improvement in BI solutions. If you are enthusiastic about leveraging the power of data through Power BI, we encourage you to apply and join our dynamic team.
We are looking for an exceptionally talented Lead Data Engineer with exposure to implementing AWS services to build data pipelines, API integrations, and data warehouses. A candidate with both hands-on and leadership capabilities will be ideal for this position.
Qualification: At least a bachelor's degree in Science, Engineering, or Applied Mathematics; a master's degree is preferred.
Job Responsibilities:
• Total 6+ years of experience as a Data Engineer and 2+ years of experience in managing a team
• Have minimum 3 years of AWS Cloud experience.
• Well versed in languages such as Python, PySpark, SQL, and NodeJS
• Extensive experience in the Spark ecosystem, covering both real-time and batch processing
• Experience with AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step Functions, Airflow, RDS, Aurora, etc.
• Experience with modern database systems such as Redshift, Presto, Hive, etc.
• Worked on building data lakes in the past on S3 or Apache Hudi
• Solid understanding of Data Warehousing Concepts
• Good to have experience on tools such as Kafka or Kinesis
• Good to have AWS Developer Associate or Solutions Architect Associate Certification
• Have experience in managing a team
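As one illustration of the batch side of this stack, a small PySpark job that lands raw files from S3 as partitioned Parquet might look like the sketch below; the bucket names, paths, and columns are assumptions, and the s3:// scheme assumes an EMR-style setup (open-source Spark typically uses s3a://):

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative batch job only; bucket names, paths, and columns are assumptions.
spark = SparkSession.builder.appName("orders-batch-load").getOrCreate()

orders = (
    spark.read
         .option("header", "true")
         .option("inferSchema", "true")
         .csv("s3://example-raw-bucket/orders/2024/")   # hypothetical landing zone
)

cleaned = (
    orders.dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_ts"))
          .filter(F.col("amount") > 0)
)

# Write partitioned Parquet into a hypothetical curated layer of the data lake.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```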
Job Description
Mandatory Requirements
- Experience in AWS Glue
- Experience in Apache Parquet
- Proficient in AWS S3 and data lakes
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices
- Scripting languages: Python and PySpark
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Data ingestion from different data sources that expose data using different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing with the help of Big Data technologies
- Data processing/transformation using various technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it using the language supported by the base data platform
- Develop automated data quality checks to make sure the right data enters the platform and to verify the results of calculations (a minimal sketch follows this list)
- Develop an infrastructure to collect, transform, combine, and publish/distribute customer data
- Define process improvement opportunities to optimize data collection, insights, and displays
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports, and presenting findings
- Mentor junior members and bring in industry best practices
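As referenced in the data-quality responsibility above, a minimal sketch of an automated quality gate is shown below; the input path, column names, and checks are illustrative assumptions, not part of this posting:

```python
from pyspark.sql import SparkSession, functions as F

# Minimal data-quality gate; table location, columns, and rules are illustrative.
spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/orders/")  # hypothetical input

checks = {
    "no_null_keys":        df.filter(F.col("order_id").isNull()).count() == 0,
    "no_negative_amounts": df.filter(F.col("amount") < 0).count() == 0,
    "row_count_sane":      df.count() > 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Failing fast keeps bad data out of the platform; a real pipeline would also alert.
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```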
QUALIFICATIONS
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science, or a related discipline
- Advanced knowledge of at least one of the following languages: Java, Scala, Python, C#
- Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with:
  - Data mining/programming tools (e.g. SAS, SQL, R, Python)
  - Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  - Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
- Good written and oral communication skills and the ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies, and techniques
Familiarity and experience in the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streaming / Kafka Connect
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
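Spark Streaming and Kafka appear both in this list and in the core stack above. As a purely illustrative sketch (the broker address, topic name, schema, and S3 paths are assumptions, and the Kafka connector package is assumed to be on the Spark classpath), a Structured Streaming ingest job might look like this:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

# Illustrative streaming job; broker, topic, schema, and paths are assumptions.
spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "events")                       # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "s3://example-curated-bucket/events/")
          .option("checkpointLocation", "s3://example-curated-bucket/_checkpoints/events/")
          .outputMode("append")
          .start()
)
query.awaitTermination()
```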
Job Description
Position: Sr. Data Engineer – Databricks & AWS
Experience: 4 - 5 Years
Company Profile:
Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are transforming ourselves and rapidly expanding our business.
Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.
One of the top partners of Cloudera (leading analytics player) and Qlik (leader in BI technologies), Exponentia.ai has recently been awarded the ‘Innovation Partner Award’ by Qlik in 2017.
Get to know more about us on our website: http://www.exponentia.ai/ and Life @Exponentia.
Role Overview:
· A Data Engineer understands client requirements and develops and delivers data engineering solutions as per the agreed scope.
· The role requires strong skills in developing solutions using the services required for data architecture on Databricks Delta Lake, streaming, AWS, ETL development, and data modeling.
Job Responsibilities
• Design data solutions on Databricks, including Delta Lake, data warehouses, data marts, and other data solutions to support the analytics needs of the organization.
• Apply best practices during design in data modeling (logical, physical) and ETL pipelines (streaming and batch) using cloud-based services.
• Design, develop and manage the pipelining (collection, storage, access), data engineering (data quality, ETL, Data Modelling) and understanding (documentation, exploration) of the data.
• Interact with stakeholders to understand the data landscape, conduct discovery exercises, and develop proofs of concept and demonstrate them to stakeholders.
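As an illustrative sketch of the Delta Lake design work described above (assuming a Databricks notebook where spark is pre-defined, and with paths, schemas, and table names invented for the example):

```python
from pyspark.sql import functions as F

# Assumes a Databricks notebook (spark is provided) and that the 'curated' and
# 'marts' databases already exist; all names below are illustrative.
raw = spark.read.json("/mnt/raw/orders/")            # hypothetical mounted landing zone

curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Delta Lake gives ACID writes and time travel; a simple managed table suffices here.
(curated.write
        .format("delta")
        .mode("overwrite")
        .saveAsTable("curated.orders"))

# A downstream data-mart aggregate for analytics consumers.
spark.sql("""
    CREATE OR REPLACE TABLE marts.daily_sales AS
    SELECT order_date, SUM(amount) AS total_amount, COUNT(*) AS orders
    FROM curated.orders
    GROUP BY order_date
""")
```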
Technical Skills
• More than 2 years of experience developing data lakes and data marts on the Databricks platform.
• Proven skill set in AWS data lake services such as AWS Glue, S3, Lambda, SNS, and IAM, along with skills in Spark, Python, and SQL.
• Experience in Pentaho.
• Good understanding of developing data warehouses, data marts, etc.
• Good understanding of system architectures and design patterns, and able to design and develop applications using these principles.
Personality Traits
• Good collaboration and communication skills
• Excellent problem-solving skills to be able to structure the right analytical solutions.
• Strong sense of teamwork, ownership, and accountability
• Analytical and conceptual thinking
• Ability to work in a fast-paced environment with tight schedules.
• Good presentation skills with the ability to convey complex ideas to peers and management.
Education:
BE / ME / MS / MCA.
- Key responsibility is to design, develop, and maintain efficient data models for the organization, ensuring optimal query performance for the consumption layer.
- Developing, Deploying & maintaining a repository of UDXs written in Java / Python.
- Develop optimal data model designs, analyze complex distributed data deployments, and make recommendations to optimize performance based on data consumption patterns, performance expectations, the queries executed on the tables/databases, etc.
- Periodic Database health check and maintenance
- Designing collections in a no-SQL Database for efficient performance
- Document & maintain data dictionary from various sources to enable data governance
- Coordination with business teams, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, No-SQL databases, etc.
- Data Governance Process Implementation and ensuring data security
Requirements
- Extensive working experience in Designing & Implementing Data models in OLAP Data Warehousing solutions (Redshift, Synapse, Snowflake, Teradata, Vertica, etc).
- Programming experience using Python / Java.
- Working knowledge in developing & deploying User-defined Functions (UDXs) using Java / Python.
- Strong understanding & extensive working experience in OLAP Data Warehousing (Redshift, Synapse, Snowflake, Teradata, Vertica, etc) architecture and cloud-native Data Lake (S3, ADLS, BigQuery, etc) Architecture.
- Strong knowledge in Design, Development & Performance tuning of 3NF/Flat/Hybrid Data Model.
- Extensive technical experience in SQL including code optimization techniques.
- Strong knowledge of database performance tuning and troubleshooting.
- Knowledge of collection design in any No-SQL DB (DynamoDB, MongoDB, CosmosDB, etc), along with implementation of best practices.
- Ability to understand business functionality, processes, and flows.
- Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently.
- Any OLAP DWH DBA experience, including user management, will be an added advantage.
- Knowledge of financial-industry-specific data models such as FSLDM, the IBM Financial Data Model, etc. will be an added advantage.
- Experience in Snowflake will be an added advantage.
- Working experience in BFSI/NBFC and an understanding of loan/mortgage data will be an added advantage.
Functional knowledge
- Data Governance & Quality Assurance
- Modern OLAP Database Architecture & Design
- Linux
- Data structures, algorithm & data modeling techniques
- No-SQL database architecture
- Data Security
About antuit.ai
Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.
Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.
The Role:
Antuit is looking for a Data / Sr. Data Scientist with knowledge and experience in developing machine learning algorithms, particularly in the supply chain and forecasting domain, using data science toolkits such as Python.
In this role, you will design the approach, develop and test machine learning algorithms, and implement the solution. The candidate should have excellent communication skills and be results-driven, with a customer-centric approach to problem solving. Experience working in the demand forecasting or supply chain domain is a plus. This job also requires the ability to operate in a multi-geographic delivery environment and a good understanding of cross-cultural sensitivities.
Responsibilities:
Responsibilities include, but are not limited to, the following:
- Design, build, test, and implement predictive Machine Learning models.
- Collaborate with clients to align business requirements with data science systems and process solutions that ensure the client's overall objectives are met.
- Create meaningful presentations and analysis that tell a “story” focused on insights, to communicate the results/ideas to key decision makers.
- Collaborate cross-functionally with domain experts to identify gaps and structural problems.
- Contribute to standard business processes and practices as part of a community of practice.
- Be the subject matter expert across multiple work streams and clients.
- Mentor and coach team members.
- Set a clear vision for team members and work cohesively with them to attain it.
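As a toy illustration of the forecasting work described above (the synthetic demand series, lag choices, and model are assumptions for the example, not the team's actual approach), a lag-feature regression forecast in scikit-learn might look like this:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error

# Toy weekly demand series; in practice this would come from client data.
rng = np.random.default_rng(0)
weeks = pd.date_range("2022-01-03", periods=120, freq="W-MON")
demand = 100 + 10 * np.sin(np.arange(120) * 2 * np.pi / 52) + rng.normal(0, 5, 120)
df = pd.DataFrame({"week": weeks, "demand": demand})

# Lag features turn forecasting into a supervised regression problem.
for lag in (1, 2, 52):
    df[f"lag_{lag}"] = df["demand"].shift(lag)
df = df.dropna()

train, test = df.iloc[:-12], df.iloc[-12:]            # hold out the last 12 weeks
features = [c for c in df.columns if c.startswith("lag_")]

model = GradientBoostingRegressor(random_state=0)
model.fit(train[features], train["demand"])

pred = model.predict(test[features])
print("MAPE:", mean_absolute_percentage_error(test["demand"], pred))
```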
Qualifications and Skills:
Requirements
- Experience / Education:
- Master's or Ph.D. in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Mathematics, or another related field.
- 5+ years’ experience working in applied machine learning or relevant research experience for recent Ph.D. graduates.
- Highly technical:
- Skilled in machine learning, problem-solving, pattern recognition and predictive modeling with expertise in PySpark and Python.
- Understanding of data structures and data modeling.
- Able to collaborate closely and effectively with teams.
- Experience in time series forecasting is preferred.
- Experience working in start-up type environment preferred.
- Experience in CPG and/or Retail preferred.
- Effective communication and presentation skills.
- Strong management track record.
- Strong interpersonal skills and leadership qualities.
Information Security Responsibilities
- Understand and adhere to information security policies, guidelines, and procedures, and practice them to protect organizational data and information systems.
- Take part in information security training and act accordingly while handling information.
- Report all suspected security and policy breaches to the InfoSec team or the appropriate authority (CISO).
EEOC
Antuit.ai is an at-will, equal opportunity employer. We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
- Python coding skills
- Experience with scikit-learn, pandas, and TensorFlow/Keras
- Machine learning: designing ML models and explaining them, covering regression, classification, dimensionality reduction, anomaly detection, etc.
- Implementing machine learning models and pushing them to production
- Creating Docker images for ML models and REST API creation in Python (a minimal serving sketch follows this list)
- Additional Skills (Compulsory):
- Knowledge and professional experience of text and NLP-related projects such as text classification, text summarization, topic modeling, etc.
- Additional Skills (Compulsory):
- Knowledge and professional experience of vision and deep learning for documents: CNNs and deep neural networks using TensorFlow/Keras for object detection, OCR implementation, document extraction, etc.
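As referenced in the list above, a minimal sketch of serving a trained model over a REST API in Python is shown below; the model file, payload shape, and port are illustrative assumptions, not part of this posting:

```python
# Minimal model-serving sketch; a real deployment would add validation, logging, and auth.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")          # hypothetical pre-trained scikit-learn model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)   # e.g. {"features": [[5.1, 3.5, 1.4, 0.2]]}
    prediction = model.predict(payload["features"]).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    # Inside a container this would typically run behind a production WSGI server.
    app.run(host="0.0.0.0", port=8000)
```

A Dockerfile around a script like this would typically copy the code and model artifact, install pinned requirements, expose the same port, and run the app behind a server such as gunicorn.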
Pipelines should be optimised to handle real-time data, batch updates, and historical data.
Establish scalable, efficient, automated processes for complex, large scale data analysis.
Write high quality code to gather and manage large data sets (both real time and batch data) from multiple sources, perform ETL and store it in a data warehouse.
Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
Participate in data pipelines health monitoring and performance optimisations as well as quality documentation.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
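As a small illustration of the ETL responsibility above (connection strings, table names, and columns are placeholders, and the relevant database drivers are assumed to be installed), a batch extract-transform-load step might look like this:

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholders only: hypothetical operational source and warehouse targets.
source = create_engine("mysql+pymysql://user:password@source-host/ops")
warehouse = create_engine("postgresql+psycopg2://user:password@dwh-host/analytics")

# Extract: pull yesterday's shipments from the operational database.
shipments = pd.read_sql(
    "SELECT shipment_id, route, status, shipped_at FROM shipments "
    "WHERE shipped_at >= CURDATE() - INTERVAL 1 DAY",
    source,
)

# Transform: light cleanup and a derived load timestamp.
shipments["status"] = shipments["status"].str.upper()
shipments["load_date"] = pd.Timestamp.utcnow()

# Load: append into the warehouse staging table.
shipments.to_sql("stg_shipments", warehouse, if_exists="append", index=False)
```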
Job Requirements:
2+ years' experience working in software development and data pipeline development for enterprise analytics.
2+ years of working with Python, with exposure to various warehousing tools.
In-depth experience with any of the commercial tools like AWS Glue, Talend, Informatica, DataStage, etc.
Experience with various relational databases like MySQL, MSSQL, Oracle, etc. is a must.
Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
Experience in various DevOps practices, helping the client deploy and scale systems as per requirements.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of the Logistics and/or Transportation domain is a plus.
Hands-on with traditional databases and ERP systems like Sybase and PeopleSoft.

