Seeking Data Analytics Trainer with Power BI and Tableau Expertise
Experience Required: Minimum 3 Years
Location: Indore
Part-Time / Full-Time Availability
We are actively seeking a qualified candidate to join our team as a Data Analytics Trainer, with a strong focus on Power BI and Tableau expertise. The ideal candidate should possess the following qualifications:
A track record of 3 to 6 years in delivering technical training and mentoring.
Profound understanding of Data Analytics concepts.
Strong proficiency in Excel and Advanced Excel.
Demonstrated hands-on experience and effective training skills in Python, Data Visualization, and R Programming, plus an in-depth understanding of both Power BI and Tableau.
Similar jobs
About Slintel (a 6sense company) :
Slintel, a 6sense company, the leader in capturing technographics-powered buying intent, helps companies uncover the 3% of active buyers in their target market. Slintel evaluates over 100 billion data points and analyzes factors such as buyer journeys, technology adoption patterns, and other digital footprints to deliver market & sales intelligence.
Slintel's customers have access to the buying patterns and contact information of more than 17 million companies and 250 million decision makers across the world.
Slintel is a fast-growing B2B SaaS company in the sales and marketing tech space. We are funded by top-tier VCs and are going after a billion-dollar opportunity. At Slintel, we are building a sales development automation platform that can significantly improve outcomes for sales teams while reducing the number of hours spent on research and outreach.
We are a big data company and perform deep analysis of technology buying patterns and buyer pain points to understand where buyers are in their journey. Over 100 billion data points are analyzed every week to derive recommendations on where companies should focus their marketing and sales efforts. Third-party intent signals are then combined with first-party data from CRMs to derive meaningful recommendations on whom to target on any given day.
6sense is headquartered in San Francisco, CA and has 8 office locations across 4 countries.
6sense, an account engagement platform, secured $200 million in a Series E funding round, bringing its total valuation to $5.2 billion 10 months after its $125 million Series D round. The investment was co-led by Blue Owl and MSD Partners, among other new and existing investors.
LinkedIn (Slintel) : https://www.linkedin.com/company/slintel/
Industry : Software Development
Company size : 51-200 employees (189 on LinkedIn)
Headquarters : Mountain View, California
Founded : 2016
Specialties : Technographics, lead intelligence, Sales Intelligence, Company Data, and Lead Data.
Website (Slintel) : https://www.slintel.com/slintel
LinkedIn (6sense) : https://www.linkedin.com/company/6sense/
Industry : Software Development
Company size : 501-1,000 employees (937 on LinkedIn)
Headquarters : San Francisco, California
Founded : 2013
Specialties : Predictive intelligence, Predictive marketing, B2B marketing, and Predictive sales
Website (6sense) : https://6sense.com/
Acquisition News :
https://inc42.com/buzz/us-based-based-6sense-acquires-b2b-buyer-intelligence-startup-slintel/
Funding Details & News :
Slintel funding : https://www.crunchbase.com/organization/slintel
6sense funding : https://www.crunchbase.com/organization/6sense
https://www.nasdaq.com/articles/ai-software-firm-6sense-valued-at-%245.2-bln-after-softbank-joins-funding-round
https://www.bloomberg.com/news/articles/2022-01-20/6sense-reaches-5-2-billion-value-with-softbank-joining-round
https://xipometer.com/en/company/6sense
Slintel & 6sense Customers :
https://www.featuredcustomers.com/vendor/slintel/customers
https://www.featuredcustomers.com/vendor/6sense/customers
About the job
Responsibilities
- Work in collaboration with the application team and integration team to design, create, and maintain optimal data pipeline architecture and data structures for Data Lake/Data Warehouse
- Work with stakeholders including the Sales, Product, and Customer Support teams to assist with data-related technical issues and support their data analytics needs
- Assemble large, complex data sets from third-party vendors to meet business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Elasticsearch, MongoDB, and AWS technology (a minimal extraction sketch follows this list)
- Streamline existing and introduce enhanced reporting and analysis solutions that leverage complex data sources derived from multiple internal systems
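As a rough illustration of the extract-and-load work described above, here is a minimal Python sketch that pulls documents from MongoDB and lands them in S3 as JSON Lines. The connection URI, database, collection, bucket, and key are placeholder assumptions, not details from this posting.

# Hypothetical sketch: extract documents from MongoDB and land them in S3.
# All names (URI, database, collection, bucket, key) are placeholders.
import json

import boto3
from pymongo import MongoClient

def mongo_to_s3(mongo_uri, bucket, key):
    collection = MongoClient(mongo_uri)["analytics"]["events"]  # placeholder names

    # Serialize each document as one JSON line, excluding Mongo's ObjectId.
    lines = [json.dumps(doc, default=str) for doc in collection.find({}, {"_id": False})]

    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body="\n".join(lines).encode("utf-8"))
    return len(lines)

# Example call (placeholders):
# mongo_to_s3("mongodb://localhost:27017", "data-lake-raw", "events/2024-01-01.jsonl")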
Requirements
- 3+ years of experience in a Data Engineer role
- Proficiency in Linux
- Must have strong SQL knowledge and experience working with relational databases and query authoring, as well as familiarity with MySQL, MongoDB, Cassandra, and Athena
- Must have experience with Python/ Scala
- Must have experience with Big Data technologies like Apache Spark
- Must have experience with Apache Airflow (a minimal DAG sketch follows this list)
- Experience with data pipeline and ETL tools like AWS Glue
- Experience working with AWS cloud services: EC2, S3, RDS, Redshift, and other data solutions, e.g. Databricks and Snowflake
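Since Apache Airflow is listed as a must-have, the following is a minimal DAG sketch showing the extract -> transform -> load orchestration pattern; the DAG id, schedule, and task bodies are invented for illustration.

# Minimal Apache Airflow DAG sketch; dag_id, schedule, and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write results to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency chain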
Desired Skills and Experience
Python, SQL, Scala, Spark, ETL
Key Responsibilities:
• Design, develop, support, and maintain automated business intelligence products in Tableau (a minimal automation sketch follows this list).
• Rapidly design, develop, and implement reporting applications that embed KPI metrics and actionable insights into the operational, tactical, and strategic activities of key business functions.
• Communicate effectively with users and other technical teams.
• Identify business requirements, design processes that leverage/adapt the business logic, and regularly communicate with business stakeholders to ensure delivery meets business needs.
• Design, code, and review business intelligence projects developed in Tableau and Power BI.
• Work as a team member and lead teams to implement BI solutions for our customers.
• Develop dashboards and data sources that meet and exceed customer requirements.
• Partner with business information architects to understand the business use cases that support and fulfill the business and data strategy.
• Partner with Product Owners and cross-functional teams in a collaborative and agile environment.
• Provide best practices for data visualization and Tableau implementations.
• Work alongside solution architects on RFI/RFP response solution design, customer presentations, demonstrations, POCs, etc. for growth.
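To make "automated business intelligence products in Tableau" concrete, here is a hedged sketch that triggers an extract refresh through the tableauserverclient library; the server URL, token credentials, and data source name are invented placeholders, not details from this role.

# Hypothetical sketch: trigger a Tableau extract refresh via tableauserverclient.
# Server URL, token credentials, and data source name are placeholders.
import tableauserverclient as TSC

auth = TSC.PersonalAccessTokenAuth("token-name", "token-secret", site_id="")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(auth):
    # Find the data source to refresh by name (placeholder name).
    datasources, _ = server.datasources.get()
    target = next(ds for ds in datasources if ds.name == "sales_kpis")

    # Queue an asynchronous extract refresh job on the server.
    job = server.datasources.refresh(target)
    print("refresh job queued:", job.id)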
Desired Candidate Profile:
• 6-10 years of programming experience and demonstrated proficiency with Tableau; Tableau certifications are highly preferred.
• Ability to architect and scope complex projects.
• Strong understanding of SQL and a basic understanding of programming languages; experience with SAQL, SOQL, Python, or R is a plus.
• Applied experience with Agile development processes (Scrum).
• Ability to independently learn new technologies.
• Ability to show initiative and work independently with minimal direction.
• Presentation skills: demonstrated ability to simplify complex situations and ideas and distill them into compelling and effective written and oral presentations.
• Quick learner: ability to rapidly comprehend new functional and technical areas and apply detailed and critical thinking to customer solutions.
Education:
• Bachelor's or master's degree in Computer Science, Computer Engineering, or a quantitative field such as Statistics, Mathematics, Operations Research, Economics, or Advanced Analytics.
AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data Integration, and DataOps
Job Reference ID: BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
7 years of work experience with ETL, data modelling, and data architecture. Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark. Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, with orchestration using Airflow.
Technical Experience:
Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, and batch and streaming data pipelines. Experience building data pipelines and applications to stream and process large datasets at low latency.
➢ Enhancements, new development, defect resolution, and production support of big data ETL pipelines using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, and Athena.
➢ Author ETL processes using Python and PySpark (a minimal Glue job skeleton follows this list).
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ Monitor ETL processes using CloudWatch Events.
➢ Work in collaboration with other teams; good communication skills are a must.
➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs.
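As a sketch of the "author ETL processes using Python and PySpark" item above, here is a minimal AWS Glue job skeleton that reads a catalog table, trims columns, and writes Parquet to S3; the database, table, column, and bucket names are placeholders.

# Minimal AWS Glue PySpark job skeleton; catalog and S3 names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table registered in the Glue Data Catalog (placeholder names).
events = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Trivial transformation: keep only the columns downstream consumers need.
trimmed = events.select_fields(["event_id", "user_id", "event_ts"])

# Write the result to S3 as Parquet, queryable via Athena or Redshift Spectrum.
glue_context.write_dynamic_frame.from_options(
    frame=trimmed,
    connection_type="s3",
    connection_options={"path": "s3://curated-bucket/events/"},
    format="parquet",
)

job.commit()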
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment (see the Athena sketch after this list).
➢ Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters highly desired.
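As one concrete reading of the Athena experience mentioned above, here is a hedged boto3 sketch that submits a query and polls for completion; the database name, SQL, and output location are placeholder assumptions.

# Hypothetical sketch: run an Athena query with boto3 and wait for a terminal state.
# Database name, SQL, and S3 output location are placeholders.
import time

import boto3

athena = boto3.client("athena")

def run_query(sql, database, output_s3):
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    # Poll until Athena reports that the query finished.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

# Example call (placeholders):
# run_query("SELECT count(*) FROM events", "curated_db", "s3://athena-results/")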
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
About Us
upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, Entrepreneurship, and more. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and relevant, and to help build the careers of tomorrow.
The Data Engineer will be responsible for selecting and integrating the required Big Data tools and frameworks, and will implement data ingestion and ETL/ELT processes.
Required Experience, Skills and Qualifications:
- Hands-on experience with Big Data tools/technologies like Spark, Databricks, MapReduce, Hive, and HDFS.
- Expertise in and an excellent understanding of the big data toolset, such as Sqoop, Spark Streaming, Kafka, and NiFi (a minimal streaming sketch follows this list).
- Proficiency in any of the programming languages Python, Scala, or Java, with 4+ years' experience.
- Experience with cloud infrastructures like MS Azure, Data Lake, etc.
- Good working knowledge of NoSQL DBs (MongoDB, HBase, Cassandra).
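To illustrate the Spark Streaming and Kafka combination in the list above, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and prints micro-batches to the console; the broker address and topic name are placeholder assumptions, and the spark-sql-kafka connector package is assumed to be on the classpath.

# Minimal Spark Structured Streaming sketch reading from Kafka.
# Broker address and topic name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

# Subscribe to a Kafka topic (placeholder broker/topic).
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string.
messages = raw.select(col("value").cast("string").alias("payload"))

# Write each micro-batch to the console for inspection.
query = messages.writeStream.outputMode("append").format("console").start()
query.awaitTermination()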
We are looking for an engineer with an ML/DL background.
The ideal candidate should have the following skill set:
1) Python
2) TensorFlow
3) Experience building and deploying systems
4) Experience with Theano/Torch/Caffe/Keras (all useful)
5) Experience with data warehousing/storage/management would be a plus
6) Experience writing production software would be a plus
7) Ideal candidate should have developed their own DL architectures apart from using open-source architectures
8) Ideal candidate would have extensive experience with computer vision applications
Candidates would be responsible for building Deep Learning models to solve specific problems. The workflow would look as follows (a minimal transfer-learning sketch follows the list):
1) Define Problem Statement (input -> output)
2) Preprocess Data
3) Build DL model
4) Test on different datasets using Transfer Learning
5) Parameter Tuning
6) Deployment to production
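Since TensorFlow and transfer learning both appear above, here is a minimal Keras sketch of step 4: reusing a pretrained backbone on a new classification dataset. The input size and class count are placeholder assumptions.

# Minimal transfer-learning sketch in TensorFlow/Keras.
# Image size and number of classes are placeholders.
import tensorflow as tf

NUM_CLASSES = 10  # placeholder

# Pretrained ImageNet backbone, without its classification head.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
backbone.trainable = False  # freeze pretrained weights for feature extraction

# New task-specific head on top of the frozen features.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets not shown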
The candidate should have experience working on Deep Learning and an engineering degree from a top-tier institute (preferably IIT/BITS or equivalent).
Intro
Our data and risk team is the core pillar of our business, harnessing alternative data sources to guide the decisions we make at Rely. The team designs, architects, develops, and maintains a scalable data platform that powers our machine learning models. Be part of a team that will help millions of consumers across Asia be effortlessly in control of their spending and make better decisions.
What will you do
The data engineer is focused on making data correct and accessible, and building scalable systems to access/process it. Another major responsibility is helping AI/ML Engineers write better code.
- Optimize and automate ingestion processes for a variety of data sources, such as clickstream and transactional sources.
- Create and maintain optimal data pipeline architecture and ETL processes.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Develop data pipelines and infrastructure to support real-time decisions.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics (a toy KPI sketch follows this list).
- Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
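As a toy illustration of the analytics-tools item above, here is a minimal pandas sketch computing a weekly customer-acquisition count from signup events; the column names and sample data are invented.

# Toy sketch: weekly customer-acquisition KPI with pandas.
# Column names and sample data are invented for illustration.
import pandas as pd

signups = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "signed_up_at": pd.to_datetime(
        ["2024-01-01", "2024-01-03", "2024-01-09", "2024-01-10"]
    ),
})

# Count new users per calendar week.
weekly_acquisition = (
    signups.set_index("signed_up_at")
    .resample("W")["user_id"]
    .count()
    .rename("new_users")
)
print(weekly_acquisition)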
What will you need
- 2+ years of hands-on experience building and implementing large-scale production pipelines and data warehouses
- Experience dealing with data at large scale
- Proficiency in writing and debugging complex SQL
- Experience working with AWS big data tools
- Ability to lead projects and implement best data practices and technology
Data Pipelining
- Strong command of building and optimizing data pipelines, architectures, and data sets
- Strong command of relational SQL and NoSQL databases, including Postgres
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
Big Data: Strong experience in big data tools & applications
- Tools: Hadoop, Spark, HDFS, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Message queuing: RabbitMQ, Kafka, etc.
Software Development & Debugging
- Strong experience with object-oriented programming and object/function scripting languages: Python, Java, C++, Scala, etc.
- Strong grasp of data structures and algorithms
What would be a bonus
- Prior experience working in a fast-growth Startup
- Prior experience at payments, fraud, lending, or advertising companies dealing with large-scale data