Tableau Developer
Opportunity is with an Investment Bank
Technical Proficiency:
- Experience leading a self-service BI offering for business users. Tableau experience in architecture, administration, and development is a must.
- Experience supporting and administering middleware software systems in an enterprise setting. Excellent troubleshooting skills in a complex enterprise environment.
- Hadoop / Big Data domain experience is required, as is exposure to ecosystem technologies such as Hive, Impala and Spark.
- Knowledge of and experience with data warehouse/ETL design and development methodologies.
- Scripting experience in Linux shell and Python.
- Good SQL skills.
- Integrate, roll out, administer, sustain, and upgrade our installations of vendor BI products on Linux.
- Ensure products are configured properly to meet the necessary security, supportability, and performance requirements.
- Experience developing frameworks and utility services, including logging and monitoring.
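The utility-services bullet above (logging/monitoring around BI jobs) can be sketched with a small Python decorator. This is an illustrative sketch only; `refresh_extract` is a hypothetical placeholder, not a real Tableau API call.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("bi_utils")

def monitored(func):
    """Log entry, exit, and elapsed time for a utility-service call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        logger.info("start %s", func.__name__)
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logger.info("end %s (%.3fs)", func.__name__, elapsed)
    return wrapper

@monitored
def refresh_extract(name):
    # Hypothetical stand-in for a Tableau extract-refresh step.
    return f"refreshed {name}"
```

The same decorator can wrap any pipeline step, giving uniform timing logs without touching the step's own logic.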
Leadership Skills:
o Lead the team on Strategic BI and Big Data Governance.
o Excellent organization skills, attention to detail
o Demonstrated sense of responsibility and capability to deliver quickly
o Ability to build strong relationships in a multi-cultural environment across all levels within IT
o Promotes continuous process improvement especially in code quality, testability & reliability
o Exceptional strategic analysis, problem solving, issue resolution and decision making skills
o Excited about rapid changes in the market/domain and being flexible to adopt newer technologies and methods.
Education:
Similar jobs
Sr. Data Scientist (Global Media Agency)
at Global Media Agency - A client of Merito
Our client combines Adtech and Martech platform strategy with data science & data engineering expertise, helping clients make advertising work better for people.
- Act as primary day-to-day contact on analytics to agency-client leads
- Develop bespoke analytics proposals for presentation to agencies & clients, for delivery within the teams
- Ensure delivery of projects and services across the analytics team meets our stakeholder requirements (time, quality, cost)
- Hands-on use of platforms to perform data pre-processing, including data transformation and data cleaning
- Ensure data quality and integrity
- Interpret and analyse data problems
- Build analytic systems and predictive models
- Increase the performance and accuracy of machine learning algorithms through fine-tuning
- Visualize data and create reports
- Experiment with new models and techniques
- Align data projects with organizational goals
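The pre-processing responsibilities above (transformation, cleaning, data quality) can be illustrated with a minimal stdlib sketch. The field names (`campaign`, `spend`) are hypothetical examples, not from the posting.

```python
# Illustrative pre-processing: drop incomplete rows and normalize fields.
def clean_rows(rows):
    """Keep rows with both fields present; trim text and cast spend to float."""
    cleaned = []
    for row in rows:
        if row.get("campaign") and row.get("spend") not in (None, ""):
            cleaned.append({"campaign": row["campaign"].strip(),
                            "spend": float(row["spend"])})
    return cleaned

raw = [
    {"campaign": " brand_a ", "spend": "120.5"},
    {"campaign": "", "spend": "10"},        # missing campaign -> dropped
    {"campaign": "brand_b", "spend": None}, # missing spend -> dropped
]
```

In practice this kind of cleaning would be done with pandas or a platform's own transformation layer; the logic (filter incomplete records, normalize types) is the same.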
Requirements
- Minimum 6-7 years' experience working in Data Science
- Prior experience as a Data Scientist within digital media is desirable
- Solid understanding of machine learning
- A degree in a quantitative field (e.g. economics, computer science, mathematics, statistics, engineering, physics, etc.)
- Experience with SQL / BigQuery / GMP tech stack / clean rooms such as ADH
- A knack for statistical analysis and predictive modelling
- Good knowledge of R, Python
- Experience with SQL, MySQL, and PostgreSQL databases
- Knowledge of data management and visualization techniques
- Hands-on experience with BI/visual analytics tools like Power BI, Tableau, or Data Studio
- Evidence of technical comfort and good understanding of internet functionality desirable
- Analytical pedigree - evidence of having approached problems from a mathematical perspective and working through to a solution in a logical way
- Proactive and results-oriented
- A positive, can-do attitude with a thirst to continually learn new things
- An ability to work independently and collaboratively with a wide range of teams
- Excellent communication skills, both written and oral
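The "statistical analysis and predictive modelling" requirement above can be illustrated with an ordinary-least-squares fit written in plain Python. This is a sketch of the underlying math; real work would use numpy, statsmodels, or scikit-learn.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b on paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b
```

Fitting `linear_fit([1, 2, 3, 4], [2, 4, 6, 8])` recovers slope 2 and intercept 0, the line the samples were drawn from.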
Qualifications
- 5+ years of professional experience in experiment design and applied machine learning predicting outcomes in large-scale, complex datasets.
- Proficiency in Python, Azure ML, or other statistics/ML tools.
- Proficiency in Deep Neural Network, Python based frameworks.
- Proficiency in Azure DataBricks, Hive, Spark.
- Proficiency in deploying models into production (Azure stack).
- Moderate coding skills. SQL or similar required. C# or other languages strongly preferred.
- Outstanding communication and collaboration skills. You can learn from and teach others.
- Strong drive for results. You have a proven record of shepherding experiments to create successful shipping products/services.
- Experience with prediction in adversarial (energy) environments highly desirable.
- Understanding of the model development ecosystem across platforms, including development, distribution, and best practices, highly desirable.
As a dedicated Data Scientist on our Research team, you will apply data science and your machine learning expertise to enhance our intelligent systems to predict and provide proactive advice. You’ll work with the team to identify and build features, create experiments, vet ML models, and ship successful models that provide value additions for hundreds of EE customers.
At EE, you’ll have access to vast amounts of energy-related data from our sources. Our data pipelines are curated and supported by engineering teams (so you won't have to do much data engineering - you get to do the fun stuff.) We also offer many company-sponsored classes and conferences that focus on data science and ML. There’s great growth opportunity for data science at EE.
Data Engineer
Work at the intersection of Energy, Weather & Climate Sciences and Artificial Intelligence.
Responsibilities:
- Manage all real-time and batch ETL pipelines with complete ownership
- Develop systems for integration, storage and accessibility of multiple data streams from SCADA, IoT devices, Satellite Imaging, Weather Simulation Outputs, etc.
- Support team members on product development and mentor junior team members
Expectations:
- Ability to work on broad objectives and move from vision to business requirements to technical solutions
- Willingness to assume ownership of effort and outcomes
- High levels of integrity and transparency
Requirements:
- Strong analytical and data driven approach to problem solving
- Proficiency in python programming and working with numerical and/or imaging data
- Experience working on LINUX environments
- Industry experience in building and maintaining ETL pipelines
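The ETL-pipeline requirement above can be sketched end to end with the Python stdlib: extract from CSV, transform by filtering bad readings, load into a database. The sensor names and the `NA` convention are hypothetical; SQLite stands in for whatever warehouse the real pipeline targets.

```python
import csv
import io
import sqlite3

def run_etl(csv_text, conn):
    """Extract rows from CSV text, drop missing readings, load into SQLite."""
    conn.execute("CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL)")
    reader = csv.DictReader(io.StringIO(csv_text))
    # Transform: skip rows whose value is empty or the 'NA' sentinel.
    rows = [(r["sensor"], float(r["value"]))
            for r in reader if r["value"] not in ("", "NA")]
    conn.executemany("INSERT INTO readings VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
loaded = run_etl("sensor,value\nscada_1,3.2\nscada_2,NA\niot_7,1.1\n", conn)
```

A production pipeline would add batching, schema validation, and retry logic, but the extract/transform/load shape stays the same.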
The present role is a Data Engineer role for the Crewscale–Toplyne collaboration.
Crewscale is an exclusive partner of Toplyne.
About Crewscale:
Crewscale is a premium technology company focused on helping companies build world-class scalable products. We are a product-based start-up with a code assessment platform that is used by top technology disruptors across the world.
Crewscale works with premium product companies (Indian and international) like Swiggy, ShareChat, Grab, Capillary, Uber, Workspan, Ovo and many more. We are also responsible for managing infrastructure for Swiggy.
We focus on building only world-class tech products, and our USP is building technology that can handle scale from 1 million to 1 billion hits.
We invite candidates who have a zeal to develop world class products to come and work with us.
Toplyne
Who are we? 👋
Toplyne is a global SaaS product built to help revenue teams, at businesses with a self-service motion, and a large user-base, identify which users to spend time on, when and for what outcome. Think self-service or freemium-led companies like Figma, Notion, Freshworks, and Slack. We do this by helping companies recognize signals across their - product engagement, sales, billing, and marketing data.
Founded in June 2021, Toplyne is backed by marquee investors like Sequoia, Together Fund and a number of well-known angels. You can read more about us at https://bit.ly/ForbesToplyne and https://bit.ly/YourstoryToplyne.
What will you get to work on? 🏗️
- Design, develop and maintain scalable data pipelines and a data warehouse to support continuing increases in data volume and complexity.
- Develop and implement processes and systems to monitor data quality and data mining, ensuring production data is always accurate and available for the key partners and business processes that depend on it.
- Perform the data analysis required to solve data-related issues and assist in their resolution.
- Complete ownership - You’ll build highly scalable platforms and services that support rapidly growing data needs in Toplyne. There’s no instruction book, it’s yours to write. You’ll figure it out, ship it, and iterate.
What do we expect from you? 🙌🏻
- 3-6 years of relevant work experience in a Data Engineering role.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience building and optimising data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- A good understanding of Airflow, Spark, NoSQL databases, and Kafka is nice to have.
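The data-quality responsibility in the list above (supervising data quality so production data stays accurate) can be sketched as a per-batch completeness check. The record fields (`user_id`, `plan`) are hypothetical examples.

```python
def quality_report(rows, required):
    """Count null/missing values per required column for a batch of records."""
    missing = {col: 0 for col in required}
    for row in rows:
        for col in required:
            if row.get(col) in (None, ""):
                missing[col] += 1
    return {"rows": len(rows), "missing": missing}

batch = [
    {"user_id": 1, "plan": "free"},
    {"user_id": 2, "plan": ""},       # empty plan counts as missing
    {"user_id": None, "plan": "pro"}, # null user_id counts as missing
]
```

In a real pipeline a check like this would run as an Airflow task per load, alerting when the missing-value counts cross a threshold.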
Years of Experience required: 4 years (Min)
Mandatory:-
· At least 3 years of experience in database or BI developer role
· Microsoft accreditation(s) preferred
· Proficient at design and development of ETL using MS SQL Integration Services (SSIS)
· Experience with common ETL Design Patterns
· Proficient at writing T-SQL, stored procedures and functions
· Proficient with analysis of data quality and data profiling
· Experience with writing technical documentation and communicating database design
· Experience working with an Agile team to create and develop operational processes and software
· Experience developing ETL of OLTP data for analytical/BI reporting; ability to perform as OLTP DBA is a plus
· Proficient in Excel BI Tools, XML, JSON and Version Control Systems (TFS experience is a plus)
· Experience in OLAP schema modeling for DWH
· Tableau desktop and Server experience is a plus.
Job Description
Experience: 3+ yrs
We are looking for a MySQL DBA who will be responsible for ensuring the performance, availability, and security of clusters of MySQL instances. You will also be responsible for database design and architecture, and for orchestrating upgrades, backups, and provisioning of database instances. You will also work in tandem with the other teams, preparing documentation and specifications as required.
Responsibilities:
Database design and data architecture
Provision MySQL instances, both in clustered and non-clustered configurations
Ensure performance, security, and availability of databases
Prepare documentation and specifications
Handle common database procedures, such as upgrade, backup, recovery, migration, etc.
Profile server resource usage, optimize and tweak as necessary
Skills and Qualifications:
Proven expertise in database design and data architecture for large scale systems
Strong proficiency in MySQL database management
Decent experience with recent versions of MySQL
Understanding of MySQL's underlying storage engines, such as InnoDB and MyISAM
Experience with replication configuration in MySQL
Knowledge of de facto standards and best practices in MySQL
Proficient in writing and optimizing SQL statements
Knowledge of MySQL features, such as its event scheduler
Ability to plan resource requirements from high level specifications
Familiarity with other SQL/NoSQL databases such as Cassandra, MongoDB, etc.
Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
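The "common database procedures" bullet above (backup, recovery) can be sketched in Python. Since a live MySQL server isn't assumed here, the stdlib `sqlite3` online-backup API stands in for the same pattern; a MySQL deployment would use `mysqldump`, Percona XtraBackup, or replication-based snapshots instead.

```python
import sqlite3

def backup_database(src, dest):
    """Online copy of a live database into a backup connection, page by page."""
    with dest:
        src.backup(dest)  # safe to run while src is still being written to

# Demo on in-memory databases; a real backup would target a file path.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
src.execute("INSERT INTO accounts VALUES (1, 100.0)")
src.commit()

dest = sqlite3.connect(":memory:")
backup_database(src, dest)
```

The key property, shared with MySQL's hot-backup tools, is that the copy is consistent without taking the source offline.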
- 3-6 years of relevant work experience in a Data Engineering role.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- A good understanding of Airflow, Spark, NoSQL databases, Kafka is nice to have.
- Premium Institute Candidates only
Backend Engineer
- Experience with relational SQL & NoSQL databases including MySQL & MongoDB.
- Familiar with the basic principles of distributed computing and data modeling.
- Experience with distributed data pipeline frameworks like Celery, Apache Airflow, etc.
- Experience with NLP and NER models is a bonus.
- Experience building reusable code and libraries for future use.
- Experience building REST APIs.
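The REST-API bullet above can be illustrated with a framework-free routing sketch: a table mapping (method, path) pairs to handlers that return a status and a JSON body. The `/users` route and its payload are hypothetical; a real service would use Flask, FastAPI, or similar.

```python
import json

ROUTES = {}

def route(method, path):
    """Register a handler for a (method, path) pair."""
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/users")
def list_users():
    return 200, json.dumps([{"id": 1, "name": "ada"}])

def handle(method, path):
    """Dispatch a request to its handler, or 404 if none is registered."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return handler()
```

Real frameworks add path parameters, middleware, and content negotiation on top of exactly this dispatch shape.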
Preference for candidates working in tech product companies
● Working hand in hand with application developers and data scientists to help build software that scales in terms of performance and stability
Skills
● 3+ years of experience managing large-scale data infrastructure and building data pipelines/data products
● Proficiency in any data engineering technologies; proficiency in AWS data engineering technologies is a plus
● Languages: Python, Scala, or Go
● Experience working with real-time streaming systems; experience handling millions of events per day; experience developing and deploying data models on the cloud
● Bachelor's/Master's in Computer Science or equivalent experience; ability to learn and use skills in new technologies
Artificial Intelligence Trainer
at Engigyan Technology Pvt Ltd