- Bachelor's Degree in Computer Science or a related technical field, and several years of relevant experience.
- A strong grasp of SQL/Presto and at least one scripting or programming language (Python preferred).
- Experience with enterprise-class BI tools and their auditing, along with automation using REST APIs.
- Experience with reporting tools: QuickSight (preferred, with at least 2 years hands-on experience).
- Tableau/Looker (either or both, with at least 5 years of hands-on experience).
- 5+ years of experience with and detailed knowledge of data warehouse technical architectures, data modelling, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures, and hands-on SQL coding.
- 5+ years of demonstrated quantitative and qualitative business intelligence experience.
- Experience delivering significant business impact through product analysis.
- 4+ years of delivering large BI-oriented IT projects using an agile framework.
- 2+ years of working in a very large data warehousing environment.
- Experience in designing and delivering cross-functional custom reporting solutions.
- Excellent oral and written communication skills including the ability to communicate effectively with both technical and non-technical stakeholders.
- Proven ability to meet tight deadlines, multi-task, and prioritize workload.
- A work ethic based on a strong desire to exceed expectations.
- Strong analytical skills and the ability to challenge existing processes.
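The SQL-plus-scripting requirement above can be illustrated with a minimal, hypothetical sketch: Python's built-in sqlite3 stands in for a Presto/warehouse connection, and the table and column names are invented for the example.

```python
import sqlite3

# Hypothetical in-memory table standing in for a warehouse/Presto source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("APAC", 120.0), ("EMEA", 80.0), ("APAC", 50.0)])

def total_by_region(conn, region):
    """Parameterised aggregate query, the kind of thing one scripts in Python."""
    cur = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM sales WHERE region = ?",
        (region,))
    return cur.fetchone()[0]

print(total_by_region(conn, "APAC"))  # 170.0
```

Parameterised queries like this are also what automation against a BI tool's REST API typically wraps.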
CTC: on the higher side of market standards.
Company Overview: A US-based high-growth company providing data, analytics, and talent solutions for Fortune 500 companies. It has recently raised $100M in funding and is looking to accelerate growth and expand operations.
Must-have experience: heavy DWH experience, Data Engineering & Big Data, Python & Spark
2. Responsible for gathering system requirements working together with application architects
3. Responsible for generating scripts and templates required for the automatic provisioning of cloud resources.
4. Evaluate standard cloud service offerings, and establish and execute processes and standards for optimal use of cloud service provider offerings.
5. Incident management across IaaS, PaaS, and SaaS.
6. Responsible for debugging technical issues in a complex stack involving virtualization, containers, microservices, etc.
7. Collaborate with the engineering teams to enable their applications to run
on Cloud infrastructure.
8. Experience with OpenStack, Linux, Amazon Web Services, Microsoft Azure, DevOps, NoSQL, etc. will be a plus.
9. Design, implement, configure, and maintain various Azure IaaS, PaaS, SaaS services.
11. Deploy and maintain Azure IaaS virtual machines and Azure application and networking services.
11. Optimize Azure billing for cost/performance (VM optimization, reserved instances, etc.)
12. Implement and fully document IT projects.
13. Identify improvements to IT documentation, network architecture, and processes/procedures.
14. Research products and new technologies to increase the efficiency of business and operations.
15. Keep all tickets and projects updated and track time in a detailed format.
16. Should be able to multi-task and work across a range of projects and issues with various timelines and priorities.
• Minimum 1 year of experience with Azure; knowledge of Office365 services preferred.
• Formal education in IT preferred
• Experience with Managed Service business model a major plus
• Bachelor’s degree preferred
You will deliver Data Warehouse and Analytics solutions that aggregate data across diverse sources and data types
including text, video and audio through to live stream and IoT in an agile project delivery
environment with a focus on DataOps and Data Observability. You will work with Azure SQL
Databases, Synapse Analytics, Azure Data Factory, Azure Datalake Gen2, Azure Databricks, Azure
Machine Learning, Azure Service Bus, Azure Serverless (LogicApps, FunctionApps), Azure Data
Catalogue and Purview among other tools, gaining opportunities to learn some of the most
advanced and innovative techniques in the cloud data space.
You will be building Power BI based analytics solutions to provide actionable insights into customer
data, and to measure operational efficiencies and other key business performance metrics.
You will be involved in the development, build, deployment, and testing of customer solutions, with
responsibility for the design, implementation and documentation of the technical aspects, including
integration to ensure the solution meets customer requirements. You will be working closely with
fellow architects, engineers, analysts, and team leads and project managers to plan, build and roll
out data-driven solutions.
Proven expertise in developing data solutions with Azure SQL Server and Azure SQL Data Warehouse (now Azure Synapse Analytics).
Demonstrated expertise of data modelling and data warehouse methodologies and best practices.
Ability to write efficient data pipelines for ETL using Azure Data Factory or equivalent tools.
Integration of data feeds utilising both structured (e.g. XML/JSON) and flat schemas (e.g. CSV, TXT, XLSX)
across a wide range of electronic delivery mechanisms (API, SFTP, etc.).
Azure DevOps knowledge essential for CI/CD of data ingestion pipelines and integrations.
Experience with Scala or similar languages is required.
Expertise in creating technical and architecture documentation (e.g. HLD/LLD) is a must.
Proven ability to rapidly analyse and design solution architecture in client proposals is an added advantage.
Expertise with big data tools: Hadoop, Spark, Kafka, NoSQL databases, stream-processing systems is a plus.
5 or more years of hands-on experience in a data architect role with the development of ingestion,
integration, data auditing, reporting, and testing with Azure SQL tech stack.
Full data and analytics project lifecycle experience (including costing and cost management of data
solutions) in an Azure PaaS environment is essential.
Microsoft Azure and Data Certifications, at least fundamentals, are a must.
Experience using agile development methodologies, version control systems and repositories is a must.
A good, applied understanding of the end-to-end data process development life cycle.
A good working knowledge of data warehouse methodology using Azure SQL.
A good working knowledge of the Azure platform, its components, and the ability to leverage its
resources to implement solutions is a must.
Experience working in the public sector, or in an organisation serving the public sector, is a must.
Ability to work to demanding deadlines, keep momentum and deal with conflicting priorities in an
environment undergoing a programme of transformational change.
The ability to contribute and adhere to standards, have excellent attention to detail, and be strongly driven.
Experience with AWS or Google Cloud platforms will be an added advantage.
Experience with Azure ML services will be an added advantage.
Personal Attributes
Articulate and clear in communications to mixed audiences: in writing, through presentations, and one-to-one.
Ability to present highly technical concepts and ideas in a business-friendly language.
Ability to effectively prioritise and execute tasks in a high-pressure environment.
Calm and adaptable in the face of ambiguity and in a fast-paced, quick-changing environment
Extensive experience working in a team-oriented, collaborative environment as well as working independently.
Comfortable with the multi-project, multi-tasking lifestyle of a consulting Data Architect.
Excellent interpersonal skills with teams and building trust with clients
Ability to support and work with cross-functional teams in a dynamic environment.
A passion for achieving business transformation; the ability to energise and excite those you work with.
Initiative; the ability to work flexibly in a team, working comfortably without direct supervision.
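The feed-integration requirement above (structured XML/JSON and flat CSV/TXT schemas arriving over API/SFTP) can be sketched minimally in Python. The payloads and field names here are hypothetical; a real pipeline would pull them from the delivery mechanism and land them via Azure Data Factory or equivalent.

```python
import csv
import io
import json

# Hypothetical feed payloads; real feeds would arrive via API/SFTP.
json_feed = '[{"id": 1, "value": 10}, {"id": 2, "value": 20}]'
csv_feed = "id,value\n3,30\n4,40\n"

def ingest_json(payload):
    """Normalise a structured (JSON) feed into (id, value) records."""
    return [(int(r["id"]), int(r["value"])) for r in json.loads(payload)]

def ingest_csv(payload):
    """Normalise a flat (CSV) feed into the same record shape."""
    reader = csv.DictReader(io.StringIO(payload))
    return [(int(r["id"]), int(r["value"])) for r in reader]

# Both feeds converge on one target schema before loading.
records = ingest_json(json_feed) + ingest_csv(csv_feed)
print(records)  # [(1, 10), (2, 20), (3, 30), (4, 40)]
```

The design point is that each source-specific parser converges on one target schema, so downstream loading and auditing stay uniform.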
Work at client location as Tableau Squad BA / Tech lead gathering visualization requirements and building Business Insights and Dashboards
Roles And Responsibilities
- Work with Business stakeholders to gather Dashboarding/reporting requirements
- Document user stories
- High-level and detailed design for the Tableau application
- Suggest visualization options and collaborate on prototyping of Dashboards
- Actively collaborate with customers and colleagues to ensure delivery excellence
- Display a growth mindset by proactively seeking feedback
- Technical lead for developers guiding them with complex set analysis expressions and Dashboard development
- Experience in reporting requirement analysis
- Strong hands-on experience in Tableau design and development
- Experience in set analysis, storytelling, data load scripting and security setup of Tableau reports
- Experience in working with large-scale RDBMS (Oracle/Teradata preferred)
- Working knowledge of QMC
- Good SQL knowledge
- Ability to present analysis and technical information to a non-technical audience
- Ability to work independently and collaboratively, as part of a team
- Excellent communication skills
- Ability to create documented standards and procedures for others to follow
• Responsible for designing, deploying, and maintaining an analytics environment that processes data at scale
• Contribute to the design, configuration, deployment, and documentation of components that manage data ingestion, real-time streaming, batch processing, data extraction, transformation, enrichment, and loading of data into a variety of cloud data platforms, including AWS and Microsoft Azure
• Identify gaps and improve the existing platform to improve quality, robustness, maintainability, and speed
• Evaluate new and upcoming big data solutions and make recommendations for adoption to extend our platform to meet advanced analytics use cases, such as predictive modeling and recommendation engines
• Data modelling and data warehousing at cloud scale using cloud-native solutions.
• Perform development, QA, and DevOps roles as needed to ensure end-to-end responsibility for solutions
• Experience building, maintaining, and improving data models, processing pipelines, and routing in large-scale environments
• Fluency in common query languages, API development, data transformation, and integration of data streams
• Strong experience with large dataset platforms (e.g. Amazon EMR, Amazon Redshift, AWS Lambda & Fargate, Amazon Athena, Azure SQL Database, Azure Database for PostgreSQL, Azure Cosmos DB, Databricks)
• Fluency in multiple programming languages, such as Python, Shell Scripting, SQL, Java, or similar languages and tools appropriate for large scale data processing.
• Experience with acquiring data from varied sources such as: API, data queues, flat-file, remote databases
• Understanding of traditional Data Warehouse components (e.g. ETL, Business Intelligence Tools)
• Creativity to go beyond current tools to deliver the best solution to the problem
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions using Azure Synapse or Azure SQL Data Warehouse, and Spark on Azure (available in HDInsight and Databricks)
- Experience with Azure Analysis Services
- Experience in Power BI
- Experience with third-party solutions like Attunity/StreamSets, Informatica
- Experience with pre-sales activities (responding to RFPs, executing quick POCs)
- Capacity Planning and Performance Tuning on Azure Stack and Spark.
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.
Skills required: (experience with at least most of these)
1. Experience with Big Data tools-Hadoop, Spark, Apache Beam, Kafka etc.
2. Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
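Workflow management with a tool like Apache Airflow comes down to declaring tasks and their dependencies as a DAG, which the scheduler then runs in dependency order. As a conceptual sketch only (no Airflow API involved; the task names are invented), Python's standard-library graphlib computes the same kind of ordering:

```python
from graphlib import TopologicalSorter

# Hypothetical task dependencies, in the spirit of an Airflow DAG:
# extract feeds both transform and validate; load waits for both.
dag = {
    "transform": {"extract"},
    "validate": {"extract"},
    "load": {"transform", "validate"},
}

# static_order() yields tasks so every dependency precedes its dependents.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

Airflow adds scheduling, retries, and monitoring on top, but the underlying contract is this topological ordering.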
Strong knowledge of Power BI (DAX + Power Query + Power BI Service + Power BI
Desktop visualisations) and Azure data storage services.
Should have experience with Power BI mobile dashboards.
Strong knowledge of SQL.
Good knowledge of DWH concepts.
Work as an independent contributor at the client location.
Implement access control and enforce the required security.
Candidate must have very good communication skills.
- Creating, designing and developing data models
- Prepare plans for all ETL (Extract/Transformation/Load) procedures and architectures
- Validating results and creating business reports
- Monitoring and tuning data loads and queries
- Develop and prepare a schedule for a new data warehouse
- Analyze large databases and recommend appropriate optimization for the same
- Administer all requirements and design various functional specifications for data
- Provide support to the Software Development Life cycle
- Prepare various code designs and ensure efficient implementation of the same
- Evaluate all codes and ensure the quality of all project deliverables
- Monitor data warehouse work and provide subject matter expertise
- Hands-on BI practices, data structures, data modeling, SQL skills
- Minimum 1 year of experience in PySpark
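The PySpark requirement above is essentially grouped aggregation over large datasets. As a plain-Python sketch of the same idea (PySpark itself is not shown here; the rows and column names are hypothetical), the logic one would express as a distributed groupBy/sum looks like:

```python
from collections import defaultdict

# Hypothetical rows; in PySpark this would be something like
# df.groupBy("region").sum("amount") on a distributed DataFrame.
rows = [("APAC", 120.0), ("EMEA", 80.0), ("APAC", 50.0)]

totals = defaultdict(float)
for region, amount in rows:  # map each row to its key, reduce by summing
    totals[region] += amount

print(dict(totals))  # {'APAC': 170.0, 'EMEA': 80.0}
```

PySpark distributes exactly this map-and-reduce-by-key pattern across a cluster, which is what makes it suitable for the large-scale loads described above.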
Responsible for planning, connecting, designing, scheduling, and deploying data warehouse systems. Develops, monitors, and maintains ETL processes, reporting applications, and data warehouse design.
Role and Responsibility
· Plan, create, coordinate, and deploy data warehouses.
· Design the end-user interface.
· Create best practices for data loading and extraction.
· Develop data architecture, data modeling, and ETL mapping solutions within a structured data warehouse environment.
· Develop reporting applications and maintain data warehouse consistency.
· Facilitate requirements gathering using expert listening skills and develop unique simple solutions to meet the immediate and long-term needs of business customers.
· Supervise design throughout implementation process.
· Design and build cubes, writing custom scripts where required.
· Develop and implement ETL routines according to the DWH design and architecture.
· Support the development and validation required through the lifecycle of the DWH and Business Intelligence systems, maintain user connectivity, and provide adequate security for data warehouse.
· Monitor the DWH and BI systems' performance and integrity, and provide corrective and preventative maintenance as required.
· Manage multiple projects at once.
DESIRABLE SKILL SET
· Experience with technologies such as MySQL, MongoDB, and SQL Server 2008, as well as SSIS and stored procedures
· Exceptional experience developing code, testing for quality assurance, administering RDBMSs, and monitoring databases
· High proficiency in dimensional modeling techniques and their applications
· Strong analytical, consultative, and communication skills, as well as the ability to exercise good judgment and work with both technical and business personnel
· Several years working experience with Tableau, MicroStrategy, Information Builders, and other reporting and analytical tools
· Working knowledge of SAS and R code used in data processing and modeling tasks
· Strong experience with Hadoop, Impala, Pig, Hive, YARN, and other “big data” technologies such as AWS Redshift or Google BigQuery
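The dimensional modelling proficiency listed above centres on the star schema: a fact table of measures joined to dimension tables of descriptive attributes. A minimal, hypothetical sketch using Python's built-in sqlite3 (table and column names invented for the example):

```python
import sqlite3

# Minimal star-schema sketch: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO fact_sales  VALUES (1, 9.5), (1, 2.5), (2, 4.0);
""")

# Typical dimensional query: aggregate facts, label them via the dimension.
rows = conn.execute("""
    SELECT d.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.name ORDER BY d.name
""").fetchall()
print(rows)  # [('gadget', 4.0), ('widget', 12.0)]
```

Real warehouses add surrogate keys, slowly changing dimensions, and conformed dimensions on top, but every reporting query reduces to this fact-to-dimension join pattern.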