Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Good at ELT architecture: business-rules processing and data extraction from a Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
Technology Skills (Good to Have): Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub. Experience migrating on-premises data warehouses to data platforms on the Azure cloud. Designing and implementing data engineering, ingestion, and transformation functions in Azure Synapse or Azure SQL Data Warehouse. Note that Spark on Azure is available through HDInsight and Databricks.
Primary Responsibilities:
• Develop and maintain applications with PySpark
• Contribute to the overall design and architecture of the application developed and deployed
• Performance tuning with respect to executor sizing and other environment parameters, code optimization, partition tuning, etc.
• Interact with business users to understand requirements and troubleshoot issues
• Implement projects based on functional specifications
Must-Have Skills:
• Good experience in PySpark, including DataFrame core functions and Spark SQL
• Good experience with SQL databases; able to write queries of fair complexity
• Excellent experience in Big Data programming for data transformation and aggregation
• Good at ELT architecture: business-rules processing and data extraction from a Data Lake into data streams for business consumption
• Good customer communication
• Good analytical skills
Should have experience in Big Data development. Strong experience in Scala/Spark. End client: Sapient. Mode of hiring: FTE. Notice period should be less than 30 days.
Who are we and what do we do? Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in Restaurants, Hospitality and other industry verticals.
What does the team look like? Coming out of silos and working with complete visibility of the end-to-end product, focusing on commerce. Flexibility to work on any area of your choice: machine learning, business analytics, or development in tech terms - building and managing a robust insight-based platform, efficient data pipelines/streaming systems, and integrated user interfaces. Be part of a small, high-impact team, with the opportunity to take on bigger roles and responsibilities at an early stage. Be part of redesigning the whole Business Intelligence product to make it more optimized and powerful and to solve business problems for commerce. Good opportunities to use your Data Science and Machine/Deep Learning skills on practical problems.
What will you be doing? Responsible for design, architecture, and delivery of a feature or component/product with the highest quality. Driving innovation in the platform constantly and remaining ahead of the curve. Collaborating effectively with cross-functional teams to deliver end-to-end products and features. Demonstrating the ability to multi-task and re-prioritize responsibilities based on changing requirements. Providing functional, design, and code reviews in related areas of expertise within the team and across teams. Analyzing and extracting relevant information from large amounts of data to help automate and optimize key processes. Designing, developing, evaluating and deploying innovative and highly scalable models for predictive learning. Researching and implementing novel machine learning and statistical approaches.
Working closely with software engineering teams to drive real-time model implementations and new feature creation. Working closely with business owners and operations staff to optimize various business operations. Establishing scalable, efficient, automated processes for large-scale data analysis, model development, model validation and model implementation. Mentoring/coaching engineers to facilitate their development and providing technical leadership to them. Rising above detail to see broader issues and implications for the whole product/team.
What do we expect from you? A BS/MS/PhD in Computer Science, Machine Learning, Operational Research, Statistics or another highly quantitative field. 8+ years of hands-on experience in applied Machine Learning, Big Data and products. An ideal candidate should have: a strong grasp of the depth and breadth of machine learning, data mining and data analytics concepts as well as their execution; strong problem-solving ability; strong knowledge of scientific programming in scripting languages like Python; experience designing and implementing information retrieval, web mining, neural network and other classification algorithms; and big-picture thinking that can take broad visions and concepts, develop structured plans, actions and measurable metrics, and then execute those plans.
Expertise in Python as well as Spark ML, scikit-learn, TensorFlow or similar machine/deep learning open-source software libraries. Superior organization, communication, interpersonal and leadership skills. Must be a proven performer and team player who enjoys challenging assignments in a high-energy, fast-growing startup workplace. Must be a self-starter who can work well with minimal guidance and in a fluid environment. Must be excited by the challenges surrounding the development of massively scalable and distributed systems. Agility and ability to adapt quickly to changing requirements, scope and priorities.
Nice-to-have skills: Experience in the online commerce (Retail/CPG/Restaurants/Hospitality) domain. Experience building products that are powered by data and insights. Experience in machine learning deployment and hands-on knowledge of Kubeflow/MLflow. Experience working on massively large-scale data. Experience in data structures and algorithms.
Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in Restaurants, Hospitality and other industry verticals. Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand their data sources, acquire data and drive product success by delivering insights based on customer needs.
Primary Responsibilities:
- Lead and deliver the complete application lifecycle (design, development, deployment, and support) for actionable BI and Advanced Analytics solutions
- Design and develop data models and ETL processes for structured and unstructured data distributed across multiple cloud platforms
- Develop and deliver solutions with data streaming capabilities for large volumes of data
- Design, code and maintain parts of the product and drive customer adoption
- Build a data acquisition strategy to onboard customer data with speed and accuracy
- Work both independently and with team members to develop, refine, implement, and scale ETL processes
- Provide ongoing support and maintenance for live clients' data and analytics needs
- Define the data automation architecture to drive self-service data load capabilities
Required Qualifications:
- Bachelor's/Master's/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning or related disciplines
- 10+ years of experience guiding the development and implementation of data architecture in structured, unstructured, and semi-structured data environments
- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning
- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop
- Strong hands-on programming and scripting skills for the Big Data ecosystem (Python, Scala, Spark, etc.)
- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc.
- Familiarity with cloud-based platforms like AWS, Azure or GCP
- Experience with cloud data warehouses like Redshift and Snowflake
- Proficient in writing complex SQL queries
- Excellent communication skills and prior experience working closely with customers
- Data savvy: loves to understand large data trends and is obsessed with data analysis
- Desire to learn about, explore, and invent new tools for solving real-world problems using data
Desired Qualifications:
- Cloud computing experience, Amazon Web Services (AWS)
- Prior experience with data warehousing concepts and multi-dimensional data models
- Full command of analytics concepts including dimensions, KPIs, reports and dashboards
- Prior experience managing client implementations of analytics projects
- Knowledge and prior experience of machine learning tools
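The batch ETL pipeline work described above can be sketched in miniature as an extract-transform-load sequence. This is a hedged, standard-library-only illustration: the inline CSV source and the in-memory SQLite sink are invented stand-ins for the real systems a role like this would touch (Kafka topics, cloud warehouses, Airflow-orchestrated tasks).

```python
# Minimal extract-transform-load sketch. Source data, table name, and the
# quarantine-bad-records policy are all illustrative assumptions.
import csv
import io
import sqlite3

RAW = "event_id,amount\n1,10.5\n2,bad\n3,4.0\n"   # 'bad' row will be rejected

def extract(text):
    """Read raw CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast fields to typed tuples; quarantine rows that fail validation."""
    clean, rejects = [], []
    for r in rows:
        try:
            clean.append((int(r["event_id"]), float(r["amount"])))
        except ValueError:
            rejects.append(r)   # keep bad records for inspection, don't crash
    return clean, rejects

def load(conn, rows):
    """Idempotent load: re-running replaces rather than duplicates rows."""
    conn.execute("CREATE TABLE IF NOT EXISTS events "
                 "(event_id INTEGER PRIMARY KEY, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO events VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
clean, rejects = transform(extract(RAW))
load(conn, clean)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM events").fetchone())
```

The `INSERT OR REPLACE` keyed on `event_id` is what makes the load step safe to re-run, the property workflow managers like Airflow assume when they retry a failed task.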
DevOps Engineer - technology skill sets required for a matching profile:
- 4 to 8 years of experience in a DevOps role, preferably with startup experience
- Should be hands-on with writing stable automation and monitoring scripts or cron jobs; must have sound knowledge of industry standards around monitoring, alerting, high availability, auto-scaling, etc.
- Exhaustive experience with the cloud, especially AWS and its ecosystem
- Sound knowledge of deploying and troubleshooting all layers of an application: network, frontend, backend, and databases
- Must have experience with containers, Kubernetes, Istio, and microservices
- Hands-on with tools for centralized logging (ELK), infrastructure monitoring, and alerting
- Must have set up a CI/CD pipeline with a good amount of validation and automation
- Desire to work in fast-paced startups
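The "monitoring scripts" and "alerting" items above typically reduce to threshold checks run on a schedule. A minimal sketch, with the metric, threshold, and message format all being hypothetical choices rather than anything the posting specifies:

```python
# Threshold-based alerting check of the kind a cron job might run.
# The 5% error-rate threshold is an invented example value.
from datetime import datetime, timezone

ERROR_RATE_THRESHOLD = 0.05  # alert when more than 5% of requests fail

def check_error_rate(total_requests: int, failed_requests: int) -> list:
    """Return alert messages when the failure rate breaches the threshold."""
    alerts = []
    if total_requests and failed_requests / total_requests > ERROR_RATE_THRESHOLD:
        ts = datetime.now(timezone.utc).isoformat()
        alerts.append(f"[{ts}] ALERT: error rate "
                      f"{failed_requests / total_requests:.1%} exceeds "
                      f"{ERROR_RATE_THRESHOLD:.0%}")
    return alerts

print(check_error_rate(1000, 80))   # breaches threshold
print(check_error_rate(1000, 10))   # healthy
```

In practice the alert list would be shipped to a pager or chat integration rather than printed, but the shape (measure, compare, emit) is the same.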
Research and develop statistical learning models for data analysis. Collaborate with product management and engineering departments to understand company needs and devise possible solutions. Keep up to date with the latest technology trends. Communicate results and ideas to key decision makers. Implement new statistical or other mathematical methodologies as needed for specific models or analyses. Optimize joint development efforts through appropriate database use and project design.
Qualifications/Requirements: Master's or PhD in Computer Science, Electrical Engineering, Statistics, Applied Math or an equivalent field with a strong mathematical background. Excellent understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, etc. 3+ years of experience building data-science-driven solutions, including data collection, feature selection, model training, and post-deployment validation. Strong hands-on coding skills (preferably in Python) for processing large-scale data sets and developing machine learning models. Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow. Good team worker with excellent communication skills (written, verbal and presentation).
Desired Experience: Experience with AWS, S3, Flink, Spark, Kafka, Elasticsearch. Knowledge of and experience with NLP technology. Previous work in a start-up environment.
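Of the techniques listed above, anomaly detection has perhaps the simplest illustrative form: flag points whose z-score magnitude exceeds a cutoff. A hedged, standard-library-only sketch (real work would use the NumPy/scikit-learn stack the posting names, and more robust statistics than mean and standard deviation):

```python
# Z-score anomaly detection: flag values more than k standard deviations
# from the mean. The sample data and cutoff are invented for illustration.
from statistics import mean, stdev

def zscore_anomalies(values, k=3.0):
    """Return (index, value) pairs whose |z-score| exceeds k."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # constant series: nothing can be anomalous
    return [(i, v) for i, v in enumerate(values) if abs(v - mu) / sigma > k]

data = [10, 11, 9, 10, 12, 11, 10, 95]  # 95 is an obvious outlier
print(zscore_anomalies(data, k=2.0))    # [(7, 95)]
```

The known weakness of this approach, that a large outlier inflates the very statistics used to detect it, is why production systems usually prefer median-based variants or model-based detectors.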
Developing telemetry software to connect Junos devices to the cloud. Fast prototyping and laying the software foundation for product solutions. Moving prototype solutions to a production cloud multi-tenant SaaS solution. Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources. Building analytics tools that use the data pipeline to provide significant insights into customer acquisition, operational efficiency and other key business performance metrics. Working with partners, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs. Creating data tools for the analytics and data science teams that assist them in building and optimizing our product into an innovative industry leader. Working with data and analytics specialists to strive for greater functionality in our data systems.
Qualifications and Desired Experience: Master's in Computer Science, Electrical Engineering, Statistics, Applied Math or an equivalent field with a strong mathematical background. 5+ years of experience building data pipelines for data-science-driven solutions. Strong hands-on coding skills (preferably in Python) for processing large-scale data sets and developing machine learning models. Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow. Good team worker with excellent interpersonal skills (written, verbal and presentation). Create and maintain optimal data pipeline architecture; assemble large, sophisticated data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Experience with AWS, S3, Flink, Spark, Kafka, Elasticsearch. Previous work in a start-up environment. 3+ years of experience building data pipelines for data-science-driven solutions. Master's in Computer Science, Electrical Engineering, Statistics, Applied Math or an equivalent field with a strong mathematical background.
We are looking for a candidate with 9+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience with the following software/tools: big data tools such as Hadoop, Spark, and Kafka; relational SQL and NoSQL databases, including Postgres and Cassandra; data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow; AWS cloud services (EC2, EMR, RDS, Redshift); stream-processing systems such as Storm and Spark Streaming; and object-oriented/functional scripting languages such as Python, Java, C++, and Scala. Strong hands-on coding skills (preferably in Python) for processing large-scale data sets and developing machine learning models. Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow. Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases. Experience building and optimizing 'big data' data pipelines, architectures and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement. Strong analytic skills for working with unstructured datasets. Ability to build processes supporting data transformation, data structures, metadata, dependency and workload management.
A successful history of manipulating, processing and extracting value from large, disconnected datasets. Proven understanding of message queuing, stream processing, and highly scalable 'big data' data stores. Strong project management and interpersonal skills. Experience supporting and working with cross-functional teams in a dynamic environment.
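The "advanced working SQL knowledge" and "query authoring" requirements above amount to writing queries that answer a business question in one pass: joins, grouped aggregates, and filters on the aggregates. A hedged sketch using the standard-library sqlite3 driver, with table names and data invented for the example:

```python
# One query answering "which customers spent over the cutoff, and how much?":
# grouped aggregation plus a HAVING filter on the aggregate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'acme', 120.0), (2, 'acme', 80.0), (3, 'globex', 40.0);
""")

rows = conn.execute("""
    SELECT customer, COUNT(*) AS n_orders, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    HAVING SUM(amount) > 50        -- filter on the aggregate, not the rows
    ORDER BY total DESC
""").fetchall()
print(rows)   # [('acme', 2, 200.0)]
```

The same shape scales from SQLite to the warehouse engines the postings mention (Redshift, Snowflake, Teradata), which is why query authoring is usually tested independently of any one database.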
At HypersoniX, our platform technology aims to solve recurring and persistent problems in the data platform domain. We've established ourselves as a leading developer of innovative software solutions. We're looking for a highly skilled data platform engineer to join our program and platform design team. Our ideal candidate will have expert knowledge of software development processes and solid experience designing, developing, evaluating and troubleshooting data platforms and data-driven applications. If finding issues and fixing them with beautiful, meticulous code are among the talents that make you tick, we'd like to hear from you.
Objectives of this Role:
• Design and develop creative and innovative frameworks/components for data platforms as we continue to experience dramatic growth in the usage and visibility of our products
• Work closely with data scientists and product owners to arrive at better design and development approaches so the application and platform scale to serve their needs
• Examine existing systems, identifying flaws and creating solutions to improve service uptime and time-to-resolve through monitoring and automated remediation
• Plan and execute the full software development life cycle (SDLC) for each assigned project, adhering to company standards and expectations
Daily and Monthly Responsibilities:
• Design and build tools/frameworks/scripts to automate development, testing, deployment, management and monitoring of the company's 24x7 services and products
• Plan and scale distributed software and applications, applying synchronous and asynchronous design patterns; write code; and deliver with urgency and quality
• Collaborate with the global team, producing project work plans and analyzing the efficiency and feasibility of project operations
• Manage large volumes of data and process them in real-time and batch orientation as needed, while leveraging the global technology stack and making localized improvements
• Track, document, and maintain software system functionality, both internally and externally, leveraging opportunities to improve engineering productivity
• Code review, Git operations, CI/CD; mentor and assign tasks to junior team members
Skills and Qualifications:
• Bachelor's degree in software engineering or information technology
• 5-7 years' experience engineering software and networking platforms
• 5+ years' professional experience with Python, Java or Scala
• Strong experience in API development and API integration
• Proven knowledge of data migration, platform migration, CI/CD processes, and orchestration workflows like Airflow, Luigi or Azkaban
• Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Hadoop and NoSQL platforms
• Prior experience in data warehouse and OLAP design and deployment
• Proven ability to document design processes, including development, tests, analytics, and troubleshooting
• Experience with rapid development cycles in a web-based/multi-cloud environment
• Strong scripting and test automation abilities
Good-to-Have Qualifications:
• Working knowledge of relational databases as well as ORM and SQL technologies
• Proficiency with multi-OS environments, Docker and Kubernetes
• Proven experience designing interactive applications and large-scale platforms
• Desire to continue to grow professional capabilities through ongoing training and educational opportunities
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's degree in Engineering, Computer Science, CIS, or a related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's degree in Computer Science, CIS, or a related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working within an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering:
• 2 years of experience in Hadoop or any cloud Big Data components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent cloud Big Data components (specific to the Data Engineering role)
BI Engineering:
• Expertise in MicroStrategy/Power BI/SQL, scripting, Teradata or an equivalent RDBMS, Hadoop (OLAP on Hadoop), dashboard development, and mobile development (specific to the BI Engineering role)
Platform Engineering:
• 2 years of experience in Hadoop, NoSQL, RDBMS or any cloud Big Data components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, Kafka or equivalent cloud Big Data components (specific to the Platform Engineering role)
Lowe's is an equal opportunity employer and
administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Bachelor's or Master's degree in Computer Science or an equivalent area. 10 to 20 years of experience in software development. Hands-on experience designing and building B2B or B2C products. 3+ years architecting SaaS/web-based customer-facing products and leading engineering teams as a software/technical architect. Experience with engineering practices such as code refactoring, microservices, design and enterprise integration patterns, test- and design-driven development, continuous integration, building highly scalable applications, and application and infrastructure security. Strong cloud infrastructure experience with AWS and/or Azure. Experience building event-driven systems and working with message queues/topics. Broad working experience across multiple programming languages and frameworks, with in-depth experience in one or more of the following: .NET, Java, Scala or Go. Hands-on experience with relational databases like SQL Server and PostgreSQL and document stores like Elasticsearch or MongoDB. Hands-on experience with Big Data processing technologies like Hadoop/Spark is a plus. Hands-on experience with container technologies like Docker and Kubernetes. Knowledge of the Agile software development process.
Specification: Location: Bangalore. Designation: Senior Engineer/Tech Lead (the designation will be decided based on skills, CTC, etc.). Qualification: Bachelor's or Master's degree in Computer Science or an equivalent area. 6-12 years of experience in software development building complex enterprise systems that involve large-scale data processing. Must have very good experience in one of the following languages: Java, Scala, or C#. Hands-on experience with databases like SQL Server, PostgreSQL or similar is required. Knowledge of document stores like Elasticsearch or MongoDB is desirable. Hands-on experience with Big Data processing technologies like Hadoop/Spark is required. Strong cloud infrastructure experience with AWS and/or Azure. Experience with container technologies like Docker and Kubernetes. Experience with engineering practices such as code refactoring, design patterns, design-driven development, continuous integration, building highly scalable applications, and application security. Knowledge of the Agile software development process.
What you'll do: As a Sr. Engineer or Technical Lead, you will lead software development projects in a hands-on manner. You will spend about 70% of your time writing and reviewing code and creating software designs. Your expertise will expand into database design, core middle-tier modules, performance tuning, cloud technologies, DevOps and continuous delivery domains.
Minimum of 4 years' experience working on DW/ETL projects and expert hands-on working knowledge of ETL tools. Experience with data management and data warehouse development: star schemas, data vaults, RDBMS, and ODS; change data capture; slowly changing dimensions; data governance; data quality; partitioning and tuning; data stewardship; survivorship; fuzzy matching; concurrency; vertical and horizontal scaling; ELT and ETL; Spark, Hadoop, MPP, RDBMS. Experience with DevOps architecture, implementation and operation. Hands-on working knowledge of Unix/Linux. Ability to build complex SQL queries; expert SQL and data analysis skills, with the ability to debug and fix data issues. Complex ETL program design and coding. Experience in shell scripting and batch scripting. Good communication (oral and written) and interpersonal skills. Work closely with business teams to understand their business needs and participate in requirements gathering, while creating artifacts and seeking business approval. Help the business define new requirements; participate in end-user meetings to derive and define the business requirements; propose cost-effective solutions for data analytics; and familiarize the team with the customer's needs, specifications, design targets and techniques to support task performance and delivery. Propose good designs and solutions and adhere to the best design and standard practices. Review and propose industry-best tools and technologies for ever-changing business rules and data sets. Conduct proofs of concept (POCs) with new tools and technologies to derive convincing benchmarks. Prepare the plan, design and document the architecture, high-level topology design, and functional design; review the same with customer IT managers; and provide detailed knowledge to the development team to familiarize them with customer requirements, specifications, design standards and techniques.
Review code developed by other programmers; mentor, guide and monitor their work, ensuring adherence to programming and documentation policies. Work with functional business analysts to ensure that application programs function as defined. Capture user feedback/comments on the delivered systems and document them for the client's and project manager's review. Review all deliverables before final delivery to the client for quality adherence.
Technologies (select based on requirement):
Databases - Oracle, Teradata, Postgres, SQL Server, Big Data, Snowflake, or Redshift
Tools - Talend, Informatica, SSIS, Matillion, Glue, or Azure Data Factory; utilities for bulk loading and extracting
Languages - SQL, PL/SQL, T-SQL, Python, Java, or Scala; J/ODBC; JSON
Data virtualization and data services development; service delivery - REST, web services; data virtualization delivery - Denodo
ELT, ETL; cloud certification (Azure); complex SQL queries; data ingestion; data modeling (domain); consumption (RDBMS)
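Among the warehousing concepts this posting lists, slowly changing dimensions are the most mechanical: a Type-2 dimension expires the current row and inserts a new current version whenever a tracked attribute changes. A hedged sketch using standard-library sqlite3; the table, columns, and sentinel end date are invented for the example:

```python
# Type-2 slowly changing dimension (SCD2): on change, close out the current
# row and insert a new current version, preserving full history.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER, city TEXT,
        valid_from TEXT, valid_to TEXT, is_current INTEGER)
""")

def scd2_upsert(conn, customer_id, city, as_of):
    cur = conn.execute(
        "SELECT city FROM dim_customer WHERE customer_id=? AND is_current=1",
        (customer_id,)).fetchone()
    if cur and cur[0] == city:
        return  # no change: keep the current row as-is
    if cur:    # expire the current version
        conn.execute(
            "UPDATE dim_customer SET valid_to=?, is_current=0 "
            "WHERE customer_id=? AND is_current=1", (as_of, customer_id))
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, '9999-12-31', 1)",
        (customer_id, city, as_of))

scd2_upsert(conn, 42, "Pune", "2023-01-01")
scd2_upsert(conn, 42, "Bangalore", "2023-06-01")   # creates a new version
history = conn.execute(
    "SELECT city, is_current FROM dim_customer ORDER BY valid_from").fetchall()
print(history)   # [('Pune', 0), ('Bangalore', 1)]
```

ETL tools like Talend, Informatica, and SSIS ship this pattern as a built-in component; the sketch shows what that component does underneath.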
About slice: slice is a fintech startup focused on India's young population. We aim to build a smart, simple, and transparent platform to redesign the financial experience for millennials and bring success and happiness to people's lives. Growing with the new generation is what we dream about and all that we want. We believe that personalization combined with an extreme focus on superior customer service is the key to building long-lasting relationships with young people.
About the team/role: In this role, you will have the opportunity to create a significant impact on our business and, most importantly, our customers through your technical expertise in data, as we take on challenges that can reshape the financial experience for the next generation. If you are a highly motivated team player with a knack for solving problems through technology, then we have the perfect job for you.
What you'll do: Work closely with the Engineering and Analytics teams to assist in schema design, database normalization, query optimization, etc. Work with AWS cloud services: S3, EMR, Glue, RDS. Create new and improve existing infrastructure for ETL workflows from a wide variety of data sources using SQL, NoSQL and AWS big data technologies. Manage and monitor the performance, capacity and security of database systems, and regularly perform server tuning and maintenance activities. Debug and troubleshoot database errors. Identify, design and implement internal process improvements: optimizing data delivery, re-designing infrastructure for greater scalability, and data archival.
Qualifications: 2+ years' experience working as a Data Engineer. Experience with a scripting language, preferably Python. Experience with Spark and Hadoop technologies. Experience with AWS big data tools is a plus. Experience with SQL and NoSQL database technologies like Redshift, MongoDB, Postgres/MySQL, BigQuery, Cassandra. Experience with graph databases (Neo4j and OrientDB) and search databases (Elasticsearch) is a plus. Experience in handling ETL jobs.
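The query-optimization work mentioned above usually starts with reading the query plan before and after adding an index. A hedged sketch using SQLite's `EXPLAIN QUERY PLAN` via the standard-library driver; the table, column, and index names are invented, and production engines (Postgres, MySQL, Redshift) expose the same idea through their own `EXPLAIN` variants:

```python
# Verify that an index changes a full-table scan into an index search
# by inspecting EXPLAIN QUERY PLAN output before and after.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO txns VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(1000)])

def plan(conn):
    """Return the human-readable detail of the first query-plan row."""
    row = conn.execute(
        "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM txns WHERE user_id = 7"
    ).fetchone()
    return row[-1]   # the last column is the plan detail string

before = plan(conn)                                   # full table scan
conn.execute("CREATE INDEX idx_txns_user ON txns (user_id)")
after = plan(conn)                                    # search using the index
print(before)
print(after)
```

The before/after comparison is the habit worth noting: an index only helps if the planner actually uses it, and the plan output is how you check.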
What is Contentstack? Contentstack combines the best Content Management System (CMS) and Digital Experience Platform (DXP) technology. It enables enterprises to manage content across all digital channels and create inimitable digital experiences. The Contentstack platform was designed from the ground up for large-scale, complex, and mission-critical deployments. Recently recognized as a Gartner Peer Insights Customers' Choice for WCM, Contentstack is the preferred API-first, headless CMS for enterprises across the globe.
What are we looking for? Contentstack is looking for a Data Engineer.
Roles and responsibilities: Primary responsibilities include designing and scaling ETL pipelines and ensuring data sanity. Collaborate with multiple groups and improve operational efficiency. Develop, construct, test and maintain architectures. Align architecture with business requirements. Identify ways to improve data reliability, efficiency and quality. Optimize database systems for performance and reliability. Implement model workflows to prepare/analyze/learn/predict and supply the outcomes through API contract(s). Establish programming patterns, document components and provide infrastructure for analysis and execution. Set up practices for data reporting and continuous monitoring. Provide excellence, be open to new ideas and contribute to communities. Industrialize the data science models and embed intelligence in product and business applications. Find hidden patterns using data. Prepare data for predictive and prescriptive modeling. Deploy sophisticated analytics programs, machine learning and statistical methods.
Mandatory skills: 3+ years of relevant work experience as a Data Engineer. Working experience with HDFS, Bigtable, MapReduce, Spark, data warehouses, ETL, etc. Advanced proficiency in Java, Scala, SQL, NoSQL. Strong knowledge of Shell/Perl/R/Python/Ruby. Proficiency in statistical procedures, experiments and machine learning techniques.
Exceptional problem-solving abilities. Job type - full-time employment. Job location - Mumbai/Pune/Bangalore/Remote. Work schedule - Monday to Friday, 10am to 7pm. Minimum qualification - graduate. Years of experience - 3+ years. Number of positions - 2. Travel opportunities - on a need basis within/outside India; the candidate should have a valid passport.
What really gets us excited about you? Experience working with product-based start-up companies. Knowledge of working with SaaS products.
What do we offer? Interesting Work | We hire curious trendspotters and brave trendsetters. This is NOT your boring, routine, cushy, rest-and-vest corporate job. This is the "challenge yourself" role where you learn something new every day, never stop growing, and have fun while you're doing it. Tribe Vibe | We are more than colleagues; we are a tribe. We have a strict "no a**hole policy" and enforce it diligently. This means we spend time together - with spontaneous office happy hours, organized outings, and community volunteer opportunities. We are a diverse and distributed team, but we like to stay connected. Bragging Rights | We are dreamers and dream makers, hustlers, and honeybadgers. Our efforts pay off and we work with the most prestigious brands, from big-name retailers to airlines to professional sports teams. Your contribution will make an impact with many of the most recognizable names in almost every industry, including Chase, The Miami HEAT, Cisco, Shell, Express, Riot Games, IcelandAir, Morningstar, and many more! A Seat at the Table | One Team One Dream is one of our values, and it shows. We don't believe in artificial hierarchies. If you're part of the tribe, you get a seat at the table. This includes unfiltered access to our C-Suite and regular updates about the business and its performance. Which, btw, is through the roof, so it's a great time to be joining…
Technology Skills: Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub. Experience migrating on-premises data warehouses to data platforms on the Azure cloud. Designing and implementing data engineering, ingestion, and transformation functions.
Good to Have: Experience with Azure Analysis Services. Experience in Power BI. Experience with third-party solutions like Attunity/StreamSets and Informatica. Experience with pre-sales activities (responding to RFPs, executing quick POCs). Capacity planning and performance tuning on the Azure stack and Spark.