22+ ADF Jobs in India
Apply to 22+ ADF Jobs on CutShort.io. Find your next job, effortlessly. Browse ADF Jobs and apply today!
at Gipfel & Schnell Consultings Pvt Ltd
Data Engineer
Brief Posting Description:
This person will work independently or with a team of data engineers on cloud technology products, projects, and initiatives. They will work with all customers, both internal and external, to make sure all data-related features are implemented in each solution, and will collaborate with business partners and other technical teams across the organization as required to deliver proposed solutions.
Detailed Description:
· Works with Scrum masters, product owners, and others to identify new features for digital products.
· Works with IT leadership and business partners to design features for the cloud data platform.
· Troubleshoots production issues of all levels and severities, and tracks progress from identification through resolution.
· Maintains culture of open communication, collaboration, mutual respect and productive behaviors; participates in the hiring, training, and retention of top tier talent and mentors team members to new and fulfilling career experiences.
· Identifies risks, barriers, efficiencies and opportunities when thinking through development approach; presents possible platform-wide architectural solutions based on facts, data, and best practices.
· Explores all technical options when considering solution, including homegrown coding, third-party sub-systems, enterprise platforms, and existing technology components.
· Actively participates in collaborative effort through all phases of software development life cycle (SDLC), including requirements analysis, technical design, coding, testing, release, and customer technical support.
· Develops technical documentation, such as system context diagrams, design documents, release procedures, and other pertinent artifacts.
· Understands lifecycle of various technology sub-systems that comprise the enterprise data platform (i.e., version, release, roadmap), including current capabilities, compatibilities, limitations, and dependencies; understands and advises of optimal upgrade paths.
· Establishes relationships with key IT, QA, and other corporate partners, and regularly communicates and collaborates accordingly while working on cross-functional projects or production issues.
Job Requirements:
EXPERIENCE:
2 years required, 3-5 years preferred, in a data engineering role.
2 years required, 3-5 years preferred, in Azure data services (Data Factory, Databricks, ADLS, Synapse, SQL DB, etc.).
EDUCATION:
Bachelor’s degree in information technology, computer science, or a data-related field preferred
SKILLS/REQUIREMENTS:
Expertise working with databases and SQL.
Strong working knowledge of Azure Data Factory and Databricks
Strong working knowledge of code management and continuous integrations systems (Azure DevOps or Github preferred)
Strong working knowledge of cloud relational databases (Azure Synapse and Azure SQL preferred)
Familiarity with Agile delivery methodologies
Familiarity with NoSQL databases (such as CosmosDB) preferred.
Any experience with Python, DAX, Azure Logic Apps, Azure Functions, IoT technologies, PowerBI, Power Apps, SSIS, Informatica, Teradata, Oracle DB, and Snowflake preferred but not required.
Ability to multi-task and reprioritize in a dynamic environment.
Outstanding written and verbal communication skills
Working Environment:
General Office – Work is generally performed within an office environment, with standard office equipment. Lighting and temperature are adequate and there are no hazardous or unpleasant conditions caused by noise, dust, etc.
Physical Requirements:
Work is generally sedentary in nature but may require standing and walking for up to 10% of the time.
Mental Requirements:
Employee required to organize and coordinate schedules.
Employee required to analyze and interpret complex data.
Employee required to problem-solve.
Employee required to communicate with the public.
Hiring for Azure Data Engineers.
Location: Bangalore
Employment type: Full-time, permanent
website: www.amazech.com
Qualifications:
B.E./B.Tech/M.E./M.Tech in Computer Science, Information Technology, Electrical, or Electronics Engineering with a good academic background.
Experience and Required Skill Sets:
• Minimum 5 years of hands-on experience with Azure Data Lake, Azure Data Factory, SQL Data Warehouse, Azure Blob, Azure Storage Explorer
• Experience in Data warehouse/analytical systems using Azure Synapse.
• Proficient in creating Azure Data Factory pipelines for ETL processing: copy activity, custom Azure development, Synapse, etc.
• Knowledge of Azure Data Catalog, Event Grid, Service Bus, SQL, and Purview.
• Good technical knowledge in Microsoft SQL Server BI Suite (ETL, Reporting, Analytics, Dashboards) using SSIS, SSAS, SSRS, Power BI
• Experience designing and developing batch and real-time streaming data loads to data warehouse systems
Other Requirements:
A Bachelor's or Master's degree (Engineering or computer-related degree preferred)
Strong understanding of Software Development Life Cycles including Agile/Scrum
Responsibilities:
• Ability to create complex, enterprise-transforming applications that meet and exceed client expectations.
• Responsible for the bottom line, with strong project management abilities and the ability to encourage the team to stick to timelines.
Job Description:
An Azure Data Engineer is responsible for designing, implementing, and maintaining pipelines and ETL/ELT flow solutions on the Azure cloud platform. This role requires a strong understanding of database migration technologies and the ability to deploy and manage database solutions in the Azure cloud environment.
Key Skills:
· Minimum 3 years of experience with data modeling, data warehousing, and building ETL pipelines.
· Must have firm knowledge of SQL, NoSQL, SSIS, SSRS, and ETL/ELT concepts.
· Should have hands-on experience in Databricks, ADF (Azure Data Factory), ADLS, Cosmos DB.
· Excel in the design, creation, and management of very large datasets
· Detailed knowledge of cloud-based data warehouses, architecture, infrastructure components, ETL, and reporting analytics tools and environments.
· Skilled with writing, tuning, and troubleshooting SQL queries
· Experience with Big Data technologies such as Data storage, Data mining, Data analytics, and Data visualization.
· Should be familiar with programming and be able to write and debug code in any of the common languages, such as Node.js, Python, C#, .NET, or Java.
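As a rough sketch of the extract-transform-load pattern these postings keep referencing, here is a stand-alone Python example (stdlib only; the CSV data, table name, and column names are invented for illustration) that filters and types raw records before loading them into a relational store:

```python
import csv
import io
import sqlite3

# Extract: raw CSV text (in-memory stand-in for a file landed in a data lake)
raw = "order_id,amount,currency\n1,100,INR\n2,,INR\n3,250,USD\n"
reader = csv.DictReader(io.StringIO(raw))

# Transform: drop rows with missing amounts and cast to proper types
rows = [(int(r["order_id"]), float(r["amount"]), r["currency"])
        for r in reader if r["amount"]]

# Load: into a relational table (sqlite3 as a stand-in for Azure SQL)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INT, amount REAL, currency TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
total = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(total)  # 2
```

In a real Azure pipeline the extract step would typically read from ADLS and the load step would target Azure SQL or Synapse, but the shape of the logic is the same.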
Technical Expertise and Familiarity:
- Cloud Technologies: Azure (ADF, ADB, Logic Apps, Azure SQL database, Azure Key Vaults, ADLS, Synapse)
- Database: CosmosDB, Document DB
- IDEs: Visual Studio, VS Code, MS SQL Server
- Data Modelling, ELT, ETL Methodology
ADF Developer with a top conglomerate (Air India) for the Kochi location
Conducting F2F interviews on 22nd April 2023
Experience - 2-12 years.
Location - Kochi only (work from the office only)
Notice period - 1 month only.
If you are interested, please share the following information at your earliest convenience:
- 4-8 years of overall experience.
- 1-2 years’ experience in Azure Data Factory - scheduling jobs in Flows and ADF pipelines, performance tuning, error logging, etc.
- 1+ years of experience with Power BI - designing and developing reports, dashboards, metrics, and visualizations in Power BI.
- (Required) Participate in video conferencing calls - daily stand-up meetings and all day working with team members on cloud migration planning, development, and support.
- Proficiency in relational database concepts and design using star schema, Azure Data Warehouse, and data vault.
- Requires 2-3 years of experience with SQL scripting (merge, joins, and stored procedures) and best practices.
- Knowledge of deploying and running SSIS packages in Azure.
- Knowledge of Azure Databricks.
- Ability to write and execute complex SQL queries and stored procedures.
- Experience and expertise in Python development and libraries such as PySpark, pandas, and NumPy
- Expertise in ADF, Databricks.
- Creating and maintaining data interfaces across a number of different protocols (file, API, etc.).
- Creating and maintaining internal business process solutions to keep our corporate system data in sync and reduce manual processes where appropriate.
- Creating and maintaining monitoring and alerting workflows to improve system transparency.
- Facilitate the development of our Azure cloud infrastructure relative to Data and Application systems.
- Design and lead development of our data infrastructure including data warehouses, data marts, and operational data stores.
- Experience in using Azure services such as ADLS Gen 2, Azure Functions, Azure messaging services, Azure SQL Server, Azure KeyVault, Azure Cognitive services etc.
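The "SQL scripting (merge, joins, and stored procedures)" requirement above usually boils down to upsert logic between a staging table and a target table. A minimal sketch using Python's built-in sqlite3 as a stand-in for Azure SQL (SQLite's ON CONFLICT clause plays the role of T-SQL's MERGE here; all table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.execute("CREATE TABLE stg_customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
cur.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                [(1, "Asha", "Kochi"), (2, "Ravi", "Pune")])
cur.executemany("INSERT INTO stg_customer VALUES (?, ?, ?)",
                [(2, "Ravi", "Bangalore"), (3, "Mina", "Delhi")])

# Upsert from staging into the dimension table: update matched rows,
# insert unmatched ones (the WHERE true avoids a parsing ambiguity)
cur.execute("""
    INSERT INTO dim_customer (id, name, city)
    SELECT id, name, city FROM stg_customer WHERE true
    ON CONFLICT(id) DO UPDATE SET name = excluded.name, city = excluded.city
""")
conn.commit()

rows = cur.execute("SELECT id, city FROM dim_customer ORDER BY id").fetchall()
print(rows)  # [(1, 'Kochi'), (2, 'Bangalore'), (3, 'Delhi')]
```

On Azure SQL or Synapse the same step would normally be a T-SQL MERGE statement, often wrapped in a stored procedure invoked from an ADF pipeline.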
* Formulates and recommends standards for achieving maximum performance and efficiency of the DW ecosystem.
* Participates in pre-sales activities for solutions to various customer problem statements/situations.
* Develops business cases and ROI for customers/clients.
* Interviews stakeholders and develops a BI roadmap for success given project prioritization.
* Evangelizes self-service BI and visual discovery while helping to automate any manual process at the client site.
* Works closely with the Engineering Manager to ensure prioritization of customer deliverables.
* Champions data quality, integrity, and reliability throughout the organization by designing and promoting best practices.
* Implementation 20%
* Helps DW/DE team members with issues needing technical expertise or complex systems and/or programming knowledge.
* Provides on-the-job training for new or less experienced team members.
* Develops a technical excellence team.
Requirements
- experience designing business intelligence solutions
- experience with ETL Process, Data warehouse architecture
- experience with Azure Data services, i.e., ADF, ADLS Gen 2, Azure SQL DB, Synapse, Azure Databricks, and Power BI
- Good analytical and problem-solving skills
- Fluent in relational database concepts and flat file processing concepts
- Must be knowledgeable in software development lifecycles/methodologies
- Azure Data Factory, Azure Databricks, Talend, BODS, Jenkins
- Microsoft Office (mandatory)
- Strong knowledge of databases, Azure Synapse, data management, SQL
- Knowledge of any cloud platform (Azure, AWS, etc.)
Technologies & Languages
- Azure
- Databricks
- SQL Server
- ADF
- Snowflake
- Data Cleaning
- ETL
- Azure DevOps
- Intermediate Python/Pyspark
- Intermediate SQL
- Beginners' knowledge/willingness to learn Spotfire
- Data Ingestion
- Familiarity with CI/CD or Agile
Must have:
- Azure – VM, Data Lake, Databricks, Data Factory, Azure DevOps
- Python/Spark (PySpark)
- SQL
Good to have:
- Docker
- Kubernetes
- Scala
He/she should have a good understanding of:
- How to build pipelines – ETL and ingestion
- Data Warehousing
- Monitoring
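Monitoring and alerting workflows of the kind mentioned above often start with something as simple as retry-with-logging around each pipeline step. A hypothetical stdlib-only sketch (the run_with_retry helper and the flaky step are invented for illustration):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)

def run_with_retry(step, retries=3, delay=0.0):
    """Run a pipeline step, logging and retrying on transient failure."""
    for attempt in range(1, retries + 1):
        try:
            return step()
        except Exception as exc:
            logging.warning("attempt %d failed: %s", attempt, exc)
            if attempt == retries:
                raise  # exhausted retries: surface the failure for alerting
            time.sleep(delay)

# A step that fails twice before succeeding, to exercise the wrapper
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retry(flaky)
print(result)  # ok
```

ADF has built-in retry policies per activity; a wrapper like this matters more for custom code running in Databricks notebooks or Azure Functions.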
Responsibilities:
Must be able to write quality code and build secure, highly available systems.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc., with guidance.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Monitoring performance and advising any necessary infrastructure changes.
Defining data retention policies.
Implementing the ETL process and optimal data pipeline architecture
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Create design documents that describe the functionality, capacity, architecture, and process.
Develop, test, and implement data solutions based on finalized design documents.
Work with data and analytics experts to strive for greater functionality in our data systems.
Proactively identify potential production issues and recommend and implement solutions
A global Business Process Management company
Power BI Developer
Senior visualization engineer with 5 years’ experience in Power BI to develop and deliver solutions that enable delivery of information to audiences in support of key business processes. In addition, hands-on experience with Azure data services such as ADF and Databricks is a must.
Ensure code and design quality through execution of test plans and assist in development of standards & guidelines working closely with internal and external design, business, and technical counterparts.
Candidates should have worked in agile development environments.
Desired Competencies:
- Should have minimum of 3 years project experience using Power BI on Azure stack.
- Should have good understanding and working knowledge of Data Warehouse and Data Modelling.
- Good hands-on experience of Power BI
- Hands-on experience T-SQL/ DAX/ MDX/ SSIS
- Data Warehousing on SQL Server (preferably 2016)
- Experience in Azure Data Services – ADF, DataBricks & PySpark
- Manage own workload with minimum supervision.
- Take responsibility of projects or issues assigned to them
- Be personable, flexible and a team player
- Good written and verbal communications
- Have a strong personality and be able to work directly with users
Our client is a computer software company. (EC1)
- Establish and maintain a trusted advisor relationship within the company’s IT, Commercial Digital Solutions, Functions, and Businesses you interact with
- Establish and maintain close working relationships with teams responsible for delivering solutions to the company’s businesses and functions
- Perform key management and thought leadership in the areas of advanced data techniques, including data modeling, data access, data integration, data visualization, big data solutions, text mining, data discovery, statistical methods, and database design
- Work with business partners to define ways to leverage data to develop platforms and solutions to drive business growth
- Engage collaboratively with project teams to support project objectives through the application of sound data architectural principles; support a project with knowledge of existing data assets and provide guidance on reusable data structures
- Share knowledge of external and internal data capabilities and trends, provide leadership, and facilitate the evaluation of vendors and products
- Utilize advanced data analysis, including statistical analysis and data mining techniques
- Collaborate with others to set an enterprise data vision with solid recommendations, and work to gain business and IT consensus
Basic Qualifications
- Overall 10+ years of IT Environment Experience.
- 3+ years of experience partnering with business managers to develop technical strategies and architectures to support their objectives
- 3+ years in Azure Data Factory.
- 2+ years in Azure Databricks, Azure Cosmos DB, multi-factor authentication, Event Hub, Azure Active Directory, and Logic Apps.
- 2+ years of hands-on experience with analytics deployment in the cloud (prefer Azure)
- 5+ years of delivering analytics in modern data architecture (Hadoop, Massively Parallel Processing Database Platforms, and Semantic Modeling)
- Demonstrable knowledge of ETL and ELT patterns and when to use either one; experience selecting among different tools that could be leveraged to accomplish this (Talend, Informatica, Azure Data Factory, SSIS, SAP Data Services)
- Demonstrable knowledge of and experience with different scripting languages (Python, JavaScript, Pig, or object-oriented programming like Java or .NET)
Preferred Qualifications
- Bachelor’s degree in Computer Science, MIS, related field, or equivalent experience
- Experience working with solutions delivery teams using Agile/Scrum or similar methodologies
- 2+ years of experience designing solutions leveraging Microsoft Cortana Intelligence Suite of Products [Azure SQL, Azure SQL DW, Cosmos DB, HDInsight, DataBricks]
- Experience with enterprise systems, like CRM, ERP, Field Services Management, Supply Chain solutions, HR systems
- Ability to work independently, establishing strategic objectives, project plans, and milestones
- Exceptional written, verbal & presentation skills
Job Overview
We are looking for a Data Engineer to join our data team to solve data-driven critical business problems. The hire will be responsible for expanding and optimizing the existing end-to-end architecture, including the data pipeline architecture. The Data Engineer will collaborate with software developers, database architects, data analysts, data scientists, and the platform team on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. The right candidate should have hands-on experience developing a hybrid set of data pipelines depending on the business requirements.
Responsibilities
- Develop, construct, test, and maintain existing and new data-driven architectures.
- Align architecture with business requirements and provide solutions that fit best to solve the business problems.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
- Acquire data from multiple sources across the organization.
- Use programming languages and tools efficiently to collate the data.
- Identify ways to improve data reliability, efficiency, and quality.
- Use data to discover tasks that can be automated.
- Deliver updates to stakeholders based on analytics.
- Set up practices for data reporting and continuous monitoring.
Required Technical Skills
- Graduate in Computer Science or a similar quantitative area.
- 1+ years of relevant work experience as a Data Engineer or in a similar role.
- Advanced SQL knowledge, data modelling, and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
- Must have strong big-data core knowledge and experience in programming using Spark - Python/Scala.
- Experience with an orchestration tool like Airflow or similar.
- Experience with Azure Data Factory is good to have.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Good understanding of Git workflow and test-case-driven development; using CI/CD is good to have.
- Some understanding of Delta tables is good to have.
It would be an advantage if the candidate also has experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Hive, etc.
- Experience with relational SQL and NoSQL databases
- Experience with cloud data services
- Experience with object-oriented/object function scripting languages: Python, Scala, etc.
We are looking for candidates with good experience of 3 to 7 years in ADF, Azure SQL, and Azure Databricks.
Location - Remote (anywhere in India)
Skills - ADF, ADB, Azure SQL
Salary - 13 to 18 LPA
Short notice period (able to join immediately or within a maximum of 15 days).
Regards
Gayatri P
Fragma Data systems
Required Experience: 5 - 7 Years
Skills : ADF, Azure, SSIS, python
Job Description
Azure Data Engineer with hands-on SSIS migration and ADF expertise.
Roles & Responsibilities
• Overall 6+ years of experience in Cloud Data Engineering, with hands-on experience in ADF (Azure Data Factory), is required.
Hands-on experience with SSIS to ADF migration is preferred.
Experience migrating SQL Server Integration Services (SSIS) workloads to SSIS in ADF (must have completed at least one migration).
Hands-on experience implementing Azure Data Factory frameworks, scheduling, and performance tuning.
Hands-on experience in migrating SSIS solutions to ADF.
Hands-on ADF development experience.
Hands-on experience with MPP database architecture.
Hands-on experience in Python.
Responsibilities
Understand business requirements and actively provide inputs from a data perspective.
Experience in SSIS development.
Experience migrating SSIS packages to the Azure-SSIS Integration Runtime.
Experience in data warehouse / data mart development and migration.
Good knowledge of and experience with Azure Data Factory.
Expert-level knowledge of SQL DB and data warehousing.
Should know at least one programming language (Python or PowerShell).
Should be able to analyse and understand complex data flows in SSIS.
Knowledge of Control-M.
Knowledge of Azure Data Lake is required.
Excellent interpersonal/communication skills (both oral and written) with the ability to communicate at various levels with clarity & precision.
Build simple to complex pipelines & dataflows.
Work with other Azure stack modules like Azure Data Lakes, SQL DW, etc.
Requirements
Bachelor’s degree in Computer Science, Computer Engineering, or relevant field.
A minimum of 5 years’ experience in a similar role.
Strong knowledge of database structures, systems, and data mining.
Excellent organizational and analytical abilities.
Outstanding problem solver.
Good written and verbal communication skills.
- Data pre-processing, data transformation, data analysis, and feature engineering
- Performance optimization of scripts (code) and Productionizing of code (SQL, Pandas, Python or PySpark, etc.)
- Required skills:
- Bachelor’s in Computer Science, Data Science, Computer Engineering, IT, or equivalent
- Fluency in Python (Pandas), PySpark, SQL, or similar
- Azure data factory experience (min 12 months)
- Able to write efficient code using traditional, OO concepts, modular programming following the SDLC process.
- Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
- Ability to work independently with demonstrated experience in project or program management
- Azure experience; ability to translate data scientist code in Python and make it efficient (production-ready) for cloud deployment
- Working knowledge of setting up and running HDInsight applications
- Hands on experience in Spark, Scala & Hive
- Hands on experience in ADF – Azure Data Factory
- Hands on experience in Big Data & Hadoop ecosystems
- Exposure to Azure Service categories like PaaS components and IaaS subscriptions
- Ability to Design, Develop ingestion & processing frame work for ETL applications
- Hands on experience in PowerShell scripting and deployment on Azure
- Experience in performance tuning and memory configuration
- Should be adaptable to learn & work on new technologies
- Should have good written and spoken communication
- Building and operationalizing large scale enterprise data solutions and applications using one or more of AZURE data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsights, Databricks, CosmosDB, EventHub/IOTHub.
- Experience in migrating on-premise data warehouses to data platforms on AZURE cloud.
- Designing and implementing data engineering, ingestion, and transformation functions
- Azure Synapse or Azure SQL Data Warehouse
- Spark on Azure (available in HDInsight and Databricks)
- Experience with Azure Analysis Services
- Experience in Power BI
- Experience with third-party solutions like Attunity/StreamSets, Informatica
- Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
- Capacity Planning and Performance Tuning on Azure Stack and Spark.
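Data-lake ingestion on ADLS commonly lands files under date partitions (year=/month=/day=) that downstream Synapse or Databricks jobs can prune. A tiny illustrative helper (the path convention shown is a common one, not an Azure requirement; the function and all names are invented):

```python
from datetime import date

def partition_path(container: str, table: str, d: date) -> str:
    """Build an ADLS-style date-partitioned path for an incremental load."""
    return f"{container}/{table}/year={d.year}/month={d.month:02d}/day={d.day:02d}"

p = partition_path("raw", "orders", date(2023, 4, 22))
print(p)  # raw/orders/year=2023/month=04/day=22
```

Zero-padding the month and day keeps lexicographic file ordering aligned with chronological ordering, which makes range filters on the path cheap.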
A Chemical & Purifier Company headquartered in the US.
- Create and maintain optimal data pipeline architecture,
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Author data services using a variety of programming languages
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and Azure ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centres and Azure regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Work in an Agile environment with Scrum teams.
- Ensure data quality and help in achieving data governance.
Basic Qualifications
- 2+ years of experience in a Data Engineer role
- Undergraduate degree required (Graduate degree preferred) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.
- Experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases
- Experience with data pipeline and workflow management tools
- Experience with Azure cloud services: ADLS, ADF, ADLA, AAS
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Understanding of ELT and ETL patterns and when to use each. Understanding of data models and transforming data into the models
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
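One pattern behind many of the pipeline requirements listed above is the high-watermark incremental load: only rows changed since the last successful run are extracted. A minimal stdlib-only sketch (the records and field names are invented for illustration):

```python
# Source rows with a last-modified timestamp, as an ETL job might see them
source = [
    {"id": 1, "updated_at": "2023-04-01"},
    {"id": 2, "updated_at": "2023-04-10"},
    {"id": 3, "updated_at": "2023-04-20"},
]

watermark = "2023-04-05"  # timestamp of the last successful load

# Extract only rows modified after the watermark (ISO dates compare as strings)
delta = [r for r in source if r["updated_at"] > watermark]

# Persist the new watermark for the next run
new_watermark = max(r["updated_at"] for r in delta)
print(len(delta), new_watermark)  # 2 2023-04-20
```

In ADF this is typically implemented with a Lookup activity reading the stored watermark and a parameterized source query in the Copy activity.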
Roles & Responsibilities
Must Have
- Ability to design and develop database architectures
- Expert working knowledge and experience of DBMS
- Ability to write complex T-SQL queries
- Expert knowledge of Azure Data Factory (ADF)
- Ability to create and manage SSIS packages while managing a full ETL lifecycle
- Proficiency in data cleansing and reconciliation
- Ability to assist others in topics related to data management
- Ability to quickly investigate and troubleshoot any data/database issues
- Expert knowledge in MS SQL Database Server administration, performance tuning and maintenance experience.
Should Have
- Ability to work with end-users and project teams to analyze, document and create workflow processes
- Knowledge of SSRS (SQL Server Reporting Services)
- Attention to detail, critical thinking and problem solving skills.
- Excellent verbal/written communication skills and be a good team player.
- Azure SQL knowledge (SQL-as-a-service)
- Understanding of Agile Methodology.
- Working knowledge of Git.
Could Have
- DevOps knowledge (Infrastructure-as-Code)
- PowerShell scripting language
- Monitoring tools like Nagios, Azure tools.