Hadoop Job Openings in Ahmedabad
About the Job
We are seeking an experienced Data Engineer to join our data team. As a Senior Data Engineer, you will work on a range of data engineering tasks, including designing and optimizing data pipelines, data modelling, and troubleshooting data issues. You will collaborate with other data team members, stakeholders, and data scientists to provide data-driven insights and solutions to the organization. 3+ years of experience is required.
Responsibilities:
Design and optimize data pipelines for various data sources
Design and implement efficient data storage and retrieval mechanisms
Develop data modelling solutions and data validation mechanisms (a minimal validation sketch follows this list)
Troubleshoot data-related issues and recommend process improvements
Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
Coach and mentor junior data engineers in the team
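To make the data-validation responsibility above concrete, here is a minimal sketch in Python (pandas) of the kind of check involved; the DataFrame columns and rules are hypothetical examples, not a prescribed implementation.

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Apply basic quality checks to a hypothetical orders DataFrame and return only valid rows."""
    required = {"order_id", "customer_id", "amount", "order_date"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"Missing required columns: {missing}")

    # Drop duplicate order IDs, keeping the first occurrence.
    df = df.drop_duplicates(subset="order_id")

    # Keep rows that satisfy simple business rules: non-null ID, non-negative amount, parseable date.
    valid = (
        df["order_id"].notna()
        & (df["amount"] >= 0)
        & pd.to_datetime(df["order_date"], errors="coerce").notna()
    )
    return df[valid]
```

In a real pipeline the rejected rows would usually be written to a quarantine table for review rather than silently dropped.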
Skills Required:
3+ years of experience in data engineering or related field
Strong experience in designing and optimizing data pipelines and in data modelling
Strong proficiency in programming languages such as Python
Experience with big data technologies such as Hadoop, Spark, and Hive (see the illustrative sketch after this list)
Experience with cloud data services such as AWS, Azure, and GCP
Strong experience with database technologies like SQL, NoSQL, and data warehousing
Knowledge of distributed computing and storage systems
Understanding of DevOps, Power Automate, and Microsoft Fabric will be an added advantage
Strong analytical and problem-solving skills
Excellent communication and collaboration skills
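As a rough illustration of the Spark and Hive experience listed above, the sketch below reads a hypothetical Hive table, aggregates it, and writes partitioned Parquet with PySpark; the table names, columns, and paths are assumptions and will differ per project.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Spark session with Hive support enabled (assumes a configured Hive metastore).
spark = (
    SparkSession.builder
    .appName("daily-sales-aggregation")
    .enableHiveSupport()
    .getOrCreate()
)

# Read raw events from a hypothetical Hive table.
events = spark.table("raw.sales_events")

# Aggregate revenue per store and day.
daily_sales = (
    events
    .withColumn("sale_date", F.to_date("event_time"))
    .groupBy("store_id", "sale_date")
    .agg(F.sum("amount").alias("total_revenue"))
)

# Write the result as Parquet, partitioned by date, for downstream consumers.
(
    daily_sales.write
    .mode("overwrite")
    .partitionBy("sale_date")
    .parquet("hdfs:///warehouse/analytics/daily_sales")
)
```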
Qualifications
Bachelor's degree in Computer Science, Data Science, or a computer-related field (Master's degree preferred)
Responsibilities:
1. Communicate with clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud (see the example after this list).
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.
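As one possible illustration of point 5, the sketch below pulls records from a hypothetical REST endpoint and lands them in Amazon S3 as date-partitioned JSON lines; the URL, bucket, and key layout are assumptions, not a prescribed design.

```python
import json
from datetime import date

import boto3
import requests

S3_BUCKET = "example-data-lake"                   # hypothetical bucket
SOURCE_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint

def extract_to_s3(run_date: date) -> str:
    """Fetch one day of records from the source API and land them in S3 as JSON lines."""
    response = requests.get(SOURCE_URL, params={"date": run_date.isoformat()}, timeout=30)
    response.raise_for_status()
    records = response.json()

    # Partition the landing zone by ingestion date so downstream jobs can read incrementally.
    key = f"raw/orders/ingest_date={run_date.isoformat()}/orders.jsonl"
    body = "\n".join(json.dumps(record) for record in records)

    boto3.client("s3").put_object(Bucket=S3_BUCKET, Key=key, Body=body.encode("utf-8"))
    return key

if __name__ == "__main__":
    print(extract_to_s3(date.today()))
```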
Skills required (experience with most of the following):
1. Experience with Big Data tools such as Hadoop, Spark, Apache Beam, and Kafka.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience with, and a firm understanding of, relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, and Cassandra.
5. Experience with cloud platforms such as AWS, GCP, and Azure.
6. Experience with workflow management using tools like Apache Airflow (an illustrative DAG sketch follows).
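To make point 6 concrete, here is a minimal Apache Airflow DAG sketch (assuming a recent Airflow 2.x release) that chains extract, transform, and load steps; the task bodies and schedule are placeholders for whatever the client's pipeline actually requires.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull raw data from the source system (placeholder)."""
    print("extracting...")

def transform():
    """Clean and reshape the extracted data (placeholder)."""
    print("transforming...")

def load():
    """Write the transformed data to the warehouse (placeholder)."""
    print("loading...")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the steps strictly in order: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```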