Spark Jobs in Ahmedabad


Apply to 4+ Spark jobs in Ahmedabad on CutShort.io and explore the latest Spark job openings at top companies.

Tecblic Private Limited
Ahmedabad
7 - 8 yrs
₹8L - ₹18L / yr
Windows Azure
Data engineering
Python
SQL
Data modeling
+4 more

Job Description: Data Engineer

Location: Ahmedabad

Experience: 7+ years

Employment Type: Full-Time



We are looking for a highly motivated and experienced Data Engineer to join our team. As a Data Engineer, you will play a critical role in designing, building, and optimizing data pipelines that ensure the availability, reliability, and performance of our data infrastructure. You will collaborate closely with data scientists, analysts, and cross-functional teams to provide timely and efficient data solutions.


Responsibilities

● Design and optimize data pipelines for various data sources
● Design and implement efficient data storage and retrieval mechanisms
● Develop data modeling solutions and data validation mechanisms (see the sketch after this list)
● Troubleshoot data-related issues and recommend process improvements
● Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
● Coach and mentor junior data engineers in the team
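
As a hedged illustration of the validation work above, here is a minimal sketch assuming a PySpark environment; the input path and the column names (order_id, amount) are hypothetical:

```python
# Minimal data-validation sketch (hypothetical dataset and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("validation-sketch").getOrCreate()

# Hypothetical input location.
orders = spark.read.parquet("s3://example-bucket/orders/")

# Keep rows that satisfy basic integrity rules; quarantine the rest.
valid = orders.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
rejected = orders.subtract(valid)

valid.write.mode("overwrite").parquet("s3://example-bucket/orders_clean/")
print(f"Rejected {rejected.count()} rows")
```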


Skills Required:

● Minimum 5 years of experience in data engineering or a related field
● Proficiency in designing and optimizing data pipelines and data models
● Strong programming expertise in Python
● Hands-on experience with big data technologies such as Hadoop, Spark, and Hive (see the sketch after this list)
● Extensive experience with cloud data services such as AWS, Azure, and GCP
● Advanced knowledge of database technologies: SQL, NoSQL, and data warehousing
● Knowledge of distributed computing and storage systems
● Familiarity with DevOps practices; Power Automate and Microsoft Fabric will be an added advantage
● Strong analytical and problem-solving skills with outstanding communication and collaboration abilities
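
As a hypothetical illustration of the Hadoop/Spark/Hive experience above, a short Spark SQL aggregation over a Hive-managed table; the table and column names are invented for the example:

```python
# Sketch: querying a Hive table from Spark (hypothetical table and columns).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-sketch")
    .enableHiveSupport()  # allows Spark to read Hive-managed tables
    .getOrCreate()
)

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM analytics.page_views  -- hypothetical Hive table
    GROUP BY event_date
""")
daily.show()
```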


Qualifications

● Bachelor's degree in Computer Science, Data Science, or a related computer field


Tecblic Private Limited
Ahmedabad
8 - 12 yrs
₹6L - ₹28L / yr
Data Structures
Data Visualization
Databricks
Azure Data Factory
Spark
+6 more

Data Architecture and Engineering Lead


Responsibilities:

  • Lead Data Architecture: Own the design, evolution, and delivery of enterprise data architecture across cloud and hybrid environments. Develop relational and analytical data models (conceptual, logical, and physical) to support business needs and ensure data integrity.
  • Consolidate Core Systems: Unify data sources across airport systems into a single analytical platform optimized for business value.
  • Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks and Kafka (a brief streaming sketch follows this list).
  • Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage.
  • Enable Metadata & Cataloging: Deploy metadata management and cataloging tools to enhance data discoverability and self-service analytics.
  • Operationalize AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including forecasting, pricing models, and personalization.
  • Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Operations, Finance, HR, Legal, and Technology.
  • Optimize Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms.
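
For the batch-and-streaming point above, a minimal sketch of Spark Structured Streaming reading from Kafka into a Delta table; it assumes the Kafka and Delta connectors are available (as on Databricks), and the broker, topic, and paths are placeholders:

```python
# Streaming sketch: Kafka -> Delta (placeholder broker, topic, and paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "airport-events")             # placeholder topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream
    .format("delta")  # Delta sink, e.g. on Databricks
    .option("checkpointLocation", "/tmp/checkpoints/airport-events")
    .start("/tmp/delta/airport-events")
)
query.awaitTermination()
```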


Qualifications:

  • 12+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments
  • Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases
  • Hands-on expertise with ingestion, transformation, and orchestration pipelines
  • Extensive experience with Microsoft Azure data services, including Azure Data Lake Storage, Azure Databricks, Azure Data Factory, and related technologies
  • Strong knowledge of ERP data models, especially SAP and MS Dynamics
  • Experience with data governance, compliance (GDPR/CCPA), metadata cataloging, and security practices
  • Familiarity with distributed systems and streaming frameworks like Spark or Flink
  • Strong stakeholder management and communication skills, with the ability to influence both technical and business teams


Tools & Technologies


  • Warehousing: Azure Databricks Delta, BigQuery
  • Big Data: Apache Spark
  • Cloud Platforms: Azure (ADLS, AKS, EventHub, ServiceBus)
  • Streaming: Kafka, Pub/Sub
  • RDBMS: PostgreSQL, MS SQL
  • NoSQL: Redis
  • Monitoring: Azure Monitor, Application Insights, Prometheus, Grafana

Product and Service based company

Agency job via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description

Mandatory Requirements

  • Experience with AWS Glue
  • Experience with Apache Parquet
  • Proficiency with AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages: Python and PySpark (a brief ingestion sketch follows this list)
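
To make the file-based ingestion requirement concrete, a hedged sketch in PySpark: land raw CSV from S3 and rewrite it as date-partitioned Parquet. The bucket names and schema are assumptions, and the s3:// scheme presumes an EMR/Glue-style environment:

```python
# Ingestion sketch: CSV landing zone -> partitioned Parquet (hypothetical paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-raw/landing/transactions/")  # hypothetical landing zone
)

(
    raw.withColumn("ingest_date", F.current_date())  # add the partition key
    .write
    .mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://example-lake/transactions/")      # hypothetical lake location
)
```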

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from sources that expose it through different technologies: RDBMS, flat files, streams, and time-series data from various proprietary systems, implementing ingestion and processing with big data technologies
  • Process and transform data using technologies such as Spark and cloud services, understanding the relevant business logic and implementing it in the language supported by the base data platform
  • Develop automated data quality checks to ensure the right data enters the platform and to verify calculation results (a brief sketch follows this list)
  • Develop infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns in complex data sets
  • Construct a framework using data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders
  • Participate in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports, and present findings
  • Mentor junior members and bring in industry best practices
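
A minimal sketch of the automated data-quality check mentioned above: reconcile row counts and a control total between a staging table and the warehouse. The table and column names are hypothetical:

```python
# Data-quality sketch: source/target reconciliation (hypothetical tables).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

source = spark.table("staging.loans")    # hypothetical staging table
target = spark.table("warehouse.loans")  # hypothetical warehouse table

checks = {
    "row_count_matches": source.count() == target.count(),
    "balance_total_matches": (
        source.agg(F.sum("balance")).first()[0]
        == target.agg(F.sum("balance")).first()[0]
    ),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```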

 

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of one of the following languages: Java, Scala, Python, C#
  • Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, and Snowflake
  • Proficiency with data mining/programming tools (e.g. SAS, SQL, R, Python), database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum), and data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

Familiarity and experience with the following is a plus:

  • AWS certification
  • Spark Streaming
  • Kafka Streams / Kafka Connect
  • ELK Stack
  • Cassandra / MongoDB
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Discite Analytics Private Limited
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to deliver greater functionality in the data systems.

Skills required (experience with most of these):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience with ETL and data warehousing.
4. Experience with, and a firm understanding of, relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
5. Experience with cloud platforms like AWS, GCP, and Azure.
6. Experience with workflow management using tools like Apache Airflow (a minimal DAG sketch follows).
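
As a hedged illustration of Airflow-based workflow management, a minimal daily extract-then-load DAG; the DAG id, schedule, and callables are invented, and the `schedule` argument assumes Airflow 2.4+:

```python
# Minimal Airflow DAG sketch (hypothetical tasks and schedule).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")  # placeholder extract step

def load():
    print("load data into the warehouse")   # placeholder load step

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```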