
8+ Big data Jobs in Ahmedabad | Big data Job openings in Ahmedabad

Apply to 8+ Big data Jobs in Ahmedabad on CutShort.io. Explore the latest Big data Job opportunities across top companies like Google, Amazon & Adobe.

Tecblic Private Limited
Ahmedabad
4 - 5 yrs
₹8L - ₹12L / yr
Microsoft Windows Azure
SQL
Python
PySpark
ETL
+2 more

We Are Hiring: Data Engineer | 4+ Years Experience


Job description

šŸ” Job Title: Data Engineer

šŸ“ Location: Ahmedabad

šŸš€ Work Mode: On-Site Opportunity

šŸ“… Experience: 4+ Years

šŸ•’ Employment Type: Full-Time

ā±ļø Availability : Immediate Joiner Preferred


Join Our Team as a Data Engineer

We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.

As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.


Your Key Responsibilities

Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.

Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.

Implement data validation, transformation, and quality monitoring processes.

Collaborate with cross-functional teams to deliver impactful, data-driven solutions.

Proactively identify bottlenecks and optimize existing workflows and processes.

Provide guidance and mentorship to junior engineers in the team.
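
As a rough illustration of the validation and quality-monitoring work described above, here is a minimal sketch in plain Python (all field names and rules are hypothetical examples, not this company's actual pipeline):

```python
# Minimal sketch of a row-level data quality check, as might run
# inside a data pipeline before loading to a warehouse.
# All field names and rules are hypothetical examples.

def validate_row(row):
    """Return a list of quality-rule violations for one record."""
    errors = []
    if not row.get("customer_id"):
        errors.append("missing customer_id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

def split_clean_and_rejected(rows):
    """Partition records into clean rows and rejects with reasons."""
    clean, rejected = [], []
    for row in rows:
        errors = validate_row(row)
        if errors:
            rejected.append({"row": row, "errors": errors})
        else:
            clean.append(row)
    return clean, rejected
```

In practice, checks like these run inside the pipeline framework (for example PySpark or an orchestration tool) so that rejected records can be quarantined and monitored rather than silently loaded downstream.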


Skills & Expertise We’re Looking For

3+ years of hands-on experience in Data Engineering or related roles.

Strong expertise in Python and data pipeline design.

Experience working with Big Data tools like Hadoop, Spark, Hive.

Proficiency with SQL, NoSQL databases, and data warehousing solutions.

Solid experience with cloud platforms, particularly Azure.

Familiarity with distributed computing, data modeling, and performance tuning.

Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.

Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.


Qualifications

Bachelor’s degree in Computer Science, Data Science, or a related field.

Read more
consulting & implementation services in the area of Oil & Gas, Mining and Manufacturing Industry

Agency job
via Jobdost by Sathish Kumar
Ahmedabad, Hyderabad, Pune, Delhi
5 - 7 yrs
₹18L - ₹25L / yr
AWS Lambda
AWS Simple Notification Service (SNS)
AWS Simple Queuing Service (SQS)
Python
PySpark
+9 more
  1. Data Engineer

Required skill set: AWS Glue, AWS Lambda, AWS SNS/SQS, AWS Athena, Spark, Snowflake, Python

Mandatory Requirements

  • Experience in AWS Glue
  • Experience in Apache Parquet
  • Proficiency with AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages: Python and PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 
  • Ingest data from sources that expose it through different technologies: RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement ingestion and processing with Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations
  • Develop infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns in complex data sets
  • Construct a framework using data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Participate in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports, and present findings
  • Mentor junior members and bring best industry practices
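
The multi-source ingestion responsibility above usually reduces to per-format readers behind a common dispatch. A hedged, standard-library-only sketch (the source types and reader functions are illustrative, not a specific AWS Glue job):

```python
import csv
import io
import json

def read_flat_file(text):
    """Flat-file (CSV) ingestion: the header row becomes the schema."""
    return list(csv.DictReader(io.StringIO(text)))

def read_ndjson(text):
    """Stream/API-style ingestion: newline-delimited JSON records."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Dispatch table; a real pipeline would add RDBMS, Parquet, and
# time-series readers behind the same interface.
READERS = {"csv": read_flat_file, "ndjson": read_ndjson}

def ingest(source_type, payload):
    """Route a payload to the reader registered for its source type."""
    try:
        reader = READERS[source_type]
    except KeyError:
        raise ValueError(f"unsupported source type: {source_type}")
    return reader(payload)
```

For example, `ingest("csv", "id,amount\n1,10\n")` parses one CSV record into a dict; the same call shape works for every registered format, which is what keeps multi-source pipelines maintainable.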

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of at least one language: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

  

Familiarity and experience with the following is a plus:

  • AWS certification
  • Spark Streaming
  • Kafka Streaming / Kafka Connect
  • ELK Stack
  • Cassandra / MongoDB
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Read more
Product and Service based company


Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description


Mandatory Requirements

  • Experience in AWS Glue

  • Experience in Apache Parquet

  • Proficiency with AWS S3 and data lakes

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices

  • Scripting languages: Python and PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS

  • Ingest data from sources that expose it through different technologies: RDBMS, flat files, streams, and time-series data from various proprietary systems; implement ingestion and processing with Big Data technologies

  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform

  • Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations

  • Develop infrastructure to collect, transform, combine, and publish/distribute customer data

  • Define process improvement opportunities to optimize data collection, insights, and displays

  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible

  • Identify and interpret trends and patterns in complex data sets

  • Construct a framework using data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders

  • Participate in regular Scrum ceremonies with the agile teams

  • Develop queries, write reports, and present findings

  • Mentor junior members and bring best industry practices.


QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)

  • Strong background in math, statistics, computer science, data science, or a related discipline

  • Advanced knowledge of at least one language: Java, Scala, Python, C#

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake

  • Proficient with:

    • Data mining/programming tools (e.g. SAS, SQL, R, Python)

    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

    • Data visualization (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines

  • Good written and oral communication skills and the ability to present results to non-technical audiences

  • Knowledge of business intelligence and analytical tools, technologies, and techniques

Familiarity and experience with the following is a plus:

  • AWS certification

  • Spark Streaming

  • Kafka Streaming / Kafka Connect

  • ELK Stack

  • Cassandra / MongoDB

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Read more
Discite Analytics Private Limited
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with most of the following):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
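
The workflow-management point above is worth a sketch: tools like Apache Airflow essentially execute tasks in dependency order. A toy standard-library illustration of that core idea (the task names are hypothetical, and a real DAG would use Airflow's own operators, scheduling, and retries):

```python
from graphlib import TopologicalSorter

def run_workflow(tasks, deps):
    """Run callables in dependency order: the core idea behind
    workflow managers like Apache Airflow (toy illustration only)."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        # Each task receives the results of the tasks that ran before it.
        results[name] = tasks[name](results)
    return results

# A hypothetical extract -> transform -> load chain.
tasks = {
    "extract": lambda r: [1, 2, 3],
    "transform": lambda r: [x * 2 for x in r["extract"]],
    "load": lambda r: len(r["transform"]),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
```

Here `run_workflow(tasks, deps)` returns `{"extract": [1, 2, 3], "transform": [2, 4, 6], "load": 3}`; Airflow layers scheduling, retries, and monitoring on top of exactly this ordering idea.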
Read more
Leading IT MNC Company


Agency job
Ahmedabad
3 - 12 yrs
₹5L - ₹12L / yr
ASP.NET
C#
.NET
Windows Azure
AWS CloudFormation
+6 more

Roles & Responsibilities:

Technical Skills:

  1. .NET: C#, .NET Core, MVC, Framework, Web API, Web Services, Microservices, and SQL
  2. Azure: Azure Cloud, SaaS, PaaS, IaaS, Azure relational and NoSQL databases, Big Data services

Responsibilities

  • Good understanding of and experience in working on Microsoft Azure (IAAS/PAAS/SAAS)
  • Ability to architect, design, and implement cloud-based solutions
  • Proven track record of designing and implementing IoT-based solutions/Big Data solutions/applications on the Azure cloud platform.
  • Experience in building .Net-based enterprise distributed solutions in Windows and Linux.
  • Experience using CI/CD tools such as Jenkins, Azure Pipelines, and Terraform. Experience with other tooling such as Ansible, CloudFormation, etc.
  • Good understanding of HA/DR Setups in Cloud
  • Experience and working knowledge of Virtualization, Networking, Data Center, and Security
  • Deep hands-on experience in the design, development, and deployment of business software at scale.
  • Strong hands-on experience in Azure Cloud Platform
  • Experience in Kubernetes, Docker, and other cloud deployment, container technologies
  • Experience / knowledge of other cloud offerings (e.g. AWS, GCP) will be added advantage
  • Experience with monitoring tools like Prometheus, Grafana, Datadog, etc.
Read more
Springboot


Agency job
via Jobdost by Sathish Kumar
Remote, Ahmedabad
3 - 5 yrs
₹8L - ₹12L / yr
Spring
Spring Boot
Java
Hibernate (Java)
Git
+3 more
1. Experience in Spring Boot

2. Hands-on experience with Hibernate/JPA

3. Knowledge of Microservices and Design Patterns is an added advantage

4. Experience working with tools like Git, Jenkins, and Maven

5. Working knowledge of Oracle or MySQL databases

6. Strong Agile/Scrum development experience
Read more
Kalibre global konnects

Posted by Monika Sanghvi
Ahmedabad
1 - 4 yrs
₹3L - ₹5L / yr
Python
R Programming
Data Science
Data Analytics
Big Data
Job Description:
• Job Title: Assistant Manager - Business Analytics
• Age: Max. 35 years
• Working Days: 6 days a week
• Location: Ahmedabad, Gujarat
• Monthly CTC: Salary will be commensurate with experience
• Educational Qualification: The candidate should have a bachelor's degree in IT/Engineering from a recognized university
• Experience: 2+ years of work experience in AI/ML/business analytics at an institute of repute or a corporate

Required Technical Skills:
• A fair understanding of Business Analytics, Data Science, Visualization/Big Data, etc.
• Basic knowledge of different analytical tools such as R Programming, Python, etc.
• Hands-on experience in Moodle development (desirable)
• Good knowledge of customizing Moodle functionalities and developing custom themes for Moodle (desirable)
• An analytical mindset; enjoys helping participants solve problems and turning data into useful, actionable information

Key Responsibilities include:
• Understand the tools and technologies specific to e-learning and blended-learning development and delivery
• Provide academic as well as technical assistance to the faculty members teaching the analytics courses
• Work closely with the Instructors, assisting them in programming, coding, testing, etc.
• Prepare the lab study material in coordination with the Instructors and assist students in the programming lab, resolving their doubts
• Work on assignments dealing with the routine, daily operation, use, and configuration of the Learning Management System (LMS)
• Administer learning technology platforms, including the creation of courses, certifications, and other e-learning programs on the platforms
• Provide support within the eLearning department, provide technical support to external clients, and administer the Learning Management System
• Create user groups, assign content and assessments to the right target audience, run reports, and create learning events in the LMS
• Perform regular maintenance of the LMS database, including adding or removing courses
• Upload, test, deploy, and maintain all training materials/learning assets hosted in the LMS
• Ability to multi-task
• Ability to demonstrate accuracy on detail-oriented and repetitive job assignments
• Responsible and reliable
Read more
Sameeksha Capital
Ahmedabad
1 - 2 yrs
₹1L - ₹3L / yr
Java
Python
Data Structures
Algorithms
C++
+1 more
Looking for an Alternative Data Programmer for an equity fund
The programmer should be proficient in Python and able to work fully independently. They should also have the skills to work with databases, and a strong ability to fetch data from various sources, organize it, and identify useful information through efficient code.
Familiarity with Python
Some examples of work:
Text search on earnings transcripts for keywords to identify future trends.
Integration of internal and external financial databases
Web scraping to capture, clean, and organize data
Automatic updating of our financial models by importing data from machine-readable formats such as XBRL
Fetching data from public databases such as RBI, NSE, BSE, and DGCA, and processing the same.
Back-testing, both to test historical cause-and-effect relationships on market/portfolio performance, and to back-test our screener criteria when devising strategy
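
The keyword-search example above can be sketched in a few lines of standard-library Python (the transcript text and keyword list here are illustrative, not the fund's actual data):

```python
import re
from collections import Counter

def keyword_mentions(transcript, keywords):
    """Count whole-word keyword mentions in an earnings-call transcript.
    The keyword list and transcript text are illustrative examples."""
    text = transcript.lower()
    counts = Counter()
    for kw in keywords:
        # \b anchors keep "capex" from matching inside other words.
        counts[kw] = len(re.findall(r"\b" + re.escape(kw.lower()) + r"\b", text))
    return counts
```

Run over many quarters of transcripts, counts like these become a time series whose trend (e.g. rising "capex" mentions) is the signal such a role would extract.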
Read more