Big data Jobs in Ahmedabad


Apply to 9+ Big data Jobs in Ahmedabad on CutShort.io. Explore the latest Big data Job opportunities across top companies like Google, Amazon & Adobe.

Kanerika Software

Posted by Soyam Gupta
Hyderabad, Indore, Ahmedabad
7 - 15 yrs
₹22L - ₹40L / yr
Data governance
Data management
Metadata management
Data security
Microsoft Windows Azure
+5 more

What You Will Do:


As a Data Governance Lead at Kanerika, you will be responsible for defining, leading, and operationalizing the data governance framework, ensuring enterprise-wide alignment and regulatory compliance.


Required Qualifications:


- 7+ years of experience in data governance and data management.


- Proficient in Microsoft Purview and Informatica data governance tools.


- Strong in metadata management, lineage mapping, classification, and security.


- Experience with ADF, REST APIs, Talend, dbt, and automation via Azure tools.


- Knowledge of GDPR, CCPA, HIPAA, SOX and related compliance needs.


- Skilled in bridging technical governance with business and compliance goals.


Tools & Technologies:


- Microsoft Purview, Collibra, Atlan, Informatica Axon, IBM IG Catalog


- Microsoft Purview capabilities:


1. Label creation & policy setup


2. Auto-labeling & DLP


3. Compliance Manager, Insider Risk, Records & Lifecycle Management


4. Unified Catalog, eDiscovery, Data Map, Audit, Compliance alerts, DSPM.


Key Responsibilities:


1. Governance Strategy & Stakeholder Alignment:


- Develop and maintain enterprise data governance strategies, policies, and standards.


- Align governance with business goals: compliance, analytics, and decision-making.


- Collaborate across business, IT, legal, and compliance teams for role alignment.


- Drive governance training, awareness, and change management programs.


2. Microsoft Purview Administration & Implementation:


- Manage Microsoft Purview accounts, collections, and RBAC aligned to org structure.


- Optimize Purview setup for large-scale environments (50TB+).


- Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake.


- Schedule scans, set classification jobs, and maintain collection hierarchies.


3. Metadata & Lineage Management:


- Design metadata repositories and maintain business glossaries and data dictionaries.


- Implement ingestion workflows via ADF, REST APIs, PowerShell, Azure Functions.


- Ensure lineage mapping (ADF → Synapse → Power BI) and impact analysis.
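The impact analysis mentioned above can be prototyped as a simple downstream traversal over a lineage graph. The sketch below is illustrative only — the asset names are invented and it is not tied to any Purview API:

```python
from collections import deque

# Toy lineage graph: each asset maps to the assets that consume it downstream.
# Names are hypothetical (an ADF pipeline feeding Synapse tables and Power BI).
LINEAGE = {
    "adf.copy_sales": ["synapse.dbo.sales"],
    "synapse.dbo.sales": ["synapse.dbo.sales_agg", "powerbi.sales_report"],
    "synapse.dbo.sales_agg": ["powerbi.exec_dashboard"],
}

def downstream_impact(asset: str) -> set[str]:
    """Return every asset reachable downstream of `asset` (breadth-first)."""
    seen: set[str] = set()
    queue = deque(LINEAGE.get(asset, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(LINEAGE.get(node, []))
    return seen
```

Changing `adf.copy_sales` would flag every downstream table, aggregate, and report in one pass — the same question a catalog's lineage view answers interactively.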


4. Data Classification & Security Governance:


- Define classification rules and sensitivity labels (PII, PCI, PHI).


- Integrate with MIP, DLP, Insider Risk Management, and Compliance Manager.


- Enforce records management, lifecycle policies, and information barriers.
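As a rough illustration of rule-based classification, the sketch below tags text with sensitivity labels via regexes. The patterns are simplified assumptions — production classifiers (e.g. Purview's built-in ones) are far more robust:

```python
import re

# Illustrative patterns only; real PII classifiers handle many more formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitivity labels whose pattern matches `text`."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}
```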


5. Data Quality & Policy Management:


- Define KPIs and dashboards to monitor data quality across domains.


- Collaborate on rule design, remediation workflows, and exception handling.


- Ensure policy compliance (GDPR, HIPAA, CCPA, etc.) and risk management.


6. Business Glossary & Stewardship:


- Maintain business glossary with domain owners and stewards in Purview.


- Enforce approval workflows, standard naming, and steward responsibilities.


- Conduct metadata audits for glossary and asset documentation quality.


7. Automation & Integration:


- Automate governance processes using PowerShell, Azure Functions, Logic Apps.


- Create pipelines for ingestion, lineage, glossary updates, tagging.


- Integrate with Power BI, Azure Monitor, Synapse Link, Collibra, BigID, etc.


8. Monitoring, Auditing & Compliance:


- Set up dashboards for audit logs, compliance reporting, metadata coverage.


- Oversee data lifecycle management across its phases.


- Support internal and external audit readiness with proper documentation.



Tecblic Private Limited
Ahmedabad
4 - 5 yrs
₹8L - ₹12L / yr
Microsoft Windows Azure
SQL
Python
PySpark
ETL
+2 more

🚀 We Are Hiring: Data Engineer | 4+ Years Experience 🚀


Job description

🔍 Job Title: Data Engineer

📍 Location: Ahmedabad

🚀 Work Mode: On-Site Opportunity

📅 Experience: 4+ Years

🕒 Employment Type: Full-Time

⏱️ Availability: Immediate Joiner Preferred


Join Our Team as a Data Engineer

We are seeking a passionate and experienced Data Engineer to be a part of our dynamic and forward-thinking team in Ahmedabad. This is an exciting opportunity for someone who thrives on transforming raw data into powerful insights and building scalable, high-performance data infrastructure.

As a Data Engineer, you will work closely with data scientists, analysts, and cross-functional teams to design robust data pipelines, optimize data systems, and enable data-driven decision-making across the organization.


Your Key Responsibilities

Architect, build, and maintain scalable and reliable data pipelines from diverse data sources.

Design effective data storage, retrieval mechanisms, and data models to support analytics and business needs.

Implement data validation, transformation, and quality monitoring processes.

Collaborate with cross-functional teams to deliver impactful, data-driven solutions.

Proactively identify bottlenecks and optimize existing workflows and processes.

Provide guidance and mentorship to junior engineers in the team.
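The validation responsibility above can be illustrated with a minimal required-field check that routes bad rows to a reject pile with a reason (field names here are hypothetical):

```python
def validate_rows(
    rows: list[dict], required: list[str]
) -> tuple[list[dict], list[tuple[dict, str]]]:
    """Split rows into (valid, rejected-with-reason) by required-field presence."""
    valid: list[dict] = []
    rejected: list[tuple[dict, str]] = []
    for row in rows:
        missing = [f for f in required if row.get(f) in (None, "")]
        if missing:
            # Keep the reason alongside the row so remediation is traceable.
            rejected.append((row, f"missing: {', '.join(missing)}"))
        else:
            valid.append(row)
    return valid, rejected
```

In a real pipeline the same split-and-explain pattern feeds quarantine tables and data-quality dashboards rather than in-memory lists.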


Skills & Expertise We’re Looking For

3+ years of hands-on experience in Data Engineering or related roles.

Strong expertise in Python and data pipeline design.

Experience working with Big Data tools like Hadoop, Spark, Hive.

Proficiency with SQL, NoSQL databases, and data warehousing solutions.

Solid experience with cloud platforms, particularly Azure.

Familiar with distributed computing, data modeling, and performance tuning.

Understanding of DevOps, Power Automate, and Microsoft Fabric is a plus.

Strong analytical thinking, collaboration skills, excellent communication skills, and the ability to work independently or as part of a team.


Qualifications

Bachelor’s degree in Computer Science, Data Science, or a related field.

consulting & implementation services in the area of Oil & Gas, Mining and Manufacturing Industry


Agency job
via Jobdost by Sathish Kumar
Ahmedabad, Hyderabad, Pune, Delhi
5 - 7 yrs
₹18L - ₹25L / yr
AWS Lambda
AWS Simple Notification Service (SNS)
AWS Simple Queuing Service (SQS)
Python
PySpark
+9 more
Data Engineer

 Required skill set: AWS GLUE, AWS LAMBDA, AWS SNS/SQS, AWS ATHENA, SPARK, SNOWFLAKE, PYTHON

Mandatory Requirements  

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting language - Python & pyspark 
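One common file-based ingestion practice is partitioning raw objects by source and load date, which keeps Glue crawlers and Athena partition pruning cheap. A minimal sketch of such a key builder (the `raw/source=/dt=` layout is an illustrative convention, not a mandated standard):

```python
from datetime import date

def landing_key(source: str, run_date: date, filename: str) -> str:
    """Build a date-partitioned object key for a data-lake landing zone,
    e.g. raw/source=crm/dt=2024-01-15/orders.parquet."""
    return f"raw/source={source}/dt={run_date.isoformat()}/{filename}"
```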

CORE RESPONSIBILITIES 

  • Create and manage cloud resources in AWS 
  • Data ingestion from different data sources that expose data through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement ingestion and processing with the help of Big Data technologies 
  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 
  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations 
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 

QUALIFICATIONS 

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of at least one language: Java, Scala, Python, C# 
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  
  • Proficient with:
  • Data mining/programming tools (e.g. SAS, SQL, R, Python)
  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
  • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.

  

Familiarity and experience in the following is a plus:  

  • AWS certification
  • Spark Streaming 
  • Kafka Streaming / Kafka Connect 
  • ELK Stack 
  • Cassandra / MongoDB 
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Product and Service based company


Agency job
via Jobdost by Sathish Kumar
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting language - Python & pyspark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources that expose data through different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems; implement ingestion and processing with the help of Big Data technologies 

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality checks to make sure the right data enters the platform and to verify the results of the calculations 

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.

 

QUALIFICATIONS

  • 5-7+ years’ experience as data engineer in consumer finance or equivalent industry (consumer loans, collections, servicing, optional product, and insurance sales) 

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of at least one language: Java, Scala, Python, C# 

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:

  • Data mining/programming tools (e.g. SAS, SQL, R, Python)

  • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

  • Data visualization (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Discite Analytics Private Limited
Uma Sravya B
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required: (experience with at least most of these)
1. Experience with Big Data tools-Hadoop, Spark, Apache Beam, Kafka etc.
2. Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
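Workflow managers like Airflow run pipeline tasks in dependency order; that core idea can be sketched with the standard library's `graphlib` (the task names are illustrative, and a real DAG would attach operators and schedules to each node):

```python
from graphlib import TopologicalSorter

# A DAG of pipeline tasks: task -> set of upstream dependencies, the same
# shape an Airflow DAG expresses with its `>>` operator.
deps = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
}

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(deps).static_order())
```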
Leading IT MNC Company


Agency job
Ahmedabad
3 - 12 yrs
₹5L - ₹12L / yr
ASP.NET
C#
.NET
Windows Azure
AWS CloudFormation
+6 more

Roles & Responsibilities:

Technical Skills:

  1. .NET – C#, .NET Core, MVC, .NET Framework, Web API, Web Services, Microservices, and SQL
  2. Azure – Azure Cloud, SaaS, PaaS, IaaS, Azure relational and NoSQL databases, Big Data Services

Responsibilities

  • Good understanding of and experience in working on Microsoft Azure (IAAS/PAAS/SAAS)
  • Ability to architect, design, and implement cloud-based solutions
  • Proven track record of designing and implementing IoT-based solutions/Big Data solutions/applications on the Azure cloud platform.
  • Experience in building .Net-based enterprise distributed solutions in Windows and Linux.
  • Experience in using CI and CD tools such as Jenkins, Azure Pipelines, and Terraform. Experience with other tooling such as Ansible, CloudFormation, etc.
  • Good understanding of HA/DR Setups in Cloud
  • Experience and working knowledge of Virtualization, Networking, Data Center, and Security
  • Deep hands-on experience in the design, development, and deployment of business software at scale.
  • Strong hands-on experience in Azure Cloud Platform
  • Experience in Kubernetes, Docker, and other cloud deployment, container technologies
  • Experience / knowledge of other cloud offerings (e.g. AWS, GCP) will be added advantage
  • Experience with monitoring tools like Prometheus, Grafana, Datadog, etc.
Spring Boot

Agency job
via Jobdost by Sathish Kumar
Remote, Ahmedabad
3 - 5 yrs
₹8L - ₹12L / yr
Spring
Spring Boot
Java
Hibernate (Java)
Git
+3 more
1. Experience in Spring Boot

2. Hands-on experience with Hibernate/JPA

3. Knowledge of Microservices and Design Patterns is an added advantage

4. Experience working with tools like Git, Jenkins, Maven

5. Working knowledge with Oracle or MySQL Database

6. Strong agile/scrum development experience
Kalibre global konnects

Posted by Monika Sanghvi
Ahmedabad
1 - 4 yrs
₹3L - ₹5L / yr
Python
R Programming
Data Science
Data Analytics
Big Data
Job Description:
• Job Title: Assistant Manager - Business Analytics
• Age: Max. 35 years
• Working Days: 6 days a week
• Location: Ahmedabad, Gujarat
• Monthly CTC: Salary will be commensurate with experience.
• Educational Qualification: The candidate should have a bachelor's degree in IT/Engineering from any recognized university.
• Experience: 2+ years of work experience in AI/ML/business analytics at an institute of repute or a corporate.

Required Technical Skills:
• A fair understanding of Business Analytics, Data Science, Visualization/Big Data, etc.
• Basic knowledge of different analytical tools such as R Programming, Python, etc.
• Hands-on experience in Moodle development (desirable).
• Good knowledge of customizing Moodle functionalities and developing custom themes for Moodle (desirable).
• An analytical mindset; enjoys helping participants solve problems and turning data into useful, actionable information.

Key Responsibilities include:
• Understand the tools and technologies specific to e-learning and blended learning development and delivery.
• Provide academic as well as technical assistance to the faculty members teaching the analytics courses.
• Work closely with the instructors, assisting them in programming, coding, testing, etc.
• Prepare the lab study material in coordination with the instructors, and assist students in the programming lab by solving their doubts.
• Work on assignments dealing with the routine and daily operation, use, and configuration of the Learning Management System (LMS).
• Administer learning technology platforms, including the creation of courses, certifications, and other e-learning programs on the platforms.
• Provide support within the eLearning department, provide technical support to external clients, and administer the Learning Management System.
• Create user groups, assign content and assessments to the right target audience, run reports, and create learning events in the LMS.
• Perform regular maintenance of the LMS database, including adding or removing courses.
• Upload, test, deploy, and maintain all training materials/learning assets hosted in the LMS.
• Ability to multi-task.
• Ability to demonstrate accuracy on detail-oriented and repetitive job assignments.
• Responsible and reliable.
Sameeksha Capital
Ahmedabad
1 - 2 yrs
₹1L - ₹3L / yr
Java
Python
Data Structures
Algorithms
C++
+1 more
Looking for an Alternative Data Programmer for an equity fund.
The programmer should be proficient in Python and able to work fully independently. They should also be skilled in working with databases and have a strong ability to fetch data from various sources, organize the data, and identify useful information through efficient code.
Some examples of work: 
Text search on earnings transcripts for keywords to identify future trends.  
Integration of internal and external financial databases
Web scraping to capture, clean, and organize data
Automatic updating of our financial models by importing data from machine readable formats such as XBRL 
Fetching data from public databases such as RBI, NSE, BSE, and DGCA, and processing the same.
Back-testing of data, either to test historical cause-and-effect relations on market or portfolio performance, or to back-test our screener criteria when devising strategy.
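The transcript keyword search mentioned in the examples above can be sketched in a few lines of Python (the transcript text and keywords are purely illustrative):

```python
import re
from collections import Counter

def keyword_mentions(transcript: str, keywords: list[str]) -> Counter:
    """Count case-insensitive whole-word mentions of each keyword in a transcript."""
    counts: Counter = Counter()
    lowered = transcript.lower()
    for kw in keywords:
        # \b anchors avoid counting substrings inside longer words.
        counts[kw] = len(re.findall(rf"\b{re.escape(kw.lower())}\b", lowered))
    return counts
```

Run over a quarter's worth of earnings transcripts, rising or falling mention counts for terms like "capex" or "pricing pressure" become a crude trend signal.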