Data architecture Jobs in Ahmedabad


Brand Manufacturer for Bearded Men
Agency job
via Qrata by Prajakta Kulkarni
Ahmedabad
3 - 10 yrs
₹15L - ₹30L / yr
Analytics
Business Intelligence (BI)
Business Analysis
Python
SQL
+2 more
Analytics Head

Technical must haves:

● Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool, e.g. Informatica/Talend
● At least one data query language – SQL or Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, data architecture/schemas, data integrations, data models and data flows is a must
● A technical degree such as BE/B.Tech is a must
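As an illustration of the data query language requirement above, a minimal sketch using Python's built-in sqlite3 module with hypothetical sales data (table and values are illustrative only):

```python
import sqlite3

# In-memory database with a hypothetical sales table (illustrative data only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("West", 120.0), ("East", 80.0), ("West", 50.0)],
)

# Aggregate revenue per region - the kind of query a BI analyst writes daily.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('East', 80.0), ('West', 170.0)]
```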

Technical Ideal to have:

● Exposure to our tech stack – PHP
● Microsoft workflows knowledge

Behavioural Pen Portrait:

● Must have: enthusiasm, drive, high achievement orientation, and a strong command of spoken and written English
● Ideal: Ability to Collaborate

The preferred location is Ahmedabad; however, for exemplary talent we are open to discussing a remote working model.
Techcronus Business Solutions Pvt. Ltd.
Posted by Bhumika Gondaliya
Ahmedabad, Gujarat
3 - 5 yrs
₹7L - ₹10L / yr
Data Warehouse (DWH)
Informatica
ETL
Data migration
Data integration
+7 more

Role & Responsibilities:


  • Ability to architect Azure cloud-based application modernization, and to set up, configure and manage Azure infrastructure
  • Ability to create, recreate, rewrite or refactor applications for cloud resource optimization.
  • Make use of Azure Integration Services: Logic Apps, Service Bus, API Management and Event Grid.
  • Ensure that data is cleansed, mapped, transformed, and otherwise optimized for storage and use according to business and technical requirements.
  • Solution design using Microsoft Azure services and tools including Data Factory, Data Lake, Synapse etc.
  • Extract data, and troubleshoot and maintain the data warehouse.
  • Experience of SQL and Dataverse databases is mandatory.
  • The ability to automate tasks and deploy production standard code (with unit testing, continuous integration, versioning etc.).
  • Load transformed data into storage and reporting structures in destinations including data warehouse, high speed indexes, real-time reporting systems and analytics applications.
  • Build data pipelines to collectively bring together data.
  • Utilize Microsoft Azure PaaS and SaaS solution development technologies including Azure Functions, Azure Notifications Hub, Azure App Service and Key Vault
  • Set up new or modify existing CI/CD pipelines (YAML or Classic).
  • Hands-on experience with automation tools, cloud computing platforms, and scripting languages
  • Ability to learn and implement automation tools and technologies, such as Azure DevOps, Docker, and Terraform on the Azure platform.
  • Knowledge of containerization and container orchestration, such as Kubernetes
  • Experience with Azure monitoring and error-logging tools, debugging skills, and problem-solving ability.
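As a minimal sketch of the cleansing and unit-testing expectations above (pure Python; the field names are hypothetical, and in practice this logic would run inside an Azure service such as Data Factory or Functions):

```python
def cleanse_record(raw: dict) -> dict:
    """Trim strings, normalise casing, and coerce numeric fields.

    A minimal example of the cleanse/map/transform step described above;
    the field names here are hypothetical.
    """
    return {
        "customer_id": int(raw["customer_id"]),
        "email": raw["email"].strip().lower(),
        "amount": round(float(raw["amount"]), 2),
    }

# A small unit-test style check, as the "production standard code" bullet suggests.
record = cleanse_record({"customer_id": "42", "email": " Ada@Example.COM ", "amount": "19.999"})
assert record == {"customer_id": 42, "email": "ada@example.com", "amount": 20.0}
```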


Leading Grooming Platform
Agency job
via Qrata by Blessy Fernandes
Remote, Ahmedabad
3 - 6 yrs
₹15L - ₹25L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+3 more
  • Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool, e.g. Informatica/Talend
  • At least one data query language – SQL or Python
  • Experience in creating breakthrough visualizations
  • Understanding of RDBMS, data architecture/schemas, data integrations, data models and data flows is a must
Service based company
Agency job
via Vmultiply solutions by Mounica Buddharaju
Ahmedabad, Rajkot
2 - 4 yrs
₹3L - ₹6L / yr
Python
Amazon Web Services (AWS)
SQL
ETL


Qualifications :

  • Minimum 2 years of .NET development experience (ASP.NET 3.5 or greater and C# 4 or greater).
  • Good knowledge of MVC, Entity Framework, and Web API/WCF.
  • ASP.NET Core knowledge is preferred.
  • Creating APIs / Using third-party APIs
  • Working knowledge of Angular is preferred.
  • Knowledge of Stored Procedures and experience with a relational database (MSSQL 2012 or higher).
  • Solid understanding of object-oriented development principles
  • Working knowledge of web, HTML, CSS, JavaScript, and the Bootstrap framework
  • Strong understanding of object-oriented programming
  • Ability to create reusable C# libraries
  • Must be able to write clean, readable C# code with clear comments, and have the ability to self-learn.
  • Working knowledge of GIT

Qualities required :

Over and above technical skills, we prefer candidates to have:

  • Good communication and time-management skills.
  • Good team player with the ability to contribute on an individual basis.

  • We provide the best learning and growth environment for candidates.

Skills:

  • .NET Core
  • .NET Framework
  • ASP.NET Core
  • ASP.NET MVC
  • ASP.NET Web API
  • C#
  • HTML


Product and Service based company
Hyderabad, Ahmedabad
8 - 12 yrs
₹15L - ₹30L / yr
SQL server
Relational Database (RDBMS)
NOSQL Databases
Oracle
Database Design
+3 more

Job Description

Job Responsibilities

  • Design and implement robust database solutions including

    • Security, backup and recovery

    • Performance, scalability, monitoring and tuning

    • Data management and capacity planning

    • Planning and implementing failover between database instances

  • Create data architecture strategies for each subject area of the enterprise data model.

  • Communicate plans, status and issues to higher management levels.

  • Collaborate with the business, architects and other IT organizations to plan a data strategy, sharing important information related to database concerns and constraints

  • Produce all project data architecture deliverables.

  • Create and maintain a corporate repository of all data architecture artifacts.

 

Skills Required:

  • Understanding of data analysis, business principles, and operations

  • Software architecture and design; network design and implementation

  • Data visualization, data migration and data modelling

  • Relational database management systems

  • DBMS software, including SQL Server  

  • Database and cloud computing design, architectures and data lakes

  • Information management and data processing on multiple platforms 

  • Agile methodologies and enterprise resource planning implementation

  • Demonstrated database technical functionality, such as performance tuning, backup and recovery, and monitoring

  • Excellent skills with advanced features such as database encryption, replication, partitioning, etc.

  • Strong problem-solving, organizational and communication skills.

Ahmedabad, Hyderabad, Pune, Delhi
5 - 7 yrs
₹18L - ₹25L / yr
AWS Lambda
AWS Simple Notification Service (SNS)
AWS Simple Queuing Service (SQS)
Python
PySpark
+9 more
Data Engineer

Required skill set: AWS Glue, AWS Lambda, AWS SNS/SQS, AWS Athena, Spark, Snowflake, Python

Mandatory Requirements  

  • Experience in AWS Glue
  • Experience in Apache Parquet 
  • Proficient in AWS S3 and data lake 
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices.
  • Scripting languages – Python and PySpark

CORE RESPONSIBILITIES 

  • Create and manage cloud resources in AWS 
  • Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies
  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 
  • Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
  • Define process improvement opportunities to optimize data collection, insights and displays.
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 
  • Identify and interpret trends and patterns from complex data sets 
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 
  • Key participant in regular Scrum ceremonies with the agile teams  
  • Proficient at developing queries, writing reports and presenting findings 
  • Mentor junior members and bring best industry practices 
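The automated data-quality check mentioned in the responsibilities above might look like the following minimal sketch (pure Python; the column names and thresholds are hypothetical):

```python
def quality_check(rows, required_columns, min_rows=1):
    """Validate an ingested batch before it enters the platform.

    Returns a list of human-readable issues; an empty list means the batch passed.
    """
    issues = []
    if len(rows) < min_rows:
        issues.append(f"expected at least {min_rows} rows, got {len(rows)}")
    for i, row in enumerate(rows):
        missing = required_columns - row.keys()
        if missing:
            issues.append(f"row {i} missing columns: {sorted(missing)}")
    return issues

# A hypothetical batch where the second record is missing a required field.
batch = [{"id": 1, "amount": 10.0}, {"id": 2}]
print(quality_check(batch, required_columns={"id", "amount"}))
# ["row 1 missing columns: ['amount']"]
```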

QUALIFICATIONS 

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or related discipline
  • Advanced knowledge of one of: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools. 
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 
  • Good written and oral communication skills and ability to present results to non-technical audiences 
  • Knowledge of business intelligence and analytical tools, technologies and techniques.

  

Familiarity and experience in the following is a plus:  

  • AWS certification
  • Spark Streaming 
  • Kafka Streaming / Kafka Connect 
  • ELK Stack 
  • Cassandra / MongoDB 
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Consulting and Services company
Hyderabad, Ahmedabad
5 - 10 yrs
₹5L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Python
PySpark

Data Engineer 

  

Mandatory Requirements  

  • Experience in AWS Glue 
  • Experience in Apache Parquet  
  • Proficient in AWS S3 and data lake  
  • Knowledge of Snowflake 
  • Understanding of file-based ingestion best practices. 
  • Scripting languages – Python and PySpark

 

CORE RESPONSIBILITIES 

  • Create and manage cloud resources in AWS  
  • Data ingestion from different data sources which expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies
  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform  
  • Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations
  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data. 
  • Define process improvement opportunities to optimize data collection, insights and displays. 
  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible  
  • Identify and interpret trends and patterns from complex data sets  
  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.  
  • Key participant in regular Scrum ceremonies with the agile teams   
  • Proficient at developing queries, writing reports and presenting findings  
  • Mentor junior members and bring best industry practices  

 

QUALIFICATIONS 

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science or related discipline 
  • Advanced knowledge of one of: Java, Scala, Python, C#
  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake   
  • Proficient with:
    • Data mining/programming tools (e.g. SAS, SQL, R, Python)
    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools.  
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.  
  • Good written and oral communication skills and ability to present results to non-technical audiences  
  • Knowledge of business intelligence and analytical tools, technologies and techniques. 

 

Familiarity and experience in the following is a plus:  

  • AWS certification 
  • Spark Streaming  
  • Kafka Streaming / Kafka Connect  
  • ELK Stack  
  • Cassandra / MongoDB  
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Product and Service based company
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark
+13 more

Job Description

 

Mandatory Requirements 

  • Experience in AWS Glue

  • Experience in Apache Parquet 

  • Proficient in AWS S3 and data lake 

  • Knowledge of Snowflake

  • Understanding of file-based ingestion best practices.

  • Scripting languages – Python and PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS 

  • Data ingestion from different data sources which expose data using different technologies, such as RDBMS, flat files, streams, and time-series data from various proprietary systems. Implement data ingestion and processing with the help of Big Data technologies

  • Data processing/transformation using various technologies such as Spark and Cloud Services. You will need to understand your part of business logic and implement it using the language supported by the base data platform 

  • Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations

  • Develop an infrastructure to collect, transform, combine and publish/distribute customer data.

  • Define process improvement opportunities to optimize data collection, insights and displays.

  • Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible 

  • Identify and interpret trends and patterns from complex data sets 

  • Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders. 

  • Key participant in regular Scrum ceremonies with the agile teams  

  • Proficient at developing queries, writing reports and presenting findings 

  • Mentor junior members and bring best industry practices.

 

QUALIFICATIONS

  • 5-7+ years’ experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)

  • Strong background in math, statistics, computer science, data science or related discipline

  • Advanced knowledge of one of: Java, Scala, Python, C#

  • Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake  

  • Proficient with:

    • Data mining/programming tools (e.g. SAS, SQL, R, Python)

    • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

    • Data visualization tools (e.g. Tableau, Looker, MicroStrategy)

  • Comfortable learning about and deploying new technologies and tools. 

  • Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines. 

  • Good written and oral communication skills and ability to present results to non-technical audiences 

  • Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus: 

  • AWS certification

  • Spark Streaming 

  • Kafka Streaming / Kafka Connect 

  • ELK Stack 

  • Cassandra / MongoDB 

  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

IT Product based Org.
Agency job
via OfficeDay Innovation by OFFICEDAY INNOVATION
Ahmedabad
3 - 5 yrs
₹10L - ₹12L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Deep Learning
+7 more
  • 3+ years of experience applying AI/ML/NLP/deep learning and data-driven statistical analysis & modelling solutions.
  • Programming skills in Python, knowledge in Statistics.
  • Hands-on experience developing supervised and unsupervised machine learning algorithms (regression, decision trees/random forest, neural networks, feature selection/reduction, clustering, parameter tuning, etc.). Familiarity with reinforcement learning is highly desirable.
  • Experience in the financial domain and familiarity with financial models are highly desirable.
  • Experience in image processing and computer vision.
  • Experience working with building data pipelines.
  • Good understanding of Data preparation, Model planning, Model training, Model validation, Model deployment and performance tuning.
  • Should have hands-on experience with some of these methods: Regression, Decision Trees, CART, Random Forest, Boosting, Evolutionary Programming, Neural Networks, Support Vector Machines, Ensemble Methods, Association Rules, Principal Component Analysis, Clustering, Artificial Intelligence
  • Should have experience working with large data sets in a Postgres database.
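As a minimal illustration of the supervised-learning methods listed above, a one-variable least-squares regression fitted from scratch on toy data (real work would use scikit-learn or a similar library):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    b = mean_y - a * mean_x
    return a, b

# Toy data lying exactly on y = 2x + 1, so the fit recovers the coefficients.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(round(a, 6), round(b, 6))  # 2.0 1.0
```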

 

Simform Solutions

Posted by Dipali Pithava
Ahmedabad
4 - 8 yrs
₹5L - ₹12L / yr
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
DBA
+4 more
We are looking for a Lead DBA with 4-7 years of experience.

We are a fast-growing digital, cloud, and mobility services provider whose principal market is North America. We are looking for talented database/SQL experts for the management and analytics of large data in various enterprise projects.

Responsibilities
 Translate business needs to technical specifications
 Manage and maintain various database servers (backup, replicas, shards, jobs)
 Develop and execute database queries and conduct analyses
 Occasionally write scripts for ETL jobs.
 Create tools to store data (e.g. OLAP cubes)
 Develop and update technical documentation

Requirements
 Proven experience as a database programmer and administrator
 Background in data warehouse design (e.g. dimensional modeling) and data mining
 Good understanding of SQL and NoSQL databases, online analytical processing (OLAP) and ETL
(Extract, transform, load) framework
 Advanced knowledge of SQL queries, SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS)
 Familiarity with BI technologies (strong Tableau hands-on experience) is a plus
 Analytical mind with a problem-solving aptitude
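The OLAP-cube storage tool mentioned in the responsibilities above can be approximated with grouped aggregates; a small sketch using Python's built-in sqlite3 and hypothetical order data (SQLite lacks ROLLUP/CUBE operators, so the grand total is emulated with UNION ALL):

```python
import sqlite3

# In-memory database with hypothetical order data (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (year INT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(2023, "West", 100.0), (2023, "East", 60.0), (2024, "West", 40.0)],
)

# Per-(year, region) cells plus a grand-total row: one slice of a cube.
rows = conn.execute(
    """
    SELECT year, region, SUM(amount) FROM orders GROUP BY year, region
    UNION ALL
    SELECT NULL, NULL, SUM(amount) FROM orders
    """
).fetchall()

grand_total = [r for r in rows if r[0] is None][0]
print(grand_total)  # (None, None, 200.0)
```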
Discite Analytics Private Limited
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
+13 more
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with at least most of these):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow.
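The extract-transform-load flow implied by the skills above can be sketched as a chain of Python generators (the source records and field names are hypothetical; real pipelines would draw from databases or APIs and load into a warehouse):

```python
def extract(records):
    """Extract: yield raw records from a source (a list stands in for a DB/API)."""
    yield from records

def transform(rows):
    """Transform: drop rows with no amount and normalise the remaining fields."""
    for row in rows:
        if row.get("amount") is not None:
            yield {"user": row["user"].lower(), "amount": float(row["amount"])}

def load(rows):
    """Load: collect into a destination (a list stands in for a warehouse table)."""
    return list(rows)

raw = [{"user": "Ada", "amount": "10"}, {"user": "Bob", "amount": None}]
warehouse = load(transform(extract(raw)))
print(warehouse)  # [{'user': 'ada', 'amount': 10.0}]
```

Because each stage is a generator, records stream through one at a time, which is the same shape a Spark or Airflow task graph gives at larger scale.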