Apache Beam Jobs in Ahmedabad


Apply to 11+ Apache Beam Jobs in Ahmedabad on CutShort.io. Explore the latest Apache Beam Job opportunities across top companies like Google, Amazon & Adobe.

Discite Analytics Private Limited
Posted by Uma Sravya B
Ahmedabad
4 - 7 yrs
₹12L - ₹20L / yr
Hadoop
Big Data
Data engineering
Spark
Apache Beam
Responsibilities:
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.

Skills required (experience with most of the following):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc. (a minimal Beam pipeline sketch follows after this list)
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and data warehousing.
4. Experience with, and a firm understanding of, relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, and Cassandra.
5. Experience with cloud platforms such as AWS, GCP, and Azure.
6. Experience with workflow management using tools like Apache Airflow.
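
For context on the Apache Beam requirement, here is a minimal word-count-style pipeline sketch using the Beam Python SDK; it is an illustration, not part of the posting, and the input/output paths are hypothetical.

```python
# Minimal word-count-style Apache Beam pipeline (Python SDK).
# Assumes `pip install apache-beam`; input/output paths are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("input.txt")
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerWord" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda word, count: f"{word}\t{count}")
        | "Write" >> beam.io.WriteToText("word_counts")
    )
```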
Leading Grooming Platform
Agency job
via Qrata by Blessy Fernandes
Remote, Ahmedabad
3 - 6 yrs
₹15L - ₹25L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
  • Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool such as Informatica or Talend
  • At least one data query language: SQL/Python (a hedged SQL-plus-Python sketch follows after this list)
  • Experience in creating breakthrough visualizations
  • Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
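
As a hedged illustration of the SQL-plus-Python pairing above (not part of the posting), the sketch below pulls an aggregate with the standard-library sqlite3 module and charts it with matplotlib; the database file, table, and column names are all hypothetical.

```python
# Hypothetical example: query an aggregate with SQL, visualize with Python.
# Uses stdlib sqlite3; matplotlib is assumed installed.
import sqlite3
import matplotlib.pyplot as plt

conn = sqlite3.connect("sales.db")  # hypothetical database file
rows = conn.execute(
    "SELECT region, SUM(revenue) FROM orders GROUP BY region"  # hypothetical table
).fetchall()
conn.close()

regions, revenue = zip(*rows)
plt.bar(regions, revenue)
plt.title("Revenue by region")
plt.show()
```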
Techcronus Business Solutions Pvt. Ltd.
Bhumika Gondaliya
Posted by Bhumika Gondaliya
Ahmedabad, Gujarat
3 - 5 yrs
₹7L - ₹10L / yr
Data Warehouse (DWH)
Informatica
ETL
Data migration
Data integration

Role & Responsibilities:

  • Ability to architect Azure cloud-based application modernization, and to set up, configure, and manage Azure infrastructure
  • Ability to create, rewrite, or refactor applications for cloud resource optimization
  • Make use of Azure Integration Services: Logic Apps, Service Bus, API Management, and Event Grid
  • Ensure that data is cleansed, mapped, transformed, and otherwise optimized for storage and use according to business and technical requirements (a hedged cleanse-and-land sketch follows after this list)
  • Solution design using Microsoft Azure services and tools, including Data Factory, Data Lake, Synapse, etc.
  • Extracting data, troubleshooting, and maintaining the data warehouse
  • Experience with SQL and Dataverse databases is mandatory
  • Ability to automate tasks and deploy production-standard code (with unit testing, continuous integration, versioning, etc.)
  • Load transformed data into storage and reporting structures, including the data warehouse, high-speed indexes, real-time reporting systems, and analytics applications
  • Build data pipelines to bring data together
  • Utilize Microsoft Azure PaaS and SaaS development technologies, including Azure Functions, Azure Notification Hubs, Azure App Service, and Key Vault
  • Set up new or modify existing CI/CD pipelines (YAML or Classic)
  • Hands-on experience with automation tools, cloud computing platforms, and scripting languages
  • Ability to learn and implement automation tools and technologies such as Azure DevOps, Docker, and Terraform on the Azure platform
  • Knowledge of containerization and container orchestration, such as Kubernetes
  • Experience with Azure monitoring and error-logging tools, debugging skills, and problem-solving ability
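
As a hedged sketch of the cleanse-and-land step described above: the snippet below cleans a raw extract with pandas and uploads it to Azure Blob/Data Lake storage. The azure-storage-blob and pandas packages are assumed installed, and the connection-string environment variable, container, and file paths are all hypothetical.

```python
# Hypothetical sketch: cleanse a raw extract and land it in Azure Blob /
# Data Lake storage. All names (env var, container, paths) are made up.
import os
import pandas as pd
from azure.storage.blob import BlobServiceClient

df = pd.read_csv("raw_orders.csv")  # hypothetical source extract
df = df.dropna(subset=["order_id"]).drop_duplicates(subset=["order_id"])

service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONN"])
blob = service.get_blob_client(container="curated", blob="orders/orders_clean.csv")
blob.upload_blob(df.to_csv(index=False), overwrite=True)
```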


Service-based company
Agency job
via Vmultiply solutions by Mounica Buddharaju
Ahmedabad, Rajkot
2 - 4 yrs
₹3L - ₹6L / yr
Python
Amazon Web Services (AWS)
SQL
ETL


Qualifications:

  • Minimum 2 years of .NET development experience (ASP.NET 3.5 or greater and C# 4 or greater)
  • Good knowledge of MVC, Entity Framework, and Web API/WCF
  • ASP.NET Core knowledge is preferred
  • Experience creating APIs and consuming third-party APIs
  • Working knowledge of Angular is preferred
  • Knowledge of stored procedures and experience with a relational database (MS SQL Server 2012 or higher)
  • Solid understanding of object-oriented programming principles
  • Working knowledge of the web: HTML, CSS, JavaScript, and the Bootstrap framework
  • Ability to create reusable C# libraries
  • Must write readable C# code with clean comments, and have the ability to self-learn
  • Working knowledge of Git

Qualities required:

Over and above technical skills, we prefer:

  • Good communication and time-management skills.
  • A good team player with the ability to contribute on an individual basis.

We provide the best learning and growth environment for candidates.

Skills:

  • .NET Core
  • .NET Framework
  • ASP.NET Core
  • ASP.NET MVC
  • ASP.NET Web API
  • C#
  • HTML

Brand Manufacturer for Bearded Men
Agency job
via Qrata by Prajakta Kulkarni
Ahmedabad
3 - 10 yrs
₹15L - ₹30L / yr
Analytics
Business Intelligence (BI)
Business Analysis
Python
SQL
Analytics Head

Technical must-haves:

● Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, then knowledge of an ETL tool such as Informatica or Talend
● At least one data query language: SQL/Python
● Experience in creating breakthrough visualizations
● Understanding of RDBMS, data architecture/schemas, data integrations, data models, and data flows is a must
● A technical degree such as BE/B.Tech is a must

Technical, ideal to have:

● Exposure to our tech stack: PHP
● Knowledge of Microsoft workflows

Behavioural Pen Portrait:

● Must have: enthusiastic, aggressive, and vigorous, with a high achievement orientation and a strong command of spoken and written English
● Ideal: ability to collaborate

The preferred location is Ahmedabad; however, for exemplary talent we are open to a remote working model (to be discussed).
Ahmedabad, Hyderabad, Pune, Delhi
5 - 7 yrs
₹18L - ₹25L / yr
AWS Lambda
AWS Simple Notification Service (SNS)
AWS Simple Queuing Service (SQS)
Python
PySpark
Data Engineer

Required skill set: AWS Glue, AWS Lambda, AWS SNS/SQS, AWS Athena, Spark, Snowflake, Python

Mandatory Requirements

  • Experience in AWS Glue
  • Experience in Apache Parquet
  • Proficiency with AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages: Python and PySpark (a minimal PySpark sketch follows after this list)
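
A minimal PySpark sketch of the Parquet-on-S3 pattern in the requirements above; it is illustrative only. A Spark runtime with the S3A connector configured is assumed, and the bucket and paths are hypothetical.

```python
# Hypothetical PySpark job: read raw Parquet from S3, deduplicate, write back.
# Assumes a Spark runtime with the S3A connector; paths are made up.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

raw = spark.read.parquet("s3a://example-lake/raw/orders/")
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("ingested_at", F.current_timestamp())
)
curated.write.mode("overwrite").parquet("s3a://example-lake/curated/orders/")
```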

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from sources that expose it through different technologies (RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems), implementing ingestion and processing with Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data-quality checks to ensure the right data enters the platform and to verify the results of calculations
  • Develop infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process-improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns in complex data sets
  • Construct a framework that uses data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Participate in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports, and present findings
  • Mentor junior members and bring in industry best practices

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of at least one language: Java, Scala, Python, or C#
  • Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, and Snowflake
  • Proficiency with:
      • Data mining/programming tools (e.g. SAS, SQL, R, Python)
      • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
      • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously while meeting established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

  

Familiarity and experience with the following is a plus:

  • AWS certification
  • Spark Streaming
  • Kafka Streams / Kafka Connect
  • ELK Stack
  • Cassandra / MongoDB
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Product and Service based company
Hyderabad, Ahmedabad
4 - 8 yrs
₹15L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Snowflake schema
Python
Spark

Job Description

 

Mandatory Requirements

  • Experience in AWS Glue
  • Experience in Apache Parquet
  • Proficiency with AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages: Python and PySpark

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from sources that expose it through different technologies (RDBMS, flat files, streams, and time-series data from various proprietary systems), implementing ingestion and processing with Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data-quality checks to ensure the right data enters the platform and to verify the results of calculations (a hedged data-quality sketch follows after this list)
  • Develop infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process-improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns in complex data sets
  • Construct a framework that uses data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Participate in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports, and present findings
  • Mentor junior members and bring in industry best practices
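
A hedged sketch of the automated data-quality gate mentioned above: it fails the pipeline before bad data lands. The checks, paths, and column names are hypothetical; a PySpark runtime is assumed.

```python
# Hypothetical data-quality gate: fail the pipeline before bad data lands.
# Paths and column names are made up; assumes a PySpark runtime.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("s3a://example-lake/curated/orders/")

failures = []
if df.count() == 0:
    failures.append("dataset is empty")
null_ids = df.filter(F.col("order_id").isNull()).count()
if null_ids:
    failures.append(f"{null_ids} rows with null order_id")

if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
```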

 

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of at least one language: Java, Scala, Python, or C#
  • Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, and Snowflake
  • Proficiency with:
      • Data mining/programming tools (e.g. SAS, SQL, R, Python)
      • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
      • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously while meeting established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

Familiarity and experience with the following is a plus:

  • AWS certification
  • Spark Streaming
  • Kafka Streams / Kafka Connect
  • ELK Stack
  • Cassandra / MongoDB
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools

Consulting and Services company
Hyderabad, Ahmedabad
5 - 10 yrs
₹5L - ₹30L / yr
Amazon Web Services (AWS)
Apache
Python
PySpark

Data Engineer 

  

Mandatory Requirements

  • Experience in AWS Glue (a hedged Glue job skeleton follows after this list)
  • Experience in Apache Parquet
  • Proficiency with AWS S3 and data lakes
  • Knowledge of Snowflake
  • Understanding of file-based ingestion best practices
  • Scripting languages: Python and PySpark
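
A hedged AWS Glue job skeleton for the Glue requirement above. Note the awsglue library is available only inside the Glue runtime, and the catalog database, table, and S3 path here are hypothetical.

```python
# Hypothetical AWS Glue job skeleton: catalog table in, Parquet on S3 out.
# Runnable only inside the Glue runtime, which provides the awsglue library.
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"  # hypothetical catalog entries
)
glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-lake/curated/orders/"},
    format="parquet",
)
job.commit()
```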

 

CORE RESPONSIBILITIES

  • Create and manage cloud resources in AWS
  • Ingest data from sources that expose it through different technologies (RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems), implementing ingestion and processing with Big Data technologies
  • Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
  • Develop automated data-quality checks to ensure the right data enters the platform and to verify the results of calculations
  • Develop infrastructure to collect, transform, combine, and publish/distribute customer data
  • Define process-improvement opportunities to optimize data collection, insights, and displays
  • Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible
  • Identify and interpret trends and patterns in complex data sets
  • Construct a framework that uses data visualization tools and techniques to present consolidated, actionable analytical results to relevant stakeholders
  • Participate in regular Scrum ceremonies with the agile teams
  • Develop queries, write reports, and present findings
  • Mentor junior members and bring in industry best practices

 

QUALIFICATIONS

  • 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
  • Strong background in math, statistics, computer science, data science, or a related discipline
  • Advanced knowledge of at least one language: Java, Scala, Python, or C#
  • Production experience with HDFS, YARN, Hive, Spark, Kafka, Oozie/Airflow, Amazon Web Services (AWS), Docker/Kubernetes, and Snowflake
  • Proficiency with:
      • Data mining/programming tools (e.g. SAS, SQL, R, Python)
      • Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
      • Data visualization (e.g. Tableau, Looker, MicroStrategy)
  • Comfortable learning about and deploying new technologies and tools
  • Organizational skills and the ability to handle multiple projects and priorities simultaneously while meeting established deadlines
  • Good written and oral communication skills and the ability to present results to non-technical audiences
  • Knowledge of business intelligence and analytical tools, technologies, and techniques

 

Familiarity and experience with the following is a plus:

  • AWS certification
  • Spark Streaming
  • Kafka Streams / Kafka Connect
  • ELK Stack
  • Cassandra / MongoDB
  • CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
Simform Solutions
Posted by Dipali Pithava
Ahmedabad
4 - 8 yrs
₹5L - ₹12L / yr
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
DBA
We are looking for a Lead DBA with 4-7 years of experience.

We are a fast-growing digital, cloud, and mobility services provider whose principal market is North America. We are looking for talented database/SQL experts for the management and analytics of large data in various enterprise projects.

Responsibilities
  • Translate business needs into technical specifications
  • Manage and maintain various database servers (backups, replicas, shards, jobs)
  • Develop and execute database queries and conduct analyses
  • Occasionally write scripts for ETL jobs (a hedged example follows after this list)
  • Create tools to store data (e.g. OLAP cubes)
  • Develop and update technical documentation
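
As a hedged illustration of the "occasionally write scripts for ETL jobs" duty above, the sketch below loads a CSV extract into a warehouse table. It uses the standard-library sqlite3 so it runs anywhere; in practice this would target SQL Server via a driver such as pyodbc. Table and file names are hypothetical.

```python
# Hypothetical ETL script: load a CSV extract into a warehouse table.
# Uses stdlib sqlite3 for portability; names are made up.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS sales (sale_id TEXT PRIMARY KEY, amount REAL)"
)
with open("sales_extract.csv", newline="") as f:  # hypothetical extract file
    rows = [(r["sale_id"], float(r["amount"])) for r in csv.DictReader(f)]
conn.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?)", rows)
conn.commit()
conn.close()
```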

Requirements
  • Proven experience as a database programmer and administrator
  • Background in data warehouse design (e.g. dimensional modeling) and data mining
  • Good understanding of SQL and NoSQL databases, online analytical processing (OLAP), and the ETL (extract, transform, load) framework
  • Advanced knowledge of SQL queries, SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS)
  • Familiarity with BI technologies (strong hands-on Tableau experience) is a plus
  • An analytical mind with a problem-solving aptitude
LendingKart
Posted by Mohammed Nayeem
Bengaluru (Bangalore), Ahmedabad
2 - 5 yrs
₹2L - ₹13L / yr
Python
Data Science
SQL
Roles and Responsibilities:
  • Mine large volumes of credit-behavior data to generate insights around product holdings and monetization opportunities for cross-sell
  • Use data science to size the opportunity and product potential for the launch of any new product/pilot
  • Build propensity models using heuristics and campaign performance to maximize efficiency (a hedged modelling sketch follows after this list)
  • Conduct portfolio analysis and establish key metrics for cross-sell partnerships
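
A hedged sketch of the propensity-modelling work above, using scikit-learn logistic regression on synthetic data; the features are stand-ins, not the actual credit-behavior variables.

```python
# Hypothetical propensity model on synthetic data (scikit-learn assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # stand-ins for e.g. balance, tenure, utilization
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
propensity = model.predict_proba(X_test)[:, 1]  # cross-sell propensity scores
print("top-decile cutoff:", np.quantile(propensity, 0.9))
```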

Desired profile/skills:
  • 2-5 years of experience with a degree in any quantitative discipline such as Engineering, Computer Science, Economics, Statistics, or Mathematics
  • Excellent problem-solving and comprehensive analytical skills: the ability to structure ambiguous problem statements, perform detailed analysis, and derive crisp insights
  • Solid experience using Python and SQL
  • Prior work experience in the financial services space would be highly valued

Location: Bangalore/Ahmedabad
Sameeksha Capital
Ahmedabad
1 - 2 yrs
₹1L - ₹3L / yr
Java
Python
Data Structures
Algorithms
C++
+1 more
Looking for an Alternative Data Programmer for an equity fund.

The programmer should be proficient in Python and able to work fully independently. They should also have the skills to work with databases and a strong capability to fetch data from various sources, organize it, and identify useful information through efficient code.

Some examples of the work:
  • Text search on earnings transcripts for keywords to identify future trends (a hedged sketch follows after this list)
  • Integration of internal and external financial databases
  • Web scraping to capture, clean, and organize data
  • Automatic updating of our financial models by importing data from machine-readable formats such as XBRL
  • Fetching data from public databases such as RBI, NSE, BSE, and DGCA, and processing it
  • Back-testing of data, either to test historical cause-and-effect relationships on market/portfolio performance, or to back-test our screener criteria when devising strategy
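
A hedged sketch of the transcript keyword search described above; the transcripts folder and the keyword list are hypothetical.

```python
# Hypothetical keyword scan over earnings-call transcripts (plain-text files).
import pathlib
import re

KEYWORDS = ["capacity expansion", "price hike", "new product"]  # made-up list

for path in pathlib.Path("transcripts").glob("*.txt"):  # hypothetical folder
    text = path.read_text(encoding="utf-8").lower()
    hits = {kw: len(re.findall(re.escape(kw), text)) for kw in KEYWORDS}
    if any(hits.values()):
        print(path.name, hits)
```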