Senior GCP Data Lead
Posted by Nidhi Shivane
3 - 8 yrs
ā‚¹10L - ā‚¹20L / yr
Pune
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Google Cloud Platform (GCP)
BigQuery
SQL


Role Description

This is a full-time hybrid role as a GCP Data Engineer. You will be responsible for managing large sets of structured and unstructured data and for developing processes that convert data into insights, information, and knowledge.

Skill Name: GCP Data Engineer

Experience: 7-10 years

Notice Period: 0-15 days

Location: Pune

If you have a passion for data engineering and possess the following, we would love to hear from you:


• 7 to 10 years of experience working across the Software Development Life Cycle (SDLC)

• At least 4 years of experience with Google Cloud Platform, with a focus on BigQuery

• Proficiency in Java and Python, along with experience in Google Cloud SDK & API scripting

• Experience in the Finance/Revenue domain would be considered an added advantage

• Familiarity with GCP migration activities and the dbt tool would also be beneficial


You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud Platform.

Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.


Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.


About Arahas Technologies

Founded: 2022
Size: 100-1000
Stage: Profitable

Similar jobs

Epik Solutions
Sakshi Sarraf
Posted by Sakshi Sarraf
Bengaluru (Bangalore), Noida
4 - 13 yrs
ā‚¹7L - ā‚¹18L / yr
Python
SQL
Databricks
Scala
Spark
+2 more

Job Description:


As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:


Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
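A minimal sketch of that ingest → transform → load flow, in plain Python rather than the PySpark/Scala stack the role actually uses (the data, field names, and aggregation are illustrative only):

```python
import csv
import io

# Illustrative ingest -> transform -> load steps; a real pipeline would run
# these as PySpark stages on Azure Databricks rather than plain Python.

RAW_CSV = """order_id,amount,country
1,120.50,IN
2,99.00,US
3,15.25,IN
"""

def ingest(source: str) -> list:
    """Ingest: read raw records from a CSV source."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list) -> list:
    """Transform: cast types and keep only the fields downstream needs."""
    return [
        {"order_id": int(r["order_id"]),
         "amount": float(r["amount"]),
         "country": r["country"]}
        for r in rows
    ]

def load(rows: list) -> dict:
    """Load: write to a target -- here, an in-memory aggregate by country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

totals = load(transform(ingest(RAW_CSV)))
print(totals)  # {'IN': 135.75, 'US': 99.0}
```

The three-function split mirrors how pipeline stages are usually kept separate so each can be tested and scaled independently.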


Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.


Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.


Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
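A rough sketch of what such validation rules can look like in practice (the rule set and field names below are hypothetical, not taken from this role):

```python
# Hypothetical data-quality checks: each rule returns a list of violation
# messages for a batch of records; an empty list means the batch passes.

def check_not_null(rows, field):
    """Flag rows where a required field is missing."""
    return [f"row {i}: {field} is null"
            for i, r in enumerate(rows) if r.get(field) is None]

def check_range(rows, field, lo, hi):
    """Flag rows where a numeric field falls outside [lo, hi]."""
    return [f"row {i}: {field}={r[field]} outside [{lo}, {hi}]"
            for i, r in enumerate(rows)
            if r.get(field) is not None and not (lo <= r[field] <= hi)]

batch = [
    {"customer_id": 1, "age": 34},
    {"customer_id": None, "age": 28},   # missing id
    {"customer_id": 3, "age": 212},     # implausible age
]

violations = (check_not_null(batch, "customer_id")
              + check_range(batch, "age", 0, 120))
print(violations)
# ['row 1: customer_id is null', 'row 2: age=212 outside [0, 120]']
```

In a production pipeline these checks would typically run as a gating step before loading, with violations routed to a quarantine table rather than printed.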


Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
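As one concrete instance of the caching techniques mentioned, memoizing an expensive lookup makes it run once per distinct key instead of once per row; the function below is a hypothetical stand-in for a costly query:

```python
from functools import lru_cache

CALLS = 0  # counts how often the expensive path actually runs

@lru_cache(maxsize=None)
def dimension_lookup(key: str) -> str:
    """Stand-in for an expensive lookup; a Spark pipeline might instead
    cache a DataFrame (df.cache()) or broadcast a small dimension table."""
    global CALLS
    CALLS += 1
    return key.upper()

# A fact-table scan tends to hit the same dimension keys repeatedly...
for key in ["in", "us", "in", "in", "us"]:
    dimension_lookup(key)

print(CALLS)  # 2 -- only one real computation per distinct key
```

The same idea underlies materialized views and result-set caching on the database side: pay the computation cost once, serve repeats from the cache.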


Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related problems.


Documentation and collaboration: You will document data pipelines, data flows, and data transformation processes. You will collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and provide data engineering support.


Skills and Qualifications:


Strong experience with Azure Databricks, Python, SQL, ADF, PySpark, and Scala.

Proficiency in designing and developing data pipelines and ETL processes.

Solid understanding of data modeling concepts and database design principles.

Familiarity with data integration and orchestration using Azure Data Factory.

Knowledge of data quality management and data governance practices.

Experience with performance tuning and optimization of data pipelines.

Strong problem-solving and troubleshooting skills related to data engineering.

Excellent collaboration and communication skills to work effectively in cross-functional teams.

Understanding of cloud computing principles and experience with Azure services.


SmartHub Innovation Pvt Ltd
Sathya Venkatesh
Posted by Sathya Venkatesh
Bengaluru (Bangalore)
5 - 7 yrs
ā‚¹15L - ā‚¹20L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+3 more

JD Code: SHI-LDE-01

Version#: 1.0

Date of JD Creation: 27-March-2023

Position Title: Lead Data Engineer

Reporting to: Technical Director

Location: Bangalore Urban, India (on-site)

SmartHub.ai (www.smarthub.ai) is a fast-growing startup headquartered in Palo Alto, CA, with offices in Seattle and Bangalore. We operate at the intersection of AI, IoT and Edge Computing. With strategic investments from leaders in infrastructure and data management, SmartHub.ai is redefining the Edge IoT space. Our "Software Defined Edge" products help enterprises rapidly accelerate their Edge infrastructure management and intelligence. We empower enterprises to leverage their Edge environment to increase revenue and efficiency of operations, and to manage safety and digital risks, using Edge and AI technologies.

SmartHub is an equal opportunity employer committed to nurturing a workplace culture that supports, inspires and respects all individuals, and encourages employees to bring their best selves to work, laugh and share. We seek builders from a variety of backgrounds, perspectives and skills to join our team.

Summary

This role requires the candidate to translate business and product requirements to build, maintain and optimize data systems, which may be relational or non-relational in nature. The candidate is expected to tune and analyse data, including short- and long-term trend analysis, reporting, and AI/ML use cases.

We are looking for a talented technical professional with at least 8 years of proven experience in owning, architecting, designing, operating and optimising databases used for large-scale analytics and reporting.

Responsibilities

  • Provide technical and architectural leadership for the next generation of product development.
  • Innovate, research and evaluate new technologies and tools for a quality output.
  • Architect, design and implement with scalability, performance and security in mind.
  • Code and implement new algorithms to solve complex problems.
  • Analyze complex data; develop, optimize and transform large data sets, both structured and unstructured.
  • Deploy and administer databases and continuously tune them for performance, especially on container orchestration stacks such as Kubernetes.
  • Develop analytical models and solutions.
  • Mentor junior members technically in architecture, design and robust coding.
  • Work in an Agile development environment while continuously evaluating and improving engineering processes.

Required

  • At least 8 years of experience with significant depth in designing and building scalable distributed database systems for enterprise-class products; experience of working in product development companies.
  • Should have been feature/component lead for several complex features involving large datasets.
  • Strong background in relational and non-relational databases such as Postgres, MongoDB and Hadoop.
  • Deep expertise in database optimization and tuning; SQL, time-series databases, Apache Drill, HDFS and Spark are good to have.
  • Excellent analytical and problem-solving skill sets.
  • Experience optimizing for high throughput is highly desirable.
  • Exposure to database provisioning, configuration and tuning in a highly available mode, in Kubernetes and non-Kubernetes environments.
  • Demonstrated ability to provide technical leadership and mentoring to the team.


Kaleidofin
Poornima B
Posted by Poornima B
Chennai, Bengaluru (Bangalore)
5 - 7 yrs
Best in industry
Business Intelligence (BI)
PowerBI
Python
SQL
R
+2 more
We are looking for a leader to design, develop and deliver strategic data-centric insights leveraging next-generation analytics and BI technologies. We want someone who is data- and insight-centric rather than report-centric; someone wishing to make an impact by enabling innovation and growth, with passion for what they do and a vision for the future.

Responsibilities:

  • Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
  • Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
  • Become an expert on data and trends, both internal and external to Kaleidofin.
  • Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
  • Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
  • Automate scheduling and distribution of reports and support auditing and value realization.
  • Partner with enterprise architects to define and ensure that proposed Business Intelligence solutions adhere to an enterprise reference architecture.
  • Design robust data-centric solutions and architecture that incorporate technology and strong BI solutions to scale up and eliminate repetitive tasks.

Requirements:

  • Experience leading development efforts through all phases of SDLC.
  • 5+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
  • Experience with Quicksight, PowerBI, Tableau and Qlik is a plus.
  • Hands on experience in SQL, data management, and scripting (preferably Python).
  • Strong data visualisation design skills, data modeling and inference skills.
  • Hands-on and experience in managing small teams.
  • Financial services experience preferred, but not mandatory.
  • Strong knowledge of architectural principles, tools, frameworks, and best practices.
  • Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
  • Team handling experience preferred for candidates with 5+ years of experience.
  • Notice period of less than 30 days.
Oneture Technologies
Ravi Mevcha
Posted by Ravi Mevcha
Mumbai, Navi Mumbai
2 - 4 yrs
ā‚¹8L - ā‚¹12L / yr
Spark
Big Data
ETL
Data engineering
ADF
+4 more

Job Overview


We are looking for a Data Engineer to join our data team to solve data-driven critical business problems. The hire will be responsible for expanding and optimizing the existing end-to-end architecture, including the data pipeline architecture. The Data Engineer will collaborate with software developers, database architects, data analysts, data scientists and the platform team on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. The right candidate should have hands-on experience developing a hybrid set of data pipelines depending on the business requirements.

Responsibilities

  • Develop, construct, test and maintain existing and new data-driven architectures.
  • Align architecture with business requirements and provide solutions that best fit the business problems.
  • Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and Azure 'big data' technologies.
  • Acquire data from multiple sources across the organization.
  • Use programming languages and tools efficiently to collate the data.
  • Identify ways to improve data reliability, efficiency and quality.
  • Use data to discover tasks that can be automated.
  • Deliver updates to stakeholders based on analytics.
  • Set up practices for data reporting and continuous monitoring.

Required Technical Skills

  • Graduate in Computer Science or a similar quantitative area.
  • 1+ years of relevant work experience as a Data Engineer or in a similar role.
  • Advanced SQL knowledge and data modelling; experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
  • Experience in developing and optimizing ETL pipelines, big data pipelines, and data-driven architectures.
  • Must have strong big-data core knowledge and experience programming with Spark (Python/Scala).
  • Experience with an orchestration tool like Airflow or similar.
  • Experience with Azure Data Factory is good to have.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Good understanding of Git workflow and test-case-driven development; experience using CI/CD is good to have.
  • Some understanding of Delta tables is good to have.

It would be an advantage if the candidate also has experience with the following software/tools:

  • Big data tools: Hadoop, Spark, Hive, etc.
  • Relational SQL and NoSQL databases
  • Cloud data services
  • Object-oriented/object function scripting languages: Python, Scala, etc.
Avhan Technologies Pvt Ltd
Aarti Vohra
Posted by Aarti Vohra
Kolkata
7 - 10 yrs
ā‚¹8L - ā‚¹20L / yr
MDX
DAX
SQL
SQL server
Microsoft Analysis Services
+3 more
Experience: 7 to 8 years
Notice Period: Immediate to 15 days
Job Location: Kolkata
Responsibilities:
• Develop and improve solutions spanning data processing activities from the data lake (stage) to star schemas and reporting views/tables, and finally into SSAS.
• Develop and improve Microsoft Analysis Services cubes (tabular and dimensional).
• Collaborate with other teams within the organization and devise the technical solution as it relates to the business and technical requirements.
• Mentor team members and be proactive in training and coaching them to develop their proficiency in Analysis Services.
• Maintain documentation for all processes implemented.
• Adhere to, and suggest improvements to, coding standards, applying best practices.

Skillsets:
• Proficient in MDX and DAX for querying SSAS
British Telecom
Agency job
via Posterity Consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
ā‚¹8L - ā‚¹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem solving: resolving production issues to fix P1-P4 service issues, problems relating to introducing new technology, and resolving major issues in the platform and/or service.
• Software development concepts: understands and is experienced with the use of a wide range of programming concepts, and is aware of and has applied a range of algorithms.
• Commercial & risk awareness: able to understand and evaluate both obvious and subtle commercial risks, especially in relation to a programme.

Experience you would be expected to have:

• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but learning essential
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, with experience working with data (ETL)
• DevOps: understanding of how to work in a DevOps and agile way / versioning / automation / defect management - mandatory
• Agile methodology: knowledge of Jira
Marktine
Vishal Sharma
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
ā‚¹10L - ā‚¹24L / yr
Data Science
R Programming
Python
SQL
Machine Learning (ML)
+1 more

Responsibilities:

  • Design and develop strong analytics system and predictive models
  • Managing a team of data scientists, machine learning engineers, and big data specialists
  • Identify valuable data sources and automate data collection processes
  • Undertake pre-processing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams

Requirements:

  • Proven experience as a seasoned Data Scientist
  • Good Experience in data mining processes
  • Understanding of machine learning and Knowledge of operations research is a value addition
  • Strong understanding and experience in R, SQL, and Python; Knowledge base with Scala, Java, or C++ is an asset
  • Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
  • Strong math skills (e.g. statistics, algebra)
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • Experience in Natural Language Processing (NLP)
  • Strong competitive coding skills
  • BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
Cervello
Agency job
via StackNexus by Suman Kattella
Hyderabad
5 - 7 yrs
ā‚¹5L - ā‚¹15L / yr
Data engineering
Data modeling
Data Warehouse (DWH)
SQL
Windows Azure
+3 more
Contract job - long-term, for 1 year

Client - Cervello
Job Role - Data Engineer
Location - Remote till COVID (Hyderabad StackNexus office post-COVID)
Experience - 5-7 years
Skills Required - Should have hands-on experience in Azure Data Modelling, Python, SQL and Azure Databricks.
Notice period - Immediate to 15 days
Graphene Services Pte Ltd
Swetha Seshadri
Posted by Swetha Seshadri
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
Python
MySQL
SQL
NOSQL Databases
PowerBI
+2 more

About Graphene

Graphene is a Singapore-headquartered AI company which has been recognized as Singapore's Best Start-Up by Switzerland's Seedstars World, and has also been awarded best AI platform for healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM Deeptech club. We are developing an AI platform which is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods and financial services.

Graphene was founded by corporate leaders from Microsoft and P&G, and works closely with the Singapore Government and universities in creating cutting-edge technology which is gaining traction with many Fortune 500 companies in India, Asia and the USA.

Graphene's culture is grounded in delivering customer delight by recruiting high-potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups which is self-funded, yet profitable and debt-free. We have already created a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.

Job title: Data Analyst

Job Description

The Data Analyst is responsible for storage, data enrichment, data transformation, data gathering based on data requests, and testing and maintaining data pipelines.

Responsibilities and Duties

  • Manage the end-to-end data pipeline from data source to visualization layer
  • Ensure data integrity; ability to pre-empt data errors
  • Organized management and storage of data
  • Provide quality assurance of data, working with quality assurance analysts if necessary
  • Commissioning and decommissioning of data sets
  • Processing confidential data and information according to guidelines
  • Helping develop reports and analysis
  • Troubleshooting the reporting database environment and reports
  • Managing and designing the reporting environment, including data sources, security and metadata
  • Supporting the data warehouse in identifying and revising reporting requirements
  • Supporting initiatives for data integrity and normalization
  • Evaluating changes and updates to source production systems
  • Training end-users on new reports and dashboards
  • Initiating data gathering based on data requirements
  • Analysing the raw data to check whether requirements are satisfied

Qualifications and Skills

  • Technologies required: Python, SQL/NoSQL databases (Cosmos DB)
  • Experience required: 2-5 years, including experience in data analysis using Python
  • Understanding of the software development life cycle
  • Plan, coordinate, develop, test and support data pipelines; document and support reporting dashboards (PowerBI)
  • Automate the steps needed to transform and enrich data
  • Communicate issues, risks, and concerns proactively to management; document processes thoroughly to allow peers to assist with support as needed
  • Excellent verbal and written communication skills
PriceSenz
Karthik Padmanabhan
Posted by Karthik Padmanabhan
Remote only
2 - 15 yrs
ā‚¹1L - ā‚¹20L / yr
ETL
SQL
Informatica PowerCenter

If you are an outstanding ETL Developer with a passion for technology and looking forward to being part of a great development organization, we would love to hear from you. We are offering technology consultancy services to our Fortune 500 customers with a primary focus on digital technologies. Our customers are looking for top-tier talent in the industry and are willing to compensate based on your skill and expertise. The nature of our engagement is contract in most cases. If you are looking for the next big step in your career, we are glad to partner with you.

Below is the job description for your review.

Extensive hands-on experience in designing and developing ETL packages using SSIS

Extensive experience in performance tuning of SSIS packages

In-depth knowledge of data warehousing concepts and ETL systems, and relational databases like SQL Server 2012/2014.
