GCP ARCHITECT / LEAD ENGINEER

at MNC Pune based IT company

Pune
10 - 18 yrs
₹35L - ₹40L / yr
Full time
Skills
Google Cloud Platform (GCP)
Dataflow architecture
Data migration
Data processing
Big Data
Cloud SQL
BigQuery
Pub/Sub
GCS bucket

CANDIDATE WILL BE DEPLOYED IN A FINANCIAL CAPTIVE ORGANIZATION @ PUNE (KHARADI)

 

Below are the job details:

 

Experience: 10 to 18 years

 

Mandatory skills:

  • Data migration
  • Dataflow

The ideal candidate for this role will have the below experience and qualifications:  

  • Experience building a range of services with a cloud service provider (ideally GCP).
  • Hands-on design and development on Google Cloud Platform (GCP) across a wide range of services, including hands-on experience with GCP storage and database technologies.
  • Hands-on experience architecting, designing or implementing solutions on GCP, Kubernetes, and other Google technologies, including security and compliance (e.g. IAM and cloud compliance/auditing/monitoring tools).
  • Desired skills within the GCP stack: Cloud Run, GKE, serverless, Cloud Functions, Vision API, DLP, Dataflow, Data Fusion.
  • Prior experience migrating on-prem applications to cloud environments. Knowledge of, and hands-on experience with, Stackdriver, Pub/Sub, VPCs, subnets, route tables, load balancers and firewalls, both on premises and in GCP.
  • Integrate, configure, deploy and manage centrally provided common cloud services (e.g. IAM, networking, logging, operating systems, containers).
  • Manage SDN in GCP. Knowledge and experience of DevOps technologies around continuous integration and delivery in GCP using Jenkins.
  • Hands-on experience with Terraform, Kubernetes, Docker and Stackdriver.
  • Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Groovy, Scala.
  • Knowledge of or experience with DevOps tooling such as Jenkins, Git, Ansible, Splunk, Jira, Confluence, AppDynamics, Docker, Kubernetes.
  • Act as a consultant and subject-matter expert for internal teams to resolve technical deployment obstacles and improve the product vision. Ensure compliance with centrally defined security standards.
  • Financial services experience is preferred.
  • Ability to learn new technologies and rapidly prototype new concepts.
  • Top-down thinker, excellent communicator and great problem solver.

 

Experience: 10 to 18 years

 

Location: Pune

 

Candidates must have experience with the below:

  • GCP Data Platform
  • Data processing: Dataflow, Dataprep, Data Fusion
  • Data storage: BigQuery, Cloud SQL
  • Pub/Sub, GCS buckets
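The platform stack above (Pub/Sub for ingestion, Dataflow for processing, BigQuery/GCS for storage) boils down to a parse, transform and sink flow. A minimal pure-Python sketch of that shape, with no GCP dependencies; all field names here are illustrative:

```python
import json

def parse_message(raw: bytes) -> dict:
    """Simulate decoding a Pub/Sub message payload (JSON-encoded)."""
    return json.loads(raw.decode("utf-8"))

def transform(record: dict) -> dict:
    """A Dataflow-style element-wise transform: normalize types and fields."""
    return {
        "user_id": str(record["user_id"]),
        "amount_inr": round(float(record["amount"]), 2),
    }

def run_pipeline(messages: list) -> list:
    """Parse -> transform -> collect: the core of a streaming ETL step."""
    return [transform(parse_message(m)) for m in messages]

rows = run_pipeline([b'{"user_id": 42, "amount": "199.999"}'])
print(rows)  # [{'user_id': '42', 'amount_inr': 200.0}]
```

In a real Dataflow job the same two functions would become `ParDo` steps in an Apache Beam pipeline, with Pub/Sub and BigQuery connectors at either end.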

Similar jobs

Senior Data Quality Engineer

at Quicken Inc

Founded 1982  •  Product  •  100-500 employees  •  Profitable
ETL
Informatica
Data Warehouse (DWH)
Python
ETL QA
Big Data
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹30L / yr
  • Graduate+ in Mathematics, Statistics, Computer Science, Economics, Business, Engineering or equivalent work experience.
  • Total experience of 5+ years with at least 2 years in managing data quality for high scale data platforms.
  • Good knowledge of SQL querying.
  • Strong skill in analysing data and uncovering patterns using SQL or Python.
  • Excellent understanding of data warehouse/big data concepts such as data extraction, data transformation and data loading (the ETL process).
  • Strong background in automation and building automated testing frameworks for data ingestion and transformation jobs.
  • Experience in big data technologies a big plus.
  • Experience in machine learning, especially in data quality applications a big plus.
  • Experience in building data quality automation frameworks a big plus.
  • Strong experience working with an Agile development team with rapid iterations. 
  • Very strong verbal and written communication, and presentation skills.
  • Ability to quickly understand business rules.
  • Ability to work well with others in a geographically distributed team.
  • Keen observation skills to analyse data, highly detail oriented.
  • Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
  • Able to identify stakeholders, build relationships, and influence others to get work done.
  • Self-directed and self-motivated individual who takes complete ownership of the product and its outcome.
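Several of the bullets above (managing data quality, building automated testing frameworks for ingestion jobs) come down to assertions over datasets. A hedged pure-Python sketch of two such checks; the column names and sample data are illustrative:

```python
def null_rate(rows, column):
    """Fraction of rows where the column is missing or None."""
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_unique(rows, key):
    """True when the key column contains no duplicate values."""
    values = [r[key] for r in rows]
    return len(values) == len(set(values))

data = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": None},
    {"id": 3, "email": "c@x.com"},
]
assert null_rate(data, "email") == 1 / 3
assert check_unique(data, "id")
```

In practice these assertions would run as SQL against the warehouse or inside a framework such as Great Expectations, but the shape of a data-quality test is the same.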
Job posted by
Shreelakshmi M

Python Developer

at Testbook

Founded 2013  •  Services  •  100-1000 employees  •  Raised funding
Python
pandas
Big Data
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Navi Mumbai
1 - 2 yrs
₹1L - ₹5L / yr

About Us:-

The fastest-rising startup in the EdTech space, focused on engineering and government job exams and with an eye to capture UPSC, PSC and international exams, Testbook is poised to revolutionize the industry. With a registered user base of over 2.2 crore students, more than 450 crore questions solved on the web app, and a knockout Android app, Testbook has raced to the front and is ideally placed to capture bigger markets.

Testbook is the perfect incubator for talent. You come, you learn, you conquer. You train under the best mentors and become an expert in your field in your own right. That being said, the flexibility in the projects you choose, how and when you work on them, what you want to add to them is respected in this startup. You are the sole master of your work.

The IIT pedigree of the co-founders has attracted some of the brightest minds in the country to Testbook. A team that is quickly swelling in ranks, it now stands at 500+ in-house employees and hundreds of remote interns and freelancers. And the number is rocketing weekly. Now is the time to join the force.



In this role you will get to:-

  • Work with state-of-the-art data frameworks and technologies like Dataflow (Apache Beam), Dataproc (Apache Spark & Hadoop), Apache Kafka, Google Pub/Sub, Apache Airflow, and others.
  • You will work cross-functionally with various teams, creating solutions that deal with large volumes of data.
  • You will work with the team to set and maintain standards and development practices.
  • You will be a keen advocate of quality and continuous improvement.
  • You will modernize the current data systems to develop Cloud-enabled Data and Analytics solutions
  • Drive the development of cloud-based data lake, hybrid data warehouses & business intelligence platforms
  • Improve upon the data ingestion models, ETL jobs, and alerts to maintain data integrity and data availability
  • Build Data Pipelines to ingest structured and Unstructured Data.
  • Gain hands-on experience with new data platforms and programming languages
  • Analyze and provide data-supported recommendations to improve product performance and customer acquisition
  • Design, Build and Support resilient production-grade applications and web services
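The alerting responsibility above (maintaining data integrity and availability) often starts with a simple freshness check on each table. A sketch, with the lag threshold chosen arbitrarily for illustration:

```python
from datetime import datetime, timedelta

def freshness_alert(last_load: datetime, now: datetime,
                    max_lag: timedelta = timedelta(hours=1)) -> bool:
    """Return True when the table is stale, i.e. the last successful
    load finished longer ago than the allowed lag."""
    return now - last_load > max_lag

now = datetime(2022, 1, 1, 12, 0)
assert not freshness_alert(datetime(2022, 1, 1, 11, 30), now)  # fresh
assert freshness_alert(datetime(2022, 1, 1, 10, 0), now)       # stale
```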

 

Who you are:-

  • 1+ years of work experience in Software Engineering and development.
  • Very strong understanding of Python & the pandas library. Good understanding of Scala, R, and other related languages.
  • Experience with data transformation & data analytics in both batch & streaming mode using cloud-native technologies.
  • Strong experience with the big data technologies like Hadoop, Spark, BigQuery, DataProc, Dataflow
  • Strong analytical and communication skills.
  • Experience working with large, disconnected, and/or unstructured datasets.
  • Experience building and optimizing data pipelines, architectures, and data sets using cloud-native technologies.
  • Hands-on experience with any cloud tech like GCP/AWS is a plus.
Job posted by
Kush Semwal

Data Engineer

at Searce Inc

Founded 2004  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Apache Hive
Architecture
Data engineering
Java
Python
Scala
ETL
Mumbai
5 - 12 yrs
₹10L - ₹20L / yr
JD of Data Engineer
As a Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts and data scientists to understand the business domain and engage with fellow engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and the flexibility of your solution, which scales to respond to broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.

What You’ll Do
● Understand the business problem and translate it into data services and engineering outcomes
● Explore new technologies and learn new techniques to solve business problems creatively
● Think big! Drive the strategy for better data quality for the customers
● Collaborate with many teams, engineering and business, to build better data products
What We’re Looking For
● 1-3 years of experience, with:
○ Hands-on experience in any one programming language (Python, Java, Scala)
○ Understanding of SQL (a must)
○ Big data (Hadoop, Hive, Yarn, Sqoop)
○ MPP platforms (Spark, Pig, Presto)
○ Data-pipeline & scheduler tools (Oozie, Airflow, Nifi)
○ Streaming engines (Kafka, Storm, Spark Streaming)
○ Any relational database or DW experience
○ Any ETL tool experience
● Hands-on experience in pipeline design, ETL and application development
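The scheduler tools listed above (Oozie, Airflow, Nifi) all execute tasks as a dependency DAG. Python's standard-library `graphlib` can sketch the core idea; the task names here are illustrative:

```python
from graphlib import TopologicalSorter

# A toy pipeline DAG: each task maps to its upstream dependencies,
# the way an Airflow or Oozie workflow declares them.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "load_dw": {"clean"},
    "report": {"load_dw", "clean"},
}

# A scheduler runs tasks only after all their predecessors succeed.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'clean', 'load_dw', 'report']
```

Real schedulers add retries, backfills and parallel execution of independent tasks on top of exactly this ordering.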
Job posted by
Reena Bandekar

Machine Learning Engineer

at Carsome

Founded 2015  •  Product  •  1000-5000 employees  •  Raised funding
Python
Amazon Web Services (AWS)
Django
Flask
TensorFlow
Big Data
athena
Remote, Kuala Lumpur
2 - 5 yrs
₹20L - ₹30L / yr
Carsome is a growing startup that is utilising data to improve the experience of second-hand car shoppers. This involves developing, deploying & maintaining machine learning models that are used to improve our customers' experience. We are looking for candidates who are aware of the machine learning project lifecycle and can help manage ML deployments.

Responsibilities:

  • Write and maintain production-level code in Python for deploying machine learning models
  • Create and maintain deployment pipelines through CI/CD tools (preferably GitLab CI)
  • Implement alerts and monitoring for prediction accuracy and data drift detection
  • Implement automated pipelines for training and replacing models
  • Work closely with the data science team to deploy new models to production

Required Qualifications:

  • Degree in Computer Science, Data Science, IT or a related discipline
  • 2+ years of experience in software engineering or data engineering
  • Programming experience in Python
  • Experience in data profiling, ETL development, testing and implementation
  • Experience in deploying machine learning models

Good to have:

  • Experience with AWS resources for ML and data engineering (SageMaker, Glue, Athena, Redshift, S3)
  • Experience in deploying TensorFlow models
  • Experience in deploying and managing MLflow
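The drift-detection responsibility above can be illustrated with a toy check: compare the mean of a live feature against its training baseline. Real deployments would use a proper statistical test (e.g. KS or PSI), so treat this as a sketch; the threshold and sample values are made up:

```python
from statistics import mean, stdev

def drift_score(baseline, current):
    """Standardized shift in a numeric feature's mean:
    |mean(current) - mean(baseline)| / stdev(baseline)."""
    return abs(mean(current) - mean(baseline)) / stdev(baseline)

def has_drifted(baseline, current, threshold=2.0):
    """Flag drift when the mean moves more than `threshold`
    baseline standard deviations."""
    return drift_score(baseline, current) > threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]   # feature values at training time
assert not has_drifted(baseline, [10.2, 9.8, 10.1])  # stable
assert has_drifted(baseline, [25.0, 26.0, 24.0])     # drifted
```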
Job posted by
Piyush Palkar
Big Data
Hadoop
Data engineering
data engineer
Google Cloud Platform (GCP)
Data Warehouse (DWH)
ETL
Systems Development Life Cycle (SDLC)
Java
Scala
Python
SQL
Scripting
Teradata
HiveQL
Pig
Spark
Apache Kafka
Windows Azure
Remote, Bengaluru (Bangalore)
4 - 8 yrs
₹4L - ₹16L / yr
Job Description
Job Title: Data Engineer
Tech Job Family: DACI
• Bachelor's Degree in Engineering, Computer Science, CIS, or related field (or equivalent work experience in a related field)
• 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
• 1 year of experience working on project(s) involving the implementation of solutions applying development life cycles (SDLC)
Preferred Qualifications:
• Master's Degree in Computer Science, CIS, or related field
• 2 years of IT experience developing and implementing business systems within an organization
• 4 years of experience working with defect or incident tracking software
• 4 years of experience with technical documentation in a software development environment
• 2 years of experience working with an IT Infrastructure Library (ITIL) framework
• 2 years of experience leading teams, with or without direct reports
• Experience with application and integration middleware
• Experience with database technologies
Data Engineering
• 2 years of experience in Hadoop or any Cloud Bigdata components (specific to the Data Engineering role)
• Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka or equivalent cloud big data components (specific to the Data Engineering role)
BI Engineering
• Expertise in MicroStrategy/Power BI/SQL, Scripting, Teradata or equivalent RDBMS, Hadoop (OLAP on Hadoop), Dashboard development, Mobile development (specific to the BI Engineering role)
Platform Engineering
• 2 years of experience in Hadoop, NO-SQL, RDBMS or any Cloud Bigdata components, Teradata, MicroStrategy (specific to the Platform Engineering role)
• Expertise in Python, SQL, Scripting, Teradata, Hadoop utilities like Sqoop, Hive, Pig, Map Reduce, Spark, Ambari, Ranger, Kafka or equivalent Cloud Bigdata components (specific to the Platform Engineering role)
Lowe’s is an equal opportunity employer and administers all personnel practices without regard to race, color, religion, sex, age, national origin, disability, sexual orientation, gender identity or expression, marital status, veteran status, genetics or any other category protected under applicable law.
Job posted by
Sanjay Biswakarma

AI Engineer

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Artificial Intelligence (AI)
Amazon Web Services (AWS)
Windows Azure
Hadoop
Scala
Python
Google Cloud Platform (GCP)
postgres
Gurugram, Hyderabad, Bengaluru (Bangalore)
1 - 3 yrs
₹3L - ₹12L / yr


·       Build data products and processes alongside the core engineering and technology team.

·       Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models

·       Integrate data from a variety of sources, assuring that they adhere to data quality and accessibility standards

·       Modify and improve data engineering processes to handle ever larger, more complex, and more types of data sources and pipelines

·       Use Hadoop architecture and HDFS commands to design and optimize data queries at scale

·       Evaluate and experiment with novel data engineering tools, and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases.
Job posted by
Alex P

Data Engineer

at SpringML India Private Limited

Founded 2015  •  Services  •  100-1000 employees  •  Profitable
Big Data
Hadoop
Apache Spark
Spark
Data Structures
Data engineering
SQL
kafka
Hyderabad
4 - 11 yrs
₹8L - ₹20L / yr

SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. In this role, your primary responsibility will be to design and build data pipelines. You will be focused on helping client projects with data integration, data prep and implementing machine learning on datasets. You will work on some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.

RESPONSIBILITIES:

 

  • Ability to work as a member of a team assigned to design and implement data integration solutions.
  • Build Data pipelines using standard frameworks in Hadoop, Apache Beam and other open-source solutions.
  • Learn quickly – ability to understand and rapidly comprehend new areas – functional and technical – and apply detailed and critical thinking to customer solutions.
  • Propose design solutions and recommend best practices for large scale data analysis

 

SKILLS:

 

  • B.Tech degree in computer science, mathematics or other relevant fields.
  • 4+ years of experience in ETL, data warehousing, visualization and building data pipelines.
  • Strong programming skills: experience and expertise in one of the following: Java, Python, Scala, C.
  • Proficiency in big data/distributed computing frameworks such as Apache Spark and Kafka.
  • Experience with Agile implementation methodologies.
Job posted by
Kayal Vizhi

Scala Spark Engineer

at Skanda Projects

Founded 2010  •  Services  •  100-1000 employees  •  Profitable
Scala
Apache Spark
Big Data
Bengaluru (Bangalore)
2 - 8 yrs
₹6L - ₹25L / yr
Preferred skills:

  • Minimum 3 years of experience in software development
  • Strong experience in Spark/Scala development
  • Strong experience with AWS cloud platform services
  • Good knowledge of and exposure to Amazon EMR and EC2
  • Good with databases like DynamoDB and Snowflake
Job posted by
Nagraj Kumar

Data Engineer

at Product / Internet / Media Companies

Agency job
via archelons
Big Data
Hadoop
Data processing
Python
Data engineering
HDFS
Spark
Data lake
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹30L / yr

REQUIREMENT:

  •  Previous experience of working in large scale data engineering
  •  4+ years of experience working in data engineering and/or backend technologies with cloud experience (any) is mandatory.
  •  Previous experience of architecting and designing backend for large scale data processing.
  •  Familiarity and experience working with different technologies related to data engineering – different database technologies, Hadoop, Spark, Storm, Hive etc.
  •  Hands-on and have the ability to contribute a key portion of data engineering backend.
  •  Self-inspired and motivated to drive for exceptional results.
  •  Familiarity and experience working with different stages of data engineering – data acquisition, data refining, large scale data processing, efficient data storage for business analysis.
  •  Familiarity and experience working with different DB technologies and how to scale them.

RESPONSIBILITY:

  •  End to end responsibility to come up with data engineering architecture, design, development and then implementation of it.
  •  Build data engineering workflow for large scale data processing.
  •  Discover opportunities in data acquisition.
  •  Bring industry best practices for data engineering workflow.
  •  Develop data set processes for data modelling, mining and production.
  •  Take additional tech responsibilities for driving an initiative to completion
  •  Recommend ways to improve data reliability, efficiency and quality
  •  Goes out of their way to reduce complexity.
  •  Humble and outgoing - engineering cheerleaders.
Job posted by
Meenu Singh

Data Engineer

at Liquintel

Founded 2018  •  Services  •  20-100 employees  •  Bootstrapped
Big Data
Apache Spark
MongoDB
Relational Database (RDBMS)
Apache Hive
SQL
Bengaluru (Bangalore)
0 - 1 yrs
₹2L - ₹3L / yr

We are looking for BE/BTech graduates (2018/2019 batch) who want to build their career as Data Engineers covering technologies like Hadoop, NoSQL, RDBMS, Spark, Kafka, Hive, ETL, MDM & Data Quality. You should be willing to learn, explore, experiment and develop POCs/solutions using these technologies with guidance and support from highly experienced industry leaders. You should be passionate about your work and willing to go the extra mile to achieve results.

We are looking for candidates who believe in commitment and in building strong relationships. We need people who are passionate about solving problems through software and are flexible.

Required Experience, Skills and Qualifications

Passionate to learn and explore new technologies

Any RDBMS experience (SQL Server/Oracle/MySQL)

Any ETL tool experience (Informatica/Talend/Kettle/SSIS)

Understanding of Big Data technologies

Good Communication Skills

Excellent Mathematical / Logical / Reasoning Skills

Job posted by
Kamal Prithiani