11+ Google Cloud Platform (GCP) Jobs in Ahmedabad | Google Cloud Platform (GCP) Job openings in Ahmedabad
Apply to 11+ Google Cloud Platform (GCP) Jobs in Ahmedabad on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.
We are looking for a seasoned DevOps Engineer with a strong background in solution architecture, ideally from the BFSI (Banking, Financial Services, and Insurance) domain. This role is crucial for implementing scalable, secure infrastructure and CI/CD practices tailored to high-compliance, high-availability environments. The ideal candidate will have deep expertise in Docker, Kubernetes, cloud platforms, and solution architecture; knowledge of ML/AI and database management is a plus.
Key Responsibilities:
● Infrastructure & Solution Architecture: Design secure, compliant, and high-performance cloud infrastructures (AWS, Azure, or GCP) optimized for BFSI-specific applications.
● Containerization & Orchestration: Lead Docker and Kubernetes initiatives, deploying applications with a focus on security, compliance, and resilience.
● CI/CD Pipelines: Build and maintain CI/CD pipelines suited to BFSI workflows, incorporating automated testing, security checks, and rollback mechanisms.
● Cloud Infrastructure & Database Management: Manage cloud resources and automate provisioning using Terraform, ensuring security standards. Optimize relational and NoSQL databases for BFSI application needs.
● Monitoring & Incident Response: Implement monitoring and alerting (e.g., Prometheus, Grafana) for rapid incident response, ensuring uptime and reliability.
● Collaboration: Work closely with compliance, security, and development teams, aligning infrastructure with BFSI standards and regulations.
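To make the monitoring and alerting responsibility concrete, here is a minimal, library-free Python sketch of the core idea behind threshold alerting in tools like Prometheus: an alert fires only when a metric breaches its threshold for N consecutive samples. The metric name and numbers are illustrative; real alerting would be expressed in PromQL, not Python.

```python
# Toy stand-in for Prometheus-style "for"-duration alerting:
# fire only when the last `for_samples` samples all exceed the threshold,
# which avoids paging on a single transient spike.
def firing(samples: list, threshold: float, for_samples: int) -> bool:
    """Alert fires when the last `for_samples` samples all exceed threshold."""
    if len(samples) < for_samples:
        return False
    return all(s > threshold for s in samples[-for_samples:])

cpu = [0.42, 0.91, 0.95, 0.97]  # hypothetical CPU-utilisation samples
print(firing(cpu, 0.90, 3))  # True: the last 3 samples are all above 90%
```

The "sustained breach" requirement is the same design choice Prometheus expresses with the `for:` clause in an alerting rule.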
Qualifications:
● Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field.
● Experience: 5+ years of experience in DevOps with cloud infrastructure and solution architecture expertise, ideally in ML/AI environments.
● Technical Skills:
○ Cloud Platforms: Proficient in AWS, Azure, or GCP; certifications (e.g., AWS Solutions Architect, Azure Solutions Architect) are a plus.
○ Containerization & Orchestration: Expertise with Docker and Kubernetes, including experience deploying and managing clusters at scale.
○ CI/CD Pipelines: Hands-on experience with CI/CD tools like Jenkins, GitLab CI, or GitHub Actions; automation and integration for ML/AI workflows preferred.
○ Infrastructure as Code: Strong knowledge of Terraform and/or CloudFormation for infrastructure provisioning.
○ Database Management: Proficiency in relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, DynamoDB), with a focus on optimization and scalability.
○ ML/AI Infrastructure: Experience supporting ML/AI pipelines, model serving, and data processing within cloud or hybrid environments.
○ Monitoring and Logging: Proficient in monitoring tools like Prometheus and Grafana, and log management solutions like ELK Stack or Splunk.
○ Scripting and Automation: Strong skills in Python, Bash, or PowerShell for scripting and automating processes.
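The pipeline and automation skills above come together in rollback logic. As a hedged illustration only, here is a minimal Python sketch of a deploy step with automatic rollback of the kind a BFSI-grade pipeline script might implement; the `Release` type and health flag are hypothetical stand-ins for a real post-deploy health probe.

```python
# Minimal sketch of deploy-with-rollback. The health flag stands in for a
# real post-deploy probe (e.g., hitting a /healthz endpoint); nothing here
# is a real deployment API.
from dataclasses import dataclass

@dataclass
class Release:
    version: str
    healthy: bool  # stand-in for an actual health-check result

def deploy(release: Release, history: list) -> bool:
    """Deploy a release; roll back to the last good version on failure."""
    history.append(release)
    if release.healthy:
        return True
    # Health check failed: drop the bad release so the previous one stays live.
    history.pop()
    return False

history = [Release("v1.0", True)]
ok = deploy(Release("v1.1", False), history)
print(ok, history[-1].version)  # rollback keeps v1.0 live
```

In a real pipeline the same decision point sits after the smoke tests, with the rollback re-applying the previous image tag or Terraform state.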
About Job
We are seeking an experienced Data Engineer to join our data team. As a Senior Data Engineer, you will work on various data engineering tasks, including designing and optimizing data pipelines, data modelling, and troubleshooting data issues. You will collaborate with other data team members, stakeholders, and data scientists to provide data-driven insights and solutions to the organization. A minimum of 3 years' experience is required.
Responsibilities:
Design and optimize data pipelines for various data sources
Design and implement efficient data storage and retrieval mechanisms
Develop data modelling solutions and data validation mechanisms
Troubleshoot data-related issues and recommend process improvements
Collaborate with data scientists and stakeholders to provide data-driven insights and solutions
Coach and mentor junior data engineers in the team
Skills Required:
3+ years of experience in data engineering or related field
Strong experience in designing and optimizing data pipelines, and data modelling
Strong proficiency in Python
Experience with big data technologies like Hadoop, Spark, and Hive
Experience with cloud data services such as AWS, Azure, and GCP
Strong experience with SQL and NoSQL databases and data warehousing
Knowledge of distributed computing and storage systems
Knowledge of DevOps, Power Automate, and Microsoft Fabric will be an added advantage
Strong analytical and problem-solving skills
Excellent communication and collaboration skills
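The "data validation mechanisms" responsibility above can be sketched in a few lines of stdlib Python. This is an illustrative validation stage only; the field names (`id`, `amount`) are made up, not from any specific system.

```python
# Hedged sketch of a pipeline validation stage: each record is checked
# against simple rules before being passed downstream. Field names are
# illustrative assumptions.
def validate(record: dict) -> list:
    """Return a list of validation errors for one record (empty = valid)."""
    errors = []
    if not record.get("id"):
        errors.append("missing id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        errors.append("amount must be a non-negative number")
    return errors

rows = [{"id": "a1", "amount": 10.5}, {"id": "", "amount": -3}]
valid = [r for r in rows if not validate(r)]
print(len(valid))  # 1
```

In production the same pattern is usually expressed with a schema library (e.g., Pydantic or Great Expectations) rather than hand-written checks, but the shape of the stage is the same.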
Qualifications
Bachelor's degree in Computer Science, Data Science, or a related field (Master's degree preferred)
Company Name: Petpooja!
Location: Ahmedabad
Designation: DevOps Engineer
Experience: Between 2 to 7 Years
Candidates from Ahmedabad will be preferred
Job Responsibilities:
- Plan, implement, and maintain the software development infrastructure.
- Introduce and oversee software development automation across cloud providers like AWS and Azure
- Help develop, manage, and monitor continuous integration and delivery systems
- Collaborate with software developers, QA specialists, and other team members to ensure the timely and successful delivery of new software releases
- Contribute to software design and development, including code review and feedback
- Assist with troubleshooting and problem-solving when issues arise
- Keep up with the latest industry trends and best practices while ensuring the company meets configuration requirements
- Participate in team improvement initiatives
- Help create and maintain internal documentation using Git or other similar applications
- Provide on-call support as needed
Qualification Required:
1. Experience handling various services on the AWS cloud.
2. Previous experience as a Site Reliability Engineer would be an advantage.
3. Well versed in the command line and hands-on with Linux/Ubuntu administration and other aspects of supporting a software development team.
4. 2 to 7 years of experience managing AWS services such as Auto Scaling, Route 53, and various other internal networks.
5. An AWS certification is recommended.
Position Summary
The Cloud Engineer helps design solutions for, enable, migrate, and onboard clients onto a secure cloud platform, offloading the heavy lifting so that clients can focus on their own business value creation.
Job Description
- Assessing existing customer systems and/or cloud environment to determine the best migration approach and supporting tools used
- Build a secure and compliant cloud environment, with a proven enterprise operating model, on-going cost optimisation, and day-to-day infrastructure management
- Provide and implement cloud solutions that reduce operational overhead and risk; automate common activities such as change requests, monitoring, patch management, security, and backup services; and provide full-lifecycle services to provision, run, and support client infrastructure
- Collaborate with internal service teams to meet the clients’ needs for their infrastructure and application deployments
- Troubleshoot complex infrastructure deployments, recreate customer issues, and build proof-of-concept environments that abide by cloud best practices and well-architected frameworks
- Apply advanced troubleshooting techniques to provide unique solutions to our customers’ individual needs
- Work on critical, highly complex customer problems that will span across multiple cloud platforms and services
- Identify and drive improvements on process and technical related issues. Act as an escalation point of contact for the clients
- Drive client meetings and communication during reviews
Requirement:
- Degree in computer science or a similar field.
- At least 2 years of experience in the field of cloud computing.
- Experience with CI/CD systems.
- Strong in Cloud services
- Exposure to AWS/GCP and other cloud-based infrastructure platforms
- Experience with AWS configuration and management: EC2, S3, EBS, ELB, IAM, VPC, RDS, CloudFront, etc.
- Exposure to architecting, designing, developing, and implementing cloud solutions on AWS or other cloud platforms such as Azure or Google Cloud
- Proficient in the use and administration of all versions of MS Windows Server
- Experience with Linux and Windows system administration and web server configuration and monitoring
- Solid programming skills in Python, Java, Perl
- Good understanding of software design principles and best practices
- Good knowledge of REST APIs
- Should have hands-on experience in any deployment orchestration tool (Jenkins, Urbancode, Bamboo, etc.)
- Experience with Docker, Kubernetes, and Helm charts
- Hands on experience in Ansible and Git repositories
- Knowledge in Maven / Gradle
- Azure, AWS, and GCP certifications are preferred.
- Troubleshooting and analytical skills.
- Good communication and collaboration skills.
What you’ll be doing at Novo:
● Systems thinking
● Creating best practices, templates, and automation for build, test, integration and deployment pipelines on multiple projects
● Designing and developing tools for easily creating and managing dev/test infrastructure and services in the AWS cloud
● Providing expertise and guidance on CI/CD, Github, and other development tools via containerization
● Monitoring and supporting systems in Dev, UAT and production environments
● Building mock services and production-like data sources for use in development and testing
● Managing Github integrations, feature flag systems, code coverage tools, and other development & monitoring tools
● Participating in support rotations to help troubleshoot infrastructure issues
Stacks you eat every day (for DevOps Engineer)
● Creating and working with containers, as well as using container orchestration tools (Kubernetes / Docker)
● AWS: S3, EKS, EC2, RDS, Route53, VPC etc.
● Fair understanding of Linux
● Good knowledge of CI/CD: Jenkins / CircleCI / Github Actions
● Basic level of monitoring
● Support for deployment across various web servers and Linux environments, both backend and frontend.
at Intuitive Technology Partners
Intuitive is the fastest growing top-tier Cloud Solutions and Services company supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Excellent communication skills
Open to working in the EST time zone (6 pm to 3 am)
Technical Skills:
· In depth understanding of DevSecOps process and governance
· Understanding of various branching strategies
· Hands on experience working with various testing and scanning tools (ex. SonarQube, Snyk, Blackduck, etc.)
· Expertise working with one or more CICD platforms (ex. Azure DevOps, GitLab, GitHub Actions, etc)
· Expertise within one CSP and experience/working knowledge of a second CSP (Azure, AWS, GCP)
· Proficient with Terraform
· Hands on experience working with Kubernetes
· Proficient working with GIT version control
· Hands on experience working with monitoring/observability tool(s) (Splunk, Datadog, Dynatrace, etc.)
· Hands on experience working with Configuration Management platform(s) (Chef, SaltStack, Ansible, etc.)
· Hands on experience with GitOps
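To ground the GitOps line item, here is a toy Python illustration of the core GitOps loop: desired state lives in version control, and a reconciler diffs it against the actual state to decide what to create, update, or delete. The service names and versions are invented; real tools such as ArgoCD or Flux do this against Kubernetes manifests.

```python
# Toy GitOps reconciler: compute the actions that drive actual state
# toward the desired state declared in Git. All names are illustrative.
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to make `actual` match `desired`."""
    create = {k: v for k, v in desired.items() if k not in actual}
    update = {k: v for k, v in desired.items()
              if k in actual and actual[k] != v}
    delete = [k for k in actual if k not in desired]
    return {"create": create, "update": update, "delete": delete}

desired = {"web": "v2", "worker": "v1"}   # declared in Git
actual = {"web": "v1", "cron": "v1"}      # observed in the cluster
print(reconcile(desired, actual))
# create worker, update web to v2, delete cron
```

The key property, shared with real GitOps controllers, is that the loop is idempotent: running it again after the actions are applied yields no further changes.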
and Artificial Intelligence (AI). It is headquartered in Ahmedabad, India, with a branch office in Pune.
We have worked on / are working on Software Engineering projects that touch upon making
full-fledged products. Starting from UI/UX aspects, responsive and blazing fast front-ends,
platform-specific applications (Android, iOS, web applications, desktop applications), very
large scale infrastructure, cutting edge machine learning, and deep learning (AI in general).
The projects/products have wide-ranging applications in finance, healthcare, e-commerce,
legal, HR/recruiting, pharmaceutical, leisure sports and computer gaming domains. All of this
is using core concepts of computer science such as distributed systems, operating systems,
computer networks, process parallelism, cloud computing, embedded systems and the
Internet of Things.
PRIMARY RESPONSIBILITIES:
● Own the design, development, evaluation and deployment of highly-scalable software products involving front-end and back-end development.
● Maintain quality, responsiveness and stability of the system.
● Design and develop memory-efficient, compute-optimized solutions for the software.
● Design and administer automated testing tools and continuous integration tools.
● Produce comprehensive and usable software documentation.
● Evaluate and make decisions on the use of new tools and technologies.
● Mentor other development engineers.
KNOWLEDGE AND SKILL REQUIREMENTS:
● Mastery of one or more back-end programming languages (Python, Java, Scala, C++ etc.)
● Proficiency in front-end programming paradigms and libraries (for example: HTML, CSS and advanced JavaScript libraries and frameworks such as Angular, Knockout, React).
● Knowledge of automated and continuous integration testing tools (Jenkins, Team City, Circle CI etc.)
● Proven experience of platform-level development for large-scale systems.
● Deep understanding of various database systems (MySQL, Mongo, Cassandra).
● Ability to plan and design software system architecture.
● Development experience for mobile, browsers and desktop systems is desired.
● Knowledge and experience of using distributed systems (Hadoop, Spark) and cloud environments (Amazon EC2, Google Compute Engine, Microsoft Azure).
● Experience working in agile development. Knowledge and prior experience of tools like Jira is desired.
● Experience with version control systems (Git, Subversion or Mercurial).
Website: www.tatvic.com
Job Description
Responsibilities of Technical Analyst:
Responsibilities w.r.t. Customer:
- Requirement gathering by asking the customer questions and getting to the main objective behind the query.
- Analyze, understand, and, when necessary, document the requirement.
- Design or develop PoCs as per the business requirements / customer development team requirements.
- Complete the implementation of tasks as per the schedule.
- Make extensive use of analytics data to generate insights and recommendations.
- Work with the client's technical team to fix tracking issues.
Team Responsibilities
- Participate in the recruitment process based on your seniority. This includes interviews and creating tests as required, to recruit people who are compatible with our culture and skilled enough to accomplish the job.
- Share knowledge and coach colleagues to make the team more effective.
- Share and create content for training and publishing. This includes blogs and webinars.
- Identify potential team members who are worthy of a band change and prepare them with the required guidance.
- Identify repetitive tasks and either delegate them to other team members or automate the process to reduce TAT and improve productivity.
Technical Responsibilities
- Understanding event schemas and scripting.
- Designing solutions using GTM configurations for various environments.
- Creating database schemas as per business requirements.
- Creating technical specifications and test plans.
- Development within any frameworks/platforms used by the customer's website platform for implementing tracking components.
- Understanding and implementing the various scripts and code that can work with different analytics tools.
- Understanding the connection of GCP with GA 360 and how it works.
- Integrating existing software products and getting various platforms to work together.
- Configuring alerts on top of tracking to monitor and correct issues.
- Building reusable, optimized, scalable, secure code and libraries for future use.
- Initiating new research tasks that analysts at Tatvic or the client can leverage to improve business KPIs.
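The event-schema work above can be illustrated with a short, hedged Python sketch: validate an analytics event against a simple schema before it is sent to a tracking tool. The schema and event fields here are invented for illustration and are not GTM or GA 360 APIs.

```python
# Hedged sketch of analytics event-schema validation. The SCHEMA mapping
# (field name -> expected type) and the example events are assumptions,
# not a real tracking specification.
SCHEMA = {"event": str, "value": (int, float), "page": str}

def conforms(event: dict, schema: dict) -> bool:
    """Check that every schema field is present with the expected type."""
    return all(isinstance(event.get(k), t) for k, t in schema.items())

good = {"event": "add_to_cart", "value": 499.0, "page": "/product/42"}
bad = {"event": "add_to_cart", "value": "499"}  # wrong type, missing page
print(conforms(good, SCHEMA), conforms(bad, SCHEMA))  # True False
```

Catching malformed events at the edge like this is what keeps downstream reports in GA 360 or BigQuery trustworthy.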
● Improve CI/CD tooling using GitLab.
● Implement and own the CI pipeline; manage CD tooling.
● Implement and maintain monitoring, alerting, and observability stacks.
● Build and maintain highly available production systems.
● Lead and guide the team in identifying and implementing new technologies.
Skills
● Configuration Management experience such as Kubernetes, Ansible or similar.
● Managing production infrastructure with Terraform, CloudFormation, etc.
● Strong Linux, system administration background.
● Ability to present and communicate the architecture in a visual form.
● Strong knowledge of AWS, Azure, GCP.
Skills We Require: DevOps, AWS Admin, Terraform, Infrastructure as Code
SUMMARY:-
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Have good hands-on experience with DevOps, AWS administration, Terraform, and Infrastructure as Code
Have knowledge of EC2, Lambda, S3, ELB, VPC, IAM, CloudWatch, CentOS, and server hardening
Ability to understand business requirements and translate them into technical requirements
A knack for benchmarking and optimization
Intuitive is looking for highly talented hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive’s global world class technology teams, working with some of the best and brightest engineers while also developing your skills and furthering your career working with some of the largest customers.
Job Description :
- Extensive experience with Kubernetes (EKS/GKE) and its ecosystem tooling, e.g., Prometheus, ArgoCD, Grafana, Istio, etc.
- Extensive AWS/GCP Core Infrastructure skills
- Infrastructure/IaC automation and integration - Terraform
- Kubernetes resources engineering and management
- Experience with DevOps tools, CI/CD pipelines, and release management
- Good at creating documentation (runbooks, design documents, implementation plans)
Linux Experience :
- Namespace
- Virtualization
- Containers
Networking Experience
- Virtual networking
- Overlay networks
- VXLANs, GRE
Kubernetes Experience :
Should have experience bringing up a Kubernetes cluster manually, without using the kubeadm tool.
Observability
Experience in observability is a plus
Cloud automation :
Familiarity with cloud platforms, specifically AWS, and DevOps tools such as Jenkins, Terraform, etc.