
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

KrtrimaIQ Cognitive Solutions
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹12L / yr
LookML
Google Cloud Platform (GCP)
Looker

Job Title: LookML Developer

Company: KrtrimaIQ Cognitive Solutions

Job Type: Full-time

Location: Bengaluru

Experience: 5+ years


About the Company:

KrtrimaIQ Cognitive Solutions is a leading provider of innovative data analytics and cognitive computing solutions. Our mission is to empower businesses with actionable insights through advanced technologies. We foster a collaborative and inclusive work environment where creativity and innovation thrive. Our team is composed of passionate professionals dedicated to pushing the boundaries of technology and delivering exceptional results for our clients.


Job Overview:

We are seeking a skilled LookML Developer to join our dynamic team in Bengaluru. In this role, you will be responsible for designing, developing, and maintaining LookML models and dashboards that provide valuable insights to our clients. You will work closely with data analysts and stakeholders to understand their requirements and translate them into effective data models. This is an exciting opportunity to leverage your expertise in LookML and contribute to impactful data-driven decision-making processes.


Responsibilities:

Key responsibilities include:

- Designing and developing LookML models to support business intelligence reporting and analytics.

- Collaborating with data analysts to gather requirements and translate them into technical specifications.

- Creating and optimizing Looker dashboards and visualizations for end-users.

- Ensuring data accuracy and integrity by implementing best practices in data modeling.

- Troubleshooting and resolving issues related to LookML and Looker functionalities.

- Staying updated with the latest trends and advancements in Looker and data analytics.


Skills Required:

Required skills and qualifications include:

- Proficiency in LookML and experience with Looker.

- Strong understanding of data modeling concepts and best practices.

- Experience with Google Cloud Platform (GCP) and its data services.

- Ability to work collaboratively in a team environment and communicate effectively with stakeholders.

- Strong analytical and problem-solving skills.


Umanist India
Chennai
7 - 8 yrs
₹21L - ₹22L / yr
Google Cloud Platform (GCP)
Machine Learning (ML)
Python

Job Title: Software Engineer Consultant/Expert 34192 

Location: Chennai

Work Type: Onsite

Notice Period: Immediate joiners only, or candidates serving notice periods of up to 30 days.

 

Position Description:

  • Candidate with strong Python experience.
  • Full-stack development and end-to-end deployment in GCP, including MLOps; hands-on experience across front end, back end, and MLOps.
  • This is a Tech Anchor role.

Experience Required:

  • 7+ years
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹25L / yr
Machine Learning (ML)
Production management
Large Language Models (LLM)
AIML
Google Cloud Platform (GCP)

Job description

We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role — you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure.

This role involves building, training, deploying, and maintaining ML models at scale, integrating them with business applications. Basic model prototyping won't qualify — we’re seeking hands-on expertise in building scalable machine learning pipelines.


Key Responsibilities


Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation.

Implement GenAI use cases using LLMs.

Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets.

Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems.

Work on impactful AI/ML use cases in Search & Personalization, SEO Optimization, Marketing Analytics, Supply Chain Forecasting, and Customer Experience.

Implement real-time model deployment and monitoring using tools like Kubeflow, Vertex AI, Airflow, PySpark, etc.

Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production-readiness.

Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning.

Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning.


Technical Skills Required

Core ML & AI Skills (Must-Have):

Strong hands-on ML engineering (70% of the role) — supervised/unsupervised learning, clustering, regression, optimization.

Experience with real-world model deployment and scaling, not just notebooks or prototypes.

Good understanding of ML Ops, model lifecycle, and pipeline orchestration.

Strong command of Python 3, Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc.

SQL proficiency and experience querying large datasets.

Deep understanding of linear algebra, probability/statistics, Big-O, and scientific experimentation.

Cloud experience in GCP (preferred), AWS, or Azure.
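As a concrete illustration of the regression and optimization fundamentals listed above, here is a minimal ordinary-least-squares fit in pure standard-library Python. This is a sketch only; in practice the role's stack would be Scikit-learn, NumPy, or TensorFlow rather than hand-rolled math.

```python
# Illustrative sketch only: simple linear regression by ordinary least
# squares, the closed-form special case of the regression skills above.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(slope, intercept, x):
    return slope * x + intercept

m, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])   # data on the line y = 2x + 1
print(m, b)                                    # 2.0 1.0
```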

Cloud & Big Data Stack


Hands-on experience with:

GCP tools – Vertex AI, Kubeflow, BigQuery, GCS

Or equivalent AWS/Azure ML stacks

Familiar with Airflow, PySpark, or other pipeline orchestration tools.

Experience reading/writing data from/to cloud services.



Qualifications


Bachelor's/Master’s/Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or related quantitative field.

4+ years of experience in data analytics and machine learning roles.

2+ years of experience in Python or similar programming languages (Java, Scala, Rust).

Must have experience deploying and scaling ML models in production.


Nice to Have


Experience with LLM fine-tuning, Graph Algorithms, or custom deep learning architectures.

Background in taking academic research through to production applications.

Building APIs and monitoring production ML models.

Familiarity with advanced math – Graph Theory, PDEs, Optimization Theory.


Communication & Collaboration


Strong ability to explain complex models and insights to both technical and non-technical stakeholders.

Ask the right questions, clarify objectives, and align analytics with business goals.

Comfortable working cross-functionally in agile and collaborative teams.


Important Note:

This is a Data Science-heavy role — 70% of responsibilities involve building, training, deploying, and scaling AI/ML models.

Cloud experience is mandatory (GCP preferred, AWS/Azure acceptable).

Only candidates with hands-on experience in deploying ML models into production (not just notebooks) will be considered.


Appiness Interactive Pvt. Ltd.
Posted by S Suriya Kumar
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Indore, Gurugram, Delhi, Ahmedabad, Jaipur
5 - 8 yrs
₹5L - ₹25L / yr
Python
R Programming
Java
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Company Description

Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the un-trodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.


Key Responsibilities:

  • Design and implement advanced AI/ML models and algorithms to address real-world challenges.
  • Analyze large and complex datasets to derive actionable insights and train predictive models.
  • Build and deploy scalable, production-ready AI solutions on cloud platforms such as AWS, Azure, or GCP.
  • Collaborate closely with cross-functional teams, including data engineers, product managers, and software developers, to integrate AI solutions into business workflows.
  • Continuously monitor and optimize model performance, ensuring scalability, robustness, and reliability.
  • Stay abreast of the latest advancements in AI, ML, and Generative AI technologies, and proactively apply them where applicable.
  • Implement MLOps best practices using tools such as MLflow, Docker, and CI/CD pipelines.
  • Work with Large Language Models (LLMs) like GPT and LLaMA, and develop Retrieval-Augmented Generation (RAG) pipelines when needed.
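The Retrieval-Augmented Generation (RAG) pipelines mentioned in the last bullet center on a retrieval step. The toy sketch below, which is illustrative only and uses just the standard library, ranks documents by token overlap with the query; a production pipeline would use embeddings and a vector database instead, then pass the top hits to an LLM as context.

```python
# Toy retrieval step of a RAG pipeline (illustrative only): rank
# documents by how many tokens they share with the query. Real systems
# would use embeddings and a vector database rather than word overlap.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, docs, k=2):
    """Return the k documents sharing the most tokens with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

docs = [
    "gcp cloud deployment guide",
    "pasta recipe collection",
    "scaling ml models in production",
]
print(retrieve("cloud deployment", docs, k=1))   # ['gcp cloud deployment guide']
```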


Required Skills:

  • Strong programming skills in Python (preferred); experience with R or Java is also valuable.
  • Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and Scikit-learn.
  • Hands-on experience with cloud platforms like AWS, Azure, or GCP.
  • Solid foundation in data structures, algorithms, statistics, and machine learning principles.
  • Familiarity with MLOps tools and practices, including MLflow, Docker, and Kubernetes.
  • Proven experience in deploying and maintaining AI/ML models in production environments.
  • Exposure to Large Language Models (LLMs), Generative AI, and vector databases is a strong plus.
Springer Capital
Posted by Andrew Rose
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
Warehousing concepts
Google Cloud Platform (GCP)

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process. 

 

Responsibilities: 

▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources 

▪ Develop ETL processes to collect, clean, and transform data from internal and external systems 

▪ Support integration of data into dashboards, analytics tools, and reporting systems 

▪ Collaborate with data analysts and software developers to improve data accessibility and performance 

▪ Document workflows and maintain data infrastructure best practices 

▪ Assist in identifying opportunities to automate repetitive data tasks 
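The ETL responsibility above (collect, clean, transform) can be sketched in a few lines. The example below is a standard-library illustration with made-up data, not the firm's actual pipeline, which would typically run under orchestration tooling and load into a warehouse or BI layer.

```python
# Minimal extract-clean-transform sketch (illustrative data and logic):
# parse raw CSV, drop incomplete rows, aggregate amounts by date.
import csv
import io

RAW = """date,amount
2024-01-01,100
2024-01-01,
2024-01-02,250
"""

def run_pipeline(raw_csv):
    rows = csv.DictReader(io.StringIO(raw_csv))       # extract
    clean = [r for r in rows if r["amount"]]          # clean: drop blank rows
    totals = {}
    for r in clean:                                   # transform: aggregate
        totals[r["date"]] = totals.get(r["date"], 0) + int(r["amount"])
    return totals

print(run_pipeline(RAW))   # {'2024-01-01': 100, '2024-01-02': 250}
```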

CLOUDSUFI

Posted by Ayushi Dwivedi
Noida
4 - 8 yrs
₹15L - ₹22L / yr
Google Cloud Platform (GCP)
Kubernetes
Terraform
Docker
Helm

About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services, and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Our Values

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


What are we looking for

We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have extensive expertise in modern DevOps tools and practices, particularly in managing CI/CD pipelines, infrastructure as code, and cloud-native environments. This role involves designing, implementing, and maintaining robust, scalable, and efficient infrastructure and deployment pipelines to support our development and operations teams.


Required Skills and Experience:

- 7+ years of experience in DevOps, infrastructure automation, or related fields.

- Advanced expertise in Terraform for infrastructure as code.

- Solid experience with Helm for managing Kubernetes applications.

- Proficient with GitHub for version control, repository management, and workflows.

- Extensive experience with Kubernetes for container orchestration and management.

- In-depth understanding of Google Cloud Platform (GCP) services and architecture.

- Strong scripting and automation skills (e.g., Python, Bash, or equivalent).

- Excellent problem-solving skills and attention to detail.

- Strong communication and collaboration abilities in agile development environments.


Preferred Qualifications:

- Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD).

- Knowledge of additional cloud platforms (e.g., AWS, Azure).

- Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer).


Behavioral Competencies

• Must have worked with US/Europe based clients in onsite/offshore delivery models.

• Should have very good verbal and written communication, technical articulation, listening and presentation skills.

• Should have proven analytical and problem solving skills.

• Should have a collaborative mindset for cross-functional teamwork.

• Passion for solving complex search problems.

• Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills.

• Should be a quick learner, self-starter, go-getter, and team player.

• Should have experience working under stringent deadlines in a matrix organization structure.

CLOUDSUFI
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 6 yrs
₹20L - ₹25L / yr
Google Cloud Platform (GCP)
DevOps
Kubernetes
Grafana
Docker

About Us


CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.


Our Values


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.


What we are looking for:


Experience: 3 - 6 years

Education: BTech / BE / MCA / MSc Computer Science


About Role:


Primary Skills - DevOps, with strong CircleCI, ArgoCD, GitHub, Terraform, Helm, Kubernetes, and Google Cloud experience.


Required Skills and Experience:


  •  3+ years of experience in DevOps, infrastructure automation, or related fields.
  •  Strong proficiency with CircleCI for building and managing CI/CD pipelines.
  •  Advanced expertise in Terraform for infrastructure as code.
  •  Solid experience with Helm for managing Kubernetes applications.
  •  Hands-on knowledge of ArgoCD for GitOps-based deployment strategies.
  •  Proficient with GitHub for version control, repository management, and workflows.
  •  Extensive experience with Kubernetes for container orchestration and management.
  •  In-depth understanding of Google Cloud Platform (GCP) services and architecture.
  •  Strong scripting and automation skills (e.g., Python, Bash, or equivalent).
  •  Familiarity with monitoring and logging tools like Prometheus, Grafana, and ELK stack.
  •  Excellent problem-solving skills and attention to detail.
  •  Strong communication and collaboration abilities in agile development environments.


Note: Kindly share your LinkedIn profile when applying.

Lalitech

Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹10L - ₹20L / yr
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Azure
JavaScript
React.js

Location: Hybrid/Remote

Openings: 2

Experience: 5–12 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities

Architect & Design:

  • Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
  • Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
  • Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
  • Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.

Development & Debugging:

  • Write clean, maintainable, and efficient frontend code.
  • Debug and troubleshoot code to ensure robust, high-performing applications.
  • Develop reusable frontend libraries that can be leveraged across multiple projects.

AI Awareness (Preferred):

  • Understand AI/ML fundamentals and how they can enhance frontend applications.
  • Collaborate with teams integrating AI-based features into chat applications.

Collaboration & Reporting:

  • Work closely with cross-functional teams to align on architecture and deliverables.
  • Regularly report progress, identify risks, and propose mitigation strategies.

Quality Assurance:

  • Implement unit tests and end-to-end tests to ensure code quality.
  • Participate in code reviews and enforce best practices.


Required Skills 

  • 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
  • Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
  • Proficiency with modern frameworks like React, Angular, or Node.js
  • Backend familiarity with Java, Spring Boot (or similar technologies).
  • Experience developing real-world, at-scale products.
  • General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
  • Strong problem-solving, debugging, and performance optimization skills.
Lalitech

Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹20L / yr
Fullstack Developer
JavaScript
HTML/CSS
React.js
Spring Boot

Location: Hybrid/Remote

Openings: 2

Experience: 5+ Years

Qualification: Bachelor’s or Master’s in Computer Science or related field


Job Responsibilities


Problem Solving & Optimization:

  • Analyze and resolve complex technical and application issues.
  • Optimize application performance, scalability, and reliability.

Design & Develop:

  • Build, test, and deploy scalable full-stack applications with high performance and security.
  • Develop clean, reusable, and maintainable code for both frontend and backend.

AI Integration (Preferred):

  • Collaborate with the team to integrate AI/ML models into applications where applicable.
  • Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.

Technical Leadership & Mentorship:

  • Provide guidance, mentorship, and code reviews for junior developers.
  • Foster a culture of technical excellence and knowledge sharing.

Agile & Delivery Management:

  • Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
  • Define and scope backlog items, track progress, and ensure timely delivery.

Collaboration:

  • Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
  • Coordinate with geographically distributed teams.

Quality Assurance & Security:

  • Conduct peer reviews of designs and code to ensure best practices.
  • Implement security measures and ensure compliance with industry standards.

Innovation & Continuous Improvement:

  • Identify areas for improvement in the software development lifecycle.
  • Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.

Required Skills

  • Strong proficiency in JavaScript, HTML5, CSS3
  • Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
  • Backend development experience with Java, Spring Boot (Node.js is a plus)
  • Knowledge of REST APIs, microservices, and scalable architectures
  • Familiarity with cloud platforms (AWS, Azure, or GCP)
  • Experience with Agile/Scrum methodologies and JIRA for project tracking
  • Proficiency in Git and version control best practices
  • Strong debugging, performance optimization, and problem-solving skills
  • Ability to analyze customer requirements and translate them into technical specifications
Lalitech

Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
0 - 2 yrs
₹3.5L - ₹4.5L / yr
Fullstack Developer
JavaScript
React.js
Node.js
RESTful APIs

Location: Hybrid/Remote

Openings: 5

Experience: 0 - 2 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities:

Backend Development & APIs

  • Build microservices that provide REST APIs to power web frontends.
  • Design clean, reusable, and scalable backend code meeting enterprise security standards.
  • Conceptualize and implement optimized data storage solutions for high-performance systems.
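The "microservices that provide REST APIs" bullet above reduces, at its core, to routing (method, path) pairs to handlers that return JSON. The sketch below is a deliberately framework-free toy to show that shape; a real service would use a framework such as Spring Boot, Express, or Flask behind a gateway.

```python
# Toy REST-style router (illustrative only): map (method, path) pairs to
# handlers returning JSON, the core shape of a small microservice.
import json

ROUTES = {}

def route(method, path):
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/health")
def health():
    return {"status": "ok"}

def handle(method, path):
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(handler())

print(handle("GET", "/health"))   # (200, '{"status": "ok"}')
```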

Deployment & Cloud

  • Deploy microservices using a common deployment framework on AWS and GCP.
  • Inspect and optimize server code for speed, security, and scalability.

Frontend Integration

  • Work on modern front-end frameworks to ensure seamless integration with back-end services.
  • Develop reusable libraries for both frontend and backend codebases.


AI Awareness (Preferred)

  • Understand how AI/ML or Generative AI can enhance enterprise software workflows.
  • Collaborate with AI specialists to integrate AI-driven features where applicable.

Quality & Collaboration

  • Participate in code reviews to maintain high code quality.
  • Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.


Required Skills:

  • Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
  • Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
  • Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
  • Ability to design and implement RESTful APIs and understand their impact on client-side applications
  • Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
  • Experience working with Agile and Scrum methodologies
  • Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
Remote only
3 - 6 yrs
$25K - $35K / yr
Google Cloud Platform (GCP)
Natural Language Processing (NLP)
Chatbot
Python
Node.js

Ontrac Solutions is a leading technology consulting firm specializing in cutting-edge solutions that drive business transformation. We partner with organizations to modernize their infrastructure, streamline processes, and deliver tangible results.


Our client is actively seeking a Conversational AI Engineer with deep hands-on experience in Google Contact Center AI (CCAI) to join a high-impact digital transformation project via a GCP Premier Partner. As part of a staff augmentation model, you will be embedded within the client’s technology or contact center innovation team, delivering scalable virtual agent solutions that improve customer experience, agent productivity, and call deflection.


Key Responsibilities:

  • Lead the design and buildout of Dialogflow CX/ES agents across chat and voice channels.
  • Integrate virtual agents with client systems and platforms (e.g., Genesys, Twilio, NICE CXone, Salesforce).
  • Develop fulfillment logic using Google Cloud Functions, Cloud Run, and backend integrations (via REST APIs and webhooks).
  • Collaborate with stakeholders to define intent models, entity schemas, and user flows.
  • Implement Agent Assist and CCAI Insights to augment live agent productivity.
  • Leverage Google Cloud tools including Pub/Sub, Logging, and BigQuery to support and monitor the solution.
  • Support tuning, regression testing, and enhancement of NLP performance using live utterance data.
  • Ensure adherence to enterprise security and compliance requirements.
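The fulfillment-logic responsibility above typically means answering webhook calls from a Dialogflow CX agent. The sketch below approximates the WebhookRequest/WebhookResponse JSON shape from the public Dialogflow CX documentation; the "account.balance" tag and the balance lookup itself are invented for illustration, and in production this handler would run on Cloud Functions or Cloud Run.

```python
# Hedged sketch of Dialogflow CX webhook fulfillment. Field names
# approximate the public WebhookRequest/WebhookResponse JSON format;
# the "account.balance" tag and its logic are invented for illustration.

def fulfill(request: dict) -> dict:
    tag = request.get("fulfillmentInfo", {}).get("tag", "")
    params = request.get("sessionInfo", {}).get("parameters", {})
    if tag == "account.balance":
        reply = f"Your balance is ${params.get('balance', 0):.2f}."
    else:
        reply = "Sorry, I can't help with that yet."
    # The virtual agent speaks/types the messages in fulfillmentResponse.
    return {"fulfillmentResponse": {"messages": [{"text": {"text": [reply]}}]}}
```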


Required Skills & Qualifications:

  • 3+ years developing conversational AI experiences, including at least 1–2 years with Google Dialogflow CX or ES.
  • Solid experience across GCP services (Functions, Cloud Run, IAM, BigQuery, etc.).
  • Strong skills in Python or Node.js for webhook fulfillment and backend logic.
  • Knowledge of NLU/NLP modeling, training, and performance tuning.
  • Prior experience working in client-facing or embedded roles via consulting or staff augmentation.
  • Ability to communicate effectively with technical and business stakeholders.


Nice to Have:

  • Hands-on experience with Agent Assist, CCAI Insights, or third-party CCaaS tools (Genesys, Twilio Flex, NICE CXone).
  • Familiarity with Vertex AI, AutoML, or other GCP ML services.
  • Experience in regulated industries (healthcare, finance, insurance, etc.).
  • Google Cloud certification in Cloud Developer or CCAI Engineer.




Deqode

Posted by Shubham Das
Mumbai, Chennai, Gurugram
6 - 9 yrs
₹12L - ₹17L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
DevOps
Helm

We are looking for a highly skilled DevOps/Cloud Engineer with over 6 years of experience in infrastructure automation, cloud platforms, networking, and security. If you are passionate about designing scalable systems and love solving complex cloud and DevOps challenges—this opportunity is for you.

Key Responsibilities

  • Design, deploy, and manage cloud-native infrastructure using Kubernetes (K8s), Helm, Terraform, and Ansible
  • Automate provisioning and orchestration workflows for cloud and hybrid environments
  • Manage and optimize deployments on AWS, Azure, and GCP for high availability and cost efficiency
  • Troubleshoot and implement advanced network architectures including VPNs, firewalls, load balancers, and routing protocols
  • Implement and enforce security best practices: IAM, encryption, compliance, and vulnerability management
  • Collaborate with development and operations teams to improve CI/CD workflows and system observability

Required Skills & Qualifications

  • 6+ years of experience in DevOps, Infrastructure as Code (IaC), and cloud-native systems
  • Expertise in Helm, Terraform, and Kubernetes
  • Strong hands-on experience with AWS and Azure
  • Solid understanding of networking, firewall configurations, and security protocols
  • Experience with CI/CD tools like Jenkins, GitHub Actions, or similar
  • Strong problem-solving skills and a performance-first mindset

Why Join Us?

  • Work on cutting-edge cloud infrastructure across diverse industries
  • Be part of a collaborative, forward-thinking team
  • Flexible hybrid work model – work from anywhere while staying connected
  • Opportunity to take ownership and lead critical DevOps initiatives


NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹22L / yr
Data engineering
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
Google Dataform
BigQuery

Job Title : Data Engineer – GCP + Spark + DBT

Location : Bengaluru (On-site at Client Location | 3 Days WFO)

Experience : 8 to 12 Years

Level : Associate Architect

Type : Full-time


Job Overview :

We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.


Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.


Key Responsibilities :

  • Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark.
  • Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
  • Implement and maintain CI/CD for data engineering projects with Git-based version control.
  • Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
  • Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
  • Participate in Agile sprints, backlog grooming, and Jira-based project tracking.

Must-Have Skills :

  • Strong experience with DBT, Google Dataform, and BigQuery
  • Hands-on expertise with PySpark/Spark SQL
  • Proficient in GCP for data engineering workflows
  • Solid knowledge of SQL optimization, Git, and CI/CD pipelines
  • Agile team experience and strong problem-solving abilities

Nice-to-Have Skills :

  • Familiarity with Databricks, Delta Lake, or Kafka
  • Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
  • Knowledge of MDM patterns, Terraform, or IaC is a plus
NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Pune
6 - 10 yrs
₹12L - ₹23L / yr
Machine Learning (ML)
Deep Learning
Natural Language Processing (NLP)
Computer Vision
Data engineering

Job Title : AI Architect

Location : Pune (On-site | 3 Days WFO)

Experience : 6+ Years

Shift : US or flexible shifts


Job Summary :

We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.

The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).


Key Responsibilities :

  • Define AI strategy and identify business use cases
  • Design scalable AI/ML architectures
  • Collaborate on data preparation, model development & deployment
  • Ensure data quality, governance, and ethical AI practices
  • Integrate AI into existing systems and monitor performance

Must-Have Skills :

  • Machine Learning, Deep Learning, NLP, Computer Vision
  • Data Engineering, Model Deployment (CI/CD, MLOps)
  • Python Programming, Cloud (AWS/Azure/GCP)
  • Distributed Systems, Data Governance
  • Strong communication & stakeholder collaboration

Good to Have :

  • AI certifications (Azure/GCP/AWS)
  • Experience in big data and analytics
Read more
Blitzy

at Blitzy

2 candid answers
1 product
Eman Khan
Posted by Eman Khan
Pune
6 - 10 yrs
₹40L - ₹70L / yr
Python
Django
Flask
FastAPI
Google Cloud Platform (GCP)
+1 more

Requirements

  • 7+ years of experience with Python
  • Strong expertise in Python frameworks (Django, Flask, or FastAPI)
  • Experience with GCP, Terraform, and Kubernetes
  • Deep understanding of REST API development and GraphQL
  • Strong knowledge of SQL and NoSQL databases
  • Experience with microservices architecture
  • Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
  • Experience with container orchestration using Kubernetes
  • Understanding of cloud architecture and serverless computing
  • Experience with monitoring and logging solutions
  • Strong background in writing unit and integration tests
  • Familiarity with AI/ML concepts and integration points


Responsibilities

  • Design and develop scalable backend services for our AI platform
  • Architect and implement complex systems with high reliability
  • Build and maintain APIs for internal and external consumption
  • Work closely with AI engineers to integrate ML functionality
  • Optimize application performance and resource utilization
  • Make architectural decisions that balance immediate needs with long-term scalability
  • Mentor junior engineers and promote best practices
  • Contribute to the evolution of our technical standards and processes
Read more
eazeebox

at eazeebox

3 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
2yrs+
Upto ₹12L / yr (Varies)
DevOps
CI/CD
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Ansible
+6 more

About Eazeebox

Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.


About the Role

We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows.


What We’re Looking For

  • 2+ years in a DevOps or SRE role in production-grade, cloud-native environments (AWS-focused)
  • Solid hands-on experience with Docker, Kubernetes/EKS, and container networking
  • Proficiency with CI/CD tools, especially GitHub Actions
  • Experience with staged rollout strategies for microservices
  • Familiarity with event-driven architectures using SNS, SQS, and Step Functions
  • Strong ability to optimize cloud costs without compromising uptime or performance
  • Scripting/automation skills in Python, Go, or Bash
  • Good understanding of observability, on-call readiness, and incident response workflows
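One common staged rollout strategy mentioned above, percentage-based canary routing, can be sketched in a few lines of Python. The user-ID format and the 20% target below are illustrative assumptions:

```python
# Sketch of a staged-rollout (canary) decision: route a stable percentage of
# users to a new version by hashing their ID into 100 buckets.
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically place a user in the first `percent` of 100 buckets."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent

# The same user always gets the same answer, so the rollout is sticky.
enabled = sum(in_rollout(f"user-{i}", 20) for i in range(1000))
print(enabled)  # roughly 200 of the 1000 sampled users
```

Hash-based bucketing is preferred over random sampling here because it keeps each user on the same version across requests without storing any state.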


Nice to Have

  • Experience in B2B commerce, delivery/logistics networks, or on-demand operations
  • Exposure to real-time inventory systems or marketplaces
  • Worked on high-concurrency, low-latency backend systems
Read more
Product company for financial operations automation platform

Product company for financial operations automation platform

Agency job
via Esteem leadership by Suma Raju
Hyderabad
3 - 6 yrs
₹20L - ₹22L / yr
Python
Java
Kubernetes
Google Cloud Platform (GCP)

Mandatory Criteria

  • Candidate must have strong hands-on experience with Kubernetes, with at least 1 year in production environments.
  • Candidate should have expertise in at least one public cloud platform (GCP preferred; AWS, Azure, or OCI).
  • Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
  • Candidate should have strong Backend experience.
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.


About the Role


We are looking for a highly skilled and motivated Cloud Backend Engineer with 3–5 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.

Note: Experience with Kubernetes is mandatory.

 

Key Responsibilities

  • Design and develop scalable, reliable backend services and cloud-native applications.
  • Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
  • Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
  • Implement and manage CI/CD pipelines and infrastructure automation.
  • Collaborate with frontend, DevOps, and product teams in an agile environment.
  • Ensure high code quality through testing, reviews, and documentation.

 

Required Skills

  • Strong hands-on experience with Kubernetes, with at least 1 year in production environments (mandatory).
  • Expertise in at least one public cloud platform (GCP preferred; AWS, Azure, or OCI).
  • Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
  • Solid understanding of distributed systems, microservices, and cloud-native architecture.
  • Experience with containerization using Docker and Kubernetes-native deployment workflows.
  • Working knowledge of SQL and relational databases.

  

Preferred Qualifications

  • Experience working across multiple cloud platforms.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.

  

Nice to Have

  • Knowledge of NoSQL databases or event-driven/message-based architectures.
  • Experience with serverless services, managed data pipelines, or data lake platforms.
Read more
Moative

at Moative

3 candid answers
Eman Khan
Posted by Eman Khan
Chennai
3 - 5 yrs
₹10L - ₹25L / yr
Python
NumPy
pandas
Scikit-Learn
Natural Language Toolkit (NLTK)
+4 more

About Moative

Moative, an Applied AI company, designs and builds transformative AI solutions for traditional industries in energy, utilities, healthcare & life sciences, and more. Through Moative Labs, we build AI micro-products and launch AI startups with partners in vertical markets that align with our theses.


Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.


Our Team: Our team of 20+ employees consists of data scientists, AI/ML engineers, and mathematicians from top engineering and research institutes such as IITs, CERN, IISc, and UZH. Our team includes Ph.D. holders, academicians, IBM Research Fellows, and former founders.


Work you’ll do

As a Data Scientist at Moative, you’ll play a crucial role in extracting valuable insights from data to drive informed decision-making. You’ll work closely with cross-functional teams to build predictive models and develop solutions to complex business problems. You will also be involved in conducting experiments, building POCs and prototypes.


Responsibilities

  • Support end-to-end development and deployment of ML/ AI models - from data preparation, data analysis and feature engineering to model development, validation and deployment
  • Gather, prepare and analyze data, write code to develop and validate models, and continuously monitor and update them as needed.
  • Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
  • Document methodologies and results, present findings and communicate insights to non-technical audiences


Skills & Requirements

  • Proficiency in Python and familiarity with basic Python libraries for data analysis and ML algorithms (such as NumPy, Pandas, Scikit-Learn, NLTK).
  • Strong understanding and experience with data analysis, statistical and mathematical concepts and ML algorithms 
  • Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
  • Broad understanding of data structures and data engineering.
  • Strong communication skills
  • Strong collaboration skills, continuous learning attitude and a problem solving mind-set
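As a minimal sketch of the fit-and-validate loop a data scientist runs daily, here is a one-variable least-squares fit in plain Python, standing in for the Pandas/Scikit-Learn workflow named above. The data points are synthetic:

```python
# Ordinary least squares for y = a*x + b, written out by hand.
# In practice this would be sklearn.linear_model.LinearRegression;
# the closed form below shows what that fit computes.

def fit_line(xs, ys):
    """Return slope a and intercept b minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Synthetic data lying close to y = 2x.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1]
a, b = fit_line(xs, ys)
print(round(a, 2), abs(round(b, 2)))  # 2.01 0.0
```

Validation would then score the fitted model on held-out points rather than the training data, which is the part of the loop that catches overfitting.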


Working at Moative

Moative is a young company, but we believe strongly in thinking long-term while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless. Here are some of our guiding principles:

  • Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
  • Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
  • Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
  • Avoid work about work. Process creeps in unless we constantly question it. We are deliberate about which rituals we commit to, since they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say that out loud.
  • High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.


If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.  


That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.


The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to be present in the city. We intend to move to a hybrid model in a few months’ time.

Read more
AI driven consulting firm

AI driven consulting firm

Agency job
via PLEXO HR Solutions by Upashna Kumari
Pune
1 - 3 yrs
₹3L - ₹5L / yr
Google Cloud Platform (GCP)
CI/CD
Kubernetes
Terraform
Linux/Unix

What You’ll Do:

We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.


Responsibilities:

● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)

● Build CI/CD pipelines using Jenkins and integrate them with Git workflows

● Design and manage Kubernetes clusters and helm-based deployments

● Manage infrastructure as code using Terraform

● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)

● Ensure security best practices across cloud resources, networks, and secrets

● Automate repetitive operations and improve system reliability

● Collaborate with developers to troubleshoot and resolve issues in staging/production environments


What We’re Looking For:

Required Skills:

● 1–3 years of hands-on experience in a DevOps or SRE role

● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)

● Proficiency in Kubernetes (deployment, scaling, troubleshooting)

● Experience with Terraform for infrastructure provisioning

● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools

● Understanding of DevSecOps principles and cloud security practices

● Good command over Linux, shell scripting, and basic networking concepts


Nice to have:

● Experience with Docker, Helm, ArgoCD

● Exposure to other cloud platforms (AWS, Azure)

● Familiarity with incident response and disaster recovery planning

● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana

Read more
Us healthcare company

Us healthcare company

Agency job
via People Impact by Ranjita Shrivastava
Hyderabad, Chennai
4 - 8 yrs
₹20L - ₹30L / yr
AI/ML
TensorFlow
Python
Google Cloud Platform (GCP)
Vertex

  • Design, develop, and implement AI/ML models and algorithms.
  • Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.
  • Write clean, efficient, and well-documented code.
  • Collaborate with data engineers to ensure data quality and availability for model training and evaluation.
  • Work closely with senior team members to understand project requirements and contribute to technical solutions.
  • Troubleshoot and debug AI/ML models and applications.
  • Stay up-to-date with the latest advancements in AI/ML.
  • Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models.
  • Develop and deploy AI solutions on Google Cloud Platform (GCP).
  • Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.
  • Utilize Vertex AI for model training, deployment, and management.
  • Integrate and leverage Google Gemini for specific AI functionalities.

Qualifications:

  • Bachelor’s degree in Computer Science, Artificial Intelligence, or a related field.
  • 3+ years of experience in developing and implementing AI/ML models.
  • Strong programming skills in Python.
  • Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.
  • Good understanding of machine learning concepts and techniques.
  • Ability to work independently and as part of a team.
  • Strong problem-solving skills.
  • Good communication skills.
  • Experience with Google Cloud Platform (GCP) is preferred.
  • Familiarity with Vertex AI is a plus.


Read more
Snaphyr

Snaphyr

Agency job
via SnapHyr by Shagun Jaiswal
Gurugram
2 - 5 yrs
₹10L - ₹30L / yr
Cloud Computing
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
CI/CD
+2 more

 Job Opening: Cloud and Observability Engineer

📍 Location: Work From Office – Gurgaon (Sector 43)

🕒 Experience: 2+ Years

💼 Employment Type: Full-Time


Role Overview:

As a Cloud and Observability Engineer, you will play a critical role in helping customers transition and optimize their monitoring and observability infrastructure. You'll be responsible for building high-quality extension packages for alerts, dashboards, and parsing rules using the organization's platform. Your work will directly impact the reliability, scalability, and efficiency of monitoring across cloud-native environments.

This is a work-from-office role requiring collaboration with global customers and internal stakeholders.


Key Responsibilities:

Extension Delivery:

  • Develop, enhance, and maintain extension packages for alerts, dashboards, and parsing rules to improve the monitoring experience.
  • Conduct in-depth research to create world-class observability solutions (e.g., for cloud-native and container technologies).

Customer & Internal Support:

  • Act as a technical advisor to both internal teams and external clients.
  • Respond to queries, resolve issues, and incorporate feedback related to deployed extensions.

Observability Solutions:

  • Design and implement optimized monitoring architectures.
  • Migrate and package dashboards, alerts, and rules based on customer environments.

Automation & Deployment:

  • Use CI/CD tools and version control systems to package and deploy monitoring components.
  • Continuously improve deployment workflows.

Collaboration & Enablement:

  • Work closely with DevOps, engineering, and customer success teams to gather requirements and deliver solutions.
  • Deliver technical documentation and training for customers.


Requirements:


Professional Experience:


  • Minimum 2 years in Systems Engineering or similar roles.
  • Focus on monitoring, observability, and alerting tools.

Cloud & Container Tech:

  • Hands-on experience with AWS, Azure, or GCP.
  • Experience with Kubernetes, EKS, GKE, or AKS.
  • Cloud DevOps certifications (preferred).


Observability Tools:

  • Practical experience with at least two observability platforms (e.g., Prometheus, Grafana, Datadog, etc.).
  • Strong understanding of alerting, dashboards, and infrastructure monitoring.


Scripting & Automation:

  • Familiarity with CI/CD, deployment pipelines, and version control.
  • Experience in packaging and managing observability assets.

Technical Skills:

  • Working knowledge of PromQL, Grafana, and related query languages.
  • Willingness to learn Dataprime and Lucene syntax.

Soft Skills:

  • Excellent problem-solving and debugging abilities.
  • Strong verbal and written communication in English.
  • Ability to work across US and European time zones as needed.


Why Join Us?

  • Opportunity to work on cutting-edge observability platforms.
  • Collaborate with global teams and top-tier clients.
  • Shape the future of cloud monitoring and performance optimization.
  • Growth-oriented, learning-focused environment.


Read more
Zenius IT Services Pvt Ltd

at Zenius IT Services Pvt Ltd

2 candid answers
Sunita Pradhan
Posted by Sunita Pradhan
Hyderabad
3 - 4 yrs
₹4L - ₹8L / yr
ASP.NET
C#
React.js
JavaScript
TypeScript
+14 more

Job Overview:

We are looking for a highly skilled Full-Stack Developer with expertise in .NET Core to develop and maintain scalable web applications and microservices. The ideal candidate will have strong problem-solving skills, experience in modern software development, and a passion for creating robust, high-performance applications.


Key Responsibilities:


Backend Development:

  • Design, develop, and maintain microservices and APIs using .NET Core. Should have a good understanding of .NET Framework.
  • Implement RESTful APIs, ensuring high performance and security.
  • Optimize database queries and design schemas for SQL Server / Snowflake / MongoDB.

Software Architecture & DevOps:

  • Design and implement scalable microservices architecture.
  • Work with Docker, Kubernetes, and CI/CD pipelines for deployment and automation.
  • Ensure best practices in security, scalability, and performance.

Collaboration & Agile Development:

  • Work closely with UI/UX designers, backend engineers, and product managers.
  • Participate in Agile/Scrum ceremonies, code reviews, and knowledge-sharing sessions.
  • Write clean, maintainable, and well-documented code.


Required Skills & Qualifications:

  • 7+ years of experience as a Full-Stack Developer.
  • Strong experience in .NET Core, C#.
  • Proficiency in React.js, JavaScript (ES6+), TypeScript.
  • Experience with RESTful APIs, Microservices architecture.
  • Knowledge of SQL / NoSQL databases (SQL Server, Snowflake, MongoDB).
  • Experience with Git, CI/CD pipelines, Docker, and Kubernetes.
  • Familiarity with Cloud services (Azure, AWS, or GCP) is a plus.
  • Strong debugging and troubleshooting skills.


Nice-to-Have:

  • Experience with GraphQL, gRPC, WebSockets.
  • Exposure to serverless architecture and cloud-based solutions.
  • Knowledge of authentication/authorization frameworks (OAuth, JWT, Identity Server).
  • Experience with unit testing and integration testing.
Read more
Deqode

at Deqode

1 recruiter
Roshni Maji
Posted by Roshni Maji
Gurugram
3 - 6 yrs
₹8L - ₹23L / yr
.NET
C#
Angular (2+)
React.js
Google Cloud Platform (GCP)
+2 more

Job Title: .NET Full Stack Developer


Experience: 3 to 6 Years


Work Mode: Hybrid (2-3 days from office)


Location: Gurgaon


Joiners: Immediate joiners or candidates who have completed their notice period preferred


Key Responsibilities


  • Design, develop, and maintain web applications using .NET (C#) on the backend and Angular/React on the frontend.
  • Develop RESTful APIs and integrate them with front-end components.
  • Collaborate with UI/UX designers, backend developers, and product managers to deliver high-quality features.
  • Write clean, maintainable, and efficient code following best practices.
  • Participate in code reviews and contribute to continuous improvement of development processes.
  • Troubleshoot and debug issues across the application stack.
  • Work with DevOps teams to support CI/CD pipelines and deployment.
  • Ensure application scalability, performance, and security.
  • Contribute to documentation, unit testing, and version control.



Required Skills


  • Strong proficiency in C# and .NET Core/.NET Framework.
  • Experience with JavaScript and modern front-end frameworks like Angular or React (preference for Angular).
  • Exposure to cloud platforms – Azure (preferred), AWS, or GCP.
  • Good understanding of HTML5, CSS3, and TypeScript.
  • Experience in RESTful API development.
  • Familiarity with Entity Framework and SQL-based databases like SQL Server.
  • Understanding of version control systems like Git.
  • Basic knowledge of CI/CD practices and tools like Azure DevOps or Jenkins.


Read more
Madurai
11 - 16 yrs
₹27L - ₹35L / yr
Google Cloud Platform (GCP)
GCP Data
Architect
Data architecture

Job Title: GCP Data Architect

Location: Madurai

Experience: 12+ Years

Notice Period: Immediate 


About TechMango

TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.


Role Summary

As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.


Key Responsibilities:

  • Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
  • Define data strategy, standards, and best practices for cloud data engineering and analytics
  • Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
  • Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
  • Architect data lakes, warehouses, and real-time data platforms
  • Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
  • Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
  • Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
  • Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:

  • 10+ years of experience in data architecture, data engineering, or enterprise data platforms
  • Minimum 3–5 years of hands-on experience with GCP data services
  • Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner
  • Python / Java / SQL
  • Data modeling (OLTP, OLAP, Star/Snowflake schema)
  • Experience with real-time data processing, streaming architectures, and batch ETL pipelines
  • Good understanding of IAM, networking, security models, and cost optimization on GCP
  • Prior experience in leading cloud data transformation projects
  • Excellent communication and stakeholder management skills

Preferred Qualifications:

  • GCP Professional Data Engineer / Architect Certification
  • Experience with Terraform, CI/CD, GitOps, Looker / Data Studio / Tableau for analytics
  • Exposure to AI/ML use cases and MLOps on GCP
  • Experience working in agile environments and client-facing roles

What We Offer:

  • Opportunity to work on large-scale data modernization projects with global clients
  • A fast-growing company with a strong tech and people culture
  • Competitive salary, benefits, and flexibility
  • Collaborative environment that values innovation and leadership
Read more
DAITA

at DAITA

5 candid answers
2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Tirupur, Tamil Nadu
3yrs+
Upto ₹30L / yr (Varies)
NodeJS (Node.js)
Python
FastAPI
Django
Go Programming (Golang)
+16 more

About Us

DAITA is a German AI startup revolutionizing the global textile supply chain by digitizing factory-to-brand workflows. We are building cutting-edge AI-powered SaaS and Agentic Systems that automate order management, production tracking, and compliance — making the supply chain smarter, faster, and more transparent.


Fresh off a $500K pre-seed raise, our passionate team is on the ground in India, collaborating directly with factories and brands to build our MVP and create real-world impact. If you’re excited by the intersection of AI, SaaS, and supply chain innovation, join us to help reshape how textiles move from factory floors to global brands.


Role Overview

We’re seeking a versatile Full-Stack Engineer to join our growing engineering team. You’ll be instrumental in designing and building scalable, secure, and high-performance applications that power our AI-driven platform. Working closely with Founders, ML Engineers, and Pilot Customers, you’ll transform complex AI workflows into intuitive, production-ready features.


What You’ll Do

  • Design, develop, and deploy backend services, APIs, and microservices powering our platform.
  • Build responsive, user-friendly frontend applications tailored for factory and brand users.
  • Integrate AI/ML models and agentic workflows into seamless production environments.
  • Develop features supporting order parsing, supply chain tracking, compliance, and reporting.
  • Collaborate cross-functionally to iterate rapidly, test with users, and deliver impactful releases.
  • Optimize applications for performance, scalability, and cost-efficiency on cloud platforms.
  • Establish and improve CI/CD pipelines, deployment processes, and engineering best practices.
  • Write clear documentation and maintain clean, maintainable code.


Required Skills

  • 3–5 years of professional full-stack development experience
  • Strong backend skills with frameworks like Node.js, Python (FastAPI, Django), Go, or similar
  • Frontend experience with React, Vue.js, Next.js, or similar modern frameworks
  • Solid knowledge and experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis, Neon)
  • Strong API design skills (REST mandatory; GraphQL a plus)
  • Containerization expertise with Docker
  • Container orchestration and management with Kubernetes (including experience with Helm charts, operators, or custom resource definitions)
  • Cloud deployment and infrastructure experience on AWS, GCP, or Azure
  • Hands-on experience deploying AI/ML models in cloud-native environments (AWS, GCP, or Azure) with scalable infrastructure and monitoring
  • Experience with managed AI/ML services like AWS SageMaker, GCP Vertex AI, Azure ML, Together.ai, or similar
  • Experience with CI/CD pipelines and DevOps tools such as Jenkins, GitHub Actions, Terraform, Ansible, or ArgoCD
  • Familiarity with monitoring, logging, and observability tools like Prometheus, Grafana, the ELK stack (Elasticsearch, Logstash, Kibana), or Helicone


Nice-to-have

  • Experience with TypeScript for full-stack AI SaaS development
  • Use of modern UI frameworks and tooling like Tailwind CSS
  • Familiarity with modern AI-first SaaS concepts, viz. vector databases for fast ML data retrieval, prompt engineering for LLM integration, integrating with OpenRouter or similar LLM orchestration frameworks, etc.
  • Knowledge of MLOps tools like Kubeflow, MLflow, or Seldon for model lifecycle management
  • Background in building data pipelines, real-time analytics, and predictive modeling
  • Knowledge of AI-driven security tools and best practices for SaaS compliance
  • Proficiency in cloud automation, cost optimization, and DevOps for AI workflows
  • Ability to design and implement hyper-personalized, adaptive user experiences


What We Value

  • Ownership: You take full responsibility for your work and ship high-quality solutions quickly.
  • Bias for Action: You’re pragmatic, proactive, and focused on delivering results.
  • Clear Communication: You articulate ideas, challenges, and solutions effectively across teams.
  • Collaborative Spirit: You thrive in a cross-functional, distributed team environment.
  • Customer Focus: You build with empathy for end users and real-world usability.
  • Curiosity & Adaptability: You embrace learning, experimentation, and pivoting when needed.
  • Quality Mindset: You write clean, maintainable, and well-tested code.


Why Join DAITA?

  • Be part of a mission-driven startup transforming a $1+ trillion global industry.
  • Work closely with founders and AI experts on cutting-edge technology.
  • Directly impact real-world supply chains and sustainability.
  • Grow your skills in AI, SaaS, and supply chain tech in a fast-paced environment.

Read more
AiSensy

at AiSensy

2 candid answers
1 video
Pratistha Sonowal
Posted by Pratistha Sonowal
Gurugram
5 - 10 yrs
₹10L - ₹80L / yr
Kubernetes
Amazon Web Services (AWS)
GitHub
Jenkins
Terraform
+8 more

DevOps Engineer

AiSensy  

Gurugram, Haryana, India (On-site)


About AiSensy


AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.


  • Enabling 100,000+ businesses with WhatsApp engagement & marketing
  • 400+ crores of WhatsApp messages exchanged between businesses and users via AiSensy per year
  • Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
  • High impact, as businesses drive 25–80% of their revenues using the AiSensy platform
  • Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc & 50+ angel investors


Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀


What You’ll Do (Key Responsibilities)

🔹 CI/CD & Automation:

  • Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
  • Automate deployment processes to improve efficiency and reduce downtime.

🔹 Infrastructure Management:

  • Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
  • Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.

🔹 Cloud & Security:

  • Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
  • Optimize cloud costs and ensure security best practices are in place.

🔹 Monitoring & Troubleshooting:

  • Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
  • Proactively identify and resolve infrastructure-related issues.

🔹 Scripting & Automation:

  • Use Python or Bash scripting to automate repetitive DevOps tasks.
  • Build internal tools for system health monitoring, logging, and debugging.
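The scripting duties above can be sketched in miniature. Below is a hypothetical Python health-check helper of the kind a DevOps engineer might write for system health monitoring; it is illustrative only (not AiSensy's actual tooling), and the endpoint URL is a placeholder:

```python
# Minimal, hypothetical health-check helper using only the standard library.
# A real version would feed results into alerting (e.g. CloudWatch, Datadog).
from urllib.request import urlopen
from urllib.error import URLError

def check_endpoints(urls, timeout=5):
    """Return a dict mapping each URL to 'up' or 'down'."""
    status = {}
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as resp:
                # Treat 2xx/3xx responses as healthy.
                status[url] = "up" if 200 <= resp.status < 400 else "down"
        except (URLError, OSError):
            # Connection refused, DNS failure, timeout, etc.
            status[url] = "down"
    return status

if __name__ == "__main__":
    # Placeholder endpoint; replace with real service URLs.
    for url, state in check_endpoints(["https://example.com/health"]).items():
        print(f"{url}: {state}")
```

A script like this could run on a schedule (cron, Cloud Scheduler) and page on consecutive failures.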


What We’re Looking For (Must-Have Skills)

✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)

✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins

✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi

✅ Containerization & Orchestration: Experience with Docker & Kubernetes

✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers

✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana

✅ Scripting Knowledge: Python or Bash for automation


Bonus Skills (Good to Have, Not Mandatory)

➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking

➕ Experience with Microsoft/Linux/F5 Technologies

➕ Hands-on knowledge of Database servers

Read more
Snaphyr

Agency job via SnapHyr by MUKESHKUMAR CHAUHAN
Gurugram
2 - 6 yrs
₹10L - ₹70L / yr
Cloud Computing
Google Cloud Platform (GCP)
Microsoft Windows Azure
AWS
CI/CD

🌐 Job Opening: Cloud and Observability Engineer

📍 Location: Work From Office – Gurgaon (Sector 43)

🕒 Experience: 2+ Years

💼 Employment Type: Full-Time

Role Overview:

As a Cloud and Observability Engineer, you will play a critical role in helping customers transition and optimize their monitoring and observability infrastructure. You'll be responsible for building high-quality extension packages for alerts, dashboards, and parsing rules using the organization's platform. Your work will directly impact the reliability, scalability, and efficiency of monitoring across cloud-native environments.

This is a work-from-office role requiring collaboration with global customers and internal stakeholders.

Key Responsibilities:

  • Extension Delivery:
  • Develop, enhance, and maintain extension packages for alerts, dashboards, and parsing rules to improve monitoring experience.
  • Conduct in-depth research to create world-class observability solutions (e.g., for cloud-native and container technologies).
  • Customer & Internal Support:
  • Act as a technical advisor to both internal teams and external clients.
  • Respond to queries, resolve issues, and incorporate feedback related to deployed extensions.
  • Observability Solutions:
  • Design and implement optimized monitoring architectures.
  • Migrate and package dashboards, alerts, and rules based on customer environments.
  • Automation & Deployment:
  • Use CI/CD tools and version control systems to package and deploy monitoring components.
  • Continuously improve deployment workflows.
  • Collaboration & Enablement:
  • Work closely with DevOps, engineering, and customer success teams to gather requirements and deliver solutions.
  • Deliver technical documentation and training for customers.

Requirements:

  • Professional Experience:
  • Minimum 2 years in Systems Engineering or similar roles.
  • Focus on monitoring, observability, and alerting tools.
  • Cloud & Container Tech:
  • Hands-on experience with AWS, Azure, or GCP.
  • Experience with Kubernetes, EKS, GKE, or AKS.
  • Cloud DevOps certifications (preferred).
  • Observability Tools:
  • Practical experience with at least two observability platforms (e.g., Prometheus, Grafana, Datadog, etc.).
  • Strong understanding of alerting, dashboards, and infrastructure monitoring.
  • Scripting & Automation:
  • Familiarity with CI/CD, deployment pipelines, and version control.
  • Experience in packaging and managing observability assets.
  • Technical Skills:
  • Working knowledge of PromQL, Grafana, and related query languages.
  • Willingness to learn Dataprime and Lucene syntax.
  • Soft Skills:
  • Excellent problem-solving and debugging abilities.
  • Strong verbal and written communication in English.
  • Ability to work across US and European time zones as needed.

Why Join Us?

  • Opportunity to work on cutting-edge observability platforms.
  • Collaborate with global teams and top-tier clients.
  • Shape the future of cloud monitoring and performance optimization.
  • Growth-oriented, learning-focused environment.
Read more
Snaphyr

Agency job via SnapHyr by MUKESHKUMAR CHAUHAN
Gurugram
3 - 5 yrs
₹10L - ₹70L / yr
Security Information and Event Management (SIEM)
SOAR
WAF
IPS
Amazon Web Services (AWS)
+2 more

🛡️ Job Opening: Security Operations Center (SOC) Analyst – Gurgaon (Sector 43)

📍 Location: Gurgaon, Sector 43

🕒 Experience: 3+ Years

💼 Employment Type: Full-Time

Who We’re Looking For:

We’re seeking a dynamic and experienced SOC Analyst to join our growing cybersecurity team. If you're passionate about threat detection, incident response, and working hands-on with cutting-edge security tools — this role is for you.

Key Responsibilities:

  • Monitor, detect, investigate, and respond to cybersecurity threats in real-time.
  • Work hands-on with tools such as SIEM, SOAR, WAF, IPS/IDS, etc.
  • Collaborate with customers and internal teams to provide timely and clear communication around security events.
  • Analyze threat scenarios and provide actionable intelligence and mitigation strategies.
  • Create and refine detection rules, playbooks, and escalation workflows.
  • Assist in continuous improvement of SOC procedures and threat detection capabilities.

Requirements:

  • Minimum 3 years of experience in a SOC/MDR (Managed Detection and Response) environment, preferably in a customer-facing role.
  • Strong technical skills in using security platforms like SIEM, SOAR, WAF, IPS, etc.
  • Solid understanding of security principles, threat vectors, and incident response methodologies.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Excellent analytical, communication, and problem-solving skills.

Preferred Qualifications:

  • Security certifications such as CEH, OSCP, CSA, or equivalent are a plus.
  • Experience in scripting or automation (Python, PowerShell, etc.) is an added advantage.

Why Join Us?

  • Be part of a fast-paced and innovative cybersecurity team.
  • Work on real-world threats and cutting-edge technologies.
  • Collaborative work environment with a focus on growth and learning.
Read more
Blitzy

Posted by Eman Khan
Pune
5+ yrs
₹11L - ₹30L / yr
Python
Selenium
Playwright
Git
Google Cloud Platform (GCP)
+2 more

About the role

We are looking for a Senior Automation Engineer to architect and implement automated testing frameworks that validate the runtime behavior of code generated by our AI platform. This role is critical in ensuring that our platform's output performs correctly in production environments. You'll work at the intersection of AI and quality assurance, creating innovative testing solutions that can validate AI-generated applications during actual execution.


What Success Looks Like

  • You architect and implement automated testing frameworks that validate the runtime behavior and performance of AI-generated applications
  • You develop intelligent test suites that can automatically assess application functionality in production environments
  • You create testing frameworks that can validate runtime behavior across multiple languages and frameworks
  • You establish quality metrics and testing protocols that measure real-world performance of generated applications
  • You build systems to automatically detect and flag runtime issues in deployed applications
  • You collaborate with our AI team to improve the platform based on runtime performance data
  • You implement automated integration and end-to-end testing that ensures generated applications work as intended in production
  • You develop metrics and monitoring systems to track runtime performance across different customer deployments


Areas of Ownership 

Our hiring process is designed for you to demonstrate deep expertise in automation testing with a focus on AI-powered systems.


Required Technical Experience:

  • 4+ years of experience with Selenium and automated testing frameworks
  • Strong expertise in Python (our primary automation language)
  • Experience with CI/CD tools (Jenkins, CircleCI, or similar)
  • Proficiency in version control systems (Git)
  • Experience testing distributed systems
  • Understanding of modern software development practices
  • Experience working with cloud platforms (GCP preferred)


Ways to stand out

  • Experience with runtime monitoring and testing of distributed systems
  • Knowledge of performance testing and APM (Application Performance Monitoring)
  • Experience with end-to-end testing of complex applications
  • Background in developing testing systems for enterprise-grade applications
  • Understanding of distributed tracing and monitoring
  • Experience with chaos engineering and resilience testing
Read more
Mrproptek
Posted by Prerna Mittal
Mr proptek, office no. 901, 9th floor, CP67 mall, sector 67, Mohali
7 - 10 yrs
₹20L - ₹30L / yr
React.js
React Native
NodeJS (Node.js)
Fullstack Developer
RESTful APIs
+3 more

Engineering Head / Tech Lead (React + Node.js)

About MrPropTek

MrPropTek is building the future of real estate technology. We're looking for a hands-on Engineering Head / Tech Lead to drive our tech strategy and lead the development of scalable web applications across frontend and backend using React and Node.js.

Responsibilities

  • Lead and mentor a team of full-stack developers
  • Architect and build scalable, high-performance applications
  • Drive end-to-end development using React (frontend) and Node.js (backend)
  • Collaborate with product, design, and business teams to align on priorities
  • Enforce code quality, best practices, and agile processes
  • Oversee deployment, performance, and security of systems

Requirements

  • 7+ years in software development; 3+ years in a tech lead or engineering management role
  • Deep expertise in React.js, Node.js, and JavaScript/TypeScript
  • Experience with REST APIs, cloud platforms (AWS/GCP), and databases (SQL/NoSQL)
  • Strong leadership, communication, and decision-making skills
  • Startup or fast-paced team experience preferred

Job Location- Mohali, Delhi/ NCR

Job Type: Full-time

Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
6 - 12 yrs
₹20L - ₹36L / yr
Java
Spring Boot
RESTful APIs
Agile/Scrum
Team leadership
+7 more

About the Company


We are hiring for a fast-growing, well-funded product startup backed by a leadership team with a proven track record of building billion-dollar digital businesses. The company is focused on delivering enterprise-grade SaaS products in the Cybersecurity domain for B2B markets. You’ll be part of a passionate and dynamic engineering team building innovative solutions using modern tech stacks.


Key Responsibilities

  • Design and develop scalable microservices using Java and Spring Boot

  • Build and manage robust RESTful APIs

  • Collaborate with cross-functional teams in an Agile setup

  • Lead and mentor junior engineers, driving technical excellence

  • Contribute to architecture discussions and code reviews

  • Work with PostgreSQL, implement data integrity and consistency

  • Deploy and manage services on cloud platforms like GCP or Azure

  • Utilize Docker/Kubernetes for containerization and orchestration


Must-Have Skills


  • Strong backend experience with Java, Spring Boot, REST APIs

  • Proficiency in frontend development with React.js

  • Experience with PostgreSQL and database optimization

  • Hands-on with cloud platforms (GCP or Azure)

  • Familiarity with Docker and Kubernetes

  • Strong understanding of:

  • API Gateways

  • Hibernate & JPA

  • Transaction management & ACID properties

  • Multi-threading and context switching


Good to Have

  • Experience in Cybersecurity or Healthcare domain

  • Exposure to CI/CD pipelines and DevOps practices


Read more
Egen Solutions
Posted by Hemavathi Panduri
Hyderabad
4 - 8 yrs
₹12L - ₹25L / yr
Python
Google Cloud Platform (GCP)
ETL
Apache Airflow

We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.


Key Responsibilities:

  • Design, develop, test, and maintain scalable ETL data pipelines using Python.
  • Work extensively on Google Cloud Platform (GCP) services such as:
  • Dataflow for real-time and batch data processing
  • Cloud Functions for lightweight serverless compute
  • BigQuery for data warehousing and analytics
  • Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  • Google Cloud Storage (GCS) for managing data at scale
  • IAM for access control and security
  • Cloud Run for containerized applications
  • Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
  • Implement and enforce data quality checks, validation rules, and monitoring.
  • Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
  • Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
  • Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
  • Document pipeline designs, data flow diagrams, and operational support procedures.
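As a rough illustration of the extract-transform-load flow described above, here is a minimal, self-contained Python sketch using only the standard library. In a real pipeline the extract step would read from GCS or a source database and the load step would write to BigQuery; the CSV data and field names here are hypothetical:

```python
# Toy ETL pipeline: parse CSV, cleanse rows, produce a load summary.
import csv
import io

def extract(raw_csv: str) -> list:
    """Parse raw CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list) -> list:
    """Cleanse rows: strip whitespace, cast amounts, drop invalid records."""
    out = []
    for row in rows:
        try:
            out.append({"id": row["id"].strip(),
                        "amount": float(row["amount"])})
        except (KeyError, ValueError):
            continue  # data-quality check: skip malformed rows
    return out

def load(rows: list) -> dict:
    """Stand-in for a warehouse write; returns a summary for validation."""
    return {"row_count": len(rows),
            "total_amount": sum(r["amount"] for r in rows)}

raw = "id,amount\n a1 ,10.5\na2,not-a-number\na3,4.5\n"
summary = load(transform(extract(raw)))
print(summary)  # {'row_count': 2, 'total_amount': 15.0}
```

The same shape scales up naturally: each stage becomes a Dataflow step or a Cloud Composer task, with the summary feeding validation and monitoring.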

Required Skills:

  • 4–8 years of hands-on experience in Python for backend or data engineering projects.
  • Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
  • Solid understanding of data pipeline architecture, data integration, and transformation techniques.
  • Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
  • Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).



Read more
venanalytics

Posted by Rincy jain
Remote only
3 - 5 yrs
₹9L - ₹15L / yr
Google Cloud Platform (GCP)
Docker
Kubernetes
Django
API
+1 more

About the Role


We are looking for a highly motivated DevOps Engineer with a strong background in cloud technologies, big data ecosystems, and software development lifecycles to lead cross-functional teams in delivering high-impact projects. The ideal candidate will combine excellent project management skills with technical acumen in GCP, DevOps, and Python-based applications.


Key Responsibilities


  • Lead end-to-end project planning, execution, and delivery, ensuring alignment with business goals and timelines.
  • Create and maintain project documentation including detailed timelines, sprint boards, risk logs, and weekly status reports.
  • Facilitate Agile ceremonies: daily stand-ups, sprint planning, retrospectives, and backlog grooming.
  • Actively manage risks, scope changes, resource allocation, and project dependencies to ensure delivery without disruptions.
  • Ensure compliance with QA processes and security/compliance standards throughout the SDLC.
  • Collaborate with stakeholders and senior leadership to communicate progress, blockers, and key milestones.
  • Provide mentorship and support to cross-functional team members to drive continuous improvement and team performance.
  • Coordinate with clients and act as a key point of contact for requirement gathering, updates, and escalations.


Required Skills & Experience



Cloud & DevOps

  • Proficient in Google Cloud Platform (GCP) services: Compute, Storage, Networking, IAM.
  • Hands-on experience with cloud deployments and infrastructure as code.
  • Strong working knowledge of CI/CD pipelines, Docker, Kubernetes, and Terraform (or similar tools).

Big Data & Data Engineering

  • Experience with large-scale data processing using tools like PySpark, Hadoop, Hive, HDFS, and Spark Streaming (preferred).
  • Proven experience in managing and optimizing big data pipelines and ensuring high performance.

Programming & Frameworks

  • Strong proficiency in Python with experience in Django (REST APIs, ORM, deployment workflows).
  • Familiarity with Git and version control best practices.
  • Basic knowledge of Linux administration and shell scripting.

Nice to Have

  • Knowledge or prior experience in the Media & Advertising domain.
  • Experience in client-facing roles and handling stakeholder communications.
  • Proven ability to manage technical teams (5–6 members).

 

Why Join Us?

  • Work on cutting-edge cloud and data engineering projects
  • Collaborate with a talented, fast-paced team
  • Flexible work setup and culture of ownership



Read more
A leading software company

Agency job via BOS consultants by Manka Joshi
Remote only
6 - 9 yrs
₹12L - ₹15L / yr
Databricks
PySpark
Large Language Models (LLM)
Vector database
Google Cloud Platform (GCP)
+1 more

1. Solid Databricks & PySpark experience

2. Must have worked in projects dealing with data at terabyte scale

3. Must have knowledge of Spark optimization techniques

4. Must have experience setting up job pipelines in Databricks

5. Basic knowledge of GCP and BigQuery is required

6. Understanding of LLMs and vector databases

Read more
Cymetrix Software

Posted by Netra Shettigar
Mumbai
4 - 8 yrs
₹8L - ₹16L / yr
C#
.NET
.NET Compact Framework
SQL
Microsoft Windows Azure
+4 more

Key Responsibilities:

● Design, develop, and maintain scalable web applications using .NET Core, .NET Framework, C#, and related technologies.

● Participate in all phases of the SDLC, including requirements gathering, architecture design, coding, testing, deployment, and support.

● Build and integrate RESTful APIs, and work with SQL Server, Entity Framework, and modern front-end technologies such as Angular, React, and JavaScript.

● Conduct thorough code reviews, write unit tests, and ensure adherence to coding standards and best practices.

● Lead or support .NET Framework to .NET Core migration initiatives, ensuring minimal disruption and optimal performance.

● Implement and manage CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI/CD.

● Containerize applications using Docker and deploy/manage them on orchestration platforms like Kubernetes or GKE.

● Lead and execute database migration projects, particularly transitioning from SQL Server to PostgreSQL.

● Manage and optimize Cloud SQL for PostgreSQL, including configuration, tuning, and ongoing maintenance.

● Leverage Google Cloud Platform (GCP) services such as GKE, Cloud SQL, Cloud Run, and Dataflow to build and maintain cloud-native solutions.

● Handle schema conversion and data transformation tasks as part of migration and modernization efforts.


Required Skills & Experience:

● 5+ years of hands-on experience with C#, .NET Core, and .NET Framework.

● Proven experience in application modernization and cloud-native development.

● Strong knowledge of containerization (Docker) and orchestration tools like Kubernetes/GKE.

● Expertise in implementing and managing CI/CD pipelines.

● Solid understanding of relational databases and experience in SQL Server to PostgreSQL migrations.

● Familiarity with cloud infrastructure, especially GCP services relevant to application hosting and data processing.

● Excellent problem-solving, communication,
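The schema-conversion work mentioned above largely comes down to mapping column types between engines. A heavily simplified, illustrative Python sketch follows; real migrations rely on dedicated tooling (e.g. pgloader or GCP Database Migration Service) and cover far more types and edge cases:

```python
# Tiny, illustrative lookup of common SQL Server -> PostgreSQL type mappings.
TYPE_MAP = {
    "NVARCHAR": "VARCHAR",
    "DATETIME": "TIMESTAMP",
    "BIT": "BOOLEAN",
    "UNIQUEIDENTIFIER": "UUID",
    "MONEY": "NUMERIC(19,4)",
}

def convert_column_type(mssql_type: str) -> str:
    """Map a SQL Server column type to a PostgreSQL equivalent,
    preserving any length/precision suffix like (255)."""
    base = mssql_type.split("(")[0].strip().upper()
    suffix = mssql_type[len(base):] if "(" in mssql_type else ""
    return TYPE_MAP.get(base, base) + suffix

print(convert_column_type("NVARCHAR(255)"))  # VARCHAR(255)
print(convert_column_type("DATETIME"))       # TIMESTAMP
```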

Read more
Digital Media Streaming / OTT Platform

Agency job via FxConsulting by RADHIKA PASRICHA
Hyderabad
10 - 15 yrs
₹55L - ₹70L / yr
Python
Microservices
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

About You:

  • Education ranging from a Bachelor of Science degree in computer science to a related engineering degree.
  • 12+ years of high-level API, abstraction layer, and application software development experience.
  • 5+ years of experience building scalable, serverless solutions in GCP or AWS.
  • 4+ years of experience in Python and MongoDB.
  • Experience with large-scale distributed systems and streaming data services.
  • Experience building, developing, and maintaining cloud-native infrastructure, serverless architecture, micro-operations, and workflow automation.
  • You are a hardworking problem-solver who thrives on finding solutions to difficult technical challenges.
  • Experience with modern high-level languages and databases, including JavaScript, MongoDB, and Python.
  • Experience with GitHub, GitLab, CI/CD, Jira, unit testing, integration testing, regression testing, and collaborative documentation.
  • Expertise with GCP, Kubernetes, Docker, or containerization is a great plus.
  • Ability to write and assess clean, functional, high-quality, and testable code for each of our projects.
  • Positive, proactive, solution-focused contributor and team motivator.

Read more
appscrip

Posted by Kanika Gaur
Bengaluru (Bangalore)
2 - 4 yrs
₹4L - ₹10L / yr
Amazon Web Services (AWS)
Windows Azure
DevOps
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

Read more
venanalytics

Posted by Rincy jain
Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Google Cloud Platform (GCP)
DevOps
Python
Big Data
CI/CD
+3 more

About the Role


We are looking for a highly motivated Project Manager with a strong background in cloud technologies, big data ecosystems, and software development lifecycles to lead cross-functional teams in delivering high-impact projects. The ideal candidate will combine excellent project management skills with technical acumen in GCP, DevOps, and Python-based applications.


Key Responsibilities

  • Lead end-to-end project planning, execution, and delivery, ensuring alignment with business goals and timelines.
  • Create and maintain project documentation including detailed timelines, sprint boards, risk logs, and weekly status reports.
  • Facilitate Agile ceremonies: daily stand-ups, sprint planning, retrospectives, and backlog grooming.
  • Actively manage risks, scope changes, resource allocation, and project dependencies to ensure delivery without disruptions.
  • Ensure compliance with QA processes and security/compliance standards throughout the SDLC.
  • Collaborate with stakeholders and senior leadership to communicate progress, blockers, and key milestones.
  • Provide mentorship and support to cross-functional team members to drive continuous improvement and team performance.
  • Coordinate with clients and act as a key point of contact for requirement gathering, updates, and escalations.


Required Skills & Experience


Cloud & DevOps

  • Proficient in Google Cloud Platform (GCP) services: Compute, Storage, Networking, IAM.
  • Hands-on experience with cloud deployments and infrastructure as code.
  • Strong working knowledge of CI/CD pipelines, Docker, Kubernetes, and Terraform (or similar tools).

Big Data & Data Engineering

  • Experience with large-scale data processing using tools like PySpark, Hadoop, Hive, HDFS, and Spark Streaming (preferred).
  • Proven experience in managing and optimizing big data pipelines and ensuring high performance.

Programming & Frameworks

  • Strong proficiency in Python with experience in Django (REST APIs, ORM, deployment workflows).
  • Familiarity with Git and version control best practices.
  • Basic knowledge of Linux administration and shell scripting.


Nice to Have

  • Knowledge or prior experience in the Media & Advertising domain.
  • Experience in client-facing roles and handling stakeholder communications.
  • Proven ability to manage technical teams (5–6 members).


Why Join Us?

  • Work on cutting-edge cloud and data engineering projects
  • Collaborate with a talented, fast-paced team
  • Flexible work setup and culture of ownership
  • Continuous learning and upskilling environment
  • Inclusive health benefits included




Read more
US healthcare company

Agency job via People Impact by Ranjita Shrivastava
Hyderabad, Chennai
11 - 20 yrs
₹50L - ₹60L / yr
Generative AI
Python
TensorFlow
Google Cloud Platform (GCP)
POC

Job Title: AI Solutioning Architect – Healthcare IT

Role Summary:

The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).

Key Responsibilities:

  • Architect scalable AI solutions from data ingestion to deployment.
  • Align AI initiatives with business objectives and regulatory requirements (HIPAA).
  • Collaborate with cross-functional teams to deliver AI projects.
  • Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
  • Mentor technical teams and ensure best practices in MLOps.
  • Communicate complex concepts to diverse stakeholders.

Qualifications:

  • Bachelor’s/Master’s in Computer Science or related field.
  • 12+ years in software development/architecture with strong AI/ML focus.
  • Experience in healthcare IT and compliance (HIPAA).
  • Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
  • Hands-on with GCP (preferred) or other cloud platforms.
  • Strong leadership, problem-solving, and communication skills.


Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
10 - 18 yrs
₹35L - ₹54L / yr
React.js
JavaScript
TypeScript
Micro-Frontend Architecture
webpack
+7 more

Job Title : Lead Web Developer / Frontend Engineer

Experience Required : 10+ Years

Location : Bangalore (Hybrid – 3 Days Work From Office)

Work Timings : 11:00 AM to 8:00 PM IST

Notice Period : Immediate or Up to 30 Days (Preferred)

Work Mode : Hybrid

Interview Mode : Face-to-Face mandatory (for Round 2)


Role Overview :

We are hiring a Lead Frontend Engineer with 10+ Years of experience to drive the development of scalable, modern, and high-performance web applications.

This is a hands-on technical leadership role focused on React.js, micro-frontends, and Backend for Frontend (BFF) architecture, requiring both coding expertise and team leadership skills.


Mandatory Skills :

React.js, JavaScript/TypeScript, HTML, CSS, micro-frontend architecture, Backend for Frontend (BFF), Webpack, Jenkins (CI/CD), GCP, RDBMS/SQL, Git, and team leadership.


Core Responsibilities :

  • Design and develop cloud-based web applications using React.js, HTML, CSS.
  • Collaborate with UX/UI designers and backend engineers to implement seamless user experiences.
  • Lead and mentor a team of frontend developers.
  • Write clean, well-documented, scalable code using modern JavaScript/TypeScript practices.
  • Implement CI/CD pipelines using Jenkins, deploy applications to CDNs.
  • Integrate with GCP services, optimize front-end performance.
  • Stay updated with modern frontend technologies and design patterns.
  • Use Git for version control and collaborative workflows.
  • Implement JavaScript libraries for web analytics and performance monitoring.


Key Requirements :

  • 10+ years of experience as a frontend/web developer.
  • Strong proficiency in React.js, JavaScript/TypeScript, HTML, CSS.
  • Experience with micro-frontend architecture and Backend for Frontend (BFF) patterns.
  • Proficiency in frontend design frameworks and libraries (jQuery, Node.js).
  • Strong understanding of build tools like Webpack, CI/CD using Jenkins.
  • Experience with GCP and deploying to CDNs.
  • Solid experience in RDBMS, SQL.
  • Familiarity with Git and agile development practices.
  • Excellent debugging, problem-solving, and communication skills.
  • Bachelor’s/Master’s in Computer Science or a related field.


Nice to Have :

  • Experience with Node.js.
  • Previous experience working with web analytics frameworks.
  • Exposure to JavaScript observability tools.


Interview Process :

1. Round 1 : Online Technical Interview (via Geektrust – 1 Hour)

2. Round 2 : Face-to-Face Interview with the Indian team in Bangalore (3 Hours – Mandatory)

3. Round 3 : Online Interview with CEO (30 Minutes)


Important Notes :

  • Face-to-face interview in Bangalore is mandatory for Round 2.
  • Preference given to candidates currently in Bangalore or willing to travel for interviews.
  • Remote applicants who cannot attend the in-person round will not be considered.
YOptima Media Solutions Pvt Ltd
Bengaluru (Bangalore)
8 - 12 yrs
₹40L - ₹60L / yr
skill iconReact.js
skill iconNodeJS (Node.js)
Google Cloud Platform (GCP)
LangChain
Generative AI

Why This Role Matters

We’re looking for a Principal Engineer to lead the architecture and execution of our GenAI-powered, self-serve marketing platforms. You will work directly with the CEO to shape, build, and scale products that change how marketers interact with data and AI. This is intrapreneurship in action — not a sandbox innovation lab, but a real-world product with traction, velocity, and high stakes.


What You'll Do

  • Co-own product architecture and direction alongside the CEO.
  • Build GenAI-native, full-stack platforms from MVP to scale — powered by LLMs, agents, and predictive AI.
  • Own the full stack: React (frontend), Node.js/Python (backend), GCP (infra), BigQuery (data), and vector databases (AI).
  • Lead a lean, high-caliber team with a hands-on, unblock-and-coach mindset.
  • Drive rapid iteration with rigor, balancing short-term delivery with long-term resilience.
  • Ensure scalability, observability, and fault tolerance in multi-tenant, cloud-native environments.
  • Bridge business and tech — aligning execution with evolving user and market insights.


What You Bring

  • 8–12 years of experience building and scaling full-stack, data-heavy or AI-driven products.
  • Fluency in React, Node.js, and Google Cloud (Functions, BigQuery, Cloud SQL, Airflow, etc.).
  • Hands-on experience with GenAI tools (LangChain, OpenAI APIs, LlamaIndex) is a bonus.
  • Track record of shipping products from ambiguity to impact.
  • Strong product mindset — your goal is user value, not just elegant code.
  • Architectural leadership with ownership of engineering rigor and scaling best practices.
  • Startup or founder DNA — you’ve built things from scratch and know how to move fast without breaking things.


Who You Are

  • A former founder, senior IC, or tech lead who’s done zero-to-one and 1-to-n scaling.
  • Hungry for ownership and velocity — frustrated by bureaucracy or stagnation.
  • You code because you care about solving real problems for real users.
  • You’re pragmatic, hands-on, and grounded in first principles.
  • You understand that great software isn't just shipped — it's hardened, maintained, and evolves with minimal manual effort.
  • You’re open to evolving into a founding engineer role with influence over the tech vision and culture.


What You Get

  • Equity in a high-growth product-led startup.
  • A chance to build global products out of India with full-stack and GenAI innovation.
  • Access to high-context decision-making and direct collaboration with the CEO.
  • A tight, ego-free team and a culture that values clarity, ownership, learning, and candor.


Why YOptima?

YOptima is redefining how leading marketers unlock growth through full-funnel, AI-powered media solutions. As part of our growth journey, this is your opportunity to own the growth charter for leading brands and agencies globally and shape the narrative of a next-generation marketing platform.


Ready to lead, build, and scale?

We’d love to hear from you.


GoFloaters
Sundar Shyam
Posted by Sundar Shyam
Chennai
1 - 2 yrs
₹3L - ₹6L / yr
skill iconReact.js
skill iconReact Native
skill iconNodeJS (Node.js)
Google Cloud Platform (GCP)

About the Role:

We’re looking for a skilled developer to build and maintain web and mobile apps using React, React Native, and Node.js. You’ll work on both the frontend and backend, collaborating with our team to deliver high-quality products.

What You’ll Do:

  • Build and maintain full stack applications for web and mobile
  • Write clean, efficient code with React, React Native, and Node.js
  • Work with designers and other developers to deliver new features
  • Debug, troubleshoot, and optimize existing apps
  • Stay updated on the latest tech and best practices

What We’re Looking For:

  • Solid experience with React, React Native, and Node.js
  • Comfortable building both web and mobile applications
  • Good understanding of REST APIs and databases
  • Familiar with Git and agile workflows
  • Team player with clear communication skills

Nice to Have:

  • Experience with testing and CI/CD
  • Knowledge of UI/UX basics


Ongrid
Kapil bhardwaj
Posted by Kapil bhardwaj
Gurugram
5 - 8 yrs
₹20L - ₹30L / yr
skill iconJava
Spring
Microservices
skill iconDocker
+13 more

Requirements

  • Bachelor's/Master's in Computer Science or a related field
  • 5-8 years of relevant experience
  • Proven track record of successfully leading and mentoring a team
  • Experience with web technologies and microservices architecture, both frontend and backend
  • Java, Spring Framework, Hibernate
  • MySQL, MongoDB, Solr, Redis
  • Kubernetes, Docker
  • Strong understanding of Object-Oriented Programming, Data Structures, and Algorithms
  • Excellent teamwork skills, flexibility, and ability to handle multiple tasks
  • Experience with API design and the ability to architect and implement an intuitive customer and third-party integration story
  • Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services
  • Exceptional design and architectural skills
  • Experience with cloud providers/platforms like GCP and AWS


Roles & Responsibilities

  • Develop new user-facing features.
  • Work alongside the product team to understand requirements, then design, develop, and iterate while thinking through the complex architecture.
  • Writing clean, reusable, high-quality, high-performance, maintainable code.
  • Encourage innovation and efficiency improvements to ensure processes are productive.
  • Ensure the training and mentoring of the team members.
  • Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed.
  • Research and apply new technologies, techniques, and best practices.
  • Team mentorship and leadership.



Product company for financial operations automation platform


Agency job
via Esteem leadership by Suma Raju
Hyderabad
4 - 5 yrs
₹20L - ₹25L / yr
skill iconPython
skill iconKubernetes
Google Cloud Platform (GCP)
skill iconJava
skill iconAmazon Web Services (AWS)

Mandatory Criteria :

  • Strong hands-on experience with Kubernetes, with at least 2 years in production environments.
  • Expertise in at least one public cloud platform: GCP (preferred), AWS, Azure, or OCI.
  • Proficiency in backend programming with Python, Java, or Kotlin (at least one is required).
  • Strong backend development experience.
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.


About the Role


We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.

# Experience with Kubernetes is mandatory.


Key Responsibilities

  • Design and develop scalable, reliable backend services and cloud-native applications.
  • Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
  • Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
  • Implement and manage CI/CD pipelines and infrastructure automation.
  • Collaborate with frontend, DevOps, and product teams in an agile environment.
  • Ensure high code quality through testing, reviews, and documentation.
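One of the bullets above calls out asynchronous data processing systems. As a rough, hypothetical sketch of that pattern (all names here are invented for illustration, not from the role), a bounded pool of asyncio workers can drain a queue of records concurrently:

```python
import asyncio

async def fetch_record(record_id: int) -> dict:
    # Placeholder for an async I/O call (API read, database query, etc.).
    await asyncio.sleep(0)
    return {"id": record_id, "value": record_id * 10}

async def worker(queue: asyncio.Queue, results: list) -> None:
    while True:
        record_id = await queue.get()
        try:
            results.append(await fetch_record(record_id))
        finally:
            queue.task_done()

async def process_all(record_ids, concurrency: int = 4) -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    for rid in record_ids:
        queue.put_nowait(rid)
    # A fixed number of workers bounds concurrency regardless of input size.
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(concurrency)]
    await queue.join()   # block until every queued item has been processed
    for w in workers:
        w.cancel()       # the idle workers can now be shut down
    return results
```

The same shape (queue in, bounded workers, join, cancel) carries over to most async ingestion tasks; only `fetch_record` changes. For example, `asyncio.run(process_all([1, 2, 3]))` processes all three records with at most four calls in flight.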

 

Required Skills

  • Strong hands-on experience with Kubernetes, with at least 2 years in production environments (mandatory).
  • Expertise in at least one public cloud platform: GCP (preferred), AWS, Azure, or OCI.
  • Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
  • Solid understanding of distributed systems, microservices, and cloud-native architecture.
  • Experience with containerization using Docker and Kubernetes-native deployment workflows.
  • Working knowledge of SQL and relational databases.

  

Preferred Qualifications

  • Experience working across multiple cloud platforms.
  • Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
  • Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
  • Hands-on experience with BigQuery or Snowflake for data analytics and integration.

 

Nice to Have

  • Knowledge of NoSQL databases or event-driven/message-based architectures.
  • Experience with serverless services, managed data pipelines, or data lake platforms.


Wissen Technology

at Wissen Technology

4 recruiters
Vishakha Walunj
Posted by Vishakha Walunj
Mumbai
5 - 10 yrs
Best in industry
Terraform
Windows Azure
Google Cloud Platform (GCP)
DevOps
CI/CD

Key Skills Required:

  • Strong hands-on experience with Terraform
  • Proficiency in CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps, etc.)
  • Experience working on Azure or GCP cloud platforms (at least one is mandatory)
  • Good understanding of DevOps practices


Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
4 - 8 yrs
₹14L - ₹26L / yr
skill iconPython
PySpark
skill iconDjango
skill iconFlask
RESTful APIs
+3 more

Job title - Python developer

Exp – 4 to 6 years

Location – Pune / Mumbai / Bengaluru

 

Please find the job description below.

Requirements:

  • Proven experience as a Python Developer
  • Strong knowledge of core Python and PySpark concepts
  • Experience with web frameworks such as Django or Flask
  • Good exposure to any cloud platform (GCP Preferred)
  • CI/CD exposure required
  • Solid understanding of RESTful APIs and how to build them
  • Experience working with databases like Oracle DB and MySQL
  • Ability to write efficient SQL queries and optimize database performance
  • Strong problem-solving skills and attention to detail
  • Strong SQL programming skills (stored procedures, functions)
  • Excellent communication and interpersonal skills

Roles and Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using PySpark
  • Work closely with data scientists and analysts to provide them with clean, structured data.
  • Optimize data storage and retrieval for performance and scalability.
  • Collaborate with cross-functional teams to gather data requirements.
  • Ensure data quality and integrity through data validation and cleansing processes.
  • Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
  • Stay up to date with industry best practices and emerging technologies in data engineering.
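To make the data validation and cleansing responsibility above concrete, here is a toy sketch written in plain Python so it runs without a Spark cluster; in the actual pipeline the same rules would be applied over PySpark DataFrames, and the field names and rules below are invented for illustration:

```python
from typing import Optional

def clean_record(raw: dict) -> Optional[dict]:
    """Return a normalized record, or None if it fails validation."""
    # Rule 1: required fields must be present and non-empty.
    if not raw.get("id") or raw.get("amount") in (None, ""):
        return None
    # Rule 2: amount must parse as a non-negative number.
    try:
        amount = float(raw["amount"])
    except (TypeError, ValueError):
        return None
    if amount < 0:
        return None
    # Rule 3: normalize free-text fields.
    return {"id": str(raw["id"]).strip(),
            "amount": round(amount, 2),
            "currency": str(raw.get("currency", "INR")).upper()}

def clean_batch(records: list) -> list:
    # Drop records that fail validation; keep the cleaned rest.
    return [r for r in map(clean_record, records) if r is not None]
```

Keeping each rule as a small, testable function is what makes it straightforward to port the logic into PySpark UDFs or column expressions later.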
Wissen Technology

at Wissen Technology

4 recruiters
Shikha Nagar
Posted by Shikha Nagar
Pune, Mumbai, Bengaluru (Bangalore)
8 - 10 yrs
Best in industry
Terraform
Google Cloud Platform (GCP)
skill iconKubernetes
DevOps
SQL Azure

We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.

You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.


Key Responsibilities:

1. Cloud Infrastructure Design & Management

· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tools.

· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.

· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.

· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.

2. Kubernetes & Container Orchestration

· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).

· Work with Helm charts, Istio, and service meshes for microservices deployments.

· Automate scaling, rolling updates, and zero-downtime deployments.


3. Serverless & Compute Services

· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.

· Optimize containerized applications running on Cloud Run for cost efficiency and performance.


4. CI/CD & DevOps Automation

· Design, implement, and manage CI/CD pipelines using Azure DevOps.

· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.

· Integrate security and compliance checks into the DevOps workflow (DevSecOps).


Required Skills & Qualifications:

✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.


CoinCROWD
Supriya Singh
Posted by Supriya Singh
Remote only
5 - 10 yrs
₹25L - ₹45L / yr
skill iconDocker
skill iconKubernetes
Google Cloud Platform (GCP)
Terraform
skill iconJenkins
+4 more

Who we are


At CoinCROWD, we're building the next-gen wallet for real-world crypto utility. Our flagship product, CROWD Wallet, is secure, intuitive, gasless, and designed to bring digital currencies into everyday spending, from a coffee shop to cross-border payments.


We're redefining the wallet experience for everyday users, combining the best of Web3 + AI to create a secure, scalable, and delightful platform.


We're more than just a blockchain company; we're an AI-native, crypto-forward startup. We ship fast, think long, and believe in building agentic, self-healing infrastructure that can scale across geographies and blockchains. If that excites you, let's talk.


What You'll Be Doing:


As the DevOps Lead at CoinCROWD, you'll own our infrastructure from end to end, designing, deploying, and scaling secure systems to support blockchain transactions, AI agents, and token operations across global users.


You will:


- Lead the CI/CD, infra automation, observability, and multi-region deployments of CoinCROWD products.


- Manage cloud and container infrastructure using GCP, Docker, Kubernetes, Terraform.


- Deploy and maintain scalable, secure blockchain infrastructure using QuickNode, Alchemy, Web3Auth, and other Web3 APIs.


- Implement infrastructure-level AI agents or scripts for auto-scaling, failure prediction, anomaly detection, and alert management (using LangChain, LLMs, or tools like n8n).


- Ensure 99.99% uptime for wallet systems, APIs, and smart contract layers.


- Build and optimize observability across on-chain/off-chain systems using tools like Prometheus, Grafana, Sentry, Loki, and the ELK Stack.


- Create auto-healing, self-monitoring pipelines that reduce human ops time via Agentic AI workflows.


- Collaborate with engineering and security teams on smart contract deployment pipelines, token rollouts, and app store release automation.


Agentic Ops: What it means


- Use GPT-based agents to auto-document infra changes or failure logs.


- Run LangChain agents that triage alerts, perform log analysis, or suggest infra optimizations.


- Build CI/CD workflows that self-update or auto-tune based on system usage.


- Integrate AI to detect abnormal wallet behaviors, fraud attempts, or suspicious traffic spikes
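As a hedged illustration of the anomaly-detection idea in the list above, a z-score check over a metric series is the simplest possible baseline; real systems would source the data from Prometheus or logs, and the threshold below is an assumption, not a tuned value:

```python
import statistics

def anomalies(samples: list, threshold: float = 3.0) -> list:
    """Return indices of samples more than `threshold` stdevs from the mean."""
    if len(samples) < 2:
        return []
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        # A flat series has no outliers by this definition.
        return []
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]
```

A detector like this would sit behind the alert-triage agent: the agent only reasons about the indices the statistical check flags, rather than scanning raw metrics itself.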


What We're Looking For:


- 5 to 10 years of DevOps/SRE experience, with at least 2 to 3 years in Web3, fintech, or high-scale infra.


- Deep expertise with Docker, Kubernetes, Helm, and cloud providers (GCP preferred).


- Hands-on with Terraform, Ansible, GitHub Actions, Jenkins, or similar IAC and pipeline tools.


- Experience maintaining or scaling blockchain infra (EVM nodes, RPC endpoints, APIs).


- Understanding of smart contract CI/CD, token lifecycle (ICO, vesting, etc.), and wallet integrations.


- Familiarity with AI DevOps tools, or interest in building LLM-enhanced internal tooling.


- Strong grip on security best practices, key management, and secrets infrastructure (Vault, SOPS, AWS KMS).


Bonus Points:


- You've built or run infra for a token launch, DEX, or high-TPS crypto wallet.


- You've deployed or automated a blockchain node network at scale.


- You've used AI/LLMs to write ops scripts, manage logs, or analyze incidents.


- You've worked with systems handling real-money movement with tight uptime and security requirements.


Why Join CoinCROWD:


- Equity-first model: Build real value as we scale.


- Be the architect of infrastructure that supports millions of real-world crypto transactions.


- Build AI-powered ops that scale without a 24/7 pager culture.


- Work remotely with passionate people who ship fast and iterate faster.


- Be part of one of the most ambitious crossovers of AI + Web3 in 2025.


InvestPulse

at InvestPulse

2 candid answers
1 product
Invest Pulse
Posted by Invest Pulse
Remote only
2 - 5 yrs
₹3L - ₹6L / yr
skill iconPython
LangChain
CrewAI
skill iconReact.js
skill iconPostgreSQL
+5 more

LendFlow is an AI-powered home loan assessment platform that helps mortgage brokers and lenders save hours by automating document analysis, income validation, and serviceability assessment. We turn complex financial documents into clear insights—fast.

We’re building a smart assistant that ingests client docs (bank statements, payslips, loan summaries) and uses modular AI agents to extract, classify, and summarize financial data in minutes, not hours. Think OCR + AI agents + compliance-ready outputs.


🛠️ What You’ll Be Building

As part of our early technical team, you’ll help us develop and launch our MVP. Key modules include:

  • Document ingestion and OCR processing (Textract, Document AI)
  • AI agent workflows using LangChain or CrewAI
  • Serviceability calculators with business rule engines
  • React + Next.js frontend for brokers and analysts
  • FastAPI backend with PostgreSQL
  • Security, encryption, audit logging (privacy-first design)
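To make the serviceability-calculator module above concrete, here is a minimal, hypothetical sketch: the amortized-repayment formula is the standard one, but the rate buffer, defaults, and rule are invented placeholders, not real lending policy:

```python
def monthly_repayment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortized loan repayment formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def assess_serviceability(monthly_income: float,
                          monthly_expenses: float,
                          loan_amount: float,
                          annual_rate: float = 0.06,
                          rate_buffer: float = 0.03,
                          term_years: int = 30) -> dict:
    # Business rule: stress-test the rate with a buffer before assessing,
    # so the borrower is evaluated against a higher hypothetical repayment.
    stressed = monthly_repayment(loan_amount, annual_rate + rate_buffer, term_years)
    surplus = monthly_income - monthly_expenses - stressed
    return {"stressed_repayment": round(stressed, 2),
            "monthly_surplus": round(surplus, 2),
            "serviceable": surplus >= 0}
```

Expressing each policy as a pure function like this is what lets a business rule engine swap buffers and thresholds per lender without touching the calculation core.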


🎯 We’re Looking For:

Must-Have Skills:

  • Strong experience with Python (FastAPI, OCR, LLMs, prompt engineering)
  • Familiarity with AI agent frameworks (LangChain, CrewAI, Autogen, or similar)
  • Frontend skills in React.js / Next.js
  • Experience with PostgreSQL and cloud storage (AWS/GCP)
  • Understanding of financial documents and data privacy best practices

Bonus Points:

  • Experience with OCR tools like Amazon Textract, Tesseract, or Document AI
  • Building ML/NLP pipelines in real-world apps
  • Prior work in fintech, lending, or proptech sectors


a leading company


Agency job
via BOS consultants by Manka Joshi
Remote only
3 - 7 yrs
₹18L - ₹24L / yr
Google Cloud Platform (GCP)
Adobe Experience Manager (AEM)

Must have handled at least one project of medium-to-high complexity involving the migration of ETL pipelines and data warehouses to the cloud.


Min 3 years of experience with premium consulting companies.


Mandatory experience in GCP.
