
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Bengaluru (Bangalore)
1 - 8 yrs
₹12L - ₹34L / yr
Python
Django
React.js
FastAPI
TypeScript
+7 more

Please note that salary will be based on experience.


Job Title: Full Stack Engineer

Location: Bengaluru (Indiranagar) – Work From Office (5 Days)

Job Summary

We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.

Responsibilities

  • Design, develop, and maintain scalable full-stack applications.
  • Build responsive, high-performance UIs using TypeScript & Next.js.
  • Develop backend services and APIs using Python (FastAPI/Django).
  • Work closely with product, design, and business teams to translate requirements into intuitive solutions.
  • Contribute to architecture discussions and drive technical best practices.
  • Own features end-to-end — design, development, testing, deployment, and monitoring.
  • Ensure robust security, code quality, and performance optimization.

Tech Stack

Frontend: TypeScript, Next.js, React, Tailwind CSS

Backend: Python, FastAPI, Django

Databases: PostgreSQL, MongoDB, Redis

Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD

Other Tools: Git, GitHub, Elasticsearch, Observability tools
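For orientation, the backend half of this stack reduces to serving JSON over HTTP routes. A stdlib-only sketch of that idea as a WSGI app (the interface Django speaks under the hood; FastAPI's ASGI is the async analogue) — the `ITEMS` data and the `/items/<id>` route are invented for the example:

```python
import json
from wsgiref.util import setup_testing_defaults

# A REST endpoint stripped to its essence. Frameworks like FastAPI/Django
# replace this hand-rolled routing with declarative, decorated handlers.
ITEMS = {"1": {"id": "1", "name": "example"}}

def app(environ, start_response):
    parts = environ.get("PATH_INFO", "/").strip("/").split("/")
    if len(parts) == 2 and parts[0] == "items" and parts[1] in ITEMS:
        body = json.dumps(ITEMS[parts[1]]).encode()
        start_response("200 OK", [("Content-Type", "application/json"),
                                  ("Content-Length", str(len(body)))])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

def call(app, path):
    """Invoke the app in-process, the way WSGI test clients do."""
    environ = {}
    setup_testing_defaults(environ)
    environ["PATH_INFO"] = path
    status = []
    body = app(environ, lambda s, h: status.append(s))
    return status[0], b"".join(body)

status, body = call(app, "/items/1")
assert status == "200 OK"
assert json.loads(body)["name"] == "example"
```

The same app can be mounted on any WSGI server; the framework's job is the routing, validation, and serialization this sketch does by hand.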

Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience.
  • Strong expertise in either frontend (TypeScript/Next.js) or backend (Python/FastAPI/Django), with familiarity in both.
  • Experience building RESTful services and microservices.
  • Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
  • Strong debugging, problem-solving, and optimization skills.
  • Ability to thrive in fast-paced, high-ownership startup environments.

Good-to-Have:

  • Exposure to Docker, Kubernetes, and observability tools.
  • Experience with message queues or event-driven architecture.


Perks & Benefits

  • Upskilling support – courses, tools & learning resources.
  • Fun team outings, hackathons, demos & engagement initiatives.
  • Flexible Work-from-Home: 12 WFH days every 6 months.
  • Menstrual WFH: up to 3 days per month.
  • Mobility benefits: relocation support & travel allowance.
  • Parental support: maternity, paternity & adoption leave.


CoffeeBeans

Posted by Nikita Sinha
Hyderabad
4 - 8 yrs
Upto ₹28L / yr (Varies)
Java
Microservices
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Kubernetes

Key Responsibilities

  •     Design, develop, and implement backend services using Java (latest version), Spring Boot, and Microservices architecture.
  •     Participate in the end-to-end development lifecycle, from requirement analysis to deployment and support.
  •     Collaborate with cross-functional teams (UI/UX, DevOps, Product) to deliver high-quality, scalable software solutions.
  •     Integrate APIs and manage data flow between services and front-end systems.
  •     Work on cloud-based deployment using AWS or GCP environments.
  •     Ensure performance, security, and scalability of services in production.
  •     Contribute to technical documentation, code reviews, and best practice implementations.

Required Skills:

  •     Strong hands-on experience with Core Java (latest versions), Spring Boot, and Microservices.
  •     Solid understanding of RESTful APIs, JSON, and distributed systems.
  •     Basic knowledge of Kubernetes (K8s) for containerization and orchestration.
  •     Working experience or strong conceptual understanding of cloud platforms (AWS / GCP).
  •     Exposure to CI/CD pipelines, version control (Git), and deployment automation.
  •     Familiarity with security best practices, logging, and monitoring tools.

Preferred Skills:

  •     Experience with end-to-end deployment on AWS or GCP.
  •     Familiarity with payment gateway integrations or fintech applications.
  •     Understanding of DevOps concepts and infrastructure-as-code tools (Added advantage).
GrowthArc

Posted by Reshika Mendiratta
Remote, Bengaluru (Bangalore)
4yrs+
Upto ₹35L / yr (Varies)
Go Programming (Golang)
RESTful APIs
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+4 more

Job Summary:

We are seeking an experienced Golang Developer with 4+ years of hands-on experience to design, develop, and maintain scalable RESTful APIs and microservices. The ideal candidate should be proficient in cloud platforms and have strong problem-solving skills to work in dynamic environments.


Key Responsibilities:

  • Develop and maintain high-quality RESTful APIs using Golang.
  • Design and implement microservices architecture for scalable applications.
  • Collaborate with cross-functional teams to define and deliver features.
  • Deploy, manage, and troubleshoot applications on cloud platforms (AWS, Azure, GCP, etc.).
  • Write efficient, reusable, and testable code following best practices.
  • Participate in code reviews, debugging, and performance tuning.
  • Ensure security and data protection in application development.

Qualifications:

  • 4+ years of professional experience in Golang development.
  • Strong knowledge of RESTful API design and implementation.
  • Hands-on experience with microservices architecture.
  • Familiarity with one or more cloud platforms (AWS, Azure, GCP).
  • Experience with containerization technologies like Docker and Kubernetes is a plus.
  • Good understanding of CI/CD pipelines and DevOps practices.
  • Excellent problem-solving and communication skills.
Tradelab Technologies
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹18L / yr
CI/CD
Jenkins
GitLab
ArgoCD
Amazon Web Services (AWS)
+8 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.


Key Responsibilities

CI/CD and Infrastructure Automation

  • Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
  • Automate deployments using tools such as Terraform, Helm, and Kubernetes
  • Improve build and release processes to support high-performance and low-latency trading applications
  • Work efficiently with Linux/Unix environments

Cloud and On-Prem Infrastructure Management

  • Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
  • Ensure system reliability, scalability, and high availability
  • Implement Infrastructure as Code (IaC) to standardize and streamline deployments

Performance Monitoring and Optimization

  • Monitor system performance and latency using Prometheus, Grafana, and ELK stack
  • Implement proactive alerting and fault detection to ensure system stability
  • Troubleshoot and optimize system components for maximum efficiency
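The latency monitoring described above usually comes down to computing tail percentiles and alerting when a threshold is crossed. A minimal pure-Python sketch of that idea — the sample data and the 50 ms threshold are invented; in production the numbers would come from Prometheus histograms, not an in-process list:

```python
import random

# Hypothetical request-latency samples in milliseconds.
random.seed(7)
latencies_ms = [random.lognormvariate(1.0, 0.5) for _ in range(10_000)]

def percentile(samples, pct):
    """Nearest-rank percentile -- the rough equivalent of a PromQL
    histogram_quantile() applied to raw samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)
assert p50 < p99  # tail latency always dominates the median

# A simple threshold alert of the kind a Prometheus/Grafana rule encodes.
ALERT_P99_MS = 50.0
alert_firing = p99 > ALERT_P99_MS
```

For trading systems the same logic applies at microsecond resolution, which is why p99/p99.9 rather than averages are the metrics that matter.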

Security and Compliance

  • Apply DevSecOps principles to ensure secure deployment and access management
  • Maintain compliance with financial industry regulations such as SEBI
  • Conduct vulnerability assessments and maintain logging and audit controls


Required Skills and Qualifications

  • 2+ years of experience as a DevOps Engineer in a software or trading environment
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
  • Proficiency in cloud platforms such as AWS and GCP
  • Hands-on experience with Docker and Kubernetes
  • Experience with Terraform or CloudFormation for IaC
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
  • Familiarity with Prometheus, Grafana, and ELK stack
  • Proficiency in scripting using Python, Bash, or Go
  • Solid understanding of security best practices including IAM, encryption, and network policies


Good to Have (Optional)

  • Experience with low-latency trading infrastructure or real-time market data systems
  • Knowledge of high-frequency trading environments
  • Exposure to FIX protocol, FPGA, or network optimization techniques
  • Familiarity with Redis or Nginx for real-time data handling


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.


This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Auxo AI
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
2 - 8 yrs
₹10L - ₹35L / yr
Google Cloud Platform (GCP)
Python
SQL

Responsibilities:

Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow)

Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views

Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration

Implement SQL-based transformations using Dataform (or dbt)

Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture

Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability

Partner with solution architects and product teams to translate data requirements into technical designs

Mentor junior data engineers and support knowledge-sharing across the team

Contribute to documentation, code reviews, sprint planning, and agile ceremonies
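To see why the partitioning best practices above matter, here is a toy pure-Python model of BigQuery partition pruning. The rows and column names are invented, but the point carries: a query with a partition filter scans only the partitions it names, and scanned bytes are what BigQuery bills:

```python
from collections import defaultdict
from datetime import date

# Illustrative event rows; in BigQuery these would live in a table
# partitioned by event_date.
rows = [
    {"event_date": date(2024, 1, 1), "user": "a", "value": 10},
    {"event_date": date(2024, 1, 1), "user": "b", "value": 20},
    {"event_date": date(2024, 1, 2), "user": "a", "value": 30},
]

partitions = defaultdict(list)
for row in rows:
    partitions[row["event_date"]].append(row)  # ingest-time partitioning

def query(partitions, day):
    """Scan only the requested partition -- the pruning BigQuery performs
    when a WHERE event_date = @day filter is present."""
    scanned = partitions.get(day, [])
    return sum(r["value"] for r in scanned), len(scanned)

total, rows_scanned = query(partitions, date(2024, 1, 1))
# Only 2 of the 3 rows were touched; unpartitioned, all 3 would be scanned.
```

Clustering works the same way one level down, ordering rows inside each partition so range filters skip blocks.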

Requirements

2+ years of hands-on experience in data engineering, with at least 2 years on GCP

Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)

Strong programming skills in Python and/or Java

Experience with SQL optimization, data modeling, and pipeline orchestration

Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks

Exposure to Dataform, dbt, or similar tools for ELT workflows

Solid understanding of data architecture, schema design, and performance tuning

Excellent problem-solving and collaboration skills

Bonus Skills:

GCP Professional Data Engineer certification

Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures

Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)

Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)

Hyderabad, Bengaluru (Bangalore)
5 - 12 yrs
₹25L - ₹35L / yr
C#
SQL
Amazon Web Services (AWS)
.NET
Java
+3 more

Senior Software Engineer

Location: Hyderabad, India


Who We Are:

Since our inception back in 2006, Navitas has grown to be an industry leader in the digital transformation space, and we’ve served as trusted advisors supporting our client base within the commercial, federal, and state and local markets.


What We Do:

At our very core, we’re a group of problem solvers providing our award-winning technology solutions to drive digital acceleration for our customers! With proven solutions, award-winning technologies, and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and deliver cutting-edge transformative solutions.


What You’ll Do:

Build, Innovate, and Own:

  • Design, develop, and maintain high-performance microservices in a modern .NET/C# environment.
  • Architect and optimize data pipelines and storage solutions that power our AI-driven products.
  • Collaborate closely with AI and data teams to bring machine learning models into production systems.
  • Build integrations with external services and APIs to enable scalable, interoperable solutions.
  • Ensure robust security, scalability, and observability across distributed systems.
  • Stay ahead of the curve — evaluating emerging technologies and contributing to architectural decisions for our next-gen platform.

Responsibilities will include but are not limited to:

  • Provide technical guidance and code reviews that raise the bar for quality and performance.
  • Help create a growth-minded engineering culture that encourages experimentation, learning, and accountability.

What You’ll Need:

  • Bachelor’s degree in Computer Science or equivalent practical experience.
  • 8+ years of professional experience, including 5+ years designing and maintaining scalable backend systems using C#/.NET and microservices architecture.
  • Strong experience with SQL and NoSQL data stores.
  • Solid hands-on knowledge of cloud platforms (AWS, GCP, or Azure).
  • Proven ability to design for performance, reliability, and security in data-intensive systems.
  • Excellent communication skills and ability to work effectively in a global, cross-functional environment.

Set Yourself Apart With:

  • Startup experience - specifically in building product from 0-1
  • Exposure to AI/ML-powered systems, data engineering, or large-scale data processing.
  • Experience in healthcare or fintech domains.
  • Familiarity with modern DevOps practices, CI/CD pipelines, and containerization (Docker/Kubernetes).

Equal Employer/Veterans/Disabled

Navitas Business Consulting is an affirmative action and equal opportunity employer. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Navitas Human Resources.

Navitas is an equal opportunity employer. We provide employment and opportunities for advancement, compensation, training, and growth according to individual merit, without regard to race, color, religion, sex (including pregnancy), national origin, sexual orientation, gender identity or expression, marital status, age, genetic information, disability, veteran or military status, or any other characteristic protected under applicable Federal, state, or local law. Our goal is for each staff member to have the opportunity to grow to the limits of their abilities and to achieve personal and organizational objectives. We will support positive programs for equal treatment of all staff and full utilization of all qualified employees at all levels within Navitas.

Biofourmis

Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana, Loki


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

appscrip

Posted by Kanika Gaur
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹10L / yr
DevOps
Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
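Several of the responsibilities above (infrastructure as code, automated provisioning) share one core mechanism: diff the desired state against the actual state, then apply the difference. A toy sketch of that reconcile loop with invented resource names — Terraform's plan/apply and ArgoCD's sync are industrial-strength versions of exactly this:

```python
# Desired state as declared in code vs. actual state observed in the cloud.
desired = {"web": {"replicas": 3}, "cache": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "worker": {"replicas": 1}}

def plan(desired, actual):
    """Diff desired vs. actual state into create/update/delete actions."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

def apply(actual, actions):
    for op, name, spec in actions:
        if op == "delete":
            actual.pop(name)
        else:
            actual[name] = spec
    return actual

actions = plan(desired, actual)
actual = apply(actual, actions)
assert actual == desired  # applying the plan converges state
```

The key property is idempotence: running the loop again produces an empty plan, which is what makes IaC deployments safe to re-run.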


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

Whiz IT Services
Posted by Sheeba Harish
Remote only
10 - 15 yrs
₹20L - ₹20L / yr
Java
Spring Boot
Microservices
API
Apache Kafka
+5 more

We are looking for highly experienced Senior Java Developers who can architect, design, and deliver high-performance enterprise applications using Spring Boot and Microservices. The role requires a strong understanding of distributed systems, scalability, and data consistency.

Proximity Works

Posted by Eman Khan
Remote only
5 - 8 yrs
₹25L - ₹43L / yr
React.js
Next.js
Node.js
NestJS
Google Cloud Platform (GCP)
+3 more

We’re seeking a highly skilled, hands-on Product Engineer with strong end-to-end product development experience - from concept (0→1) through iterative launch and scaling. You’ll own features across the stack (frontend, backend, infra), with a strong sense of UX, design quality, and user empathy.


The ideal candidate has shipped real products publicly on Product Hunt, App Store, or live web platforms and can demonstrate thoughtful product decisions, iteration, and measurable user impact. You’ll work in a lean, high-velocity pod alongside ML and DevOps specialists, contributing not just code but direction, polish, and execution. This is a role for builders who thrive on accountability, autonomy, and the craft of taking products live.


Responsibilities:

  • Full-Stack Product Ownership: Design, build, and ship features end-to-end - from concept to deployment - ensuring smooth, performant, and polished user experiences.
  • Frontend & UX Craft: Implement high-quality, responsive, and performant interfaces. Own details like animations, progress indicators, and state management for media-heavy workflows.
  • Backend & Infrastructure: Build robust APIs, real-time features, and integrations with ML pipelines. Ensure reliability, scalability, and maintainability.
  • Media & Streaming Systems: Develop performant video workflows, upload/processing flows, streaming, and playback experiences with real-time progress tracking.
  • AI/ML Integration: Collaborate with MLOps engineers to integrate and productionize ML features, handling data flows, inference endpoints, and user-facing AI interactions.
  • DevOps & CI/CD: Manage efficient deployments using modern cloud tooling (preferably GCP) and maintain CI/CD pipelines for rapid iteration.
  • Product Thinking & Iteration: Drive decisions grounded in user feedback and metrics. Balance trade-offs between polish, speed, and scalability.
  • Collaboration & Initiative: Work closely with designers and specialists, proactively proposing improvements and solutions rather than just surfacing blockers.
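The upload and progress-tracking work mentioned above follows one common pattern: stream the payload in chunks and report a completion fraction after each one. A minimal sketch with an in-memory buffer standing in for a storage API such as a GCS resumable upload (the chunk size and data are artificial):

```python
import io

# Chunked upload with progress reporting -- the mechanism behind the
# upload-progress indicators the role calls for.
CHUNK_SIZE = 4  # bytes; unrealistically small so each step is visible

def upload(source: io.BytesIO, remote: io.BytesIO, on_progress):
    total = len(source.getbuffer())
    sent = 0
    while chunk := source.read(CHUNK_SIZE):
        remote.write(chunk)           # one PUT per chunk in a real API
        sent += len(chunk)
        on_progress(sent / total)     # fraction complete, drives the UI

progress = []
src = io.BytesIO(b"0123456789")
dst = io.BytesIO()
upload(src, dst, progress.append)
assert dst.getvalue() == b"0123456789"
assert progress[-1] == 1.0  # upload reported complete
```

In a real media pipeline the callback would publish to a WebSocket or state store so the frontend can render the progress bar in real time.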


Requirements

  • Experience: Proven record of shipping live products (preferably 3+ years in product-focused engineering).
  • Portfolio Evidence: Links to real products, public launches, or active projects (Product Hunt, GitHub, App Store, live demos).
  • Full-Stack Skills: Proficient in modern frontend frameworks (Next.js, Firebase Storage SDK, React Query) and backend environments (NestJS, TypeORM, PostgreSQL).
  • Media/Video Expertise: Experience with streaming, video processing, encoding, or real-time rendering systems.
  • Cloud & CI/CD: Familiar with GCP, containerized deployments (Docker/Kubernetes), and automated pipelines.
  • AI/ML Exposure: Comfortable integrating ML models or AI APIs into production features.
  • Execution Mindset: Bias toward action, accountability, and high ownership of outcomes.
  • Product Sense: Demonstrated ability to think from a user’s perspective — balancing technical feasibility with product value.


Benefits

  • Best in class salary: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
  • Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.


About us

Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.


Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Infilect

Posted by Indira Ashrit
Bengaluru (Bangalore)
2 - 3 yrs
₹12L - ₹15L / yr
Kubernetes
Docker
CI/CD
Google Cloud Platform (GCP)

Job Description:


Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.


We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our Cloud, AI workflows, and AI-based Computer Systems. Furthermore, the candidate will supervise the implementation and maintenance of the company’s computing needs including the in-house GPU & AI servers along with AI workloads.



Responsibilities

  • Understand and automate AI-based deployments and AI workflows
  • Implementing various development, testing, automation tools, and IT infrastructure
  • Manage Cloud, computer systems and other IT assets.
  • Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
  • Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
  • Ensure the security of data, network access, and backup systems
  • Act in alignment with user needs and system functionality to contribute to organizational policy
  • Identify problematic areas, perform RCA and implement strategic solutions in time
  • Preserve assets, information security, and control structures
  • Handle monthly/annual cloud budget and ensure cost effectiveness


Requirements and skills

  • Well versed in automation tools such as Docker, Kubernetes, Puppet, and Ansible
  • Working Knowledge of Python, SQL database stack or any full-stack with relevant tools.
  • Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
  • Well versed with ELK stack or any other logging, monitoring and analysis tools
  • 2+ years of proven working experience as a DevOps/Tech Lead/IT Manager or in relevant positions
  • Excellent knowledge of technical management, information analysis, and of computer hardware/software systems
  • Hands-on experience with computer networks, network administration, and network installation
  • Knowledge of ISO/SOC Type II implementation will be a plus
  • BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field


CGI Inc

Posted by Shruthi BT
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Chennai
8 - 15 yrs
₹15L - ₹25L / yr
Google Cloud Platform (GCP)
Data engineering
Big query

Google Data Engineer - SSE


Position Description

Google Cloud Data Engineer

Notice Period: Immediate to 30 days

Job Description:

We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.

Key Responsibilities:


• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.

• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and Bigtable.

• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.

• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.

• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.

• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.

• Optimize query performance and data storage across structured and unstructured datasets.

• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies.
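Streaming solutions of the kind listed above typically aggregate events into time windows. A pure-Python sketch of tumbling-window counting, the basic primitive behind Dataflow/Beam streaming pipelines — the timestamps, keys, and 60-second window are invented for illustration:

```python
from collections import defaultdict

WINDOW_SECONDS = 60

# Events as (timestamp_seconds, key) pairs, e.g. from a Pub/Sub topic.
events = [(5, "click"), (42, "click"), (61, "view"), (65, "click"), (130, "view")]

def tumbling_counts(events, window=WINDOW_SECONDS):
    """Assign each event to a fixed, non-overlapping window and count per
    (window, key) -- what Beam's FixedWindows + Count.perKey computes."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window) * window  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

counts = tumbling_counts(events)
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1, (120, 'click' ...)}
```

The hard parts a real streaming engine adds on top are late data, watermarks, and triggers; the windowing arithmetic itself is this simple.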


Required Skills & Qualifications:


• 8-15 years of experience

• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflow, BigQuery, Cloud Run, Cloud Build.

• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.

• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).

• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.

• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.

• Familiarity with data formats such as Avro, ORC, Parquet.

• Experience handling large-scale data migrations and implementing data lake architectures.

• Expertise in data modeling, data warehousing, and distributed data processing frameworks.

• GCP Professional Data Engineer certification or equivalent.


Good to Have:


• Experience in BigQuery, Presto, or equivalent.

• Exposure to Hadoop, Spark, Oozie, HBase.

• Understanding of cloud database migration strategies.

• Knowledge of GCP data governance and security best practices.

Forbes Advisor

Nikita Sinha
Posted by Nikita Sinha
Chennai
10 - 16 yrs
Upto ₹50L / yr (Varies)
DevOps
CI/CD
Python
Bash
Amazon Web Services (AWS)
+1 more

We are looking for a Cloud Security Engineer to join our organization. The ideal candidate will have strong hands-on experience implementing robust security controls across both applications and organizational data, and will work closely with multiple stakeholders to architect, implement, and monitor effective safeguards. They will champion secure design, conduct risk assessments, drive vulnerability management, and promote data protection best practices across the organization.


Responsibilities

  • Design and implement security measures for website and API applications.
  • Conduct security-first code reviews, vulnerability assessments, and posture audits for business-critical applications.
  • Conduct security testing activities like SAST & DAST by integrating them within the project’s CI/CD pipelines and development workflows.
  • Manage all penetration testing activities including working with external vendors for security certification of business-critical applications.
  • Develop and manage data protection policies and RBAC controls for sensitive organizational data like PII, revenue, secrets, etc.
  • Oversee encryption, key management, and secure data storage solutions.
  • Monitor threats and respond to incidents involving application and data breaches.
  • Collaborate with engineering, data, product and compliance teams to achieve security-by-design principles.
  • Ensure compliance with regulatory standards (GDPR, HIPAA, etc.) and internal organizational policies.
  • Automate recurrent security tasks using scripts and security tools.
  • Maintain documentation around data flows, application architectures, and security controls.
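Recurrent security tasks like the ones above often start as small scripts. A minimal secret-scanning sketch using only the standard library (the patterns are illustrative; production scanners such as gitleaks or truffleHog ship far larger, curated rule sets):

```python
import re

# Hypothetical rules -- real scanners maintain much broader pattern sets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs for likely secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

sample = 'aws = "AKIAABCDEFGHIJKLMNOP"\napi_key = "0123456789abcdef0123"'
for rule, hit in scan_text(sample):
    print(rule, "->", hit)
```

Wired into a CI/CD stage, a check like this can fail the build before a leaked credential reaches a repository.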

Requirements

  • 10+ years’ experience in application security and/or data security engineering.
  • Strong understanding of security concepts including zero trust architecture, threat modeling, security frameworks (like SOC 2, ISO 27001), and best practices in corporate security environments.
  • Strong knowledge of modern web/mobile application architectures and common vulnerabilities (like OWASP Top 10, etc.)
  • Proficiency in secure coding practices and code reviews for major programming languages including Java, .NET, Python, JavaScript, Typescript, React, etc.
  • Hands-on experience with at least two tools for vulnerability scanning and static/dynamic analysis, such as Checkmarx, Veracode, SonarQube, Burp Suite, or AppScan.
  • Advanced understanding of data encryption, key management, and secure storage (SQL, NoSQL, Cloud) and secure transfer mechanisms.
  • Working experience in Cloud Environments like AWS & GCP and familiarity with the recommended security best practices. 
  • Familiarity with regulatory frameworks such as GDPR, HIPAA, PCI DSS and the controls needed to implement them.
  • Experience integrating security into DevOps/CI/CD processes.
  • Hands-on Experience with automation in any of the scripting languages (Python, Bash, etc.)
  • Ability to conduct incident response and forensic investigations related to application/data breaches.
  • Excellent communication and documentation skills.


Good To Have:

  • Cloud Security certifications in any one of the below:
    • AWS Certified Security – Specialty
    • GCP Professional Cloud Security Engineer
  • Experience with container security (Docker, Kubernetes) and cloud security tools (AWS, Azure, GCP).
  • Experience safeguarding data storage solutions like GCP GCS, BigQuery, etc.
  • Hands-on work with any SIEM/SOC platforms for monitoring and alerting.
  • Knowledge of data loss prevention (DLP) solutions and IAM (identity and access management) systems.

Perks:

  • Day off on the 3rd Friday of every month (one long weekend each month)
  • Monthly Wellness Reimbursement Program to promote health and well-being
  • Monthly Office Commutation Reimbursement Program
  • Paid paternity and maternity leave
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Mumbai, Bengaluru (Bangalore)
10 - 18 yrs
₹25L - ₹50L / yr
Kubernetes
Docker
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
+7 more

Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)


Role Overview

Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.


Key Responsibilities

1. Client Engagement (India + International Markets)

  • Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
  • Explain Tradelab’s capabilities, architecture, and deployment options.
  • Understand region-specific latency expectations, connectivity options, and regulatory constraints.

2. Requirement Gathering & Solutioning

  • Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
  • Assess infra readiness (cloud/on-prem/colo).
  • Propose architecture aligned with forex markets.

3. Global Architecture & Deployment Design

  • Design multi-region infrastructure using AWS/Azure/GCP.
  • Architect low-latency routing between India–Singapore–Dubai.
  • Support deployments in DCs like Equinix SG1/DX1.

4. Networking & Security Architecture

  • Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
  • Implement network hardening, segmentation, WAF/firewall rules.

5. DevOps, Cloud Engineering & Scalability

  • Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
  • Design global failover models.

6. BFSI & Trading Domain Expertise

  • Indian broking, international forex, LP aggregation, HFT.
  • OMS/RMS, risk engines, LP connectivity, and matching engines.

7. Latency, Performance & Capacity Planning

  • Benchmark and optimize cross-region latency.
  • Tune performance for high tick volumes and volatility bursts.
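Latency work of this kind usually reports tail percentiles rather than averages, since p99 drives trading SLOs. A small stand-alone sketch with synthetic probe data (the numbers are invented for illustration):

```python
import math
import random
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile: good enough for a quick latency report."""
    ordered = sorted(samples)
    rank = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[rank]

# Simulated round-trip times in microseconds (stand-ins for real probe data).
random.seed(7)
rtts = [random.gauss(900, 120) for _ in range(10_000)]

print(f"p50    = {percentile(rtts, 50):7.0f} us")
print(f"p99    = {percentile(rtts, 99):7.0f} us")  # tail latency matters most
print(f"jitter = {statistics.stdev(rtts):7.0f} us (stdev)")
```

The same report run per region pair (India, Singapore, Dubai) makes cross-region routing decisions comparable.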

8. Documentation & Consulting

  • Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.

Required Skills

  • AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
  • DevOps: Kubernetes, Docker, Helm, Terraform.
  • Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
  • Message buses: Kafka, RabbitMQ, Redis Streams.

Domain Skills

  • Deep Broking Domain Understanding.
  • Indian broking + global forex/CFD.
  • FIX protocol, LP integration, market data feeds.
  • Regulations: SEBI, DFSA, MAS, ESMA.

Soft Skills

  • Excellent communication and client-facing ability.
  • Strong presales and solutioning mindset.

Preferred Qualifications

  • B.Tech/BE/M.Tech in CS or equivalent.
  • AWS Architect Professional, CCNP, CKA.

Why Join Us?

  • Experience in colocation/global trading infra.
  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.

This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Synorus
Synorus Admin
Posted by Synorus Admin
Remote only
0 - 3 yrs
₹0.1L - ₹0.6L / yr
Remotion
NextJs (Next.js)
React.js
HTML/CSS
TypeScript
+4 more

About Us


At Synorus, we’re building a suite of intelligent, AI-powered products that redefine how people interact with technology — from real-time video editing tools to legal intelligence and creative automation systems.

We are looking for a Frontend Developer who is passionate about crafting seamless, elegant, and high-performance user interfaces that bring next-generation AI experiences to life.


Key Responsibilities

  • Design, develop, and maintain modular, scalable front-end components using React, Next.js, and TypeScript.
  • Implement interactive, media-rich interfaces powered by AI and real-time data.
  • Work closely with backend and AI teams to integrate APIs and WebSocket-based data flows.
  • Ensure pixel-perfect, responsive, and accessible user interfaces across platforms and devices.
  • Optimize performance through efficient rendering, lazy loading, and dynamic imports.
  • Maintain high-quality code standards using TypeScript, ESLint, and testing frameworks.
  • Contribute to our evolving design system and component library shared across products.
  • Collaborate with designers and engineers to deliver intuitive, creative, and impactful user experiences.


Skills & Experience

  • Strong proficiency in React, Next.js, TypeScript, and modern JavaScript (ES6+).
  • Expertise in Tailwind CSS, Framer Motion, and other animation or motion libraries.
  • Experience with state management tools such as Valtio, Redux, or Zustand.
  • Familiarity with design tools like Figma and understanding of responsive grid systems.
  • Experience integrating APIs and working with real-time data through WebSockets.
  • Understanding of accessibility (WCAG), cross-browser compatibility, and performance optimization.
  • Bonus: Experience with Remotion, Canvas APIs, or WebGL for video or AI-enhanced UIs.


Ideal Candidate

  • Obsessed with clean, maintainable, and scalable UI code.
  • Understands both design aesthetics and engineering trade-offs.
  • Self-driven, detail-oriented, and thrives in a fast-paced startup environment.
  • Excited to experiment with emerging technologies — AI, real-time collaboration, or creative tools.
  • Loves solving complex problems through thoughtful, user-centric design.


Education

  • Bachelor’s or Master’s in Computer Science, Engineering, or equivalent hands-on experience.
  • A strong project portfolio or GitHub profile is highly preferred.


Why Join Us

  • Work directly with the founding team and AI engineers on products shaping the future of creativity and automation.
  • Be part of a fast-growing ecosystem where your work impacts multiple real-world products.
  • Experience a flat hierarchy, flexible hours, and an environment that rewards innovation.
  • Access to cutting-edge technologies, mentorship, and rapid growth opportunities.
Bits In Glass

Nikita Sinha
Posted by Nikita Sinha
Pune, Hyderabad, Mohali
4 - 7 yrs
Upto ₹30L / yr (Varies)
DevOps
CI/CD
Google Cloud Platform (GCP)


As a Google Cloud Infrastructure / DevOps Engineer, you will design, implement, and maintain cloud infrastructure while enabling efficient development operations. This role bridges development and operations, with a strong focus on automation, scalability, reliability, and collaboration. You will work closely with cross-functional teams to optimize systems and enhance CI/CD pipelines.


Key Responsibilities:

Cloud Infrastructure Management

  • Manage and monitor Google Cloud Platform (GCP) services and components.
  • Ensure high availability, scalability, and security of cloud resources.

CI/CD Pipeline Implementation

  • Design and implement automated pipelines for application releases.
  • Build and maintain CI/CD workflows.
  • Collaborate with developers to streamline deployment processes.
  • Automate testing, deployment, and rollback procedures.
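The deploy-and-rollback automation above reduces to a small decision: promote only if a post-deploy health check passes. A hedged sketch (the health check and version names are invented; a real pipeline would hit a /healthz endpoint or run a smoke-test suite):

```python
def deploy(new_version, current_version, health_check):
    """Promote `new_version` if the health check passes; otherwise roll
    back to the last known-good version. `health_check` is any callable
    returning True/False."""
    if health_check(new_version):
        return {"active": new_version, "rolled_back": False}
    # Automated rollback: restore the previous version.
    return {"active": current_version, "rolled_back": True}

# Toy health check: only versions we have marked healthy pass.
healthy = {"v1.4.2", "v1.5.0"}
print(deploy("v1.5.0", "v1.4.2", lambda v: v in healthy))
print(deploy("v1.5.1-broken", "v1.4.2", lambda v: v in healthy))
```

CI/CD tools (Cloud Build, Jenkins, and the like) implement this loop with pipeline stages rather than a function, but the control flow is the same.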

Infrastructure as Code (IaC)

  • Use Terraform (or similar tools) to define and manage infrastructure.
  • Maintain version-controlled infrastructure code.
  • Ensure environment consistency across dev, staging, and production.

Monitoring & Troubleshooting

  • Monitor system performance, resource usage, and application health.
  • Troubleshoot cloud infrastructure and deployment pipeline issues.
  • Implement proactive monitoring and alerting.

Security & Compliance

  • Apply cloud security best practices.
  • Ensure compliance with industry standards and internal policies.
  • Collaborate with security teams to address vulnerabilities.

Collaboration & Documentation

  • Work closely with development, operations, and QA teams.
  • Document architecture, processes, and configurations.
  • Share knowledge and best practices with the team.

Qualifications:


Education

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.

Experience

  • Minimum 3 years of industry experience.
  • At least 1 year designing and managing production systems on GCP.
  • Familiarity with GCP services (Compute Engine, GKE, Cloud Storage, etc.).
  • Exposure to Docker, Kubernetes, and microservices architecture.

Skills

  • Proficiency in Python or Bash for automation.
  • Strong understanding of DevOps principles.
  • Knowledge of Jenkins or other CI/CD tools.
  • Experience with GKE for container orchestration.
  • Familiarity with event streaming platforms (Kafka, Google Cloud Pub/Sub).

About the Company: Bits In Glass – India

Industry Leader

  • Established for 20+ years with global operations in the US, Canada, UK, and India.
  • In 2021, Bits In Glass joined hands with Crochet Technologies, strengthening global delivery capabilities.
  • Offices in Pune, Hyderabad, and Chandigarh.
  • Specialized Pega Partner since 2017, ranked among the top 30 Pega partners globally.
  • Long-standing sponsor of the annual PegaWorld event.
  • Elite Appian partner since 2008 with deep industry expertise.
  • Dedicated global Pega Center of Excellence (CoE) supporting customers and development teams worldwide.

Employee Benefits

  • Career Growth: Clear pathways for advancement and professional development.
  • Challenging Projects: Work on innovative, high-impact global projects.
  • Global Exposure: Collaborate with international teams and clients.
  • Flexible Work Arrangements: Supporting work-life balance.
  • Comprehensive Benefits: Competitive compensation, health insurance, paid time off.
  • Learning Opportunities: Upskill on AI-enabled Pega solutions, data engineering, integrations, cloud migration, and more.

Company Culture

  • Collaborative Environment: Strong focus on teamwork, innovation, and knowledge sharing.
  • Inclusive Workplace: Diverse and respectful workplace culture.
  • Continuous Learning: Encourages certifications, learning programs, and internal knowledge sessions.

Core Values

  • Integrity: Ethical practices and transparency.
  • Excellence: Commitment to high-quality work.
  • Client-Centric Approach: Delivering solutions tailored to client needs.
Wissen Technology

Praffull Shinde
Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
8 - 14 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
Kubernetes
DevOps
Python

JD for Cloud engineer

 

Job Summary:


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.

 

Key Responsibilities:

1. Cloud Infrastructure Design & Management

  • Architect, deploy, and maintain GCP cloud resources via terraform/other automation.
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments.


2. Kubernetes & Container Orchestration

  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
  • Work with Helm charts for microservices deployments.
  • Automate scaling, rolling updates, and zero-downtime deployments.
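The rolling-update mechanics above come down to replacing pods in bounded batches. An illustrative sketch of that arithmetic (the real Kubernetes RollingUpdate controller also honours maxSurge, readiness probes, and pod disruption budgets):

```python
def rolling_update_batches(replicas, max_unavailable):
    """Yield batches of pod indices so that at most `max_unavailable`
    replicas are down at any moment (illustration of the strategy's
    batching, not the actual controller)."""
    pods = list(range(replicas))
    for i in range(0, replicas, max_unavailable):
        yield pods[i:i + max_unavailable]

for batch in rolling_update_batches(replicas=7, max_unavailable=2):
    print("replacing pods:", batch)
```

With `max_unavailable=2` a 7-replica deployment always keeps at least 5 pods serving traffic, which is what makes the rollout zero-downtime.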

 

3. Serverless & Compute Services

  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance.

 

4. CI/CD & DevOps Automation

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure deployment using Terraform and Bash/PowerShell scripting.
  • Integrate security and compliance checks into the DevOps workflow (DevSecOps).

 

 

Required Skills & Qualifications:

Experience: 8+ years in Cloud Engineering, with a focus on GCP.

Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.

CoffeeBeans

Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore), Pune, Hyderabad
6.5 - 8 yrs
Upto ₹28L / yr (Varies)
Java
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Azure
NoSQL Databases

Role Overview

We are seeking a skilled Java Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.


Skills:

Java, GCP (or any other cloud platform), NoSQL, Docker, containerization


Primary Responsibilities:


  • Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
  • Implement GraphQL APIs to enhance the functionality and performance of applications.
  • Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
  • Design and develop functionality/applications for given requirements, focusing on functional, non-functional, and maintenance needs.
  • Collaborate within team and with cross functional teams to effectively implement, deploy and monitor applications.
  • Document and Improve existing processes/tools.
  • Support and Troubleshoot production incidents with a sense of urgency by understanding customer impact.
  • Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
  • Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
  • Strong background in working with cloud platforms, especially GCP
  • Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
  • Comprehensive knowledge of distributed database designs.
  • Experience building observability into applications with OTel or Prometheus is a plus.
  • Experience working in NodeJS is a plus.
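Observability instrumentation of the kind mentioned above starts with counters and latency measurements. A minimal in-process sketch (the API here is invented for illustration, not the real OTel or Prometheus client SDKs):

```python
import time
from collections import defaultdict

class Metrics:
    """Minimal in-process metrics registry (invented API, not a real SDK)."""

    def __init__(self):
        self.counters = defaultdict(int)
        self.latencies = defaultdict(list)

    def inc(self, name, value=1):
        self.counters[name] += value

    def time_call(self, name, fn, *args):
        start = time.perf_counter()
        try:
            return fn(*args)
        finally:
            # Record duration plus a total counter for rate queries.
            self.latencies[name].append(time.perf_counter() - start)
            self.inc(name + "_total")

metrics = Metrics()
result = metrics.time_call("order_lookup", lambda oid: {"id": oid}, 42)
print(result, metrics.counters["order_lookup_total"])
```

Real client libraries add the part this sketch omits: label sets, histogram buckets, and an exporter that ships the data to a backend.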


Soft Skills Required:

  • Should be able to work independently in highly cross functional projects/environment.
  • Team player who pays attention to detail and has a Team win mindset.
Hashone Careers

Madhavan I
Posted by Madhavan I
Bengaluru (Bangalore), Hyderabad, Pune
6 - 8 yrs
₹12L - ₹30L / yr
Java
Google Cloud Platform (GCP)

Job Title

Senior Developer - Java+GCP

Job Description

Job Role: Senior Developer - Java + GCP 

Years of Experience: 6 - 8 years 

Work Location: Bangalore / Pune / Hyderabad 

Work Mode: Hybrid (3 days WFO)


Job Description:


We are seeking a skilled Java / Node Developer with a strong background in building scalable, high-quality, and high-performance digital applications on the Java technology stack. This role is critical for developing microservice architectures and managing data with distributed databases and GraphQL interfaces.


Primary Responsibilities:


  • Design and develop scalable services/microservices using Java/Node and MVC architecture, ensuring clean, performant, and maintainable code.
  • Implement GraphQL APIs to enhance the functionality and performance of applications.
  • Work with Cassandra and other distributed database systems to design robust, scalable database schemas that support business processes.
  • Design and develop functionality/applications for given requirements, focusing on functional, non-functional, and maintenance needs.
  • Collaborate within team and with cross functional teams to effectively implement, deploy and monitor applications.
  • Document and Improve existing processes/tools.
  • Support and Troubleshoot production incidents with a sense of urgency by understanding customer impact.
  • Proficient in developing applications and web services, as well as cloud-native apps, using MVC frameworks like Spring Boot and REST APIs.
  • Thorough understanding and hands-on experience with containerization and orchestration technologies like Docker, Kubernetes, etc.
  • Strong background in working with cloud platforms, especially GCP
  • Demonstrated expertise in building and deploying services using CI/CD pipelines, leveraging tools like GitHub, CircleCI, Jenkins, and GitLab.
  • Comprehensive knowledge of distributed database designs.
  • Experience building observability into applications with OTel or Prometheus is a plus.
  • Experience working in NodeJS is a plus.


Soft Skills Required:

  • Should be able to work independently in highly cross functional projects/environment.
  • Team player who pays attention to detail and has a Team win mindset.


Skills

Java, GCP, NoSQL, Docker, Containerization

Intineri infosol Pvt Ltd

Shivani Pandey
Posted by Shivani Pandey
Remote only
4 - 10 yrs
₹5L - ₹15L / yr
Python
2D Geometry Concept
3D Geometry Concept
NumPy
SciPy
+9 more

Job Title: Python Developer

Experience Level: 4+ years

 

Job Summary:

We are seeking a skilled Python Developer with strong experience in developing and maintaining APIs. Familiarity with 2D and 3D geometry concepts is a strong plus. The ideal candidate will be passionate about clean code, scalable systems, and solving complex geometric and computational problems.


Key Responsibilities:

·       Design, develop, and maintain robust and scalable APIs using Python.

·       Work with geometric data structures and algorithms (2D/3D).

·       Collaborate with cross-functional teams including front-end developers, designers, and product managers.

·       Optimize code for performance and scalability.

·       Write unit and integration tests to ensure code quality.

·       Participate in code reviews and contribute to best practices.

 

Required Skills:

·       Strong proficiency in Python.

·       Experience with RESTful API development (e.g., Flask, FastAPI, Django REST Framework).

·       Good understanding of 2D/3D geometry, computational geometry, or CAD-related concepts.

·       Familiarity with libraries such as NumPy, SciPy, Shapely, Open3D, or PyMesh.

·       Experience with version control systems (e.g., Git).

·       Strong problem-solving and analytical skills.
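As a flavour of the 2D geometry work described above, the shoelace formula computes a polygon's signed area in a few lines. A standard textbook routine, shown here as a sketch:

```python
def polygon_area(points):
    """Signed area of a simple 2D polygon via the shoelace formula.

    Positive for counter-clockwise vertex order, negative for clockwise.
    """
    n = len(points)
    acc = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        acc += x1 * y2 - x2 * y1
    return acc / 2.0

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polygon_area(unit_square))        # 1.0 (counter-clockwise)
print(polygon_area(unit_square[::-1]))  # -1.0 (clockwise)
```

Libraries like Shapely wrap this and much more (intersections, buffering, validity checks), but the underlying arithmetic looks like this.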

 

Good to Have:

·       Experience with 3D visualization tools or libraries (e.g., VTK, Blender API, Three.js via Python bindings).

·       Knowledge of mathematical modeling or simulation.

·       Exposure to cloud platforms (AWS, Azure, GCP).

·       Familiarity with CI/CD pipelines.

 

Education:

·       Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.

Wissen Technology

Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
7 - 12 yrs
₹1L - ₹45L / yr
Google Cloud Platform (GCP)
Kubernetes
Docker
Google Kubernetes Engine
Azure DevOps
+2 more

Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Reliable Group

Nilesh Gend
Posted by Nilesh Gend
Pune
5 - 12 yrs
₹15L - ₹35L / yr
Google Cloud Platform (GCP)
Ansible
Terraform

Job Title: GCP Cloud Engineer/Lead


Location: Pune, Balewadi

Shift / Time Zone: 1:30 PM – 10:30 PM IST (3:00 AM – 12:00 PM EST, 3–4 hours overlap with US Eastern Time)


Role Summary

We are seeking an experienced GCP Cloud Engineer to join our team supporting CVS. The ideal candidate will have a strong background in Google Cloud Platform (GCP) architecture, automation, microservices, and Kubernetes, along with the ability to translate business strategy into actionable technical initiatives. This role requires a blend of hands-on technical expertise, cross-functional collaboration, and customer engagement to ensure scalable and secure cloud solutions.


Key Responsibilities

  • Design, implement, and manage cloud infrastructure on Google Cloud Platform (GCP) leveraging best practices for scalability, performance, and cost efficiency.
  • Develop and maintain microservices-based architectures and containerized deployments using Kubernetes and related technologies.
  • Evaluate and recommend new tools, services, and architectures that align with enterprise cloud strategies.
  • Collaborate closely with Infrastructure Engineering Leadership to translate long-term customer strategies into actionable enablement plans, onboarding frameworks, and proactive support programs.
  • Act as a bridge between customers, Product Management, and Engineering teams, translating business needs into technical requirements and providing strategic feedback to influence product direction.
  • Identify and mitigate technical risks and roadblocks in collaboration with executive stakeholders and engineering teams.
  • Advocate for customer needs within the engineering organization to enhance adoption, performance, and cost optimization.
  • Contribute to the development of Customer Success methodologies and mentor other engineers in best practices.


Must-Have Skills

  • 8+ years of total experience, with 5+ years specifically as a GCP Cloud Engineer.
  • Deep expertise in Google Cloud Platform (GCP) — including Compute Engine, Cloud Storage, Networking, IAM, and Cloud Functions.
  • Strong experience in microservices-based architecture and Kubernetes container orchestration.
  • Hands-on experience with infrastructure automation tools (Terraform, Ansible, or similar).
  • Proven ability to design, automate, and optimize CI/CD pipelines for cloud workloads.
  • Excellent problem-solving, communication, and collaboration skills.
  • GCP Professional Certification (Cloud Architect / DevOps Engineer / Cloud Engineer) preferred or in progress.
  • Ability to multitask effectively in a fast-paced, dynamic environment with shifting priorities.


Good-to-Have Skills

  • Experience with Cloud Monitoring, Logging, and Security best practices in GCP.
  • Exposure to DevOps tools (Jenkins, GitHub Actions, ArgoCD, or similar).
  • Familiarity with multi-cloud or hybrid-cloud environments.
  • Knowledge of Python, Go, or Shell scripting for automation and infrastructure management.
  • Understanding of network design, VPC architecture, and service mesh (Istio/Anthos).
  • Experience working with enterprise-scale customers and cross-functional product teams.
  • Strong presentation and stakeholder communication skills, particularly with executive audiences.


Pune
3 - 7 yrs
₹7L - ₹10L / yr
Python
Google Cloud Platform (GCP)
MongoDB
gRPC
RabbitMQ
+3 more

Advanced Backend Development: Design, build, and maintain efficient, reusable, and reliable Python code. Develop complex backend services using FastAPI, MongoDB, and Postgres.

Microservices Architecture Design: Lead the design and implementation of a scalable microservices architecture, ensuring systems are robust and reliable.

Database Management and Optimization: Oversee and optimize the performance of MongoDB and Postgres databases, ensuring data integrity and security.

Message Broker Implementation: Implement and manage sophisticated message broker systems like RabbitMQ or Kafka for asynchronous processing and inter-service communication.

Git and Version Control Expertise: Utilize Git for sophisticated source code management. Lead code reviews and maintain high standards in code quality.

Project and Team Management: Manage backend development projects, coordinating with cross-functional teams. Mentor junior developers and contribute to team growth and skill development.

Cloud Infrastructure Management: Extensive work with cloud services, specifically Google Cloud Platform (GCP), for deployment, scaling, and management of applications.

Performance Tuning and Optimization: Focus on optimizing applications for maximum speed, efficiency, and scalability.

Unit Testing and Quality Assurance: Develop and maintain thorough unit tests for all developed code. Lead initiatives in test-driven development (TDD) to ensure code quality and reliability.
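A test-driven workflow like the one described above pairs each small unit with explicit tests written first. A minimal `unittest` sketch (the function and cases are invented for illustration):

```python
import unittest

def normalize_email(raw):
    """Unit under test: trim whitespace and lower-case the domain part."""
    local, _, domain = raw.strip().partition("@")
    if not local or not domain:
        raise ValueError(f"not an email address: {raw!r}")
    return f"{local}@{domain.lower()}"

class NormalizeEmailTest(unittest.TestCase):
    def test_lowercases_domain_only(self):
        self.assertEqual(normalize_email("Bob@EXAMPLE.COM"), "Bob@example.com")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_email("  a@b.io \n"), "a@b.io")

    def test_rejects_non_addresses(self):
        with self.assertRaises(ValueError):
            normalize_email("not-an-email")
```

Run with `python -m unittest <file>`. In TDD the failing test comes first; the function grows only until the suite passes.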

Security Best Practices: Implement and advocate for security best practices, data protection protocols, and compliance standards across all backend services.

Wissen Technology

Janane Mohanasankaran
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
Shell Scripting
Java
Ruby
Product Management

Job Title: Site Reliability Engineer (SRE) / Application Support Engineer

Experience: 3–7 Years

Location: Bangalore / Mumbai / Pune

About the Role

The successful candidate will join the S&C Site Reliability Engineering (SRE) Team, responsible for providing Tier 2/3 support to S&C business applications and environments. This role requires close collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure, Technology, and Application Development teams to maintain and support production and non-production environments.

Key Responsibilities

  • Provide Tier 2/3 product technical support and issue resolution.
  • Develop and maintain software tools to improve operations and support efficiency.
  • Manage system and software configurations; troubleshoot environment-related issues.
  • Identify opportunities to optimize system performance through configuration improvements or development suggestions.
  • Plan, document, and deploy software applications across Unix/Linux, Azure, and GCP environments.
  • Collaborate with Development and QA teams throughout the software release lifecycle.
  • Analyze and improve release and deployment processes to drive automation and efficiency.
  • Coordinate with infrastructure teams for maintenance, planned downtimes, and resource management across production and non-production environments.
  • Participate in on-call support (minimum one week per month) for off-hour emergencies and maintenance activities.
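One bullet above — developing software tools to improve operations and support efficiency — can be sketched in a few lines. Below is a minimal, hypothetical Python triage helper that counts known error signatures in application logs; the patterns and log lines are illustrative only, not from any real S&C system:

```python
import re
from collections import Counter

# Hypothetical error signatures a Tier 2/3 support tool might triage on;
# real patterns would come from the team's runbooks.
PATTERNS = {
    "db_timeout": re.compile(r"ORA-\d+|connection timed out", re.IGNORECASE),
    "disk_full": re.compile(r"no space left on device", re.IGNORECASE),
}

def triage(log_lines):
    """Count occurrences of known error signatures in application logs."""
    counts = Counter()
    for line in log_lines:
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

sample = [
    "2024-01-01 ERROR ORA-01555 snapshot too old",
    "2024-01-01 WARN  no space left on device",
    "2024-01-01 INFO  request served",
]
print(triage(sample))  # each signature matched once
```

A real version of such a tool would typically tail live logs and feed the counts into an alerting or ticketing system rather than print them.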

Required Skills & Qualifications

  • Education:
  • Bachelor’s degree in Computer Science, Engineering, or a related field (BE/MCA).
  • Master’s degree is a plus.
  • Experience:
  • 3–7 years in Production Support, Application Management, or Application Development (support/maintenance).
  • Technical Skills:
  • Strong Unix/Linux administration skills.
  • Excellent scripting skills — Shell, Python, Batch (mandatory).
  • Database expertise — Oracle (must have).
  • Understanding of Software Development Life Cycle (SDLC).
  • PowerShell knowledge is a plus.
  • Experience in Java or Ruby development is desirable.
  • Exposure to cloud platforms (GCP, Azure, or AWS) is an added advantage.
  • Soft Skills:
  • Excellent problem-solving and troubleshooting abilities.
  • Strong collaboration and communication skills.
  • Ability to work in a fast-paced, cross-functional environment.


Deltek
Remote only
7 - 14 yrs
Best in industry
Amazon Web Services (AWS)
Windows Azure
OCI
Google Cloud Platform (GCP)

Title – Principal Cloud Architect

Company Summary :

As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com

 

Business Summary :

The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career!

External Job Title :

 

Principal Cloud Cost Optimization Engineer

Position Responsibilities :

The Cloud Cost Optimization Engineer plays a key role in supporting the full lifecycle of cloud financial management (FinOps) at Deltek—driving visibility, accountability, and efficiency across our cloud investments. This role is responsible for managing cloud spend, forecasting, and identifying optimization opportunities that support Deltek's cloud expansion and financial performance goals.

We are seeking a candidate with hands-on experience in Cloud FinOps practices, software development capabilities, AI/automation expertise, strong analytical skills, and a passion for driving financial insights that enable smarter business decisions. The ideal candidate is a self-starter with excellent cross-team collaboration abilities and a proven track record of delivering results in a fast-paced environment.

Key Responsibilities:

  • Prepare and deliver monthly reports and presentations on cloud spend performance versus plan and forecast for Finance, IT, and business leaders.
  • Support the evaluation, implementation, and ongoing management of cloud consumption and financial management tools.
  • Apply financial and vendor management principles to support contract optimization, cost modeling, and spend management.
  • Clearly communicate technical and financial insights, presenting complex topics in a simple, actionable manner to both technical and non-technical audiences.
  • Partner with engineering, product, and infrastructure teams to identify cost drivers, promote best practices for efficient cloud consumption, and implement savings opportunities.
  • Lead cost optimization initiatives, including analyzing and recommending savings plans, reserved instances, and right-sizing opportunities across AWS, Azure, and OCI.
  • Collaborate with the Cloud Governance team to ensure effective tagging strategies and alerting frameworks are deployed and maintained at scale.
  • Support forecasting by partnering with infrastructure and engineering teams to understand demand plans and proactively manage capacity and spend.
  • Build and maintain financial models and forecasting tools that provide actionable insights into current and future cloud expenditures.
  • Develop and maintain automated FinOps solutions using Python, SQL, and cloud-native services (Lambda, Azure Functions) to streamline cost analysis, anomaly detection, and reporting workflows.
  • Design and implement AI-powered cost optimization tools leveraging GenAI APIs (OpenAI, Claude, Bedrock) to automate spend analysis, generate natural language insights, and provide intelligent recommendations to stakeholders.
  • Build custom integrations and data pipelines connecting cloud billing APIs, FinOps platforms, and internal systems to enable real-time cost visibility and automated alerting.
  • Develop and sustain relationships with internal stakeholders, onboarding them to FinOps tools, processes, and continuous cost optimization practices.
  • Create and maintain KPIs, scorecards, and financial dashboards to monitor cloud spend and optimization progress.
  • Drive a culture of optimization by translating financial insights into actionable engineering recommendations, promoting cost-conscious architecture, and leveraging automation for resource optimization.
  • Use FinOps tools and services to analyze cloud usage patterns and provide technical cost-saving recommendations to application teams.
  • Develop self-service FinOps portals and chatbots using GenAI to enable teams to query cost data, receive optimization recommendations, and understand cloud spending through natural language interfaces.
  • Leverage Generative AI tools to enhance FinOps automation, streamline reporting, and improve team productivity across forecasting, optimization, and anomaly detection.
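As a taste of the anomaly-detection automation described above, here is a deliberately simple Python sketch that flags outlier days in a spend series using a z-score. Real FinOps pipelines would pull this data from cloud billing exports and use more robust statistics; all figures below are invented:

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, threshold=2.0):
    """Return indices of days whose spend deviates more than `threshold`
    standard deviations from the mean — a crude stand-in for the anomaly
    detection a FinOps pipeline might run on billing-export data."""
    mu = mean(daily_spend)
    sigma = stdev(daily_spend)
    return [i for i, x in enumerate(daily_spend)
            if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical daily spend in USD; day 5 is a spike worth investigating.
spend = [120, 118, 125, 119, 122, 410, 121]
print(spend_anomalies(spend))  # flags the spike day
```

In production, the flagged days would feed an alerting workflow (e.g., a Slack or email notification to the owning team) rather than being printed.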

Qualifications :

 

  • Bachelor's degree in Finance, Computer Science, Information Systems, or a related field.
  • 4+ years of professional experience in Cloud FinOps, IT Financial Management, or Cloud Cost Governance within an IT organization.
  • 6–8 years of overall experience in Cloud Infrastructure Management, DevOps, Software Development, or related technical roles with hands-on cloud platform expertise.
  • Hands-on experience with native cloud cost management tools (e.g., AWS Cost Explorer, Azure Cost Management, OCI Cost Analysis) and/or third-party FinOps platforms (e.g., Cloudability, CloudHealth, Apptio).
  • Proven experience working within the FinOps domain in a large enterprise environment.
  • Strong background in building and managing custom reports, dashboards, and financial insights.
  • Deep understanding of cloud financial management practices, including chargeback/showback models, cost savings and avoidance tracking, variance analysis, and financial forecasting.
  • Solid knowledge of cloud provider pricing models, billing structures, and optimization strategies.
  • Practical experience with cloud optimization and governance practices such as anomaly detection, capacity planning, rightsizing, tagging strategies, and storage lifecycle policies.
  • Skilled in leveraging automation to drive operational efficiency in cloud cost management processes.
  • Strong analytical and data storytelling skills, with the ability to collect, interpret, and present complex financial and technical data to diverse audiences.
  • Experience developing KPIs, scorecards, and metrics aligned with business goals and industry benchmarks.
  • Ability to influence and drive change management initiatives that increase adoption and maturity of FinOps practices.
  • Highly results-driven, detail-oriented, and goal-focused, with a passion for continuous improvement.
  • Strong communicator and collaborative team player with a passion for mentoring and educating others.
  • Strong proficiency in Python and SQL for data analysis, automation, and tool development, with demonstrated experience building production-grade scripts and applications.
  • Hands-on development experience building automation solutions, APIs, or internal tools for cloud management or financial operations.
  • Practical experience with GenAI technologies including prompt engineering, and integrating LLM APIs (OpenAI, Claude, Bedrock) into business workflows.
  • Experience with Infrastructure as Code (Terraform etc.) and CI/CD pipelines for deploying FinOps automation and tooling.
  • Familiarity with data visualization tools (e.g., Power BI) and building interactive dashboards programmatically.
  • Knowledge of ML/AI frameworks is a plus.
  • Experience building chatbots or conversational AI interfaces for internal tooling is a plus.
  • FinOps Certified Practitioner.
  • AWS, Azure, or OCI cloud certifications are preferred.

Wissen Technology
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
Azure
Java
Ruby
Oracle NoSQL Database

Required skills and experience

• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)

• Master’s degree a plus

• 3–8 years of experience in a Production Support, Application Management, or Application Development (support/maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have.

• Software development skills in Java or Ruby are desirable.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have.

Wissen Technology
Posted by Moulina Dey
Pune, Bengaluru (Bangalore), Mumbai
3 - 6 yrs
₹2L - ₹14L / yr
technical product support
Linux/Unix
Google Cloud Platform (GCP)
SRE
Reliability engineering

Department: S&C – Site Reliability Engineering (SRE)  

Experience Required: 4–8 Years  

Location: Bangalore / Pune /Mumbai 

Employment Type: Full-time


  • Provide Tier 2/3 technical product support to internal and external stakeholders. 
  • Develop automation tools and scripts to improve operational efficiency and support processes. 
  • Manage and maintain system and software configurations; troubleshoot environment/application-related issues. 
  • Optimize system performance through configuration tuning or development enhancements. 
  • Plan, document, and deploy applications in Unix/Linux, Azure, and GCP environments.
  • Collaborate with Development, QA, and Infrastructure teams throughout the release and deployment lifecycle.
  • Drive automation initiatives for release and deployment processes. 
  • Coordinate with infrastructure teams to manage hardware/software resources, maintenance, and scheduled downtimes across production and non-production environments. 
  • Participate in on-call rotations (minimum one week per month) to address critical incidents and off-hour maintenance tasks. 

 

Key Competencies 

  • Strong analytical, troubleshooting, and critical thinking abilities. 
  • Excellent cross-functional collaboration skills. 
  • Strong focus on documentation, process improvement, and system reliability.
  • Proactive, detail-oriented, and adaptable in a fast-paced work environment. 


Payal
Posted by Payal Sangoi
Bengaluru (Bangalore)
2 - 3 yrs
₹8L - ₹10L / yr
Linux/Unix
Docker
Kubernetes
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Junior DevOps Engineer

Experience: 2–3 years


About Us

We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.

Key Responsibilities

  • Support deployment, monitoring, and maintenance of trading and fintech applications.
  • Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
  • Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
  • Troubleshoot and resolve production issues in a fast-paced environment.
  • Implement and maintain CI/CD pipelines for continuous integration and delivery.
  • Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
  • Assist in maintaining compliance with financial industry standards and security best practices.

Required Skills

  • 2–3 years of hands-on experience in DevOps or related roles.
  • Proficiency in Linux/Unix environments.
  • Experience with containerization (Docker) and orchestration (Kubernetes).
  • Familiarity with cloud platforms (AWS, GCP, or Azure).
  • Working knowledge of scripting languages (Bash, Python).
  • Experience with configuration management tools (Ansible, Puppet, Chef).
  • Understanding of networking concepts and security practices.
  • Exposure to monitoring tools (Prometheus, Grafana, ELK stack).
  • Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
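Troubleshooting production issues in a fast-paced environment often starts with small resilience helpers like the retry-with-backoff sketch below. This is an illustrative Python example, not a prescribed implementation; the flaky health check is simulated, and the delays are shortened for demonstration:

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff — the kind of
    helper a DevOps engineer wires into deployment or health-check
    scripts. Delays here are illustrative, not production values."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(base_delay * 2 ** attempt)

# Simulate a service that only becomes healthy on the third probe.
calls = {"n": 0}
def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service not ready")
    return "healthy"

print(retry(flaky_health_check))  # succeeds on the third attempt
```

The same pattern shows up in CI/CD pipelines (retrying image pulls or deploy steps) and in monitoring probes before paging an on-call engineer.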

Preferred Skills

  • Experience in fintech, trading, or financial services.
  • Knowledge of high-frequency trading systems or low-latency environments.
  • Familiarity with financial data protocols and APIs.
  • Understanding of regulatory requirements in financial technology.

What We Offer

  • Opportunity to work on cutting-edge fintech/trading platforms.
  • Collaborative and learning-focused environment.
  • Competitive salary and benefits.
  • Career growth in a rapidly expanding domain.



NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Pune, Mumbai
5 - 8 yrs
₹20L - ₹35L / yr
React.js
NodeJS (Node.js)
PostgreSQL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

🚀 We’re Hiring: React + Node.js Developer (Full Stack)

📍 Location: Mumbai / Pune (Final location will be decided post-interview)

💼 Experience: 5–8 years

🕒 Notice Period: Immediate to 15 days

About the Role:

We’re looking for a skilled Full Stack Developer with hands-on experience in React and Node.js, and a passion for building scalable, high-performance applications.


Key Skills & Responsibilities:

Strong expertise in React (frontend) and Node.js (backend).

Experience with relational databases (PostgreSQL / MySQL).

Familiarity with production systems and cloud services (AWS / GCP).

Strong grasp of OOP / FP and clean coding principles (e.g., SOLID).

Hands-on with Docker, and good to have exposure to Kubernetes, RabbitMQ, Redis.

Experience or interest in AI APIs & tools is a plus.

Excellent communication and collaboration skills.

Bonus: Contributions to open-source projects.



Borderless Access
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
13yrs+
Up to ₹35L / yr (varies)
Python
Java
NodeJS (Node.js)
Spring Boot
JavaScript

About Borderless Access

Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.

We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.

Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.

The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.


Key Responsibilities

  • Lead, mentor, and grow a cross-functional team of engineers.
  • Foster a culture of collaboration, accountability, and continuous learning.
  • Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
  • Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
  • Promote clean, maintainable, and well-documented code across the team.
  • Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
  • Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
  • Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
  • Ensure timely delivery of high-quality software aligned with business goals.
  • Work closely with DevOps to ensure platform reliability, scalability, and observability.
  • Conduct regular 1:1s, performance reviews, and career development planning.
  • Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
  • Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.


Added Responsibilities

  • Defining and adhering to the development process.
  • Taking part in regular external audits and maintaining artifacts.
  • Identify opportunities for automation to reduce repetitive tasks.
  • Mentor and coach team members in the teams.
  • Continuously optimize application performance and scalability.
  • Collaborate with the Marketing team to understand different user journeys.


Growth and Development

The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:

  • Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
  • Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
  • Drive business objectives – Become part of defining and taking actions to meet the business objectives.


About You

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in software development.
  • Experience with microservices architecture and container orchestration.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Solid understanding of data structures, algorithms, and software design patterns.
  • Solid understanding of enterprise system architecture patterns.
  • Experience in managing a small to medium-sized team with varied experiences.
  • Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
  • Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
  • Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
  • Experience with cloud platforms (AWS, Azure, or GCP; Azure preferred).
  • Knowledge of containerization technologies (Docker and Kubernetes).


Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
4 - 7 yrs
₹25L - ₹50L / yr
Microservices
API
Cloud Computing
Java
Python

ROLES AND RESPONSIBILITIES:

We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.


KEY RESPONSIBILITIES:

  • Lead and mentor a team of software engineers across backend, frontend, and integration areas.
  • Drive architectural design, technical reviews, and ensure scalability and reliability.
  • Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
  • Establish best practices in agile development, testing automation, and CI/CD pipelines.
  • Build reusable frameworks for low-code app development and AI-driven workflows.
  • Hire, coach, and develop engineers to strengthen technical capabilities and team culture.


IDEAL CANDIDATE:

  • B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
  • 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
  • Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
  • Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
  • Experience with Docker, Kubernetes, and CI/CD pipelines.
  • Excellent communication and problem-solving skills.



PREFERRED QUALIFICATIONS:

  • Experience building or scaling SaaS or platform-based products.
  • Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
  • Prior experience in a startup or high-growth product environment.
Wissen Technology
Posted by Bipasha Rath
Mumbai, Pune
5 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
IaC
Azure

We are seeking a Cloud Developer with experience in GCP/Azure and strong Terraform skills. The role will help manage and standardize our IaC modules.

 

Experience: 5–8 Years

Location: Mumbai & Pune

Mode of Work: Full Time

Key Responsibilities:

  • Design, develop, and maintain robust software applications using widely adopted languages suited to the application design, with a strong focus on clean, maintainable, and efficient code.
  • Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
  • Develop RESTful APIs and backend services aligned with modern architectural practices.
  • Apply object-oriented programming principles and design patterns to build scalable systems.
  • Build and maintain automated test frameworks and scripts to ensure high product quality.
  • Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
  • Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
  • Use Git and related version control practices effectively in a team-based development environment.
  • Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
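To illustrate the module-standardization idea, the hypothetical Python snippet below templates a consistent Terraform module call. In practice a central platform team would publish versioned modules in a registry and consumers would write the HCL directly; generating it as a string here just makes the standard shape visible. The source address and inputs are invented:

```python
def render_module_block(name, source, version, inputs):
    """Render a standardized Terraform module call — a toy illustration
    of how a central platform team might enforce consistent module
    usage (pinned source and version, sorted inputs)."""
    lines = [f'module "{name}" {{',
             f'  source  = "{source}"',
             f'  version = "{version}"']
    # Sort inputs so generated blocks diff cleanly in code review.
    lines += [f'  {key} = "{value}"' for key, value in sorted(inputs.items())]
    lines.append("}")
    return "\n".join(lines)

print(render_module_block(
    "network", "app.terraform.io/acme/network/google", "1.2.0",
    {"project_id": "demo", "region": "europe-west1"},
))
```

The point of the sketch is the convention, not the generator: every module call pins a source and version, which is what makes IaC reuse auditable across teams.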

 

Requirements:

  • 5+ years of experience
  • Experience with IaC modules
  • Terraform coding experience, including authoring Terraform modules as part of a central platform team
  • Azure/GCP cloud experience is a must
  • C#/Python/Java coding experience is good to have

 

If interested, please share your updated resume with the details below:

Total Experience -

Relevant Experience -

Current Location -

Current CTC -

Expected CTC -

Notice period -

Any offer in hand -


Agentic AI Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
3 - 6 yrs
₹10L - ₹25L / yr
DevOps
Python
Google Cloud Platform (GCP)
Linux/Unix
CI/CD

Review Criteria

  • Strong DevOps/Cloud Engineer profiles
  • Must have 3+ years of experience as a DevOps / Cloud Engineer
  • Must have strong expertise in cloud platforms – AWS / Azure / GCP (any one or more)
  • Must have strong hands-on experience in Linux administration and system management
  • Must have hands-on experience with containerization and orchestration tools such as Docker and Kubernetes
  • Must have experience in building and optimizing CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins
  • Must have hands-on experience with Infrastructure-as-Code tools such as Terraform, Ansible, or CloudFormation
  • Must be proficient in scripting languages such as Python or Bash for automation
  • Must have experience with monitoring and alerting tools like Prometheus, Grafana, ELK, or CloudWatch
  • Top tier Product-based company (B2B Enterprise SaaS preferred)


Preferred

  • Experience in multi-tenant SaaS infrastructure scaling.
  • Exposure to AI/ML pipeline deployments or iPaaS / reverse ETL connectors.


Role & Responsibilities

We are seeking a DevOps Engineer to design, build, and maintain scalable, secure, and resilient infrastructure for our SaaS platform and AI-driven products. The role will focus on cloud infrastructure, CI/CD pipelines, container orchestration, monitoring, and security automation, enabling rapid and reliable software delivery.


Key Responsibilities:

  • Design, implement, and manage cloud-native infrastructure (AWS/Azure/GCP).
  • Build and optimize CI/CD pipelines to support rapid release cycles.
  • Manage containerization & orchestration (Docker, Kubernetes).
  • Own infrastructure-as-code (Terraform, Ansible, CloudFormation).
  • Set up and maintain monitoring & alerting frameworks (Prometheus, Grafana, ELK, etc.).
  • Drive cloud security automation (IAM, SSL, secrets management).
  • Partner with engineering teams to embed DevOps into SDLC.
  • Troubleshoot production issues and drive incident response.
  • Support multi-tenant SaaS scaling strategies.
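The cloud security automation bullet above (IAM, SSL, secrets management) can be illustrated with a toy scanner. The Python sketch below checks text against a couple of well-known secret signatures; production pipelines use dedicated tools (e.g., gitleaks or trufflehog) with far broader rule sets, and the sample key below is fake:

```python
import re

# Illustrative signatures only — real scanners ship far broader rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return the names of secret signatures found in `text` — a toy
    version of the secret-scanning step a CI pipeline might run before
    allowing a commit or deployment to proceed."""
    return sorted(name for name, pattern in SECRET_PATTERNS.items()
                  if pattern.search(text))

# Fake credential for demonstration purposes.
sample_config = "aws_access_key_id = AKIAABCDEFGHIJKLMNOP\n"
print(scan_for_secrets(sample_config))  # ['aws_access_key_id']
```

Wired into a CI/CD pipeline, a non-empty result would fail the build, which is how this kind of check embeds security into the SDLC rather than bolting it on afterwards.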


Ideal Candidate

  • 3–6 years' experience as DevOps/Cloud Engineer in SaaS or enterprise environments.
  • Strong expertise in AWS, Azure, or GCP.
  • Strong expertise in Linux administration.
  • Hands-on with Kubernetes, Docker, CI/CD tools (GitHub Actions, GitLab, Jenkins).
  • Proficient in Terraform/Ansible/CloudFormation.
  • Strong scripting skills (Python, Bash).
  • Experience with monitoring stacks (Prometheus, Grafana, ELK, CloudWatch).
  • Strong grasp of cloud security best practices.



Cspar Enterprises Private Limited
Bhopal, Bengaluru (Bangalore)
4 - 10 yrs
₹3L - ₹8L / yr
Django
RESTful APIs
deployment tools
RabbitMQ
Apache Kafka

Designation: Senior Python Django Developer 

Position: Senior Python Developer

Job Types: Full-time, Permanent

Pay: Up to ₹800,000.00 per year

Schedule: Day shift

Ability to commute/relocate: Bhopal (Indrapuri, MP) or Bangalore (JP Nagar)

 

Experience: Back-end development: 4 years (Required)

 

Job Description:

We are looking for a highly skilled Senior Python Django Developer with extensive experience in building and scaling financial or payments-based applications. The ideal candidate has a deep understanding of system design, architecture patterns, and testing best practices, along with a strong grasp of the startup environment.

This role requires a balance of hands-on coding, architectural design, and collaboration across teams to deliver robust and scalable financial products.

 

Responsibilities:

  • Design and develop scalable, secure, and high-performance applications using Python (Django framework).
  • Architect system components, define database schemas, and optimize backend services for speed and efficiency.
  • Lead and implement design patterns and software architecture best practices.
  • Ensure code quality through comprehensive unit testing, integration testing, and participation in code reviews.
  • Collaborate closely with Product, DevOps, QA, and Frontend teams to build seamless end-to-end solutions.
  • Drive performance improvements, monitor system health, and troubleshoot production issues.
  • Apply domain knowledge in payments and finance, including transaction processing, reconciliation, settlements, wallets, UPI, etc.
  • Contribute to technical decision-making and mentor junior developers.

 

Requirements:

  • 4 to 10 years of professional backend development experience with Python and Django.
  • Strong background in payments/financial systems or FinTech applications.
  • Proven experience in designing software architecture in a microservices or modular monolith environment.
  • Experience working in fast-paced startup environments with agile practices.
  • Proficiency in RESTful APIs, SQL (PostgreSQL/MySQL), NoSQL (MongoDB/Redis).
  • Solid understanding of Docker, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
  • Hands-on experience with test-driven development (TDD) and frameworks like pytest, unittest, or factory_boy.
  • Familiarity with security best practices in financial applications (PCI compliance, data encryption, etc.).
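The TDD expectation above can be illustrated with a minimal pytest-style example around payments reconciliation — matching internal ledger entries to gateway settlement records by transaction id. This is a simplified sketch with invented ids; real reconciliation also compares amounts, currencies, and timestamps:

```python
def reconcile(ledger, gateway):
    """Match internal ledger entries against gateway settlement records
    by transaction id; report ids missing on either side."""
    ledger_ids, gateway_ids = set(ledger), set(gateway)
    return {
        "missing_in_gateway": sorted(ledger_ids - gateway_ids),
        "missing_in_ledger": sorted(gateway_ids - ledger_ids),
    }

# pytest-style test written alongside (or before) the implementation,
# as TDD prescribes; pytest discovers test_* functions automatically.
def test_reconcile_flags_both_directions():
    result = reconcile(["t1", "t2", "t3"], ["t2", "t3", "t4"])
    assert result["missing_in_gateway"] == ["t1"]
    assert result["missing_in_ledger"] == ["t4"]

test_reconcile_flags_both_directions()
print("reconciliation test passed")
```

In a TDD workflow the test is written first and fails, then the minimal implementation makes it pass — here both are shown together for brevity.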

 

Preferred Skills:

  • Exposure to event-driven architecture (Celery, Kafka, RabbitMQ).
  • Experience integrating with third-party payment gateways, banking APIs, or financial instruments.
  • Understanding of DevOps and monitoring tools (Prometheus, ELK, Grafana).
  • Contributions to open-source or personal finance-related projects.


Data Havn

Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹30L - ₹35L / yr
Fullstack Developer
Mobile App Development
Google Cloud Platform (GCP)
React Native
Flutter

DataHavn IT Solutions specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything related to data, and we have the expertise to transform customer businesses by making the right use of data.

 

 

About the Role

We're seeking a talented and versatile Full Stack Developer with a strong foundation in mobile app development to join our dynamic team. You'll play a pivotal role in designing, developing, and maintaining high-quality software applications across various platforms.

Responsibilities

  • Full Stack Development: Design, develop, and implement both front-end and back-end components of web applications using modern technologies and frameworks.
  • Mobile App Development: Develop native mobile applications for iOS and Android platforms using Swift and Kotlin, respectively.
  • Cross-Platform Development: Explore and utilize cross-platform frameworks (e.g., React Native, Flutter) for efficient mobile app development.
  • API Development: Create and maintain RESTful APIs for integration with front-end and mobile applications.
  • Database Management: Work with databases (e.g., MySQL, PostgreSQL) to store and retrieve application data.
  • Code Quality: Adhere to coding standards, best practices, and ensure code quality through regular code reviews.
  • Collaboration: Collaborate effectively with designers, project managers, and other team members to deliver high-quality solutions.

Qualifications

  • Bachelor's degree in Computer Science, Software Engineering, or a related field.
  • Strong programming skills in one or more relevant languages (e.g., JavaScript, Python, Java).
  • Experience with relevant frameworks and technologies (e.g., React, Angular, Node.js, Swift, Kotlin).
  • Understanding of software development methodologies (e.g., Agile, Waterfall).
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and interpersonal skills.

Preferred Skills (Optional)

  • Experience with cloud platforms (e.g., AWS, Azure, GCP).
  • Knowledge of DevOps practices and tools.
  • Experience with serverless architectures.
  • Contributions to open-source projects.

What We Offer

  • Competitive salary and benefits package.
  • Opportunities for professional growth and development.
  • A collaborative and supportive work environment.
  • A chance to work on cutting-edge projects.


Read more
Data Havn


Agency job
via Infinium Associate by Toshi Srivastava
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
4 - 6 yrs
₹40L - ₹45L / yr
skill iconR Programming
Google Cloud Platform (GCP)
skill iconData Science
skill iconPython
Data Visualization
+3 more

DataHavn IT Solutions specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything related to data, and we have the expertise to transform customer businesses by making the right use of data.

 

About the Role:

As a Data Scientist specializing in Google Cloud, you will play a pivotal role in driving data-driven decision-making and innovation within our organization. You will leverage the power of Google Cloud's robust data analytics and machine learning tools to extract valuable insights from large datasets, develop predictive models, and optimize business processes.

Key Responsibilities:

  • Data Ingestion and Preparation:
  • Design and implement efficient data pipelines for ingesting, cleaning, and transforming data from various sources (e.g., databases, APIs, cloud storage) into Google Cloud Platform (GCP) data warehouses (BigQuery) or data lakes (Cloud Storage), using processing services such as Dataflow.
  • Perform data quality assessments, handle missing values, and address inconsistencies to ensure data integrity.
  • Exploratory Data Analysis (EDA):
  • Conduct in-depth EDA to uncover patterns, trends, and anomalies within the data.
  • Utilize visualization techniques (e.g., Tableau, Looker) to communicate findings effectively.
  • Feature Engineering:
  • Create relevant features from raw data to enhance model performance and interpretability.
  • Explore techniques like feature selection, normalization, and dimensionality reduction.
  • Model Development and Training:
  • Develop and train predictive models using machine learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, neural networks) on GCP platforms like Vertex AI.
  • Evaluate model performance using appropriate metrics and iterate on the modeling process.
  • Model Deployment and Monitoring:
  • Deploy trained models into production environments using GCP's ML tools and infrastructure.
  • Monitor model performance over time, identify drift, and retrain models as needed.
  • Collaboration and Communication:
  • Work closely with data engineers, analysts, and business stakeholders to understand their requirements and translate them into data-driven solutions.
  • Communicate findings and insights in a clear and concise manner, using visualizations and storytelling techniques.
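The feature-engineering responsibility above can be illustrated with a minimal, dependency-free sketch of min-max normalization (in practice one would typically reach for scikit-learn's MinMaxScaler or BigQuery ML preprocessing; the values here are invented for illustration):

```python
def min_max_scale(values):
    """Scale a list of numbers to the [0, 1] range.

    A constant column carries no information for a model, so it is
    mapped to all zeros rather than triggering a divide-by-zero.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]


# Example: raw transaction amounts -> comparable features in [0, 1]
amounts = [120.0, 80.0, 200.0, 80.0]
scaled = min_max_scale(amounts)
```

The same idea generalizes to z-score normalization or log transforms, chosen per feature based on its distribution.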

Required Skills and Qualifications:

  • Strong proficiency in Python or R programming languages.
  • Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Cloud Dataproc, and Vertex AI.
  • Familiarity with machine learning algorithms and techniques.
  • Knowledge of data visualization tools (e.g., Tableau, Looker).
  • Excellent problem-solving and analytical skills.
  • Ability to work independently and as part of a team.
  • Strong communication and interpersonal skills.

Preferred Qualifications:

  • Experience with cloud-native data technologies (e.g., Apache Spark, Kubernetes).
  • Knowledge of distributed systems and scalable data architectures.
  • Experience with natural language processing (NLP) or computer vision applications.
  • Certifications in Google Cloud Platform or relevant machine learning frameworks.


Read more
This is a Full Time role with our Client


Agency job
via eTalent Services by JaiPrakash Bharti
Remote only
5 - 10 yrs
₹5L - ₹14L / yr
skill iconGo Programming (Golang)
Google Cloud Platform (GCP)

Golang / GCP Lead Developer

Experience: 5+ Years (with at least 3 years of hands-on experience in GoLang and GCP)

Salary: 14 LPA

Location: Remote

Employment Type: Full-time

 

Detailed Job Description / Skill Set:

We are looking for a Lead Developer with 5+ years of experience in development projects, including at least 3 years of hands-on experience in GoLang and Google Cloud Platform (GCP).

The candidate should have prior experience in leading application development projects using GoLang and GCP technologies.

Knowledge or experience in the Cable domain, OSS/BSS systems, and an understanding of Cable MSO operations is preferred.

The role requires strong coordination skills to manage multiple stakeholders and ensure timely delivery of high-quality project deliverables.

Strong written and verbal communication skills are essential.

 

Mandatory Skills:

GoLang

GCP

 

Good to Have Skills:

OSS/BSS

Read more
Payal
Bengaluru (Bangalore)
2 - 4 yrs
₹10L - ₹15L / yr
Phoenix
Ecto
Google Cloud Platform (GCP)
skill iconPostgreSQL
skill iconRedis
+8 more

JD: Elixir Developer- Trading & Fintech

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort. We are for those hungry to make a mark in the trading and fintech industry. If you are looking for just another backend role, this isn't it. We want risk-takers, relentless learners, and those who find joy in pushing their limits every day. If you thrive in high-stakes environments and have a deep passion for performance-driven backend systems, we want you.

About the Role

We’re looking for an Elixir Developer who is passionate about building scalable, high-performance backend systems. You’ll work closely with our engineering team to design, develop, and maintain reliable applications that power mission-critical systems.

Key Responsibilities

• Develop and maintain backend services using Elixir and Phoenix framework.

• Build scalable, fault-tolerant, and distributed systems.

• Integrate APIs, databases, and message queues for real-time applications.

• Optimize system performance and ensure low latency and high throughput.

• Collaborate with frontend, DevOps, and product teams to deliver seamless solutions.

• Write clean, maintainable, and testable code with proper documentation.

• Participate in code reviews, architectural discussions, and deployment automation.

Required Skills & Experience

• 2–4 years of hands-on experience in Elixir (or strong functional programming background).

• Experience with Phoenix, Ecto, and RESTful API development.

• Solid understanding of OTP (Open Telecom Platform) concepts like GenServer, Supervisors, etc.

• Proficiency in PostgreSQL, Redis, or similar databases.

• Familiarity with Docker, Kubernetes, or cloud platforms (AWS/GCP/Azure).

• Understanding of CI/CD pipelines, version control (Git), and agile development.

Good to Have

• Experience with microservices architecture or real-time data systems.

• Knowledge of GraphQL, LiveView, or PubSub.

• Exposure to performance profiling, observability, or monitoring tools.

Why Join Us?

• Work with a team that expects and delivers excellence.

• A culture where risk-taking is rewarded, and complacency is not.

• Limitless opportunities for growth—if you can handle the pace.

• A place where learning is currency, and outperformance is the only metric that matters.

• The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.

This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.

Read more
Codemonk

at Codemonk

4 candid answers
4 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
7yrs+
Upto ₹35L / yr (Varies)
skill iconNodeJS (Node.js)
skill iconPython
Google Cloud Platform (GCP)
RESTful APIs
SQL
+4 more

Like us, you'll be deeply committed to delivering impactful outcomes for customers.

  • 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
  • Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
  • Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
  • Experience writing batch/cron jobs using Python and Shell scripting.
  • Experience in web application development using JavaScript and JavaScript libraries.
  • Basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
  • Experience/familiarity with RDBMS and NoSQL database technologies such as MySQL, MongoDB, Redis, and Elasticsearch.
  • Understanding of code versioning tools such as Git.
  • Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
  • Experienced in JS-based build/Package tools like Grunt, Gulp, Bower, Webpack.
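The batch/cron bullet above usually translates into small, fault-tolerant Python scripts; a minimal sketch of the pattern (the record format and the `id` field are assumptions for illustration):

```python
import json

def run_batch(raw_lines):
    """Process one JSON record per line, separating good rows from bad.

    Collecting failures instead of raising keeps a nightly cron run
    from dying on a single malformed record.
    """
    processed, failed = [], []
    for line in raw_lines:
        try:
            record = json.loads(line)
            processed.append(record["id"])
        except (json.JSONDecodeError, KeyError):
            failed.append(line)
    return processed, failed


# A crontab entry like `0 2 * * * python run_batch.py` would feed this
# function the previous day's export, one JSON object per line.
ok, bad = run_batch(['{"id": 1}', 'not-json', '{"id": 2}'])
```

In production the failed lines would typically be written to a dead-letter file or queue for later inspection.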
Read more
Codemonk

at Codemonk

4 candid answers
4 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
1yr+
Upto ₹18L / yr (Varies)
skill iconGo Programming (Golang)
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
skill iconReact.js
+5 more

Key Responsibilities:

  1. Application Development: Design and implement both client-side and server-side architecture using JavaScript frameworks and back-end technologies like Golang.
  2. Database Management: Develop and maintain relational and non-relational databases (MySQL, PostgreSQL, MongoDB) and optimize database queries and schema design.
  3. API Development: Build and maintain RESTful APIs and/or GraphQL services to integrate with front-end applications and third-party services.
  4. Code Quality & Performance: Write clean, maintainable code and implement best practices for scalability, performance, and security.
  5. Testing & Debugging: Perform testing and debugging to ensure the stability and reliability of applications across different environments and devices.
  6. Collaboration: Work closely with product managers, designers, and DevOps engineers to deliver features aligned with business goals.
  7. Documentation: Create and maintain documentation for code, systems, and application architecture to ensure knowledge transfer and team alignment.

Requirements:

  1. Experience: 1+ years in backend development in a microservices ecosystem, with proven experience in front-end and back-end frameworks.
  2. 1+ years of hands-on experience with Golang is mandatory.
  3. Problem-Solving & DSA: Strong analytical skills and attention to detail.
  4. Front-End Skills: Proficiency in JavaScript and modern front-end frameworks (React, Angular, Vue.js) and familiarity with HTML/CSS.
  5. Back-End Skills: Experience with server-side languages and frameworks like Node.js, Express, Python or GoLang.
  6. Database Knowledge: Strong knowledge of relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB).
  7. API Development: Hands-on experience with RESTful API design and integration, with a plus for GraphQL.
  8. DevOps Understanding: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a bonus.
  9. Soft Skills: Excellent problem-solving skills, teamwork, and strong communication abilities.

Nice-to-Have:

  1. UI/UX Sensibility: Understanding of responsive design and user experience principles.
  2. CI/CD Knowledge: Familiarity with CI/CD tools and workflows (Jenkins, GitLab CI).
  3. Security Awareness: Basic understanding of web security standards and best practices.
Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹20L / yr
Automation
Manual testing
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
SQL
+4 more

🚀 Hiring: QA Engineer (Manual + Automation)

⭐ Experience: 3+ Years

📍 Location: Bangalore

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


💫 About the Role:

We’re looking for a skilled QA Engineer. You’ll ensure product quality through manual and automated testing across web, mobile, and APIs, working with tools and technologies like Postman, Playwright, Appium, Rest Assured, GCP/AWS, and React/Next.js.


Key Responsibilities:

✅ Develop & maintain automated tests using Cucumber, Playwright, Pytest, etc.

✅ Perform API testing using Postman.

✅ Work on cloud platforms (GCP/AWS) and CI/CD (Jenkins).

✅ Test web & mobile apps (Appium, BrowserStack, LambdaTest).

✅ Collaborate with developers to ensure seamless releases.
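Much of the API testing listed above boils down to asserting on response structure; a minimal, tool-agnostic sketch of such a check (the endpoint fields are invented for illustration, not taken from any real API):

```python
def validate_order_response(payload):
    """Return a list of validation errors for an order API response.

    An empty list means the payload passed. Automated suites (Postman
    scripts, pytest, Rest Assured) express essentially this same check.
    """
    errors = []
    checks = [("order_id", str), ("amount", (int, float)), ("status", str)]
    for field, expected_type in checks:
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    if payload.get("status") not in (None, "pending", "paid", "failed"):
        errors.append("unknown status value")
    return errors
```

In a Playwright or Postman suite, the same assertions would run against the JSON body returned by a live request rather than a hand-built dict.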


Must-Have Skills:

✅ API Testing (Postman)

✅ Cloud (GCP / AWS)

✅ Frontend understanding (React / Next.js)

✅ Strong SQL & Git skills

✅ Familiarity with OpenAI APIs


Read more
Intraintel.ai


Agency job
via Recruit Square by Priyanka choudhary
Remote only
6 - 9 yrs
₹10L - ₹20L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
Google Cloud Platform (GCP)


About IntraIntel.ai

At IntraIntel.ai, we are building a next-generation, multi-tenant AI platform that enables organizations across industries—healthcare, clinical research, manufacturing, and textiles—to harness the power of intelligent automation and Generative AI. Our platform seamlessly integrates AI agents, RAG pipelines, and LLM-based workflows into a unified, scalable, and secure ecosystem hosted on Google Cloud Platform (GCP).

We are looking for a Full Stack Developer with deep experience in AI-integrated applications, cloud-native architecture, and end-to-end platform development—someone passionate about building intelligent systems that push the boundaries of innovation.

Key Responsibilities

1. Full Stack Development

  • Design, build, and maintain full-stack applications with Node.js, Express.js, and modern frontend frameworks such as React.js / Angular.

  • Implement RESTful APIs, GraphQL endpoints, and real-time communication features supporting multi-tenant AI workloads.

  • Optimize backend logic for scalability, modularity, and high availability on GCP.

  • Integrate AI-driven features (RAG, chatbots, data pipelines) into user-facing experiences.

2. AI Integration & Agentic Architecture

  • Work alongside AI engineers and architects to integrate LLMs, RAG pipelines, and AI agents (using frameworks like LangChain, CrewAI, or LlamaIndex) into the product stack.

  • Develop APIs and connectors for prompt orchestration, vector storage (FAISS, Chroma, Pinecone), and model inference workflows.

  • Implement context-aware AI features with secure data access boundaries and performance optimization.
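Under the hood, the vector-storage integrations mentioned above perform nearest-neighbour search over embeddings; a dependency-free sketch of the core idea (a real deployment would delegate this to FAISS, Chroma, or Pinecone rather than a linear scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query, docs, k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]


# Toy 2-d "embeddings"; real ones come from an embedding model.
docs = [("intro", [1.0, 0.0]), ("pricing", [0.0, 1.0]), ("faq", [0.7, 0.7])]
best = top_k([1.0, 0.1], docs, k=2)
```

The retrieved document IDs are what a RAG pipeline then stuffs into the LLM prompt as context.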

3. Cloud Infrastructure & CI/CD

  • Deploy, manage, and optimize applications on Google Cloud Platform (GCP) using services such as Cloud Run, GKE, BigQuery, Cloud Storage, IAM, and Pub/Sub.

  • Set up and maintain CI/CD pipelines using GitHub Actions, Cloud Build, or Terraform for automated testing, integration, and deployment.

  • Manage infrastructure as code (IaC), automate containerized builds, and optimize deployment strategies for multi-environment scalability.

4. UI/UX Collaboration

  • Collaborate with product and design teams to transform mockups into seamless user experiences using Figma and front-end frameworks.

  • Contribute to UX optimization, ensuring that AI-driven features are intuitive, responsive, and visually engaging.

  • Work with designers to ensure front-end consistency across multi-tenant environments.

5. Performance, Security & Monitoring

  • Ensure data privacy, scalability, and compliance through role-based access control (RBAC), encryption, and secure API practices.

  • Monitor system performance using Cloud Monitoring / OpenTelemetry, ensuring uptime and reliability.

  • Participate in architectural discussions to enhance system observability and security posture.

Required Skills & Qualifications

Technical Proficiency

  • Backend: Node.js, Express.js, Python (for AI integration), REST/GraphQL APIs

  • Frontend: React.js / Angular / Vue.js, HTML5, CSS3, TypeScript, Next.js

  • Database: PostgreSQL, MongoDB, Firestore, Redis

  • Cloud: Google Cloud Platform (GCP) – Cloud Run, IAM, GKE, BigQuery, Cloud Storage

  • AI Integration: LLM APIs (OpenAI, Gemini, Claude), LangChain, RAG, vector databases (FAISS, Pinecone, Chroma)

  • DevOps: Docker, Kubernetes, Terraform, Cloud Build, GitHub Actions

  • Version Control: Git, Bitbucket

  • UI/UX Collaboration: Figma, Material UI, responsive design principles

Experience & Attributes

  • 5+ years of experience in full-stack development, preferably on AI or SaaS platforms.

  • Strong understanding of multi-tenant architectures and modular design principles.

  • Proven experience in CI/CD pipeline automation and infrastructure management.

  • Experience in integrating AI services, chatbots, or intelligent recommendation systems.

  • Strong problem-solving skills and ability to collaborate in a fast-paced, cross-functional environment.

  • Excellent communication skills and documentation habits.

Preferred Qualifications

  • Prior experience working with AI-driven SaaS or agentic AI platforms.

  • Familiarity with PromptOps / MLOps practices and versioning workflows for LLMs.

  • Experience in data governance and security compliance (HIPAA, GDPR, or SOC2).

  • Cloud certifications (GCP Professional Cloud Developer / Architect) are a plus.

Why Join IntraIntel.ai

  • Work on cutting-edge AI agentic architectures with real-world enterprise impact.

  • Join a fast-growing, innovation-driven team shaping the future of AI platforms.

  • Build products at scale across diverse industries with a unified mission.

  • Collaborative and flexible environment encouraging ownership and creativity.

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Bipasha Rath
Posted by Bipasha Rath
Pune, Mumbai
5 - 8 yrs
Best in industry
Google Cloud Platform (GCP)
AZURE
Terraform
skill icon.NET
skill iconPython
+2 more

Job Description:


Position - Cloud Developer

Experience - 5 - 8 years

Location - Mumbai & Pune


Responsibilities:

  • Design, develop, and maintain robust software applications using languages appropriate to the application design, with a strong focus on clean, maintainable, and efficient code.
  • Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
  • Develop RESTful APIs and backend services aligned with modern architectural practices.
  • Apply object-oriented programming principles and design patterns to build scalable systems.
  • Build and maintain automated test frameworks and scripts to ensure high product quality.
  • Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
  • Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
  • Use Git and related version control practices effectively in a team-based development environment.
  • Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.


Skills:

  • 5+ years of experience
  • Experience with IaC modules
  • Terraform coding experience, including authoring Terraform modules as part of a central platform team
  • Azure/GCP cloud experience is a must
  • Experience with C#/Python/Java coding is good to have


Read more
Wissen Technology
Pune, Mumbai, Bengaluru (Bangalore)
4 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
skill iconPython
skill iconKubernetes
Shell Scripting
SRE Engineer
+1 more

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
  • Globally present with offices US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. These include the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.



Job Description: 

Please find below details:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate shall be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams like Client Services, Product and Research, as well as Infrastructure/Technology and application development teams, to perform environment and application maintenance and support.

 

Resource's key Responsibilities


• Provide Tier 2/3 product technical support.

• Building software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources & coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).

• Must spend a minimum of one week a month on call to help with off-hour emergencies and maintenance activities.
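The "building software to help operations" bullet above often means small triage utilities; a minimal sketch of one (the log line format is an assumption for the example):

```python
from collections import Counter

def summarize_log(lines):
    """Count log lines by severity so on-call can triage quickly.

    Expects lines shaped like 'ERROR db timeout' (severity token first).
    """
    counts = Counter()
    for line in lines:
        severity = line.split(" ", 1)[0] if line.strip() else "EMPTY"
        counts[severity] += 1
    return counts


log = ["INFO start", "ERROR db timeout", "WARN slow query", "ERROR disk full"]
summary = summarize_log(log)
```

The same script, pointed at a tailed production log, gives an on-call engineer a first cut at whether an alert is noise or a real incident.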

 

Required skills and experience

• Bachelor's degree, Computer Science, Engineering or other similar concentration (BE/MCA)

• Master’s degree a plus

• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• Power shell is nice to have

• Software development skillsets in Java or Ruby.

• Worked upon any of the cloud platforms – GCP/Azure/AWS is nice to have




Read more
Gurugram
12 - 17 yrs
₹60L - ₹100L / yr
skill iconNodeJS (Node.js)
skill iconJava
People Management
Systems architecture
Scalability
+6 more

Director - Backend | Snapmint


Experience: 12+ years, Location: Gurgaon.


Founded by serial entrepreneurs from IIT Bombay, Snapmint is challenging the way banking is done by building the banking experience from the ground up. Our first product provides purchase financing at 0% interest to 300 million banked consumers in India who do not have credit cards, using instant credit scoring and advanced underwriting systems. We look at hundreds of variables, going well beyond traditional credit models. With real-time credit approval and seamless digital loan servicing and repayment technology, we are revolutionizing the way banking is done for today's smartphone-wielding Indian. https://snapmint.com/


Job Overview: As Director - Backend, you will lead a team of backend engineers, driving the development of scalable, reliable, and performant systems. You will work closely with product management, front-end engineers, and other cross-functional teams to deliver high-quality solutions while ensuring alignment with the company's technical and business goals. You will play a key role in coaching and mentoring engineers, promoting best practices, and helping to grow the backend engineering capabilities.


Key Responsibilities:

  • Lead, mentor, and manage a team of backend engineers, ensuring high-quality delivery and fostering a collaborative work environment.
  • Collaborate with product managers, engineers, and other stakeholders to define technical solutions and design scalable backend architectures.
  • Own the development and maintenance of backend systems, APIs, and services.
  • Drive technical initiatives, including infrastructure improvements, performance optimizations, and platform scalability.
  • Guide the team in implementing industry best practices for code quality, security, and performance.
  • Participate in code reviews, providing constructive feedback and maintaining high coding standards.
  • Promote agile methodologies and ensure the team adheres to sprint timelines and goals.
  • Develop and track key performance indicators (KPIs) to measure team productivity and system reliability.
  • Foster a culture of continuous learning, experimentation, and improvement within the backend engineering team.


Requirements:

  • Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
  • 12+ years of experience in backend development with a proven track record of leading engineering teams.
  • Strong experience with a backend language, i.e., Node.js.
  • Experience working with databases (SQL, NoSQL), caching systems, and RESTful APIs.
  • Familiarity with cloud platforms like AWS, GCP, or Azure and containerization technologies (e.g., Docker, Kubernetes).
  • Solid understanding of software development principles, version control, and CI/CD practices.
  • Excellent problem-solving skills and the ability to architect complex systems.
  • Strong leadership, communication, and interpersonal skills.
  • Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Sonali RajeshKumar
Posted by Sonali RajeshKumar
Bengaluru (Bangalore), Pune, Mumbai
4 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Reliability engineering
skill iconPython
Shell Scripting

Dear Candidate,


Greetings from Wissen Technology. 

We have an exciting Job opportunity for GCP SRE Engineer Professionals. Please refer to the Job Description below and share your profile if interested.   

 About Wissen Technology:

  • The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
  • Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
  • Our workforce consists of 1000+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
  • Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
  • Globally present with offices US, India, UK, Australia, Mexico, and Canada.
  • We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
  • Wissen Technology has been certified as a Great Place to Work®.
  • Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
  • Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
  • The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. These include the likes of Morgan Stanley, MSCI, State Street Corporation, Flipkart, Swiggy, Trafigura, and GE, to name a few.


Job Description: 

Please find below details:


Experience - 4+ Years

Location- Bangalore/Mumbai/Pune


Team Responsibilities

The successful candidate shall be part of the S&C – SRE Team. Our team provides tier 2/3 support to the S&C business. This position involves collaboration with client-facing teams like Client Services, Product and Research, as well as Infrastructure/Technology and application development teams, to perform environment and application maintenance and support.

 

Resource's key Responsibilities


• Provide Tier 2/3 product technical support.

• Building software to help operations and support activities.

• Manage system/software configurations and troubleshoot environment issues.

• Identify opportunities for optimizing system performance through changes in configuration or suggestions for development.

• Plan, document and deploy software applications on our Unix/Linux/Azure and GCP based systems.

• Collaborate with development and software testing teams throughout the release process.

• Analyze release and deployment processes to identify key areas for automation and optimization.

• Manage hardware and software resources & coordinate maintenance and planned downtimes with the infrastructure group across all environments (Production / Non-Production).

• Must spend a minimum of one week per month on call to help with off-hour emergencies and maintenance activities.

 

Required skills and experience

• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)

• Master’s degree is a plus

• 6-8 years’ experience in Production Support/ Application Management/ Application Development (support/ maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have


Estuate Software
Posted by Deekshith K Naidu
Hyderabad
5 - 12 yrs
₹5L - ₹35L / yr
Google Cloud Platform (GCP)
Apache Airflow
ETL
Python
BigQuery

Job Title: Data Engineer / Integration Engineer

 

Job Summary:

We are seeking a highly skilled Data Engineer / Integration Engineer to join our team. The ideal candidate will have expertise in Python, workflow orchestration, cloud platforms (GCP/Google BigQuery), big data frameworks (Apache Spark or similar), API integration, and Oracle EBS. The role involves designing, developing, and maintaining scalable data pipelines, integrating various systems, and ensuring data quality and consistency across platforms. Knowledge of Ascend.io is a plus.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and workflows.
  • Develop and optimize ETL/ELT processes using Python and workflow automation tools.
  • Implement and manage data integration between various systems, including APIs and Oracle EBS.
  • Work with Google Cloud Platform (GCP) or Google BigQuery (GBQ) for data storage, processing, and analytics.
  • Utilize Apache Spark or similar big data frameworks for efficient data processing.
  • Develop robust API integrations for seamless data exchange between applications.
  • Ensure data accuracy, consistency, and security across all systems.
  • Monitor and troubleshoot data pipelines, identifying and resolving performance issues.
  • Collaborate with data analysts, engineers, and business teams to align data solutions with business goals.
  • Document data workflows, processes, and best practices for future reference.
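As an illustrative sketch of the extract-transform-load pattern the responsibilities above describe, here is a minimal, pure-Python example. The record fields and transform rules are hypothetical; in a real pipeline the extract step would read from an API or Oracle EBS and the load step would write to BigQuery.

```python
# Minimal extract -> transform -> load sketch (hypothetical data).
def extract():
    # Stand-in for reading records from an API or Oracle EBS.
    return [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "3.25"}]

def transform(rows):
    # Cast amounts to float and drop malformed rows.
    out = []
    for r in rows:
        try:
            out.append({"id": r["id"], "amount": float(r["amount"])})
        except (KeyError, ValueError):
            continue  # a real pipeline would log and quarantine these
    return out

def load(rows, sink):
    # Stand-in for writing to a warehouse table (e.g. BigQuery).
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

In production, each of these three steps would typically be a separate task in an orchestrator such as Airflow, so failures can be retried independently.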

Required Skills & Qualifications:

  • Strong proficiency in Python for data engineering and workflow automation.
  • Experience with workflow orchestration tools (e.g., Apache Airflow, Prefect, or similar).
  • Hands-on experience with Google Cloud Platform (GCP) or Google BigQuery (GBQ).
  • Expertise in big data processing frameworks, such as Apache Spark.
  • Experience with API integrations (REST, SOAP, GraphQL) and handling structured/unstructured data.
  • Strong problem-solving skills and ability to optimize data pipelines for performance.
  • Experience working in an agile environment with CI/CD processes.
  • Strong communication and collaboration skills.

Preferred Skills & Nice-to-Have:

  • Experience with Ascend.io platform for data pipeline automation.
  • Knowledge of SQL and NoSQL databases.
  • Familiarity with Docker and Kubernetes for containerized workloads.
  • Exposure to machine learning workflows is a plus.

Why Join Us?

  • Opportunity to work on cutting-edge data engineering projects.
  • Collaborative and dynamic work environment.
  • Competitive compensation and benefits.
  • Professional growth opportunities with exposure to the latest technologies.

How to Apply:

Interested candidates can apply by sending their resume to [your email/contact].

 

This is a full-time role with our client.

Agency job
via eTalent Services by JaiPrakash Bharti
Tiruchirappalli, Chennai
5 - 10 yrs
₹15L - ₹20L / yr
Python
Google Cloud Platform (GCP)
BigQuery
Docker
Data Engineer

Senior Data Engineer

Experience: 5+ years

Chennai/ Trichy (Hybrid)


Type: Full-time

 

Skills: GCP + Airflow + BigQuery + Python + Docker

 

The Role

As a Senior Data Engineer, you will own new initiatives and design and build world-class platforms to measure and optimize ad performance. You will ensure industry-leading scalability and reliability of mission-critical systems processing billions of real-time transactions a day, applying state-of-the-art technologies, frameworks, and strategies to address complex challenges in Big Data processing and analytics. You will work closely with talented engineers across different time zones to build industry-first solutions.

 

What you’ll do

● Write solid code with a focus on high performance for services supporting high throughput and low latency

● Architect, design, and build big data processing platforms handling tens of TBs/Day, serve thousands of clients, and support advanced analytic workloads

● Provide meaningful and relevant feedback to junior developers and stay up to date with system changes

● Explore the technological landscape for new ways of producing, processing, and analyzing data to gain insights into both our users and our product features

● Design, develop, and test data-driven products, features, and APIs that scale

● Continuously improve the quality of deliverables and SDLC processes

● Operate production environments, investigate issues, assess their impact, and develop feasible solutions.

● Understand business needs and work with product owners to establish priorities 

● Bridge the gap between Business / Product requirements and technical details

● Work in multi-functional agile teams with end-to-end responsibility for product development and delivery

 

Who you are

● 3–5+ years of programming experience in object-oriented design and/or functional programming, in Python or a related language

● You love what you do, are passionate about crafting clean code, and have a solid engineering foundation.

● Deep understanding of distributed system technologies, standards, and protocols, with 2+ years of experience working with distributed systems such as Airflow, BigQuery, Spark, and the Kafka ecosystem (Kafka Connect, Kafka Streams, or Kinesis), and building data pipelines at scale.

● Excellent SQL and dbt query-writing skills, with strong data understanding

● Care about agile software processes, data-driven development, reliability, and responsible experimentation 

● Genuine desire to automate decision-making, processes, and workflows

● Experience working with orchestration tools like Airflow

● Good understanding of semantic layers and experience in tools like LookerML, Kube

● Excellent communication skills and a team player

● Google BigQuery or Snowflake

● Cloud environment, Google Cloud Platform 

● Container technologies - Docker / Kubernetes

● Ad-serving technologies and standards 

● Familiarity with AI tools like Cursor AI and GitHub Copilot.
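To make the real-time ad-measurement work above concrete, here is a toy, pure-Python sketch of tumbling-window aggregation, the basic pattern behind counting ad events per fixed time window. The event tuples and window size are hypothetical; at scale this logic would live in a streaming framework such as Kafka Streams or a BigQuery aggregation.

```python
from collections import defaultdict

def window_counts(events, window_s=60):
    """Count (timestamp, ad_id) events per fixed 60-second tumbling window."""
    counts = defaultdict(int)
    for ts, ad_id in events:
        # Align the timestamp down to the start of its window.
        window_start = ts - (ts % window_s)
        counts[(window_start, ad_id)] += 1
    return dict(counts)

# Hypothetical events: (unix-seconds offset, ad identifier).
events = [(3, "ad1"), (59, "ad1"), (61, "ad2"), (125, "ad1")]
result = window_counts(events)
# Windows: [0,60) -> ad1 x2; [60,120) -> ad2 x1; [120,180) -> ad1 x1
```

The same alignment trick (`ts - ts % window_s`) is how SQL engines bucket timestamps, e.g. BigQuery's `TIMESTAMP_TRUNC`.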

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
Kubernetes
Helm
Docker
Amazon Web Services (AWS)

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
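A recurring building block in the kind of automation framework described above is a poll-until-healthy helper that waits for a service (for example, a pod behind a Kubernetes Service) to come up before tests run. Here is a minimal sketch; the fake probe stands in for a real HTTP or readiness check and is purely illustrative.

```python
import time

def wait_until_healthy(probe, timeout_s=5.0, interval_s=0.1):
    """Repeatedly call `probe` (a zero-argument callable returning True
    when healthy) until it succeeds or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False

# Fake probe that becomes healthy on its third call (hypothetical).
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

ok = wait_until_healthy(fake_probe, timeout_s=2.0, interval_s=0.01)
```

In a Pytest suite this would typically be wrapped in a fixture so every integration test shares one readiness gate instead of ad-hoc sleeps.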
Cymetrix Software
Posted by Netra Shettigar
Remote only
9 - 15 yrs
₹15L - ₹36L / yr
Google Cloud Platform (GCP)
Databricks
Architecture
BigQuery
Google Cloud Storage

Experience Level

10+ years of experience in data engineering, with at least 3–5 years providing architectural guidance, leading teams, and standardizing enterprise data solutions. Must have deep expertise in Databricks, GCP, and modern data architecture patterns.


Key Responsibilities

- Provide architectural guidance and define standards for data engineering implementations.

- Lead and mentor a team of data engineers, fostering best practices in design, development, and operations.

- Own and drive improvements in performance, scalability, and reliability of data pipelines and platforms.

- Standardize data architecture patterns and reusable frameworks across multiple projects.

- Collaborate with cross-functional stakeholders (Product, Analytics, Business) to align data solutions with organizational goals.

- Design data models, schemas, and dataflows for efficient storage, querying, and analytics.

- Establish and enforce strong data governance practices, ensuring security, compliance, and data quality.

- Work closely with governance teams to implement lineage, cataloging, and access control in compliance with standards.

- Design and optimize ETL pipelines using Databricks, PySpark, and SQL.

- Ensure robust CI/CD practices are implemented for data workflows, leveraging Terraform and modern DevOps practices.

- Leverage GCP services such as Cloud Functions, Cloud Run, BigQuery, Pub/Sub, and Dataflow for building scalable solutions.

- Evaluate and adopt emerging technologies, with exposure to Gen AI and advanced analytics capabilities.


Qualifications & Skills

- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.

- Extensive hands-on experience with Databricks (Autoloader, DLT, Delta Lake, CDF) and PySpark.

- Expertise in SQL and advanced query optimization.

- Proficiency in Python for data engineering and automation tasks.

- Strong expertise with GCP services: Cloud Functions, Cloud Run, BigQuery, Pub/Sub, Dataflow, GCS.

- Deep understanding of CI/CD pipelines, infrastructure-as-code (Terraform), and DevOps practices.

- Proven ability to provide architectural guidance and lead technical teams.

- Experience designing data models, schemas, and governance frameworks.

- Knowledge of Gen AI concepts and ability to evaluate practical applications.

- Excellent communication, leadership, and stakeholder management skills.
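The data-governance and data-quality responsibilities above often boil down to a schema gate applied before load. Below is a minimal, pure-Python sketch of such a check; the column names and types are hypothetical, and a governed Databricks pipeline would express the same idea with Delta Live Tables expectations or a similar mechanism.

```python
# Schema/data-quality gate sketch: split rows into valid and invalid
# based on an expected column-to-type mapping (hypothetical schema).
EXPECTED = {"order_id": int, "amount": float, "region": str}

def validate(rows, schema=EXPECTED):
    good, bad = [], []
    for row in rows:
        # A row passes if it has exactly the expected columns
        # and every value has the expected type.
        if set(row) == set(schema) and all(
            isinstance(row[c], t) for c, t in schema.items()
        ):
            good.append(row)
        else:
            bad.append(row)
    return good, bad

rows = [
    {"order_id": 1, "amount": 9.99, "region": "EU"},
    {"order_id": "2", "amount": 5.0, "region": "US"},  # wrong type for order_id
]
good, bad = validate(rows)
```

Quarantining `bad` rows to a side table, rather than failing the whole load, is a common governance pattern because it preserves lineage for later investigation.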


