
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

Ciroos

Agency job
via Uplers by Sainayan Rai
Gurugram
5 - 8 yrs
₹50L - ₹70L / yr
Kubernetes
Go Programming (Golang)
Rust
C++
Amazon Web Services (AWS)
+4 more

What You'll Work On


• Design and develop a next-generation scalable observability platform for modern cloud-native and hybrid infrastructures that works in tandem with AI agents.

• Create intelligent AI agents to analyze logs, traces, and metrics in real time, delivering automated insights and remediation.

• Build scalable and fault-tolerant AI agent frameworks.

• Engineer and optimize large-scale analytics pipelines to process high-velocity telemetry data.

• Build resilient distributed systems with high reliability, performance, and fault tolerance.

• Implement and fine-tune LLMs for natural language querying and automated troubleshooting.

• Partner with ML engineers to streamline AI model deployment and management.


What We're Looking For


• Strong programming skills in Python and Golang (experience with Rust is a plus)

• Track record of building distributed systems and large-scale analytics pipelines

• Hands-on experience with cloud infrastructure (AWS, GCP, or Azure) and Kubernetes

• Deep understanding of observability technologies (Prometheus, OpenTelemetry, Grafana, Elastic, etc.)

• Knowledge of LLMs, AI agents, and agent frameworks like LangChain or AutoGen is a plus

• Experience with stream processing and real-time data processing frameworks

• Proficiency in database technologies (SQL & NoSQL, Clickhouse, Time-Series DBs)

• Bachelor's degree in Computer Science, Engineering, or related field (Master's/PhD is a plus)
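The real-time metric analysis described above can be illustrated with a minimal sketch: a rolling z-score detector over a metric stream, in plain Python. This is only an illustration of the idea (names and thresholds are hypothetical), not Ciroos's actual platform or agent code:

```python
from collections import deque
import statistics

def detect_anomalies(stream, window=20, threshold=3.0):
    """Flag indices whose value sits more than `threshold` rolling
    standard deviations away from the rolling mean."""
    buf = deque(maxlen=window)
    flagged = []
    for t, value in enumerate(stream):
        if len(buf) >= 5:  # wait for a few points before judging
            mu = statistics.fmean(buf)
            sigma = statistics.pstdev(buf) or 1e-9  # avoid divide-by-zero on flat data
            if abs(value - mu) / sigma > threshold:
                flagged.append(t)
        buf.append(value)
    return flagged
```

A production observability pipeline would run this kind of logic continuously over high-velocity telemetry rather than a finished list, but the statistical core is the same.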

Matchmaking platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai
2 - 5 yrs
₹15L - ₹28L / yr
Data Science
Python
Natural Language Processing (NLP)
MySQL
Machine Learning (ML)
+15 more

Review Criteria

  • Strong Data Scientist / Machine Learning / AI Engineer profile
  • 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
  • Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
  • Hands-on experience in a minimum of 2 of the following use cases: recommendation systems, image data, fraud/risk detection, price modelling, propensity models
  • Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
  • Experience productionizing ML models through APIs/CI/CD/Docker and working on AWS or GCP environments
  • Preferred (Company) – Must be from product companies

 

Job Specific Criteria

  • CV Attachment is mandatory
  • What's your current company?
  • Which use cases do you have hands-on experience with?
  • Are you okay with the Mumbai location (if the candidate is from outside Mumbai)?
  • Reason for change (if candidate has been in current company for less than 1 year)?
  • Reason for hike (if greater than 25%)?

 

Role & Responsibilities

  • Partner with Product to spot high-leverage ML opportunities tied to business metrics.
  • Wrangle large structured and unstructured datasets; build reliable features and data contracts.
  • Build and ship models to:
  • Enhance customer experiences and personalization
  • Boost revenue via pricing/discount optimization
  • Power user-to-user discovery and ranking (matchmaking at scale)
  • Detect and block fraud/risk in real time
  • Score conversion/churn/acceptance propensity for targeted actions
  • Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
  • Design and run A/B tests with guardrails.
  • Build monitoring for model/data drift and business KPIs


Ideal Candidate

  • 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
  • Proven, hands-on success in at least two (preferably 3–4) of the following:
  • Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
  • Fraud/risk detection (severe class imbalance, PR-AUC)
  • Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
  • Propensity models (payment/churn)
  • Programming: strong Python and SQL; solid git, Docker, CI/CD.
  • Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
  • ML breadth: recommender systems, NLP or user profiling, anomaly detection.
  • Communication: clear storytelling with data; can align stakeholders and drive decisions.
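As a flavour of the propensity-modelling work listed above, here is a minimal logistic-regression sketch fit by plain gradient descent in pure Python. In practice one would use a library such as scikit-learn; the function names and toy data here are hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_propensity(X, y, lr=0.1, epochs=1000):
    """Fit logistic regression weights by gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss with respect to the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Probability that this example belongs to the positive class."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
```

The same shape (features in, calibrated probability out) underlies churn, payment, and acceptance propensity scores used for targeted actions.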



Codemonk

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
7+ yrs
Up to ₹35L / yr (varies)
NodeJS (Node.js)
Python
Google Cloud Platform (GCP)
RESTful APIs
SQL
+4 more

Like us, you'll be deeply committed to delivering impactful outcomes for customers.

  • 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
  • Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
  • Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
  • Experience writing batch/cron jobs using Python and Shell scripting.
  • Experience in web application development using JavaScript and JavaScript libraries.
  • Basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
  • Experience/familiarity with RDBMS and NoSQL database technologies such as MySQL, MongoDB, Redis, and ElasticSearch.
  • Understanding of code versioning tools such as Git.
  • Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
  • Experience with JS-based build/package tools like Grunt, Gulp, Bower, or Webpack.
Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Chennai, Kochi (Cochin), Pune, Trivandrum, Thiruvananthapuram
5 - 7 yrs
₹10L - ₹25L / yr
Google Cloud Platform (GCP)
Jenkins
CI/CD
Docker
Kubernetes
+15 more

Job Description

We are seeking a highly skilled Site Reliability Engineer (SRE) with strong expertise in Google Cloud Platform (GCP) and CI/CD automation to lead cloud infrastructure initiatives. The ideal candidate will design and implement robust CI/CD pipelines, automate deployments, ensure platform reliability, and drive continuous improvement in cloud operations and DevOps practices.


Key Responsibilities:

  • Design, develop, and optimize end-to-end CI/CD pipelines using Jenkins, with a strong focus on Declarative Pipeline syntax.
  • Automate deployment, scaling, and management of applications across various GCP services including GKE, Cloud Run, Compute Engine, Cloud SQL, Cloud Storage, VPC, and Cloud Functions.
  • Collaborate closely with development and DevOps teams to ensure seamless integration of applications into the CI/CD pipeline and GCP environment.
  • Implement and manage monitoring, logging, and alerting solutions to maintain visibility, reliability, and performance of cloud infrastructure and applications.
  • Ensure compliance with security best practices and organizational policies across GCP environments.
  • Document processes, configurations, and architectural decisions to maintain operational transparency.
  • Stay updated with the latest GCP services, DevOps, and SRE best practices to enhance infrastructure efficiency and reliability.


Mandatory Skills:

  • Google Cloud Platform (GCP) – Hands-on experience with core GCP compute, networking, and storage services.
  • Jenkins – Expertise in Declarative Pipeline creation and optimization.
  • CI/CD – Strong understanding of automated build, test, and deployment workflows.
  • Solid understanding of SRE principles including automation, scalability, observability, and system reliability.
  • Familiarity with containerization and orchestration tools (Docker, Kubernetes – GKE).
  • Proficiency in scripting languages such as Shell, Python, or Groovy for automation tasks.
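The scripting-for-automation requirement often comes down to small reliability helpers. A hedged sketch of one common SRE pattern, retry with exponential backoff, in plain Python (`with_retries` and its parameters are illustrative names, not part of any tool named above):

```python
import time

def with_retries(op, attempts=4, base_delay=0.5, sleep=time.sleep):
    """Run `op`, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the caller
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
```

Injecting `sleep` keeps the helper testable; in a pipeline step or Groovy equivalent the same pattern guards flaky deploy and API calls.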


Preferred Skills:

  • Experience with Terraform, Ansible, or Cloud Deployment Manager for Infrastructure as Code (IaC).
  • Exposure to monitoring and observability tools like Stackdriver, Prometheus, or Grafana.
  • Knowledge of multi-cloud or hybrid environments (AWS experience is a plus).
  • GCP certification (Professional Cloud DevOps Engineer / Cloud Architect) preferred.


Skills

GCP, Jenkins, CI/CD, AWS



 

******

Notice period - 0 to 15 days only

Location – Pune, Trivandrum, Kochi, Chennai

Lightningrowth

Posted by Reagan Ahlquist
Remote only
3 - 6 yrs
$30K - $50K / yr
Python
RESTful APIs
SQL
English Proficiency
Facebook API
+3 more

Marketing Data Engineer (Remote)


Full-Time Contractor

Lightningrowth is a U.S.-based marketing company that specializes in Facebook lead generation for home-remodeling businesses. Although all ads run on Facebook, our clients use many different CRMs — which means we must manage, clean, and sync large volumes of lead data across multiple systems.

We’re hiring a Marketing Data Engineer to maintain and improve the Python scripts and data pipelines that keep everything running smoothly.

This is a remote role ideal for a mid-level engineer with strong Python, API, SQL, and communication skills.


What You’ll Do

  • Maintain and improve Python scripts for:
  • GoHighLevel (GHL) API
  • Facebook/Meta Marketing API
  • Build new API integrations for client CRMs and software tools
  • Extract, clean, and transform data before loading into BigQuery
  • Write and update SQL used for dashboards and reporting
  • Ensure data accuracy and monitor automated pipeline reliability
  • Help optimize automation flows (Make.com or similar)
  • Document your work clearly and communicate updates to the team
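The extract-clean-transform step above might look like the following minimal sketch. The field names (`email`, `phone`, `full_name`) are hypothetical, not a real GHL or Meta lead schema:

```python
def clean_lead(raw):
    """Normalize one raw lead record before loading it downstream
    (e.g. into BigQuery). Field names are illustrative only."""
    return {
        "email": raw.get("email", "").strip().lower(),
        # keep digits only so phone formats from different CRMs match
        "phone": "".join(ch for ch in raw.get("phone", "") if ch.isdigit()),
        # collapse runs of whitespace and normalize capitalization
        "full_name": " ".join(raw.get("full_name", "").split()).title(),
    }
```

Deterministic normalizers like this are easy to unit-test, which matters when the same lead must sync consistently across multiple client CRMs.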

Required Skills

  • Strong Python (requests, pandas, JSON handling)
  • Hands-on experience with REST APIs (auth, pagination, rate limits)
  • Solid SQL skills (BigQuery experience preferred)
  • Experience with ETL / data pipelines
  • Ability to build API integrations from documentation
  • Good spoken and written English communication
  • Comfortable working independently in a fully remote setup
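REST pagination, one of the required skills, can be sketched generically: a helper that walks a cursor-paginated endpoint until the cursor runs out. `fetch_page` is an injected callable (an assumption for illustration), so the same loop works against any API client that returns `(items, next_cursor)`:

```python
def paginate(fetch_page):
    """Yield every item behind a cursor-paginated endpoint.
    `fetch_page(cursor)` must return (items, next_cursor), with
    next_cursor None on the last page."""
    cursor = None
    while True:
        items, cursor = fetch_page(cursor)
        yield from items
        if cursor is None:
            return
```

Rate-limit handling (sleep or backoff between calls) would slot inside the loop; keeping the HTTP details inside `fetch_page` makes the traversal itself trivial to test.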

Nice to Have

  • Experience with GoHighLevel or CRM APIs
  • Familiarity with:
  • Google BigQuery
  • Google Cloud Functions / Cloud Run
  • Make.com automations
  • Looker Studio dashboards
  • Experience optimizing large datasets or API usage

Experience Level

3–6 years of hands-on data engineering, backend Python work, or API integrations.

Compensation

  • $2,500 – $4,000 USD per month (depending on experience)

How to Apply

Please include:

  • Your resume
  • Links to any Python/API/SQL samples (GitHub, snippets, etc.)
  • A short note on why you’re a good fit

Qualified candidates will complete a short Python + API + SQL test.

Upswing

Agency job
via Talentfoxhr by ANMOL SINGH
Pune
2 - 5 yrs
₹5L - ₹7.5L / yr
Python
Google Cloud Platform (GCP)
FastAPI
RabbitMQ
Apache Kafka
+7 more

🚀 We’re Hiring: Python Developer – Pune 🚀


Are you a skilled Python Developer looking to work on high-performance, scalable backend systems?

If you’re passionate about building robust applications and working with modern technologies — this opportunity is for you! 💼✨


📍 Location: Pune

🏢 Role: Python Backend Developer

🕒 Type: Full-Time | Permanent


🔍 What We’re Looking For:

We need a strong backend professional with experience in:

🐍 Python (Advanced)

⚡ FastAPI

🛢️ MongoDB & Postgres

📦 Microservices Architecture

📨 Message Brokers (RabbitMQ / Kafka)

🌩️ Google Cloud Platform (GCP)

🧪 Unit Testing & TDD

🔐 Backend Security Standards

🔧 Git & Project Collaboration


🛠️ Key Responsibilities:

✔ Build and optimize Python backend services using FastAPI

✔ Design scalable microservices

✔ Manage and tune MongoDB & Postgres

✔ Implement message brokers for async workflows

✔ Drive code reviews and uphold coding standards

✔ Mentor team members

✔ Manage cloud deployments on GCP

✔ Ensure top-notch performance, scalability & security

✔ Write robust unit tests and follow TDD
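The message-broker responsibility above can be illustrated with stdlib pieces: a producer and a consumer decoupled through a queue, which is the same shape RabbitMQ or Kafka workflows take at larger scale. This sketch uses `queue.Queue` as a stand-in broker, not the real thing:

```python
import queue
import threading

tasks = queue.Queue()
results = []

def worker():
    # Consume messages until a None sentinel arrives, like a simple consumer.
    while True:
        message = tasks.get()
        if message is None:
            break
        results.append(message.upper())  # stand-in for real async work
        tasks.task_done()

consumer = threading.Thread(target=worker)
consumer.start()
for message in ["welcome email", "invoice pdf"]:
    tasks.put(message)       # producer side: enqueue and move on
tasks.join()                 # wait until every queued message is processed
tasks.put(None)              # tell the consumer to stop
consumer.join()
```

A real broker adds persistence, acknowledgements, and fan-out across processes, but the producer/consumer decoupling is identical.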


🎓 Qualifications:

➡ 2–4 years of backend development experience

➡ Strong hands-on Python + FastAPI

➡ Experience with microservices, DB management & cloud tech

➡ Knowledge of Agile/Scrum

➡ Bonus: Docker, Kubernetes, CI/CD


Service Co

Agency job
via Vikash Technologies by Rishika Teja
Noida, Delhi
6 - 8 yrs
₹15L - ₹18L / yr
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
Docker
Kubernetes
+2 more

Hands-on experience with Infrastructure-as-Code tools like Terraform, Ansible, Puppet, Chef, CloudFormation, etc.


Expertise in any Cloud (AWS, Azure, GCP)


Good understanding of version control (Git, GitLab, GitHub)


Hands-on experience in Container Infrastructure (Docker, Kubernetes)



Ensuring availability, performance, security, and scalability of production systems


Hands on experience with Infrastructure Automation tools like Chef/Puppet/Ansible, Terraform, ARM, Cloud Formation


Hands-on experience with artifact repositories (Nexus, JFrog Artifactory)


Hands-on experience with CI/CD tools on-premises/cloud (Jenkins, CircleCI, etc.)


Hands-on experience with monitoring, logging, and security (CloudWatch, CloudTrail, Log Analytics; hosted tools such as ELK, EFK, Splunk, Datadog, Prometheus)


Hands-on experience with scripting languages like Python, Ant, Bash, and Shell


Hands-on experience in designing pipelines & pipelines as code.


Hands-on experience in end-to-end deployment process & strategy


Hands-on experience of GCP/AWS/AZURE with a good understanding of computing, networks, storage, IAM, Security, and integration services

DeepIntent

Posted by Amruta Mundale
Pune
4 - 8 yrs
Best in industry
Java
SQL
Spring Boot
Apache
Amazon Web Services (AWS)
+1 more

What You’ll Do:

  • Setting up formal data practices for the company.
  • Building and running super stable and scalable data architectures.
  • Making it easy for folks to add and use new data with self-service pipelines.
  • Getting DataOps practices in place.
  • Designing, developing, and running data pipelines to help out Products, Analytics, data scientists and machine learning engineers.
  • Creating simple, reliable data storage, ingestion, and transformation solutions that are a breeze to deploy and manage.
  • Writing and managing reporting APIs for different products.
  • Implementing different methodologies for different reporting needs.
  • Teaming up with all sorts of people – business folks, other software engineers, machine learning engineers, and analysts.

Who You Are:

  • Bachelor’s degree in engineering (CS / IT) or equivalent degree from a well-known Institute / University.
  • 3.5+ years of experience in building and running data pipelines for tons of data.
  • Experience with public clouds like GCP or AWS.
  • Experience with Apache open-source projects like Spark, Druid, Airflow, and big data databases like BigQuery, Clickhouse.
  • Experience making data architectures that are optimised for both performance and cost.
  • Good grasp of software engineering, DataOps, data architecture, Agile, and DevOps.
  • Proficient in SQL, Java, Spring Boot, Python, and Bash.
  • Good communication skills for working with technical and non-technical people.
  • Someone who thinks big, takes chances, innovates, dives deep, gets things done, hires and develops the best, and is always learning and curious.


Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad, Chennai, Kochi (Cochin), Bengaluru (Bangalore), Trivandrum, Thiruvananthapuram
12 - 15 yrs
₹20L - ₹40L / yr
Java
DevOps
CI/CD
ReAct (Reason + Act)
React.js
+6 more

Role Proficiency:

Leverage expertise in a technology area (e.g. Java, Microsoft technologies, or Mainframe/legacy) to design system architecture.


Knowledge Examples:

  • Domain/Industry Knowledge: Basic knowledge of standard business processes within the relevant industry vertical and customer business domain.
  1. Technology Knowledge: Demonstrates working knowledge of more than one technology area related to own area of work (e.g. Java/JEE 5+, Microsoft technologies, or Mainframe/legacy), the customer technology landscape, and multiple frameworks (Struts, JSF, Hibernate, etc.) within one technology area and their applicability. Considers low-level details such as data structures, algorithms, APIs and libraries, best practices for one technology stack, configuration parameters for successful deployment, and configuration parameters for high performance within one technology stack.
  2. Technology Trends: Demonstrates working knowledge of technology trends related to one technology stack and awareness of technology trends related to at least two technologies.
  3. Architecture Concepts and Principles: Demonstrates working knowledge of standard architectural principles, models, and patterns (e.g. SOA, N-Tier, EDA) and perspectives (e.g. TOGAF, Zachman); integration architecture, including input and output components, existing integration methodologies and topologies, and source and external systems; non-functional requirements; data architecture; deployment architecture; and architecture governance.
  4. Design Patterns, Tools and Principles: Applies specialized knowledge of design patterns, design principles, practices, and design tools. Knowledge of documenting designs using tools like EA.
  5. Software Development Process, Tools & Techniques: Demonstrates thorough knowledge of the end-to-end SDLC process (Agile and Traditional), SDLC methodology, programming principles, tools, and best practices (refactoring, code packaging, etc.).
  6. Project Management Tools and Techniques: Demonstrates working knowledge of project management processes (such as project scoping, requirements management, change management, risk management, quality assurance, and disaster management) and tools (MS Excel, MPP, client-specific time sheets, capacity planning tools, etc.).
  7. Project Management: Demonstrates working knowledge of project governance frameworks and the RACI matrix, and basic knowledge of project metrics such as utilization, onsite-to-offshore ratio, span of control, fresher ratio, SLAs, and quality metrics.
  8. Estimation and Resource Planning: Working knowledge of estimation and resource planning techniques (e.g. the TCP estimation model) and company-specific estimation templates.
  9. Knowledge Management: Working knowledge of industry knowledge management tools (such as portals and wikis) and of company and customer knowledge management tools and techniques (such as workshops, classroom training, self-study, application walkthroughs, and reverse KT).
  10. Technical Standards, Documentation & Templates: Demonstrates working knowledge of various document templates and standards (such as business blueprints, design documents, and test specifications).
  11. Requirement Gathering and Analysis: Demonstrates working knowledge of requirements gathering for non-functional requirements; analysis for functional and non-functional requirements; analysis tools (such as functional flow diagrams, activity diagrams, blueprints, and storyboards); techniques (business analysis, process mapping, etc.); and requirements management tools (e.g. MS Excel); with basic knowledge of functional requirements gathering. Specifically, identifies architectural concerns and documents them as part of IT requirements, including NFRs.
  12. Solution Structuring: Demonstrates working knowledge of service offerings and products.


Additional Comments:

Looking for a Senior Java Architect with 12+ years of experience. Key responsibilities include:

• Excellent technical background and end-to-end architecture skills to design and implement scalable, maintainable, and high-performing systems, integrating front-end technologies with back-end services.

• Collaborate with front-end teams to architect React-based user interfaces that are robust, responsive, and aligned with the overall technical architecture.

• Expertise in cloud-based applications on Azure, leveraging key Azure services.

• Lead the adoption of DevOps practices, including CI/CD pipelines, automation, monitoring and logging to ensure reliable and efficient deployment cycles.

• Provide technical leadership to development teams, guiding them in building solutions that adhere to best practices, industry standards and customer requirements.

• Conduct code reviews to maintain high quality code and collaborate with team to ensure code is optimized for performance, scalability and security.

• Collaborate with stakeholders to define requirements and deliver technical solutions aligned with business goals.

• Excellent communication skills

• Mentor team members providing guidance on technical challenges and helping them grow their skill set.

• Good to have experience in GCP and retail domain.

 

Skills: DevOps, Azure, Java


Must-Haves

Java (12+ years), React, Azure, DevOps, Cloud Architecture

Strong Java architecture and design experience.

Expertise in Azure cloud services.

Hands-on experience with React and front-end integration.

Proven track record in DevOps practices (CI/CD, automation).

Notice period - 0 to 15 days only

Location: Hyderabad, Chennai, Kochi, Bangalore, Trivandrum

Excellent communication and leadership skills.

IndArka Energy Pvt Ltd

Posted by Mita Hemant
Bengaluru (Bangalore)
4 - 5 yrs
₹15L - ₹18L / yr
Microsoft Windows Azure
CI/CD
Scripting language
Docker
Kubernetes
+3 more

About Us

At Arka Energy, we're redefining how renewable energy is experienced and adopted in homes. Our focus is on developing next-generation residential solar energy solutions through a unique combination of custom product design, intuitive simulation software, and high-impact technology. With engineering teams in Bangalore and the Bay Area, we’re committed to building innovative products that transform rooftops into smart energy ecosystems.

Our flagship product is a 3D simulation platform that models rooftops and commercial sites, allowing users to design solar layouts and generate accurate energy estimates — streamlining the residential solar design process like never before.

 

What We're Looking For

We're seeking a Senior DevOps Engineer who will be responsible for managing and automating cloud infrastructure and services, ensuring seamless integration and deployment of applications, and maintaining high availability and reliability. You will work closely with development and operations teams to streamline processes and enhance productivity.

Key Responsibilities

  • Design and implement CI/CD pipelines using Azure DevOps.
  • Automate infrastructure provisioning and configuration in the Azure cloud environment.
  • Monitor and manage system health, performance, and security.
  • Collaborate with development teams to ensure smooth and secure deployment of applications.
  • Troubleshoot and resolve issues related to deployment and operations.
  • Implement best practices for configuration management and infrastructure as code.
  • Maintain documentation of processes and solutions.

 

Requirements

  • Total relevant experience of 4 to 5 years.
  • Proven experience as a DevOps Engineer, specifically with Azure.
  • Experience with CI/CD tools and practices.
  • Strong understanding of infrastructure as code (IaC) using tools like Terraform or ARM templates.
  • Knowledge of scripting languages such as PowerShell or Python.
  • Familiarity with containerization technologies like Docker and Kubernetes.
  • Good to have – knowledge on AWS, Digital Ocean, GCP
  • Excellent troubleshooting and problem-solving skills
  • High ownership, self-starter attitude, and ability to work independently
  • Strong aptitude and reasoning ability with a growth mindset

 

Nice to Have

  • Experience working in a SaaS or product-driven startup
  • Familiarity with the solar industry (preferred but not required)

Technology, Information and Internet Company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹65L / yr
Data Structures
CI/CD
Microservices
Architecture
Cloud Computing
+19 more

Required Skills: CI/CD Pipeline, Data Structures, Microservices, Determining overall architectural principles, frameworks and standards, Cloud expertise (AWS, GCP, or Azure), Distributed Systems


Criteria:

  • Candidate must have 6+ years of backend engineering experience, with 1–2 years leading engineers or owning major systems.
  • Must be strong in one core backend language: Node.js, Go, Java, or Python.
  • Deep understanding of distributed systems, caching, high availability, and microservices architecture.
  • Hands-on experience with AWS/GCP, Docker, Kubernetes, and CI/CD pipelines.
  • Strong command over system design, data structures, performance tuning, and scalable architecture
  • Ability to partner with Product, Data, Infrastructure, and lead end-to-end backend roadmap execution.


Description

What This Role Is All About

We’re looking for a Backend Tech Lead who’s equally obsessed with architecture decisions and clean code, someone who can zoom out to design systems and zoom in to fix that one weird memory leak. You’ll lead a small but sharp team, drive the backend roadmap, and make sure our systems stay fast, lean, and battle-tested.

 

What You’ll Own

● Architect backend systems that handle India-scale traffic without breaking a sweat.

● Build and evolve microservices, APIs, and internal platforms that our entire app depends on.

● Guide, mentor, and uplevel a team of backend engineers—be the go-to technical brain.

● Partner with Product, Data, and Infra to ship features that are reliable and delightful.

● Set high engineering standards—clean architecture, performance, automation, and testing.

● Lead discussions on system design, performance tuning, and infra choices.

● Keep an eye on production like a hawk: metrics, monitoring, logs, uptime.

● Identify gaps proactively and push for improvements instead of waiting for fires.

 

What Makes You a Great Fit

● 6+ years of backend experience; 1–2 years leading engineers or owning major systems.

● Strong in one core language (Node.js / Go / Java / Python) — pick your sword.

● Deep understanding of distributed systems, caching, high-availability, and microservices.

● Hands-on with AWS/GCP, Docker, Kubernetes, CI/CD pipelines.

● You think data structures and system design are not interviews — they’re daily tools.

● You write code that future-you won’t hate.

● Strong communication and a let’s figure this out attitude.
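One classic distributed-systems and caching building block implied above is consistent hashing. Here is a minimal, illustrative ring in Python (virtual nodes smooth out the key distribution; `HashRing` is a hypothetical name, not a library API):

```python
import bisect
import hashlib

class HashRing:
    """Consistent-hash ring: keys map to nodes, and removing a node
    only remaps the keys that were on that node."""

    def __init__(self, nodes, vnodes=100):
        # Place vnodes points per node on the ring, sorted by hash.
        self._ring = sorted(
            (self._digest(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _digest(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # First ring point clockwise from the key's hash (wrap around).
        idx = bisect.bisect(self._hashes, self._digest(key)) % len(self._hashes)
        return self._ring[idx][1]
```

With a naive `hash(key) % n` scheme, removing one node remaps almost every key; with the ring, only roughly `1/n` of the keys move, which is why cache clusters use it.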

 

Bonus Points If You Have

● Built or scaled consumer apps with millions of DAUs.

● Experimented with event-driven architecture, streaming systems, or real-time pipelines.

● Love startups and don’t mind wearing multiple hats.

● Experience on logging/monitoring tools like Grafana, Prometheus, ELK, OpenTelemetry.

 

Why company Might Be Your Best Move

● Work on products used by real people every single day.

● Ownership from day one—your decisions will shape our core architecture.

● No unnecessary hierarchy; direct access to founders and senior leadership.

● A team that cares about quality, speed, and impact in equal measure.

● Build for Bharat — complex constraints, huge scale, real impact.


Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for professionals with a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in Computer Science / IT or Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

 

******

Notice period - 0 to 15 days (Max 30 Days)

Educational Qualifications: BE/B.Tech or equivalent in Computer Science / IT or Data Science

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings - 8-hour window between 7:30 PM IST and 4:30 AM IST

Forbes Advisor

Posted by Nikita Sinha
Chennai
11 - 16 yrs
Up to ₹50L / yr (varies)
Google Webmaster Tools
CI/CD
Cloud Computing
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

A DevSecOps Staff Engineer integrates security into DevOps practices, designing secure CI/CD pipelines, building and automating secure cloud infrastructure, and ensuring compliance across development, operations, and security teams.


Responsibilities

• Design, build, and maintain secure CI/CD pipelines utilizing DevSecOps principles and practices to increase automation and reduce human involvement in the process

• Integrate SAST, DAST, SCA, and similar tools within pipelines to enable automated application building, testing, securing, and deployment

• Implement security controls for cloud platforms (AWS, GCP), including IAM, container security (EKS/ECS), and data encryption for services like S3 or BigQuery

• Automate vulnerability scanning, monitoring, and compliance processes by collaborating with DevOps and Development teams to minimize risks in deployment pipelines

• Suggest architecture and process improvements

• Review cloud deployment architectures and implement required security controls

• Mentor other engineers on security practices and processes
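As an illustration of the automated compliance checks described above, a pipeline gate might scan IAM policy documents for over-broad grants before deployment. This is a simplified, hypothetical sketch in plain Python, not any specific tool's API:

```python
# Illustrative pipeline gate: fail the build if any IAM-style policy
# statement grants Action "*" on Resource "*" (a classic misconfiguration).
def find_wildcard_grants(policy: dict) -> list[str]:
    """Return the Sids of Allow statements granting '*' on '*'."""
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if stmt.get("Effect") == "Allow" and "*" in actions and "*" in resources:
            findings.append(stmt.get("Sid", "<no-sid>"))
    return findings

# Hypothetical policy under review.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "ReadLogs", "Effect": "Allow",
         "Action": "logs:GetLogEvents", "Resource": "*"},
        {"Sid": "AdminAll", "Effect": "Allow",
         "Action": "*", "Resource": "*"},
    ],
}

print("wildcard grants:", find_wildcard_grants(policy))
```

A real pipeline would wire a check like this (or a scanner such as Prowler) into a CI step and fail the job on non-empty findings.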


Requirements

• Bachelor's degree, preferably in CS or a related field, or equivalent experience

• 10+ years of overall industry experience, with the AWS Certified Security – Specialty certification

• Must have implementation experience using security tools and processes related to SAST, DAST, and Pen Testing

• AWS-specific: 5+ years’ experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, CloudWatch) to develop and maintain an AWS-based cloud solution, with an emphasis on best-practice cloud security

• Experienced with the CI/CD tool chain (GitHub Actions, Packages, Jenkins, etc.)

• Passionate about solving security challenges and staying informed of available and emerging security threats and security technologies

• Must be familiar with the OWASP Top 10 Security Risks and Controls

• Good skills in one or more scripting languages: Python, Bash

• Good knowledge of Kubernetes, Docker Swarm, or other cluster management software

• Willing to work in shifts as required


Good to Have

• AWS Certified DevOps Engineer

• Observability: Experience with system monitoring tools (e.g. CloudWatch, New Relic, etc.)

• Experience with Terraform/Ansible/Chef/Puppet

• Operating Systems: Windows and Linux system administration.


Perks:

● Day off on the 3rd Friday of every month (one long weekend each month)

● Monthly Wellness Reimbursement Program to promote health and well-being

● Monthly Office Commutation Reimbursement Program

● Paid paternity and maternity leaves

Kuku FM
Bengaluru (Bangalore)
5 - 12 yrs
₹30L - ₹60L / yr
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

We're seeking an experienced Engineer to join our engineering team, handling massive-scale data processing and analytics infrastructure that supports over 1B daily events, 3M+ DAU, and 50k+ hours of content. The ideal candidate will bridge the gap between raw data collection and actionable insights, while supporting our ML initiatives.

Key Responsibilities

  • Lead and scale the Infrastructure Pod, setting technical direction for data, platform, and DevOps initiatives.
  • Architect and evolve our cloud infrastructure to support 1B+ daily events — ensuring reliability, scalability, and cost efficiency.
  • Collaborate with Data Engineering and ML pods to build high-performance pipelines and real-time analytics systems.
  • Define and implement SLOs, observability standards, and best practices for uptime, latency, and data reliability.
  • Mentor and grow engineers, fostering a culture of technical excellence, ownership, and continuous learning.
  • Partner with leadership on long-term architecture and scaling strategy — from infrastructure cost optimization to multi-region availability.
  • Lead initiatives on infrastructure automation, deployment pipelines, and platform abstractions to improve developer velocity.
  • Own security, compliance, and governance across infrastructure and data systems.
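The SLO work above can be made concrete with a small error-budget calculation (the target and downtime figures here are hypothetical):

```python
# Illustrative SLO error-budget math for a 99.9% availability target
# over a 30-day window.
SLO_TARGET = 0.999
WINDOW_MINUTES = 30 * 24 * 60          # 43,200 minutes in 30 days

error_budget_minutes = WINDOW_MINUTES * (1 - SLO_TARGET)

# Suppose monitoring recorded 18 minutes of downtime so far this window.
downtime_minutes = 18
budget_remaining = error_budget_minutes - downtime_minutes
burn_rate = downtime_minutes / error_budget_minutes

print(f"budget={error_budget_minutes:.1f}m remaining={budget_remaining:.1f}m "
      f"burn={burn_rate:.0%}")
```

Alerting on the burn rate (rather than raw downtime) is what lets a team trade release velocity against reliability within an agreed budget.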

 

Who You Are

  • Previously a Tech Co-founder / Founding Engineer / First Infra Hire who scaled a product from early MVP to significant user or data scale.
  • 5–12 years of total experience, with at least 2+ years in leadership or team-building roles.
  • Deep experience with cloud infrastructure (AWS/GCP), containers (Docker, Kubernetes), and IaC tools (Terraform, Pulumi, or CDK).
  • Hands-on expertise in data-intensive systems, streaming (Kafka, RabbitMQ, Spark Streaming), and distributed architecture design.
  • Proven experience building scalable CI/CD pipelines, observability stacks (Prometheus, Grafana, ELK), and infrastructure for data and ML workloads.
  • Comfortable being hands-on when needed — reviewing design docs, debugging issues, or optimizing infrastructure.
  • Strong system design and problem-solving skills; understands trade-offs between speed, cost, and scalability.
  • Passionate about building teams, not just systems — can recruit, mentor, and inspire engineers.

 

Preferred Skills

  • Experience managing infra-heavy or data-focused teams.
  • Familiarity with real-time streaming architectures.
  • Exposure to ML infrastructure, data governance, or feature stores.
  • Prior experience in the OTT / streaming / consumer platform domain is a plus.
  • Contributions to open-source infra/data tools or strong engineering community presence.

 

What We Offer

  • Opportunity to build and scale infrastructure from the ground up, with full ownership and autonomy.
  • High-impact leadership role shaping our data and platform backbone.
  • Competitive compensation + ESOPs.
  • Continuous learning budget and certification support.
  • A team that values velocity, clarity, and craftsmanship.

 

Success Metrics

  • Reduction in infra cost per active user and event processed.
  • Increase in developer velocity (faster pipeline deployments, reduced MTTR).
  • High system availability and data reliability SLAs met.
  • Successful rollout of infra automation and observability frameworks.
  • Team growth, retention, and technical quality.


NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Bengaluru (Bangalore)
1 - 8 yrs
₹5L - ₹30L / yr
Python
React.js
PostgreSQL
TypeScript
Next.js


Job Summary

We are seeking a highly skilled Full Stack Engineer with 2+ years of hands-on experience to join our high-impact engineering team. You will work across the full stack—building scalable, high-performance frontends using TypeScript & Next.js and developing robust backend services using Python (FastAPI/Django).

This role is crucial in shaping product experiences and driving innovation at scale.


Mandatory Candidate Background

  • Experience working in product-based companies only
  • Strong academic background
  • Stable work history
  • Excellent coding skills and hands-on development experience
  • Strong foundation in Data Structures & Algorithms (DSA)
  • Strong problem-solving mindset
  • Understanding of clean architecture and code quality best practices


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications
  • Build responsive, performant, user-friendly UIs using TypeScript & Next.js
  • Develop APIs and backend services using Python (FastAPI/Django)
  • Collaborate with product, design, and business teams to translate requirements into technical solutions
  • Ensure code quality, security, and performance across the stack
  • Own features end-to-end: architecture, development, deployment, and monitoring
  • Contribute to system design, best practices, and the overall technical roadmap


Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience
  • Strong expertise in TypeScript / Next.js OR Python (FastAPI, Django) — must be familiar with both areas
  • Experience building RESTful APIs and microservices
  • Hands-on experience with Git, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure)
  • Strong debugging, optimization, and problem-solving abilities
  • Comfortable working in fast-paced startup environments


Good-to-Have:

  • Experience with containerization (Docker/Kubernetes)
  • Exposure to message queues or event-driven architectures
  • Familiarity with modern DevOps and observability tooling


NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Pune
6 - 8 yrs
₹12L - ₹22L / yr
Node.js
React.js
JavaScript
Go (Golang)
Elixir


Job Description – Full Stack Developer (React + Node.js)

Experience: 5–8 Years

Location: Pune

Work Mode: WFO

Employment Type: Full-time


About the Role

We are looking for an experienced Full Stack Developer with strong hands-on expertise in React and Node.js to join our engineering team. The ideal candidate should have solid experience building scalable applications, working with production systems, and collaborating in high-performance tech environments.


Key Responsibilities

  • Design, develop, and maintain scalable full-stack applications using React and Node.js.
  • Collaborate with cross-functional teams to define, design, and deliver new features.
  • Write clean, maintainable, and efficient code following OOP/FP and SOLID principles.
  • Work with relational databases such as PostgreSQL or MySQL.
  • Deploy and manage applications in cloud environments (preferably GCP or AWS).
  • Optimize application performance, troubleshoot issues, and ensure high availability in production systems.
  • Utilize containerization tools like Docker for efficient development and deployment workflows.
  • Integrate third-party services and APIs, including AI APIs and tools.
  • Contribute to improving development processes, documentation, and best practices.


Required Skills

  • Strong experience with React.js (frontend).
  • Solid hands-on experience with Node.js (backend).
  • Good understanding of relational databases: PostgreSQL / MySQL.
  • Experience working in production environments and debugging live systems.
  • Strong understanding of OOP or Functional Programming, and clean coding standards.
  • Knowledge of Docker or other containerization tools.
  • Experience with cloud platforms (GCP or AWS).
  • Excellent written and verbal communication skills.


Good to Have

  • Experience with Golang or Elixir.
  • Familiarity with Kubernetes, RabbitMQ, Redis, etc.
  • Contributions to open-source projects.
  • Previous experience working with AI APIs or machine learning tools.


Virtana

Posted by Krutika Devadiga
Pune
4 - 10 yrs
Best in industry
Java
Kubernetes
Go (Golang)
Python
Apache Kafka

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7+ years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java. Deep experience with one of these languages is required.
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

Media and Entertainment Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Noida
4 - 8 yrs
₹20L - ₹45L / yr
TypeScript
MongoDB
Microservices
MVC Framework
Google Cloud Platform (GCP)

Required Skills: TypeScript, MVC, Cloud experience (Azure, AWS, etc.), MongoDB, Express.js, Nest.js

 

Criteria:

Need candidates from Growing startups or Product based companies only

1. 4–8 years’ experience in backend engineering

2. Minimum 2+ years hands-on experience with:

  • TypeScript
  • Express.js / Nest.js

3. Strong experience with MongoDB (or MySQL / PostgreSQL / DynamoDB)

4. Strong understanding of system design & scalable architecture

5. Hands-on experience in:

  • Event-driven architecture / Domain-driven design
  • MVC / Microservices

6. Strong in automated testing (especially integration tests)

7. Experience with CI/CD pipelines (GitHub Actions or similar)

8. Experience managing production systems

9. Solid understanding of performance, reliability, observability

10. Cloud experience (AWS preferred; GCP/Azure acceptable)

11. Strong coding standards — Clean Code, code reviews, refactoring

 

Description 

About the opportunity

We are looking for an exceptional Senior Software Engineer to join our Backend team. This is a unique opportunity to join a fast-growing company where you will get to solve real customer and business problems, shape the future of a product built for Bharat and build the engineering culture of the team. You will have immense responsibility and autonomy to push the boundaries of engineering to deliver scalable and resilient systems.

As a Senior Software Engineer, you will be responsible for shipping innovative features at breakneck speed, designing the architecture, mentoring other engineers on the team and pushing for a high bar of engineering standards like code quality, automated testing, performance, CI/CD, etc. If you are someone who loves solving problems for customers, technology, the craft of software engineering, and the thrill of building startups, we would like to talk to you.

 

What you will be doing

  • Build and ship features in our Node.js (and now migrating to TypeScript) codebase that directly impact user experience and help move the top and bottom line of the business.
  • Collaborate closely with our product, design and data team to build innovative features to deliver a world class product to our customers. At company, product managers don’t “tell” what to build. In fact, we all collaborate on how to solve a problem for our customers and the business. Engineering plays a big part in it.
  • Design scalable platforms that empower our product and marketing teams to rapidly experiment.
  • Own the quality of our products by writing automated tests, reviewing code, making systems observable and resilient to failures.
  • Drive code quality and pay down architectural debt by continuous analysis of our codebases and systems, and continuous refactoring.
  • Architect our systems for faster iterations, releasability, scalability and high availability using practices like Domain Driven Design, Event Driven Architecture, Cloud Native Architecture and Observability.
  • Set the engineering culture with the rest of the team by defining how we should work as a team, set standards for quality, and improve the speed of engineering execution.

 

The role could be ideal for you if you

  • Have 4–8 years of backend engineering experience, with at least 2 years of production experience in TypeScript, Express.js (or another popular framework like Nest.js) and MongoDB (or any popular database like MySQL, PostgreSQL, DynamoDB, etc.).
  • Are well versed with one or more architectures and design patterns such as MVC, Domain Driven Design, CQRS, Event Driven Architecture, Cloud Native Architecture, etc.
  • Are experienced in writing automated tests (especially integration tests) and Continuous Integration. At company, engineers own quality, so writing automated tests is crucial to the role.
  • Have experience managing production infrastructure using public cloud providers (AWS, GCP, Azure, etc.). Bonus: if you have experience using Kubernetes.
  • Have experience with observability techniques like code instrumentation for metrics, tracing and logging.
  • Care deeply about code quality, code reviews, software architecture (think Object Oriented Programming, Clean Code, etc.), scalability and reliability. Bonus: if you have experience in this from your past roles.
  • Understand the importance of shipping fast in a startup environment and constantly try to find ingenious ways to achieve the same.
  • Collaborate well with everyone on the team. We communicate a lot and don’t hesitate to get quick feedback from other members on the team sooner rather than later.
  • Can take ownership of goals and deliver them with high accountability.

 

Don’t hesitate to try out new technologies. At company, nobody is limited to a role. Every engineer in our team is an expert in at least one technology but often ventures out into adjacent technologies like React.js, Flutter, Data Platforms, AWS and Kubernetes. If you are not excited by this, you will not like working at company. Bonus: if you have experience in adjacent technologies like AWS (or any public cloud provider), GitHub Actions (or CircleCI), Kubernetes, Infrastructure as Code (Terraform, Pulumi, etc.), and so on.

 

 

Bits In Glass

Posted by Nikita Sinha
Pune, Hyderabad, Mohali
6 - 11 yrs
Up to ₹30L / yr (varies)
Data modeling
Google Cloud Platform (GCP)

Responsibilities

  • Act as a liaison between business and technical teams to bridge gaps and support successful project delivery.
  • Maintain high-quality metadata and data artifacts that are accurate, complete, consistent, unambiguous, reliable, accessible, traceable, and valid.
  • Create and deliver high-quality data models while adhering to defined data governance practices and standards.
  • Translate high-level functional or business data requirements into technical solutions, including database design and data mapping.
  • Participate in requirement-gathering activities, elicitation, gap analysis, data analysis, effort estimation, and review processes.
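As a toy illustration of the translation from logical data model to physical database design mentioned above (table and column names are invented):

```python
# Toy sketch: a logical model captured as a dict, mechanically turned
# into physical DDL and materialized in an in-memory SQLite database.
import sqlite3

logical_model = {
    "customer": {"customer_id": "INTEGER PRIMARY KEY",
                 "full_name": "TEXT NOT NULL",
                 "segment": "TEXT"},
    "account":  {"account_id": "INTEGER PRIMARY KEY",
                 "customer_id": "INTEGER REFERENCES customer(customer_id)",
                 "balance": "NUMERIC NOT NULL"},
}

def to_ddl(model: dict) -> list[str]:
    return [
        "CREATE TABLE {} ({})".format(
            table, ", ".join(f"{col} {typ}" for col, typ in cols.items())
        )
        for table, cols in model.items()
    ]

conn = sqlite3.connect(":memory:")
for stmt in to_ddl(logical_model):
    conn.execute(stmt)  # physical model materialized

print("\n".join(to_ddl(logical_model)))
```

Real modeling tools like Erwin do this forward-engineering step with far richer metadata (domains, constraints, lineage), but the conceptual flow is the same.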

Qualifications

  • 6–12 years of strong data analysis and/or data modeling experience.
  • Strong individual contributor with solid understanding of SDLC and Agile methodologies.
  • Comprehensive expertise in conceptual, logical, and physical data modeling.

Skills

  • Strong financial domain knowledge and data analysis capabilities.
  • Excellent communication and stakeholder management skills.
  • Ability to work effectively in a fast-paced and continuously evolving environment.
  • Problem-solving mindset with a solution-oriented approach.
  • Team player with a self-starter attitude and strong sense of ownership.
  • Proficiency in SQL, MS Office tools, GCP BigQuery, Erwin, and Visual Paradigm (preferred).
Virtana

Posted by Krutika Devadiga
Pune
8 - 13 yrs
Best in industry
Java
Kubernetes
Amazon Web Services (AWS)
Spring Boot
Go (Golang)

Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud. 

As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.

 

Position Overview:

As a Staff Software Engineer specializing in backend technologies for Storage and Network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential to solve complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.



Work Location: Pune


Job Type: Hybrid

 

Key Responsibilities:

  • Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
  • Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
  • Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
  • Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
  • Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
  • Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
  • Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
  • Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
  • Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
  • Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
  • Practical knowledge of OO design patterns and Frameworks like Spring, Hibernate.
  • Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
  • Solid experience designing and delivering features with high quality on aggressive schedules.
  • Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
  • Familiarity with performance optimization techniques and principles for backend systems.
  • Excellent problem-solving and critical-thinking abilities.
  • Outstanding communication and collaboration skills.


Why Join Us:

  • Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
  • Collaborative and innovative work environment.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Chance to work on cutting-edge technology and products that make a real impact.


If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Remote only
5 - 15 yrs
₹10L - ₹15L / yr
FastAPI
Python
RESTful APIs
SQL
NoSQL Databases


Summary:

We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.

Job Type:

Full-Time Contractor (12 months)

Location:

Remote / On-site (Jaipur preferred, as per project needs)

Experience:

5+ years in backend development

Key Responsibilities:

  • Design, develop, and maintain robust backend services using Python and FastAPI.
  • Implement and manage Prisma ORM for database operations.
  • Build scalable APIs and integrate with SQL databases and third-party services.
  • Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
  • Collaborate with front-end developers and other team members to deliver high-quality web applications.
  • Ensure application performance, security, and reliability.
  • Participate in code reviews, testing, and deployment processes.
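As an illustration of the data-access pattern behind such APIs, here is a self-contained sketch of parameterized SQL using the standard library's sqlite3, standing in for Prisma and a production database (names are invented):

```python
# Self-contained sketch of the data-access layer of an API backend:
# parameterized queries (never string-formatted SQL) to avoid injection.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")

def create_user(email: str) -> int:
    # The "?" placeholder lets the driver escape the value safely.
    cur = conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return cur.lastrowid

def get_user(user_id: int):
    row = conn.execute(
        "SELECT id, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return {"id": row[0], "email": row[1]} if row else None

uid = create_user("dev@example.com")
print(get_user(uid))
```

In the actual stack these functions would be FastAPI route handlers and the queries would go through Prisma, but the parameterization principle is the same.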

Required Skills:

  • Expertise in Python backend development with strong experience in FastAPI.
  • Solid understanding of RESTful API design and implementation.
  • Proficiency in SQL databases and ORM tools (preferably Prisma)
  • Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
  • Familiarity with CI/CD pipelines and containerization (Docker).
  • Knowledge of cloud architecture best practices.

Added Advantage:

  • Front-end development knowledge (React, Angular, or similar frameworks).
  • Exposure to AWS/GCP cloud platforms.
  • Experience with NoSQL databases.

Eligibility:

  • Minimum 5 years of professional experience in backend development.
  • Available for full-time engagement.
  • Please refrain from applying if you are currently engaged in other projects, as we require dedicated availability.

 

Read more
Remote only
2 - 7 yrs
₹3L - ₹7L / yr
Google Cloud Platform (GCP)
AWS CloudFormation
Penetration testing
Cyber Security

We are seeking an experienced Cloud Penetration Tester to assess, exploit, and strengthen the security of our cloud environments (AWS, Azure, GCP). The role involves simulating real-world cyber-attacks, identifying vulnerabilities, and delivering actionable remediation recommendations.

Key Responsibilities

• Perform in-depth penetration testing on cloud infrastructures (AWS/Azure/GCP).

• Conduct cloud-specific vulnerability assessments and configuration reviews.

• Simulate cyber-attacks to identify weaknesses in cloud applications, networks, APIs, and IAM configurations.

• Evaluate cloud-native security controls (Security Groups, IAM roles, Key Management, WAF, CloudTrail, etc.).

• Test containerized and serverless environments (Docker, Kubernetes, Lambda, Cloud Functions).

• Identify misconfigurations, privilege escalation paths, insecure storage, authentication issues, and API exploits.

• Prepare detailed technical reports and executive summaries with remediation steps.

• Work with DevOps and Cloud teams to improve security posture and ensure secure architecture.

• Assist in threat modeling and secure design of new cloud features/services.

• Stay updated on modern cloud attack tools and techniques (e.g., Pacu, ScoutSuite, Prowler, KubeHound).


🧠 Skills & Qualifications

• Strong understanding of cloud platforms (AWS, Azure, GCP).

• Hands-on experience with cloud penetration testing tools:

• Pacu, ScoutSuite, Prowler, CloudBrute, Burp Suite, Metasploit, Nmap.

• Familiarity with cloud-native security concepts:

• IAM, VPC, S3, Key Management, Containers, Serverless, API Gateway, WAF, CloudTrail.

• Knowledge of network, API, and web application security.

• Solid understanding of cloud attack vectors: SSRF, misconfigurations, privilege escalation, credential theft, etc.

• Ability to produce high-quality penetration testing reports.

• Scripting skills (Python, PowerShell, Bash) for automation.

• Certifications preferred: OSCP, OSWE, CEH, CCSP, AWS Security Specialty, Azure Security Engineer.
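To illustrate the kind of configuration review and misconfiguration hunting described above, here is a hedged Python sketch that flags S3-style bucket policy statements granting public access (the policy document is invented for the example, not from any real account):

```python
import json

# Invented example policy with one world-readable statement.
POLICY = json.loads("""
{
  "Statement": [
    {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::demo-bucket/*"}
  ]
}
""")

def public_statements(policy: dict) -> list:
    """Return statements where Effect=Allow and the Principal is a wildcard."""
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public:
            flagged.append(stmt)
    return flagged

findings = public_statements(POLICY)
```

In practice, tools such as Prowler or ScoutSuite automate checks of roughly this shape across an entire account; the sketch only shows the underlying idea.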


⭐ Preferred Personality Traits

• Strong analytical and exploit development mindset.

• Detail-oriented with the ability to think like an attacker.

• Strong communication skills for explaining findings.

• Continuous learner with curiosity about emerging cloud threats.

Read more
Codemonk

at Codemonk

4 candid answers
2 recruiters
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
1yr+
Upto ₹18L / yr (Varies)
skill iconGo Programming (Golang)
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)
skill iconReact.js
+5 more

Key Responsibilities:

  1. Application Development: Design and implement both client-side and server-side architecture using JavaScript frameworks and back-end technologies like Golang.
  2. Database Management: Develop and maintain relational and non-relational databases (MySQL, PostgreSQL, MongoDB) and optimize database queries and schema design.
  3. API Development: Build and maintain RESTful APIs and/or GraphQL services to integrate with front-end applications and third-party services.
  4. Code Quality & Performance: Write clean, maintainable code and implement best practices for scalability, performance, and security.
  5. Testing & Debugging: Perform testing and debugging to ensure the stability and reliability of applications across different environments and devices.
  6. Collaboration: Work closely with product managers, designers, and DevOps engineers to deliver features aligned with business goals.
  7. Documentation: Create and maintain documentation for code, systems, and application architecture to ensure knowledge transfer and team alignment.

Requirements:

  1. Experience: 1+ years in backend development in a microservices ecosystem, with proven experience in front-end and back-end frameworks.
  2. 1+ years of experience in Golang is mandatory.
  3. Problem-Solving & DSA: Strong analytical skills and attention to detail.
  4. Front-End Skills: Proficiency in JavaScript and modern front-end frameworks (React, Angular, Vue.js) and familiarity with HTML/CSS.
  5. Back-End Skills: Experience with server-side languages and frameworks like Node.js, Express, Python, or Golang.
  6. Database Knowledge: Strong knowledge of relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB).
  7. API Development: Hands-on experience with RESTful API design and integration; GraphQL is a plus.
  8. DevOps Understanding: Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes) is a bonus.
  9. Soft Skills: Excellent problem-solving skills, teamwork, and strong communication abilities.

Nice-to-Have:

  1. UI/UX Sensibility: Understanding of responsive design and user experience principles.
  2. CI/CD Knowledge: Familiarity with CI/CD tools and workflows (Jenkins, GitLab CI).
  3. Security Awareness: Basic understanding of web security standards and best practices.
Read more
Inferigence Quotient

at Inferigence Quotient

1 recruiter
Neeta Trivedi
Posted by Neeta Trivedi
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹15L / yr
skill iconPython
skill iconNodeJS (Node.js)
FastAPI
skill iconDocker
skill iconJavascript
+16 more

3-5 years of experience as full stack developer with essential requirements on the following technologies: FastAPI, JavaScript, React.js-Redux, Node.js, Next.js, MongoDB, Python, Microservices, Docker, and MLOps.


Experience in Cloud Architecture using Kubernetes (K8s), Google Kubernetes Engine, Authentication and Authorisation Tools, DevOps Tools and Scalable and Secure Cloud Hosting is a significant plus.


Ability to manage a hosting environment, ability to scale applications to handle the load changes, knowledge of accessibility and security compliance.

 

Testing of API endpoints.

 

Ability to code and create functional web applications and optimise them for improved response time and efficiency. Skilled in performance tuning, query plan/explain plan analysis, indexing, table partitioning.

 

Expert knowledge of Python and corresponding frameworks with their best practices, expert knowledge of relational databases, NoSQL.


Ability to create acceptance criteria, write test cases and scripts, and perform integrated QA techniques.

 

Must be conversant with Agile software development methodology. Must be able to write technical documents, coordinate with test teams. Proficiency using Git version control.

Read more
Noida
8 - 13 yrs
₹25L - ₹38L / yr
skill iconJava
Google Cloud Platform (GCP)
Cloud Computing
Spring
RESTful APIs
+1 more

About Us :


CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.


Our Values :


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement :


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.


About the Role


Job Title: Lead Java Developer


Location: Noida(Hybrid)


Experience: 7-12 years


Education: BTech / BE / ME /MTech/ MCA / MSc Computer Science



Primary Skills - Java 8-17+, Core Java, Design patterns (more than Singleton & Factory), Web services development, REST/SOAP, XML & JSON manipulation, OAuth 2.0, CI/CD, SQL / NoSQL


Secondary Skills - Kafka, Jenkins, Kubernetes, Google Cloud Platform (GCP), SAP JCo library, Terraform


Certifications (Optional): OCPJP (the Oracle Certified Professional Java Programmer) / Google Professional Cloud



Required Experience:


● Must have integration component development experience using Java 8/9 technologies and service-oriented architecture (SOA)


● Must have in-depth knowledge of design patterns and integration architecture


● Must have experience in system scalability and maintenance for complex enterprise applications and integration solutions


● Experience with developing solutions on Google Cloud Platform will be an added advantage.


● Should have good hands-on experience with Software Engineering tools viz. Eclipse, NetBeans, JIRA, Confluence, BitBucket, SVN, etc.


● Should be well versed with current technology trends in IT solutions, e.g. Cloud Platform Development, DevOps, Low Code solutions, Intelligent Automation


Good to Have:


● Experience of developing 3-4 integration adapters/connectors for enterprise applications (ERP, CRM, HCM, SCM, Billing, etc.) using industry standard frameworks and methodologies following Agile/Scrum


Behavioral competencies required:


● Must have worked with US/Europe based clients in onsite/offshore delivery model


● Should have very good verbal and written communication, technical articulation, listening and presentation skills


● Should have proven analytical and problem solving skills


● Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills


● Should be a quick learner and team player


● Should have experience of working under stringent deadlines in a Matrix organization structure


● Should have demonstrated appreciable Organizational Citizenship Behavior (OCB) in past organizations


Job Responsibilities:


● Writing the design specifications and user stories for the functionalities assigned.


● Develop assigned components / classes and assist QA team in writing the test cases


● Create and maintain coding best practices and do peer code / solution reviews


● Participate in Daily Scrum calls, Scrum Planning, Retro and Demos meetings


● Bring out technical/design/architectural challenges/risks during execution, develop action plan for mitigation and aversion of identified risks


● Comply with development processes, documentation templates and tools prescribed by CLOUDSUFI and its clients


● Work with other teams and Architects in the organization and assist them on technical Issues/Demos/POCs and proposal writing for prospective clients


● Contribute towards the creation of knowledge repository, reusable assets/solution accelerators and IPs


● Provide feedback to junior developers and be a coach and mentor for them


● Provide training sessions on the latest technologies and topics to other employees in the organization


● Participate in organization development activities from time to time - Interviews, CSR/Employee engagement activities, participation in business events/conferences, implementation of new policies, systems and procedures as decided by the Management team

Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Bengaluru (Bangalore)
1 - 8 yrs
₹12L - ₹34L / yr
skill iconPython
skill iconReact.js
skill iconDjango
FastAPI
TypeScript
+7 more

Please note that salary will be based on experience.


Job Title: Full Stack Engineer

Location: Bengaluru (Indiranagar) – Work From Office (5 Days)

Job Summary

We are seeking a skilled Full Stack Engineer with solid hands-on experience across frontend and backend development. You will work on mission-critical features, ensuring seamless performance, scalability, and reliability across our products.

Responsibilities

  • Design, develop, and maintain scalable full-stack applications.
  • Build responsive, high-performance UIs using Typescript & Next.js.
  • Develop backend services and APIs using Python (FastAPI/Django).
  • Work closely with product, design, and business teams to translate requirements into intuitive solutions.
  • Contribute to architecture discussions and drive technical best practices.
  • Own features end-to-end — design, development, testing, deployment, and monitoring.
  • Ensure robust security, code quality, and performance optimization.

Tech Stack

Frontend: Typescript, Next.js, React, Tailwind CSS

Backend: Python, FastAPI, Django

Databases: PostgreSQL, MongoDB, Redis

Cloud & Infra: AWS/GCP, Docker, Kubernetes, CI/CD

Other Tools: Git, GitHub, Elasticsearch, Observability tools

Requirements

Must-Have:

  • 2+ years of professional full-stack engineering experience.
  • Strong expertise in either frontend (Typescript/Next.js) or backend (Python/FastAPI/Django) with familiarity in both.
  • Experience building RESTful services and microservices.
  • Hands-on experience with Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
  • Strong debugging, problem-solving, and optimization skills.
  • Ability to thrive in fast-paced, high-ownership startup environments.

Good-to-Have:

  • Exposure to Docker, Kubernetes, and observability tools.
  • Experience with message queues or event-driven architecture.


Perks & Benefits

  • Upskilling support – courses, tools & learning resources.
  • Fun team outings, hackathons, demos & engagement initiatives.
  • Flexible Work-from-Home: 12 WFH days every 6 months.
  • Menstrual WFH: up to 3 days per month.
  • Mobility benefits: relocation support & travel allowance.
  • Parental support: maternity, paternity & adoption leave.
Read more
CoffeeBeans

at CoffeeBeans

2 candid answers
Nikita Sinha
Posted by Nikita Sinha
Hyderabad
4 - 8 yrs
Upto ₹28L / yr (Varies)
skill iconJava
Microservices
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)
skill iconKubernetes

Key Responsibilities

  •     Design, develop, and implement backend services using Java (latest version), Spring Boot, and Microservices architecture.
  •     Participate in the end-to-end development lifecycle, from requirement analysis to deployment and support.
  •     Collaborate with cross-functional teams (UI/UX, DevOps, Product) to deliver high-quality, scalable software solutions.
  •     Integrate APIs and manage data flow between services and front-end systems.
  •     Work on cloud-based deployment using AWS or GCP environments.
  •     Ensure performance, security, and scalability of services in production.
  •     Contribute to technical documentation, code reviews, and best practice implementations.

Required Skills:

  •     Strong hands-on experience with Core Java (latest versions), Spring Boot, and Microservices.
  •     Solid understanding of RESTful APIs, JSON, and distributed systems.
  •     Basic knowledge of Kubernetes (K8s) for containerization and orchestration.
  •     Working experience or strong conceptual understanding of cloud platforms (AWS / GCP).
  •     Exposure to CI/CD pipelines, version control (Git), and deployment automation.
  •     Familiarity with security best practices, logging, and monitoring tools.

Preferred Skills:

  •     Experience with end-to-end deployment on AWS or GCP.
  •     Familiarity with payment gateway integrations or fintech applications.
  •     Understanding of DevOps concepts and infrastructure-as-code tools (Added advantage).
Read more
Tradelab Technologies
Aakanksha Yadav
Posted by Aakanksha Yadav
Bengaluru (Bangalore)
2 - 4 yrs
₹7L - ₹18L / yr
CI/CD
skill iconJenkins
gitlab
ArgoCD
skill iconAmazon Web Services (AWS)
+8 more

About Us:

Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.


Key Responsibilities

CI/CD and Infrastructure Automation

  • Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
  • Automate deployments using tools such as Terraform, Helm, and Kubernetes
  • Improve build and release processes to support high-performance and low-latency trading applications
  • Work efficiently with Linux/Unix environments

Cloud and On-Prem Infrastructure Management

  • Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
  • Ensure system reliability, scalability, and high availability
  • Implement Infrastructure as Code (IaC) to standardize and streamline deployments

Performance Monitoring and Optimization

  • Monitor system performance and latency using Prometheus, Grafana, and ELK stack
  • Implement proactive alerting and fault detection to ensure system stability
  • Troubleshoot and optimize system components for maximum efficiency
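The latency monitoring work above usually boils down to percentile aggregation over raw samples. An illustrative, stdlib-only sketch of that aggregation (the sample values are made up):

```python
# Compute p50/p95/p99 latency from raw samples, the kind of aggregation a
# Prometheus histogram or Grafana panel surfaces for a trading system.
samples_ms = [12, 15, 14, 13, 220, 16, 15, 14, 13, 12, 11, 18, 17, 16, 15,
              14, 13, 12, 19, 250]

def percentile(data, pct):
    """Nearest-rank percentile over sorted samples."""
    ordered = sorted(data)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[k]

p50 = percentile(samples_ms, 50)   # typical request latency
p99 = percentile(samples_ms, 99)   # tail latency, dominated by outliers
```

The gap between p50 and p99 here is exactly why tail-latency alerting matters more than averages in low-latency environments; Prometheus computes comparable quantiles server-side via `histogram_quantile`.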

Security and Compliance

  • Apply DevSecOps principles to ensure secure deployment and access management
  • Maintain compliance with financial industry regulations such as SEBI
  • Conduct vulnerability assessments and maintain logging and audit controls


Required Skills and Qualifications

  • 2+ years of experience as a DevOps Engineer in a software or trading environment
  • Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
  • Proficiency in cloud platforms such as AWS and GCP
  • Hands-on experience with Docker and Kubernetes
  • Experience with Terraform or CloudFormation for IaC
  • Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
  • Familiarity with Prometheus, Grafana, and ELK stack
  • Proficiency in scripting using Python, Bash, or Go
  • Solid understanding of security best practices including IAM, encryption, and network policies


Good to Have (Optional)

  • Experience with low-latency trading infrastructure or real-time market data systems
  • Knowledge of high-frequency trading environments
  • Exposure to FIX protocol, FPGA, or network optimization techniques
  • Familiarity with Redis or Nginx for real-time data handling


Why Join Us?

  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.


This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Read more
Auxo AI
kusuma Gullamajji
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
2 - 8 yrs
₹10L - ₹35L / yr
GCP
skill iconPython
SQL
Google Cloud Platform (GCP)

Responsibilities:

Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow)

Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views

Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration

Implement SQL-based transformations using Dataform (or dbt)

Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture

Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability

Partner with solution architects and product teams to translate data requirements into technical designs

Mentor junior data engineers and support knowledge-sharing across the team

Contribute to documentation, code reviews, sprint planning, and agile ceremonies
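As a small illustration of the partitioning and clustering practices listed above, here is a hedged helper that assembles a BigQuery-style DDL statement (the table and column names are invented for the example):

```python
# Assemble a BigQuery DDL statement with partitioning and clustering, the two
# physical-layout levers called out in the responsibilities above.
def partitioned_table_ddl(table, columns, partition_col, cluster_cols):
    cols = ", ".join(f"{name} {typ}" for name, typ in columns)
    return (
        f"CREATE TABLE {table} ({cols}) "
        f"PARTITION BY DATE({partition_col}) "
        f"CLUSTER BY {', '.join(cluster_cols)}"
    )

ddl = partitioned_table_ddl(
    "analytics.events",
    [("event_ts", "TIMESTAMP"), ("user_id", "STRING"), ("payload", "JSON")],
    "event_ts",
    ["user_id"],
)
```

Partitioning on the event timestamp lets BigQuery prune scanned bytes for time-bounded queries, while clustering on a high-selectivity column co-locates related rows for filtered reads.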

Requirements

2+ years of hands-on experience in data engineering, with at least 2 years on GCP

Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)

Strong programming skills in Python and/or Java

Experience with SQL optimization, data modeling, and pipeline orchestration

Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks

Exposure to Dataform, dbt, or similar tools for ELT workflows

Solid understanding of data architecture, schema design, and performance tuning

Excellent problem-solving and collaboration skills

Bonus Skills:

GCP Professional Data Engineer certification

Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures

Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)

Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)

Read more
Hyderabad, Bengaluru (Bangalore)
5 - 12 yrs
₹25L - ₹35L / yr
skill iconC#
SQL
skill iconAmazon Web Services (AWS)
skill icon.NET
skill iconJava
+3 more

Senior Software Engineer

Location: Hyderabad, India


Who We Are:

Since our inception back in 2006, Navitas has grown to be an industry leader in the digital transformation space, and we’ve served as trusted advisors supporting our client base within the commercial, federal, and state and local markets.


What We Do:

At our very core, we’re a group of problem solvers providing our award-winning technology solutions to drive digital acceleration for our customers! With proven solutions, award-winning technologies, and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and deliver cutting-edge transformative solutions.


What You’ll Do:

Build, Innovate, and Own:

  • Design, develop, and maintain high-performance microservices in a modern .NET/C# environment.
  • Architect and optimize data pipelines and storage solutions that power our AI-driven products.
  • Collaborate closely with AI and data teams to bring machine learning models into production systems.
  • Build integrations with external services and APIs to enable scalable, interoperable solutions.
  • Ensure robust security, scalability, and observability across distributed systems.
  • Stay ahead of the curve — evaluating emerging technologies and contributing to architectural decisions for our next-gen platform.

Responsibilities will include but are not limited to:

  • Provide technical guidance and code reviews that raise the bar for quality and performance.
  • Help create a growth-minded engineering culture that encourages experimentation, learning, and accountability.

What You’ll Need:

  • Bachelor’s degree in Computer Science or equivalent practical experience.
  • 8+ years of professional experience, including 5+ years designing and maintaining scalable backend systems using C#/.NET and microservices architecture.
  • Strong experience with SQL and NoSQL data stores.
  • Solid hands-on knowledge of cloud platforms (AWS, GCP, or Azure).
  • Proven ability to design for performance, reliability, and security in data-intensive systems.
  • Excellent communication skills and ability to work effectively in a global, cross-functional environment.

Set Yourself Apart With:

  • Startup experience - specifically in building product from 0-1
  • Exposure to AI/ML-powered systems, data engineering, or large-scale data processing.
  • Experience in healthcare or fintech domains.
  • Familiarity with modern DevOps practices, CI/CD pipelines, and containerization (Docker/Kubernetes).

Equal Employer/Veterans/Disabled

Navitas Business Consulting is an affirmative action and equal opportunity employer. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Navitas Human Resources.

Navitas is an equal opportunity employer. We provide employment and opportunities for advancement, compensation, training, and growth according to individual merit, without regard to race, color, religion, sex (including pregnancy), national origin, sexual orientation, gender identity or expression, marital status, age, genetic information, disability, veteran or military status, or any other characteristic protected under applicable Federal, state, or local law. Our goal is for each staff member to have the opportunity to grow to the limits of their abilities and to achieve personal and organizational objectives. We will support positive programs for equal treatment of all staff and full utilization of all qualified employees at all levels within Navitas.

Read more
Biofourmis

at Biofourmis

44 recruiters
Roopa Ramalingamurthy
Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana, Loki


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

Read more
appscrip

at appscrip

2 recruiters
Kanika Gaur
Posted by Kanika Gaur
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹10L / yr
DevOps
Windows Azure
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

Job Title: Sr. DevOps Engineer

Experience Required: 2 to 4 years in DevOps or related fields

Employment Type: Full-time


About the Role:

We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.


Key Responsibilities:

Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).

CI/CD Pipelines: Design and optimize CI/CD pipelines, to improve development velocity and deployment quality.

Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.

Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.

Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.

Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.

Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.

Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
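Much of the automation described above hinges on making flaky cloud API calls resilient. A minimal, hedged retry-with-exponential-backoff sketch (the function names are illustrative, and the delays are kept tiny so the example runs instantly):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.001):
    """Call fn, retrying on failure with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky cloud call: fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky)
```

Production pipelines would typically also cap the total delay and add random jitter so simultaneous retries do not stampede the same endpoint.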


Required Skills & Qualifications:

Technical Expertise:

Strong proficiency in cloud platforms like AWS, Azure, or GCP.

Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).

Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.

Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.

Proficiency in scripting languages (e.g., Python, Bash, PowerShell).

Soft Skills:

Excellent communication and leadership skills.

Strong analytical and problem-solving abilities.

Proven ability to manage and lead a team effectively.

Experience:

4+ years of experience in DevOps or Site Reliability Engineering (SRE).

4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.

Strong understanding of microservices, APIs, and serverless architectures.


Nice to Have:

Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.

Experience with GitOps tools such as ArgoCD or Flux.

Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).


Perks & Benefits:

Competitive salary and performance bonuses.

Comprehensive health insurance for you and your family.

Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.

Flexible working hours and remote work options.

Collaborative and inclusive work culture.


Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.


You can directly contact us: Nine three one six one two zero one three two

Read more
Whiz IT Services
Sheeba Harish
Posted by Sheeba Harish
Remote only
10 - 15 yrs
₹20L - ₹20L / yr
skill iconJava
skill iconSpring Boot
Microservices
API
Apache Kafka
+5 more

We are looking for highly experienced Senior Java Developers who can architect, design, and deliver high-performance enterprise applications using Spring Boot and Microservices . The role requires a strong understanding of distributed systems, scalability, and data consistency.

Infilect

Posted by Indira Ashrit
Bengaluru (Bangalore)
2 - 3 yrs
₹12L - ₹15L / yr
Kubernetes
Docker
CI/CD
Google Cloud Platform (GCP)

Job Description:


Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.


We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our Cloud, AI workflows, and AI-based Computer Systems. Furthermore, the candidate will supervise the implementation and maintenance of the company’s computing needs including the in-house GPU & AI servers along with AI workloads.



Responsibilities

  • Understand and automate AI-based deployments and AI-based workflows
  • Implement various development, testing, and automation tools and IT infrastructure
  • Manage cloud, computer systems, and other IT assets
  • Strive for continuous improvement and build continuous integration and continuous deployment (CI/CD) pipelines
  • Design, develop, implement, and coordinate systems, policies, and procedures for cloud and on-premise systems
  • Ensure the security of data, network access, and backup systems
  • Act in alignment with user needs and system functionality to contribute to organizational policy
  • Identify problematic areas, perform RCA, and implement strategic solutions in time
  • Preserve assets, information security, and control structures
  • Handle monthly/annual cloud budgets and ensure cost-effectiveness


Requirements and skills

  • Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
  • Working Knowledge of Python, SQL database stack or any full-stack with relevant tools.
  • Understanding agile development, CI/CD, sprints, code reviews, Git and GitHub/Bitbucket workflows
  • Well versed with ELK stack or any other logging, monitoring and analysis tools
  • Proven working experience of 2+ years as a DevOps/Tech Lead/IT Manager or in relevant positions
  • Excellent knowledge of technical management, information analysis, and of computer hardware/software systems
  • Hands-on experience with computer networks, network administration, and network installation
  • Knowledge of ISO/SOC 2 Type II implementation will be a plus
  • BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field


CGI Inc

Posted by Shruthi BT
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Chennai
8 - 15 yrs
₹15L - ₹25L / yr
Google Cloud Platform (GCP)
Data engineering
BigQuery

Google Data Engineer - SSE


Position Description

Google Cloud Data Engineer

Notice Period: Immediate to 30 days

Job Description:

We are seeking a highly skilled Data Engineer with extensive experience in Google Cloud Platform (GCP) data services and big data technologies. The ideal candidate will be responsible for designing, implementing, and optimizing scalable data solutions while ensuring high performance, reliability, and security.

Key Responsibilities:


• Design, develop, and maintain scalable data pipelines and architectures using GCP data services.

• Implement and optimize solutions using BigQuery, Dataproc, Composer, Pub/Sub, Dataflow, GCS, and BigTable.

• Work with GCP databases such as Bigtable, Spanner, CloudSQL, AlloyDB, ensuring performance, security, and availability.

• Develop and manage data processing workflows using Apache Spark, Hadoop, Hive, Kafka, and other Big Data technologies.

• Ensure data governance and security using Dataplex, Data Catalog, and other GCP governance tooling.

• Collaborate with DevOps teams to build CI/CD pipelines for data workloads using Cloud Build, Artifact Registry, and Terraform.

• Optimize query performance and data storage across structured and unstructured datasets.

• Design and implement streaming data solutions using Pub/Sub, Kafka, or equivalent technologies.
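None of the GCP services above are demonstrated here, but the core logic of a streaming design is easy to sketch. Below is a minimal tumbling-window count in plain Python — the function name and event shape are our own invention, standing in for what Dataflow or Kafka Streams would do at scale:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed-size windows and count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "click"), (30, "click"), (61, "view"), (75, "click")]
print(tumbling_window_counts(events))
# → {0: {'click': 2}, 60: {'view': 1, 'click': 1}}
```

Real pipelines add watermarking and late-data handling on top of this basic windowing idea.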


Required Skills & Qualifications:


• 8-15 years of experience

• Strong expertise in GCP Dataflow, Pub/Sub, Cloud Composer, Cloud Workflows, BigQuery, Cloud Run, Cloud Build.

• Proficiency in Python and Java, with hands-on experience in data processing and ETL pipelines.

• In-depth knowledge of relational databases (SQL, MySQL, PostgreSQL, Oracle) and NoSQL databases (MongoDB, Scylla, Cassandra, DynamoDB).

• Experience with Big Data platforms such as Cloudera, Hortonworks, MapR, Azure HDInsight, IBM Open Platform.

• Strong understanding of AWS Data services such as Redshift, RDS, Athena, SQS/Kinesis.

• Familiarity with data formats such as Avro, ORC, Parquet.

• Experience handling large-scale data migrations and implementing data lake architectures.

• Expertise in data modeling, data warehousing, and distributed data processing frameworks.


• GCP Professional Data Engineer certification or equivalent.


Good to Have:


• Experience in BigQuery, Presto, or equivalent.

• Exposure to Hadoop, Spark, Oozie, HBase.

• Understanding of cloud database migration strategies.

• Knowledge of GCP data governance and security best practices.

Tradelab Technologies
Posted by Aakanksha Yadav
Mumbai, Bengaluru (Bangalore)
10 - 18 yrs
₹25L - ₹50L / yr
Kubernetes
Docker
Amazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)

Type: Client-Facing Technical Architecture, Infrastructure Solutioning & Domain Consulting (India + International Markets)


Role Overview

Tradelab is seeking a senior Solution Architect who can interact with both Indian and international clients (Dubai, Singapore, London, US), helping them understand our trading systems, OMS/RMS/CMS stack, HFT platforms, feed systems, and Matching Engine. The architect will design scalable, secure, and ultra-low-latency deployments tailored to global forex markets, brokers, prop firms, liquidity providers, and market makers.


Key Responsibilities

1. Client Engagement (India + International Markets)

  • Engage with brokers, prop trading firms, liquidity providers, and financial institutions across India, Dubai, Singapore, and global hubs.
  • Explain Tradelab’s capabilities, architecture, and deployment options.
  • Understand region-specific latency expectations, connectivity options, and regulatory constraints.

2. Requirement Gathering & Solutioning

  • Capture client needs, throughput, order concurrency, tick volumes, and market data handling.
  • Assess infra readiness (cloud/on-prem/colo).
  • Propose architecture aligned with forex markets.

3. Global Architecture & Deployment Design

  • Design multi-region infrastructure using AWS/Azure/GCP.
  • Architect low-latency routing between India–Singapore–Dubai.
  • Support deployments in DCs like Equinix SG1/DX1.

4. Networking & Security Architecture

  • Architect multicast/unicast feeds, VPNs, IPSec tunnels, BGP routes.
  • Implement network hardening, segmentation, WAF/firewall rules.

5. DevOps, Cloud Engineering & Scalability

  • Build CI/CD pipelines, Kubernetes autoscaling, cost-optimized AWS multi-region deployments.
  • Design global failover models.

6. BFSI & Trading Domain Expertise

  • Indian broking, international forex, LP aggregation, HFT.
  • OMS/RMS, risk engines, LP connectivity, and matching engines.

7. Latency, Performance & Capacity Planning

  • Benchmark and optimize cross-region latency.
  • Tune performance for high tick volumes and volatility bursts.

8. Documentation & Consulting

  • Prepare HLDs, LLDs, SOWs, cost sheets, and deployment playbooks.

Required Skills

  • AWS: EC2, VPC, EKS, NLB, MSK/Kafka, IAM, Global Accelerator.
  • DevOps: Kubernetes, Docker, Helm, Terraform.
  • Networking: IPSec, GRE, VPN, BGP, multicast (PIM/IGMP).
  • Message buses: Kafka, RabbitMQ, Redis Streams.

Domain Skills

  • Deep Broking Domain Understanding.
  • Indian broking + global forex/CFD.
  • FIX protocol, LP integration, market data feeds.
  • Regulations: SEBI, DFSA, MAS, ESMA.

Soft Skills

  • Excellent communication and client-facing ability.
  • Strong presales and solutioning mindset.

Preferred Qualifications

  • B.Tech/BE/M.Tech in CS or equivalent.
  • AWS Architect Professional, CCNP, CKA.

Why Join Us?

  • Experience in colocation/global trading infra.
  • Work with a team that expects and delivers excellence.
  • A culture where risk-taking is rewarded, and complacency is not.
  • Limitless opportunities for growth—if you can handle the pace.
  • A place where learning is currency, and outperformance is the only metric that matters.
  • The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.

This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.


Synorus
Posted by Synorus Admin
Remote only
0 - 3 yrs
₹0.1L - ₹0.6L / yr
Remotion
Next.js
React.js
HTML/CSS
TypeScript

About Us


At Synorus, we’re building a suite of intelligent, AI-powered products that redefine how people interact with technology — from real-time video editing tools to legal intelligence and creative automation systems.

We are looking for a Frontend Developer who is passionate about crafting seamless, elegant, and high-performance user interfaces that bring next-generation AI experiences to life.


Key Responsibilities

  • Design, develop, and maintain modular, scalable front-end components using React, Next.js, and TypeScript.
  • Implement interactive, media-rich interfaces powered by AI and real-time data.
  • Work closely with backend and AI teams to integrate APIs and WebSocket-based data flows.
  • Ensure pixel-perfect, responsive, and accessible user interfaces across platforms and devices.
  • Optimize performance through efficient rendering, lazy loading, and dynamic imports.
  • Maintain high-quality code standards using TypeScript, ESLint, and testing frameworks.
  • Contribute to our evolving design system and component library shared across products.
  • Collaborate with designers and engineers to deliver intuitive, creative, and impactful user experiences.


Skills & Experience

  • Strong proficiency in React, Next.js, TypeScript, and modern JavaScript (ES6+).
  • Expertise in Tailwind CSS, Framer Motion, and other animation or motion libraries.
  • Experience with state management tools such as Valtio, Redux, or Zustand.
  • Familiarity with design tools like Figma and understanding of responsive grid systems.
  • Experience integrating APIs and working with real-time data through WebSockets.
  • Understanding of accessibility (WCAG), cross-browser compatibility, and performance optimization.
  • Bonus: Experience with Remotion, Canvas APIs, or WebGL for video or AI-enhanced UIs.


Ideal Candidate

  • Obsessed with clean, maintainable, and scalable UI code.
  • Understands both design aesthetics and engineering trade-offs.
  • Self-driven, detail-oriented, and thrives in a fast-paced startup environment.
  • Excited to experiment with emerging technologies — AI, real-time collaboration, or creative tools.
  • Loves solving complex problems through thoughtful, user-centric design.


Education

  • Bachelor’s or Master’s in Computer Science, Engineering, or equivalent hands-on experience.
  • A strong project portfolio or GitHub profile is highly preferred.


Why Join Us

  • Work directly with the founding team and AI engineers on products shaping the future of creativity and automation.
  • Be part of a fast-growing ecosystem where your work impacts multiple real-world products.
  • Experience a flat hierarchy, flexible hours, and an environment that rewards innovation.
  • Access to cutting-edge technologies, mentorship, and rapid growth opportunities.
Wissen Technology

Posted by Praffull Shinde
Pune, Mumbai, Bengaluru (Bangalore)
8 - 14 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
Kubernetes
DevOps
Python

JD for Cloud Engineer

 

Job Summary:


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.

 

Key Responsibilities:

1. Cloud Infrastructure Design & Management

  • Architect, deploy, and maintain GCP cloud resources via terraform/other automation.
  • Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
  • Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
  • Optimize resource allocation, monitoring, and cost efficiency across GCP environments.


2. Kubernetes & Container Orchestration

  • Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
  • Work with Helm charts for microservices deployments.
  • Automate scaling, rolling updates, and zero-downtime deployments.

 

3. Serverless & Compute Services

  • Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
  • Optimize containerized applications running on Cloud Run for cost efficiency and performance.

 

4. CI/CD & DevOps Automation

  • Design, implement, and manage CI/CD pipelines using Azure DevOps.
  • Automate infrastructure deployment using Terraform and Bash/PowerShell scripting
  • Integrate security and compliance checks into the DevOps workflow (DevSecOps).
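As a small illustration of how a CI/CD pipeline resolves stage ordering from declared dependencies, here is a hypothetical stage graph topologically sorted with Python's standard library (the stage names are invented; a real Azure DevOps pipeline declares this in YAML):

```python
from graphlib import TopologicalSorter

# Hypothetical stage graph: each stage lists the stages it depends on.
stages = {
    "build": [],
    "unit-tests": ["build"],
    "security-scan": ["build"],
    "deploy-staging": ["unit-tests", "security-scan"],
    "deploy-prod": ["deploy-staging"],
}

# static_order yields a valid execution order respecting all dependencies.
order = list(TopologicalSorter(stages).static_order())
print(order)
```

Stages with no mutual dependency (here, unit-tests and security-scan) can run in parallel, which is exactly what CI engines exploit.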

 

 

Required Skills & Qualifications:

Experience: 8+ years in Cloud Engineering, with a focus on GCP.

Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.

Intineri infosol Pvt Ltd

Posted by Shivani Pandey
Remote only
4 - 10 yrs
₹5L - ₹15L / yr
Python
2D Geometry Concepts
3D Geometry Concepts
NumPy
SciPy

Job Title: Python Developer

Experience Level: 4+ years

 

Job Summary:

We are seeking a skilled Python Developer with strong experience in developing and maintaining APIs. Familiarity with 2D and 3D geometry concepts is a strong plus. The ideal candidate will be passionate about clean code, scalable systems, and solving complex geometric and computational problems.


Key Responsibilities:

·       Design, develop, and maintain robust and scalable APIs using Python.

·       Work with geometric data structures and algorithms (2D/3D).

·       Collaborate with cross-functional teams including front-end developers, designers, and product managers.

·       Optimize code for performance and scalability.

·       Write unit and integration tests to ensure code quality.

·       Participate in code reviews and contribute to best practices.

 

Required Skills:

·       Strong proficiency in Python.

·       Experience with RESTful API development (e.g., Flask, FastAPI, Django REST Framework).

·       Good understanding of 2D/3D geometry, computational geometry, or CAD-related concepts.

·       Familiarity with libraries such as NumPy, SciPy, Shapely, Open3D, or PyMesh.

·       Experience with version control systems (e.g., Git).

·       Strong problem-solving and analytical skills.
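For a flavor of the 2D geometry this role involves, a minimal sketch: the shoelace formula gives the signed area of a simple polygon (libraries like Shapely implement this in optimized form; the function name here is our own):

```python
def polygon_area(points):
    """Signed area of a simple 2D polygon via the shoelace formula.

    Counter-clockwise vertex order gives a positive area, clockwise negative.
    """
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return s / 2.0

# Unit square, counter-clockwise:
print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # → 1.0
```

The sign of the result doubles as a cheap orientation test, a building block for many computational-geometry algorithms.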

 

Good to Have:

·       Experience with 3D visualization tools or libraries (e.g., VTK, Blender API, Three.js via Python bindings).

·       Knowledge of mathematical modeling or simulation.

·       Exposure to cloud platforms (AWS, Azure, GCP).

·       Familiarity with CI/CD pipelines.

 

Education:

·       Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.

Wissen Technology

Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
7 - 12 yrs
₹1L - ₹45L / yr
Google Cloud Platform (GCP)
Kubernetes
Docker
Google Kubernetes Engine (GKE)
Azure DevOps

Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

Reliable Group

Posted by Nilesh Gend
Pune
5 - 12 yrs
₹15L - ₹35L / yr
Google Cloud Platform (GCP)
Ansible
Terraform

Job Title: GCP Cloud Engineer/Lead


Location: Pune, Balewadi

Shift / Time Zone: 1:30 PM – 10:30 PM IST (3:00 AM – 12:00 PM EST, 3–4 hours overlap with US Eastern Time)


Role Summary

We are seeking an experienced GCP Cloud Engineer to join our team supporting CVS. The ideal candidate will have a strong background in Google Cloud Platform (GCP) architecture, automation, microservices, and Kubernetes, along with the ability to translate business strategy into actionable technical initiatives. This role requires a blend of hands-on technical expertise, cross-functional collaboration, and customer engagement to ensure scalable and secure cloud solutions.


Key Responsibilities

  • Design, implement, and manage cloud infrastructure on Google Cloud Platform (GCP) leveraging best practices for scalability, performance, and cost efficiency.
  • Develop and maintain microservices-based architectures and containerized deployments using Kubernetes and related technologies.
  • Evaluate and recommend new tools, services, and architectures that align with enterprise cloud strategies.
  • Collaborate closely with Infrastructure Engineering Leadership to translate long-term customer strategies into actionable enablement plans, onboarding frameworks, and proactive support programs.
  • Act as a bridge between customers, Product Management, and Engineering teams, translating business needs into technical requirements and providing strategic feedback to influence product direction.
  • Identify and mitigate technical risks and roadblocks in collaboration with executive stakeholders and engineering teams.
  • Advocate for customer needs within the engineering organization to enhance adoption, performance, and cost optimization.
  • Contribute to the development of Customer Success methodologies and mentor other engineers in best practices.


Must-Have Skills

  • 8+ years of total experience, with 5+ years specifically as a GCP Cloud Engineer.
  • Deep expertise in Google Cloud Platform (GCP) — including Compute Engine, Cloud Storage, Networking, IAM, and Cloud Functions.
  • Strong experience in microservices-based architecture and Kubernetes container orchestration.
  • Hands-on experience with infrastructure automation tools (Terraform, Ansible, or similar).
  • Proven ability to design, automate, and optimize CI/CD pipelines for cloud workloads.
  • Excellent problem-solving, communication, and collaboration skills.
  • GCP Professional Certification (Cloud Architect / DevOps Engineer / Cloud Engineer) preferred or in progress.
  • Ability to multitask effectively in a fast-paced, dynamic environment with shifting priorities.


Good-to-Have Skills

  • Experience with Cloud Monitoring, Logging, and Security best practices in GCP.
  • Exposure to DevOps tools (Jenkins, GitHub Actions, ArgoCD, or similar).
  • Familiarity with multi-cloud or hybrid-cloud environments.
  • Knowledge of Python, Go, or Shell scripting for automation and infrastructure management.
  • Understanding of network design, VPC architecture, and service mesh (Istio/Anthos).
  • Experience working with enterprise-scale customers and cross-functional product teams.
  • Strong presentation and stakeholder communication skills, particularly with executive audiences.


Pune
3 - 7 yrs
₹7L - ₹10L / yr
Python
Google Cloud Platform (GCP)
MongoDB
gRPC
RabbitMQ

Advanced Backend Development: Design, build, and maintain efficient, reusable, and reliable Python code. Develop complex backend services using FastAPI, MongoDB, and Postgres.

Microservices Architecture Design: Lead the design and implementation of a scalable microservices architecture, ensuring systems are robust and reliable.

Database Management and Optimization: Oversee and optimize the performance of MongoDB and Postgres databases, ensuring data integrity and security.

Message Broker Implementation: Implement and manage sophisticated message broker systems like RabbitMQ or Kafka for asynchronous processing and inter-service communication.
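The broker pattern described above can be sketched in miniature with Python's standard-library queue — a single producer/consumer pair with a poison-pill shutdown (in production, RabbitMQ or Kafka replaces the in-process queue; the message names are illustrative):

```python
import queue
import threading

def worker(tasks, results):
    """Consume messages until a None sentinel (poison pill) arrives."""
    while True:
        msg = tasks.get()
        if msg is None:
            break
        results.append(msg.upper())  # stand-in for real message handling
        tasks.task_done()

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()
for msg in ["order.created", "order.paid"]:
    tasks.put(msg)   # publish
tasks.put(None)      # shut the consumer down
t.join()
print(results)  # → ['ORDER.CREATED', 'ORDER.PAID']
```

A real broker adds durability, acknowledgements, and fan-out across processes, but the decoupling of producer from consumer is the same idea.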

Git and Version Control Expertise: Utilize Git for sophisticated source code management. Lead code reviews and maintain high standards in code quality.

Project and Team Management: Manage backend development projects, coordinating with cross-functional teams. Mentor junior developers and contribute to team growth and skill development.

Cloud Infrastructure Management: Extensive work with cloud services, specifically Google Cloud Platform (GCP), for deployment, scaling, and management of applications.

Performance Tuning and Optimization: Focus on optimizing applications for maximum speed, efficiency, and scalability.

Unit Testing and Quality Assurance: Develop and maintain thorough unit tests for all developed code. Lead initiatives in test-driven development (TDD) to ensure code quality and reliability.

Security Best Practices: Implement and advocate for security best practices, data protection protocols, and compliance standards across all backend services.

Wissen Technology

Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
Shell Scripting
Java
Ruby
Product Management

Job Title: Site Reliability Engineer (SRE) / Application Support Engineer

Experience: 3–7 Years

Location: Bangalore / Mumbai / Pune

About the Role

The successful candidate will join the S&C Site Reliability Engineering (SRE) Team, responsible for providing Tier 2/3 support to S&C business applications and environments. This role requires close collaboration with client-facing teams (Client Services, Product, and Research) as well as Infrastructure, Technology, and Application Development teams to maintain and support production and non-production environments.

Key Responsibilities

  • Provide Tier 2/3 product technical support and issue resolution.
  • Develop and maintain software tools to improve operations and support efficiency.
  • Manage system and software configurations; troubleshoot environment-related issues.
  • Identify opportunities to optimize system performance through configuration improvements or development suggestions.
  • Plan, document, and deploy software applications across Unix/Linux, Azure, and GCP environments.
  • Collaborate with Development and QA teams throughout the software release lifecycle.
  • Analyze and improve release and deployment processes to drive automation and efficiency.
  • Coordinate with infrastructure teams for maintenance, planned downtimes, and resource management across production and non-production environments.
  • Participate in on-call support (minimum one week per month) for off-hour emergencies and maintenance activities.
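As a flavor of the scripting these support duties call for, here is a minimal log-triage sketch in Python — the log format and field names are hypothetical, but counting lines per severity is a typical first step when investigating an incident:

```python
import re
from collections import Counter

# Assumed line shape: "<timestamp> <LEVEL> <message>"
LOG_LINE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def triage(lines):
    """Count log lines per severity level; ignore lines that don't match."""
    levels = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            levels[m.group("level")] += 1
    return levels

sample = [
    "2024-01-01T00:00:00 INFO service started",
    "2024-01-01T00:00:05 ERROR connection refused",
    "2024-01-01T00:00:06 ERROR connection refused",
]
print(triage(sample))  # → Counter({'ERROR': 2, 'INFO': 1})
```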

Required Skills & Qualifications

Education:
  • Bachelor’s degree in Computer Science, Engineering, or a related field (BE/MCA).
  • Master’s degree is a plus.

Experience:
  • 3–7 years in Production Support, Application Management, or Application Development (support/maintenance).

Technical Skills:
  • Strong Unix/Linux administration skills.
  • Excellent scripting skills — Shell, Python, Batch (mandatory).
  • Database expertise — Oracle (must have).
  • Understanding of the Software Development Life Cycle (SDLC).
  • PowerShell knowledge is a plus.
  • Experience in Java or Ruby development is desirable.
  • Exposure to cloud platforms (GCP, Azure, or AWS) is an added advantage.

Soft Skills:
  • Excellent problem-solving and troubleshooting abilities.
  • Strong collaboration and communication skills.
  • Ability to work in a fast-paced, cross-functional environment.


Deltek
Remote only
7 - 14 yrs
Best in industry
Amazon Web Services (AWS)
Windows Azure
OCI
Google Cloud Platform (GCP)

Title – Principal Cloud Architect

Company Summary :

As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com

 

Business Summary :

The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career!

External Job Title :

 

Principal Cloud Cost Optimization Engineer

Position Responsibilities :

The Cloud Cost Optimization Engineer plays a key role in supporting the full lifecycle of cloud financial management (FinOps) at Deltek—driving visibility, accountability, and efficiency across our cloud investments. This role is responsible for managing cloud spend, forecasting, and identifying optimization opportunities that support Deltek's cloud expansion and financial performance goals.

We are seeking a candidate with hands-on experience in Cloud FinOps practices, software development capabilities, AI/automation expertise, strong analytical skills, and a passion for driving financial insights that enable smarter business decisions. The ideal candidate is a self-starter with excellent cross-team collaboration abilities and a proven track record of delivering results in a fast-paced environment.

Key Responsibilities:

  • Prepare and deliver monthly reports and presentations on cloud spend performance versus plan and forecast for Finance, IT, and business leaders.
  • Support the evaluation, implementation, and ongoing management of cloud consumption and financial management tools.
  • Apply financial and vendor management principles to support contract optimization, cost modeling, and spend management.
  • Clearly communicate technical and financial insights, presenting complex topics in a simple, actionable manner to both technical and non-technical audiences.
  • Partner with engineering, product, and infrastructure teams to identify cost drivers, promote best practices for efficient cloud consumption, and implement savings opportunities.
  • Lead cost optimization initiatives, including analyzing and recommending savings plans, reserved instances, and right-sizing opportunities across AWS, Azure, and OCI.
  • Collaborate with the Cloud Governance team to ensure effective tagging strategies and alerting frameworks are deployed and maintained at scale.
  • Support forecasting by partnering with infrastructure and engineering teams to understand demand plans and proactively manage capacity and spend.
  • Build and maintain financial models and forecasting tools that provide actionable insights into current and future cloud expenditures.
  • Develop and maintain automated FinOps solutions using Python, SQL, and cloud-native services (Lambda, Azure Functions) to streamline cost analysis, anomaly detection, and reporting workflows.
  • Design and implement AI-powered cost optimization tools leveraging GenAI APIs (OpenAI, Claude, Bedrock) to automate spend analysis, generate natural language insights, and provide intelligent recommendations to stakeholders.
  • Build custom integrations and data pipelines connecting cloud billing APIs, FinOps platforms, and internal systems to enable real-time cost visibility and automated alerting.
  • Develop and sustain relationships with internal stakeholders, onboarding them to FinOps tools, processes, and continuous cost optimization practices.
  • Create and maintain KPIs, scorecards, and financial dashboards to monitor cloud spend and optimization progress.
  • Drive a culture of optimization by translating financial insights into actionable engineering recommendations, promoting cost-conscious architecture, and leveraging automation for resource optimization.
  • Use FinOps tools and services to analyze cloud usage patterns and provide technical cost-saving recommendations to application teams.
  • Develop self-service FinOps portals and chatbots using GenAI to enable teams to query cost data, receive optimization recommendations, and understand cloud spending through natural language interfaces.
  • Leverage Generative AI tools to enhance FinOps automation, streamline reporting, and improve team productivity across forecasting, optimization, and anomaly detection.
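The spend anomaly detection described above can be sketched with a simple trailing-window z-score in plain Python — the threshold and function name are hypothetical, and production FinOps tooling uses richer models, but the shape of the check is the same:

```python
import statistics

def spend_anomalies(daily_spend, window=7, threshold=3.0):
    """Flag day indices whose spend deviates more than threshold * stdev
    from the trailing window's mean."""
    flagged = []
    for i in range(window, len(daily_spend)):
        history = daily_spend[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero on flat spend
        if abs(daily_spend[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

spend = [100, 102, 99, 101, 100, 98, 103, 250]  # day 7 is a spike
print(spend_anomalies(spend))  # → [7]
```

Flagged days would then feed the alerting and reporting workflows described above.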

Qualifications :

 

  • Bachelor's degree in Finance, Computer Science, Information Systems, or a related field.
  • 4+ years of professional experience in Cloud FinOps, IT Financial Management, or Cloud Cost Governance within an IT organization.
  • 6-8 years of overall experience in Cloud Infrastructure Management, DevOps, Software Development, or related technical roles with hands-on cloud platform expertise.
  • Hands-on experience with native cloud cost management tools (e.g., AWS Cost Explorer, Azure Cost Management, OCI Cost Analysis) and/or third-party FinOps platforms (e.g., Cloudability, CloudHealth, Apptio).
  • Proven experience working within the FinOps domain in a large enterprise environment.
  • Strong background in building and managing custom reports, dashboards, and financial insights.
  • Deep understanding of cloud financial management practices, including chargeback/showback models, cost savings and avoidance tracking, variance analysis, and financial forecasting.
  • Solid knowledge of cloud provider pricing models, billing structures, and optimization strategies.
  • Practical experience with cloud optimization and governance practices such as anomaly detection, capacity planning, rightsizing, tagging strategies, and storage lifecycle policies.
  • Skilled in leveraging automation to drive operational efficiency in cloud cost management processes.
  • Strong analytical and data storytelling skills, with the ability to collect, interpret, and present complex financial and technical data to diverse audiences.
  • Experience developing KPIs, scorecards, and metrics aligned with business goals and industry benchmarks.
  • Ability to influence and drive change management initiatives that increase adoption and maturity of FinOps practices.
  • Highly results-driven, detail-oriented, and goal-focused, with a passion for continuous improvement.
  • Strong communicator and collaborative team player with a passion for mentoring and educating others.
  • Strong proficiency in Python and SQL for data analysis, automation, and tool development, with demonstrated experience building production-grade scripts and applications.
  • Hands-on development experience building automation solutions, APIs, or internal tools for cloud management or financial operations.
  • Practical experience with GenAI technologies, including prompt engineering and integrating LLM APIs (OpenAI, Claude, Bedrock) into business workflows.
  • Experience with Infrastructure as Code (Terraform etc.) and CI/CD pipelines for deploying FinOps automation and tooling.
  • Familiarity with data visualization libraries (e.g., Power BI) and building interactive dashboards programmatically.
  • Knowledge of ML/AI frameworks is a plus.
  • Experience building chatbots or conversational AI interfaces for internal tooling is a plus.
  • FinOps Certified Practitioner.
  • AWS, Azure, or OCI cloud certifications are preferred.

Wissen Technology
Posted by Janane Mohanasankaran
Bengaluru (Bangalore), Mumbai, Pune
3 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
Azure
Java
Ruby
Oracle NoSQL Database

Required skills and experience

• Bachelor's degree in Computer Science, Engineering, or a similar concentration (BE/MCA)

• Master’s degree a plus

• 3-8 years’ experience in a Production Support / Application Management / Application Development (support/maintenance) role.

• Excellent problem-solving/troubleshooting skills, fast learner

• Strong knowledge of Unix Administration.

• Strong scripting skills in Shell, Python, and Batch are a must.

• Strong Database experience – Oracle

• Strong knowledge of Software Development Life Cycle

• PowerShell is nice to have

• Software development skillsets in Java or Ruby.

• Experience with any of the cloud platforms (GCP/Azure/AWS) is nice to have

Wissen Technology
Posted by Moulina Dey
Pune, Bengaluru (Bangalore), Mumbai
3 - 6 yrs
₹2L - ₹14L / yr
technical product support
Linux/Unix
Google Cloud Platform (GCP)
SRE
Reliability engineering

Department: S&C – Site Reliability Engineering (SRE)  

Experience Required: 4–8 Years  

Location: Bangalore / Pune / Mumbai

Employment Type: Full-time


  • Provide Tier 2/3 technical product support to internal and external stakeholders. 
  • Develop automation tools and scripts to improve operational efficiency and support processes. 
  • Manage and maintain system and software configurations; troubleshoot environment/application-related issues. 
  • Optimize system performance through configuration tuning or development enhancements. 
  • Plan, document, and deploy applications in Unix/Linux, Azure, and GCP environments.
  • Collaborate with Development, QA, and Infrastructure teams throughout the release and deployment lifecycle.
  • Drive automation initiatives for release and deployment processes. 
  • Coordinate with infrastructure teams to manage hardware/software resources, maintenance, and scheduled downtimes across production and non-production environments. 
  • Participate in on-call rotations (minimum one week per month) to address critical incidents and off-hour maintenance tasks. 
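The automation-tooling duty above often starts as small scripts like this hedged sketch, which computes an error rate from application log lines and decides whether to page. The log format and the 5% threshold are assumptions for illustration, not any team's actual convention:

```python
# Minimal support-automation sketch: measure the error rate in a batch of
# log lines and decide whether it crosses a paging threshold.

def error_rate(log_lines):
    """Fraction of lines whose level field is ERROR."""
    if not log_lines:
        return 0.0
    errors = sum(1 for line in log_lines if " ERROR " in line)
    return errors / len(log_lines)

def should_page(log_lines, threshold=0.05):
    """Page the on-call engineer when the error rate exceeds the threshold."""
    return error_rate(log_lines) > threshold

logs = [
    "2024-05-01T10:00:00 INFO  request served",
    "2024-05-01T10:00:01 ERROR upstream timeout",
    "2024-05-01T10:00:02 INFO  request served",
]
print(error_rate(logs))   # 1 of 3 lines is an error
print(should_page(logs))  # True at a 5% threshold
```

In practice the same check would run against a log aggregator (Elastic, Loki, etc.) on a schedule, with the threshold tuned per service.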

 

Key Competencies 

  • Strong analytical, troubleshooting, and critical thinking abilities. 
  • Excellent cross-functional collaboration skills. 
  • Strong focus on documentation, process improvement, and system reliability.
  • Proactive, detail-oriented, and adaptable in a fast-paced work environment. 


Payal
Posted by Payal Sangoi
Bengaluru (Bangalore)
2 - 3 yrs
₹8L - ₹10L / yr
Linux/Unix
Docker
Kubernetes
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Junior DevOps Engineer

Experience: 2–3 years


About Us

We are a fast-growing fintech/trading company focused on building scalable, high-performance systems for financial markets. Our technology stack powers real-time trading, risk management, and analytics platforms. We are looking for a motivated Junior DevOps Engineer to join our dynamic team and help us maintain and improve our infrastructure.

Key Responsibilities

  • Support deployment, monitoring, and maintenance of trading and fintech applications.
  • Automate infrastructure provisioning and deployment pipelines using tools like Ansible, Terraform, or similar.
  • Collaborate with development and operations teams to ensure high availability, reliability, and security of systems.
  • Troubleshoot and resolve production issues in a fast-paced environment.
  • Implement and maintain CI/CD pipelines for continuous integration and delivery.
  • Monitor system performance and optimize infrastructure for scalability and cost-efficiency.
  • Assist in maintaining compliance with financial industry standards and security best practices.

Required Skills

  • 2–3 years of hands-on experience in DevOps or related roles.
  • Proficiency in Linux/Unix environments.
  • Experience with containerization (Docker) and orchestration (Kubernetes).
  • Familiarity with cloud platforms (AWS, GCP, or Azure).
  • Working knowledge of scripting languages (Bash, Python).
  • Experience with configuration management tools (Ansible, Puppet, Chef).
  • Understanding of networking concepts and security practices.
  • Exposure to monitoring tools (Prometheus, Grafana, ELK stack).
  • Basic understanding of CI/CD tools (Jenkins, GitLab CI, GitHub Actions).

Preferred Skills

  • Experience in fintech, trading, or financial services.
  • Knowledge of high-frequency trading systems or low-latency environments.
  • Familiarity with financial data protocols and APIs.
  • Understanding of regulatory requirements in financial technology.

What We Offer

  • Opportunity to work on cutting-edge fintech/trading platforms.
  • Collaborative and learning-focused environment.
  • Competitive salary and benefits.
  • Career growth in a rapidly expanding domain.



NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Pune
5 - 8 yrs
₹20L - ₹25L / yr
React.js
NodeJS (Node.js)
PostgreSQL
Google Cloud Platform (GCP)
Amazon Web Services (AWS)

🚀 We’re Hiring: React + Node.js Developer (Full Stack)

📍 Location: Pune

💼 Experience: 5–8 years

🕒 Notice Period: Immediate to 15 days


About the Role:

We’re looking for a skilled Full Stack Developer with hands-on experience in React and Node.js, and a passion for building scalable, high-performance applications.


Key Skills & Responsibilities:

• Strong expertise in React (frontend) and Node.js (backend).
• Experience with relational databases (PostgreSQL/MySQL).
• Familiarity with production systems and cloud services (AWS/GCP).
• Strong grasp of OOP/FP and clean coding principles (e.g., SOLID).
• Hands-on with Docker; exposure to Kubernetes, RabbitMQ, and Redis is good to have.
• Experience or interest in AI APIs & tools is a plus.
• Excellent communication and collaboration skills.
• Bonus: contributions to open-source projects.

Agentic AI Platform
Agency job
via Peak Hire Solutions by Dhara Thakkar
Gurugram
4 - 7 yrs
₹25L - ₹50L / yr
Microservices
API
Cloud Computing
Java
Python

ROLES AND RESPONSIBILITIES:

We are looking for a Software Engineering Manager to lead a high-performing team focused on building scalable, secure, and intelligent enterprise software. The ideal candidate is a strong technologist who enjoys coding, mentoring, and driving high-quality software delivery in a fast-paced startup environment.


KEY RESPONSIBILITIES:

  • Lead and mentor a team of software engineers across backend, frontend, and integration areas.
  • Drive architectural design, technical reviews, and ensure scalability and reliability.
  • Collaborate with Product, Design, and DevOps teams to deliver high-quality releases on time.
  • Establish best practices in agile development, testing automation, and CI/CD pipelines.
  • Build reusable frameworks for low-code app development and AI-driven workflows.
  • Hire, coach, and develop engineers to strengthen technical capabilities and team culture.


IDEAL CANDIDATE:

  • B.Tech/B.E. in Computer Science from a Tier-1 Engineering College.
  • 3+ years of professional experience as a software engineer, with at least 1 year mentoring or managing engineers.
  • Strong expertise in backend development (Java / Node.js / Go / Python) and familiarity with frontend frameworks (React / Angular / Vue).
  • Solid understanding of microservices, APIs, and cloud architectures (AWS/GCP/Azure).
  • Experience with Docker, Kubernetes, and CI/CD pipelines.
  • Excellent communication and problem-solving skills.



PREFERRED QUALIFICATIONS:

  • Experience building or scaling SaaS or platform-based products.
  • Exposure to GenAI/LLM, data pipelines, or workflow automation tools.
  • Prior experience in a startup or high-growth product environment.
Wissen Technology
Posted by Bipasha Rath
Mumbai, Pune
5 - 9 yrs
Best in industry
Google Cloud Platform (GCP)
Terraform
IaC
Azure

We are seeking a Cloud Developer with experience in GCP/Azure along with Terraform coding, who will help manage and standardize the IaC modules.

 

Experience: 5 - 8 Years

Location: Mumbai & Pune

Mode of Work: Full Time

 Key Responsibilities:

  • Design, develop, and maintain robust software applications in languages suited to the application design, with a strong focus on clean, maintainable, and efficient code.
  • Develop, maintain, and enhance Terraform modules to encapsulate common infrastructure patterns and promote code reuse and standardization.
  • Develop RESTful APIs and backend services aligned with modern architectural practices.
  • Apply object-oriented programming principles and design patterns to build scalable systems.
  • Build and maintain automated test frameworks and scripts to ensure high product quality.
  • Troubleshoot and resolve technical issues across application layers, from code to infrastructure.
  • Work with cloud platforms such as Azure or Google Cloud Platform (GCP).
  • Use Git and related version control practices effectively in a team-based development environment.
  • Integrate and experiment with AI development tools like GitHub Copilot, Azure OpenAI, or similar to boost engineering efficiency.
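A responsibility like standardizing Terraform modules can be backed by small automated checks. The hedged Python sketch below verifies that a module source declares a conventional set of input variables; the required names (`environment`, `owner`, `cost_center`) are illustrative team conventions, not a Terraform requirement:

```python
import re

# Illustrative module-standards check for a central platform team:
# confirm a Terraform module declares a required set of input variables.

REQUIRED_VARIABLES = {"environment", "owner", "cost_center"}

def declared_variables(tf_source):
    """Collect names from `variable "name"` blocks in Terraform source."""
    return set(re.findall(r'variable\s+"([^"]+)"', tf_source))

def missing_variables(tf_source):
    """Return required variables the module fails to declare."""
    return REQUIRED_VARIABLES - declared_variables(tf_source)

module_src = '''
variable "environment" {}
variable "owner" {}
'''
print(sorted(missing_variables(module_src)))  # → ['cost_center']
```

A check like this would typically run in the module repository's CI pipeline, alongside `terraform validate` and linting, so every published module meets the team's tagging and input conventions.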

 

Requirements:

  • 5+ years of experience.
  • Experience with IaC modules.
  • Terraform coding experience, including Terraform modules, as part of a central platform team.
  • Azure/GCP cloud experience is a must.
  • Experience with C#/Python/Java coding is good to have.

 

If interested, please share your updated resume with the below details:

Total Experience -

Relevant Experience -

Current Location -

Current CTC -

Expected CTC -

Notice period -

Any offer in hand -

