
50+ Google Cloud Platform (GCP) Jobs in India

Apply to 50+ Google Cloud Platform (GCP) Jobs on CutShort.io. Find your next job, effortlessly. Browse Google Cloud Platform (GCP) Jobs and apply today!

CoffeeBeans

Posted by Nikita Sinha
Hyderabad
5 - 8 yrs
Up to ₹25L / yr (varies)
DevOps
Kubernetes
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)

We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.


🚀 Key Responsibilities

Infrastructure Design & Implementation

  • Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
  • Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
  • Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
  • Set up monitoring, logging, and observability for Kubernetes workloads.
  • Design and execute backup and disaster recovery strategies for containerized applications.
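
As a sketch of what the HA best practices above can look like in manifest form, here is a hypothetical pair of objects expressed as Python dicts: replicas spread across zones via pod anti-affinity, plus a disruption budget that keeps a quorum alive during node drains. Names and numbers are illustrative, not from this posting.

```python
def ha_deployment(name: str, replicas: int = 3) -> dict:
    """Minimal Deployment manifest with zone-level pod anti-affinity."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {
                "spec": {
                    "affinity": {
                        "podAntiAffinity": {
                            # Prefer spreading replicas across zones so a
                            # single-zone outage cannot take the app down.
                            "preferredDuringSchedulingIgnoredDuringExecution": [{
                                "weight": 100,
                                "podAffinityTerm": {
                                    "labelSelector": {"matchLabels": {"app": name}},
                                    "topologyKey": "topology.kubernetes.io/zone",
                                },
                            }]
                        }
                    }
                }
            },
        },
    }

def pod_disruption_budget(name: str, min_available: int = 2) -> dict:
    """PodDisruptionBudget keeping a quorum available during voluntary disruptions."""
    return {
        "apiVersion": "policy/v1",
        "kind": "PodDisruptionBudget",
        "metadata": {"name": f"{name}-pdb"},
        "spec": {
            "minAvailable": min_available,
            "selector": {"matchLabels": {"app": name}},
        },
    }
```

In practice these would be templated via Helm or Terraform rather than built by hand; the dict form just makes the HA knobs explicit.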

Leadership & Team Management

  • Lead a team of 3–4 DevOps engineers, providing technical mentorship.
  • Drive best practices in containerization, orchestration, and cloud-native development.
  • Collaborate with development teams to optimize deployment strategies.
  • Conduct code reviews and maintain infrastructure quality standards.
  • Build knowledge-sharing culture with documentation and training.

Operational Excellence

  • Manage and scale CI/CD pipelines integrated with Kubernetes.
  • Implement security policies (RBAC, network policies, container scanning).
  • Optimize cluster performance and cost-efficiency.
  • Automate operations to minimize manual interventions.
  • Ensure 99.9% uptime for production workloads.
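
A 99.9% uptime target is easier to operate against once translated into an explicit error budget. A back-of-the-envelope sketch (the ~730-hour month is an approximation, not a figure from this posting):

```python
def downtime_budget_minutes(sla: float, period_hours: float = 730.0) -> float:
    """Allowed downtime per period, in minutes, for a given SLA fraction."""
    return (1.0 - sla) * period_hours * 60.0

# Three nines over a ~730-hour month leaves roughly 43.8 minutes of downtime.
monthly_budget = downtime_budget_minutes(0.999)
```

That budget is what alerting thresholds and deployment freezes are typically tuned against.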

Strategic Planning

  • Define the infrastructure roadmap aligned with business needs.
  • Evaluate and adopt new cloud-native technologies.
  • Perform capacity planning and cloud cost optimization.
  • Drive risk assessment and mitigation strategies.

🛠 Must-Have Technical Skills

Kubernetes Expertise

  • 6+ years of hands-on Kubernetes experience in production.
  • Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
  • Advanced Kubernetes networking (CNI, Ingress, Service mesh).
  • Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
  • Experience with Operators and Custom Resource Definitions (CRDs).

Infrastructure as Code

  • Terraform (advanced proficiency).
  • Helm (developing and managing complex charts).
  • Config management tools (Ansible, Chef, Puppet).
  • GitOps workflows (ArgoCD, Flux).

Cloud Platforms

  • Hands-on experience with at least 2 of the following:
  • AWS: EKS, EC2, VPC, IAM, CloudFormation
  • Azure: AKS, VNets, ARM templates
  • GCP: GKE, Compute Engine, Deployment Manager

CI/CD & DevOps Tools

  • Jenkins, GitLab CI, GitHub Actions, Azure DevOps
  • Docker (advanced optimization and security practices)
  • Container registries (ECR, ACR, GCR, Docker Hub)
  • Strong Git workflows and branching strategies

Monitoring & Observability

  • Prometheus & Grafana (metrics and dashboards)
  • ELK/EFK stack (centralized logging)
  • Jaeger/Zipkin (tracing)
  • AlertManager (intelligent alerting)

💡 Good-to-Have Skills

  • Service Mesh (Istio, Linkerd, Consul)
  • Serverless (Knative, OpenFaaS, AWS Lambda)
  • Running databases in Kubernetes (Postgres, MongoDB operators)
  • ML pipelines (Kubeflow, MLflow)
  • Security tools (Aqua, Twistlock, Falco, OPA)
  • Compliance (SOC2, PCI-DSS, GDPR)
  • Python/Go for automation
  • Advanced Shell scripting (Bash/PowerShell)

🎓 Qualifications

  • Bachelor’s in Computer Science, Engineering, or related field.
  • Certifications (preferred):
  • Certified Kubernetes Administrator (CKA)
  • Certified Kubernetes Application Developer (CKAD)
  • Cloud provider certifications (AWS/Azure/GCP).

Experience

  • 6–7 years of DevOps/Infrastructure engineering.
  • 4+ years of Kubernetes in production.
  • 2+ years in a lead role managing teams.
  • Experience with large-scale distributed systems and microservices.


Read more
Anscer Robotics

Posted by DeepanRaj R
Bengaluru (Bangalore)
4 - 7 yrs
₹16L - ₹20L / yr
MERN Stack
Fullstack Developer
React.js
NodeJS (Node.js)
MongoDB
+16 more

Job Description

As a Senior Full Stack Developer, you will play a pivotal role in designing, developing, and deploying high-performance web applications to manage, monitor, and interface with our robots. You are expected to lead by example—writing clean code, mentoring junior developers, and advocating for best practices across both front-end and back-end stacks. You'll also contribute to architectural decisions, scalability strategies, and end-to-end product delivery.


Key Responsibilities  

1. Full Stack Application Development  

  • Design and develop robust, scalable, and secure web applications using modern frameworks (React/Next.js, Node.js/Hono, or similar).
  • Implement end-to-end features—from the UI to backend APIs and database interactions.
  • Write reusable components, services, and APIs following SOLID and DRY principles.

2. System Design & Architecture  

  • Collaborate with architects and engineering leads to plan system architecture and microservices structure.
  • Lead decisions around system performance, scalability, and modularization.
  • Design APIs, data models, and services that can handle real-time robot communication, telemetry data, and control interfaces.

3. Technical Leadership & Mentorship  

  • Review and guide code contributions from junior and mid-level developers.
  • Conduct regular 1:1s or mentoring sessions to uplift the team’s technical skills.
  • Encourage a culture of ownership, continuous learning, and healthy code review habits.

4. DevOps, CI/CD & Cloud Integration  

  • Collaborate with DevOps to set up and maintain pipelines (CI/CD) for automated testing, deployment, and containerization (Docker/Kubernetes).
  • Work with cloud platforms (AWS, GCP, Azure) to deploy applications securely and efficiently.
  • Ensure logging, monitoring, and alerting are in place for production-grade systems.  

5. Testing & Quality Assurance  

  • Drive implementation of testing strategies including unit, integration, and end-to-end tests.
  • Collaborate with QA engineers to ensure releases are stable, bug-free, and meet performance expectations.
  • Use tools like Vitest, Bun, Postman, etc., for automated testing and performance profiling.

6. Best Practices & Tech Exploration  

  • Keep the tech stack updated and help the team adopt new tools and patterns (e.g., SSR/SSG, Edge APIs).
  • Ensure adherence to best practices in version control (Git), branching strategy, code documentation, and commit hygiene.
  • Promote accessibility, security, and performance as first-class citizens in development.

7. Cross-Team Collaboration  

  • Work closely with product managers, UI/UX designers, robotics engineers, and QA to ensure tight alignment between software and hardware goals.
  • Translate high-level business requirements into technical solutions.

8. Documentation & Knowledge Sharing  

  • Write and maintain technical documentation, design proposals, and onboarding guides.
  • Conduct knowledge-sharing sessions and technical deep dives for the team.



Requirements

Core Technical Skills

  • Frontend: React ecosystem, TanStack Suite (React Query, Router, Table), component-based architecture, TypeScript.
  • Backend: Node.js, Express.js, REST APIs, WebSockets, authentication/authorization, modern runtimes like Bun, lightweight frameworks like Hono.
  • Databases: NoSQL (MongoDB), Redis, ORM tools (Prisma/Drizzle).
  • Testing: Vitest, Jest, Playwright/Cypress, integration suites, contract testing.
  • Monorepo & Tooling: Turborepo or similar, module federation, caching strategies (Optional).
  • DevOps: Knowledge of Docker, Nginx, GitOps, CI/CD (GitHub Actions, ArgoCD, etc.), infra-as-code (Optional).
  • Familiarity with robotics, IoT, or real-time systems is a plus.

Bonus Skills  

  • Experience integrating with real-time robotic systems or IoT devices.
  • Contributions to open source or developer tooling projects.
  • Deep knowledge of browser rendering, async data flows, and modern state management patterns.


Benefits

  • Innovative Work – Be part of cutting-edge robotics and automation projects.
  • Career Growth – Opportunities for leadership, mentorship, and continuous learning.
  • Collaborative Culture – Work with a passionate and skilled team in a dynamic environment.
  • Competitive Perks – Industry-standard salary, bonuses, and health benefits.
  • Inclusive Workplace – We are an equal-opportunity employer committed to diversity.

 

Equal Opportunity Employer  

  • ANSCER Robotics is committed to creating a diverse and inclusive workplace. We welcome applicants from all backgrounds and do not discriminate based on race, gender, religion, disability, or any other protected category. We believe in providing equal opportunities based on merit, skills, and business needs.


Read more
NeoGenCode Technologies Pvt Ltd
Remote only
3 - 6 yrs
₹7L - ₹12L / yr
Python
FastAPI
API Development
Third-party API Integration
Google Cloud Platform (GCP)
+2 more

Job Title: Backend Developer (Full Time)

Location: Remote

Interview: Virtual Interview

Experience Required: 3+ Years


Backend / API Development (About the Role)

  • Strong proficiency in Python (FastAPI) or Node.js (Express) (Python preferred).
  • Proven experience in designing, developing, and integrating APIs for production-grade applications.
  • Hands-on experience deploying to serverless platforms such as Cloudflare Workers, Firebase Functions, or Google Cloud Functions.
  • Solid understanding of Google Cloud backend services (Cloud Run, Cloud Functions, Secret Manager, IAM roles).
  • Expertise in API key and secrets management, ensuring compliance with security best practices.
  • Skilled in secure API development, including HTTPS, authentication/authorization, input validation, and rate limiting.
  • Track record of delivering scalable, high-quality backend systems through impactful projects in production environments.
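
As an illustration of the rate-limiting requirement above, here is a minimal pure-Python token bucket; in a FastAPI service this logic would typically sit behind a dependency or middleware. The class and parameters are hypothetical, shown only to sketch the technique:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills `rate` tokens/sec up to `capacity`."""
    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Deterministic demo with a fake clock: burst of 2 allowed, then refill at 1 rps.
t = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2, clock=lambda: t[0])
burst = [bucket.allow(), bucket.allow(), bucket.allow()]  # third call exceeds the burst
t[0] = 1.0                                                # one second passes
after_refill = bucket.allow()                             # one token has refilled
```

Injecting the clock keeps the limiter testable; production code would normally key one bucket per API key or client IP.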


Read more
aurusai

Posted by Uday Ayyagari
Remote only
5 - 10 yrs
₹10L - ₹11L / yr
Python
React.js
React Native
Google Cloud Platform (GCP)
SQL Azure
+1 more

About Us

We are building the next generation of AI-powered products and platforms that redefine how businesses digitize, automate, and scale. Our flagship solutions span eCommerce, financial services, and enterprise automation, with an emerging focus on commercializing cutting-edge AI services across Grok, OpenAI, and the Azure Cloud ecosystem.

Role Overview

We are seeking a highly skilled Full-Stack Developer with a strong foundation in e-commerce product development and deep expertise in backend engineering using Python. The ideal candidate is passionate about designing scalable systems, has hands-on experience with cloud-native architectures, and is eager to drive the commercialization of AI-driven services and platforms.

Key Responsibilities

  • Design, build, and scale full-stack applications with a strong emphasis on backend services (Python, Django/FastAPI/Flask).
  • Lead development of eCommerce features including product catalogs, payments, order management, and personalized customer experiences.
  • Integrate and operationalize AI services across Grok, OpenAI APIs, and Azure AI services to deliver intelligent workflows and user experiences.
  • Build and maintain secure, scalable APIs and data pipelines for real-time analytics and automation.
  • Collaborate with product, design, and AI research teams to bring experimental features into production.
  • Ensure systems are cloud-ready (Azure preferred) with CI/CD, containerization (Docker/Kubernetes), and strong monitoring practices.
  • Contribute to frontend development (React, Angular, or Vue) to deliver seamless, responsive, and intuitive user experiences.
  • Champion best practices in coding, testing, DevOps, and Responsible AI integration.

Required Skills & Experience

  • 5+ years of professional full-stack development experience.
  • Proven track record in eCommerce product development (payments, cart, checkout, multi-tenant stores).
  • Strong backend expertise in Python (Django, FastAPI, Flask).
  • Experience with cloud services (Azure preferred; AWS/GCP is a plus).
  • Hands-on with AI/ML integration using APIs like OpenAI, Grok, Azure Cognitive Services.
  • Solid understanding of databases (SQL & NoSQL), caching, and API design.
  • Familiarity with frontend frameworks such as React, Angular, or Vue.
  • Experience with DevOps practices: GitHub/GitLab, CI/CD, Docker, Kubernetes.
  • Strong problem-solving skills, adaptability, and a product-first mindset.

Nice to Have

  • Knowledge of vector databases, RAG pipelines, and LLM fine-tuning.
  • Experience in scalable SaaS architectures and subscription platforms.
  • Familiarity with C2PA, identity security, or compliance-driven development.

What We Offer

  • Opportunity to shape the commercialization of AI-driven products in fast-growing markets.
  • A high-impact role with autonomy and visibility.
  • Competitive compensation, equity opportunities, and growth into leadership roles.
  • Collaborative environment working with seasoned entrepreneurs, AI researchers, and cloud architects.


Read more
Cymetrix Software

Posted by Netra Shettigar
Remote only
3 - 7 yrs
₹8L - ₹20L / yr
Google Cloud Platform (GCP)
ETL
Python
Big Data
SQL
+4 more

Must have skills:

1. GCP – GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java

2. ETL on GCP Cloud - Build pipelines (Python/Java) + Scripting, Best Practices, Challenges

3. Knowledge of batch and streaming data ingestion; building end-to-end data pipelines on GCP

4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud; SQL vs NoSQL and types of NoSQL databases (at least 2 databases)

5. Data Warehouse concepts - Beginner to Intermediate level
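
The pipeline shape described in point 2 can be sketched language-agnostically as chained extract → transform → load stages. A minimal pure-Python illustration with hypothetical stage and field names; in a real GCP stack the stages would map to GCS/Pub/Sub reads, Dataflow transforms, and BigQuery writes:

```python
def extract(rows):
    """Extract stage: yield raw records (stand-in for a GCS or Pub/Sub read)."""
    yield from rows

def transform(records):
    """Transform stage: cleanse and normalize each record."""
    for rec in records:
        if rec.get("amount") is None:
            continue  # drop incomplete rows during cleansing
        yield {"id": rec["id"], "amount": round(float(rec["amount"]), 2)}

def load(records, sink):
    """Load stage: append to the warehouse (stand-in for a BigQuery write)."""
    for rec in records:
        sink.append(rec)
    return len(sink)

raw = [{"id": 1, "amount": "10.00"}, {"id": 2, "amount": None}]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)  # bad row is filtered out
```

Generators keep each stage streaming-friendly: the same structure works for batch files or an unbounded message feed.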


Role & Responsibilities:

● Work with business users and other stakeholders to understand business processes.

● Design and implement Dimensional and Fact tables.

● Identify and implement data transformation/cleansing requirements.

● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform, and load data from various systems into the Enterprise Data Warehouse.

● Develop conceptual, logical, and physical data models with associated metadata, including data lineage and technical data definitions.

● Design, develop, and maintain ETL workflows and mappings using the appropriate data load technique.

● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.

● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.

● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.

● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.

● Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform into reporting & analytics.

● Define and document the use of BI through user experience/use cases, prototypes, testing, and deployment of BI solutions.

● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers, and quality issues; continuously validate reports and dashboards and suggest improvements.

● Train business end-users, IT analysts, and developers.
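
The dimensional/fact modelling mentioned above can be sketched with a minimal star schema: one dimension table, one fact table keyed to it, and the typical BI rollup join. This uses stdlib sqlite3 with hypothetical table and column names, purely as an illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Minimal star schema: one dimension, one fact keyed to it via a surrogate key.
cur.execute("CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT)")
cur.execute("""CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount REAL)""")

cur.execute("INSERT INTO dim_customer VALUES (1, 'Acme', 'APAC'), (2, 'Globex', 'EMEA')")
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 1, 100.0), (2, 1, 50.0), (3, 2, 75.0)])

# Typical BI rollup: revenue by region via a fact-to-dimension join.
cur.execute("""SELECT d.region, SUM(f.amount)
               FROM fact_sales f JOIN dim_customer d USING (customer_key)
               GROUP BY d.region ORDER BY d.region""")
by_region = cur.fetchall()
```

The same shape scales to multiple dimensions (date, product, channel) sharing the one fact table, which is what keeps BI queries simple.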

Read more
Publicis Sapient

Posted by Dipika
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad, Pune
5 - 7 yrs
₹5L - ₹20L / yr
Java
Microservices
Apache Kafka
Apache ActiveMQ
+3 more

Senior Associate Technology L1 – Java Microservices


Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.


Job Description

We are looking for a Senior Associate Technology Level 1 - Java Microservices Developer to join our team of bright thinkers and doers. You’ll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients’ most complex and challenging problems across different industries.

We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.


Your Impact:

• Drive the design, planning, and implementation of multifaceted applications, giving you breadth and depth of knowledge across the entire project lifecycle.

• Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients’ business.

• Constantly innovate and evaluate emerging technologies and methods to provide scalable and elegant solutions that help clients achieve their business goals.


Qualifications

➢ 5 to 7 Years of software development experience

➢ Strong development skills in Java JDK 1.8 or above

➢ Java fundamentals like exception handling, serialization/deserialization, and immutability concepts

➢ Good fundamental knowledge of Enums, Collections, Annotations, Generics, autoboxing, and data structures

➢ Database RDBMS/No SQL (SQL, Joins, Indexing)

➢ Multithreading (Re-entrant Lock, Fork & Join, Sync, Executor Framework)

➢ Spring Core & Spring Boot, security, transactions

➢ Hands-on experience with JMS (ActiveMQ, RabbitMQ, Kafka, etc.)

➢ Memory management (JVM configuration, profiling, GC), performance tuning, and testing with JMeter or similar tools

➢ DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker, and containerization)

➢ Logical/analytical skills; thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns

➢ Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)

➢ Experience writing JUnit test cases using Mockito/PowerMock frameworks

➢ Should have practical experience with Maven/Gradle and knowledge of version control systems like Git/SVN etc.

➢ Good communication skills and ability to work with global teams to define and deliver on projects.

➢ Sound understanding/experience in software development process, test-driven development.

➢ Cloud – AWS/Azure/GCP/PCF; any private cloud would also be fine

➢ Experience in Microservices

Read more
Bengaluru (Bangalore)
5 - 8 yrs
₹8L - ₹25L / yr
Machine Learning (ML)
Production management
Large Language Models (LLM)
AIML
Google Cloud Platform (GCP)

Job description

We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role — you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure.

This role involves building, training, deploying, and maintaining ML models at scale, integrating them with business applications. Basic model prototyping won't qualify — we’re seeking hands-on expertise in building scalable machine learning pipelines.


Key Responsibilities


Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation.

Implement GenAI use cases using LLMs

Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets.

Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems.

Work on impactful AI/ML use cases in Search & Personalization, SEO Optimization, Marketing Analytics, Supply Chain Forecasting, and Customer Experience.

Implement real-time model deployment and monitoring using tools like Kubeflow, Vertex AI, Airflow, PySpark, etc.

Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production-readiness.

Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning.

Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning.


Technical Skills Required

Core ML & AI Skills (Must-Have):

Strong hands-on ML engineering (70% of the role) — supervised/unsupervised learning, clustering, regression, optimization.

Experience with real-world model deployment and scaling, not just notebooks or prototypes.

Good understanding of ML Ops, model lifecycle, and pipeline orchestration.

Strong with Python 3, Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc.

SQL proficiency and experience querying large datasets.

Deep understanding of linear algebra, probability/statistics, Big-O, and scientific experimentation.

Cloud experience in GCP (preferred), AWS, or Azure.
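
As a toy illustration of the regression-and-optimization fundamentals listed above (not the team's actual stack or data), here is a stdlib-only batch gradient-descent fit of a line, the kind of from-scratch exercise that distinguishes ML engineering from notebook prototyping:

```python
def fit_line(xs, ys, lr=0.05, steps=2000):
    """Fit y = w*x + b by minimizing mean squared error with gradient descent."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of MSE with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated exactly by y = 2x + 1
w, b = fit_line(xs, ys)      # converges to w ≈ 2, b ≈ 1
```

In production the same loop would be replaced by Scikit-learn, TensorFlow, or PyTorch, but the gradient arithmetic is what interviews for this kind of role tend to probe.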

Cloud & Big Data Stack


Hands-on experience with:

GCP tools – Vertex AI, Kubeflow, BigQuery, GCS

Or equivalent AWS/Azure ML stacks

Familiar with Airflow, PySpark, or other pipeline orchestration tools.

Experience reading/writing data from/to cloud services.



Qualifications


Bachelor's/Master’s/Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or related quantitative field.

4+ years of experience in data analytics and machine learning roles.

2+ years of experience in Python or similar programming languages (Java, Scala, Rust).

Must have experience deploying and scaling ML models in production.


Nice to Have


Experience with LLM fine-tuning, Graph Algorithms, or custom deep learning architectures.

Background taking academic research through to production applications.

Building APIs and monitoring production ML models.

Familiarity with advanced math – Graph Theory, PDEs, Optimization Theory.


Communication & Collaboration


Strong ability to explain complex models and insights to both technical and non-technical stakeholders.

Ask the right questions, clarify objectives, and align analytics with business goals.

Comfortable working cross-functionally in agile and collaborative teams.


Important Note:

This is a Data Science-heavy role — 70% of responsibilities involve building, training, deploying, and scaling AI/ML models.

Cloud experience is mandatory (GCP preferred, AWS/Azure acceptable).

Only candidates with hands-on experience in deploying ML models into production (not just notebooks) will be considered.


Read more
a leading Data & Analytics intelligence technology solutions provider to companies that value insights from information as a competitive advantage. They are the partner of choice for enterprises on their digital transformation journey. Teams offer solutions and services at the intersection of Advanced Data, Analytics, and AI.


Agency job
via HyrHub by Shwetha Naik
Bengaluru (Bangalore), Mangalore
6.5 - 10 yrs
₹12L - ₹25L / yr
ETL
SQL
Databricks
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

• Overall 6+ years of experience in development and maintenance on data engineering projects.

• Collaborate with stakeholders to understand data requirements and translate them into technical solutions.

 

SQL

• Experienced in writing complex SQL queries, with knowledge of SQL analytical functions.

 

Databricks

• Design, develop, and deploy scalable data pipelines and ETL processes using Databricks.

• Optimize Spark jobs and ensure efficient data processing and storage

• Develop and maintain Databricks notebooks, workflows, and dashboards

• Proficient in building and optimizing data pipelines and workflows in Databricks

• Troubleshoot and resolve data pipeline issues and performance bottlenecks

• Monitor and manage data processing jobs to ensure high availability and reliability

• Work with cloud platforms such as AWS, Azure, or Google Cloud to integrate Databricks with other services

• Knowledge of programming languages such as Python, Scala

• Stay current with the latest Databricks and big data technologies and trends

 

Cloud

• Experience working on cloud platforms (Azure/AWS) and familiarity with their cloud tools and technologies

Read more
Aceis Services

Posted by Anushi Mishra
Remote only
2 - 10 yrs
₹8.6L - ₹30.2L / yr
CI/CD
Apache Spark
PySpark
MLOps
Machine Learning (ML)
+6 more

We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!

Key Details

  • Work Type: Freelance / Contract
  • Location: Remote
  • Time Zones: IST / EST only
  • Domain: Data & AI, Cloud, Big Data, Machine Learning
  • Collaboration: Work with industry leaders on innovative projects

🔹 Open Roles

1. Databricks – Senior Consultant

  • Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
  • Experience: 6+ years

2. Databricks – ML Engineer

  • Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
  • Experience: 4+ years

3. Databricks – Solution Architect

  • Skills: Azure, GCP, AWS, CI/CD, MLOps
  • Experience: 7+ years

4. Databricks – Solution Consultant

  • Skills: SQL, Spark, BigQuery, Python, Scala
  • Experience: 2+ years

What We Offer

  • Opportunity to work with top-tier professionals and clients
  • Exposure to cutting-edge technologies and real-world data challenges
  • Flexible remote work environment aligned with IST / EST time zones
  • Competitive compensation and growth opportunities

📌 Skills We Value

Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark

Read more
iRage

Posted by Jyosana Jadhav
Mumbai
4 - 7 yrs
₹25L - ₹40L / yr
React.js
JavaScript
DOM
WebSocket
Chart.js
+20 more

We are seeking a highly skilled React JS Developer with exceptional DOM manipulation expertise and real-time data handling experience to join our team. You'll be building and optimizing high-performance user interfaces for stock market trading applications where milliseconds matter and data flows continuously.


The ideal candidate thrives in fast-paced environments, understands the intricacies of browser performance, and has hands-on experience with WebSockets and real-time data streaming architectures.


Key Responsibilities


Core Development

  • Advanced DOM Operations: Implement complex, performance-optimized DOM manipulations for real-time trading interfaces
  • Real-time Data Management: Build robust WebSocket connections and handle high-frequency data streams with minimal latency
  • Performance Engineering: Create lightning-fast, scalable front-end applications that process thousands of market updates per second
  • Custom Component Architecture: Design and build reusable, high-performance React components optimized for trading workflows
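
One common technique behind handling "thousands of market updates per second" is coalescing: buffer incoming ticks and render at most once per frame, keeping only the latest value per symbol. A hedged sketch of the idea, in Python for brevity (in the actual React stack this logic would live in JavaScript around `requestAnimationFrame`; the class and symbols are hypothetical):

```python
class TickCoalescer:
    """Coalesce a high-frequency stream: keep only the latest tick per symbol
    between renders, so the UI paints once per frame instead of once per message."""
    def __init__(self):
        self.pending = {}

    def on_tick(self, symbol: str, price: float):
        # Later ticks for the same symbol overwrite earlier ones.
        self.pending[symbol] = price

    def flush(self) -> dict:
        """Called once per animation frame: drain and return the batch to render."""
        batch, self.pending = self.pending, {}
        return batch

c = TickCoalescer()
for price in (101.0, 101.5, 102.25):
    c.on_tick("AAPL", price)   # three messages arriving within one frame
c.on_tick("TCS", 3500.0)
frame = c.flush()              # a single render sees only the latest prices
```

The payoff is that DOM work scales with the frame rate and the number of visible symbols, not with the message rate.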


Collaboration & Integration

  • Work closely with traders, quants, and backend developers to translate complex trading requirements into intuitive interfaces
  • Collaborate with UX/UI designers and product managers to create responsive, trader-focused experiences
  • Integrate with real-time market data APIs and trading execution systems


Technical Excellence

  • Implement sophisticated data visualizations and interactive charts using libraries like Chart.js, TradingView, or custom D3.js solutions
  • Ensure cross-browser compatibility and responsiveness across multiple devices and screen sizes
  • Debug and resolve complex performance issues, particularly in real-time data processing and rendering
  • Maintain high-quality code through reviews, testing, and comprehensive documentation


Required Skills & Experience


React & JavaScript Mastery

  • 5+ years of professional React.js development with deep understanding of React internals, hooks, and advanced patterns
  • Expert-level JavaScript (ES6+) with strong proficiency in asynchronous programming, closures, and memory management
  • Advanced HTML5 & CSS3 skills with focus on performance and cross-browser compatibility


Real-time & Performance Expertise

  • Proven experience with WebSockets and real-time data streaming protocols
  • Strong DOM manipulation skills - direct DOM access, virtual scrolling, efficient updates, and performance optimization
  • RESTful API integration with experience in handling high-frequency data feeds
  • Browser performance optimization - understanding of rendering pipeline, memory management, and profiling tools


Development Tools & Practices

  • Proficiency with modern build tools: Webpack, Babel, Vite, or similar
  • Experience with Git version control and collaborative development workflows
  • Agile/Scrum development environment experience
  • Understanding of testing frameworks (Jest, React Testing Library)


Financial Data Visualization

  • Experience with financial charting libraries: Chart.js, TradingView, D3.js, or custom visualization solutions
  • Understanding of market data structures, order books, and trading terminology
  • Knowledge of data streaming optimization techniques for financial applications


Nice-to-Have Skills


Domain Expertise

  • Prior experience in stock market, trading, or financial services - understanding of trading workflows, order management, risk systems
  • Algorithmic trading knowledge or exposure to quantitative trading systems
  • Financial market understanding - equities, derivatives, commodities


Technical Plus Points

  • Backend development experience with GoLang, Python, or Node.js
  • Database knowledge: SQL, NoSQL, time-series databases (InfluxDB, TimescaleDB)
  • Cloud platform experience: AWS, Azure, GCP for deploying scalable applications
  • Message queue systems: Redis, RabbitMQ, Kafka, NATS for real-time data processing
  • Microservices architecture understanding and API design principles


Advanced Skills

  • Service Worker implementation for offline-first applications
  • Progressive Web App (PWA) development
  • Mobile-first responsive design expertise


Qualifications

  • Bachelor's degree in Computer Science, Engineering, or related field (or equivalent professional experience)
  • 5+ years of professional React.js development with demonstrable experience in performance-critical applications
  • Portfolio or examples of complex real-time applications you've built
  • Financial services experience strongly preferred


Why You'll Love Working Here


We're a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.


What We Offer

💰 Competitive salary – Get paid what you're worth

🌴 Generous paid time off – Recharge and come back sharper

🌍 Work with the best – Collaborate with top-tier global talent

✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings

🎯 Performance rewards – Multiple bonuses for those who go above and beyond

🏥 Health covered – Comprehensive insurance so you're always protected

Fun, not just work – On-site sports, games, and a lively workspace

🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers

📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft

🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best

🚚 Relocation support – Smooth move? We've got your back

🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting


We work hard, play hard, and grow together. Join us.



Read more
iRage

at iRage

3 recruiters
Jyosana Jadhav
Posted by Jyosana Jadhav
Mumbai
4 - 7 yrs
₹25L - ₹35L / yr
skill iconPython
DevOps
CI/CD
Infrastructure
Deployment management
+14 more

We are seeking an experienced Operations Lead to drive operational excellence and lead a dynamic team in our fast-paced environment. The ideal candidate will combine strong technical expertise in Python with proven leadership capabilities to optimize processes, ensure system reliability, and deliver results.


Key Responsibilities


  • Team & stakeholder leadership - Lead 3-4 operations professionals and work cross-functionally with developers, system administrators, quants, and traders


  • DevOps automation & deployment - Develop deployment pipelines, automate configuration management, and build Python-based tools for operational processes and system optimization


  • Technical excellence & standards - Drive code reviews, establish development standards, ensure regional consistency with DevOps practices, and maintain technical documentation


  • System operations & performance - Monitor and optimize system performance for high availability, scalability, and security while managing day-to-day operations


  • Incident management & troubleshooting - Coordinate incident response, resolve infrastructure and deployment issues, and implement automated solutions to prevent recurring problems


  • Strategic technical leadership - Make infrastructure decisions, identify operational requirements, design scalable architecture, and stay current with industry best practices


  • Reporting & continuous improvement - Report on operational metrics and KPIs to senior leadership while actively contributing to DevOps process improvements
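Much of the automation described above amounts to making flaky operational steps safe to run unattended. A small hedged sketch of one such Python building block (names are hypothetical, not the team's actual tooling): retrying a step with exponential backoff.

```python
import time

def with_retries(task, attempts=3, base_delay=0.5, sleep=time.sleep):
    """Run task(); on failure, retry with exponential backoff.

    sleep is injectable so tests and dry runs don't actually wait.
    """
    for attempt in range(1, attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == attempts:
                raise  # out of attempts: surface the error to the caller
            sleep(base_delay * 2 ** (attempt - 1))  # 0.5s, 1s, 2s, ...
```

Wrapping a deploy or health-check step in `with_retries` is a common way to prevent transient failures from paging anyone, while still failing loudly when the problem persists.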


Qualifications and Experience


  • Bachelor's degree in Computer Science, Engineering, or related technical field
  • At least 5 years of experience as a Software Engineer, including at least 2 years as a DevOps Engineer or in a similar role, working with complex software projects and environments.
  • Excellent knowledge of cloud technologies, containers, and orchestration.
  • Proficiency in scripting and programming languages such as Python and Bash.
  • Experience with Linux operating systems and command-line tools.
  • Proficient in using Git for version control.


Good to Have


  • Experience with Nagios or similar monitoring and alerting systems
  • Backend and/or frontend development experience for operational tooling
  • Previous experience working in a trading firm or financial services environment
  • Knowledge of database management and SQL
  • Familiarity with cloud platforms (AWS, Azure, GCP)
  • Experience with DevOps practices and CI/CD pipelines
  • Understanding of network protocols and system administration


Why You’ll Love Working Here


We’re a team that hustles—plain and simple. But we also believe life outside work matters. No cubicles, no suits—just great people doing great work in a space built for comfort and creativity.


Here’s what we offer:

💰 Competitive salary – Get paid what you’re worth.

🌴 Generous paid time off – Recharge and come back sharper.

🌍 Work with the best – Collaborate with top-tier global talent.

✈️ Adventure together – Annual offsites (mostly outside India) and regular team outings.

🎯 Performance rewards – Multiple bonuses for those who go above and beyond.

🏥 Health covered – Comprehensive insurance so you’re always protected.

Fun, not just work – On-site sports, games, and a lively workspace.

🧠 Learn and lead – Regular knowledge-sharing sessions led by your peers.

📚 Annual Education Stipend – Take any external course, bootcamp, or certification that makes you better at your craft.

🏋️ Stay fit – Gym memberships with equal employer contribution to keep you at your best.

🚚 Relocation support – Smooth move? We’ve got your back.

🏆 Friendly competition – Work challenges and extracurricular contests to keep things exciting.


We work hard, play hard, and grow together. Join us.


(P.S. We hire for talent, not pedigree—but if you’ve worked at a top tech co or fintech startup, we’d love to hear how you’ve shipped great products.)


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Anurag Sinha
Posted by Anurag Sinha
Pune, Mumbai, Navi Mumbai
5 - 8 yrs
Best in industry
Google Cloud Platform (GCP)
Azure Cloud
Terraform
DevOps
  • Looking for an engineer to manage IaC modules
  • Terraform experience is a must
  • Own Terraform modules as part of the central platform team
  • Azure/GCP experience is a must
  • C#/Python/Java coding is good to have

 

Read more
AI powered tech startup

AI powered tech startup

Agency job
via Recruit Square by Priyanka choudhary
Remote only
2 - 3 yrs
₹6L - ₹9L / yr
Large Language Models (LLM)
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Natural Language Processing (NLP)
skill iconNodeJS (Node.js)
+3 more

Job Title: AI Developer/Engineer

Location: Remote

Employment Type: Full-time


About the Organization 

We are a cutting-edge AI-powered startup that is revolutionizing data management and content generation. Our platform harnesses the power of generative AI and natural language processing to turn unstructured data into actionable insights, providing businesses with real-time, intelligent content and driving operational efficiency. As we scale, we are looking for experienced AI engineers to help design and build our next-generation AI-driven solutions.


About the Role

We are seeking an AI Developer to design, fine-tune, and deploy advanced Large Language Models (LLMs) and AI agents across healthcare and SMB workflows. You will work with cutting-edge technologies—OpenAI, Claude, LLaMA, Gemini, Grok—building robust pipelines and scalable solutions that directly impact real-world hospital use cases such as risk calculators, clinical protocol optimization, and intelligent decision support.

Key Responsibilities

  • Build, fine-tune, and customize LLMs and AI agents for production-grade workflows
  • Leverage Node.js for backend development and integration with various cloud services.
  • Use AI tools and AI prompts to develop automated processes that enhance data management and client offerings
  • Drive the evolution of deployment methodologies, ensuring that AI systems are continuously optimized, tested, and delivered in production-ready environments.
  •  Stay up-to-date with emerging AI technologies, cloud platforms, and development methodologies to continually evolve the platform’s capabilities.
  • Integrate and manage vector databases such as FAISS and Pinecone.
  • Ensure scalability, performance, and compliance in all deployed AI systems.
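The FAISS/Pinecone bullet above is about nearest-neighbour search over embeddings. Below is a deliberately tiny, dependency-free Python stand-in showing the core operation; real vector databases add approximate-nearest-neighbour indexing, persistence, and metadata filtering on top of this idea. All names are invented for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class TinyVectorIndex:
    """In-memory stand-in for a vector DB such as FAISS or Pinecone."""

    def __init__(self):
        self.items = []  # (doc_id, embedding) pairs

    def add(self, doc_id, embedding):
        self.items.append((doc_id, embedding))

    def search(self, query, k=3):
        # Brute-force top-k by similarity; ANN indexes avoid this O(n) scan.
        scored = [(cosine(query, emb), doc_id) for doc_id, emb in self.items]
        scored.sort(reverse=True)
        return [doc_id for _, doc_id in scored[:k]]
```

In a retrieval-augmented pipeline, `add` would store embeddings produced by an LLM provider's embedding endpoint, and `search` would fetch context for the prompt.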

Required Qualifications

  • 2–3 years of hands-on experience in AI/ML development or full-stack AI integration.
  • Proven expertise in building Generative AI models and AI-powered applications, especially in a cloud environment.
  • Strong experience with multi-cloud infrastructure and platforms.
  • Proficiency with Node.js and modern backend frameworks for developing scalable solutions.
  • In-depth understanding of AI prompts, natural language processing, and agent-based systems for enhancing decision-making processes.
  • Familiarity with AI tools for model training, data processing, and real-time inference tasks.
  • Experience working with hybrid cloud solutions, including private and public cloud integration for AI workloads.
  • Strong problem-solving skills and a passion for innovation in AI and cloud technologies
  • Knowledge of Agile delivery methodology.
  • Experience with CI/CD pipeline deployment, using JIRA and GitHub for code management and releases.
  • Strong experience with LLMs, prompt engineering, and fine-tuning.
  • Knowledge of vector databases (FAISS, Pinecone, Milvus, or similar).

Nice to Have

  • Experience in healthcare AI, digital health, or clinical applications.
  • Exposure to multi-agent AI frameworks.

What We Offer

  • Flexible working hours. 
  • Collaborative, innovation-driven work culture.
  • Growth opportunities in a rapidly evolving AI-first environment.
Read more
TalentLo

at TalentLo

2 candid answers
Sathwik P
Posted by Sathwik P
Remote only
1 - 5 yrs
₹5L - ₹15L / yr
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
Google Cloud Platform (GCP)

About the Job

Cloud Engineer

Experience: 1–5 Years

Salary: Competitive

Preferred Notice Period: Immediate to 30 Days

Opportunity Type: Remote (Global)

Placement Type: Freelance/Contract

(Note: This is a requirement for one of TalentLo’s Clients)


Role Overview

We’re seeking experienced Cloud Engineers with 1–5 years of professional experience to design, build, and optimize cloud-native applications and infrastructure. This is a freelance/contract opportunity where you’ll work remotely with global clients on innovative and high-impact projects.


Responsibilities

  • Design and implement cloud-native applications and infrastructure
  • Create Infrastructure as Code (IaC) templates for automated deployments
  • Set up and optimize CI/CD pipelines for cloud applications
  • Implement security best practices for cloud environments
  • Design scalable and cost-effective cloud architectures
  • Troubleshoot and resolve complex cloud service issues
  • Create cloud migration strategies and implementation plans
  • Guide clients on cloud best practices and architectural decisions
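Infrastructure as Code, mentioned in the responsibilities above, ultimately reduces to diffing a desired state against the actual state and producing a plan of create/change/destroy actions, which is what Terraform's plan step does. A toy Python sketch of that idea (the resource maps are invented for illustration):

```python
def plan(desired, actual):
    """Compute a Terraform-plan-style diff between two resource maps.

    Each map is {resource_name: config}; configs are compared by equality.
    """
    create = sorted(set(desired) - set(actual))
    destroy = sorted(set(actual) - set(desired))
    change = sorted(name for name in set(desired) & set(actual)
                    if desired[name] != actual[name])
    return {"create": create, "change": change, "destroy": destroy}
```

Declaring the desired end state and letting tooling compute the delta is what makes IaC deployments reviewable and repeatable, compared with imperative scripts that mutate infrastructure directly.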

Requirements

  • Strong proficiency with at least one major cloud provider (AWS, Azure, GCP)
  • Experience with Infrastructure as Code tools (Terraform, CloudFormation, ARM templates)
  • Knowledge of containerization and orchestration (Docker, Kubernetes)
  • Understanding of cloud networking and security concepts
  • Experience with CI/CD tools and methodologies
  • Scripting and automation skills
  • Solid understanding of high availability and disaster recovery
  • Experience implementing monitoring and logging solutions

How to Apply

  1. Create your profile on TalentLo’s platform → https://www.talentlo.com/signup
  2. Submit your GitHub, portfolio, or sample projects
  3. Take the required assessment and get qualified
  4. Get shortlisted & connect with the client

About TalentLo

TalentLo is a revolutionary talent platform connecting exceptional tech professionals with high-quality clients worldwide. We’re building a carefully curated pool of skilled experts to match with companies actively seeking specialized talent for impactful projects.


✨ If you’re ready to work on exciting cloud projects, collaborate with global teams, and take your career to the next level — apply today!

Read more
Renowned IT Services company

Renowned IT Services company

Agency job
via AccioJob by AccioJobHiring Board
Mumbai
0 - 1 yrs
₹5L - ₹6.5L / yr
Google Cloud Platform (GCP)
skill iconKubernetes
Cloud Computing

AccioJob is conducting a Walk-In Hiring Drive with a Renowned IT Services company for the position of Cloud Engineer.


To apply, register and select your slot here: https://go.acciojob.com/U24WTf


Required Skills: Terraform, Kubernetes, Cloud Platforms


Eligibility:

  • Degree: BTech./BE, MTech./ME, BCA, MCA, MSc, BSc.
  • Branch: Computer Science/CSE/Other CS related branch, Electrical/Other electrical-related branches, IT
  • Graduation Year: 2024, 2025

Work Details:

  • Work Location: Mumbai (Onsite)
  • CTC: 5 LPA to 6.5 LPA

Evaluation Process:

Round 1: Offline Assessment at Lokmanya Tilak College of Engineering

Further Rounds (for shortlisted candidates only):

  • Technical Interview 1
  • Technical Interview 2
  • Technical Interview 3


Important Note: Bring your laptop & earphones for the test.


Register here: https://go.acciojob.com/U24WTf

Read more
Monjura Parveen
Monjura Parveen
Posted by Monjura Parveen
Kolkata
5 - 12 yrs
₹5L - ₹12L / yr
skill iconGo Programming (Golang)
skill iconNodeJS (Node.js)
skill iconJavascript
WebSocket
RESTful APIs
+5 more

Job Title: Golang Developer

Location: Kolkata

Job Type: Full-time

Working Days: 5 Days (Rotational off)

About the Role:

We are seeking a skilled Golang Developer with experience in Golang, Node.js, WebSocket communication, and API development. In this role, you will work closely with our development team to design, develop, and maintain high-performance backend systems and real-time applications.

Key Responsibilities:

  • Design, build, and maintain efficient, reusable, and reliable Golang code.
  • Develop scalable APIs and microservices.
  • Integrate and build real-time communication using WebSocket protocols.
  • Collaborate with frontend developers and other team members to establish objectives and design more functional, cohesive systems.
  • Write clean, maintainable, and well-documented code.
  • Optimize applications for maximum performance, scalability, and security.
  • Participate in code reviews, contribute to team knowledge, and continuously improve development processes.
  • Troubleshoot, debug, and upgrade existing systems.
  • Occasionally work with Node.js services and modules when needed.

Required Skills and Qualifications:

  • Min 3+ years of experience in backend development with Golang.
  • Solid understanding of Node.js and JavaScript/TypeScript.
  • Hands-on experience with WebSocket integration and real-time applications.
  • Strong knowledge of RESTful APIs, API design principles, and API documentation.
  • Experience with relational and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
  • Good understanding of concurrent programming and microservices architecture.
  • Familiarity with cloud platforms (AWS, GCP, Azure) is a plus.
  • Knowledge of containerization tools like Docker and Kubernetes is a plus.
  • Strong problem-solving skills, attention to detail, and a proactive attitude.

Read more
Matilda cloud

Matilda cloud

Agency job
via Employee Hub by PREETI DUA
Hyderabad, Bengaluru (Bangalore)
6 - 7 yrs
₹22L - ₹26L / yr
skill iconFlask
API
Google Cloud Platform (GCP)
AWS CloudFormation
AWS Lambda
+5 more

Job Summary:


We are seeking an experienced and highly motivated Senior Python Developer to join our dynamic and growing engineering team. This role is ideal for a seasoned Python expert who thrives in a fast-paced, collaborative environment and has deep experience building scalable applications, working with cloud platforms, and automating infrastructure.



Key Responsibilities:


Develop and maintain scalable backend services and APIs using Python, with a strong emphasis on clean architecture and maintainable code.


Design and implement RESTful APIs using frameworks such as Flask or FastAPI, and integrate with relational databases using ORM tools like SQLAlchemy.


Work with major cloud platforms (AWS, GCP, or Oracle Cloud Infrastructure) using Python SDKs to build and deploy cloud-native applications.


Automate system and infrastructure tasks using tools like Ansible, Chef, or other configuration management solutions.


Implement and support Infrastructure as Code (IaC) using Terraform or cloud-native templating tools to manage resources effectively.





Work across both Linux and Windows environments, ensuring compatibility and stability across platforms.


Required Qualifications:


5+ years of professional experience in Python development, with a strong portfolio of backend/API projects.


Strong expertise in Flask, SQLAlchemy, and other Python-based frameworks and libraries.


Proficient in asynchronous programming and event-driven architecture using tools such as asyncio, Celery, or similar.
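As a quick illustration of the asyncio-style event-driven concurrency mentioned above (a sketch, not this team's actual code): three simulated I/O-bound calls run concurrently on one event loop, so total wall time tracks the slowest call rather than the sum.

```python
import asyncio
import time

async def fake_io(name, delay):
    # Stand-in for an I/O-bound call (HTTP request, DB query).
    await asyncio.sleep(delay)
    return name

async def fetch_all():
    # gather() interleaves the waits instead of running them back to back.
    return await asyncio.gather(
        fake_io("users", 0.05),
        fake_io("orders", 0.05),
        fake_io("prices", 0.05),
    )

start = time.monotonic()
results = asyncio.run(fetch_all())
elapsed = time.monotonic() - start
```

Run sequentially, the three calls would take at least 0.15 s; gathered, they finish in roughly 0.05 s, which is why event-driven backends scale well for I/O-heavy workloads.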


Solid understanding and hands-on experience with cloud platforms – AWS, Google Cloud Platform, or Oracle Cloud Infrastructure.


Experience using Python SDKs for cloud services to automate provisioning, deployment, or data workflows.


Practical knowledge of Linux and Windows environments, including system-level scripting and debugging.


Automation experience using tools such as Ansible, Chef, or equivalent configuration management systems.


Experience implementing and maintaining CI/CD pipelines with industry-standard tools.


Familiarity with Docker and container orchestration concepts (e.g., Kubernetes is a plus).


Hands-on experience with Terraform or equivalent infrastructure-as-code tools for managing cloud environments.


Excellent problem-solving skills, attention to detail, and a proactive mindset.


Strong communication skills and the ability to collaborate with diverse technical teams.


Preferred Qualifications (Nice to Have):


Experience with other Python frameworks (FastAPI, Django)


Knowledge of container orchestration tools like Kubernetes


Familiarity with monitoring tools like Prometheus, Grafana, or Datadog


Prior experience working in an Agile/Scrum environment


Contributions to open-source projects or technical blogs


Read more
KGiSL Educational Institution

at KGiSL Educational Institution

2 candid answers
KGiSL EDU
Posted by KGiSL EDU
Coimbatore
2 - 5 yrs
₹2L - ₹5L / yr
skill iconAmazon Web Services (AWS)
skill iconDocker
skill iconKubernetes
Terraform
Google Cloud Platform (GCP)
+2 more

We are looking for an experienced Cloud & DevOps Engineer to join our growing team. The ideal candidate should have hands-on expertise in cloud platforms, automation, CI/CD, and container orchestration. You will be responsible for building scalable and secure infrastructure, optimizing deployments, and ensuring system reliability in a fast-paced environment.


Responsibilities

  • Design, deploy, and manage applications on AWS / GCP.
  • Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI/CD.
  • Manage containerized workloads with Docker & Kubernetes.
  • Implement Infrastructure as Code (IaC) using Terraform.
  • Automate infrastructure and operational tasks using Python/Shell scripts.
  • Set up monitoring & logging (Prometheus, Grafana, CloudWatch, ELK).
  • Ensure security, scalability, and high availability of systems.
  • Collaborate with development and QA teams in an Agile/DevOps environment.
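The monitoring and alerting responsibility above often comes down to a sliding-window failure-rate rule, the same shape as a Prometheus alert expression. A hedged Python sketch of that rule (class name, window size, and threshold are all invented for illustration):

```python
from collections import deque

class HealthMonitor:
    """Sliding-window failure-rate alarm over recent health-check results."""

    def __init__(self, window=5, threshold=0.6):
        self.samples = deque(maxlen=window)  # oldest results fall off automatically
        self.threshold = threshold

    def record(self, ok):
        self.samples.append(ok)

    def alerting(self):
        if not self.samples:
            return False
        failure_rate = self.samples.count(False) / len(self.samples)
        return failure_rate >= self.threshold
```

Windowing over recent samples, rather than alerting on any single failed probe, is what keeps one transient blip from paging the on-call engineer.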


Required Skills

  • AWS, GCP (cloud platforms)
  • Terraform (IaC)
  • Docker, Kubernetes (containers & orchestration)
  • Python, Bash (scripting & automation)
  • CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD)
  • Monitoring & Logging (Prometheus, Grafana, CloudWatch)
  • Strong Linux/Unix administration


Preferred Skills (Good to Have)

  • Cloud certifications (AWS, Azure, or GCP).
  • Knowledge of serverless computing (AWS Lambda, Cloud Run).
  • Experience with DevSecOps and cloud security practices.


Read more
Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
skill iconJava
skill iconKubernetes
Google Cloud Platform (GCP)
openshift
skill iconPython
+11 more

Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with knowledge in Systems Management and/or Systems Monitoring Software and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location- Pune/ Chennai


Job Type- Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform 
  • Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform. 
  • Communicate effectively with people having differing levels of technical knowledge.  
  • Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment 
  • Provide customers with complex application support, problem diagnosis and problem resolution 

Required Qualifications:    

  • Minimum of 4 years of experience in a Web Application centric Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Able to understand and comprehend integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM.  
  • Minimum of 4 years of development experience with one of these high-level languages: Python, Java, or Go. 
  • Bachelor’s (B.E, B.Tech) or Master’s degree (M.E, M.Tech. MCA) in computer science, Computer Engineering or equivalent 
  • 2 years of development experience in public cloud environment using Kubernetes etc (Google Cloud and/or AWS) 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a strong technical engineer who can design and code with strong communication skills 
  • Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 
  • Ability to use a variety of debugging tools, simulators and test harnesses is a plus 

  

About Virtana:  Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

 

Read more
Amwhiz

at Amwhiz

2 candid answers
Aruljothi Kuppusamy
Posted by Aruljothi Kuppusamy
Chennai
2 - 5 yrs
₹5L - ₹15L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconMongoDB
MySQL
skill iconPostgreSQL
+17 more

We are seeking a Full Stack Developer with exceptional communication skills to collaborate daily with our international clients in the US and Australia. This role requires not only technical expertise but also the ability to clearly articulate ideas, gather requirements, and maintain strong client relationships. Communication is the top priority.

The ideal candidate is passionate about technology, eager to learn and adapt to new stacks, and capable of delivering scalable, high-quality solutions across the stack.

Key Responsibilities

  • Client Communication: Act as a daily point of contact for clients (US & Australia), ensuring smooth collaboration and requirement gathering.
  • Backend Development:
  • Design and implement REST APIs and GraphQL endpoints.
  • Integrate secure authentication methods including OAuth, Passwordless, and Signature-based authentication.
  • Build scalable backend services with Node.js and serverless frameworks.
  • Frontend Development:
  • Develop responsive, mobile-friendly UIs using React and Tailwind CSS.
  • Ensure cross-browser and cross-device compatibility.
  • Database Management:
  • Work with RDBMS, NoSQL, MongoDB, and DynamoDB.
  • Cloud & DevOps:
  • Deploy applications on AWS / GCP / Azure (knowledge of at least one required).
  • Work with CI/CD pipelines, monitoring, and deployment automation.
  • Quality Assurance:
  • Write and maintain unit tests to ensure high code quality.
  • Participate in code reviews and follow best practices.
  • Continuous Learning:
  • Stay updated on the latest technologies and bring innovative solutions to the team.
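Of the authentication schemes in the responsibilities above, signature-based auth is the least standardized; the usual shape is an HMAC over canonical request fields. A minimal Python sketch of that shape (the field layout is an assumption for illustration, not any specific provider's scheme):

```python
import hashlib
import hmac

def sign(secret, method, path, timestamp, body):
    # Canonical string: fields joined by newlines, then HMAC-SHA256 hex digest.
    message = "\n".join([method, path, timestamp, body]).encode()
    return hmac.new(secret.encode(), message, hashlib.sha256).hexdigest()

def verify(secret, method, path, timestamp, body, signature):
    expected = sign(secret, method, path, timestamp, body)
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, signature)
```

The server recomputes the signature from the shared secret and compares; including a timestamp in the signed string is what lets the server reject replayed requests.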

Must-Have Skills

  • Excellent communication skills (verbal & written) for daily client interaction.
  • 2+ years of experience in full-stack development.
  • Proficiency in Node.js and React.
  • Strong knowledge of REST API and GraphQL development.
  • Experience with OAuth, Passwordless, and Signature-based authentication methods.
  • Database expertise with RDBMS, NoSQL, MongoDB, DynamoDB.
  • Experience with Serverless Framework.
  • Strong frontend skills: React, Tailwind CSS, responsive design.

Nice-to-Have Skills

  • Familiarity with Python for backend or scripting.
  • Cloud experience with AWS, GCP, or Azure.
  • Knowledge of DevOps practices and CI/CD pipelines.
  • Experience with unit testing frameworks and TDD.

Who You Are

  • A confident communicator who can manage client conversations independently.
  • Passionate about learning and experimenting with new technologies.
  • Detail-oriented and committed to delivering high-quality software.
  • A collaborative team player who thrives in dynamic environments.


Read more
Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
skill iconJava
skill iconGo Programming (Golang)
skill iconKubernetes
skill iconPython
Apache Kafka
+13 more

Senior Software Engineer 

Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 


Work Location: Pune/ Chennai


Job Type: Hybrid

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with complex and end-to-end application support, problem diagnosis and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • Minimum of 7 years of progressive experience with back-end development in a Client Server Application development environment focused on Systems Management, Systems Monitoring and Performance Management Software. 
  • Deep experience in public cloud environment using Kubernetes and other distributed managed services like Kafka etc (Google Cloud and/or AWS) 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one is required. 
  • Bachelor’s or Master’s degree in computer science, Computer Engineering or equivalent 
  • Highly effective verbal and written communication skills and ability to lead and participate in multiple projects 
  • Well versed with identifying opportunities and risks in a fast-paced environment and ability to adjust to changing business priorities 
  • Must be results-focused, team-oriented and with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana:  Virtana delivers the industry’s only broadest and deepest Observability Platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

Read more
Edutech Platform

Edutech Platform

Agency job
via Scaling Theory by Keerthana Prabkharan
Pune
2 - 5 yrs
₹25L - ₹30L / yr
skill iconNodeJS (Node.js)
skill iconExpress
skill iconAmazon Web Services (AWS)
Google Cloud Platform (GCP)

Responsibilities:

● Design and build scalable APIs and microservices in Node.js (or equivalent backend frameworks).

● Develop and optimize high-performance systems handling large-scale data and concurrent users.

● Ensure system security, reliability, and fault tolerance.

● Collaborate closely with product managers, designers, and frontend engineers for seamless delivery.

● Write clean, maintainable, and well-documented code with a focus on best practices.

● Contribute to architectural decisions, technology choices, and overall system design.

● Monitor, debug, and continuously improve backend performance.

● Stay updated with modern backend technologies and bring innovation into the product.

Desired Qualifications & Skillset:

● 2+ years of professional backend development experience.

● Proficiency with Node.js, Express.js, or similar frameworks.

● Strong knowledge of web application architecture, databases (SQL/NoSQL), and caching strategies.

● Experience with cloud platforms (AWS/GCP/Azure), CI/CD pipelines, and containerization (Docker/Kubernetes) is a plus.

● Ability to break down complex problems into scalable solutions.

● Strong logical aptitude, quick learning ability, and a proactive mindset.

Read more
Bluecopa

Bluecopa

Agency job
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹15L / yr
DevOps
skill iconPython
skill iconKubernetes
skill iconAmazon Web Services (AWS)
Windows Azure
+2 more

Role: DevOps Engineer


Exp: 4 - 7 Years

CTC: up to 28 LPA


Key Responsibilities

•   Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)

•   Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)

•   Develop and maintain CI/CD pipelines for multiple services and environments

•   Manage infrastructure as code using tools like Terraform and/or Pulumi

•   Automate operations with Python and shell scripting for deployment, monitoring, and maintenance

•   Ensure high availability and performance of production systems and troubleshoot incidents effectively

•   Monitor system metrics and implement observability best practices using tools like Prometheus, Grafana, ELK, etc.

•   Collaborate with development, security, and product teams to align infrastructure with business needs

•   Apply best practices in cloud networking, Linux administration, and configuration management

•   Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)

•   Participate in on-call rotations and incident response activities
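To illustrate the Python automation duties listed above, here is a minimal, framework-agnostic health-check sketch; the endpoint contract (a JSON body with a "status" field) and the alert format are invented for the example, not part of any specific stack:

```python
import json
import urllib.request
from urllib.error import URLError


def check_service(url: str, timeout: float = 5.0) -> dict:
    """Probe a health endpoint and classify the result for alerting."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode())
            healthy = resp.status == 200 and body.get("status") == "ok"
            return {"url": url, "healthy": healthy, "detail": body}
    except (URLError, ValueError) as exc:
        # Network failures and malformed JSON both count as unhealthy.
        return {"url": url, "healthy": False, "detail": str(exc)}


def triage(results: list) -> list:
    """Turn probe results into alert messages for the unhealthy ones."""
    return [
        f"ALERT: {r['url']} unhealthy: {r['detail']}"
        for r in results
        if not r["healthy"]
    ]
```

In practice a script like this would run on a schedule (cron, a Kubernetes CronJob, or a CI job) and push the `triage` output into a paging or chat channel.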

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Moulina Dey
Posted by Moulina Dey
Pune, Mumbai
5 - 8 yrs
Best in industry
skill iconJava
Microservices
Multithreading
Cloud Computing
Google Cloud Platform (GCP)
+1 more

Job Summary 

We are looking for a skilled Java + Cloud Developer to design, develop, and maintain high-performance applications. The ideal candidate will have strong expertise in Core Java, the Spring Framework, multithreading, and database management, along with exposure to cloud platforms and containerization technologies. 

 

 

Job Title: Java + Cloud Developer 

Location: Pune / Mumbai / Bangalore  

Experience Level: 4-8 years 

Employment Type: Full-time 

Key Responsibilities 

  • Design, develop, and maintain scalable Java applications using Core Java, Spring Framework, JDBC, and multithreading concepts. 
  • Implement and integrate database solutions using relational and NoSQL databases.
  • Utilize JDBC for database connectivity and manipulation. 
  • Work with cloud platforms such as Azure or GCP; experience with DevOps practices is an added advantage. 
  • Develop, deploy, and manage applications using containerization technologies (Docker, Kubernetes). 
  • Debug and troubleshoot applications through log analysis and monitoring tools.
  • Collaborate with cross-functional teams to ensure seamless integration between multi-service components.
  • Handle large-scale data processing tasks effectively; hands-on experience with Apache Spark is a plus. 
  • Apply Agile methodologies (Scrum/Kanban) in daily development activities. 
  • Continuously research and adopt new technologies to improve development processes and methodologies. 

Required Skills & Qualifications 

  • Strong proficiency in Core Java (Java 8 or higher) with a deep understanding of threading and concurrent programming. 
  • Solid experience with the Spring Framework and its various modules (Spring Boot, Spring MVC, Spring Data, etc.). 
  • Experience with RDBMS (e.g., MySQL, PostgreSQL, Oracle) and NoSQL databases (e.g., MongoDB, Cassandra). 
  • Basic understanding of cloud platforms (Azure, GCP, or AWS). 
  • Knowledge of DevOps practices (CI/CD, version control, monitoring tools) is a plus. 
  • Familiarity with Docker and Kubernetes for application deployment and scaling. 
  • Strong analytical and problem-solving skills. 
  • Good communication skills and ability to work in a collaborative environment. 

Preferred Qualifications 

  • Hands-on experience with Apache Spark for big data processing. 
  • Exposure to microservices architecture and API integrations. 
  • Familiarity with log monitoring tools (ELK, Splunk, etc.). 


 

Note: Immediate joiners or candidates serving notice until September 2025. 

 

Read more
Wissen Technology

at Wissen Technology

4 recruiters
Amita Soni
Posted by Amita Soni
Pune, Bengaluru (Bangalore), Mumbai
4 - 7 yrs
Best in industry
Google Cloud Platform (GCP)
GKE
Microsoft Windows Azure
Terraform

Job Description


We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.


Work location: Pune/Mumbai/Bangalore


Experience: 4-7 Years 


Joining: Mid of October


You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.


Key Responsibilities:

1. Cloud Infrastructure Design & Management

· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation.

· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.

· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.

· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.

2. Kubernetes & Container Orchestration

· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).

· Work with Helm charts, Istio, and service meshes for microservices deployments.

· Automate scaling, rolling updates, and zero-downtime deployments.


3. Serverless & Compute Services

· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.

· Optimize containerized applications running on Cloud Run for cost efficiency and performance.


4. CI/CD & DevOps Automation

· Design, implement, and manage CI/CD pipelines using Azure DevOps.

· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.

· Integrate security and compliance checks into the DevOps workflow (DevSecOps).


Required Skills & Qualifications:

✔ Experience: 4+ years in Cloud Engineering, with a focus on GCP.

✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).

✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.

✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.

✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.

✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.

✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.


About Wissen Technology

Wissen Technology, established in 2015 and part of the Wissen Group (founded in 2000), is a specialized technology consulting company. We pride ourselves on delivering high-quality solutions for global organizations across Banking & Finance, Telecom, and Healthcare domains.

 

Here’s why Wissen Technology stands out:

 

Global Presence: Offices in US, India, UK, Australia, Mexico, and Canada.

Expert Team: Wissen Group comprises over 4000 highly skilled professionals worldwide, with Wissen Technology contributing 1400 of these experts. Our team includes graduates from prestigious institutions such as Wharton, MIT, IITs, IIMs, and NITs.

Recognitions: Great Place to Work® Certified.

Featured as a Top 20 AI/ML Vendor by CIO Insider (2020).

Impressive Growth: Achieved 400% revenue growth in 5 years without external funding.

Successful Projects: Delivered $650 million worth of projects to 20+ Fortune 500 companies.

 

For more details:

 

Website: www.wissen.com 

Wissen Thought leadership : https://www.wissen.com/articles/ 

 

LinkedIn: Wissen Technology

Read more
Virtana

at Virtana

2 candid answers
Eman Khan
Posted by Eman Khan
Pune
8 - 13 yrs
₹35L - ₹60L / yr
skill iconJava
Spring
skill iconGo Programming (Golang)
skill iconPython
skill iconAmazon Web Services (AWS)
+21 more

Company Overview:

Virtana delivers the industry’s only unified platform for Hybrid Cloud Performance, Capacity and Cost Management. Our platform provides unparalleled, real-time visibility into the performance, utilization, and cost of infrastructure across the hybrid cloud – empowering customers to manage their mission critical applications across physical, virtual, and cloud computing environments. Our SaaS platform allows organizations to easily manage and optimize their spend in the public cloud, assure resources are performing properly through real-time monitoring, and provide the unique ability to plan migrations across the hybrid cloud. 

As we continue to expand our portfolio, we are seeking a highly skilled and hands-on Staff Software Engineer in backend technologies to contribute to the futuristic development of our sophisticated monitoring products.

 

Position Overview:

As a Staff Software Engineer specializing in backend technologies for storage and network monitoring in AI-enabled data centers as well as the cloud, you will play a critical role in designing, developing, and delivering high-quality features within aggressive timelines. Your expertise in microservices-based streaming architectures and strong hands-on development skills are essential to solve complex problems related to large-scale data processing. Proficiency in backend technologies such as Java and Python is crucial.

 

Key Responsibilities:

  • Hands-on Development: Actively participate in the design, development, and delivery of high-quality features, demonstrating strong hands-on expertise in backend technologies like Java, Python, Go or related languages.
  • Microservices and Streaming Architectures: Design and implement microservices-based streaming architectures to efficiently process and analyze large volumes of data, ensuring real-time insights and optimal performance.
  • Agile Development: Collaborate within an agile development environment to deliver features on aggressive schedules, maintaining a high standard of quality in code, design, and architecture.
  • Feature Ownership: Take ownership of features from inception to deployment, ensuring they meet product requirements and align with the overall product vision.
  • Problem Solving and Optimization: Tackle complex technical challenges related to data processing, storage, and real-time monitoring, and optimize backend systems for high throughput and low latency.
  • Code Reviews and Best Practices: Conduct code reviews, provide constructive feedback, and promote best practices to maintain a high-quality and maintainable codebase.
  • Collaboration and Communication: Work closely with cross-functional teams, including UI/UX designers, product managers, and QA engineers, to ensure smooth integration and alignment with product goals.
  • Documentation: Create and maintain technical documentation, including system architecture, design decisions, and API documentation, to facilitate knowledge sharing and onboarding.
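As a rough sketch of the kind of streaming aggregation such monitoring microservices perform (the event shape, metric name, and window size here are invented for illustration, not Virtana's actual design):

```python
from collections import defaultdict
from typing import Dict, Iterable, Iterator, List


def rolling_max(events: Iterable[dict], window: int = 3) -> Iterator[dict]:
    """For each incoming sample, emit the per-device maximum latency
    over the last `window` samples (a toy sliding-window aggregation)."""
    buffers: Dict[str, List[float]] = defaultdict(list)
    for ev in events:
        buf = buffers[ev["device"]]
        buf.append(ev["latency_ms"])
        if len(buf) > window:
            buf.pop(0)  # drop the oldest sample in the window
        yield {"device": ev["device"], "max_latency_ms": max(buf)}
```

A production system would run the same per-key windowed logic over a partitioned stream (e.g. Kafka topics) rather than an in-memory list.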


Qualifications:

  • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
  • 8+ years of hands-on experience in backend development, demonstrating expertise in Java, Python or related technologies.
  • Strong domain knowledge in Storage and Networking, with exposure to monitoring technologies and practices.
  • Experience in handling large data lakes with purpose-built data stores (vector databases, NoSQL, graph, time-series).
  • Practical knowledge of OO design patterns and frameworks such as Spring and Hibernate.
  • Extensive experience with cloud platforms such as AWS, Azure or GCP and development expertise on Kubernetes, Docker, etc.
  • Solid experience designing and delivering features with high quality on aggressive schedules.
  • Proven experience in microservices-based streaming architectures, particularly in handling large amounts of data for storage and networking monitoring.
  • Familiarity with performance optimization techniques and principles for backend systems.
  • Excellent problem-solving and critical-thinking abilities.
  • Outstanding communication and collaboration skills.


Why Join Us:

  • Opportunity to be a key contributor in the development of a leading performance monitoring company specializing in AI-powered Storage and Network monitoring.
  • Collaborative and innovative work environment.
  • Competitive salary and benefits package.
  • Professional growth and development opportunities.
  • Chance to work on cutting-edge technology and products that make a real impact.


If you are a hands-on technologist with a proven track record of designing and delivering high-quality features on aggressive schedules and possess strong expertise in microservices-based streaming architectures, we invite you to apply and help us redefine the future of performance monitoring.

Read more
Virtana

at Virtana

2 candid answers
Bimla Dhirayan
Posted by Bimla Dhirayan
Pune, Chennai
4 - 10 yrs
Best in industry
skill iconJava
skill iconGo Programming (Golang)
skill iconDocker
openshift
network performance
+13 more

Senior Software Engineer 

Challenge convention and work on cutting-edge technology that is transforming the way our customers manage their physical, virtual, and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise-scale solution for providing visibility into performance, health, and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.  

We are seeking an individual with expert knowledge in Systems Management and/or Systems Monitoring Software, Observability platforms and/or Performance Management Software and Solutions with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp and public cloud platforms like Google Cloud and AWS to expand the depth and breadth of Virtana Products. 

 

Role Responsibilities: 

  • The engineer will be primarily responsible for architecture, design and development of software solutions for the Virtana Platform 
  • Partner and work closely with cross functional teams and with other engineers and product managers to architect, design and implement new features and solutions for the Virtana Platform. 
  • Communicate effectively across the departments and R&D organization having differing levels of technical knowledge.  
  • Work closely with UX Design, Quality Assurance, DevOps and Documentation teams. Assist with functional and system test design and deployment automation 
  • Provide customers with end-to-end application support, problem diagnosis, and problem resolution 
  • Learn new technologies quickly and leverage 3rd party libraries and tools as necessary to expedite delivery 

 

Required Qualifications:    

  • 4-10 years of progressive experience with back-end development in a client-server application development environment focused on Systems Management, Systems Monitoring, and Performance Management Software. 
  • Deep experience in public cloud environments (Google Cloud and/or AWS) using Kubernetes and other distributed managed services such as Kafka. 
  • Experience with CI/CD and cloud-based software development and delivery 
  • Deep experience with integrated infrastructure platforms and experience working with one or more data collection technologies like SNMP, REST, OTEL, WMI, WBEM. 
  • Minimum of 6 years of development experience with one or more high-level languages such as Go, Python, or Java; deep experience with at least one of these languages is required. 
  • Bachelor’s or Master’s degree in Computer Science, Computer Engineering, or equivalent 
  • Highly effective verbal and written communication skills; ability to lead and participate in multiple projects 
  • Well versed in identifying opportunities and risks in a fast-paced environment, with the ability to adjust to changing business priorities 
  • Must be results-focused and team-oriented, with a strong work ethic 

 

Desired Qualifications: 

  • Prior experience with other virtualization platforms like OpenShift is a plus 
  • Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus 
  • Demonstrated ability as a lead engineer who can architect, design and code with strong communication and teaming skills 
  • Deep development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus 

  

About Virtana: 

Virtana delivers the industry’s broadest and deepest observability platform, which allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

  

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

  

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

 

Read more
Versatile Commerce LLP

at Versatile Commerce LLP

2 candid answers
Burugupally Shailaja
Posted by Burugupally Shailaja
Hyderabad
4 - 7 yrs
₹5L - ₹9L / yr
skill iconJava
sonarqube
skill iconSpring Boot
RESTful APIs
skill iconDocker
+5 more

Job Description: Java Developer

Position: Java Developer

Experience: 5 to 7 Years

Notice Period: Immediate Joiner


Key Responsibilities

  • Design, develop, and maintain scalable, high-performance Java applications.
  • Work with Core Java and Advanced Java concepts to build reliable backend solutions.
  • Develop and deploy applications using Spring Boot framework.
  • Design and implement RESTful Microservices with best practices in scalability and performance.
  • Collaborate with cross-functional teams in an Agile/Scrum environment.
  • Manage code versions effectively using Git/GitHub.
  • Ensure code quality by integrating and analyzing with SonarQube.
  • Participate in code reviews, sprint planning, and daily stand-ups.
  • Troubleshoot production issues and optimize system performance.

Required Skills

  • Strong proficiency in Core Java (OOPs, Collections, Multithreading, Exception Handling).
  • Hands-on experience in Advanced Java (JDBC, Servlets, JSP, JPA/Hibernate).
  • Proven experience with Spring Boot for application development.
  • Knowledge and experience in Microservices Architecture.
  • Familiarity with REST APIs, JSON, and Web Services.
  • Proficient in Git/GitHub for version control and collaboration.
  • Experience with SonarQube for code quality and security checks.
  • Good understanding of Agile/Scrum methodologies.
  • Strong problem-solving and debugging skills.

Nice-to-Have

  • Experience with CI/CD pipelines (Jenkins, GitHub Actions, or similar).
  • Familiarity with Docker/Kubernetes for containerized deployments.
  • Basic knowledge of cloud platforms (AWS, Azure, GCP).


Read more
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
warehousing concepts
Google Cloud Platform (GCP)
+15 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.


Responsibilities:

-Design, build, and maintain scalable data pipelines for structured and unstructured data sources.

-Develop ETL processes to collect, clean, and transform data from internal and external systems.

-Support integration of data into dashboards, analytics tools, and reporting systems.

-Collaborate with data analysts and software developers to improve data accessibility and performance.

-Document workflows and maintain data infrastructure best practices.

-Assist in identifying opportunities to automate repetitive data tasks.
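A minimal, self-contained sketch of the extract–transform–load flow described above (the schema and sample data are invented; SQLite stands in for a real warehouse to keep the example runnable):

```python
import csv
import io
import sqlite3

# Invented sample data standing in for an external feed.
RAW = """id,city,price
1, Shanghai ,1200000
2,Chicago,
3,Boston,950000
"""


def extract(text: str) -> list:
    """Parse the raw CSV feed into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))


def transform(rows: list) -> list:
    """Trim whitespace and drop rows with a missing price."""
    cleaned = []
    for r in rows:
        if r["price"].strip():
            cleaned.append((int(r["id"]), r["city"].strip(), int(r["price"])))
    return cleaned


def load(rows: list) -> sqlite3.Connection:
    """Load cleaned rows into a queryable store (SQLite here for brevity)."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE listings (id INTEGER, city TEXT, price INTEGER)")
    con.executemany("INSERT INTO listings VALUES (?, ?, ?)", rows)
    return con


con = load(transform(extract(RAW)))
```

The same three stages scale up naturally: the extract step reads from APIs or object storage, the transform step runs in a framework like Spark or dbt, and the load step targets a warehouse such as BigQuery.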


Please send your resume to talent@springer. capital

Read more
Bluecopa

Bluecopa

Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹15L / yr
DevOps
skill iconPython
skill iconKubernetes
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
+1 more

Salary (Lacs): Up to 22 LPA


Required Qualifications

•   4–7 years of total experience, with a minimum of 4 years in a full-time DevOps role

•   Hands-on experience with major cloud platforms (GCP, AWS, Azure, OCI), more than one will be a plus

•   Proficient in Kubernetes administration and container technologies (Docker, containerd)

•   Strong Linux fundamentals

•   Scripting skills in Python and shell scripting

•   Knowledge of infrastructure as code with hands-on experience in Terraform and/or Pulumi (mandatory)

•   Experience in maintaining and troubleshooting production environments

•   Solid understanding of CI/CD concepts with hands-on experience in tools like Jenkins, GitLab CI, GitHub Actions, ArgoCD, Devtron, GCP Cloud Build, or Bitbucket Pipelines


If interested, kindly share your updated resume on 82008 31681

Read more
Remote only
3 - 6 yrs
₹6L - ₹12L / yr
skill iconPython
FastAPI
skill iconNodeJS (Node.js)
skill iconExpress
Cloudflare Workers
+3 more

Job Title : Backend / API Developer - Python (FastAPI) / Node.js (Express)

Location : Remote

Experience : 4+ Years


Job Description :

We are looking for a skilled Backend / API Developer - Python (FastAPI) / Node.js (Express) with strong expertise in building secure, scalable, and reliable backend systems. The ideal candidate should be proficient in Python (FastAPI preferred) or Node.js (Express) and have hands-on experience deploying applications to serverless environments.


Key Responsibilities :

  • Design, develop, and maintain RESTful APIs and backend services.
  • Deploy and manage serverless applications on Cloudflare Workers, Firebase Functions, and Google Cloud Functions.
  • Work with Google Cloud services including Cloud Run, Cloud Functions, Secret Manager, and IAM roles.
  • Implement secure API development practices (HTTPS, input validation, and secrets management).
  • Ensure performance optimization, scalability, and reliability of backend systems.
  • Collaborate with front-end developers, DevOps, and product teams to deliver high-quality solutions.
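For illustration, the secure-API practices above (input validation, secrets management) can be sketched framework-agnostically; in FastAPI these would typically be Pydantic models and settings objects. The field names and the `API_KEY` variable are hypothetical:

```python
import os
import re

# Simplified pattern for the sketch; real services usually delegate
# validation to a schema library (e.g. Pydantic in FastAPI).
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")


def get_api_key() -> str:
    """Read secrets from the environment (or a secret manager),
    never from source code. API_KEY is a hypothetical variable name."""
    key = os.environ.get("API_KEY")
    if not key:
        raise RuntimeError("API_KEY is not configured")
    return key


def validate_signup(payload: dict) -> dict:
    """Reject malformed input before it reaches business logic."""
    email = str(payload.get("email", ""))
    if not EMAIL_RE.match(email):
        raise ValueError("invalid email")
    age = payload.get("age")
    if not isinstance(age, int) or not 0 < age < 150:
        raise ValueError("invalid age")
    return {"email": email, "age": age}
```

On GCP, `get_api_key` would typically be backed by Secret Manager with the value injected into the environment at deploy time rather than stored in the repository.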

Mandatory Skills :

Python (FastAPI) / Node.js (Express), Serverless Deployment (Cloudflare Workers, Firebase, GCP Functions), Google Cloud Services (Cloud Run, IAM, Secret Manager), API Security (HTTPS, Input Validation, Secrets Management).


Required Skills :

  • Proficiency in Python (FastAPI preferred) or Node.js (Express).
  • Hands-on experience with serverless platforms (Cloudflare Workers, Firebase Functions, GCP Functions).
  • Familiarity with Google Cloud services (Cloud Run, IAM, Secret Manager, Cloud Functions).
  • Strong understanding of secure API development (HTTPS, input validation, API keys & secret management).
  • Knowledge of API design principles and best practices.
  • Ability to work with CI/CD pipelines and modern development workflows.


Preferred Qualifications :

  • Strong knowledge of microservices architecture.
  • Experience with CI/CD pipelines.
  • Knowledge of containerization (Docker, Kubernetes).
  • Familiarity with monitoring and logging tools.
Read more
Remote only
0 - 1 yrs
₹5000 - ₹5500 / mo
Cyber Security
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)

Description

Job Summary:

Join Springer Capital as a Cybersecurity & Cloud Intern to help architect, secure, and automate our cloud-based backend systems powering next-generation investment platforms.


Job Description:


Founded in 2015, Springer Capital is a technology-forward asset management and investment firm. We leverage cutting-edge digital solutions to uncover high-potential opportunities, transforming traditional finance through innovation, agility, and a relentless commitment to security and scalability.


Job Highlights

Work hands-on with AWS, Azure, or GCP to design and deploy secure, scalable backend infrastructure.


Collaborate with DevOps and engineering teams to embed security best practices in CI/CD pipelines.


Gain experience in real-world incident response, vulnerability assessment, and automated monitoring.


Drive meaningful impact on our security posture and cloud strategy from Day 1.


Enjoy a fully remote, flexible internship with global teammates.


Responsibilities

Assist in architecting and provisioning cloud resources (VMs, containers, serverless functions) with strict security controls.


Implement identity and access management, network segmentation, encryption, and logging best practices.


Develop and maintain automation scripts for security monitoring, patch management, and incident alerts.


Support vulnerability scanning, penetration testing, and remediation tracking.


Document cloud architectures, security configurations, and incident response procedures.


Partner with backend developers to ensure secure API gateways, databases, and storage services.
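A toy version of the security-monitoring automation described above might scan authentication logs for repeated failures; the log format and threshold here are illustrative only:

```python
import re
from collections import Counter

# Illustrative sshd-style log pattern; adapt to the real log source.
FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")


def brute_force_alerts(log_lines: list, threshold: int = 3) -> list:
    """Flag source IPs that accumulate `threshold` or more failed logins."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return [
        f"ALERT: {ip} had {n} failed logins"
        for ip, n in hits.items()
        if n >= threshold
    ]
```

In a cloud deployment the same logic would consume a managed log stream (e.g. CloudWatch Logs or Cloud Logging) and forward alerts to an incident channel.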


What We Offer

Mentorship: Learn directly from senior security engineers and cloud architects.


Training & Certifications: Access to online courses and support for AWS/Azure security certifications.


Impactful Projects: Contribute to critical security and cloud initiatives that safeguard our digital assets.


Remote-First Culture: Flexible hours and the freedom to collaborate from anywhere.


Career Growth: Build a strong foundation for a future in cybersecurity, cloud engineering, or DevSecOps.


Requirements

Pursuing or recently graduated in Computer Science, Cybersecurity, Information Technology, or a related discipline.


Familiarity with at least one major cloud platform (AWS, Azure, or GCP).


Understanding of core security principles: IAM, network security, encryption, and logging.


Scripting experience in Python, PowerShell, or Bash for automation tasks.


Strong analytical, problem-solving, and communication skills.


A proactive learner mindset and passion for securing cloud environments.


About Springer Capital

Springer Capital blends financial expertise with digital innovation to redefine asset management. Our mission is to drive exceptional value by implementing robust, technology-driven strategies that transform investment landscapes. We champion a culture of creativity, collaboration, and continuous improvement.


Location: Global (Remote)

Job Type: Internship

Pay: $50 USD per month

Work Location: Remote

Read more
Cymetrix Software

at Cymetrix Software

2 candid answers
Netra Shettigar
Posted by Netra Shettigar
Remote only
4 - 8 yrs
₹12L - ₹20L / yr
Data modeling
Dimensional modeling
Google Cloud Platform (GCP)

Advanced SQL and data modeling skills (designing dimensional layers, 3NF structures, denormalized views, and semantic layers); expertise in GCP services



Role & Responsibilities:

● Design and implement robust semantic layers for data systems on Google Cloud Platform (GCP)

● Develop and maintain complex data models, including dimensional models, 3NF structures, and denormalized views

● Write and optimize advanced SQL queries for data extraction, transformation, and analysis

● Utilize GCP services to create scalable and efficient data architectures

● Collaborate with cross-functional teams to translate business requirements (specified in mapping sheets or legacy DataStage jobs) into effective data models

● Implement and maintain data warehouses and data lakes on GCP

● Design and optimize ETL/ELT processes for large-scale data integration

● Ensure data quality, consistency, and integrity across all data models and semantic layers

● Develop and maintain documentation for data models, semantic layers, and data flows

● Participate in code reviews and implement best practices for data modeling and database design

● Optimize database performance and query execution on GCP

● Provide technical guidance and mentorship to junior team members

● Stay updated with the latest trends and advancements in data modeling, GCP services, and big data technologies

● Collaborate with data scientists and analysts to enable efficient data access and analysis

● Implement data governance and security measures within the semantic layer and data model
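As a toy illustration of the dimensional modeling and denormalized views described above (table and column names are invented; on GCP this would typically live in BigQuery, and SQLite is used here only to keep the sketch runnable):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Dimension table: descriptive attributes.
CREATE TABLE dim_product (
    product_id INTEGER PRIMARY KEY,
    name       TEXT,
    category   TEXT
);
-- Fact table: measures keyed to the dimension.
CREATE TABLE fact_sales (
    sale_id    INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    amount     REAL
);
-- Denormalized view: the flat shape a semantic layer exposes to BI tools.
CREATE VIEW v_sales AS
SELECT f.sale_id, d.name, d.category, f.amount
FROM fact_sales f
JOIN dim_product d USING (product_id);
""")
con.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 1, 99.5), (11, 2, 20.0), (12, 1, 0.5)])

# Analysts query the view, not the underlying star schema.
revenue = con.execute(
    "SELECT category, SUM(amount) FROM v_sales GROUP BY category"
).fetchall()
```

The design choice the view encodes is the one in the posting: keep the star schema normalized for integrity and loading, and expose a flat, pre-joined shape to downstream consumers.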

Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹25L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Microsoft Windows Azure
Terraform
skill iconPython
+2 more

Job Type : Contract


Location : Bangalore


Experience : 5+yrs


The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.


Required Skills:


  • 5+ years of experience in software and/or cloud platform engineering, particularly focused on GCP environment.
  • Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
  • Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
  • Experience in configuring public-cloud-native security tooling and capabilities, with a focus on Google Cloud Organization Policies/constraints, VPC Service Controls, IAM policies, and GCP APIs.
  • Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
  • Experience in Policy-as-code (Rego) and OPA platform.
  • Experience solutioning and configuring event-driven serverless-based security controls in Azure, including but not limited to technologies such as Azure Function, Automation Runbook, AWS Lambda and Google Cloud Functions.
  • Deep understanding of DevOps processes and workflows.
  • Working knowledge of the Secure SDLC process
  • Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
  • Familiarity with Logging and data pipeline concepts and architectures in cloud.
  • Strong in scripting languages such as PowerShell or Python or Bash or Go.
  • Knowledge of Agile best practices and methodologies
  • Experience creating technical architecture documentation.
  • Excellent communication, written, and interpersonal skills.
  • Practical experience designing and configuring CI/CD pipelines, including GitHub Actions and Jenkins.
  • Experience in ITSM.
  • Ability to articulate complex technical concepts to non-technical stakeholders.
  • Experience with risk control frameworks and engagements with risk and regulatory functions
  • Experience in the financial industry would be a plus.
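The policy-as-code bullet above can be sketched in plain Python; a real implementation would express the rule in Rego and evaluate it with OPA, and the resource fields used here (`public`, `encrypted`) are hypothetical, not a real cloud schema:

```python
# Policy-as-code sketch: each check returns a list of violations,
# mirroring how an OPA "deny" rule accumulates messages.
def evaluate_bucket_policy(resource: dict) -> list[str]:
    """Return a list of policy violations for a storage bucket."""
    violations = []
    if resource.get("public", False):
        violations.append("bucket must not be publicly accessible")
    if not resource.get("encrypted", True):
        violations.append("bucket must use encryption at rest")
    return violations

good = {"name": "logs", "public": False, "encrypted": True}
bad = {"name": "tmp", "public": True, "encrypted": False}
print(evaluate_bucket_policy(good))  # []
print(evaluate_bucket_policy(bad))   # two violations
```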


Read more
CLOUDSUFI
Noida
6 - 12 yrs
₹22L - ₹34L / yr
Natural Language Processing (NLP)
Large Language Models (LLM) tuning
Generative AI
skill iconPython
CI/CD
+4 more

About Us


CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and drive better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services, and we partner with our customers to monetize their data and make enterprise data dance.


Our Values


We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement


CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.


Role Overview:


As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.


Key Responsibilities:


  • Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
  • Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability.
  • Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
  • Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
  • End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
  • Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
  • Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
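One responsibility above mentions building business rule engines; a minimal, hedged sketch of that pattern (rules, field names, and thresholds all invented) looks like:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]   # fires when predicate is True
    action: str

# Illustrative rules only; a production engine would load these from config.
RULES = [
    Rule("high_value", lambda r: r.get("amount", 0) > 10_000, "route_to_review"),
    Rule("missing_id", lambda r: not r.get("customer_id"), "reject"),
]

def apply_rules(record: dict) -> list[str]:
    """Return the actions triggered by a record, in rule order."""
    return [rule.action for rule in RULES if rule.predicate(record)]

print(apply_rules({"customer_id": "C1", "amount": 50_000}))  # ['route_to_review']
print(apply_rules({"amount": 100}))                          # ['reject']
```

Keeping predicates as plain callables makes rules unit-testable in isolation before an LLM-based classifier is layered on top.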

Required Skills and Qualifications


  • Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
  • 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
  • Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
  • Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
  • Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
  • Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
  • Experience with containerization technologies, specifically Docker.
  • Solid understanding of software engineering principles and experience building APIs and microservices.


Preferred Qualifications


  • A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
  • Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
  • Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
  • Proven ability to lead technical teams and mentor other engineers.
  • Experience developing custom tools or packages for data science workflows.


Read more
Umanist India
Chennai
7 - 8 yrs
₹21L - ₹22L / yr
Google Cloud Platform (GCP)
skill iconMachine Learning (ML)
skill iconPython

Job Title: Software Engineer Consultant/Expert 34192 

Location: Chennai

Work Type: Onsite

Notice Period: Immediate joiners only, or candidates serving notice of up to 30 days.

 

Position Description:

  • Candidate with strong Python experience.
  • Full-stack development on GCP with end-to-end deployment and MLOps; hands-on in front end, back end, and MLOps.
  • This is a Tech Anchor role.

Experience Required:

  • 7 Plus Years
Read more
Springer Capital
Andrew Rose
Posted by Andrew Rose
Remote only
0 - 1 yrs
₹5000 - ₹7000 / mo
Attention to detail
Troubleshooting
Data modeling
Warehousing concepts
Google Cloud Platform (GCP)
+15 more

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process. 

 

Responsibilities: 

▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources 

▪ Develop ETL processes to collect, clean, and transform data from internal and external systems 

▪ Support integration of data into dashboards, analytics tools, and reporting systems 

▪ Collaborate with data analysts and software developers to improve data accessibility and performance 

▪ Document workflows and maintain data infrastructure best practices 

▪ Assist in identifying opportunities to automate repetitive data tasks 
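The pipeline responsibilities above can be illustrated with a small extract-clean-transform step in pure Python; the record shape and field names are hypothetical stand-ins for a real upstream source:

```python
import csv, io

RAW = """date,amount,currency
2024-01-05, 1200 ,usd
2024-01-06,,usd
2024-01-07, 80 ,USD
"""

def clean(rows):
    """Drop rows with missing amounts; normalize types and casing."""
    for row in rows:
        amount = row["amount"].strip()
        if not amount:
            continue  # skip/quarantine incomplete records
        yield {
            "date": row["date"].strip(),
            "amount": float(amount),
            "currency": row["currency"].strip().upper(),
        }

cleaned = list(clean(csv.DictReader(io.StringIO(RAW))))
print(cleaned[0])   # {'date': '2024-01-05', 'amount': 1200.0, 'currency': 'USD'}
print(len(cleaned)) # 2 (the empty-amount row was dropped)
```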

Read more
CLOUDSUFI

at CLOUDSUFI

3 recruiters
Ayushi Dwivedi
Posted by Ayushi Dwivedi
Noida
4 - 8 yrs
₹15L - ₹22L / yr
Google Cloud Platform (GCP)
skill iconKubernetes
Terraform
skill iconDocker
helm

About Us

CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services, and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.


Our Values

We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.


Equal Opportunity Statement

CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/


What are we looking for

We are seeking a highly skilled and experienced Senior DevOps Engineer to join our team. The ideal candidate will have extensive expertise in modern DevOps tools and practices, particularly in managing CI/CD pipelines, infrastructure as code, and cloud-native environments. This role involves designing, implementing, and maintaining robust, scalable, and efficient infrastructure and deployment pipelines to support our development and operations teams.


Required Skills and Experience:

- 7+ years of experience in DevOps, infrastructure automation, or related fields.

- Advanced expertise in Terraform for infrastructure as code.

- Solid experience with Helm for managing Kubernetes applications.

- Proficient with GitHub for version control, repository management, and workflows.

- Extensive experience with Kubernetes for container orchestration and management.

- In-depth understanding of Google Cloud Platform (GCP) services and architecture.

- Strong scripting and automation skills (e.g., Python, Bash, or equivalent).

- Excellent problem-solving skills and attention to detail.

- Strong communication and collaboration abilities in agile development environments.


Preferred Qualifications:

- Experience with other CI/CD tools (e.g., Jenkins, GitLab CI/CD).

- Knowledge of additional cloud platforms (e.g., AWS, Azure).

- Certification in Kubernetes (CKA/CKAD) or Google Cloud (GCP Professional DevOps Engineer).


Behavioral Competencies

• Must have worked with US/Europe based clients in onsite/offshore delivery models.

• Should have very good verbal and written communication, technical articulation, listening and presentation skills.

• Should have proven analytical and problem solving skills.

• Should have a collaborative mindset for cross-functional teamwork.

• Passion for solving complex search problems

• Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills.

• Should be a quick learner, self-starter, go-getter, and team player.

• Should have experience working under stringent deadlines in a matrix organization structure.

Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹10L - ₹20L / yr
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Azure
skill iconJavascript
skill iconReact.js
+5 more

Location: Hybrid/Remote

Openings: 2

Experience: 5–12 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities

Architect & Design:

  • Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
  • Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
  • Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
  • Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.

Development & Debugging:

  • Write clean, maintainable, and efficient frontend code.
  • Debug and troubleshoot code to ensure robust, high-performing applications.
  • Develop reusable frontend libraries that can be leveraged across multiple projects.

AI Awareness (Preferred):

  • Understand AI/ML fundamentals and how they can enhance frontend applications.
  • Collaborate with teams integrating AI-based features into chat applications.

Collaboration & Reporting:

  • Work closely with cross-functional teams to align on architecture and deliverables.
  • Regularly report progress, identify risks, and propose mitigation strategies.

Quality Assurance:

  • Implement unit tests and end-to-end tests to ensure code quality.
  • Participate in code reviews and enforce best practices.


Required Skills 

  • 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
  • Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
  • Proficiency with Modern frameworks like React, Angular, or Node.js
  • Backend familiarity with Java, Spring Boot (or similar technologies).
  • Experience developing real-world, at-scale products.
  • General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
  • Strong problem-solving, debugging, and performance optimization skills.
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
5 - 10 yrs
₹7L - ₹20L / yr
Fullstack Developer
skill iconJavascript
skill iconHTML/CSS
skill iconReact.js
skill iconSpring Boot
+9 more

Location: Hybrid/ Remote

Openings: 2

Experience: 5+ Years

Qualification: Bachelor’s or Master’s in Computer Science or related field


Job Responsibilities


Problem Solving & Optimization:

  • Analyze and resolve complex technical and application issues.
  • Optimize application performance, scalability, and reliability.

Design & Develop:

  • Build, test, and deploy scalable full-stack applications with high performance and security.
  • Develop clean, reusable, and maintainable code for both frontend and backend.

AI Integration (Preferred):

  • Collaborate with the team to integrate AI/ML models into applications where applicable.
  • Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.

Technical Leadership & Mentorship:

  • Provide guidance, mentorship, and code reviews for junior developers.
  • Foster a culture of technical excellence and knowledge sharing.

Agile & Delivery Management:

  • Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
  • Define and scope backlog items, track progress, and ensure timely delivery.

Collaboration:

  • Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
  • Coordinate with geographically distributed teams.

Quality Assurance & Security:

  • Conduct peer reviews of designs and code to ensure best practices.
  • Implement security measures and ensure compliance with industry standards.

Innovation & Continuous Improvement:

  • Identify areas for improvement in the software development lifecycle.
  • Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.

Required Skills

  • Strong proficiency in JavaScript, HTML5, CSS3
  • Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
  • Backend development experience with Java, Spring Boot (Node.js is a plus)
  • Knowledge of REST APIs, microservices, and scalable architectures
  • Familiarity with cloud platforms (AWS, Azure, or GCP)
  • Experience with Agile/Scrum methodologies and JIRA for project tracking
  • Proficiency in Git and version control best practices
  • Strong debugging, performance optimization, and problem-solving skills
  • Ability to analyze customer requirements and translate them into technical specifications
Read more
Lalitech

at Lalitech

1 recruiter
Govind Varshney
Posted by Govind Varshney
Remote, Bengaluru (Bangalore), Noida
0 - 2 yrs
₹3.5L - ₹4.5L / yr
Fullstack Developer
skill iconJavascript
skill iconReact.js
skill iconNodeJS (Node.js)
RESTful APIs
+6 more

Location: Hybrid/ Remote

Openings: 5

Experience: 0–2 Years

Qualification: Bachelor’s or Master’s in Computer Science or a related technical field


Key Responsibilities:

Backend Development & APIs

  • Build microservices that provide REST APIs to power web frontends.
  • Design clean, reusable, and scalable backend code meeting enterprise security standards.
  • Conceptualize and implement optimized data storage solutions for high-performance systems.

Deployment & Cloud

  • Deploy microservices using a common deployment framework on AWS and GCP.
  • Inspect and optimize server code for speed, security, and scalability.

Frontend Integration

  • Work on modern front-end frameworks to ensure seamless integration with back-end services.
  • Develop reusable libraries for both frontend and backend codebases.


AI Awareness (Preferred)

  • Understand how AI/ML or Generative AI can enhance enterprise software workflows.
  • Collaborate with AI specialists to integrate AI-driven features where applicable.

Quality & Collaboration

  • Participate in code reviews to maintain high code quality.
  • Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.


Required Skills:

  • Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
  • Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
  • Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
  • Ability to design and implement RESTful APIs and understand their impact on client-side applications
  • Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
  • Experience working with Agile and Scrum methodologies
  • Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
Read more
Remote only
3 - 6 yrs
$25K - $35K / yr
Google Cloud Platform (GCP)
Natural Language Processing (NLP)
Chatbot
skill iconPython
skill iconNodeJS (Node.js)
+2 more

Ontrac Solutions is a leading technology consulting firm specializing in cutting-edge solutions that drive business transformation. We partner with organizations to modernize their infrastructure, streamline processes, and deliver tangible results.


Our client is actively seeking a Conversational AI Engineer with deep hands-on experience in Google Contact Center AI (CCAI) to join a high-impact digital transformation project via a GCP Premier Partner. As part of a staff augmentation model, you will be embedded within the client’s technology or contact center innovation team, delivering scalable virtual agent solutions that improve customer experience, agent productivity, and call deflection.


Key Responsibilities:

  • Lead the design and buildout of Dialogflow CX/ES agents across chat and voice channels.
  • Integrate virtual agents with client systems and platforms (e.g., Genesys, Twilio, NICE CXone, Salesforce).
  • Develop fulfillment logic using Google Cloud Functions, Cloud Run, and backend integrations (via REST APIs and webhooks).
  • Collaborate with stakeholders to define intent models, entity schemas, and user flows.
  • Implement Agent Assist and CCAI Insights to augment live agent productivity.
  • Leverage Google Cloud tools including Pub/Sub, Logging, and BigQuery to support and monitor the solution.
  • Support tuning, regression testing, and enhancement of NLP performance using live utterance data.
  • Ensure adherence to enterprise security and compliance requirements.
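A hedged, framework-free sketch of the webhook fulfillment logic described above; the payload shapes follow Dialogflow CX's documented webhook format, while the tag name and business logic are invented:

```python
def handle_webhook(request: dict) -> dict:
    """Handle a Dialogflow CX WebhookRequest and build a WebhookResponse."""
    tag = request.get("fulfillmentInfo", {}).get("tag", "")
    params = request.get("sessionInfo", {}).get("parameters", {})

    if tag == "order-status":  # hypothetical tag configured on the CX flow
        order_id = params.get("order_id", "unknown")
        reply = f"Order {order_id} is out for delivery."
    else:
        reply = "Sorry, I can't help with that yet."

    # Dialogflow CX expects fulfillment text nested like this:
    return {"fulfillmentResponse": {"messages": [{"text": {"text": [reply]}}]}}

req = {"fulfillmentInfo": {"tag": "order-status"},
       "sessionInfo": {"parameters": {"order_id": "A123"}}}
print(handle_webhook(req)["fulfillmentResponse"]["messages"][0]["text"]["text"][0])
```

In production this function body would sit behind a Cloud Function or Cloud Run endpoint registered as the flow's webhook.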


Required Skills & Qualifications:

  • 3+ years developing conversational AI experiences, including at least 1–2 years with Google Dialogflow CX or ES.
  • Solid experience across GCP services (Functions, Cloud Run, IAM, BigQuery, etc.).
  • Strong skills in Python or Node.js for webhook fulfillment and backend logic.
  • Knowledge of NLU/NLP modeling, training, and performance tuning.
  • Prior experience working in client-facing or embedded roles via consulting or staff augmentation.
  • Ability to communicate effectively with technical and business stakeholders.


Nice to Have:

  • Hands-on experience with Agent Assist, CCAI Insights, or third-party CCaaS tools (Genesys, Twilio Flex, NICE CXone).
  • Familiarity with Vertex AI, AutoML, or other GCP ML services.
  • Experience in regulated industries (healthcare, finance, insurance, etc.).
  • Google Cloud certification in Cloud Developer or CCAI Engineer.




Read more
Deqode

at Deqode

1 recruiter
Shubham Das
Posted by Shubham Das
Mumbai, Chennai, Gurugram
6 - 9 yrs
₹12L - ₹17L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
DevOps
helm
+3 more

We are looking for a highly skilled DevOps/Cloud Engineer with over 6 years of experience in infrastructure automation, cloud platforms, networking, and security. If you are passionate about designing scalable systems and love solving complex cloud and DevOps challenges—this opportunity is for you.

Key Responsibilities

  • Design, deploy, and manage cloud-native infrastructure using Kubernetes (K8s), Helm, Terraform, and Ansible
  • Automate provisioning and orchestration workflows for cloud and hybrid environments
  • Manage and optimize deployments on AWS, Azure, and GCP for high availability and cost efficiency
  • Troubleshoot and implement advanced network architectures including VPNs, firewalls, load balancers, and routing protocols
  • Implement and enforce security best practices: IAM, encryption, compliance, and vulnerability management
  • Collaborate with development and operations teams to improve CI/CD workflows and system observability

Required Skills & Qualifications

  • 6+ years of experience in DevOps, Infrastructure as Code (IaC), and cloud-native systems
  • Expertise in Helm, Terraform, and Kubernetes
  • Strong hands-on experience with AWS and Azure
  • Solid understanding of networking, firewall configurations, and security protocols
  • Experience with CI/CD tools like Jenkins, GitHub Actions, or similar
  • Strong problem-solving skills and a performance-first mindset

Why Join Us?

  • Work on cutting-edge cloud infrastructure across diverse industries
  • Be part of a collaborative, forward-thinking team
  • Flexible hybrid work model – work from anywhere while staying connected
  • Opportunity to take ownership and lead critical DevOps initiatives


Read more
NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore)
8 - 12 yrs
₹15L - ₹22L / yr
Data engineering
Google Cloud Platform (GCP)
Data Transformation Tool (DBT)
Google Dataform
BigQuery
+6 more

Job Title : Data Engineer – GCP + Spark + DBT

Location : Bengaluru (On-site at Client Location | 3 Days WFO)

Experience : 8 to 12 Years

Level : Associate Architect

Type : Full-time


Job Overview :

We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.


Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.


Key Responsibilities :

  • Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark.
  • Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
  • Implement and maintain CI/CD for data engineering projects with Git-based version control.
  • Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
  • Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
  • Participate in Agile sprints, backlog grooming, and Jira-based project tracking.
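Much of the incremental ELT work above (e.g., a DBT incremental model compiling to a BigQuery MERGE) boils down to an upsert keyed on a unique id; a pure-Python sketch with invented keys and timestamps:

```python
def incremental_merge(target: dict, batch: list[dict]) -> dict:
    """Merge a batch into the target keyed by id, keeping the latest
    version of each row by updated_at (MERGE/upsert semantics)."""
    for row in batch:
        current = target.get(row["id"])
        if current is None or row["updated_at"] > current["updated_at"]:
            target[row["id"]] = row
    return target

target = {1: {"id": 1, "value": "a", "updated_at": 1}}
batch = [
    {"id": 1, "value": "a2", "updated_at": 2},     # newer: overwrites
    {"id": 2, "value": "b", "updated_at": 1},      # new key: inserted
    {"id": 1, "value": "stale", "updated_at": 0},  # older: ignored
]
result = incremental_merge(target, batch)
print(result[1]["value"], len(result))  # a2 2
```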

Must-Have Skills :

  • Strong experience with DBT, Google Dataform, and BigQuery
  • Hands-on expertise with PySpark/Spark SQL
  • Proficient in GCP for data engineering workflows
  • Solid knowledge of SQL optimization, Git, and CI/CD pipelines
  • Agile team experience and strong problem-solving abilities

Nice-to-Have Skills :

  • Familiarity with Databricks, Delta Lake, or Kafka
  • Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
  • Knowledge of MDM patterns, Terraform, or IaC is a plus
Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Pune
6 - 10 yrs
₹12L - ₹23L / yr
skill iconMachine Learning (ML)
skill iconDeep Learning
Natural Language Processing (NLP)
Computer Vision
Data engineering
+8 more

Job Title : AI Architect

Location : Pune (On-site | 3 Days WFO)

Experience : 6+ Years

Shift : US or flexible shifts


Job Summary :

We are looking for an experienced AI Architect to design and deploy AI/ML solutions that align with business goals.

The role involves leading end-to-end architecture, model development, deployment, and integration using modern AI/ML tools and cloud platforms (AWS/Azure/GCP).


Key Responsibilities :

  • Define AI strategy and identify business use cases
  • Design scalable AI/ML architectures
  • Collaborate on data preparation, model development & deployment
  • Ensure data quality, governance, and ethical AI practices
  • Integrate AI into existing systems and monitor performance

Must-Have Skills :

  • Machine Learning, Deep Learning, NLP, Computer Vision
  • Data Engineering, Model Deployment (CI/CD, MLOps)
  • Python Programming, Cloud (AWS/Azure/GCP)
  • Distributed Systems, Data Governance
  • Strong communication & stakeholder collaboration

Good to Have :

  • AI certifications (Azure/GCP/AWS)
  • Experience in big data and analytics
Read more
Blitzy

at Blitzy

2 candid answers
1 product
Eman Khan
Posted by Eman Khan
Pune
6 - 10 yrs
₹40L - ₹70L / yr
skill iconPython
skill iconDjango
skill iconFlask
FastAPI
Google Cloud Platform (GCP)
+1 more

Requirements

  • 7+ years of experience with Python
  • Strong expertise in Python frameworks (Django, Flask, or FastAPI)
  • Experience with GCP, Terraform, and Kubernetes
  • Deep understanding of REST API development and GraphQL
  • Strong knowledge of SQL and NoSQL databases
  • Experience with microservices architecture
  • Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
  • Experience with container orchestration using Kubernetes
  • Understanding of cloud architecture and serverless computing
  • Experience with monitoring and logging solutions
  • Strong background in writing unit and integration tests
  • Familiarity with AI/ML concepts and integration points
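As a loose illustration of the REST API development listed above, here is a dependency-free routing sketch; a real service would use FastAPI or Flask, and the routes and payloads here are invented:

```python
ROUTES = {}

def route(method: str, path: str):
    """Decorator that registers a handler for (method, path)."""
    def register(handler):
        ROUTES[(method, path)] = handler
        return handler
    return register

@route("GET", "/health")
def health():
    return 200, {"status": "ok"}

@route("POST", "/predict")
def predict(payload=None):
    # A real service would call the ML model here.
    return 200, {"prediction": len((payload or {}).get("text", ""))}

def dispatch(method, path, payload=None):
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, {"error": "not found"}
    return handler(payload) if payload is not None else handler()

print(dispatch("GET", "/health"))  # (200, {'status': 'ok'})
```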


Responsibilities

  • Design and develop scalable backend services for our AI platform
  • Architect and implement complex systems with high reliability
  • Build and maintain APIs for internal and external consumption
  • Work closely with AI engineers to integrate ML functionality
  • Optimize application performance and resource utilization
  • Make architectural decisions that balance immediate needs with long-term scalability
  • Mentor junior engineers and promote best practices
  • Contribute to the evolution of our technical standards and processes
Read more
AI driven consulting firm

AI driven consulting firm

Agency job
via PLEXO HR Solutions by Upashna Kumari
Pune
1 - 3 yrs
₹3L - ₹5L / yr
Google Cloud Platform (GCP)
CI/CD
skill iconKubernetes
Terraform
Linux/Unix

What You’ll Do:

We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.


Responsibilities:

● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)

● Build CI/CD pipelines using Jenkins and integrate them with Git workflows

● Design and manage Kubernetes clusters and helm-based deployments

● Manage infrastructure as code using Terraform

● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)

● Ensure security best practices across cloud resources, networks, and secrets

● Automate repetitive operations and improve system reliability

● Collaborate with developers to troubleshoot and resolve issues in staging/production environments
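As a rough illustration of the Kubernetes/Helm work above, here is a Python sketch that renders a Deployment manifest much as a Helm template would; the image, labels, and port are placeholders:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build an apps/v1 Deployment as a dict (kubectl also accepts JSON)."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [
                    {"name": name, "image": image,
                     "ports": [{"containerPort": 8080}]}
                ]},
            },
        },
    }

manifest = deployment_manifest("web", "gcr.io/example/web:1.0", replicas=3)
print(json.dumps(manifest, indent=2)[:80])
```

Keeping the selector and pod-template labels derived from one variable avoids the classic mismatch that makes a Deployment manage zero pods.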


What We’re Looking For:

Required Skills:

● 1–3 years of hands-on experience in a DevOps or SRE role

● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)

● Proficiency in Kubernetes (deployment, scaling, troubleshooting)

● Experience with Terraform for infrastructure provisioning

● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools

● Understanding of DevSecOps principles and cloud security practices

● Good command over Linux, shell scripting, and basic networking concepts


Nice to have:

● Experience with Docker, Helm, ArgoCD

● Exposure to other cloud platforms (AWS, Azure)

● Familiarity with incident response and disaster recovery planning

● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana

Read more
Us healthcare company

Us healthcare company

Agency job
via People Impact by Ranjita Shrivastava
Hyderabad, Chennai
4 - 8 yrs
₹20L - ₹30L / yr
ai/ml
TensorFlow
skill iconPython
Google Cloud Platform (GCP)
Vertex

● Design, develop, and implement AI/ML models and algorithms.

● Focus on building Proof of Concept (POC) applications to demonstrate the feasibility and value of AI solutions.

● Write clean, efficient, and well-documented code.

● Collaborate with data engineers to ensure data quality and availability for model training and evaluation.

● Work closely with senior team members to understand project requirements and contribute to technical solutions.

● Troubleshoot and debug AI/ML models and applications.

● Stay up-to-date with the latest advancements in AI/ML.

● Utilize machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) to develop and deploy models.

● Develop and deploy AI solutions on Google Cloud Platform (GCP).

● Implement data preprocessing and feature engineering techniques using libraries like Pandas and NumPy.

● Utilize Vertex AI for model training, deployment, and management.

● Integrate and leverage Google Gemini for specific AI functionalities.
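The preprocessing and feature-engineering work mentioned above would normally use Pandas/NumPy or scikit-learn's StandardScaler. Purely as a sketch of the underlying math of one common step — z-score standardization of a feature column — here is a stdlib-only version (the sample values are invented for illustration):

```python
import statistics

def standardize(values):
    """Z-score scale one feature column: (x - mean) / population stdev."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        # A constant column carries no information; map it to all zeros.
        return [0.0 for _ in values]
    return [(x - mean) / stdev for x in values]

ages = [20, 30, 40, 50]
scaled = standardize(ages)  # centered on 0, unit variance
```

Scaling features this way keeps gradient-based models from being dominated by columns with large numeric ranges, which is why it appears in most preprocessing pipelines.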

Qualifications:

● Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.

● 3+ years of experience in developing and implementing AI/ML models.

● Strong programming skills in Python.

● Experience with machine learning frameworks such as TensorFlow, PyTorch, or Scikit-learn.

● Good understanding of machine learning concepts and techniques.

● Ability to work independently and as part of a team.

● Strong problem-solving skills.

● Good communication skills.

● Experience with Google Cloud Platform (GCP) is preferred.

● Familiarity with Vertex AI is a plus.


Snaphyr

Agency job
via SnapHyr by Shagun Jaiswal
Gurugram
2 - 5 yrs
₹10L - ₹30L / yr
Cloud Computing
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Windows Azure
CI/CD

 Job Opening: Cloud and Observability Engineer

📍 Location: Work From Office – Gurgaon (Sector 43)

🕒 Experience: 2+ Years

💼 Employment Type: Full-Time


Role Overview:

As a Cloud and Observability Engineer, you will play a critical role in helping customers transition and optimize their monitoring and observability infrastructure. You'll be responsible for building high-quality extension packages for alerts, dashboards, and parsing rules using the organization's platform. Your work will directly impact the reliability, scalability, and efficiency of monitoring across cloud-native environments.

This is a work-from-office role requiring collaboration with global customers and internal stakeholders.


Key Responsibilities:

  • Extension Delivery:
      ◦ Develop, enhance, and maintain extension packages for alerts, dashboards, and parsing rules to improve the monitoring experience.
      ◦ Conduct in-depth research to create world-class observability solutions (e.g., for cloud-native and container technologies).
  • Customer & Internal Support:
      ◦ Act as a technical advisor to both internal teams and external clients.
      ◦ Respond to queries, resolve issues, and incorporate feedback related to deployed extensions.
  • Observability Solutions:
      ◦ Design and implement optimized monitoring architectures.
      ◦ Migrate and package dashboards, alerts, and rules based on customer environments.
  • Automation & Deployment:
      ◦ Use CI/CD tools and version control systems to package and deploy monitoring components.
      ◦ Continuously improve deployment workflows.
  • Collaboration & Enablement:
      ◦ Work closely with DevOps, engineering, and customer success teams to gather requirements and deliver solutions.
      ◦ Deliver technical documentation and training for customers.
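Packaging alerts as deployable assets often means generating rule files programmatically. As a rough sketch only — the alert name, threshold, and PromQL expression below are illustrative, and a real pipeline would typically use a YAML library rather than string formatting — a Prometheus alerting rule could be rendered like this:

```python
def alert_rule(name: str, expr: str, duration: str, severity: str) -> str:
    """Render a single Prometheus alerting rule as YAML text."""
    return (
        "groups:\n"
        "  - name: packaged-alerts\n"
        "    rules:\n"
        f"      - alert: {name}\n"
        f"        expr: {expr}\n"
        f"        for: {duration}\n"   # alert must hold this long before firing
        "        labels:\n"
        f"          severity: {severity}\n"
    )

rule = alert_rule(
    "HighErrorRate",
    # PromQL: per-second rate of 5xx responses over the last 5 minutes
    'sum(rate(http_requests_total{status=~"5.."}[5m])) > 0.05',
    "10m",
    "critical",
)
print(rule)
```

Generating rules from code like this is what makes an extension package reviewable and versionable in the CI/CD workflows described above.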


Requirements:


Professional Experience:


  • Minimum 2 years in Systems Engineering or similar roles.
  • Focus on monitoring, observability, and alerting tools.


Cloud & Container Tech:

  • Hands-on experience with AWS, Azure, or GCP.
  • Experience with Kubernetes, EKS, GKE, or AKS.
  • Cloud DevOps certifications (preferred).


Observability Tools:

  • Practical experience with at least two observability platforms (e.g., Prometheus, Grafana, Datadog, etc.).
  • Strong understanding of alerting, dashboards, and infrastructure monitoring.


Scripting & Automation:

  • Familiarity with CI/CD, deployment pipelines, and version control.
  • Experience in packaging and managing observability assets.


Technical Skills:

  • Working knowledge of PromQL, Grafana, and related query languages.
  • Willingness to learn Dataprime and Lucene syntax.


Soft Skills:

  • Excellent problem-solving and debugging abilities.
  • Strong verbal and written communication in English.
  • Ability to work across US and European time zones as needed.


Why Join Us?

  • Opportunity to work on cutting-edge observability platforms.
  • Collaborate with global teams and top-tier clients.
  • Shape the future of cloud monitoring and performance optimization.
  • Growth-oriented, learning-focused environment.

