50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru)
Apply to 50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Google Cloud Platform (GCP) job opportunities across top companies like Google, Amazon & Adobe.

About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX); see the sketch after this list.
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
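As a purely illustrative sketch of the inference-optimization work above (not HelloRamp's actual pipeline), exporting a toy PyTorch model to ONNX and running it with ONNX Runtime looks roughly like this:

```python
import torch
import torch.nn as nn
import onnxruntime as ort

# toy stand-in for a real computer-vision backbone
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
)

# run the exported graph; in production you would swap in
# TensorRT/CUDA execution providers for GPU acceleration
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
logits = sess.run(None, {"input": dummy.numpy()})[0]
print(logits.shape)  # (1, 10)
```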
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.
Role summary:
We are looking for a seasoned Oracle Financial Consultant with hands-on functional experience in Oracle Fusion Financials (GL, AP, AR, Procurement). You will drive financial transformation initiatives for e-commerce operations — working closely with finance, business, and operations teams to design, configure, and optimize financial processes (order-to-cash, procure-to-pay, refunds) while ensuring compliance and automation.
Key responsibilities:
- Lead functional design & implementation of Oracle Fusion Financial modules (GL, AP, AR, Procurement).
- Gather and analyze business requirements; perform fit-gap and translate to functional configurations.
- Define/optimize financial processes for e-commerce (order-to-cash, procure-to-pay, refunds, reconciliations).
- Collaborate functionally on integrations with OMS, WMS, and TMS (order, warehouse, and transportation management systems).
- Drive process automation and financial transformation projects; recommend best practices.
- Conduct workshops, training, UAT, knowledge transfer to finance/business users.
- Support testing, issue resolution, and adoption during/after implementation.
- Ensure compliance with tax, multi-entity and multi-currency reporting requirements.
Must-have:
- 5–10 years of functional Oracle Financials Cloud (Fusion) experience — GL, AP, AR, Procurement.
- Strong financial process design experience in e-commerce/online retail.
- Experience with OMS/WMS/TMS touchpoints and integrations (functional).
- Proven track record of gathering requirements and interacting directly with stakeholders.
- Multi-entity/multi-currency and high-volume transaction exposure.
- Excellent communication & stakeholder management.
Preferred:
- Oracle Financials Cloud certification.
- Experience with automation/reconciliation tools for retail finance.
- Past work in product companies or large retail/marketplace environments.

Job Type : Contract
Location : Bangalore
Experience : 5+ years
The role focuses on cloud security engineering with a strong emphasis on GCP, while also covering AWS and Azure.
Required Skills:
- 5+ years of experience in software and/or cloud platform engineering, with a particular focus on GCP environments.
- Knowledge of the Shared Responsibility Model; keen understanding of the security risks inherent in hosting cloud-based applications and data.
- Experience developing across the security assurance lifecycle (including prevent, detect, respond, and remediate controls).
- Experience configuring public-cloud-native security tooling and capabilities, with a focus on Google Cloud Organization Policies/constraints, VPC Service Controls (VPC SC), IAM policies, and GCP APIs.
- Experience with Cloud Security Posture Management (CSPM) 3rd Party tools such as Wiz, Prisma, Check Point CloudGuard, etc.
- Experience with policy-as-code (Rego) and the OPA platform.
- Experience solutioning and configuring event-driven, serverless security controls in Azure, AWS, and GCP, including technologies such as Azure Functions, Automation Runbooks, AWS Lambda, and Google Cloud Functions (see the sketch after this list).
- Deep understanding of DevOps processes and workflows.
- Working knowledge of the Secure SDLC process
- Experience with Infrastructure as Code (IaC) tooling, preferably Terraform.
- Familiarity with Logging and data pipeline concepts and architectures in cloud.
- Strong in scripting languages such as PowerShell or Python or Bash or Go.
- Knowledge of Agile best practices and methodologies
- Experience creating technical architecture documentation.
- Excellent written, verbal, and interpersonal communication skills.
- Practical experience designing and configuring CI/CD pipelines, including GitHub Actions and Jenkins.
- Experience in ITSM.
- Ability to articulate complex technical concepts to non-technical stakeholders.
- Experience with risk control frameworks and engagements with risk and regulatory functions
- Experience in the financial industry would be a plus.
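As a hedged illustration of the event-driven serverless controls described above, the sketch below shows a minimal first-generation Google Cloud Function (Python) that inspects firewall audit logs delivered via a Pub/Sub log sink and flags world-open ingress rules. The sink wiring, payload field names, and notification step are assumptions for illustration, not a confirmed design:

```python
import base64
import json

def check_firewall_event(event, context):
    """Background Cloud Function triggered by a Pub/Sub log sink (assumed setup)."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    proto = payload.get("protoPayload", {})
    request = proto.get("request", {})

    # flag newly created rules that allow ingress from anywhere
    if "0.0.0.0/0" in request.get("sourceRanges", []):
        rule = proto.get("resourceName", "unknown")
        print(f"ALERT: world-open firewall rule created: {rule}")
        # placeholder: open a ticket or notify a security channel here
```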

Job description
We are looking for a Data Scientist with strong AI/ML engineering skills to join our high-impact team at KrtrimaIQ Cognitive Solutions. This is not a notebook-only role — you must have production-grade experience deploying and scaling AI/ML models in cloud environments, especially GCP, AWS, or Azure.
This role involves building, training, deploying, and maintaining ML models at scale, integrating them with business applications. Basic model prototyping won't qualify — we’re seeking hands-on expertise in building scalable machine learning pipelines.
Key Responsibilities
Design, train, test, and deploy end-to-end ML models on GCP (or AWS/Azure) to support product innovation and intelligent automation.
Implement GenAI use cases using LLMs
Perform complex data mining and apply statistical algorithms and ML techniques to derive actionable insights from large datasets.
Drive the development of scalable frameworks for automated insight generation, predictive modeling, and recommendation systems.
Work on impactful AI/ML use cases in Search & Personalization, SEO Optimization, Marketing Analytics, Supply Chain Forecasting, and Customer Experience.
Implement real-time model deployment and monitoring using tools like Kubeflow, Vertex AI, Airflow, and PySpark (a Vertex AI deployment sketch follows this list).
Collaborate with business and engineering teams to frame problems, identify data sources, build pipelines, and ensure production-readiness.
Maintain deep expertise in cloud ML architecture, model scalability, and performance tuning.
Stay up to date with AI trends, LLM integration, and modern practices in machine learning and deep learning.
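For flavor, deploying a trained model behind a managed endpoint with the Vertex AI Python SDK could look roughly like the sketch below; the project, region, artifact location, and serving container are placeholders, and the exact prebuilt image URI depends on your framework version:

```python
from google.cloud import aiplatform

# hypothetical project/region/artifact values
aiplatform.init(project="my-project", location="asia-south1")

model = aiplatform.Model.upload(
    display_name="demand-forecast",
    artifact_uri="gs://my-bucket/models/demand-forecast/",
    # illustrative prebuilt serving image; pick one matching your framework
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
)

endpoint = model.deploy(machine_type="n1-standard-4", min_replica_count=1)

# online prediction against the deployed endpoint (feature values are made up)
prediction = endpoint.predict(instances=[[1200.0, 3, 0.12]])
print(prediction.predictions)
```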
Technical Skills Required
Core ML & AI Skills (Must-Have):
Strong hands-on ML engineering (70% of the role) — supervised/unsupervised learning, clustering, regression, optimization.
Experience with real-world model deployment and scaling, not just notebooks or prototypes.
Good understanding of ML Ops, model lifecycle, and pipeline orchestration.
Strong with Python 3, Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, Seaborn, Matplotlib, etc.
SQL proficiency and experience querying large datasets.
Deep understanding of linear algebra, probability/statistics, Big-O, and scientific experimentation.
Cloud experience in GCP (preferred), AWS, or Azure.
Cloud & Big Data Stack
Hands-on experience with:
GCP tools – Vertex AI, Kubeflow, BigQuery, GCS
Or equivalent AWS/Azure ML stacks
Familiar with Airflow, PySpark, or other pipeline orchestration tools (a minimal Airflow DAG sketch follows).
Experience reading/writing data from/to cloud services.
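To make the orchestration side concrete, a minimal Airflow 2.x DAG wiring two pipeline steps might look like this (the DAG id and task bodies are illustrative only):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from a source system (placeholder)")

def transform():
    print("clean and aggregate the extracted data (placeholder)")

with DAG(
    dag_id="daily_feature_refresh",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```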
Qualifications
Bachelor's/Master’s/Ph.D. in Computer Science, Mathematics, Engineering, Data Science, Statistics, or related quantitative field.
4+ years of experience in data analytics and machine learning roles.
2+ years of experience in Python or similar programming languages (Java, Scala, Rust).
Must have experience deploying and scaling ML models in production.
Nice to Have
Experience with LLM fine-tuning, Graph Algorithms, or custom deep learning architectures.
Background in taking academic research through to production applications.
Building APIs and monitoring production ML models.
Familiarity with advanced math – Graph Theory, PDEs, Optimization Theory.
Communication & Collaboration
Strong ability to explain complex models and insights to both technical and non-technical stakeholders.
Ask the right questions, clarify objectives, and align analytics with business goals.
Comfortable working cross-functionally in agile and collaborative teams.
Important Note:
This is a Data Science-heavy role — 70% of responsibilities involve building, training, deploying, and scaling AI/ML models.
Cloud experience is mandatory (GCP preferred, AWS/Azure acceptable).
Only candidates with hands-on experience in deploying ML models into production (not just notebooks) will be considered.


Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the untrodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.
Key Responsibilities:
- Design and implement advanced AI/ML models and algorithms to address real-world challenges.
- Analyze large and complex datasets to derive actionable insights and train predictive models.
- Build and deploy scalable, production-ready AI solutions on cloud platforms such as AWS, Azure, or GCP.
- Collaborate closely with cross-functional teams, including data engineers, product managers, and software developers, to integrate AI solutions into business workflows.
- Continuously monitor and optimize model performance, ensuring scalability, robustness, and reliability.
- Stay abreast of the latest advancements in AI, ML, and Generative AI technologies, and proactively apply them where applicable.
- Implement MLOps best practices using tools such as MLflow, Docker, and CI/CD pipelines.
- Work with Large Language Models (LLMs) like GPT and LLaMA, and develop Retrieval-Augmented Generation (RAG) pipelines when needed (a minimal retrieval sketch follows this list).
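A stripped-down illustration of the retrieval half of such a RAG pipeline, assuming sentence-transformers for embeddings (the model choice and documents are placeholders; in practice the retrieved passage plus the query would be sent to an LLM such as GPT or LLaMA):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, common embedding model

docs = [
    "Our refund policy allows returns within 30 days.",
    "Premium support is available 24/7 for enterprise plans.",
    "Shipping is free on orders above Rs. 999.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query = "How long do I have to return an item?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

# cosine similarity reduces to a dot product on normalized vectors
scores = doc_vecs @ q_vec
best = docs[int(np.argmax(scores))]

# a full RAG pipeline would now pass this prompt to an LLM
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer:"
print(prompt)
```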
Required Skills:
- Strong programming skills in Python (preferred); experience with R or Java is also valuable.
- Proficiency with machine learning libraries and frameworks such as TensorFlow, PyTorch, and Scikit-learn.
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Solid foundation in data structures, algorithms, statistics, and machine learning principles.
- Familiarity with MLOps tools and practices, including MLflow, Docker, and Kubernetes.
- Proven experience in deploying and maintaining AI/ML models in production environments.
- Exposure to Large Language Models (LLMs), Generative AI, and vector databases is a strong plus.

Location: Hybrid/Remote
Openings: 2
Experience: 5–12 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities
Architect & Design:
- Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
- Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
- Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
- Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.
Development & Debugging:
- Write clean, maintainable, and efficient frontend code.
- Debug and troubleshoot code to ensure robust, high-performing applications.
- Develop reusable frontend libraries that can be leveraged across multiple projects.
AI Awareness (Preferred):
- Understand AI/ML fundamentals and how they can enhance frontend applications.
- Collaborate with teams integrating AI-based features into chat applications.
Collaboration & Reporting:
- Work closely with cross-functional teams to align on architecture and deliverables.
- Regularly report progress, identify risks, and propose mitigation strategies.
Quality Assurance:
- Implement unit tests and end-to-end tests to ensure code quality.
- Participate in code reviews and enforce best practices.
Required Skills
- 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
- Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
- Proficiency with modern frameworks like React, Angular, or Node.js
- Backend familiarity with Java, Spring Boot (or similar technologies).
- Experience developing real-world, at-scale products.
- General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
- Strong problem-solving, debugging, and performance optimization skills.

Location: Hybrid/Remote
Openings: 2
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or related field
Job Responsibilities
Problem Solving & Optimization:
- Analyze and resolve complex technical and application issues.
- Optimize application performance, scalability, and reliability.
Design & Develop:
- Build, test, and deploy scalable full-stack applications with high performance and security.
- Develop clean, reusable, and maintainable code for both frontend and backend.
AI Integration (Preferred):
- Collaborate with the team to integrate AI/ML models into applications where applicable.
- Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.
Technical Leadership & Mentorship:
- Provide guidance, mentorship, and code reviews for junior developers.
- Foster a culture of technical excellence and knowledge sharing.
Agile & Delivery Management:
- Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
- Define and scope backlog items, track progress, and ensure timely delivery.
Collaboration:
- Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
- Coordinate with geographically distributed teams.
Quality Assurance & Security:
- Conduct peer reviews of designs and code to ensure best practices.
- Implement security measures and ensure compliance with industry standards.
Innovation & Continuous Improvement:
- Identify areas for improvement in the software development lifecycle.
- Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.
Required Skills
- Strong proficiency in JavaScript, HTML5, CSS3
- Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
- Backend development experience with Java, Spring Boot (Node.js is a plus)
- Knowledge of REST APIs, microservices, and scalable architectures
- Familiarity with cloud platforms (AWS, Azure, or GCP)
- Experience with Agile/Scrum methodologies and JIRA for project tracking
- Proficiency in Git and version control best practices
- Strong debugging, performance optimization, and problem-solving skills
- Ability to analyze customer requirements and translate them into technical specifications

Location: Hybrid/Remote
Openings: 5
Experience: 0–2 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities:
Backend Development & APIs
- Build microservices that provide REST APIs to power web frontends.
- Design clean, reusable, and scalable backend code meeting enterprise security standards.
- Conceptualize and implement optimized data storage solutions for high-performance systems.
Deployment & Cloud
- Deploy microservices using a common deployment framework on AWS and GCP.
- Inspect and optimize server code for speed, security, and scalability.
Frontend Integration
- Work on modern front-end frameworks to ensure seamless integration with back-end services.
- Develop reusable libraries for both frontend and backend codebases.
AI Awareness (Preferred)
- Understand how AI/ML or Generative AI can enhance enterprise software workflows.
- Collaborate with AI specialists to integrate AI-driven features where applicable.
Quality & Collaboration
- Participate in code reviews to maintain high code quality.
- Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.
Required Skills:
- Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
- Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
- Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
- Ability to design and implement RESTful APIs and understand their impact on client-side applications
- Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
- Experience working with Agile and Scrum methodologies
- Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
Job Title : Data Engineer – GCP + Spark + DBT
Location : Bengaluru (On-site at Client Location | 3 Days WFO)
Experience : 8 to 12 Years
Level : Associate Architect
Type : Full-time
Job Overview :
We are looking for a seasoned Data Engineer to join the Data Platform Engineering team supporting a Unified Data Platform (UDP). This role requires hands-on expertise in DBT, GCP, BigQuery, and PySpark, with a solid foundation in CI/CD, data pipeline optimization, and agile delivery.
Mandatory Skills : GCP, DBT, Google Dataform, BigQuery, PySpark/Spark SQL, Advanced SQL, CI/CD, Git, Agile Methodologies.
Key Responsibilities :
- Design, build, and optimize scalable data pipelines using BigQuery, DBT, and PySpark (see the sketch after this list).
- Leverage GCP-native services like Cloud Storage, Pub/Sub, Dataproc, Cloud Functions, and Composer for ETL/ELT workflows.
- Implement and maintain CI/CD for data engineering projects with Git-based version control.
- Collaborate with cross-functional teams including Infra, Security, and DataOps for reliable, secure, and high-quality data delivery.
- Lead code reviews, mentor junior engineers, and enforce best practices in data engineering.
- Participate in Agile sprints, backlog grooming, and Jira-based project tracking.
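As a flavor of the pipeline work involved, a minimal PySpark job reading from and writing to BigQuery via the spark-bigquery connector might look like this sketch (the connector is assumed to be on the classpath, and all project, dataset, and bucket names are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

orders = (spark.read.format("bigquery")
          .option("table", "my-project.raw.orders")
          .load())

daily = (orders
         .filter(F.col("status") == "COMPLETE")
         .groupBy(F.to_date("created_at").alias("order_date"))
         .agg(F.count("*").alias("orders"),
              F.sum("amount").alias("revenue")))

(daily.write.format("bigquery")
 .option("table", "my-project.marts.daily_orders")
 .option("temporaryGcsBucket", "my-staging-bucket")  # hypothetical staging bucket
 .mode("overwrite")
 .save())
```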
Must-Have Skills :
- Strong experience with DBT, Google Dataform, and BigQuery
- Hands-on expertise with PySpark/Spark SQL
- Proficient in GCP for data engineering workflows
- Solid knowledge of SQL optimization, Git, and CI/CD pipelines
- Agile team experience and strong problem-solving abilities
Nice-to-Have Skills :
- Familiarity with Databricks, Delta Lake, or Kafka
- Exposure to data observability and quality frameworks (e.g., Great Expectations, Soda)
- Knowledge of MDM patterns, Terraform, or IaC is a plus
About Eazeebox
Eazeebox is India’s first B2B Quick Commerce platform for home electrical goods. We empower electrical retailers with access to 100+ brands, flexible credit options, and 4-hour delivery—making supply chains faster, smarter, and more efficient. Our tech-driven approach enables sub-3 hour inventory-aware fulfilment across micro-markets, with a goal of scaling to 50+ orders/day per store.
About the Role
We’re looking for a DevOps Engineer to help scale and stabilize the cloud-native backbone that powers Eazeebox. You’ll play a critical role in ensuring our microservices architecture remains reliable, responsive, and performant—especially during peak retailer ordering windows. This is a high-ownership role for an "all-rounder" who is passionate about designing scalable architectures, writing robust code, and ensuring seamless deployments and operations.
What You'll Be Doing
As a critical member of our small, dedicated team, you will take on a versatile role encompassing development, infrastructure, and operations.
Cloud & DevOps Ownership
- Architect and implement containerized services on AWS (S3, EC2, ECS, ECR, CodeBuild, Lambda, Fargate, RDS, CloudWatch) under secure IAM policies.
- Take ownership of CI/CD pipelines, optimizing and managing GitHub Actions workflows.
- Configure and manage microservice versioning and CI/CD deployments.
- Implement secrets rotation and IP-based request rate limiting for enhanced security.
- Configure auto-scaling instances and Kubernetes for high-workload microservices to ensure performance and cost efficiency.
- Hands-on experience with Docker and Kubernetes/EKS fundamentals.
Backend & API Design
- Design, build, and maintain scalable REST/OpenAPI services in Django (DRF), WebSocket implementations, and asynchronous microservices in FastAPI.
- Model relational data in PostgreSQL 17 and optimize with Redis for caching and pub/sub (a minimal caching sketch follows this list).
- Orchestrate background tasks using Celery or RQ with Redis Streams or Amazon SQS.
- Collaborate closely with the frontend team (React/React Native) to define and build robust APIs.
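A tiny, illustrative sketch of the Redis caching pattern mentioned above, using FastAPI with redis-py's asyncio client (the host and the stand-in database lookup are placeholder assumptions):

```python
from fastapi import FastAPI
import redis.asyncio as redis

app = FastAPI()
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)  # hypothetical host

@app.get("/products/{product_id}")
async def get_product(product_id: int):
    # serve from Redis when possible, fall back to the primary store
    key = f"product:{product_id}"
    if (name := await cache.get(key)) is not None:
        return {"id": product_id, "name": name, "cached": True}
    name = f"product-{product_id}"      # stand-in for a PostgreSQL lookup
    await cache.set(key, name, ex=300)  # cache for 5 minutes
    return {"id": product_id, "name": name, "cached": False}
```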
Testing & Observability
- Ensure code quality via comprehensive testing using Pytest, React Testing Library, and Playwright.
- Instrument applications with CloudWatch metrics, contributing to our observability strategy.
- Maintain a Git-centric development workflow, including branching strategies and pull-request discipline.
Qualifications & Skills
Must-Have
- Experience: 2-4 years of hands-on experience delivering production-level full-stack applications with a strong emphasis on backend and DevOps.
- Backend Expertise: Proficiency in Python, with strong command of Django or FastAPI, including async Python patterns and REST best practices.
- Database Skills: Strong SQL skills with PostgreSQL; practical experience with Redis for caching and messaging.
- Cloud & DevOps Mastery: Hands-on experience with Docker and Kubernetes/EKS fundamentals.
- AWS Proficiency: Experience deploying and managing services on AWS (EC2, S3, RDS, Lambda, ECS Fargate, ECR, SQS).
- CI/CD: Deep experience with GitHub Actions or similar platforms, including semantic-release, Blue-Green Deployments, and artifact signing.
- Automation: Fluency in Python/Bash or Go for automation scripts; comfort with YAML.
- Ownership Mindset: Entrepreneurial spirit, strong sense of ownership, and ability to deliver at scale.
- Communication: Excellent written and verbal communication skills; comfortable in async and distributed team environments.
Nice-to-Have
- Frontend Familiarity: Exposure to React with Redux Toolkit and React Native.
- Event Streaming: Experience with Kafka or Amazon EventBridge.
- Serverless: Knowledge of AWS Lambda, Step Functions, CloudFront Functions, or Cloudflare Workers.
- Observability: Familiarity with Datadog, Posthog, Prometheus/Grafana/Loki.
- Emerging Tech: Interest in GraphQL (Apollo Federation) or generative AI frameworks (Amazon Bedrock, LangChain) and AI/ML.
Key Responsibilities
- Architectural Leadership: Design and lead the technical strategy for migrating our platform from a monolithic to a microservices architecture.
- System Design: Translate product requirements into scalable, secure, and reliable system designs.
- Backend Development: Build and maintain core backend services using Python (Django/FastAPI).
- CI/CD & Deployment: Own and manage CI/CD pipelines for multiple services using GitHub Actions, AWS CodeBuild, and automated deployments.
- Infrastructure & Operations: Deploy production-grade microservices using Docker, Kubernetes, and AWS EKS.
- FinOps & Performance: Drive cloud cost optimization and implement auto-scaling for performance and cost-efficiency.
- Security & Observability: Implement security, monitoring, and compliance using tools like Prometheus, Grafana, Datadog, Posthog, and Loki to ensure 99.99% uptime.
- Collaboration: Work with product and development teams to align technical strategy with business growth plans.
About the Company
We are hiring for a fast-growing, well-funded product startup backed by a leadership team with a proven track record of building billion-dollar digital businesses. The company is focused on delivering enterprise-grade SaaS products in the Cybersecurity domain for B2B markets. You’ll be part of a passionate and dynamic engineering team building innovative solutions using modern tech stacks.
Key Responsibilities
- Design and develop scalable microservices using Java and Spring Boot
- Build and manage robust RESTful APIs
- Collaborate with cross-functional teams in an Agile setup
- Lead and mentor junior engineers, driving technical excellence
- Contribute to architecture discussions and code reviews
- Work with PostgreSQL, implement data integrity and consistency
- Deploy and manage services on cloud platforms like GCP or Azure
- Utilize Docker/Kubernetes for containerization and orchestration
Must-Have Skills
- Strong backend experience with Java, Spring Boot, REST APIs
- Proficiency in frontend development with React.js
- Experience with PostgreSQL and database optimization
- Hands-on with cloud platforms (GCP or Azure)
- Familiarity with Docker and Kubernetes
- Strong understanding of:
- API Gateways
- Hibernate & JPA
- Transaction management & ACID properties
- Multi-threading and context switching
Good to Have
- Experience in Cybersecurity or Healthcare domain
- Exposure to CI/CD pipelines and DevOps practices
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Senior DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. You will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two

Job Title : Lead Web Developer / Frontend Engineer
Experience Required : 10+ Years
Location : Bangalore (Hybrid – 3 Days Work From Office)
Work Timings : 11:00 AM to 8:00 PM IST
Notice Period : Immediate or Up to 30 Days (Preferred)
Work Mode : Hybrid
Interview Mode : Face-to-Face mandatory (for Round 2)
Role Overview :
We are hiring a Lead Frontend Engineer with 10+ Years of experience to drive the development of scalable, modern, and high-performance web applications.
This is a hands-on technical leadership role focused on React.js, micro-frontends, and Backend for Frontend (BFF) architecture, requiring both coding expertise and team leadership skills.
Mandatory Skills :
React.js, JavaScript/TypeScript, HTML, CSS, micro-frontend architecture, Backend for Frontend (BFF), Webpack, Jenkins (CI/CD), GCP, RDBMS/SQL, Git, and team leadership.
Core Responsibilities :
- Design and develop cloud-based web applications using React.js, HTML, CSS.
- Collaborate with UX/UI designers and backend engineers to implement seamless user experiences.
- Lead and mentor a team of frontend developers.
- Write clean, well-documented, scalable code using modern JavaScript/TypeScript practices.
- Implement CI/CD pipelines using Jenkins, deploy applications to CDNs.
- Integrate with GCP services, optimize front-end performance.
- Stay updated with modern frontend technologies and design patterns.
- Use Git for version control and collaborative workflows.
- Implement JavaScript libraries for web analytics and performance monitoring.
Key Requirements :
- 10+ Years of experience as a frontend/web developer.
- Strong proficiency in React.js, JavaScript/TypeScript, HTML, CSS.
- Experience with micro-frontend architecture and Backend for Frontend (BFF) patterns.
- Proficiency in frontend design frameworks and libraries (jQuery, Node.js).
- Strong understanding of build tools like Webpack, CI/CD using Jenkins.
- Experience with GCP and deploying to CDNs.
- Solid experience in RDBMS, SQL.
- Familiarity with Git and agile development practices.
- Excellent debugging, problem-solving, and communication skills.
- Bachelor’s/Master’s in Computer Science or a related field.
Nice to Have :
- Experience with Node.js.
- Previous experience working with web analytics frameworks.
- Exposure to JavaScript observability tools.
Interview Process :
1. Round 1 : Online Technical Interview (via Geektrust – 1 Hour)
2. Round 2 : Face-to-Face Interview with the Indian team in Bangalore (3 Hours – Mandatory)
3. Round 3 : Online Interview with CEO (30 Minutes)
Important Notes :
- Face-to-face interview in Bangalore is mandatory for Round 2.
- Preference given to candidates currently in Bangalore or willing to travel for interviews.
- Remote applicants who cannot attend the in-person round will not be considered.

Why This Role Matters
We’re looking for a Principal Engineer to lead the architecture and execution of our GenAI-powered, self-serve marketing platforms. You will work directly with the CEO to shape, build, and scale products that change how marketers interact with data and AI. This is intrapreneurship in action — not a sandbox innovation lab, but a real-world product with traction, velocity, and high stakes.
What You'll Do
- Co-own product architecture and direction alongside the CEO.
- Build GenAI-native, full-stack platforms from MVP to scale — powered by LLMs, agents, and predictive AI.
- Own the full stack: React (frontend), Node.js/Python (backend), GCP (infra), BigQuery (data), and vector databases (AI).
- Lead a lean, high-caliber team with a hands-on, unblock-and-coach mindset.
- Drive rapid iteration with rigor, balancing short-term delivery with long-term resilience.
- Ensure scalability, observability, and fault tolerance in multi-tenant, cloud-native environments.
- Bridge business and tech — aligning execution with evolving user and market insights.
What You Bring
- 8–12 years of experience building and scaling full-stack, data-heavy or AI-driven products.
- Fluency in React, Node.js, and Google Cloud (Functions, BigQuery, Cloud SQL, Airflow, etc.).
- Hands-on experience with GenAI tools (LangChain, OpenAI APIs, LlamaIndex) is a bonus.
- Track record of shipping products from ambiguity to impact.
- Strong product mindset — your goal is user value, not just elegant code.
- Architectural leadership with ownership of engineering rigor and scaling best practices.
- Startup or founder DNA — you’ve built things from scratch and know how to move fast without breaking things.
Who You Are
- A former founder, senior IC, or tech lead who’s done zero-to-one and one-to-n scaling.
- Hungry for ownership and velocity — frustrated by bureaucracy or stagnation.
- You code because you care about solving real problems for real users.
- You’re pragmatic, hands-on, and grounded in first principles.
- You understand that great software isn't just shipped — it's hardened, maintained, and evolves with minimal manual effort.
- You’re open to evolving into a founding engineer role with influence over the tech vision and culture.
What You Get
- Equity in a high-growth product-led startup.
- A chance to build global products out of India with full-stack and GenAI innovation.
- Access to high-context decision-making and direct collaboration with the CEO.
- A tight, ego-free team and a culture that values clarity, ownership, learning, and candor.
Why YOptima?
YOptima is redefining how leading marketers unlock growth through full-funnel, AI-powered media solutions. As part of our growth journey, this is your opportunity to own the growth charter for leading brands and agencies globally and shape the narrative of a next-generation marketing platform.
Ready to lead, build, and scale?
We’d love to hear from you.


Job title – Python Developer
Experience – 4 to 6 years
Location – Pune/Mumbai/Bangalore
Please find the job description below.
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tooling.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads.
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Looking for fresher developers
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skills:
Knowledge of DevOps engineering or a similar software engineering role
Good knowledge of Terraform and Kubernetes
Working knowledge of AWS and Google Cloud
You can directly contact me on nine three one six one two zero one three two

Position : Senior Data Analyst
Experience Required : 5 to 8 Years
Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)
Shift Timing : 11:00 AM – 8:00 PM IST
Notice Period : Immediate Joiners Only
Job Summary :
We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.
The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.
Key Responsibilities :
- Lead the design, execution, and delivery of advanced data analysis projects.
- Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
- Create and maintain interactive dashboards, reports, and visualizations.
- Perform root cause analysis and uncover meaningful patterns from large datasets.
- Present analytical findings to senior leaders and non-technical audiences.
- Maintain data integrity, quality, and governance in all reporting and analytics solutions.
- Mentor junior analysts and support their professional development.
- Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.
Must-Have Skills :
- Strong proficiency in SQL and Databricks
- Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
- Sound understanding of data warehousing concepts and BI best practices
Good-to-Have :
- Experience with AWS
- Exposure to machine learning and predictive analytics
- Industry-specific analytics experience (preferred but not mandatory)
- Strong Site Reliability Engineer (SRE - CloudOps) Profile
- Mandatory (Experience 1) - Must have a minimum of 1 year of experience in SRE (CloudOps)
- Mandatory (Core Skill 1) - Must have experience with Google Cloud platforms (GCP)
- Mandatory (Core Skill 2) - Experience with monitoring, APM, and alerting tools like Prometheus, Grafana, ELK, Newrelic, Pingdom, or Pagerduty
- Mandatory (Core Skill 3) - Hands-on experience with Kubernetes for orchestration and container management.
- Mandatory (Company) - B2C Product Companies.
- Strong Senior Unity Developer Profile
- Mandatory (Experience 1) - Must have a minimum of 2 years of experience in game/application development using Unity.
- Mandatory (Experience 2) - Must have strong experience in backend development using C#
- Mandatory (Experience 3) - Must have strong experience in multiplayer game development with Unity, preferably using Photon Networking (PUN) or Photon Fusion.
- Mandatory (Company) - B2C Product Companies
Preferred
- Preferred (Education) - B.E / B.Tech
Job Title: Lead DevOps Engineer
Experience Required: 4 to 5 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
Bangalore / Chennai
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of Conceptual, Logical and Physical data modelling
- Strong understanding of indexing, partitioning, and data sharding, with practical, hands-on experience implementing them
- Strong understanding of variables impacting database performance for near-real-time reporting and application interaction.
- Should have working experience with at least one data modelling tool, preferably DBSchema or Erwin
- Good understanding of GCP databases like AlloyDB, Cloud SQL, and BigQuery.
- People with functional knowledge of the mutual fund industry will be a plus
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement Dimensional and Fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop, and maintain ETL workflows and mappings using the appropriate data load technique (see the sketch after this list).
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional data from ERP, CRM, and HRIS applications to model, extract, and transform it into reporting & analytics.
● Define and document BI usage through user experience/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
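For illustration, a parameterized incremental load into a warehouse fact table via the BigQuery Python client could look like the following sketch (project, dataset, table, and watermark values are hypothetical):

```python
import datetime as dt
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# hypothetical incremental load from staging into a fact table
query = """
    INSERT INTO `my-project.dw.fact_sales` (sale_id, sale_date, amount)
    SELECT sale_id, sale_date, amount
    FROM `my-project.staging.sales`
    WHERE load_ts > @watermark
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter(
                "watermark", "TIMESTAMP",
                dt.datetime(2024, 1, 1, tzinfo=dt.timezone.utc),
            )
        ]
    ),
)
job.result()  # block until the load completes
```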
Required Skills:
● Bachelor’s degree in Computer Science or similar field or equivalent work experience.
● 5+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.
● Expert with data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience in GCP & Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions, and GCS
● Good to have knowledge on SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience with Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSON, etc.

Why This Role Matters
- We are looking for a Staff Engineer to lead the technical direction and hands-on development of our next-generation, agentic AI-first marketing platforms. This is a high-impact role to architect, build, and ship products that change how marketers interact with data, plan campaigns, and make decisions.
What You'll Do
- Build Gen-AI native products: Architect, build, and ship platforms powered by LLMs, agents, and predictive AI
- Stay hands-on: Design systems, write code, debug, and drive product excellence
- Lead with depth: Mentor a high-caliber team of full stack engineers.
- Speed to market: Rapidly ship and iterate on MVPs to maximize learning and feedback.
- Own the full stack: from backend data pipelines to intuitive UIs — from Airflow to React, from BigQuery to embeddings.
- Scale what works: Ensure scalability, security, and performance in multi-tenant, cloud-native environments (GCP).
- Collaborate deeply: Work closely with product, growth, and leadership to align tech with business priorities.
What You Bring
- 8+ years of experience building and scaling full-stack, data-driven products
- Proficiency in backend (Node.js, Python) and frontend (React), with solid GCP experience
- Strong grasp of data pipelines, analytics, and real-time data processing
- Familiarity with Gen-AI frameworks (LangChain, LlamaIndex, OpenAI APIs, vector databases)
- Proven architectural leadership and technical ownership
- Product mindset with a bias for execution and iteration
Our Tech Stack
- Cloud: Google Cloud Platform
- Backend: Node.js, Python, Airflow
- Data: BigQuery, Cloud SQL
- AI/ML: TensorFlow, OpenAI APIs, custom agents
- Frontend: React.js
What You Get
- Meaningful equity in a high-growth startup
- The chance to build global products from India
- A culture that values clarity, ownership, learning, humility, and candor
- A rare opportunity to build with Gen-AI from the ground up
Who You Are
- You’re initiative-driven, not interruption-driven.
- You code because you love building things that matter.
- You enjoy ambiguity and solve problems from first principles.
- You believe true leadership is contextual, hands-on, and grounded.
- You’re here to build — not just maintain.
- You care deeply about seeing your products empower real users, run reliably at scale, and adapt intelligently with minimal manual effort.
- You know that elegant code is just 30% of the job — the real craft lies in the engineering rigour, edge-case handling, and production resilience that make great products truly dependable.
Job Description
We are looking for a passionate and skilled Rust Developer with at least 3 years of experience to join our growing development team. The ideal candidate will be proficient in building robust and scalable APIs using the Rocket framework, and have hands-on experience with PostgreSQL and the Diesel ORM. You will be working on performance-critical backend systems, designing APIs, managing deployments, and collaborating closely with other developers.
Responsibilities
- Design, develop, and maintain APIs using Rocket.
- Work with PostgreSQL databases, using Diesel ORM for efficient data access.
- Write clean, maintainable, and efficient Rust code.
- Apply object-oriented and functional programming principles effectively.
- Build and consume RESTful APIs and WebSockets for real-time communication.
- Handle server-side deployments and assist in managing the infrastructure.
- Optimize application performance and ensure high availability.
- Collaborate with frontend developers and DevOps engineers to integrate systems smoothly.
- Participate in code reviews and technical discussions.
- Apply knowledge of data structures and algorithms to solve complex problems efficiently.
Requirements
- 3+ years of experience working with Rust in production environments.
- Strong hands-on experience with Rocket framework.
- Solid understanding of Diesel ORM and PostgreSQL.
- Good grasp of OOP and functional programming concepts.
- Familiarity with RESTful APIs, WebSockets, and other web protocols.
- Experience handling application deployments and basic server management.
- Strong foundation in data structures, algorithms, and software design principles.
- Ability to write clean, well-documented, and testable code.
- Good communication skills and ability to work collaboratively.
Package
- As per industry standards
Nice to Have
- Experience with CI/CD pipelines.
- Familiarity with containerization tools like Docker.
- Knowledge of cloud platforms (AWS, GCP, etc.).
- Contribution to open-source Rust projects.
- Knowledge of basic cryptographic primitives (AES, hashing, etc.).
Perks & Benefits
- Competitive compensation.
- Flexible work hours and remote-friendly culture.
- Opportunity to work with a modern tech stack.
- Supportive team and growth-oriented environment.
If you're passionate about Rust, love building high-performance systems, and enjoy solving real-world problems with elegant code, we’d love to connect! Apply now and let’s craft powerful backend experiences together! ⚙️🚀

As a Senior Backend & Infrastructure Engineer, you will take ownership of backend systems and cloud infrastructure. You’ll work closely with our CTO and cross-functional teams (hardware, AI, frontend) to design scalable, fault-tolerant architectures and ensure reliable deployment pipelines.
What You’ll Do:
- Backend Development: Maintain and evolve our Node.js (TypeScript) and Python backend services with a focus on performance and scalability.
- Cloud Infrastructure: Manage our infrastructure on GCP and Firebase (Auth, Firestore, Storage, Functions, AppEngine, PubSub, Cloud Tasks).
- Database Management: Handle Firestore and other NoSQL DBs. Lead database schema design and migration strategies.
- Pipelines & Automation: Build robust real-time and batch data pipelines. Automate CI/CD and testing for backend and frontend services.
- Monitoring & Uptime: Deploy tools for observability (logging, alerts, debugging). Ensure 99.9% uptime of critical services.
- Dev Environments: Set up and manage developer and staging environments across teams.
- Quality & Security: Drive code reviews, implement backend best practices, and enforce security standards.
- Collaboration: Partner with other engineers (AI, frontend, hardware) to integrate backend capabilities seamlessly into our global system.
Must-Haves :
- 5+ years of experience in backend development and cloud infrastructure.
- Strong expertise in Node.js (TypeScript) and/or Python.
- Advanced skills in NoSQL databases (Firestore, MongoDB, DynamoDB...).
- Deep understanding of cloud platforms, preferably GCP and Firebase.
- Hands-on experience with CI/CD, DevOps tools, and automation.
- Solid knowledge of distributed systems and performance tuning.
- Experience setting up and managing development & staging environments.
- Proficiency in English and remote communication.
Good to have :
- Event-driven architecture experience (e.g., Pub/Sub, MQTT).
- Familiarity with observability tools (Prometheus, Grafana, Google Monitoring).
- Previous work on large-scale SaaS products.
- Knowledge of telecommunication protocols (MQTT, WebSockets, SNMP).
- Experience with edge computing on Nvidia Jetson devices.
What We Offer :
- Competitive salary for the Indian market (depending on experience).
- Remote-first culture with async-friendly communication.
- Autonomy and responsibility from day one.
- A modern stack and a fast-moving team working on cutting-edge AI and cloud infrastructure.
- A mission-driven company tackling real-world environmental challenges.
A backend developer is an engineer who can handle all the work of databases, servers, systems engineering, and clients. Depending on the project, customers may need a mobile stack, a web stack, or a native application stack.
You will be responsible for:
- Build reusable code and libraries for future use.
- Own and build new modules/features end-to-end independently.
- Collaborate with other team members and stakeholders.
Required Skills:
- Thorough understanding of Node.js and TypeScript.
- Excellence in at least one framework such as StrongLoop LoopBack, Express.js, or Sails.js.
- Basic architectural understanding of modern-day web applications.
- Diligence for coding standards.
- Must be good with Git and Git workflows.
- Experience with external integrations is a plus.
- Working knowledge of AWS, GCP, or Azure.
- Expertise with Linux-based systems.
- Experience with CI/CD tools like Jenkins is a plus.
- Experience with testing and automation frameworks.
- Extensive understanding of relational database systems (RDBMS).

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python.
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy); see the short sketch after this list.
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
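For a hedged flavor of the day-to-day Python work, here is a minimal pandas sketch of the kind of transformation described above; the file, column names, and output format are illustrative assumptions:

```python
# Hypothetical example: aggregate daily order revenue from a raw CSV extract.
# File name, column names, and output path are illustrative assumptions.
import pandas as pd

orders = pd.read_csv("orders_raw.csv", parse_dates=["order_ts"])

# Basic cleaning: drop rows missing an amount, normalize the currency column.
orders = orders.dropna(subset=["amount"])
orders["currency"] = orders["currency"].str.upper()

# Transform: daily revenue per currency, the kind of cleaned table
# a downstream analyst might query.
daily_revenue = (
    orders
    .assign(order_date=orders["order_ts"].dt.date)
    .groupby(["order_date", "currency"], as_index=False)["amount"]
    .sum()
    .rename(columns={"amount": "total_revenue"})
)

# Writing Parquet requires pyarrow (or fastparquet) to be installed.
daily_revenue.to_parquet("daily_revenue.parquet", index=False)
```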
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.
Role & Responsibilities
- Ensure that the architecture and design of the platform remain top-notch with respect to scalability, availability, reliability, and maintainability.
- Act as a key technical contributor as well as a hands-on contributing member of the team.
- Own end-to-end availability and performance of features, driving rapid product innovation while ensuring a reliable service.
- Work closely with stakeholders such as Program Managers, Product Managers, the Reliability and Continuity Engineering (RCE) team, and the QE team to estimate and execute features/tasks independently.
- Maintain and drive tech-backlog execution for the non-functional requirements needed to keep the platform resilient.
- Assist in release planning and prioritization based on technical feasibility and engineering constraints.
- Continually find new ways to improve architecture and design while ensuring timely delivery and high quality.
1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, BQ optimization, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud - build pipelines (Python/Java) plus scripting, best practices, challenges (a minimal sketch follows this list)
3. Knowledge of batch and streaming data ingestion; building end-to-end data pipelines on GCP
4. Knowledge of databases (SQL, NoSQL), on-premises and on-cloud; SQL vs. NoSQL; types of NoSQL databases (at least 2)
5. Data warehouse concepts - beginner to intermediate level
6. Data modelling, GCP databases, DBSchema (or similar)
7. Hands-on data modelling for OLTP and OLAP systems
8. In-depth knowledge of conceptual, logical, and physical data modelling
9. Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same
10. Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
11. Working experience with at least one data modelling tool, preferably DBSchema or Erwin
12. Good understanding of GCP databases like AlloyDB, Cloud SQL, and BigQuery
13. Functional knowledge of the mutual fund industry is a plus
14. Should be willing to work from Chennai; office presence is mandatory
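To make items 1 and 2 concrete, here is a minimal, hypothetical Cloud Composer (Airflow) DAG that lands a GCS export into BigQuery; the project, bucket, dataset, and schedule are illustrative assumptions, not a prescribed setup:

```python
# Hypothetical Composer/Airflow DAG: load a daily GCS export into BigQuery.
# Project, bucket, dataset, and table names are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="gcs_to_bq_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # One task: ingest the day's newline-delimited JSON files into a raw table.
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="example-landing-bucket",
        source_objects=["orders/{{ ds }}/*.json"],
        destination_project_dataset_table="example-project.raw.orders",
        source_format="NEWLINE_DELIMITED_JSON",
        write_disposition="WRITE_TRUNCATE",
    )
```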
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement Dimensional and Fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary.
● Partner with the BI team to evaluate, design, and develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experiences/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers, and quality issues; and continuously validate reports and dashboards, suggesting improvements.
● Train business end-users, IT analysts, and developers.
Job Title: Lead Java Developer (Backend)
Experience Required: 8 to 15 Years
Open Positions: 5
Location: Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode: Open to Remote / Hybrid / Onsite
Notice Period: Immediate Joiner/30 Days or Less
About the Role:
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities:
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices: SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For:
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ years of hands-on engineering experience, with at least 2 years in a lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile:
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info:
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
Backend - Software Development Engineer III
Experience - 7+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams and MongoDB Solutions Architects.
Location - Chennai or Bangalore
- Relevant experience of 7+ years building high-performance back-end applications, with at least 3 projects delivered using the required technologies
- Good problem solving skills
- Strong mentoring capabilities
- Good understanding of software development life cycle
- Strong experience in system design and architecture
- Strong focus on quality of work delivered
- Excellent verbal and written communication skills
Required Technical Skills
- Extensive hands-on experience building high-performance web back-ends using Node.js and JavaScript/TypeScript
- Minimum two years of hands-on experience with NestJS
- Strong experience with the Express.js framework
- Implementation experience in monolithic and microservices architecture
- Hands-on experience with data modeling on MongoDB and any other relational or NoSQL databases
- Experience integrating with 3rd-party services such as cloud SDKs (Preferable X), payments, push notifications, authentication, etc.
- Hands-on experience with Redis, Kafka, or X
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM – 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (a minimal sketch follows this list).
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
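As a hedged illustration of the Raw → Silver → Gold promotion mentioned above, here is a minimal PySpark sketch; paths, table formats, and columns are assumptions for illustration, not the actual framework:

```python
# Hypothetical PySpark job: promote raw events to a cleaned Silver table
# and an aggregated Gold table. Paths and columns are illustrative; the
# "delta" format assumes the Delta Lake library is available (as on Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_to_gold").getOrCreate()

# Raw layer: ingested as-is, schema-on-read.
raw = spark.read.json("gs://example-lake/raw/events/")

# Silver layer: typed, deduplicated, invalid rows dropped.
silver = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())
)
silver.write.mode("overwrite").format("delta").save("gs://example-lake/silver/events/")

# Gold layer: business-level aggregate for reporting.
gold = silver.groupBy(F.to_date("event_ts").alias("event_date")).count()
gold.write.mode("overwrite").format("delta").save(
    "gs://example-lake/gold/daily_event_counts/"
)
```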
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
Basic Qualifications:
- A bachelor’s or master’s degree in Computer Science, Electronics Engineering, or a related field.
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
Architect
Experience - 12+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate architects eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams and MongoDB Solutions Architects.
Location - Chennai or Bangalore
● Relevant experience of 12+ years building high-performance applications, with at least 3 years as an architect.
● Good problem solving skills
● Strong mentoring capabilities
● Good understanding of software development life cycle
● Strong experience in system design and architecture
● Strong focus on quality of work delivered
● Excellent verbal and written communication skills
Required Technical Skills
● Extensive hands-on experience building high-performance applications using Node.js (JavaScript/TypeScript) and .NET/Golang/Java/Python.
● Strong experience with appropriate framework(s).
● Well-versed in monolithic and microservices architecture.
● Hands-on experience with data modeling on MongoDB and any other relational or NoSQL databases.
● Experience working with 3rd-party integrations ranging from authentication to cloud services, etc.
● Hands-on experience with Kafka or RabbitMQ.
● Hands-on experience with CI/CD pipelines and at least 1 cloud provider - AWS / GCP / Azure.
● Strong experience writing and maintaining clear documentation
Good to have skills:
● Experience working with frontend technologies - React.Js or Vue.Js or Angular.
● Extensive experience consulting with customers directly for defining architecture or system design.
● Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies

Apply Link - https://tally.so/r/wv0lEA
Key Responsibilities:
- Software Development:
- Design, implement, and optimise clean, scalable, and reliable code across [backend/frontend/full-stack] systems.
- Contribute to the development of microservices, APIs, or UI components as per the project requirements.
- System Architecture:
- Collaborate on the design and enhancement of system architecture.
- Analyse and identify opportunities for performance improvements and scalability.
- Code Reviews and Mentorship:
- Conduct thorough code reviews to ensure code quality, maintainability, and adherence to best practices.
- Mentor and support junior developers, fostering a culture of learning and growth.
- Agile Collaboration:
- Work within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, and retrospectives.
- Collaborate with Carbon Science, designers, and other stakeholders to translate requirements into technical solutions.
- Problem-Solving:
- Investigate, troubleshoot, and resolve complex issues in production and development environments.
- Contribute to incident management and root cause analysis to improve system reliability.
- Continuous Improvement:
- Stay up-to-date with emerging technologies and industry trends.
- Propose and implement improvements to existing codebases, tools, and development processes.
Qualifications:
Must-Have:
- Experience: 2–5 years of professional software development experience in [specify languages/tools, e.g., Java, Python, JavaScript, etc.].
- Education: Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- Technical Skills:
- Strong proficiency in [programming languages/frameworks/tools].
- Experience with cloud platforms like AWS, Azure, or GCP.
- Knowledge of version control tools (e.g., Git) and CI/CD pipelines.
- Understanding of data structures, algorithms, and system design principles.
Nice-to-Have:
- Experience with containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Knowledge of database technologies (SQL and NoSQL).
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent written and verbal communication skills.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
Backend - Software Development Engineer II
Experience - 4+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams and MongoDB Solutions Architects.
Location - Bangalore
Basic qualifications:
- Good problem solving skills
- Deep understanding of software development life cycle
- Excellent verbal and written communication skills
- Strong focus on quality of work delivered
- Relevant experience of 4+ years building high-performance backend applications, with at least 2 projects implemented using the required technologies
Required Technical Skills:
- Extensive hands-on experience building high-performance web back-ends using Node.js, with a minimum of 3+ years of hands-on experience in Node.js and JavaScript/TypeScript
- Hands-on project experience with Nest.js
- Strong experience with the Express.js framework
- Hands-on experience in data modeling and schema design in MongoDB
- Experience integrating with 3rd-party services such as cloud SDKs, payments, push notifications, authentication, etc.
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Experience with microservice architecture
- Experience working with other Relational and NoSQL Databases
- Experience with technologies such as Kafka and Redis
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies
Urgent Hiring: Senior Java Developers | Bangalore (Hybrid) 🚀
We are looking for experienced Java professionals to join our team! If you have the right skills and are ready to make an impact, this is your opportunity!
📌 Role: Senior Java Developer
📌 Experience: 6 to 9 Years
📌 Education: BE/BTech/MCA (Full-time)
📌 Location: Bangalore (Hybrid)
📌 Notice Period: Immediate Joiners Only
✅ Mandatory Skills:
🔹 Strong Core Java
🔹 Spring Boot (data flow basics)
🔹 JPA
🔹 Google Cloud Platform (GCP)
🔹 Spring Framework
🔹 Docker, Kubernetes (Good to have)

Position Name: Product Engineer (Backend Heavy)
Experience: 3 to 5 Years
Location: Bengaluru (Work From Office, 5 Days a Week)
Positions: 2
Notice Period: Immediate joiners or candidates serving notice (within 30 days)
Role Overview:
We’re looking for Product Engineers who are passionate about building scalable backend systems in the FinTech & payments domain. If you enjoy working on complex challenges, contributing to open-source projects, and driving impactful innovations, this role is for you!
What You’ll Do:
- Develop scalable APIs and backend services.
- Design and implement core payment systems.
- Take end-to-end ownership of product development from zero to one.
- Work on database design, query optimization, and system performance.
- Experiment with new technologies and automation tools.
- Collaborate with product managers, designers, and engineers to drive innovation.
What We’re Looking For:
- 3+ Years of professional backend development experience.
- Proficiency in any backend programming language (Ruby on Rails experience is a plus but not mandatory).
- Experience in building APIs and working with relational databases.
- Strong communication skills and ability to work in a team.
- Open-source contributions (minimum 50 stars on GitHub preferred).
- Experience in building and delivering 0 to 1 products.
- Passion for FinTech & payment systems.
- Familiarity with CI/CD, DevOps practices, and infrastructure management.
- Knowledge of payment protocols and financial regulations (preferred but not mandatory)
Main Technical Skills:
- Backend: Ruby on Rails, PostgreSQL
- Infrastructure: GCP, AWS, Terraform (fully automated infrastructure)
- Security: Zero-trust security protocol managed via Teleport
Java Developer with GCP
Skills: Java and Spring Boot, GCP, Cloud Storage, BigQuery, RESTful APIs
Exp: SA (6-10 Years)
Location: Bangalore, Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata
Notice Period: Immediate to 60 Days.
Kindly share your updated resume via WA - 91five000260seven

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
☁️ Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
at Bottle Lab Technologies Pvt Ltd

About SmartQ
A leading B2B Food-Tech company built on 4 pillars: great people, great food, great experience, and greater good. Solving complex business problems with our heart and analyzing possible solutions with our mind lie in our DNA. We are on the perpetual route of serving our clients wholeheartedly. Armed with the stability of an MNC and the agility of a start-up, we have spread across 17 countries, having collaborated and executed successfully with 600 clients. We have grown from strength to strength with a blend of exuberant youth and exceptional experience. Bengaluru, being our headquarters, is known as the innovation hub, and we have grown up to be the global leader in the institutional food-tech space. We were recently acquired by the world's largest foodservice company, Compass Group, which has an annual turnover of 20 billion USD.
In this role, you will:
1. Collaborate with Product & Design Teams: Work closely with the Product team to ensure that we are building a scalable, bug-free platform. You will actively participate in product and design discussions, offering valuable insights from a backend perspective to align technology with business goals.
2. Drive Adoption of New Technologies: You will lead brainstorming sessions and define a clear direction for the backend team to incorporate the latest technologies into day-to-day development, continuously optimizing for performance, scalability, and efficiency.
3. RESTful API Design & Development: You will ensure that the APIs you design and develop are well-structured, follow best practices, and are suitable for consumption by frontend teams across multiple platforms. A key part of your role is making sure these APIs are scalable and maintainable (a minimal sketch follows this list).
4. Third-Party Integration Support: As we sometimes partner with third-party providers to expedite our market entry, you’ll work closely with these partners to integrate their solutions into our system. This involves participating in calls, finding the best integration methods, and providing ongoing support.
5. AI and Prompt Engineering: With AI becoming more integral to backend development, you’ll leverage AI to speed up development processes and maintain best practices. Familiarity with prompt engineering and AI-driven problem-solving is a significant plus in our team.
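As a hedged sketch of the kind of REST endpoint described in point 3 (Flask is one of the frameworks the posting lists; the route, fields, and in-memory store below are illustrative assumptions):

```python
# Hypothetical Flask sketch: a small, versioned REST resource.
# Route names, fields, and the in-memory store are illustrative assumptions.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a real datastore.
MENU = {1: {"id": 1, "name": "Masala Dosa", "price": 80}}

@app.get("/api/v1/menu/<int:item_id>")
def get_menu_item(item_id: int):
    item = MENU.get(item_id)
    if item is None:
        return jsonify(error="item not found"), 404
    return jsonify(item)

@app.post("/api/v1/menu")
def create_menu_item():
    payload = request.get_json(force=True)
    new_id = max(MENU) + 1
    MENU[new_id] = {"id": new_id, **payload}
    return jsonify(MENU[new_id]), 201

if __name__ == "__main__":
    app.run(debug=True)
```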
Must-Have Requirements:
- Strong expertise in Python, microservices, backend development and scalable architectures.
- Proficiency in designing and building REST APIs.
- Experience with unit testing in any testing framework and maintaining 100% code coverage.
- Experience in working with NoSQL DB.
- Strong understanding of any Cloud platforms such as - GCP/AWS/Azure.
- Profound knowledge of serverless design patterns.
- Familiarity with Django, Webapp2, Flask, or similar web app frameworks.
- Experience writing unit tests using any testing framework.
- Experience collaborating with product and design teams.
- Familiarity with integrating third-party solutions.
Good-to-Have Requirements:
- Educational background includes a degree (B.E/B.Tech/M.Tech) in Computer Science, Engineering, or a related field.
- 4+ years’ experience as a backend/cloud developer.
- Good understanding of Google Cloud Platform.
- Knowledge of AI and how to leverage it for day-to-day tasks in backend development.
- Familiarity with prompt engineering to enhance productivity.
- Prior experience working with global or regional teams.
- Experience with agile methodologies and working within cross-functional teams.
Job Title: Chief Technology Officer (CTO) – Blockchain & Web3
Location: Bangalore & Gurgaon
Job Type: Full-Time, On-Site
Working Days: 6 Days
About the Role:
- We are seeking an experienced and visionary Chief Technology Officer (CTO) to lead our Blockchain & Web3 initiatives.
- The ideal candidate will have a strong technical background in Blockchain, Distributed Ledger Technology (DLT), Smart Contracts, DeFi, and Web3 applications.
- As a CTO, you will be responsible for defining and implementing the technology roadmap, leading a high-performing tech team, and driving innovation in the Blockchain and Web3 space.
Key Responsibilities:
- Define and execute the technical strategy and roadmap for Blockchain & Web3 products and services.
- Lead the architecture, design, and development of scalable, secure, and efficient blockchain-based applications.
- Oversee Smart Contract development, Layer-1 & Layer-2 solutions, DeFi, NFTs, and decentralized applications (dApps).
- Manage and mentor a team of engineers, developers, and blockchain specialists to ensure high-quality product delivery.
- Drive R&D initiatives to stay ahead of emerging trends and advancements in Blockchain & Web3 technologies.
- Collaborate with cross-functional teams including Product, Marketing, and Business Development to align technology with business goals.
- Ensure regulatory compliance, security, and scalability of Blockchain solutions.
- Build and maintain relationships with industry partners, investors, and technology vendors to drive innovation.
Required Qualifications & Experience:
- 10+ years of overall experience in software development, with at least 5 years in Blockchain & Web3 technologies.
- Deep understanding of Blockchain protocols (Ethereum, Solana, Polkadot, Hyperledger, etc.), consensus mechanisms, cryptographic principles, and tokenomics.
- Hands-on experience with Solidity, Rust, Go, Node.js, Python, or other blockchain programming languages.
- Proven track record of building and scaling decentralized applications (dApps), DeFi platforms, or NFT marketplaces.
- Experience with cloud infrastructure (AWS, Azure, GCP) and DevOps best practices.
- Strong leadership and management skills with experience in building and leading high-performing teams.
- Excellent problem-solving skills with the ability to work in a fast-paced, high-growth environment.
- Strong understanding of Web3, DAOs, Metaverse, and the evolving regulatory landscape.
Preferred Qualifications:
- Prior experience in a CTO, VP Engineering, or similar leadership role.
- Experience in fundraising, investor relations, and strategic partnerships.
- Knowledge of cross-chain interoperability and Layer-2 scaling solutions.
- Understanding of data privacy, security, and compliance regulations related to Blockchain & Web3.
Company Overview
Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability.
Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices.
- Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency.
- Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users.
- Implement and manage CI/CD pipelines to automate software delivery and ensure code quality.
- Manage and configure cloud-based infrastructure services to optimize performance and cost.
- Collaborate with development teams to design and implement scalable, reliable, and secure applications.
- Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues.
- Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data.
- Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability.
- Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role
- Strong understanding of cloud computing principles and experience with AWS
- Experience building and supporting complex CI/CD pipelines using GitHub
- Experience building and supporting infrastructure as code using Terraform
- Proficiency in scripting and automation tools
- Solid understanding of networking concepts and protocols
- Understanding of security best practices and experience implementing security controls in cloud environments
- Knowledge of modern security requirements such as SOC 2, HIPAA, and HITRUST is a solid advantage.
Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Responsibilities:
• Design, implement and maintain CI/CD pipelines using GitHub Actions/Jenkins, Kubernetes, Helm, and ArgoCD.
• Deploy and manage Kubernetes clusters using AWS.
• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in on-call rotation and provide support for production issues.
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering or a related field.
• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS.
• Experience with monitoring tools such as Datadog and ELK.
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
Kindly apply at https://www.wohlig.com/careers
About the Role:
We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.
Key Responsibilities:
Cloud Management:
- Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
- Ensure high availability, scalability, and security of cloud resources.
Containerization & Orchestration:
- Develop and manage containerized applications using Docker.
- Deploy, scale, and manage Kubernetes clusters.
CI/CD Pipelines:
- Build and maintain robust CI/CD pipelines to automate the software delivery process.
- Implement monitoring and alerting to ensure pipeline efficiency.
Version Control & Collaboration:
- Manage code repositories and workflows using Git.
- Collaborate with development teams to optimize branching strategies and code reviews.
Automation & Scripting:
- Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
- Write scripts to optimize and maintain workflows.
Monitoring & Logging:
- Implement and maintain monitoring solutions to ensure system health and performance.
- Analyze logs and metrics to troubleshoot and resolve issues.
Required Skills & Qualifications:
- 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
- Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
- Hands-on experience building and managing CI/CD pipelines.
- Proficient in using Git for version control.
- Experience with scripting languages such as Bash, Python, or PowerShell.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Solid understanding of networking, security, and system administration.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and teamwork skills.
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with serverless architectures and microservices.

A niche, specialist position in an interdisciplinary team focused on end-to-end solutions. Projects range from proof-of-concept innovative applications and parallel implementations per end-user requests to scaling up and continuous monitoring for improvements. The majority of projects focus on providing automation solutions, via both custom solutions and adapting generic machine learning standards to specific use cases/domains.
Clientele includes major publishers from the US and Europe, pharmaceutical bigwigs, and government-funded projects.
As a Senior Fullstack Developer, you will be responsible for designing, building, and maintaining scalable and performant web applications using modern technologies. You will work with cutting-edge tools and cloud infrastructure (primarily Google Cloud), build front ends with React.js and TypeScript, and implement robust back-end services with Koa.js, MongoDB, and Redis, while ensuring reliable and efficient monitoring with OpenTelemetry and logging with Bunyan. Your expertise in CI/CD pipelines and modern testing frameworks will be key to maintaining a smooth and efficient software development lifecycle.
Key Responsibilities:
- Fullstack Development: Design, develop, and maintain web applications using JavaScript (Node.js for the back end and React.js with TypeScript for the front end).
- Cloud Infrastructure: Leverage Google Cloud services (like Compute Engine, Cloud Storage, Pub/Sub, etc.) to build scalable and resilient cloud solutions.
- API Development: Implement RESTful APIs and microservices with Koa.js, ensuring high performance, security, and scalability.
- Database Management: Manage MongoDB databases for storing and retrieving application data, and use Redis for caching and session management.
- Logging and Monitoring: Utilize Bunyan for structured logging and OpenTelemetry for distributed tracing and monitoring to ensure system health and performance.
- CI/CD: Design, implement, and maintain efficient CI/CD pipelines for continuous integration and deployment, ensuring fast and reliable code delivery.
- Testing & Quality Assurance: Write unit and integration tests using Jest, Mocha, and React Testing Library to ensure code reliability and maintainability.
- Collaboration: Work closely with front-end and back-end engineers to deliver high-quality software solutions, following agile development practices.
- Optimization & Scaling: Identify performance bottlenecks, troubleshoot production issues, and scale the system as needed.
- Code Reviews & Mentorship: Conduct peer code reviews, share best practices, and mentor junior developers to improve team efficiency and code quality.
Must-Have Skills:
- Google Cloud (GCP): Hands-on experience with various Google Cloud services (Compute Engine, Cloud Storage, Pub/Sub, Firestore, etc.) for building scalable applications.
- React.js: Strong experience in building modern, responsive user interfaces with React.js and TypeScript.
- Koa.js: Strong experience in building web servers and APIs with Koa.js.
- MongoDB & Redis: Proficiency in working with MongoDB (NoSQL databases) and Redis for caching and session management.
- Bunyan: Experience using Bunyan for structured logging and tracking application events.
- OpenTelemetry Ecosystem: Hands-on experience with the OpenTelemetry ecosystem for monitoring and distributed tracing.
- CI/CD: Proficient in setting up CI/CD pipelines using tools like CircleCI, Jenkins, or GitLab CI.
- Testing Frameworks: Solid understanding and experience with Jest, Mocha, and React Testing Library for testing both back-end and front-end applications.
- JavaScript & Node.js: Strong proficiency in JavaScript (ES6+), and experience working with Node.js for back-end services.
Desired Skills & Experience:
- Experience with other cloud platforms (AWS, Azure).
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
- Experience working with TypeScript.
- Knowledge of other logging and monitoring tools.
- Familiarity with agile methodologies and project management tools (JIRA, Trello, etc.).
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5-10 years of hands-on experience as a Fullstack Developer.
- Strong problem-solving skills and ability to debug complex systems.
- Excellent communication skills and ability to work in a team-oriented, collaborative environment.
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes
Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs (a minimal sketch follows).
- Ensure optimal performance and low-latency for API endpoints, focusing on real-time data processing.
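As a hedged sketch of such a service (the endpoint name, request shape, and stubbed model call are illustrative assumptions, not the actual codebase):

```python
# Hypothetical FastAPI service exposing a text-generation model.
# generate_text() is a stub standing in for a real LLM call (e.g., a
# fine-tuned GPT/LLaMA pipeline); names and fields are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="genai-service")

class GenerateRequest(BaseModel):
    prompt: str
    max_tokens: int = 128

class GenerateResponse(BaseModel):
    completion: str

def generate_text(prompt: str, max_tokens: int) -> str:
    # Stub: a real implementation would invoke the loaded model here.
    return f"[model output for: {prompt[:40]}...]"

@app.post("/v1/generate", response_model=GenerateResponse)
def generate(req: GenerateRequest) -> GenerateResponse:
    return GenerateResponse(completion=generate_text(req.prompt, req.max_tokens))
```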
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements (see the short sketch after this list).
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
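As a hedged taste of working against these services programmatically, here is a small boto3 sketch that inspects running EC2 instances and their security groups; the region and tag conventions are assumptions, and actual provisioning in this role would typically be expressed declaratively in Terraform or CloudFormation:

```python
# Hypothetical boto3 sketch: list running EC2 instances with their Name tag,
# instance type, and attached security groups. Region and tags are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            name = next(
                (t["Value"] for t in instance.get("Tags", []) if t["Key"] == "Name"),
                "<unnamed>",
            )
            groups = [g["GroupName"] for g in instance["SecurityGroups"]]
            print(instance["InstanceId"], name, instance["InstanceType"], groups)
```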
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development.
Someshwara Software Pvt Ltd is an equal opportunity employer.
We celebrate diversity and are dedicated to creating an inclusive environment for all employees.