50+ Google Cloud Platform (GCP) Jobs in India


Job Title: AI Solutioning Architect – Healthcare IT
Role Summary:
The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).
Key Responsibilities:
- Architect scalable AI solutions from data ingestion to deployment.
- Align AI initiatives with business objectives and regulatory requirements (HIPAA).
- Collaborate with cross-functional teams to deliver AI projects.
- Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
- Mentor technical teams and ensure best practices in MLOps.
- Communicate complex concepts to diverse stakeholders.
Qualifications:
- Bachelor’s/Master’s in Computer Science or related field.
- 12+ years in software development/architecture with strong AI/ML focus.
- Experience in healthcare IT and compliance (HIPAA).
- Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
- Hands-on with GCP (preferred) or other cloud platforms (see the sketch after this list).
- Strong leadership, problem-solving, and communication skills.
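To make the GCP angle concrete: a minimal sketch of calling a deployed model on Vertex AI from Python. This is an illustration only; the project, region, and endpoint ID below are hypothetical placeholders, not details from the posting.

```python
# Sketch: calling a deployed Vertex AI endpoint from Python.
# Project, region, and endpoint ID are hypothetical placeholders.
from google.cloud import aiplatform

aiplatform.init(project="example-healthcare-project", location="us-central1")

endpoint = aiplatform.Endpoint(
    "projects/example-healthcare-project/locations/us-central1/endpoints/1234567890"
)

# Each instance must match the schema the deployed model expects.
prediction = endpoint.predict(instances=[{"age": 54, "bmi": 27.1}])
print(prediction.predictions)
```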

Job Title : Lead Web Developer / Frontend Engineer
Experience Required : 10+ Years
Location : Bangalore (Hybrid – 3 Days Work From Office)
Work Timings : 11:00 AM to 8:00 PM IST
Notice Period : Immediate or Up to 30 Days (Preferred)
Work Mode : Hybrid
Interview Mode : Face-to-Face mandatory (for Round 2)
Role Overview :
We are hiring a Lead Frontend Engineer with 10+ Years of experience to drive the development of scalable, modern, and high-performance web applications.
This is a hands-on technical leadership role focused on React.js, micro-frontends, and Backend for Frontend (BFF) architecture, requiring both coding expertise and team leadership skills.
Mandatory Skills :
React.js, JavaScript/TypeScript, HTML, CSS, micro-frontend architecture, Backend for Frontend (BFF), Webpack, Jenkins (CI/CD), GCP, RDBMS/SQL, Git, and team leadership.
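Since the BFF pattern is a named requirement, a quick illustration helps. The role's stack is React/Node, but as a language-neutral sketch of the pattern, here is a minimal Python/FastAPI BFF that aggregates two hypothetical upstream services into one frontend-shaped payload:

```python
# Backend for Frontend (BFF) pattern sketch: a single endpoint tailored to one
# frontend view aggregates several upstream services. URLs are hypothetical.
import asyncio

import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/bff/dashboard/{user_id}")
async def dashboard(user_id: str):
    async with httpx.AsyncClient() as client:
        profile, orders = await asyncio.gather(
            client.get(f"https://users.internal/api/{user_id}"),
            client.get(f"https://orders.internal/api/{user_id}/recent"),
        )
    # Shape the payload exactly as this frontend view needs it.
    return {
        "name": profile.json()["name"],
        "recentOrders": orders.json()[:5],
    }
```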
Core Responsibilities :
- Design and develop cloud-based web applications using React.js, HTML, CSS.
- Collaborate with UX/UI designers and backend engineers to implement seamless user experiences.
- Lead and mentor a team of frontend developers.
- Write clean, well-documented, scalable code using modern JavaScript/TypeScript practices.
- Implement CI/CD pipelines using Jenkins, deploy applications to CDNs.
- Integrate with GCP services, optimize front-end performance.
- Stay updated with modern frontend technologies and design patterns.
- Use Git for version control and collaborative workflows.
- Implement JavaScript libraries for web analytics and performance monitoring.
Key Requirements :
- 10+ Years of experience as a frontend/web developer.
- Strong proficiency in React.js, JavaScript/TypeScript, HTML, CSS.
- Experience with micro-frontend architecture and Backend for Frontend (BFF) patterns.
- Proficiency in frontend design frameworks and libraries (jQuery, Node.js).
- Strong understanding of build tools like Webpack, CI/CD using Jenkins.
- Experience with GCP and deploying to CDNs.
- Solid experience in RDBMS, SQL.
- Familiarity with Git and agile development practices.
- Excellent debugging, problem-solving, and communication skills.
- Bachelor’s/Master’s in Computer Science or a related field.
Nice to Have :
- Experience with Node.js.
- Previous experience working with web analytics frameworks.
- Exposure to JavaScript observability tools.
Interview Process :
1. Round 1 : Online Technical Interview (via Geektrust – 1 Hour)
2. Round 2 : Face-to-Face Interview with the Indian team in Bangalore (3 Hours – Mandatory)
3. Round 3 : Online Interview with CEO (30 Minutes)
Important Notes :
- Face-to-face interview in Bangalore is mandatory for Round 2.
- Preference given to candidates currently in Bangalore or willing to travel for interviews.
- Remote applicants who cannot attend the in-person round will not be considered.

Why This Role Matters
We’re looking for a Principal Engineer to lead the architecture and execution of our GenAI-powered, self-serve marketing platforms. You will work directly with the CEO to shape, build, and scale products that change how marketers interact with data and AI. This is intrapreneurship in action — not a sandbox innovation lab, but a real-world product with traction, velocity, and high stakes.
What You'll Do
- Co-own product architecture and direction alongside the CEO.
- Build GenAI-native, full-stack platforms from MVP to scale — powered by LLMs, agents, and predictive AI.
- Own the full stack: React (frontend), Node.js/Python (backend), GCP (infra), BigQuery (data), and vector databases (AI).
- Lead a lean, high-caliber team with a hands-on, unblock-and-coach mindset.
- Drive rapid iteration with rigor, balancing short-term delivery with long-term resilience.
- Ensure scalability, observability, and fault tolerance in multi-tenant, cloud-native environments.
- Bridge business and tech — aligning execution with evolving user and market insights.
What You Bring
- 8–12 years of experience building and scaling full-stack, data-heavy or AI-driven products.
- Fluency in React, Node.js, and Google Cloud (Functions, BigQuery, Cloud SQL, Airflow, etc.).
- Hands-on experience with GenAI tools (LangChain, OpenAI APIs, LlamaIndex) is a bonus (see the sketch after this list).
- Track record of shipping products from ambiguity to impact.
- Strong product mindset — your goal is user value, not just elegant code.
- Architectural leadership with ownership of engineering rigor and scaling best practices.
- Startup or founder DNA — you’ve built things from scratch and know how to move fast without breaking things.
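For context on what GenAI-native implies at the code level, a minimal sketch of a single LLM call with the OpenAI Python client; the model name and prompts are illustrative assumptions, not details from the posting:

```python
# Sketch: one LLM call of the kind a GenAI-native platform composes into agents.
# Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You summarize campaign metrics for marketers."},
        {"role": "user", "content": "CTR fell 12% week over week while spend stayed flat. Likely causes?"},
    ],
)
print(response.choices[0].message.content)
```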
Who You Are
- A former founder, senior IC, or tech lead who’s done zero-to-one and 1-to-n scaling.
- Hungry for ownership and velocity — frustrated by bureaucracy or stagnation.
- You code because you care about solving real problems for real users.
- You’re pragmatic, hands-on, and grounded in first principles.
- You understand that great software isn't just shipped — it's hardened, maintained, and evolves with minimal manual effort.
- You’re open to evolving into a founding engineer role with influence over the tech vision and culture.
What You Get
- Equity in a high-growth product-led startup.
- A chance to build global products out of India with full-stack and GenAI innovation.
- Access to high-context decision-making and direct collaboration with the CEO.
- A tight, ego-free team and a culture that values clarity, ownership, learning, and candor.
Why YOptima?
YOptima is redefining how leading marketers unlock growth through full-funnel, AI-powered media solutions. As part of our growth journey, this is your opportunity to own the growth charter for leading brands and agencies globally and shape the narrative of a next-generation marketing platform.
Ready to lead, build, and scale?
We’d love to hear from you.


About the Role:
We’re looking for a skilled developer to build and maintain web and mobile apps using React, React Native, and Node.js. You’ll work on both the frontend and backend, collaborating with our team to deliver high-quality products.
What You’ll Do:
- Build and maintain full stack applications for web and mobile
- Write clean, efficient code with React, React Native, and Node.js
- Work with designers and other developers to deliver new features
- Debug, troubleshoot, and optimize existing apps
- Stay updated on the latest tech and best practices
What We’re Looking For:
- Solid experience with React, React Native, and Node.js
- Comfortable building both web and mobile applications
- Good understanding of REST APIs and databases
- Familiar with Git and agile workflows
- Team player with clear communication skills
Nice to Have:
- Experience with testing and CI/CD
- Knowledge of UI/UX basics

Product company for financial operations automation platform

Mandatory Criteria
- Candidate must have strong hands-on experience with Kubernetes: at least 2 years in production environments.
- Candidate should have expertise in at least one public cloud platform (GCP preferred; AWS, Azure, or OCI).
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Candidate should have strong backend experience.
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
Note: Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
Required Skills
- Strong hands-on experience with Kubernetes: at least 2 years in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
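Since BigQuery integration is called out above, a minimal sketch of querying BigQuery with the official Python client; the project, dataset, and query are hypothetical placeholders:

```python
# Sketch: running an analytics query against BigQuery from Python.
# Project, dataset, and table names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT account_id, SUM(amount) AS total
    FROM `example-project.finance.transactions`
    GROUP BY account_id
    ORDER BY total DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row.account_id, row.total)
```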
Nice to Have
- Knowledge of NoSQL databases or event-driven/message-based architectures.
- Experience with serverless services, managed data pipelines, or data lake platforms.
Requirements
- Bachelor's/Master's in Computer Science or a related field
- 5-8 years of relevant experience
- Proven track record of Team Leading/Mentoring a team successfully.
- Experience with web technologies and microservices architecture both frontend and backend.
- Java, Spring Framework, Hibernate
- MySQL, Mongo, Solr, Redis
- Kubernetes, Docker
- Strong understanding of Object-Oriented Programming, Data Structures, and Algorithms.
- Excellent teamwork skills, flexibility, and ability to handle multiple tasks.
- Experience with API Design, ability to architect and implement an intuitive customer and third-party integration story
- Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services
- Exceptional design and architectural skills
- Experience with cloud providers/platforms like GCP and AWS
Roles & Responsibilities
- Develop new user-facing features.
- Work alongside the product team to understand requirements; design, develop, and iterate while thinking through complex architecture.
- Writing clean, reusable, high-quality, high-performance, maintainable code.
- Encourage innovation and efficiency improvements to ensure processes are productive.
- Ensure the training and mentoring of the team members.
- Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed.
- Research and apply new technologies, techniques, and best practices.
- Team mentorship and leadership.

Product company for financial operations automation platform

Key Skills Required:
- Strong hands-on experience with Terraform
- Proficiency in CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps, etc.)
- Experience working on Azure or GCP cloud platforms (at least one is mandatory)
- Good understanding of DevOps practices

Backend Engineer - Python
Location: Bangalore, India
Experience Required: 2-3 years minimum
About Us:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.
Key Requirements
Technical Skills
- Python Expertise: Advanced proficiency in Python with deep understanding of frameworks like Django, FastAPI, or Flask (a minimal FastAPI sketch follows this list)
- Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization
- API Development: Strong experience in designing and implementing RESTful APIs and GraphQL
- Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services
- Containerization: Proficiency with Docker and Kubernetes
- Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka
- Version Control: Advanced Git workflows and collaboration
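As a concrete anchor for the frameworks named above, a minimal FastAPI service sketch; the resource shape is invented purely for illustration:

```python
# Minimal FastAPI sketch: one REST resource with request validation.
# The Task resource is invented for illustration.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Task(BaseModel):
    id: int
    title: str
    done: bool = False

TASKS: dict[int, Task] = {}

@app.post("/tasks", status_code=201)
def create_task(task: Task) -> Task:
    TASKS[task.id] = task
    return task

@app.get("/tasks/{task_id}")
def get_task(task_id: int) -> Task:
    if task_id not in TASKS:
        raise HTTPException(status_code=404, detail="task not found")
    return TASKS[task_id]
```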
Experience Requirements
- Minimum 2-3 years of backend development experience
- Proven track record of working on enterprise-level applications
- Experience building scalable systems handling high traffic loads
- Background in microservices architecture and distributed systems
- Experience with CI/CD pipelines and DevOps practices
Responsibilities
- Design, develop, and maintain robust backend services and APIs
- Optimize application performance and scalability
- Collaborate with frontend teams and product managers
- Implement security best practices and data protection measures
- Write comprehensive tests and maintain code quality
- Participate in code reviews and architectural discussions
- Monitor system performance and troubleshoot production issues
Preferred Qualifications
- Knowledge of caching strategies (Redis, Memcached)
- Understanding of software architecture patterns
- Experience with Agile/Scrum methodologies
- Open source contributions or personal projects



Who We Are
DAITA is a German AI start-up. We’re transforming the fashion supply chain with AI-powered agents that automate the mundane, freeing teams to focus on creativity, strategy, and growth.
After a successful research phase spanning 8 countries across 3 continents—gathering insights from Indian cotton fields to German retailers—we’ve secured pre-seed funding and key industry partnerships.
Now, we’re building our MVP to deliver speed, precision, and ease to one of the world’s biggest industries.
We’re set on hypergrowth, aiming to redefine textiles with intelligent, scalable tech—and this is your chance to join the ground floor of something huge.
What You’ll Do
As our Chief Engineer, you’ll lead the technical charge to make our vision real, starting with our MVP in a 3–5 month sprint. You’ll:
- Design and code an AI-driven/agent system (leveraging machine learning and NLP) with integrated workflow automation to streamline and automate tasks in the textile supply chain, owning it from scratch to finish.
- Develop backend systems, utilize cutting-edge tools, critically assess manpower needs beyond yourself, oversee a small support team, and drive toward our aggressive launch timeline.
- Collaborate closely with our founders to align tech with ambitious goals and client input, ensuring automated workflows deliver speed, precision, and ease to textile industry stakeholders.
- Build an MVP that scales to millions, integrating APIs and data pipelines, using major cloud platforms (AWS, Azure, Google Cloud)—keeping us nimble now and primed for explosive growth later.
What You Bring
- 2–5 years of experience at high-growth startups or leading tech firms—where you shipped real products, solved complex problems, and moved fast.
- End-to-end ownership: You've taken tech projects from zero to one—built systems from scratch, made architecture decisions, handled messy edge cases, and delivered under pressure.
- Team Leadership: 1–3 years leading engineering teams, ideally including recruitment and delivery in India.
- Technical horsepower: AI agent experience; strong across full-stack or backend engineering, ML/NLP integration, cloud architecture, and API/data pipeline development. Experience with workflow automation tools and platforms (e.g., Apache Airflow, UiPath, or similar) to automate processes, ideally in supply chain or textiles (a minimal Airflow sketch follows this list). You can code an MVP solo if needed.
- Resource Clarity: Bring as much technical expertise as possible to build our MVP, and if you can’t own every piece, clearly identify the specific areas where you’ll need team members to deliver on time.
- Vision Alignment: You think like a builder, taking ownership of the product and team as if it were your own, while partnering closely with the founders to execute their vision with trust and decisiveness.
- Execution DNA: You ship fast, iterate intelligently, and know when to be scrappy vs. when to be solid.
- Problem-First Thinking: You’re obsessed with solving real user problems, understanding stakeholder needs beyond just writing beautiful code.
- High-Energy Leadership: Hands-on, humble, and always ready to jump into the trenches. You lead by doing.
- Geographical Fit: India-based, ideally with previous exposure to international teams or founders.
- Values-driven: You live our culture—live in the future, move fast, one team, and character above all.
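For the workflow-automation requirement, a minimal Airflow DAG sketch; the task logic and schedule are placeholders, and it assumes Airflow 2.x:

```python
# Sketch: a tiny Airflow 2.x DAG chaining two supply-chain steps.
# Task bodies and schedule are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def fetch_orders():
    print("pull order updates from supplier APIs")

def notify_team():
    print("push exceptions to the ops channel")

with DAG(
    dag_id="textile_supply_chain_sync",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    fetch = PythonOperator(task_id="fetch_orders", python_callable=fetch_orders)
    notify = PythonOperator(task_id="notify_team", python_callable=notify_team)
    fetch >> notify
```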
Why Join Us?
- Be the technical linchpin of a hypergrowth startup—build the MVP that launches us into the stratosphere.
- Competitive salary and equity options to negotiate—own a piece of something massive.
- On-site in our Tiruppur (Tamil Nadu) office for 2 months to sync with the German founders, then remote flexibility long-term.
- A full-time role demanding full availability—put in the time needed to smash deadlines and reshape the second-biggest industry on Earth with a team that moves fast and rewards hustle.


Job Title - Python Developer
Exp – 4 to 6 years
Location – Pune / Mumbai / Bangalore
JD below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark (see the sketch after this list)
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
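A minimal PySpark ETL sketch matching the responsibilities above; file paths and column names are hypothetical placeholders:

```python
# Sketch: a small PySpark ETL step: read, clean, aggregate, write.
# File paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

clean = (
    orders.dropna(subset=["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
)

daily = clean.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

daily.write.mode("overwrite").parquet("/data/curated/daily_revenue")
spark.stop()
```
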
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancer, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tooling.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads (see the sketch below).
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
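A minimal sketch of a Python HTTP handler of the kind deployed to Cloud Functions (and runnable on Cloud Run) via the functions-framework; the handler body is illustrative:

```python
# Sketch: an HTTP endpoint deployable to Cloud Functions or Cloud Run
# using the functions-framework. Handler logic is illustrative.
import functions_framework

@functions_framework.http
def handle(request):
    name = request.args.get("name", "world")
    return {"message": f"hello {name}"}, 200
```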
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Who we are
At CoinCROWD, we're building the next-gen wallet for real-world crypto utility. Our flagship product, CROWD Wallet, is secure, intuitive, gasless, and designed to bring digital currencies into everyday spending, from a coffee shop to cross-border payments.
We're redefining the wallet experience for everyday users, combining the best of Web3 + AI to create a secure, scalable, and delightful platform.
We're more than just a blockchain company; we're an AI-native, crypto-forward startup. We ship fast, think long, and believe in building agentic, self-healing infrastructure that can scale across geographies and blockchains. If that excites you, let's talk.
What You'll Be Doing :
As the DevOps Lead at CoinCROWD, you'll own our infrastructure from end to end, designing, deploying, and scaling secure systems to support blockchain transactions, AI agents, and token operations across global users.
You will :
- Lead the CI/CD, infra automation, observability, and multi-region deployments of CoinCROWD products.
- Manage cloud and container infrastructure using GCP, Docker, Kubernetes, Terraform.
- Deploy and maintain scalable, secure blockchain infrastructure using QuickNode, Alchemy, Web3Auth, and other Web3 APIs.
- Implement infrastructure-level AI agents or scripts for auto-scaling, failure prediction, anomaly detection, and alert management (using LangChain, LLMs, or tools like n8n).
- Ensure 99.99% uptime for wallet systems, APIs, and smart contract layers.
- Build and optimize observability across on-chain/off-chain systems using tools like Prometheus, Grafana, Sentry, Loki, and the ELK Stack (see the sketch after this list).
- Create auto-healing, self-monitoring pipelines that reduce human ops time via Agentic AI workflows.
- Collaborate with engineering and security teams on smart contract deployment pipelines, token rollouts, and app store release automation.
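On the observability point, a minimal sketch of exposing custom service metrics for Prometheus to scrape from Python; metric names and the simulated work are invented:

```python
# Sketch: exposing custom wallet metrics for Prometheus to scrape.
# Metric names and the simulated work are invented for illustration.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

TXNS = Counter("wallet_transactions_total", "Wallet transactions processed")
LATENCY = Histogram("wallet_txn_seconds", "Transaction processing latency")

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at :9100/metrics
    while True:
        with LATENCY.time():
            time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
        TXNS.inc()
```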
Agentic Ops : What it means
- Use GPT-based agents to auto-document infra changes or failure logs.
- Run LangChain agents that triage alerts, perform log analysis, or suggest infra optimizations.
- Build CI/CD workflows that self-update or auto-tune based on system usage.
- Integrate AI to detect abnormal wallet behaviors, fraud attempts, or suspicious traffic spikes.
What We're Looking For :
- 5 to 10 years of DevOps/SRE experience, with at least 2 to 3 years in Web3, fintech, or high-scale infra.
- Deep expertise with Docker, Kubernetes, Helm, and cloud providers (GCP preferred).
- Hands-on with Terraform, Ansible, GitHub Actions, Jenkins, or similar IAC and pipeline tools.
- Experience maintaining or scaling blockchain infra (EVM nodes, RPC endpoints, APIs).
- Understanding of smart contract CI/CD, token lifecycle (ICO, vesting, etc.), and wallet integrations.
- Familiarity with AI DevOps tools, or interest in building LLM-enhanced internal tooling.
- Strong grip on security best practices, key management, and secrets infrastructure (Vault, SOPS, AWS KMS).
Bonus Points :
- You've built or run infra for a token launch, DEX, or high-TPS crypto wallet.
- You've deployed or automated a blockchain node network at scale.
- You've used AI/LLMs to write ops scripts, manage logs, or analyze incidents.
- You've worked with systems handling real-money movement with tight uptime and security requirements.
Why Join CoinCROWD :
- Equity-first model: Build real value as we scale.
- Be the architect of infrastructure that supports millions of real-world crypto transactions.
- Build AI-powered ops that scale without a 24/7 pager culture
- Work remotely with passionate people who ship fast and iterate faster.
- Be part of one of the most ambitious crossovers of AI + Web3 in 2025.


We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends.
What will you work on?
- Interface with clients
- Recommend tech stacks
- Define end-to-end logical and cloud-native architectures
- Define APIs
- Integrate with 3rd party systems
- Create architectural solution prototypes
- Hands-on coding, team lead, code reviews, and problem-solving
What Makes You A Great Fit?
- 5+ years of software experience
- Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
- Solid expertise and hands-on experience in Python with Flask or Django
- Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
- Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
- Knowledge of DevOps practices
- Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
- Excellent communication skills, verbal and written
About Us
We offer CTO-as-a-service and Product Development for Startups. We value our employees and provide them an intellectually stimulating environment where everyone's ideas and contributions are valued.


Job Overview:
JD for Data Analyst:
- Strong proficiency in Python programming.
- Preferred knowledge of cloud technologies, especially in Google Cloud Platform (GCP).
- Experience with visualization tools such as Grafana, PowerBI, and Tableau.
- Good to have knowledge of AI/ML models.
- Must have extensive knowledge in Python analytics, particularly in exploratory data analysis (EDA).


LendFlow is an AI-powered home loan assessment platform that helps mortgage brokers and lenders save hours by automating document analysis, income validation, and serviceability assessment. We turn complex financial documents into clear insights—fast.
We’re building a smart assistant that ingests client docs (bank statements, payslips, loan summaries) and uses modular AI agents to extract, classify, and summarize financial data in minutes, not hours. Think OCR + AI agents + compliance-ready outputs.
🛠️ What You’ll Be Building
As part of our early technical team, you’ll help us develop and launch our MVP. Key modules include:
- Document ingestion and OCR processing (Textract, Document AI; see the sketch after this list)
- AI agent workflows using LangChain or CrewAI
- Serviceability calculators with business rule engines
- React + Next.js frontend for brokers and analysts
- FastAPI backend with PostgreSQL
- Security, encryption, audit logging (privacy-first design)
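For the OCR ingestion module, a minimal sketch using boto3's Textract client (Document AI would be the GCP equivalent); the bucket, object key, and region are placeholders:

```python
# Sketch: extracting text and form data from a payslip image with Amazon
# Textract. Bucket, key, and region are hypothetical placeholders.
import boto3

textract = boto3.client("textract", region_name="ap-southeast-2")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "example-lendflow-docs", "Name": "payslip.png"}},
    FeatureTypes=["FORMS"],
)

# Blocks include PAGE, LINE, WORD, and KEY_VALUE_SET entries.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```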
🎯 We’re Looking For:
Must-Have Skills:
- Strong experience with Python (FastAPI, OCR, LLMs, prompt engineering)
- Familiarity with AI agent frameworks (LangChain, CrewAI, Autogen, or similar)
- Frontend skills in React.js / Next.js
- Experience with PostgreSQL and cloud storage (AWS/GCP)
- Understanding of financial documents and data privacy best practices
Bonus Points:
- Experience with OCR tools like Amazon Textract, Tesseract, or Document AI
- Building ML/NLP pipelines in real-world apps
- Prior work in fintech, lending, or proptech sectors
Must have handled at least one project of medium-to-high complexity migrating ETL pipelines and data warehouses to the cloud.
Min 3 years of experience with premium consulting companies.
Mandatory experience in GCP.
Strong proficiency in at least one major cloud platform: Azure, GCP (primary), or AWS; Azure + GCP (secondary) preferred. Proficiency in all three is a significant plus.
· Design, develop, and maintain cloud-based applications and infrastructure across various cloud platforms.
· Select and configure appropriate cloud services based on specific project requirements and constraints.
· Implement infrastructure automation with tools like Terraform and Ansible.
· Write clean, efficient, and well-documented code in various programming languages (Python is required; knowledge of Java, C#, or JavaScript is a plus).
· Implement RESTful APIs and microservices architectures.
· Utilize DevOps practices for continuous integration and continuous delivery (CI/CD).
· Design, configure, and manage scalable and secure cloud infrastructure for MLOps.
· Monitor and optimize cloud resources for performance and cost efficiency.
· Implement security best practices throughout the development lifecycle.
· Collaborate with developers, operations, and security teams to ensure seamless integration and successful deployments.
· Stay up-to-date on the latest cloud technologies, MLOps tools and trends
· Strong analytical and problem-solving skills.



About the Role:
We are seeking a Technical Architect with proven expertise in full-stack web development, cloud infrastructure, and system design. You will lead the design and delivery of scalable enterprise applications, drive technical decision-making, and mentor a cross-functional development team. The ideal candidate has a strong foundation in .NET-based architecture, modern front-end frameworks, and cloud-native technologies.
Key Responsibilities:
- Lead the technical architecture, system design, and full-stack development of enterprise-grade web applications.
- Design and develop robust backend systems and APIs using .NET Core / C# / Python, following TDD/BDD principles.
- Build modern frontends using React.js, TypeScript, and optionally Angular, ensuring responsive and accessible UI.
- Architect scalable, secure, and highly available solutions using cloud platforms such as Azure, AWS, or GCP.
- Guide and review CI/CD pipeline creation and DevOps practices, leveraging tools like Azure DevOps, Git, Docker, etc.
- Oversee database design and optimization for relational and NoSQL systems like MSSQL, PostgreSQL, MongoDB, CosmosDB.
- Mentor developers and collaborate with cross-functional teams including Product Owners, QA, and DevOps.
- Ensure best practices in code quality, security, performance, and compliance.
- Lead application monitoring, error tracking, and infrastructure tuning for production-grade deployments.
Required Skills:
- 10+ years of experience in software development, with 3+ years in architectural or technical leadership roles.
- Strong expertise in .NET Core, C#, React.js, TypeScript, HTML5, CSS3, and JavaScript.
- Good exposure to Python for backend services or data pipelines.
- Cloud platform experience in at least one or more: Azure, AWS, or Google Cloud Platform (GCP).
- Proficient in designing and consuming RESTful APIs, and working with metadata-driven and microservices architecture.
- Strong understanding of DevOps, CI/CD, and deployment strategies using tools like Git, Docker, Azure DevOps.
- Familiarity with frontend frameworks like Angular or Vue.js is a plus.
- Proficient with databases: MSSQL, PostgreSQL, MySQL, MongoDB, CosmosDB.
- Comfortable working on Linux/UNIX and Windows-based servers, along with web servers like Nginx, Apache, IIS.
Good to Have:
- Experience in CRM, ERP, or E-commerce platforms.
- Familiarity with AI/ML integration and working with data science teams.
- Exposure to mobile development using React Native.
- Experience integrating third-party tools like Slack, Microsoft Teams, etc.
Soft Skills:
- Strong problem-solving mindset with a proactive and innovative approach.
- Excellent communication and leadership abilities.
- Capability to mentor junior engineers and drive a high-performance team culture.
- Adaptability to work in fast-paced, Agile environments.
Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- Microsoft / Cloud certifications are a plus.
Looking for fresher developers.
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skills:
Knowledge of DevOps engineering or a similar software engineering role
Good knowledge of Terraform and Kubernetes (see the sketch below)
Working knowledge of AWS and Google Cloud
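On the Kubernetes point above, a minimal read-only sketch with the official Kubernetes Python client; it assumes a kubeconfig is available locally and is purely illustrative:

```python
# Sketch: listing pods with the official Kubernetes Python client.
# Assumes a local kubeconfig (e.g., the one kubectl uses).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```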
You can directly contact me on nine three one six one two zero one three two
Job Description for Database Consultant-I (PostgreSQL)
Job Title: Database Consultant-I (PostgreSQL)
Company: Mydbops
About Us:
Mydbops is a trusted leader with 8+ years of excellence in open-source database management. We deliver best-in-class services across MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. Our focus is on building scalable, secure, and high-performance database solutions for global clients. As a PCI DSS-certified and ISO-certified organisation, we are committed to operational excellence and data security.
Role Overview:
As a Database Consultant – I (PostgreSQL Team), you will take ownership of PostgreSQL database environments, offering expert-level support to our clients. This role involves proactive monitoring, performance tuning, troubleshooting, high availability setup, and guiding junior team members. You will play a key role in customer-facing technical delivery, solution design, and implementation.
Key Responsibilities:
- Manage PostgreSQL production environments for performance, stability, and scalability.
- Handle complex troubleshooting, performance analysis, and query optimisation.
- Implement backup strategies, recovery solutions, replication, and failover techniques.
- Set up and manage high availability architectures (Streaming Replication, Patroni, etc.).
- Work with DevOps/cloud teams for deployment and automation.
- Support upgrades, patching, and migration projects across environments.
- Use monitoring tools to proactively detect and resolve issues.
- Mentor junior engineers and guide troubleshooting efforts.
- Interact with clients to understand requirements and deliver solutions.
Requirements:
- 3–5 years of hands-on experience in PostgreSQL database administration.
- Strong Linux OS knowledge and scripting skills (Bash/Python).
- Proficiency in SQL tuning, performance diagnostics, and explain plans (see the sketch after this list).
- Experience with tools like pgBackRest, Barman for backup and recovery.
- Familiarity with high availability, failover, replication, and clustering.
- Good understanding of AWS RDS, Aurora PostgreSQL, and GCP Cloud SQL.
- Experience with monitoring tools like pg_stat_statements, PMM, Nagios, or custom dashboards.
- Knowledge of automation/configuration tools like Ansible or Terraform is a plus.
- Strong communication and problem-solving skills.
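On the query-tuning point, a minimal sketch of pulling an execution plan from PostgreSQL in Python; connection details and the query are hypothetical placeholders:

```python
# Sketch: fetching an execution plan from PostgreSQL for tuning work.
# Connection parameters and the query are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb", user="dba", password="secret")
with conn, conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    # Each row of EXPLAIN output is a single-column tuple.
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```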
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or equivalent.
- PostgreSQL certification (EDB/Cloud certifications preferred).
- Past experience in a consulting, customer support, or managed services role.
- Exposure to multi-cloud environments and database-as-a-service platforms.
- Prior experience with database migrations or modernisation projects.
Why Join Us:
- Opportunity to work in a dynamic and growing industry.
- Learning and development opportunities to enhance your career.
- A collaborative work environment with a supportive team.
Job Details:
- Job Type: Full-time
- Work Days: 5 Days
- Work Mode: Work From Home
- Experience Required: 3–5 years

Position : Senior Data Analyst
Experience Required : 5 to 8 Years
Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)
Shift Timing : 11:00 AM – 8:00 PM IST
Notice Period : Immediate Joiners Only
Job Summary :
We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.
The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.
Key Responsibilities :
- Lead the design, execution, and delivery of advanced data analysis projects.
- Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
- Create and maintain interactive dashboards, reports, and visualizations.
- Perform root cause analysis and uncover meaningful patterns from large datasets.
- Present analytical findings to senior leaders and non-technical audiences.
- Maintain data integrity, quality, and governance in all reporting and analytics solutions.
- Mentor junior analysts and support their professional development.
- Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.
Must-Have Skills :
- Strong proficiency in SQL and Databricks
- Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
- Sound understanding of data warehousing concepts and BI best practices
Good-to-Have :
- Experience with AWS
- Exposure to machine learning and predictive analytics
- Industry-specific analytics experience (preferred but not mandatory)

Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes (see the sketch after this list).
- Automate deployments and infrastructure tasks using Ansible & Python.
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance.
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
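For the containerization point, a minimal sketch driving Docker from Python with the Docker SDK; the image, container name, and port mapping are placeholders:

```python
# Sketch: starting and inspecting a container with the Docker SDK for Python.
# Image, container name, and port mapping are placeholders.
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:alpine",
    name="demo-nginx",
    ports={"80/tcp": 8080},
    detach=True,
)
print(container.name, container.status)

# Clean up when done.
container.stop()
container.remove()
```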
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.
- True hands-on developer in programming languages like Java or Scala.
- Expertise in Apache Spark.
- Database modelling and working with any SQL or NoSQL database is a must.
- Working knowledge of scripting languages like shell/Python.
- Experience working with Cloudera is preferred.
- Orchestration tools like Airflow or Oozie would be a value addition.
- Knowledge of table formats like Delta or Iceberg is a plus.
- Working experience with version control like Git and build tools like Maven is recommended.
- Software development experience alongside data engineering experience is good to have.
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for the technology and enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
What we are looking for:
Experience: 10+ years
Education: BTech / BE / ME /MTech/ MCA / MSc Computer Science
Industry: Product Engineering Services or Enterprise Software Companies
Job Responsibilities:
- Sprint development tasks, code reviews, defining detailed tasks for the connector based on design/timelines, documentation maturity, release review and sanity, and writing design specifications and user stories for the assigned functionalities.
- Develop assigned components / classes and assist QA team in writing the test cases
- Create and maintain coding best practices and do peer code / solution reviews
- Participate in Daily Scrum calls, Scrum Planning, Retro and Demos meetings
- Bring out technical/design/architectural challenges/risks during execution, develop action plan for mitigation and aversion of identified risks
- Comply with development processes, documentation templates, and tools prescribed by CLOUDSUFI and its clients
- Work with other teams and Architects in the organization and assist them on technical Issues/Demos/POCs and proposal writing for prospective clients
- Contribute towards the creation of knowledge repository, reusable assets/solution accelerators and IPs
- Provide feedback to junior developers and be a coach and mentor for them
- Provide training sessions on the latest technologies and topics to other employees in the organization
- Participate in organization development activities from time to time: interviews, CSR/employee engagement activities, participation in business events/conferences, and implementation of new policies, systems, and procedures as decided by the management team
Certifications (Optional): OCPJP (Oracle Certified Professional Java Programmer)
Required Experience:
- Strong programming skills in Java.
- Hands-on in Core Java and microservices.
- Understanding of identity management using users, groups, and entitlements.
- Hands-on in developing connectivity for identity management using SCIM, REST, and LDAP.
- Thorough experience with triggers, webhooks, and event receiver implementations for connectors.
- Excellent command of the code review process and of assessing developer productivity.
- Excellent analytical and problem-solving skills
Good to Have:
- Experience developing 3-4 integration adapters/connectors for enterprise applications (ERP, CRM, HCM, SCM, billing, etc.) using industry-standard frameworks and methodologies, following Agile/Scrum
- Experience with IAM products.
- Experience implementing message brokers using JMS.
- Experience with ETL processes
Non-Technical/ Behavioral competencies required:
- Must have worked with US/Europe based clients in onsite/offshore delivery model
- Should have very good verbal and written communication, technical articulation, listening and presentation skills
- Should have proven analytical and problem solving skills
- Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills
- Should be a quick learner, self-starter, go-getter, and team player
- Should have experience of working under stringent deadlines in a Matrix organization structure
- Should have demonstrated appreciable Organizational Citizenship Behavior in past organizations
Job Summary:
Join Springer Capital as a Cybersecurity & Cloud Intern to help architect, secure, and automate our cloud-based backend systems powering next-generation investment platforms.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm. We leverage cutting-edge digital solutions to uncover high-potential opportunities, transforming traditional finance through innovation, agility, and a relentless commitment to security and scalability.
Job Highlights
Work hands-on with AWS, Azure, or GCP to design and deploy secure, scalable backend infrastructure.
Collaborate with DevOps and engineering teams to embed security best practices in CI/CD pipelines.
Gain experience in real-world incident response, vulnerability assessment, and automated monitoring.
Drive meaningful impact on our security posture and cloud strategy from Day 1.
Enjoy a fully remote, flexible internship with global teammates.
Responsibilities
Assist in architecting and provisioning cloud resources (VMs, containers, serverless functions) with strict security controls.
Implement identity and access management, network segmentation, encryption, and logging best practices.
Develop and maintain automation scripts for security monitoring, patch management, and incident alerts.
Support vulnerability scanning, penetration testing, and remediation tracking.
Document cloud architectures, security configurations, and incident response procedures.
Partner with backend developers to ensure secure API gateways, databases, and storage services.
What We Offer
Mentorship: Learn directly from senior security engineers and cloud architects.
Training & Certifications: Access to online courses and support for AWS/Azure security certifications.
Impactful Projects: Contribute to critical security and cloud initiatives that safeguard our digital assets.
Remote-First Culture: Flexible hours and the freedom to collaborate from anywhere.
Career Growth: Build a strong foundation for a future in cybersecurity, cloud engineering, or DevSecOps.
Requirements
Pursuing or recently graduated in Computer Science, Cybersecurity, Information Technology, or a related discipline.
Familiarity with at least one major cloud platform (AWS, Azure, or GCP).
Understanding of core security principles: IAM, network security, encryption, and logging.
Scripting experience in Python, PowerShell, or Bash for automation tasks.
Strong analytical, problem-solving, and communication skills.
A proactive learner mindset and passion for securing cloud environments.
About Springer Capital
Springer Capital blends financial expertise with digital innovation to redefine asset management. Our mission is to drive exceptional value by implementing robust, technology-driven strategies that transform investment landscapes. We champion a culture of creativity, collaboration, and continuous improvement.
Location: Global (Remote)
Job Type: Internship
Pay: $50 USD per month
Work Location: Remote
- Strong Site Reliability Engineer (SRE - CloudOps) Profile
- Mandatory (Experience 1) - Must have a minimum of 1 year of experience in SRE (CloudOps)
- Mandatory (Core Skill 1) - Must have experience with Google Cloud platforms (GCP)
- Mandatory (Core Skill 2) - Experience with monitoring, APM, and alerting tools like Prometheus, Grafana, ELK, Newrelic, Pingdom, or Pagerduty
- Mandatory (Core Skill 3) - Hands-on experience with Kubernetes for orchestration and container management.
- Mandatory (Company) - B2C Product Companies.
- Strong Senior Unity Developer Profile
- Mandatory (Experience 1) - Must have a minimum of 2 years of experience in game/application development using Unity.
- Mandatory (Experience 2) - Must have strong experience in backend development using C#
- Mandatory (Experience 3) - Must have strong experience in multiplayer game development with Unity, preferably using Photon Networking (PUN) or Photon Fusion.
- Mandatory (Company) - B2C Product Companies
Preferred
- Preferred (Education) - B.E / B.Tech
GCP Data Engineer Job Description
A GCP Data Engineer is responsible for designing, building, and maintaining data pipelines, architectures, and systems on Google Cloud Platform (GCP). Here's a breakdown of the job:
Key Responsibilities
- Data Pipeline Development: Design and develop data pipelines using GCP services like Dataflow, BigQuery, and Cloud Pub/Sub.
- Data Architecture: Design and implement data architectures to meet business requirements.
- Data Processing: Process and analyze large datasets using GCP services like BigQuery and Cloud Dataflow.
- Data Integration: Integrate data from various sources using GCP services like Cloud Data Fusion and Cloud Pub/Sub.
- Data Quality: Ensure data quality and integrity by implementing data validation and data cleansing processes.
Essential Skills
- GCP Services: Strong understanding of GCP services like BigQuery, Cloud Dataflow, Cloud Pub/Sub, and Cloud Storage.
- Data Engineering: Experience with data engineering concepts, including data pipelines, data warehousing, and data integration.
- Programming Languages: Proficiency in programming languages like Python, Java, or Scala.
- Data Processing: Knowledge of data processing frameworks like Apache Beam and Apache Spark (a minimal Beam sketch follows this list).
- Data Analysis: Understanding of data analysis concepts and tools like SQL and data visualization.
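To make the Beam mention concrete, a minimal Apache Beam pipeline sketch in Python, the kind Dataflow executes at scale; the input data and transforms are invented:

```python
# Sketch: a small Apache Beam pipeline of the sort Dataflow runs.
# Input data and transforms are invented for illustration.
import apache_beam as beam

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["gcp,1", "spark,2", "gcp,3"])
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "ToKV" >> beam.Map(lambda kv: (kv[0], int(kv[1])))
        | "Sum" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```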


Who We Are
Studio Management (studiomgmt.co) is a uniquely positioned organization combining venture capital, hedge fund investments, and startup incubation. Our portfolio includes successful ventures like Sentieo (acquired by AlphaSense for $185 million), as well as innovative products such as Emailzap (emailzap.co) and Mindful Minutes for Toddlers. We’re expanding our team to continue launching products at the forefront of technology, and we’re looking for an Engineering Lead who shares our passion for building the “next big thing.”
The Role
We are seeking a hands-on Engineering Lead to guide our product development efforts across multiple high-impact ventures. You will own the overall technical vision, mentor a remote team of engineers, and spearhead the creation of new-age products in a fast-paced startup environment. This is a strategic, influential role that requires a blend of technical prowess, leadership, and a keen interest in building products from zero to one.
Responsibilities
- Technical Leadership: Define and drive the architectural roadmap for new and existing products, ensuring high-quality code, scalability, and reliability.
- Mentorship & Team Building: Hire, lead, and develop a team of engineers. Foster a culture of continuous learning, ownership, and collaboration.
- Product Innovation: Work closely with product managers, designers, and stakeholders to conceptualize, build, and iterate on cutting-edge, user-centric solutions.
- Hands-On Development: Write efficient, maintainable code and perform thorough code reviews, setting the standard for engineering excellence.
- Cross-Functional Collaboration: Partner with different functions (product, design, marketing) to ensure alignment on requirements, timelines, and deliverables.
- Process Optimization: Establish best practices and processes that improve development speed, code quality, and overall team productivity.
- Continuous Improvement: Champion performance optimizations, new technologies, and modern frameworks to keep the tech stack fresh and competitive.
What We’re Looking For
- 4+ Years of Engineering Experience: A proven track record of designing and delivering high-impact software products.
- Technical Mastery: Expertise in a full-stack environment—HTML, CSS, JavaScript (React/React Native), Python (Django), and AWS. Strong computer science fundamentals, including data structures and system design.
- Leadership & Communication: Demonstrated ability to mentor team members, influence technical decisions, and articulate complex concepts clearly.
- Entrepreneurial Mindset: Passion for building new-age products, thriving in ambiguity, and rapidly iterating to find product-market fit.
- Problem Solver: Adept at breaking down complex challenges into scalable, efficient solutions.
- Ownership Mentality: Self-driven individual who takes full responsibility for project outcomes and team success.
- Adaptability: Comfort working in an environment where priorities can shift quickly, and opportunities for innovation abound.
Why Join Us
- High-Impact Work: Drive the technical direction of multiple ventures, shaping the future of new products from day one.
- Innovation Culture: Operate in a remote-first, collaborative environment that encourages bold thinking and rapid experimentation.
- Growth & Autonomy: Enjoy opportunities for both leadership advancement and deepening your technical skillset.
- Global Team: Work alongside a diverse group of talented professionals who share a passion for pushing boundaries.
- Competitive Benefits: Receive market-leading compensation and benefits in a role that rewards both individual and team success.
Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.
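As a small illustration of the container-orchestration troubleshooting this role involves, here is a minimal sketch using the official Kubernetes Python client to flag pods that are not running (assumes cluster access via a local kubeconfig):

```python
from kubernetes import client, config

# Assumes a reachable cluster and a local kubeconfig (e.g., ~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

# List pods across all namespaces and report any that are not Running.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase != "Running":
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```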
Strong proficiency in at least one major cloud platform is required; GCP as the primary platform with Azure as secondary is preferred, and proficiency in all three (GCP, Azure, and AWS) is a significant plus.
· Design, develop, and maintain cloud-based applications and infrastructure across various cloud platforms.
· Select and configure appropriate cloud services based on specific project requirements and constraints.
· Implement infrastructure automation with tools like Terraform and Ansible.
· Write clean, efficient, and well-documented code; Python is required, and knowledge of Java, C#, or JavaScript is a plus.
· Implement RESTful APIs and microservices architectures (a minimal sketch follows this list).
· Utilize DevOps practices for continuous integration and continuous delivery (CI/CD).
· Design, configure, and manage scalable and secure cloud infrastructure for MLOps.
· Monitor and optimize cloud resources for performance and cost efficiency.
· Implement security best practices throughout the development lifecycle.
· Collaborate with developers, operations, and security teams to ensure seamless integration and successful deployments.
· Stay up to date on the latest cloud technologies, MLOps tools, and trends.
· Strong analytical and problem-solving skills.
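A minimal sketch of the RESTful-microservice item above, using FastAPI (the service and resource names are illustrative, and the in-memory store stands in for a real database):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # illustrative service name

class Order(BaseModel):
    sku: str
    quantity: int

_ORDERS: dict[int, Order] = {}  # in-memory store; a real service would use a database

@app.put("/orders/{order_id}")
def upsert_order(order_id: int, order: Order) -> dict:
    # Create or replace an order, returning the stored representation.
    _ORDERS[order_id] = order
    return {"id": order_id, "sku": order.sku, "quantity": order.quantity}

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]
```

Run locally with `uvicorn main:app --reload` (assuming the file is saved as main.py).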



🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate Joiner
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing.
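For illustration, a minimal sketch of the kind of distributed aggregation this posting describes; the role asks for Spark-Scala, but PySpark is shown for brevity since the DataFrame API is nearly identical (the input path and column names are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

# Daily revenue aggregation over an orders dataset stored in GCS.
spark = SparkSession.builder.appName("orders-aggregation").getOrCreate()

orders = spark.read.parquet("gs://example-bucket/orders/")  # hypothetical path
daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.count("*").alias("order_count"), F.sum("amount").alias("revenue"))
    .orderBy("order_date")
)
daily.show()
```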


Requirements
- 7+ years of experience with Python
- Strong expertise in Python frameworks (Django, Flask, or FastAPI)
- Experience with GCP, Terraform, and Kubernetes
- Deep understanding of REST API development and GraphQL
- Strong knowledge of SQL and NoSQL databases
- Experience with microservices architecture
- Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
- Experience with container orchestration using Kubernetes
- Understanding of cloud architecture and serverless computing
- Experience with monitoring and logging solutions
- Strong background in writing unit and integration tests
- Familiarity with AI/ML concepts and integration points
Responsibilities
- Design and develop scalable backend services for our AI platform
- Architect and implement complex systems with high reliability
- Build and maintain APIs for internal and external consumption
- Work closely with AI engineers to integrate ML functionality
- Optimize application performance and resource utilization
- Make architectural decisions that balance immediate needs with long-term scalability
- Mentor junior engineers and promote best practices
- Contribute to the evolution of our technical standards and processes
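Since the requirements above call for both REST and GraphQL, here is one way a GraphQL endpoint can be exposed from Python, sketched with the Strawberry library (one option among several; the schema is hypothetical):

```python
import strawberry
from strawberry.asgi import GraphQL

# Hypothetical single-field schema; serve with any ASGI server,
# e.g. `uvicorn app:app` if this file is saved as app.py.
@strawberry.type
class Query:
    @strawberry.field
    def status(self) -> str:
        return "ok"

schema = strawberry.Schema(query=Query)
app = GraphQL(schema)
```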

Job Title : Technical Architect
Experience : 8 to 12+ Years
Location : Trivandrum / Kochi / Remote
Work Mode : Remote flexibility available
Notice Period : Immediate to max 15 days (30 days with negotiation possible)
Summary :
We are looking for a highly skilled Technical Architect with expertise in Java Full Stack development, cloud architecture, and modern frontend frameworks (Angular). This is a client-facing, hands-on leadership role, ideal for technologists who enjoy designing scalable, high-performance, cloud-native enterprise solutions.
🛠 Key Responsibilities :
- Architect scalable and high-performance enterprise applications.
- Hands-on involvement in system design, development, and deployment.
- Guide and mentor development teams in architecture and best practices.
- Collaborate with stakeholders and clients to gather and refine requirements.
- Evaluate tools, processes, and drive strategic technical decisions.
- Design microservices-based solutions deployed over cloud platforms (AWS/Azure/GCP).
✅ Mandatory Skills :
- Backend : Java, Spring Boot, Python
- Frontend : Angular (at least 2 years of recent hands-on experience)
- Cloud : AWS / Azure / GCP
- Architecture : Microservices, EAI, MVC, Enterprise Design Patterns
- Data : SQL / NoSQL, Data Modeling
- Other : Client handling, team mentoring, strong communication skills
➕ Nice to Have Skills :
- Mobile technologies (Native / Hybrid / Cross-platform)
- DevOps & Docker-based deployment
- Application Security (OWASP, PCI DSS)
- TOGAF familiarity
- Test-Driven Development (TDD)
- Analytics / BI / ML / AI exposure
- Domain knowledge in Financial Services or Payments
- 3rd-party integration tools (e.g., MuleSoft, BizTalk)
⚠️ Important Notes :
- Only candidates from outside Hyderabad/Telangana and non-JNTU graduates will be considered.
- Candidates must be serving notice or joinable within 30 days.
- Client-facing experience is mandatory.
- Java Full Stack candidates are highly preferred.
🧭 Interview Process :
- Technical Assessment
- Two Rounds – Technical Interviews
- Final Round
Job Title: Lead DevOps Engineer
Experience Required: 4 to 5 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
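As one small illustration of the Monitoring & Logging responsibility above, a minimal Prometheus instrumentation sketch using the prometheus_client library (the metric names and workload are illustrative):

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Illustrative metric names; Prometheus scrapes these from /metrics.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                      # record how long the work takes
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics on port 8000
    while True:
        handle_request()
```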
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
Bangalore / Chennai
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of Conceptual, Logical and Physical data modelling
- Strong understanding of indexing, partitioning, and data sharding, with practical hands-on experience in each.
- Strong understanding of variables impacting database performance for near-real-time reporting and application interaction.
- Working experience with at least one data modelling tool, preferably DBSchema or Erwin.
- Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery.
- Functional knowledge of the mutual fund industry is a plus.
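For illustration, a minimal sketch of the partitioning and indexing work described above, using a range-partitioned PostgreSQL fact table (the table, columns, and connection string are illustrative; the mutual-fund flavour is only an example):

```python
import psycopg2

# Illustrative DDL: a range-partitioned NAV fact table with a supporting index.
DDL = """
CREATE TABLE IF NOT EXISTS fact_nav (
    fund_id   INT,
    nav_date  DATE,
    nav       NUMERIC(18, 4)
) PARTITION BY RANGE (nav_date);

CREATE TABLE IF NOT EXISTS fact_nav_2024
    PARTITION OF fact_nav
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

CREATE INDEX IF NOT EXISTS idx_fact_nav_fund
    ON fact_nav (fund_id, nav_date);
"""

# The connection wrapper commits the transaction on successful exit.
with psycopg2.connect("dbname=analytics user=etl") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```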
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Design and implement dimensional and fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
Required Skills:
● Bachelor’s degree in Computer Science or similar field or equivalent work experience.
● 5+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.
● Expert with data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience with GCP: Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions, and GCS
● Good to have knowledge on SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience with Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSON, etc.
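Pipelines on Composer are expressed as Airflow DAGs; here is a minimal sketch of a daily extract-transform-load DAG (the DAG id and task bodies are placeholders):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull from source systems (e.g., ERP/CRM/HRIS)

def transform():
    ...  # apply cleansing and dimensional-modelling rules

def load():
    ...  # write to the warehouse (e.g., BigQuery)

with DAG(
    dag_id="edw_daily_load",          # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    (PythonOperator(task_id="extract", python_callable=extract)
     >> PythonOperator(task_id="transform", python_callable=transform)
     >> PythonOperator(task_id="load", python_callable=load))
```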

Senior Data Analyst – Power BI, GCP, Python & SQL
Job Summary
We are looking for a Senior Data Analyst with strong expertise in Power BI, Google Cloud Platform (GCP), Python, and SQL to design data models, automate analytics workflows, and deliver business intelligence that drives strategic decisions. The ideal candidate is a problem-solver who can work with complex datasets in the cloud, build intuitive dashboards, and code custom analytics using Python and SQL.
Key Responsibilities
* Develop advanced Power BI dashboards and reports based on structured and semi-structured data from BigQuery and other GCP sources.
* Write and optimize complex SQL queries (BigQuery SQL) for reporting and data modeling.
* Use Python to automate data preparation tasks, build reusable analytics scripts, and support ad hoc data requests.
* Partner with data engineers and stakeholders to define metrics, build ETL pipelines, and create scalable data models.
* Design and implement star/snowflake schema models and DAX measures in Power BI.
* Maintain data integrity, monitor performance, and ensure security best practices across all reporting systems.
* Drive initiatives around data quality, governance, and cost optimization on GCP.
* Mentor junior analysts and actively contribute to analytics strategy and roadmap.
Must-Have Skills
* Expert-level SQL : Hands-on experience writing complex queries in BigQuery, optimizing joins, window functions, and CTEs.
* Proficiency in Python : Data wrangling, Pandas, NumPy, automation scripts, API consumption, etc.
* Power BI expertise : Building dashboards, using DAX, Power Query (M), custom visuals, report performance tuning.
* GCP hands-on experience : Especially with BigQuery, Cloud Storage, and optionally Cloud Composer or Dataflow.
* Strong understanding of data modeling, ETL pipelines, and analytics workflows.
* Excellent communication skills and the ability to explain data insights to non-technical audiences.
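As a small illustration of the BigQuery SQL work called out above, a CTE plus a window function executed through the official Python client (the project, dataset, and table names are hypothetical; credentials are assumed to be configured):

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes GCP credentials are configured

# CTE + window function: latest order per customer.
sql = """
WITH ranked AS (
  SELECT
    customer_id,
    order_id,
    amount,
    ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY created_at DESC) AS rn
  FROM `my-project.sales.orders`
)
SELECT customer_id, order_id, amount
FROM ranked
WHERE rn = 1
"""

for row in client.query(sql).result():
    print(row.customer_id, row.order_id, row.amount)
```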
Preferred Qualifications
* Experience in version control (Git) and working in CI/CD environments.
* Google Professional Data Engineer
* PL-300: Microsoft Power BI Data Analyst Associate


Job Title: Full-Stack Developer
Location: Bangalore/Remote
Type: Full-time
About Eblity:
Eblity’s mission is to empower educators and parents to help children facing challenges.
Over 50% of children in mainstream schools face academic or behavioural challenges, most of which go unnoticed and underserved. By providing the right support at the right time, we could make a world of difference to these children.
We serve a community of over 200,000 educators and parents and over 3,000 schools.
If you are purpose-driven and want to use your skills in technology to create a positive impact for children facing challenges and their families, we encourage you to apply.
Join us in shaping the future of inclusive education and empowering learners of all abilities.
Role Overview:
As a full-stack developer, you will lead the development of critical applications.
These applications enable services for parents of children facing various challenges such as Autism, ADHD and Learning Disabilities, and for experts who can make a significant difference in these children’s lives.
You will be part of a small, highly motivated team who are constantly working to improve outcomes for children facing challenges like Learning Disabilities, ADHD, Autism, Speech Disorders, etc.
Job Description:
We are seeking a talented and proactive Full Stack Developer with hands-on experience in the React / Python / Postgres stack, leveraging Cursor and Replit for full-stack development. As part of our product development team, you will work on building responsive, scalable, and user-friendly web applications, utilizing both front-end and back-end technologies. Your expertise with Cursor as an AI agent-based development platform and Replit will be crucial for streamlining development processes and accelerating product timelines.
Responsibilities:
- Design, develop, and maintain front-end web applications using React, ensuring a responsive, intuitive, and high-performance user experience.
- Build and optimize the back-end using FastAPI or Flask and PostgreSQL, ensuring scalability, performance, and maintainability.
- Leverage Replit for full-stack development, deploying applications, managing cloud resources, and streamlining collaboration across team members.
- Utilize Cursor, an AI agent-based development platform, to enhance application development, automate processes, and optimize workflows through AI-driven code generation, data management, and integration.
- Collaborate with cross-functional teams (back-end developers, designers, and product managers) to gather requirements, design solutions, and implement them seamlessly across the front-end and back-end.
- Design and implement PostgreSQL database schemas, writing optimized queries to ensure efficient data retrieval and integrity.
- Integrate RESTful APIs and third-party services across the React front-end and FastAPI/Flask/PostgreSQL back-end, ensuring smooth data flow.
- Implement and optimize reusable React components and FastAPI/Flask functions to improve code maintainability and application performance.
- Conduct thorough testing, including unit, integration, and UI testing, to ensure application stability and reliability.
- Optimize both front-end and back-end applications for maximum speed and scalability, while resolving performance issues in both custom code and integrated services.
- Stay up-to-date with emerging technologies to continuously improve the quality and efficiency of our solutions.
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 2+ years of experience in React development, with strong knowledge of component-based architecture, state management, and front-end best practices.
- Proven experience in Python development, with expertise in building web applications using frameworks like FastAPI or Flask.
- Solid experience working with PostgreSQL, including designing database schemas, writing optimized queries, and ensuring efficient data retrieval.
- Experience with Cursor, an AI agent-based development platform, to enhance full-stack development through AI-driven code generation, data management, and automation.
- Experience with Replit for full-stack development, deploying applications, and collaborating within cloud-based environments.
- Experience working with RESTful APIs, including their integration into both front-end and back-end systems.
- Familiarity with development tools and frameworks such as Git, Node.js, and Nginx.
- Strong problem-solving skills, a keen attention to detail, and the ability to work independently or within a collaborative team environment.
- Excellent communication skills to effectively collaborate with team members and stakeholders.
Nice-to-Have:
- Experience with other front-end frameworks (e.g., Vue, Angular).
- Familiarity with Agile methodologies and project management tools like Jira.
- Understanding of cloud technologies and experience deploying applications to platforms like AWS or Google Cloud.
- Knowledge of additional back-end technologies or frameworks (e.g., FastAPI).
What We Offer:
- A collaborative and inclusive work environment that values every team member’s input.
- Opportunities to work on innovative projects using Cursor and Replit for full-stack development.
- Competitive salary and comprehensive benefits package.
- Flexible working hours and potential for remote work options.
Location: Remote
If you're passionate about full-stack development and leveraging AI-driven platforms like Cursor and Replit to build scalable solutions, apply today to join our forward-thinking team!

What You’ll Do:
As a Data Scientist, you will work closely across DeepIntent Analytics teams located in New York City, India, and Bosnia. The role supports internal and external business partners in defining patient and provider audiences and generating analyses and insights related to measurement of campaign outcomes, Rx, the patient journey, and the evolution of the DeepIntent product suite. Activities in this position include creating and scoring audiences, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, performing analyses and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better audiences
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights
- Explore ways of using inference, statistical, machine learning techniques to improve the performance of existing algorithms and decision heuristics
- Design and deploy new iterations of production-level code
- Contribute posts to our upcoming technical blog
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, Operations Research, or Data Science. A graduate degree is strongly preferred
- 3+ years of working experience as a Data Analyst, Data Engineer, or Data Scientist in digital marketing, consumer advertising, telecom, or other areas requiring customer-level predictive analytics
- Background in either data engineering or analytics
- Hands-on technical experience is required, including proficiency in performing statistical analysis in Python with the relevant libraries
- You have an advanced understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns, or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications)
- Experience in programmatic/DSP-related marketing, predictive analytics, audience segmentation, or audience behaviour analysis, or medical/healthcare experience
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference)
- Familiarity with data science tools such as XGBoost, PyTorch, and Jupyter, plus strong experience using LLMs (developer/API experience is a plus)
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing
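For illustration, a minimal, self-contained sketch of the boosting-based predictive modelling mentioned above, trained on synthetic stand-in data (a real pipeline would engineer features from claims or clickstream data):

```python
import numpy as np
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for engineered audience features and a binary outcome.
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

# Score the held-out set; AUC is a typical metric for audience scoring.
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```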

As a Principal Software Engineer on our team, you will:
- Design and deliver the next generation of Toast products using our technology stack, which includes Kotlin, DynamoDB, React, Pulsar, Apache Camel, GraphQL, and Big Data technologies.
- Collaborate with our Data Platform teams to develop best-in-class reporting and analytics products.
- Document solution designs, write and review code, test, and roll out solutions to production. Capture and act on customer feedback to iteratively enhance the customer experience.
- Work closely with peers to optimize solution design for performance, flexibility, and scalability — enabling multiple product and engineering teams to work on a shared framework and platform.
- Partner with UX, Product Management, QA, and other engineering teams to build robust, high-quality solutions in a fast-moving, complex environment.
- Coach and mentor engineers on best practices and modern software development standards.
Do you have the right ingredients? (Requirements)
- 12+ years of software development experience.
- Proven experience in delivering high-quality, reliable, and scalable services to production in a continuous delivery environment.
- Expertise in AI, Cloud technologies, Image Processing, and Full Stack Development.
- Strong database skills, proficient in SQL Server, PostgreSQL, or DynamoDB.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in one or more object-oriented programming languages like Java, Kotlin, or C#.
- Hands-on experience working in Agile/Scrum environments.
- Demonstrated ability to lead the development and scaling of mission-critical platform components.
- Strong problem-solving skills, with the ability to navigate complex and ambiguous challenges and clearly communicate solutions.
- Skilled in balancing delivery speed with platform stability.
- Passionate about writing clean, maintainable, and impactful code.
- Experience in mentoring and coaching other engineers.
TECHNICAL MANAGER
Department: Product Engineering
Location: Noida/Chennai
Experience: 12+ years with 2+ years in a similar role
Job Summary:
We are looking for an inspiring leader to lead a dynamic R&D team with a strong “Product & Customer” spirit. As an Engineering Manager, you will be responsible for the entire process, from design and specification to code quality, process integration, and delivery performance.
Key Responsibilities:
• Collaborate closely with Product Management teams to design and develop business modules.
• As a manager, coordinate a diverse team and ensure collaboration between departments, managing with empathy and fairness yet demanding standards, with particular attention to operational excellence.
• Actively participate in resolving technical issues and challenges that the team encounters during development, as well as escalated client implementation and production issues.
• Anticipate technical challenges and address them proactively to minimize disruptions to the development process; guide the team in making architectural choices.
• Promote and advocate for best practices in software development, including coding standards, testing practices, and documentation.
• Make informed decisions on technical trade-offs and communicate those decisions to the team and stakeholders.
• Stay on top of critical client/implementation issues and keep stakeholders informed.
PROFILE
• Good proficiency overlap with technologies such as Java 17, Spring, Spring MVC, RESTful web services, Hibernate, RDBMS, Spring Security, Ansible, Docker, Kubernetes, JMeter, and Angular.
• Strong experience with development tools and CI/CD pipelines; extensive experience with Agile.
• Deep understanding of cloud technologies on at least one of the major cloud platforms (AWS, Azure, or Google Cloud).
• Strong communicator with the ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.
• Proven leadership skills.
• Product development experience preferred; fintech or lending domain experience is a plus.
• Engineering degree or equivalent.
Roles and Responsibilities:
• Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
• Strong experience with development tools and CI/CD pipelines; extensive experience with Agile.
• Good proficiency overlap with technologies such as Java 8, Spring, Spring MVC, RESTful web services, Hibernate, Oracle PL/SQL, Spring Security, Ansible, Docker, JMeter, and Angular.
• Strong fundamentals and clarity in REST web services, with exposure to developing REST services that handle large data sets.
• Fintech or lending domain experience is a plus but not necessary.
• Deep understanding of cloud technologies on at least one of the major cloud platforms (AWS, Azure, or Google Cloud).
• Wide knowledge of technology solutions and the ability to learn and work with emerging technologies, methodologies, and solutions.
• Strong communicator with the ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.
• Provide vision and leadership for the technology roadmap of our products; understand product capabilities and align technology strategy with business objectives to maximize ROI.
• Define technical software architectures and lead the development of frameworks.
• Engage end to end in product development, from business requirements through product realization to deployment in production.
• Research, design, and implement complex features in existing products and/or create new applications/components from scratch.
Minimum Qualifications
• Bachelor’s or higher engineering degree in Computer Science or a related technical field, or equivalent additional professional experience.
• 5 years of experience delivering Java- and open-source-based solutions from concept to production as an enterprise architect in global organizations.
• 12-15 years of industry experience in design, development, deployment, operations, and managing the non-functional aspects of technical solutions.
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and implement software solutions that power machine learning models, particularly in LLMs
- Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects
- Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges
- Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility
- Design and implement scalable infrastructure for developing and deploying GenAI solutions
- Support model deployment and API integration to ensure smooth interaction with existing enterprise systems.
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 3-5 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiarity with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
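For illustration, a minimal PyTorch training loop on synthetic data, the kind of ML fundamentals these qualifications describe (the network shape and data are placeholders):

```python
import torch
from torch import nn

# Tiny supervised training loop; dimensions and data are synthetic.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))

for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass and loss
    loss.backward()               # backpropagate gradients
    opt.step()                    # update weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```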
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
About the Role
We are looking for a talented LLM & Backend Engineer to join our AI innovation team at EaseMyTrip.com and help power the next generation of intelligent travel experiences. In this role, you will lead the integration and optimization of Large Language Models (LLMs) to create conversational travel agents that can understand, recommend, and assist travelers across platforms. You will work at the intersection of backend systems, AI models, and natural language understanding, bringing smart automation to every travel interaction.
Key Responsibilities:
- LLM Integration: Deploy and integrate LLMs (e.g., GPT-4, Claude, Mistral) to process natural language queries and deliver personalized travel recommendations.
- Prompt Engineering & RAG: Design optimized prompts and implement Retrieval-Augmented Generation (RAG) workflows to enhance contextual relevance in multi-turn conversations.
- Conversational Flow Design: Build and manage robust conversational workflows capable of handling complex travel scenarios such as booking modifications and cancellations.
- LLM Performance Optimization: Tune models and workflows to balance performance, scalability, latency, and cost across diverse environments.
- Backend Development: Develop scalable, asynchronous backend services using FastAPI or Django, with a focus on secure and efficient API architectures.
- Database & ORM Design: Design and manage data using PostgreSQL or MongoDB, and implement ORM solutions like SQLAlchemy for seamless data interaction.
- Cloud & Serverless Infrastructure: Deploy solutions on AWS, GCP, or Azure using containerized and serverless tools such as Lambda and Cloud Functions.
- Model Fine-Tuning & Evaluation: Fine-tune open-source and proprietary LLMs using techniques like LoRA and PEFT, and evaluate outputs using BLEU, ROUGE, or similar metrics.
- NLP Pipeline Implementation: Develop NLP functionalities including named entity recognition, sentiment analysis, and dialogue state tracking.
- Cross-Functional Collaboration: Work closely with AI researchers, frontend developers, and product teams to ship impactful features rapidly and iteratively.
Preferred Candidate Profile:
- Experience: Minimum 2 years in backend development with at least 1 year of hands-on experience working with LLMs or NLP systems.
- Programming Skills: Proficient in Python with practical exposure to asynchronous programming and frameworks like FastAPI or Django.
- LLM Ecosystem Expertise: Experience with tools and libraries such as LangChain, LlamaIndex, Hugging Face Transformers, and OpenAI/Anthropic APIs.
- Database Knowledge: Strong understanding of relational and NoSQL databases, including schema design and performance optimization.
- Model Engineering: Familiarity with prompt design, LLM fine-tuning (LoRA, PEFT), and evaluation metrics for language models.
- Cloud Deployment: Comfortable working with cloud platforms (AWS/GCP/Azure) and building serverless or containerized deployments.
- NLP Understanding: Solid grasp of NLP concepts including intent detection, dialogue management, and text classification.
- Problem-Solving Mindset: Ability to translate business problems into AI-first solutions with a user-centric approach.
- Team Collaboration: Strong communication skills and a collaborative spirit to work effectively with multidisciplinary teams.
- Curiosity and Drive: Passionate about staying at the forefront of AI and using emerging technologies to build innovative travel experiences.
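A compressed sketch of the RAG workflow described above, using the OpenAI Python SDK with a naive in-memory retriever (the documents and model names are illustrative; a production system would use a vector database and proper chunking):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy knowledge base; real systems would chunk and index travel content.
docs = [
    "Refunds for cancelled flights are processed within 7 days.",
    "Hotel bookings can be modified up to 24 hours before check-in.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
query = "How do I change my hotel reservation?"
q_vec = embed([query])[0]

# Cosine similarity to pick the most relevant document as context.
scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
context = docs[int(np.argmax(scores))]

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": f"Answer using this context: {context}"},
        {"role": "user", "content": query},
    ],
)
print(answer.choices[0].message.content)
```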

Why This Role Matters
- We are looking for a Staff Engineer to lead the technical direction and hands-on development of our next-generation, agentic AI-first marketing platforms. This is a high-impact role to architect, build, and ship products that change how marketers interact with data, plan campaigns, and make decisions.
What You'll Do
- Build Gen-AI native products: Architect, build, and ship platforms powered by LLMs, agents, and predictive AI
- Stay hands-on: Design systems, write code, debug, and drive product excellence
- Lead with depth: Mentor a high-caliber team of full stack engineers.
- Speed to market: Rapidly ship and iterate on MVPs to maximize learning and feedback.
- Own the full stack: from backend data pipelines to intuitive UIs, from Airflow to React, and from BigQuery to embeddings.
- Scale what works: Ensure scalability, security, and performance in multi-tenant, cloud-native environments (GCP).
- Collaborate deeply: Work closely with product, growth, and leadership to align tech with business priorities.
What You Bring
- 8+ years of experience building and scaling full-stack, data-driven products
- Proficiency in backend (Node.js, Python) and frontend (React), with solid GCP experience
- Strong grasp of data pipelines, analytics, and real-time data processing
- Familiarity with Gen-AI frameworks (LangChain, LlamaIndex, OpenAI APIs, vector databases)
- Proven architectural leadership and technical ownership
- Product mindset with a bias for execution and iteration
Our Tech Stack
- Cloud: Google Cloud Platform
- Backend: Node.js, Python, Airflow
- Data: BigQuery, Cloud SQL
- AI/ML: TensorFlow, OpenAI APIs, custom agents
- Frontend: React.js
What You Get
- Meaningful equity in a high-growth startup
- The chance to build global products from India
- A culture that values clarity, ownership, learning, humility, and candor
- A rare opportunity to build with Gen-AI from the ground up
Who You Are
- You’re initiative-driven, not interruption-driven.
- You code because you love building things that matter.
- You enjoy ambiguity and solve problems from first principles.
- You believe true leadership is contextual, hands-on, and grounded.
- You’re here to build — not just maintain.
- You care deeply about seeing your products empower real users, run reliably at scale, and adapt intelligently with minimal manual effort.
- You know that elegant code is just 30% of the job — the real craft lies in the engineering rigour, edge-case handling, and production resilience that make great products truly dependable.

We’re looking for a Product Ninja with the mindset of a Tech Catalyst — a proactive executor who thrives at the intersection of product, technology, and user experience. In this role, you’ll bring product ideas to life, translate strategy into action, and collaborate closely with engineers, designers, and stakeholders to deliver impactful solutions.
This role is ideal for someone who’s hands-on, detail-oriented, and passionate about using technology to create real customer value.
Responsibilities:
- Support the definition and execution of the product roadmap in alignment with business goals.
- Work closely with engineering, design, QA, and marketing teams to drive product development.
- Translate product requirements into detailed specs, user stories, and acceptance criteria.
- Conduct competitive research and analyze user feedback to inform feature enhancements.
- Track product performance post-launch and gather insights for continuous improvement.
- Assist in managing the full product lifecycle, from ideation to rollout.
- Be a tech-savvy contributor, suggesting improvements based on emerging tools, platforms, and technologies.
Qualification:
- Bachelor’s degree in Business, Marketing, Computer Science, or a related field.
- 3+ years of hands-on experience in product management, product operations, or related roles.
- Comfortable working in fast-paced, cross-functional tech environments.
Required Skills:
- Strong analytical and problem-solving abilities.
- Clear, concise communication and documentation skills.
- Proficiency with project and product management tools (e.g., JIRA, Trello, Confluence).
- Ability to manage details without losing sight of the bigger picture.
Preferred Skills:
- Experience with Agile or Scrum workflows.
- Familiarity with UX/UI best practices and working with design systems.
- Exposure to APIs, databases, or cloud-based platforms is a plus.
- Comfortable with basic data analysis and tools like Excel, SQL, or analytics dashboards.
Who You Are:
- A doer who turns ideas into working solutions.
- A collaborator who thrives in tech teams and enjoys building alongside others.
- A catalyst who nudges things forward with curiosity, speed, and smart experimentation.

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
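For illustration, a minimal sketch of the batch ELT work this posting describes, written with PySpark and Delta Lake (assumes a Databricks cluster or a local session with the delta-spark package configured; paths are illustrative):

```python
from pyspark.sql import SparkSession, functions as F

# Batch ELT from a semi-structured landing zone into a curated Delta table.
spark = SparkSession.builder.appName("orders-elt").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # illustrative landing-zone path
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)        # simple data-quality gate
)

(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("order_date")           # partitioning for pruning and cost control
      .save("/mnt/curated/orders"))
```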
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.
Technical Skills:
- Hands-on experience with AWS, Google Cloud Platform (GCP), and Microsoft Azure cloud computing
- Proficiency in Windows Server and Linux server environments
- Proficiency with Internet Information Services (IIS), Nginx, Apache, etc.
- Experience deploying .NET applications (ASP.NET, MVC, Web API, WCF, etc.), as well as .NET Core, Python, and Node.js applications
- Familiarity with GitLab or GitHub for version control and Jenkins for CI/CD processes
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
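Given the scripting skills listed above, here is a small Python sketch of a post-deployment health check of the kind such automation typically involves (the endpoint URL is illustrative):

```python
import sys
import urllib.request

# Illustrative health endpoint; swap in the real service URL.
URL = "https://example.internal/healthz"

def main() -> int:
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:   # covers URLError, HTTPError, timeouts
        ok = False
    print(f"{URL}: {'healthy' if ok else 'unreachable'}")
    return 0 if ok else 1  # non-zero exit fails a CI/CD gate

if __name__ == "__main__":
    sys.exit(main())
```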