50+ Google Cloud Platform (GCP) Jobs in India




🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate Joiner
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development (see the sketch after this list).
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing.
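For context on the Spark and Python work listed above, a minimal PySpark sketch of a distributed aggregation job; the GCS paths, table, and column names are illustrative, not from the posting.

```python
# Minimal PySpark aggregation sketch; paths and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-rollup").getOrCreate()

# Hypothetical input: Parquet files landed in a GCS bucket.
orders = spark.read.parquet("gs://example-bucket/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Partitioned write keeps downstream reads cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://example-bucket/rollups/daily_orders/"
)
```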


Requirements
- 7+ years of experience with Python
- Strong expertise in Python frameworks (Django, Flask, or FastAPI)
- Experience with GCP, Terraform, and Kubernetes
- Deep understanding of REST API development and GraphQL
- Strong knowledge of SQL and NoSQL databases
- Experience with microservices architecture
- Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
- Experience with container orchestration using Kubernetes
- Understanding of cloud architecture and serverless computing
- Experience with monitoring and logging solutions
- Strong background in writing unit and integration tests
- Familiarity with AI/ML concepts and integration points
Responsibilities
- Design and develop scalable backend services for our AI platform
- Architect and implement complex systems with high reliability
- Build and maintain APIs for internal and external consumption (see the sketch after this list)
- Work closely with AI engineers to integrate ML functionality
- Optimize application performance and resource utilization
- Make architectural decisions that balance immediate needs with long-term scalability
- Mentor junior engineers and promote best practices
- Contribute to the evolution of our technical standards and processes
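To give a flavor of the API work described above, a minimal FastAPI sketch; the endpoint, request/response models, and hard-coded response are illustrative placeholders, not part of the actual platform.

```python
# Minimal FastAPI sketch; endpoint and model names are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class PredictionRequest(BaseModel):
    text: str

class PredictionResponse(BaseModel):
    label: str
    score: float

@app.post("/predict", response_model=PredictionResponse)
def predict(req: PredictionRequest) -> PredictionResponse:
    # Placeholder for a call into the ML layer.
    if not req.text.strip():
        raise HTTPException(status_code=400, detail="text must be non-empty")
    return PredictionResponse(label="positive", score=0.87)
```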

Job Title: Technical Architect
Experience: 8 to 12+ Years
Location: Trivandrum / Kochi / Remote
Work Mode: Remote flexibility available
Notice Period: Immediate to max 15 days (30 days with negotiation possible)
Summary:
We are looking for a highly skilled Technical Architect with expertise in Java Full Stack development, cloud architecture, and modern frontend frameworks (Angular). This is a client-facing, hands-on leadership role, ideal for technologists who enjoy designing scalable, high-performance, cloud-native enterprise solutions.
🛠 Key Responsibilities:
- Architect scalable and high-performance enterprise applications.
- Hands-on involvement in system design, development, and deployment.
- Guide and mentor development teams in architecture and best practices.
- Collaborate with stakeholders and clients to gather and refine requirements.
- Evaluate tools, processes, and drive strategic technical decisions.
- Design microservices-based solutions deployed over cloud platforms (AWS/Azure/GCP).
✅ Mandatory Skills:
- Backend: Java, Spring Boot, Python
- Frontend: Angular (at least 2 years of recent hands-on experience)
- Cloud: AWS / Azure / GCP
- Architecture: Microservices, EAI, MVC, Enterprise Design Patterns
- Data: SQL / NoSQL, Data Modeling
- Other: Client handling, team mentoring, strong communication skills
➕ Nice-to-Have Skills:
- Mobile technologies (Native / Hybrid / Cross-platform)
- DevOps & Docker-based deployment
- Application Security (OWASP, PCI DSS)
- TOGAF familiarity
- Test-Driven Development (TDD)
- Analytics / BI / ML / AI exposure
- Domain knowledge in Financial Services or Payments
- 3rd-party integration tools (e.g., MuleSoft, BizTalk)
⚠️ Important Notes:
- Only candidates from outside Hyderabad/Telangana and non-JNTU graduates will be considered.
- Candidates must be serving notice or joinable within 30 days.
- Client-facing experience is mandatory.
- Java Full Stack candidates are highly preferred.
🧭 Interview Process:
- Technical Assessment
- Two Technical Interview Rounds
- Final Round
Job Title: Lead DevOps Engineer
Experience Required: 4 to 5 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible (see the sketch after this list).
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
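As one illustration of the infrastructure-as-code automation mentioned above, a minimal Pulumi sketch in Python that provisions a GCS bucket; the resource names and settings are assumptions for illustration, and Terraform or Ansible would express the same idea in their own syntax.

```python
# Minimal Pulumi (Python) IaC sketch; names and settings are illustrative.
import pulumi
import pulumi_gcp as gcp

# A versioned bucket for build artifacts, with uniform access control.
artifacts = gcp.storage.Bucket(
    "build-artifacts",
    location="ASIA-SOUTH1",
    uniform_bucket_level_access=True,
    versioning=gcp.storage.BucketVersioningArgs(enabled=True),
)

pulumi.export("bucket_url", artifacts.url)
```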
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, GitHub Actions, or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
Bangalore / Chennai
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of Conceptual, Logical and Physical data modelling
- Strong understanding of indexing, partitioning, and data sharding, with practical hands-on experience
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction.
- Working experience with at least one data modelling tool, preferably DBSchema or Erwin
- Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery.
- Functional knowledge of the mutual fund industry is a plus
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement Dimensional and Fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional data from ERP, CRM, and HRIS applications to model, extract, and transform data into reporting and analytics.
● Define and document the use of BI through user experiences/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes; analyze data to identify and articulate trends, patterns, outliers, and quality issues; continuously validate reports and dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
Required Skills:
● Bachelor’s degree in Computer Science or similar field or equivalent work experience.
● 5+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.
● Expert with data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience in GCP & Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Function, and GCS (see the sketch after this list)
● Good to have knowledge on SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience in Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSON, etc.
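To illustrate the Composer (Airflow) orchestration listed above, a minimal DAG sketch that runs a BigQuery load step; the DAG id, project, dataset, and SQL are illustrative, and the `schedule` argument assumes Airflow 2.4+.

```python
# Minimal Cloud Composer / Airflow DAG sketch; names and SQL are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_dw_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    load_fact_sales = BigQueryInsertJobOperator(
        task_id="load_fact_sales",
        configuration={
            "query": {
                "query": """
                    INSERT INTO `example-project.dw.fact_sales`
                    SELECT * FROM `example-project.staging.sales`
                    WHERE sale_date = '{{ ds }}'
                """,
                "useLegacySql": False,
            }
        },
    )
```

In Composer, a DAG like this would be deployed by placing the file in the environment's `dags/` bucket; the `{{ ds }}` macro scopes each run to its logical date.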

Senior Data Analyst – Power BI, GCP, Python & SQL
Job Summary
We are looking for a Senior Data Analyst with strong expertise in Power BI, Google Cloud Platform (GCP), Python, and SQL to design data models, automate analytics workflows, and deliver business intelligence that drives strategic decisions. The ideal candidate is a problem-solver who can work with complex datasets in the cloud, build intuitive dashboards, and code custom analytics using Python and SQL.
Key Responsibilities
* Develop advanced Power BI dashboards and reports based on structured and semi-structured data from BigQuery and other GCP sources.
* Write and optimize complex SQL queries (BigQuery SQL) for reporting and data modeling.
* Use Python to automate data preparation tasks, build reusable analytics scripts, and support ad hoc data requests.
* Partner with data engineers and stakeholders to define metrics, build ETL pipelines, and create scalable data models.
* Design and implement star/snowflake schema models and DAX measures in Power BI.
* Maintain data integrity, monitor performance, and ensure security best practices across all reporting systems.
* Drive initiatives around data quality, governance, and cost optimization on GCP.
* Mentor junior analysts and actively contribute to analytics strategy and roadmap.
Must-Have Skills
* Expert-level SQL: hands-on experience writing complex queries in BigQuery, optimizing joins, window functions, and CTEs (see the sketch after this list).
* Proficiency in Python: data wrangling, Pandas, NumPy, automation scripts, API consumption, etc.
* Power BI expertise: building dashboards, using DAX, Power Query (M), custom visuals, report performance tuning.
* GCP hands-on experience: especially with BigQuery, Cloud Storage, and optionally Cloud Composer or Dataflow.
* Strong understanding of data modeling, ETL pipelines, and analytics workflows.
* Excellent communication skills and the ability to explain data insights to non-technical audiences.
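As a concrete taste of the BigQuery-plus-Python workflow described above, a short sketch using the official client library and a window function; the project, dataset, and column names are illustrative.

```python
# BigQuery + Python sketch; project/dataset/column names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT
        region,
        month,
        revenue,
        SUM(revenue) OVER (
            PARTITION BY region ORDER BY month
        ) AS running_revenue
    FROM `example-project.analytics.monthly_revenue`
    ORDER BY region, month
"""

df = client.query(sql).to_dataframe()  # pandas DataFrame for further analysis
print(df.head())
```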
Preferred Qualifications
* Experience in version control (Git) and working in CI/CD environments.
* Google Professional Data Engineer certification.
* PL-300: Microsoft Power BI Data Analyst Associate certification.


Job Title: Full-Stack Developer
Location: Bangalore/Remote
Type: Full-time
About Eblity:
Eblity’s mission is to empower educators and parents to help children facing challenges.
Over 50% of children in mainstream schools face academic or behavioural challenges, most of which go unnoticed and underserved. By providing the right support at the right time, we could make a world of difference to these children.
We serve a community of over 200,000 educators and parents and over 3,000 schools.
If you are purpose-driven and want to use your skills in technology to create a positive impact for children facing challenges and their families, we encourage you to apply.
Join us in shaping the future of inclusive education and empowering learners of all abilities.
Role Overview:
As a full-stack developer, you will lead the development of critical applications.
These applications enable services for parents of children facing various challenges such as Autism, ADHD and Learning Disabilities, and for experts who can make a significant difference in these children’s lives.
You will be part of a small, highly motivated team who are constantly working to improve outcomes for children facing challenges like Learning Disabilities, ADHD, Autism, Speech Disorders, etc.
Job Description:
We are seeking a talented and proactive Full Stack Developer with hands-on experience in the React / Python / Postgres stack, leveraging Cursor and Replit for full-stack development. As part of our product development team, you will work on building responsive, scalable, and user-friendly web applications, utilizing both front-end and back-end technologies. Your expertise with Cursor as an AI agent-based development platform and Replit will be crucial for streamlining development processes and accelerating product timelines.
Responsibilities:
- Design, develop, and maintain front-end web applications using React, ensuring a responsive, intuitive, and high-performance user experience.
- Build and optimize the back-end using FastAPI or Flask and PostgreSQL, ensuring scalability, performance, and maintainability.
- Leverage Replit for full-stack development, deploying applications, managing cloud resources, and streamlining collaboration across team members.
- Utilize Cursor, an AI agent-based development platform, to enhance application development, automate processes, and optimize workflows through AI-driven code generation, data management, and integration.
- Collaborate with cross-functional teams (back-end developers, designers, and product managers) to gather requirements, design solutions, and implement them seamlessly across the front-end and back-end.
- Design and implement PostgreSQL database schemas, writing optimized queries to ensure efficient data retrieval and integrity (see the sketch after this list).
- Integrate RESTful APIs and third-party services across the React front-end and FastAPI/Flask/PostgreSQL back-end, ensuring smooth data flow.
- Implement and optimize reusable React components and FastAPI/Flask functions to improve code maintainability and application performance.
- Conduct thorough testing, including unit, integration, and UI testing, to ensure application stability and reliability.
- Optimize both front-end and back-end applications for maximum speed and scalability, while resolving performance issues in both custom code and integrated services.
- Stay up-to-date with emerging technologies to continuously improve the quality and efficiency of our solutions.
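For a sense of the PostgreSQL schema design mentioned above, a minimal SQLAlchemy sketch; the tables and fields are hypothetical, not Eblity's actual schema.

```python
# Minimal SQLAlchemy model sketch; tables and fields are hypothetical.
from sqlalchemy import (
    Column, DateTime, ForeignKey, Integer, String, create_engine, func,
)
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Expert(Base):
    __tablename__ = "experts"
    id = Column(Integer, primary_key=True)
    name = Column(String(120), nullable=False)
    appointments = relationship("Appointment", back_populates="expert")

class Appointment(Base):
    __tablename__ = "appointments"
    id = Column(Integer, primary_key=True)
    expert_id = Column(Integer, ForeignKey("experts.id"), nullable=False, index=True)
    scheduled_at = Column(DateTime, server_default=func.now())
    expert = relationship("Expert", back_populates="appointments")

# Illustrative connection string; real credentials come from configuration.
engine = create_engine("postgresql+psycopg2://user:pass@localhost/appdb")
Base.metadata.create_all(engine)
```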
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 2+ years of experience in React development, with strong knowledge of component-based architecture, state management, and front-end best practices.
- Proven experience in Python development, with expertise in building web applications using frameworks like FastAPI or Flask.
- Solid experience working with PostgreSQL, including designing database schemas, writing optimized queries, and ensuring efficient data retrieval.
- Experience with Cursor, an AI agent-based development platform, to enhance full-stack development through AI-driven code generation, data management, and automation.
- Experience with Replit for full-stack development, deploying applications, and collaborating within cloud-based environments.
- Experience working with RESTful APIs, including their integration into both front-end and back-end systems.
- Familiarity with development tools and frameworks such as Git, Node.js, and Nginx.
- Strong problem-solving skills, a keen attention to detail, and the ability to work independently or within a collaborative team environment.
- Excellent communication skills to effectively collaborate with team members and stakeholders.
Nice-to-Have:
- Experience with other front-end frameworks (e.g., Vue, Angular).
- Familiarity with Agile methodologies and project management tools like Jira.
- Understanding of cloud technologies and experience deploying applications to platforms like AWS or Google Cloud.
- Knowledge of additional back-end technologies or frameworks.
What We Offer:
- A collaborative and inclusive work environment that values every team member’s input.
- Opportunities to work on innovative projects using Cursor and Replit for full-stack development.
- Competitive salary and comprehensive benefits package.
- Flexible working hours and potential for remote work options.
Location: Remote
If you're passionate about full-stack development and leveraging AI-driven platforms like Cursor and Replit to build scalable solutions, apply today to join our forward-thinking team!

What You’ll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York City, India, and Bosnia. The role will focus on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to measurement of campaign outcomes, Rx, and the patient journey, and support the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic, and clickstream data, performing analysis and creating actionable insights, and summarizing and presenting results and recommended actions to internal stakeholders and external clients as needed.
- Explore ways to create better predictive models
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights
- Explore ways of using inference, statistical, machine learning techniques to improve the performance of existing algorithms and decision heuristics
- Design and deploy new iterations of production-level code
- Contribute posts to our upcoming technical blog
Who You Are:
- Bachelor’s degree in a STEM field such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, Operations Research, or Data Science. A graduate degree is strongly preferred
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertising, telecom, or other areas requiring customer-level predictive analytics
- Advanced proficiency in performing statistical analysis in Python, including the relevant libraries, is required
- Experience with data processing and transformation and with building model pipelines using tools such as Spark, Airflow, and Docker
- You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications)
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…); a small example follows this list
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing
- You can write production level code, work with Git repositories
- Active Kaggle participant
- Working experience with SQL
- Familiar with medical and healthcare data (medical claims, Rx, preferred)
- Conversant with cloud technologies such as AWS or Google Cloud
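By way of illustration for the boosting-based modeling mentioned above, a minimal scikit-learn sketch on synthetic data; the model choice and parameters are illustrative only.

```python
# Minimal boosting-model sketch with scikit-learn; data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(
    n_estimators=200, learning_rate=0.05, random_state=42
)
model.fit(X_train, y_train)

# Evaluate on the holdout set.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```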

As a Principal Software Engineer on our team, you will:
- Design and deliver the next generation of Toast products using our technology stack, which includes Kotlin, DynamoDB, React, Pulsar, Apache Camel, GraphQL, and Big Data technologies.
- Collaborate with our Data Platform teams to develop best-in-class reporting and analytics products.
- Document solution designs, write and review code, test, and roll out solutions to production. Capture and act on customer feedback to iteratively enhance the customer experience.
- Work closely with peers to optimize solution design for performance, flexibility, and scalability — enabling multiple product and engineering teams to work on a shared framework and platform.
- Partner with UX, Product Management, QA, and other engineering teams to build robust, high-quality solutions in a fast-moving, complex environment.
- Coach and mentor engineers on best practices and modern software development standards.
Do you have the right ingredients? (Requirements)
- 12+ years of software development experience.
- Proven experience in delivering high-quality, reliable, and scalable services to production in a continuous delivery environment.
- Expertise in AI, Cloud technologies, Image Processing, and Full Stack Development.
- Strong database skills, proficient in SQL Server, PostgreSQL, or DynamoDB.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in one or more object-oriented programming languages like Java, Kotlin, or C#.
- Hands-on experience working in Agile/Scrum environments.
- Demonstrated ability to lead the development and scaling of mission-critical platform components.
- Strong problem-solving skills, with the ability to navigate complex and ambiguous challenges and clearly communicate solutions.
- Skilled in balancing delivery speed with platform stability.
- Passionate about writing clean, maintainable, and impactful code.
- Experience in mentoring and coaching other engineers.
TECHNICAL MANAGER
Department: Product Engineering Location: Noida/Chennai
Experience: 12+ years with 2+ years in a similar role
Job Summary:
We are looking for an inspiring leader to lead a dynamic R&D team with a strong “Product & Customer” spirit. As an Engineering Manager, you will be responsible for the entire process, from design and specification to code quality, process integration, and delivery performance.
Key Responsibilities:
• Collaborate closely with Product Management teams to design and develop business modules.
• As a manager, coordinate a diverse team and ensure collaboration between different departments; manage empathetically and fairly yet demandingly, with particular attention to operational excellence.
• Actively participate in resolving technical issues and challenges that the team encounters during development, as well as escalated client implementation and production issues.
• Anticipate technical challenges and address them proactively to minimize disruptions to the development process. Guide the team in making architectural choices.
• Promote and advocate for best practices in software development, including coding standards, testing practices, and documentation.
• Make informed decisions on technical trade-offs and communicate those decisions to the team and stakeholders.
• Stay on top of critical client/implementation issues and keep stakeholders informed.
PROFILE
• Good proficiency overlap with technologies like: Java 17, Spring, Spring MVC, RESTful web services, Hibernate, RDBMS, Spring Security, Ansible, Docker, Kubernetes, JMeter, Angular.
• Strong experience with development tools and CI/CD pipelines. Extensive experience with Agile.
• Deep understanding of cloud technologies on at least one of the major cloud platforms: AWS, Azure, or Google Cloud.
• Strong communicator with the ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.
• Proven leadership skills.
• Product development experience preferred. Fintech or lending domain experience is a plus.
• Engineering degree or equivalent.
Roles and Responsibilities:
• Independently analyze, solve, and correct issues in real time, providing problem resolution end-to-end.
• Strong experience in development tools, CI/CD pipelines. Extensive experience with Agile.
• Good proficiency overlap with technologies like: Java 8, Spring, Spring MVC, RESTful web services, Hibernate, Oracle PL/SQL, Spring Security, Ansible, Docker, JMeter, Angular.
• Strong fundamentals and clarity in REST web services. Should have exposure to developing REST services that handle large data sets.
• Fintech or lending domain experience is a plus but not necessary.
• Deep understanding of cloud technologies on at least one of the cloud platforms AWS, Azure or Google Cloud
• Wide knowledge of technology solutions and ability to learn and work with emerging technologies, methodologies, and solutions.
• Strong communicator with ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.
• Provide vision leadership for the technology roadmap of our products. Understand product capabilities and strategize technology for its alignment with business objectives and maximizing ROI.
• Define technical software architectures and lead development of frameworks.
• Engage end to end in product development, starting from business requirements to realization of product and to its deployment in production.
• Research, design, and implement the complex features being added to existing products and/or create new applications / components from scratch.
Minimum Qualifications
• Bachelor’s or higher engineering degree in Computer Science, or related technical field, or equivalent additional professional experience.
• 5 years of experience in delivering solutions from concept to production that are based on Java and open-source technologies as an enterprise architect in global organizations.
• 12-15 years of industry experience in design, development, deployments, operations and managing non-functional perspectives of technical solutions.
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and implement software solutions that power machine learning models, particularly in LLMs
- Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects
- Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges
- Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility
- Design and implement scalable infrastructure for developing and deploying GenAI solutions
- Support model deployment and API integration to ensure interaction with existing enterprise systems.
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 3-5 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiar with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are looking for a Senior Software Development Engineer with 5-8 years of experience specializing in infrastructure deployment automation and VMware workload migration. The ideal candidate will have expertise in Infrastructure-as-Code (IaC), VMware vSphere, vMotion, HCX, Terraform, Kubernetes, and AI POD managed services. You will be responsible for automating infrastructure provisioning, migrating workloads from VMware environments to cloud and hybrid infrastructures, and optimizing AI/ML deployments.
Key Roles & Responsibilities
- Automate infrastructure deployment using Terraform, Ansible, and Helm for VMware and cloud environments.
- Develop and implement VMware workload migration strategies, including vMotion, HCX, SRM (Site Recovery Manager), and lift-and-shift migrations.
- Migrate VMware-based workloads to public cloud (AWS, Azure, GCP) or hybrid cloud environments.
- Optimize and manage AI POD workloads on VMware and Kubernetes-based environments.
- Leverage VMware HCX for live and bulk workload migrations, ensuring minimal downtime and optimal performance.
- Automate virtual machine provisioning and lifecycle management using VMware vSphere APIs, PowerCLI, or vRealize Automation.
- Integrate VMware workloads with Kubernetes for containerized AI/ML workflows.
- Ensure workload high availability and disaster recovery post-migration using VMware SRM, vSAN, and backup strategies.
- Monitor and troubleshoot migration performance using vRealize Operations, Prometheus, Grafana, and ELK.
- Develop and optimize CI/CD pipelines to automate workload migration, deployment, and validation.
- Ensure security and compliance for workloads before, during, and after migration.
- Collaborate with cloud architects to design hybrid cloud solutions supporting AI/ML workloads.
Basic Qualifications
- 5–8 years of experience in infrastructure automation, VMware workload migration, and cloud integration.
- Expertise in VMware vSphere, ESXi, vMotion, HCX, SRM, vSAN, and NSX-T.
- Hands-on experience with workload migration tools such as VMware HCX, CloudEndure, AWS Application Migration Service, and Azure Migrate.
- Proficiency in Infrastructure-as-Code using Terraform, Ansible, PowerCLI, and vRealize Automation.
- Strong experience with Kubernetes (EKS, AKS, GKE) and containerized AI/ML workloads.
- Experience in public cloud migration (AWS, Azure, GCP) for VMware-based workloads.
- Hands-on knowledge of CI/CD tools such as Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Strong scripting and automation skills in Python, Bash, or PowerShell.
- Familiarity with disaster recovery, backup, and business continuity planning in VMware environments.
- Experience in performance tuning and troubleshooting for VMware-based workloads.
Preferred Qualifications
- Experience with NVIDIA GPU orchestration (e.g., KubeFlow, Triton, RAPIDS).
- Familiarity with Packer for automated VM image creation.
- Exposure to Edge AI deployments, federated learning, and AI inferencing at scale.
- Contributions to open-source infrastructure automation projects.
About the Role
We are looking for a talented LLM & Backend Engineer to join our AI innovation team at EaseMyTrip.com and help power the next generation of intelligent travel experiences. In this role, you will lead the integration and optimization of Large Language Models (LLMs) to create conversational travel agents that can understand, recommend, and assist travelers across platforms. You will work at the intersection of backend systems, AI models, and natural language understanding, bringing smart automation to every travel interaction.
Key Responsibilities:
- LLM Integration: Deploy and integrate LLMs (e.g., GPT-4, Claude, Mistral) to process natural language queries and deliver personalized travel recommendations.
- Prompt Engineering & RAG: Design optimized prompts and implement Retrieval-Augmented Generation (RAG) workflows to enhance contextual relevance in multi-turn conversations (see the sketch after this list).
- Conversational Flow Design: Build and manage robust conversational workflows capable of handling complex travel scenarios such as booking modifications and cancellations.
- LLM Performance Optimization: Tune models and workflows to balance performance, scalability, latency, and cost across diverse environments.
- Backend Development: Develop scalable, asynchronous backend services using FastAPI or Django, with a focus on secure and efficient API architectures.
- Database & ORM Design: Design and manage data using PostgreSQL or MongoDB, and implement ORM solutions like SQLAlchemy for seamless data interaction.
- Cloud & Serverless Infrastructure: Deploy solutions on AWS, GCP, or Azure using containerized and serverless tools such as Lambda and Cloud Functions.
- Model Fine-Tuning & Evaluation: Fine-tune open-source and proprietary LLMs using techniques like LoRA and PEFT, and evaluate outputs using BLEU, ROUGE, or similar metrics.
- NLP Pipeline Implementation: Develop NLP functionalities including named entity recognition, sentiment analysis, and dialogue state tracking.
- Cross-Functional Collaboration: Work closely with AI researchers, frontend developers, and product teams to ship impactful features rapidly and iteratively.
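To ground the RAG workflow described above, a minimal sketch using the OpenAI Python client; the `retrieve_chunks` helper is a hypothetical stand-in for a vector-database lookup, and the model name is illustrative.

```python
# Minimal RAG sketch; `retrieve_chunks` is a hypothetical stand-in for a
# vector-database query, and the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def retrieve_chunks(query: str, k: int = 3) -> list[str]:
    # Hypothetical retriever; a real system would query a vector store.
    return ["Refund policy: flights cancelled within 24h are fully refundable."][:k]

def answer(query: str) -> str:
    context = "\n".join(retrieve_chunks(query))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

print(answer("Can I cancel my flight booked an hour ago?"))
```

Grounding the system prompt in retrieved context is what keeps multi-turn answers tied to current policy data rather than the model's parametric memory.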
Preferred Candidate Profile:
- Experience: Minimum 2 years in backend development with at least 1 year of hands-on experience working with LLMs or NLP systems.
- Programming Skills: Proficient in Python with practical exposure to asynchronous programming and frameworks like FastAPI or Django.
- LLM Ecosystem Expertise: Experience with tools and libraries such as LangChain, LlamaIndex, Hugging Face Transformers, and OpenAI/Anthropic APIs.
- Database Knowledge: Strong understanding of relational and NoSQL databases, including schema design and performance optimization.
- Model Engineering: Familiarity with prompt design, LLM fine-tuning (LoRA, PEFT), and evaluation metrics for language models.
- Cloud Deployment: Comfortable working with cloud platforms (AWS/GCP/Azure) and building serverless or containerized deployments.
- NLP Understanding: Solid grasp of NLP concepts including intent detection, dialogue management, and text classification.
- Problem-Solving Mindset: Ability to translate business problems into AI-first solutions with a user-centric approach.
- Team Collaboration: Strong communication skills and a collaborative spirit to work effectively with multidisciplinary teams.
- Curiosity and Drive: Passionate about staying at the forefront of AI and using emerging technologies to build innovative travel experiences.

Why This Role Matters
- We are looking for a Staff Engineer to lead the technical direction and hands-on development of our next-generation, agentic AI-first marketing platforms. This is a high-impact role to architect, build, and ship products that change how marketers interact with data, plan campaigns, and make decisions.
What You'll Do
- Build Gen-AI native products: Architect, build, and ship platforms powered by LLMs, agents, and predictive AI
- Stay hands-on: Design systems, write code, debug, and drive product excellence
- Lead with depth: Mentor a high-caliber team of full stack engineers.
- Speed to market: Rapidly ship and iterate on MVPs to maximize learning and feedback.
- Own the full stack: From backend data pipelines to intuitive UIs, from Airflow to React, from BigQuery to embeddings.
- Scale what works: Ensure scalability, security, and performance in multi-tenant, cloud-native environments (GCP).
- Collaborate deeply: Work closely with product, growth, and leadership to align tech with business priorities.
What You Bring
- 8+ years of experience building and scaling full-stack, data-driven products
- Proficiency in backend (Node.js, Python) and frontend (React), with solid GCP experience
- Strong grasp of data pipelines, analytics, and real-time data processing
- Familiarity with Gen-AI frameworks (LangChain, LlamaIndex, OpenAI APIs, vector databases)
- Proven architectural leadership and technical ownership
- Product mindset with a bias for execution and iteration
Our Tech Stack
- Cloud: Google Cloud Platform
- Backend: Node.js, Python, Airflow
- Data: BigQuery, Cloud SQL
- AI/ML: TensorFlow, OpenAI APIs, custom agents
- Frontend: React.js
What You Get
- Meaningful equity in a high-growth startup
- The chance to build global products from India
- A culture that values clarity, ownership, learning, humility, and candor
- A rare opportunity to build with Gen-AI from the ground up
Who You Are
- You’re initiative-driven, not interruption-driven.
- You code because you love building things that matter.
- You enjoy ambiguity and solve problems from first principles.
- You believe true leadership is contextual, hands-on, and grounded.
- You’re here to build — not just maintain.
- You care deeply about seeing your products empower real users, run reliably at scale, and adapt intelligently with minimal manual effort.
- You know that elegant code is just 30% of the job — the real craft lies in the engineering rigour, edge-case handling, and production resilience that make great products truly dependable.

We’re looking for a Product Ninja with the mindset of a Tech Catalyst — a proactive executor who thrives at the intersection of product, technology, and user experience. In this role, you’ll bring product ideas to life, translate strategy into action, and collaborate closely with engineers, designers, and stakeholders to deliver impactful solutions.
This role is ideal for someone who’s hands-on, detail-oriented, and passionate about using technology to create real customer value.
Responsibilities:
- Support the definition and execution of the product roadmap in alignment with business goals.
- Work closely with engineering, design, QA, and marketing teams to drive product development.
- Translate product requirements into detailed specs, user stories, and acceptance criteria.
- Conduct competitive research and analyze user feedback to inform feature enhancements.
- Track product performance post-launch and gather insights for continuous improvement.
- Assist in managing the full product lifecycle, from ideation to rollout.
- Be a tech-savvy contributor, suggesting improvements based on emerging tools, platforms, and technologies.
Qualification:
- Bachelor’s degree in Business, Marketing, Computer Science, or a related field.
- 3+ years of hands-on experience in product management, product operations, or related roles.
- Comfortable working in fast-paced, cross-functional tech environments.
Required Skills:
- Strong analytical and problem-solving abilities.
- Clear, concise communication and documentation skills.
- Proficiency with project and product management tools (e.g., JIRA, Trello, Confluence).
- Ability to manage details without losing sight of the bigger picture.
Preferred Skills:
- Experience with Agile or Scrum workflows.
- Familiarity with UX/UI best practices and working with design systems.
- Exposure to APIs, databases, or cloud-based platforms is a plus.
- Comfortable with basic data analysis and tools like Excel, SQL, or analytics dashboards.
Who You Are:
- A doer who turns ideas into working solutions.
- A collaborator who thrives in tech teams and enjoys building alongside others.
- A catalyst who nudges things forward with curiosity, speed, and smart experimentation.


🚀 React Developer – Gurgaon (Onsite Role)
🧑💻 Experience: 2 to 3 Years
🕒 Joining Time: Immediate to 15 Days
📍 Location: Gurgaon (Work from Office)
We’re hiring a passionate and skilled React Developer who is ready to take their frontend game to the next level. If you’re someone who breathes clean UI, scalable components, and loves delivering pixel-perfect performance — this is your chance.
🧩 Your Responsibilities:
- Develop and maintain rich, interactive web applications using React.js.
- Architect front-end flows using Flux or Redux for seamless state management.
- Design reusable component libraries with attention to responsiveness and accessibility.
- Integrate with REST APIs and manage secure data exchange.
- Collaborate with cross-functional teams (UI/UX, Backend, QA) for seamless releases.
- Optimize application speed, bundle sizes, and rendering performance.
- Implement routing and navigation using React Router and handle form states with Formik or React Hook Form.
- Follow CI/CD workflows, write test cases, and participate in code reviews.
✅ Core Skills You Must Have:
- Strong hands-on experience in React.js, JavaScript (ES6+), and TypeScript.
- Deep knowledge of React Hooks, Context API, Redux, and Flux architecture.
- Excellent command of HTML5, CSS3, SASS, and responsive design frameworks like Material UI, Tailwind, or Bootstrap.
- Familiar with Webpack, Babel, Vite, and other bundling tools.
- Experience with REST API integration, error handling, and API security best practices.
- Solid understanding of unit testing tools like Jest, React Testing Library, or Cypress.
- Experience in Git-based workflows, Agile teams, and JIRA or similar tracking tools.
☁️ Nice to Have:
- Exposure to cloud platforms like AWS, Azure, or GCP (deployment, storage, CI/CD pipelines).
- Experience with Docker (for frontend containerization during builds).
- Familiarity with micro frontends architecture or component-based scalable design.
- Basic knowledge of Node.js for API understanding and backend collaboration.
🌟 What You Get:
- Work on enterprise-level projects with modern frontend architecture
- Exposure to scalable and distributed UI systems
- A collaborative environment with performance-focused peers
- Real career growth and tech leadership opportunities

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency (see the sketch after this list).
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
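As a small illustration of the Delta Lake work above, a PySpark sketch that upserts a batch into a Delta table; the paths, columns, and merge key are illustrative, and a Spark session configured with the delta-spark package is assumed.

```python
# Delta Lake upsert sketch; paths/columns are illustrative and the Spark
# session is assumed to be configured with the delta-spark package.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Incoming batch of customer records.
updates = spark.read.parquet("gs://example-bucket/incoming/customers/")

target = DeltaTable.forPath(spark, "gs://example-bucket/delta/customers")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```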
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.
Technical Skills:
- Hands-on experience with AWS, Google Cloud Platform (GCP), and Microsoft Azure cloud computing
- Proficiency in Windows Server and Linux server environments
- Proficiency with Internet Information Services (IIS), Nginx, Apache, etc.
- Experience in deploying .NET (ASP.NET, MVC, Web API, WCF, etc.), .NET Core, Python, and Node.js applications
- Familiarity with GitLab or GitHub for version control and Jenkins for CI/CD processes
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
Role: Senior Software Engineer - Backend
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Senior Backend Engineer with a minimum of 3 years of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that power our applications. You will work closely with cross-functional teams to ensure seamless integration between frontend and backend components, leveraging your expertise to architect scalable, secure, and high-performance solutions. As a senior team member, you will mentor junior developers and lead technical initiatives to drive innovation and excellence.
Annual Compensation: 12-18 LPA
Responsibilities:
- Lead the design, development, and maintenance of scalable and efficient backend systems and APIs.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases and NoSQL databases.
- Mentor and guide junior developers, fostering a culture of knowledge sharing and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and provide technical leadership in architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 3 years of proven experience as a Backend Engineer, with a strong portfolio of product-building projects.
- Strong proficiency in backend development using Java, Python, and JavaScript, with experience in building scalable and high-performance applications.
- Experience with popular backend frameworks and libraries for Java (e.g., Spring Boot) and Python (e.g., Django, Flask).
- Strong expertise in SQL and NoSQL databases (e.g., MySQL, MongoDB) with a focus on data modeling and scalability.
- Practical experience with caching mechanisms (e.g., Redis) to enhance application performance (see the sketch after this list).
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
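To illustrate the caching requirement above, a minimal cache-aside sketch with redis-py; the key naming and TTL are illustrative, and `fetch_user_from_db` is a hypothetical helper.

```python
# Cache-aside sketch with redis-py; `fetch_user_from_db` is hypothetical.
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: int) -> dict:
    # Hypothetical database lookup; replace with a real query.
    return {"id": user_id, "name": "Example User"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit
    user = fetch_user_from_db(user_id)     # cache miss: go to the database
    r.setex(key, 300, json.dumps(user))    # cache for 5 minutes
    return user
```

The cache-aside pattern keeps the database authoritative while the TTL bounds how stale a cached entry can get.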
Job Title: Backend Developer
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.
Annual Compensation: 6-10 LPA
Responsibilities:
- Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
- Promoting a culture of collaboration, knowledge sharing, and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and contribute to architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
- Extensive experience with JavaScript backend frameworks (e.g., Express, Socket.IO) and a deep understanding of their ecosystems.
- Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
- Practical experience with Redis and caching mechanisms to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
Job Description
We are looking for a passionate and skilled Rust Developer with at least 3 years of experience to join our growing development team. The ideal candidate will be proficient in building robust and scalable APIs using the Rocket framework, and have hands-on experience with PostgreSQL and the Diesel ORM. You will be working on performance-critical backend systems, designing APIs, managing deployments, and collaborating closely with other developers.
Responsibilities
- Design, develop, and maintain APIs using Rocket.
- Work with PostgreSQL databases, using Diesel ORM for efficient data access.
- Write clean, maintainable, and efficient Rust code.
- Apply object-oriented and functional programming principles effectively.
- Build and consume RESTful APIs and WebSockets for real-time communication.
- Handle server-side deployments and assist in managing the infrastructure.
- Optimize application performance and ensure high availability.
- Collaborate with frontend developers and DevOps engineers to integrate systems smoothly.
- Participate in code reviews and technical discussions.
- Apply knowledge of data structures and algorithms to solve complex problems efficiently.
Requirements
- 3+ years of experience working with Rust in production environments.
- Strong hands-on experience with Rocket framework.
- Solid understanding of Diesel ORM and PostgreSQL.
- Good grasp of OOP and functional programming concepts.
- Familiarity with RESTful APIs, WebSockets, and other web protocols.
- Experience handling application deployments and basic server management.
- Strong foundation in data structures, algorithms, and software design principles.
- Ability to write clean, well-documented, and testable code.
- Good communication skills and ability to work collaboratively.
Package
- As per industry standards.
Nice to Have
- Experience with CI/CD pipelines.
- Familiarity with containerization tools like Docker.
- Knowledge of cloud platforms (AWS, GCP, etc.).
- Contribution to open-source Rust projects.
- Knowledge of basic cryptographic primitives (AES, hashing, etc.).
Perks & Benefits
- Competitive compensation.
- Flexible work hours and remote-friendly culture.
- Opportunity to work with a modern tech stack.
- Supportive team and growth-oriented environment.
If you're passionate about Rust, love building high-performance systems, and enjoy solving real-world problems with elegant code, we’d love to connect! Apply now and let’s craft powerful backend experiences together! ⚙️🚀
Role - Sr. QA Engineer
Location- Gurgaon
Mode - Hybrid
Experience - 6 Years
Notice Period:- Immediate Joiner
Must-Have:
- Experience in QA automation/platform QA
- Experience in Playwright, Selenium, Rest Assured
- Strong in API & load testing (JMeter, k6)
- GCP or Azure experience
- CI/CD: GitHub Actions, Jenkins
- Drive test automation, CI/CD quality gates, chaos testing & more.
Job Summary:
We are looking for a detail-oriented and proactive QA Engineer with 3–5 years of hands-on experience in both manual and automated testing. The ideal candidate will have strong knowledge of API testing (REST, SOAP, GraphQL), web debugging tools, various cloud platforms, and end-to-end testing frameworks. Excellent communication skills and the ability to work directly with clients from the USA, UK, and Australia are essential. The role requires daily participation in client-facing stand-ups and ongoing collaboration in a dynamic, fast-paced environment focused on continuous learning.
Key Responsibilities:
- Perform API testing for REST, SOAP, and GraphQL; validate HTTP status codes using tools like Postman and Curl (see the sketch after this list).
- Debug browser console, network tab, performance issues, WebSocket, and WebAssembly behaviors.
- Reproduce and log issues with clear, actionable steps and supporting evidence.
- Conduct performance testing and identify bottlenecks.
- Communicate directly with clients from English-speaking countries (USA, UK, Australia) on a daily basis through stand-ups, scrums, and status updates.
- Test various authentication types (OAuth, JWT, API Keys, SSO, SAML etc.).
- Implement and maintain E2E tests using Cypress.
- Perform database validations (PostgreSQL, MySQL, MongoDB, DynamoDB, Elasticsearch).
- Ensure quality through thorough test planning, execution, and adherence to industry best practices.
- Collaborate closely with developers and stakeholders across time zones.
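For illustration, here is a minimal Python sketch of the kind of status-code validation described above, using the requests library; the base URL and endpoints are hypothetical:

```python
import requests

# Hypothetical base URL; any REST endpoint can be validated the same way.
BASE_URL = "https://api.example.com"

def check_endpoint(path: str, expected_status: int = 200) -> None:
    """Call an endpoint and assert the HTTP status code matches expectations."""
    response = requests.get(f"{BASE_URL}{path}", timeout=10)
    assert response.status_code == expected_status, (
        f"{path}: expected {expected_status}, got {response.status_code}"
    )

check_endpoint("/health")        # expect 200 OK
check_endpoint("/missing", 404)  # expect 404 Not Found
```

The same checks can be scripted with Curl or organized into Postman collections; this is just one lightweight way to automate them.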
Required Skills and Experience:
- 3–5 years of experience in QA (manual + automation).
- Strong hands-on experience in API testing including REST, SOAP, and GraphQL.
- Knowledge of Curl, browser dev tools (Console, Network), and debugging web issues.
- Solid understanding of HTTP status codes, WebSocket, WebAssembly, and performance analysis.
- Ability to write reproducible and detailed bug reports.
- Excellent English communication and confidence in handling client discussions.
- Experience testing multiple authentication schemes.
- Working knowledge of cloud platforms: AWS, GCP, Azure.
- Proficiency with PostgreSQL, MySQL, MongoDB, DynamoDB, and Elasticsearch.
- Hands-on experience with Cypress for E2E testing.
- Familiarity with QA methodologies, STLC, SDLC, and test documentation standards.
Preferred:
- Prior experience working directly with clients from the USA or other English-speaking countries.
- Willingness to communicate with clients daily—participate in scrum meetings, standups, and provide regular status updates.
- A passion for continuous learning and ability to explore and adapt to new SaaS products regularly.
- Experience with CI/CD pipelines and Git-based version control.
- QA certifications such as ISTQB are a plus.
What We Offer:
- International project exposure (USA, UK, Australia).
- Collaborative team culture and flexible work environment.
- Continuous learning, mentorship, and career growth.
- Work with modern tools and evolving SaaS technologies.

As a Senior Backend & Infrastructure Engineer, you will take ownership of backend systems and cloud infrastructure. You’ll work closely with our CTO and cross-functional teams (hardware, AI, frontend) to design scalable, fault-tolerant architectures and ensure reliable deployment pipelines.
What You’ll Do :
- Backend Development: Maintain and evolve our Node.js (TypeScript) and Python backend services with a focus on performance and scalability.
- Cloud Infrastructure: Manage our infrastructure on GCP and Firebase (Auth, Firestore, Storage, Functions, AppEngine, PubSub, Cloud Tasks).
- Database Management: Handle Firestore and other NoSQL DBs. Lead database schema design and migration strategies.
- Pipelines & Automation: Build robust real-time and batch data pipelines (see the Pub/Sub sketch after this list). Automate CI/CD and testing for backend and frontend services.
- Monitoring & Uptime: Deploy tools for observability (logging, alerts, debugging). Ensure 99.9% uptime of critical services.
- Dev Environments: Set up and manage developer and staging environments across teams.
- Quality & Security: Drive code reviews, implement backend best practices, and enforce security standards.
- Collaboration: Partner with other engineers (AI, frontend, hardware) to integrate backend capabilities seamlessly into our global system.
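As a rough illustration of the Pub/Sub work referenced above, here is a minimal publisher sketch using the google-cloud-pubsub Python client; the project and topic names are hypothetical:

```python
from google.cloud import pubsub_v1

# Hypothetical project and topic; credentials come from the environment.
PROJECT_ID = "my-gcp-project"
TOPIC_ID = "sensor-events"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# publish() returns a future; result() blocks until the broker acknowledges.
future = publisher.publish(topic_path, b'{"device": "cam-01", "status": "online"}')
print(f"Published message ID: {future.result()}")
```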
Must-Haves :
- 5+ years of experience in backend development and cloud infrastructure.
- Strong expertise in Node.js (TypeScript) and/or Python.
- Advanced skills in NoSQL databases (Firestore, MongoDB, DynamoDB...).
- Deep understanding of cloud platforms, preferably GCP and Firebase.
- Hands-on experience with CI/CD, DevOps tools, and automation.
- Solid knowledge of distributed systems and performance tuning.
- Experience setting up and managing development & staging environments.
- Proficiency in English and remote communication.
Good to have :
- Event-driven architecture experience (e.g., Pub/Sub, MQTT).
- Familiarity with observability tools (Prometheus, Grafana, Google Monitoring).
- Previous work on large-scale SaaS products.
- Knowledge of telecommunication protocols (MQTT, WebSockets, SNMP).
- Experience with edge computing on Nvidia Jetson devices.
What We Offer :
- Competitive salary for the Indian market (depending on experience).
- Remote-first culture with async-friendly communication.
- Autonomy and responsibility from day one.
- A modern stack and a fast-moving team working on cutting-edge AI and cloud infrastructure.
- A mission-driven company tackling real-world environmental challenges.

Role: GCP Data Engineer
Notice Period: Immediate Joiners
Experience: 5+ years
Location: Remote
Company: Deqode
About Deqode
At Deqode, we work with next-gen technologies to help businesses solve complex data challenges. Our collaborative teams build reliable, scalable systems that power smarter decisions and real-time analytics.
Key Responsibilities
- Build and maintain scalable, automated data pipelines using Python, PySpark, and SQL.
- Work on cloud-native data infrastructure using Google Cloud Platform (BigQuery, Cloud Storage, Dataflow).
- Implement clean, reusable transformations using DBT and Databricks.
- Design and schedule workflows using Apache Airflow (see the sketch after this list).
- Collaborate with data scientists and analysts to ensure downstream data usability.
- Optimize pipelines and systems for performance and cost-efficiency.
- Follow best software engineering practices: version control, unit testing, code reviews, CI/CD.
- Manage and troubleshoot data workflows in Linux environments.
- Apply data governance and access control via Unity Catalog or similar tools.
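For illustration, a minimal Airflow DAG sketch showing how extraction and transformation steps might be scheduled; the DAG name and task bodies are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling raw files from Cloud Storage...")

def transform():
    print("running PySpark/DBT transformations...")

with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```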
Required Skills & Experience
- Strong hands-on experience with PySpark, Spark SQL, and Databricks.
- Solid understanding of GCP services (BigQuery, Cloud Functions, Dataflow, Cloud Storage).
- Proficiency in Python for scripting and automation.
- Expertise in SQL and data modeling.
- Experience with DBT for data transformations.
- Working knowledge of Airflow for workflow orchestration.
- Comfortable with Linux-based systems for deployment and troubleshooting.
- Familiar with Git for version control and collaborative development.
- Understanding of data pipeline optimization, monitoring, and debugging.
Position: Project Manager
Location: Bengaluru, India (Hybrid/Remote flexibility available)
Company: PGAGI Consultancy Pvt. Ltd
About PGAGI
At PGAGI, we are building the future where human and artificial intelligence coexist to solve complex problems, accelerate innovation, and power sustainable growth. We develop and deploy advanced AI solutions across industries, making AI not just a tool but a transformational force for businesses and society.
Position Summary
PGAGI is seeking a dynamic and experienced Project Manager to lead cross-functional engineering teams and drive the successful execution of multiple AI/ML-centric projects. The ideal candidate is a strategic thinker with a solid background in engineering-led product/project management, especially in AI/ML product lifecycles. This role is crucial to scaling our technical operations, ensuring seamless collaboration, timely delivery, and high-impact results across initiatives.
Key Responsibilities
• Lead Engineering Teams Across AI/ML Projects: Manage and mentor cross-functional teams of ML engineers, DevOps professionals, and software developers through agile delivery cycles, ensuring timely and high-quality execution of AI-focused initiatives.
• Drive Agile Project Execution: Define project scope, objectives, timelines, and deliverables using Agile/Scrum methodologies. Ensure continuous sprint planning, backlog grooming, and milestone tracking via tools like Jira or GitHub Projects.
• Manage Multiple Concurrent Projects: Oversee the full lifecycle of multiple high-priority projects—ranging from AI model development and infrastructure integration to client delivery and platform enhancements.
• Collaborate with Technical and Business Stakeholders: Act as the bridge between engineering, research, and client-facing teams, translating complex requirements into actionable tasks and product features.
• Maintain Engineering and Infrastructure Quality: Uphold rigorous engineering standards across deployments. Coordinate testing, model performance validation, version control, and CI/CD operations.
• Budget and Resource Allocation: Optimize resource distribution across teams, track project costs, and ensure effective use of cloud infrastructure and personnel to maximize project ROI.
• Risk Management & Mitigation: Identify risks proactively across technical and operational layers. Develop mitigation plans and troubleshoot issues that may impact timelines or performance.
• Monitor KPIs and Delivery Metrics: Establish and monitor performance indicators such as sprint velocity, deployment frequency, incident response times, and customer satisfaction for each release.
• Support Continuous Improvement: Foster a culture of feedback and iteration. Champion retrospectives and process reviews to continually refine development practices and workflows.
Qualifications:
• Education: Bachelor’s or Master’s in Computer Science, Engineering, or a related technical field.
• Experience: Minimum 5 years of experience as a Project Manager, with at least 2 years managing AI/ML or software engineering teams.
• Tech Expertise: Familiarity with AI/ML lifecycles, cloud platforms (AWS, GCP, or Azure), and DevOps pipelines (Docker, Kubernetes, GitHub Actions, Jenkins).
• Tools: Strong experience with Jira, Confluence, and project tracking/reporting tools.
• Leadership: Proven success leading high-performing engineering teams in a fast-paced, innovative environment.
• Communication: Excellent written and verbal skills to interface with both technical and non-technical stakeholders.
• Certifications (Preferred): PMP, CSM, or certifications in AI/ML project management or cloud technologies.
Why Join PGAGI?
• Lead cutting-edge AI/ML product teams building scalable, impactful solutions.
• Be part of a fast-growing, innovation-driven startup environment.
• Enjoy a collaborative, intellectually stimulating workplace with growth opportunities.
• Competitive compensation and performance-based rewards.
• Access to learning resources, mentoring, and AI/DevOps communities.
A backend developer is an engineer who can handle all the work of databases, servers, systems engineering, and clients. Depending on the project, customers may need a mobile stack, a web stack, or a native application stack.
You will be responsible for:
Build reusable code and libraries for future use.
Own & build new modules/features end-to-end independently.
Collaborate with other team members and stakeholders.
Required Skills :
Thorough understanding of Node.js and TypeScript.
Excellence in at least one framework like StrongLoop LoopBack, Express.js, Sails.js, etc.
Basic architectural understanding of modern-day web applications.
Diligence for coding standards.
Must be good with Git and Git workflows.
Experience with external integrations is a plus.
Working knowledge of AWS, GCP, or Azure.
Expertise with Linux-based systems.
Experience with CI/CD tools like Jenkins is a plus.
Experience with testing and automation frameworks.
Extensive understanding of RDBMS systems.

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python (see the sketch after this list).
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
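By way of example, a small pandas sketch of the kind of transformation work described above; the file and column names are hypothetical:

```python
import pandas as pd

# Hypothetical source file with order_id, order_date, region, amount columns.
orders = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Basic cleansing: drop duplicate orders, fill missing amounts with 0.
orders = orders.drop_duplicates(subset="order_id")
orders["amount"] = orders["amount"].fillna(0)

# Simple transformation: daily revenue per region.
daily_revenue = (
    orders.groupby([orders["order_date"].dt.date, "region"])["amount"]
    .sum()
    .reset_index(name="revenue")
)
daily_revenue.to_csv("daily_revenue.csv", index=False)
```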
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.
Job Title: IT Head – Fintech Industry
Department: Information Technology
Location: Andheri East
Reports to: COO
Job Type: Full-Time
Job Overview:
The IT Head in a fintech company is responsible for overseeing the entire information technology infrastructure, including the development, implementation, and maintenance of IT systems, networks, and software solutions. The role involves leading the IT team, managing technology projects, ensuring data security, and ensuring the smooth functioning of all technology operations. As the company scales, the IT Head will play a key role in enabling digital innovation, optimizing IT processes, and ensuring compliance with relevant regulations in the fintech sector.
Key Responsibilities:
1. IT Strategy and Leadership
- Develop and execute the company’s IT strategy to align with the organization’s overall business goals and objectives, ensuring the integration of new technologies and systems.
- Lead, mentor, and manage a team of IT professionals, setting clear goals, priorities, and performance expectations.
- Stay up-to-date with industry trends and emerging technologies, providing guidance and recommending innovations to improve efficiency and security.
- Oversee the design, implementation, and maintenance of IT systems that support fintech products, customer experience, and business operations.
2. IT Infrastructure Management
- Oversee the management and optimization of the company’s IT infrastructure, including servers, networks, databases, and cloud services.
- Ensure the scalability and reliability of IT systems to support the company’s growth and increasing demand for digital services.
- Manage system updates, hardware procurement, and vendor relationships to ensure that infrastructure is cost-effective, secure, and high-performing.
3. Cybersecurity and Data Protection
- Lead efforts to ensure the company’s IT infrastructure is secure, implementing robust cybersecurity measures to protect sensitive customer data, financial transactions, and intellectual property.
- Develop and enforce data protection policies and procedures to ensure compliance with data privacy regulations (e.g., GDPR, CCPA, RBI, etc.).
- Conduct regular security audits and vulnerability assessments, working with the security team to address potential risks proactively.
4. Software Development and Integration
- Oversee the development and deployment of software applications and tools that support fintech operations, including payment gateways, loan management systems, and customer engagement platforms.
- Collaborate with product teams to identify technological needs, integrate new features, and optimize existing products for improved performance and user experience.
- Ensure the seamless integration of third-party platforms, APIs, and fintech partners into the company’s core systems.
5. IT Operations and Support
- Ensure the efficient day-to-day operation of IT services, including helpdesk support, system maintenance, and troubleshooting.
- Establish service level agreements (SLAs) for IT services, ensuring that internal teams and customers receive timely support and issue resolution.
- Manage incident response, ensuring quick resolution of system failures, security breaches, or service interruptions.
6. Budgeting and Cost Control
- Manage the IT department’s budget, ensuring cost-effective spending on technology, software, hardware, and IT services.
- Analyze and recommend investments in new technologies and infrastructure that can improve business performance while optimizing costs.
- Ensure the efficient use of IT resources and the appropriate allocation of budget to support business priorities.
7. Compliance and Regulatory Requirements
- Ensure IT practices comply with relevant industry regulations and standards, such as financial services regulations, data privacy laws, and cybersecurity guidelines.
- Work with legal and compliance teams to ensure that all systems and data handling procedures meet industry-specific regulatory requirements (e.g., PCI DSS, ISO 27001).
- Provide input and guidance on IT-related regulatory audits and assessments, ensuring the organization is always in compliance.
8. Innovation and Digital Transformation
- Drive innovation by identifying opportunities for digital transformation within the organization, using technology to streamline operations and enhance the customer experience.
- Collaborate with other departments (marketing, customer service, product development) to introduce new fintech products and services powered by cutting-edge technology.
- Oversee the implementation of AI, machine learning, and other advanced technologies to enhance business performance, operational efficiency, and customer satisfaction.
9. Vendor and Stakeholder Management
- Manage relationships with external technology vendors, service providers, and consultants to ensure the company gets the best value for its investments.
- Negotiate contracts, terms of service, and service level agreements (SLAs) with vendors and technology partners.
- Ensure strong communication with business stakeholders, understanding their IT needs and delivering technology solutions that align with company objectives.
Qualifications and Skills:
Education:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (Master’s degree or relevant certifications like ITIL, PMP, or CISSP are a plus).
Experience:
- 8-12 years of experience in IT management, with at least 4 years in a leadership role, preferably within the fintech, banking, or technology industry.
- Strong understanding of IT infrastructure, cloud computing, database management, and cybersecurity best practices.
- Proven experience in managing IT teams and large-scale IT projects, especially in fast-paced, growth-driven environments.
- Knowledge of fintech products and services, including digital payments, blockchain, and online lending platforms.
Skills:
- Expertise in IT infrastructure management, cloud services (AWS, Azure, Google Cloud), and enterprise software.
- Strong understanding of cybersecurity protocols, data protection laws, and IT governance frameworks.
- Experience with software development and integration, particularly for fintech platforms.
- Strong project management and budgeting skills, with a track record of delivering IT projects on time and within budget.
- Excellent communication and leadership skills, with the ability to manage cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple priorities in a fast-paced, high-pressure environment.
Role & Responsibilities
Responsible for ensuring that the architecture and design of the platform remains top-notch with respect to scalability, availability, reliability and maintainability
Act as a key technical contributor as well as a hands-on contributing member of the team.
Own end-to-end availability and performance of features, driving rapid product innovation while ensuring a reliable service.
Work closely with various stakeholders like Program Managers, Product Managers, the Reliability and Continuity Engineering (RCE) team, and the QE team to estimate and execute features/tasks independently.
Maintain and drive tech backlog execution for non-functional requirements of the platform required to keep the platform resilient
Assist in release planning and prioritization based on technical feasibility and engineering constraints
A zeal to continually find new ways to improve architecture, design and ensure timely delivery and high quality.
1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, BQ optimization, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud - building pipelines (Python/Java) plus scripting, best practices, and common challenges
3. Knowledge of batch and streaming data ingestion; able to build end-to-end data pipelines on GCP
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs. NoSQL, and types of NoSQL databases (at least 2)
5. Data warehouse concepts - beginner to intermediate level
6. Data modeling, GCP databases, DB schema (or similar)
7. Hands-on data modelling for OLTP and OLAP systems
8. In-depth knowledge of conceptual, logical, and physical data modelling
9. Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same
10. Strong understanding of variables impacting database performance for near-real-time reporting and application interaction
11. Working experience with at least one data modelling tool, preferably DBSchema or Erwin
12. Good understanding of GCP databases like AlloyDB, Cloud SQL, and BigQuery
13. Functional knowledge of the mutual fund industry is a plus
Candidates should be willing to work from Chennai; office presence is mandatory.
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement Dimensional and Fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.


About the Company – Gruve
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.
As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.
Why Gruve
At Gruve, we foster a culture of:
- Innovation, collaboration, and continuous learning
- Diversity and inclusivity, where everyone is encouraged to thrive
- Impact-focused work — your ideas will shape the products we build
We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.
Position Summary
We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.
You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.
Key Responsibilities
- Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
- Build and maintain automation to track:
- Physical Assets: Servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
- Virtual Assets: Load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
- Cloud Assets: Public cloud services, process registry, database resources (see the sketch after this list).
- Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
- Optimize performance and scalability to handle large-scale asset data in real-time.
- Document system architecture, implementation, and usage.
- Generate reports for compliance and auditing.
- Ensure integration with existing systems for streamlined asset management.
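As one hypothetical approach to the cloud-asset slice of this inventory, GCP's Cloud Asset Inventory API can enumerate resources programmatically; a minimal Python sketch (the project scope and asset type are examples only):

```python
from google.cloud import asset_v1

client = asset_v1.AssetServiceClient()
scope = "projects/my-gcp-project"  # hypothetical project

# List all Compute Engine instances visible in the scope.
results = client.search_all_resources(
    request={"scope": scope, "asset_types": ["compute.googleapis.com/Instance"]}
)
for resource in results:
    print(resource.name, resource.location)
```

A production platform would persist these results and reconcile them against physical and virtual asset records; the sketch only shows the API call.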
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related field
- 3–6 years of experience in software development
- Strong proficiency in Golang and Python
- Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
- Deep understanding of automation solutions and parallel computing principles
Preferred Qualifications
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles and Responsibilities:
- Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
- Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage (see the sketch after this list).
- Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
- Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
- Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
- Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
- Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
- Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
- Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
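For illustration, a minimal sketch of querying BigQuery from Python with the google-cloud-bigquery client; the project, dataset, and table names are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT region, COUNT(*) AS consumer_records
    FROM `my-project.consumer_db.profiles`
    GROUP BY region
    ORDER BY consumer_records DESC
"""

for row in client.query(query).result():
    print(f"{row.region}: {row.consumer_records}")
```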
Basic Qualifications
- 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
- Strong proficiency in SQL and experience with BigQuery for data warehousing.
- Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
- Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
- Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
- Knowledge of data governance, security best practices, and compliance requirements in cloud environments.
Preferred Qualifications:
- Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
- Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
- Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
- Proficiency in Python, Java, or Scala for data engineering and pipeline development.
- Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
- Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
- Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.


Full Stack Developer
Location: Hyderabad
Experience: 7+ Years
Type: BCS - Business Consulting Services
RESPONSIBILITIES:
- Strong programming skills in Node.js [Must], React JS, Android and Kotlin [Must]
- Hands-on experience in UI development with a good sense of UX
- Hands-on experience in database design and management
- Hands-on experience creating and maintaining backend frameworks for mobile applications
- Hands-on development experience on cloud-based platforms like GCP/Azure/AWS
- Ability to manage and provide technical guidance to the team
- Strong experience in designing APIs using RAML, Swagger, etc.
- Service definition development
- API standards, security, and policy definition and management
REQUIRED EXPERIENCE:
- Bachelor’s and/or master’s degree in computer science, or equivalent work experience
- Excellent analytical, problem-solving, and communication skills
- 7+ years of software engineering experience in a multi-national company
- 6+ years of development experience in Kotlin, Node and React JS
- 3+ years of experience creating solutions in a native public cloud (GCP, AWS or Azure)
- Experience with Git or a similar version control system, and continuous integration
- Proficiency in automated unit test development practices and design methodologies
- Fluent English
Proficient in Looker Actions, Looker dashboarding, Looker data entry, LookML, SQL queries, BigQuery, Looker Studio, and GCP.
Remote Working
2 pm to 12 am IST or
10:30 AM to 7:30 PM IST
Sunday to Thursday
Responsibilities:
● Create and maintain LookML code, which defines data models, dimensions, measures, and relationships within Looker.
● Develop reusable LookML components to ensure consistency and efficiency in report and dashboard creation.
● Build and customize dashboards, incorporating data visualizations such as charts and graphs, to present insights effectively.
● Write complex SQL queries when necessary to extract and manipulate data from underlying databases, and optimize those queries for performance.
● Connect Looker to various data sources, including databases, data warehouses, and external APIs.
● Identify and address bottlenecks that affect report and dashboard loading times, and optimize Looker performance by tuning queries, refining caching strategies, and exploring indexing options.
● Configure user roles and permissions within Looker to control access to sensitive data, and implement data security best practices, including row-level and field-level security.
● Develop custom applications or scripts that interact with Looker's API for automation and integration with other tools and systems (see the sketch after this list).
● Use version control systems (e.g., Git) to manage LookML code changes and collaborate with other developers.
● Provide training and support to business users, helping them navigate and use Looker effectively.
● Diagnose and resolve technical issues related to Looker, data models, and reports.
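As a rough sketch of that API automation, the official looker_sdk Python package can run saved Looks programmatically; the Look ID below is hypothetical, and credentials are assumed to live in looker.ini or environment variables:

```python
import looker_sdk

# init40() reads base_url, client_id, and client_secret from looker.ini
# or LOOKERSDK_* environment variables.
sdk = looker_sdk.init40()

# Run a saved Look and fetch its results as CSV for downstream automation.
csv_data = sdk.run_look(look_id="42", result_format="csv")
print(csv_data[:500])
```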
Skills Required:
● Experience in Looker's modeling language, LookML, including data models, dimensions, and measures.
● Strong SQL skills for writing and optimizing database queries across different SQL databases (GCP/BQ preferable)
● Knowledge of data modeling best practices
● Proficient in BigQuery, billing data analysis, GCP billing, unit costing, and invoicing, with the ability to recommend cost optimization strategies.
● Previous experience in Finops engagements is a plus
● Proficiency in ETL processes for data transformation and preparation.
● Ability to create effective data visualizations and reports using Looker’s dashboard tools.
● Ability to optimize Looker performance by fine-tuning queries, caching strategies, and indexing.
● Familiarity with related tools and technologies, such as data warehousing (e.g., BigQuery ), data transformation tools (e.g., Apache Spark), and scripting languages (e.g., Python).

As a Solution Architect, you will collaborate with our sales, presales and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You possess good communication skills, both written and verbal, enabling you to convey complex technical concepts clearly and effectively. You are a customer-focused, self-motivated, and responsible team player who can work under pressure with a positive attitude. You must have experience in managing and handling RFPs/RFIs, client demos and presentations, and converting opportunities into winning bids. You possess a strong work ethic and the enthusiasm to embrace new challenges. You have good time management skills, can multi-task and prioritize, and are eager to keep learning. You should be able to work independently with little or no supervision, be process-oriented, have a methodical approach, and demonstrate a quality-first mindset.
The ability to convert clients’ business challenges and priorities into winning proposals and bids through excellent technical solutions will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new generation analytics platform like MS Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate Data services (e.g. Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI etc) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes.
- Ability to understand Conceptual/Logical/Physical Data Modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
What you will Bring
- 10+ years of working in data analytics and AI technologies from consulting, implementation and design perspectives
- Certifications in data engineering, analytics, cloud, AI will be a certain advantage
- Bachelor’s in engineering/ technology or an MCA from a reputed college is a must
- Prior experience of working as a solution architect during presales cycle will be an advantage
Soft Skills
- Communication Skills
- Presentation Skills
- Flexible and Hard-working
Technical Skills
- Knowledge of Presales Processes
- Basic understanding of business analytics and AI
- High IQ and EQ
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
POSITION: Sr. Devops Engineer
Job Type: Work From Office (5 days)
Location: Sector 16A, Film City, Noida / Mumbai
Relevant Experience: Minimum 4+ year
Salary: Competitive
Education- B.Tech
About the Company: Devnagri is an AI company dedicated to personalizing business communication and making it hyper-local to reach non-English speakers. We address the significant gap in internet content availability for the majority of the world’s population that does not speak English. For more details, visit www.devnagri.com
We seek a highly skilled and experienced Senior DevOps Engineer to join our dynamic team. As a key member of our technology department, you will play a crucial role in designing and implementing scalable, efficient and robust infrastructure solutions with a strong focus on DevOps automation and best practices.
Roles and Responsibilities
- Design, plan, and implement scalable, reliable, secure, and robust infrastructure architectures
- Manage and optimize cloud-based infrastructure components
- Architect and implement containerization technologies, such as Docker and Kubernetes
- Implement the CI/CD pipelines to automate the build, test, and deployment processes
- Design and implement effective monitoring and logging solutions for applications and infrastructure. Establish metrics and alerts for proactive issue identification and resolution
- Work closely with cross-functional teams to troubleshoot and resolve issues.
- Implement and enforce security best practices across infrastructure components
- Establish and enforce configuration standards across various environments.
- Implement and manage infrastructure using Infrastructure as Code principles
- Leverage tools like Terraform for provisioning and managing resources.
- Stay abreast of industry trends and emerging technologies.
- Evaluate and recommend new tools and technologies to enhance infrastructure and operations
Must have Skills:
Cloud (AWS & GCP), Redis, MongoDB, MySQL, Docker, Bash scripting, Jenkins, Prometheus, Grafana, ELK Stack, Apache, Linux
Good to have Skills:
Kubernetes, Collaboration and Communication, Problem Solving, IAM, WAF, SAST/DAST
Interview Process:
Screening round and shortlisting >> 3 technical rounds >> 1 managerial round >> HR closure
Apply with a short success story about your journey into DevOps and tech.
Cheers
For more details, visit our website- https://www.devnagri.com
Hiring for a Big4 company
GCP Data Engineer
GCP - Mandatory
3-7 Years
Gurgaon location
Only candidates serving their notice period or immediate joiners can apply
Notice period - less than 30 Days


Job Profile : Python Developer
Job Location : Ahmedabad, Gujarat - On site
Job Type : Full time
Experience - 1-3 Years
Key Responsibilities:
Design, develop, and maintain Python-based applications and services (see the sketch after this list).
Collaborate with cross-functional teams to define, design, and ship new features.
Write clean, maintainable, and efficient code following best practices.
Optimize applications for maximum speed and scalability.
Troubleshoot, debug, and upgrade existing systems.
Integrate user-facing elements with server-side logic.
Implement security and data protection measures.
Work with databases (SQL/NoSQL) and integrate data storage solutions.
Participate in code reviews to ensure code quality and share knowledge with the team.
Stay up-to-date with emerging technologies and industry trends.
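For illustration, a minimal FastAPI sketch of the kind of service this role builds; the Item model and in-memory store are placeholders:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# In-memory store for illustration only; a real service would use a database.
items: dict[int, Item] = {}

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    return items[item_id]
```

Run locally with `uvicorn main:app --reload`; interactive docs appear at /docs.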
Requirements:
1-3 years of professional experience in Python development.
Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
Experience with RESTful APIs and web services.
Proficiency in working with databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
Experience with version control systems (e.g., Git).
Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
Understanding of containerization tools like Docker and orchestration tools like Kubernetes is good to have
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
Good to Have:
Experience with data analysis and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
Knowledge of asynchronous programming and event-driven architecture.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with microservices architecture.
Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
Hands-on experience in RAG and LLM model integration would be a plus.

Responsibilities:
- Design, develop, and maintain robust and scalable backend systems using PHP and the Laravel framework.
- Develop and implement RESTful APIs for mobile and web applications.
- Optimize database performance and ensure data integrity.
- Implement security best practices to protect sensitive data.
- Integrate with third-party services and APIs.
- Write clean, well-documented, and testable code.
- Participate in code reviews and contribute to improving code quality.
- Troubleshoot and debug issues, and provide timely resolutions.
- Contribute to the continuous improvement of the development process.
- Mentor junior developers (if applicable).
Qualifications:
- 4+ years of experience in backend development, with a strong focus on PHP and Laravel.
- Deep understanding of the Laravel framework, including Eloquent ORM, routing, and templating.
- Experience with API design and development.
- Solid understanding of database design and optimization (MySQL).
- Familiarity with version control systems (e.g., Git).
- Excellent problem-solving and debugging skills.
- Strong communication and collaboration skills.
- Experience with Google Cloud Platform is a plus.
- Experience in the healthcare domain is a plus.
Must be:
- Based in Mumbai
- Comfortable with Work from Office
- Available to join immediately
Responsibilities:
- Manage, monitor, and scale production systems across cloud (AWS/GCP) and on-prem.
- Work with Kubernetes, Docker, Lambdas to build reliable, scalable infrastructure.
- Build tools and automation using Python, Go, or relevant scripting languages (see the sketch after this list).
- Ensure system observability using tools like NewRelic, Prometheus, Grafana, CloudWatch, PagerDuty.
- Optimize for performance and low-latency in real-time systems using Kafka, gRPC, RTP.
- Use Terraform, CloudFormation, Ansible, Chef, Puppet for infra automation and orchestration.
- Load testing using Gatling, JMeter, and ensuring fault tolerance and high availability.
- Collaborate with dev teams and participate in on-call rotations.
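As a small illustration of the observability work above, here is a sketch instrumenting a Python process with the prometheus_client library; the metric names and workload are made up:

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
IN_FLIGHT = Gauge("app_in_flight_requests", "Requests currently in flight")

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        with IN_FLIGHT.track_inprogress():
            REQUESTS.inc()
            time.sleep(random.uniform(0.1, 0.5))  # simulate request handling
```

Prometheus can then scrape the endpoint and Grafana can chart the resulting series.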
Requirements:
- B.E./B.Tech in CS, Engineering or equivalent experience.
- 3+ years in production infra and cloud-based systems.
- Strong background in Linux (RHEL/CentOS) and shell scripting.
- Experience managing hybrid infrastructure (cloud + on-prem).
- Strong testing practices and code quality focus.
- Experience leading teams is a plus.

What you’ll do
- Design, build, and maintain robust ETL/ELT pipelines for product and analytics data (see the sketch after this list)
- Work closely with business, product, analytics, and ML teams to define data needs
- Ensure high data quality, lineage, versioning, and observability
- Optimize performance of batch and streaming jobs
- Automate and scale ingestion, transformation, and monitoring workflows
- Document data models and key business metrics in a self-serve way
- Use AI tools to accelerate development, troubleshooting, and documentation
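For illustration, a minimal ETL sketch using Prefect, one of the orchestration tools named below; the task bodies are placeholders:

```python
from prefect import flow, task

@task(retries=2)
def extract() -> list[dict]:
    # Hypothetical source; a real pipeline would pull from an API or warehouse.
    return [{"user_id": 1, "event": "signup"}, {"user_id": 2, "event": "purchase"}]

@task
def transform(rows: list[dict]) -> list[dict]:
    return [{**row, "event": row["event"].upper()} for row in rows]

@task
def load(rows: list[dict]) -> None:
    print(f"loading {len(rows)} rows...")

@flow(log_prints=True)
def etl():
    load(transform(extract()))

if __name__ == "__main__":
    etl()
```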
Must-Haves:
- 2–4 years of experience as a data engineer (product or analytics-focused preferred)
- Solid hands-on experience with Python and SQL
- Experience with data pipeline orchestration tools like Airflow or Prefect
- Understanding of data modeling, warehousing concepts, and performance optimization
- Familiarity with cloud platforms (GCP, AWS, or Azure)
- Bachelor's in Computer Science, Data Engineering, or a related field
- Strong problem-solving mindset and AI-native tooling comfort (Copilot, GPTs)
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
What will you do at Fynd?
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Improve reliability, quality, and time-to-market of our suite of software solutions
- Be the 1st person to report the incident.
- Debug production issues across services and levels of the stack.
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realise it.
- Building automated tools in Python / Java / GoLang / Ruby etc. (see the sketch after this list).
- Help Platform and Engineering teams gain visibility into our infrastructure.
- Lead design of software components and systems, to ensure availability, scalability, latency, and efficiency of our services.
- Participate actively in detecting, remediating and reporting on Production incidents, ensuring the SLAs are met and driving Problem Management for permanent remediation.
- Participate in on-call rotation to ensure coverage for planned/unplanned events.
- Perform other tasks like load tests and generating system health reports.
- Periodically check all dashboards for readiness.
- Engage with other Engineering organizations to implement processes, identify improvements, and drive consistent results.
- Working with your SRE and Engineering counterparts for driving Game days, training and other response readiness efforts.
- Participate in the 24x7 support coverage as needed, troubleshooting and problem-solving complex issues with thorough root-cause analysis on customer and SRE production environments.
- Collaborate with Service Engineering organizations to build and automate tooling, implement best practices to observe and manage the services in production and consistently achieve our market leading SLA.
- Improving the scalability and reliability of our systems in production.
- Evaluating, designing and implementing new system architectures.
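As a sketch of the kind of automated tooling mentioned above, here is a small stdlib-only Python health reporter; the service names and URLs are hypothetical:

```python
import json
import urllib.request

# Hypothetical internal service health endpoints.
SERVICES = {
    "orders": "https://orders.internal.example/health",
    "payments": "https://payments.internal.example/health",
}

def health_report() -> dict:
    """Probe each service's health endpoint and summarize its status."""
    report = {}
    for name, url in SERVICES.items():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                report[name] = "up" if resp.status == 200 else f"degraded ({resp.status})"
        except Exception as exc:
            report[name] = f"down ({exc.__class__.__name__})"
    return report

if __name__ == "__main__":
    print(json.dumps(health_report(), indent=2))
```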
Some specific Requirements:
- B.E./B.Tech. in Engineering, Computer Science, technical degree, or equivalent work experience
- At least 3 years of managing production infrastructure. Leading / managing a team is a huge plus.
- Experience with cloud platforms like - AWS, GCP.
- Experience developing and operating large scale distributed systems with Kubernetes, Docker and Serverless (Lambdas)
- Experience in running real-time and low latency high available applications (Kafka, gRPC, RTP)
- Comfortable with Python, Go, or any relevant programming language.
- Experience with monitoring and alerting using technologies like New Relic / Zabbix / Prometheus / Grafana / CloudWatch / Kafka / PagerDuty etc.
- Experience with one or more orchestration, deployment tools, e.g. CloudFormation / Terraform / Ansible / Packer / Chef.
- Experience with configuration management systems such as Ansible / Chef / Puppet.
- Knowledge of load testing methodologies and tools like Gatling, Apache JMeter.
- Work your way around Unix shell.
- Experience running hybrid clouds and on-prem infrastructures on Red Hat Enterprise Linux / CentOS
- A focus on delivering high-quality code through strong testing practices.
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow, we reimburse it for you.
Culture
Community and Team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapist for better mental health, improve productivity & work-life balance
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!

We are seeking a highly skilled Java full-stack developer with 5–8 years of experience to join our dynamic development team. The ideal candidate will have deep technical expertise across Java, Microservices, React/Redux, Kubernetes, DevOps tools, and GCP. You will work on designing and deploying full-stack applications that are robust, scalable, and aligned with business goals.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using Java, React, and Redux
- Build microservices following SOLID principles
- Collaborate with cross-functional teams, including product owners, QA, BAs, and other engineers
- Write clean, maintainable, and efficient code
- Perform debugging, troubleshooting, and optimization
- Participate in code reviews and contribute to engineering best practices
- Stay updated on security, privacy, and compliance requirements
- Work in an Agile/Scrum environment using tools like JIRA and Confluence
Technical Skills Required
Frontend
- Strong proficiency in JavaScript and modern ES6 features
- Expertise in React.js with advanced knowledge of hooks (useCallback, useMemo, etc.)
- Solid understanding of Redux for state management
Backend
- Strong hands-on experience in Java
- Building and maintaining Microservices architectures
DevOps & Infrastructure
- Experience with CI/CD tools: Jenkins, Nexus, Maven, Ansible
- Terraform for infrastructure as code
- Containerization and orchestration using Docker and Kubernetes/GKE
- Experience with IAM, security roles, service accounts
Cloud
- Proficiency with the services of any major cloud provider
Database
- Hands-on experience with PostgreSQL, MySQL, BigQuery
Scripting
- Proficiency in Bash/Shell scripting and Python
Non-Technical Skills
- Strong communication and interpersonal skills
- Ability to work effectively in distributed teams across time zones
- Quick learner and adaptable to new technologies
- Team player with a collaborative mindset
- Ability to explain complex technical concepts to non-technical stakeholders
Nice to Have
- Experience with NetReveal / Detica
Why Join Us?
- 🚀 Challenging Projects: Be part of innovative solutions making a global impact
- 🌍 Global Exposure: Work with international teams and clients
- 📈 Career Growth: Clear pathways for professional advancement
- 🧘♂️ Flexible Work Options: Hybrid and remote flexibility to support work-life balance
- 💼 Competitive Compensation: Industry-leading salary and benefits
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ years of hands-on engineering experience, with at least 2 years in a lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with a notice period of under one month.
- Interviews will be fast-tracked for qualified profiles.
Backend - Software Development Engineer III
Experience - 7+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading technology consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record of helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced, challenging environments: leading technical teams, designing system architecture, and reviewing peer code. They should be keen on constantly upskilling, learning new technologies, and expanding their domain knowledge into new industries. This candidate needs to be a team player who helps build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications, and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams, and MongoDB Solutions Architects.
Location - Chennai or Bangalore
- Relevant experience of 7+ years building high-performance back-end applications, with at least three projects delivered using the required technologies
- Good problem solving skills
- Strong mentoring capabilities
- Good understanding of software development life cycle
- Strong experience in system design and architecture
- Strong focus on quality of work delivered
- Excellent verbal and written communication skills
Required Technical Skills
- Extensive hands-on experience building high-performance web back-ends using Node.js and JavaScript/TypeScript
- At least two years of hands-on experience with NestJS (see the sketch after this list)
- Strong experience with the Express.js framework
- Implementation experience in monolithic and microservices architectures
- Hands-on experience with data modeling on MongoDB and other relational or NoSQL databases
- Experience integrating with 3rd-party services such as cloud SDKs (preferably X), payments, push notifications, authentication, etc.
- Hands-on experience with Redis, Kafka, or X
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest, or others
- Strong experience writing and maintaining clear documentation
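For the NestJS line above, a minimal, self-contained sketch of the controller/service pattern looks like this; the `orders` resource and every name in it are hypothetical, chosen only for illustration.

```ts
import { Controller, Get, Injectable, Module, Param } from "@nestjs/common";
import { NestFactory } from "@nestjs/core";

@Injectable()
class OrdersService {
  // In a real service this would be backed by MongoDB or another store
  private readonly orders = new Map([["1", { id: "1", status: "shipped" }]]);

  findOne(id: string) {
    return this.orders.get(id) ?? null;
  }
}

@Controller("orders")
class OrdersController {
  constructor(private readonly ordersService: OrdersService) {}

  @Get(":id") // GET /orders/1 -> {"id":"1","status":"shipped"}
  getOrder(@Param("id") id: string) {
    return this.ordersService.findOne(id);
  }
}

@Module({ controllers: [OrdersController], providers: [OrdersService] })
class AppModule {}

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  await app.listen(3000);
}
bootstrap();
```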
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies


AI Architect
Location and Work Requirements
- Position is based in KSA or UAE
- Must be eligible to work abroad without restrictions
- Regular travel within the region required
Key Responsibilities
- Bring a minimum of 7 years of experience in the Data & Analytics domain, including at least 2 years as an AI Architect
- Drive technical solution design engagements and implementations
- Support customer implementations across various deployment modes (Public SaaS, Single-Tenant SaaS, and Self-Managed Kubernetes)
- Provide advanced technical support, including deployment troubleshooting and coordinating with customer AI Architect and product development teams when needed
- Guide customers in implementing generative AI solutions, including LLM integration, vector database management, and prompt engineering
- Coordinate and oversee platform installations and configuration work
- Assist customers with platform integration, including API implementation and custom model deployment
- Establish and promote best practices for AI governance and MLOps
- Proactively identify and address potential technical challenges before they impact customer success
Required Technical Skills
- Strong programming skills in Python with experience in data processing libraries (Pandas, NumPy)
- Proficiency in SQL and experience with various database technologies including MongoDB
- Container technologies: Docker (build, modify, deploy) and Kubernetes (kubectl, helm)
- Version control systems (Git) and CI/CD practices
- Strong networking fundamentals (TCP/IP, SSH, SSL/TLS)
- Shell scripting (Linux/Unix environments)
- Experience working in on-prem, air-gapped environments
- Experience with cloud platforms (AWS, Azure, GCP)
Required AI/ML Skills
- Deep expertise in both predictive machine learning and generative AI technologies
- Proven experience implementing and operationalizing large language models (LLMs)
- Strong knowledge of vector databases, embedding technologies, and similarity search concepts (see the sketch after this list)
- Advanced understanding of prompt engineering, LLM evaluation, and AI governance methods
- Practical experience with machine learning deployment and production operations
- Understanding of AI safety considerations and risk mitigation strategies
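For illustration only: the similarity-search concept behind vector databases reduces to scoring embeddings against a query, sketched below as a brute-force scan in TypeScript. A production deployment would use a vector database with approximate-nearest-neighbour indexing; the tiny corpus and 3-dimensional "embeddings" here are invented.

```ts
type Doc = { id: string; embedding: number[] };

// Cosine similarity: dot product of the vectors over the product of their norms
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Brute-force top-k retrieval; a vector database replaces this with an ANN index
function topK(query: number[], docs: Doc[], k: number): Doc[] {
  return docs
    .map((d) => ({ doc: d, score: cosineSimilarity(query, d.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((s) => s.doc);
}

const corpus: Doc[] = [
  { id: "refund-policy", embedding: [0.9, 0.1, 0.0] },
  { id: "shipping-faq", embedding: [0.2, 0.8, 0.1] },
];
console.log(topK([0.85, 0.15, 0.0], corpus, 1)); // -> [{ id: "refund-policy", ... }]
```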
Required Qualities
- Excellent English communication skills, with the ability to explain complex technical concepts; Arabic is advantageous.
- Strong consultative approach to understanding and solving business problems
- Proven ability to build trust through proactive customer engagement
- Strong problem-solving abilities and attention to detail
- Ability to work independently and as part of a distributed team
- Willingness to travel within the Middle East & Africa region as needed

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a Lead SDET with 8-10 years of experience to play a pivotal role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is an exciting opportunity to work on cutting-edge performance testing strategies and drive impactful initiatives across the organisation.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Review application architecture and suggest improvements to enhance scalability
- Leverage AI at appropriate layers to improve efficiency and drive positive business outcomes
- Drive performance testing initiatives across the organization and ensure seamless execution
- Automate the capturing of performance metrics and generate performance trend reports
- Research, evaluate, and conduct PoCs for new tools and solutions
- Collaborate with developers and architects to enhance frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Ensure high availability and reliability of applications and services
Requirements:
- 6-9 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, and k6 (see the k6 sketch after this list)
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimising frontend and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
- Experience in increasing application/service availability from 99.9% (three 9s) to 99.99% or higher (four/five 9s)
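As a point of reference for the tooling named in this list, a k6 load test is just a small script (k6 scripts are written in JavaScript/TypeScript); the endpoint, virtual-user count, and latency threshold below are placeholder assumptions, not values from this posting.

```ts
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 50,          // 50 concurrent virtual users (placeholder)
  duration: "2m",   // sustain load for two minutes
  thresholds: {
    // Fail the run if the 95th-percentile request duration reaches 500 ms
    http_req_duration: ["p(95)<500"],
  },
};

export default function () {
  const res = http.get("https://example.com/api/health"); // placeholder endpoint
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```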
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.