50+ Google Cloud Platform (GCP) Jobs in India


About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–6 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
Note: Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
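As an illustration of the Kubernetes-facing responsibilities above, here is a minimal sketch in Python (assuming Flask; the endpoint names and port are illustrative, not taken from the posting) of the liveness/readiness endpoints a cloud-native backend service typically exposes for Kubernetes probes:

```python
# Minimal sketch: a Flask service exposing liveness/readiness endpoints
# that a Kubernetes Deployment could probe. Names and port are illustrative.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/healthz")
def healthz():
    # Liveness: the process is up and able to serve requests.
    return jsonify(status="ok"), 200

@app.route("/readyz")
def readyz():
    # Readiness: a real service would verify downstream dependencies here
    # (database, cache, queues); kept trivial in this sketch.
    return jsonify(status="ready"), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```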
Required Skills
- At least 2 years of strong hands-on experience with Kubernetes in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
Nice to Have
- Knowledge of NoSQL databases or event-driven/message-based architectures.
- Experience with serverless services, managed data pipelines, or data lake platforms.

Position : Senior Data Analyst
Experience Required : 5 to 8 Years
Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)
Shift Timing : 11:00 AM – 8:00 PM IST
Notice Period : Immediate Joiners Only
Job Summary :
We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.
The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.
Key Responsibilities :
- Lead the design, execution, and delivery of advanced data analysis projects.
- Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
- Create and maintain interactive dashboards, reports, and visualizations.
- Perform root cause analysis and uncover meaningful patterns from large datasets.
- Present analytical findings to senior leaders and non-technical audiences.
- Maintain data integrity, quality, and governance in all reporting and analytics solutions.
- Mentor junior analysts and support their professional development.
- Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.
Must-Have Skills :
- Strong proficiency in SQL and Databricks
- Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
- Sound understanding of data warehousing concepts and BI best practices
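To ground the must-have SQL and Databricks skills, here is a hedged sketch using the databricks-sql-connector package (the hostname, HTTP path, token, and table are placeholders, not values from the posting) running a window-function query of the kind this role involves:

```python
# Hedged sketch: query a Databricks SQL warehouse with a window function.
# All connection values and the table name are placeholders.
from databricks import sql

with sql.connect(
    server_hostname="<workspace-host>",
    http_path="<warehouse-http-path>",
    access_token="<token>",
) as conn:
    with conn.cursor() as cursor:
        cursor.execute("""
            SELECT region,
                   order_date,
                   SUM(revenue) OVER (PARTITION BY region ORDER BY order_date)
                       AS running_revenue
            FROM sales.orders   -- hypothetical table
        """)
        for row in cursor.fetchall():
            print(row)
```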
Good-to-Have :
- Experience with AWS
- Exposure to machine learning and predictive analytics
- Industry-specific analytics experience (preferred but not mandatory)

Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible & Python.
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance.
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
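As a sketch of the deployment automation described above (assuming the docker and kubectl CLIs are installed and authenticated; the image, deployment, and container names are illustrative), a small Python wrapper might look like this:

```python
# Hedged sketch: build and push an image, then roll a Kubernetes deployment.
# Image, deployment, namespace, and the container name "app" are placeholders.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy(image: str, tag: str, deployment: str, namespace: str = "default") -> None:
    run(["docker", "build", "-t", f"{image}:{tag}", "."])
    run(["docker", "push", f"{image}:{tag}"])
    # Assumes the pod's container is named "app".
    run(["kubectl", "-n", namespace, "set", "image",
         f"deployment/{deployment}", f"app={image}:{tag}"])
    run(["kubectl", "-n", namespace, "rollout", "status", f"deployment/{deployment}"])

if __name__ == "__main__":
    deploy("registry.example.com/webapp", "v1.2.3", "webapp")
```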
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a leading global provider of data-driven digital transformation for cloud-based enterprises. With a global presence and a focus on Software & Platforms, Life Sciences and Healthcare, Retail, CPG, Financial Services, and Supply Chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
What are we looking for
We are seeking experienced and innovative Golang Developers to join our team. The ideal candidate will be responsible for designing, developing, and optimizing scalable, high-performance backend services and APIs.
Job Responsibilities:
● Design, develop, and maintain efficient, reusable, and reliable code using Go (Golang).
● Build and manage scalable, secure, and high-performance backend services and APIs.
● Collaborate with cross-functional teams (frontend, product, DevOps) to define, design, and ship new features.
● Optimize applications for maximum speed and scalability, ensuring high availability and responsiveness.
● Write unit tests and integration tests to ensure code quality.
● Identify bottlenecks and bugs and devise solutions to these problems.
● Work with cloud platforms (AWS, GCP, Azure) and deploy applications in containerized environments (Docker, Kubernetes).
- True hands-on developer in programming languages like Java or Scala.
- Expertise in Apache Spark.
- Database modelling and working with any SQL or NoSQL database is a must.
- Working knowledge of scripting languages like shell/Python.
- Experience working with Cloudera is preferred.
- Orchestration tools like Airflow or Oozie would be a value addition.
- Knowledge of table formats like Delta or Iceberg is a plus.
- Working experience with version control (Git) and build tools (Maven) is recommended.
- Software development experience is good to have along with data engineering experience.
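To make the Spark requirement above concrete, here is a minimal PySpark sketch (paths and column names are illustrative, not from the posting) of a batch aggregation job that reads raw data, transforms it, and writes partitioned output:

```python
# Minimal PySpark sketch: read raw orders, aggregate daily, write partitioned.
# Paths, columns, and the bucket names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

orders = spark.read.parquet("s3a://raw-bucket/orders/")

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "country")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-bucket/orders_daily/"
)
spark.stop()
```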
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
What we are looking for:
Experience: 10+ years
Education: BTech / BE / ME /MTech/ MCA / MSc Computer Science
Industry: Product Engineering Services or Enterprise Software Companies
Job Responsibilities:
- Own sprint development tasks, code reviews, defining detailed connector tasks based on design and timelines, documentation maturity, and release review and sanity; write the design specifications and user stories for the functionalities assigned.
- Develop assigned components / classes and assist QA team in writing the test cases
- Create and maintain coding best practices and do peer code / solution reviews
- Participate in Daily Scrum calls, Scrum Planning, Retro and Demos meetings
- Bring out technical/design/architectural challenges and risks during execution, and develop action plans for mitigation and avoidance of identified risks
- Comply with development processes, documentation templates and tools prescribed by CloudSufi and/or its clients
- Work with other teams and Architects in the organization and assist them on technical issues/demos/POCs and proposal writing for prospective clients
- Contribute towards the creation of a knowledge repository, reusable assets/solution accelerators and IPs
- Provide feedback to junior developers and be a coach and mentor for them
- Provide training sessions on the latest technologies and topics to other employees in the organization
- Participate in organization development activities from time to time: interviews, CSR/employee engagement activities, participation in business events/conferences, and implementation of new policies, systems and procedures as decided by the management team
Certifications (Optional): OCPJP (Oracle Certified Professional Java Programmer)
Required Experience:
- Strong programming skills in the language Java.
- Hands on in Core Java and Microservices
- Understanding of identity management using users, groups, and entitlements
- Hands-on experience developing connectivity for identity management using SCIM, REST, and LDAP
- Thorough experience with triggers, webhooks, and event receiver implementations for connectors
- Excellent in the code review process and in assessing developer productivity
- Excellent analytical and problem-solving skills
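The posting calls for Java; purely for brevity, here is a Python sketch (using requests; the endpoint, token, and filter are hypothetical) showing the wire-level shape of a SCIM 2.0 /Users lookup that such a connector would perform:

```python
# Hedged sketch: query a SCIM 2.0 /Users endpoint. Base URL and token are
# placeholders; a real connector would also handle pagination, retries,
# and rate limits.
import requests

BASE_URL = "https://idm.example.com/scim/v2"   # hypothetical endpoint
TOKEN = "<access-token>"

resp = requests.get(
    f"{BASE_URL}/Users",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/scim+json"},
    params={"filter": 'userName eq "jdoe"'},   # SCIM filter syntax (RFC 7644)
    timeout=30,
)
resp.raise_for_status()
for user in resp.json().get("Resources", []):
    print(user["id"], user.get("userName"))
```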
Good to Have:
- Experience of developing 3-4 integration adapters/connectors for enterprise applications (ERP, CRM, HCM, SCM, Billing etc.) using industry standard frameworks and methodologies following Agile/Scrum
- Experience with IAM products.
- Experience on Implementation of Message Brokers using JMS.
- Experience on ETL processes
Non-Technical/ Behavioral competencies required:
- Must have worked with US/Europe based clients in onsite/offshore delivery model
- Should have very good verbal and written communication, technical articulation, listening and presentation skills
- Should have proven analytical and problem solving skills
- Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills
- Should be a quick learner, self starter, go-getter and team player
- Should have experience of working under stringent deadlines in a Matrix organization structure
- Should have demonstrated appreciable Organizational Citizenship Behavior in past organizations
Job Summary:
Join Springer Capital as a Cybersecurity & Cloud Intern to help architect, secure, and automate our cloud-based backend systems powering next-generation investment platforms.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm. We leverage cutting-edge digital solutions to uncover high-potential opportunities, transforming traditional finance through innovation, agility, and a relentless commitment to security and scalability.
Job Highlights
Work hands-on with AWS, Azure, or GCP to design and deploy secure, scalable backend infrastructure.
Collaborate with DevOps and engineering teams to embed security best practices in CI/CD pipelines.
Gain experience in real-world incident response, vulnerability assessment, and automated monitoring.
Drive meaningful impact on our security posture and cloud strategy from Day 1.
Enjoy a fully remote, flexible internship with global teammates.
Responsibilities
Assist in architecting and provisioning cloud resources (VMs, containers, serverless functions) with strict security controls.
Implement identity and access management, network segmentation, encryption, and logging best practices.
Develop and maintain automation scripts for security monitoring, patch management, and incident alerts.
Support vulnerability scanning, penetration testing, and remediation tracking.
Document cloud architectures, security configurations, and incident response procedures.
Partner with backend developers to ensure secure API gateways, databases, and storage services.
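As one concrete example of the security automation an intern might build (a hedged sketch assuming boto3 and AWS credentials in the environment; bucket handling is generic, not tied to any real account), this script flags S3 buckets that do not fully block public access:

```python
# Hedged sketch: flag S3 buckets lacking a full public-access block.
# Assumes AWS credentials are available in the environment.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError as err:
        # No configuration at all means public access is not blocked.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    if not fully_blocked:
        print(f"[WARN] bucket {name} does not fully block public access")
```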
What We Offer
Mentorship: Learn directly from senior security engineers and cloud architects.
Training & Certifications: Access to online courses and support for AWS/Azure security certifications.
Impactful Projects: Contribute to critical security and cloud initiatives that safeguard our digital assets.
Remote-First Culture: Flexible hours and the freedom to collaborate from anywhere.
Career Growth: Build a strong foundation for a future in cybersecurity, cloud engineering, or DevSecOps.
Requirements
Pursuing or recently graduated in Computer Science, Cybersecurity, Information Technology, or a related discipline.
Familiarity with at least one major cloud platform (AWS, Azure, or GCP).
Understanding of core security principles: IAM, network security, encryption, and logging.
Scripting experience in Python, PowerShell, or Bash for automation tasks.
Strong analytical, problem-solving, and communication skills.
A proactive learner mindset and passion for securing cloud environments.
About Springer Capital
Springer Capital blends financial expertise with digital innovation to redefine asset management. Our mission is to drive exceptional value by implementing robust, technology-driven strategies that transform investment landscapes. We champion a culture of creativity, collaboration, and continuous improvement.
Location: Global (Remote)
Job Type: Internship
Pay: $50 USD per month
Work Location: Remote
- Strong Site Reliability Engineer (SRE - CloudOps) Profile
- Mandatory (Experience 1) - Must have a minimum of 1 year of experience in SRE (CloudOps)
- Mandatory (Core Skill 1) - Must have experience with Google Cloud platforms (GCP)
- Mandatory (Core Skill 2) - Experience with monitoring, APM, and alerting tools like Prometheus, Grafana, ELK, Newrelic, Pingdom, or Pagerduty
- Mandatory (Core Skill 3) - Hands-on experience with Kubernetes for orchestration and container management.
- Mandatory (Company) - B2C Product Companies.
- Strong Senior Unity Developer Profile
- Mandatory (Experience 1) - Must have a minimum of 2 years of experience in game/application development using Unity.
- Mandatory (Experience 2) - Must have strong experience in backend development using C#
- Mandatory (Experience 3) - Must have strong experience in multiplayer game development with Unity, preferably using Photon Networking (PUN) or Photon Fusion.
- Mandatory (Company) - B2C Product Companies
Preferred
- Preferred (Education) - B.E / B.Tech
GCP Data Engineer Job Description
A GCP Data Engineer is responsible for designing, building, and maintaining data pipelines, architectures, and systems on Google Cloud Platform (GCP). Here's a breakdown of the job:
Key Responsibilities
- Data Pipeline Development: Design and develop data pipelines using GCP services like Dataflow, BigQuery, and Cloud Pub/Sub.
- Data Architecture: Design and implement data architectures to meet business requirements.
- Data Processing: Process and analyze large datasets using GCP services like BigQuery and Cloud Dataflow.
- Data Integration: Integrate data from various sources using GCP services like Cloud Data Fusion and Cloud Pub/Sub.
- Data Quality: Ensure data quality and integrity by implementing data validation and data cleansing processes.
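To illustrate the pipeline work above, here is a minimal Apache Beam sketch (the topic, table, and schema are placeholders) of a streaming job that reads events from Cloud Pub/Sub and writes rows to BigQuery:

```python
# Minimal Apache Beam sketch: Pub/Sub -> parse JSON -> BigQuery.
# Project, topic, table, and schema are illustrative placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-proj/topics/events")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-proj:analytics.events",
            schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```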
Essential Skills
- GCP Services: Strong understanding of GCP services like BigQuery, Cloud Dataflow, Cloud Pub/Sub, and Cloud Storage.
- Data Engineering: Experience with data engineering concepts, including data pipelines, data warehousing, and data integration.
- Programming Languages: Proficiency in programming languages like Python, Java, or Scala.
- Data Processing: Knowledge of data processing frameworks like Apache Beam and Apache Spark.
- Data Analysis: Understanding of data analysis concepts and tools like SQL and data visualization.


Who We Are
Studio Management (studiomgmt.co) is a uniquely positioned organization combining venture capital, hedge fund investments, and startup incubation. Our portfolio includes successful ventures like Sentieo (acquired by AlphaSense for $185 million), as well as innovative products such as Emailzap (emailzap.co) and Mindful Minutes for Toddlers. We’re expanding our team to continue launching products at the forefront of technology, and we’re looking for an Engineering Lead who shares our passion for building the “next big thing.”
The Role
We are seeking a hands-on Engineering Lead to guide our product development efforts across multiple high-impact ventures. You will own the overall technical vision, mentor a remote team of engineers, and spearhead the creation of new-age products in a fast-paced startup environment. This is a strategic, influential role that requires a blend of technical prowess, leadership, and a keen interest in building products from zero to one.
Responsibilities
- Technical Leadership: Define and drive the architectural roadmap for new and existing products, ensuring high-quality code, scalability, and reliability.
- Mentorship & Team Building: Hire, lead, and develop a team of engineers. Foster a culture of continuous learning, ownership, and collaboration.
- Product Innovation: Work closely with product managers, designers, and stakeholders to conceptualize, build, and iterate on cutting-edge, user-centric solutions.
- Hands-On Development: Write efficient, maintainable code and perform thorough code reviews, setting the standard for engineering excellence.
- Cross-Functional Collaboration: Partner with different functions (product, design, marketing) to ensure alignment on requirements, timelines, and deliverables.
- Process Optimization: Establish best practices and processes that improve development speed, code quality, and overall team productivity.
- Continuous Improvement: Champion performance optimizations, new technologies, and modern frameworks to keep the tech stack fresh and competitive.
What We’re Looking For
- 4+ Years of Engineering Experience: A proven track record of designing and delivering high-impact software products.
- Technical Mastery: Expertise in a full-stack environment—HTML, CSS, JavaScript (React/React Native), Python (Django), and AWS. Strong computer science fundamentals, including data structures and system design.
- Leadership & Communication: Demonstrated ability to mentor team members, influence technical decisions, and articulate complex concepts clearly.
- Entrepreneurial Mindset: Passion for building new-age products, thriving in ambiguity, and rapidly iterating to find product-market fit.
- Problem Solver: Adept at breaking down complex challenges into scalable, efficient solutions.
- Ownership Mentality: Self-driven individual who takes full responsibility for project outcomes and team success.
- Adaptability: Comfort working in an environment where priorities can shift quickly, and opportunities for innovation abound.
Why Join Us
- High-Impact Work: Drive the technical direction of multiple ventures, shaping the future of new products from day one.
- Innovation Culture: Operate in a remote-first, collaborative environment that encourages bold thinking and rapid experimentation.
- Growth & Autonomy: Enjoy opportunities for both leadership advancement and deepening your technical skillset.
- Global Team: Work alongside a diverse group of talented professionals who share a passion for pushing boundaries.
- Competitive Benefits: Receive market-leading compensation and benefits in a role that rewards both individual and team success.
Job Summary:
We are seeking a skilled DevOps Engineer to design, implement, and manage CI/CD pipelines, containerized environments, and infrastructure automation. The ideal candidate should have hands-on experience with ArgoCD, Kubernetes, and Docker, along with a deep understanding of cloud platforms and deployment strategies.
Key Responsibilities:
- CI/CD Implementation: Develop, maintain, and optimize CI/CD pipelines using ArgoCD, GitOps, and other automation tools.
- Container Orchestration: Deploy, manage, and troubleshoot containerized applications using Kubernetes and Docker.
- Infrastructure as Code (IaC): Automate infrastructure provisioning with Terraform, Helm, or Ansible.
- Monitoring & Logging: Implement and maintain observability tools like Prometheus, Grafana, ELK, or Loki.
- Security & Compliance: Ensure best security practices in containerized and cloud-native environments.
- Cloud & Automation: Manage cloud infrastructure on AWS, Azure, or GCP with automated deployments.
- Collaboration: Work closely with development teams to optimize deployments and performance.
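As a small illustration of the GitOps workflow above (a sketch assuming the argocd CLI is installed and logged in; the application name is hypothetical, and in pure GitOps a sync is usually triggered automatically on git push), a helper that triggers and waits on a sync might be:

```python
# Hedged sketch: trigger an Argo CD sync and wait for the app to be healthy.
# The application name is a placeholder.
import subprocess

def sync_app(app: str) -> None:
    subprocess.run(["argocd", "app", "sync", app], check=True)
    subprocess.run(["argocd", "app", "wait", app, "--health"], check=True)

if __name__ == "__main__":
    sync_app("payments-service")
```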
Required Skills & Qualifications:
- Experience: 5+ years in DevOps, Site Reliability Engineering (SRE), or Infrastructure Engineering.
- Tools & Tech: Strong knowledge of ArgoCD, Kubernetes, Docker, Helm, Terraform, and CI/CD pipelines.
- Cloud Platforms: Experience with AWS, GCP, or Azure.
- Programming & Scripting: Proficiency in Python, Bash, or Go.
- Version Control: Hands-on with Git and GitOps workflows.
- Networking & Security: Knowledge of ingress controllers, service mesh (Istio/Linkerd), and container security best practices.
Nice to Have:
- Experience with Kubernetes Operators, Kustomize, or FluxCD.
- Exposure to serverless architectures and multi-cloud deployments.
- Certifications in CKA, AWS DevOps, or similar.


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus point.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus point.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
**Requirements**
Education-B. Tech (Comp. Sc, IT) or equivalent
Experience- 3+ years of experience developing applications on Django, Angular/React, HTML, and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP, and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes is a plus.
5. Other: Understanding of Golang is a plus.
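To ground the Django portion of this stack, here is a minimal sketch (illustrative names; a real project would query the database through the ORM and typically use Django REST Framework serializers) of a JSON endpoint served to an Angular/React front-end:

```python
# Minimal Django sketch: a JSON API view and its URL route.
# In a real project the view lives in views.py and the route in urls.py;
# the data here is hard-coded to keep the sketch self-contained.
from django.http import JsonResponse
from django.urls import path

def campaign_summary(request):
    # Stand-in for an ORM query over campaign metrics.
    data = {"campaign": "spring_launch", "impressions": 120000, "ctr": 0.034}
    return JsonResponse(data)

urlpatterns = [
    path("api/campaigns/summary/", campaign_summary),
]
```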


Tech Lead(Fullstack) – Nexa (Conversational Voice AI Platform)
Location: Bangalore | Type: Full-time
Experience: 4+ years (preferably in early-stage startups)
Tech Stack: Python (core), Node.js, React.js
About Nexa
Nexa is a new venture by the founders of HeyCoach—Pratik Kapasi and Aditya Kamat—on a mission to build the most intuitive voice-first AI platform. We’re rethinking how humans interact with machines using natural, intelligent, and fast conversational interfaces.
We're looking for a Tech Lead to join us at the ground level. This is a high-ownership, high-speed role for builders who want to move fast and go deep.
What You’ll Do
● Design, build, and scale backend and full-stack systems for our voice AI engine
● Work primarily with Python (core logic, pipelines, model integration), and support full-stack features using Node.js and React.js
● Lead projects end-to-end—from whiteboard to production deployment
● Optimize systems for performance, scale, and real-time processing
● Collaborate with founders, ML engineers, and designers to rapidly prototype and ship features
● Set engineering best practices, own code quality, and mentor junior team members as we grow
✅ Must-Have Skills
● 4+ years of experience in Python, building scalable production systems
● Has led projects independently, from design through deployment
● Excellent at executing fast without compromising quality
● Strong foundation in system design, data structures and algorithms
● Hands-on experience with Node.js and React.js in a production setup
● Deep understanding of backend architecture—APIs, microservices, data flows
● Proven success working in early-stage startups, especially during 0→1 scaling phases
● Ability to debug and optimize across the full stack
● High autonomy—can break down big problems, prioritize, and deliver without hand-holding
🚀 What We Value
● Speed > Perfection: We move fast, ship early, and iterate
● Ownership mindset: You act like a founder, even if you're not one
● Technical depth: You’ve built things from scratch and understand what’s under the hood
● Product intuition: You don’t just write code—you ask if it solves the user’s problem
● Startup muscle: You’re scrappy, resourceful, and don’t need layers of process
● Bias for action: You unblock yourself and others. You push code and push thinking
● Humility and curiosity: You challenge ideas, accept better ones, and never stop learning
💡 Nice-to-Have
● Experience with NLP, speech interfaces, or audio processing
● Familiarity with cloud platforms (GCP/AWS), CI/CD, Docker, Kubernetes
● Contributions to open-source or technical blogs
● Prior experience integrating ML models into production systems
Why Join Nexa?
● Work directly with founders on a product that pushes boundaries in voice AI
● Be part of the core team shaping product and tech from day one
● High-trust environment focused on output and impact, not hours
● Flexible work style and a flat, fast culture



🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET | 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate joiners only
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing.
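As a sketch of the automated testing expected here (pytest with pandas; the transformation under test and its columns are hypothetical), a small data-quality test might look like:

```python
# Hedged sketch: a pytest check that a transformation drops only rows
# with missing keys. Function and column names are hypothetical.
import pandas as pd

def transform(orders: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical transformation under test: drop rows missing the key column.
    return orders.dropna(subset=["order_id"]).reset_index(drop=True)

def test_transform_drops_null_keys_only():
    raw = pd.DataFrame({"order_id": [1, None, 3], "amount": [10.0, 20.0, 30.1]})
    result = transform(raw)
    assert len(result) == 2
    assert result["order_id"].notna().all()
    assert result["amount"].tolist() == [10.0, 30.1]
```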


Requirements
- 7+ years of experience with Python
- Strong expertise in Python frameworks (Django, Flask, or FastAPI)
- Experience with GCP, Terraform, and Kubernetes
- Deep understanding of REST API development and GraphQL
- Strong knowledge of SQL and NoSQL databases
- Experience with microservices architecture
- Proficiency with CI/CD tools (Jenkins, CircleCI, GitLab)
- Experience with container orchestration using Kubernetes
- Understanding of cloud architecture and serverless computing
- Experience with monitoring and logging solutions
- Strong background in writing unit and integration tests
- Familiarity with AI/ML concepts and integration points
Responsibilities
- Design and develop scalable backend services for our AI platform
- Architect and implement complex systems with high reliability
- Build and maintain APIs for internal and external consumption
- Work closely with AI engineers to integrate ML functionality
- Optimize application performance and resource utilization
- Make architectural decisions that balance immediate needs with long-term scalability
- Mentor junior engineers and promote best practices
- Contribute to the evolution of our technical standards and processes
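To illustrate how such a backend might expose ML functionality (a hedged FastAPI sketch; the model call is a stub standing in for a real classifier or hosted LLM, and all names are illustrative), consider:

```python
# Hedged sketch: a FastAPI endpoint wrapping a stubbed model call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

def run_model(text: str) -> tuple[str, float]:
    # Stand-in for a real model invocation.
    return ("positive", 0.97) if "good" in text.lower() else ("negative", 0.55)

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    label, score = run_model(req.text)
    return PredictResponse(label=label, score=score)
```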

Job Title : Technical Architect
Experience : 8 to 12+ Years
Location : Trivandrum / Kochi / Remote
Work Mode : Remote flexibility available
Notice Period : Immediate to max 15 days (30 days with negotiation possible)
Summary :
We are looking for a highly skilled Technical Architect with expertise in Java Full Stack development, cloud architecture, and modern frontend frameworks (Angular). This is a client-facing, hands-on leadership role, ideal for technologists who enjoy designing scalable, high-performance, cloud-native enterprise solutions.
🛠 Key Responsibilities :
- Architect scalable and high-performance enterprise applications.
- Hands-on involvement in system design, development, and deployment.
- Guide and mentor development teams in architecture and best practices.
- Collaborate with stakeholders and clients to gather and refine requirements.
- Evaluate tools, processes, and drive strategic technical decisions.
- Design microservices-based solutions deployed over cloud platforms (AWS/Azure/GCP).
✅ Mandatory Skills :
- Backend : Java, Spring Boot, Python
- Frontend : Angular (at least 2 years of recent hands-on experience)
- Cloud : AWS / Azure / GCP
- Architecture : Microservices, EAI, MVC, Enterprise Design Patterns
- Data : SQL / NoSQL, Data Modeling
- Other : Client handling, team mentoring, strong communication skills
➕ Nice to Have Skills :
- Mobile technologies (Native / Hybrid / Cross-platform)
- DevOps & Docker-based deployment
- Application Security (OWASP, PCI DSS)
- TOGAF familiarity
- Test-Driven Development (TDD)
- Analytics / BI / ML / AI exposure
- Domain knowledge in Financial Services or Payments
- 3rd-party integration tools (e.g., MuleSoft, BizTalk)
⚠️ Important Notes :
- Only candidates from outside Hyderabad/Telangana and non-JNTU graduates will be considered.
- Candidates must be serving notice or joinable within 30 days.
- Client-facing experience is mandatory.
- Java Full Stack candidates are highly preferred.
🧭 Interview Process :
- Technical Assessment
- Two Rounds – Technical Interviews
- Final Round
Job Title: Lead DevOps Engineer
Experience Required: 4 to 5 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, ELK Stack, NewRelic or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: 9316120132
Bangalore / Chennai
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of Conceptual, Logical and Physical data modelling
- Strong understanding of indexing, partitioning, and data sharding, with practical experience implementing them
- Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
- Working experience with at least one data modelling tool, preferably DBSchema or Erwin
- Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery.
- People with functional knowledge of the mutual fund industry will be a plus
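To make the partitioning discussion above concrete, here is a hedged sketch using the google-cloud-bigquery client (the project, dataset, and fields are illustrative; fund_code is a hypothetical mutual-fund attribute) that creates a date-partitioned, clustered table:

```python
# Hedged sketch: create a date-partitioned, clustered BigQuery table.
# Project, dataset, table, and field names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

schema = [
    bigquery.SchemaField("txn_id", "STRING"),
    bigquery.SchemaField("fund_code", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
    bigquery.SchemaField("txn_date", "DATE"),
]

table = bigquery.Table("my-proj.analytics.transactions", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="txn_date")  # daily partitions
table.clustering_fields = ["fund_code"]

client.create_table(table, exists_ok=True)
```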
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement Dimensional and Fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
Required Skills:
● Bachelor’s degree in Computer Science or similar field or equivalent work experience.
● 5+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.
● Expert with data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience in GCP & Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Function and GCS
● Good to have knowledge on SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience in Informatica Power exchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSONs etc.
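As an illustration of the Composer (Airflow) pipelines listed above, here is a minimal DAG sketch (assuming Airflow 2.4+ with the Google provider installed; the bucket, object path, and table are placeholders) loading a daily GCS extract into BigQuery:

```python
# Minimal Airflow sketch: daily GCS-to-BigQuery load.
# Bucket, object path, and destination table are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_orders_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="raw-extracts",
        source_objects=["orders/{{ ds }}/*.csv"],
        destination_project_dataset_table="analytics.orders",
        source_format="CSV",
        autodetect=True,
        write_disposition="WRITE_APPEND",
    )
```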

Senior Data Analyst – Power BI, GCP, Python & SQL
Job Summary
We are looking for a Senior Data Analyst with strong expertise in Power BI, Google Cloud Platform (GCP), Python, and SQL to design data models, automate analytics workflows, and deliver business intelligence that drives strategic decisions. The ideal candidate is a problem-solver who can work with complex datasets in the cloud, build intuitive dashboards, and code custom analytics using Python and SQL.
Key Responsibilities
* Develop advanced Power BI dashboards and reports based on structured and semi-structured data from BigQuery and other GCP sources.
* Write and optimize complex SQL queries (BigQuery SQL) for reporting and data modeling.
* Use Python to automate data preparation tasks, build reusable analytics scripts, and support ad hoc data requests.
* Partner with data engineers and stakeholders to define metrics, build ETL pipelines, and create scalable data models.
* Design and implement star/snowflake schema models and DAX measures in Power BI.
* Maintain data integrity, monitor performance, and ensure security best practices across all reporting systems.
* Drive initiatives around data quality, governance, and cost optimization on GCP.
* Mentor junior analysts and actively contribute to analytics strategy and roadmap.
Must-Have Skills
* Expert-level SQL : Hands-on experience writing complex queries in BigQuery, optimizing joins, window functions, and CTEs.
* Proficiency in Python : Data wrangling, Pandas, NumPy, automation scripts, API consumption, etc.
* Power BI expertise : Building dashboards, using DAX, Power Query (M), custom visuals, report performance tuning.
* GCP hands-on experience : Especially with BigQuery, Cloud Storage, and optionally Cloud Composer or Dataflow.
* Strong understanding of data modeling, ETL pipelines, and analytics workflows.
* Excellent communication skills and the ability to explain data insights to non-technical audiences.
Preferred Qualifications
* Experience in version control (Git) and working in CI/CD environments.
* Google Professional Data Engineer
* PL-300: Microsoft Power BI Data Analyst Associate


Job Title: Full-Stack Developer
Location: Bangalore/Remote
Type: Full-time
About Eblity:
Eblity’s mission is to empower educators and parents to help children facing challenges.
Over 50% of children in mainstream schools face academic or behavioural challenges, most of which go unnoticed and underserved. By providing the right support at the right time, we could make a world of difference to these children.
We serve a community of over 200,000 educators and parents and over 3,000 schools.
If you are purpose-driven and want to use your skills in technology to create a positive impact for children facing challenges and their families, we encourage you to apply.
Join us in shaping the future of inclusive education and empowering learners of all abilities.
Role Overview:
As a full-stack developer, you will lead the development of critical applications.
These applications enable services for parents of children facing various challenges such as Autism, ADHD and Learning Disabilities, and for experts who can make a significant difference in these children’s lives.
You will be part of a small, highly motivated team who are constantly working to improve outcomes for children facing challenges like Learning Disabilities, ADHD, Autism, Speech Disorders, etc.
Job Description:
We are seeking a talented and proactive Full Stack Developer with hands-on experience in the React / Python / Postgres stack, leveraging Cursor and Replit for full-stack development. As part of our product development team, you will work on building responsive, scalable, and user-friendly web applications, utilizing both front-end and back-end technologies. Your expertise with Cursor as an AI agent-based development platform and Replit will be crucial for streamlining development processes and accelerating product timelines.
Responsibilities:
- Design, develop, and maintain front-end web applications using React, ensuring a responsive, intuitive, and high-performance user experience.
- Build and optimize the back-end using FastAPI or Flask and PostgreSQL, ensuring scalability, performance, and maintainability.
- Leverage Replit for full-stack development, deploying applications, managing cloud resources, and streamlining collaboration across team members.
- Utilize Cursor, an AI agent-based development platform, to enhance application development, automate processes, and optimize workflows through AI-driven code generation, data management, and integration.
- Collaborate with cross-functional teams (back-end developers, designers, and product managers) to gather requirements, design solutions, and implement them seamlessly across the front-end and back-end.
- Design and implement PostgreSQL database schemas, writing optimized queries to ensure efficient data retrieval and integrity.
- Integrate RESTful APIs and third-party services across the React front-end and FastAPI/Flask/PostgreSQL back-end, ensuring smooth data flow.
- Implement and optimize reusable React components and FastAPI/Flask functions to improve code maintainability and application performance.
- Conduct thorough testing, including unit, integration, and UI testing, to ensure application stability and reliability.
- Optimize both front-end and back-end applications for maximum speed and scalability, while resolving performance issues in both custom code and integrated services.
- Stay up-to-date with emerging technologies to continuously improve the quality and efficiency of our solutions.
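To ground the FastAPI/PostgreSQL side of this stack, here is a hedged sketch (the connection string, table, and columns are illustrative; real code would use a connection pool, configuration, and input validation):

```python
# Hedged sketch: a FastAPI route backed by PostgreSQL via psycopg2,
# returning rows as JSON to the React front-end. All names are placeholders.
import psycopg2
import psycopg2.extras
from fastapi import FastAPI

app = FastAPI()
DSN = "dbname=appdb user=app password=<secret> host=localhost"  # placeholder

@app.get("/api/experts")
def list_experts(specialty: str):
    with psycopg2.connect(DSN) as conn:
        with conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor) as cur:
            # Parameterized query prevents SQL injection.
            cur.execute(
                "SELECT id, name, specialty FROM experts WHERE specialty = %s",
                (specialty,),
            )
            return cur.fetchall()
```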
Qualifications:
- Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 2+ years of experience in React development, with strong knowledge of component-based architecture, state management, and front-end best practices.
- Proven experience in Python development, with expertise in building web applications using frameworks like FastAPI or Flask.
- Solid experience working with PostgreSQL, including designing database schemas, writing optimized queries, and ensuring efficient data retrieval.
- Experience with Cursor, an AI agent-based development platform, to enhance full-stack development through AI-driven code generation, data management, and automation.
- Experience with Replit for full-stack development, deploying applications, and collaborating within cloud-based environments.
- Experience working with RESTful APIs, including their integration into both front-end and back-end systems.
- Familiarity with development tools and frameworks such as Git, Node.js, and Nginx.
- Strong problem-solving skills, a keen attention to detail, and the ability to work independently or within a collaborative team environment.
- Excellent communication skills to effectively collaborate with team members and stakeholders.
Nice-to-Have:
- Experience with other front-end frameworks (e.g., Vue, Angular).
- Familiarity with Agile methodologies and project management tools like Jira.
- Understanding of cloud technologies and experience deploying applications to platforms like AWS or Google Cloud.
- Knowledge of additional back-end technologies or frameworks (e.g., FastAPI).
What We Offer:
- A collaborative and inclusive work environment that values every team member’s input.
- Opportunities to work on innovative projects using Cursor and Replit for full-stack development.
- Competitive salary and comprehensive benefits package.
- Flexible working hours and potential for remote work options.
Location: Remote
If you're passionate about full-stack development and leveraging AI-driven platforms like Cursor and Replit to build scalable solutions, apply today to join our forward-thinking team!

What You’ll Do:
As a Data Scientist, you will work closely across DeepIntent Analytics teams located in New York City, India, and Bosnia. The role will support internal and external business partners in defining patient and provider audiences, and generating analyses and insights related to measurement of campaign outcomes, Rx, patient journey, and supporting evolution of DeepIntent product suite. Activities in this position include creating and scoring audiences, reading campaign results, analyzing medical claims, clinical, demographic and clickstream data, performing analysis and creating actionable insights, summarizing, and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better audiences
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights
- Explore ways of using inference, statistical, machine learning techniques to improve the performance of existing algorithms and decision heuristics
- Design and deploy new iterations of production-level code
- Contribute posts to our upcoming technical blog
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, Operations Research, or Data Science. A graduate degree is strongly preferred
- 3+ years of working experience as a Data Analyst, Data Engineer, or Data Scientist in digital marketing, consumer advertising, telecom, or other areas requiring customer-level predictive analytics
- Background in either data engineering or analytics
- Hands on technical experience is required, proficiency in performing statistical analysis in Python, including relevant libraries, required
- You have an advanced understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications)
- Experience in programmatic, DSP related, marketing predictive analytics, audience segmentation or audience behaviour analysis or medical / healthcare experience
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference)
- Familiarity with data science tools such as XGBoost, PyTorch, and Jupyter, and strong LLM user experience (developer/API experience is a plus)
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing
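As a sketch of the predictive modeling described above (synthetic data only; XGBoost and scikit-learn are assumed for illustration, not named as the team's actual stack), audience scoring might look like:

```python
# Hedged sketch: train a gradient-boosting classifier and produce
# audience propensity scores. Features are synthetic stand-ins for
# claims/clickstream data.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]   # propensity scores per member
print("AUC:", roc_auc_score(y_te, scores))
```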

As a Principal Software Engineer on our team, you will:
- Design and deliver the next generation of Toast products using our technology stack, which includes Kotlin, DynamoDB, React, Pulsar, Apache Camel, GraphQL, and Big Data technologies.
- Collaborate with our Data Platform teams to develop best-in-class reporting and analytics products.
- Document solution designs, write and review code, test, and roll out solutions to production. Capture and act on customer feedback to iteratively enhance the customer experience.
- Work closely with peers to optimize solution design for performance, flexibility, and scalability — enabling multiple product and engineering teams to work on a shared framework and platform.
- Partner with UX, Product Management, QA, and other engineering teams to build robust, high-quality solutions in a fast-moving, complex environment.
- Coach and mentor engineers on best practices and modern software development standards.
Do you have the right ingredients? (Requirements)
- 12+ years of software development experience.
- Proven experience in delivering high-quality, reliable, and scalable services to production in a continuous delivery environment.
- Expertise in AI, Cloud technologies, Image Processing, and Full Stack Development.
- Strong database skills, proficient in SQL Server, PostgreSQL, or DynamoDB.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Proficient in one or more object-oriented programming languages like Java, Kotlin, or C#.
- Hands-on experience working in Agile/Scrum environments.
- Demonstrated ability to lead the development and scaling of mission-critical platform components.
- Strong problem-solving skills, with the ability to navigate complex and ambiguous challenges and clearly communicate solutions.
- Skilled in balancing delivery speed with platform stability.
- Passionate about writing clean, maintainable, and impactful code.
- Experience in mentoring and coaching other engineers.
TECHNICAL MANAGER
Department: Product Engineering Location: Noida/Chennai
Experience: 12+ years with 2+ years in a similar role
Job Summary:
We are looking for an inspiring leader to lead a dynamic R&D team with a strong “Product & Customer” spirit. As an Engineering Manager, you will be responsible for the entire process, from design and specification to code quality, process integration, and delivery performance.
Key Responsibilities:
• Collaborate closely with Product Management teams to design and develop business modules.
• As a manager, coordinate a diverse team and ensure collaboration between different departments; empathetic and fair yet demanding management with particular attention to operational excellence.
• Actively participate in resolving technical issues and challenges that the team encounters during development, as well as escalated client implementation and production issues.
• Anticipate technical challenges and work to address them proactively to minimize disruptions to the development process; guide the team in making architectural choices.
• Promote and advocate for best practices in software development, including coding standards, testing practices, and documentation.
• Make informed decisions on technical trade-offs and communicate those decisions to the team and stakeholders.
• Stay on top of critical client/implementation issues and keep stakeholders informed.
PROFILE
• Good proficiency overlap with technologies like: Java 17, Spring, Spring MVC, RESTful web services, Hibernate, RDBMS, Spring Security, Ansible, Docker, Kubernetes, JMeter, Angular.
• Strong experience in development tools and CI/CD pipelines. Extensive experience with Agile.
• Deep understanding of cloud technologies on at least one of the cloud platforms: AWS, Azure, or Google Cloud.
• Strong communicator with the ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.
• Proven leadership skills.
• Product development experience preferred; Fintech or lending domain experience is a plus.
• Engineering degree or equivalent.
Roles and Responsibilities:
• Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
• Strong experience in development tools and CI/CD pipelines. Extensive experience with Agile.
• Good proficiency overlap with technologies like: Java 8, Spring, Spring MVC, RESTful web services, Hibernate, Oracle PL/SQL, Spring Security, Ansible, Docker, JMeter, Angular.
• Strong fundamentals and clarity of REST web services; the candidate should have exposure to developing REST services that handle large data sets.
• Fintech or lending domain experience is a plus but not necessary.
• Deep understanding of cloud technologies on at least one of the cloud platforms AWS, Azure or Google Cloud
• Wide knowledge of technology solutions and ability to learn and work with emerging technologies, methodologies, and solutions.
• Strong communicator with ability to collaborate cross-functionally, build relationships, and achieve broader organizational goals.
• Provide vision leadership for the technology roadmap of our products. Understand product capabilities and strategize technology for its alignment with business objectives and maximizing ROI.
• Define technical software architectures and lead development of frameworks.
• Engage end to end in product development, starting from business requirements to realization of product and to its deployment in production.
• Research, design, and implement the complex features being added to existing products and/or create new applications / components from scratch.
Minimum Qualifications
• Bachelor's or higher engineering degree in Computer Science or a related technical field, or equivalent additional professional experience.
• 5 years of experience in delivering solutions from concept to production that are based on Java and open-source technologies as an enterprise architect in global organizations.
• 12-15 years of industry experience in design, development, deployments, operations and managing non-functional perspectives of technical solutions.
We are seeking a talented Engineer to join our AI team. You will technically lead experienced software and machine learning engineers to develop, test, and deploy AI-based solutions, with a primary focus on large language models and other machine learning applications. This is an excellent opportunity to apply your software engineering skills in a dynamic, real-world environment and gain hands-on experience in cutting-edge AI technology.
Key Roles & Responsibilities:
- Design and implement software solutions that power machine learning models, particularly in LLMs
- Create robust data pipelines, handling data preprocessing, transformation, and integration for machine learning projects
- Collaborate with the engineering team to build and optimize machine learning models, particularly LLMs, that address client-specific challenges
- Partner with cross-functional teams, including business stakeholders, data engineers, and solutions architects to gather requirements and evaluate technical feasibility
- Design and implement scalable infrastructure for developing and deploying GenAI solutions
- Support model deployment and API integration to ensure interaction with existing enterprise systems.
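The deployment and API-integration work above often boils down to putting a trained model behind a small HTTP service. Below is a minimal sketch using FastAPI and a scikit-learn artifact; the model file, route, and request schema are hypothetical placeholders, and a production service would add authentication, batching, and monitoring.

```python
# Minimal sketch: serve a trained scikit-learn model over HTTP with FastAPI.
# The model artifact and request schema are hypothetical placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("sentiment-model.joblib")  # hypothetical artifact

class PredictRequest(BaseModel):
    texts: list[str]

@app.post("/predict")
def predict(req: PredictRequest):
    # Works if the persisted pipeline embeds its own text vectorizer.
    preds = model.predict(req.texts)
    return {"predictions": preds.tolist()}
```

Run locally with uvicorn (e.g., uvicorn main:app --reload, assuming the file is named main.py).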
Basic Qualifications:
- A master's degree or PhD in Computer Science, Data Science, Engineering, or a related field
- Experience: 3-5 Years
- Strong programming skills in Python and Java
- Good understanding of machine learning fundamentals
- Hands-on experience with Python and common ML libraries (e.g., PyTorch, TensorFlow, scikit-learn)
- Familiarity with frontend development and frameworks like React
- Basic knowledge of LLMs and transformer-based architectures is a plus.
Preferred Qualifications
- Excellent problem-solving skills and an eagerness to learn in a fast-paced environment
- Strong attention to detail and ability to communicate technical concepts clearly
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are looking for a Senior Software Development Engineer with 5-8 years of experience specializing in infrastructure deployment automation and VMware workload migration. The ideal candidate will have expertise in Infrastructure-as-Code (IaC), VMware vSphere, vMotion, HCX, Terraform, Kubernetes, and AI POD managed services. You will be responsible for automating infrastructure provisioning, migrating workloads from VMware environments to cloud and hybrid infrastructures, and optimizing AI/ML deployments.
Key Roles & Responsibilities
- Automate infrastructure deployment using Terraform, Ansible, and Helm for VMware and cloud environments.
- Develop and implement VMware workload migration strategies, including vMotion, HCX, SRM (Site Recovery Manager), and lift-and-shift migrations.
- Migrate VMware-based workloads to public cloud (AWS, Azure, GCP) or hybrid cloud environments.
- Optimize and manage AI POD workloads on VMware and Kubernetes-based environments.
- Leverage VMware HCX for live and bulk workload migrations, ensuring minimal downtime and optimal performance.
- Automate virtual machine provisioning and lifecycle management using VMware vSphere APIs, PowerCLI, or vRealize Automation.
- Integrate VMware workloads with Kubernetes for containerized AI/ML workflows (a minimal client sketch follows this list).
- Ensure workload high availability and disaster recovery post-migration using VMware SRM, vSAN, and backup strategies.
- Monitor and troubleshoot migration performance using vRealize Operations, Prometheus, Grafana, and ELK.
- Develop and optimize CI/CD pipelines to automate workload migration, deployment, and validation.
- Ensure security and compliance for workloads before, during, and after migration.
- Collaborate with cloud architects to design hybrid cloud solutions supporting AI/ML workloads.
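For the Kubernetes-integration item above, day-to-day automation is often written against the official Kubernetes Python client. Below is a minimal, read-only sketch that lists pods in a namespace; it assumes a reachable cluster and a local kubeconfig, and the namespace name is a placeholder.

```python
# Minimal sketch: list pods with the official Kubernetes Python client
# (pip install kubernetes). Assumes a valid local kubeconfig.
from kubernetes import client, config

def list_pods(namespace: str = "ai-workloads") -> None:  # placeholder namespace
    config.load_kube_config()  # reads ~/.kube/config
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod(namespace).items:
        print(pod.metadata.name, pod.status.phase)

if __name__ == "__main__":
    list_pods()
```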
Basic Qualifications
- 5–8 years of experience in infrastructure automation, VMware workload migration, and cloud integration.
- Expertise in VMware vSphere, ESXi, vMotion, HCX, SRM, vSAN, and NSX-T.
- Hands-on experience with workload migration tools such as VMware HCX, CloudEndure, AWS Application Migration Service, and Azure Migrate.
- Proficiency in Infrastructure-as-Code using Terraform, Ansible, PowerCLI, and vRealize Automation.
- Strong experience with Kubernetes (EKS, AKS, GKE) and containerized AI/ML workloads.
- Experience in public cloud migration (AWS, Azure, GCP) for VMware-based workloads.
- Hands-on knowledge of CI/CD tools such as Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Strong scripting and automation skills in Python, Bash, or PowerShell.
- Familiarity with disaster recovery, backup, and business continuity planning in VMware environments.
- Experience in performance tuning and troubleshooting for VMware-based workloads.
Preferred Qualifications
- Experience with NVIDIA GPU orchestration (e.g., KubeFlow, Triton, RAPIDS).
- Familiarity with Packer for automated VM image creation.
- Exposure to Edge AI deployments, federated learning, and AI inferencing at scale.
- Contributions to open-source infrastructure automation projects.
About the Role
We are looking for a talented LLM & Backend Engineer to join our AI innovation team at EaseMyTrip.com and help power the next generation of intelligent travel experiences. In this role, you will lead the integration and optimization of Large Language Models (LLMs) to create conversational travel agents that can understand, recommend, and assist travelers across platforms. You will work at the intersection of backend systems, AI models, and natural language understanding, bringing smart automation to every travel interaction.
Key Responsibilities:
- LLM Integration: Deploy and integrate LLMs (e.g., GPT-4, Claude, Mistral) to process natural language queries and deliver personalized travel recommendations.
- Prompt Engineering & RAG: Design optimized prompts and implement Retrieval-Augmented Generation (RAG) workflows to enhance contextual relevance in multi-turn conversations (a minimal sketch follows this list).
- Conversational Flow Design: Build and manage robust conversational workflows capable of handling complex travel scenarios such as booking modifications and cancellations.
- LLM Performance Optimization: Tune models and workflows to balance performance, scalability, latency, and cost across diverse environments.
- Backend Development: Develop scalable, asynchronous backend services using FastAPI or Django, with a focus on secure and efficient API architectures.
- Database & ORM Design: Design and manage data using PostgreSQL or MongoDB, and implement ORM solutions like SQLAlchemy for seamless data interaction.
- Cloud & Serverless Infrastructure: Deploy solutions on AWS, GCP, or Azure using containerized and serverless tools such as Lambda and Cloud Functions.
- Model Fine-Tuning & Evaluation: Fine-tune open-source and proprietary LLMs using techniques like LoRA and PEFT, and evaluate outputs using BLEU, ROUGE, or similar metrics.
- NLP Pipeline Implementation: Develop NLP functionalities including named entity recognition, sentiment analysis, and dialogue state tracking.
- Cross-Functional Collaboration: Work closely with AI researchers, frontend developers, and product teams to ship impactful features rapidly and iteratively.
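To make the RAG item above concrete, here is a minimal retrieval-and-prompt-assembly sketch. The embed function is a stand-in for a real embedding model (e.g., a sentence-transformers encoder), the documents are toy data, and the final LLM call is omitted; in practice LangChain or LlamaIndex would replace most of this.

```python
# Minimal RAG sketch: rank snippets by cosine similarity, then assemble a
# grounded prompt. embed() is a placeholder for a real embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: substitute a real encoder; here we fake a unit vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

DOCS = [
    "Bookings can be modified up to 24 hours before departure.",
    "Refunds for cancellations are processed within 7 business days.",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])

def build_prompt(query: str, k: int = 1) -> str:
    scores = DOC_VECS @ embed(query)  # cosine similarity on unit vectors
    context = "\n".join(DOCS[i] for i in np.argsort(scores)[::-1][:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```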
Preferred Candidate Profile:
- Experience: Minimum 2 years in backend development with at least 1 year of hands-on experience working with LLMs or NLP systems.
- Programming Skills: Proficient in Python with practical exposure to asynchronous programming and frameworks like FastAPI or Django.
- LLM Ecosystem Expertise: Experience with tools and libraries such as LangChain, LlamaIndex, Hugging Face Transformers, and OpenAI/Anthropic APIs.
- Database Knowledge: Strong understanding of relational and NoSQL databases, including schema design and performance optimization.
- Model Engineering: Familiarity with prompt design, LLM fine-tuning (LoRA, PEFT), and evaluation metrics for language models.
- Cloud Deployment: Comfortable working with cloud platforms (AWS/GCP/Azure) and building serverless or containerized deployments.
- NLP Understanding: Solid grasp of NLP concepts including intent detection, dialogue management, and text classification.
- Problem-Solving Mindset: Ability to translate business problems into AI-first solutions with a user-centric approach.
- Team Collaboration: Strong communication skills and a collaborative spirit to work effectively with multidisciplinary teams.
- Curiosity and Drive: Passionate about staying at the forefront of AI and using emerging technologies to build innovative travel experiences.

Why This Role Matters
- We are looking for a Staff Engineer to lead the technical direction and hands-on development of our next-generation, agentic AI-first marketing platforms. This is a high-impact role to architect, build, and ship products that change how marketers interact with data, plan campaigns, and make decisions.
What You'll Do
- Build Gen-AI native products: Architect, build, and ship platforms powered by LLMs, agents, and predictive AI
- Stay hands-on: Design systems, write code, debug, and drive product excellence
- Lead with depth: Mentor a high-caliber team of full stack engineers.
- Speed to market: Rapidly ship and iterate on MVPs to maximize learning and feedback.
- Own the full stack: from backend data pipelines to intuitive UIs, from Airflow to React, from BigQuery to embeddings.
- Scale what works: Ensure scalability, security, and performance in multi-tenant, cloud-native environments (GCP).
- Collaborate deeply: Work closely with product, growth, and leadership to align tech with business priorities.
What You Bring
- 8+ years of experience building and scaling full-stack, data-driven products
- Proficiency in backend (Node.js, Python) and frontend (React), with solid GCP experience
- Strong grasp of data pipelines, analytics, and real-time data processing
- Familiarity with Gen-AI frameworks (LangChain, LlamaIndex, OpenAI APIs, vector databases)
- Proven architectural leadership and technical ownership
- Product mindset with a bias for execution and iteration
Our Tech Stack
- Cloud: Google Cloud Platform
- Backend: Node.js, Python, Airflow
- Data: BigQuery, Cloud SQL
- AI/ML: TensorFlow, OpenAI APIs, custom agents
- Frontend: React.js
What You Get
- Meaningful equity in a high-growth startup
- The chance to build global products from India
- A culture that values clarity, ownership, learning, humility, and candor
- A rare opportunity to build with Gen-AI from the ground up
Who You Are
- You’re initiative-driven, not interruption-driven.
- You code because you love building things that matter.
- You enjoy ambiguity and solve problems from first principles.
- You believe true leadership is contextual, hands-on, and grounded.
- You’re here to build — not just maintain.
- You care deeply about seeing your products empower real users, run reliably at scale, and adapt intelligently with minimal manual effort.
- You know that elegant code is just 30% of the job — the real craft lies in the engineering rigour, edge-case handling, and production resilience that make great products truly dependable.

We’re looking for a Product Ninja with the mindset of a Tech Catalyst — a proactive executor who thrives at the intersection of product, technology, and user experience. In this role, you’ll bring product ideas to life, translate strategy into action, and collaborate closely with engineers, designers, and stakeholders to deliver impactful solutions.
This role is ideal for someone who’s hands-on, detail-oriented, and passionate about using technology to create real customer value.
Responsibilities:
- Support the definition and execution of the product roadmap in alignment with business goals.
- Work closely with engineering, design, QA, and marketing teams to drive product development.
- Translate product requirements into detailed specs, user stories, and acceptance criteria.
- Conduct competitive research and analyze user feedback to inform feature enhancements.
- Track product performance post-launch and gather insights for continuous improvement.
- Assist in managing the full product lifecycle, from ideation to rollout.
- Be a tech-savvy contributor, suggesting improvements based on emerging tools, platforms, and technologies.
Qualification:
- Bachelor’s degree in Business, Marketing, Computer Science, or a related field.
- 3+ years of hands-on experience in product management, product operations, or related roles.
- Comfortable working in fast-paced, cross-functional tech environments.
Required Skills:
- Strong analytical and problem-solving abilities.
- Clear, concise communication and documentation skills.
- Proficiency with project and product management tools (e.g., JIRA, Trello, Confluence).
- Ability to manage details without losing sight of the bigger picture.
Preferred Skills:
- Experience with Agile or Scrum workflows.
- Familiarity with UX/UI best practices and working with design systems.
- Exposure to APIs, databases, or cloud-based platforms is a plus.
- Comfortable with basic data analysis and tools like Excel, SQL, or analytics dashboards.
Who You Are:
- A doer who turns ideas into working solutions.
- A collaborator who thrives in tech teams and enjoys building alongside others.
- A catalyst who nudges things forward with curiosity, speed, and smart experimentation.


🚀 React Developer – Gurgaon (Onsite Role)
🧑‍💻 Experience: 2 to 3 Years
🕒 Joining Time: Immediate to 15 Days
📍 Location: Gurgaon (Work from Office)
We’re hiring a passionate and skilled React Developer who is ready to take their frontend game to the next level. If you’re someone who breathes clean UI, scalable components, and loves delivering pixel-perfect performance — this is your chance.
🧩 Your Responsibilities:
- Develop and maintain rich, interactive web applications using React.js.
- Architect front-end flows using Flux or Redux for seamless state management.
- Design reusable component libraries with attention to responsiveness and accessibility.
- Integrate with REST APIs and manage secure data exchange.
- Collaborate with cross-functional teams (UI/UX, Backend, QA) for seamless releases.
- Optimize application speed, bundle sizes, and rendering performance.
- Implement routing and navigation using React Router and handle form states with Formik or React Hook Form.
- Follow CI/CD workflows, write test cases, and participate in code reviews.
✅ Core Skills You Must Have:
- Strong hands-on experience in React.js, JavaScript (ES6+), and TypeScript.
- Deep knowledge of React Hooks, Context API, Redux, and Flux architecture.
- Excellent command of HTML5, CSS3, SASS, and responsive design frameworks like Material UI, Tailwind, or Bootstrap.
- Familiar with Webpack, Babel, Vite, and other bundling tools.
- Experience with REST API integration, error handling, and API security best practices.
- Solid understanding of unit testing tools like Jest, React Testing Library, or Cypress.
- Experience in Git-based workflows, Agile teams, and JIRA or similar tracking tools.
☁️ Nice to Have:
- Exposure to cloud platforms like AWS, Azure, or GCP (deployment, storage, CI/CD pipelines).
- Experience with Docker (for frontend containerization during builds).
- Familiarity with micro frontends architecture or component-based scalable design.
- Basic knowledge of Node.js for API understanding and backend collaboration.
🌟 What You Get:
- Work on enterprise-level projects with modern frontend architecture
- Exposure to scalable and distributed UI systems
- A collaborative environment with performance-focused peers
- Real career growth and tech leadership opportunities

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake (a minimal PySpark sketch follows this list).
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency.
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
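As a sketch of the first responsibility in this list, here is a minimal PySpark batch transformation: read raw events, clean and aggregate them, and write partitioned output. The paths and column names are hypothetical; on Databricks the Spark session comes preconfigured and the sink would typically be a Delta table.

```python
# Minimal ETL sketch (PySpark): read raw events, clean and aggregate,
# write partitioned Parquet. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-revenue-etl").getOrCreate()

events = (
    spark.read.parquet("s3://raw-bucket/events/")  # hypothetical source
    .where(F.col("amount").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

daily = events.groupBy("event_date", "country").agg(
    F.sum("amount").alias("revenue"),
    F.countDistinct("user_id").alias("active_users"),
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://curated-bucket/daily_revenue/"  # hypothetical sink
)
```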
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.
Technical Skills:
- Hands-on experience with AWS, Google Cloud Platform (GCP), and Microsoft Azure cloud computing
- Proficiency in Windows Server and Linux server environments
- Proficiency with Internet Information Services (IIS), Nginx, Apache, etc.
- Experience deploying .NET applications (ASP.NET, MVC, Web API, WCF, etc.), as well as .NET Core, Python, and Node.js applications
- Familiarity with GitLab or GitHub for version control and Jenkins for CI/CD processes
Key Responsibilities:
- ☁️ Manage cloud infrastructure and automation on AWS, Google Cloud (GCP), and Azure.
- 🖥️ Deploy and maintain Windows Server environments, including Internet Information Services (IIS).
- 🐧 Administer Linux servers and ensure their security and performance.
- 🚀 Deploy .NET applications (ASP.Net, MVC, Web API, WCF, etc.) using Jenkins CI/CD pipelines.
- 🔗 Manage source code repositories using GitLab or GitHub.
- 📊 Monitor and troubleshoot cloud and on-premises server performance and availability.
- 🤝 Collaborate with development teams to support application deployments and maintenance.
- 🔒 Implement security best practices across cloud and server environments.
Required Skills:
- ☁️ Hands-on experience with AWS, Google Cloud (GCP), and Azure cloud services.
- 🖥️ Strong understanding of Windows Server administration and IIS.
- 🐧 Proficiency in Linux server management.
- 🚀 Experience in deploying .NET applications and working with Jenkins for CI/CD automation.
- 🔗 Knowledge of version control systems such as GitLab or GitHub.
- 🛠️ Good troubleshooting skills and ability to resolve system issues efficiently.
- 📝 Strong documentation and communication skills.
Preferred Skills:
- 🖥️ Experience with scripting languages (PowerShell, Bash, or Python) for automation.
- 📦 Knowledge of containerization technologies (Docker, Kubernetes) is a plus.
- 🔒 Understanding of networking concepts, firewalls, and security best practices.
Job Title: Backend Developer
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.
Annual Compensation: 6-10 LPA
Responsibilities:
- Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
- Promote a culture of collaboration, knowledge sharing, and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and contribute to architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
- Extensive experience with JavaScript backend frameworks (e.g., Express, Socket.IO) and a deep understanding of their ecosystems.
- Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
- Practical experience with Redis and caching mechanisms to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
Job Description
We are looking for a passionate and skilled Rust Developer with at least 3 years of experience to join our growing development team. The ideal candidate will be proficient in building robust and scalable APIs using the Rocket framework, and have hands-on experience with PostgreSQL and the Diesel ORM. You will be working on performance-critical backend systems, designing APIs, managing deployments, and collaborating closely with other developers.
Responsibilities
- Design, develop, and maintain APIs using Rocket.
- Work with PostgreSQL databases, using Diesel ORM for efficient data access.
- Write clean, maintainable, and efficient Rust code.
- Apply object-oriented and functional programming principles effectively.
- Build and consume RESTful APIs and WebSockets for real-time communication.
- Handle server-side deployments and assist in managing the infrastructure.
- Optimize application performance and ensure high availability.
- Collaborate with frontend developers and DevOps engineers to integrate systems smoothly.
- Participate in code reviews and technical discussions.
- Apply knowledge of data structures and algorithms to solve complex problems efficiently.
Requirements
- 3+ years of experience working with Rust in production environments.
- Strong hands-on experience with Rocket framework.
- Solid understanding of Diesel ORM and PostgreSQL.
- Good grasp of OOP and functional programming concepts.
- Familiarity with RESTful APIs, WebSockets, and other web protocols.
- Experience handling application deployments and basic server management.
- Strong foundation in data structures, algorithms, and software design principles.
- Ability to write clean, well-documented, and testable code.
- Good communication skills and ability to work collaboratively.
Package
- As per industry standards
Nice to Have
- Experience with CI/CD pipelines.
- Familiarity with containerization tools like Docker.
- Knowledge of cloud platforms (AWS, GCP, etc.).
- Contribution to open-source Rust projects.
- Knowledge of basic cryptographic primitives (AES, hashing, etc.).
Perks & Benefits
- Competitive compensation.
- Flexible work hours and remote-friendly culture.
- Opportunity to work with a modern tech stack.
- Supportive team and growth-oriented environment.
If you're passionate about Rust, love building high-performance systems, and enjoy solving real-world problems with elegant code, we’d love to connect! Apply now and let’s craft powerful backend experiences together! ⚙️🚀
Role - Sr. QA Engineer
Location- Gurgaon
Mode - Hybrid
Experience - 6 Years
Notice Period:- Immediate Joiner
Must-Have:
- Experience in QA automation/platform QA
- Experience in Playwright, Selenium, Rest Assured
- Strong in API & load testing (JMeter, k6)
- GCP or Azure experience
- CI/CD: GitHub Actions, Jenkins
- Drive test automation, CI/CD quality gates, chaos testing & more.
Job Summary:
We are looking for a detail-oriented and proactive QA Engineer with 3–5 years of hands-on experience in both manual and automated testing. The ideal candidate will have strong knowledge of API testing (REST, SOAP, GraphQL), web debugging tools, various cloud platforms, and end-to-end testing frameworks. Excellent communication skills and the ability to work directly with clients from the USA, UK, and Australia are essential. The role requires daily participation in client-facing stand-ups and ongoing collaboration in a dynamic, fast-paced environment focused on continuous learning.
Key Responsibilities:
- Perform API testing for REST, SOAP, and GraphQL; validate HTTP status codes using tools like Postman and Curl (a scripted sketch follows this list).
- Debug issues in the browser console and network tab, and analyze performance, WebSocket, and WebAssembly behavior.
- Reproduce and log issues with clear, actionable steps and supporting evidence.
- Conduct performance testing and identify bottlenecks.
- Communicate directly with clients from English-speaking countries (USA, UK, Australia) on a daily basis through stand-ups, scrums, and status updates.
- Test various authentication types (OAuth, JWT, API Keys, SSO, SAML etc.).
- Implement and maintain E2E tests using Cypress.
- Perform database validations (PostgreSQL, MySQL, MongoDB, DynamoDB, Elasticsearch).
- Ensure quality through thorough test planning, execution, and adherence to industry best practices.
- Collaborate closely with developers and stakeholders across time zones.
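The status-code checks named in the first item above are often scripted for CI alongside manual Postman/Curl work. Below is a minimal pytest + requests sketch against a hypothetical endpoint; the URL and response fields are placeholders.

```python
# Minimal API test sketch (pytest + requests). The endpoint and fields
# are hypothetical placeholders; run with: pytest test_api.py
import requests

BASE_URL = "https://api.example.com"  # hypothetical service

def test_get_booking_returns_200_and_expected_fields():
    resp = requests.get(f"{BASE_URL}/bookings/123", timeout=10)
    assert resp.status_code == 200
    assert {"id", "status"} <= resp.json().keys()

def test_unknown_booking_returns_404():
    resp = requests.get(f"{BASE_URL}/bookings/does-not-exist", timeout=10)
    assert resp.status_code == 404
```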
Required Skills and Experience:
- 3–5 years of experience in QA (manual + automation).
- Strong hands-on experience in API testing including REST, SOAP, and GraphQL.
- Knowledge of Curl, browser dev tools (Console, Network), and debugging web issues.
- Solid understanding of HTTP status codes, WebSocket, WebAssembly, and performance analysis.
- Ability to write reproducible and detailed bug reports.
- Excellent English communication and confidence in handling client discussions.
- Experience testing multiple authentication schemes.
- Working knowledge of cloud platforms: AWS, GCP, Azure.
- Proficiency with PostgreSQL, MySQL, MongoDB, DynamoDB, and Elasticsearch.
- Hands-on experience with Cypress for E2E testing.
- Familiarity with QA methodologies, STLC, SDLC, and test documentation standards.
Preferred:
- Prior experience working directly with clients from the USA or other English-speaking countries.
- Willingness to communicate with clients daily—participate in scrum meetings, standups, and provide regular status updates.
- A passion for continuous learning and ability to explore and adapt to new SaaS products regularly.
- Experience with CI/CD pipelines and Git-based version control.
- QA certifications such as ISTQB are a plus.
What We Offer:
- International project exposure (USA, UK, Australia).
- Collaborative team culture and flexible work environment.
- Continuous learning, mentorship, and career growth.
- Work with modern tools and evolving SaaS technologies.

As a Senior Backend & Infrastructure Engineer, you will take ownership of backend systems and cloud infrastructure. You’ll work closely with our CTO and cross-functional teams (hardware, AI, frontend) to design scalable, fault-tolerant architectures and ensure reliable deployment pipelines.
What You’ll Do:
- Backend Development: Maintain and evolve our Node.js (TypeScript) and Python backend services with a focus on performance and scalability.
- Cloud Infrastructure: Manage our infrastructure on GCP and Firebase (Auth, Firestore, Storage, Functions, AppEngine, PubSub, Cloud Tasks).
- Database Management: Handle Firestore and other NoSQL DBs. Lead database schema design and migration strategies.
- Pipelines & Automation: Build robust real-time and batch data pipelines. Automate CI/CD and testing for backend and frontend services.
- Monitoring & Uptime: Deploy tools for observability (logging, alerts, debugging). Ensure 99.9% uptime of critical services.
- Dev Environments: Set up and manage developer and staging environments across teams.
- Quality & Security: Drive code reviews, implement backend best practices, and enforce security standards.
- Collaboration: Partner with other engineers (AI, frontend, hardware) to integrate backend capabilities seamlessly into our global system.
Must-Haves:
- 5+ years of experience in backend development and cloud infrastructure.
- Strong expertise in Node.js (TypeScript) and/or Python.
- Advanced skills in NoSQL databases (Firestore, MongoDB, DynamoDB...).
- Deep understanding of cloud platforms, preferably GCP and Firebase.
- Hands-on experience with CI/CD, DevOps tools, and automation.
- Solid knowledge of distributed systems and performance tuning.
- Experience setting up and managing development & staging environments.
- Proficiency in English and remote communication.
Good to have:
- Event-driven architecture experience (e.g., Pub/Sub, MQTT).
- Familiarity with observability tools (Prometheus, Grafana, Google Monitoring).
- Previous work on large-scale SaaS products.
- Knowledge of telecommunication protocols (MQTT, WebSockets, SNMP).
- Experience with edge computing on Nvidia Jetson devices.
What We Offer:
- Competitive salary for the Indian market (depending on experience).
- Remote-first culture with async-friendly communication.
- Autonomy and responsibility from day one.
- A modern stack and a fast-moving team working on cutting-edge AI and cloud infrastructure.
- A mission-driven company tackling real-world environmental challenges.

Role: GCP Data Engineer
Notice Period: Immediate Joiners
Experience: 5+ years
Location: Remote
Company: Deqode
About Deqode
At Deqode, we work with next-gen technologies to help businesses solve complex data challenges. Our collaborative teams build reliable, scalable systems that power smarter decisions and real-time analytics.
Key Responsibilities
- Build and maintain scalable, automated data pipelines using Python, PySpark, and SQL.
- Work on cloud-native data infrastructure using Google Cloud Platform (BigQuery, Cloud Storage, Dataflow).
- Implement clean, reusable transformations using DBT and Databricks.
- Design and schedule workflows using Apache Airflow (a minimal DAG sketch follows this list).
- Collaborate with data scientists and analysts to ensure downstream data usability.
- Optimize pipelines and systems for performance and cost-efficiency.
- Follow best software engineering practices: version control, unit testing, code reviews, CI/CD.
- Manage and troubleshoot data workflows in Linux environments.
- Apply data governance and access control via Unity Catalog or similar tools.
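For the Airflow item above, here is a minimal DAG sketch: one daily task invoking a placeholder extract function. The dag_id, schedule, and callable are illustrative only, and the schedule argument assumes Airflow 2.4+.

```python
# Minimal Airflow DAG sketch: a single daily task. dag_id, schedule, and
# the callable are placeholders; assumes Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_events():
    print("pulling yesterday's events")  # placeholder for real extract logic

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_events", python_callable=extract_events)
```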
Required Skills & Experience
- Strong hands-on experience with PySpark, Spark SQL, and Databricks.
- Solid understanding of GCP services (BigQuery, Cloud Functions, Dataflow, Cloud Storage).
- Proficiency in Python for scripting and automation.
- Expertise in SQL and data modeling.
- Experience with DBT for data transformations.
- Working knowledge of Airflow for workflow orchestration.
- Comfortable with Linux-based systems for deployment and troubleshooting.
- Familiar with Git for version control and collaborative development.
- Understanding of data pipeline optimization, monitoring, and debugging.
Position: Project Manager
Location: Bengaluru, India (Hybrid/Remote flexibility available)
Company: PGAGI Consultancy Pvt. Ltd
About PGAGI
At PGAGI, we are building the future where human and artificial intelligence coexist to solve complex problems, accelerate innovation, and power sustainable growth. We develop and deploy advanced AI solutions across industries, making AI not just a tool but a transformational force for businesses and society.
Position Summary
PGAGI is seeking a dynamic and experienced Project Manager to lead cross-functional engineering teams and drive the successful execution of multiple AI/ML-centric projects. The ideal candidate is a strategic thinker with a solid background in engineering-led product/project management, especially in AI/ML product lifecycles. This role is crucial to scaling our technical operations, ensuring seamless collaboration, timely delivery, and high-impact results across initiatives.
Key Responsibilities
• Lead Engineering Teams Across AI/ML Projects: Manage and mentor cross-functional teams of ML engineers, DevOps professionals, and software developers through agile delivery cycles, ensuring timely and high-quality execution of AI-focused initiatives.
• Drive Agile Project Execution: Define project scope, objectives, timelines, and deliverables using Agile/Scrum methodologies. Ensure continuous sprint planning, backlog grooming, and milestone tracking via tools like Jira or GitHub Projects.
• Manage Multiple Concurrent Projects: Oversee the full lifecycle of multiple high-priority projects—ranging from AI model development and infrastructure integration to client delivery and platform enhancements.
• Collaborate with Technical and Business Stakeholders: Act as the bridge between engineering, research, and client-facing teams, translating complex requirements into actionable tasks and product features.
• Maintain Engineering and Infrastructure Quality: Uphold rigorous engineering standards across deployments. Coordinate testing, model performance validation, version control, and CI/CD operations.
• Budget and Resource Allocation: Optimize resource distribution across teams, track project costs, and ensure effective use of cloud infrastructure and personnel to maximize project ROI.
• Risk Management & Mitigation: Identify risks proactively across technical and operational layers. Develop mitigation plans and troubleshoot issues that may impact timelines or performance.
• Monitor KPIs and Delivery Metrics: Establish and monitor performance indicators such as sprint velocity, deployment frequency, incident response times, and customer satisfaction for each release.
• Support Continuous Improvement: Foster a culture of feedback and iteration. Champion retrospectives and process reviews to continually refine development practices and workflows.
Qualifications:
• Education: Bachelor’s or Master’s in Computer Science, Engineering, or a related technical field.
• Experience: Minimum 5 years of experience as a Project Manager, with at least 2 years managing AI/ML or software engineering teams.
• Tech Expertise: Familiarity with AI/ML lifecycles, cloud platforms (AWS, GCP, or Azure), and DevOps pipelines (Docker, Kubernetes, GitHub Actions, Jenkins).
• Tools: Strong experience with Jira, Confluence, and project tracking/reporting tools.
• Leadership: Proven success leading high-performing engineering teams in a fast-paced, innovative environment.
• Communication: Excellent written and verbal skills to interface with both technical and non-technical stakeholders.
• Certifications (Preferred): PMP, CSM, or certifications in AI/ML project management or cloud technologies.
Why Join PGAGI?
• Lead cutting-edge AI/ML product teams building scalable, impactful solutions.
• Be part of a fast-growing, innovation-driven startup environment.
• Enjoy a collaborative, intellectually stimulating workplace with growth opportunities.
• Competitive compensation and performance-based rewards.
• Access to learning resources, mentoring, and AI/DevOps communities.
A backend developer is an engineer who can handle all the work of databases, servers, systems engineering, and clients. Depending on the project, what customers need may be a mobile stack, a web stack, or a native application stack.
You will be responsible for:
- Build reusable code and libraries for future use.
- Own and build new modules/features end-to-end independently.
- Collaborate with other team members and stakeholders.
Required Skills:
- Thorough understanding of Node.js and TypeScript.
- Excellence in at least one framework like StrongLoop LoopBack, Express.js, Sails.js, etc.
- Basic architectural understanding of modern-day web applications.
- Diligence with coding standards.
- Must be good with Git and Git workflows.
- Experience with external integrations is a plus.
- Working knowledge of AWS, GCP, or Azure.
- Expertise with Linux-based systems.
- Experience with CI/CD tools like Jenkins is a plus.
- Experience with testing and automation frameworks.
- Extensive understanding of relational database systems (RDBMS).

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python (a minimal sketch follows this list).
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
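As a taste of the transformation scripting above, here is a minimal pandas sketch: load a raw CSV, clean it, aggregate it into a daily summary, and write Parquet. File names and columns are hypothetical.

```python
# Minimal transformation sketch (pandas): clean raw orders and produce a
# daily summary. File names and columns are hypothetical.
import pandas as pd

orders = pd.read_csv("raw_orders.csv", parse_dates=["order_ts"])

clean = (
    orders.dropna(subset=["order_id", "amount"])
    .drop_duplicates(subset="order_id")
    .assign(order_date=lambda df: df["order_ts"].dt.date)
)

daily = (
    clean.groupby("order_date", as_index=False)
    .agg(orders=("order_id", "count"), revenue=("amount", "sum"))
)

daily.to_parquet("daily_orders.parquet", index=False)
```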
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.
Job Title: IT Head – Fintech Industry
Department: Information Technology
Location: Andheri East
Reports to: COO
Job Type: Full-Time
Job Overview:
The IT Head in a fintech company is responsible for overseeing the entire information technology infrastructure, including the development, implementation, and maintenance of IT systems, networks, and software solutions. The role involves leading the IT team, managing technology projects, ensuring data security, and ensuring the smooth functioning of all technology operations. As the company scales, the IT Head will play a key role in enabling digital innovation, optimizing IT processes, and ensuring compliance with relevant regulations in the fintech sector.
Key Responsibilities:
1. IT Strategy and Leadership
- Develop and execute the company’s IT strategy to align with the organization’s overall business goals and objectives, ensuring the integration of new technologies and systems.
- Lead, mentor, and manage a team of IT professionals, setting clear goals, priorities, and performance expectations.
- Stay up-to-date with industry trends and emerging technologies, providing guidance and recommending innovations to improve efficiency and security.
- Oversee the design, implementation, and maintenance of IT systems that support fintech products, customer experience, and business operations.
2. IT Infrastructure Management
- Oversee the management and optimization of the company’s IT infrastructure, including servers, networks, databases, and cloud services.
- Ensure the scalability and reliability of IT systems to support the company’s growth and increasing demand for digital services.
- Manage system updates, hardware procurement, and vendor relationships to ensure that infrastructure is cost-effective, secure, and high-performing.
3. Cybersecurity and Data Protection
- Lead efforts to ensure the company’s IT infrastructure is secure, implementing robust cybersecurity measures to protect sensitive customer data, financial transactions, and intellectual property.
- Develop and enforce data protection policies and procedures to ensure compliance with data privacy regulations (e.g., GDPR, CCPA, RBI, etc.).
- Conduct regular security audits and vulnerability assessments, working with the security team to address potential risks proactively.
4. Software Development and Integration
- Oversee the development and deployment of software applications and tools that support fintech operations, including payment gateways, loan management systems, and customer engagement platforms.
- Collaborate with product teams to identify technological needs, integrate new features, and optimize existing products for improved performance and user experience.
- Ensure the seamless integration of third-party platforms, APIs, and fintech partners into the company’s core systems.
5. IT Operations and Support
- Ensure the efficient day-to-day operation of IT services, including helpdesk support, system maintenance, and troubleshooting.
- Establish service level agreements (SLAs) for IT services, ensuring that internal teams and customers receive timely support and issue resolution.
- Manage incident response, ensuring quick resolution of system failures, security breaches, or service interruptions.
6. Budgeting and Cost Control
- Manage the IT department’s budget, ensuring cost-effective spending on technology, software, hardware, and IT services.
- Analyze and recommend investments in new technologies and infrastructure that can improve business performance while optimizing costs.
- Ensure the efficient use of IT resources and the appropriate allocation of budget to support business priorities.
7. Compliance and Regulatory Requirements
- Ensure IT practices comply with relevant industry regulations and standards, such as financial services regulations, data privacy laws, and cybersecurity guidelines.
- Work with legal and compliance teams to ensure that all systems and data handling procedures meet industry-specific regulatory requirements (e.g., PCI DSS, ISO 27001).
- Provide input and guidance on IT-related regulatory audits and assessments, ensuring the organization is always in compliance.
8. Innovation and Digital Transformation
- Drive innovation by identifying opportunities for digital transformation within the organization, using technology to streamline operations and enhance the customer experience.
- Collaborate with other departments (marketing, customer service, product development) to introduce new fintech products and services powered by cutting-edge technology.
- Oversee the implementation of AI, machine learning, and other advanced technologies to enhance business performance, operational efficiency, and customer satisfaction.
9. Vendor and Stakeholder Management
- Manage relationships with external technology vendors, service providers, and consultants to ensure the company gets the best value for its investments.
- Negotiate contracts, terms of service, and service level agreements (SLAs) with vendors and technology partners.
- Ensure strong communication with business stakeholders, understanding their IT needs and delivering technology solutions that align with company objectives.
Qualifications and Skills:
Education:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (Master’s degree or relevant certifications like ITIL, PMP, or CISSP are a plus).
Experience:
- 8-12 years of experience in IT management, with at least 4 years in a leadership role, preferably within the fintech, banking, or technology industry.
- Strong understanding of IT infrastructure, cloud computing, database management, and cybersecurity best practices.
- Proven experience in managing IT teams and large-scale IT projects, especially in fast-paced, growth-driven environments.
- Knowledge of fintech products and services, including digital payments, blockchain, and online lending platforms.
Skills:
- Expertise in IT infrastructure management, cloud services (AWS, Azure, Google Cloud), and enterprise software.
- Strong understanding of cybersecurity protocols, data protection laws, and IT governance frameworks.
- Experience with software development and integration, particularly for fintech platforms.
- Strong project management and budgeting skills, with a track record of delivering IT projects on time and within budget.
- Excellent communication and leadership skills, with the ability to manage cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple priorities in a fast-paced, high-pressure environment.
Role & Responsibilities
Responsible for ensuring that the architecture and design of the platform remain top-notch with respect to scalability, availability, reliability, and maintainability
Act as a key technical contributor as well as a hands-on contributing member of the team.
Own end-to-end availability and performance of features, driving rapid product innovation while ensuring a reliable service.
Work closely with various stakeholders, such as Program Managers, Product Managers, the Reliability and Continuity Engineering (RCE) team, and the QE team, to estimate and execute features/tasks independently.
Maintain and drive tech backlog execution for non-functional requirements of the platform required to keep the platform resilient
Assist in release planning and prioritization based on technical feasibility and engineering constraints
A zeal to continually find new ways to improve architecture, design and ensure timely delivery and high quality.
1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, BQ optimization, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud - build pipelines (Python/Java) plus scripting, best practices, challenges
3. Knowledge of batch and streaming data ingestion; ability to build end-to-end data pipelines on GCP
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs NoSQL, and types of NoSQL databases (at least 2)
5. Data warehouse concepts - beginner to intermediate level
6. Data modeling, GCP databases, DB schema (or similar)
7. Hands-on data modelling for OLTP and OLAP systems
8. In-depth knowledge of conceptual, logical, and physical data modelling
9. Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same
10. Strong understanding of the variables impacting database performance for near-real-time reporting and application interaction
11. Working experience with at least one data modelling tool, preferably DBSchema or Erwin
12. Good understanding of GCP databases like AlloyDB, Cloud SQL, and BigQuery
13. Functional knowledge of the mutual fund industry is a plus
Candidates should be willing to work from Chennai; office presence is mandatory.
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Ability to design and implement Dimensional and Fact tables (a toy sketch follows this list)
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information and data from ERP, CRM, and HRIS applications to model, extract, and transform data for reporting and analytics.
● Define and document the use of BI through user experience/use cases and prototypes; test and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
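To illustrate the dimensional-and-fact-table work flagged earlier in this list, here is a toy pandas sketch that derives a customer dimension with surrogate keys and a fact table referencing it. All names are hypothetical, and a real warehouse load would also handle slowly changing dimensions in the team's ETL tool.

```python
# Toy star-schema sketch (pandas): build a customer dimension with
# surrogate keys, then a fact table referencing it. Names are
# hypothetical; real loads would handle slowly changing dimensions.
import pandas as pd

sales = pd.DataFrame({
    "customer": ["acme", "globex", "acme"],
    "amount":   [120.0, 75.5, 30.0],
})

# Dimension: one row per customer, keyed by a surrogate integer.
dim_customer = (
    sales[["customer"]].drop_duplicates().reset_index(drop=True)
    .rename_axis("customer_key").reset_index()
)

# Fact: swap the natural key for the surrogate key.
fact_sales = sales.merge(dim_customer, on="customer")[["customer_key", "amount"]]

print(dim_customer)
print(fact_sales)
```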


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission critical data services to its global customers powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles and Responsibilities:
- Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
- Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage (a minimal Beam sketch follows this list).
- Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
- Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
- Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
- Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
- Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
- Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
- Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
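Dataflow pipelines like those mentioned above are typically authored with Apache Beam. Below is a minimal batch sketch (word counts over a text file) that runs locally on the DirectRunner; the paths are placeholders, and a real Dataflow job would add GCP pipeline options.

```python
# Minimal Apache Beam sketch: batch word count, runnable locally with the
# DirectRunner. Paths are placeholders; Dataflow jobs add GCP options.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("input.txt")  # placeholder path
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda word, n: f"{word}\t{n}")
        | "Write" >> beam.io.WriteToText("counts")  # placeholder prefix
    )
```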
Basic Qualifications
- 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
- Strong proficiency in SQL and experience with BigQuery for data warehousing.
- Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
- Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
- Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
- Knowledge of data governance, security best practices, and compliance requirements in cloud environments.
Preferred Qualifications:
- Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
- Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
- Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
- Proficiency in Python, Java, or Scala for data engineering and pipeline development.
- Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
- Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
- Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.


Full Stack Developer
Location: Hyderabad
Experience: 7+ Years
Type: BCS - Business Consulting Services
RESPONSIBILITIES:
* Strong programming skills in Node.js [Must], React JS, Android, and Kotlin [Must]
* Hands-on experience in UI development with a good sense of UX.
* Hands-on experience in database design and management.
* Hands-on experience creating and maintaining backend frameworks for mobile applications.
* Hands-on development experience on cloud-based platforms like GCP/Azure/AWS.
* Ability to manage and provide technical guidance to the team.
* Strong experience in designing APIs using RAML, Swagger, etc.
* Service definition development.
* API standards, security, and policy definition and management.
REQUIRED EXPERIENCE:
- Bachelor's and/or master's degree in computer science, or equivalent work experience.
- Excellent analytical, problem-solving, and communication skills.
- 7+ years of software engineering experience in a multi-national company.
- 6+ years of development experience in Kotlin, Node.js, and React.js.
- 3+ years of experience creating solutions on a public cloud (GCP, AWS, or Azure).
- Experience with Git or a similar version control system, and with continuous integration.
- Proficiency in automated unit test development practices and design methodologies.
- Fluent English.

Proficient in Looker Actions, Looker dashboarding, Looker data entry, LookML, SQL queries, BigQuery, Looker Studio, and GCP.
Location: Remote
Shift: 2:00 PM to 12:00 AM IST or 10:30 AM to 7:30 PM IST
Working Days: Sunday to Thursday
Responsibilities:
● Create and maintain LookML code, which defines data models, dimensions, measures, and relationships within Looker.
● Develop reusable LookML components to ensure consistency and efficiency in report and dashboard creation.
● Build and customize dashboards, incorporating data visualizations such as charts and graphs to present insights effectively.
● Write complex SQL queries when necessary to extract and manipulate data from underlying databases, and optimize those queries for performance.
● Connect Looker to various data sources, including databases, data warehouses, and external APIs.
● Identify and address bottlenecks that affect report and dashboard loading times, and optimize Looker performance through query tuning, caching strategies, and indexing options.
● Configure user roles and permissions within Looker to control access to sensitive data, and implement data security best practices, including row-level and field-level security.
● Develop custom applications or scripts that interact with Looker's API for automation and integration with other tools and systems (a minimal sketch follows this list).
● Use version control systems (e.g., Git) to manage LookML code changes and collaborate with other developers.
● Provide training and support to business users, helping them navigate and use Looker effectively.
● Diagnose and resolve technical issues related to Looker, data models, and reports.
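As a minimal sketch of the API-scripting responsibility above, the snippet below uses the official looker_sdk Python package to run a saved Look and save its results. The Look id and output path are hypothetical placeholders.

```python
import looker_sdk  # official Looker Python SDK

# Credentials are read from a looker.ini file or from environment variables
# (LOOKERSDK_BASE_URL, LOOKERSDK_CLIENT_ID, LOOKERSDK_CLIENT_SECRET).
sdk = looker_sdk.init40()

# Run a saved Look and fetch its results as CSV; "42" is a hypothetical Look id.
csv_data = sdk.run_look(look_id="42", result_format="csv")

# Persist the results for downstream automation (hypothetical output path).
with open("look_42.csv", "w") as f:
    f.write(csv_data)
```

The same pattern extends to scheduling, content management, or user provisioning via other SDK calls.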
Skills Required:
● Experience in Looker's modeling language, LookML, including data models, dimensions, and measures.
● Strong SQL skills for writing and optimizing database queries across different SQL databases (GCP/BigQuery preferred).
● Knowledge of data modeling best practices
● Proficient in BigQuery, billing data analysis, GCP billing, unit costing, and invoicing, with the ability to recommend cost optimization strategies (a sample billing query follows this list).
● Previous experience in FinOps engagements is a plus.
● Proficiency in ETL processes for data transformation and preparation.
● Ability to create effective data visualizations and reports using Looker’s dashboard tools.
● Ability to optimize Looker performance by fine-tuning queries, caching strategies, and indexing.
● Familiarity with related tools and technologies, such as data warehousing (e.g., BigQuery), data transformation tools (e.g., Apache Spark), and scripting languages (e.g., Python).
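To illustrate the billing-analysis skill above, here is a hedged example of querying a standard GCP billing export in BigQuery from Python to get net monthly cost per service. The project and dataset names are hypothetical, and billing-export tables follow the pattern gcp_billing_export_v1_<BILLING_ACCOUNT_ID>.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project id

# Net monthly cost per GCP service: list price plus (negative) credits.
# Table name below is a hypothetical placeholder for a billing export table.
sql = """
SELECT
  service.description AS service,
  FORMAT_TIMESTAMP('%Y-%m', usage_start_time) AS month,
  ROUND(SUM(cost)
        + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)),
        2) AS net_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
GROUP BY service, month
ORDER BY net_cost DESC
"""
for row in client.query(sql):
    print(row.service, row.month, row.net_cost)
```

Results like these typically feed Looker dashboards for unit costing and chargeback reporting.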

As a Solution Architect, you will collaborate with our sales, presales, and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You have good written and verbal communication skills, enabling you to convey complex technical concepts clearly and effectively. You are a customer-focused, self-motivated, and responsible team player who can work under pressure with a positive attitude. You must have experience in managing and handling RFPs/RFIs, client demos and presentations, and converting opportunities into winning bids. You bring a strong work ethic, a positive attitude, enthusiasm for new challenges, and a willingness to keep learning, and you can multi-task and prioritize with good time management skills. You should be able to work independently with little or no supervision, be process-oriented and methodical, and demonstrate a quality-first approach.
The ability to convert a client's business challenges and priorities into winning proposals and bids through excellent technical solutions will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new-generation analytics platforms such as Microsoft Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate data services (e.g., Microsoft Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes (a minimal sketch follows this list).
- Understand conceptual, logical, and physical data modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
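As a minimal, tool-agnostic sketch of the ETL/ELT bullet above: in an Azure or Fabric engagement this would typically run in Data Factory or Spark, but the extract-transform-load pattern itself can be shown in a few lines of Python. The file path, column names, and SQLite target below are hypothetical stand-ins.

```python
import pandas as pd
from sqlalchemy import create_engine

# Extract: read raw sales data from a landing file (hypothetical path).
raw = pd.read_csv("landing/sales_2024.csv", parse_dates=["order_date"])

# Transform: drop incomplete records and aggregate to a daily reporting grain.
clean = raw.dropna(subset=["order_id", "amount"])
daily = (clean
         .assign(order_day=clean["order_date"].dt.date)
         .groupby("order_day", as_index=False)["amount"].sum())

# Load: write the curated table to a warehouse. SQLite is used here only
# for illustration; a Synapse or Fabric target would use its own connection.
engine = create_engine("sqlite:///warehouse.db")
daily.to_sql("daily_sales", engine, if_exists="replace", index=False)
```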
What you will bring
- 10+ years of experience in data analytics and AI technologies from consulting, implementation, and design perspectives.
- Certifications in data engineering, analytics, cloud, or AI will be a distinct advantage.
- A bachelor's degree in engineering/technology or an MCA from a reputed college is a must.
- Prior experience working as a solution architect during the presales cycle will be an advantage.
Soft Skills
- Communication skills
- Presentation skills
- Flexible and hard-working
- High IQ and EQ
Technical Skills
- Knowledge of presales processes
- Basic understanding of business analytics and AI
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive compensation and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.