50+ Google Cloud Platform (GCP) Jobs in India



About Us
DAITA is a German AI startup revolutionizing the global textile supply chain by digitizing factory-to-brand workflows. We are building cutting-edge AI-powered SaaS and Agentic Systems that automate order management, production tracking, and compliance — making the supply chain smarter, faster, and more transparent.
Fresh off a $500K pre-seed raise, our passionate team is on the ground in India, collaborating directly with factories and brands to build our MVP and create real-world impact. If you’re excited by the intersection of AI, SaaS, and supply chain innovation, join us to help reshape how textiles move from factory floors to global brands.
Role Overview
We’re seeking a versatile Full-Stack Engineer to join our growing engineering team. You’ll be instrumental in designing and building scalable, secure, and high-performance applications that power our AI-driven platform. Working closely with Founders, ML Engineers, and Pilot Customers, you’ll transform complex AI workflows into intuitive, production-ready features.
What You’ll Do
• Design, develop, and deploy backend services, APIs, and microservices powering our platform.
• Build responsive, user-friendly frontend applications tailored for factory and brand users.
• Integrate AI/ML models and agentic workflows seamlessly into production environments (see the sketch after this list).
• Develop features supporting order parsing, supply chain tracking, compliance, and reporting.
• Collaborate cross-functionally to iterate rapidly, test with users, and deliver impactful releases.
• Optimize applications for performance, scalability, and cost-efficiency on cloud platforms.
• Establish and improve CI/CD pipelines, deployment processes, and engineering best practices.
• Write clear documentation and maintain clean, maintainable code.
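For illustration, a minimal sketch of what one such backend service might look like, assuming a FastAPI stack; the endpoint, request/response models, and `parse_order` helper are hypothetical, not part of DAITA's actual codebase:

```python
# Illustrative sketch only: names and shapes are invented, not DAITA's real API.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="order-parsing-service")

class OrderDocument(BaseModel):
    customer_id: str
    raw_text: str  # e.g., an emailed purchase order from a factory

class ParsedOrder(BaseModel):
    customer_id: str
    line_items: list[dict]
    confidence: float

def parse_order(raw_text: str) -> tuple[list[dict], float]:
    """Placeholder for the AI/ML parsing step (e.g., an LLM or NER model)."""
    raise NotImplementedError

@app.post("/orders/parse", response_model=ParsedOrder)
def parse(doc: OrderDocument) -> ParsedOrder:
    try:
        items, confidence = parse_order(doc.raw_text)
    except NotImplementedError:
        raise HTTPException(status_code=501, detail="model not wired up yet")
    return ParsedOrder(customer_id=doc.customer_id,
                       line_items=items, confidence=confidence)
```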
Required Skills
• 3–5 years of professional Full-Stack development experience
• Strong backend skills with frameworks like Node.js, Python (FastAPI, Django), Go, or similar
• Frontend experience with React, Vue.js, Next.js, or similar modern frameworks
• Solid knowledge and experience with relational databases (PostgreSQL, MySQL, serverless variants such as Neon) and NoSQL stores (MongoDB, Redis)
• Strong API design skills (REST mandatory; GraphQL a plus)
• Containerization expertise with Docker
• Container orchestration and management with Kubernetes (including experience with Helm charts, operators, or custom resource definitions)
• Cloud deployment and infrastructure experience on AWS, GCP or Azure
• Hands-on experience deploying AI/ML models in cloud-native environments (AWS, GCP or Azure) with scalable infrastructure and monitoring.
• Experience with managed AI/ML services like AWS SageMaker, GCP Vertex AI, Azure ML, Together.ai, or similar
• Experience with CI/CD pipelines and DevOps tools such as Jenkins, GitHub Actions, Terraform, Ansible, or ArgoCD
• Familiarity with monitoring, logging, and observability tools like Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), or Helicone
Nice-to-have
• Experience with TypeScript for full-stack AI SaaS development
• Use of modern UI frameworks and tooling like Tailwind CSS
• Familiarity with modern AI-first SaaS concepts, such as vector databases for fast ML data retrieval, prompt engineering for LLM integration, and integration with OpenRouter or similar LLM orchestration frameworks
• Knowledge of MLOps tools like Kubeflow, MLflow, or Seldon for model lifecycle management.
• Background in building data pipelines, real-time analytics, and predictive modeling.
• Knowledge of AI-driven security tools and best practices for SaaS compliance.
• Proficiency in cloud automation, cost optimization, and DevOps for AI workflows.
• Ability to design and implement hyper-personalized, adaptive user experiences.
What We Value
• Ownership: You take full responsibility for your work and ship high-quality solutions quickly.
• Bias for Action: You’re pragmatic, proactive, and focused on delivering results.
• Clear Communication: You articulate ideas, challenges, and solutions effectively across teams.
• Collaborative Spirit: You thrive in a cross-functional, distributed team environment.
• Customer Focus: You build with empathy for end users and real-world usability.
• Curiosity & Adaptability: You embrace learning, experimentation, and pivoting when needed.
• Quality Mindset: You write clean, maintainable, and well-tested code.
Why Join DAITA?
• Be part of a mission-driven startup transforming a $1+ Trillion global industry.
• Work closely with founders and AI experts on cutting-edge technology.
• Directly impact real-world supply chains and sustainability.
• Grow your skills in AI, SaaS, and supply chain tech in a fast-paced environment.
DevOps Engineer
AiSensy
Gurugram, Haryana, India (On-site)
About AiSensy
AiSensy is a WhatsApp-based marketing & engagement platform helping businesses like Adani, Delhi Transport Corporation, Yakult, Godrej, Aditya Birla Hindalco, Wipro, Asian Paints, India Today Group, Skullcandy, Vivo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ businesses with WhatsApp engagement & marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High impact: businesses drive 25-80% of their revenues using the AiSensy platform
- Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc & 50+ angel investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging (see the sketch after this list).
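As a flavor of the kind of internal tooling this covers, here is a hedged sketch of a Python health-check script; the service URLs, thresholds, and retry budget are invented placeholders:

```python
# Hypothetical health-check helper; URLs and limits are placeholders.
import sys
import time
import requests

SERVICES = {
    "api": "https://api.example.com/healthz",
    "webhooks": "https://hooks.example.com/healthz",
}

def check(url: str, timeout: float = 3.0, retries: int = 3) -> bool:
    """Return True if the endpoint answers 200 within the retry budget."""
    for attempt in range(retries):
        try:
            if requests.get(url, timeout=timeout).status_code == 200:
                return True
        except requests.RequestException:
            pass
        time.sleep(2 ** attempt)  # simple exponential backoff
    return False

if __name__ == "__main__":
    failures = [name for name, url in SERVICES.items() if not check(url)]
    if failures:
        print(f"UNHEALTHY: {', '.join(failures)}", file=sys.stderr)
        sys.exit(1)  # non-zero exit lets cron- or CI-based alerting pick this up
    print("all services healthy")
```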
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
🌐 Job Opening: Cloud and Observability Engineer
📍 Location: Work From Office – Gurgaon (Sector 43)
🕒 Experience: 2+ Years
💼 Employment Type: Full-Time
Role Overview:
As a Cloud and Observability Engineer, you will play a critical role in helping customers transition and optimize their monitoring and observability infrastructure. You'll be responsible for building high-quality extension packages for alerts, dashboards, and parsing rules using the organization's platform. Your work will directly impact the reliability, scalability, and efficiency of monitoring across cloud-native environments.
This is a work-from-office role requiring collaboration with global customers and internal stakeholders.
Key Responsibilities:
- Extension Delivery:
- Develop, enhance, and maintain extension packages for alerts, dashboards, and parsing rules to improve monitoring experience.
- Conduct in-depth research to create world-class observability solutions (e.g., for cloud-native and container technologies).
- Customer & Internal Support:
- Act as a technical advisor to both internal teams and external clients.
- Respond to queries, resolve issues, and incorporate feedback related to deployed extensions.
- Observability Solutions:
- Design and implement optimized monitoring architectures.
- Migrate and package dashboards, alerts, and rules based on customer environments.
- Automation & Deployment:
- Use CI/CD tools and version control systems to package and deploy monitoring components (see the sketch after this list).
- Continuously improve deployment workflows.
- Collaboration & Enablement:
- Work closely with DevOps, engineering, and customer success teams to gather requirements and deliver solutions.
- Deliver technical documentation and training for customers.
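One plausible shape for "monitoring components as code", sketched under the assumption of a Prometheus-style alerting backend; the metric names, thresholds, and labels are examples only:

```python
# Sketch of packaging an alert rule as code; values are illustrative.
import yaml  # PyYAML

rule_group = {
    "groups": [{
        "name": "api-availability",
        "rules": [{
            "alert": "HighErrorRate",
            # Fires when more than 5% of requests fail over a 5-minute window.
            "expr": 'sum(rate(http_requests_total{status=~"5.."}[5m])) '
                    '/ sum(rate(http_requests_total[5m])) > 0.05',
            "for": "10m",
            "labels": {"severity": "page"},
            "annotations": {"summary": "API 5xx error rate above 5%"},
        }],
    }]
}

with open("alerts.yaml", "w") as f:
    yaml.safe_dump(rule_group, f, sort_keys=False)
# The generated file can then be versioned in Git and shipped by the CI/CD pipeline.
```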
Requirements:
- Professional Experience:
- Minimum 2 years in Systems Engineering or similar roles.
- Focus on monitoring, observability, and alerting tools.
- Cloud & Container Tech:
- Hands-on experience with AWS, Azure, or GCP.
- Experience with Kubernetes, EKS, GKE, or AKS.
- Cloud DevOps certifications (preferred).
- Observability Tools:
- Practical experience with at least two observability platforms (e.g., Prometheus, Grafana, or Datadog).
- Strong understanding of alerting, dashboards, and infrastructure monitoring.
- Scripting & Automation:
- Familiarity with CI/CD, deployment pipelines, and version control.
- Experience in packaging and managing observability assets.
- Technical Skills:
- Working knowledge of PromQL, Grafana, and related query languages.
- Willingness to learn Dataprime and Lucene syntax.
- Soft Skills:
- Excellent problem-solving and debugging abilities.
- Strong verbal and written communication in English.
- Ability to work across US and European time zones as needed.
Why Join Us?
- Opportunity to work on cutting-edge observability platforms.
- Collaborate with global teams and top-tier clients.
- Shape the future of cloud monitoring and performance optimization.
- Growth-oriented, learning-focused environment.
🛡️ Job Opening: Security Operations Center (SOC) Analyst – Gurgaon (Sector 43)
📍 Location: Gurgaon, Sector 43
🕒 Experience: 3+ Years
💼 Employment Type: Full-Time
Who We’re Looking For:
We’re seeking a dynamic and experienced SOC Analyst to join our growing cybersecurity team. If you're passionate about threat detection, incident response, and working hands-on with cutting-edge security tools — this role is for you.
Key Responsibilities:
- Monitor, detect, investigate, and respond to cybersecurity threats in real-time.
- Work hands-on with tools such as SIEM, SOAR, WAF, IPS/IDS, etc.
- Collaborate with customers and internal teams to provide timely and clear communication around security events.
- Analyze threat scenarios and provide actionable intelligence and mitigation strategies.
- Create and refine detection rules, playbooks, and escalation workflows (see the sketch after this list).
- Assist in continuous improvement of SOC procedures and threat detection capabilities.
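As a toy illustration of detection-rule logic (generic Python, not any specific SIEM's rule syntax), a windowed brute-force check over authentication events might look like this:

```python
# Illustrative detection logic: flag IPs with many failed logins in a short window.
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10                 # failed attempts ...
WINDOW = timedelta(minutes=5)  # ... within this sliding window

def detect_bruteforce(events):
    """events: iterable of (timestamp, source_ip, outcome) tuples."""
    failures = defaultdict(list)
    alerts = set()
    for ts, ip, outcome in sorted(events):
        if outcome != "failure":
            continue
        # Keep only failures still inside the window, then add this one.
        failures[ip] = [t for t in failures[ip] if ts - t <= WINDOW] + [ts]
        if len(failures[ip]) >= THRESHOLD:
            alerts.add(ip)
    return alerts

# Synthetic example using a documentation IP range.
events = [(datetime(2024, 1, 1, 12, 0, i), "203.0.113.7", "failure")
          for i in range(12)]
print(detect_bruteforce(events))  # {'203.0.113.7'}
```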
Requirements:
- Minimum 3 years of experience in a SOC/MDR (Managed Detection and Response) environment, preferably in a customer-facing role.
- Strong technical skills in using security platforms like SIEM, SOAR, WAF, IPS, etc.
- Solid understanding of security principles, threat vectors, and incident response methodologies.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Excellent analytical, communication, and problem-solving skills.
Preferred Qualifications:
- Security certifications such as CEH, OSCP, CSA, or equivalent are a plus.
- Experience in scripting or automation (Python, PowerShell, etc.) is an added advantage.
Why Join Us?
- Be part of a fast-paced and innovative cybersecurity team.
- Work on real-world threats and cutting-edge technologies.
- Collaborative work environment with a focus on growth and learning.
- 2+ years in a DevOps/SRE/System Engineering role
- Hands-on experience with Linux-based systems
- Experience with cloud platforms like AWS, GCP, or Azure
- Proficient in scripting (Bash, Python, or Go)
- Experience with monitoring tools (Prometheus, Grafana, ELK, Datadog, etc.)
- Knowledge of containerization (Docker, Kubernetes)
- Familiarity with Git and CI/CD tools (Jenkins, GitLab CI, etc.)

About the role
We are looking for a Senior Automation Engineer to architect and implement automated testing frameworks that validate the runtime behavior of code generated by our AI platform. This role is critical in ensuring that our platform's output performs correctly in production environments. You'll work at the intersection of AI and quality assurance, creating innovative testing solutions that can validate AI-generated applications during actual execution.
What Success Looks Like
- You architect and implement automated testing frameworks that validate the runtime behavior and performance of AI-generated applications
- You develop intelligent test suites that can automatically assess application functionality in production environments
- You create testing frameworks that can validate runtime behavior across multiple languages and frameworks
- You establish quality metrics and testing protocols that measure real-world performance of generated applications
- You build systems to automatically detect and flag runtime issues in deployed applications
- You collaborate with our AI team to improve the platform based on runtime performance data
- You implement automated integration and end-to-end testing that ensures generated applications work as intended in production (see the sketch after this list)
- You develop metrics and monitoring systems to track runtime performance across different customer deployments
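A minimal sketch of what one such runtime smoke test could look like, assuming pytest plus Selenium with headless Chrome; the URL and selectors are placeholders, not the platform's actual test suite:

```python
# A minimal pytest-style runtime check; URL and selectors are placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

APP_URL = "https://generated-app.example.com"  # a deployed AI-generated app

@pytest.fixture
def driver():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

def test_app_serves_and_renders(driver):
    driver.get(APP_URL)
    # Fail fast if the page never renders its main content.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, "main"))
    )
    assert "error" not in driver.title.lower()
```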
Areas of Ownership
Our hiring process is designed for you to demonstrate deep expertise in automation testing with a focus on AI-powered systems.
Required Technical Experience:
- 4+ years of experience with Selenium and automated testing frameworks
- Strong expertise in Python (our primary automation language)
- Experience with CI/CD tools (Jenkins, CircleCI, or similar)
- Proficiency in version control systems (Git)
- Experience testing distributed systems
- Understanding of modern software development practices
- Experience working with cloud platforms (GCP preferred)
Ways to stand out
- Experience with runtime monitoring and testing of distributed systems
- Knowledge of performance testing and APM (Application Performance Monitoring)
- Experience with end-to-end testing of complex applications
- Background in developing testing systems for enterprise-grade applications
- Understanding of distributed tracing and monitoring
- Experience with chaos engineering and resilience testing


Engineering Head / Tech Lead (React + Node.js)
About MrPropTek
MrPropTek is building the future of real estate technology. We're looking for a hands-on Engineering Head / Tech Lead to drive our tech strategy and lead the development of scalable web applications across frontend and backend using React and Node.js.
Responsibilities
- Lead and mentor a team of full-stack developers
- Architect and build scalable, high-performance applications
- Drive end-to-end development using React (frontend) and Node.js (backend)
- Collaborate with product, design, and business teams to align on priorities
- Enforce code quality, best practices, and agile processes
- Oversee deployment, performance, and security of systems
Requirements
- 7+ years in software development; 3+ years in a tech lead or engineering management role
- Deep expertise in React.js, Node.js, JavaScript/TypeScript
- Experience with REST APIs, cloud platforms (AWS/GCP), and databases (SQL/NoSQL)
- Strong leadership, communication, and decision-making skills
- Startup or fast-paced team experience preferred
Job Location: Mohali, Delhi/NCR
Job Type: Full-time
About the Company
We are hiring for a fast-growing, well-funded product startup backed by a leadership team with a proven track record of building billion-dollar digital businesses. The company is focused on delivering enterprise-grade SaaS products in the Cybersecurity domain for B2B markets. You’ll be part of a passionate and dynamic engineering team building innovative solutions using modern tech stacks.
Key Responsibilities
- Design and develop scalable microservices using Java and Spring Boot
- Build and manage robust RESTful APIs
- Collaborate with cross-functional teams in an Agile setup
- Lead and mentor junior engineers, driving technical excellence
- Contribute to architecture discussions and code reviews
- Work with PostgreSQL, implement data integrity and consistency
- Deploy and manage services on cloud platforms like GCP or Azure
- Utilize Docker/Kubernetes for containerization and orchestration
Must-Have Skills
- Strong backend experience with Java, Spring Boot, REST APIs
- Proficiency in frontend development with React.js
- Experience with PostgreSQL and database optimization
- Hands-on with cloud platforms (GCP or Azure)
- Familiarity with Docker and Kubernetes
- Strong understanding of:
- API Gateways
- Hibernate & JPA
- Transaction management & ACID properties
- Multi-threading and context switching
Good to Have
- Experience in Cybersecurity or Healthcare domain
- Exposure to CI/CD pipelines and DevOps practices
- Work with the team in the capacity of a GCP Data Engineer on day-to-day activities.
- Solve problems at hand with the utmost clarity and speed.
- Work with data analysts and architects to help them resolve specific issues with tooling and processes.
- Design, build, and operationalize large-scale enterprise data solutions and applications using one or more GCP data and analytics services in combination with third-party tools: Python/Java/React.js, Airflow ETL skills, and GCP services (BigQuery, Dataflow, Cloud SQL, Cloud Functions, Data Lake).
- Design and build production data pipelines from ingestion to consumption within a big data architecture.
- Apply GCP BigQuery modeling and performance-tuning techniques.
- RDBMS and NoSQL database experience.
- Knowledge of orchestrating workloads on the cloud.

We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively on Google Cloud Platform (GCP) services such as:
- Dataflow for real-time and batch data processing
- Cloud Functions for lightweight serverless compute
- BigQuery for data warehousing and analytics
- Cloud Composer for orchestration of data workflows (based on Apache Airflow; see the DAG sketch after this list)
- Google Cloud Storage (GCS) for managing data at scale
- IAM for access control and security
- Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
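For illustration, a hedged sketch of a minimal Cloud Composer DAG, assuming Airflow 2.4+ (where `schedule` replaced `schedule_interval`), that loads daily files from GCS into BigQuery; the bucket, dataset, and table names are invented:

```python
# Sketch of a Cloud Composer (Airflow 2.4+) DAG; names are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="daily_orders_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders_to_bq",
        bucket="example-raw-zone",
        # {{ ds }} is Airflow's templated execution date.
        source_objects=["orders/{{ ds }}/*.json"],
        source_format="NEWLINE_DELIMITED_JSON",
        destination_project_dataset_table="example_project.analytics.orders",
        write_disposition="WRITE_APPEND",
    )
```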
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
About the Role
We are looking for a highly motivated DevOps Engineer with a strong background in cloud technologies, big data ecosystems, and software development lifecycles to lead cross-functional teams in delivering high-impact projects. The ideal candidate will combine excellent project management skills with technical acumen in GCP, DevOps, and Python-based applications.
Key Responsibilities
- Lead end-to-end project planning, execution, and delivery, ensuring alignment with business goals and timelines.
- Create and maintain project documentation including detailed timelines, sprint boards, risk logs, and weekly status reports.
- Facilitate Agile ceremonies: daily stand-ups, sprint planning, retrospectives, and backlog grooming.
- Actively manage risks, scope changes, resource allocation, and project dependencies to ensure delivery without disruptions.
- Ensure compliance with QA processes and security/compliance standards throughout the SDLC.
- Collaborate with stakeholders and senior leadership to communicate progress, blockers, and key milestones.
- Provide mentorship and support to cross-functional team members to drive continuous improvement and team performance.
- Coordinate with clients and act as a key point of contact for requirement gathering, updates, and escalations.
Required Skills & Experience
Cloud & DevOps
- Proficient in Google Cloud Platform (GCP) services: Compute, Storage, Networking, IAM.
- Hands-on experience with cloud deployments and infrastructure as code.
- Strong working knowledge of CI/CD pipelines, Docker, Kubernetes, and Terraform (or similar tools).
Big Data & Data Engineering
- Experience with large-scale data processing using tools like PySpark, Hadoop, Hive, HDFS, and Spark Streaming (preferred).
- Proven experience in managing and optimizing big data pipelines and ensuring high performance.
Programming & Frameworks
- Strong proficiency in Python with experience in Django (REST APIs, ORM, deployment workflows).
- Familiarity with Git and version control best practices.
- Basic knowledge of Linux administration and shell scripting.
Nice to Have
- Knowledge or prior experience in the Media & Advertising domain.
- Experience in client-facing roles and handling stakeholder communications.
- Proven ability to manage technical teams (5–6 members).
Why Join Us?
- Work on cutting-edge cloud and data engineering projects
- Collaborate with a talented, fast-paced team
- Flexible work setup and culture of ownership
1. Solid Databricks & PySpark experience
2. Must have worked on projects dealing with data at terabyte scale
3. Must have knowledge of Spark optimization techniques (see the sketch after this list)
4. Must have experience setting up job pipelines in Databricks
5. Basic knowledge of GCP and BigQuery is required
6. Understanding of LLMs and vector DBs
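As a small illustration of one common Spark optimization (item 3 above), a broadcast join avoids shuffling the terabyte-scale side of the join; the table paths here are invented:

```python
# Illustrative Spark optimization: broadcast the small dimension table
# so the terabyte-scale fact table is not shuffled across the cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-optimization-sketch").getOrCreate()

events = spark.read.parquet("/mnt/lake/events")        # large fact table (TB scale)
dim_users = spark.read.parquet("/mnt/lake/dim_users")  # small dimension table

# Broadcast join ships the small table to every executor instead of shuffling.
enriched = events.join(broadcast(dim_users), on="user_id", how="left")

# Partition output by date so downstream jobs can prune partitions.
enriched.write.mode("overwrite").partitionBy("event_date").parquet("/mnt/lake/enriched")
```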

About the Role:
We are looking for an experienced Senior Data Scientist for a short-term contract role to work on real-world problems in the autonomous driving and mobility domain. You will work on large-scale sensor and telemetry datasets from truck fleets to drive key insights, develop ML models, and improve operational intelligence.
Key Responsibilities:
- Analyze multi-source vehicle data: GPS, CAN, LiDAR, camera, IMU, radar.
- Build ML/statistical models for anomaly detection, predictive maintenance, and driver behaviour analysis (see the sketch after this list).
- Develop scalable data pipelines for structured/unstructured fleet data.
- Apply time-series analysis, clustering, and predictive modeling on driving patterns.
- Collaborate with cross-functional teams to enhance autonomy performance.
- Visualize insights through dashboards (Tableau/PowerBI/Plotly) for business impact.
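A hedged sketch of the anomaly-detection piece, using scikit-learn's IsolationForest on telemetry features; the file name, column names, and contamination rate are assumptions for illustration:

```python
# Hedged sketch: unsupervised anomaly detection on fleet telemetry.
# Column names and the contamination rate are assumptions.
import pandas as pd
from sklearn.ensemble import IsolationForest

telemetry = pd.read_parquet("fleet_telemetry.parquet")  # hypothetical extract
features = telemetry[["speed_kmh", "engine_rpm", "brake_pressure", "fuel_rate"]]

model = IsolationForest(contamination=0.01, random_state=42)
telemetry["anomaly"] = model.fit_predict(features)  # -1 marks outliers

anomalies = telemetry[telemetry["anomaly"] == -1]
print(f"{len(anomalies)} anomalous samples out of {len(telemetry)}")
```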
Required Skills:
- Strong Python skills with Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch.
- Experience in data engineering tools (Spark, AWS/GCP pipelines).
- Deep understanding of time-series and spatiotemporal data.
- Proficiency in SQL and data visualization tools.
- Background in automotive/connected vehicle data is a strong plus.
Nice to Have:
- Familiarity with vehicle dynamics, CAN decoding, or driver modeling.
- Exposure to sensor fusion, edge AI, or model optimization.
- Experience with Docker, version control, and CI/CD tools.
Why Join?
- Work on cutting-edge mobility and autonomy problems.
- Collaborate with a dynamic, data-driven engineering team.
- Flexibility to work remotely with impactful product teams.
💡 Note: Immediate joiners or candidates with <15 days' notice preferred.
About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
Note: Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency (see the sketch after this list).
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
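As one example of operating Kubernetes workloads with availability in mind, a short sketch using the official Kubernetes Python client to verify that deployments have all replicas available; the namespace is a placeholder:

```python
# Sketch using the official Kubernetes Python client to verify that every
# deployment in a namespace has all replicas available; namespace is an example.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

unhealthy = []
for dep in apps.list_namespaced_deployment(namespace="backend").items:
    desired = dep.spec.replicas or 0
    available = dep.status.available_replicas or 0
    if available < desired:
        unhealthy.append((dep.metadata.name, available, desired))

for name, available, desired in unhealthy:
    print(f"{name}: {available}/{desired} replicas available")
```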
Required Skills
- Strong hands-on experience with Kubernetes, with at least 2 years in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.


Key Responsibilities:
● Design, develop, and maintain scalable web applications using .NET Core, .NET Framework, C#, and related technologies.
● Participate in all phases of the SDLC, including requirements gathering, architecture design, coding, testing, deployment, and support.
● Build and integrate RESTful APIs, and work with SQL Server, Entity Framework, and modern front-end technologies such as Angular, React, and JavaScript.
● Conduct thorough code reviews, write unit tests, and ensure adherence to coding standards and best practices.
● Lead or support .NET Framework to .NET Core migration initiatives, ensuring minimal disruption and optimal performance.
● Implement and manage CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI/CD.
● Containerize applications using Docker and deploy/manage them on orchestration platforms like Kubernetes or GKE.
● Lead and execute database migration projects, particularly transitioning from SQL Server to PostgreSQL (see the sketch after this list).
● Manage and optimize Cloud SQL for PostgreSQL, including configuration, tuning, and ongoing maintenance.
● Leverage Google Cloud Platform (GCP) services such as GKE, Cloud SQL, Cloud Run, and Dataflow to build and maintain cloud-native solutions.
● Handle schema conversion and data transformation tasks as part of migration and modernization efforts.
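A hedged sketch of the data-copy step in such a migration, using pandas and SQLAlchemy in chunked mode; the connection strings and table name are placeholders, and schema conversion (types, identity columns, collations) is assumed to happen separately:

```python
# Hedged sketch of a table-copy step in a SQL Server -> PostgreSQL migration;
# connection strings and the table name are placeholders.
import pandas as pd
from sqlalchemy import create_engine

mssql = create_engine(
    "mssql+pyodbc://user:pass@mssql-host/appdb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
pg = create_engine("postgresql+psycopg2://user:pass@pg-host/appdb")

# Stream the table in chunks to keep memory bounded on large tables.
for chunk in pd.read_sql_table("orders", mssql, chunksize=10_000):
    chunk.to_sql("orders", pg, if_exists="append", index=False, method="multi")
```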
Required Skills & Experience:
● 5+ years of hands-on experience with C#, .NET Core, and .NET Framework.
● Proven experience in application modernization and cloud-native development.
● Strong knowledge of containerization (Docker) and orchestration tools like Kubernetes/GKE.
● Expertise in implementing and managing CI/CD pipelines.
● Solid understanding of relational databases and experience in SQL Server to PostgreSQL migrations.
● Familiarity with cloud infrastructure, especially GCP services relevant to application hosting and data processing.
● Excellent problem-solving and communication skills.

About You:
● Education ranging from a Bachelor of Science degree in computer science or a related engineering degree.
● 12+ years of high-level API, abstraction-layer, and application software development experience.
● 5+ years of experience building scalable, serverless solutions in GCP or AWS.
● 4+ years of experience in Python and MongoDB.
● Experience with large-scale distributed systems and streaming data services.
● Experience building, developing, and maintaining cloud-native infrastructure, serverless architecture, micro-operations, and workflow automation.
● You are a hardworking problem-solver who thrives on finding solutions to difficult technical challenges.
● Experience with modern high-level languages and databases, including JavaScript, MongoDB, and Python.
● Experience with GitHub, GitLab, CI/CD, Jira, unit testing, integration testing, regression testing, and collaborative documentation.
● Expertise with GCP, Kubernetes, Docker, or containerization is a great plus.
● Ability to write and assess clean, functional, high-quality, testable code for each of our projects.
● A positive, proactive, solution-focused contributor and team motivator.


About the Role
Join our dynamic DTS team and collaborate with a top-tier global hedge fund on data-driven development initiatives. You'll focus on cloud migration, automation, application development, and implementing DevOps best practices, especially on Google Cloud Platform (GCP).
Key Responsibilities
✅ Design and develop scalable and secure software solutions
✅ Build and maintain apps using C#, MSSQL, Python, GCP/BigQuery
✅ Conduct code reviews and enforce best practices
✅ Troubleshoot issues and ensure smooth application performance
✅ Collaborate with cross-functional teams to drive project success
✅ Mentor and support junior developers
What We’re Looking For
- 3+ years of experience in C#, MSSQL, Python, and GCP/BigQuery
- Solid understanding of software development and DevOps in multi-cloud setups
- Strong problem-solving mindset with attention to detail
- Excellent communication and stakeholder management skills
- Proven ability to work collaboratively and guide junior team members
Good to Have
- Experience with DevOps pipelines/tools
- Exposure to agile methodologies


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create detailed documents covering important aspects like HLD, technical diagrams, script design, SOPs, etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis, so good communication skills are a must.
**Requirements**
Education: B.Tech (Comp. Sc., IT) or equivalent
Experience: 3+ years of experience developing applications on Django, Angular/React, HTML, and CSS
Behavioural Skills:
1. Clear and assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time management
6. Strong troubleshooting and problem-solving skills
Technical Skills:
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP, and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes are a plus.
5. Other: Understanding of Golang is a plus.
Job Title: Sr. DevOps Engineer
Experience Required: 2 to 4 years in DevOps or related fields
Employment Type: Full-time
About the Role:
We are seeking a highly skilled and experienced Lead DevOps Engineer. This role will focus on driving the design, implementation, and optimization of our CI/CD pipelines, cloud infrastructure, and operational processes. As a Lead DevOps Engineer, you will play a pivotal role in enhancing the scalability, reliability, and security of our systems while mentoring a team of DevOps engineers to achieve operational excellence.
Key Responsibilities:
Infrastructure Management: Architect, deploy, and maintain scalable, secure, and resilient cloud infrastructure (e.g., AWS, Azure, or GCP).
CI/CD Pipelines: Design and optimize CI/CD pipelines, to improve development velocity and deployment quality.
Automation: Automate repetitive tasks and workflows, such as provisioning cloud resources, configuring servers, managing deployments, and implementing infrastructure as code (IaC) using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Implement robust monitoring, alerting, and logging systems for enterprise and cloud-native environments using tools like Prometheus, Grafana, the ELK Stack, New Relic, or Datadog.
Security: Ensure the infrastructure adheres to security best practices, including vulnerability assessments and incident response processes.
Collaboration: Work closely with development, QA, and IT teams to align DevOps strategies with project goals.
Mentorship: Lead, mentor, and train a team of DevOps engineers to foster growth and technical expertise.
Incident Management: Oversee production system reliability, including root cause analysis and performance tuning.
Required Skills & Qualifications:
Technical Expertise:
Strong proficiency in cloud platforms like AWS, Azure, or GCP.
Advanced knowledge of containerization technologies (e.g., Docker, Kubernetes).
Expertise in IaC tools such as Terraform, CloudFormation, or Pulumi.
Hands-on experience with CI/CD tools, particularly Bitbucket Pipelines, Jenkins, GitLab CI/CD, Github Actions or CircleCI.
Proficiency in scripting languages (e.g., Python, Bash, PowerShell).
Soft Skills:
Excellent communication and leadership skills.
Strong analytical and problem-solving abilities.
Proven ability to manage and lead a team effectively.
Experience:
4+ years of experience in DevOps or Site Reliability Engineering (SRE).
4+ years in a leadership or team-lead role, with proven experience managing distributed teams, mentoring team members, and driving cross-functional collaboration.
Strong understanding of microservices, APIs, and serverless architectures.
Nice to Have:
Certifications like AWS Certified Solutions Architect, Kubernetes Administrator, or similar.
Experience with GitOps tools such as ArgoCD or Flux.
Knowledge of compliance standards (e.g., GDPR, SOC 2, ISO 27001).
Perks & Benefits:
Competitive salary and performance bonuses.
Comprehensive health insurance for you and your family.
Professional development opportunities and certifications, including sponsored certifications and access to training programs to help you grow your skills and expertise.
Flexible working hours and remote work options.
Collaborative and inclusive work culture.
Join us to build and scale world-class systems that empower innovation and deliver exceptional user experiences.
You can directly contact us: Nine three one six one two zero one three two
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
What we are looking for:
Experience: 3–6 years
Education: BTech / BE / MCA / MSc Computer Science
About Role:
Primary Skills: DevOps, with strong CircleCI, ArgoCD, GitHub, Terraform, Helm, Kubernetes, and Google Cloud experience
Required Skills and Experience:
- 3+ years of experience in DevOps, infrastructure automation, or related fields.
- Strong proficiency with CircleCI for building and managing CI/CD pipelines.
- Advanced expertise in Terraform for infrastructure as code.
- Solid experience with Helm for managing Kubernetes applications.
- Hands-on knowledge of ArgoCD for GitOps-based deployment strategies.
- Proficient with GitHub for version control, repository management, and workflows.
- Extensive experience with Kubernetes for container orchestration and management.
- In-depth understanding of Google Cloud Platform (GCP) services and architecture.
- Strong scripting and automation skills (e.g., Python, Bash, or equivalent; see the sketch after this list).
- Familiarity with monitoring and logging tools like Prometheus, Grafana, and ELK stack.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration abilities in agile development environments.
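As a small example of the scripting side of this stack, a sketch that gates a GitOps-style deployment by waiting on Kubernetes rollouts via `kubectl`; the deployment names and namespace are invented:

```python
# Automation sketch: wait for a set of deployments to finish rolling out
# (e.g., after an ArgoCD sync); names and namespace are examples.
import subprocess
import sys

DEPLOYMENTS = ["api", "worker", "scheduler"]
NAMESPACE = "prod"

for name in DEPLOYMENTS:
    result = subprocess.run(
        ["kubectl", "rollout", "status", f"deployment/{name}",
         "-n", NAMESPACE, "--timeout=120s"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"rollout failed for {name}: {result.stderr.strip()}",
              file=sys.stderr)
        sys.exit(1)
print("all rollouts complete")
```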
Note: Kindly share your LinkedIn profile when applying.
About Us:
CLOUDSUFI is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values:
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Diversity & Inclusivity:
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
Summary:
We are seeking a talented and passionate Data Engineer to join our growing data team. In this role, you will be responsible for building, maintaining, and optimizing our data pipelines and infrastructure on Google Cloud Platform (GCP). The ideal candidate will have a strong background in data warehousing, ETL/ELT processes, and a passion for turning raw data into actionable insights. You will work closely with data scientists, analysts, and other engineers to support a variety of data-driven initiatives.
Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines using Dataform or DBT.
- Build and optimize data warehousing solutions on Google BigQuery.
- Develop and manage data workflows using Apache Airflow.
- Write complex and efficient SQL queries for data extraction, transformation, and analysis.
- Develop Python-based scripts and applications for data processing and automation.
- Collaborate with data scientists and analysts to understand their data requirements and provide solutions.
- Implement data quality checks and monitoring to ensure data accuracy and consistency (see the sketch after this list).
- Optimize data pipelines for performance, scalability, and cost-effectiveness.
- Contribute to the design and implementation of data infrastructure best practices.
- Troubleshoot and resolve data-related issues.
- Stay up-to-date with the latest data engineering trends and technologies, particularly within the Google Cloud ecosystem.
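For illustration, a hedged sketch of a simple data-quality gate built on the google-cloud-bigquery client; the project, dataset, and specific checks are assumptions, not this team's actual rules:

```python
# Sketch of a data-quality gate; project, dataset, and checks are examples.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

checks = {
    "orders_not_empty":
        "SELECT COUNT(*) AS n FROM `example-project.analytics.orders`",
    "no_null_order_ids":
        "SELECT COUNTIF(order_id IS NULL) AS n "
        "FROM `example-project.analytics.orders`",
}

failures = []
for name, sql in checks.items():
    n = list(client.query(sql).result())[0].n
    # The first check expects rows to exist; the second expects zero nulls.
    ok = n > 0 if name == "orders_not_empty" else n == 0
    if not ok:
        failures.append(name)

if failures:
    raise RuntimeError(f"data-quality checks failed: {failures}")
```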
Qualifications:
- Bachelor’s degree in computer science, a related technical field, or equivalent practical experience.
- 3-4 years of experience in a Data Engineer role.
- Strong expertise in SQL (preferably with BigQuery SQL).
- Proficiency in Python programming for data manipulation and automation.
- Hands-on experience with Google Cloud Platform (GCP) and its data services.
- Solid understanding of data warehousing concepts and ETL/ELT methodologies.
- Experience with Dataform or dbt for data transformation and modeling.
- Experience with workflow management tools such as Apache Airflow.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Google Cloud Professional Data Engineer certification.
- Knowledge of data modeling techniques (e.g., dimensional modeling, star schema).
- Familiarity with Agile development methodologies.
Behavioral Competencies:
- Should have very good verbal and written communication, technical articulation, listening and presentation skills
- Should have proven analytical and problem-solving skills
- Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills
- Should be a quick learner, self-starter, go-getter and team player
- Should have experience working under stringent deadlines in a matrix organization structure

About the Role
We are looking for a highly motivated Project Manager with a strong background in cloud technologies, big data ecosystems, and software development lifecycles to lead cross-functional teams in delivering high-impact projects. The ideal candidate will combine excellent project management skills with technical acumen in GCP, DevOps, and Python-based applications.
Key Responsibilities
- Lead end-to-end project planning, execution, and delivery, ensuring alignment with business goals and timelines.
- Create and maintain project documentation including detailed timelines, sprint boards, risk logs, and weekly status reports.
- Facilitate Agile ceremonies: daily stand-ups, sprint planning, retrospectives, and backlog grooming.
- Actively manage risks, scope changes, resource allocation, and project dependencies to ensure delivery without disruptions.
- Ensure compliance with QA processes and security/compliance standards throughout the SDLC.
- Collaborate with stakeholders and senior leadership to communicate progress, blockers, and key milestones.
- Provide mentorship and support to cross-functional team members to drive continuous improvement and team performance.
- Coordinate with clients and act as a key point of contact for requirement gathering, updates, and escalations.
Required Skills & Experience
Cloud & DevOps
- Proficient in Google Cloud Platform (GCP) services: Compute, Storage, Networking, IAM.
- Hands-on experience with cloud deployments and infrastructure as code.
- Strong working knowledge of CI/CD pipelines, Docker, Kubernetes, and Terraform (or similar tools).
Big Data & Data Engineering
- Experience with large-scale data processing using tools like PySpark, Hadoop, Hive, HDFS, and Spark Streaming (preferred).
- Proven experience in managing and optimizing big data pipelines and ensuring high performance.
Programming & Frameworks
- Strong proficiency in Python with experience in Django (REST APIs, ORM, deployment workflows).
- Familiarity with Git and version control best practices.
- Basic knowledge of Linux administration and shell scripting.
Nice to Have
- Knowledge or prior experience in the Media & Advertising domain.
- Experience in client-facing roles and handling stakeholder communications.
- Proven ability to manage technical teams (5–6 members).
Why Join Us?
- Work on cutting-edge cloud and data engineering projects
- Collaborate with a talented, fast-paced team
- Flexible work setup and culture of ownership
- Continuous learning and upskilling environment
- Inclusive health benefits included

Job Title: AI Solutioning Architect – Healthcare IT
Role Summary:
The AI Solutioning Architect leads the design and implementation of AI-driven solutions across the organization, ensuring alignment with business goals and healthcare IT standards. This role defines the AI/ML architecture, guides technical execution, and fosters innovation using platforms like Google Cloud (GCP).
Key Responsibilities:
- Architect scalable AI solutions from data ingestion to deployment.
- Align AI initiatives with business objectives and regulatory requirements (HIPAA).
- Collaborate with cross-functional teams to deliver AI projects.
- Lead POCs, evaluate AI tools/platforms, and promote GCP adoption.
- Mentor technical teams and ensure best practices in MLOps.
- Communicate complex concepts to diverse stakeholders.
Qualifications:
- Bachelor’s/Master’s in Computer Science or related field.
- 12+ years in software development/architecture with strong AI/ML focus.
- Experience in healthcare IT and compliance (HIPAA).
- Proficient in Python/Java and ML frameworks (TensorFlow, PyTorch).
- Hands-on with GCP (preferred) or other cloud platforms.
- Strong leadership, problem-solving, and communication skills.

Job Title: Lead Web Developer / Frontend Engineer
Experience Required: 10+ Years
Location: Bangalore (Hybrid – 3 Days Work From Office)
Work Timings: 11:00 AM to 8:00 PM IST
Notice Period: Immediate or Up to 30 Days (Preferred)
Work Mode: Hybrid
Interview Mode: Face-to-Face mandatory (for Round 2)
Role Overview:
We are hiring a Lead Frontend Engineer with 10+ years of experience to drive the development of scalable, modern, high-performance web applications.
This is a hands-on technical leadership role focused on React.js, micro-frontends, and Backend for Frontend (BFF) architecture, requiring both coding expertise and team leadership skills.
Mandatory Skills:
React.js, JavaScript/TypeScript, HTML, CSS, micro-frontend architecture, Backend for Frontend (BFF), Webpack, Jenkins (CI/CD), GCP, RDBMS/SQL, Git, and team leadership.
Core Responsibilities:
- Design and develop cloud-based web applications using React.js, HTML, CSS.
- Collaborate with UX/UI designers and backend engineers to implement seamless user experiences.
- Lead and mentor a team of frontend developers.
- Write clean, well-documented, scalable code using modern JavaScript/TypeScript practices.
- Implement CI/CD pipelines using Jenkins, deploy applications to CDNs.
- Integrate with GCP services, optimize front-end performance.
- Stay updated with modern frontend technologies and design patterns.
- Use Git for version control and collaborative workflows.
- Implement JavaScript libraries for web analytics and performance monitoring.
Key Requirements:
- 10+ Years of experience as a frontend/web developer.
- Strong proficiency in React.js, JavaScript/TypeScript, HTML, CSS.
- Experience with micro-frontend architecture and Backend for Frontend (BFF) patterns.
- Proficiency in frontend design frameworks and libraries (jQuery, Node.js).
- Strong understanding of build tools like Webpack, CI/CD using Jenkins.
- Experience with GCP and deploying to CDNs.
- Solid experience in RDBMS, SQL.
- Familiarity with Git and agile development practices.
- Excellent debugging, problem-solving, and communication skills.
- Bachelor’s/Master’s in Computer Science or a related field.
Nice to Have:
- Experience with Node.js.
- Previous experience working with web analytics frameworks.
- Exposure to JavaScript observability tools.
Interview Process:
1. Round 1: Online Technical Interview (via Geektrust – 1 Hour)
2. Round 2: Face-to-Face Interview with the Indian team in Bangalore (3 Hours – Mandatory)
3. Round 3: Online Interview with CEO (30 Minutes)
Important Notes:
- Face-to-face interview in Bangalore is mandatory for Round 2.
- Preference given to candidates currently in Bangalore or willing to travel for interviews.
- Remote applicants who cannot attend the in-person round will not be considered.

Why This Role Matters
We’re looking for a Principal Engineer to lead the architecture and execution of our GenAI-powered, self-serve marketing platforms. You will work directly with the CEO to shape, build, and scale products that change how marketers interact with data and AI. This is intrapreneurship in action — not a sandbox innovation lab, but a real-world product with traction, velocity, and high stakes.
What You'll Do
- Co-own product architecture and direction alongside the CEO.
- Build GenAI-native, full-stack platforms from MVP to scale — powered by LLMs, agents, and predictive AI.
- Own the full stack: React (frontend), Node.js/Python (backend), GCP (infra), BigQuery (data), and vector databases (AI). A sketch of one LLM-backed feature follows this list.
- Lead a lean, high-caliber team with a hands-on, unblock-and-coach mindset.
- Drive rapid iteration with rigor, balancing short-term delivery with long-term resilience.
- Ensure scalability, observability, and fault tolerance in multi-tenant, cloud-native environments.
- Bridge business and tech — aligning execution with evolving user and market insights.
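As a flavor of the GenAI-native work, a minimal sketch of one LLM-backed feature using the OpenAI Python SDK; the model name, prompt, and metrics payload are placeholders, and production use would add retries, cost tracking, and output validation:

```python
# Minimal sketch of an LLM-backed marketing insight; values are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_campaign(metrics: dict) -> str:
    """Turn raw campaign metrics into a plain-language insight for a marketer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a marketing analyst. Be concise and concrete."},
            {"role": "user",
             "content": f"Summarize performance and suggest one action: {metrics}"},
        ],
    )
    return response.choices[0].message.content

print(summarize_campaign({"impressions": 120000, "clicks": 840, "spend_usd": 950}))
```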
What You Bring
- 8–12 years of experience building and scaling full-stack, data-heavy or AI-driven products.
- Fluency in React, Node.js, and Google Cloud (Functions, BigQuery, Cloud SQL, Airflow, etc.).
- Hands-on experience with GenAI tools (LangChain, OpenAI APIs, LlamaIndex) is a bonus.
- Track record of shipping products from ambiguity to impact.
- Strong product mindset — your goal is user value, not just elegant code.
- Architectural leadership with ownership of engineering rigor and scaling best practices.
- Startup or founder DNA — you’ve built things from scratch and know how to move fast without breaking things.
Who You Are
- A former founder, senior IC, or tech lead who’s done zero-to-one and one-to-n scaling.
- Hungry for ownership and velocity — frustrated by bureaucracy or stagnation.
- You code because you care about solving real problems for real users.
- You’re pragmatic, hands-on, and grounded in first principles.
- You understand that great software isn't just shipped — it's hardened, maintained, and evolves with minimal manual effort.
- You’re open to evolving into a founding engineer role with influence over the tech vision and culture.
What You Get
- Equity in a high-growth product-led startup.
- A chance to build global products out of India with full-stack and GenAI innovation.
- Access to high-context decision-making and direct collaboration with the CEO.
- A tight, ego-free team and a culture that values clarity, ownership, learning, and candor.
Why YOptima?
YOptima is redefining how leading marketers unlock growth through full-funnel, AI-powered media solutions. As part of our growth journey, this is your opportunity to own the growth charter for leading brands and agencies globally and shape the narrative of a next-generation marketing platform.
Ready to lead, build, and scale?
We’d love to hear from you.


About the Role:
We’re looking for a skilled developer to build and maintain web and mobile apps using React, React Native, and Node.js. You’ll work on both the frontend and backend, collaborating with our team to deliver high-quality products.
What You’ll Do:
- Build and maintain full stack applications for web and mobile
- Write clean, efficient code with React, React Native, and Node.js
- Work with designers and other developers to deliver new features
- Debug, troubleshoot, and optimize existing apps
- Stay updated on the latest tech and best practices
What We’re Looking For:
- Solid experience with React, React Native, and Node.js
- Comfortable building both web and mobile applications
- Good understanding of REST APIs and databases
- Familiar with Git and agile workflows
- Team player with clear communication skills
Nice to Have:
- Experience with testing and CI/CD
- Knowledge of UI/UX basics

Product company for financial operations automation platform

Mandatory Criteria
- Candidate must have strong hands-on experience with Kubernetes, with at least 2 years in production environments.
- Candidate should have expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Candidate should have strong Backend experience.
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
About the Role
We are looking for a highly skilled and motivated Cloud Backend Engineer with 4–7 years of experience, who has worked extensively on at least one major cloud platform (GCP, AWS, Azure, or OCI). Experience with multiple cloud providers is a strong plus. As a Senior Development Engineer, you will play a key role in designing, building, and scaling backend services and infrastructure on cloud-native platforms.
Note: Experience with Kubernetes is mandatory.
Key Responsibilities
- Design and develop scalable, reliable backend services and cloud-native applications.
- Build and manage RESTful APIs, microservices, and asynchronous data processing systems.
- Deploy and operate workloads on Kubernetes with best practices in availability, monitoring, and cost-efficiency.
- Implement and manage CI/CD pipelines and infrastructure automation.
- Collaborate with frontend, DevOps, and product teams in an agile environment.
- Ensure high code quality through testing, reviews, and documentation.
Required Skills
- Strong hands-on experience with Kubernetes, with at least 2 years in production environments (mandatory).
- Expertise in at least one public cloud platform [GCP (Preferred), AWS, Azure, or OCI].
- Proficient in backend programming with Python, Java, or Kotlin (at least one is required).
- Solid understanding of distributed systems, microservices, and cloud-native architecture.
- Experience with containerization using Docker and Kubernetes-native deployment workflows.
- Working knowledge of SQL and relational databases.
Preferred Qualifications
- Experience working across multiple cloud platforms.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Exposure to monitoring, logging, and observability stacks (e.g., Prometheus, Grafana, Cloud Monitoring).
- Hands-on experience with BigQuery or Snowflake for data analytics and integration.
Nice to Have
- Knowledge of NoSQL databases or event-driven/message-based architectures.
- Experience with serverless services, managed data pipelines, or data lake platforms.
Requirements
- Bachelors/Masters in Computer Science or a related field
- 5-8 years of relevant experience
- Proven track record of successfully leading and mentoring a team.
- Experience with web technologies and microservices architecture both frontend and backend.
- Java, Spring framework, hibernate
- MySQL, Mongo, Solr, Redis,
- Kubernetes, Docker
- Strong understanding of Object-Oriented Programming, Data Structures, and Algorithms.
- Excellent teamwork skills, flexibility, and ability to handle multiple tasks.
- Experience with API Design, ability to architect and implement an intuitive customer and third-party integration story
- Ability to think and analyze both breadth-wise (client, server, DB, control flow) and depth-wise (threads, sessions, space-time complexity) while designing and implementing services
- Exceptional design and architectural skills
- Experience of cloud providers/platforms like GCP and AWS
Roles & Responsibilities
- Develop new user-facing features.
- Work alongside product to understand requirements; design, develop, and iterate while thinking through complex architecture.
- Writing clean, reusable, high-quality, high-performance, maintainable code.
- Encourage innovation and efficiency improvements to ensure processes are productive.
- Ensure the training and mentoring of the team members.
- Ensure the technical feasibility of UI/UX designs and optimize applications for maximum speed.
- Research and apply new technologies, techniques, and best practices.
- Team mentorship and leadership.

Key Skills Required:
- Strong hands-on experience with Terraform
- Proficiency in CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps)
- Experience working on Azure or GCP cloud platforms (at least one is mandatory)
- Good understanding of DevOps practices

Backend Engineer - Python
Location: Bangalore, India
Experience Required: 2-3 years minimum
About Us:
At PGAGI, we believe in a future where AI and human intelligence coexist in harmony, creating a world that is smarter, faster, and better. We are not just building AI; we are shaping a future where AI is a fundamental and positive force for businesses, societies, and the planet.
Job Overview
We are seeking a skilled Backend Engineer with expertise in Python to join our engineering team. The ideal candidate will have hands-on experience building and maintaining enterprise-level, scalable backend systems.
Key Requirements
Technical Skills
- Python Expertise: Advanced proficiency in Python with deep understanding of frameworks like Django, FastAPI, or Flask
- Database Management: Experience with PostgreSQL, MySQL, MongoDB, and database optimization
- API Development: Strong experience in designing and implementing RESTful APIs and GraphQL
- Cloud Platforms: Hands-on experience with AWS, GCP, or Azure services
- Containerization: Proficiency with Docker and Kubernetes
- Message Queues: Experience with Redis, RabbitMQ, or Apache Kafka
- Version Control: Advanced Git workflows and collaboration
Experience Requirements
- Minimum 2-3 years of backend development experience
- Proven track record of working on enterprise-level applications
- Experience building scalable systems handling high traffic loads
- Background in microservices architecture and distributed systems
- Experience with CI/CD pipelines and DevOps practices
Responsibilities
- Design, develop, and maintain robust backend services and APIs
- Optimize application performance and scalability
- Collaborate with frontend teams and product managers
- Implement security best practices and data protection measures
- Write comprehensive tests and maintain code quality
- Participate in code reviews and architectural discussions
- Monitor system performance and troubleshoot production issues
Preferred Qualifications
- Knowledge of caching strategies (Redis, Memcached); a minimal sketch follows this list
- Understanding of software architecture patterns
- Experience with Agile/Scrum methodologies
- Open source contributions or personal projects
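On the caching point above, a minimal read-through cache helper with redis-py; the key scheme and TTL are arbitrary choices, not a prescribed pattern.

```python
# Minimal sketch: read-through caching with Redis (hypothetical key scheme and TTL).
# Requires: pip install redis
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    user = {"id": user_id, "name": "demo"}     # stand-in for a real DB query
    r.setex(key, 300, json.dumps(user))        # cache for 5 minutes
    return user
```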


Job title - Python Developer
Exp – 4 to 6 years
Location – Pune / Mumbai / Bengaluru
JD below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark (see the sketch after this list).
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
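As a rough illustration of the PySpark pipeline work described above; the file paths and column names are hypothetical.

```python
# Minimal PySpark ETL sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV data.
raw = spark.read.option("header", "true").csv("gs://bucket/raw/orders.csv")

# Transform: validate and clean before handing off to analysts.
clean = (
    raw.dropna(subset=["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet for efficient retrieval.
clean.write.mode("overwrite").partitionBy("order_date").parquet("gs://bucket/curated/orders/")
```
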
We are looking for an experienced GCP Cloud Engineer to design, implement, and manage cloud-based solutions on Google Cloud Platform (GCP). The ideal candidate should have expertise in GKE (Google Kubernetes Engine), Cloud Run, Cloud Load Balancing, Cloud Functions, Azure DevOps, and Terraform, with a strong focus on automation, security, and scalability.
You will work closely with development, operations, and security teams to ensure robust cloud infrastructure and CI/CD pipelines while optimizing performance and cost.
Key Responsibilities:
1. Cloud Infrastructure Design & Management
· Architect, deploy, and maintain GCP cloud resources via Terraform or other automation tooling.
· Implement Google Cloud Storage, Cloud SQL, and Filestore for data storage and processing needs.
· Manage and configure Cloud Load Balancers (HTTP(S), TCP/UDP, and SSL Proxy) for high availability and scalability.
· Optimize resource allocation, monitoring, and cost efficiency across GCP environments.
2. Kubernetes & Container Orchestration
· Deploy, manage, and optimize workloads on Google Kubernetes Engine (GKE).
· Work with Helm charts, Istio, and service meshes for microservices deployments.
· Automate scaling, rolling updates, and zero-downtime deployments.
3. Serverless & Compute Services
· Deploy and manage applications on Cloud Run and Cloud Functions for scalable, serverless workloads (a minimal handler sketch follows this section).
· Optimize containerized applications running on Cloud Run for cost efficiency and performance.
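For illustration, a minimal HTTP handler of the kind deployed to Cloud Functions (2nd gen) or Cloud Run via the Functions Framework; the function body is a placeholder.

```python
# Minimal sketch: HTTP endpoint using the Functions Framework.
# Requires: pip install functions-framework
import functions_framework

@functions_framework.http
def handler(request):
    # request is a Flask Request object.
    name = request.args.get("name", "world")
    return {"message": f"hello {name}"}, 200
```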
4. CI/CD & DevOps Automation
· Design, implement, and manage CI/CD pipelines using Azure DevOps.
· Automate infrastructure deployment using Terraform, Bash, and PowerShell scripting.
· Integrate security and compliance checks into the DevOps workflow (DevSecOps).
Required Skills & Qualifications:
✔ Experience: 8+ years in Cloud Engineering, with a focus on GCP.
✔ Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
✔ Kubernetes & Containers: Experience with GKE, Docker, GKE Networking, Helm.
✔ DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
✔ Infrastructure-as-Code (IaC): Expertise in Terraform for provisioning cloud resources.
✔ Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
✔ Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
Who we are
At CoinCROWD, we're building the next-gen wallet for real-world crypto utility. Our flagship product, CROWD Wallet, is secure, intuitive, gasless, and designed to bring digital currencies into everyday spending, from a coffee shop to cross-border payments.
We're redefining the wallet experience for everyday users, combining the best of Web3 + AI to create a secure, scalable, and delightful platform.
We're more than just a blockchain company; we're an AI-native, crypto-forward startup. We ship fast, think long, and believe in building agentic, self-healing infrastructure that can scale across geographies and blockchains. If that excites you, let's talk.
What You'll Be Doing:
As the DevOps Lead at CoinCROWD, you'll own our infrastructure from end to end, designing, deploying, and scaling secure systems to support blockchain transactions, AI agents, and token operations across global users.
You will :
- Lead the CI/CD, infra automation, observability, and multi-region deployments of CoinCROWD products.
- Manage cloud and container infrastructure using GCP, Docker, Kubernetes, Terraform.
- Deploy and maintain scalable, secure blockchain infrastructure using QuickNode, Alchemy, Web3Auth, and other Web3 APIs.
- Implement infrastructure-level AI agents or scripts for auto-scaling, failure prediction, anomaly detection, and alert management (using LangChain, LLMs, or tools like n8n).
- Ensure 99.99% uptime for wallet systems, APIs, and smart contract layers.
- Build and optimize observability across on-chain/off-chain systems using tools like Prometheus, Grafana, Sentry, Loki, and the ELK Stack.
- Create auto-healing, self-monitoring pipelines that reduce human ops time via Agentic AI workflows.
- Collaborate with engineering and security teams on smart contract deployment pipelines, token rollouts, and app store release automation.
Agentic Ops: what it means
- Use GPT-based agents to auto-document infra changes or failure logs.
- Run LangChain agents that triage alerts, perform log analysis, or suggest infra optimizations (a minimal triage sketch follows this list).
- Build CI/CD workflows that self-update or auto-tune based on system usage.
- Integrate AI to detect abnormal wallet behaviors, fraud attempts, or suspicious traffic spikes.
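As a rough sketch of an LLM-assisted triage step of the kind listed above (the posting names LangChain and n8n; the OpenAI SDK is used here purely for brevity, and the model name and prompt are illustrative):

```python
# Illustrative sketch only: summarize an alert with an LLM before paging a human.
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def triage_alert(alert_text: str) -> str:
    # Ask the model for a severity and a one-line summary; a real pipeline
    # would also attach runbook links and route by severity.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "You triage infrastructure alerts. "
                                          "Reply with a severity and a one-line summary."},
            {"role": "user", "content": alert_text},
        ],
    )
    return response.choices[0].message.content

# Example: triage_alert("CPU > 95% on wallet-api pod for 10 minutes")
```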
What We're Looking For:
- 5 to 10 years of DevOps/SRE experience, with at least 2 to 3 years in Web3, fintech, or high-scale infra.
- Deep expertise with Docker, Kubernetes, Helm, and cloud providers (GCP preferred).
- Hands-on with Terraform, Ansible, GitHub Actions, Jenkins, or similar IaC and pipeline tools.
- Experience maintaining or scaling blockchain infra (EVM nodes, RPC endpoints, APIs).
- Understanding of smart contract CI/CD, token lifecycle (ICO, vesting, etc.), and wallet integrations.
- Familiarity with AI DevOps tools, or interest in building LLM-enhanced internal tooling.
- Strong grip on security best practices, key management, and secrets infrastructure (Vault, SOPS, AWS KMS).
Bonus Points :
- You've built or run infra for a token launch, DEX, or high-TPS crypto wallet.
- You've deployed or automated a blockchain node network at scale.
- You've used AI/LLMs to write ops scripts, manage logs, or analyze incidents.
- You've worked with systems handling real-money movement with tight uptime and security requirements.
Why Join CoinCROWD :
- Equity-first model: Build real value as we scale.
- Be the architect of infrastructure that supports millions of real-world crypto transactions.
- Build AI-powered ops that scale without a 24/7 pager culture
- Work remotely with passionate people who ship fast and iterate faster.
- Be part of one of the most ambitious crossovers of AI + Web3 in 2025.


We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends.
What will you work on?
- Interface with clients
- Recommend tech stacks
- Define end-to-end logical and cloud-native architectures
- Define APIs
- Integrate with 3rd party systems
- Create architectural solution prototypes
- Hands-on coding, team lead, code reviews, and problem-solving
What Makes You A Great Fit?
- 5+ years of software experience
- Experience with architecture of technology systems having hands-on expertise in backend, and web or mobile frontend
- Solid expertise and hands-on experience in Python with Flask or Django
- Expertise in one or more cloud platforms (AWS, Azure, Google App Engine)
- Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
- Knowledge of DevOps practices
- Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
- Excellent communication skills, verbal and written
About Us
We offer CTO-as-a-service and Product Development for Startups. We value our employees and provide an intellectually stimulating environment where everyone's ideas and contributions are valued.


Job Overview:
- JD for Data Analyst:
- Strong proficiency in Python programming.
- Preferred knowledge of cloud technologies, especially in Google Cloud Platform (GCP).
- Experience with visualization tools such as Grafana, PowerBI, and Tableau.
- Good to have knowledge of AI/ML models.
- Must have extensive knowledge of Python analytics, particularly exploratory data analysis (EDA); a bare-bones sketch follows.
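For reference, a bare-bones EDA pass of the kind implied above; the input file and columns are hypothetical.

```python
# Minimal EDA sketch with pandas (hypothetical input file).
import pandas as pd

df = pd.read_csv("metrics.csv")
print(df.shape)                                        # rows x columns
print(df.dtypes)                                       # column types
print(df.describe(include="all"))                      # summary statistics
print(df.isna().mean().sort_values(ascending=False))   # missing-value ratios
```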


LendFlow is an AI-powered home loan assessment platform that helps mortgage brokers and lenders save hours by automating document analysis, income validation, and serviceability assessment. We turn complex financial documents into clear insights—fast.
We’re building a smart assistant that ingests client docs (bank statements, payslips, loan summaries) and uses modular AI agents to extract, classify, and summarize financial data in minutes, not hours. Think OCR + AI agents + compliance-ready outputs.
🛠️ What You’ll Be Building
As part of our early technical team, you'll help us develop and launch our MVP (a minimal backend sketch follows this list). Key modules include:
- Document ingestion and OCR processing (Textract, Document AI)
- AI agent workflows using LangChain or CrewAI
- Serviceability calculators with business rule engines
- React + Next.js frontend for brokers and analysts
- FastAPI backend with PostgreSQL
- Security, encryption, audit logging (privacy-first design)
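To give a feel for that stack, a minimal FastAPI upload endpoint with the OCR step stubbed out; the route and helper are hypothetical, not the actual LendFlow API.

```python
# Minimal sketch: document ingestion endpoint with a stubbed OCR step.
# Requires: pip install fastapi uvicorn python-multipart
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def extract_text(data: bytes) -> str:
    # Placeholder for Textract / Document AI / Tesseract integration.
    return "stubbed OCR output"

@app.post("/documents")
async def ingest_document(file: UploadFile = File(...)):
    data = await file.read()
    text = extract_text(data)
    # A real pipeline would classify the document and queue agent workflows here.
    return {"filename": file.filename, "chars_extracted": len(text)}
```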
🎯 We’re Looking For:
Must-Have Skills:
- Strong experience with Python (FastAPI, OCR, LLMs, prompt engineering)
- Familiarity with AI agent frameworks (LangChain, CrewAI, Autogen, or similar)
- Frontend skills in React.js / Next.js
- Experience with PostgreSQL and cloud storage (AWS/GCP)
- Understanding of financial documents and data privacy best practices
Bonus Points:
- Experience with OCR tools like Amazon Textract, Tesseract, or Document AI
- Building ML/NLP pipelines in real-world apps
- Prior work in fintech, lending, or proptech sectors

Must have handled at least one medium-to-high-complexity project migrating ETL pipelines and data warehouses to the cloud.
Minimum 3 years of experience with premium consulting companies.
Mandatory experience in GCP.
Strong proficiency in at least one major cloud platform - Azure, GCP (primary), or AWS. Azure + GCP (secondary) preferred. Proficiency in all three is a significant plus.
· Design, develop, and maintain cloud-based applications and infrastructure across various cloud platforms.
· Select and configure appropriate cloud services based on specific project requirements and constraints.
· Implement infrastructure automation with tools like Terraform and Ansible.
· Write clean, efficient, and well-documented code using various programming languages (Python required; knowledge of Java, C#, or JavaScript is a plus).
· Implement RESTful APIs and microservices architectures.
· Utilize DevOps practices for continuous integration and continuous delivery (CI/CD).
· Design, configure, and manage scalable and secure cloud infrastructure for MLOps.
· Monitor and optimize cloud resources for performance and cost efficiency.
· Implement security best practices throughout the development lifecycle.
· Collaborate with developers, operations, and security teams to ensure seamless integration and successful deployments.
· Stay up to date on the latest cloud technologies, MLOps tools, and trends.
· Strong analytical and problem-solving skills.



About the Role:
We are seeking a Technical Architect with proven expertise in full-stack web development, cloud infrastructure, and system design. You will lead the design and delivery of scalable enterprise applications, drive technical decision-making, and mentor a cross-functional development team. The ideal candidate has a strong foundation in .NET-based architecture, modern front-end frameworks, and cloud-native technologies.
Key Responsibilities:
- Lead the technical architecture, system design, and full-stack development of enterprise-grade web applications.
- Design and develop robust backend systems and APIs using .NET Core / C# / Python, following TDD/BDD principles.
- Build modern frontends using React.js, TypeScript, and optionally Angular, ensuring responsive and accessible UI.
- Architect scalable, secure, and highly available solutions using cloud platforms such as Azure, AWS, or GCP.
- Guide and review CI/CD pipeline creation and DevOps practices, leveraging tools like Azure DevOps, Git, Docker, etc.
- Oversee database design and optimization for relational and NoSQL systems like MSSQL, PostgreSQL, MongoDB, CosmosDB.
- Mentor developers and collaborate with cross-functional teams including Product Owners, QA, and DevOps.
- Ensure best practices in code quality, security, performance, and compliance.
- Lead application monitoring, error tracking, and infrastructure tuning for production-grade deployments.
Required Skills:
- 10+ years of experience in software development, with 3+ years in architectural or technical leadership roles.
- Strong expertise in .NET Core, C#, React.js, TypeScript, HTML5, CSS3, and JavaScript.
- Good exposure to Python for backend services or data pipelines.
- Cloud platform experience in at least one or more: Azure, AWS, or Google Cloud Platform (GCP).
- Proficient in designing and consuming RESTful APIs, and working with metadata-driven and microservices architecture.
- Strong understanding of DevOps, CI/CD, and deployment strategies using tools like Git, Docker, Azure DevOps.
- Familiarity with frontend frameworks like Angular or Vue.js is a plus.
- Proficient with databases: MSSQL, PostgreSQL, MySQL, MongoDB, CosmosDB.
- Comfortable working on Linux/UNIX and Windows-based servers, along with web servers like Nginx, Apache, IIS.
Good to Have:
- Experience in CRM, ERP, or E-commerce platforms.
- Familiarity with AI/ML integration and working with data science teams.
- Exposure to mobile development using React Native.
- Experience integrating third-party tools like Slack, Microsoft Teams, etc.
Soft Skills:
- Strong problem-solving mindset with a proactive and innovative approach.
- Excellent communication and leadership abilities.
- Capability to mentor junior engineers and drive a high-performance team culture.
- Adaptability to work in fast-paced, Agile environments.
Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical discipline.
- Microsoft / Cloud certifications are a plus.

Looking for fresher developers
Responsibilities:
- Implement integrations requested by customers
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements and skill:
Experience in a DevOps Engineer or similar software engineering role
Good knowledge of Terraform and Kubernetes
Working knowledge of AWS and Google Cloud
You can directly contact me on nine three one six one two zero one three two
Job Description for Database Consultant-I (PostgreSQL)
Job Title: Database Consultant-I (PostgreSQL)
Company: Mydbops
About Us:
Mydbops is a trusted leader with 8+ years of excellence in open-source database management. We deliver best-in-class services across MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. Our focus is on building scalable, secure, and high-performance database solutions for global clients. As a PCI DSS-certified and ISO-certified organisation, we are committed to operational excellence and data security.
Role Overview:
As a Database Consultant – I (PostgreSQL Team), you will take ownership of PostgreSQL database environments, offering expert-level support to our clients. This role involves proactive monitoring, performance tuning, troubleshooting, high availability setup, and guiding junior team members. You will play a key role in customer-facing technical delivery, solution design, and implementation.
Key Responsibilities:
- Manage PostgreSQL production environments for performance, stability, and scalability.
- Handle complex troubleshooting, performance analysis, and query optimisation.
- Implement backup strategies, recovery solutions, replication, and failover techniques.
- Set up and manage high availability architectures (Streaming Replication, Patroni, etc.).
- Work with DevOps/cloud teams for deployment and automation.
- Support upgrades, patching, and migration projects across environments.
- Use monitoring tools to proactively detect and resolve issues.
- Mentor junior engineers and guide troubleshooting efforts.
- Interact with clients to understand requirements and deliver solutions.
Requirements:
- 3–5 years of hands-on experience in PostgreSQL database administration.
- Strong Linux OS knowledge and scripting skills (Bash/Python).
- Proficiency in SQL tuning, performance diagnostics, and explain plans.
- Experience with tools like pgBackRest, Barman for backup and recovery.
- Familiarity with high availability, failover, replication, and clustering.
- Good understanding of AWS RDS, Aurora PostgreSQL, and GCP Cloud SQL.
- Experience with monitoring tools like pg_stat_statements, PMM, Nagios, or custom dashboards (a diagnostic sketch follows these requirements).
- Knowledge of automation/configuration tools like Ansible or Terraform is a plus.
- Strong communication and problem-solving skills.
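For example, one routine diagnostic in this role is pulling the most expensive queries from pg_stat_statements; a minimal sketch follows (the DSN is hypothetical, and the column names assume PostgreSQL 13+, where total_time was split into total_exec_time).

```python
# Minimal sketch: top queries by total execution time via pg_stat_statements.
# Requires: pip install psycopg2-binary; the extension must be enabled.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=postgres host=localhost")  # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT query, calls, total_exec_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10;
    """)
    for query, calls, total_ms in cur.fetchall():
        print(f"{total_ms:10.1f} ms  {calls:6d} calls  {query[:60]}")
conn.close()
```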
Preferred Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or equivalent.
- PostgreSQL certification (EDB/Cloud certifications preferred).
- Past experience in a consulting, customer support, or managed services role.
- Exposure to multi-cloud environments and database-as-a-service platforms.
- Prior experience with database migrations or modernisation projects.
Why Join Us:
- Opportunity to work in a dynamic and growing industry.
- Learning and development opportunities to enhance your career.
- A collaborative work environment with a supportive team.
Job Details:
- Job Type: Full-time
- Work Days: 5 Days
- Work Mode: Work From Home
- Experience Required: 3–5 years

Position : Senior Data Analyst
Experience Required : 5 to 8 Years
Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)
Shift Timing : 11:00 AM – 8:00 PM IST
Notice Period : Immediate Joiners Only
Job Summary :
We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.
The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.
Key Responsibilities :
- Lead the design, execution, and delivery of advanced data analysis projects.
- Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
- Create and maintain interactive dashboards, reports, and visualizations.
- Perform root cause analysis and uncover meaningful patterns from large datasets.
- Present analytical findings to senior leaders and non-technical audiences.
- Maintain data integrity, quality, and governance in all reporting and analytics solutions.
- Mentor junior analysts and support their professional development.
- Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.
Must-Have Skills :
- Strong proficiency in SQL and Databricks
- Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
- Sound understanding of data warehousing concepts and BI best practices
Good-to-Have :
- Experience with AWS
- Exposure to machine learning and predictive analytics
- Industry-specific analytics experience (preferred but not mandatory)

Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible and Python (a wrapper sketch follows this list).
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance.
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
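As a small illustration of driving Ansible from Python, a wrapper that runs a playbook and fails loudly on error; the playbook and inventory paths are hypothetical.

```python
# Minimal sketch: run an Ansible playbook from Python (hypothetical paths).
import subprocess
import sys

def run_playbook(playbook: str, inventory: str) -> None:
    # ansible-playbook exits non-zero when any host fails, so check the code.
    result = subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        sys.exit(f"Deployment failed:\n{result.stderr}")
    print(result.stdout)

run_playbook("deploy.yml", "inventory/prod.ini")
```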
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.

- True hands-on developer in programming languages like Java or Scala.
- Expertise in Apache Spark.
- Database modelling and working with any SQL or NoSQL database is a must.
- Working knowledge of scripting languages like shell/Python.
- Experience working with Cloudera is preferred.
- Orchestration tools like Airflow or Oozie would be a value addition.
- Knowledge of table formats like Delta or Iceberg is a plus.
- Working experience with version control (Git) and build tools like Maven is recommended.
- Software development experience alongside data engineering experience is good to have.
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a data science and product engineering organization building products and solutions for technology and enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
What we are looking for:
Experience: 10+ years
Education: BTech / BE / ME /MTech/ MCA / MSc Computer Science
Industry: Product Engineering Services or Enterprise Software Companies
Job Responsibilities:
- Sprint development tasks, code reviews, defining detailed tasks for the connector based on design/timelines, documentation maturity, release review and sanity, and writing design specifications and user stories for the assigned functionalities.
- Develop assigned components / classes and assist QA team in writing the test cases
- Create and maintain coding best practices and do peer code / solution reviews
- Participate in Daily Scrum calls, Scrum Planning, Retro and Demos meetings
- Raise technical/design/architectural challenges and risks during execution, and develop action plans for mitigating and avoiding identified risks
- Comply with development processes, documentation templates, and tools prescribed by CloudSufi and/or its clients
- Work with other teams and Architects in the organization and assist them on technical Issues/Demos/POCs and proposal writing for prospective clients
- Contribute towards the creation of knowledge repository, reusable assets/solution accelerators and IPs
- Provide feedback to junior developers and be a coach and mentor for them
- Provide training sessions on the latest technologies and topics to other employees in the organization
- Participate in organization development activities from time to time - interviews, CSR/employee engagement activities, participation in business events/conferences, and implementation of new policies, systems, and procedures as decided by the management team
Certifications (Optional): OCPJP (Oracle Certified Professional Java Programmer)
Required Experience:
- Strong programming skills in the language Java.
- Hands on in Core Java and Microservices
- Understanding of identity management using users, groups, and entitlements
- Hands-on in developing connectivity for identity management using SCIM, REST, and LDAP (a SCIM sketch follows this list).
- Thorough experience in triggers, webhooks, and event receiver implementations for connectors.
- Excellent in code review process and assessing developer’s productivity.
- Excellent analytical and problem-solving skills
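To illustrate the SCIM side, a minimal request against a SCIM 2.0 /Users endpoint; the base URL and token are hypothetical, and the filter syntax follows RFC 7644.

```python
# Minimal sketch: list users from a SCIM 2.0 endpoint (hypothetical URL/token).
# Requires: pip install requests
import requests

BASE_URL = "https://idp.example.com/scim/v2"   # hypothetical
TOKEN = "token-from-the-idp"                   # hypothetical

resp = requests.get(
    f"{BASE_URL}/Users",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"filter": 'userName eq "jdoe"'},   # standard SCIM filter syntax
    timeout=30,
)
resp.raise_for_status()
for user in resp.json().get("Resources", []):
    print(user["id"], user.get("userName"))
```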
Good to Have:
- Experience developing 3-4 integration adapters/connectors for enterprise applications (ERP, CRM, HCM, SCM, Billing, etc.) using industry-standard frameworks and methodologies following Agile/Scrum
- Experience with IAM products.
- Experience on Implementation of Message Brokers using JMS.
- Experience on ETL processes
Non-Technical/ Behavioral competencies required:
- Must have worked with US/Europe based clients in onsite/offshore delivery model
- Should have very good verbal and written communication, technical articulation, listening and presentation skills
- Should have proven analytical and problem solving skills
- Should have demonstrated effective task prioritization, time management and internal/external stakeholder management skills
- Should be a quick learner, self-starter, go-getter, and team player
- Should have experience of working under stringent deadlines in a Matrix organization structure
- Should have demonstrated appreciable Organizational Citizenship Behavior in past organizations

Description
Job Summary:
Join Springer Capital as a Cybersecurity & Cloud Intern to help architect, secure, and automate our cloud-based backend systems powering next-generation investment platforms.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm. We leverage cutting-edge digital solutions to uncover high-potential opportunities, transforming traditional finance through innovation, agility, and a relentless commitment to security and scalability.
Job Highlights
Work hands-on with AWS, Azure, or GCP to design and deploy secure, scalable backend infrastructure.
Collaborate with DevOps and engineering teams to embed security best practices in CI/CD pipelines.
Gain experience in real-world incident response, vulnerability assessment, and automated monitoring.
Drive meaningful impact on our security posture and cloud strategy from Day 1.
Enjoy a fully remote, flexible internship with global teammates.
Responsibilities
Assist in architecting and provisioning cloud resources (VMs, containers, serverless functions) with strict security controls.
Implement identity and access management, network segmentation, encryption, and logging best practices.
Develop and maintain automation scripts for security monitoring, patch management, and incident alerts (a small example follows this list).
Support vulnerability scanning, penetration testing, and remediation tracking.
Document cloud architectures, security configurations, and incident response procedures.
Partner with backend developers to ensure secure API gateways, databases, and storage services.
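As one concrete example of that kind of automation script, a small audit that flags security groups open to the world; AWS and boto3 are assumed here purely for illustration.

```python
# Minimal sketch: flag security groups with 0.0.0.0/0 ingress (AWS assumed).
# Requires: pip install boto3, with AWS credentials configured.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                print(f"OPEN: {sg['GroupId']} ({sg['GroupName']}) "
                      f"port {rule.get('FromPort', 'all')}")
```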
What We Offer
Mentorship: Learn directly from senior security engineers and cloud architects.
Training & Certifications: Access to online courses and support for AWS/Azure security certifications.
Impactful Projects: Contribute to critical security and cloud initiatives that safeguard our digital assets.
Remote-First Culture: Flexible hours and the freedom to collaborate from anywhere.
Career Growth: Build a strong foundation for a future in cybersecurity, cloud engineering, or DevSecOps.
Requirements
Pursuing or recently graduated in Computer Science, Cybersecurity, Information Technology, or a related discipline.
Familiarity with at least one major cloud platform (AWS, Azure, or GCP).
Understanding of core security principles: IAM, network security, encryption, and logging.
Scripting experience in Python, PowerShell, or Bash for automation tasks.
Strong analytical, problem-solving, and communication skills.
A proactive learner mindset and passion for securing cloud environments.
About Springer Capital
Springer Capital blends financial expertise with digital innovation to redefine asset management. Our mission is to drive exceptional value by implementing robust, technology-driven strategies that transform investment landscapes. We champion a culture of creativity, collaboration, and continuous improvement.
Location: Global (Remote)
Job Type: Internship
Pay: $50 USD per month
Work Location: Remote
- Strong Site Reliability Engineer (SRE - CloudOps) Profile
- Mandatory (Experience 1) - Must have a minimum of 1 year of experience in SRE (CloudOps)
- Mandatory (Core Skill 1) - Must have experience with Google Cloud platforms (GCP)
- Mandatory (Core Skill 2) - Experience with monitoring, APM, and alerting tools like Prometheus, Grafana, ELK, Newrelic, Pingdom, or Pagerduty
- Mandatory (Core Skill 3) - Hands-on experience with Kubernetes for orchestration and container management.
- Mandatory (Company) - B2C Product Companies.
- Strong Senior Unity Developer Profile
- Mandatory (Experience 1) - Must have a minimum of 2 years of experience in game/application development using Unity.
- Mandatory (Experience 2) - Must have strong experience in backend development using C#
- Mandatory (Experience 3) - Must have strong experience in multiplayer game development with Unity, preferably using Photon Networking (PUN) or Photon Fusion.
- Mandatory (Company) - B2C Product Companies
Preferred
- Preferred (Education) - B.E / B.Tech
GCP Data Engineer Job Description
A GCP Data Engineer is responsible for designing, building, and maintaining data pipelines, architectures, and systems on Google Cloud Platform (GCP). Here's a breakdown of the job:
Key Responsibilities
- Data Pipeline Development: Design and develop data pipelines using GCP services like Dataflow, BigQuery, and Cloud Pub/Sub (a minimal Beam sketch follows at the end of this posting).
- Data Architecture: Design and implement data architectures to meet business requirements.
- Data Processing: Process and analyze large datasets using GCP services like BigQuery and Cloud Dataflow.
- Data Integration: Integrate data from various sources using GCP services like Cloud Data Fusion and Cloud Pub/Sub.
- Data Quality: Ensure data quality and integrity by implementing data validation and data cleansing processes.
Essential Skills
- GCP Services: Strong understanding of GCP services like BigQuery, Cloud Dataflow, Cloud Pub/Sub, and Cloud Storage.
- Data Engineering: Experience with data engineering concepts, including data pipelines, data warehousing, and data integration.
- Programming Languages: Proficiency in programming languages like Python, Java, or Scala.
- Data Processing: Knowledge of data processing frameworks like Apache Beam and Apache Spark.
- Data Analysis: Understanding of data analysis concepts and tools like SQL and data visualization.
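As an illustration of the pipeline-development responsibility above, a minimal Apache Beam pipeline that streams Pub/Sub messages into BigQuery; the resource names and schema are hypothetical.

```python
# Minimal sketch: streaming Pub/Sub -> BigQuery with Apache Beam (Dataflow-ready).
# Requires: pip install "apache-beam[gcp]"
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # add --runner=DataflowRunner etc. in practice

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub")  # hypothetical
        | "Parse" >> beam.Map(json.loads)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",             # hypothetical table
            schema="event_id:STRING,ts:TIMESTAMP",     # hypothetical schema
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```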