50+ Remote AWS (Amazon Web Services) Jobs in India
Apply to 50+ Remote AWS (Amazon Web Services) Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS (Amazon Web Services) Jobs and apply today!

Are you looking to explore what is possible in a collaborative and innovative work environment? Is your goal to work with a team of talented professionals who are keenly focused on solving complex business problems and supporting product innovation with technology?
If so, you might be our next Senior DevOps Engineer, where you will be involved in building out systems for our rapidly expanding team, enabling the whole group to operate more effectively and iterate at top speed in an open, collaborative environment.
Systems management and automation are the name of the game here – in development, testing, staging, and production. If you are passionate about building innovative and complex software, are comfortable in an “all hands on deck” environment, and can thrive in an Insurtech culture, we want to meet you!
What You'll Be Doing
- You will collaborate with our development team to support ongoing projects, manage software releases, and ensure smooth updates to QA and production environments. This includes handling configuration updates and meeting all release requirements.
- You will work closely with your team members to enhance the company’s engineering tools, systems, procedures, and data security practices.
- Provide technical guidance and educate team members and coworkers on development and operations.
- Monitor system metrics and develop ways to improve them.
- Conduct systems tests for security, performance, and availability.
What We're Looking For:
- You have a working knowledge of technologies like:
- Docker, Kubernetes
- Jenkins (alt. Bamboo, TeamCity, Travis CI, BuildMaster)
- Ansible, Terraform, Pulumi
- Python
- You have experience with GitHub Actions, Version Control, CI/CD/CT, shell scripting, and database change management
- You have working experience with Microsoft Azure, Amazon AWS, Google Cloud, or other cloud providers
- You have experience with cloud security management
- You can configure assigned applications and troubleshoot most configuration issues without assistance
- You can write accurate, concise, and formatted documentation that can be reused and read by others
- You know scripting tools like bash or PowerShell

About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
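To give a flavour of the kind of service work this role describes, here is a minimal, hedged sketch of a FastAPI microservice with a pytest-style test. The endpoint, payload shape, and service name are hypothetical and not MyOperator's actual API.

```python
# Minimal FastAPI microservice sketch with a pytest-style test.
# Needs fastapi, httpx (for TestClient), and pytest. Endpoint and payload are illustrative only.
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI(title="campaign-service")

class Campaign(BaseModel):
    name: str
    channel: str  # e.g. "whatsapp" or "call"

_campaigns = {}  # in-memory store; a real service would use MySQL/Postgres

@app.post("/campaigns", status_code=201)
def create_campaign(campaign: Campaign) -> dict:
    campaign_id = len(_campaigns) + 1
    _campaigns[campaign_id] = campaign
    return {"id": campaign_id, "name": campaign.name, "channel": campaign.channel}

@app.get("/campaigns/{campaign_id}")
def get_campaign(campaign_id: int) -> dict:
    if campaign_id not in _campaigns:
        raise HTTPException(status_code=404, detail="campaign not found")
    stored = _campaigns[campaign_id]
    return {"id": campaign_id, "name": stored.name, "channel": stored.channel}

# Run with: pytest this_file.py
client = TestClient(app)

def test_create_and_fetch_campaign():
    created = client.post("/campaigns", json={"name": "diwali", "channel": "whatsapp"})
    assert created.status_code == 201
    fetched = client.get(f"/campaigns/{created.json()['id']}")
    assert fetched.json()["name"] == "diwali"
```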
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
About the Role
OpenIAM is looking for a Solutions Architect (IAM) to support our enterprise customers in the successful deployment and integration of OpenIAM's platform. This role is highly technical and customer-facing, bridging architecture design, migration, troubleshooting, and best-practice advisory.
Responsibilities
- Partner with CSMs to deliver technical onboarding and solution design for enterprise accounts.
- Lead integrations with enterprise directories, cloud services, databases, and custom applications.
- Provide migration planning and execution support for legacy IAM systems.
- Troubleshoot complex issues, including connector logic, data sync, performance, and HA setup.
- Advise customers on compliance/audit configuration (e.g., SoD, certifications, reporting).
- Work with Engineering to resolve escalations and influence product roadmap.
- Deliver technical workshops, architecture sessions, and training to customer teams.
Qualifications
- 5+ years in IAM architecture, engineering, or consulting (e.g., OpenIAM, SailPoint, ForgeRock, Ping, Okta).
- Deep technical knowledge of IAM standards and protocols (SAML, OIDC, OAuth2, SCIM, LDAP).
- Hands-on experience with provisioning/deprovisioning, connectors, APIs, and scripting (Groovy, Java, or similar).
- Strong knowledge of enterprise IT environments (Windows, Linux, AD, databases, Kubernetes, cloud).
- Excellent problem-solving and troubleshooting skills.
- Comfortable presenting to technical and business stakeholders.
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
Why Join OpenIAM
- Work on challenging IAM deployments at scale with leading global enterprises.
- Shape the success of customers while influencing the evolution of our platform.
- Competitive compensation, benefits, and growth opportunities.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are seeking a Site Reliability Engineer (SRE) with a minimum of 2 years of experience who is passionate about monitoring, observability, and ensuring system reliability. The ideal candidate will have strong expertise in Grafana, Prometheus, Opensearch, and AWS CloudWatch, with the ability to design insightful dashboards and proactively optimize system performance.
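As a rough illustration of the Prometheus side of this work, the sketch below exposes a couple of custom application metrics that a Prometheus scrape job could collect and a Grafana panel could chart. The metric names, labels, and port are assumptions, not an existing setup.

```python
# Minimal Prometheus exporter sketch (pip install prometheus-client).
# Metric names, labels, and the scrape port are illustrative only.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS_TOTAL = Counter(
    "app_requests_total", "Total requests handled", ["endpoint", "status"]
)
REQUEST_LATENCY = Histogram(
    "app_request_latency_seconds", "Request latency in seconds", ["endpoint"]
)

def handle_fake_request(endpoint: str) -> None:
    """Simulate a request and record its latency and outcome."""
    with REQUEST_LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.2))  # pretend to do work
    status = "200" if random.random() > 0.05 else "500"
    REQUESTS_TOTAL.labels(endpoint=endpoint, status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_fake_request("/api/orders")
```

A Prometheus scrape job pointed at that port, plus a Grafana panel on something like `rate(app_requests_total[5m])`, is the kind of dashboard and alerting work the posting describes.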
Key Responsibilities
- Design, develop, and maintain monitoring and alerting systems using Grafana, Prometheus, and AWS CloudWatch.
- Create and optimize dashboards to provide actionable insights into system and application performance.
- Collaborate with development and operations teams to ensure high availability and reliability of services.
- Proactively identify performance bottlenecks and drive improvements.
- Continuously explore and adopt new monitoring/observability tools and best practices.
Required Skills & Qualifications
- Minimum 2 years of experience in SRE, DevOps, or related roles.
- Hands-on expertise in Grafana, Prometheus, and AWS CloudWatch.
- Proven experience in dashboard creation, visualization, and alerting setup.
- Strong understanding of system monitoring, logging, and metrics collection.
- Excellent problem-solving and troubleshooting skills.
- Quick learner with a proactive attitude and adaptability to new technologies.
Good to Have (Optional)
- Experience with AWS services beyond CloudWatch.
- Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
- Scripting knowledge (Python, Bash, or similar).
Why Join Us
At MyOperator, you will play a key role in ensuring the reliability, scalability, and performance of systems that power AI-driven business communication for leading global brands. You’ll work in a fast-paced, innovation-driven environment where your expertise will directly impact thousands of businesses worldwide.
Job Description for Delivery Manager
Job Title: Delivery Manager
Company: Mydbops
About Us:
As a seasoned industry leader for 8 years in open-source database management, we specialise in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Mydbops takes pride in being a PCI DSS-certified and ISO-certified company, reflecting our unwavering commitment to maintaining the highest security and operational excellence standards.
Role Overview
We’re looking for a capable and motivated Delivery Manager with 1.5–2 years of experience to lead client projects involving AWS and open-source databases like MySQL, MariaDB, PostgreSQL, and MongoDB. This role bridges technical teams and clients to ensure high-quality project delivery.
Key Responsibilities
- Manage database-related client projects end-to-end.
- Coordinate with DBAs, cloud engineers, and internal teams to meet delivery goals.
- Monitor project timelines, resolve blockers, and ensure client satisfaction.
- Oversee deployment, migration, performance tuning, and scaling for databases.
- Maintain clear communication between stakeholders and technical teams.
- Implement and follow delivery best practices and documentation standards.
Skills Required
- Project management experience with hands-on exposure to AWS services.
- Strong knowledge of database technologies: MySQL, MariaDB, PostgreSQL, and MongoDB.
- Proven ability to deliver projects in AWS environments, with the capacity to focus exclusively on AWS-related work.
- In-depth understanding of database replication, performance tuning, and backup strategies.
- Strong coordination and leadership abilities in technical project environments.
- Excellent communication and stakeholder engagement skills.
Why Join Us:
- Opportunity to work in a dynamic and growing industry.
- Learning and development opportunities to enhance your career.
- A collaborative work environment with a supportive team.
Job Details:
- Job Type: Full-time opportunity
- Work time: General Shift
- Mode of Employment - Work From Home
- Experience - 2 to 3 years
We are looking for a highly skilled Solution Architect with a passion for software engineering and deep experience in backend technologies, cloud, and DevOps. This role will be central in managing, designing, and delivering large-scale, scalable solutions.
Core Skills
- Strong coding and software engineering fundamentals.
- Experience in large-scale custom-built applications and platforms.
- Champion of SOLID principles, OO design, and pair programming.
- Agile, Lean, and Continuous Delivery – CI, TDD, BDD.
- Frontend experience is a plus.
- Hands-on with Java, Scala, Golang, Rust, Spark, Python, and JS frameworks.
- Experience with Docker, Kubernetes, and Infrastructure as Code.
- Excellent understanding of cloud technologies – AWS, GCP, Azure.
Responsibilities
- Own all aspects of technical development and delivery.
- Understand project requirements and create architecture documentation.
- Ensure adherence to development best practices through code reviews.
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (Up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.

About the Role
NeoGenCode Technologies is looking for a Senior Technical Architect with strong expertise in enterprise architecture, cloud, data engineering, and microservices. This is a critical role demanding leadership, client engagement, and architectural ownership in designing scalable, secure enterprise systems.
Key Responsibilities
- Design scalable, secure, and high-performance enterprise software architectures.
- Architect distributed, fault-tolerant systems using microservices and event-driven patterns.
- Provide technical leadership and hands-on guidance to engineering teams.
- Collaborate with clients, understand business needs, and translate them into architectural designs.
- Evaluate, recommend, and implement modern tools, technologies, and processes.
- Drive DevOps, CI/CD best practices, and application security.
- Mentor engineers and participate in architecture reviews.
Must-Have Skills
- Architecture: Enterprise Solutions, EAI, Design Patterns, Microservices (API & Event-driven)
- Tech Stack: Java, Spring Boot, Python, Angular (recent 2+ years experience), MVC
- Cloud Platforms: AWS, Azure, or Google Cloud
- Client Handling: Strong experience with client-facing roles and delivery
- Data: Data Modeling, RDBMS & NoSQL, Data Migration/Retention Strategies
- Security: Familiarity with OWASP, PCI DSS, InfoSec principles
Good to Have
- Experience with Mobile Technologies (native, hybrid, cross-platform)
- Knowledge of tools like Enterprise Architect, TOGAF frameworks
- DevOps tools, containerization (Docker), CI/CD
- Experience in financial services / payments domain
- Familiarity with BI/Analytics, AI/ML, Predictive Analytics
- 3rd-party integrations (e.g., MuleSoft, BizTalk)
About the Role:
NeoGenCode is looking for a highly skilled Lead Java Fullstack Developer to join their agile development team. The ideal candidate will bring strong expertise in Java, Angular, and Spring Boot, with hands-on experience in developing and deploying enterprise-level microservices in cloud environments (especially AWS). The candidate will be expected to lead technically while remaining hands-on, guiding junior developers and ensuring top-quality code delivery.
Key Responsibilities:
- Act as a technical lead and contributor in a cross-functional agile team.
- Analyze, design, develop, and deploy web applications using Java, Angular, and Spring Boot.
- Lead sprint activities, task allocation, and code reviews to ensure quality and timely delivery.
- Design and implement microservices-based architecture and RESTful APIs.
- Ensure performance, security, scalability, and maintainability of the applications.
- Maintain CI/CD pipelines using GitHub, Jenkins, and related tools.
- Collaborate with business analysts, product managers, and UX teams for requirement gathering and refinement.
Technical Requirements:
✅ Core Technologies:
- Java (Java 21 preferred) – minimum 5+ years of hands-on experience
- Spring Boot (MVC, Spring Data, Hibernate) – strong hands-on experience
- Angular (Angular 19 preferred) – minimum 2+ years of hands-on experience
✅ Cloud & DevOps:
- Experience in AWS ecosystem (especially S3, Secrets Manager, CloudWatch)
- Experience with Docker
- Familiarity with CI/CD tools (Jenkins, GitHub, etc.)
✅ Database:
- PostgreSQL or other RDBMS
- Familiarity with NoSQL databases is a plus
✅ Frontend Proficiency:
- HTML5, CSS3, JavaScript, AJAX, JSON
- Angular concepts like Interceptors, Pipes, Directives, Decorators
- Strong debugging and performance optimization skills
✅ Testing & Tools:
- Unit testing using Jasmine/Karma or Jest is a plus
- Experience with tools like JIRA, Azure DevOps, Confluence
Soft Skills & Other Expectations:
- Excellent verbal and written communication skills
- Prior consulting or client-facing experience is a big plus
- Strong analytical, problem-solving, and leadership abilities
- Familiarity with Agile/Scrum methodology
- Self-motivated and adaptable with a strong desire to learn and grow
Job Description for PostgreSQL Lead
Job Title: PostgreSQL Lead
Company: Mydbops
About us:
As a seasoned industry leader for 8 years in open-source database management, we specialise in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Our Customer Account Management team is vital in ensuring client satisfaction and loyalty.
Role Overview
As the PostgreSQL Lead, you will own the design, implementation, and operational excellence of PostgreSQL environments. You’ll lead technical decision-making, mentor the team, interface with customers, and drive key initiatives covering performance tuning, HA architectures, migrations, and cloud deployments.
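For context on the performance-tuning side, here is a small, hedged sketch of pulling the most expensive statements from pg_stat_statements with psycopg2. The DSN is a placeholder, and note that the column is total_exec_time on PostgreSQL 13+ but total_time on older versions.

```python
# Sketch: list the top statements by total execution time using pg_stat_statements.
# Assumes the extension is installed (CREATE EXTENSION pg_stat_statements) and
# loaded via shared_preload_libraries. The DSN below is a placeholder.
import psycopg2

TOP_QUERIES_SQL = """
    SELECT query,
           calls,
           total_exec_time,   -- named total_time on PostgreSQL < 13
           mean_exec_time
    FROM pg_stat_statements
    ORDER BY total_exec_time DESC
    LIMIT %s;
"""

def top_queries(dsn: str, limit: int = 10):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(TOP_QUERIES_SQL, (limit,))
        return cur.fetchall()

if __name__ == "__main__":
    for query, calls, total_ms, mean_ms in top_queries("dbname=app user=postgres"):
        print(f"{total_ms:10.1f} ms total  {calls:8d} calls  {query[:80]}")
```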
Key Responsibilities
- Lead PostgreSQL production environments: architecture, stability, performance, and scalability
- Oversee complex troubleshooting, query optimization, and performance analysis
- Architect and maintain HA/DR systems (e.g., Streaming Replication, Patroni, repmgr)
- Define backup, recovery, replication, and failover protocols
- Guide DB migrations, patches, and upgrades across environments
- Collaborate with DevOps and cloud teams for infrastructure automation
- Use monitoring (pg_stat_statements, PMM, Nagios or any monitoring stack) to proactively resolve issues
- Provide technical mentorship—conduct peer reviews, upskill, and onboard junior DBAs
- Lead customer interactions: understand requirements, design solutions, and present proposals
- Drive process improvements and establish database best practices
Requirements
- Experience: 4-5 years in PostgreSQL administration, with at least 2+ years in a leadership role
- Performance Optimization: Expert in query tuning, indexing strategies, partitioning, and execution plan analysis.
- Extension Management: Proficient with critical PostgreSQL extensions including:
- pg_stat_statements – query performance tracking
- pg_partman – partition maintenance
- pg_repack – online table reorganization
- uuid-ossp – UUID generation
- pg_cron – native job scheduling
- auto_explain – capturing costly queries
- Backup & Recovery: Deep experience with pgBackRest, Barman, and implementing Point-in-Time Recovery (PITR).
- High Availability & Clustering: Proven expertise in configuring and managing HA environments using Patroni, repmgr, and streaming replication.
- Cloud Platforms: Strong operational knowledge of AWS RDS and Aurora PostgreSQL, including parameter tuning, snapshot management, and performance insights.
- Scripting & Automation: Skilled in Linux system administration, with advanced scripting capabilities in Bash and Python.
- Monitoring & Observability: Familiar with pg_stat_statements, PMM, Nagios, and building custom dashboards using Grafana and Prometheus.
- Leadership & Collaboration: Strong problem-solving skills, effective communication with stakeholders, and experience leading database reliability and automation initiatives.
Preferred Qualifications
- Bachelor’s/Master’s degree in CS, Engineering, or equivalent
- PostgreSQL certifications (e.g., EDB, AWS)
- Consulting/service delivery experience in managed services or support roles
- Experience in large-scale migrations and modernization projects
- Exposure to multi-cloud environments and DBaaS platforms
What We Offer:
- Competitive salary and benefits package.
- Opportunity to work with a dynamic and innovative team.
- Professional growth and development opportunities.
- Collaborative and inclusive work environment.
Job Details:
- Work time: General shift
- Working days: 5 Days
- Mode of Employment - Work From Home
- Experience - 4-5 years
Supercharge Your Career as a Sr. Dev Engg – Java at Technoidentity!
Are you ready to solve people challenges that fuel business growth? At Technoidentity, we're a Data+AI product engineering company building cutting-edge solutions in the FinTech domain for over 13 years—and we're expanding globally. It's the perfect time to join our team of tech innovators and leave your mark!
What’s in it for You?
We are looking for a skilled Java Backend Engineer who is passionate about building scalable, high-performance applications. The ideal candidate should have strong expertise in Java, data structures, databases, and modern frameworks, along with experience deploying solutions on AWS and managing CI/CD pipelines.
What Will You Be Doing?
- Design, develop, and maintain backend services using Java and Spring frameworks.
- Implement efficient algorithms and data structures for complex problem-solving.
- Integrate and manage relational databases (preferably RDS) with Java applications.
- Deploy and manage services on AWS infrastructure (EC2, SQS, RDS, etc.).
- Implement and maintain CI/CD pipelines (Jenkins or similar) for seamless delivery.
- Collaborate with cross-functional teams to design scalable solutions.
- Ensure code quality, performance, and security best practices.
- Contribute to code reviews, documentation, and knowledge sharing.
What Makes You the Perfect Fit?
- Strong proficiency in Java and data structures/algorithms.
- Hands-on experience with Spring frameworks (Spring Boot, Spring MVC, Spring Data, etc.).
- Proficiency in working with databases (SQL, schema design, query optimization).
- Practical experience with AWS services (EC2, SQS, RDS, IAM, etc.).
- Experience in setting up and maintaining CI/CD pipelines (Jenkins, GitHub Actions).
- Good understanding of version control using Git/GitHub.
- Solid problem-solving and debugging skills.
- Excellent communication and teamwork skills.
Preferred Qualifications
- Experience with microservices architecture.
- Exposure to containerization (Docker, Kubernetes).
- Knowledge of monitoring and logging tools (CloudWatch, ELK, Prometheus, etc.).

We're Hiring: Senior Developer (AI & Machine Learning) 🚀
🔧 **Tech Stack**: Python, Neo4j, FAISS, LangChain, React.js, AWS/GCP/Azure
🧠 **Role**: AI/ML development, backend architecture, cloud deployment
🌍 **Location**: Remote (India)
💼 **Experience**: 5-10 years
If you're passionate about making an impact in EdTech and want to help shape the future of learning with AI, we want to hear from you!


About The Role
Location: Remote / Hybrid (India Preferred)
Experience: 3–10 years of relevant experience
Reports to: CEO/Co-founder
Type: Full-Time
Tech Stack: Python, Frappe, LangChain, Neo4j, FAISS, React.js, AWS/GCP/Azure
What You’ll Do
● AI/ML Development
○ Build AI-powered student learning insights using Graph Databases (Neo4j), FAISS, Sentence Transformers, and OpenCV (ResNet-50).
○ Develop Retrieval-Augmented Generation (RAG) and reinforcement learning models to personalize content and assessment.
○ Research and implement multi-modal generation (text, image, voice) for highly personalized learning interactions.
○ Fine-tune and optimize transformer-based models (e.g., GPT, BERT) to deliver contextual, culturally relevant learning experiences.
● Backend & API Development
○ Architect and build scalable backend using Frappe, FastAPI, or Django.
○ Develop REST and GraphQL APIs to connect PAL with TAP’s LMS, Glific, and content repositories.
○ Integrate Redis and Celery to manage background inference processes.
○ Connect with Glific APIs to power our AI-driven WhatsApp learning chatbot.
● Frontend & User Experience (Optional)
○ Develop a React.js-based student dashboard for real-time learning insights and content delivery.
○ Collaborate closely with our UX team to ensure intuitive and accessible design.
● Cloud & Deployment (DevOps)
○ Deploy and scale models across cloud platforms: AWS, GCP, or Azure.
○ Implement CI/CD pipelines (Jenkins, Cypress.io) to ensure continuous delivery and testing.
○ Use Docker and Kubernetes for managing containerized deployments.
● AI-Driven Security & Automation
○ Ensure data privacy and compliance by embedding AI-powered security checks.
○ Automate personalized content delivery through NLP chatbots and AI workflows.
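As a rough sketch of the retrieval step behind the RAG work listed above (not the actual PAL pipeline), the snippet below embeds a few passages with Sentence Transformers, indexes them with FAISS, and retrieves the closest matches for a student query. The model name and passages are placeholders.

```python
# Retrieval sketch for a RAG pipeline: embed passages, index with FAISS,
# then fetch the nearest passages for a query. Model and texts are placeholders.
# pip install faiss-cpu sentence-transformers
import faiss
from sentence_transformers import SentenceTransformer

passages = [
    "Photosynthesis converts sunlight, water, and CO2 into glucose and oxygen.",
    "The water cycle moves water through evaporation, condensation, and rain.",
    "Fractions represent parts of a whole, written as numerator over denominator.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
embeddings = model.encode(passages, normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on normalized vectors
index.add(embeddings)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    query_vec = model.encode([query], normalize_embeddings=True).astype("float32")
    _, idx = index.search(query_vec, k)
    return [passages[i] for i in idx[0]]

print(retrieve("How do plants make food?"))
```

The retrieved passages would then be folded into the LLM prompt that generates the personalized explanation.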
Who You Are
● You’ve worked on AI/ML systems at scale, especially in EdTech, SaaS, or social impact settings
● You’ve built or fine-tuned models with GPT/BERT, FAISS, LangChain, or custom embeddings
● You know how to move between backend complexity and real-world deployment
● You’ve used tools like Zapier, N8N, or FastAPI in production
● You don’t just write code — you write roadmaps, define structure, and love cross-functional collaboration
● Bonus: You’ve dabbled in adaptive learning, NLP in regional languages, or multimodal AI
What You Can Expect?
● Real ownership – You’ll lead architecture, experimentation, and rollout
● Deep learning – Work with experienced leaders across product, pedagogy, and AI
● Remote flexibility – Work from anywhere, build for everywhere
● Bold pace, clear values – We move fast, think big, and always center the child
● Future leadership track – Opportunity to grow into a Tech/AI Lead role as we scale
● Full transparency – Competitive salary, equity potential, and clarity on what’s next

Full Stack Engineer (Frontend Strong, Backend Proficient)
5-10 Years Experience
Contract: 6 months, extendable
Location: Remote
Technical Requirements
Frontend Expertise (Strong)
*Need at least 4 years in React web development, Node & AI.*
● Deep proficiency in React, Next.js, TypeScript
● Experience with state management (Redux, Context API)
● Frontend testing expertise (Jest, Cypress)
● Proven track record of achieving high Lighthouse performance scores
Backend Proficiency
● Solid experience with Node.js, NestJS (preferred), or ExpressJS
● Database management (SQL, NoSQL)
● Cloud technologies experience (AWS, Azure)
● Understanding of OpenAI and AI integration capabilities (bonus)
Full Stack Integration
● Excellent ability to manage and troubleshoot integration issues between frontend and backend systems
● Experience designing cohesive systems with proper separation of concerns
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX).
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
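As a small illustration of the inference-optimization item above, this sketch exports a PyTorch model to ONNX, which is the usual first step before ONNX Runtime or TensorRT. The torchvision ResNet-18 is only a stand-in for the actual pipeline models.

```python
# Sketch: export a PyTorch model to ONNX as a first step toward optimized inference.
# The torchvision ResNet-18 is a stand-in for the real CV/NeRF pipeline models.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # NCHW image batch of one

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["image"],
    output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},  # allow variable batch size
    opset_version=17,
)
print("Exported resnet18.onnx; run it with ONNX Runtime or convert to TensorRT.")
```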
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.
About Us:
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
Are you the One?
As a Senior DevOps Engineer in our Platform Engineering team, you’ll play a pivotal role in evolving the foundational cloud infrastructure that powers our API-first BaaS platform. You will design, build, and operate secure, multi-tenant, highly available infrastructure systems optimized for rapid developer onboarding, API lifecycle management, observability, and compliance enforcement.
This role is deeply technical and cross-functional. You’ll work closely with Backend Engineers, SREs, Product teams, and Infosec to deliver a seamless and secure platform experience.
Your responsibilities will include:
- Architect, provision, and operate infrastructure that supports tens of thousands of API transactions per second across multi-cloud environments.
- Build multi-tenant Kubernetes environments that serve as the backbone for API deployments, internal developer platforms, and CI/CD automation.
- Implement scalable service mesh, ingress, and routing layers to support API versioning and rate limiting.
- Enable self-service infrastructure provisioning, deployment pipelines, and observability tooling through developer portals and golden paths.
- Improve time-to-first-API and time-to-production for engineering teams by abstracting infrastructure complexity via platform APIs.
- Implement zero-trust networking, encryption at rest and in transit, secrets management, and fine-grained IAM policies for platform services.
- Collaborate with security and compliance teams to embed policy-as-code, audit logging, and monitoring to meet regulatory standards (e.g., PCI DSS, MAS-TRM).
- Ensure that infrastructure changes are governed through GitOps and change management workflows.
- Build and scale telemetry pipelines (metrics, traces, logs) for granular insights into API health, latency, and developer usage.
- Enable real-time alerting, SLO-based monitoring, and automated remediation for critical platform services.
- Lead incident response and postmortem analysis, continuously improving system resiliency and response times.
Requirements:
- 5+ years of hands-on experience designing and operating production-grade cloud infrastructure and Kubernetes workloads.
- Deep understanding of cloud-native architectures supporting large-scale, low-latency API services (e.g., OpenAPI, REST, gRPC).
- Strong command of Kubernetes internals, including operators, Helm, RBAC, admission controllers, CNI, CSI, and NetworkPolicies.
- Proficient in cloud security practices, identity federation (OIDC, JWT), API gateway controls, TLS management, and key rotation.
- Proficient in Python, Go, or a similar language for scripting, tooling, and automation.
- Familiar with cloud-native CI/CD tooling (ArgoCD, Tekton, FluxCD, Jenkins) and Infrastructure-as-Code (Terraform, Pulumi).
Good to have:
- Experience with API gateway platforms (Kong, Apigee, AWS API Gateway) in production setups.
- Background in multi-region deployments, failover design, and cross-zone service routing.
- Experience with developer portals, internal developer platforms (IDPs), and API usage analytics.
- Familiarity with banking-grade compliance standards, SOC2, PCI DSS, ISO 27001.
- Understanding of data protection laws (e.g., PDPA, GDPR) as applied to cloud infrastructure.
MatchMove Culture:
- We cultivate a dynamic, innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
- We focus on employee development, supporting continuous learning and growth through training programs, on-the-job learning, and mentorship.
- We encourage speaking up, sharing ideas, and taking ownership. Our team spans Asia, embracing diversity and fostering a rich exchange of perspectives and experiences.
- Together, we harness the power of fintech and e-commerce to impact people's lives meaningfully.
- Grow with us and shape the future of fintech. Join us and be part of something bigger!

About Us
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.
Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business.
You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
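To make the batch side of this concrete, here is a minimal, hedged PySpark sketch of the kind of S3-to-partitioned-table job such a platform runs. The bucket names, schema, and paths are placeholders, and a real Glue/Iceberg job would add catalog and table-format configuration on top.

```python
# Minimal PySpark batch sketch: read raw JSON events from S3, add a date partition,
# and write partitioned Parquet. Buckets, paths, and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions-batch").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/transactions/2024/")

cleaned = (
    raw.filter(F.col("amount") > 0)                       # drop zero/negative amounts
       .withColumn("txn_date", F.to_date("created_at"))   # partition key
       .select("txn_id", "account_id", "amount", "currency", "txn_date")
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3://example-curated-bucket/transactions/")
)

spark.stop()
```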
You will contribute to
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
Requirements
- At least 6 years of experience in data engineering.
- Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie Points
- Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data as a product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.
MatchMove Culture:
- We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication.
- We focus on employee development, supporting continuous learning and growth through training programs, learning on the job and mentorship.
- We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences.
- Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.
Grow with us and shape the future of fintech and e-commerce. Join us and be part of something bigger!
Job Position: DevOps Engineer
Experience Range: 2 - 3 years
Type: Full Time
Location: India (Remote)
Desired Skills: DevOps, Kubernetes (EKS), Docker, Kafka, HAProxy, MQTT brokers, Redis, PostgreSQL, TimescaleDB, Shell Scripting, Terraform, AWS (API Gateway, ALB, ECS, EKS, SNS, SES, CloudWatch Logs), Prometheus, Grafana, Jenkins, GitHub
Your key responsibilities:
- Collaborate with developers to design and implement scalable, secure, and reliable infrastructure.
- Manage and automate CI/CD pipelines (Jenkins - Groovy Scripts, GitHub Actions), ensuring smooth deployments.
- Containerise applications using Docker and manage workloads on Kubernetes (EKS).
- Work with AWS services (ECS, EKS, API Gateway, SNS, SES, CloudWatch Logs) to provision and maintain infrastructure.
- Implement infrastructure as code using Terraform.
- Set up and manage monitoring and alerting using Prometheus and Grafana.
- Manage and optimize Kafka, Redis, PostgreSQL, TimescaleDB deployments.
- Troubleshoot issues in distributed systems and ensure high availability using HAProxy, load balancing, and failover strategies.
- Drive automation initiatives across development, testing, and production environments.
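A small hedged example of the day-to-day Kubernetes work listed above: the sketch below uses the official Python client to flag pods that are not in a Running or Succeeded phase across namespaces. It assumes a reachable kubeconfig (e.g. for an EKS cluster) and is illustrative rather than part of any existing tooling.

```python
# Sketch: list pods that are not Running/Succeeded across all namespaces,
# using the official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

def unhealthy_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            bad.append((pod.metadata.namespace, pod.metadata.name, phase))
    return bad

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```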
What you’ll bring
Required:
- 2–3 years of hands-on DevOps experience.
- Strong proficiency in Shell Scripting.
- Practical experience with Docker and Kubernetes (EKS).
- Knowledge of Terraform or other IaC tools.
- Experience with Jenkins pipelines (Groovy scripting preferred).
- Exposure to AWS cloud services (ECS, EKS, API Gateway, SNS, SES, CloudWatch).
- Understanding of microservices deployment and orchestration.
- Familiarity with monitoring/observability tools (Prometheus, Grafana).
- Good communication and collaboration skills.
Nice to have:
- Experience with Kafka, HAProxy, MQTT brokers.
- Knowledge of Redis, PostgreSQL, TimescaleDB.
- Exposure to DevOps best practices in agile environments.
Are you an experienced Infrastructure/DevOps Engineer looking for an exciting remote opportunity to design, automate, and scale modern cloud environments? We’re seeking a skilled engineer with strong expertise in Terraform and DevOps practices to join our growing team. If you’re passionate about automation, cloud infrastructure, and CI/CD pipelines, we’d love to hear from you!
Key Responsibilities:
- Design, implement, and manage cloud infrastructure using Terraform (IaC).
- Build and maintain CI/CD pipelines for seamless application deployment.
- Ensure scalability, reliability, and security of cloud-based systems.
- Collaborate with developers and QA to optimize environments and workflows.
- Automate infrastructure provisioning, monitoring, and scaling.
- Troubleshoot infrastructure and deployment issues quickly and effectively.
- Stay up to date with emerging DevOps tools, practices, and cloud technologies.
Requirements:
- Minimum 5+ years of professional experience in DevOps or Infrastructure Engineering.
- Strong expertise in Terraform and Infrastructure as Code (IaC).
- Hands-on experience with AWS / Azure / GCP (at least one cloud platform).
- Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, etc.).
- Experience with Docker, Kubernetes, and container orchestration.
- Strong knowledge of Linux systems, networking, and security best practices.
- Familiarity with monitoring & logging tools (Prometheus, Grafana, ELK, etc.).
- Scripting experience (Bash, Python, or similar).
- Excellent problem-solving skills and ability to work in remote teams.
Perks and Benefits:
- Competitive salary with remote work flexibility.
- Opportunity to work with global clients on modern infrastructure.
- Growth and learning opportunities in cutting-edge DevOps practices.
- Collaborative team culture that values automation and innovation.
Job Title: Java Developer
Location: Remote
Years of Experience: 2–3
Desired Skills: Java, Spring Boot, Microservices, PostgreSQL, TimescaleDB, Kafka, AWS (ECS, EKS, API Gateway, Secrets Manager)
Your key responsibilities:
- Design and build microservices and backend systems using Java (Spring Boot).
- Work with PostgreSQL/TimescaleDB to design and optimize data storage solutions.
- Develop event-driven systems leveraging Kafka for messaging and real-time processing.
- Collaborate with DevOps teams to deploy, scale, and manage services on AWS (ECS, EKS, API Gateway, Secrets Manager).
- Write unit, integration, and end-to-end tests to ensure reliability and quality.
- Participate in code reviews and contribute to architectural discussions.
- Troubleshoot and resolve issues in development, testing, and production environments.
What you’ll bring
Required:
- 2–3 years of hands-on software development experience with Java.
- Strong expertise in Spring Boot and microservices architecture.
- Working knowledge of PostgreSQL; experience with TimescaleDB preferred.
- Familiarity with Kafka for distributed messaging.
- Experience or exposure to AWS services (ECS, EKS, API Gateway, Secrets Manager).
- Understanding of REST APIs and modern API design.
- Good grasp of Object-Oriented Design and design patterns.
- Experience with Git and CI/CD tools.
- Strong problem-solving, analytical, and communication skills.
Nice to have:
- Exposure to Docker and Kubernetes.
- Knowledge of monitoring and logging tools (Prometheus, Grafana, ELK).
- Experience working in Agile environments.

🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
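As a hedged sketch of the connector work described above, here is the shape of a small third-party API client with retries and normalization. The CRM endpoint, token, and field names are made up for illustration.

```python
# Sketch of a small third-party connector: fetch contacts from a (hypothetical)
# CRM API with retries, and normalize them for internal use.
# The base URL, auth header, and field names are placeholders.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(token: str) -> requests.Session:
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    retries = Retry(total=3, backoff_factor=0.5, status_forcelist=[429, 500, 502, 503])
    session.mount("https://", HTTPAdapter(max_retries=retries))
    return session

def fetch_contacts(session: requests.Session, page: int = 1) -> list[dict]:
    resp = session.get(
        "https://api.example-crm.com/v1/contacts",
        params={"page": page, "per_page": 100},
        timeout=10,
    )
    resp.raise_for_status()
    # Normalize external fields to the internal shape used downstream.
    return [
        {"name": c.get("full_name"), "email": c.get("email"), "source": "example-crm"}
        for c in resp.json().get("data", [])
    ]
```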
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
Position Overview:
We are seeking a hands-on Engineering Lead with a strong background in cloud-native application development, microservices architecture, and team leadership. The ideal candidate will have a proven track record of delivering complex enterprise-grade applications and will be capable of leading a large team to build scalable, secure, and high-performance systems. This person will not only be a technical expert but also an effective people manager, fostering growth and collaboration within their team.
Key Responsibilities:
- Lead by example, mentor junior engineers, and contribute to team knowledge-sharing efforts.
- Provide guidance on best practices, architecture, and development processes.
- Drive the design and implementation of cloud-native enterprise applications, ensuring scalability, reliability, and maintainability.
- Champion the adoption of microservices principles and design patterns across the engineering team.
- Maintain a hands-on approach in software development, contributing directly to code while balancing leadership responsibilities.
- Collaborate with cross-functional teams (Product, UI/UX, DevOps, QA, Security) to ensure successful delivery of features and enhancements.
- Continuously evaluate and improve the development process, from CI/CD pipelines to code quality and testing.
- Ensure application security best practices are followed, addressing vulnerabilities and potential threats in a proactive manner.
- Help define technical roadmaps and provide input on architectural decisions that meet both current and future customer needs.
- Foster a culture of collaboration, continuous learning, and innovation within the engineering team.
Required Skills & Experience:
Technical Skills:
- Core Technologies: Strong expertise in Node.js and JavaScript, with the ability to pick up new languages and technologies as required.
- Cloud Expertise: Hands-on experience with cloud technologies, particularly AWS, Azure, or Google Cloud Platform (GCP).
- Microservices Architecture: Proven experience in building and maintaining cloud-native, microservices-based applications.
- Security Awareness: Deep understanding of security principles, especially in the context of developing enterprise applications.
- Development Tools: Proficiency in version control systems (Git), CI/CD tools, containerization (Docker), and orchestration platforms (Kubernetes).
- Scalability & Performance: Strong knowledge of designing systems for scalability and performance, with experience managing large-scale systems.
Communication Skills:
- Exceptional verbal and written communication skills, with the ability to articulate complex business concepts to both technical and non-technical stakeholders.
- Strong presentation skills to effectively convey technical information and business value to clients.
- Ability to collaborate effectively with cross-functional teams and clients across different time zones and cultural backgrounds.
Experience:
- At least 5-10 years of experience in software engineering with at least 2-3 years in a leadership role managing a team of developers.
- Proven track record of delivering performant and scalable applications.
- Experience working in client-facing roles, providing technical consulting, and managing client expectations.
Leadership Skills:
- Proven ability to manage, mentor, and motivate a team of engineers.
- Strong communication skills, capable of explaining complex technical concepts to non-technical stakeholders.
- Collaborative mindset with the ability to work effectively with cross-functional teams.
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - culture that helps you grow exponentially.
Qualifications - No bachelor's degree required. Good communication skills are a must!
ABOUT US:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 80 strong from around the world, radically open-minded and committed to excellence, respecting one another, and pushing our boundaries further than ever before.

Job Title: Backend Developer (Full Time)
Location: Remote
Interview: Virtual Interview
Experience Required: 3+ Years
Backend / API Development (About the Role)
- Strong proficiency in Python (FastAPI) or Node.js (Express) (Python preferred).
- Proven experience in designing, developing, and integrating APIs for production-grade applications.
- Hands-on experience deploying to serverless platforms such as Cloudflare Workers, Firebase Functions, or Google Cloud Functions.
- Solid understanding of Google Cloud backend services (Cloud Run, Cloud Functions, Secret Manager, IAM roles).
- Expertise in API key and secrets management, ensuring compliance with security best practices.
- Skilled in secure API development, including HTTPS, authentication/authorization, input validation, and rate limiting.
- Track record of delivering scalable, high-quality backend systems through impactful projects in production environments.
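To give a flavour of the secure API development bullet, here is a hedged FastAPI sketch of an API-key check implemented as a dependency. The header name, key source, and route are illustrative; in production the keys would come from a secret manager rather than an in-memory dict.

```python
# Sketch: protect an endpoint with an API-key header check as a FastAPI dependency.
# Header name, key storage, and route are illustrative only; in production the keys
# would be loaded from a secret manager (e.g. Google Secret Manager), not a dict.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

VALID_KEYS = {"demo-key-123": "demo-tenant"}  # placeholder key-to-tenant mapping

def require_api_key(x_api_key: str = Header(default="")) -> str:
    tenant = VALID_KEYS.get(x_api_key)
    if tenant is None:
        raise HTTPException(status_code=401, detail="invalid or missing API key")
    return tenant  # the resolved tenant is injected into the endpoint

@app.get("/v1/reports")
def list_reports(tenant: str = Depends(require_api_key)) -> dict:
    # Input validation and rate limiting would sit alongside this in a real service.
    return {"tenant": tenant, "reports": []}
```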
Job Description For Database Engineer
Job Title: Database Engineer (MySQL)
Company: Mydbops
About Us:
As a seasoned industry leader for 8 years in open-source database management, we specialise in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Mydbops takes pride in being a PCI DSS-certified and ISO-certified company, reflecting our unwavering commitment to maintaining the highest standards of security and operational excellence.
Role Overview:
You will work in a fast-paced environment where we are responsible for the most critical systems. Our external teams (customers) count on us to keep their MySQL databases running, and we are vital to the success of their business. You will troubleshoot and resolve customer issues related to database availability and performance. You will develop relationships with customers, understand and fulfil their needs, and maintain their satisfaction through regular communication and engagement with their environments.
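For a feel of the query-optimization work, the sketch below runs EXPLAIN on a candidate statement with PyMySQL and flags full table scans. The connection details and query are placeholders, and this is a simplification of how a DBE would actually triage a slow query.

```python
# Sketch: run EXPLAIN on a candidate query with PyMySQL and flag full table scans.
# Connection settings and the query are placeholders only.
import pymysql

def explain(query: str) -> list[dict]:
    conn = pymysql.connect(
        host="127.0.0.1", user="dbe", password="placeholder", database="app",
        cursorclass=pymysql.cursors.DictCursor,
    )
    try:
        with conn.cursor() as cur:
            cur.execute("EXPLAIN " + query)
            return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for row in explain("SELECT * FROM orders WHERE customer_id = 42"):
        if row.get("type") == "ALL":  # full table scan: likely a missing or unused index
            print(f"Full scan on {row.get('table')}: consider an index on the filter column")
        print(row)
```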
Responsibilities:
- The major focus is on handling all the alerts in MDS MySQL.
- Optimising complex SQL queries.
- On-demand query optimization.
- Monthly query optimization assignments (based on customer list).
- Joining client calls on performance issues and other tasks.
- Reaching out to DBE-2s and DBREs for escalations.
- Assisting and mentoring ADBEs.
- Efficient troubleshooting using MySQL MDS internal tools.
- Writing technical and process documents.
- Managing client-based operations documents to ease scheduled and repeated tasks for the database engineering team.
Skills Required :
- Teamwork in a fast-paced environment.
- Alarm resolution expert
- Good command of SQL.
- Linux commands for day-to-day operations.
- Kernel tuning and Linux performance tools
- Expertise in MySQL backup tools
- mysqldump
- mydumper/myloader
- Xtrabackup.
- mariabackup
- Knowledge on AWS basic operations
- log collections
- S3 Push
- AWS Workspace
- Deep skills in query optimization, index tuning, and implementing database features for SQL optimisation.
- Polite, friendly, and professional; this position requires significant customer interaction and teamwork.
- Process-oriented; teamwork plays a pivotal role.
- Excellent written and spoken English communication skills.
- Strong work ethic and openness to feedback from others.
- Strong understanding of MySQL architecture, storage engines (InnoDB, MyISAM), and replication (Binlog Based And GTID).
- Good understanding of database security practices (users, roles, encryption, authentication mechanisms).
- Strong problem-solving and troubleshooting skills.
- Experience with monitoring tools such as PMM (Percona Monitoring and Management), Prometheus, Grafana
Good to Have (Optional but Preferred): Experience with AWS RDS/Aurora MySQL.
Why Join Us:
- Opportunity to work in a dynamic and growing industry.
- Learning and development opportunities to enhance your career.
- A collaborative work environment with a supportive team.
Job Details:
- Job Type: Full-time opportunity
- Work Days - 6 days
- Work time: Rotational shift
- Mode of Employment - Work From Home



About Us
We are building the next generation of AI-powered products and platforms that redefine how businesses digitize, automate, and scale. Our flagship solutions span eCommerce, financial services, and enterprise automation, with an emerging focus on commercializing cutting-edge AI services across Grok, OpenAI, and the Azure Cloud ecosystem.
Role Overview
We are seeking a highly skilled Full-Stack Developer with a strong foundation in e-commerce product development and deep expertise in backend engineering using Python. The ideal candidate is passionate about designing scalable systems, has hands-on experience with cloud-native architectures, and is eager to drive the commercialization of AI-driven services and platforms.
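As a hedged sketch of the AI-integration side (not the company's actual implementation), the snippet below wraps an OpenAI chat completion behind a small helper that a product-description feature might call. The model name, prompt, and environment variable are assumptions; an Azure OpenAI deployment would swap in the AzureOpenAI client instead.

```python
# Sketch: call the OpenAI Chat Completions API to draft a product description.
# Model name, prompt, and OPENAI_API_KEY environment variable are assumptions.
import os

from openai import OpenAI  # pip install openai (v1+ client)

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def draft_description(product_name: str, features: list[str]) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": "You write concise e-commerce product copy."},
            {"role": "user", "content": f"Product: {product_name}. Features: {', '.join(features)}."},
        ],
        max_tokens=150,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(draft_description("Trail Runner 2", ["waterproof", "lightweight", "vegan materials"]))
```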
Key Responsibilities
- Design, build, and scale full-stack applications with a strong emphasis on backend services (Python, Django/FastAPI/Flask).
- Lead development of eCommerce features including product catalogs, payments, order management, and personalized customer experiences.
- Integrate and operationalize AI services across Grok, OpenAI APIs, and Azure AI services to deliver intelligent workflows and user experiences.
- Build and maintain secure, scalable APIs and data pipelines for real-time analytics and automation.
- Collaborate with product, design, and AI research teams to bring experimental features into production.
- Ensure systems are cloud-ready (Azure preferred) with CI/CD, containerization (Docker/Kubernetes), and strong monitoring practices.
- Contribute to frontend development (React, Angular, or Vue) to deliver seamless, responsive, and intuitive user experiences.
- Champion best practices in coding, testing, DevOps, and Responsible AI integration.
Required Skills & Experience
- 5+ years of professional full-stack development experience.
- Proven track record in eCommerce product development (payments, cart, checkout, multi-tenant stores).
- Strong backend expertise in Python (Django, FastAPI, Flask).
- Experience with cloud services (Azure preferred; AWS/GCP is a plus).
- Hands-on with AI/ML integration using APIs like OpenAI, Grok, Azure Cognitive Services.
- Solid understanding of databases (SQL & NoSQL), caching, and API design.
- Familiarity with frontend frameworks such as React, Angular, or Vue.
- Experience with DevOps practices: GitHub/GitLab, CI/CD, Docker, Kubernetes.
- Strong problem-solving skills, adaptability, and a product-first mindset.
Nice to Have
- Knowledge of vector databases, RAG pipelines, and LLM fine-tuning.
- Experience in scalable SaaS architectures and subscription platforms.
- Familiarity with C2PA, identity security, or compliance-driven development.
What We Offer
- Opportunity to shape the commercialization of AI-driven products in fast-growing markets.
- A high-impact role with autonomy and visibility.
- Competitive compensation, equity opportunities, and growth into leadership roles.
- Collaborative environment working with seasoned entrepreneurs, AI researchers, and cloud architects.

We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!
✅ Key Details
- Work Type: Freelance / Contract
- Location: Remote
- Time Zones: IST / EST only
- Domain: Data & AI, Cloud, Big Data, Machine Learning
- Collaboration: Work with industry leaders on innovative projects
🔹 Open Roles
1. Databricks – Senior Consultant
- Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
- Experience: 6+ years
2. Databricks – ML Engineer
- Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
- Experience: 4+ years
3. Databricks – Solution Architect
- Skills: Azure, GCP, AWS, CI/CD, MLOps
- Experience: 7+ years
4. Databricks – Solution Consultant
- Skills: SQL, Spark, BigQuery, Python, Scala
- Experience: 2+ years
✅ What We Offer
- Opportunity to work with top-tier professionals and clients
- Exposure to cutting-edge technologies and real-world data challenges
- Flexible remote work environment aligned with IST / EST time zones
- Competitive compensation and growth opportunities
📌 Skills We Value
Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark

Job Title: AI Developer/Engineer
Location: Remote
Employment Type: Full-time
About the Organization
We are a cutting-edge AI-powered startup that is revolutionizing data management and content generation. Our platform harnesses the power of generative AI and natural language processing to turn unstructured data into actionable insights, providing businesses with real-time, intelligent content and driving operational efficiency. As we scale, we are looking for an experienced lead architect to help design and build our next-generation AI-driven solutions.
About the Role
We are seeking an AI Developer to design, fine-tune, and deploy advanced Large Language Models (LLMs) and AI agents across healthcare and SMB workflows. You will work with cutting-edge technologies—OpenAI, Claude, LLaMA, Gemini, Grok—building robust pipelines and scalable solutions that directly impact real-world hospital use cases such as risk calculators, clinical protocol optimization, and intelligent decision support.
Key Responsibilities
- Build, fine-tune, and customize LLMs and AI agents for production-grade workflows
- Leverage Node.js for backend development and integration with various cloud services.
- Use AI tools and AI prompts to develop automated processes that enhance data management and client offerings
- Drive the evolution of deployment methodologies, ensuring that AI systems are continuously optimized, tested, and delivered in production-ready environments.
- Stay up-to-date with emerging AI technologies, cloud platforms, and development methodologies to continually evolve the platform’s capabilities.
- Integrate and manage vector databases such as FAISS and Pinecone (illustrated briefly after this list).
- Ensure scalability, performance, and compliance in all deployed AI systems.
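As a rough illustration of the vector-database responsibility above, the sketch below builds an in-memory FAISS index and runs a cosine-similarity search. The embedding dimension and random vectors are stand-ins for real model embeddings, and a managed store such as Pinecone would typically replace the local index in production.

```python
import numpy as np
import faiss  # pip install faiss-cpu

DIM = 384  # placeholder embedding dimension; depends on the embedding model in use

# Stand-in vectors; in practice these come from an embedding model.
doc_vectors = np.random.rand(1000, DIM).astype("float32")
faiss.normalize_L2(doc_vectors)      # normalize so inner product equals cosine similarity

index = faiss.IndexFlatIP(DIM)       # exact inner-product index
index.add(doc_vectors)

query = np.random.rand(1, DIM).astype("float32")
faiss.normalize_L2(query)

scores, ids = index.search(query, 5)  # top-5 most similar documents
print(list(zip(ids[0].tolist(), scores[0].tolist())))
```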
Required Qualifications
- 2–3 years of hands-on experience in AI/ML development or full-stack AI integration.
- Proven expertise in building Generative AI models and AI-powered applications, especially in a cloud environment.
- Strong experience with multi-cloud infrastructure and platforms.
- Proficiency with Node.js and modern backend frameworks for developing scalable solutions.
- In-depth understanding of AI prompts, natural language processing, and agent-based systems for enhancing decision-making processes.
- Familiarity with AI tools for model training, data processing, and real-time inference tasks.
- Experience working with hybrid cloud solutions, including private and public cloud integration for AI workloads.
- Strong problem-solving skills and a passion for innovation in AI and cloud technologies
- Knowledge of Agile delivery methodology.
- Experience with CI/CD pipeline deployment, with working knowledge of JIRA and GitHub for code deployment.
- Strong experience with LLMs, prompt engineering, and fine-tuning.
- Knowledge of vector databases (FAISS, Pinecone, Milvus, or similar).
Nice to Have
- Experience in healthcare AI, digital health, or clinical applications.
- Exposure to multi-agent AI frameworks.
What We Offer
- Flexible working hours.
- Collaborative, innovation-driven work culture.
- Growth opportunities in a rapidly evolving AI-first environment.

A LITTLE BIT ABOUT THE ROLE:
As a Backend Developer, you will be responsible for designing and building high-performance, scalable, and secure backend systems. You will focus on core application logic, APIs, data models, and integration with relational databases.
You’ll work closely with product managers, frontend developers, DevOps, and QA teams to deliver solutions that meet business needs. At Fountane, we follow a scrum-based Agile delivery cycle, and you will be an active contributor to building efficient and future-ready backend services.
WHAT YOU WILL BE DOING:
- Develop, test, and maintain scalable backend services using Golang.
- Design and manage relational database schemas, queries, and stored procedures in MS SQL.
- Build and optimize RESTful APIs/GraphQL endpoints for high performance and reliability.
- Write clean, efficient, and reusable code with proper documentation.
- Collaborate with cross-functional teams (frontend, product, DevOps) to deliver end-to-end solutions.
- Ensure security, scalability, and fault tolerance in backend systems.
- Continuously evaluate and introduce new tools and practices for backend efficiency.
- Troubleshoot, debug, and improve backend services and database performance.
WHAT YOU WILL NEED TO BE GREAT IN THIS ROLE:
Experience:
- Associate Level: Minimum 3+ years of backend development experience.
- Senior Level: Minimum 5+ years of backend development experience.
Technical Skills:
- Strong proficiency in Golang for backend development.
- Hands-on experience with MS SQL (queries, indexing, optimization, schema design).
- Solid understanding of API development (REST/GraphQL).
- Experience in microservices architecture and distributed systems.
- Familiarity with Docker, Kubernetes, and CI/CD pipelines is a plus.
- Knowledge of cloud environments (AWS/GCP/Azure) is preferred.
Education:
- Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
SOFT SKILLS:
- Collaboration - Ability to work in teams across the world
- Adaptability - Situations can be unexpected, and you need to adapt quickly
- Open-mindedness - Expect to see things outside the ordinary
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance for spouses, kids, and parents.
- PF/ESI or equivalent
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - culture that helps you grow exponentially!
- Qualifications - No bachelor's degree required. Good communication skills are a must!
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 120+ people from around the world who are radically open-minded, believe in excellence, respect one another, and push our boundaries further than ever before.

We are inviting a Fullstack Developer who excels at building modern web and mobile applications, with deep backend experience in Node.js and strong frontend proficiency in Next.js and React Native (Expo). You’ll work with a team of designers, product leads, and developers to bring impactful climate tech to life.
Location: Mumbai & Vicinity (India)
Responsibilities:
- Design, develop, and maintain scalable backend services using Node.js.
- Develop responsive and high-performance web applications using Next.js.
- Build and deploy mobile applications using React Native and Expo.
- Collaborate with UX Designers, Architects, and other Developers to implement full-stack web and mobile solutions.
- Perform data modeling and database management using PostgreSQL and Prisma.
- Ensure the performance, quality, and responsiveness of applications.
- Troubleshoot and debug applications to optimize performance.
- Write clean, maintainable, and well-documented code.
- Participate in code reviews and contribute to continuous improvement of development processes.
- Apply AI-enhanced developer tools like Cursor, Copilot, or similar to boost development velocity and code quality.
Required Skills and Experience:
- 2+ years of experience in full-stack JavaScript development.
- Strong proficiency in backend development using Node.js.
- Demonstrated experience with frontend technologies such as Next.js and React Native.
- Experience with PostgreSQL and Prisma for database management and data modeling.
- Experience with deploying React Native applications using Expo.
- Solid understanding of RESTful APIs and how to integrate them with front-end applications.
- Familiarity with modern JavaScript (ES6+), HTML5, and CSS3.
- Strong understanding of software development best practices.
- Proficiency in version control systems such as Git.
Additional Relevant Skills and Experience:
- Experience with mapping modules such as ArcGIS and Google Earth Engine.
- Experience with TypeScript.
- Experience with CI/CD pipelines.
- Understanding of server-side rendering and static site generation.
- Good eye for design and UX principles.
- Experience working in Agile/Scrum environments.
Good to Have:
- Experience with WebSockets and real-time applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with Docker and containerized applications.
- Knowledge of performance optimization techniques.
- Strong problem-solving skills and ability to work independently or as part of a team.
We Offer:
- Work on Open Source Projects
- Competitive Salary based on Location
- Flexible working hours
- 4 weeks of paid leave/year
- Work from home
Plant-for-the-Planet is a global, youth-led non-profit with a mission to restore ecosystems through tree planting and climate justice advocacy. Our tech team, spanning five continents, builds scalable, open-source tools to support environmental action at a global scale.
Learn more: https://www.plant-for-the-planet.org

We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform works “magic” on preconstruction workflows, taking them from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of AWS, Razorpay, and Microsoft Startup programs, having access to rich talent to discuss and brainstorm ideas.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
Please send your resume to talent@springer. capital


Position: Lead Backend Engineer
Location: Remote
Experience: 10+ Years
Budget: 1.7 LPM
Employment Type: Contract
Required Skills & Qualifications:
- 10+ years of proven backend engineering experience.
- Strong proficiency in Python.
- Expertise in SQL (Postgres) and database optimization.
- Hands-on experience with OpenAI APIs.
- Strong command of FastAPI and microservices architecture (a minimal service sketch follows the Nice to Have section).
- Solid knowledge of debugging, troubleshooting, and performance tuning.
Nice to Have:
- Experience with Agentic Systems or ability to quickly adopt them.
- Exposure to modern CI/CD pipelines, containerization (Docker/Kubernetes), and cloud platforms (AWS, Azure, or GCP).
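For orientation, here is a minimal FastAPI microservice in the spirit of the requirements above. The resource name and in-memory store are illustrative only; a real service would sit on Postgres and carry authentication, migrations, and observability.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # service and resource names are illustrative

class Order(BaseModel):
    id: int
    item: str
    quantity: int

ORDERS: dict[int, Order] = {}  # in-memory stand-in for a Postgres table

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    """Store an order and echo it back to the caller."""
    ORDERS[order.id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    """Fetch a single order or return 404 if it does not exist."""
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Saved as main.py, this runs locally with `uvicorn main:app --reload`.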
Your Impact
- Build scalable backend services.
- Design, implement, and maintain databases, ensuring data integrity, security, and efficient retrieval.
- Implement the core logic that makes applications work, handling data processing, user requests, and system operations
- Contribute to the architecture and design of new product features
- Optimize systems for performance, scalability, and security
- Stay up-to-date with new technologies and frameworks, contributing to the advancement of software development practices
- Work closely with product managers and designers to turn ideas into reality and shape the product roadmap.
What skills do you need?
- 4+ years of experience in backend development, especially building robust APIs using Node.js, Express.js, and MySQL
- Strong command of JavaScript and understanding of its quirks and best practices
- Ability to think strategically when designing systems—not just how to build, but why
- Exposure to system design and interest in building scalable, high-availability systems
- Prior work on B2C applications with a focus on performance and user experience
- Ensure that applications can handle increasing loads and maintain performance, even under heavy traffic
- Work with complex queries for performing sophisticated data manipulation, analysis, and reporting.
- Knowledge of Sequelize, MongoDB and AWS would be an advantage.
- Experience in optimizing backend systems for speed and scalability.


Company Overview
MedExpert, Chennai is a pioneering healthcare consulting firm specializing in US Medical Billing and Revenue Cycle Management (RCM) solutions tailored for healthcare providers and Emergency Medical Services (EMS) organizations. Since its inception in 2023, MedExpert has rapidly grown into a team of over 950 experts passionate about transforming the healthcare industry through technology-driven solutions.
With a focus on client success and operational efficiency, we empower healthcare providers to streamline billing processes, optimize revenue, and maintain their focus on quality patient care.
We pride ourselves on fostering a collaborative and growth-oriented work environment where every team member is empowered to contribute to projects that shape the future. Our culture champions Agile principles and is dedicated to high performance, continuous learning, and technological advancement.
We are seeking a dedicated and experienced Backend Developer to join our team. This role is integral to helping our development teams maintain agility, productivity, and the highest standards in delivering solutions to meet our clients’ evolving needs.
About the role
We are seeking a talented Senior PHP Developer with 8+ years of overall experience to join our growing team. The ideal candidate must be a senior full-stack developer with 8+ years of PHP experience, strong skills in JavaScript, design patterns, AWS, and microservices.
They must excel in Agile environments, RESTful API development, DevOps, and have strong communication skills, problem-solving abilities, and a passion for learning emerging technologies.
Roles and Responsibilities
- Join our agile team and drive the development of robust and scalable web applications.
- Work closely with developers, technical leads, solution architects, QA, and product owners to deliver high-quality solutions in a dynamic environment.
- Lead the design and development of complex backend systems using PHP and related frameworks.
- Architect and implement RESTful APIs and Web Services.
- Develop and deploy applications using AWS cloud technologies and infrastructure.
- Ensure code quality through unit testing frameworks and adherence to best practices.
- Contribute to DevOps pipelines and promote continuous integration and delivery.
- Participate in code reviews, mentor junior developers, and contribute to technical discussions.
Required Skills and Qualifications
Technical Skills
- 8+ years of professional experience with PHP programming and proficiency with PHP Frameworks (e.g., Yii).
- Strong skills in HTML, CSS, JavaScript, AJAX, with hands-on experience in libraries like jQuery, Bootstrap.
- 8+ years of experience in software design with strong knowledge of design patterns and best practices.
- Proficiency in SOLID principles, Dependency Injection (DI), Repository Pattern, Unit of Work Pattern, CQRS, Microservices Architecture, Event Sourcing, DDD Patterns, Singleton, MVC, MVVM.
- Proficiency with JavaScript frameworks (Angular, React, Vue).
- Solid experience with DBMSs (MS SQL Server, MySQL, PostgreSQL) and strong SQL skills.
- Strong experience with AWS cloud and event-driven architectures.
- Proficiency with Git and version control.
- Experience with Docker containers.
- Working knowledge of JIRA/Confluence or similar tools.
- At least 8+ years of development experience in C# and .NET technologies (.NET Core & Framework).
Development Stack
- JavaScript, PHP, Visual Studio Code/PhpStorm
- DBMSs: MS SQL Server, MySQL, PostgreSQL
- JIRA, Confluence, Git
- HTML, CSS, JavaScript, JSON, XML
- JavaScript & PHP Frameworks
- Docker & Unit testing frameworks
Non-Functional Requirements
- Solid understanding of SLA, uptime, performance optimization, security, cloud infrastructure, CI/CD practices.
Soft Skills
- Excellent problem-solving skills.
- Strong written and verbal communication skills.
Preferred Skills and Qualifications
- Hands-on experience with JavaScript data grids like AG Grid, Angular, PHP, and Yii frameworks.
- Working knowledge of HTML, CSS, JavaScript, JSON, XML, JavaScript frameworks, and PHP frameworks.
- Strong experience with Entity Framework programming.
Why Join Us?
At MedExpert, you will have the opportunity to make an impact by shaping Agile practices and driving project success. We provide an environment where innovation, collaboration, and personal growth are at the forefront.
Joining MedExpert Billing and Consulting gives you the chance to contribute meaningfully to a positive and supportive work environment, where your expertise will make a difference not just within our team but across the EMS industry.

About Us :
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values :
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement :
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.
Role: Senior Integration Engineer
Location: Remote/Delhi NCR
Experience: 4-8 years
Position Overview :
We are seeking a Senior Integration Engineer with deep expertise in building and managing integrations across Finance, ERP, and business systems. The ideal candidate will have both technical proficiency and strong business understanding, enabling them to translate finance team needs into robust, scalable, and fault-tolerant solutions.
Key Responsibilities:
- Design, develop, and maintain integrations between financial systems, ERPs, and related applications (e.g., expense management, commissions, accounting, sales)
- Gather requirements from Finance and Business stakeholders, analyze pain points, and translate them into effective integration solutions
- Build and support integrations using SOAP and REST APIs, ensuring reliability, scalability, and best practices for logging, error handling, and edge cases
- Develop, debug, and maintain workflows and automations in platforms such as Workato and Exactly Connect
- Support and troubleshoot NetSuite SuiteScript, Suiteflows, and related ERP customizations
- Write, optimize, and execute queries for Zuora (ZQL, Business Objects) and support invoice template customization (HTML)
- Implement integrations leveraging AWS (RDS, S3) and SFTP for secure and scalable data exchange (a brief sketch follows this list)
- Perform database operations and scripting using Python and JavaScript for transformation, validation, and automation tasks
- Provide functional support and debugging for finance tools such as Concur and Coupa
- Ensure integration architecture follows best practices for fault tolerance, monitoring, and maintainability
- Collaborate cross-functionally with Finance, Business, and IT teams to align technology solutions with business goals.
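As a small illustration of the integration hygiene described above (logging, error handling, S3-based exchange), the sketch below uploads an export file to S3 with boto3. The bucket name is a placeholder, and credentials are assumed to come from the environment or an IAM role rather than being hard-coded.

```python
import logging

import boto3
from botocore.exceptions import ClientError

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("finance-integration")

s3 = boto3.client("s3")  # credentials resolved from the environment / IAM role

def push_export(local_path: str, bucket: str = "finance-exports-demo") -> bool:
    """Upload a finance export to S3 with basic logging and error handling."""
    key = local_path.rsplit("/", 1)[-1]
    try:
        s3.upload_file(local_path, bucket, key)
        log.info("uploaded %s to s3://%s/%s", local_path, bucket, key)
        return True
    except ClientError as err:
        # Surface the failure so the calling workflow can retry or alert.
        log.error("upload to s3://%s/%s failed: %s", bucket, key, err)
        return False
```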
Qualifications:
- 3-8 years of experience in software/system integration with strong exposure to Finance and ERP systems
- Proven experience integrating ERP systems (e.g., NetSuite, Zuora, Coupa, Concur) with financial tools
- Strong understanding of finance and business processes: accounting, commissions, expense management, sales operations
- Hands-on experience with SOAP, REST APIs, Workato, AWS services, SFTP
- Working knowledge of NetSuite SuiteScript, Suiteflows, and Zuora queries (ZQL, Business Objects, invoice templates)
- Proficiency with databases, Python, JavaScript, and SQL query optimization
- Familiarity with Concur and Coupa functionality
- Strong debugging, problem-solving, and requirement-gathering skills
- Excellent communication skills and ability to work with cross-functional business teams.
Preferred Skills:
- Experience with integration design patterns and frameworks
- Exposure to CI/CD pipelines for integration deployments
- Knowledge of business and operations practices in financial systems and finance teams

Description
Job Description:
Company: Springer Capital
Type: Internship (Remote, Part-Time/Full-Time)
Duration: 3–6 months
Start Date: Rolling
Compensation:
About the role:
We’re building high-performance backend systems that power our financial and ESG intelligence platforms and we want you on the team. As a Backend Engineering Intern, you’ll help us develop scalable APIs, automate data pipelines, and deploy secure cloud infrastructure. This is your chance to work alongside experienced engineers, contribute to real products, and see your code go live.
What You'll Work On:
As a Backend Engineering Intern, you’ll be shaping the systems that power financial insights.
Engineering scalable backend services in Python, Node.js, or Go
Designing and integrating RESTful APIs and microservices
Working with PostgreSQL, MongoDB, or Redis for data persistence
Deploying on AWS/GCP, using Docker, and learning Kubernetes on the fly
Automating infrastructure and shipping faster with CI/CD pipelines
Collaborating with a product-focused team that values fast iteration
What We’re Looking For:
A builder mindset – you like writing clean, efficient code that works
Strong grasp of backend languages (Python, Java, Node, etc.)
Understanding of cloud platforms and containerization basics
Basic knowledge of databases and version control
Students or self-taught engineers actively learning and building
Preferred skills:
Experience with serverless or event-driven architectures
Familiarity with DevOps tools or monitoring systems
A curious mind for AI/ML, fintech, or real-time analytics
What You’ll Get:
Real-world experience solving core backend problems
Autonomy and ownership of live features
Mentorship from engineers who’ve built at top-tier startups
A chance to grow into a full-time offer
Description
Job Summary:
Join Springer Capital as a Cybersecurity & Cloud Intern to help architect, secure, and automate our cloud-based backend systems powering next-generation investment platforms.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm. We leverage cutting-edge digital solutions to uncover high-potential opportunities, transforming traditional finance through innovation, agility, and a relentless commitment to security and scalability.
Job Highlights
Work hands-on with AWS, Azure, or GCP to design and deploy secure, scalable backend infrastructure.
Collaborate with DevOps and engineering teams to embed security best practices in CI/CD pipelines.
Gain experience in real-world incident response, vulnerability assessment, and automated monitoring.
Drive meaningful impact on our security posture and cloud strategy from Day 1.
Enjoy a fully remote, flexible internship with global teammates.
Responsibilities
Assist in architecting and provisioning cloud resources (VMs, containers, serverless functions) with strict security controls.
Implement identity and access management, network segmentation, encryption, and logging best practices.
Develop and maintain automation scripts for security monitoring, patch management, and incident alerts.
Support vulnerability scanning, penetration testing, and remediation tracking.
Document cloud architectures, security configurations, and incident response procedures.
Partner with backend developers to ensure secure API gateways, databases, and storage services.
What We Offer
Mentorship: Learn directly from senior security engineers and cloud architects.
Training & Certifications: Access to online courses and support for AWS/Azure security certifications.
Impactful Projects: Contribute to critical security and cloud initiatives that safeguard our digital assets.
Remote-First Culture: Flexible hours and the freedom to collaborate from anywhere.
Career Growth: Build a strong foundation for a future in cybersecurity, cloud engineering, or DevSecOps.
Requirements
Pursuing or recently graduated in Computer Science, Cybersecurity, Information Technology, or a related discipline.
Familiarity with at least one major cloud platform (AWS, Azure, or GCP).
Understanding of core security principles: IAM, network security, encryption, and logging.
Scripting experience in Python, PowerShell, or Bash for automation tasks.
Strong analytical, problem-solving, and communication skills.
A proactive learner mindset and passion for securing cloud environments.
About Springer Capital
Springer Capital blends financial expertise with digital innovation to redefine asset management. Our mission is to drive exceptional value by implementing robust, technology-driven strategies that transform investment landscapes. We champion a culture of creativity, collaboration, and continuous improvement.
Location: Global (Remote)
Job Type: Internship
Pay: $50 USD per month
Work Location: Remote

We are seeking an experienced Java Developer to design, develop, and maintain high-performance Java applications. The ideal candidate will have 5 to 7 years of hands-on experience in Java development, with proficiency in building scalable backend systems, integrating APIs, and working within agile teams.
Responsibilities:
- Design, develop, and maintain Java-based applications, ensuring high performance and responsiveness.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and integrate RESTful APIs using frameworks like Spring Boot.
- Write clean, maintainable, and efficient code following best practices.
- Conduct code reviews and provide constructive feedback to team members.
- Participate in the full software development lifecycle, including requirement analysis, design, implementation, testing, and deployment.
- Troubleshoot and resolve technical issues across development, testing, and production environments.
- Ensure application security and data protection measures are in place.
Requirements:
- 5-7 years of experience.
Required Qualifications:
- Java/JEE/Jakarta EE: Core Java, Multithreading, Concurrency, Collections, OOP.
- Microservices: MicroProfile, Open Liberty, RESTful APIs (JAX-RS), JSONB/P.
- Messaging: Apache Kafka (Producers, Consumers, Streams).
- Caching: Redis (Cache Management, Data Structures).
- Database: JDBC, SQL, Data Source Configuration, Transaction Management.
- Web Technologies: WebSockets, Servlets, JSP.
Frontend Development:
- JavaScript, JSP.
- Frameworks: ReactJS, React Native, Bootstrap, jQuery.
- Libraries: jQuery.
- Web Fundamentals: HTML5, CSS3, JSON, XML.
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources
▪ Develop ETL processes to collect, clean, and transform data from internal and external systems
▪ Support integration of data into dashboards, analytics tools, and reporting systems
▪ Collaborate with data analysts and software developers to improve data accessibility and performance
▪ Document workflows and maintain data infrastructure best practices
▪ Assist in identifying opportunities to automate repetitive data tasks
Job Title: Sr. Node.js Developer
Location: Ahmedabad, Gujarat
Job Type: Full Time
Department: MEAN Stack
About Simform:
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market.
Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.
Role Overview:
We are looking for a seasoned Sr. Node.js Developer who not only possesses extensive backend expertise but also demonstrates proficiency in system design, cloud services, microservices architecture, and containerization. Additionally, a good understanding of the frontend tech stack, enough to support frontend developers, is highly valued.
Key Responsibilities:
- Develop reusable, testable, maintainable, and scalable code with a focus on unit testing.
- Implement robust security measures and data protection mechanisms across projects.
- Champion the implementation of design patterns such as Test-Driven Development (TDD) and Behavior-Driven Development (BDD).
- Actively participate in architecture design sessions and sprint planning meetings, contributing valuable insights.
- Lead code reviews, providing insightful comments and guidance to team members.
- Mentor team members, assisting in debugging complex issues and providing optimal solutions.
Required Skills & Qualifications:
- Excellent written and verbal communication skills.
- Experience: 4+ years
- Advanced knowledge of JavaScript and TypeScript, including core concepts and best practices.
- Extensive experience in developing highly scalable services and APIs using various protocols.
- Proficiency in data modeling and optimizing database performance in both SQL and NoSQL databases.
- Hands-on experience with PostgreSQL and MongoDB, leveraging technologies like TypeORM, Sequelize, or Knex.
- Proficient in working with frameworks like NestJS, LoopBack, Express, and other TypeScript-based frameworks.
- Strong familiarity with unit testing libraries such as Jest, Mocha, and Chai.
- Expertise in code versioning using Git or Bitbucket.
- Practical experience with Docker for building and deploying microservices.
- Strong command of Linux, including familiarity with server configurations.
- Familiarity with queuing protocols and asynchronous messaging systems.
Preferred Qualification:
- Experience with frontend JavaScript concepts and frameworks such as ReactJS.
- Proficiency in designing and implementing cloud architectures, particularly on AWS services.
- Knowledge of GraphQL and its associated libraries like Apollo and Prisma.
- Hands-on experience with deployment pipelines and CI/CD processes.
- Experience with document, key/value, or other non-relational database systems like Elasticsearch, Redis, and DynamoDB.
- Ability to build AI-centric applications and work with machine learning models, Langchain, vector databases, embeddings, etc.
Why Join Us:
- Young Team, Thriving Culture
- Flat-hierarchical, friendly, engineering-oriented, and growth-focused culture.
- Well-balanced learning and growth opportunities
- Free health insurance.
- Office facilities with a game zone, in-office kitchen with affordable lunch service, and free snacks.
- Sponsorship for certifications/events and library service.
- Flexible work timing, leaves for life events, WFH, and hybrid options

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation (a condensed retrieval sketch follows this list).
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
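To make the retrieval side of the pipeline concrete, here is a condensed sketch of chunking, embedding, and cosine-similarity retrieval. The embed() function is a placeholder for whatever embedding API the team settles on (e.g., OpenAI embeddings), and a vector store such as MongoDB Atlas Vector Search would replace the in-memory NumPy matrix in production.

```python
import numpy as np

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks before embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def embed(chunks: list[str]) -> np.ndarray:
    """Placeholder: call the chosen embedding provider here and return an (n, dim) array."""
    raise NotImplementedError("wire this to the embeddings API in use")

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 5) -> list[int]:
    """Return indices of the k chunks most similar to the query by cosine similarity."""
    doc_norm = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    query_norm = query_vec / np.linalg.norm(query_vec)
    scores = doc_norm @ query_norm
    return np.argsort(scores)[::-1][:k].tolist()
```

The retrieved chunks would then be stitched into the chat prompt before the LLM call, which is the context-aware generation step mentioned above.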
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands-on with parameter-efficient fine-tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Location: Hybrid/ Remote
Openings: 2
Experience: 5–12 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities
Architect & Design:
- Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
- Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
- Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
- Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.
Development & Debugging:
- Write clean, maintainable, and efficient frontend code.
- Debug and troubleshoot code to ensure robust, high-performing applications.
- Develop reusable frontend libraries that can be leveraged across multiple projects.
AI Awareness (Preferred):
- Understand AI/ML fundamentals and how they can enhance frontend applications.
- Collaborate with teams integrating AI-based features into chat applications.
Collaboration & Reporting:
- Work closely with cross-functional teams to align on architecture and deliverables.
- Regularly report progress, identify risks, and propose mitigation strategies.
Quality Assurance:
- Implement unit tests and end-to-end tests to ensure code quality.
- Participate in code reviews and enforce best practices.
Required Skills
- 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
- Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
- Proficiency with Modern frameworks like React, Angular, or Node.js
- Backend familiarity with Java, Spring Boot (or similar technologies).
- Experience developing real-world, at-scale products.
- General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
- Strong problem-solving, debugging, and performance optimization skills.

Location: Hybrid/ Remote
Openings: 2
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or related field
Job Responsibilities
Problem Solving & Optimization:
- Analyze and resolve complex technical and application issues.
- Optimize application performance, scalability, and reliability.
Design & Develop:
- Build, test, and deploy scalable full-stack applications with high performance and security.
- Develop clean, reusable, and maintainable code for both frontend and backend.
AI Integration (Preferred):
- Collaborate with the team to integrate AI/ML models into applications where applicable.
- Explore Generative AI, NLP, or machine learning solutions that enhance product capabilities.
Technical Leadership & Mentorship:
- Provide guidance, mentorship, and code reviews for junior developers.
- Foster a culture of technical excellence and knowledge sharing.
Agile & Delivery Management:
- Participate in Agile ceremonies (sprint planning, stand-ups, retrospectives).
- Define and scope backlog items, track progress, and ensure timely delivery.
Collaboration:
- Work closely with cross-functional teams (product managers, designers, QA) to deliver high-quality solutions.
- Coordinate with geographically distributed teams.
Quality Assurance & Security:
- Conduct peer reviews of designs and code to ensure best practices.
- Implement security measures and ensure compliance with industry standards.
Innovation & Continuous Improvement:
- Identify areas for improvement in the software development lifecycle.
- Stay updated with the latest tech trends, especially in AI and cloud technologies, and recommend new tools or frameworks.
Required Skills
- Strong proficiency in JavaScript, HTML5, CSS3
- Hands-on expertise with frontend frameworks like React, Angular, or Vue.js
- Backend development experience with Java, Spring Boot (Node.js is a plus)
- Knowledge of REST APIs, microservices, and scalable architectures
- Familiarity with cloud platforms (AWS, Azure, or GCP)
- Experience with Agile/Scrum methodologies and JIRA for project tracking
- Proficiency in Git and version control best practices
- Strong debugging, performance optimization, and problem-solving skills
- Ability to analyze customer requirements and translate them into technical specifications

Location: Hybrid/ Remote
Openings: 5
Experience: 0–2 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities:
Backend Development & APIs
- Build microservices that provide REST APIs to power web frontends.
- Design clean, reusable, and scalable backend code meeting enterprise security standards.
- Conceptualize and implement optimized data storage solutions for high-performance systems.
Deployment & Cloud
- Deploy microservices using a common deployment framework on AWS and GCP.
- Inspect and optimize server code for speed, security, and scalability.
Frontend Integration
- Work on modern front-end frameworks to ensure seamless integration with back-end services.
- Develop reusable libraries for both frontend and backend codebases.
AI Awareness (Preferred)
- Understand how AI/ML or Generative AI can enhance enterprise software workflows.
- Collaborate with AI specialists to integrate AI-driven features where applicable.
Quality & Collaboration
- Participate in code reviews to maintain high code quality.
- Collaborate with teams using Agile/Scrum methodologies for rapid and structured delivery.
Required Skills:
- Proficiency in JavaScript (ES6+), Webpack, Mocha, Jest
- Experience with recent frontend frameworks – React.js, Redux.js, Node.js (or similar)
- Deep understanding of HTML5, CSS3, SASS/LESS, and Content Management Systems
- Ability to design and implement RESTful APIs and understand their impact on client-side applications
- Familiarity with cloud platforms (AWS, Azure, or GCP) – deployment, storage, and scalability
- Experience working with Agile and Scrum methodologies
- Strong backend expertise in Java, J2EE, Spring Boot is a plus but not mandatory
We’re looking for an Engineering Manager to guide our micro-service platform and mentor a fully remote backend team. You’ll blend hands-on technical ownership with people leadership—shaping architecture, driving cloud best practices, and coaching engineers in their careers and craft.
Key Responsibilities (what you'll own, by area):
Architecture & Delivery
• Define and evolve backend architecture built on Java 17+, Spring Boot 3, AWS (Containers, Lambdas, SQS, S3), Elasticsearch, PostgreSQL/MySQL, Databricks, Redis, etc.
• Lead design and code reviews; enforce best practices for testing, CI/CD, observability, security, and cost-efficient cloud operations.
• Drive technical roadmaps, ensuring scalability (billions of events, 99.9%+ uptime) and rapid feature delivery.
Team Leadership & Growth
• Manage and inspire a distributed team of 6-10 backend engineers across multiple time zones.
• Set clear growth objectives, run 1-on-1s, deliver feedback, and foster an inclusive, high-trust culture.
• Coach the team on AI-assisted development workflows (e.g., GitHub Copilot, LLM-based code review) to boost productivity and code quality.
Stakeholder Collaboration
• Act as technical liaison to Product, Frontend, SRE, and Data teams, translating business goals into resilient backend solutions.
• Communicate complex concepts to both technical and non-technical audiences; influence cross-functional decisions.
Technical Vision & Governance
• Own coding standards, architectural principles, and technology selection.
• Evaluate emerging tools and frameworks (especially around GenAI and cloud-native patterns) and create adoption strategies.
• Balance technical debt and new feature delivery through data-driven prioritization.
Required Qualifications:
● 8+ years designing, building, and operating distributed backend systems with Java & Spring Boot
● Proven experience leading or mentoring engineers; direct people-management a plus
● Expert knowledge of AWS services and cloud-native design patterns
● Hands-on mastery of Elasticsearch, PostgreSQL/MySQL, and Redis for high-volume, low-latency workloads
● Demonstrated success scaling systems to millions of users or billions of events
● Strong grasp of DevOps practices: containerization (Docker), CI/CD (GitHub Actions), observability stacks
● Excellent communication and stakeholder-management skills in a remote-first environment
Nice-to-Have:
● Hands-on experience with Datadog (APM, Logs, RUM) and a data-driven approach to debugging/performance tuning
● Startup experience—comfortable wearing multiple hats and juggling several projects simultaneously
● Prior title of Principal Engineer, Staff Engineer, or Engineering Manager in a high-growth SaaS company
● Familiarity with AI-assisted development tools (Copilot, CodeWhisperer, Cursor) and a track record of introducing them safely
Job Title: Engineering Manager (Java / Spring Boot, AWS) – Remote
Leadership Role
Location: Remote
Employment Type: Full-time
Position Overview: We are looking for an experienced and highly skilled Senior Data Engineer to join our team and help design, implement, and optimize data systems that support high-end analytical solutions for our clients. As a customer-centric Data Engineer, you will work closely with clients to understand their business needs and translate them into robust, scalable, and efficient technical solutions. You will be responsible for end-to-end data modelling, integration workflows, and data transformation processes while ensuring security, privacy, and compliance. In this role, you will also leverage the latest advancements in artificial intelligence, machine learning, and large language models (LLMs) to deliver high-impact solutions that drive business success. The ideal candidate will have a deep understanding of data infrastructure, optimization techniques, and cost-effective data management.
Key Responsibilities:
• Customer Collaboration:
– Partner with clients to gather and understand their business requirements, translating them into actionable technical specifications.
– Act as the primary technical consultant to guide clients through data challenges and deliver tailored solutions that drive value.
• Data Modeling & Integration:
– Design and implement scalable, efficient, and optimized data models to support business operations and analytical needs.
– Develop and maintain data integration workflows to seamlessly extract, transform, and load (ETL) data from various sources into data repositories (a brief illustrative sketch follows this list).
– Ensure smooth integration between multiple data sources and platforms, including cloud and on-premise systems.
• Data Processing & Optimization:
– Develop, optimize, and manage data processing pipelines to enable real-time and batch data processing at scale.
– Continuously evaluate and improve data processing performance, optimizing for throughput while minimizing infrastructure costs.
• Data Governance & Security:
– Implement and enforce data governance policies and best practices, ensuring data security, privacy, and compliance with relevant industry regulations (e.g., GDPR, HIPAA).
– Collaborate with security teams to safeguard sensitive data and maintain privacy controls across data environments.
• Cross-Functional Collaboration:
– Work closely with data engineers, data scientists, and business analysts to ensure that the data architecture aligns with organizational objectives and delivers actionable insights.
– Foster collaboration across teams to streamline data workflows and optimize solution delivery.
• Leveraging Advanced Technologies:
– Utilize AI, machine learning models, and large language models (LLMs) to automate processes, accelerate delivery, and provide smart, data-driven solutions to business challenges.
– Identify opportunities to apply cutting-edge technologies to improve the efficiency, speed, and quality of data processing and analytics.
• Cost Optimization:
– Proactively manage infrastructure and cloud resources to optimize throughput while minimizing operational costs.
– Make data-driven recommendations to reduce infrastructure overhead and increase efficiency without sacrificing performance.
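For illustration of the ETL responsibility above, here is a deliberately small extract-transform-load step using pandas. The file names and column names are invented for the example; a production pipeline would add scheduling, validation, and monitoring on top of this.

```python
import pandas as pd

def run_etl(source_csv: str = "orders_export.csv",
            target_parquet: str = "orders_clean.parquet") -> int:
    """Extract a CSV export, clean and enrich it, and load it as Parquet."""
    # Extract (column names below are illustrative)
    df = pd.read_csv(source_csv, parse_dates=["order_date"])
    # Transform: drop incomplete rows, round amounts, derive a reporting month
    df = df.dropna(subset=["order_id", "amount"])
    df["amount"] = df["amount"].round(2)
    df["report_month"] = df["order_date"].dt.to_period("M").astype(str)
    # Load
    df.to_parquet(target_parquet, index=False)
    return len(df)
```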
Qualifications:
• Experience:
– Proven experience (5+ years) as a Data Engineer or similar role, designing and implementing data solutions at scale.
– Strong expertise in data modelling, data integration (ETL), and data transformation processes.
– Experience with cloud platforms (AWS, Azure, Google Cloud) and big data technologies (e.g., Hadoop, Spark).
• Technical Skills:
– Advanced proficiency in SQL, data modelling tools (e.g., Erwin, PowerDesigner), and data integration frameworks (e.g., Apache NiFi, Talend).
– Strong understanding of data security protocols, privacy regulations, and compliance requirements.
– Experience with data storage solutions (e.g., data lakes, data warehouses, NoSQL, relational databases).
• AI & Machine Learning Exposure:
– Familiarity with leveraging AI and machine learning technologies (e.g., TensorFlow, PyTorch, scikit-learn) to optimize data processing and analytical tasks.
– Ability to apply advanced algorithms and automation techniques to improve business processes.
• Soft Skills:
– Excellent communication skills to collaborate with clients, stakeholders, and cross-functional teams.
– Strong problem-solving ability with a customer-centric approach to solution design.
– Ability to translate complex technical concepts into clear, understandable terms for non-technical audiences.
• Education:
– Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Science, or a related field (or equivalent practical experience).
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance for spouses, kids, and parents.
- PF/ESI or equivalent
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - a culture that helps you grow exponentially!
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 120+ people from around the world who are radically open-minded, believe in excellence, respect one another, and push our boundaries further than ever before.

We are inviting a Fullstack Engineer to join our dynamic product team at BrandLabs. If you’re passionate about building end-to-end web applications and enjoy working across the stack, we’d love to connect!
Key Responsibilities:
- Develop, deploy, and maintain web applications using Next.js, TypeScript, Node.js, NestJS, and Express.js.
- Collaborate with the product, design, and backend teams to deliver seamless user experiences.
- Write clean, maintainable, and scalable code following industry best practices.
- Participate in technical discussions, code reviews, and architectural planning.
- Integrate third-party services, APIs, and tools where required.
- Ensure application performance, security, and responsiveness.
Required Skills:
- 1-2 years of hands-on experience in fullstack development.
- Strong proficiency in Next.js, TypeScript, Node.js, NestJS, and Express.js.
- Good understanding of RESTful APIs and microservice architecture.
- Ability to troubleshoot, debug, and optimize applications for performance.
Good to Have:
- Experience building n8n workflows for automation.
- Exposure to AI/ML-based projects or AI service integrations.
- Familiarity with DevOps, CI/CD pipelines, and cloud platforms.
Additional Information:
- Location: We prefer candidates based in or around Thane, Mumbai for better collaboration.
- Project Links: Please share your GitHub, portfolio, or links to live projects you’ve contributed to.

At TechBiz Global, we provide recruitment services to top clients from our portfolio. We are currently seeking 4 DevOps Support Engineers to join one of our clients' teams in India, with a start date no later than the 20th of July. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
Job requirements
Key Responsibilities:
- Monitor and troubleshoot AWS and/or Azure environments to ensure optimal performance and availability.
- Respond promptly to incidents and alerts, investigating and resolving issues efficiently.
- Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python); a minimal illustrative sketch follows this list.
- Communicate clearly and fluently in English with customers and internal teams.
- Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows.
- Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage.
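For illustration only: a minimal sketch, assuming boto3 and a hypothetical EC2 instance ID, of the kind of basic monitoring/automation script mentioned above. It pulls the recent average CPUUtilization from CloudWatch and compares it against a placeholder alert threshold; credentials, region, and thresholds would come from the team's own SOPs.

```python
# Illustrative only: check recent average CPU of one EC2 instance via CloudWatch.
# The instance ID and the 80% threshold are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

import boto3

def average_cpu(instance_id: str, minutes: int = 15) -> float:
    """Return the average CPUUtilization (%) of an EC2 instance over the last N minutes."""
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(minutes=minutes),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    datapoints = stats.get("Datapoints", [])
    if not datapoints:
        return 0.0  # no data reported in the window
    return sum(p["Average"] for p in datapoints) / len(datapoints)

if __name__ == "__main__":
    cpu = average_cpu("i-0123456789abcdef0")  # hypothetical instance ID
    print(f"Average CPU over last 15 min: {cpu:.1f}%")
    if cpu > 80:
        print("ALERT: CPU above threshold; investigate and escalate per the SOP.")
```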
Shift Details:
- Engineers work about 4 to 5 shifts per week, rotating through morning, evening, and night shifts (including weekends) to cover 24/7 support.
- Rotation ensures no single engineer is always working nights or weekends; the load is shared fairly across the team.
Qualifications:
- 2–5 years of experience in DevOps or cloud support roles.
- Strong familiarity with AWS and/or Azure cloud environments.
- Experience with CI/CD tools such as GitHub Actions or Jenkins.
- Proficiency with monitoring tools like Datadog, CloudWatch, or similar.
- Basic scripting skills in Bash, Python, or comparable languages.
- Excellent communication skills in English.
- Comfortable and willing to work in a shift-based support role, including night and weekend shifts.
- Prior experience in a shift-based support environment is preferred.
What We Offer:
- Remote work opportunity — work from anywhere in India with a stable internet connection.
- Comprehensive training program, including:
  - Shadowing existing processes to gain hands-on experience.
  - Learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success.