50+ Remote AWS (Amazon Web Services) Jobs in India
Required Qualifications
- 4+ years of professional software development experience
- 2+ years contributing to service design and architecture
- Strong expertise in modern languages such as Golang and Python
- Deep understanding of scalable, cloud-native architectures and microservices
- Production experience with distributed systems and database technologies
- Experience with Docker, software engineering best practices
- Bachelor's Degree in Computer Science or related technical field
Preferred Qualifications
- Experience with Golang, AWS, and Kubernetes
- CI/CD pipeline experience with GitHub Actions
- Start-up environment experience
Job Summary: Lead/Senior ML Data Engineer (Cloud-Native, Healthcare AI)
Experience Required: 8+ Years
Work Mode: Remote
We are seeking a highly autonomous and experienced Lead/Senior ML Data Engineer to drive the critical data foundation for our AI analytics and Generative AI platforms. This is a specialized hybrid position, focusing on designing, building, and optimizing scalable data pipelines (ETL/ELT) that transform complex, messy clinical and healthcare data into high-quality, production-ready feature stores for Machine Learning and NLP models.
The successful candidate will own technical work streams end-to-end, ensuring data quality, governance, and low-latency delivery in a cloud-native environment.
Key Responsibilities & Focus Areas:
- ML Data Pipeline Ownership (70-80% Focus): Design and implement high-performance, scalable ETL/ELT pipelines using PySpark and a Lakehouse architecture (such as Databricks) to ingest, clean, and transform large-scale healthcare datasets.
- AI Data Preparation: Specialize in Feature Engineering and data preparation for complex ML workloads, including transforming unstructured clinical data (e.g., medical notes) for Generative AI and NLP model training.
- Cloud Architecture & Orchestration: Deploy, manage, and optimize data workflows using Airflow in a production AWS environment.
- Data Governance & Compliance: Mandatorily implement pipelines with robust data masking, pseudonymization, and security controls to ensure continuous adherence to HIPAA and other relevant health data privacy regulations.
- Technical Leadership: Lead and define technical requirements from ambiguous business problems, acting as a key contributor to the data architecture strategy for the core AI platform.
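The masking and pseudonymization requirement above can be sketched in plain Python. This is an illustrative pattern only: the field names and keyed-hash scheme are invented, not this team's actual design; in a real pipeline the same function would run inside a PySpark UDF and the key would come from a secrets manager, never from source code.

```python
import hashlib
import hmac

# Hypothetical secret; in production this comes from a secrets manager.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a stable, irreversible token.

    HMAC-SHA256 keeps the mapping consistent across pipeline runs (so
    joins still work) while preventing re-identification without the key.
    """
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def mask_record(record: dict) -> dict:
    """Mask direct identifiers in a clinical record before feature extraction."""
    masked = dict(record)
    masked["patient_id"] = pseudonymize(record["patient_id"])
    masked.pop("name", None)  # drop fields with no analytic value
    masked.pop("ssn", None)
    return masked

row = {"patient_id": "MRN-001", "name": "Jane Doe", "ssn": "000-00-0000", "a1c": 6.9}
clean = mask_record(row)
```

Because the token is deterministic, downstream joins across tables keyed on `patient_id` still line up after masking.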
Non-Negotiable Requirements (The "Must-Haves"):
- 5+ years of progressive experience as a Data Engineer, with a clear focus on ML/AI support.
- Deep expertise in PySpark/Python for distributed data processing.
- Mandatory proficiency with Lakehouse platforms (e.g., Databricks) in an AWS production environment.
- Proven experience handling complex clinical/healthcare data (EHR, Claims), including unstructured text.
- Hands-on experience with HIPAA/GDPR compliance in data pipeline design.
About the Role
We are looking for a passionate GenAI Developer to join our dynamic team at Hardwin Software Solutions. In this role, you will design and develop scalable backend systems, leverage AWS services for data processing, and work on cutting-edge Generative AI solutions. If you enjoy solving complex problems and building impactful applications, we’d love to hear from you.
What You Will Do
- Develop robust and scalable backend services and APIs using Python, integrating with various AWS services.
- Design, implement, and maintain data processing pipelines leveraging AWS (e.g., S3, Lambda).
- Collaborate with cross-functional teams to translate requirements into efficient technical solutions.
- Write clean, maintainable code while following agile engineering practices (CI/CD, version control, release cycles).
- Optimize application performance and scalability by fine-tuning AWS resources and leveraging advanced Python techniques.
- Contribute to the development and integration of Generative AI techniques into business applications.
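As a rough illustration of the S3/Lambda pipeline work described above, here is a minimal S3-triggered Lambda handler. The bucket and key names are made up, and the handler stops at parsing the event; a real one would fetch each object with boto3 and run the processing step.

```python
import json
import urllib.parse

def lambda_handler(event: dict, context=None) -> dict:
    """Entry point for an S3-triggered AWS Lambda.

    Extracts (bucket, key) pairs from the notification payload; a real
    handler would then download each object with boto3 and process it.
    """
    processed = []
    for rec in event.get("Records", []):
        bucket = rec["s3"]["bucket"]["name"]
        # S3 event keys arrive URL-encoded (spaces appear as '+').
        key = urllib.parse.unquote_plus(rec["s3"]["object"]["key"])
        processed.append({"bucket": bucket, "key": key})
    return {"statusCode": 200, "body": json.dumps(processed)}

event = {"Records": [{"s3": {"bucket": {"name": "ingest-bucket"},
                             "object": {"key": "raw/report+2024.csv"}}}]}
result = lambda_handler(event)
```

Decoding the key before use matters: skipping `unquote_plus` is a classic bug that only surfaces when an uploaded filename contains spaces.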
What You Should Have
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 3+ years of professional experience in software development.
- Strong programming skills in Python and good understanding of data structures & algorithms.
- Hands-on experience with AWS services: S3, Lambda, DynamoDB, OpenSearch.
- Experience with Relational Databases, Source Control, and CI/CD pipelines.
- Practical knowledge of Generative AI techniques (mandatory).
- Strong analytical and mathematical problem-solving abilities.
- Excellent communication skills in English.
- Ability to work both independently and collaboratively, with a proactive and self-motivated attitude.
- Strong organizational skills with the ability to prioritize tasks and meet deadlines.
Job Summary
We are seeking an experienced Databricks Developer with strong skills in PySpark, SQL, and Python, and hands-on experience deploying data solutions on AWS (preferred) or Azure. The role involves designing, developing, and optimizing scalable data pipelines and analytics workflows on the Databricks platform.
Key Responsibilities
- Develop and optimize ETL/ELT pipelines using Databricks and PySpark.
- Build scalable data workflows on AWS (EC2, S3, Glue, Lambda, IAM) or Azure (ADF, ADLS, Synapse).
- Implement and manage Delta Lake (ACID, schema evolution, time travel).
- Write efficient, complex SQL for transformation and analytics.
- Build and support batch and streaming ingestion (Kafka, Kinesis, EventHub).
- Optimize Databricks clusters, jobs, notebooks, and PySpark performance.
- Collaborate with cross-functional teams to deliver reliable data solutions.
- Ensure data governance, security, and compliance.
- Troubleshoot pipelines and support CI/CD deployments.
Required Skills & Experience
- 4–8 years in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks (clusters, jobs, workflows).
- Advanced PySpark and strong Python skills.
- Expert-level SQL (complex queries, window functions).
- Practical experience with AWS (preferred) or Azure cloud services.
- Experience with Delta Lake, Parquet, and data lake architectures.
- Familiarity with CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Good understanding of data modeling, optimization, and distributed systems.
Seeking an experienced AWS Migration Engineer with 7+ years of hands-on experience to lead cloud migration projects, assess legacy systems, and ensure seamless transitions to AWS infrastructure. The role focuses on strategy, execution, optimization, and minimizing downtime during migrations.
Key Responsibilities:
- Conduct assessments of on-premises and legacy systems for AWS migration feasibility.
- Design and execute migration strategies using AWS Migration Hub, DMS, and Server Migration Service.
- Plan and implement lift-and-shift, re-platforming, and refactoring approaches.
- Optimize workloads post-migration for cost, performance, and security.
- Collaborate with stakeholders to define migration roadmaps and timelines.
- Perform data migration, application re-architecture, and hybrid cloud setups.
- Monitor migration progress, troubleshoot issues, and ensure business continuity.
- Document processes and provide post-migration support and training.
- Manage and troubleshoot Kubernetes/EKS networking components including VPC CNI, Service Mesh, Ingress controllers, and Network Policies.
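The lift-and-shift / re-platform / refactor planning step above amounts to bucketing an application inventory by a rubric. The sketch below uses an invented, deliberately tiny rubric; a real assessment weighs far more factors (licensing, compliance, dependencies, business criticality).

```python
# Hypothetical rubric for bucketing an app inventory into migration
# approaches; attribute names here are illustrative only.

def pick_strategy(app: dict) -> str:
    """Return a coarse migration approach for one application."""
    if app.get("end_of_life"):
        return "retire"
    if app.get("os_unsupported") or app.get("needs_rearchitecture"):
        return "refactor"
    if app.get("managed_db_candidate"):
        return "replatform"  # e.g., self-hosted DB -> RDS via DMS
    return "rehost"          # lift-and-shift, e.g., via AWS MGN

inventory = [
    {"name": "billing-db", "managed_db_candidate": True},
    {"name": "legacy-crm", "os_unsupported": True},
    {"name": "static-site"},
]
plan = {app["name"]: pick_strategy(app) for app in inventory}
```

Encoding the rubric as code makes the migration roadmap reviewable and repeatable as the inventory changes, instead of living in a one-off spreadsheet.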
Required Qualifications:
- 7+ years of IT experience, with minimum 4 years focused on AWS migrations.
- AWS Certified Solutions Architect or Migration Specialty certification preferred.
- Expertise in AWS services: EC2, S3, RDS, VPC, Direct Connect, DMS, SMS.
- Strong knowledge of cloud migration tools and frameworks (AWS MGN, Snowball).
- Experience with infrastructure as code (CloudFormation, Terraform).
- Proficiency in scripting (Python, PowerShell) and automation.
- Familiarity with security best practices (IAM, encryption, compliance).
- Hands-on experience with Kubernetes/EKS networking components and best practices.
Preferred Skills:
- Experience with hybrid/multi-cloud environments.
- Knowledge of DevOps tools (Jenkins, GitLab CI/CD).
- Excellent problem-solving and communication skills.
JOB TITLE: Senior Full Stack Developer (SDE-3)
LOCATION: Remote/Hybrid.
A LITTLE BIT ABOUT THE ROLE:
As a Full Stack Developer, you will be responsible for developing digital systems that deliver optimal end-to-end solutions to our business needs. The work will cover all aspects of software delivery, including working with staff, vendors, and outsourced contributors to build, release and maintain the product.
Fountane operates a scrum-based Agile delivery cycle, and you will be working within this. You will work with product owners, user experience, test, infrastructure, and operations professionals to build the most effective solutions.
WHAT YOU WILL BE DOING:
- Full-stack development on a multinational team on various products across different technologies and industries.
- Optimize the development process and identify continuing improvements.
- Monitor technology landscape, assess and introduce new technology. Own and communicate development processes and standards.
- The job title does not define or limit your duties, and you may be required to carry out other work within your abilities from time to time at our request. We reserve the right to introduce changes in line with technological developments which may impact your job duties or methods of working.
WHAT YOU WILL NEED TO BE GREAT IN THIS ROLE:
- Minimum of 3 years of combined back-end and front-end full-stack development experience building fast, reliable web and/or mobile applications.
- Experience with web frameworks (e.g., React, Angular, or Vue) and/or mobile development (e.g., React Native, NativeScript)
- Proficient in at least one JavaScript framework or runtime such as React, Node.js, Angular (2.x+), or jQuery.
- Ability to optimize product development by leveraging software development processes.
- Bachelor's degree, or equivalent work experience (minimum six years). Candidates with an Associate's degree must have a minimum of four years of work experience.
- Fountane's current technology stack driving our digital products includes React.js, Node.js, React Native, Angular, Firebase, Bootstrap, MongoDB, Express, Hasura, GraphQL, Amazon Web Services (AWS), and Google Cloud Platform.
SOFT SKILLS:
- Collaboration - Ability to work in teams across the world
- Adaptability - situations are unexpected, and you need to be quick to adapt
- Open-mindedness - Expect to see things outside the ordinary
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all - culture that helps you grow exponentially.
Qualifications - No bachelor's degree required. Good communication skills are a must!
A LITTLE BIT ABOUT THE COMPANY:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 80 strong from around the world that is radically open-minded and believes in excellence and in respecting one another.
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- Please do not apply if you are currently engaged in other projects; we require dedicated availability.
Role Overview
As a Senior SQL Developer, you’ll be responsible for data extracts and for updating and maintaining reports as requested by stakeholders. You’ll work closely with finance operations and developers to ensure data requests are appropriately managed.
Key Responsibilities
- Design, develop, and optimize complex SQL queries, stored procedures, functions, and tasks across multiple databases/schemas.
- Transform cost-intensive models from full-refreshes to incremental loads based on upstream data.
- Help design proactive monitoring of data to catch data issues/data delays.
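The full-refresh-to-incremental responsibility above boils down to filtering on a high-water mark. A minimal sketch against SQLite (table names and timestamps are invented); dbt's incremental materialization generates essentially this predicate around your model SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_events (id INTEGER, loaded_at TEXT);
CREATE TABLE target_events (id INTEGER, loaded_at TEXT);
INSERT INTO source_events VALUES (1, '2024-01-01'), (2, '2024-01-02');
""")

def incremental_load(conn):
    """Copy only rows newer than the high-water mark already in the
    target, instead of truncating and rebuilding the whole table."""
    conn.execute("""
        INSERT INTO target_events
        SELECT * FROM source_events
        WHERE loaded_at > COALESCE(
            (SELECT MAX(loaded_at) FROM target_events), '')
    """)
    conn.commit()

incremental_load(conn)                       # initial load: both rows
conn.execute("INSERT INTO source_events VALUES (3, '2024-01-03')")
incremental_load(conn)                       # picks up only the new row
```

The `COALESCE` fallback makes the first run behave like a full refresh, so one model handles both the backfill and the steady-state incremental case.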
Qualifications
- 5+ years of experience as a SQL developer, preferably in a B2C or tech environment.
- Ability to translate requirements into datasets.
- Understanding of dbt framework for transformations.
- Basic usage of git (branching, PR generation).
- Detail-oriented with strong organizational and time management skills.
- Ability to work cross-functionally and manage multiple projects simultaneously.
Bonus Points
- Experience with Snowflake and AWS data technologies.
- Experience with Python and containers (Docker)
Inflection.io is a venture-backed B2B marketing automation company, enabling SaaS companies to communicate with their customers and prospects from one platform. We're used by leading SaaS companies like Sauce Labs, Sigma Computing, BILL, Mural, and Elastic, many of which pay more than $100K/yr (about ₹1 crore).
And it’s working! We have world-class stats: our largest deal is over ₹3 crore, we have a 5-star rating on G2 and over 100% NRR, and we constantly break sales and customer records. We’ve raised $14M in total since 2021, with $7.6M of fresh funding in 2024, giving us many years of runway.
However, we’re still in startup mode with approximately 30 employees and looking for the next SDE3 to help propel Inflection forward. Do you want to join a fast growing startup that is aiming to build a very large company?
Key Responsibilities:
- Lead the design, development, and deployment of complex software systems and applications.
- Collaborate with engineers and product managers to define and implement innovative solutions
- Provide technical leadership and mentorship to junior engineers, promoting best practices and fostering a culture of continuous improvement.
- Write clean, maintainable and efficient code, ensuring high performance and scalability of the software.
- Conduct code reviews and provide constructive feedback to ensure code quality and adherence to coding standards.
- Troubleshoot and resolve complex technical issues, optimizing system performance and reliability.
- Stay updated with the latest industry trends and technologies, evaluating their potential for adoption in our projects.
- Participate in the full software development lifecycle, from requirements gathering to deployment and monitoring.
Qualifications:
- 5+ years of professional software development experience, with a strong focus on backend development.
- Proficiency in one or more programming languages such as Java, Python, Golang or C#
- Strong understanding of database systems, both relational (e.g., MySQL, PostgreSQL) and NoSQL (e.g., MongoDB, Cassandra).
- Hands-on experience with message brokers such as Kafka, RabbitMQ, or Amazon SQS.
- Experience with cloud platforms (AWS or Azure or Google Cloud) and containerization technologies (Docker, Kubernetes).
- Proven track record of designing and implementing scalable, high-performance systems.
- Excellent problem-solving skills and the ability to think critically and creatively.
- Strong communication and collaboration skills, with the ability to work effectively in a fast-paced, team-oriented environment.
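The message-broker experience above (Kafka, RabbitMQ, SQS) is fundamentally about decoupling producers from consumers. An in-process sketch using the standard library's thread-safe queue as a stand-in for a broker topic; a real broker provides the same contract durably and across services:

```python
import queue
import threading

topic = queue.Queue()   # in-process stand-in for a broker topic
SENTINEL = object()     # shutdown marker
processed = []

def consumer():
    """Drain messages until the shutdown sentinel arrives."""
    while True:
        msg = topic.get()
        if msg is SENTINEL:
            break
        processed.append(msg["event"].upper())  # stand-in for real work
        topic.task_done()

worker = threading.Thread(target=consumer)
worker.start()

# The producer publishes without waiting on the consumer: this
# asynchronous hand-off is what absorbs traffic spikes.
for event in ("signup", "payment"):
    topic.put({"event": event})
topic.put(SENTINEL)
worker.join()
```

The design point carries over directly: if the consumer slows down, messages accumulate in the topic rather than back-pressuring the producer's request path.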
About Us:
MyOperator and Heyo are India’s leading conversational platforms, empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
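For the pytest expectation above, a minimal shape of a tested unit (the helper and its cases are invented for illustration; pytest simply collects any `test_*` function and reports each failing assert):

```python
def normalize_phone(raw: str) -> str:
    """Normalize an Indian phone number to E.164 (+91XXXXXXXXXX)."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if len(digits) == 10:
        return "+91" + digits
    if len(digits) == 12 and digits.startswith("91"):
        return "+" + digits
    raise ValueError(f"cannot normalize {raw!r}")

# pytest auto-discovers functions named test_* -- no boilerplate needed.
def test_normalize_plain():
    assert normalize_phone("98765 43210") == "+919876543210"

def test_normalize_with_country_code():
    assert normalize_phone("91-9876543210") == "+919876543210"
```

Keeping helpers pure like this (no I/O, no globals) is what makes them cheap to cover with plain asserts.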
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers across industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working knowledge of SQL and query authoring, experience with relational databases, and working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.
We’re seeking a highly skilled, execution-focused Senior Backend Engineer with a minimum of 5 years of experience to join our team. This role demands hands-on expertise in building and scaling distributed systems, strong proficiency in Java, and deep knowledge of cloud-native infrastructure. You will be expected to design robust backend services, optimize performance across storage and caching layers, and enable seamless integrations using modern messaging and CI/CD pipelines.
You’ll be working in a high-scale, high-impact environment where reliability, speed, and efficiency are paramount. If you enjoy solving complex engineering challenges and have a passion for distributed systems, this is the right role for you.
Responsibilities:
- Design, develop, and maintain distributed backend systems at scale.
- Write high-performance, production-grade code in Java or Kotlin.
- Architect and optimize storage systems, ensuring efficient query performance and scalable data models.
- Implement caching strategies to reduce latency and improve system throughput.
- Build and manage services leveraging AWS cloud infrastructure.
- Develop resilient messaging pipelines using Kafka (or equivalent) for real-time data processing.
- Define and streamline CI/CD pipelines, ensuring rapid and reliable deployment cycles.
- Collaborate with product managers, frontend engineers, and DevOps to deliver end-to-end solutions.
- Monitor system performance, identify bottlenecks, and apply proactive fixes.
- Drive best practices in software engineering, testing, and code reviews.
Key focus areas include:
- Performance optimization, reliability, and low-latency API design
- Microservices architecture and cloud-native development (Kubernetes, Docker, CI/CD)
- Experience with large-scale systems, monitoring, and performance profiling
- Deep understanding of concurrency, JVM tuning, and scalable data handling
Requirements:
- 5+ years of experience in backend engineering, with deep hands-on coding experience.
- Strong proficiency in Java or Kotlin and familiarity with modern frameworks.
- Strong hands-on expertise in Spring Boot or Quarkus frameworks.
- Proven track record in building scalable distributed systems.
- Hands-on expertise with AWS services (e.g., EC2, S3, Lambda, DynamoDB, RDS).
- Solid understanding of messaging systems like Kafka, RabbitMQ, or similar.
- Strong grasp of query performance optimization and storage system design.
- Experience with caching solutions (Redis, Memcached, etc.).
- Familiarity with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.).
- Excellent problem-solving skills and ability to thrive in fast-paced environments.
- Strong communication and collaboration skills, with a proactive mindset.
Benefits:
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About Us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.
Role: Senior Backend Engineer (Node.js + TypeScript + Postgres)
Location: Pune
Type: Full-Time
Who We Are:
After a highly successful launch, Azodha is ready to take its next major step. We are seeking a passionate and experienced Senior Backend Engineer to build and enhance a disruptive healthcare product. This is a unique opportunity to get in on the ground floor of a fast-growing startup and play a pivotal role in shaping both the product and the team.
If you are an experienced backend engineer who thrives in an agile startup environment and has a strong technical background, we want to hear from you!
About The Role:
As a Senior Backend Engineer at Azodha, you’ll play a key role in architecting and driving development of our AI-led interoperable digital enablement platform. You will work closely with the founder/CEO to refine the product vision, drive product innovation and delivery, and grow with a strong technical team.
What You’ll Do:
* Technical Excellence: Design, develop, and scale backend services using Node.js and TypeScript, including REST and GraphQL APIs. Ensure systems are scalable, secure, and high-performing.
* Data Management and Integrity: Work with ORMs such as Prisma or TypeORM and with relational databases like PostgreSQL and MySQL.
* Continuous Improvement: Stay updated with the latest trends in backend development, incorporating new technologies where appropriate. Drive innovation and efficiency within the team
* Utilize ORMs such as Prisma or TypeORM to interact with the database and ensure data integrity.
* Follow Agile sprint methodology for development.
* Conduct code reviews to maintain code quality and adherence to best practices.
* Optimize API performance to deliver responsive user experiences.
* Participate in the entire development lifecycle, from initial planning and design through maintenance.
* Troubleshoot and debug issues to ensure system stability.
* Collaborate with QA teams to ensure high quality releases.
* Mentor and provide guidance to junior developers, offering technical expertise and constructive feedback.
Requirements
* Bachelor's degree in Computer Science, Software Engineering, or a related field.
* 5+ years of hands-on experience in backend development using Node.js and TypeScript.
* Experience working with Postgres or MySQL.
* Proficiency in TypeScript and its application in Node.js
* Experience with ORM such as Prisma or TypeORM.
* Familiarity with Agile development methodologies.
* Strong analytical and problem-solving skills.
* Ability to work independently and in a team-oriented, fast-paced environment.
* Excellent written and oral communication skills.
* Self-motivated and proactive attitude.
Preferred:
* Experience with other backend technologies and languages.
* Familiarity with continuous integration and deployment process.
* Contributions to open-source projects related to backend development.
Note: please don't apply if PostgreSQL is not your primary database.
Join our team of talented engineers and be part of building cutting-edge backend systems that drive our applications. As a Senior Backend Engineer, you'll have the opportunity to shape the future of our backend infrastructure and contribute to the company's success. If you are passionate about backend development and meet the above requirements, we encourage you to apply and become a valued member of our team at Azodha.
We are seeking a highly skilled Power Platform Developer with deep expertise in designing, developing, and deploying solutions using Microsoft Power Platform. The ideal candidate will have strong knowledge of Power Apps, Power Automate, Power BI, Power Pages, and Dataverse, along with integration capabilities across Microsoft 365, Azure, and third-party systems.
Key Responsibilities
- Solution Development:
- Design and build custom applications using Power Apps (Canvas & Model-Driven).
- Develop automated workflows using Power Automate for business process optimization.
- Create interactive dashboards and reports using Power BI for data visualization and analytics.
- Configure and manage Dataverse for secure data storage and modelling.
- Develop and maintain Power Pages for external-facing portals.
- Integration & Customization:
- Integrate Power Platform solutions with Microsoft 365, Dynamics 365, Azure services, and external APIs.
- Implement custom connectors and leverage Power Platform SDK for advanced scenarios.
- Utilize Azure Functions, Logic Apps, and REST APIs for extended functionality.
- Governance & Security:
- Apply best practices for environment management, ALM (Application Lifecycle Management), and solution deployment.
- Ensure compliance with security, data governance, and licensing guidelines.
- Implement role-based access control and manage user permissions.
- Performance & Optimization:
- Monitor and optimize app performance, workflow efficiency, and data refresh strategies.
- Troubleshoot and resolve technical issues promptly.
- Collaboration & Documentation:
- Work closely with business stakeholders to gather requirements and translate them into technical solutions.
- Document architecture, workflows, and processes for maintainability.
Required Skills & Qualifications
- Technical Expertise:
- Strong proficiency in Power Apps (Canvas & Model-Driven), Power Automate, Power BI, Power Pages, and Dataverse.
- Experience with Microsoft 365, Dynamics 365, and Azure services.
- Knowledge of JavaScript, TypeScript, C#, .NET, and Power Fx for custom development.
- Familiarity with SQL, DAX, and data modeling.
- Additional Skills:
- Understanding of ALM practices, solution packaging, and deployment pipelines.
- Experience with Git, Azure DevOps, or similar tools for version control and CI/CD.
- Strong problem-solving and analytical skills.
- Certifications (Preferred):
- Microsoft Certified: Power Platform Developer Associate.
- Microsoft Certified: Power Platform Solution Architect Expert.
Soft Skills
- Excellent communication and collaboration skills.
- Ability to work in agile environments and manage multiple priorities.
- Strong documentation and presentation abilities.
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Disaster recovery and infrastructure restoration
- Restore/Recovery operations from backups
- Automate routine tasks
- Execute company initiatives in the infrastructure space
- Expertise with observability tools like ELK, Prometheus, Grafana, Loki
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
We are looking for an AI/ML Engineer with 4-5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
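To illustrate the kind of pipeline work this role involves, here is a minimal, stdlib-only sketch of a training flow (feature standardization plus a toy nearest-centroid classifier). It is illustrative only: in practice the posting's stack (Scikit-learn, TensorFlow, PyTorch) would replace all of this, and the data and function names are purely hypothetical.

```python
from statistics import mean, stdev

def standardize(rows):
    """Column-wise z-score scaling: a toy stand-in for sklearn's StandardScaler."""
    cols = list(zip(*rows))
    mus = [mean(c) for c in cols]
    sds = [stdev(c) or 1.0 for c in cols]  # guard against zero spread
    return [[(v - m) / s for v, m, s in zip(r, mus, sds)] for r in rows]

def fit_centroids(X, y):
    """Compute one mean vector per class (a toy nearest-centroid model)."""
    groups = {}
    for row, label in zip(X, y):
        groups.setdefault(label, []).append(row)
    return {label: [mean(col) for col in zip(*rows)]
            for label, rows in groups.items()}

def predict(centroids, row):
    """Assign the class whose centroid is closest in squared Euclidean distance."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], row))

# Hypothetical two-cluster dataset
X = [[1.0, 1.0], [1.2, 0.9], [8.0, 8.1], [7.9, 8.3]]
y = ["low", "low", "high", "high"]
Xs = standardize(X)
model = fit_centroids(Xs, y)
```

The same preprocess-fit-predict shape carries over to real frameworks; only the components get swapped for production-grade ones.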
What You Will be Doing:
● Develop and maintain software that is scalable, secure, and efficient
● Collaborate with Technical Architects & Business Analysts
● Architect and design software solutions that meet project requirements
● Mentor and train junior developers to improve their skills and knowledge
● Conduct code reviews ensuring the code is maintainable, readable, and efficient
● Research and evaluate new technologies to improve the processes
● Effective communication skills, particularly in documenting and explaining code and technical concepts.
Skills We Are Looking For:
● 5+ years of extensive hands-on experience with Node.js and TypeScript
● Strong understanding of RESTful API design and implementation.
● Comfortable with debugging, performance tuning, and optimizing Node.js applications.
● Strong problem-solving abilities and attention to detail.
● Experience with authentication and authorization protocols, such as OAuth, JWT and session management.
● Understanding of security best practices in backend development, including data encryption and vulnerability mitigation.
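The authentication items above (OAuth, JWT, session management) rest on a simple signing scheme. As a hedged illustration only, sketched in Python rather than the posting's Node.js stack, this stdlib snippet shows the HS256 mechanics that JWT libraries implement; a real service should use a vetted library, not hand-rolled code.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as the JWT compact format requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 token: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False
    expected = b64url(
        hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    )
    return hmac.compare_digest(sig, expected)
```

The constant-time comparison (`hmac.compare_digest`) matters: naive string equality leaks timing information an attacker can exploit.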
Bonus Skills
● Experience with server-side frameworks such as Express.js or NestJS.
● Familiarity with cloud platforms (e.g., AWS, Azure, or preferably Google Cloud) and their services for backend deployment.
● Familiarity with NoSQL databases (MongoDB preferred) and the ability to design and optimize database queries.
Why You’ll Love It Here
● Innovative Culture - We believe in pushing boundaries
● Impactful Work - You won’t just write code, you will help build the future
● Collaborative Environment - We believe that everyone has a voice that matters
● Work Life Balance - Our flexible work environment encourages you to have space to recharge
Position Overview:
We are seeking a hands-on Engineering Lead with a strong background in cloud-native application development, microservices architecture, and team leadership. The ideal candidate will have a proven track record of delivering complex enterprise-grade applications and will be capable of leading a large team to build scalable, secure, and high-performance systems. This person will not only be a technical expert but also an effective people manager, fostering growth and collaboration within their team.
Key Responsibilities:
- Lead by example, mentor junior engineers, and contribute to team knowledge-sharing efforts.
- Provide guidance on best practices, architecture, and development processes.
- Drive the design and implementation of cloud-native enterprise applications, ensuring scalability, reliability, and maintainability.
- Champion the adoption of microservices principles and design patterns across the engineering team.
- Maintain a hands-on approach in software development, contributing directly to code while balancing leadership responsibilities.
- Collaborate with cross-functional teams (Product, UI/UX, DevOps, QA, Security) to ensure successful delivery of features and enhancements.
- Continuously evaluate and improve the development process, from CI/CD pipelines to code quality and testing.
- Ensure application security best practices are followed, addressing vulnerabilities and potential threats in a proactive manner.
- Help define technical roadmaps and provide input on architectural decisions that meet both current and future customer needs.
- Foster a culture of collaboration, continuous learning, and innovation within the engineering team.
Required Skills & Experience:
Technical Skills:
Core Technologies: Strong expertise in Node.js and JavaScript, with the ability to pick up new languages and technologies as required.
Cloud Expertise: Hands-on experience with cloud technologies, particularly AWS, Azure, or Google Cloud Platform (GCP).
Microservices Architecture: Proven experience in building and maintaining cloud-native, microservices-based applications.
Security Awareness: Deep understanding of security principles, especially in the context of developing enterprise applications.
Development Tools: Proficiency in version control systems (Git), CI/CD tools, containerization (Docker), and orchestration platforms (Kubernetes).
Scalability & Performance: Strong knowledge of designing systems for scalability and performance, with experience managing large-scale systems.
Communication Skills:
- Exceptional verbal and written communication skills, with the ability to articulate complex business concepts to both technical and non-technical stakeholders.
- Strong presentation skills to effectively convey technical information and business value to clients.
- Ability to collaborate effectively with cross-functional teams and clients across different time zones and cultural backgrounds.
Experience:
- 5-10 years of experience in software engineering, including at least 2-3 years in a leadership role managing a team of developers.
- Proven track record of delivering performant and scalable applications.
- Experience working in client-facing roles, providing technical consulting, and managing client expectations.
Leadership Skills:
- Proven ability to manage, mentor, and motivate a team of engineers.
- Strong communication skills, capable of explaining complex technical concepts to non-technical stakeholders.
- Collaborative mindset with the ability to work effectively with cross-functional teams.
LIFE AT FOUNTANE:
- Fountane offers an environment where all members are supported, challenged, recognized & given opportunities to grow to their fullest potential.
- Competitive pay
- Health insurance
- Individual/team bonuses
- Employee stock ownership plan
- Fun/challenging variety of projects/industries
- Flexible workplace policy - remote/physical
- Flat organization - no micromanagement
- Individual contribution - set your deadlines
- Above all, a culture that helps you grow exponentially.
Qualifications - No bachelor's degree required. Good communication skills are a must!
ABOUT US:
Established in 2017, Fountane Inc is a Ventures Lab incubating and investing in new competitive technology businesses from scratch. Thus far, we’ve created half a dozen multi-million valuation companies in the US and a handful of sister ventures for large corporations, including Target, US Ventures, and Imprint Engine.
We’re a team of 80 from around the world who are radically open-minded and believe in excellence, respect for one another, and pushing our boundaries further than ever before.
About Intro
Intro is a dating app where LGBTQ South Asians find love. Built by Queer Desis in New York for the 100 million queer South Asians around the globe who deserve a space of their own. Our mission: help 1 million Queer Desis find love by the end of 2026. We’re creating a safer, more intentional, and community-driven dating experience — one that celebrates identity, culture, and connection.
The Role
We’re looking for a Founding Full-Stack Engineer who thrives in 0→1 environments. You’ll take ownership of architecture, design, and execution across backend, web, and mobile (iOS/Android) — helping shape both the product and the culture of the company.
You’ll be joining at the earliest stage — working directly with the founding team on everything from feature development to infrastructure decisions and product strategy.
Responsibilities
- Architect and build scalable backend systems (APIs, data models, authentication, messaging, matching).
- Lead development across web and mobile clients (React, React Native, Swift/Kotlin).
- Collaborate on product design and iterate quickly on user feedback.
- Set up CI/CD, testing, and monitoring pipelines.
- Help define the tech culture, best practices, and early engineering team standards.
- Contribute to early hiring and mentorship as we grow.
You Might Be a Great Fit If You
- Care deeply about building for LGBTQ and South Asian communities.
- Are motivated by impact and ownership, not just code.
- Thrive in ambiguity and love solving real user problems fast.
- Want to help define the technical and cultural DNA of a mission-driven company.
Interview Process
- AI Screen – Initial automated technical and culture-fit assessment.
- Web Challenge – Build a small feature for the web app to demonstrate frontend and full-stack skills.
- iOS Challenge – Build a small feature for the iOS app to showcase mobile development and design sense.
- Android Challenge – Build a small feature for the Android app to highlight cross-platform depth.
- Founder Chat – Meet with the Founder to discuss vision, values, and long-term fit.
What We Offer
- Competitive salary.
- Full-time (40 hours/week) with flexible hours.
- Opportunity to shape a product with global cultural impact.
- Work directly with the founding team building something that truly matters.
Job Title: Python Developer
Experience Level: 4+ years
Job Summary:
We are seeking a skilled Python Developer with strong experience in developing and maintaining APIs. Familiarity with 2D and 3D geometry concepts is a strong plus. The ideal candidate will be passionate about clean code, scalable systems, and solving complex geometric and computational problems.
Key Responsibilities:
· Design, develop, and maintain robust and scalable APIs using Python.
· Work with geometric data structures and algorithms (2D/3D).
· Collaborate with cross-functional teams including front-end developers, designers, and product managers.
· Optimize code for performance and scalability.
· Write unit and integration tests to ensure code quality.
· Participate in code reviews and contribute to best practices.
Required Skills:
· Strong proficiency in Python.
· Experience with RESTful API development (e.g., Flask, FastAPI, Django REST Framework).
· Good understanding of 2D/3D geometry, computational geometry, or CAD-related concepts.
· Familiarity with libraries such as NumPy, SciPy, Shapely, Open3D, or PyMesh.
· Experience with version control systems (e.g., Git).
· Strong problem-solving and analytical skills.
Good to Have:
· Experience with 3D visualization tools or libraries (e.g., VTK, Blender API, Three.js via Python bindings).
· Knowledge of mathematical modeling or simulation.
· Exposure to cloud platforms (AWS, Azure, GCP).
· Familiarity with CI/CD pipelines.
Education:
· Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
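For candidates unfamiliar with the computational-geometry side of this role, here is a minimal pure-Python sketch of two classic primitives: the shoelace area formula and ray-casting point-in-polygon. Libraries like Shapely (named in the posting) provide hardened versions of both; this sketch exists only to show the underlying ideas.

```python
def polygon_area(pts):
    """Signed area via the shoelace formula; positive for counter-clockwise rings."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return s / 2.0

def contains(pts, p):
    """Ray-casting point-in-polygon test (points exactly on the boundary
    are not handled robustly here)."""
    x, y = p
    inside = False
    n = len(pts)
    for i in range(n):
        (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % n]
        # Edge crosses the horizontal ray through p; check the crossing's x.
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside
```

Production geometry code delegates these to a robust library precisely because the degenerate cases (collinear points, boundary hits, self-intersections) are where hand-rolled versions fail.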
Title – Principal Cloud Architect
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career!
External Job Title :
Principal Cloud Cost Optimization Engineer
Position Responsibilities :
The Cloud Cost Optimization Engineer plays a key role in supporting the full lifecycle of cloud financial management (FinOps) at Deltek—driving visibility, accountability, and efficiency across our cloud investments. This role is responsible for managing cloud spend, forecasting, and identifying optimization opportunities that support Deltek's cloud expansion and financial performance goals.
We are seeking a candidate with hands-on experience in Cloud FinOps practices, software development capabilities, AI/automation expertise, strong analytical skills, and a passion for driving financial insights that enable smarter business decisions. The ideal candidate is a self-starter with excellent cross-team collaboration abilities and a proven track record of delivering results in a fast-paced environment.
Key Responsibilities:
- Prepare and deliver monthly reports and presentations on cloud spend performance versus plan and forecast for Finance, IT, and business leaders.
- Support the evaluation, implementation, and ongoing management of cloud consumption and financial management tools.
- Apply financial and vendor management principles to support contract optimization, cost modeling, and spend management.
- Clearly communicate technical and financial insights, presenting complex topics in a simple, actionable manner to both technical and non-technical audiences.
- Partner with engineering, product, and infrastructure teams to identify cost drivers, promote best practices for efficient cloud consumption, and implement savings opportunities.
- Lead cost optimization initiatives, including analyzing and recommending savings plans, reserved instances, and right-sizing opportunities across AWS, Azure, and OCI.
- Collaborate with the Cloud Governance team to ensure effective tagging strategies and alerting frameworks are deployed and maintained at scale.
- Support forecasting by partnering with infrastructure and engineering teams to understand demand plans and proactively manage capacity and spend.
- Build and maintain financial models and forecasting tools that provide actionable insights into current and future cloud expenditures.
- Develop and maintain automated FinOps solutions using Python, SQL, and cloud-native services (Lambda, Azure Functions) to streamline cost analysis, anomaly detection, and reporting workflows.
- Design and implement AI-powered cost optimization tools leveraging GenAI APIs (OpenAI, Claude, Bedrock) to automate spend analysis, generate natural language insights, and provide intelligent recommendations to stakeholders.
- Build custom integrations and data pipelines connecting cloud billing APIs, FinOps platforms, and internal systems to enable real-time cost visibility and automated alerting.
- Develop and sustain relationships with internal stakeholders, onboarding them to FinOps tools, processes, and continuous cost optimization practices.
- Create and maintain KPIs, scorecards, and financial dashboards to monitor cloud spend and optimization progress.
- Drive a culture of optimization by translating financial insights into actionable engineering recommendations, promoting cost-conscious architecture, and leveraging automation for resource optimization.
- Use FinOps tools and services to analyze cloud usage patterns and provide technical cost-saving recommendations to application teams.
- Develop self-service FinOps portals and chatbots using GenAI to enable teams to query cost data, receive optimization recommendations, and understand cloud spending through natural language interfaces.
- Leverage Generative AI tools to enhance FinOps automation, streamline reporting, and improve team productivity across forecasting, optimization, and anomaly detection.
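As one hypothetical example of the anomaly-detection automation described above, here is a stdlib-only Python sketch that flags days whose spend deviates sharply from a trailing baseline. Real FinOps tooling (Cost Explorer anomaly detection, Cloudability, etc.) is far more sophisticated; the data and thresholds are invented for illustration.

```python
from statistics import mean, stdev

def spend_anomalies(daily_spend, window=7, threshold=3.0):
    """Flag indices whose spend deviates more than `threshold` standard
    deviations from the trailing `window`-day baseline."""
    flagged = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        # Skip perfectly flat baselines to avoid division by zero.
        if sd and abs(daily_spend[i] - mu) / sd > threshold:
            flagged.append(i)
    return flagged
```

A production version would segment spend per account/service/tag, use a seasonal baseline rather than a flat window, and feed flagged days into the alerting framework described in the responsibilities.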
Qualifications :
- Bachelor's degree in Finance, Computer Science, Information Systems, or a related field.
- 4+ years of professional experience in Cloud FinOps, IT Financial Management, or Cloud Cost Governance within an IT organization.
- 6-8 years of overall experience in Cloud Infrastructure Management, DevOps, Software Development, or related technical roles with hands-on cloud platform expertise.
- Hands-on experience with native cloud cost management tools (e.g., AWS Cost Explorer, Azure Cost Management, OCI Cost Analysis) and/or third-party FinOps platforms (e.g., Cloudability, CloudHealth, Apptio).
- Proven experience working within the FinOps domain in a large enterprise environment.
- Strong background in building and managing custom reports, dashboards, and financial insights.
- Deep understanding of cloud financial management practices, including chargeback/showback models, cost savings and avoidance tracking, variance analysis, and financial forecasting.
- Solid knowledge of cloud provider pricing models, billing structures, and optimization strategies.
- Practical experience with cloud optimization and governance practices such as anomaly detection, capacity planning, rightsizing, tagging strategies, and storage lifecycle policies.
- Skilled in leveraging automation to drive operational efficiency in cloud cost management processes.
- Strong analytical and data storytelling skills, with the ability to collect, interpret, and present complex financial and technical data to diverse audiences.
- Experience developing KPIs, scorecards, and metrics aligned with business goals and industry benchmarks.
- Ability to influence and drive change management initiatives that increase adoption and maturity of FinOps practices.
- Highly results-driven, detail-oriented, and goal-focused, with a passion for continuous improvement.
- Strong communicator and collaborative team player with a passion for mentoring and educating others.
- Strong proficiency in Python and SQL for data analysis, automation, and tool development, with demonstrated experience building production-grade scripts and applications.
- Hands-on development experience building automation solutions, APIs, or internal tools for cloud management or financial operations.
- Practical experience with GenAI technologies including prompt engineering, and integrating LLM APIs (OpenAI, Claude, Bedrock) into business workflows.
- Experience with Infrastructure as Code (Terraform etc.) and CI/CD pipelines for deploying FinOps automation and tooling.
- Familiarity with data visualization libraries (e.g., Power BI) and building interactive dashboards programmatically.
- Knowledge of ML/AI frameworks is a plus.
- Experience building chatbots or conversational AI interfaces for internal tooling is a plus.
- FinOps Certified Practitioner.
- AWS, Azure, or OCI cloud certifications are preferred.
Mission
Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.
Responsibilities
- Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
- Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
- Integrate Stripe, Maps, analytics; enforce accessibility and performance baselines.
- Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
- Partner with Mobile and AI engineers on API/tool schemas and developer experience.
Requirements
- 6–10+ years; expert TypeScript, strong Python.
- Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
- Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
- Practical CI/CD and observability (logs/metrics/traces).
Nice-to-haves
- OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.
Key Outcomes (ongoing)
- Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.
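The pagination and API-versioning items above follow well-known patterns. Sketched here in Python for brevity (the posting's stack is TypeScript/FastAPI), this is a minimal keyset-pagination helper with opaque cursors; all names and shapes are illustrative assumptions.

```python
import base64
import json

def encode_cursor(last_id: int) -> str:
    """Opaque cursor: clients treat it as a token, never as an offset."""
    return base64.urlsafe_b64encode(json.dumps({"after": last_id}).encode()).decode()

def decode_cursor(cursor: str) -> int:
    return json.loads(base64.urlsafe_b64decode(cursor))["after"]

def paginate(items, cursor=None, limit=2):
    """Keyset pagination over id-sorted items: stable under concurrent
    inserts, unlike offset-based paging."""
    after = decode_cursor(cursor) if cursor else 0
    page = [it for it in items if it["id"] > after][:limit]
    next_cursor = encode_cursor(page[-1]["id"]) if len(page) == limit else None
    return page, next_cursor
```

The design choice worth noting: because the cursor encodes the last seen key rather than a row offset, newly inserted rows never shift pages a client has already fetched.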
Bidgely is seeking an outstanding and deeply technical Principal Engineer / Sr. Principal Engineer / Architect to lead the architecture and evolution of our next-generation data and platform infrastructure. This is a senior IC role for someone who loves solving complex problems at scale, thrives in high-ownership environments, and influences engineering direction across teams.
You will be instrumental in designing scalable and resilient platform components that can handle trillions of data points, integrate machine learning pipelines, and support advanced energy analytics. As we evolve our systems for the future of clean energy, you will play a critical role in shaping the platform that powers all Bidgely products.
Responsibilities
- Architect & Design: Lead the end-to-end architecture of core platform components – from ingestion pipelines to ML orchestration and serving layers. Architect for scale (200Bn+ daily data points), performance, and flexibility.
- Technical Leadership: Act as a thought leader and trusted advisor for engineering teams. Review designs, guide critical decisions, and set high standards for software engineering excellence.
- Platform Evolution: Define and evolve the platform’s vision, making key choices in data processing, storage, orchestration, and cloud-native patterns.
- Mentorship: Coach senior engineers and staff on architecture, engineering best practices, and system thinking. Foster a culture of engineering excellence and continuous improvement.
- Innovation & Research: Evaluate and experiment with emerging technologies (e.g., event-driven architectures, AI infrastructure, new cloud-native tools) to stay ahead of the curve.
- Cross-functional Collaboration: Partner with Engineering Managers, Product Managers, and Data Scientists to align platform capabilities with product needs.
- Non-functional Leadership: Ensure systems are secure, observable, resilient, performant, and cost-efficient. Drive excellence in areas like compliance, DevSecOps, and cloud cost optimization.
- GenAI Integration: Explore and drive adoption of Generative AI to enhance developer productivity, platform intelligence, and automation of repetitive engineering tasks.
Requirements:
- 8+ years of experience in backend/platform architecture roles, ideally with experience at scale.
- Deep expertise in distributed systems, data engineering stacks (Kafka, Spark, HDFS, NoSQL DBs like Cassandra/ElasticSearch), and cloud-native infrastructure (AWS, GCP, or Azure).
- Proven ability to architect high-throughput, low-latency systems with batch + real-time processing.
- Experience designing and implementing DAG-based data processing and orchestration systems.
- Proficient in Java (Spring Boot, REST), and comfortable with infrastructure-as-code and CI/CD practices.
- Strong understanding of non-functional areas: security, scalability, observability, and compliance.
- Exceptional problem-solving skills and a data-driven approach to decision-making.
- Excellent communication and collaboration skills with the ability to influence at all levels.
- Prior experience working in a SaaS environment is a strong plus.
- Experience with GenAI tools or frameworks (e.g., LLMs, embedding models, prompt engineering, RAG, Copilot-like integrations) to accelerate engineering workflows or enhance platform intelligence is highly desirable.
Job Title: PHP Coordinator / Laravel Developer
Experience: 4+ Years
Work Mode: Work From Home (WFH)
Working Days: 5 Days
Job Description:
We are looking for an experienced PHP Coordinator / Laravel Developer to join our team. The ideal candidate should have strong expertise in PHP and the Laravel framework, along with the ability to coordinate and manage development tasks effectively as a Team Lead.
Key Responsibilities:
- Develop, test, and maintain web applications using PHP and Laravel.
- Coordinate with team members to ensure timely project delivery.
- Write clean, secure, and efficient code.
- Troubleshoot, debug, and optimize existing applications.
- Collaborate with stakeholders to gather and analyze requirements.
Required Skills:
- Strong experience in PHP and Laravel framework.
- Good understanding of MySQL, RESTful APIs, and cloud platforms (AWS/Azure/GCP).
- Familiarity with front-end technologies (HTML, CSS, JavaScript).
- Excellent communication and coordination skills.
- Ability to work independently in a remote environment.
Role & responsibilities
- Develop and maintain server-side applications using Golang.
- Design and implement scalable, secure, and maintainable RESTful APIs and microservices.
- Collaborate with front-end developers to integrate user-facing elements with server-side logic
- Optimize applications for performance, reliability, and scalability.
- Write clean, efficient, and reusable code that adheres to best practices.
Preferred candidate profile
- Minimum 5 years of working experience in Golang development.
- Proven experience in developing RESTful APIs and microservices.
- Familiarity with cloud platforms like AWS, GCP, or Azure.
- Familiarity with CI/CD pipelines and DevOps practices
About the role:
The SDE 2 - Backend will work as part of the Digitization and Automation team to help Sun King design, develop, and implement intelligent, tech-enabled solutions that help solve a large variety of our business problems. We are looking for candidates with an affinity for technology and automation, curiosity about advancements in products, and strong coding skills for our in-house software development team.
What you will be expected to do:
- Design and build applications/systems based on wireframes and product requirements documents
- Design and develop conceptual and physical data models to meet application requirements.
- Identify and correct bottlenecks/bugs according to operational requirements
- Focus on scalability, performance, service robustness, and cost trade-offs.
- Create prototypes and proof-of-concepts for iterative development.
- Take complete ownership of projects (end to end) and their development cycle
- Mentoring and guiding team members
- Unit test code for robustness, including edge cases, usability and general reliability
- Integrate existing tools and business systems (in-house tools or business tools like ticketing software and communication tools) with external services
- Coordinate with the Product Manager, development team & business analysts
You might be a strong candidate if you have/are:
- Development experience: 3 – 5 years
- Should be very strong in problem-solving, data structures, and algorithms.
- Deep knowledge of OOP concepts and programming skills in Core Java and the Spring Boot Framework
- Strong Experience in SQL
- Experience in web service development and integration (SOAP, REST, JSON, XML)
- Understanding of code versioning tools (e.g., git)
- Experience in Agile/Scrum development process and tools
- Experience in Microservice architecture
- Hands-on experience in AWS RDS, EC2, S3 and deployments
Good to have:
- Knowledge on messaging systems RabbitMQ, Kafka.
- Knowledge of Python
- Container-based application deployment (Docker or equivalent)
- Willing to learn new technologies and implement them in products
What Sun King offers:
- Professional growth in a dynamic, rapidly expanding, high-social-impact industry
- An open-minded, collaborative culture made up of enthusiastic colleagues who are driven by the challenge of innovation towards profound impact on people and the planet.
- A truly multicultural experience: you will have the chance to work with and learn from people from different geographies, nationalities, and backgrounds.
- Structured, tailored learning and development programs that help you become a better leader, manager, and professional through the Sun King Center for Leadership.
About Sun King
Sun King is a leading off-grid solar energy company providing affordable, reliable electricity to 1.8 billion people without grid access. Operating across Africa and Asia, Sun King has connected over 20 million homes, adding 200,000 homes monthly.
Through a ‘pay-as-you-go’ model, customers make small daily payments (as low as $0.11) via mobile money or cash, eventually owning their solar equipment and saving on costly kerosene or diesel. To date, Sun King products have saved customers over $4 billion.
With 28,000 field agents and embedded electronics that regulate usage based on payments, Sun King ensures seamless energy access. Its products range from home lighting and phone charging systems to solar inverters capable of powering high-energy appliances.
Sun King is expanding into clean cooking, electric mobility, and entertainment while serving a wide range of income segments.
The company employs 2,800 staff across 12 countries, with women representing 44% of the workforce, and expertise spanning product design, data science, logistics, sales, software, and operations.
About the Role
OpenIAM is looking for a Solutions Architect (IAM) to support our enterprise customers in the successful deployment and integration of OpenIAM’s platform. This role is highly technical and customer-facing, bridging architecture design, migration, troubleshooting, and best-practice advisory.
Responsibilities
- Partner with CSMs to deliver technical onboarding and solution design for enterprise accounts.
- Lead integrations with enterprise directories, cloud services, databases, and custom applications.
- Provide migration planning and execution support for legacy IAM systems.
- Troubleshoot complex issues, including connector logic, data sync, performance, and HA setup.
- Advise customers on compliance/audit configuration (e.g., SoD, certifications, reporting).
- Work with Engineering to resolve escalations and influence the product roadmap.
- Deliver technical workshops, architecture sessions, and training to customer teams.
Qualifications
- 5+ years in IAM architecture, engineering, or consulting (e.g., OpenIAM, SailPoint, ForgeRock, Ping, Okta).
- Deep technical knowledge of IAM standards and protocols (SAML, OIDC, OAuth2, SCIM, LDAP).
- Hands-on experience with provisioning/deprovisioning, connectors, APIs, and scripting (Groovy, Java, or similar).
- Strong knowledge of enterprise IT environments (Windows, Linux, AD, databases, Kubernetes, cloud).
- Excellent problem-solving and troubleshooting skills.
- Comfortable presenting to technical and business stakeholders.
- Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
Why Join OpenIAM
- Work on challenging IAM deployments at scale with leading global enterprises.
- Shape the success of customers while influencing the evolution of our platform.
- Competitive compensation, benefits, and growth opportunities.
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.
About the Role:
NeoGenCode is looking for a highly skilled Lead Java Fullstack Developer to join its agile development team. The ideal candidate will bring strong expertise in Java, Angular, and Spring Boot, with hands-on experience in developing and deploying enterprise-level microservices in cloud environments (especially AWS). The candidate will be expected to lead technically while remaining hands-on, guiding junior developers and ensuring top-quality code delivery.
Key Responsibilities:
- Act as a technical lead and contributor in a cross-functional agile team.
- Analyze, design, develop, and deploy web applications using Java, Angular, and Spring Boot.
- Lead sprint activities, task allocation, and code reviews to ensure quality and timely delivery.
- Design and implement microservices-based architecture and RESTful APIs.
- Ensure performance, security, scalability, and maintainability of the applications.
- Maintain CI/CD pipelines using GitHub, Jenkins, and related tools.
- Collaborate with business analysts, product managers, and UX teams for requirement gathering and refinement.
Technical Requirements:
✅ Core Technologies:
- Java (Java 21 preferred) – minimum 5+ years of hands-on experience
- Spring Boot (MVC, Spring Data, Hibernate) – strong hands-on experience
- Angular (Angular 19 preferred) – minimum 2+ years of hands-on experience
✅ Cloud & DevOps:
- Experience in AWS ecosystem (especially S3, Secrets Manager, CloudWatch)
- Experience with Docker
- Familiarity with CI/CD tools (Jenkins, GitHub, etc.)
✅ Database:
- PostgreSQL or other RDBMS
- Familiarity with NoSQL databases is a plus
✅ Frontend Proficiency:
- HTML5, CSS3, JavaScript, AJAX, JSON
- Angular concepts like Interceptors, Pipes, Directives, Decorators
- Strong debugging and performance optimization skills
✅ Testing & Tools:
- Unit testing using Jasmine/Karma or Jest is a plus
- Experience with tools like JIRA, Azure DevOps, Confluence
Soft Skills & Other Expectations:
- Excellent verbal and written communication skills
- Prior consulting or client-facing experience is a big plus
- Strong analytical, problem-solving, and leadership abilities
- Familiarity with Agile/Scrum methodology
- Self-motivated and adaptable with a strong desire to learn and grow
Job Description for PostgreSQL Lead
Job Title: PostgreSQL Lead
Company: Mydbops
About us:
As a seasoned industry leader for 8 years in open-source database management, we specialise in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Our Customer Account Management team is vital in ensuring client satisfaction and loyalty.
Role Overview
As the PostgreSQL Lead, you will own the design, implementation, and operational excellence of PostgreSQL environments. You’ll lead technical decision-making, mentor the team, interface with customers, and drive key initiatives covering performance tuning, HA architectures, migrations, and cloud deployments.
Key Responsibilities
- Lead PostgreSQL production environments: architecture, stability, performance, and scalability
- Oversee complex troubleshooting, query optimization, and performance analysis
- Architect and maintain HA/DR systems (e.g., Streaming Replication, Patroni, repmgr)
- Define backup, recovery, replication, and failover protocols
- Guide DB migrations, patches, and upgrades across environments
- Collaborate with DevOps and cloud teams for infrastructure automation
- Use monitoring (pg_stat_statements, PMM, Nagios or any monitoring stack) to proactively resolve issues
- Provide technical mentorship—conduct peer reviews, upskill, and onboard junior DBAs
- Lead customer interactions: understand requirements, design solutions, and present proposals
- Drive process improvements and establish database best practices
Requirements
- Experience: 4-5 years in PostgreSQL administration, with 2+ years in a leadership role
- Performance Optimization: Expert in query tuning, indexing strategies, partitioning, and execution plan analysis.
- Extension Management: Proficient with critical PostgreSQL extensions including:
- pg_stat_statements – query performance tracking
- pg_partman – partition maintenance
- pg_repack – online table reorganization
- uuid-ossp – UUID generation
- pg_cron – native job scheduling
- auto_explain – capturing costly queries
- Backup & Recovery: Deep experience with pgBackRest, Barman, and implementing Point-in-Time Recovery (PITR).
- High Availability & Clustering: Proven expertise in configuring and managing HA environments using Patroni, repmgr, and streaming replication.
- Cloud Platforms: Strong operational knowledge of AWS RDS and Aurora PostgreSQL, including parameter tuning, snapshot management, and performance insights.
- Scripting & Automation: Skilled in Linux system administration, with advanced scripting capabilities in Bash and Python.
- Monitoring & Observability: Familiar with pg_stat_statements, PMM, Nagios, and building custom dashboards using Grafana and Prometheus.
- Leadership & Collaboration: Strong problem-solving skills, effective communication with stakeholders, and experience leading database reliability and automation initiatives.
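The extension list above leans on pg_partman for partition maintenance. As a rough, hedged illustration of what it automates, the Python sketch below generates the monthly range-partition DDL that pg_partman would create on a schedule (the `events` table name is illustrative, not from the post):

```python
from datetime import date

def monthly_partition_ddl(parent: str, month_start: date) -> str:
    """Build the DDL for one monthly range partition of `parent`,
    i.e. the kind of statement pg_partman generates on a schedule."""
    # The upper bound of the range is the first day of the following month.
    if month_start.month == 12:
        upper = date(month_start.year + 1, 1, 1)
    else:
        upper = date(month_start.year, month_start.month + 1, 1)
    child = f"{parent}_p{month_start:%Y_%m}"
    return (
        f"CREATE TABLE {child} PARTITION OF {parent} "
        f"FOR VALUES FROM ('{month_start}') TO ('{upper}');"
    )

print(monthly_partition_ddl("events", date(2024, 12, 1)))
```

In practice pg_partman also handles retention and pre-creating future partitions; this sketch only shows the core DDL it emits.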
Preferred Qualifications
- Bachelor’s/Master’s degree in CS, Engineering, or equivalent
- PostgreSQL certifications (e.g., EDB, AWS)
- Consulting/service delivery experience in managed services or support roles
- Experience in large-scale migrations and modernization projects
- Exposure to multi-cloud environments and DBaaS platforms
What We Offer:
- Competitive salary and benefits package.
- Opportunity to work with a dynamic and innovative team.
- Professional growth and development opportunities.
- Collaborative and inclusive work environment.
Job Details:
- Work time: General shift
- Working days: 5 Days
- Mode of Employment - Work From Home
- Experience - 4-5 years
Supercharge Your Career as a Sr. Dev Engg – Java at Technoidentity!
Are you ready to solve people challenges that fuel business growth? At Technoidentity, we’re a Data+AI product engineering company building cutting-edge solutions in the FinTech domain for over 13 years, and we’re expanding globally. It’s the perfect time to join our team of tech innovators and leave your mark.
What’s in it for You?
We are looking for a skilled Java Backend Engineer who is passionate about building scalable, high-performance applications. The ideal candidate should have strong expertise in Java, data structures, databases, and modern frameworks, along with experience deploying solutions on AWS and managing CI/CD pipelines.
What Will You Be Doing?
- Design, develop, and maintain backend services using Java and Spring frameworks.
- Implement efficient algorithms and data structures for complex problem-solving.
- Integrate and manage relational databases (preferably RDS) with Java applications.
- Deploy and manage services on AWS infrastructure (EC2, SQS, RDS, etc.).
- Implement and maintain CI/CD pipelines (Jenkins or similar) for seamless delivery.
- Collaborate with cross-functional teams to design scalable solutions.
- Ensure code quality, performance, and security best practices.
- Contribute to code reviews, documentation, and knowledge sharing.
What Makes You the Perfect Fit?
- Strong proficiency in Java and data structures/algorithms.
- Hands-on experience with Spring frameworks (Spring Boot, Spring MVC, Spring Data, etc.).
- Proficiency in working with databases (SQL, schema design, query optimization).
- Practical experience with AWS services (EC2, SQS, RDS, IAM, etc.).
- Experience in setting up and maintaining CI/CD pipelines (Jenkins, GitHub Actions).
- Good understanding of version control using Git/GitHub.
- Solid problem-solving and debugging skills.
- Excellent communication and teamwork skills.
Preferred Qualifications
- Experience with microservices architecture.
- Exposure to containerization (Docker, Kubernetes).
- Knowledge of monitoring and logging tools (CloudWatch, ELK, Prometheus, etc.).
We're Hiring: **Senior Developer (AI & Machine Learning)** 🚀
🔧 **Tech Stack**: Python, Neo4j, FAISS, LangChain, React.js, AWS/GCP/Azure
🧠 **Role**: AI/ML development, backend architecture, cloud deployment
🌍 **Location**: Remote (India)
💼 **Experience**: 5-10 years
If you're passionate about making an impact in EdTech and want to help shape the future of learning with AI, we want to hear from you!
About The Role
Location: Remote / Hybrid (India Preferred)
Experience: 3–10 years of relevant experience
Reports to: CEO/Co-founder
Type: Full-Time
Tech Stack: Python, Frappe, LangChain, Neo4j, FAISS, React.js, AWS/GCP/Azure
What You’ll Do
● AI/ML Development
○ Build AI-powered student learning insights using Graph Databases (Neo4j), FAISS, Sentence Transformers, and OpenCV (ResNet-50).
○ Develop Retrieval-Augmented Generation (RAG) and reinforcement learning models to personalize content and assessment.
○ Research and implement multi-modal generation (text, image, voice) for highly personalized learning interactions.
○ Fine-tune and optimize transformer-based models (e.g., GPT, BERT) to deliver contextual, culturally relevant learning experiences.
● Backend & API Development
○ Architect and build a scalable backend using Frappe, FastAPI, or Django.
○ Develop REST and GraphQL APIs to connect PAL with TAP’s LMS, Glific, and content repositories.
○ Integrate Redis and Celery to manage background inference processes.
○ Connect with Glific APIs to power our AI-driven WhatsApp learning chatbot.
● Frontend & User Experience (Optional)
○ Develop a React.js-based student dashboard for real-time learning insights and content delivery.
○ Collaborate closely with our UX team to ensure intuitive and accessible design.
● Cloud & Deployment (DevOps)
○ Deploy and scale models across cloud platforms: AWS, GCP, or Azure.
○ Implement CI/CD pipelines (Jenkins, Cypress.io) to ensure continuous delivery and testing.
○ Use Docker and Kubernetes for managing containerized deployments.
● AI-Driven Security & Automation
○ Ensure data privacy and compliance by embedding AI-powered security checks.
○ Automate personalized content delivery through NLP chatbots and AI workflows.
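The Redis/Celery item above describes a broker-backed background-inference pattern: requests are enqueued, a worker process runs the model, and results are collected by job id. As a broker-free sketch of that shape (stdlib only, with `fake_inference` standing in for a real model call; no Celery or Redis assumed installed):

```python
import queue
import threading

# Broker-free stand-in for the Celery + Redis pattern: jobs go on a queue,
# a background worker thread runs inference, results land in a dict.
jobs = queue.Queue()
results = {}

def fake_inference(prompt):
    # Placeholder for a real model call (e.g. a fine-tuned transformer).
    return prompt.upper()

def worker():
    while True:
        item = jobs.get()
        if item is None:  # sentinel tells the worker to shut down
            break
        job_id, prompt = item
        results[job_id] = fake_inference(prompt)

t = threading.Thread(target=worker, daemon=True)
t.start()
jobs.put(("job-1", "hello"))
jobs.put(None)  # signal shutdown once the queue drains
t.join()
print(results["job-1"])  # HELLO
```

With Celery, the queue and worker loop are replaced by a `@app.task` decorator and a Redis broker URL, but the request/worker/result split is the same.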
Who You Are
● You’ve worked on AI/ML systems at scale, especially in EdTech, SaaS, or social impact settings
● You’ve built or fine-tuned models with GPT/BERT, FAISS, LangChain, or custom embeddings
● You know how to move between backend complexity and real-world deployment
● You’ve used tools like Zapier, N8N, or FastAPI in production
● You don’t just write code — you write roadmaps, define structure, and love cross-functional collaboration
● Bonus: You’ve dabbled in adaptive learning, NLP in regional languages, or multimodal AI
What You Can Expect?
● Real ownership – You’ll lead architecture, experimentation, and rollout
● Deep learning – Work with experienced leaders across product, pedagogy, and AI
● Remote flexibility – Work from anywhere, build for everywhere
● Bold pace, clear values – We move fast, think big, and always center the child
● Future leadership track – Opportunity to grow into a Tech/AI Lead role as we scale
● Full transparency – Competitive salary, equity potential, and clarity on what’s next
Full Stack Engineer (Frontend Strong, Backend Proficient)
5-10 Years Experience
Contract: 6months+extendable
Location: Remote
Technical Requirements
Frontend Expertise (Strong)
*Need at least 4 years in React web development, Node, and AI.*
● Deep proficiency in React, Next.js, TypeScript
● Experience with state management (Redux, Context API)
● Frontend testing expertise (Jest, Cypress)
● Proven track record of achieving high Lighthouse performance scores
Backend Proficiency
● Solid experience with Node.js, NestJS (preferred), or ExpressJS
● Database management (SQL, NoSQL)
● Cloud technologies experience (AWS, Azure)
● Understanding of OpenAI and AI integration capabilities (bonus)
Full Stack Integration
● Excellent ability to manage and troubleshoot integration issues between frontend and backend systems
● Experience designing cohesive systems with proper separation of concerns
Are you an experienced Infrastructure/DevOps Engineer looking for an exciting remote opportunity to design, automate, and scale modern cloud environments? We’re seeking a skilled engineer with strong expertise in Terraform and DevOps practices to join our growing team. If you’re passionate about automation, cloud infrastructure, and CI/CD pipelines, we’d love to hear from you!
Key Responsibilities:
- Design, implement, and manage cloud infrastructure using Terraform (IaC).
- Build and maintain CI/CD pipelines for seamless application deployment.
- Ensure scalability, reliability, and security of cloud-based systems.
- Collaborate with developers and QA to optimize environments and workflows.
- Automate infrastructure provisioning, monitoring, and scaling.
- Troubleshoot infrastructure and deployment issues quickly and effectively.
- Stay up to date with emerging DevOps tools, practices, and cloud technologies.
Requirements:
- 5+ years of professional experience in DevOps or Infrastructure Engineering.
- Strong expertise in Terraform and Infrastructure as Code (IaC).
- Hands-on experience with AWS / Azure / GCP (at least one cloud platform).
- Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, etc.).
- Experience with Docker, Kubernetes, and container orchestration.
- Strong knowledge of Linux systems, networking, and security best practices.
- Familiarity with monitoring & logging tools (Prometheus, Grafana, ELK, etc.).
- Scripting experience (Bash, Python, or similar).
- Excellent problem-solving skills and ability to work in remote teams.
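Terraform also accepts a JSON configuration syntax alongside HCL, which makes the IaC requirement above easy to picture: a generated `main.tf.json` is read by `terraform plan` just like `.tf` files. A minimal sketch (the bucket name and tags are illustrative, not from the post):

```python
import json

# Terraform reads *.tf.json files using its JSON configuration syntax,
# so IaC can be rendered programmatically. Names below are illustrative.
config = {
    "terraform": {"required_providers": {"aws": {"source": "hashicorp/aws"}}},
    "resource": {
        "aws_s3_bucket": {
            "artifacts": {
                "bucket": "example-ci-artifacts",
                "tags": {"managed_by": "terraform"},
            }
        }
    },
}

rendered = json.dumps(config, indent=2)
print(rendered)
```

In day-to-day work you would write HCL directly; rendering JSON like this is mostly useful when another tool generates the configuration.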
Perks and Benefits:
- Competitive salary with remote work flexibility.
- Opportunity to work with global clients on modern infrastructure.
- Growth and learning opportunities in cutting-edge DevOps practices.
- Collaborative team culture that values automation and innovation.
Job Title: Backend Developer (Full Time)
Location: Remote
Interview: Virtual Interview
Experience Required: 3+ Years
Backend / API Development (About the Role)
- Strong proficiency in Python (FastAPI) or Node.js (Express) (Python preferred).
- Proven experience in designing, developing, and integrating APIs for production-grade applications.
- Hands-on experience deploying to serverless platforms such as Cloudflare Workers, Firebase Functions, or Google Cloud Functions.
- Solid understanding of Google Cloud backend services (Cloud Run, Cloud Functions, Secret Manager, IAM roles).
- Expertise in API key and secrets management, ensuring compliance with security best practices.
- Skilled in secure API development, including HTTPS, authentication/authorization, input validation, and rate limiting.
- Track record of delivering scalable, high-quality backend systems through impactful projects in production environments.
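The rate-limiting requirement above is commonly implemented as a token bucket, which most API gateways apply per API key. A minimal, self-contained sketch (capacity and refill rate are illustrative):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `capacity` is the burst size,
    `rate` is the steady refill in tokens per second."""

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, rate=0.0)  # no refill: only the burst passes
print([bucket.allow() for _ in range(3)])   # [True, True, False]
```

In a FastAPI service this check would typically live in a dependency or middleware keyed by the caller's API key, returning HTTP 429 when `allow()` is False.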
Job Description For Database Engineer
Job Title: Database Engineer(MySQL)
Company: Mydbops
About Us:
As a seasoned industry leader for 8 years in open-source database management, we specialise in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Mydbops takes pride in being a PCI DSS-certified and ISO-certified company, reflecting our unwavering commitment to maintaining the highest standards of security and operational excellence.
Role Overview:
You will work in a fast-paced environment where we are responsible for the most critical systems. Our external teams (customers) count on us to keep their MySQL databases running, and we are vital to the success of their business. You will troubleshoot and resolve customer issues related to the DB system's availability and performance. You will develop relationships with customers, comprehend and fulfil their needs, and maintain their satisfaction through regular communication and engagement with their environments.
Responsibility :
- The major focus is on handling all the alerts in MDS MySQL.
- Optimising the complex SQL Queries.
- On-Demand Query Optimization.
- Monthly query optimization assignments (based on customer list).
- Get into Client calls on Performance issues and other tasks.
- Escalating to DBE-2s and DBREs as needed.
- Assisting and mentoring the ADBEs.
- Efficient troubleshooting using MySQL MDS Internal tools
- Writing the technical and process documents.
- Managing the client-based operations documents to ease scheduled & repeated tasks for the database engineering team.
Skills Required :
- Teamwork in a fast-paced environment.
- Alarm resolution expert
- Good command of SQL.
- Linux commands for day-to-day operations.
- Kernel tuning and Linux performance tools
- Expertise in MySQL backup tools
- mysqldump
- mydumper/myloader
- Xtrabackup.
- mariabackup
- Knowledge on AWS basic operations
- log collections
- S3 Push
- AWS Workspace
- Deep Skills in query optimization, index tuning and implementing the database features for SQL optimisation.
- Polite, friendly and professional; this position requires significant customer interaction and teamwork.
- Process orientation and teamwork play a pivotal role.
- Excellent and strong written and spoken English Communication Skills.
- Strong work ethic and accepts feedback from others.
- Strong understanding of MySQL architecture, storage engines (InnoDB, MyISAM), and replication (binlog-based and GTID).
- Good understanding of database security practices (users, roles, encryption, authentication mechanisms).
- Strong problem-solving and troubleshooting skills.
- Experience with monitoring tools such as PMM (Percona Monitoring and Management), Prometheus, Grafana
Good to Have (Optional but Preferred): Experience with AWS RDS/Aurora MySQL.
Why Join Us:
- Opportunity to work in a dynamic and growing industry.
- Learning and development opportunities to enhance your career.
- A collaborative work environment with a supportive team.
Job Details:
- Job Type: Full-time opportunity
- Work Days - 6 days
- Work time: Rotational shift
- Mode of Employment - Work From Home
About Us
We are building the next generation of AI-powered products and platforms that redefine how businesses digitize, automate, and scale. Our flagship solutions span eCommerce, financial services, and enterprise automation, with an emerging focus on commercializing cutting-edge AI services across Grok, OpenAI, and the Azure Cloud ecosystem.
Role Overview
We are seeking a highly skilled Full-Stack Developer with a strong foundation in e-commerce product development and deep expertise in backend engineering using Python. The ideal candidate is passionate about designing scalable systems, has hands-on experience with cloud-native architectures, and is eager to drive the commercialization of AI-driven services and platforms.
Key Responsibilities
- Design, build, and scale full-stack applications with a strong emphasis on backend services (Python, Django/FastAPI/Flask).
- Lead development of eCommerce features including product catalogs, payments, order management, and personalized customer experiences.
- Integrate and operationalize AI services across Grok, OpenAI APIs, and Azure AI services to deliver intelligent workflows and user experiences.
- Build and maintain secure, scalable APIs and data pipelines for real-time analytics and automation.
- Collaborate with product, design, and AI research teams to bring experimental features into production.
- Ensure systems are cloud-ready (Azure preferred) with CI/CD, containerization (Docker/Kubernetes), and strong monitoring practices.
- Contribute to frontend development (React, Angular, or Vue) to deliver seamless, responsive, and intuitive user experiences.
- Champion best practices in coding, testing, DevOps, and Responsible AI integration.
Required Skills & Experience
- 5+ years of professional full-stack development experience.
- Proven track record in eCommerce product development (payments, cart, checkout, multi-tenant stores).
- Strong backend expertise in Python (Django, FastAPI, Flask).
- Experience with cloud services (Azure preferred; AWS/GCP is a plus).
- Hands-on with AI/ML integration using APIs like OpenAI, Grok, Azure Cognitive Services.
- Solid understanding of databases (SQL & NoSQL), caching, and API design.
- Familiarity with frontend frameworks such as React, Angular, or Vue.
- Experience with DevOps practices: GitHub/GitLab, CI/CD, Docker, Kubernetes.
- Strong problem-solving skills, adaptability, and a product-first mindset.
Nice to Have
- Knowledge of vector databases, RAG pipelines, and LLM fine-tuning.
- Experience in scalable SaaS architectures and subscription platforms.
- Familiarity with C2PA, identity security, or compliance-driven development.
What We Offer
- Opportunity to shape the commercialization of AI-driven products in fast-growing markets.
- A high-impact role with autonomy and visibility.
- Competitive compensation, equity opportunities, and growth into leadership roles.
- Collaborative environment working with seasoned entrepreneurs, AI researchers, and cloud architects.
We are hiring freelancers to work on advanced Data & AI projects using Databricks. If you are passionate about cloud platforms, machine learning, data engineering, or architecture, and want to work with cutting-edge tools on real-world challenges, this is the opportunity for you!
✅ Key Details
- Work Type: Freelance / Contract
- Location: Remote
- Time Zones: IST / EST only
- Domain: Data & AI, Cloud, Big Data, Machine Learning
- Collaboration: Work with industry leaders on innovative projects
🔹 Open Roles
1. Databricks – Senior Consultant
- Skills: Data Warehousing, Python, Java, Scala, ETL, SQL, AWS, GCP, Azure
- Experience: 6+ years
2. Databricks – ML Engineer
- Skills: CI/CD, MLOps, Machine Learning, Spark, Hadoop
- Experience: 4+ years
3. Databricks – Solution Architect
- Skills: Azure, GCP, AWS, CI/CD, MLOps
- Experience: 7+ years
4. Databricks – Solution Consultant
- Skills: SQL, Spark, BigQuery, Python, Scala
- Experience: 2+ years
✅ What We Offer
- Opportunity to work with top-tier professionals and clients
- Exposure to cutting-edge technologies and real-world data challenges
- Flexible remote work environment aligned with IST / EST time zones
- Competitive compensation and growth opportunities
📌 Skills We Value
Cloud Computing | Data Warehousing | Python | Java | Scala | ETL | SQL | AWS | GCP | Azure | CI/CD | MLOps | Machine Learning | Spark
Job Title: AI Developer/Engineer
Location: Remote
Employment Type: Full-time
About the Organization
We are a cutting-edge AI-powered startup that is revolutionizing data management and content generation. Our platform harnesses the power of generative AI and natural language processing to turn unstructured data into actionable insights, providing businesses with real-time, intelligent content and driving operational efficiency. As we scale, we are looking for an experienced lead architect to help design and build our next-generation AI-driven solutions.
About the Role
We are seeking an AI Developer to design, fine-tune, and deploy advanced Large Language Models (LLMs) and AI agents across healthcare and SMB workflows. You will work with cutting-edge technologies—OpenAI, Claude, LLaMA, Gemini, Grok—building robust pipelines and scalable solutions that directly impact real-world hospital use cases such as risk calculators, clinical protocol optimization, and intelligent decision support.
Key Responsibilities
- Build, fine-tune, and customize LLMs and AI agents for production-grade workflows
- Leverage Node.js for backend development and integration with various cloud services.
- Use AI tools and AI prompts to develop automated processes that enhance data management and client offerings
- Drive the evolution of deployment methodologies, ensuring that AI systems are continuously optimized, tested, and delivered in production-ready environments.
- Stay up-to-date with emerging AI technologies, cloud platforms, and development methodologies to continually evolve the platform’s capabilities.
- Integrate and manage vector databases such as FAISS and Pinecone.
- Ensure scalability, performance, and compliance in all deployed AI systems.
Required Qualifications
- 2–3 years of hands-on experience in AI/ML development or full-stack AI integration.
- Proven expertise in building Generative AI models and AI-powered applications, especially in a cloud environment.
- Strong experience with multi-cloud infrastructure and platforms.
- Proficiency with Node.js and modern backend frameworks for developing scalable solutions.
- In-depth understanding of AI prompts, natural language processing, and agent-based systems for enhancing decision-making processes.
- Familiarity with AI tools for model training, data processing, and real-time inference tasks.
- Experience working with hybrid cloud solutions, including private and public cloud integration for AI workloads.
- Strong problem-solving skills and a passion for innovation in AI and cloud technologies
- Knowledge of Agile delivery methodology.
- Experience with CI/CD pipeline deployment, using JIRA and GitHub for code deployment.
- Strong experience with LLMs, prompt engineering, and fine-tuning.
- Knowledge of vector databases (FAISS, Pinecone, Milvus, or similar).
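Vector databases such as FAISS, Pinecone, and Milvus ultimately perform nearest-neighbor search over embeddings. As a dependency-free sketch of just the retrieval step (document ids and the 3-d "embeddings" are made up for illustration):

```python
import math

# Dependency-free stand-in for the retrieval step a vector database
# (FAISS, Pinecone, Milvus) performs: cosine similarity over embeddings.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query, index, k=1):
    # Rank every stored vector by similarity to the query; a real index
    # uses approximate nearest-neighbor search instead of a full scan.
    ranked = sorted(index, key=lambda doc_id: cosine(query, index[doc_id]), reverse=True)
    return ranked[:k]

index = {
    "doc_cardiology": [0.9, 0.1, 0.0],   # illustrative 3-d "embeddings"
    "doc_pulmonology": [0.1, 0.9, 0.0],
}
print(top_k([1.0, 0.0, 0.0], index))  # ['doc_cardiology']
```

The value a real vector database adds over this brute-force scan is indexing (e.g. IVF or HNSW) so retrieval stays fast at millions of vectors.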
Nice to Have
- Experience in healthcare AI, digital health, or clinical applications.
- Exposure to multi-agent AI frameworks.
What We Offer
- Flexible working hours.
- Collaborative, innovation-driven work culture.
- Growth opportunities in a rapidly evolving AI-first environment.
We are inviting a Fullstack Developer who excels at building modern web and mobile applications, with deep backend experience in Node.js and strong frontend proficiency in Next.js and React Native (Expo). You’ll work with a team of designers, product leads, and developers to bring impactful climate tech to life.
Location: Mumbai & Vicinity (India)
Responsibilities:
- Design, develop, and maintain scalable backend services using Node.js.
- Develop responsive and high-performance web applications using Next.js.
- Build and deploy mobile applications using React Native and Expo.
- Collaborate with UX Designers, Architects, and other Developers to implement full-stack web and mobile solutions.
- Perform data modeling and database management using PostgreSQL and Prisma.
- Ensure the performance, quality, and responsiveness of applications.
- Troubleshoot and debug applications to optimize performance.
- Write clean, maintainable, and well-documented code.
- Participate in code reviews and contribute to continuous improvement of development processes.
- Apply AI-enhanced developer tools like Cursor, Copilot, or similar to boost development velocity and code quality.
Required Skills and Experience:
- 2+ years of experience in full-stack JavaScript development.
- Strong proficiency in backend development using Node.js.
- Demonstrated experience with frontend technologies such as Next.js and React Native.
- Experience with PostgreSQL and Prisma for database management and data modeling.
- Experience with deploying React Native applications using Expo.
- Solid understanding of RESTful APIs and how to integrate them with front-end applications.
- Familiarity with modern JavaScript (ES6+), HTML5, and CSS3.
- Strong understanding of software development best practices.
- Proficiency in version control systems such as Git.
Additional Relevant Skills and Experience:
- Experience with map modules, such as ArcGIS, and Google Earth Engine.
- Experience with TypeScript.
- Experience with CI/CD pipelines.
- Understanding of server-side rendering and static site generation.
- Good eye for design and UX principles.
- Experience working in Agile/Scrum environments.
Good to Have:
- Experience with WebSockets and real-time applications.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with Docker and containerized applications.
- Knowledge of performance optimization techniques.
- Strong problem-solving skills and ability to work independently or as part of a team.
We Offer:
- Work on Open Source Projects
- Competitive Salary based on Location
- Flexible working hours
- 4 weeks of paid leave/year
- Work from home
Plant-for-the-Planet is a global, youth-led non-profit with a mission to restore ecosystems through tree planting and climate justice advocacy. Our tech team, spanning five continents, builds scalable, open-source tools to support environmental action at a global scale.
Learn more: https://www.plant-for-the-planet.org
We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Deploy and manage applications in the cloud (preferably AWS)
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
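Workflow orchestration of the kind Airflow provides boils down to running tasks in the dependency order of a DAG. As a minimal stdlib-only sketch of that idea (the task names are illustrative; a real pipeline would define these as Airflow operators):

```python
# Minimal sketch of DAG-ordered task execution, the core idea behind
# Airflow-style workflow orchestration. Task names are hypothetical.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "enrich": {"extract"},
    "load": {"transform", "enrich"},
}

def run(task: str, log: list[str]) -> None:
    """Placeholder for real work (e.g., a Spark job or an API call)."""
    log.append(task)

def execute(dag: dict[str, set[str]]) -> list[str]:
    """Run every task exactly once, respecting dependencies."""
    log: list[str] = []
    for task in TopologicalSorter(dag).static_order():
        run(task, log)
    return log

order = execute(dag)  # "extract" runs first, "load" runs last
```

`TopologicalSorter` guarantees a task only runs after all of its predecessors, which is exactly the contract an orchestrator enforces at much larger scale.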
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform brings “magic” to pre-construction, cutting workflows from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training on the tech stack, including the option of virtual training.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, giving you access to experienced people to discuss and brainstorm ideas with.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
Position: Lead Backend Engineer
Location: Remote
Experience: 10+ Years
Budget: 1.7 LPM
Employment Type: Contract
Required Skills & Qualifications:
- 10+ years of proven backend engineering experience.
- Strong proficiency in Python.
- Expertise in SQL (Postgres) and database optimization.
- Hands-on experience with OpenAI APIs.
- Strong command of FastAPI and microservices architecture.
- Solid knowledge of debugging, troubleshooting, and performance tuning.
Nice to Have:
- Experience with Agentic Systems or ability to quickly adopt them.
- Exposure to modern CI/CD pipelines, containerization (Docker/Kubernetes), and cloud platforms (AWS, Azure, or GCP).
Your Impact
- Build scalable backend services.
- Design, implement, and maintain databases, ensuring data integrity, security, and efficient retrieval.
- Implement the core logic that makes applications work, handling data processing, user requests, and system operations
- Contribute to the architecture and design of new product features
- Optimize systems for performance, scalability, and security
- Stay up-to-date with new technologies and frameworks, contributing to the advancement of software development practices
- Working closely with product managers and designers to turn ideas into reality and shape the product roadmap.
What skills do you need?
- 4+ years of experience in backend development, especially building robust APIs using Node.js, Express.js, and MySQL
- Strong command of JavaScript and understanding of its quirks and best practices
- Ability to think strategically when designing systems—not just how to build, but why
- Exposure to system design and interest in building scalable, high-availability systems
- Prior work on B2C applications with a focus on performance and user experience
- Ensure that applications can handle increasing loads and maintain performance, even under heavy traffic
- Work with complex queries for performing sophisticated data manipulation, analysis, and reporting.
- Knowledge of Sequelize, MongoDB and AWS would be an advantage.
- Experience in optimizing backend systems for speed and scalability.
About Us :
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values :
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement :
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.
Role: Senior Integration Engineer
Location: Remote/Delhi NCR
Experience: 4-8 years
Position Overview :
We are seeking a Senior Integration Engineer with deep expertise in building and managing integrations across Finance, ERP, and business systems. The ideal candidate will have both technical proficiency and strong business understanding, enabling them to translate finance team needs into robust, scalable, and fault-tolerant solutions.
Key Responsibilities:
- Design, develop, and maintain integrations between financial systems, ERPs, and related applications (e.g., expense management, commissions, accounting, sales)
- Gather requirements from Finance and Business stakeholders, analyze pain points, and translate them into effective integration solutions
- Build and support integrations using SOAP and REST APIs, ensuring reliability, scalability, and best practices for logging, error handling, and edge cases
- Develop, debug, and maintain workflows and automations in platforms such as Workato and Xactly Connect
- Support and troubleshoot NetSuite SuiteScript, Suiteflows, and related ERP customizations
- Write, optimize, and execute queries for Zuora (ZQL, Business Objects) and support invoice template customization (HTML)
- Implement integrations leveraging AWS (RDS, S3) and SFTP for secure and scalable data exchange
- Perform database operations and scripting using Python and JavaScript for transformation, validation, and automation tasks
- Provide functional support and debugging for finance tools such as Concur and Coupa
- Ensure integration architecture follows best practices for fault tolerance, monitoring, and maintainability
- Collaborate cross-functionally with Finance, Business, and IT teams to align technology solutions with business goals.
Qualifications:
- 3-8 years of experience in software/system integration with strong exposure to Finance and ERP systems
- Proven experience integrating ERP systems (e.g., NetSuite, Zuora, Coupa, Concur) with financial tools
- Strong understanding of finance and business processes: accounting, commissions, expense management, sales operations
- Hands-on experience with SOAP, REST APIs, Workato, AWS services, SFTP
- Working knowledge of NetSuite SuiteScript, Suiteflows, and Zuora queries (ZQL, Business Objects, invoice templates)
- Proficiency with databases, Python, JavaScript, and SQL query optimization
- Familiarity with Concur and Coupa functionality
- Strong debugging, problem-solving, and requirement-gathering skills
- Excellent communication skills and ability to work with cross-functional business teams.
Preferred Skills:
- Experience with integration design patterns and frameworks
- Exposure to CI/CD pipelines for integration deployments
- Knowledge of business and operations practices in financial systems and finance teams
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
Job Title: Sr. Node.js Developer
Location: Ahmedabad, Gujarat
Job Type: Full Time
Department: MEAN Stack
About Simform:
Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. Simform is a strong partner for Microsoft, AWS, Google Cloud, and Databricks. With a presence in 5+ countries, Simform primarily serves North America, the UK, and the Northern European market.
Simform takes pride in being one of the most reputed employers in the region, having created a thriving work culture with a high work-life balance that gives a sense of freedom and opportunity to grow.
Role Overview:
We are looking for a Sr. Node.js Developer who not only possesses extensive backend expertise but also demonstrates proficiency in system design, cloud services, microservices architecture, and containerization. Additionally, a good understanding of the frontend tech stack, enough to support frontend developers, is highly valued.
Key Responsibilities:
- Develop reusable, testable, maintainable, and scalable code with a focus on unit testing.
- Implement robust security measures and data protection mechanisms across projects.
- Champion the implementation of design patterns such as Test-Driven Development (TDD) and Behavior-Driven Development (BDD).
- Actively participate in architecture design sessions and sprint planning meetings, contributing valuable insights.
- Lead code reviews, providing insightful comments and guidance to team members.
- Mentor team members, assisting in debugging complex issues and providing optimal solutions.
Required Skills & Qualifications:
- Excellent written and verbal communication skills.
- Experience: 4+ years
- Advanced knowledge of JavaScript and TypeScript, including core concepts and best practices.
- Extensive experience in developing highly scalable services and APIs using various protocols.
- Proficiency in data modeling and optimizing database performance in both SQL and NoSQL databases.
- Hands-on experience with PostgreSQL and MongoDB, leveraging technologies like TypeORM, Sequelize, or Knex.
- Proficient in working with frameworks like NestJS, LoopBack, Express, and other TypeScript-based frameworks.
- Strong familiarity with unit testing libraries such as Jest, Mocha, and Chai.
- Expertise in code versioning using Git or Bitbucket.
- Practical experience with Docker for building and deploying microservices.
- Strong command of Linux, including familiarity with server configurations.
- Familiarity with queuing protocols and asynchronous messaging systems.
Preferred Qualification:
- Experience with frontend JavaScript concepts and frameworks such as ReactJS.
- Proficiency in designing and implementing cloud architectures, particularly on AWS services.
- Knowledge of GraphQL and its associated libraries like Apollo and Prisma.
- Hands-on experience with deployment pipelines and CI/CD processes.
- Experience with document, key/value, or other non-relational database systems like Elasticsearch, Redis, and DynamoDB.
- Ability to build AI-centric applications and work with machine learning models, Langchain, vector databases, embeddings, etc.
Why Join Us:
- Young Team, Thriving Culture
- Flat-hierarchical, friendly, engineering-oriented, and growth-focused culture.
- Well-balanced learning and growth opportunities
- Free health insurance.
- Office facilities with a game zone, in-office kitchen with affordable lunch service, and free snacks.
- Sponsorship for certifications/events and library service.
- Flexible work timing, leaves for life events, WFH, and hybrid options
Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
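As a rough sketch of the retrieval step described above: toy 3-dimensional vectors stand in for real embeddings, and the chunk IDs are hypothetical. In production the vectors would come from an embedding model and live in a vector store such as MongoDB Atlas.

```python
# Minimal sketch of RAG retrieval: rank stored chunks by cosine
# similarity to a query embedding and return the top-k as context.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy vector store: chunk ID -> embedding (illustrative values).
store = {
    "chunk-billing": [0.9, 0.1, 0.0],
    "chunk-login":   [0.0, 0.8, 0.6],
    "chunk-refunds": [0.7, 0.3, 0.1],
}

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k chunk IDs most similar to the query vector."""
    ranked = sorted(store, key=lambda c: cosine(query_vec, store[c]),
                    reverse=True)
    return ranked[:k]

# A query vector close to the billing/refund chunks retrieves them first;
# their text would then be prepended to the LLM prompt as context.
context_ids = retrieve([1.0, 0.2, 0.0])
```

A dot-product metric drops the normalization in `cosine`; which one to index on is exactly the "similarity metrics" decision the responsibilities list mentions.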
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.
Location: Hybrid/ Remote
Openings: 2
Experience: 5–12 Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Key Responsibilities
Architect & Design:
- Provide technical and architectural direction for complex frontend solutions, ensuring alignment with enterprise standards and best practices.
- Conduct design and code reviews to maintain high-quality, reusable, and scalable frontend interfaces for enterprise applications.
- Collaborate with cross-functional teams to define and enforce UI/UX design guidelines, accessibility standards, and performance benchmarks.
- Identify and address potential security vulnerabilities in frontend implementations, ensuring compliance with security and data privacy requirements.
Development & Debugging:
- Write clean, maintainable, and efficient frontend code.
- Debug and troubleshoot code to ensure robust, high-performing applications.
- Develop reusable frontend libraries that can be leveraged across multiple projects.
AI Awareness (Preferred):
- Understand AI/ML fundamentals and how they can enhance frontend applications.
- Collaborate with teams integrating AI-based features into chat applications.
Collaboration & Reporting:
- Work closely with cross-functional teams to align on architecture and deliverables.
- Regularly report progress, identify risks, and propose mitigation strategies.
Quality Assurance:
- Implement unit tests and end-to-end tests to ensure code quality.
- Participate in code reviews and enforce best practices.
Required Skills
- 5-10 years of experience architecting and developing cloud-based global applications in a public cloud environment (AWS, Azure, or GCP).
- Strong hands-on expertise in frontend technologies: JavaScript, HTML5, CSS3
- Proficiency with modern frameworks like React, Angular, or Node.js
- Backend familiarity with Java, Spring Boot (or similar technologies).
- Experience developing real-world, at-scale products.
- General knowledge of cloud platforms (AWS, Azure, or GCP) and their structure, use, and capabilities.
- Strong problem-solving, debugging, and performance optimization skills.