50+ Remote AWS (Amazon Web Services) Jobs in India
Role Context & Importance
At ARDEM, uninterrupted connectivity is mission-critical. Our remote teams across India process millions of pages and records annually using ARDEM Cloud Platforms, AWS WorkSpaces, and enterprise tools. The Network Engineer plays a central role in ensuring zero downtime for our processing operations, maintaining secure remote access for hundreds of team members, and supporting the cloud and on-premises infrastructure that underpins every client engagement.
Key Responsibilities
1. Remote Desktop & End-User Support
• Provide prompt remote desktop support to ARDEM’s distributed workforce, resolving hardware, software, and network-related issues via AnyDesk and other remote tools.
• Diagnose and resolve connectivity issues affecting access to ARDEM Cloud Platforms and AWS WorkSpaces.
• Support onboarding and configuration of workstations (Windows 14” FHD laptops, minimum i5/8GB RAM) per ARDEM standard specifications.
• Ensure minimum 100 Mbps internet connectivity compliance for remote staff and assist with ISP-related escalations.
2. Identity & Access Management
• Manage and maintain user identities, access policies, and lifecycle operations using Microsoft Entra ID (formerly Azure AD), Active Directory, and the Microsoft 365 Admin Center.
• Configure role-based access controls (RBAC), group policies, and conditional access to protect client data in line with SOC 2 and ISO 27001 requirements.
• Manage Microsoft 365 services including Exchange Online, Teams, SharePoint, and OneDrive for ARDEM’s internal and remote teams.
3. AWS Cloud Services Administration
• Configure, monitor, and support AWS services critical to ARDEM’s cloud operations: EC2, S3, IAM, AWS WorkSpaces, and VPC.
• Manage AWS IAM policies, user roles, and security groups to ensure least-privilege access across cloud environments.
• Monitor cloud resource utilisation, performance metrics, and costs; generate reports and recommend optimisations.
• Support cloud-based remote desktop (AWS WorkSpaces) used by ARDEM’s BPO processing teams.
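As an illustration of the least-privilege reviews this role covers, here is a minimal Python sketch that flags wildcard grants in an IAM policy document; the sample policy is invented for the example, not an ARDEM policy:

```python
# Minimal sketch: flag IAM policy statements that grant wildcard
# actions or resources, a common least-privilege audit check.
# The sample policy below is hypothetical.

def find_wildcard_statements(policy: dict) -> list[dict]:
    """Return statements whose Action or Resource contains a wildcard."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows a single string or a list in both fields.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any("*" in a for a in actions) or any(r == "*" for r in resources):
            flagged.append(stmt)
    return flagged

sample_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::reports-bucket/*"},
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"},
    ],
}

risky = find_wildcard_statements(sample_policy)
print(f"{len(risky)} statement(s) need review")
```

In practice such a check would run against policies fetched via the AWS APIs; the sketch only shows the audit logic itself.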
4. Network Infrastructure & Cisco Hardware
• Configure, manage, and troubleshoot Cisco switches, routers, and firewalls at ARDEM’s processing centres.
• Manage DNS, DHCP, VPN, and VLAN configurations to support secure and high-availability operations.
• Monitor network performance and bandwidth; implement QoS policies to prioritise critical BPO workloads.
• Coordinate with ISPs and hardware vendors to resolve infrastructure issues with minimal service disruption.
5. On-Premises Server Administration
• Maintain Windows Server infrastructure including file servers, application hosting servers, and internal email servers.
• Administer DNS, DHCP, Group Policy, and Active Directory Domain Services (AD DS) across on-premises environments.
• Perform routine health checks, patch management, and capacity planning for on-prem systems.
6. Security, Backup & Disaster Recovery
• Implement and maintain data backup schedules and disaster recovery (DR) procedures in line with ARDEM’s data security policies.
• Support compliance with ARDEM’s ISO 27001-aligned, SOC 2, HIPAA, and GDPR security frameworks through network-level controls.
• Manage VPNs, SSL certificates, endpoint security tools, and encryption at rest/in-transit for all ARDEM platforms.
• Respond to and document security incidents; participate in periodic security audits and remediation activities.
7. Documentation & Knowledge Management
• Create and maintain clear, accurate technical documentation: network diagrams, SOPs, runbooks, and incident logs.
• Build and update the internal IT knowledge base to enable faster issue resolution and reduce repeat incidents.
• Document all changes to infrastructure, cloud configurations, and access policies in accordance with change management protocols.
8. Collaboration & Project Support
• Work closely with ARDEM’s Project Managers, Operations teams, and client-facing staff to resolve IT dependencies impacting BPO delivery.
• Assist with IT infrastructure upgrades, cloud migrations, and automation initiatives that support ARDEM’s growth.
• Participate in rotational shifts to ensure 24/7 coverage aligned with ARDEM’s three-shift processing operations.
Qualifications & Requirements
Education
• B.Tech in Information Technology
Experience
• 3–5 years of professional experience in network support, IT infrastructure management, or cloud administration.
• Proven track record supporting remote or distributed teams in a BPO, IT services, or technology company environment.
Technical Skills – Required
• AWS Cloud Services: EC2, S3, IAM, VPC, AWS WorkSpaces – hands-on configuration and monitoring.
• Microsoft Entra ID (formerly Azure AD), Active Directory, Group Policy, and Microsoft 365 administration.
• Windows Server administration: AD DS, DNS, DHCP, File Services, patch management.
• Cisco networking hardware: switches, routers, firewalls – configuration and troubleshooting.
• VPN, VLAN, SSL, and remote access technologies (AnyDesk, RDP, VPN clients).
• Network monitoring tools and log analysis for proactive issue detection.
• Backup and disaster recovery tools and procedures.
Technical Skills – Preferred
• Experience with ARDEM-type BPO cloud platforms or similar multi-tenant cloud environments.
• Familiarity with security frameworks: ISO 27001, SOC 2, HIPAA, GDPR.
• Exposure to automation scripting (PowerShell, Python) for IT operations tasks.
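To make the automation-scripting expectation concrete, here is a small Python sketch of a typical IT-ops task: parsing a latency log and flagging hosts whose average round-trip time exceeds a threshold. The log format and hostnames are invented for illustration:

```python
# Sketch of an IT-ops automation task: parse a (hypothetical) latency
# log and flag hosts whose average round-trip time exceeds a threshold,
# the kind of proactive check a network engineer might script.
from collections import defaultdict

LOG_LINES = [
    "2024-05-01T10:00 host=ws-101 rtt_ms=23",
    "2024-05-01T10:00 host=ws-102 rtt_ms=180",
    "2024-05-01T10:05 host=ws-101 rtt_ms=27",
    "2024-05-01T10:05 host=ws-102 rtt_ms=210",
]

def flag_slow_hosts(lines, threshold_ms=100):
    samples = defaultdict(list)
    for line in lines:
        # Each line after the timestamp is a key=value pair.
        fields = dict(f.split("=") for f in line.split()[1:])
        samples[fields["host"]].append(float(fields["rtt_ms"]))
    return sorted(h for h, rtts in samples.items()
                  if sum(rtts) / len(rtts) > threshold_ms)

print(flag_slow_hosts(LOG_LINES))  # ws-102 averages 195 ms
```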
Certifications (Preferred)
AWS Cloud Practitioner (CLF-C02)
CCNA (Cisco Certified Network Associate)
AWS SysOps Administrator
MCSE / Windows Server
Azure Fundamentals (AZ-900)
ITIL v4 Foundation
Microsoft 365
Soft Skills
• Strong analytical and systematic troubleshooting skills with a solution-first mindset.
• Excellent written and verbal communication in English; ability to explain technical issues to non-technical stakeholders.
• Ability to work independently and collaboratively in a fully remote, distributed team environment.
• High sense of accountability, punctuality, and commitment to SLAs critical to BPO operations.
• Willingness to work rotational shifts to support ARDEM’s round-the-clock processing operations.
• Assist the technology and production teams with the delivery and receipt of client work.
Mandatory Work-from-Home Equipment Requirements
All candidates must confirm that they meet the following minimum home office specifications before selection:
Device Type
Windows Laptop
Operating System
Windows 10 / Windows 11
Screen Size
14 inches; two monitors preferred
Screen Resolution
FHD (1920 × 1080) or higher
Processor
Intel Core i5 (8th Gen or later) or higher
RAM
Minimum 8 GB (Mandatory) – 16 GB preferred
Internet Speed
100 Mbps or higher (dedicated broadband connection)
Remote Tool
AnyDesk (to be installed and configured prior to joining)
Power Backup
UPS / Inverter recommended for uninterrupted connectivity
Qualification: B.Tech (CS), 2025 graduates only
Joining: Immediate
Job Type: Trainee
Work Mode: Remote
Working Days: Monday to Friday
Shift (Rotational – based on project need):
· 5:00 PM – 2:00 AM IST
· 6:00 PM – 3:00 AM IST
Job Summary
ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern Microsoft technologies. This role is ideal for freshers who want hands-on experience in building scalable web applications while maintaining a healthy work-life balance through remote work opportunities.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
Technical & Development Skills:
· Basic understanding of AI / Machine Learning concepts
· Exposure to AWS (deployment or cloud fundamentals)
· PHP development
· WordPress development and customization
· JavaScript (ES5 / ES6+)
· jQuery
· AJAX calls and asynchronous handling
· Event handling
· HTML5 & CSS3
· Client-side form validation
Work Environment & Tools
- Comfortable working in a remote setup
- Familiarity with collaboration and remote access tools
Additional Requirements (Work-from-Home Setup)
This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:
- System: Laptop or Desktop (Windows-based)
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems
Role: Full Stack Developer
Industry Type: Software Product
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
About Company:
Snapsight is an AI-powered platform that delivers real-time event summaries in 75+ languages. We work with conferences worldwide and won the 2024 Skift Award for Most Innovative Event Tech. We're an early-stage startup scaling fast.
Join us if you want to become part of a vibrant and fast-moving product company that's on a mission to connect people around the world through events.
Location: Remote/Work From Home
What you'll be doing:
- Writing reusable, testable, and efficient code in Node.js for back-end services.
- Ensuring efficient, high-performance data access to and from the database.
- Collaborating with front-end developers on the integrations.
- Implementing effective security protocols, data protection measures, and storage solutions.
- Preparing technical specification documents for the developed features.
- Providing technical recommendations and suggesting improvements to the product.
- Writing unit test cases for APIs.
- Documenting code standards and adhering to them.
- Staying updated on the advancements in the field of Node.js development.
- Being open to new challenges and comfortable taking on exploratory tasks.
Skills:
- 3-5 years of strong proficiency in Node.js and its core principles.
- Experience in test-driven development.
- Experience with NoSQL databases like MongoDB is required
- Experience with MySQL database
- RESTful/GraphQL API design and development
- Docker and AWS experience is a plus
- Extensive knowledge of JavaScript, PHP, web stacks, libraries, and frameworks.
- Strong interpersonal, communication, and collaboration skills.
- Exceptional analytical and problem-solving aptitude
- Experience with a version control system like Git
- Knowledge about the Software Development Life Cycle Model, secure development best practices and standards, source control, code review, build and deployment, continuous integration
We are looking for an experienced DevOps Architect with strong expertise in telecom environments (OSS/BSS, 4G/5G core, network systems). The candidate will design and implement scalable, highly available, and automated DevOps solutions to support telecom-grade applications and infrastructure.
Responsibilities:
- Design and implement DevOps architecture for telecom applications (OSS/BSS, mediation systems, billing platforms)
- Architect CI/CD pipelines using Jenkins, GitLab, or Azure DevOps
- Manage cloud infrastructure on Amazon Web Services, Microsoft Azure, or hybrid telecom data centers
- Implement containerization using Docker and orchestration with Kubernetes
- Design Infrastructure as Code (IaC) using Terraform
- Ensure high availability, disaster recovery, and zero-downtime deployment strategies
- Automate deployments for 4G/5G core network functions (CNFs/VNFs)
- Implement monitoring solutions using Prometheus, Grafana, and ELK Stack
- Work closely with network engineering and telecom operations teams
- Ensure compliance with telecom-grade security standards
🚀 We’re Hiring: Senior Full Stack Engineer (On-Call Support) 🚀
Work Mode: Remote
Shift Timings: PST
Working Hours: 9 hours (including a 1-hour break)
Are you a seasoned Full Stack Engineer who enjoys solving real-world production challenges and being the go-to expert when it matters most? This role is for you! 💡
Role Overview
We’re looking for three Senior Engineers to join our On-Call Support Team, ensuring platform stability and rapid issue resolution across backend, frontend, and infrastructure.
Tech Stack
Node.js (NestJS)
React.js (Next.js)
React Native
PostgreSQL
AWS (Hybrid with On-Premise)
Linux
Docker Swarm
Portainer
What You’ll Do
Provide on-call support for production systems
Troubleshoot and resolve high-priority issues
Collaborate with senior engineers to maintain system reliability
Work across backend, frontend, and infrastructure layers
Ensure uptime, performance, and scalability of applications
What We’re Looking For
Strong experience with modern JavaScript frameworks
Hands-on knowledge of cloud + on-prem environments
Solid understanding of containerized deployments
Excellent problem-solving and debugging skills
Comfortable working in on-call support rotations
Job Description
We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.
What will you need to be successful in this role?
Core Data Science Skills
• Strong foundation in statistics, probability, and mathematical modeling
• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)
• Strong SQL skills for data extraction, transformation, and complex analytical queries
• Experience with exploratory data analysis (EDA) and statistical hypothesis testing
• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)
• Strong understanding of feature engineering and data preprocessing techniques
• Experience with A/B testing, experimental design, and causal inference
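The A/B-testing skill above boils down to calculations like the two-proportion z-test. Here is a stdlib-only Python sketch with illustrative numbers (not real campaign data):

```python
# Worked sketch of a two-proportion z-test, the core calculation behind
# a simple A/B conversion test; the counts are illustrative.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=150, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In day-to-day work a library such as SciPy or statsmodels would handle this, but being able to reproduce the calculation by hand is exactly the statistical foundation the posting asks for.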
Machine Learning & Analytics
• Strong experience building and deploying ML models (regression, classification, clustering)
• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)
• Understanding of time series analysis and forecasting techniques
• Experience with model evaluation metrics and cross-validation strategies
• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)
• Understanding of bias-variance tradeoff and model interpretability
• Experience with hyperparameter tuning and model optimization
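As a sketch of the cross-validation mechanics mentioned above, here is a stdlib-only k-fold index splitter; in practice scikit-learn's KFold covers this, but the underlying logic is worth knowing for custom evaluation loops:

```python
# Minimal sketch of k-fold cross-validation splitting using only the
# standard library: every sample appears in exactly one test fold.

def kfold_indices(n_samples: int, k: int):
    """Yield (train_idx, test_idx) pairs covering all samples once."""
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples)
                 if i < start or i >= start + size]
        yield train, test
        start += size

splits = list(kfold_indices(n_samples=10, k=3))
for train, test in splits:
    print(f"train={len(train)} test={len(test)}")
```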
GenAI & Advanced Analytics
• Working knowledge of LLMs and their application to business problems
• Experience with prompt engineering for analytical tasks
• Understanding of embeddings and semantic similarity for analytics
• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)
• Experience integrating AI/ML models into analytical workflows
Data Platforms & Tools
• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)
• Proficiency in Jupyter notebooks and collaborative development environments
• Familiarity with version control (Git) and collaborative workflows
• Experience working with large datasets and distributed computing (Spark/PySpark)
• Understanding of data warehousing concepts and dimensional modeling
• Experience with cloud platforms (AWS, Azure, or GCP)
Business Acumen & Communication
• Strong ability to translate business problems into analytical frameworks
• Experience presenting complex analytical findings to non-technical stakeholders
• Ability to create compelling data stories and visualizations
• Track record of driving business decisions through data-driven insights
• Experience working with cross-functional teams (Product, Engineering, Business)
• Strong documentation skills for analytical methodologies and findings
Good to have
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)
• Knowledge of reinforcement learning and optimization techniques
• Familiarity with graph analytics and network analysis
• Experience with MLOps and model deployment pipelines
• Understanding of model monitoring and performance tracking in production
• Knowledge of AutoML tools and automated feature engineering
• Experience with real-time analytics and streaming data
• Familiarity with causal ML and uplift modeling
• Publications or contributions to data science community
• Kaggle competitions or open-source contributions
• Experience in specific domains (finance, healthcare, e-commerce)
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹15,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 working days)
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend )
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js, Python. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with 1–2 years of full-time work experience in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
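As a hedged illustration of the simplest fraud-signal baseline behind responsibilities like these, here is a z-score check against a user's transaction history; real systems would use supervised models and far more features, and the amounts below are made up:

```python
# Baseline anomaly signal for transactions: flag an amount that
# deviates strongly from a user's historical mean (z-score).
# Illustrative only; production fraud models are far richer.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float,
                 threshold: float = 3.0) -> bool:
    """True if `amount` is more than `threshold` std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

history = [42.0, 38.5, 55.0, 47.2, 51.3, 39.9, 44.1]
print(is_anomalous(history, 2500.0))  # far outside the user's pattern
print(is_anomalous(history, 49.0))    # within the normal range
```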
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one

🚀 Hiring: Associate Tech Architect / Senior Tech Specialist
🌍 Remote | Contract Opportunity
We’re looking for a seasoned tech professional who can lead the design and implementation of cloud-native data and platform solutions. This is a remote, contract-based role for someone with strong ownership and architecture experience.
🔴 Mandatory & Most Important Skill Set
Hands-on expertise in the following technologies is essential:
✅ AWS – Cloud architecture & services
✅ Python – Backend & data engineering
✅ Terraform – Infrastructure as Code
✅ Airflow – Workflow orchestration
✅ SQL – Data processing & querying
✅ DBT – Data transformation & modeling
💼 Key Responsibilities
- Architect and build scalable AWS-based data platforms
- Design and manage ETL/ELT pipelines
- Orchestrate workflows using Airflow
- Implement cloud infrastructure using Terraform
- Lead best practices in data architecture, performance, and scalability
- Collaborate with engineering teams and provide technical leadership
🎯 Ideal Profile
✔ Strong experience in cloud and data platform architecture
✔ Ability to take end-to-end technical ownership
✔ Comfortable working in a remote, distributed team environment
📄 Role Type: Contract
🌍 Work Mode: 100% Remote
If you have deep expertise in these core technologies and are ready to take on a high-impact architecture role, we’d love to hear from you.
Job Title : Full Stack Developer
Experience : 5+ Years (Mandatory)
Mandatory Tech Stack : Node.js (NestJS), React.js (Next.js), React Native, PostgreSQL, AWS (Hybrid with On-Premise infrastructure), Docker Swarm, and Portainer
Location : Remote
Working Days : Monday to Saturday
Shift : Night Shift
Job Summary :
We are scaling rapidly and looking for a high-impact Full Stack Developer who thrives on solving complex problems across Web, Mobile, and Cloud Infrastructure.
The ideal candidate is hands-on, adaptable, and comfortable working in distributed systems and hybrid cloud environments, delivering end-to-end solutions with ownership and accountability.
Mandatory Technical Skills :
- Backend : Node.js with NestJS
- Frontend (Web) : React.js with Next.js
- Mobile : React Native
- Database : PostgreSQL
- Cloud : AWS (Hybrid with On-Premise infrastructure)
- OS : Linux
- Containers & Orchestration : Docker Swarm
- Container Management : Portainer
🎯 Key Responsibilities :
- Design, develop, and maintain scalable full-stack applications (Web + Mobile)
- Build and manage microservices and RESTful APIs
- Work in distributed and hybrid cloud environments
- Develop cloud-ready solutions and manage deployments
- Handle containerized applications using Docker Swarm & Portainer
- Collaborate closely with Product, DevOps, and Engineering teams
- Ensure application performance, security, and reliability
- Participate in code reviews and follow best engineering practices
- Troubleshoot, debug, and optimize applications across the stack
✅ Required Qualifications :
- Strong hands-on experience with Node.js (NestJS)
- Solid expertise in React.js (Next.js) and React Native
- Experience with PostgreSQL and backend data modeling
- Working knowledge of AWS services in hybrid environments
- Good understanding of Linux systems
- Hands-on experience with Docker Swarm & Portainer
- Strong understanding of microservices architecture
- Ability to manage end-to-end full-stack delivery
⭐ Good-to-Have Skills :
- Experience with CI/CD pipelines
- Exposure to monitoring & logging tools
- Knowledge of event-driven systems
- Experience working in high-availability systems
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Job Details
- Designation: Principal Engineer – Solr
- Location: Bangalore / Mysore / Remote
- Job Type: Full-time
- Department: Software R&D
Job Summary
We are seeking a highly skilled and experienced Principal Engineer with a strong background in Apache Solr and Java to lead our Engineering and customer-led initiatives. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our search platform while providing expert-level troubleshooting and resolution for critical production issues.
This role will involve designing the architecture for new platforms while reviewing and recommending better approaches for existing ones to drive continuous improvement and efficiency.
Key Responsibilities
- Lead Engineering and support activities for Solr-based search applications, ensuring minimal downtime and optimal performance
- Design and develop the architecture of new platforms while reviewing and recommending better approaches for existing ones
- Regularly work towards enhancing search ranking, query understanding, and retrieval effectiveness
- Diagnose, troubleshoot, and resolve complex technical issues in Solr, Java-based applications, and supporting infrastructure
- Perform deep-dive analysis of logs, performance metrics, and alerts to proactively prevent incidents
- Optimize Solr indexes, queries, and configurations to enhance search performance and reliability
- Work closely with development, operations, and business teams to drive improvements in system stability and efficiency
- Implement monitoring tools, dashboards, and alerting mechanisms to enhance observability and proactive issue detection
- Apply AI-based search techniques using vector databases, RAG models, NLP, and LLMs
- Collaborate on capacity planning, system scaling, and disaster recovery strategies for mission-critical search systems
- Provide mentorship and technical guidance to junior engineers and support teams
- Drive innovation by tracking latest trends, emerging technologies, and best practices in AI-based Search, Solr, and other search platforms
Requirements
- 8+ years of experience in software development and production support with a focus on Apache Solr, Java, and databases (Oracle, MySQL, PostgreSQL, etc.)
- Strong understanding of Solr indexing, query execution, schema design, configuration, and tuning
- Experience in designing and implementing scalable system architectures for search platforms
- Proven ability to review and assess existing platform architectures, identifying areas for improvement and recommending better approaches
- Proficiency in Java, Spring Boot, and micro-services architectures
- Experience with Linux / Unix-based environments, shell scripting, and debugging production systems
- Hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Splunk, ELK Stack) and log analysis
- Expertise in troubleshooting performance issues related to Solr, JVM tuning, and memory management
- Familiarity with cloud platforms such as AWS, Azure, or GCP and containerization technologies like Docker / Kubernetes
- Strong analytical and problem-solving skills, with the ability to work under pressure in a fast-paced environment
- Certifications in Solr, Java, or cloud technologies
- Excellent communication and leadership abilities
About Our Benefits
- Competitive salary
- Health insurance
- Retirement plan
- Paid time off
- Training and development opportunities
Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.
The Role
We are hiring a Senior Data Engineer to help design and scale the infrastructure behind our analytics, performance marketing, and experimentation platforms.
This role is ideal for someone who thrives on solving complex data problems, enjoys owning systems end-to-end, and wants to work closely with stakeholders across product, marketing, and analytics.
You’ll build reliable, scalable pipelines and models that support decision-making and automation at every level of the business.
What you’ll do
● Build, maintain, and optimize data pipelines using Spark, Kafka, Airflow, and Python
● Orchestrate workflows across GCP (GCS, BigQuery, Composer) and AWS-based systems
● Model data using dbt, with an emphasis on quality, reuse, and documentation
● Ingest, clean, and normalize data from third-party sources such as Google Ads, Meta, Taboola, Outbrain, and Google Analytics
● Write high-performance SQL and support analytics and reporting teams in self-serve data access
● Monitor and improve data quality, lineage, and governance across critical workflows
● Collaborate with engineers, analysts, and business partners across the US, UK, and India
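To illustrate the data-quality responsibility above, here is a minimal Python sketch of a row-level validation gate that splits incoming records into valid and quarantined sets; the column names and rules are invented, not Forbes Advisor's actual schema:

```python
# Illustrative sketch of a row-level data-quality check guarding an
# ingestion pipeline; column names and rules are hypothetical.

RULES = {
    "campaign_id": lambda v: isinstance(v, str) and v != "",
    "spend_usd": lambda v: isinstance(v, (int, float)) and v >= 0,
    "clicks": lambda v: isinstance(v, int) and v >= 0,
}

def validate_rows(rows):
    """Split rows into (valid, rejected-with-reasons) for quarantine."""
    valid, rejected = [], []
    for row in rows:
        failures = [col for col, ok in RULES.items() if not ok(row.get(col))]
        if failures:
            rejected.append((row, failures))
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"campaign_id": "meta-123", "spend_usd": 52.10, "clicks": 840},
    {"campaign_id": "", "spend_usd": -5.0, "clicks": 12},
]
good, bad = validate_rows(rows)
print(f"{len(good)} valid, {len(bad)} rejected")
```

In production this kind of check typically lives as dbt tests or pipeline-level assertions; the sketch only shows the underlying validate-and-quarantine pattern.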
What You Bring
● 4+ years of data engineering experience, ideally in a global, distributed team
● Strong Python development skills and experience
● Expert in SQL for data transformation, analysis, and debugging
● Deep knowledge of Airflow and orchestration best practices
● Proficient in dbt (data modeling, testing, release workflows)
● Experience with GCP (BigQuery, GCS, Composer); AWS familiarity is a plus
● Strong grasp of data governance, observability, and privacy standards
● Excellent written and verbal communication skills
Nice to have
● Experience working with digital marketing and performance data, including:
Google Ads, Meta (Facebook), TikTok, Taboola, Outbrain, Google Analytics (GA4)
● Familiarity with BI tools like Tableau or Looker
● Exposure to attribution models, media mix modeling, or A/B testing infrastructure
● Collaboration experience with data scientists or machine learning workflows
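For the A/B testing infrastructure item above, the core statistic is usually a two-proportion z-test comparing conversion rates between variants. A stdlib-only sketch (the sample numbers are made up for illustration):

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates (pooled variance)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate under H0
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error of the difference
    return (p_b - p_a) / se

# Variant B converts 120/1000 vs. variant A at 100/1000
z = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2))
# 1.43
```

A |z| below roughly 1.96 (the 5% two-sided threshold) means the observed lift is not yet statistically significant, which is why experimentation platforms also track sample-size requirements.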
Why Join Us
● Monthly long weekends — every third Friday off
● Wellness reimbursement to support your health and balance
● Paid parental leave
● Remote-first with flexibility and trust
● Work with a world-class data and marketing team inside a globally recognized brand
Role: AWS Cloud Engineer (Principal / Senior Level)
Employment Type: Contract (12+ Months)
Location: Fully Remote (USA)
Experience Required: 14+ Years
Company Type: Global Remote Talent & Technology Services Platform
About us:
We are a global remote talent and technology services company, enabling leading organizations worldwide to hire elite engineers and build next-generation products.
Our platform connects companies with highly skilled professionals who work remotely while delivering enterprise-grade innovation across cloud, AI, DevOps, and modern software engineering.
We partner with high-growth startups and Fortune-level enterprises to design, build, and scale mission-critical technology platforms—all in a fully remote, distributed model.
For more details, visit our site: Recruiting Bond (https://recruitingbond.com/)
Role Overview:
We are seeking a highly experienced AWS Cloud Engineer to support one of our global clients in designing, securing, and operating enterprise-scale AWS environments.
This is a hands-on, senior-level role focused on cloud architecture, governance, automation, and Python-based tooling, working closely with distributed DevOps, Security, and Application teams.
Key Responsibilities
- Design and maintain scalable, secure AWS cloud architectures aligned with enterprise best practices
- Build and govern multi-account AWS environments using AWS Control Tower, Organizations, and landing zones
- Implement IAM strategies, including SCPs, identity federation, and least-privilege access models
- Develop and manage Infrastructure as Code (IaC) using Terraform and/or AWS CloudFormation
- Architect and manage AWS networking (VPCs, subnets, routing, security groups, NACLs, Transit Gateway)
- Implement AWS Config, logging, monitoring, and compliance controls
- Automate infrastructure operations and workflows using Python
- Enable DevOps practices, CI/CD pipelines, and cloud-native deployments
- Troubleshoot complex cloud infrastructure issues and drive root-cause analysis
- Produce and maintain technical documentation, standards, and runbooks
- Act as a trusted cloud advisor to client stakeholders and engineering teams
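The least-privilege and Python-automation bullets above often meet in small generators that emit narrowly scoped policy documents for review or for IaC pipelines. A sketch with a hypothetical bucket name (a real workflow would feed the output to Terraform, CloudFormation, or boto3 rather than printing it):

```python
import json

def read_only_s3_policy(bucket: str) -> str:
    """Build a least-privilege IAM policy JSON granting read-only access to one bucket."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",     # bucket-level ARN (for ListBucket)
                f"arn:aws:s3:::{bucket}/*",   # object-level ARN (for GetObject)
            ],
        }],
    }
    return json.dumps(doc, indent=2)

print(read_only_s3_policy("example-audit-logs"))
```

Scoping the `Resource` array to a single bucket, rather than `"*"`, is the essence of the least-privilege model the role calls for; SCPs then cap what even these granted actions can do account-wide.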
Required Skills & Qualifications
- 14+ years of overall IT experience, with deep hands-on expertise in AWS
- Strong experience in AWS architecture, security, and governance
- Proven experience with AWS Control Tower and multi-account strategies
- Advanced knowledge of IAM, SCPs, identity management, and access control
- Strong hands-on experience with Terraform and/or CloudFormation
- Solid understanding of AWS networking and VPC design
- Proficiency in Python for automation and cloud operations
- Experience working in DevOps-driven, cloud-native environments
- Strong communication skills and ability to work independently in a fully remote setup
Nice to Have
- AWS Professional or Specialty certifications
- Experience supporting global or enterprise clients
- Exposure to compliance or regulated cloud environments
Why Join Us
- Work fully remote with global teams and top-tier clients
- Long-term 12+ month engagement with extension potential
- Opportunity to influence large-scale cloud platforms
- Be part of a company shaping the future of remote-first engineering
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is looking for enthusiastic Technology Interns from Tier 1 colleges who are eager to build hands-on experience across web technologies, cloud platforms, and emerging technologies such as AI/ML. This role is ideal for final-year students (2026 pass-outs) or fresh graduates seeking real-world exposure in a fast-growing, technology-driven organization.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final-semester students pursuing graduation (2026 pass-outs) or recent graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
Technical & Development Skills
- Basic understanding of AI / Machine Learning concepts
- Exposure to AWS (deployment or cloud fundamentals)
- PHP development
- WordPress development and customization
- JavaScript (ES5 / ES6+)
- jQuery
- AJAX calls and asynchronous handling
- Event handling
- HTML5 & CSS3
- Client-side form validation
Work Environment & Tools
- Comfortable working in a remote setup
- Familiarity with collaboration and remote access tools
Additional Requirements (Work-from-Home Setup)
This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:
- System: Laptop or Desktop (Windows-based)
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern Microsoft technologies. This role is ideal for final-year students (2026 pass-outs) who want hands-on experience in building scalable web applications while maintaining a healthy work-life balance through remote work opportunities.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final-semester students (2026 pass-outs) or recent graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
1. Technical Skills (Must Have)
- Experience with .NET Core (.NET 6 / 7 / 8)
- Strong knowledge of C#, including:
- Object-Oriented Programming (OOP) concepts
- async/await
- LINQ
- ASP.NET Core (Web API / MVC)
2. Database Skills
- SQL Server (preferred)
- Writing complex SQL queries, joins, and subqueries
- Stored Procedures, Functions, and Indexes
- Database design and performance tuning
- Entity Framework Core
- Migrations and transaction handling
3. Frontend Skills (Required)
- JavaScript (ES5 / ES6+)
- jQuery
- DOM manipulation
- AJAX calls
- Event handling
- HTML5 & CSS3
- Client-side form validation
4. Security & Performance
- Data validation and exception handling
- Caching concepts (In-memory / Redis – good to have)
5. Tools & Environment
- Visual Studio / VS Code
- Git (GitHub / Azure DevOps)
- Basic knowledge of server deployment
6. Good to Have (Optional)
- Azure or AWS deployment experience
- CI/CD pipelines
- Docker
- Experience with data handling
Additional Requirements (Work-from-Home Setup)
This role supports remote work. Candidates must ensure the following minimum infrastructure requirements:
- Laptop/Desktop: Windows-based system
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. For over 20 years, ARDEM has successfully delivered high-quality outsourcing and automation services to clients across the USA and Canada.
We are growing rapidly and continuously innovating to become a better service provider for our customers. Our mission is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company in the industry.
Experience: 8+ Years
Work Mode: Remote
Engagement: Full-time / Freelancer
Dual Project: Acceptable
Job Description:
We are looking for an experienced AWS Cloud Engineer II with strong hands-on system engineering expertise in AWS production environments.
Key Responsibilities and Skills:
Hands-on experience in AWS system engineering with a strong focus on Amazon RDS, including performance tuning, backups, restores, Multi-AZ configurations, and read replicas
Strong experience in application troubleshooting across AWS services including EC2, ALB, VPC, and IAM
Expertise in log analysis and monitoring using AWS CloudWatch
Ability to troubleshoot connectivity issues, latency problems, and service dependencies
Experience in end-to-end root cause analysis and production issue resolution
Strong understanding of AWS networking and security best practices
Ability to work independently in a remote setup and handle production-level issues
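Much of the log-analysis and latency-troubleshooting work described above is percentile math over request logs, whether the source is CloudWatch Logs Insights or raw files. A CloudWatch-free, stdlib-only illustration (the log format is hypothetical):

```python
import re

LOG = """\
2024-05-01T10:00:01 GET /api/orders 200 123ms
2024-05-01T10:00:02 GET /api/orders 200 870ms
2024-05-01T10:00:03 POST /api/orders 500 45ms
2024-05-01T10:00:04 GET /api/orders 200 210ms
"""

def p95_latency(log_text: str) -> float:
    """Extract per-request latencies (ms) and return the 95th percentile (nearest-rank)."""
    lat = sorted(int(m.group(1)) for m in re.finditer(r"(\d+)ms$", log_text, re.M))
    if not lat:
        raise ValueError("no latency samples found")
    k = max(0, round(0.95 * len(lat)) - 1)   # nearest-rank index into the sorted samples
    return float(lat[k])

print(p95_latency(LOG))
# 870.0
```

Tail percentiles (p95/p99), rather than averages, are what surface the intermittent RDS or dependency slowness this role is expected to root-cause.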
Preferred Qualifications:
Experience working in high-availability and production-critical environments
Strong analytical and problem-solving skills
Good communication skills for collaborating with cross-functional teams
Hands-on experience with Microsoft Azure core services including Virtual Machines, storage, networking, and identity management
Strong expertise in Azure RDS / Azure Virtual Desktop (AVD) deployment, configuration, and performance tuning
Solid systems engineering background with Windows Server administration, Active Directory, GPO, DNS, and basic Linux management
Proficiency in automation and scripting, primarily using PowerShell, with working knowledge of Azure CLI and Infrastructure as Code (ARM/Bicep/Terraform)

US-based large biotech company with worldwide operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer -- AWS [LONG TERM-CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.
Key Responsibilities
Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
Stay current with emerging cloud technologies, trends, and best practices.
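Governance and compliance responsibilities like those above often begin as simple automated audits, e.g. flagging resources that lack mandatory tags before they reach cost reports. A toy, provider-agnostic sketch (the required tag keys are hypothetical):

```python
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def untagged(resources: list[dict]) -> list[str]:
    """Return IDs of resources missing any required tag key."""
    return [
        r["id"]
        for r in resources
        # set(dict) yields the dict's keys; <= checks "required is a subset of present"
        if not REQUIRED_TAGS <= set(r.get("tags", {}))
    ]

inventory = [
    {"id": "i-0123", "tags": {"owner": "data-eng", "cost-center": "42", "environment": "prod"}},
    {"id": "i-0456", "tags": {"owner": "web"}},
]
print(untagged(inventory))
# ['i-0456']
```

In a real environment the `inventory` list would come from the provider's API or a config-inventory service, and violations would feed a remediation queue rather than stdout.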
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 2-4 years
- Working Days: 5 days/week
- Job Location: Remote working
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we stay tech-agnostic and leave it open to discussion which tools will solve the problem in the most robust and quickest way.
- That said, our bright team of engineers has already assembled a formidable arsenal of tools that helps us fortify our defense and always play on the offensive. Take a look at the tech stack we already use.
Hands-on experience implementing and managing DLP solutions in AWS and Azure
Strong expertise in data classification, labeling, and protection (e.g., Microsoft Purview, AIP)
Experience designing and enforcing DLP policies across cloud storage, email, endpoints, and SaaS apps
Proficient in monitoring, investigating, and remediating data leakage incidents
📍 Position: IT Intern (Only candidates from BTech-IT background will be considered)
👩💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)
🎓 Qualification: B.Tech (IT) / M.Tech (IT) only
📌 Mode: Remote (WFH)
⏳ Shift: Willingness to work in night/rotational shifts
🗣 Communication: Excellent English
Key Responsibilities:
- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.
- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).
- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.
- Help create and update technical documentation and knowledge base articles.
- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
💻 Technical Requirements:
- Laptop with i5 or higher processor
- Reliable internet connectivity with 100 Mbps speed
We are seeking a highly skilled Staff / Senior Staff Full Stack Engineer (Architect) to join our product engineering team. This is a hands-on, high-impact technical leadership role requiring deep expertise across frontend (React), backend (Node.js), cloud infrastructure, and databases.
You will work closely with Engineering, Product, UX, and cross-functional stakeholders to design and deliver scalable, secure, and high-performance systems, while driving engineering best practices and mentoring senior engineers.
Key Responsibilities:
Architecture & Technical Ownership
- Architect and design complex, scalable full-stack systems across multiple services and teams
- Translate business and product requirements into robust technical solutions
- Drive system design, scalability, performance, and security decisions
Hands-On Development
- Write clean, maintainable, production-grade code in:
- React + TypeScript (frontend)
- Node.js + TypeScript (backend)
- Build and maintain REST & GraphQL APIs
- Develop reusable frontend and backend components
Scalability, Performance & Security
- Optimize applications for speed, scalability, reliability, and cost
- Implement security best practices, authentication/authorization, and data protection
- Ensure compliance with security and regulatory standards (OWASP, GDPR, CCPA, etc.)
Collaboration & Leadership
- Partner closely with Product, UX, QA, DevOps, and Marketing
- Lead architecture discussions, design reviews, and code reviews
- Mentor senior engineers and lead by influence (not hierarchy)
- Promote engineering excellence through TDD, CI/CD, observability, and automation
Documentation & Communication
- Clearly document architecture, system flows, and design decisions
- Communicate complex technical concepts to non-technical stakeholders
- Contribute to long-term technology strategy and roadmap
Required Experience & Qualifications
Education & Experience
- Bachelor’s or Master’s degree in Computer Science or related field
- 10+ years of overall software engineering experience
- 7+ years of hands-on full-stack development experience
- Proven delivery of large-scale, complex systems
Core Technical Skills
Frontend
- Expert in React (architecture, performance, state management)
- Strong TypeScript
- Deep knowledge of HTML5, CSS3, responsive & adaptive design
- Experience with Redux / Context API, CSS-in-JS or Tailwind
- Familiarity with build tools: Webpack, Babel, npm/yarn
- Frontend testing: Jest, Vitest, Cypress, Storybook
Backend
- Strong hands-on experience with Node.js
- Frameworks: Express, NestJS, Koa
- API design: REST & GraphQL
- Serverless experience (AWS Lambda / Cloud Functions)
Databases & Caching
- SQL: PostgreSQL / MySQL
- NoSQL: MongoDB, Redis
- Database schema design, indexing, and performance tuning
- Caching & search: Redis, Elasticsearch
Cloud, Infra & DevOps
- Strong experience with AWS / GCP / Azure
- Containers: Docker, Kubernetes
- CI/CD: GitHub Actions, GitLab CI, Jenkins
- CDN, infrastructure scaling, and observability
- Git expertise (GitHub / GitLab / Bitbucket)
Security & Systems
- Web security best practices (OWASP)
- Authentication & authorization (OAuth, JWT)
- Experience with high-availability, fault-tolerant systems
Leadership & Ways of Working
- Strong track record of technical leadership and delivery
- Experience mentoring senior and staff-level engineers
- Ability to conduct high-quality code and design reviews
- Comfortable working in Agile (Scrum/Kanban) environments
- Excellent verbal and written communication
- Strong analytical and problem-solving skills
- Ability to learn and adapt quickly to new technologies
Perks & Benefits
- Day off on the 3rd Friday of every month (monthly long weekend)
- Monthly Wellness Reimbursement Program
- Monthly Office Commutation Reimbursement
- Paid paternity & maternity leave
Senior Penetration Tester
Experience: 2–5 years
Industry: EdTech / SaaS
Role Summary:
We are looking for a Penetration Tester to identify and remediate security vulnerabilities in our EdTech platforms including LMS, ERP, web apps, mobile apps, and APIs.
Key Responsibilities:
Perform VAPT on web, mobile, API, and cloud systems
Identify vulnerabilities using OWASP standards
Prepare security reports and remediation guidance
Re-test fixes with development teams
Skills Required:
Web & API security (OWASP Top 10)
Tools: Burp Suite, Nmap, Nessus, Metasploit
Basic scripting (Python/Bash)
Understanding of cloud security basics
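As a flavor of the scripted, OWASP-style checks listed above: a stdlib-only sketch that flags security headers missing from an HTTP response, the kind of finding a VAPT report would include (the header list is abbreviated for illustration):

```python
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def missing_security_headers(headers: dict) -> list[str]:
    """Return expected security headers absent from a response (case-insensitive)."""
    present = {k.lower() for k in headers}   # HTTP header names are case-insensitive
    return [h for h in EXPECTED if h.lower() not in present]

resp = {"Content-Type": "text/html", "x-content-type-options": "nosniff"}
print(missing_security_headers(resp))
# ['Strict-Transport-Security', 'Content-Security-Policy']
```

In a live engagement the `headers` dict would come from an intercepted response (e.g. via Burp Suite or `urllib`), and each missing header would map to a remediation recommendation in the report.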
Preferred:
EdTech or SaaS experience
Certifications: CEH / OSCP
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines.
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
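The tool-calling and agent-orchestration items above ultimately rest on a dispatch loop: the model proposes a tool name plus arguments, the runtime executes the tool and feeds the observation back. A framework-free sketch (the tool, its return value, and the scripted "model output" are all hypothetical; frameworks like LangChain or LangGraph wrap this loop with real LLM calls):

```python
def get_weather(city: str) -> str:
    """Stub tool; a real agent would call a weather API here."""
    return f"22C and clear in {city}"

TOOLS = {"get_weather": get_weather}   # registry the runtime dispatches against

def run_agent(tool_calls: list[dict]) -> list[str]:
    """Execute a scripted sequence of model-proposed tool calls, collecting observations."""
    observations = []
    for call in tool_calls:
        fn = TOOLS[call["name"]]           # dispatch by tool name
        observations.append(fn(**call["args"]))
    return observations

# In a real agent these calls come from an LLM response; here they are scripted.
print(run_agent([{"name": "get_weather", "args": {"city": "Delhi"}}]))
# ['22C and clear in Delhi']
```

Production systems add the pieces this sketch omits: schema validation of the model's arguments, retries, tracing, and a loop that feeds observations back to the model until it emits a final answer.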
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
Technical Lead – Golang | AWS | Database Design
Work Model: Hybrid (Mandatory Work From Office for the first 1 month in Chennai, followed by remote work)
Location: Chennai, India
Experience: 8–12 Years
Budget: 1L ~ 1.2L Monthly
Role Summary
We are seeking an experienced Technical Lead with strong expertise in Golang, AWS, and Database Design to spearhead backend development initiatives, drive architectural decisions, and mentor engineering teams. The ideal candidate will combine hands-on technical skills with leadership capabilities to deliver scalable, secure, and high-performance solutions.
Key Responsibilities
Backend Development Leadership:
Lead the design and development of backend systems using Golang and microservices architecture.
Ensure scalability, reliability, and maintainability of backend services.
Database Design & Optimization:
Own database schema modeling, normalization, and performance tuning.
Work with MySQL, PostgreSQL, and NoSQL databases to design efficient data storage solutions.
Implement strategies for query optimization and high availability.
Cloud Infrastructure Management:
Architect and manage scalable solutions on AWS cloud services including EC2, ECS/EKS, Lambda, RDS, DynamoDB, and S3.
Ensure cost optimization, security compliance, and disaster recovery planning.
Technical Governance & Mentorship:
Review code, enforce best practices, and maintain coding standards.
Mentor and guide developers, fostering a culture of continuous learning and innovation.
Collaboration & Delivery:
Partner with product managers, architects, and stakeholders to align technical solutions with business goals.
Drive end-to-end delivery of projects with a focus on quality and timelines.
Production Support & Optimization:
Troubleshoot and resolve production issues.
Continuously monitor system performance and implement improvements.
Required Skills & Qualifications
Technical Expertise:
Strong hands-on experience with Golang in production-grade applications.
Solid knowledge of Database Design (MySQL, PostgreSQL, NoSQL).
Proficiency in AWS services (EC2, ECS/EKS, Lambda, RDS, DynamoDB, S3).
Strong understanding of microservices and distributed systems.
DevOps & Tools:
Experience with Docker, Kubernetes, and container orchestration.
Familiarity with CI/CD pipelines using tools like Jenkins, Maven, or GitHub Actions.
Soft Skills:
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities.
Ability to mentor and inspire engineering teams.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
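The Python-automation and cost-optimization bullets above can be sketched as a small tag-compliance check: flag instances missing mandatory cost-allocation tags. The sample data stands in for the `Reservations → Instances` shape a boto3 `describe_instances` call returns; tag keys and instance IDs are invented:

```python
# Mandatory cost-allocation tag keys (illustrative policy, not an AWS default).
REQUIRED_TAGS = {"Team", "Environment"}

def untagged_instances(instances):
    """Return IDs of instances missing any required tag key."""
    missing = []
    for inst in instances:
        tag_keys = {t["Key"] for t in inst.get("Tags", [])}
        if not REQUIRED_TAGS <= tag_keys:
            missing.append(inst["InstanceId"])
    return missing

# Sample data in place of a live boto3 describe_instances response.
sample = [
    {"InstanceId": "i-0aaa", "Tags": [{"Key": "Team", "Value": "data"},
                                      {"Key": "Environment", "Value": "prod"}]},
    {"InstanceId": "i-0bbb", "Tags": [{"Key": "Team", "Value": "web"}]},
    {"InstanceId": "i-0ccc"},
]

print(untagged_instances(sample))  # ['i-0bbb', 'i-0ccc']
```

In a real run the instance list would come from the EC2 API and the output would feed an SNS alert or a ticket, but the compliance logic is unchanged.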
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
We are seeking a highly skilled software developer with proven experience in developing and scaling education ERP solutions. The ideal candidate should have strong expertise in Node.js or PHP (Laravel), MySQL, and MongoDB, along with hands-on experience in implementing ERP modules such as HR, Exams, Inventory, Learning Management System (LMS), Admissions, Fee Management, and Finance.
Key Responsibilities
Design, develop, and maintain scalable Education ERP modules.
Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fees, and finance.
Build and optimize REST APIs/GraphQL services and ensure seamless integrations.
Optimize system performance, scalability, and security for high-volume ERP usage.
Conduct code reviews, enforce coding standards, and mentor junior developers.
Stay updated with emerging technologies and recommend improvements for ERP solutions.
Required Skills & Qualifications
Strong expertise in Node.js and PHP (Laravel, Core PHP).
Proficiency with MySQL, MongoDB, and PostgreSQL (database design & optimization).
Frontend knowledge: JavaScript, jQuery, HTML, CSS (React/Vue preferred).
Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
Hands-on with Git/GitHub, Docker, and CI/CD pipelines.
Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
4+ years of professional development experience, with a minimum of 2 years in ERP systems.
Preferred Experience
Prior work in the education ERP domain.
Deep knowledge of HR, Exam, Inventory, LMS, Admissions, Fees & Finance modules.
Exposure to high-traffic enterprise applications.
Strong leadership, mentoring, and problem-solving abilities
Benefit:
Permanent Work From Home
Procedure is hiring for Drover.
This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.
About Drover
Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure on ranchers to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.
We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.
Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.
We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.
About The Role
As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.
Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.
What You'll Do
- Develop Drover's IoT cloud architecture from the ground up (it’s a greenfield project)
- Design and implement services to support wearable devices, mobile app, and backend API
- Implement data processing and storage pipelines
- Create and maintain Infrastructure-as-Code
- Support the engineering team across all aspects of early-stage development -- after all, this is a startup
Requirements
- 5+ years of experience developing cloud architecture on AWS
- In-depth understanding of various AWS services, especially those related to IoT
- Expertise in cloud-hosted, event-driven, serverless architectures
- Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
- Experience with networking and socket programming
- Experience with Kubernetes or similar orchestration platforms
- Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
- Familiarity with relational databases (PostgreSQL)
- Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
Nice To Have
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field
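The event-driven style named in the requirements can be sketched as a tiny in-process event bus that routes device events to handlers by type. Event names, device IDs, and payload fields below are invented for illustration; in production the transport would be something like SQS or AWS IoT Core rather than an in-memory dict:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe dispatcher keyed by event type."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Fan the payload out to every handler registered for this type.
        return [h(payload) for h in self._handlers[event_type]]

bus = EventBus()
bus.subscribe("gps_fix", lambda p: f"device {p['device_id']} at {p['lat']},{p['lon']}")
bus.subscribe("low_battery", lambda p: f"alert: {p['device_id']} battery {p['pct']}%")

results = bus.publish("gps_fix", {"device_id": "d-17", "lat": 31.9, "lon": -99.9})
print(results)
```

Decoupling producers from consumers this way is what lets an IoT backend add new event consumers (alerting, analytics, storage) without touching device-facing code.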
Seeking a Senior Staff Cloud Engineer who will lead the design, development, and optimization of scalable cloud architectures, drive automation across the platform, and collaborate with cross-functional stakeholders to deliver secure, high-performance cloud solutions aligned with business goals.
Responsibilities:
- Cloud Architecture & Strategy
- Define and evolve the company’s cloud architecture, with AWS as the primary platform.
- Design secure, scalable, and resilient cloud-native and event-driven architectures to support product growth and enterprise demands.
- Create and scale up our platform for integrations with our enterprise customers (webhooks, data pipelines, connectors, batch ingestions, etc.)
- Partner with engineering and product to convert custom solutions into productised capabilities.
- Security & Compliance Enablement
- Act as a foundational partner in building out the company’s security and compliance functions.
- Help define cloud security architecture, policies, and controls to meet enterprise and customer requirements.
- Guide compliance teams on technical approaches to SOC2, ISO 27001, GDPR, and GxP standards.
- Mentor engineers and security specialists on embedding secure-by-design and compliance-first practices.
- Customer & Solutions Enablement
- Work with Solutions Engineering and customers to design and validate complex deployments.
- Contribute to processes that productise custom implementations into scalable platform features.
- Leadership & Influence
- Serve as a technical thought leader across cloud, data, and security domains.
- Collaborate with cross-functional leadership (Product, Platform, TPM, Security) to align technical strategy with business goals.
- Act as an advisor to security and compliance teams during their growth, helping establish scalable practices and frameworks.
- Represent the company in customer and partner discussions as a trusted cloud and security subject matter expert.
- Data Platforms & Governance
- Provide guidance to the data engineering team on database architecture, storage design, and integration patterns.
- Advise on selection and optimisation of a wide variety of databases (relational, NoSQL, time-series, graph, analytical).
- Collaborate on data governance frameworks covering lifecycle management, retention, classification, and access controls.
- Partner with data and compliance teams to ensure regulatory alignment and strong data security practices.
- Developer Experience & DevOps
- Build and maintain tools, automation, and CI/CD pipelines that accelerate developer velocity.
- Promote best practices for infrastructure as code, containerisation, observability, and cost optimisation.
- Embed security, compliance, and reliability standards into the development lifecycle.
Requirements:
- 12+ years of experience in cloud engineering or architecture roles.
- Deep expertise in AWS and strong understanding of modern distributed application design (microservices, containers, event-driven architectures).
- Hands-on experience with a wide range of databases (SQL, NoSQL, analytical, and specialized systems).
- Strong foundation in data management and governance, including lifecycle and compliance.
- Experience supporting or helping build security and compliance functions within a SaaS or enterprise environment.
- Expertise with IaC (Terraform, CDK, CloudFormation) and CI/CD pipelines.
- Strong foundation in networking, security, observability, and performance engineering.
- Excellent communication and influencing skills, with the ability to partner across technical and business functions.
Good to Have:
- Exposure to Azure, GCP, or other cloud environments.
- Experience working in SaaS/PaaS at enterprise scale.
- Background in product engineering, with experience shaping technical direction in collaboration with product teams.
- Knowledge of regulatory and compliance standards (SOC2, ISO 27001, GDPR, and GxP).
About Albert Invent
Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Every day, scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, bringing better products to market, faster.
Why Join Albert Invent
- Joining Albert Invent means becoming part of a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
- You will collaborate with world-class scientists and technologists to redefine how new materials are discovered, developed, and brought to market.
- The culture is built on curiosity, collaboration, and ownership, with a strong focus on learning and impact.
- You will enjoy the opportunity to work on cutting-edge AI tools that accelerate real-world R&D and solve global challenges from sustainability to advanced manufacturing, while growing your career in a high-energy environment.
Job Description: DevOps Engineer
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the travel industry. By leveraging machine learning and artificial intelligence, we enable precise forecasting and optimized pricing for hotel revenue management. Backed by Highgate Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a global presence.
Role Summary:
We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This role involves designing and implementing scalable infrastructure, improving system reliability, and driving automation across our cloud ecosystem.
Key Responsibilities:
• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes
Required Qualifications:
• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments
Preferred Qualifications:
• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
Tools and Technologies You’ll Use:
• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack
Why Join Us?
• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com
We are looking for a skilled Node.js Developer with PHP experience to build, enhance, and maintain ERP and EdTech platforms. The role involves developing scalable backend services, integrating ERP modules, and supporting education-focused systems such as LMS, student management, exams, and fee management.
Key Responsibilities
Develop and maintain backend services using Node.js and PHP.
Build and integrate ERP modules for EdTech platforms (Admissions, Students, Exams, Attendance, Fees, Reports).
Design and consume RESTful APIs and third-party integrations (payment gateway, SMS, email).
Work with databases (MySQL / MongoDB / PostgreSQL) for high-volume education data.
Optimize application performance, scalability, and security.
Collaborate with frontend, QA, and product teams.
Debug, troubleshoot, and provide production support.
Required Skills
Strong experience in Node.js (Express.js / NestJS).
Working experience in PHP (Core PHP / Laravel / CodeIgniter).
Hands-on experience with ERP systems.
Domain experience in EdTech / Education ERP / LMS.
Strong knowledge of MySQL and database design.
Experience with authentication, role-based access, and reporting.
Familiarity with Git, APIs, and server environments.
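The authentication and role-based access requirement above reduces to a simple permission check at its core. Roles and permission strings below are invented, and while this posting's stack is Node.js/PHP, the logic is language-agnostic and shown in Python for consistency with the other sketches on this page:

```python
# Hypothetical role-to-permission map for an education ERP.
ROLE_PERMISSIONS = {
    "admin":   {"fees:read", "fees:write", "exams:read", "exams:write"},
    "teacher": {"exams:read", "exams:write"},
    "student": {"exams:read"},
}

def can(role, permission):
    """True if the role grants the permission; unknown roles get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("teacher", "exams:write"))  # True
print(can("student", "fees:read"))    # False
```

Real systems layer sessions, token validation, and per-record ownership on top, but every RBAC middleware ultimately performs this lookup per request.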
Preferred Skills
Experience with online examination systems.
Knowledge of cloud platforms (AWS / Azure).
Understanding of security best practices (CSRF, XSS, SQL Injection).
Exposure to microservices or modular architecture.
Qualification
Bachelor’s degree in Computer Science or equivalent experience.
3–6 years of relevant experience in Node.js & PHP development
The Senior Software Developer is responsible for development of CFRA’s report generation framework using a modern technology stack: Python on AWS cloud infrastructure, SQL, and Web technologies. This is an opportunity to make an impact on both the team and the organization by being part of the design and development of a new customer-facing report generation framework that will serve as the foundation for all future report development at CFRA.
The ideal candidate has a passion for solving business problems with technology and can effectively communicate business and technical needs to stakeholders. We are looking for candidates that value collaboration with colleagues and having an immediate, tangible impact for a leading global independent financial insights and data company.
Key Responsibilities
- Analyst Workflows: Design and development of CFRA’s integrated content publishing platform using a proprietary 3rd party editorial and publishing platform for integrated digital publishing.
- Designing and Developing APIs: Design and development of robust, scalable, and secure APIs on AWS, considering factors like performance, reliability, and cost-efficiency.
- AWS Service Integration: Integrate APIs with various AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Glue, and others, to build comprehensive and efficient solutions.
- Performance Optimization: Identify and implement optimizations to improve performance, scalability, and efficiency, leveraging AWS services and tools.
- Security and Compliance: Ensure APIs are developed following best security practices, including authentication, authorization, encryption, and compliance with relevant standards and regulations.
- Monitoring and Logging: Implement monitoring and logging solutions for APIs using AWS CloudWatch, AWS X-Ray, or similar tools, to ensure availability, performance, and reliability.
- Continuous Integration and Deployment (CI/CD): Establish and maintain CI/CD pipelines for API development, automating testing, deployment, and monitoring processes on AWS.
- Documentation and Training: Create and maintain comprehensive documentation for internal and external users, and provide training and support to developers and stakeholders.
- Team Collaboration: Collaborate effectively with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality solutions that meet business requirements.
- Problem Solving: Lead troubleshooting efforts, identifying root causes and implementing solutions to ensure system stability and performance.
- Stay Updated: Stay updated with the latest trends, tools, and technologies related to development on AWS, and continuously improve your skills and knowledge.
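The monitoring and logging responsibility above is easiest to act on with structured (JSON) log lines, the shape CloudWatch Logs Insights queries most readily. The field names below are illustrative, not a CFRA convention; the sketch uses only the stdlib `logging` module:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON object per line."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("report-api")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Format one record directly to show the output shape.
line = JsonFormatter().format(logging.LogRecord(
    "report-api", logging.INFO, __file__, 1, "report %s generated", ("abc",), None))
print(line)
```

Because each line is valid JSON, a query engine can filter on `level` or `logger` without regex parsing, which is the practical payoff of structured logging.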
Desired Skills and Experience
- Development: 5+ years of extensive experience in designing, developing, and deploying using modern technologies, with a focus on scalability, performance, and security.
- AWS Services: proficiency in using AWS services such as AWS Lambda, Amazon API Gateway, Amazon SQS, Amazon SNS, Amazon SES, Amazon RDS, Amazon DynamoDB, and others, to build and deploy API solutions.
- Programming Languages: Proficiency in programming languages commonly used for development, such as Python or Node.js, as well as experience with serverless frameworks on AWS.
- Architecture Design: Ability to design scalable and resilient API architectures using microservices, serverless, or other modern architectural patterns, considering factors like performance, reliability, and cost-efficiency.
- Security: Strong understanding of security principles and best practices, including authentication, authorization, encryption, and compliance with standards like OAuth, OpenID Connect, and AWS IAM.
- DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, infrastructure as code (IaC), and automated testing, to ensure efficient and reliable deployment on AWS.
- Problem-solving Skills: Excellent problem-solving skills, with the ability to troubleshoot complex issues, identify root causes, and implement effective solutions to ensure system stability and performance.
- Communication Skills: Strong communication skills, with the ability to effectively communicate technical concepts to both technical and non-technical stakeholders, and collaborate with cross-functional teams.
- Agile Methodologies: Experience working in Agile development environments, following practices like Scrum or Kanban, and ability to adapt to changing requirements and priorities.
- Continuous Learning: A commitment to continuous learning and staying updated with the latest trends, tools, and technologies related to development and AWS services.
- Bachelor's Degree: A bachelor's degree in Computer Science, Software Engineering, or a related field is often preferred, although equivalent experience and certifications can also be valuable.
Supercharge Your Career as a Tableau Systems Engineer at Technoidentity!
Are you ready to solve people challenges that fuel business growth? At Technoidentity, we’re a Data+AI product engineering company building cutting-edge solutions in the FinTech domain for over 13 years—and we’re expanding globally. It’s the perfect time to join our team of tech innovators and leave your mark!
What’s in it for You?
Our Self Serve Business Intelligence team is focused on enabling strong data exploration and analytics across the organisation. The team manages Tableau dashboards, provides foundational Tableau systems administration, and resolves connectivity issues across platforms. We also oversee Tableau Server and Tableau Cloud, including license utilisation and allocation. As the organisation continues to scale its analytical maturity, this team plays a key role in expanding Tableau adoption across business functions and setting standards for canonical dashboards and curated data sources. The team also contributes to company-wide analytics education.
As a Tableau and Analytical Systems Engineer, you will strengthen our data visualisation ecosystem and ensure seamless data exploration for stakeholders. Your work will span installation, configuration, maintenance, and support of Tableau Server environments. You will manage user accounts, permissions, and security settings across analytics systems, monitor performance, and troubleshoot issues related to Tableau and its integrations. Your contribution will support our goal of building a data-literate culture and ensuring reliable analytical infrastructure.
What Will You Be Doing?
• Install, configure, and maintain multi-node, Linux-based Tableau Server environments and Tableau Cloud, including upgrades and patches
• Manage user accounts, permissions, and security configurations across analytical systems
• Monitor system performance and troubleshoot issues related to Tableau, integrations, and upstream data systems
• Automate tasks such as deployment of data sources or workbooks, alerting, and integrations with Airflow using scripting languages
• Collaborate with the Tableau vendor on support cases, enhancements, and product fixes
• Partner with analysts and data teams to understand visualisation needs and deliver efficient Tableau solutions
• Track server activity and usage insights to identify performance improvements
• Create and maintain data extracts, refresh schedules, and quality checks
• Manage migrations of dashboards and workbooks across development and production environments
• Document procedures and administration practices for all analytical systems
• Stay updated on latest Tableau features, product updates, and industry best practices
• Maintain server logs, backups, and metadata exports
• Provide training, onboarding support, and guidance to users on Tableau Desktop and Server
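One concrete form of the "refresh schedules and quality checks" duty above is flagging extracts whose last refresh has exceeded a freshness SLA. Extract names, timestamps, and the 24-hour threshold below are hypothetical; in practice the timestamps would come from the Tableau repository database or the REST API:

```python
from datetime import datetime, timedelta

def stale_extracts(extracts, now, max_age=timedelta(hours=24)):
    """Return names of extracts last refreshed longer than max_age ago."""
    return [name for name, last in extracts.items() if now - last > max_age]

# Sample data in place of a live query against Tableau's metadata.
now = datetime(2024, 6, 2, 12, 0)
extracts = {
    "sales_daily": datetime(2024, 6, 2, 3, 0),       # refreshed overnight
    "finance_monthly": datetime(2024, 5, 30, 3, 0),  # three days old
}
print(stale_extracts(extracts, now))  # ['finance_monthly']
```

Hooked into a scheduler such as Airflow, a check like this turns silent refresh failures into alerts before dashboard consumers notice stale numbers.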
What Makes You the Perfect Fit?
• At least 3 years of experience in Tableau Server administration
• Strong technical experience with AWS and familiarity with Linux
• Two or more years of coding experience in Python or JavaScript, with additional scripting knowledge in PowerShell or Bash
• Experience in designing cloud architectures for compute and storage
• Strong understanding of relational databases such as PostgreSQL, and query engines such as Trino or Hive
• Experience with Tableau APIs and Airflow is preferred
• Strong analytical and problem-solving skills
• Ability to communicate clearly, collaborate effectively, and work both independently and in a team environment
Job Summary
We are seeking an experienced Databricks Developer with strong skills in PySpark, SQL, and Python, and hands-on experience deploying data solutions on AWS (preferred) or Azure. The role involves designing, developing, and optimizing scalable data pipelines and analytics workflows on the Databricks platform.
Key Responsibilities
- Develop and optimize ETL/ELT pipelines using Databricks and PySpark.
- Build scalable data workflows on AWS (EC2, S3, Glue, Lambda, IAM) or Azure (ADF, ADLS, Synapse).
- Implement and manage Delta Lake (ACID, schema evolution, time travel).
- Write efficient, complex SQL for transformation and analytics.
- Build and support batch and streaming ingestion (Kafka, Kinesis, EventHub).
- Optimize Databricks clusters, jobs, notebooks, and PySpark performance.
- Collaborate with cross-functional teams to deliver reliable data solutions.
- Ensure data governance, security, and compliance.
- Troubleshoot pipelines and support CI/CD deployments.
Required Skills & Experience
- 4–8 years in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks (clusters, jobs, workflows).
- Advanced PySpark and strong Python skills.
- Expert-level SQL (complex queries, window functions).
- Practical experience with AWS (preferred) or Azure cloud services.
- Experience with Delta Lake, Parquet, and data lake architectures.
- Familiarity with CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Good understanding of data modeling, optimization, and distributed systems.
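The window-function requirement above can be shown with a running total per key, the same SQL shape used in Spark SQL on Databricks. Table and column names are invented, and the sketch uses stdlib sqlite3 (which supports window functions from SQLite 3.25) so it runs without a cluster:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, day INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("a", 1, 10.0), ("a", 2, 5.0), ("b", 1, 7.0), ("a", 3, 2.0),
])

# Running total per customer, ordered by day: the canonical window-function query.
rows = conn.execute("""
    SELECT customer, day,
           SUM(amount) OVER (PARTITION BY customer ORDER BY day) AS running_total
    FROM sales
    ORDER BY customer, day
""").fetchall()
print(rows)
```

`PARTITION BY` restarts the accumulation per customer while `ORDER BY` inside the window defines the cumulative frame, a pattern that transfers directly to PySpark's `Window.partitionBy(...).orderBy(...)`.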
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires full-time commitment—please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma)
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- Please refrain from applying if you are currently engaged in other projects; we require dedicated availability.
Role: Senior Backend Engineer (Node.js + TypeScript + Postgres)
Location: Pune
Type: Full-Time
Who We Are:
After a highly successful launch, Azodha is ready to take its next major step. We are seeking a passionate and experienced Senior Backend Engineer to build and enhance a disruptive healthcare product. This is a unique opportunity to get in on the ground floor of a fast-growing startup and play a pivotal role in shaping both the product and the team.
If you are an experienced backend engineer who thrives in an agile startup environment and has a strong technical background, we want to hear from you!
About The Role:
As a Senior Backend Engineer at Azodha, you’ll play a key role in architecting, solutioning, and driving development of our AI-led interoperable digital enablement platform. You will work closely with the founder/CEO to refine the product vision, drive product innovation and delivery, and grow with a strong technical team.
What You’ll Do:
* Technical Excellence: Design, develop, and scale backend services using Node.js and TypeScript, including REST and GraphQL APIs. Ensure systems are scalable, secure, and high-performing.
* Data Management and Integrity: Work with Prisma or TypeORM, and relational databases like PostgreSQL and MySQL.
* Continuous Improvement: Stay updated with the latest trends in backend development, incorporating new technologies where appropriate. Drive innovation and efficiency within the team
* Utilize ORMs such as Prisma or TypeORM to interact with database and ensure data integrity.
* Follow Agile sprint methodology for development.
* Conduct code reviews to maintain code quality and adherence to best practices.
* Optimize API performance to deliver responsive user experiences.
* Participate in the entire development lifecycle, from initial planning and design through maintenance.
* Troubleshoot and debug issues to ensure system stability.
* Collaborate with QA teams to ensure high quality releases.
* Mentor and provide guidance to junior developers, offering technical expertise and constructive feedback.
Requirements
* Bachelor's degree in Computer Science, Software Engineering, or a related field.
* 5+ years of hands-on experience in backend development using Node.js and TypeScript.
* Experience working with PostgreSQL or MySQL.
* Proficiency in TypeScript and its application in Node.js.
* Experience with ORMs such as Prisma or TypeORM.
* Familiarity with Agile development methodologies.
* Strong analytical and problem-solving skills.
* Ability to work independently and in a team-oriented, fast-paced environment.
* Excellent written and oral communication skills.
* Self-motivated and proactive attitude.
Preferred:
* Experience with other backend technologies and languages.
* Familiarity with continuous integration and deployment process.
* Contributions to open-source projects related to backend development.
Note: Please do not apply if your primary database is not PostgreSQL.
Join our team of talented engineers and be part of building cutting-edge backend systems that drive our applications. As a Senior Backend Engineer, you'll have the opportunity to shape the future of our backend infrastructure and contribute to the company's success. If you are passionate about backend development and meet the above requirements, we encourage you to apply and become a valued member of our team at Azodha.
We are seeking a highly skilled Power Platform Developer with deep expertise in designing, developing, and deploying solutions using Microsoft Power Platform. The ideal candidate will have strong knowledge of Power Apps, Power Automate, Power BI, Power Pages, and Dataverse, along with integration capabilities across Microsoft 365, Azure, and third-party systems.
Key Responsibilities
- Solution Development:
- Design and build custom applications using Power Apps (Canvas & Model-Driven).
- Develop automated workflows using Power Automate for business process optimization.
- Create interactive dashboards and reports using Power BI for data visualization and analytics.
- Configure and manage Dataverse for secure data storage and modeling.
- Develop and maintain Power Pages for external-facing portals.
- Integration & Customization:
- Integrate Power Platform solutions with Microsoft 365, Dynamics 365, Azure services, and external APIs.
- Implement custom connectors and leverage Power Platform SDK for advanced scenarios.
- Utilize Azure Functions, Logic Apps, and REST APIs for extended functionality.
- Governance & Security:
- Apply best practices for environment management, ALM (Application Lifecycle Management), and solution deployment.
- Ensure compliance with security, data governance, and licensing guidelines.
- Implement role-based access control and manage user permissions.
- Performance & Optimization:
- Monitor and optimize app performance, workflow efficiency, and data refresh strategies.
- Troubleshoot and resolve technical issues promptly.
- Collaboration & Documentation:
- Work closely with business stakeholders to gather requirements and translate them into technical solutions.
- Document architecture, workflows, and processes for maintainability.
Required Skills & Qualifications
- Technical Expertise:
- Strong proficiency in Power Apps (Canvas & Model-Driven), Power Automate, Power BI, Power Pages, and Dataverse.
- Experience with Microsoft 365, Dynamics 365, and Azure services.
- Knowledge of JavaScript, TypeScript, C#, .NET, and Power Fx for custom development.
- Familiarity with SQL, DAX, and data modeling.
- Additional Skills:
- Understanding of ALM practices, solution packaging, and deployment pipelines.
- Experience with Git, Azure DevOps, or similar tools for version control and CI/CD.
- Strong problem-solving and analytical skills.
- Certifications (Preferred):
- Microsoft Certified: Power Platform Developer Associate.
- Microsoft Certified: Power Platform Solution Architect Expert.
Soft Skills
- Excellent communication and collaboration skills.
- Ability to work in agile environments and manage multiple priorities.
- Strong documentation and presentation abilities.
Job Summary:
We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.
Responsibilities:
- Deploy, configure, and troubleshoot various infrastructure and application environments
- Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
- Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
- Collaborate with application teams on infrastructure design and issues
- Architect solutions that optimally meet business needs
- Implement CI/CD pipelines and automate deployment processes
- Disaster recovery and infrastructure restoration
- Restore/Recovery operations from backups
- Automate routine tasks
- Execute company initiatives in the infrastructure space
- Expertise with observability tools like ELK, Prometheus, Grafana, and Loki
Qualifications:
- Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
- Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
- Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
- Experience in architecting solutions that optimally meet business needs
- Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
- Strong understanding of system concepts like high availability, scalability, and redundancy
- Ability to work with application teams on infrastructure design and issues
- Excellent problem-solving and troubleshooting skills
- Experience with automation of routine tasks
- Good communication and interpersonal skills
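The qualifications above call for infrastructure-as-code experience (Terraform, CloudFormation). As a minimal sketch of the idea, the following Python snippet emits a CloudFormation template declaring a versioned, encrypted S3 bucket; the logical ID is a placeholder, and real projects would typically author templates directly or via a tool like Terraform or the CDK.

```python
import json

def s3_bucket_template(logical_id: str) -> dict:
    """Build a minimal CloudFormation template declaring a versioned,
    SSE-S3-encrypted S3 bucket (AWS::S3::Bucket resource schema)."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "VersioningConfiguration": {"Status": "Enabled"},
                    "BucketEncryption": {
                        "ServerSideEncryptionConfiguration": [
                            {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
                        ]
                    },
                },
            }
        },
    }

# "BackupBucket" is an illustrative logical ID, not a real resource.
template = s3_bucket_template("BackupBucket")
print(json.dumps(template, indent=2))
```

The point of IaC is exactly this: infrastructure is declared as reviewable, version-controlled data rather than configured by hand.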
Education and Experience:
- Bachelor's degree in Computer Science or a related field
- 5 to 10 years of experience as a DevOps Engineer or in a related role
- Experience with observability tools like ELK, Prometheus, Grafana
Working Conditions:
The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.
Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.
We are looking for an AI/ML Engineer with 4-5 years of experience who can design, develop, and deploy scalable machine learning models and AI-driven solutions. The ideal candidate should have strong expertise in data processing, model building, and production deployment, along with solid programming and problem-solving skills.
Key Responsibilities
- Develop and deploy machine learning, deep learning, and NLP models for various business use cases.
- Build end-to-end ML pipelines including data preprocessing, feature engineering, training, evaluation, and production deployment.
- Optimize model performance and ensure scalability in production environments.
- Work closely with data scientists, product teams, and engineers to translate business requirements into AI solutions.
- Conduct data analysis to identify trends and insights.
- Implement MLOps practices for versioning, monitoring, and automating ML workflows.
- Research and evaluate new AI/ML techniques, tools, and frameworks.
- Document system architecture, model design, and development processes.
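The pipeline stages listed above (preprocessing, training, evaluation) can be illustrated with a toy, dependency-free Python sketch. This is not the stack the role uses (the role names TensorFlow/PyTorch/Scikit-learn); it just shows the shape of an end-to-end pipeline on synthetic data, with all names illustrative.

```python
# Toy ML pipeline: preprocessing -> training -> evaluation.

def min_max_scale(rows):
    """Scale each feature column to [0, 1] (preprocessing step)."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h > l else 0.0 for v, l, h in zip(r, lo, hi)]
        for r in rows
    ]

def train_centroids(X, y):
    """'Train' a nearest-centroid classifier: one mean vector per class."""
    centroids = {}
    for label in set(y):
        members = [x for x, t in zip(X, y) if t == label]
        centroids[label] = [sum(c) / len(members) for c in zip(*members)]
    return centroids

def predict(centroids, x):
    """Classify x by the closest class centroid (squared distance)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], x))

# Synthetic dataset: two well-separated clusters.
X = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.1], [5.0, 5.0], [5.2, 4.8], [4.9, 5.1]]
y = [0, 0, 0, 1, 1, 1]

X_scaled = min_max_scale(X)
model = train_centroids(X_scaled, y)
accuracy = sum(predict(model, x) == t for x, t in zip(X_scaled, y)) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

In production, each stage would be a versioned, monitored step in an MLOps workflow (e.g., tracked with MLflow or orchestrated with Airflow, as the role mentions).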
Required Skills
- Strong programming skills in Python (NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras).
- Hands-on experience in building and deploying ML/DL models in production.
- Good understanding of machine learning algorithms, neural networks, NLP, and computer vision.
- Experience with REST APIs, Docker, Kubernetes, and cloud platforms (AWS/GCP/Azure).
- Working knowledge of MLOps tools such as MLflow, Airflow, DVC, or Kubeflow.
- Familiarity with data pipelines and big data technologies (Spark, Hadoop) is a plus.
- Strong analytical skills and ability to work with large datasets.
- Excellent communication and problem-solving abilities.
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI, Machine Learning, or related fields.
- Experience in deploying models using cloud services (AWS Sagemaker, GCP Vertex AI, etc.).
- Experience in LLM fine-tuning or generative AI is an added advantage.
What You Will be Doing:
● Develop and maintain software that is scalable, secure, and efficient
● Collaborate with Technical Architects & Business Analysts
● Architect and design software solutions that meet project requirements
● Mentor and train junior developers to improve their skills and knowledge
● Conduct code reviews ensuring the code is maintainable, readable, and efficient
● Research and evaluate new technologies to improve the processes
● Effective communication skills, particularly in documenting and explaining code and technical concepts.
Skills We Are Looking For:
● 5+ years of extensive hands-on experience with Node.js and TypeScript
● Strong understanding of RESTful API design and implementation.
● Comfortable with debugging, performance tuning, and optimizing Node.js applications.
● Strong problem-solving abilities and attention to detail.
● Experience with authentication and authorization protocols, such as OAuth, JWT and session management.
● Understanding of security best practices in backend development, including data encryption and vulnerability mitigation.
Bonus Skills
● Experience with server-side frameworks such as Express.js or NestJS.
● Familiarity with cloud platforms (e.g., AWS, Azure, or Google Cloud (preferred)) and their services for backend deployment.
● Familiarity with NoSQL databases (MongoDB preferred), and the ability to design and optimize database queries.
Why You’ll Love It Here
● Innovative Culture - We believe in pushing boundaries
● Impactful Work - You won’t just write code, you will help build the future
● Collaborative Environment - We believe that everyone has a voice that matters
● Work-Life Balance - Our flexible work environment encourages you to have space to recharge
About Intro
Intro is a dating app where LGBTQ South Asians find love. Built by Queer Desis in New York for the 100 million queer South Asians around the globe who deserve a space of their own. Our mission: help 1 million Queer Desis find love by the end of 2026. We’re creating a safer, more intentional, and community-driven dating experience — one that celebrates identity, culture, and connection.
The Role
We’re looking for a Founding Full-Stack Engineer who thrives in 0→1 environments. You’ll take ownership of architecture, design, and execution across backend, web, and mobile (iOS/Android) — helping shape both the product and the culture of the company.
You’ll be joining at the earliest stage — working directly with the founding team on everything from feature development to infrastructure decisions and product strategy.
Responsibilities
- Architect and build scalable backend systems (APIs, data models, authentication, messaging, matching).
- Lead development across web and mobile clients (React, React Native, Swift/Kotlin).
- Collaborate on product design and iterate quickly on user feedback.
- Set up CI/CD, testing, and monitoring pipelines.
- Help define the tech culture, best practices, and early engineering team standards.
- Contribute to early hiring and mentorship as we grow.
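The matching system mentioned above could take many forms; as a purely hypothetical sketch (not Intro's actual algorithm; field names, weights, and the scoring blend are all made up), here is an interest-overlap score based on Jaccard similarity:

```python
# Hypothetical match-scoring sketch using Jaccard similarity.
# Profile fields and weights are illustrative only.

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|; defined as 0.0 when both sets are empty."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def match_score(profile_a: dict, profile_b: dict) -> float:
    """Blend interest overlap and language overlap into one score."""
    interests = jaccard(set(profile_a["interests"]), set(profile_b["interests"]))
    languages = jaccard(set(profile_a["languages"]), set(profile_b["languages"]))
    return 0.7 * interests + 0.3 * languages

a = {"interests": {"poetry", "cricket", "cooking"}, "languages": {"hindi", "english"}}
b = {"interests": {"cricket", "cooking", "film"}, "languages": {"english"}}
print(round(match_score(a, b), 3))
```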
You Might Be a Great Fit If You
- Care deeply about building for LGBTQ and South Asian communities.
- Are motivated by impact and ownership, not just code.
- Thrive in ambiguity and love solving real user problems fast.
- Want to help define the technical and cultural DNA of a mission-driven company.
Interview Process
- AI Screen – Initial automated technical and culture-fit assessment.
- Web Challenge – Build a small feature for the web app to demonstrate frontend and full-stack skills.
- iOS Challenge – Build a small feature for the iOS app to showcase mobile development and design sense.
- Android Challenge – Build a small feature for the Android app to highlight cross-platform depth.
- Founder Chat – Meet with the Founder to discuss vision, values, and long-term fit.
What We Offer
- Competitive salary.
- Full-time (40 hours/week) with flexible hours.
- Opportunity to shape a product with global cultural impact.
- Work directly with the founding team building something that truly matters.
Title – Principal Cloud Architect
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career!
External Job Title :
Principal Cloud Cost Optimization Engineer
Position Responsibilities :
The Cloud Cost Optimization Engineer plays a key role in supporting the full lifecycle of cloud financial management (FinOps) at Deltek—driving visibility, accountability, and efficiency across our cloud investments. This role is responsible for managing cloud spend, forecasting, and identifying optimization opportunities that support Deltek's cloud expansion and financial performance goals.
We are seeking a candidate with hands-on experience in Cloud FinOps practices, software development capabilities, AI/automation expertise, strong analytical skills, and a passion for driving financial insights that enable smarter business decisions. The ideal candidate is a self-starter with excellent cross-team collaboration abilities and a proven track record of delivering results in a fast-paced environment.
Key Responsibilities:
- Prepare and deliver monthly reports and presentations on cloud spend performance versus plan and forecast for Finance, IT, and business leaders.
- Support the evaluation, implementation, and ongoing management of cloud consumption and financial management tools.
- Apply financial and vendor management principles to support contract optimization, cost modeling, and spend management.
- Clearly communicate technical and financial insights, presenting complex topics in a simple, actionable manner to both technical and non-technical audiences.
- Partner with engineering, product, and infrastructure teams to identify cost drivers, promote best practices for efficient cloud consumption, and implement savings opportunities.
- Lead cost optimization initiatives, including analyzing and recommending savings plans, reserved instances, and right-sizing opportunities across AWS, Azure, and OCI.
- Collaborate with the Cloud Governance team to ensure effective tagging strategies and alerting frameworks are deployed and maintained at scale.
- Support forecasting by partnering with infrastructure and engineering teams to understand demand plans and proactively manage capacity and spend.
- Build and maintain financial models and forecasting tools that provide actionable insights into current and future cloud expenditures.
- Develop and maintain automated FinOps solutions using Python, SQL, and cloud-native services (Lambda, Azure Functions) to streamline cost analysis, anomaly detection, and reporting workflows.
- Design and implement AI-powered cost optimization tools leveraging GenAI APIs (OpenAI, Claude, Bedrock) to automate spend analysis, generate natural language insights, and provide intelligent recommendations to stakeholders.
- Build custom integrations and data pipelines connecting cloud billing APIs, FinOps platforms, and internal systems to enable real-time cost visibility and automated alerting.
- Develop and sustain relationships with internal stakeholders, onboarding them to FinOps tools, processes, and continuous cost optimization practices.
- Create and maintain KPIs, scorecards, and financial dashboards to monitor cloud spend and optimization progress.
- Drive a culture of optimization by translating financial insights into actionable engineering recommendations, promoting cost-conscious architecture, and leveraging automation for resource optimization.
- Use FinOps tools and services to analyze cloud usage patterns and provide technical cost-saving recommendations to application teams.
- Develop self-service FinOps portals and chatbots using GenAI to enable teams to query cost data, receive optimization recommendations, and understand cloud spending through natural language interfaces.
- Leverage Generative AI tools to enhance FinOps automation, streamline reporting, and improve team productivity across forecasting, optimization, and anomaly detection.
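The anomaly-detection responsibility above can be sketched in miniature: flag days whose spend deviates from the mean by more than k standard deviations. The numbers are made up, and a real FinOps pipeline would pull daily costs from cloud billing APIs (e.g., AWS Cost Explorer) rather than a hard-coded list.

```python
# Simple z-score spend-anomaly check on a series of daily costs.
from statistics import mean, stdev

def flag_anomalies(daily_costs, k=3.0):
    """Return indices of days whose cost is more than k sigma from the mean."""
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    if sigma == 0:
        return []  # flat spend: nothing to flag
    return [i for i, c in enumerate(daily_costs) if abs(c - mu) > k * sigma]

costs = [1020, 980, 1005, 995, 1010, 990, 4000]  # last day spikes
print(flag_anomalies(costs, k=2.0))
```

A production version would run on a schedule (e.g., a Lambda), use a trailing window rather than the whole series, and route flagged days into the alerting framework described above.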
Qualifications :
- Bachelor's degree in Finance, Computer Science, Information Systems, or a related field.
- 4+ years of professional experience in Cloud FinOps, IT Financial Management, or Cloud Cost Governance within an IT organization.
- 6-8 years of overall experience in Cloud Infrastructure Management, DevOps, Software Development, or related technical roles with hands-on cloud platform expertise.
- Hands-on experience with native cloud cost management tools (e.g., AWS Cost Explorer, Azure Cost Management, OCI Cost Analysis) and/or third-party FinOps platforms (e.g., Cloudability, CloudHealth, Apptio).
- Proven experience working within the FinOps domain in a large enterprise environment.
- Strong background in building and managing custom reports, dashboards, and financial insights.
- Deep understanding of cloud financial management practices, including chargeback/showback models, cost savings and avoidance tracking, variance analysis, and financial forecasting.
- Solid knowledge of cloud provider pricing models, billing structures, and optimization strategies.
- Practical experience with cloud optimization and governance practices such as anomaly detection, capacity planning, rightsizing, tagging strategies, and storage lifecycle policies.
- Skilled in leveraging automation to drive operational efficiency in cloud cost management processes.
- Strong analytical and data storytelling skills, with the ability to collect, interpret, and present complex financial and technical data to diverse audiences.
- Experience developing KPIs, scorecards, and metrics aligned with business goals and industry benchmarks.
- Ability to influence and drive change management initiatives that increase adoption and maturity of FinOps practices.
- Highly results-driven, detail-oriented, and goal-focused, with a passion for continuous improvement.
- Strong communicator and collaborative team player with a passion for mentoring and educating others.
- Strong proficiency in Python and SQL for data analysis, automation, and tool development, with demonstrated experience building production-grade scripts and applications.
- Hands-on development experience building automation solutions, APIs, or internal tools for cloud management or financial operations.
- Practical experience with GenAI technologies including prompt engineering, and integrating LLM APIs (OpenAI, Claude, Bedrock) into business workflows.
- Experience with Infrastructure as Code (e.g., Terraform) and CI/CD pipelines for deploying FinOps automation and tooling.
- Familiarity with data visualization tools (e.g., Power BI) and building interactive dashboards programmatically.
- Knowledge of ML/AI frameworks is a plus.
- Experience building chatbots or conversational AI interfaces for internal tooling is a plus.
- FinOps Certified Practitioner.
- AWS, Azure, or OCI cloud certifications are preferred.
Mission
Own architecture across web + backend, ship reliably, and establish patterns the team can scale on.
Responsibilities
- Lead system architecture for Next.js (web) and FastAPI (backend); own code quality, reviews, and release cadence.
- Build and maintain the web app (marketing, auth, dashboard) and a shared TS SDK (@revilo/contracts, @revilo/sdk).
- Integrate Stripe, Maps, analytics; enforce accessibility and performance baselines.
- Define CI/CD (GitHub Actions), containerization (Docker), env/promotions (staging → prod).
- Partner with Mobile and AI engineers on API/tool schemas and developer experience.
Requirements
- 6–10+ years; expert TypeScript, strong Python.
- Next.js (App Router), TanStack Query, shadcn/ui; FastAPI, Postgres, pydantic/SQLModel.
- Auth (OTP/JWT/OAuth), payments, caching, pagination, API versioning.
- Practical CI/CD and observability (logs/metrics/traces).
Nice-to-haves
- OpenAPI typegen (Zod), feature flags, background jobs/queues, Vercel/EAS.
Key Outcomes (ongoing)
- Stable architecture with typed contracts; <2% crash/error on web, p95 API latency in budget, reliable weekly releases.
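The pagination requirement above has several common implementations; since the backend is FastAPI, here is a minimal Python sketch of offset/limit pagination with response metadata. The envelope shape (items/total/next_offset) is illustrative, not Revilo's actual API contract.

```python
# Offset/limit pagination helper returning a response envelope.

def paginate(items, offset=0, limit=20):
    """Slice a result set and attach the metadata a client needs."""
    page = items[offset:offset + limit]
    next_offset = offset + limit if offset + limit < len(items) else None
    return {
        "items": page,
        "total": len(items),
        "offset": offset,
        "limit": limit,
        "next_offset": next_offset,  # None signals the last page
    }

rows = list(range(45))
page = paginate(rows, offset=40, limit=20)
print(page["items"], page["next_offset"])
```

In a real endpoint the slice would be pushed down to the database (SQL LIMIT/OFFSET or keyset pagination for large tables) rather than done in memory.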
Job Title: PHP Coordinator / Laravel Developer
Experience: 4+ Years
Work Mode: Work From Home (WFH)
Working Days: 5 Days
Job Description:
We are looking for an experienced PHP Coordinator / Laravel Developer to join our team. The ideal candidate should have strong expertise in PHP and the Laravel framework, along with the ability to coordinate and manage development tasks effectively as a Team Lead.
Key Responsibilities:
- Develop, test, and maintain web applications using PHP and Laravel.
- Coordinate with team members to ensure timely project delivery.
- Write clean, secure, and efficient code.
- Troubleshoot, debug, and optimize existing applications.
- Collaborate with stakeholders to gather and analyze requirements.
Required Skills:
- Strong experience in PHP and Laravel framework.
- Good understanding of MySQL, RESTful APIs, and cloud platforms (AWS/Azure/GCP).
- Familiarity with front-end technologies (HTML, CSS, JavaScript).
- Excellent communication and coordination skills.
- Ability to work independently in a remote environment.