50+ Remote AWS (Amazon Web Services) Jobs in India
Job Title: Senior Java Architect (12+ Years Experience)
Location: Remote (2 PM - 11 PM IST)
Experience: 12+ Years
Salary: ₹15L - ₹21L/yr
Employment Type: Contract (1 Year Extendable)
Job Description:
We are looking for a highly experienced Java Architect to join our team on a long-term contract basis. The ideal candidate should have deep expertise in designing scalable enterprise applications using Java and microservices architecture. The candidate should be capable of driving architecture decisions, mentoring development teams, and delivering high-performance solutions for enterprise-grade systems.
Key Responsibilities:
- Design and architect scalable enterprise applications using Java and microservices
- Lead system design and architecture decisions for complex applications
- Develop and implement microservices architecture patterns
- Drive technical architecture across multiple development teams
- Mentor and guide senior developers and engineering teams
- Handle high-traffic, scalable enterprise application architecture
- Collaborate with stakeholders to define technical requirements and roadmaps
- Ensure system performance, scalability, and reliability
- Review code and architecture designs for best practices
- Work with Spring Boot, Spring Cloud, and modern Java frameworks
Required Skills & Qualifications:
- 12+ years of hands-on experience in Java development and architecture
- Strong expertise in microservices architecture and design patterns
- Deep knowledge of system design principles and enterprise architecture
- Hands-on experience with Spring Boot and Spring Cloud
- Experience designing scalable, high-performance enterprise applications
- Proficiency in RESTful APIs, messaging systems, and API gateways
- Strong understanding of cloud platforms (AWS, Azure, or GCP)
- Experience with containerization (Docker) and orchestration (Kubernetes)
- Knowledge of database design (SQL and NoSQL)
- Expertise in JVM tuning and performance optimization
Technical Skills:
- Java 11+ (Java 17/21 preferred)
- Spring Boot, Spring Cloud, Spring Security
- Microservices architecture and design patterns
- RESTful APIs, GraphQL, gRPC
- Apache Kafka, RabbitMQ, or similar messaging systems
- Docker, Kubernetes, CI/CD pipelines
- PostgreSQL, MySQL, MongoDB, or similar databases
- Redis, Elasticsearch, or caching solutions
- Maven, Gradle, Git
Additional Requirements:
- Ability to work 2 PM - 11 PM IST (US/Europe Shift Timing)
- Immediate availability or short notice period (15 days max)
- Strong problem-solving and analytical skills
- Excellent communication skills for stakeholder collaboration
- Experience mentoring technical teams
- Contract commitment for a minimum of 1 year (extendable)
Good to Have (Preferred Skills):
- Experience with reactive programming (WebFlux, Project Reactor)
- Cloud certification (AWS Solutions Architect, etc.)
- Experience with observability tools (Prometheus, Grafana)
- Knowledge of domain-driven design (DDD)
- Experience with multi-cloud or hybrid cloud architectures
- Freelance/contract experience
What We Offer:
- ₹1.2L–₹1.8L per month fixed contract salary
- Long-term contract (1-year extendable)
- Remote work with a flexible 2-11 PM IST schedule
- Work with cutting-edge enterprise technologies
- Opportunity to architect large-scale systems
- Collaborate with experienced engineering teams
- Immediate start for the right candidates
What are we looking for?
- You have a good understanding and work experience in AKS, Kubernetes, and EKS.
- You are able to manage multi-region clusters for disaster recovery.
- You have a good understanding of the AWS stack.
- You have production-level experience with Kubernetes.
- You are comfortable coding/programming and can do so whenever required.
- You have worked with programmable infrastructure in some way - Built a CI/CD pipeline, Provisioned infrastructure programmatically or Provisioned monitoring and logging infrastructure for large sets of machines.
- You love automating things, even things that seem unautomatable. For example, one of our engineers used Ansible to set up his Ubuntu workstation and runs a playbook every time something has to be installed.
- You don’t throw around terms such as “high availability” or “resilient systems” without understanding at least their basics, because you know such systems are easy to talk about but take a fair amount of work to build in practice.
- You love coaching people, whether about 12-factor apps or the latest tool that cut the time a task takes, and you lead by example when it comes to technical work and community.
- You understand the areas you have worked in very well, but you are also curious about systems you have not worked on and want to fiddle with them.
- You know that understanding both applications and the runtime technologies they run on gives you a better perspective; you have never looked at them as two different things.
What will you be learning and doing?
- You will be working with customers trying to transform their applications and adopt cloud-native technologies. The technologies used will be Kubernetes, Prometheus, Service Mesh, Distributed tracing and public cloud technologies or on-premise infrastructure.
- The problems and solutions in this space are continuously evolving, but fundamentally you will be solving them with the simplest, most scalable automation.
- You will be building open source tools for problems that you think are common across customers and industry. No one ever benefited from re-inventing the wheel, did they?
- You will be hacking around open source projects, understanding their capabilities and limitations, and applying the right tool to the right job.
- You will be educating customers, from their operations engineers to their developers, on scalable ways to build and operate applications on modern cloud-native infrastructure.
Job Title: Lead Data Architect (AI & Cloud)
Company: Risosu Consulting
About the Role
Risosu Consulting is hiring a Lead Data Architect / Crew Manager for one of our global clients in the Cloud, Data & AI space. This role focuses on designing scalable data architectures and driving AI-led transformation across modern cloud platforms.
Key Responsibilities
- Design data strategies, architectures, and scalable cloud solutions
- Build and optimize data pipelines, data lakes, and warehouses
- Collaborate with cross-functional teams to enable AI/ML use cases
- Lead client engagements and translate business needs into data solutions
- Mentor and manage a team of consultants as a Crew Manager
Requirements
- 5+ years of experience in Data Architecture / Engineering
- Strong expertise in cloud platforms (GCP/AWS/Azure)
- Experience with data modeling, ETL, and data governance
- Exposure to tools like BigQuery, dbt, Airbyte, or Power BI
- Strong communication skills and stakeholder management
Why Join via Risosu?
- Opportunity to work on high-impact global projects
- Fast-growing, entrepreneurial environment
- Clear growth path with learning & certification support
- Work with cutting-edge Cloud, Data & AI technologies
If you’re passionate about building scalable data systems and leading teams, let’s connect.
Job role: Systems Engineer (L2)
Location: Remote/Bengaluru
Experience: 3-6 years
About the Role:
We are looking for a Systems Engineer (L2) to join our growing infrastructure team. You will be responsible for managing, optimizing, and scaling our cloud communication platform that handles billions of messages and voice calls annually.
Key Responsibilities:
— Design, deploy, and maintain scalable cloud infrastructure — AWS/GCP/Azure.
— Manage and optimize networking components — routers, switches, firewalls, load balancers.
— Handle incident response — monitor systems, identify issues, resolve production problems.
— Implement DevOps best practices — CI/CD pipelines, automation, containerization.
— Collaborate with backend and product teams on system architecture.
— Performance tuning — ensure high availability and reliability of the platform.
— Security management — implement security protocols and compliance standards.
Required Skills:
Technical:
- Linux/Unix administration — strong fundamentals
- Networking — TCP/IP, DNS, BGP, VoIP protocols
- Cloud platforms — AWS/GCP/Azure — minimum 2 years
- DevOps tools — Docker, Kubernetes, Jenkins, CI/CD
- Monitoring tools — Grafana, Prometheus, Kibana, Datadog
- Scripting — Python, Bash, Shell
- Databases — MySQL, PostgreSQL, Redis
Soft skills:
- Strong problem-solving under pressure
- Good communication — English written and verbal
- Team player — collaborative mindset
Good to Have:
- Experience in telecom/CPaaS/cloud communications industry
- Knowledge of VoIP, SIP, RTP protocols
- AI/ML operations experience
- CCNA/AWS certifications

Mid-Size Product Engineering Services Company
This role will report to the Chief Technology Officer
You Will Be Responsible For
* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.
* Leading a team in building a high-performing and scalable SaaS product.
* Conducting code reviews to maintain code quality and follow best practices
* Developing DevOps practices that promote automation, including asset creation, enterprise strategy definition, and team training
* Developing and building microservices leveraging cloud services
* Working on application security aspects
* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.
* Creating a culture of innovation that enables the continued growth of individuals and the company
* Working closely with Product and Business teams to build winning solutions
* Leading talent management, including hiring, developing, and retaining a world-class team
Ideal Profile
* You possess a Degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience building a SaaS / Fintech platform.
* Proficiency in MERN / Java / Full Stack.
* You have led a team in optimizing the performance and scalability of a product
* You have extensive experience with DevOps environments and CI/CD practices and can train teams.
* You're a hands-on leader, visionary, and problem solver with a passion for excellence.
* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.
What's on Offer?
* Exciting opportunity to drive the Engineering efforts of a reputed organisation
* Work alongside & learn from best in class talent
* Competitive compensation + ESOPs
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹18,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days working)
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend)
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, and React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with full-time work experience (1 to 2 years) in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
Brikito — Lead Full-Stack Developer
Job Description
About Brikito
Brikito is an early-stage PropTech startup building a construction management platform for SME developers and contractors. The founder has 7+ years of hands-on construction experience and an MBA from Warwick Business School. We have initial funding, a domain (brikito.com), wireframes ready, and active customer validation underway. We need our first technical leader to take this from wireframes to a live product.
This is a ground-floor opportunity. You will be the first technical hire — the person who makes every architecture decision and writes the first line of production code.
The Role
Title: CTO / Lead Full-Stack Developer (title depends on experience and equity arrangement)
Location: India (remote OK, occasional visits to Chennai office and overseas office planning to set up in Singapore or Dubai)
Type: Full-time
Compensation: ₹1,00,000–₹2,50,000/month + meaningful equity (0.5%–5% depending on role level, vesting over 4 years with a cliff)
Start Date: May 2026
Reports to: Founder/CEO
What You Will Do
Months 1–3: Build the MVP
- Own all technical decisions — architecture, tech stack, database design, hosting
- Build and ship a working MVP with 3 core features: project dashboard, billing/invoicing, and indent/procurement management
- Set up CI/CD pipeline, staging, and production environments
- Integrate payment gateway (Razorpay for India)
- Build both web and mobile-responsive interfaces
- Ship the MVP within 12 weeks
Months 3–6: Iterate and Scale
- Onboard beta users and fix bugs based on real usage
- Build features based on customer feedback (not assumptions)
- Integrate AI capabilities where they add clear user value (e.g., auto-generated progress reports)
- Hire and manage 1–2 junior developers as the team grows
- Set up monitoring, error tracking, and basic analytics
Months 6–12: Lead the Technical Team
- Grow the engineering team to 4–6 people
- Establish code review processes, documentation standards, and sprint rhythms
- Own the technical roadmap alongside the founder
- Participate in investor conversations as the technical co-founder (if CTO-level)
- Make build-vs-buy decisions for new features
Required Skills
Must Have
- 7+ years of professional software development experience
- Strong proficiency in React or Next.js (frontend)
- Strong proficiency in Node.js (backend) — Express, Nest.js, or similar
- PostgreSQL or MySQL — database design, query optimisation, migrations
- REST API design — clean, well-documented APIs
- Cloud deployment — AWS (EC2, RDS, S3) or GCP equivalent
- Expertise in AI tools and integrations - Anthropic, OpenAI, Perplexity, etc.
- Git — clean branching, PR-based workflow
- Has shipped at least one product that real users used — not just academic or internal tools
- Comfortable working independently — no one will tell you what to do step by step
Strongly Preferred
- Previous experience at a startup (Series A or earlier)
- Experience building SaaS or B2B products
- Experience with mobile development (React Native or Flutter)
- Experience integrating payment gateways (Razorpay, Stripe)
- Experience with third-party API integrations (OpenAI, Twilio, etc.)
- Understanding of CI/CD pipelines (GitHub Actions, Docker)
- Basic understanding of construction, real estate, or field operations (not required, but a plus)
Nice to Have
- Experience with TypeScript
- Experience with real-time features (WebSockets, push notifications)
- Familiarity with Figma (to translate wireframes into UI)
- Experience hiring and mentoring junior developers
- Open source contributions or a personal project portfolio
What We Are NOT Looking For
- Someone who needs detailed specifications for every task — we move fast and figure things out together
- Someone who only wants to code and not think about the product — you will be in customer calls and strategy discussions
- Someone who optimises for perfect code over shipping — we ship first, refactor later
- Someone looking for a stable corporate job — this is a startup with all the chaos and excitement that comes with it
What You Get
- Equity ownership in an early-stage company with a large addressable market ($14.9B global construction SaaS)
- Founding team credit — you will be recognised as a technical co-founder if you take the CTO role
- Direct impact — every line of code you write will be used by real customers within weeks
- Technical freedom — you choose the stack, the tools, the architecture
- A founder who understands the domain — you will never have to guess what contractors need because the CEO has built construction projects himself
- Growth path — as we raise funding and scale, you grow into VP Engineering or CTO of a funded company
How to Apply
Send the following:
- A short note (5–10 lines) on why this role interests you and what you'd bring
- Your LinkedIn profile or resume
- One link to something you've built — a live product, a GitHub repo, an app, anything that shows your work
- Your availability — when can you start?
We will respond within 48 hours. The process is:
- 30-minute video call with the founder
- Small paid technical task (8 hours of work, ₹5,000 paid regardless of outcome)
- Final conversation about role, equity, and start date
- Offer within 1 week of first call
Questions?
DM the founder on LinkedIn: https://www.linkedin.com/in/aashiqahamed/
This is not a job posting from HR. This is a founder looking for his first technical partner. If this excites you, reach out.
Job Title: Full Stack Engineer (Django + Next.js)
We’re looking for a Full Stack Engineer with strong backend fundamentals and solid frontend experience to build scalable web products and APIs.
Must-Have
• Django + DRF (2+ years): Models, serializers, services, API views, migrations, query optimization (select_related / prefetch_related), transaction.atomic, custom managers
• Next.js + React (2+ years): App Router, SSR, client components, dynamic imports, useQuery, responsive UIs with Tailwind
• REST APIs: Auth, permissions, pagination, error handling, CORS, JWT flows
• PostgreSQL: Schema design, indexes, constraints, JSON fields, raw SQL when needed
• Celery / async tasks: Retry logic, idempotency, task chaining
• Git: Clean commits, branching, PR workflow
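The REST APIs bullet above mentions JWT flows. As a rough illustration of what an HS256 JWT sign/verify cycle involves, here is a stdlib-only Python sketch (function names are hypothetical; a real Django/DRF service would use a maintained library such as PyJWT or djangorestframework-simplejwt):

```python
import base64, hashlib, hmac, json, time

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    pad = "=" * (-len(sig_b64) % 4)
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64 + pad)):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    payload = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
    if payload.get("exp", float("inf")) < time.time():
        raise ValueError("token expired")
    return payload
```

The same shape underlies auth middleware in both the Django and Next.js halves of the stack: sign on login, verify on every request, reject on bad signature or expiry.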
Good to Have
• AI / LLM integrations
• AWS S3 and presigned uploads
• Multi-tenancy
• WebRTC / MediaRecorder
• Docker
• Testing with pytest / Django TestCase / factory_boy
We’re looking for someone who can independently own features end-to-end and write clean, scalable code.
Software Engineer – EdTech (PHP)
Experience: 3+ Years
Work Mode: Permanent Work From Home
Role Summary
We are seeking a highly skilled software developer with strong experience in EdTech platforms and education ERP systems. The ideal candidate will have expertise in core PHP/Laravel and database technologies, with hands-on experience in building and scaling education-focused modules such as LMS, online examination systems, admissions, and fee management.
This role focuses on developing scalable, secure, and high-performance solutions for schools, colleges, and online learning platforms.
Key Responsibilities
- Design, develop, and maintain Education ERP and EdTech platform modules.
- Build and enhance systems for LMS (Learning Management System), online exams, admissions, fee management, HR, and finance.
- Develop and optimize REST APIs/GraphQL services for seamless integration with web and mobile platforms.
- Ensure high performance, scalability, and security for large-scale student and institutional data.
- Work closely with product, QA, and implementation teams to deliver EdTech features.
- Conduct code reviews, maintain coding standards, and mentor junior developers.
- Continuously improve platform capabilities based on EdTech trends and user needs.
Required Skills & Qualifications
- Strong expertise in Core PHP and the Laravel framework.
- Solid experience with MySQL, MongoDB, PostgreSQL (database design & optimization).
- Understanding of EdTech workflows like student lifecycle, course management, and assessments.
- Frontend basics: JavaScript, jQuery, HTML, CSS (React/Vue is a plus).
- Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email services).
- Familiarity with Git/GitHub, Docker, and CI/CD pipelines.
- Knowledge of cloud platforms (AWS, Azure, GCP) is an advantage.
- 3+ years of development experience, with at least 2 years in education ERP/EdTech systems.
Preferred Experience
- Prior experience working in EdTech companies or education ERP platforms.
- Deep understanding of LMS, online examination systems, admissions, fees, HR, and finance modules.
- Experience handling high-traffic educational platforms (e.g., exam portals, live classes, student dashboards).
- Exposure to scalable architecture for large student/user bases.
We are hiring a SOC Investigation Specialist on behalf of high-growth technology and enterprise partners building next-generation SOC automation and AI-driven investigation systems. This role is ideal for experienced SOC analysts who can apply real-world investigative judgment to review, validate, and construct high-quality security investigations across SIEM, endpoint, cloud, and identity environments.
Responsibilities
- Review, monitor, and evaluate SOC alerts and investigation outputs based on predefined scenarios and criteria.
- Distinguish true positives from false positives by validating investigative evidence and alert context.
- Perform end-to-end security investigations when required, including log analysis, entity pivoting, timeline reconstruction, and evidence correlation.
- Assess the correctness, completeness, and quality of SOC investigations produced by automated or human workflows.
- Apply consistent investigative judgment while recognizing that multiple valid investigation paths may exist for the same alert.
- Make clear binary determinations (e.g., ACCEPT / PASS) while also producing detailed ground-truth investigations when required.
- Use Splunk extensively to pivot across logs, entities, and timelines, including reading and reasoning about SPL queries.
- Maintain clear and accurate documentation of investigative steps, assumptions, evidence, and conclusions.
- Collaborate with program leads and other expert annotators to uphold high-quality investigation and annotation standards.
- Mentor or support other analysts where applicable, particularly in long-term or lead annotator roles.
Requirements
- 3+ years of hands-on experience as a SOC analyst in a production SOC environment (Tier 2 or above strongly preferred).
- Strong understanding of alert triage, incident investigation workflows, and evidence-based decision-making under time constraints.
- Mandatory hands-on experience with Splunk, including:
  - Conducting investigations using Splunk
  - Reading, understanding, and reasoning about SPL queries
  - Pivoting between logs, entities, and timelines
- Proven ability to evaluate SOC investigations and determine whether conclusions are valid, incomplete, or incorrect.
- Strong investigative judgment and comfort making decisive evaluations.
- Fluent English (written and spoken) with strong documentation and communication skills.
Nice to Have
- Experience with Endpoint Detection & Response (EDR) tools such as CrowdStrike Falcon, Microsoft Defender for Endpoint, or SentinelOne.
- Experience analyzing cloud security logs and signals:
  - AWS (CloudTrail, GuardDuty)
  - Azure (Activity Log, Defender for Cloud)
  - GCP (Cloud Audit Logs)
- Familiarity with Identity & Access Management platforms such as Okta Identity Cloud or Microsoft Entra ID (Azure AD).
- Experience with email security tools like Proofpoint or Mimecast.
- SOC leadership or mentoring experience.
- Basic scripting experience (Python or similar).
- Security certifications (optional): GCIA, GCIH, GCED, Splunk certifications, Security+, CCNA, or cloud security certifications.
About the Role
As an SDET II, you'll own significant parts of our test infrastructure and drive quality strategy across the engineering team. You'll design testing approaches for complex features, mentor junior engineers, and make architectural decisions that impact how we approach automation at scale.
What You'll Do
- Architect and implement test frameworks and infrastructure
- Design testing strategies for new features and platform capabilities
- Mentor SDET I engineers and conduct technical code reviews
- Refactor and optimize existing test suites for maintainability and performance
- Make architectural decisions about test design patterns and abstractions
- Build and manage AWS-based test environments and infrastructure
- Integrate testing earlier in the development lifecycle through cross-team collaboration
- Optimize CI/CD pipeline performance and test execution times
- Develop custom tooling and reporting to surface quality insights
Technical Requirements
Core Skills:
- Advanced TypeScript expertise: generics, decorators, advanced typing patterns, type inference
- Deep understanding of asynchronous programming, concurrency, and race condition prevention
- Strong software design principles with domain-driven design (DDD) approach
- Extensive experience with Playwright including deep knowledge of fixtures architecture
- Expert-level Git, GitHub, and distributed version control workflows
- Layered architecture design: understanding PCOM (Page Component Object Model) and POM patterns
- Object-oriented design in test frameworks—building scalable abstractions over linear scripts
- API testing and orchestration (REST/GraphQL integration with UI workflows)
Infrastructure & DevOps:
- AWS: EC2 configuration, CloudWatch log analysis, debugging cloud environments
- Terraform for infrastructure as code (plus)
- Docker: containerization, docker-compose, image management
- CI/CD debugging: analyzing pipeline failures, optimizing execution
- Advanced reporting: Allure configuration, Playwright HTML reports, custom reporting solutions
Additional Experience:
- Test infrastructure development and framework architecture
- Design patterns implementation (Factory, Builder, Facade, Composite)
- Performance optimization at scale
- npm ecosystem and package management (nice to have)
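The Core Skills list above calls for POM/PCOM layering and object-oriented abstractions over linear scripts. A rough, framework-agnostic illustration of that layering, sketched in Python with a stubbed driver standing in for a real Playwright page (all class and selector names are hypothetical):

```python
class FakeDriver:
    """Stub standing in for a browser driver; records actions instead of clicking."""
    def __init__(self):
        self.actions = []
    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))
    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    """Page object: selectors and flows live here, not in test scripts."""
    USER, PASS, SUBMIT = "#user", "#pass", "#submit"
    def __init__(self, driver):
        self.driver = driver
    def login(self, user, password):
        self.driver.fill(self.USER, user)
        self.driver.fill(self.PASS, password)
        self.driver.click(self.SUBMIT)
        return self

# A test script talks to the page object, never to raw selectors:
driver = FakeDriver()
LoginPage(driver).login("alice", "hunter2")
```

The point of the abstraction is that when a selector changes, only the page object changes; the linear-script alternative scatters that selector across every test.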
Overview:
We're looking for a Full Stack Developer with strong backend expertise who can build, manage, and scale AI-driven products end to end. You'll play a critical role in designing scalable architectures, optimizing performance and cost, and building robust AI and agentic systems.
Responsibilities
1. Architect and build scalable backend systems using FastAPI, PostgreSQL, and Redis.
2. Design, develop, and maintain AI-driven applications, integrating multiple LLMs, APIs, and agentic frameworks.
3. Implement vector databases (pgvector, Qdrant, etc.) for RAG and AI memory systems.
4. Orchestrate multi-agent AI systems with LangChain/LangGraph, including function calling, agent collaboration, and monitoring.
5. Build and integrate RESTful APIs for frontend and external use.
6. Manage DevOps workflows, including CI/CD, cloud deployments (AWS/GCP), server scaling, and logging/monitoring (Sentry).
7. Optimize application cost, latency, and reliability, balancing speed with LLM call efficiency and caching strategies.
8. Collaborate with product, design, and AI teams to translate business requirements into high-performing tech.
9. Maintain documentation and ensure code quality with tests, reviews, and async-first architecture.
10. Contribute to frontend development (React + TypeScript) when necessary, ensuring seamless API integration and data visualization.
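The responsibility above about balancing speed with LLM call efficiency usually comes down to caching identical prompts. A minimal in-process sketch of the idea (all names are hypothetical; the posting's stack suggests Redis would back this in production):

```python
import hashlib, json, time

class TTLCache:
    """Cache expensive LLM responses keyed by a hash of (model, prompt)."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def _key(self, prompt: str, model: str) -> str:
        return hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()

    def get_or_call(self, prompt, model, call_fn, now=None):
        now = time.time() if now is None else now
        key = self._key(prompt, model)
        hit = self.store.get(key)
        if hit and hit[0] > now:
            return hit[1]                # cache hit: skip the paid LLM call
        value = call_fn(prompt)          # cache miss: pay for exactly one call
        self.store[key] = (now + self.ttl, value)
        return value
```

The same pattern maps directly onto Redis with `SETEX` for the TTL; the trade-off being tuned is token spend and latency versus response freshness.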
Requirements
Core Skills
• Strong proficiency in Python and FastAPI.
• Experience with PostgreSQL (including pgvector) and SQLAlchemy (async).
• Solid understanding of Redis, RQ (Redis Queue), and caching mechanisms.
• Proven experience integrating LLMs and AI APIs (OpenAI, Anthropic, etc.).
• Hands-on experience with LangChain / LangGraph, RAG pipelines, and agent orchestration.
• Experience working with cloud platforms (AWS / GCP) and managing file storage (S3).
• Familiarity with frontend stacks (React, TypeScript, Tailwind, Zustand).
• Working knowledge of DevOps: Docker, CI/CD pipelines, deployment automation, and observability tools (Sentry, Mixpanel, Clarity).
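The pgvector and RAG bullets above boil down to ranking stored embeddings by similarity to a query embedding. A toy pure-Python sketch of that retrieval step (the vectors are made up for illustration; pgvector performs the same ranking in SQL with its distance operators):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=2):
    """docs: list of (text, embedding). Returns the k most similar texts."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [
    ("invoices overview", [0.9, 0.1, 0.0]),
    ("kubernetes setup",  [0.0, 0.2, 0.9]),
    ("billing faq",       [0.8, 0.3, 0.1]),
]
# A query embedding near the billing docs should retrieve those first.
result = top_k([1.0, 0.2, 0.0], docs, k=2)
```

In a RAG pipeline the retrieved texts are then stuffed into the LLM prompt; the embedding model, chunking strategy, and distance metric are the real tuning knobs.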
Bonus / Nice to Have
• Experience building agent monitoring dashboards or AI workflows.
• Prior experience in startup or product-based environments.
• Understanding of LLM cost optimization, token management, and function calling orchestration.
• Familiarity with external API integrations like BrightData, Hunter.io, Adzuna, and Serper.
• Experience building scalable AI products (e.g., chatbots, AI copilots, data agents, or automation tools).
Mindset
• Startup-ready: comfortable working in fast-paced, ambiguous environments.
• Deep curiosity about AI systems and automation.
• Strong sense of ownership and accountability for shipped products.
• Pragmatic and cost-conscious in architectural decisions.
• Excellent communication and documentation skills.
Description
Company is a fast-growing company founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.
As a member of our Cloud Engineering team, you will be working with fast-paced innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work across a large spectrum of technology and tools including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.
What you will do...
- Working with our clients to understand their requirements and technical challenges. Using this input you will develop a technical design for a solution and communicate the value of your solution to the client team.
- You will work to develop delivery estimates and an estimated project plan.
- You will act as the lead technical member of the implementation project team. You are responsible for making the key technical decisions and keeping delivery on track, and you should be able to unblock the team when things get stuck.
- Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
- Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
- Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
- Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.
What we need...
- 5+ years of experience working in a Software Engineering capacity
- Excellent knowledge and experience with Python, and preferably additional languages such as Go
- Strong critical thinking skills, and a bias towards problem solving
- Familiarity with implementing microservice architectures
- Fundamental skills with Kubernetes. You should be familiar with packaging and deploying your applications to k8s
- Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
- Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
- Experience deploying production workloads on the public cloud - either GCP or AWS
- Experience using CI/CD tools such as GitHub Actions, GitLab, etc
- Able to work with new tools and technologies where you may not have prior experience
- Comfortable with being on video in meetings internally and with clients
- Strong English communication skills
We are a fully remote company and offer competitive compensation and benefits.
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, governed, FHIR-aligned datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
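The slowly changing dimension (SCD) handling mentioned above is, in the common Type 2 pattern, a matter of expiring the current row for a key and appending a new versioned row. As a minimal illustrative sketch only (in-memory Python dicts, with all field names hypothetical, not this team's actual framework):

```python
from datetime import date

def scd2_upsert(dim_rows, business_key, new_record, today=None):
    """Apply a Type 2 SCD change: close out the current row for the
    business key (if its attributes changed) and append a new current
    row. Rows are plain dicts; all names are illustrative."""
    today = today or date.today().isoformat()
    for row in dim_rows:
        if row[business_key] == new_record[business_key] and row["is_current"]:
            # No-op if nothing actually changed.
            if all(row.get(k) == v for k, v in new_record.items()):
                return dim_rows
            row["valid_to"] = today       # expire the old version
            row["is_current"] = False
            break
    dim_rows.append({**new_record, "valid_from": today,
                     "valid_to": None, "is_current": True})
    return dim_rows

# Example: a patient changes clinics; history is preserved.
dim = [{"patient_id": 1, "clinic": "North", "valid_from": "2023-01-01",
        "valid_to": None, "is_current": True}]
dim = scd2_upsert(dim, "patient_id",
                  {"patient_id": 1, "clinic": "South"}, today="2024-06-01")
```

In a warehouse this same logic would typically be a set-based MERGE rather than a row loop; the sketch only shows the versioning semantics.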
The Stack You’ll Command
- Languages: Expert-level SQL (CTEs, window functions, query tuning) and production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
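As a toy illustration of the window-function style of SQL called out above, using Python's built-in sqlite3 module (table and column names here are hypothetical, not this team's schema):

```python
import sqlite3

# In-memory database with a hypothetical lab-results table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lab_results (patient_id INT, taken_on TEXT, value REAL)")
con.executemany("INSERT INTO lab_results VALUES (?, ?, ?)", [
    (1, "2024-01-01", 5.0),
    (1, "2024-02-01", 6.5),
    (2, "2024-01-15", 4.2),
])

# Latest result per patient via ROW_NUMBER() -- a set-based
# alternative to a correlated subquery per patient.
rows = con.execute("""
    SELECT patient_id, taken_on, value FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY patient_id ORDER BY taken_on DESC
        ) AS rn
        FROM lab_results
    ) WHERE rn = 1
    ORDER BY patient_id
""").fetchall()
# rows -> [(1, '2024-02-01', 6.5), (2, '2024-01-15', 4.2)]
```

(Window functions require SQLite 3.25+, which ships with modern Python builds.)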
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To be considered for the next stage of the process, please fill out the Google Form with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A
Advanced Software Architect
Position Responsibilities :
- Lead the architecture and development of AI-powered, distributed systems that meet enterprise-grade performance and security standards.
- Leverage AI tools for code generation, architectural design, and documentation to accelerate delivery and improve quality.
- Design, build, and maintain services using Python, Java, and Node.js, following clean-code and secure design principles.
- Develop agentic AI-based tools, domain-specific copilots, and developer productivity enhancements.
- Collaborate with cross-functional teams to define modular, scalable, and compliant architecture patterns.
- Conduct technical design reviews and produce detailed documentation, including system specifications, API docs, and architecture diagrams.
- Integrate AI solutions into CI/CD pipelines, ensuring observability, automated testing, and deployment standards are met.
- Implement robust monitoring and performance engineering practices to maintain high-quality deployments.
- Continuously evaluate emerging AI technologies and integrate them into development workflows for maximum impact.
- Champion best practices in security, automation, and performance optimization across the organization.
Qualifications :
- 8+ years in software engineering with full-stack or backend development in Python, Java, and/or Node.js.
- 3+ years with AI tools for development, prototyping, or documentation tasks.
- Experience with cloud-native development and containerized deployment (Docker, Kubernetes).
- Knowledge of AI integration patterns, vector stores, prompt engineering, and RAG pipelines.
- Ability to design software architecture using sequence diagrams, ERDs, data models, and threat models.
- Comfortable with Gen AI-first environments and working with remote Agile teams.
Preferred Qualifications
- Experience building AI copilots or developer tools using OpenAI/Claude SDKs, LangChain, or similar frameworks.
- Experience working in a fast-paced, AIDLC environment, with a strong understanding of CI/CD practices.
- Familiarity with GitHub Actions, Argo Workflows, Terraform, and monitoring/observability tools.
- Containerization and Orchestration: Proficiency in Docker and Kubernetes for containerization and orchestration.
- Cloud Platforms: Experience with cloud computing platforms such as AWS, Azure, or OCI Cloud.
Job Title: Software Developer (Contractor)
Location: Remote, Up to 1-year contract
Compensation: Hourly
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Job Title: Software Developer
Location: Remote
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Description
Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
Position Responsibilities:
- Collaborate with the development team to maintain, enhance, and scale the product for enterprise use.
- Design and develop scalable, high-performance solutions using cloud technologies and containerization.
- Contribute to all phases of the development lifecycle, following SOLID principles and best practices.
- Write well-designed, testable, and efficient code with a strong emphasis on Test-Driven Development (TDD), ensuring comprehensive unit, integration, and performance testing.
- Ensure software designs comply with specifications and security best practices.
- Recommend changes to improve application architecture, maintainability, and performance.
- Develop and optimize database queries using T-SQL.
- Prepare and produce software component releases.
- Develop and execute unit, integration, and performance tests.
- Support formal testing cycles and resolve test defects.
AI-Specific Responsibilities:
- Integrate AI-powered tools and frameworks to enhance code quality and development efficiency.
- Utilize AI-driven analytics to identify performance bottlenecks and optimize system performance.
- Implement AI-based security measures to proactively detect and mitigate potential threats.
- Leverage AI for automated testing and continuous integration/continuous deployment (CI/CD) processes.
- Guide the adoption and effective use of AI agents for automating repetitive development, deployment, and testing processes within the engineering team.
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Highly proficient in ASP.NET Core (C#) and full-stack development.
- Experience developing REST APIs.
- Proficiency in front-end technologies (JavaScript, HTML, CSS, Bootstrap, and UI frameworks).
- Strong database experience, particularly with T-SQL and relational database design.
- Advanced understanding of object-oriented programming (OOP) and SOLID principles.
- Experience with security best practices in web and API development.
- Knowledge of Agile SCRUM methodology and experience in collaborative environments.
- Experience with Test-Driven Development (TDD).
- Strong analytical skills, problem-solving abilities, and curiosity to explore new technologies.
- Ability to communicate effectively, including explaining technical concepts to non-technical stakeholders.
- High commitment to continuous learning, innovation, and improvement.
AI-Specific Qualifications:
- Proficiency in AI-driven development tools and platforms such as GitHub Copilot in Agentic Mode.
- Knowledge of AI-based security protocols and threat detection systems.
- Experience integrating GenAI or Agentic AI agents into full-stack workflows (e.g., using AI for code reviews, automated bug fixes, or system monitoring).
- Demonstrated proficiency with AI-assisted development tools and prompt engineering for code generation, testing, or documentation.
Senior Quality Engineer – AI Products
Fulltime
Remote
Requirements
● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.
● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.
● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.
● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.
● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.
● Experience with AWS or other major cloud platforms.
● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.
● Advanced skills with API and SQL testing methodologies.
● Familiarity with test management tools such as TestRail; experience with Qase is a plus.
● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.
● Experience with testing tools: Jira, Sentry, DataDog.
● Strong understanding of Agile/Scrum methodologies.
● Proven track record of mentoring junior engineers and contributing to process improvements.
● Excellent analytical and problem-solving abilities.
● Strong communication skills with ability to present to both technical and non-technical stakeholders.
● Proficiency in English (C1-C2 level).
● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
Preferred Qualifications
● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).
● Hands-on experience with document parsing, OCR, or unstructured data pipelines.
● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.
● Experience testing SaaS products in regulated industries (such as PCI-compliant).
● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).
● Experience with microservice architectures and distributed systems.
● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.
● Background in security or compliance testing for AI systems.
● Certifications such as ISTQB or CSTE.
● Experience working in legal technology, fintech, or professional services software.
● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.
● Experience evaluating and implementing new QE tools and processes.
Job Summary
We are looking for an experienced Java Full Stack Developer with strong expertise in Java, React.js, and AWS to design, develop, and maintain scalable web applications. The ideal candidate should have experience building high-performance applications and working across both front-end and back-end technologies.
Key Responsibilities
- Develop and maintain full-stack web applications using Java and React.js
- Design and build RESTful APIs and microservices using Java frameworks
- Develop responsive and interactive frontend interfaces using React.js
- Work with AWS services for deployment, scalability, and infrastructure
- Collaborate with cross-functional teams including product managers, designers, and QA
- Write clean, maintainable, and efficient code following best practices
- Participate in code reviews, testing, debugging, and performance optimization
- Implement CI/CD pipelines and cloud-based solutions
Required Skills
- Strong experience in Java (Spring Boot / Spring Framework)
- Good knowledge of React.js, JavaScript, HTML, CSS
- Experience building REST APIs and microservices architecture
- Hands-on experience with AWS services (EC2, S3, Lambda, RDS, etc.)
- Familiarity with Git, CI/CD pipelines, and Agile development
- Experience with database technologies (MySQL, PostgreSQL, or MongoDB)
Preferred Skills
- Experience with Docker / Kubernetes
- Knowledge of serverless architecture
- Experience working in cloud-native environments
- Understanding of system design and scalable architecture
Job Description:
Position Type: Full-Time Contract (with potential to convert to Permanent)
Location: Remote (Australian Time Zone)
Availability: Immediate Joiners Preferred
About the Role
We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.
The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.
Key Responsibilities
- Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
- Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
- Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
- Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
- Perform data profiling, data validation, and ensure data quality across systems.
- Work closely with data engineering teams to improve data structures for better reporting efficiency.
- Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
- Support deployment, version control, and documentation of BI solutions.
- Ensure availability of dashboards during Australian business hours.
Required Skills & Experience
- 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
- 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
- Advanced knowledge of SQL and performance tuning.
- Strong understanding of data modeling, ETL processes, and cloud data platforms.
- Experience working in fast-paced environments with tight delivery timelines.
- Excellent communication and stakeholder management skills.
- Ability to work independently and deliver high‑quality outputs aligned with business objectives.
Nice-to-Have Skills
- Knowledge of Python or any ETL tool.
- Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
- Tableau Server/Prep experience.
Contract Details
- Full-Time Contract for several months.
- High possibility of conversion to permanent, based on performance.
- Must be available to work on the Australian Time Zone.
- Immediate joiners are highly encouraged.
Role Overview:
We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
- Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
- Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
- Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
- Implement security best practices in pipelines, infrastructure, and cloud environments.
- Maintain version control and manage release cycles.
- Troubleshoot and resolve production issues efficiently.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, IT, or related field.
- Proven experience in DevOps, system administration, or cloud engineering.
- Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
- Hands-on experience with containerization (Docker, Kubernetes).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Scripting skills (Python, Bash, or PowerShell).
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Familiarity with monitoring and logging tools.
- Strong problem-solving, communication, and teamwork skills.
Preferred Qualifications:
- Experience with microservices architecture.
- Knowledge of networking, load balancing, and firewalls.
- Exposure to Agile/Scrum methodologies.
What We Offer:
- Competitive salary
- Flexible working hours and remote options.
- Learning and development opportunities.
- Collaborative and inclusive work environment.
Job Title: Data Engineer
Experience: 4–14 Years
Work Mode: Remote
Employment Type: Full-Time
Position Overview:
We are looking for highly experienced Senior Data Engineers to design, architect, and lead scalable, cloud-based data platforms on AWS. The role involves building enterprise-grade data pipelines, modernizing legacy systems, and developing high-performance scoring engines and analytics solutions, while collaborating closely with architecture, analytics, risk, and business teams to deliver secure, reliable, and scalable data solutions.
Key Responsibilities:
· Design and build scalable data pipelines for financial and customer data
· Build and optimize scoring engines (credit, risk, fraud, customer scoring)
· Design, develop, and optimize complex ETL/ELT pipelines (batch & real-time)
· Ensure data quality, governance, reliability, and compliance standards
· Optimize large-scale data processing using SQL, Spark/PySpark, and cloud technologies
· Lead cloud data architecture, cost optimization, and performance tuning initiatives
· Collaborate with Data Science, Analytics, and Product teams to deliver business-ready datasets
· Mentor junior engineers and establish best practices for data engineering
Key Requirements:
· Strong programming skills in Python and advanced SQL
· Experience building scalable scoring or rule-based decision engines
· Hands-on experience with Big Data technologies (Spark/PySpark/Kafka)
· Strong expertise in designing ETL/ELT pipelines and data modeling
· Experience with cloud platforms (AWS/Azure) and modern data architectures
· Solid understanding of data warehousing, data lakes, and performance tuning
· Knowledge of CI/CD, version control (Git), and production support best practices
Job Title : Data / Generative AI Engineer
Experience : 5+ Years (Mid-Level) | 10+ Years (Senior)
Location : Remote
Employment Type : Contract
Open Positions : 5
Job Overview :
We are hiring Data / Generative AI Engineers for remote contract engagements supporting client-facing AI implementations. The role involves building production-grade Generative AI solutions on AWS, including conversational AI systems, RAG-based architectures, intelligent automation platforms, and scalable data engineering pipelines.
Mandatory Skills :
Amazon Bedrock, Generative AI, RAG Architecture, LangChain/LlamaIndex/Bedrock Agents, Python (3.9+), AWS Serverless (Lambda, API Gateway, Step Functions), Vector Databases, Data Engineering & ETL, AWS Glue, Amazon Athena.
Key Responsibilities :
- Design and build production-ready Generative AI applications on AWS.
- Implement Retrieval-Augmented Generation (RAG) architectures for enterprise AI solutions.
- Integrate Amazon Bedrock with foundation models and enterprise systems.
- Develop AI agent orchestration workflows using frameworks such as LangChain, LlamaIndex, or Bedrock Agents.
- Build and manage serverless architectures using AWS services like Lambda, API Gateway, and Step Functions.
- Implement vector databases and semantic search solutions for intelligent knowledge retrieval.
- Design and maintain data engineering pipelines and ETL workflows for large-scale data processing.
- Use AWS Glue for data transformation and orchestration.
- Utilize Amazon Athena for querying large datasets and performing analytics.
- Develop scalable Python-based APIs and backend services.
- Collaborate with cross-functional teams and clients to deliver AI-powered solutions in production environments.
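The retrieval step of the RAG architectures described above can be sketched with toy vectors and cosine similarity. This is illustrative only: a production system on AWS would use Bedrock embedding models and a real vector database, and every name below is hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical pre-computed document embeddings (a vector DB in practice).
doc_store = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.1, 0.8, 0.2],
    "api-guide":     [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the top-k document ids by cosine similarity to the query."""
    ranked = sorted(doc_store,
                    key=lambda d: cosine(query_vec, doc_store[d]),
                    reverse=True)
    return ranked[:k]

# A query embedding close to the refund-policy document:
top = retrieve([0.8, 0.2, 0.1])
# top -> ['refund-policy']
# The retrieved document text would then be injected into the LLM
# prompt (e.g. via Amazon Bedrock) to ground the generated answer.
```

The "augmentation" in RAG is simply that final step: the retrieved text is prepended to the model prompt so the generation is grounded in enterprise data rather than model memory alone.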
Required Skills :
- Strong experience with Amazon Bedrock and foundation model integrations
- Hands-on experience with LangChain, LlamaIndex, or Bedrock Agents
- Advanced Python (3.9+) development and API building
- Experience with AWS serverless architectures (Lambda, API Gateway, Step Functions)
- Experience implementing vector databases and semantic search systems
- Strong knowledge of data engineering and ETL pipeline development
- Hands-on experience with AWS Glue for data transformation and orchestration
- Experience using Amazon Athena for querying and analytics
- Experience building RAG-based AI applications
Engagement Details :
- Contract Duration : Minimum 3 to 6 Months
- Work Timing : 8:00 AM – 4:00 PM EST
- Start Timeline : Within 2 Weeks
- Open Positions : 5
Key Responsibilities
- Design, implement, and maintain highly available infrastructure on AWS.
- Automate infrastructure provisioning using Terraform (Infrastructure as Code).
- Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
- Build and manage CI/CD pipelines to enable safe and frequent deployments.
- Implement robust monitoring, alerting, and logging solutions.
- Perform incident response, root cause analysis (RCA), and postmortems.
- Improve system resilience through automation and self-healing mechanisms.
- Optimize cloud resource utilization and cost (FinOps awareness).
- Collaborate with development teams to improve application reliability.
- Manage containerized workloads using Docker and Kubernetes (EKS preferred).
- Implement security and compliance best practices across infrastructure.
- Maintain operational runbooks and documentation.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 7–8 years of experience in SRE, DevOps, or Production Engineering.
- Strong hands-on experience with AWS services.
- Proven experience with Terraform for infrastructure automation.
- Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
- Strong scripting skills (Python, Bash, or Shell).
- Experience with Linux system administration.
- Hands-on experience with monitoring and observability tools.
- Good understanding of networking and cloud security fundamentals.
- Experience with Git and branching strategies
Key Responsibilities
- Design, implement, and maintain highly available infrastructure on AWS.
- Automate infrastructure provisioning using Terraform (Infrastructure as Code).
- Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
- Build and manage CI/CD pipelines to enable safe and frequent deployments.
- Implement robust monitoring, alerting, and logging solutions.
- Perform incident response, root cause analysis (RCA), and postmortems.
- Improve system resilience through automation and self-healing mechanisms.
- Optimize cloud resource utilization and cost (FinOps awareness).
- Collaborate with development teams to improve application reliability.
- Manage containerized workloads using Docker and Kubernetes (EKS preferred).
- Implement security and compliance best practices across infrastructure.
- Maintain operational runbooks and documentation.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 7–8 years of experience in SRE, DevOps, or Production Engineering.
- Strong hands-on experience with AWS services.
- Proven experience with Terraform for infrastructure automation.
- Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
- Strong scripting skills (Python, Bash, or Shell).
- Experience with Linux system administration.
- Hands-on experience with monitoring and observability tools.
- Good understanding of networking and cloud security fundamentals.
- Experience with Git and branching strategies.
Key Responsibilities
- Design end-to-end architecture for scalable full-stack applications.
- Lead backend development using Python and Flask framework.
- Design and optimize MongoDB data models and queries.
- Define frontend architecture (React/Angular/Vue – as applicable).
- Establish coding standards, design patterns, and best practices.
- Build and optimize RESTful APIs and microservices.
- Implement authentication, authorization, and security best practices.
- Ensure high performance, scalability, and reliability of applications.
- Drive CI/CD implementation and DevOps best practices.
- Review code, mentor developers, and guide technical decisions.
- Collaborate with product, DevOps, and data teams.
- Troubleshoot complex production issues and perform root cause analysis.
- Lead cloud deployment strategies (Azure/AWS/GCP preferred).
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science or related field.
- 8+ years of software development experience.
- 4+ years of hands-on Python backend development.
- Strong expertise in Flask framework.
- Deep experience with MongoDB (schema design, indexing, aggregation).
- Experience designing RESTful and microservices architectures.
- Strong understanding of frontend technologies (JavaScript, HTML, CSS).
- Experience with Git and modern CI/CD pipelines.
- Solid knowledge of system design, scalability, and performance tuning.
- Experience with containerization (Docker preferred).
- Strong problem-solving and architectural thinking skills.
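The MongoDB aggregation expertise listed above can be illustrated with a small pipeline builder; the stages are plain Python dicts, and the collection and field names ("status", "user_id", "amount") are hypothetical examples:

```python
# Sketch: build a MongoDB aggregation pipeline as plain Python dicts.
# Field names ("status", "user_id", "amount") are hypothetical.

def top_spenders_pipeline(status: str, limit: int) -> list[dict]:
    """Pipeline: filter by status, sum amount per user, keep the top N."""
    return [
        {"$match": {"status": status}},
        {"$group": {"_id": "$user_id", "total": {"$sum": "$amount"}}},
        {"$sort": {"total": -1}},
        {"$limit": limit},
    ]

# With pymongo this would run as:
#   db.orders.aggregate(top_spenders_pipeline("paid", 10))
pipeline = top_spenders_pipeline("paid", 10)
```

Keeping pipelines behind builder functions like this makes them unit-testable without a database connection.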
What makes Techjays an inspiring place to work
At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.
We operate as part of the 1% of companies globally that can truly leverage AI the right way: not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.
Our strength lies in combining deep backend engineering with AI system design, building AI-native platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready.
Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.
We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications.
You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.
Years of Experience: 5 - 8 years
Location: Remote/ Coimbatore
Key Skills:
- Backend Development (Expert): Python, Django/Flask, RESTful APIs, WebSockets
- Cloud Technologies (Proficient): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins, GitLab CI, or GitHub Actions
- Databases (Advanced): PostgreSQL, MySQL, MongoDB
- AI/ML (Familiar): Basic understanding of Machine Learning concepts, experience with RAG, Vector Databases (Pinecone or ChromaDB or others)
- Tools (Expert): Git, Docker, Linux
Roles and Responsibilities:
- Design, develop, and implement highly scalable and secure backend services using Python and Django.
- Architect and develop complex features for our AI-powered platforms
- Write clean, maintainable, and well-tested code, adhering to best practices and coding standards.
- Collaborate with cross-functional teams, including front-end developers, data scientists, and product managers, to deliver high-quality software.
- Mentor junior developers and provide technical guidance.
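The RAG pipelines mentioned in the skills list reduce, at their core, to retrieval by vector similarity. A stdlib-only sketch with toy three-dimensional vectors standing in for real embeddings (a production system would use an embedding model and a vector database such as Pinecone or ChromaDB):

```python
import math

# Sketch: the retrieval step of a RAG pipeline. The 3-d vectors are toy
# stand-ins for real embeddings; the documents are made up.

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: list[float], docs: list[dict], k: int = 1) -> list[str]:
    """Return the k document texts whose vectors are most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

docs = [
    {"text": "refund policy", "vec": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "vec": [0.0, 0.2, 0.9]},
]
best = retrieve([1.0, 0.0, 0.0], docs)  # query vector closest to "refund policy"
```

The retrieved texts would then be stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.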
What We’re Looking For Beyond Skills
- Builder mindset — you think in systems, not just tickets
- Ownership — you take features from idea to production
- Structured thinking in ambiguous environments
- Clear communication and collaborative approach
- Ability to work in a fast-paced, evolving startup environment
What We Offer
- Competitive compensation
- Flexible work environment (Remote / Coimbatore office)
- Paid holidays & flexible time off
- Medical insurance (Self & Family up to ₹4 Lakhs per person)
- Opportunity to work on production-grade AI systems
- Exposure to global clients and high-impact projects
- A culture that values clarity, integrity, and continuous growth
If you want to build AI-native systems that are used in the real world, not just prototypes, Techjays is the place to do it.
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using Azure DevOps
- Manage cloud infrastructure on Microsoft Azure including VMs, App Services, AKS, Networking, and Storage
- Implement Infrastructure as Code (IaC) using Terraform, ARM Templates, or Bicep
- Build and manage containerized environments using Docker and Kubernetes
- Deploy and manage Azure Kubernetes Service (AKS) clusters
- Automate configuration management and deployments
- Implement monitoring and logging solutions using Azure Monitor, Log Analytics, and Application Insights
- Integrate security best practices (DevSecOps) within CI/CD pipelines
- Collaborate with development teams to improve build, release, and deployment processes
- Troubleshoot production issues and optimize system performance
- Ensure high availability, scalability, and disaster recovery strategies
Required Skills & Qualifications
- 7+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with Microsoft Azure
- Expertise in CI/CD implementation using Azure DevOps
- Experience with scripting languages such as PowerShell, Bash, or Python
- Proficiency in Infrastructure as Code (Terraform, ARM, Bicep)
- Experience with container orchestration (Kubernetes/AKS)
- Knowledge of Git-based version control systems
- Experience with configuration management tools
- Strong understanding of networking, security, and cloud architecture
- Experience working in Agile/Scrum environments
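The safe-deployment goal behind the CI/CD bullets above is often enforced with a canary gate: the new release is promoted only if its error rate stays close to the baseline. A minimal sketch; the 1% tolerance is an arbitrary assumption, not a standard:

```python
# Sketch: decide whether to promote a canary release by comparing error
# rates. The default 0.01 (1 percentage point) tolerance is illustrative.

def should_promote(canary_errors: int, canary_total: int,
                   baseline_errors: int, baseline_total: int,
                   tolerance: float = 0.01) -> bool:
    """Promote only if the canary error rate is within tolerance of baseline."""
    canary_rate = canary_errors / canary_total
    baseline_rate = baseline_errors / baseline_total
    return canary_rate <= baseline_rate + tolerance

# 0.3% canary errors vs 0.2% baseline: within tolerance, promote.
ok = should_promote(canary_errors=3, canary_total=1_000,
                    baseline_errors=20, baseline_total=10_000)
```

In a pipeline, a check like this would run as a gated stage between the canary rollout and the full deployment.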
Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.
Role: Full Stack Developer
Industry Type: Software Product
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
About Company:
Snapsight is an AI-powered platform that delivers real-time event summaries in 75+ languages. We work with conferences worldwide and won the 2024 Skift Award for Most Innovative Event Tech. We're an early-stage startup scaling fast.
Join us if you want to become part of a vibrant and fast-moving product company that's on a mission to connect people around the world through events.
Location: Remote/Work From Home
What you'll be doing:
- Writing reusable, testable, and efficient code in Node.js for back-end services.
- Ensuring optimal, high-performance code for data flowing to and from the database.
- Collaborating with front-end developers on the integrations.
- Implementing effective security protocols, data protection measures, and storage solutions.
- Preparing technical specification documents for the developed features.
- Providing technical recommendations and suggesting improvements to the product.
- Writing unit test cases for APIs.
- Documenting code standards and practicing them.
- Staying updated on the advancements in the field of Node.js development.
- Should be open to new challenges and be comfortable in taking up new exploration tasks.
Skills:
- 3-5 years of strong proficiency in Node.js and its core principles.
- Experience in test-driven development.
- Experience with NoSQL databases such as MongoDB (required)
- Experience with MySQL
- RESTful/GraphQL API design and development
- Docker and AWS experience is a plus
- Extensive knowledge of JavaScript, PHP, web stacks, libraries, and frameworks.
- Strong interpersonal, communication, and collaboration skills.
- Exceptional analytical and problem-solving aptitude
- Experience with a version control system like Git
- Knowledge of the Software Development Life Cycle, secure development best practices and standards, source control, code review, build and deployment, and continuous integration
We are looking for an experienced DevOps Architect with strong expertise in telecom environments (OSS/BSS, 4G/5G core, network systems). The candidate will design and implement scalable, highly available, and automated DevOps solutions to support telecom-grade applications and infrastructure.
Responsibilities:
- Design and implement DevOps architecture for telecom applications (OSS/BSS, mediation systems, billing platforms)
- Architect CI/CD pipelines using Jenkins, GitLab, or Azure DevOps
- Manage cloud infrastructure on Amazon Web Services, Microsoft Azure, or hybrid telecom data centers
- Implement containerization using Docker and orchestration with Kubernetes
- Design Infrastructure as Code (IaC) using Terraform
- Ensure high availability, disaster recovery, and zero-downtime deployment strategies
- Automate deployments for 4G/5G core network functions (CNFs/VNFs)
- Implement monitoring solutions using Prometheus, Grafana, and ELK Stack
- Work closely with network engineering and telecom operations teams
- Ensure compliance with telecom-grade security standards
🚀 We’re Hiring: Senior Full Stack Engineer (On-Call Support) 🚀
Work Mode: Remote
Shift Timings: PST
Working Hours: 9 hours (including a 1-hour break)
Are you a seasoned Full Stack Engineer who enjoys solving real-world production challenges and being the go-to expert when it matters most? This role is for you! 💡
Role Overview
We’re looking for three senior engineers to join our On-Call Support Team, ensuring platform stability and rapid issue resolution across backend, frontend, and infrastructure.
Tech Stack
Node.js (NestJS)
React.js (Next.js)
React Native
PostgreSQL
AWS (Hybrid with On-Premise)
Linux
Docker Swarm
Portainer
What You’ll Do
Provide on-call support for production systems
Troubleshoot and resolve high-priority issues
Collaborate with senior engineers to maintain system reliability
Work across backend, frontend, and infrastructure layers
Ensure uptime, performance, and scalability of applications
What We’re Looking For
Strong experience with modern JavaScript frameworks
Hands-on knowledge of cloud + on-prem environments
Solid understanding of containerized deployments
Excellent problem-solving and debugging skills
Comfortable working in on-call support rotations
Job Description
We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.
What will you need to be successful in this role?
Core Data Science Skills
• Strong foundation in statistics, probability, and mathematical modeling
• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)
• Strong SQL skills for data extraction, transformation, and complex analytical queries
• Experience with exploratory data analysis (EDA) and statistical hypothesis testing
• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)
• Strong understanding of feature engineering and data preprocessing techniques
• Experience with A/B testing, experimental design, and causal inference
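The A/B testing and hypothesis-testing skills above rest on the standard two-proportion z-test. A stdlib-only sketch (the conversion counts are made up for the example):

```python
import math

# Sketch: two-sided two-proportion z-test for an A/B experiment.
# Conversion counts below are illustrative.

def ab_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic for the difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value(z: float) -> float:
    """Two-sided p-value from the standard normal CDF (via math.erf)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Variant B converts at 13% vs A's 10% on 2,000 users each.
z = ab_z_score(conv_a=200, n_a=2_000, conv_b=260, n_b=2_000)
```

Here z is roughly 2.97, giving a p-value well under 0.05, so the lift would be judged significant at the usual threshold.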
Machine Learning & Analytics
• Strong experience building and deploying ML models (regression, classification, clustering)
• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)
• Understanding of time series analysis and forecasting techniques
• Experience with model evaluation metrics and cross-validation strategies
• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)
• Understanding of bias-variance tradeoff and model interpretability
• Experience with hyperparameter tuning and model optimization
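The cross-validation strategies mentioned above start from index splitting. A stdlib-only sketch, equivalent in spirit to scikit-learn's `KFold` (which adds options like shuffling):

```python
# Sketch: generate k-fold cross-validation index splits, stdlib only.

def kfold_indices(n_samples: int, k: int):
    """Yield (train_idx, test_idx) pairs; every sample is a test point once."""
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

splits = list(kfold_indices(10, 3))  # folds of size 4, 3, 3
```

Each split trains on k-1 folds and evaluates on the held-out fold, and the k scores are averaged to estimate generalization.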
GenAI & Advanced Analytics
• Working knowledge of LLMs and their application to business problems
• Experience with prompt engineering for analytical tasks
• Understanding of embeddings and semantic similarity for analytics
• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)
• Experience integrating AI/ML models into analytical workflows
Data Platforms & Tools
• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)
• Proficiency in Jupyter notebooks and collaborative development environments
• Familiarity with version control (Git) and collaborative workflows
• Experience working with large datasets and distributed computing (Spark/PySpark)
• Understanding of data warehousing concepts and dimensional modeling
• Experience with cloud platforms (AWS, Azure, or GCP)
Business Acumen & Communication
• Strong ability to translate business problems into analytical frameworks
• Experience presenting complex analytical findings to non-technical stakeholders
• Ability to create compelling data stories and visualizations
• Track record of driving business decisions through data-driven insights
• Experience working with cross-functional teams (Product, Engineering, Business)
• Strong documentation skills for analytical methodologies and findings
Good to have
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)
• Knowledge of reinforcement learning and optimization techniques
• Familiarity with graph analytics and network analysis
• Experience with MLOps and model deployment pipelines
• Understanding of model monitoring and performance tracking in production
• Knowledge of AutoML tools and automated feature engineering
• Experience with real-time analytics and streaming data
• Familiarity with causal ML and uplift modeling
• Publications or contributions to data science community
• Kaggle competitions or open-source contributions
• Experience in specific domains (finance, healthcare, e-commerce)

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
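A toy version of the anomaly-detection responsibility above: flag transactions whose amount deviates strongly from the mean. The 3-sigma threshold is a common but arbitrary choice, and a real fraud model would use far richer features than amount alone:

```python
import statistics

# Sketch: flag anomalous transaction amounts with a simple z-score rule.
# The amounts and the 3-sigma threshold are illustrative; production fraud
# models would use many behavioral features, not amount alone.

def flag_anomalies(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Indices of amounts more than `threshold` std devs from the mean."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Fifteen ordinary transactions and one large outlier.
flags = flag_anomalies([20.0] * 15 + [500.0])
```

A deployed version of this logic would sit behind an inference endpoint (e.g., on SageMaker or Lambda) and score transactions as they arrive.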
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT and Gemini)
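The pytest requirement above, in miniature: a small function plus the kind of tests pytest would collect from a `test_*.py` file. The function and file names are hypothetical, and plain `assert`s are used so the example also runs without pytest installed:

```python
# Sketch: a tiny function and pytest-style tests for it. Saved as
# test_pricing.py, these would be collected and run by `pytest`.
# All names here are hypothetical examples.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by `percent`, rejecting out-of-range percents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(200.0, 25) == 150.0

def test_apply_discount_rejects_bad_percent():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

With pytest, running `pytest test_pricing.py` would discover and execute both test functions automatically.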
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
Job Title : Full Stack Developer
Experience : 5+ Years (Mandatory)
Mandatory Tech Stack : Node.js (NestJS), React.js (Next.js), React Native, PostgreSQL, AWS (Hybrid with On-Premise infrastructure), Docker Swarm, and Portainer
Location : Remote
Working Days : Monday to Saturday
Shift : Night Shift
Job Summary :
We are scaling rapidly and looking for a high-impact Full Stack Developer who thrives on solving complex problems across Web, Mobile, and Cloud Infrastructure.
The ideal candidate is hands-on, adaptable, and comfortable working in distributed systems and hybrid cloud environments, delivering end-to-end solutions with ownership and accountability.
Mandatory Technical Skills :
- Backend : Node.js with NestJS
- Frontend (Web) : React.js with Next.js
- Mobile : React Native
- Database : PostgreSQL
- Cloud : AWS (Hybrid with On-Premise infrastructure)
- OS : Linux
- Containers & Orchestration : Docker Swarm
- Container Management : Portainer
🎯 Key Responsibilities :
- Design, develop, and maintain scalable full-stack applications (Web + Mobile)
- Build and manage microservices and RESTful APIs
- Work in distributed and hybrid cloud environments
- Develop cloud-ready solutions and manage deployments
- Handle containerized applications using Docker Swarm & Portainer
- Collaborate closely with Product, DevOps, and Engineering teams
- Ensure application performance, security, and reliability
- Participate in code reviews and follow best engineering practices
- Troubleshoot, debug, and optimize applications across the stack
✅ Required Qualifications :
- Strong hands-on experience with Node.js (NestJS)
- Solid expertise in React.js (Next.js) and React Native
- Experience with PostgreSQL and backend data modeling
- Working knowledge of AWS services in hybrid environments
- Good understanding of Linux systems
- Hands-on experience with Docker Swarm & Portainer
- Strong understanding of microservices architecture
- Ability to manage end-to-end full-stack delivery
⭐ Good-to-Have Skills :
- Experience with CI/CD pipelines
- Exposure to monitoring & logging tools
- Knowledge of event-driven systems
- Experience working in high-availability systems
Experience: 8+ Years
Work Mode: Remote
Engagement: Full-time / Freelancer
Dual Project: Acceptable
Job Description:
We are looking for an experienced AWS Cloud Engineer II with strong hands-on system engineering expertise in AWS production environments.
Key Responsibilities and Skills:
Hands-on experience in AWS system engineering with a strong focus on Amazon RDS, including performance tuning, backups, restores, Multi-AZ configurations, and read replicas
Strong experience in application troubleshooting across AWS services including EC2, ALB, VPC, and IAM
Expertise in log analysis and monitoring using AWS CloudWatch
Ability to troubleshoot connectivity issues, latency problems, and service dependencies
Experience in end-to-end root cause analysis and production issue resolution
Strong understanding of AWS networking and security best practices
Ability to work independently in a remote setup and handle production-level issues
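The log-analysis skill above (e.g., over lines exported from CloudWatch Logs) often reduces to extracting a field and computing latency percentiles. A sketch; the `latency_ms=<n>` log format is a made-up example, not a CloudWatch standard:

```python
import math
import re

# Sketch: compute p95 latency from exported log lines.
# The "latency_ms=<n>" format below is a hypothetical example.

LAT = re.compile(r"latency_ms=(\d+)")

def p95_latency(lines: list[str]) -> float:
    """Nearest-rank 95th percentile of latency_ms values found in the lines."""
    values = sorted(int(m.group(1)) for m in map(LAT.search, lines) if m)
    if not values:
        raise ValueError("no latency samples found")
    idx = max(0, math.ceil(0.95 * len(values)) - 1)  # nearest-rank method
    return float(values[idx])

lines = [f"GET /api/v1/orders 200 latency_ms={v}" for v in range(1, 101)]
p95 = p95_latency(lines)
```

On AWS itself, the same question is usually answered with a CloudWatch Logs Insights query or a percentile statistic on a metric, but the underlying computation is the same.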
Preferred Qualifications:
Experience working in high-availability and production-critical environments
Strong analytical and problem-solving skills
Good communication skills for collaborating with cross-functional teams
Hands-on experience with Microsoft Azure core services including Virtual Machines, storage, networking, and identity management
Strong expertise in Remote Desktop Services (RDS) / Azure Virtual Desktop (AVD) deployment, configuration, and performance tuning
Solid systems engineering background with Windows Server administration, Active Directory, GPO, DNS, and basic Linux management
Proficiency in automation and scripting, primarily using PowerShell, with working knowledge of Azure CLI and Infrastructure as Code (ARM/Bicep/Terraform)
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company) : Product companies (B2B SaaS preferred)
Hands-on experience implementing and managing DLP solutions in AWS and Azure
Strong expertise in data classification, labeling, and protection (e.g., Microsoft Purview, AIP)
Experience designing and enforcing DLP policies across cloud storage, email, endpoints, and SaaS apps
Proficient in monitoring, investigating, and remediating data leakage incidents

US-based large biotech company with worldwide operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer -- AWS [LONG-TERM CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.
Key Responsibilities
Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
Stay current with emerging cloud technologies, trends, and best practices.
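Governance and compliance work like the items above often comes down to simple rules evaluated over resource metadata. A sketch of a tag-compliance audit; the required tag set and the resource records are illustrative assumptions:

```python
# Sketch: audit cloud resources for required cost-allocation tags.
# REQUIRED_TAGS and the resource dicts are illustrative assumptions; in
# practice the records would come from a cloud inventory API.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resources: list[dict]) -> dict[str, set]:
    """Map resource id -> set of required tags it lacks (compliant omitted)."""
    report = {}
    for res in resources:
        gap = REQUIRED_TAGS - set(res.get("tags", {}))
        if gap:
            report[res["id"]] = gap
    return report

report = missing_tags([
    {"id": "i-0abc", "tags": {"owner": "dba", "cost-center": "42",
                              "environment": "prod"}},
    {"id": "i-0def", "tags": {"owner": "web"}},
])
```

A check like this typically runs on a schedule and feeds FinOps and compliance dashboards, which ties into the cost-optimization responsibility above.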
Required Qualifications
- Bachelors or Masters degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 2-4 years
- Working Days: 5 days/week
- Job Location: Remote working
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders and senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we try to remain tech-agnostic and leave it open to discussion which tools will solve the problem in the most robust and quickest way.
- That said, our bright team of engineers has already assembled a formidable arsenal of tools that helps us fortify our defense and always play on the offensive. Take a look at the tech stack we already use.
Senior Penetration Tester
Experience: 2–5 years
Industry: EdTech / SaaS
Role Summary:
We are looking for a Penetration Tester to identify and remediate security vulnerabilities in our EdTech platforms including LMS, ERP, web apps, mobile apps, and APIs.
Key Responsibilities:
Perform VAPT on web, mobile, API, and cloud systems
Identify vulnerabilities using OWASP standards
Prepare security reports and remediation guidance
Re-test fixes with development teams
Skills Required:
Web & API security (OWASP Top 10)
Tools: Burp Suite, Nmap, Nessus, Metasploit
Basic scripting (Python/Bash)
Understanding of cloud security basics
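The basic scripting skill above, in miniature: a helper that checks whether a probe marker comes back unescaped in a response body, which is a first (necessary but not sufficient) signal when testing for reflected XSS. The marker value and response bodies are made up:

```python
import html

# Sketch: check whether a probe marker is reflected unescaped in a response
# body - a first signal when probing for reflected XSS, not a verdict.
# MARKER and the response bodies are illustrative.

MARKER = '<x1337"probe>'

def reflected_unescaped(body: str, marker: str = MARKER) -> bool:
    """True only if the raw marker string appears verbatim in the body."""
    return marker in body

# An app that escapes output vs one that echoes input verbatim.
safe_body = f"<p>You searched for {html.escape(MARKER)}</p>"
vuln_body = f"<p>You searched for {MARKER}</p>"
```

A tool like Burp Suite automates exactly this kind of probe-and-inspect loop across many injection points and payload encodings.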
Preferred:
EdTech or SaaS experience
Certifications: CEH / OSCP
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines.
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
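To make the "tool calling" requirement concrete, here is a framework-free Python sketch of the dispatch loop that agentic systems implement. The tool names and the fake model output are illustrative, not any specific framework's API; LangChain, LangGraph, and similar libraries wrap this same pattern.

```python
import json
from typing import Callable

# Registry of tools the agent may invoke; names are illustrative.
TOOLS: dict[str, Callable[..., str]] = {
    "add": lambda a, b: str(a + b),
    "upper": lambda text: text.upper(),
}

def run_tool_call(model_output: str) -> str:
    """Parse a model's JSON tool request and dispatch to the matching tool.

    In a real agent loop, the tool's result would be fed back into the
    conversation so the model can continue reasoning with it.
    """
    request = json.loads(model_output)
    tool = TOOLS[request["tool"]]
    return tool(**request["arguments"])

if __name__ == "__main__":
    # Stand-in for an actual LLM response requesting a tool invocation.
    fake_llm_output = '{"tool": "add", "arguments": {"a": 2, "b": 3}}'
    print(run_tool_call(fake_llm_output))  # prints "5"
```

Multi-agent workflows layer routing and state on top of this core: each agent is a loop like the one above, with an orchestrator deciding which agent handles the next step.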
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
Technical Lead – Golang | AWS | Database Design
Work Model: Hybrid (mandatory work from office for the first month in Chennai, followed by remote work)
Location: Chennai, India
Experience: 8–12 Years
Budget: ₹1L – ₹1.2L Monthly
Role Summary
We are seeking an experienced Technical Lead with strong expertise in Golang, AWS, and Database Design to spearhead backend development initiatives, drive architectural decisions, and mentor engineering teams. The ideal candidate will combine hands-on technical skills with leadership capabilities to deliver scalable, secure, and high-performance solutions.
Key Responsibilities
Backend Development Leadership:
Lead the design and development of backend systems using Golang and microservices architecture.
Ensure scalability, reliability, and maintainability of backend services.
Database Design & Optimization:
Own database schema modeling, normalization, and performance tuning.
Work with MySQL, PostgreSQL, and NoSQL databases to design efficient data storage solutions.
Implement strategies for query optimization and high availability.
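As a small, self-contained illustration of the query-optimization work described above, the sketch below uses Python's bundled sqlite3 (purely for portability; the role itself targets MySQL/PostgreSQL, where the same principle applies via `EXPLAIN`). The table and index names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def query_plan(sql: str) -> str:
    """Return SQLite's plan for a query -- SCAN indicates a full table scan."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r) for r in rows)

lookup = "SELECT total FROM orders WHERE customer_id = 42"
print("before index:", query_plan(lookup))   # full scan of orders

conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
print("after index: ", query_plan(lookup))   # search via idx_orders_customer
```

The before/after plans show the optimizer switching from a full table scan to an index search, which is the core of the "query optimization" responsibility at scale.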
Cloud Infrastructure Management:
Architect and manage scalable solutions on AWS cloud services including EC2, ECS/EKS, Lambda, RDS, DynamoDB, and S3.
Ensure cost optimization, security compliance, and disaster recovery planning.
Technical Governance & Mentorship:
Review code, enforce best practices, and maintain coding standards.
Mentor and guide developers, fostering a culture of continuous learning and innovation.
Collaboration & Delivery:
Partner with product managers, architects, and stakeholders to align technical solutions with business goals.
Drive end-to-end delivery of projects with a focus on quality and timelines.
Production Support & Optimization:
Troubleshoot and resolve production issues.
Continuously monitor system performance and implement improvements.
Required Skills & Qualifications
Technical Expertise:
Strong hands-on experience with Golang in production-grade applications.
Solid knowledge of Database Design (MySQL, PostgreSQL, NoSQL).
Proficiency in AWS services (EC2, ECS/EKS, Lambda, RDS, DynamoDB, S3).
Strong understanding of microservices and distributed systems.
DevOps & Tools:
Experience with Docker, Kubernetes, and container orchestration.
Familiarity with CI/CD pipelines using tools like Jenkins or GitHub Actions, and build tools such as Maven.
Soft Skills:
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities.
Ability to mentor and inspire engineering teams.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
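The "Python scripts for automation, monitoring, and operational tasks" bullet above might look like this minimal health-check sketch, using only the standard library. The endpoint URL is a placeholder; in practice it would be a service behind the ALB, and the result would feed an alerting system rather than a print statement.

```python
import urllib.request
import urllib.error

def check_endpoint(url: str, timeout: float = 5.0) -> tuple[bool, str]:
    """Probe an HTTP endpoint and report (healthy, detail) for alerting."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, f"HTTP {resp.status}"
    except urllib.error.URLError as exc:
        return False, f"unreachable: {exc.reason}"

if __name__ == "__main__":
    # Placeholder URL -- substitute the real service's health endpoint.
    healthy, detail = check_endpoint("http://localhost:8080/healthz")
    print("OK" if healthy else "ALERT", detail)
```

In production this kind of check is usually delegated to Prometheus exporters or load-balancer health checks, but small scripts like this remain useful for ad-hoc verification during incidents.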
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.