50+ Remote AWS (Amazon Web Services) Jobs in India
SOC Investigation Specialist
We are hiring a SOC Investigation Specialist on behalf of high-growth technology and enterprise partners building next-generation SOC automation and AI-driven investigation systems. This role is ideal for experienced SOC analysts who can apply real-world investigative judgment to review, validate, and construct high-quality security investigations across SIEM, endpoint, cloud, and identity environments.
Responsibilities
- Review, monitor, and evaluate SOC alerts and investigation outputs based on predefined scenarios and criteria.
- Distinguish true positives from false positives by validating investigative evidence and alert context.
- Perform end-to-end security investigations when required, including log analysis, entity pivoting, timeline reconstruction, and evidence correlation.
- Assess the correctness, completeness, and quality of SOC investigations produced by automated or human workflows.
- Apply consistent investigative judgment while recognizing that multiple valid investigation paths may exist for the same alert.
- Make clear binary determinations (e.g., ACCEPT / PASS) while also producing detailed ground-truth investigations when required.
- Use Splunk extensively to pivot across logs, entities, and timelines, including reading and reasoning about SPL queries.
- Maintain clear and accurate documentation of investigative steps, assumptions, evidence, and conclusions.
- Collaborate with program leads and other expert annotators to uphold high-quality investigation and annotation standards.
- Mentor or support other analysts where applicable, particularly in long-term or lead annotator roles.
Requirements
- 3+ years of hands-on experience as a SOC analyst in a production SOC environment (Tier 2 or above strongly preferred).
- Strong understanding of alert triage, incident investigation workflows, and evidence-based decision-making under time constraints.
- Mandatory hands-on experience with Splunk, including:
  - Conducting investigations using Splunk
  - Reading, understanding, and reasoning about SPL queries
  - Pivoting between logs, entities, and timelines
- Proven ability to evaluate SOC investigations and determine whether conclusions are valid, incomplete, or incorrect.
- Strong investigative judgment and comfort making decisive evaluations.
- Fluent English (written and spoken) with strong documentation and communication skills.
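The evidence-based, binary determinations described above (the ACCEPT/PASS style calls) can be illustrated with a toy scoring function. This is only a sketch: the field names, weights, and ACCEPT/REJECT labels are invented for illustration and do not correspond to any real SIEM schema.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    """Hypothetical alert record; field names are illustrative only."""
    rule: str
    evidence: dict = field(default_factory=dict)

def triage(alert: Alert) -> str:
    """Toy evidence-weighted determination: true positive (ACCEPT) vs REJECT.

    Real triage weighs far richer context (entity history, asset criticality,
    timeline correlation); the point here is only the shape of the decision.
    """
    score = 0
    if alert.evidence.get("known_bad_hash"):
        score += 3  # strong corroborating evidence
    if alert.evidence.get("off_hours_login"):
        score += 1  # weak signal on its own
    if alert.evidence.get("asset_in_allowlist"):
        score -= 2  # mitigating context lowers confidence
    return "ACCEPT" if score >= 2 else "REJECT"
```

In practice the interesting work is not the arithmetic but documenting *which* evidence drove the call, so a reviewer can reproduce the determination.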
Nice to Have
- Experience with Endpoint Detection & Response (EDR) tools such as CrowdStrike Falcon, Microsoft Defender for Endpoint, or SentinelOne.
- Experience analyzing cloud security logs and signals:
  - AWS (CloudTrail, GuardDuty)
  - Azure (Activity Log, Defender for Cloud)
  - GCP (Cloud Audit Logs)
- Familiarity with Identity & Access Management platforms such as Okta Identity Cloud or Microsoft Entra ID (Azure AD).
- Experience with email security tools like Proofpoint or Mimecast.
- SOC leadership or mentoring experience.
- Basic scripting experience (Python or similar).
- Security certifications (optional): GCIA, GCIH, GCED, Splunk certifications, Security+, CCNA, or cloud security certifications.
About the Role
As an SDET II, you'll own significant parts of our test infrastructure and drive quality strategy across the engineering team. You'll design testing approaches for complex features, mentor junior engineers, and make architectural decisions that impact how we approach automation at scale.
What You'll Do
- Architect and implement test frameworks and infrastructure
- Design testing strategies for new features and platform capabilities
- Mentor SDET I engineers and conduct technical code reviews
- Refactor and optimize existing test suites for maintainability and performance
- Make architectural decisions about test design patterns and abstractions
- Build and manage AWS-based test environments and infrastructure
- Integrate testing earlier in the development lifecycle through cross-team collaboration
- Optimize CI/CD pipeline performance and test execution times
- Develop custom tooling and reporting to surface quality insights
Technical Requirements
Core Skills:
- Advanced TypeScript expertise: generics, decorators, advanced typing patterns, type inference
- Deep understanding of asynchronous programming, concurrency, and race condition prevention
- Strong software design principles with domain-driven design (DDD) approach
- Extensive experience with Playwright including deep knowledge of fixtures architecture
- Expert-level Git, GitHub, and distributed version control workflows
- Layered architecture design: understanding PCOM (Page Component Object Model) and POM patterns
- Object-oriented design in test frameworks—building scalable abstractions over linear scripts
- API testing and orchestration (REST/GraphQL integration with UI workflows)
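The "scalable abstractions over linear scripts" point above is the page-object idea: selectors and flows live in one class, and tests compose flows rather than repeating raw driver calls. A minimal sketch, in Python for brevity (the same shape carries over to TypeScript and Playwright fixtures), with a fake driver standing in for a real browser so it is self-contained:

```python
class FakeDriver:
    """Stand-in for a browser driver so this sketch runs anywhere."""
    def __init__(self):
        self.actions = []

    def fill(self, selector, value):
        self.actions.append(("fill", selector, value))

    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    """Page object: selectors and the login flow live here, not in tests."""
    USER, PASS, SUBMIT = "#user", "#pass", "#submit"  # hypothetical selectors

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USER, user)
        self.driver.fill(self.PASS, password)
        self.driver.click(self.SUBMIT)

d = FakeDriver()
LoginPage(d).login("alice", "s3cret")
```

When a selector changes, only the page object is edited; every test that calls `login()` is untouched. That is the maintainability payoff the POM/PCOM bullets are pointing at.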
Infrastructure & DevOps:
- AWS: EC2 configuration, CloudWatch log analysis, debugging cloud environments
- Terraform for infrastructure as code (plus)
- Docker: containerization, docker-compose, image management
- CI/CD debugging: analyzing pipeline failures, optimizing execution
- Advanced reporting: Allure configuration, Playwright HTML reports, custom reporting solutions
Additional Experience:
- Test infrastructure development and framework architecture
- Design patterns implementation (Factory, Builder, Facade, Composite)
- Performance optimization at scale
- npm ecosystem and package management
Key Responsibilities
AI Architecture & Solution Design
- Design end-to-end AI solution architectures, including:
  - Generative AI and LLM-based systems
  - Retrieval-Augmented Generation (RAG) pipelines
  - Agentic and multi-agent workflows
- Define reference architectures and best practices for AI-enabled features within enterprise products.
- Ensure AI solutions integrate seamlessly with existing applications, data, and cloud architectures.
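A RAG pipeline, as named above, reduces to embed, retrieve, augment: embed the query, rank documents by similarity, and prepend the winners to the LLM prompt. A deliberately toy sketch, with bag-of-words counts standing in for a learned embedding model and two made-up documents:

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word counts. Real systems use learned embedding models."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [  # stand-in corpus; a real system would query a vector store
    "reset your password from the account settings page",
    "invoices are emailed on the first of each month",
]

def retrieve(query, k=1):
    """Rank documents by similarity to the query; return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

context = retrieve("how do I reset my password")
# `context` would then be prepended to the LLM prompt as grounding material
```

Everything past this skeleton (chunking, hybrid search, reranking, citation) is where real RAG architecture work lives, but the retrieval-then-augment loop stays the same.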
AI Integration & MCP Servers
- Design and implement Model Context Protocol (MCP) servers to securely expose tools, APIs, and data to AI agents.
- Define standards for tool interfaces, access control, auditing, and safety guardrails.
- Enable product teams to onboard AI tools and capabilities using reusable, scalable integration patterns.
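The allow-list, access-control, and auditing guardrails described above can be illustrated with a generic tool registry. To be clear: this is a hand-rolled sketch of the *pattern*, not the actual Model Context Protocol wire format, and all names are invented.

```python
class ToolRegistry:
    """Illustrative tool-exposure pattern: allow-list plus audit trail.

    A real MCP server speaks a defined protocol; this sketch only shows the
    guardrail shape: agents may call registered tools only if allow-listed,
    and every attempt is recorded.
    """
    def __init__(self, allowed):
        self.allowed = set(allowed)
        self.tools = {}
        self.audit = []  # (agent, tool, outcome) tuples

    def register(self, name, fn):
        self.tools[name] = fn

    def call(self, agent, name, *args):
        if name not in self.allowed:
            self.audit.append((agent, name, "denied"))
            raise PermissionError(f"{name} is not allow-listed for agents")
        self.audit.append((agent, name, "ok"))
        return self.tools[name](*args)

reg = ToolRegistry(allowed=["lookup_order"])
reg.register("lookup_order", lambda oid: {"id": oid, "status": "shipped"})
reg.register("delete_order", lambda oid: None)  # registered, but not allow-listed
```

The design point: registration and authorization are separate decisions, so a destructive tool can exist in the codebase without being reachable by an agent.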
Agentic AI & Workflow Enablement
- Architect AI-driven workflows that support collaboration between humans and AI agents.
- Design AI-to-AI (A2A) and AI-to-system interaction patterns.
- Ensure agent behaviors are deterministic, explainable, and aligned with enterprise requirements.
Hands-On Development & Prototyping
- Build proofs-of-concept and production-ready implementations using Python and/or TypeScript.
- Rapidly validate ideas from ideation to deployment.
- Establish reusable frameworks, libraries, and CI/CD pipelines for AI development.
AI Governance, Quality & Safety
- Implement guardrails to minimize hallucinations, unsafe actions, and data leakage.
- Define evaluation and monitoring strategies for AI systems, including prompt regression and RAG accuracy checks.
- Ensure AI solutions comply with enterprise security, privacy, and governance standards.
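A prompt-regression check of the kind mentioned above can start as something very simple: per test case, assert that a model answer mentions required facts and avoids forbidden ones, then track the pass rate across releases. The suite entries below are invented examples, and real evaluation typically adds semantic scoring on top of substring checks.

```python
def passes(answer, required, forbidden=()):
    """Toy check: answer must contain every required phrase, no forbidden one."""
    a = answer.lower()
    return all(r in a for r in required) and not any(f in a for f in forbidden)

# Hypothetical regression suite: (model answer, required phrases, forbidden phrases)
suite = [
    ("Refunds are processed within 5 business days.", ["5 business days"], ["guarantee"]),
    ("I guarantee an instant refund.", ["business days"], ["guarantee"]),
]

results = [passes(ans, req, forb) for ans, req, forb in suite]
pass_rate = sum(results) / len(results)
```

Running this suite on every prompt or model change turns "did the new prompt break anything?" from a hunch into a number that can gate a deploy.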
Developer Enablement & Collaboration
- Partner with Product, Engineering, QE, Performance, and Security teams to deliver AI capabilities.
- Mentor teams on AI design patterns, tooling, and best practices.
- Contribute to internal AI communities through demos, documentation, and knowledge sharing.
Qualifications
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
- Demonstrated expertise in cloud‑native system design, distributed architectures, and enterprise‑scale integrations.
- Proven ability to architect and implement AI-enabled systems, including integrating Large Language Models (LLMs) into production-grade software.
- Strong ownership of architectural decisions, technical direction, and solution delivery across complex, cross-functional initiatives.
- Hands-on experience applying security, observability, and automation best practices within enterprise environments.
- 6–10 years of experience in software architecture and distributed systems.
- 5+ years of experience building Generative AI or LLM-based solutions.
- Practical experience designing and implementing:
  - Retrieval-Augmented Generation (RAG) architectures
  - Agentic AI systems
  - Tool-calling frameworks and AI integration layers
- Proficiency in Python and/or .NET/TypeScript/Node.js.
- Experience working with major cloud platforms such as Azure, AWS, or Google Cloud Platform (GCP).
Preferred Qualifications
- Experience with OpenAI, Azure OpenAI, Anthropic, or similar LLM platforms.
- Familiarity with Model Context Protocol (MCP) or equivalent AI tool-integration frameworks.
- Experience applying AI engineering practices beyond prototyping, including evaluation, reliability, and scalability considerations.
- Ability to translate ambiguous business problems into clear technical architecture and execution plans.
- History of influencing technical standards and mentoring senior engineers or architects.
- Experience with vector databases, embeddings, and retrieval optimization.
- Experience building AI-enabled developer tooling and CI/CD pipelines.
- Prior experience in enterprise SaaS environments.
Overview:
We're looking for a Full Stack Developer with strong backend expertise who can build, manage, and scale AI-driven products end to end. You'll play a critical role in designing scalable architectures, optimizing performance and cost, and building robust AI and agentic systems.
Responsibilities
1. Architect and build scalable backend systems using FastAPI, PostgreSQL, and Redis.
2. Design, develop, and maintain AI-driven applications, integrating multiple LLMs, APIs, and agentic frameworks.
3. Implement vector databases (pgvector, Qdrant, etc.) for RAG and AI memory systems.
4. Orchestrate multi-agent AI systems with LangChain/LangGraph, including function calling, agent collaboration, and monitoring.
5. Build and integrate RESTful APIs for frontend and external use.
6. Manage DevOps workflows, including CI/CD, cloud deployments (AWS/GCP), server scaling, and logging/monitoring (Sentry).
7. Optimize application cost, latency, and reliability, balancing speed with LLM call efficiency and caching strategies.
8. Collaborate with product, design, and AI teams to translate business requirements into high-performing tech.
9. Maintain documentation and ensure code quality with tests, reviews, and async-first architecture.
10. Contribute to frontend development (React + TypeScript) when necessary, ensuring seamless API integration and data visualization.
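The LLM-call caching strategy named in the responsibilities above can be sketched as a TTL-keyed response cache sitting in front of the model client. This is an in-memory illustration (a production system would typically use Redis, as elsewhere in this posting), and `call_llm` is a stub standing in for a real provider SDK call.

```python
import hashlib
import json
import time

class LLMCache:
    """Illustrative cache: identical prompt+params skip the paid LLM call
    until the TTL expires. In production the store would live in Redis."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (timestamp, answer)
        self.misses = 0

    def _key(self, prompt, params):
        # Deterministic key over prompt and sampling params
        raw = json.dumps({"p": prompt, **params}, sort_keys=True)
        return hashlib.sha256(raw.encode()).hexdigest()

    def complete(self, prompt, **params):
        k = self._key(prompt, params)
        hit = self.store.get(k)
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]  # cache hit: no model call, no cost
        self.misses += 1
        answer = self.call_llm(prompt, params)
        self.store[k] = (time.time(), answer)
        return answer

    def call_llm(self, prompt, params):
        return f"echo:{prompt}"  # stub; a real client call goes here

cache = LLMCache()
```

Keying on sampling parameters as well as the prompt matters: the same prompt at a different temperature is a different request and must not share a cache entry.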
Requirements
Core Skills
• Strong proficiency in Python and FastAPI.
• Experience with PostgreSQL (including pgvector) and SQLAlchemy (async).
• Solid understanding of Redis, RQ (Redis Queue), and caching mechanisms.
• Proven experience integrating LLMs and AI APIs (OpenAI, Anthropic, etc.).
• Hands-on experience with LangChain / LangGraph, RAG pipelines, and agent orchestration.
• Experience working with cloud platforms (AWS / GCP) and managing file storage (S3).
• Familiarity with frontend stacks (React, TypeScript, Tailwind, Zustand).
• Working knowledge of DevOps: Docker, CI/CD pipelines, deployment automation, and observability tools (Sentry, Mixpanel, Clarity).
Bonus / Nice to Have
• Experience building agent monitoring dashboards or AI workflows.
• Prior experience in startup or product-based environments.
• Understanding of LLM cost optimization, token management, and function calling orchestration.
• Familiarity with external API integrations like BrightData, Hunter.io, Adzuna, and Serper.
• Experience building scalable AI products (e.g., chatbots, AI copilots, data agents, or automation tools).
Mindset
• Startup-ready: comfortable working in fast-paced, ambiguous environments.
• Deep curiosity about AI systems and automation.
• Strong sense of ownership and accountability for shipped products.
• Pragmatic and cost-conscious in architectural decisions.
• Excellent communication and documentation skills.
Description
Company is a fast-growing company founded by former Google Cloud leaders, architects, and engineers. We are seeking candidates with significant experience in Google Cloud to join our team. Our engagements aim to eliminate obstacles, reduce risk, and accelerate timelines for customers transitioning to Google and seeking assistance with data and application modernization. We embed within customer teams to provide strategic guidance, facilitate technology decisions, and execute projects in a collaborative, co-development style.
As a member of our Cloud Engineering team, you will be working with fast-paced innovative companies, leveraging Cloud as the key driver of their transformation. Our clients will look to you as their trusted advisor, someone they can rely on and who will be there to help them along their Google Cloud journey. You will be expected to work with a large spectrum of technologies and tools including public cloud platforms, AI and LLMs, Kubernetes, data processing systems, databases, and more.
What you will do...
- Work with our clients to understand their requirements and technical challenges, then develop a technical design for a solution and communicate its value to the client team.
- Develop delivery estimates and a project plan.
- Act as the lead technical member of the implementation project team. You are responsible for making the key technical decisions and keeping delivery on track, and you should be able to unblock the team when things are stuck.
- Utilize a broad range of technologies such as Kubernetes, AI, and Large Language Models (LLMs), to develop scalable and efficient cloud applications.
- Stay abreast of industry trends and new technologies to drive continuous improvement in cloud solutions and practices.
- Work closely with cross-functional teams to deliver end-to-end cloud solutions, from conceptualization to deployment and maintenance.
- Engage in problem-solving and troubleshooting to address complex technical challenges in a cloud environment.
What we need...
- 5+ years of experience working in a Software Engineering capacity
- Excellent knowledge and experience with Python, and preferably additional languages such as Go
- Strong critical thinking skills, and a bias towards problem solving
- Familiarity with implementing microservice architectures
- Fundamental skills with Kubernetes. You should be familiar with packaging and deploying your applications to k8s
- Experience building applications that work with data, databases, and other parts of the data ecosystem is preferred
- Familiarity with Generative AI workflows, frameworks like Langchain, and experience with Streamlit are all highly desirable, but at a minimum you should have a willingness to learn
- Experience deploying production workloads on the public cloud - either GCP or AWS
- Experience using CI/CD tools such as GitHub Actions, GitLab CI, etc.
- Able to work with new tools and technologies where you may not have prior experience
- Comfortable with being on video in meetings internally and with clients
- Strong English communication skills
We are a fully remote company and offer competitive compensation and benefits.
The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.
What You’ll Own
- Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
- Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
- The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
- Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
- Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
The Stack You’ll Command
- Languages: Expert-level SQL (CTE, Window Functions, Tuning) and Production Python.
- Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
- Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
- Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.
Who You Are
- Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
- Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
- Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
- Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
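The EXPLAIN/ANALYZE habit called out above is a loop: read the plan, change something (usually an index), read the plan again. A runnable sketch using Python's stdlib `sqlite3` so it is self-contained; production work here would read `EXPLAIN (ANALYZE)` output on PostgreSQL or the execution plan in MSSQL, but the workflow is identical. Table and column names are invented for illustration.

```python
import sqlite3

# Toy clinical-style table: 1000 lab results for 100 hypothetical patients
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE labs (patient_id INTEGER, code TEXT, value REAL)")
con.executemany(
    "INSERT INTO labs VALUES (?, ?, ?)",
    [(i % 100, "HBA1C", 5.0 + i % 3) for i in range(1000)],
)

def plan(sql):
    """Return SQLite's query plan for a statement as one string."""
    rows = con.execute("EXPLAIN QUERY PLAN " + sql)
    return " ".join(row[-1] for row in rows)  # last column is the detail text

query = "SELECT * FROM labs WHERE patient_id = 42"

before = plan(query)   # full table scan: every row is examined
con.execute("CREATE INDEX idx_labs_patient ON labs(patient_id)")
after = plan(query)    # index search: only matching rows are touched
```

On a thousand rows the difference is invisible; on a billion-row claims table, the same plan change is the difference between milliseconds and minutes.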
Bonus Points for:
- Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
- Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
- Search Experts: Experience with near-real-time indexing via Elasticsearch.
To be considered for the next round, please fill out the Google form with your updated resume.
Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7
Details: https://forms.gle/FGgkmQvLnS8tJqo5A
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹18,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days a week)
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend).
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, and React.js for front-end development.
- Understanding of server-side languages such as Node.js.
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL.
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with 1 to 2 years of full-time work experience in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
Advanced Software Architect
Position Responsibilities :
- Lead the architecture and development of AI-powered, distributed systems that meet enterprise-grade performance and security standards.
- Leverage AI tools for code generation, architectural design, and documentation to accelerate delivery and improve quality.
- Design, build, and maintain services using Python, Java, and Node.js, following clean-code and secure design principles.
- Develop agentic AI-based tools, domain-specific copilots, and developer productivity enhancements.
- Collaborate with cross-functional teams to define modular, scalable, and compliant architecture patterns.
- Conduct technical design reviews and produce detailed documentation, including system specifications, API docs, and architecture diagrams.
- Integrate AI solutions into CI/CD pipelines, ensuring observability, automated testing, and deployment standards are met.
- Implement robust monitoring and performance engineering practices to maintain high-quality deployments.
- Continuously evaluate emerging AI technologies and integrate them into development workflows for maximum impact.
- Champion best practices in security, automation, and performance optimization across the organization.
Qualifications :
- 8+ years in software engineering with full-stack or backend development in Python, Java, and/or Node.js.
- 3+ years with AI tools for development, prototyping, or documentation tasks.
- Experience with cloud-native development and containerized deployment (Docker, Kubernetes).
- Knowledge of AI integration patterns, vector stores, prompt engineering, and RAG pipelines.
- Ability to design software architecture using sequence diagrams, ERDs, data models, and threat models.
- Comfortable with Gen AI-first environments and working with remote Agile teams.
Preferred Qualifications
- Experience building AI copilots or developer tools using OpenAI/Claude SDKs, LangChain, or similar frameworks.
- Experience working in a fast-paced, AIDLC environment, with a strong understanding of CI/CD practices.
- Familiarity with GitHub Actions, Argo Workflows, Terraform, and monitoring/observability tools.
- Containerization and orchestration: proficiency with Docker and Kubernetes.
- Cloud Platforms: Experience with cloud computing platforms such as AWS, Azure, or OCI Cloud.
Job Title: Software Developer (Contractor)
Location: Remote, Up to 1-year contract
Compensation: Hourly
About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.
Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.
Key Responsibilities:
· Develop and optimize software using C/C++ for high-performance computing applications.
· Work on cryptographic algorithm implementations and performance tuning.
· Optimize memory management, threading, and parallel computing techniques.
· Debug, profile, and test software for performance and reliability.
· Write clean, efficient, and well-documented code.
Qualifications:
· Completed a B.S. or higher degree in Computer Science or Computer Engineering.
· Strong programming skills in C and C++.
· Familiarity with Linux-based development environments.
· Basic understanding of cryptographic algorithms and security principles is a plus.
· Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.
· Knowledge of other programming languages such as Python, Rust, or Go is a plus.
· Strong problem-solving skills and attention to detail.
· Ability to work independently and collaboratively in a fast-paced startup environment.
What You’ll Gain:
· Hands-on experience in systems programming, cryptography, and high-performance computing.
· Opportunities to work on real-world security and privacy-focused projects.
· Mentorship from experienced software engineers and researchers.
· Exposure to cutting-edge cryptographic acceleration and secure computing techniques.
· Potential for future full-time employment based on performance.
Description
Join the company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of the company’s mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
Position Responsibilities:
- Collaborate with the development team to maintain, enhance, and scale the product for enterprise use.
- Design and develop scalable, high-performance solutions using cloud technologies and containerization.
- Contribute to all phases of the development lifecycle, following SOLID principles and best practices.
- Write well-designed, testable, and efficient code with a strong emphasis on Test-Driven Development (TDD), ensuring comprehensive unit, integration, and performance testing.
- Ensure software designs comply with specifications and security best practices.
- Recommend changes to improve application architecture, maintainability, and performance.
- Develop and optimize database queries using T-SQL.
- Prepare and produce software component releases.
- Develop and execute unit, integration, and performance tests.
- Support formal testing cycles and resolve test defects.
AI-Specific Responsibilities:
- Integrate AI-powered tools and frameworks to enhance code quality and development efficiency.
- Utilize AI-driven analytics to identify performance bottlenecks and optimize system performance.
- Implement AI-based security measures to proactively detect and mitigate potential threats.
- Leverage AI for automated testing and continuous integration/continuous deployment (CI/CD) processes.
- Guide the adoption and effective use of AI agents for automating repetitive development, deployment, and testing processes within the engineering team.
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Highly proficient in ASP.NET Core (C#) and full-stack development.
- Experience developing REST APIs.
- Proficiency in front-end technologies (JavaScript, HTML, CSS, Bootstrap, and UI frameworks).
- Strong database experience, particularly with T-SQL and relational database design.
- Advanced understanding of object-oriented programming (OOP) and SOLID principles.
- Experience with security best practices in web and API development.
- Knowledge of Agile SCRUM methodology and experience in collaborative environments.
- Experience with Test-Driven Development (TDD).
- Strong analytical skills, problem-solving abilities, and curiosity to explore new technologies.
- Ability to communicate effectively, including explaining technical concepts to non-technical stakeholders.
- High commitment to continuous learning, innovation, and improvement.
AI-Specific Qualifications:
- Proficiency in AI-driven development tools and platforms such as GitHub Copilot in Agentic Mode.
- Knowledge of AI-based security protocols and threat detection systems.
- Experience integrating GenAI or Agentic AI agents into full-stack workflows (e.g., using AI for code reviews, automated bug fixes, or system monitoring).
- Demonstrated proficiency with AI-assisted development tools and prompt engineering for code generation, testing, or documentation.
Senior Quality Engineer – AI Products
Fulltime
Remote
Requirements
● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.
● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.
● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.
● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.
● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.
● Experience with AWS or other major cloud platforms.
● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.
● Advanced skills with API and SQL testing methodologies.
● Familiarity with test management tools such as TestRail; experience with Qase is a plus.
● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.
● Experience with testing tools: Jira, Sentry, DataDog.
● Strong understanding of Agile/Scrum methodologies.
● Proven track record of mentoring junior engineers and contributing to process improvements.
● Excellent analytical and problem-solving abilities.
● Strong communication skills with ability to present to both technical and non-technical stakeholders.
● Proficiency in English (C1-C2 level).
● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
Preferred Qualifications
● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).
● Hands-on experience with document parsing, OCR, or unstructured data pipelines.
● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.
● Experience testing SaaS products in regulated industries (such as PCI-compliant).
● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).
● Experience with microservice architectures and distributed systems.
● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.
● Background in security or compliance testing for AI systems.
● Certifications such as ISTQB or CSTE.
● Experience working in legal technology, fintech, or professional services software.
● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.
● Experience evaluating and implementing new QE tools and processes.
Job Summary
We are looking for an experienced Java Full Stack Developer with strong expertise in Java, React.js, and AWS to design, develop, and maintain scalable web applications. The ideal candidate should have experience building high-performance applications and working across both front-end and back-end technologies.
Key Responsibilities
- Develop and maintain full-stack web applications using Java and React.js
- Design and build RESTful APIs and microservices using Java frameworks
- Develop responsive and interactive frontend interfaces using React.js
- Work with AWS services for deployment, scalability, and infrastructure
- Collaborate with cross-functional teams including product managers, designers, and QA
- Write clean, maintainable, and efficient code following best practices
- Participate in code reviews, testing, debugging, and performance optimization
- Implement CI/CD pipelines and cloud-based solutions
Required Skills
- Strong experience in Java (Spring Boot / Spring Framework)
- Good knowledge of React.js, JavaScript, HTML, CSS
- Experience building REST APIs and microservices architecture
- Hands-on experience with AWS services (EC2, S3, Lambda, RDS, etc.)
- Familiarity with Git, CI/CD pipelines, and Agile development
- Experience with database technologies (MySQL, PostgreSQL, or MongoDB)
Preferred Skills
- Experience with Docker / Kubernetes
- Knowledge of serverless architecture
- Experience working in cloud-native environments
- Understanding of system design and scalable architecture
BluePMS Software Solutions Pvt Ltd is hiring a talented DevOps Engineer to join our growing engineering team. In this role, you will be responsible for building and maintaining scalable infrastructure, automating deployment processes, and improving the reliability of our software delivery pipelines.
Key Responsibilities:
1: Design, build, and maintain CI/CD pipelines for fast, reliable deployments.
2: Manage and monitor cloud infrastructure and servers.
3: Automate build, testing, and deployment processes.
4: Collaborate with development and QA teams to improve release cycles.
5: Monitor system performance and ensure high availability and reliability.
6: Troubleshoot infrastructure and deployment issues.
7: Implement security best practices in DevOps workflows.
Required Skills:
1: Strong understanding of DevOps principles and CI/CD pipelines.
2: Experience with Docker, Kubernetes, or containerization technologies.
3: Familiarity with cloud platforms such as AWS, Azure, or GCP.
4: Experience with Git, Jenkins, GitHub Actions, or similar tools.
5: Basic scripting knowledge (Bash, Python, or Shell).
6: Good understanding of Linux systems and networking concepts.
Eligibility:
1: Experience: 2 – 7 years
2: Qualification: Bachelor's degree in Computer Science, IT, or related field
3: Strong analytical and problem-solving skills.
Location: Chennai / Remote
Apply here: https://connectsblue.com/jobs/753/devops-engineer-at-bluepms-software-solutions-pvt-ltd
Job Description:
Position Type: Full-Time Contract (with potential to convert to Permanent)
Location: Remote (Australian Time Zone)
Availability: Immediate Joiners Preferred
About the Role
We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.
The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.
Key Responsibilities
- Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
- Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
- Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
- Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
- Perform data profiling, data validation, and ensure data quality across systems.
- Work closely with data engineering teams to improve data structures for better reporting efficiency.
- Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
- Support deployment, version control, and documentation of BI solutions.
- Ensure availability of dashboards during Australian business hours.
Required Skills & Experience
- 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
- 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
- Advanced knowledge of SQL and performance tuning.
- Strong understanding of data modeling, ETL processes, and cloud data platforms.
- Experience working in fast-paced environments with tight delivery timelines.
- Excellent communication and stakeholder management skills.
- Ability to work independently and deliver high‑quality outputs aligned with business objectives.
Nice-to-Have Skills
- Knowledge of Python or any ETL tool.
- Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
- Tableau Server/Prep experience.
Contract Details
- Full-Time Contract for several months.
- High possibility of conversion to permanent, based on performance.
- Must be available to work on the Australian Time Zone.
- Immediate joiners are highly encouraged.
Role Overview:
We are looking for a skilled DevOps Engineer to join our team. You will be responsible for managing and automating the deployment, monitoring, and scaling of our applications, ensuring high availability, security, and performance. The ideal candidate is passionate about automation, CI/CD, and cloud infrastructure.
Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for development, testing, and production environments.
- Manage cloud infrastructure (AWS, Azure, GCP, or others) and ensure scalability, reliability, and security.
- Automate deployment, configuration management, and infrastructure provisioning using tools like Terraform, Ansible, or Chef.
- Monitor application performance and infrastructure health using tools like Prometheus, Grafana, ELK Stack, or Datadog.
- Collaborate with development and QA teams to streamline workflows and resolve deployment issues.
- Implement security best practices in pipelines, infrastructure, and cloud environments.
- Maintain version control and manage release cycles.
- Troubleshoot and resolve production issues efficiently.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, IT, or related field.
- Proven experience in DevOps, system administration, or cloud engineering.
- Strong knowledge of CI/CD tools (Jenkins, GitLab CI/CD, CircleCI, etc.).
- Hands-on experience with containerization (Docker, Kubernetes).
- Experience with cloud platforms (AWS, Azure, or GCP).
- Scripting skills (Python, Bash, or PowerShell).
- Knowledge of infrastructure as code (Terraform, CloudFormation).
- Familiarity with monitoring and logging tools.
- Strong problem-solving, communication, and teamwork skills.
Preferred Qualifications:
- Experience with microservices architecture.
- Knowledge of networking, load balancing, and firewalls.
- Exposure to Agile/Scrum methodologies.
What We Offer:
- Competitive salary
- Flexible working hours and remote options.
- Learning and development opportunities.
- Collaborative and inclusive work environment.
Job Title: Data Engineer
Experience: 4–14 Years
Work Mode: Remote
Employment Type: Full-Time
Position Overview:
We are looking for highly experienced Senior Data Engineers to design, architect, and lead scalable, cloud-based data platforms on AWS. The role involves building enterprise-grade data pipelines, modernizing legacy systems, and developing high-performance scoring engines and analytics solutions, while collaborating closely with architecture, analytics, risk, and business teams to deliver secure, reliable, and scalable data solutions.
Key Responsibilities:
· Design and build scalable data pipelines for financial and customer data
· Build and optimize scoring engines (credit, risk, fraud, customer scoring)
· Design, develop, and optimize complex ETL/ELT pipelines (batch & real-time)
· Ensure data quality, governance, reliability, and compliance standards
· Optimize large-scale data processing using SQL, Spark/PySpark, and cloud technologies
· Lead cloud data architecture, cost optimization, and performance tuning initiatives
· Collaborate with Data Science, Analytics, and Product teams to deliver business-ready datasets
· Mentor junior engineers and establish best practices for data engineering
Key Requirements:
· Strong programming skills in Python and advanced SQL
· Experience building scalable scoring or rule-based decision engines
· Hands-on experience with Big Data technologies (Spark/PySpark/Kafka)
· Strong expertise in designing ETL/ELT pipelines and data modeling
· Experience with cloud platforms (AWS/Azure) and modern data architectures
· Solid understanding of data warehousing, data lakes, and performance tuning
· Knowledge of CI/CD, version control (Git), and production support best practices
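The "scalable scoring or rule-based decision engines" requirement above can be illustrated with a minimal sketch. All names here (the rules, point weights, and transaction fields) are hypothetical examples, not part of any real engine:

```python
from dataclasses import dataclass
from typing import Callable

# Each rule maps a transaction record to a score contribution.
@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    points: int

# Hypothetical rules and weights, for illustration only.
RULES = [
    Rule("high_amount", lambda t: t["amount"] > 10_000, 40),
    Rule("new_account", lambda t: t["account_age_days"] < 30, 25),
    Rule("foreign_ip", lambda t: t["ip_country"] != t["home_country"], 20),
]

def score(txn: dict) -> tuple[int, list[str]]:
    """Return a risk score and the names of the rules that fired."""
    fired = [r.name for r in RULES if r.condition(txn)]
    total = sum(r.points for r in RULES if r.name in fired)
    return total, fired

txn = {"amount": 15_000, "account_age_days": 10,
       "ip_country": "DE", "home_country": "IN"}
print(score(txn))  # → (85, ['high_amount', 'new_account', 'foreign_ip'])
```

A production engine would load rules from configuration, version them, and log which rules fired for auditability; the core loop, however, is essentially this.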
Job Title : Data / Generative AI Engineer
Experience : 5+ Years (Mid-Level) | 10+ Years (Senior)
Location : Remote
Employment Type : Contract
Open Positions : 5
Job Overview :
We are hiring Data / Generative AI Engineers for remote contract engagements supporting client-facing AI implementations. The role involves building production-grade Generative AI solutions on AWS, including conversational AI systems, RAG-based architectures, intelligent automation platforms, and scalable data engineering pipelines.
Mandatory Skills :
Amazon Bedrock, Generative AI, RAG Architecture, LangChain/LlamaIndex/Bedrock Agents, Python (3.9+), AWS Serverless (Lambda, API Gateway, Step Functions), Vector Databases, Data Engineering & ETL, AWS Glue, Amazon Athena.
Key Responsibilities :
- Design and build production-ready Generative AI applications on AWS.
- Implement Retrieval-Augmented Generation (RAG) architectures for enterprise AI solutions.
- Integrate Amazon Bedrock with foundation models and enterprise systems.
- Develop AI agent orchestration workflows using frameworks such as LangChain, LlamaIndex, or Bedrock Agents.
- Build and manage serverless architectures using AWS services like Lambda, API Gateway, and Step Functions.
- Implement vector databases and semantic search solutions for intelligent knowledge retrieval.
- Design and maintain data engineering pipelines and ETL workflows for large-scale data processing.
- Use AWS Glue for data transformation and orchestration.
- Utilize Amazon Athena for querying large datasets and performing analytics.
- Develop scalable Python-based APIs and backend services.
- Collaborate with cross-functional teams and clients to deliver AI-powered solutions in production environments.
Required Skills :
- Strong experience with Amazon Bedrock and foundation model integrations
- Hands-on experience with LangChain, LlamaIndex, or Bedrock Agents
- Advanced Python (3.9+) development and API building
- Experience with AWS serverless architectures (Lambda, API Gateway, Step Functions)
- Experience implementing vector databases and semantic search systems
- Strong knowledge of data engineering and ETL pipeline development
- Hands-on experience with AWS Glue for data transformation and orchestration
- Experience using Amazon Athena for querying and analytics
- Experience building RAG-based AI applications
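The retrieval half of a RAG pipeline, as required above, can be sketched in a few lines. This toy version uses word-count "embeddings" and cosine similarity purely for illustration; a real system would call an embedding model (e.g. via Amazon Bedrock) and query a vector database instead, and the sample documents are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words counts stand in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "Lambda functions scale automatically with traffic",
    "Glue jobs transform data for analytics",
    "Athena queries data directly in S3",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and return the top k.
    scored = [(doc, cosine(embed(query), embed(doc))) for doc in DOCS]
    return [doc for doc, _ in sorted(scored, key=lambda p: p[1], reverse=True)[:k]]

print(retrieve("how do Athena queries work"))
# → ['Athena queries data directly in S3']
```

The retrieved passages would then be injected into the foundation model's prompt as grounding context.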
Engagement Details :
- Contract Duration : Minimum 3 to 6 Months
- Work Timing : 8:00 AM – 4:00 PM EST
- Start Timeline : Within 2 Weeks
- Open Positions : 5
Key Responsibilities
- Design, implement, and maintain highly available infrastructure on AWS.
- Automate infrastructure provisioning using Terraform (Infrastructure as Code).
- Define and monitor SLIs, SLOs, and error budgets to improve service reliability.
- Build and manage CI/CD pipelines to enable safe and frequent deployments.
- Implement robust monitoring, alerting, and logging solutions.
- Perform incident response, root cause analysis (RCA), and postmortems.
- Improve system resilience through automation and self-healing mechanisms.
- Optimize cloud resource utilization and cost (FinOps awareness).
- Collaborate with development teams to improve application reliability.
- Manage containerized workloads using Docker and Kubernetes (EKS preferred).
- Implement security and compliance best practices across infrastructure.
- Maintain operational runbooks and documentation.
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 7–8 years of experience in SRE, DevOps, or Production Engineering.
- Strong hands-on experience with AWS services.
- Proven experience with Terraform for infrastructure automation.
- Experience building CI/CD pipelines (GitHub Actions, Jenkins, or similar).
- Strong scripting skills (Python, Bash, or Shell).
- Experience with Linux system administration.
- Hands-on experience with monitoring and observability tools.
- Good understanding of networking and cloud security fundamentals.
- Experience with Git and branching strategies
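The SLO and error-budget responsibility above reduces to simple arithmetic worth keeping in a script. A minimal sketch (function names are illustrative, not from any SRE tool):

```python
def error_budget(slo: float, window_minutes: int) -> float:
    """Allowed downtime (minutes) in the window for a given availability SLO."""
    return window_minutes * (1 - slo)

def budget_remaining(slo: float, window_minutes: int,
                     downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means breached)."""
    budget = error_budget(slo, window_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% availability SLO over a 30-day window allows ~43.2 minutes of downtime.
window = 30 * 24 * 60
print(round(error_budget(0.999, window), 1))          # → 43.2
print(round(budget_remaining(0.999, window, 10), 3))  # → 0.769
```

Teams typically gate risky deploys on the remaining budget: plenty left means ship, nearly spent means freeze and focus on reliability work.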
Key Responsibilities
- Design end-to-end architecture for scalable full-stack applications.
- Lead backend development using Python and Flask framework.
- Design and optimize MongoDB data models and queries.
- Define frontend architecture (React/Angular/Vue – as applicable).
- Establish coding standards, design patterns, and best practices.
- Build and optimize RESTful APIs and microservices.
- Implement authentication, authorization, and security best practices.
- Ensure high performance, scalability, and reliability of applications.
- Drive CI/CD implementation and DevOps best practices.
- Review code, mentor developers, and guide technical decisions.
- Collaborate with product, DevOps, and data teams.
- Troubleshoot complex production issues and perform root cause analysis.
- Lead cloud deployment strategies (Azure/AWS/GCP preferred).
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science or related field.
- 8+ years of software development experience.
- 4+ years of hands-on Python backend development.
- Strong expertise in Flask framework.
- Deep experience with MongoDB (schema design, indexing, aggregation).
- Experience designing RESTful and microservices architectures.
- Strong understanding of frontend technologies (JavaScript, HTML, CSS).
- Experience with Git and modern CI/CD pipelines.
- Solid knowledge of system design, scalability, and performance tuning.
- Experience with containerization (Docker preferred).
- Strong problem-solving and architectural thinking skills.
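The MongoDB aggregation expertise asked for above usually means expressing analytics as pipeline stages. A hedged sketch follows, with a hypothetical `orders` collection and `status` field; the pure-Python tail mirrors what the `$group` stage computes, so the example runs without a database:

```python
from collections import Counter

# Hypothetical pipeline: count recent orders per status, largest group first.
# With pymongo this would run as db.orders.aggregate(pipeline).
pipeline = [
    {"$match": {"created_at": {"$gte": "2024-01-01"}}},
    {"$group": {"_id": "$status", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]

# Pure-Python equivalent of the $group stage, for illustration:
orders = [{"status": "paid"}, {"status": "paid"}, {"status": "pending"}]
counts = Counter(o["status"] for o in orders)
print(counts.most_common())  # → [('paid', 2), ('pending', 1)]
```

Pushing the grouping into the database like this, instead of fetching documents and aggregating in application code, is usually the difference between an indexed scan and a full collection transfer.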
What makes Techjays an inspiring place to work
At Techjays, we are helping companies reimagine how they build, operate, and scale with AI at the core.
We operate as part of the 1% of companies globally that truly leverage AI the right way: not just as experimentation, but as secure, scalable, production-grade systems that drive measurable business outcomes.
Our strength lies in combining deep backend engineering with AI system design, building AI-native platforms, intelligent workflows, and cloud architectures that are reliable, observable, and enterprise-ready.
Our team includes engineers and leaders who have built and scaled products at global technology organizations such as Google, Akamai, NetApp, ADP, Cognizant Consulting, and Capgemini. Today, we function as a high-agency, execution-focused team building advanced AI systems for global clients.
We are looking for a strong backend engineer who can design and build secure, scalable Python systems that power AI-native applications.
You will work on AI-enabled platforms, production systems, and scalable backend services that support LLM integrations, RAG pipelines, and intelligent workflows.
Years of Experience: 5 - 8 years
Location: Remote/ Coimbatore
Key Skills:
- Backend Development (Expert): Python, Django/Flask, RESTful APIs, Websockets
- Cloud Technologies (Proficient): AWS (EC2, S3, Lambda), GCP (Compute Engine, Cloud Storage, Cloud Functions), CI/CD pipelines with Jenkins/GitLab CI or Github Actions
- Databases (Advanced): PostgreSQL, MySQL, MongoDB
- AI/ML (Familiar): Basic understanding of Machine Learning concepts, experience with RAG, Vector Databases (Pinecone or ChromaDB or others)
- Tools (Expert): Git, Docker, Linux
Roles and Responsibilities:
- Design, development, and implementation of highly scalable and secure backend services using Python and Django.
- Architect and develop complex features for our AI-powered platforms
- Write clean, maintainable, and well-tested code, adhering to best practices and coding standards.
- Collaborate with cross-functional teams, including front-end developers, data scientists, and product managers, to deliver high-quality software.
- Mentor junior developers and provide technical guidance.
What We’re Looking For Beyond Skills
- Builder mindset — you think in systems, not just tickets
- Ownership — you take features from idea to production
- Structured thinking in ambiguous environments
- Clear communication and collaborative approach
- Ability to work in a fast-paced, evolving startup environment
What We Offer
- Competitive compensation
- Flexible work environment (Remote / Coimbatore office)
- Paid holidays & flexible time off
- Medical insurance (Self & Family up to ₹4 Lakhs per person)
- Opportunity to work on production-grade AI systems
- Exposure to global clients and high-impact projects
- A culture that values clarity, integrity, and continuous growth
If you want to build AI-native systems that are used in the real world, not just prototypes, Techjays is the place to do it.
Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using Azure DevOps
- Manage cloud infrastructure on Microsoft Azure including VMs, App Services, AKS, Networking, and Storage
- Implement Infrastructure as Code (IaC) using Terraform, ARM Templates, or Bicep
- Build and manage containerized environments using Docker and Kubernetes
- Deploy and manage Azure Kubernetes Service (AKS) clusters
- Automate configuration management and deployments
- Implement monitoring and logging solutions using Azure Monitor, Log Analytics, and Application Insights
- Integrate security best practices (DevSecOps) within CI/CD pipelines
- Collaborate with development teams to improve build, release, and deployment processes
- Troubleshoot production issues and optimize system performance
- Ensure high availability, scalability, and disaster recovery strategies
Required Skills & Qualifications
- 7+ years of experience in DevOps, Cloud Engineering, or Infrastructure Automation
- Strong hands-on experience with Microsoft Azure
- Expertise in CI/CD implementation using Azure DevOps
- Experience with scripting languages such as PowerShell, Bash, or Python
- Proficiency in Infrastructure as Code (Terraform, ARM, Bicep)
- Experience with container orchestration (Kubernetes/AKS)
- Knowledge of Git-based version control systems
- Experience with configuration management tools
- Strong understanding of networking, security, and cloud architecture
- Experience working in Agile/Scrum environments
Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.
Role: Full Stack Developer
Industry Type: Software Product
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
About Company:
Snapsight is an AI-powered platform that delivers real-time event summaries in 75+ languages. We work with conferences worldwide and won the 2024 Skift Award for Most Innovative Event Tech. We're an early-stage startup scaling fast.
Join us if you want to become part of a vibrant and fast-moving product company that's on a mission to connect people around the world through events.
Location: Remote/Work From Home
What you'll be doing:
- Writing reusable, testable, and efficient code in Node.js for back-end services.
- Ensuring optimal and high-performance code logic for the data from/to the database.
- Collaborating with front-end developers on the integrations.
- Implementing effective security protocols, data protection measures, and storage solutions.
- Preparing technical specification documents for the developed features.
- Providing technical recommendations and suggesting improvements to the product.
- Writing unit test cases for APIs.
- Documenting code standards and practicing it.
- Staying updated on the advancements in the field of Node.js development.
- Should be open to new challenges and be comfortable in taking up new exploration tasks.
Skills:
- 3-5 years of strong proficiency in Node.js and its core principles.
- Experience in test-driven development.
- Experience with NoSQL databases like MongoDB is required
- Experience with MySQL database
- RESTful/GraphQL API design and development
- Docker and AWS experience is a plus
- Extensive knowledge of JavaScript, PHP, web stacks, libraries, and frameworks.
- Strong interpersonal, communication, and collaboration skills.
- Exceptional analytical and problem-solving aptitude
- Experience with a version control system like Git
- Knowledge about the Software Development Life Cycle Model, secure development best practices and standards, source control, code review, build and deployment, continuous integration
We are looking for an experienced DevOps Architect with strong expertise in telecom environments (OSS/BSS, 4G/5G core, network systems). The candidate will design and implement scalable, highly available, and automated DevOps solutions to support telecom-grade applications and infrastructure.
Responsibilities:
- Design and implement DevOps architecture for telecom applications (OSS/BSS, mediation systems, billing platforms)
- Architect CI/CD pipelines using Jenkins, GitLab, or Azure DevOps
- Manage cloud infrastructure on Amazon Web Services, Microsoft Azure, or hybrid telecom data centers
- Implement containerization using Docker and orchestration with Kubernetes
- Design Infrastructure as Code (IaC) using Terraform
- Ensure high availability, disaster recovery, and zero-downtime deployment strategies
- Automate deployments for 4G/5G core network functions (CNFs/VNFs)
- Implement monitoring solutions using Prometheus, Grafana, and ELK Stack
- Work closely with network engineering and telecom operations teams
- Ensure compliance with telecom-grade security standards
🚀 We’re Hiring: Senior Full Stack Engineer (On-Call Support) 🚀
Work Mode: Remote
Shift Timings: PST
Working Hours: 9 hours (including a 1-hour break)
Are you a seasoned Full Stack Engineer who enjoys solving real-world production challenges and being the go-to expert when it matters most? This role is for you! 💡
Role Overview
We’re looking for 3 Senior Resources to join our On-Call Support Team, ensuring platform stability and rapid issue resolution across backend, frontend, and infrastructure.
Tech Stack
Node.js (NestJS)
React.js (Next.js)
React Native
PostgreSQL
AWS (Hybrid with On-Premise)
Linux
Docker Swarm
Portainer
What You’ll Do
Provide on-call support for production systems
Troubleshoot and resolve high-priority issues
Collaborate with senior engineers to maintain system reliability
Work across backend, frontend, and infrastructure layers
Ensure uptime, performance, and scalability of applications
What We’re Looking For
Strong experience with modern JavaScript frameworks
Hands-on knowledge of cloud + on-prem environments
Solid understanding of containerized deployments
Excellent problem-solving and debugging skills
Comfortable working in on-call support rotations
Job Description
We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.
What will you need to be successful in this role?
Core Data Science Skills
• Strong foundation in statistics, probability, and mathematical modeling
• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)
• Strong SQL skills for data extraction, transformation, and complex analytical queries
• Experience with exploratory data analysis (EDA) and statistical hypothesis testing
• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)
• Strong understanding of feature engineering and data preprocessing techniques
• Experience with A/B testing, experimental design, and causal inference
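The A/B testing and hypothesis-testing skills listed above boil down to tests like the two-proportion z-test. A self-contained sketch using only the standard library (the conversion counts are made-up example data):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int,
                     conv_b: int, n_b: int) -> tuple[float, float]:
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf gives the two-sided p-value.
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Variant B converts 6.5% vs. 5.0% for A, 2,400 users each.
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

In practice a library such as `scipy.stats` or `statsmodels` would be used, but knowing the arithmetic underneath is exactly what interviewers probe for.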
Machine Learning & Analytics
• Strong experience building and deploying ML models (regression, classification, clustering)
• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)
• Understanding of time series analysis and forecasting techniques
• Experience with model evaluation metrics and cross-validation strategies
• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)
• Understanding of bias-variance tradeoff and model interpretability
• Experience with hyperparameter tuning and model optimization
GenAI & Advanced Analytics
• Working knowledge of LLMs and their application to business problems
• Experience with prompt engineering for analytical tasks
• Understanding of embeddings and semantic similarity for analytics
• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)
• Experience integrating AI/ML models into analytical workflows
Data Platforms & Tools
• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)
• Proficiency in Jupyter notebooks and collaborative development environments
• Familiarity with version control (Git) and collaborative workflows
• Experience working with large datasets and distributed computing (Spark/PySpark)
• Understanding of data warehousing concepts and dimensional modeling
• Experience with cloud platforms (AWS, Azure, or GCP)
Business Acumen & Communication
• Strong ability to translate business problems into analytical frameworks
• Experience presenting complex analytical findings to non-technical stakeholders
• Ability to create compelling data stories and visualizations
• Track record of driving business decisions through data-driven insights
• Experience working with cross-functional teams (Product, Engineering, Business)
• Strong documentation skills for analytical methodologies and findings
Good to have
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)
• Knowledge of reinforcement learning and optimization techniques
• Familiarity with graph analytics and network analysis
• Experience with MLOps and model deployment pipelines
• Understanding of model monitoring and performance tracking in production
• Knowledge of AutoML tools and automated feature engineering
• Experience with real-time analytics and streaming data
• Familiarity with causal ML and uplift modeling
• Publications or contributions to data science community
• Kaggle competitions or open-source contributions
• Experience in specific domains (finance, healthcare, e-commerce)

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
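As a hedged illustration of the anomaly-detection responsibility, here is a deliberately simple z-score baseline in Python (not the platform's actual model; just the statistical idea a production fraud system would build on with richer features):

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Return indices of transaction amounts whose z-score exceeds the threshold.

    A toy baseline: production fraud scoring would use learned models
    over behavioral features, not a single univariate statistic.
    """
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mean) / stdev > z_threshold]
```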
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
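The "clean, tested, and maintainable code with pytest" responsibility can be sketched with a minimal example; the helper and its behavior are hypothetical, chosen only to show pytest's assert-based test style:

```python
def normalize_phone(raw: str) -> str:
    """Strip separators and a leading 91 country code from an Indian phone number.

    Hypothetical helper, used only to illustrate a pytest-style unit test.
    """
    digits = "".join(ch for ch in raw if ch.isdigit())
    if digits.startswith("91") and len(digits) == 12:
        digits = digits[2:]
    return digits

def test_normalize_phone():
    # pytest discovers `test_`-prefixed functions and reports failed asserts.
    assert normalize_phone("+91 98765-43210") == "9876543210"
    assert normalize_phone("98765 43210") == "9876543210"
```

Running `pytest` in the project directory would collect and execute `test_normalize_phone` automatically.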
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT, Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
Job Title : Full Stack Developer
Experience : 5+ Years (Mandatory)
Mandatory Tech Stack : Node.js (NestJS), React.js (Next.js), React Native, PostgreSQL, AWS (Hybrid with On-Premise infrastructure), Docker Swarm, and Portainer
Location : Remote
Working Days : Monday to Saturday
Shift : Night Shift
Job Summary :
We are scaling rapidly and looking for a high-impact Full Stack Developer who thrives on solving complex problems across Web, Mobile, and Cloud Infrastructure.
The ideal candidate is hands-on, adaptable, and comfortable working in distributed systems and hybrid cloud environments, delivering end-to-end solutions with ownership and accountability.
Mandatory Technical Skills :
- Backend : Node.js with NestJS
- Frontend (Web) : React.js with Next.js
- Mobile : React Native
- Database : PostgreSQL
- Cloud : AWS (Hybrid with On-Premise infrastructure)
- OS : Linux
- Containers & Orchestration : Docker Swarm
- Container Management : Portainer
🎯 Key Responsibilities :
- Design, develop, and maintain scalable full-stack applications (Web + Mobile)
- Build and manage microservices and RESTful APIs
- Work in distributed and hybrid cloud environments
- Develop cloud-ready solutions and manage deployments
- Handle containerized applications using Docker Swarm & Portainer
- Collaborate closely with Product, DevOps, and Engineering teams
- Ensure application performance, security, and reliability
- Participate in code reviews and follow best engineering practices
- Troubleshoot, debug, and optimize applications across the stack
✅ Required Qualifications :
- Strong hands-on experience with Node.js (NestJS)
- Solid expertise in React.js (Next.js) and React Native
- Experience with PostgreSQL and backend data modeling
- Working knowledge of AWS services in hybrid environments
- Good understanding of Linux systems
- Hands-on experience with Docker Swarm & Portainer
- Strong understanding of microservices architecture
- Ability to manage end-to-end full-stack delivery
⭐ Good-to-Have Skills :
- Experience with CI/CD pipelines
- Exposure to monitoring & logging tools
- Knowledge of event-driven systems
- Experience working in high-availability systems
Experience: 8+ Years
Work Mode: Remote
Engagement: Full-time / Freelancer
Dual Project: Acceptable
Job Description:
We are looking for an experienced AWS Cloud Engineer II with strong hands-on system engineering expertise in AWS production environments.
Key Responsibilities and Skills:
Hands-on experience in AWS system engineering with a strong focus on Amazon RDS, including performance tuning, backups, restores, Multi-AZ configurations, and read replicas
Strong experience in application troubleshooting across AWS services including EC2, ALB, VPC, and IAM
Expertise in log analysis and monitoring using AWS CloudWatch
Ability to troubleshoot connectivity issues, latency problems, and service dependencies
Experience in end-to-end root cause analysis and production issue resolution
Strong understanding of AWS networking and security best practices
Ability to work independently in a remote setup and handle production-level issues
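To illustrate the log-analysis and latency-troubleshooting skills above, here is a small Python sketch that computes a p95 latency from simplified log lines (the three-field line format is an assumption for the example; real ALB access logs carry many more fields):

```python
def p95_latency(log_lines):
    """Return the 95th-percentile latency (seconds) from simplified log lines.

    Assumed line format: "<timestamp> <target> <latency_seconds>".
    """
    latencies = sorted(float(line.split()[2]) for line in log_lines if line.strip())
    if not latencies:
        return None
    # Nearest-rank percentile: index ceil(0.95 * n) - 1, clamped at 0.
    rank = max(0, -(-95 * len(latencies) // 100) - 1)
    return latencies[rank]
```

In practice the same computation is often done directly in CloudWatch Logs Insights, but a script like this is handy for ad-hoc exported logs.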
Preferred Qualifications:
Experience working in high-availability and production-critical environments
Strong analytical and problem-solving skills
Good communication skills for collaborating with cross-functional teams
Hands-on experience with Microsoft Azure core services including Virtual Machines, storage, networking, and identity management
Strong expertise in Remote Desktop Services (RDS) / Azure Virtual Desktop (AVD) deployment, configuration, and performance tuning
Solid systems engineering background with Windows Server administration, Active Directory, GPO, DNS, and basic Linux management
Proficiency in automation and scripting, primarily using PowerShell, with working knowledge of Azure CLI and Infrastructure as Code (ARM/Bicep/Terraform)
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company) : Product companies (B2B SaaS preferred)
Hands-on experience implementing and managing DLP solutions in AWS and Azure
Strong expertise in data classification, labeling, and protection (e.g., Microsoft Purview, AIP)
Experience designing and enforcing DLP policies across cloud storage, email, endpoints, and SaaS apps
Proficient in monitoring, investigating, and remediating data leakage incidents

US-based large biotech company with worldwide operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer -- AWS [LONG-TERM CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.
Key Responsibilities
Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
Stay current with emerging cloud technologies, trends, and best practices.
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
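One concrete instance of the scripting-for-automation requirement is retrying flaky or rate-limited cloud API calls with exponential backoff; a minimal sketch (the helper name and the tiny delays are illustrative):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn(), retrying on exception with exponential backoff.

    Delays are kept tiny here only so the example runs fast; real
    automation against cloud APIs would use longer delays plus jitter.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

The same pattern is what SDK-level retry configuration (e.g., in boto3) implements for you, but a standalone helper is useful in glue scripts.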
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures
Job Details
- Job Title: Software Developer (Python, React/Vue)
- Industry: Technology
- Experience Required: 2-4 years
- Working Days: 5 days/week
- Job Location: Remote working
- CTC Range: Best in Industry
Review Criteria
- Strong Full stack/Backend engineer profile
- 2+ years of hands-on experience as a full stack developer (backend-heavy)
- (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
- (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
- (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
- (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
- (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
- (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
- Product companies (B2B SaaS preferred)
Preferred
- Preferred (Location) - Mumbai
- Preferred (Skills): Candidates with strong backend or full-stack experience in other languages/frameworks are welcome if fundamentals are strong
- Preferred (Education): B.Tech from Tier 1, Tier 2 institutes
Role & Responsibilities
This is not just another dev job. You’ll help engineer the backbone of the world’s first AI Agentic manufacturing OS.
You will:
- Build and own features end-to-end — from design → deployment → scale.
- Architect scalable, loosely coupled systems powering AI-native workflows.
- Create robust integrations with 3rd-party systems.
- Push boundaries on reliability, performance, and automation.
- Write clean, tested, secure code → and continuously improve it.
- Collaborate directly with Founders & senior engineers in a high-trust environment.
Our Tech Arsenal:
- We believe in always using the sharpest tools for the job. To that end, we try to remain tech agnostic and leave it up to discussion which tools will solve the problem in the most robust and quickest way.
- That being said, our bright team of engineers has already constructed a formidable arsenal of tools that helps us fortify our defense and always play on the offensive. Take a look at the Tech Stack we already use.
Senior Penetration Tester
Experience: 2–5 years
Industry: EdTech / SaaS
Role Summary:
We are looking for a Penetration Tester to identify and remediate security vulnerabilities in our EdTech platforms including LMS, ERP, web apps, mobile apps, and APIs.
Key Responsibilities:
Perform VAPT on web, mobile, API, and cloud systems
Identify vulnerabilities using OWASP standards
Prepare security reports and remediation guidance
Re-test fixes with development teams
Skills Required:
Web & API security (OWASP Top 10)
Tools: Burp Suite, Nmap, Nessus, Metasploit
Basic scripting (Python/Bash)
Understanding of cloud security basics
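As a small example of combining basic scripting with OWASP-style checks, here is a Python triage helper that flags missing HTTP security response headers (the checklist is a common subset of hardening guidance, not an exhaustive standard):

```python
def missing_security_headers(headers):
    """Return recommended response headers absent from the given header dict.

    A triage aid for pentest reporting, not a substitute for a full VAPT.
    """
    recommended = [
        "Strict-Transport-Security",
        "Content-Security-Policy",
        "X-Content-Type-Options",
        "X-Frame-Options",
    ]
    present = {k.lower() for k in headers}  # header names are case-insensitive
    return [h for h in recommended if h.lower() not in present]
```

A response captured in Burp Suite can be fed in as a dict to quickly populate the "missing headers" section of a report.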
Preferred:
EdTech or SaaS experience
Certifications: CEH / OSCP
Company: Grey Chain AI
Location: Remote
Experience: 7+ Years
Employment Type: Full Time
About the Role
We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.
You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.
Key Responsibilities
- Lead the design and development of Python-based AI systems, APIs, and microservices.
- Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
- Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
- Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
- Ensure reliability, scalability, and security of AI solutions in production.
- Mentor junior engineers and provide technical leadership to the team.
- Work closely with clients to understand business needs and translate them into robust AI solutions.
- Drive adoption of latest GenAI trends, tools, and best practices across projects.
Must-Have Technical Skills
- 7+ years of hands-on experience in Python development, building scalable backend systems.
- Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
- Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
- Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
- Experience designing multi-agent workflows, tool calling, and prompt pipelines.
- Strong understanding of REST APIs, microservices, and cloud-native architectures.
- Experience deploying AI solutions on AWS, Azure, or GCP.
- Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
- Proficiency with Git, CI/CD, and production deployment pipelines.
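The tool-calling and agent-orchestration skills above ultimately reduce to dispatching a model-proposed action to a registered function; a framework-free sketch (the `{"tool": ..., "args": ...}` action shape mimics, but is not, any specific framework's API):

```python
def run_agent_step(tool_registry, action):
    """Dispatch one agent 'tool call' to a registered Python function.

    tool_registry maps tool names to callables; action is the
    model-proposed step. Names and shapes here are illustrative only.
    """
    name = action["tool"]
    if name not in tool_registry:
        raise KeyError("unknown tool: " + name)
    return tool_registry[name](**action.get("args", {}))
```

Frameworks like LangGraph add state, looping, and schema validation around this core loop, but the dispatch step itself stays this simple.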
Leadership & Client-Facing Experience
- Proven experience leading engineering teams or acting as a technical lead.
- Strong experience working directly with foreign or enterprise clients.
- Ability to gather requirements, propose solutions, and own delivery outcomes.
- Comfortable presenting technical concepts to non-technical stakeholders.
What We Look For
- Excellent communication, comprehension, and presentation skills.
- High level of ownership, accountability, and reliability.
- Self-driven professional who can operate independently in a remote setup.
- Strong problem-solving mindset and attention to detail.
- Passion for GenAI, agentic systems, and emerging AI trends.
Why Grey Chain AI
Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.
Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.
Technical Lead – Golang | AWS | Database Design
Work Model: Hybrid (Mandatory Work From Office for the first 1 month in Chennai, followed by remote work)
Location: Chennai, India
Experience: 8–12 Years
Budget: 1L ~ 1.2L Monthly
Role Summary
We are seeking an experienced Technical Lead with strong expertise in Golang, AWS, and Database Design to spearhead backend development initiatives, drive architectural decisions, and mentor engineering teams. The ideal candidate will combine hands-on technical skills with leadership capabilities to deliver scalable, secure, and high-performance solutions.
Key Responsibilities
Backend Development Leadership:
Lead the design and development of backend systems using Golang and microservices architecture.
Ensure scalability, reliability, and maintainability of backend services.
Database Design & Optimization:
Own database schema modeling, normalization, and performance tuning.
Work with MySQL, PostgreSQL, and NoSQL databases to design efficient data storage solutions.
Implement strategies for query optimization and high availability.
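A tiny, self-contained illustration of the schema-modeling and query-optimization responsibilities, run here on SQLite purely for demonstration (the table and column names are invented; the production stack named above is MySQL/PostgreSQL):

```python
import sqlite3

# Normalized two-table schema, with an index supporting the common
# "orders by customer" lookup path.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        amount REAL NOT NULL
    );
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi')")
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(1, 250.0), (1, 100.0), (2, 75.0)],
)
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer_id = ?", (1,)
).fetchone()[0]
```

The index keeps the per-customer aggregate from scanning the whole `orders` table, which is the kind of tuning decision the role owns at scale.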
Cloud Infrastructure Management:
Architect and manage scalable solutions on AWS cloud services including EC2, ECS/EKS, Lambda, RDS, DynamoDB, and S3.
Ensure cost optimization, security compliance, and disaster recovery planning.
Technical Governance & Mentorship:
Review code, enforce best practices, and maintain coding standards.
Mentor and guide developers, fostering a culture of continuous learning and innovation.
Collaboration & Delivery:
Partner with product managers, architects, and stakeholders to align technical solutions with business goals.
Drive end-to-end delivery of projects with a focus on quality and timelines.
Production Support & Optimization:
Troubleshoot and resolve production issues.
Continuously monitor system performance and implement improvements.
Required Skills & Qualifications
Technical Expertise:
Strong hands-on experience with Golang in production-grade applications.
Solid knowledge of Database Design (MySQL, PostgreSQL, NoSQL).
Proficiency in AWS services (EC2, ECS/EKS, Lambda, RDS, DynamoDB, S3).
Strong understanding of microservices and distributed systems.
DevOps & Tools:
Experience with Docker, Kubernetes, and container orchestration.
Familiarity with CI/CD pipelines using tools like Jenkins, Maven, or GitHub Actions.
Soft Skills:
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities.
Ability to mentor and inspire engineering teams.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Job Summary
We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.
Key Responsibilities
- Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
- Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
- Containerize applications using Docker and manage deployments with Helm charts
- Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
- Provision and manage infrastructure using Terraform (Infrastructure as Code)
- Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
- Write and maintain Python scripts for automation, monitoring, and operational tasks
- Ensure high availability, scalability, performance, and cost optimization of cloud resources
- Implement and follow security best practices across AWS and Kubernetes environments
- Troubleshoot production issues, perform root cause analysis, and support incident resolution
- Collaborate closely with development and QA teams to streamline deployment and release processes
Required Skills & Qualifications
- 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
- Strong experience with AWS services, including:
- EC2, RDS, OpenSearch, VPC, S3
- Application Load Balancer (ALB), API Gateway, Lambda
- SNS and SQS.
- Hands-on experience with AWS EKS (Kubernetes)
- Strong knowledge of Docker and Helm charts
- Experience with Terraform for infrastructure provisioning and management
- Solid experience building and managing CI/CD pipelines using Jenkins
- Practical experience with Prometheus and Grafana for monitoring and alerting
- Proficiency in Python scripting for automation and operational tasks
- Good understanding of Linux systems, networking concepts, and cloud security
- Strong problem-solving and troubleshooting skills
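As a small example of the Python-scripting-for-monitoring requirement, here is a parser for the plain `name value` form of Prometheus exposition text (labels, HELP, and TYPE metadata are out of scope for this sketch):

```python
def parse_prometheus_metrics(text):
    """Parse simple Prometheus exposition lines into {metric_name: float}.

    Handles only un-labeled samples; real exposition output also
    carries label sets and # HELP / # TYPE comment lines, skipped here.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.partition(" ")
        metrics[name] = float(value)
    return metrics
```

A script like this is useful for one-off checks against a `/metrics` endpoint when Grafana dashboards aren't yet in place.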
Good to Have (Preferred Skills)
- Exposure to GitOps practices
- Experience managing multi-environment setups (Dev, QA, UAT, Production)
- Knowledge of cloud cost optimization techniques
- Understanding of Kubernetes security best practices
- Experience with log aggregation tools (e.g., ELK/OpenSearch stack)
Language Preference
- Fluency in English is mandatory.
- Fluency in Hindi is preferred.
We are seeking a highly skilled software developer with proven experience in developing and scaling education ERP solutions. The ideal candidate should have strong expertise in Node.js or PHP (Laravel), MySQL, and MongoDB, along with hands-on experience in implementing ERP modules such as HR, Exams, Inventory, Learning Management System (LMS), Admissions, Fee Management, and Finance.
Key Responsibilities
Design, develop, and maintain scalable Education ERP modules.
Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fees, and finance.
Build and optimize REST APIs/GraphQL services and ensure seamless integrations.
Optimize system performance, scalability, and security for high-volume ERP usage.
Conduct code reviews, enforce coding standards, and mentor junior developers.
Stay updated with emerging technologies and recommend improvements for ERP solutions.
Required Skills & Qualifications
Strong expertise in Node.js and PHP (Laravel, Core PHP).
Proficiency with MySQL, MongoDB, and PostgreSQL (database design & optimization).
Frontend knowledge: JavaScript, jQuery, HTML, CSS (React/Vue preferred).
Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
Hands-on with Git/GitHub, Docker, and CI/CD pipelines.
Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
4+ years of professional development experience, with a minimum of 2 years in ERP systems.
Preferred Experience
Prior work in the education ERP domain.
Deep knowledge of HR, Exam, Inventory, LMS, Admissions, Fees & Finance modules.
Exposure to high-traffic enterprise applications.
Strong leadership, mentoring, and problem-solving abilities
Benefit:
Permanent Work From Home
Procedure is hiring for Drover.
This is not a DevOps/SRE/cloud-migration role — this is a hands-on backend engineering and architecture role where you build the platform powering our hardware at scale.
About Drover
Ranching is getting harder. Increased labor costs and a volatile climate are placing mounting pressure to provide for a growing population. Drover is empowering ranchers to efficiently and sustainably feed the world by making it cheaper and easier to manage livestock, unlock productivity gains, and reduce carbon footprint with rotational grazing. Not only is this a $46B opportunity, you'll be working on a climate solution with the potential for real, meaningful impact.
We use patent-pending low-voltage electrical muscle stimulation (EMS) to steer and contain cows, replacing the need for physical fences or electric shock. We are building something that has never been done before, and we have hundreds of ranches on our waitlist.
Drover is founded by Callum Taylor (ex-Harvard), who comes from 5 generations of ranching, and Samuel Aubin, both of whom grew up in Australian ranching towns and have an intricate understanding of the problem space. We are well-funded and supported by Workshop Ventures, a VC firm with experience in building unicorn IoT companies.
We're looking to assemble a team of exceptional talent with a high eagerness to dive headfirst into understanding the challenges and opportunities within ranching.
About The Role
As our founding cloud engineer, you will be responsible for building and scaling the infrastructure that powers our IoT platform, connecting thousands of devices across ranches nationwide.
Because we are an early-stage startup, you will have high levels of ownership in what you build. You will play a pivotal part in architecting our cloud infrastructure, building robust APIs, and ensuring our systems can scale reliably. We are looking for someone who is excited about solving complex technical challenges at the intersection of IoT, agriculture, and cloud computing.
What You'll Do
- Develop Drover IoT cloud architecture from the ground up (it’s a greenfield project)
- Design and implement services to support wearable devices, mobile app, and backend API
- Implement data processing and storage pipelines
- Create and maintain Infrastructure-as-Code
- Support the engineering team across all aspects of early-stage development -- after all, this is a startup
Requirements
- 5+ years of experience developing cloud architecture on AWS
- In-depth understanding of various AWS services, especially those related to IoT
- Expertise in cloud-hosted, event-driven, serverless architectures
- Expertise in programming languages suitable for AWS microservices (e.g., TypeScript, Python)
- Experience with networking and socket programming
- Experience with Kubernetes or similar orchestration platforms
- Experience with Infrastructure-as-Code tools (e.g., Terraform, AWS CDK)
- Familiarity with relational databases (PostgreSQL)
- Familiarity with Continuous Integration and Continuous Deployment (CI/CD)
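As a sketch of the event-driven, serverless style the requirements describe, here is a Lambda-style handler validating a simplified device telemetry event (the event shape and field names are hypothetical, not Drover's actual schema):

```python
def handler(event, context=None):
    """Lambda-style entry point for a simplified device telemetry event.

    Expected (hypothetical) event shape: {"device_id": str, "battery_pct": number}.
    """
    device_id = event.get("device_id")
    battery = event.get("battery_pct")
    if device_id is None or battery is None:
        return {"statusCode": 400, "body": "missing fields"}
    return {
        "statusCode": 200,
        "body": {"device_id": device_id, "low_battery": battery < 20},
    }
```

In an AWS deployment this would sit behind IoT Core rules or an SQS trigger; the handler signature is the standard `(event, context)` Lambda convention.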
Nice To Have
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or a related field
We are looking for a skilled Node.js Developer with PHP experience to build, enhance, and maintain ERP and EdTech platforms. The role involves developing scalable backend services, integrating ERP modules, and supporting education-focused systems such as LMS, student management, exams, and fee management.
Key Responsibilities
Develop and maintain backend services using Node.js and PHP.
Build and integrate ERP modules for EdTech platforms (Admissions, Students, Exams, Attendance, Fees, Reports).
Design and consume RESTful APIs and third-party integrations (payment gateway, SMS, email).
Work with databases (MySQL / MongoDB / PostgreSQL) for high-volume education data.
Optimize application performance, scalability, and security.
Collaborate with frontend, QA, and product teams.
Debug, troubleshoot, and provide production support.
Required Skills
Strong experience in Node.js (Express.js / NestJS).
Working experience in PHP (Core PHP / Laravel / CodeIgniter).
Hands-on experience with ERP systems.
Domain experience in EdTech / Education ERP / LMS.
Strong knowledge of MySQL and database design.
Experience with authentication, role-based access, and reporting.
Familiarity with Git, APIs, and server environments.
Preferred Skills
Experience with online examination systems.
Knowledge of cloud platforms (AWS / Azure).
Understanding of security best practices (CSRF, XSS, SQL Injection).
Exposure to microservices or modular architecture.
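As a minimal illustration of the SQL-injection item above: parameterized queries keep attacker-controlled input from altering the query. This sketch uses Python's standard-library sqlite3; the table and data are hypothetical:

```python
import sqlite3

# In-memory database with a hypothetical users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Unsafe: string formatting lets crafted input rewrite the WHERE clause.
malicious = "x' OR '1'='1"
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % malicious
).fetchall()

# Safe: a parameterized query treats the input as a plain value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # the injected OR clause matches every row
print(safe)    # the literal string matches nothing
```

The same principle applies to CSRF and XSS: never interpolate untrusted input directly into a sensitive context.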
Qualification
Bachelor’s degree in Computer Science or equivalent experience.
3–6 years of relevant experience in Node.js & PHP development.
Job Summary
We are seeking an experienced Databricks Developer with strong skills in PySpark, SQL, Python, and hands-on experience deploying data solutions on AWS (preferred) or Azure. The role involves designing, developing, and optimizing scalable data pipelines and analytics workflows on the Databricks platform.
Key Responsibilities
- Develop and optimize ETL/ELT pipelines using Databricks and PySpark.
- Build scalable data workflows on AWS (EC2, S3, Glue, Lambda, IAM) or Azure (ADF, ADLS, Synapse).
- Implement and manage Delta Lake (ACID, schema evolution, time travel).
- Write efficient, complex SQL for transformation and analytics.
- Build and support batch and streaming ingestion (Kafka, Kinesis, EventHub).
- Optimize Databricks clusters, jobs, notebooks, and PySpark performance.
- Collaborate with cross-functional teams to deliver reliable data solutions.
- Ensure data governance, security, and compliance.
- Troubleshoot pipelines and support CI/CD deployments.
Required Skills & Experience
- 4–8 years in Data Engineering / Big Data development.
- Strong hands-on experience with Databricks (clusters, jobs, workflows).
- Advanced PySpark and strong Python skills.
- Expert-level SQL (complex queries, window functions).
- Practical experience with AWS (preferred) or Azure cloud services.
- Experience with Delta Lake, Parquet, and data lake architectures.
- Familiarity with CI/CD tools (GitHub Actions, Azure DevOps, Jenkins).
- Good understanding of data modeling, optimization, and distributed systems.
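As a small illustration of the "expert-level SQL (window functions)" requirement above, here is a ranking query; it uses Python's standard-library sqlite3 so it runs anywhere, though the same SQL would typically run via Spark SQL on Databricks (table and column names are hypothetical):

```python
import sqlite3

# In-memory database with a hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100), ("east", 300), ("west", 200), ("west", 50)],
)

# Window function: rank each sale within its region by amount.
rows = conn.execute(
    """
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
    """
).fetchall()

for region, amount, rnk in rows:
    print(region, amount, rnk)
```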
Summary:
We are seeking a highly skilled Python Backend Developer with proven expertise in FastAPI to join our team as a full-time contractor for 12 months. The ideal candidate will have 5+ years of experience in backend development, a strong understanding of API design, and the ability to deliver scalable, secure solutions. Knowledge of front-end technologies is an added advantage. Immediate joiners are preferred. This role requires a full-time commitment; please apply only if you are not engaged in other projects.
Job Type:
Full-Time Contractor (12 months)
Location:
Remote / On-site (Jaipur preferred, as per project needs)
Experience:
5+ years in backend development
Key Responsibilities:
- Design, develop, and maintain robust backend services using Python and FastAPI.
- Implement and manage Prisma ORM for database operations.
- Build scalable APIs and integrate with SQL databases and third-party services.
- Deploy and manage backend services using Azure Function Apps and Microsoft Azure Cloud.
- Collaborate with front-end developers and other team members to deliver high-quality web applications.
- Ensure application performance, security, and reliability.
- Participate in code reviews, testing, and deployment processes.
Required Skills:
- Expertise in Python backend development with strong experience in FastAPI.
- Solid understanding of RESTful API design and implementation.
- Proficiency in SQL databases and ORM tools (preferably Prisma).
- Hands-on experience with Microsoft Azure Cloud and Azure Function Apps.
- Familiarity with CI/CD pipelines and containerization (Docker).
- Knowledge of cloud architecture best practices.
Added Advantage:
- Front-end development knowledge (React, Angular, or similar frameworks).
- Exposure to AWS/GCP cloud platforms.
- Experience with NoSQL databases.
Eligibility:
- Minimum 5 years of professional experience in backend development.
- Available for full-time engagement.
- Please do not apply if you are currently engaged in other projects; we require dedicated availability.
Role: Senior Backend Engineer (Node.js + TypeScript + Postgres)
Location: Pune
Type: Full-Time
Who We Are:
After a highly successful launch, Azodha is ready to take its next major step. We are seeking a passionate and experienced Senior Backend Engineer to build and enhance a disruptive healthcare product. This is a unique opportunity to get in on the ground floor of a fast-growing startup and play a pivotal role in shaping both the product and the team.
If you are an experienced backend engineer who thrives in an agile startup environment and has a strong technical background, we want to hear from you!
About The Role:
As a Senior Backend Engineer at Azodha, you'll play a key role in architecting and driving development of our AI-led, interoperable digital enablement platform. You will work closely with the founder/CEO to refine the product vision, drive product innovation and delivery, and grow with a strong technical team.
What You’ll Do:
* Technical Excellence: Design, develop, and scale backend services using Node.js and TypeScript, including REST and GraphQL APIs. Ensure systems are scalable, secure, and high-performing.
* Data Management and Integrity: Work with Prisma or TypeORM, and relational databases like PostgreSQL and MySQL.
* Continuous Improvement: Stay updated with the latest trends in backend development, incorporating new technologies where appropriate. Drive innovation and efficiency within the team.
* Utilize ORMs such as Prisma or TypeORM to interact with databases and ensure data integrity.
* Follow Agile sprint methodology for development.
* Conduct code reviews to maintain code quality and adherence to best practices.
* Optimize API performance for optimal user experiences.
* Participate in the entire development lifecycle, from initial planning and design through maintenance.
* Troubleshoot and debug issues to ensure system stability.
* Collaborate with QA teams to ensure high quality releases.
* Mentor and provide guidance to junior developers, offering technical expertise and constructive feedback.
Requirements
* Bachelor's degree in Computer Science, Software Engineering, or a related field.
* 5+ years of hands-on experience in backend development using Node.js and TypeScript.
* Experience working with PostgreSQL or MySQL.
* Proficiency in TypeScript and its application in Node.js.
* Experience with ORMs such as Prisma or TypeORM.
* Familiarity with Agile development methodologies.
* Strong analytical and problem-solving skills.
* Ability to work independently and in a team-oriented, fast-paced environment.
* Excellent written and oral communication skills.
* Self-motivated and proactive attitude.
Preferred:
* Experience with other backend technologies and languages.
* Familiarity with continuous integration and deployment processes.
* Contributions to open-source projects related to backend development.
Note: please do not apply if your primary database is not PostgreSQL.
Join our team of talented engineers and be part of building cutting-edge backend systems that drive our applications. As a Senior Backend Engineer, you'll have the opportunity to shape the future of our backend infrastructure and contribute to the company's success. If you are passionate about backend development and meet the above requirements, we encourage you to apply and become a valued member of our team at Azodha.