50+ AWS (Amazon Web Services) Jobs in India

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection (see the sketch after this list)
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
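The posting specifies Python, ML, and AWS but no particular libraries, so the following is only a minimal sketch of one plausible workflow: train an unsupervised anomaly-detection model with scikit-learn on transaction features and stage the artifact in S3 for a later SageMaker or Lambda-based deployment. The bucket name, file paths, and feature columns are illustrative assumptions, not requirements from the listing.

```python
"""Hedged sketch: unsupervised fraud/anomaly scoring staged to S3."""
import boto3
import joblib
import pandas as pd
from sklearn.ensemble import IsolationForest

# Assumed transaction feature file and columns; replace with the real schema.
df = pd.read_csv("transactions.csv")
features = df[["amount", "merchant_risk_score", "txns_last_hour"]]

# IsolationForest flags outliers: predict() returns -1 for anomalous rows, 1 otherwise.
model = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
model.fit(features)
df["anomaly_flag"] = model.predict(features)

# Persist the model and push it to S3, where a SageMaker endpoint or a Lambda
# inference function (as listed above) could load it.
joblib.dump(model, "fraud_model.joblib")
boto3.client("s3").upload_file(
    "fraud_model.joblib",           # local path
    "example-securepay-models",     # assumed bucket
    "fraud/fraud_model.joblib",     # assumed key
)
```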
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest (see the sketch after this list)
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
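As a concrete illustration of the FastAPI-plus-pytest work described above, here is a minimal, self-contained sketch. The endpoint name, payload fields, and test values are assumptions for illustration only, not part of MyOperator's actual API.

```python
"""Hedged sketch: a FastAPI endpoint with a pytest test (run with `pytest`)."""
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class CampaignRequest(BaseModel):
    name: str
    channel: str  # e.g. "whatsapp" or "voice"

@app.post("/campaigns")
def create_campaign(req: CampaignRequest) -> dict:
    # A real service would persist to MySQL/Postgres and enqueue downstream work.
    return {"status": "created", "name": req.name, "channel": req.channel}

client = TestClient(app)

def test_create_campaign():
    resp = client.post("/campaigns", json={"name": "diwali_promo", "channel": "whatsapp"})
    assert resp.status_code == 200
    assert resp.json()["status"] == "created"
```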
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
JOB DETAILS:
* Job Title: Lead I - Software Engineering - Kotlin, Java, Spring Boot, AWS
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5 -7 years
* Location: Trivandrum, Thiruvananthapuram
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities
Skill Examples:
- Explain and communicate the design / development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces and business software components
- Use data models
- Estimate time and effort required for developing / debugging features / components
- Perform and evaluate tests in the customer or target environment
- Make quick decisions on technical/project related challenges
- Manage a team, mentor, and handle people-related issues within the team
- Maintain high motivation levels and positive dynamics in the team.
- Interface with other teams’ designers and other parallel practices
- Set goals for self and team. Provide feedback to team members
- Create and articulate impactful technical presentations
- Follow a high level of business etiquette in emails and other business communication
- Drive conference calls with customers, addressing customer questions
- Proactively ask for and offer help
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
- Build confidence with customers by meeting the deliverables on time with quality.
- Estimate the time, effort, and resources required for developing / debugging features / components
- Make appropriate utilization of software and hardware.
- Strong analytical and problem-solving abilities
Knowledge Examples:
- Appropriate software programs / modules
- Functional and technical designing
- Programming languages – proficient in multiple skill clusters
- DBMS
- Operating Systems and software platforms
- Software Development Life Cycle
- Agile – Scrum or Kanban Methods
- Integrated development environment (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of customer domain and deep understanding of sub domain where problem is solved
Additional Comments:
We are seeking an experienced Senior Backend Engineer with strong expertise in Kotlin and Java to join our dynamic engineering team.
The ideal candidate will have a deep understanding of backend frameworks, cloud technologies, and scalable microservices architectures, with a passion for clean code, resilience, and system observability.
You will play a critical role in designing, developing, and maintaining core backend services that power our high-availability e-commerce and promotion platforms.
Key Responsibilities
Design, develop, and maintain backend services using Kotlin (JVM, Coroutines, Serialization) and Java.
Build robust microservices with Spring Boot and related Spring ecosystem components (Spring Cloud, Spring Security, Spring Kafka, Spring Data).
Implement efficient serialization/deserialization using Jackson and Kotlin Serialization.
Develop, maintain, and execute automated tests using JUnit 5, Mockk, and ArchUnit to ensure code quality.
Work with Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB, and Redis for data storage and caching needs.
Deploy and manage services in an AWS environment leveraging DynamoDB, Lambdas, and IAM.
Implement CI/CD pipelines with GitLab CI to automate build, test, and deployment processes.
Containerize applications using Docker and integrate monitoring using Datadog for tracing, metrics, and dashboards.
Define and maintain infrastructure as code using Terraform for services including GitLab, Datadog, Kafka, and Optimizely.
Develop and maintain RESTful APIs with OpenAPI (Swagger) and JSON API standards.
Apply resilience patterns using Resilience4j to build fault-tolerant systems.
Adhere to architectural and design principles such as Domain-Driven Design (DDD), Object-Oriented Programming (OOP), and Contract Testing (Pact).
Collaborate with cross-functional teams in an Agile Scrum environment to deliver high-quality features.
Utilize feature flagging tools like Optimizely to enable controlled rollouts.
Mandatory Skills & Technologies
- Languages: Kotlin (JVM, Coroutines, Serialization), Java
- Frameworks: Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data)
- Serialization: Jackson, Kotlin Serialization
- Testing: JUnit 5, Mockk, ArchUnit
- Data: Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB (NoSQL), Redis (Caching)
- Cloud: AWS (DynamoDB, Lambda, IAM)
- CI/CD: GitLab CI
- Containers: Docker
- Monitoring & Observability: Datadog (Tracing, Metrics, Dashboards, Monitors)
- Infrastructure as Code: Terraform (GitLab, Datadog, Kafka, Optimizely)
- API: OpenAPI (Swagger), REST API, JSON API
- Resilience: Resilience4j
- Architecture & Practices: Domain-Driven Design (DDD), Object-Oriented Programming (OOP), Contract Testing (Pact), Feature Flags (Optimizely)
- Platforms: E-Commerce Platform (CommerceTools), Promotion Engine (Talon.One)
- Methodologies: Scrum, Agile
Skills: Kotlin, Java, Spring Boot, AWS
Must-Haves
Kotlin (JVM, Coroutines, Serialization), Java, Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data), AWS (DynamoDB, Lambda, IAM), Microservices Architecture
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
Virtual Weekend Interview on 7th Feb 2026 - Saturday
5–10 years of experience in backend or full-stack development (Java, C#, Python, or Node.js preferred).
• Design, develop, and deploy full-stack web applications (front-end, back-end, APIs, and databases).
• Build responsive, user-friendly UIs using modern JavaScript frameworks (React, Vue, or Angular).
• Develop robust backend services and RESTful or GraphQL APIs using Node.js, Python, Java, or similar technologies.
• Manage and optimize databases (SQL and NoSQL).
• Collaborate with UX/UI designers, product managers, and QA engineers to refine requirements and deliver solutions.
• Implement CI/CD pipelines and support cloud deployments (AWS, Azure, or GCP).
• Write clean, testable, and maintainable code with appropriate documentation.
• Monitor performance, identify bottlenecks, and troubleshoot production issues.
• Stay up to date with emerging technologies and recommend improvements to tools, processes, and architecture.
• Proficiency in front-end technologies: HTML5, CSS3, JavaScript/TypeScript, and frameworks like React, Vue.js, or Angular.
• Strong experience with server-side programming (Node.js, Python/Django, Java/Spring Boot, or .NET).
• Experience with databases: PostgreSQL, MySQL, MongoDB, or similar.
• Familiarity with API design, microservices architecture, and REST/GraphQL best practices.
• Working knowledge of version control (Git/GitHub) and DevOps pipelines.
• Understanding of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).

🚀 Hiring: Associate Tech Architect / Senior Tech Specialist
🌍 Remote | Contract Opportunity
We’re looking for a seasoned tech professional who can lead the design and implementation of cloud-native data and platform solutions. This is a remote, contract-based role for someone with strong ownership and architecture experience.
🔴 Mandatory & Most Important Skill Set
Hands-on expertise in the following technologies is essential:
✅ AWS – Cloud architecture & services
✅ Python – Backend & data engineering
✅ Terraform – Infrastructure as Code
✅ Airflow – Workflow orchestration
✅ SQL – Data processing & querying
✅ DBT – Data transformation & modeling
💼 Key Responsibilities
- Architect and build scalable AWS-based data platforms
- Design and manage ETL/ELT pipelines
- Orchestrate workflows using Airflow (see the sketch after this list)
- Implement cloud infrastructure using Terraform
- Lead best practices in data architecture, performance, and scalability
- Collaborate with engineering teams and provide technical leadership
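To make the Airflow item above concrete, here is a minimal sketch of a DAG that runs a daily dbt build. The DAG id, schedule, and project path are assumptions; a production pipeline would add sensors, alerting, and environment-specific configuration.

```python
"""Hedged sketch: an Airflow DAG orchestrating a daily dbt build."""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_build",          # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    # Assumes a dbt project checked out at /opt/dbt/analytics on the workers.
    dbt_deps = BashOperator(
        task_id="dbt_deps",
        bash_command="cd /opt/dbt/analytics && dbt deps",
    )
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="cd /opt/dbt/analytics && dbt build --target prod",
    )

    dbt_deps >> dbt_build
```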
🎯 Ideal Profile
✔ Strong experience in cloud and data platform architecture
✔ Ability to take end-to-end technical ownership
✔ Comfortable working in a remote, distributed team environment
📄 Role Type: Contract
🌍 Work Mode: 100% Remote
If you have deep expertise in these core technologies and are ready to take on a high-impact architecture role, we’d love to hear from you.
JOB DETAILS:
* Job Title: Associate III - Data Engineering
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-6 years
* Location: Trivandrum, Kochi
Job Description
Job Title:
Data Services Engineer – AWS & Snowflake
Job Summary:
As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.
You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.
Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.
Responsibilities:
• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema)
• Optimize query performance using clustering keys, materialized views, and caching
• Develop and maintain Snowflake data warehouses and data marts.
• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks); see the sketch after this list.
• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica)
• Utilize Snowpark and Python/Java for complex transformations
• Implement RBAC, data masking, and row-level security.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with stakeholders to gather data requirements and deliver solutions.
• Ensure data quality, governance, and compliance with industry standards.
• Monitor, troubleshoot, and resolve data pipeline and performance issues.
• Document data architecture, processes, and best practices.
• Support data migration and integration from various sources.
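As a hedged illustration of the Streams/Tasks item above, the sketch below creates a change-capture stream and a scheduled merge task from Python via the Snowflake connector. The account, warehouse, table, and column names are placeholders, not details from this role.

```python
"""Hedged sketch: Snowflake Stream + Task created through the Python connector."""
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder credentials
    user="example_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="RAW",
)

statements = [
    # Capture row-level changes on the raw landing table.
    "CREATE STREAM IF NOT EXISTS raw_orders_stream ON TABLE raw_orders",
    # Scheduled task that folds new rows into a curated table (columns are illustrative).
    """
    CREATE TASK IF NOT EXISTS load_orders_task
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '15 MINUTE'
    AS
      INSERT INTO curated.orders (order_id, amount, created_at)
      SELECT order_id, amount, created_at
      FROM raw_orders_stream
      WHERE METADATA$ACTION = 'INSERT'
    """,
    "ALTER TASK load_orders_task RESUME",   # tasks are created suspended
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
cur.close()
conn.close()
```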
Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 3 to 4 years of hands-on experience in data engineering or data services.
• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).
• Strong expertise in Snowflake architecture, development, and optimization.
• Proficiency in SQL and Python for data manipulation and scripting.
• Solid understanding of ETL/ELT processes and data modeling.
• Experience with data integration tools and orchestration frameworks.
• Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
• AWS Glue, AWS Lambda, Amazon Redshift
• Snowflake Data Warehouse
• SQL & Python
Skills: AWS Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse
Must-Haves
AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)
Skills: AWS, AWS Lambda, Snowflake, Data Engineering, Snowpipe, Data Integration Tools, Orchestration Frameworks
Relevant experience: 4–6 years
Python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
What You’ll Do:
We’re looking for a Full Stack Software Engineer to join us early, own critical systems, and help shape both the product and the engineering culture from day one.
Responsibilities will include but are not limited to:
- Own end-to-end product development, from user experience to backend integration
- Build and scale a modern SPA using React, TypeScript, Vite, and Tailwind
- Design intuitive, high-trust UIs for finance workflows (payments, approvals, dashboards)
- Collaborate closely with backend systems written in Go via well-designed APIs
- Translate product requirements into clean, maintainable components and state models
- Optimize frontend performance, bundle size, and load times for complex dashboards
- Work directly with founders and design partners to iterate rapidly on product direction
- Establish frontend best practices around architecture, testing, and developer experience
- Contribute across the stack when needed, including API design and data modeling discussions.
What You’ll Need:
- Strong experience with Go in production systems
- Solid backend fundamentals: APIs, distributed systems, concurrency, and data modeling
- Hands-on experience with AWS, including deploying and operating production services
- Deep familiarity with Postgres, including schema design, indexing, and performance considerations
- Comfort working in early-stage environments with ambiguity, ownership, and rapid iteration
- Product mindset — you care about why you’re building something, not just how
- Strong problem-solving skills and the ability to make pragmatic tradeoffs
Set Yourself Apart With:
- Experience with Tailwind or other utility-first CSS frameworks
- Familiarity with design systems and component libraries
- Experience building fintech or enterprise SaaS UIs
- Exposure to AI-powered UX (LLM-driven workflows, assistants, or automation)
- Prior experience as an early engineer or founder, building a product engineering culture from the ground up
JOB DETAILS:
* Job Title: Lead I - Web API, C# .NET, .NET Core, AWS (Mandatory)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 6 -9 years
* Location: Hyderabad
Job Description
Role Overview
We are looking for a highly skilled Senior .NET Developer who has strong experience in building scalable, high‑performance backend services using .NET Core and C#, with hands‑on expertise in AWS cloud services. The ideal candidate should be capable of working in an Agile environment, collaborating with cross‑functional teams, and contributing to both design and development. Experience with React and Datadog monitoring tools will be an added advantage.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using .NET Core and C#.
- Work with AWS services (Lambda, S3, ECS/EKS, API Gateway, RDS, etc.) to build cloud‑native applications.
- Collaborate with architects and senior engineers on solution design and implementation.
- Write clean, scalable, and well‑documented code.
- Use Postman to build and test RESTful APIs.
- Participate in code reviews and provide technical guidance to junior developers.
- Troubleshoot and optimize application performance.
- Work closely with QA, DevOps, and Product teams in an Agile setup.
- (Optional) Contribute to frontend development using React.
- (Optional) Use Datadog for monitoring, logging, and performance metrics.
Required Skills & Experience
- 6+ years of experience in backend development.
- Strong proficiency in C# and .NET Core.
- Experience building RESTful services and microservices.
- Hands‑on experience with AWS cloud platform.
- Solid understanding of API testing using Postman.
- Knowledge of relational databases (SQL Server, PostgreSQL, etc.).
- Strong problem‑solving and debugging skills.
- Experience working in Agile/Scrum teams.
Good to Have
- Experience with React for frontend development.
- Exposure to Datadog for monitoring and logging.
- Knowledge of CI/CD tools (GitHub Actions, Jenkins, AWS CodePipeline, etc.).
- Containerization experience (Docker, Kubernetes).
Soft Skills
- Strong communication and collaboration abilities.
- Ability to work in a fast‑paced environment.
- Ownership mindset with a focus on delivering high‑quality solutions.
Skills
.NET Core, C#, AWS, Postman
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Interview: 7th Feb 2026
First round will be Virtual
2nd round will be F2F
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation testing + Python + AWS)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4 -10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Develop, maintain, and execute automation test scripts using Python.
- Build reliable and reusable test automation frameworks for web and cloud-based applications.
- Work with AWS cloud services for test execution, environment management, and integration needs (see the sketch after this list).
- Perform functional, regression, and integration testing as part of the QA lifecycle.
- Analyze test failures, identify root causes, raise defects, and collaborate with development teams.
- Participate in requirement review, test planning, and strategy discussions.
- Contribute to CI/CD setup and integration of automation suites.
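The sketch below illustrates the Python-plus-AWS flavour of automation referenced above: a pytest case that invokes a Lambda and verifies its output object in S3 with boto3. The function name, bucket, and response contract are assumptions made for the example.

```python
"""Hedged sketch: pytest + boto3 check of a Lambda-to-S3 workflow."""
import json

import boto3
import pytest

LAMBDA_NAME = "example-report-generator"   # assumed function name
BUCKET = "example-qa-artifacts"            # assumed bucket

@pytest.fixture(scope="module")
def aws():
    return boto3.Session(region_name="ap-south-1")   # assumed region

def test_lambda_writes_report_to_s3(aws):
    resp = aws.client("lambda").invoke(
        FunctionName=LAMBDA_NAME,
        Payload=json.dumps({"report_date": "2025-01-01"}).encode(),
    )
    assert resp["StatusCode"] == 200

    body = json.loads(resp["Payload"].read())
    key = body["report_key"]               # assumed response field

    head = aws.client("s3").head_object(Bucket=BUCKET, Key=key)
    assert head["ContentLength"] > 0
```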
Required Experience:
- Strong hands-on experience in Automation Testing.
- Proficiency in Python for automation scripting and framework development.
- Understanding and practical exposure to AWS services (Lambda, EC2, S3, CloudWatch, or similar).
- Good knowledge of QA methodologies, SDLC/STLC, and defect management.
- Familiarity with automation tools/frameworks (e.g., Selenium, PyTest).
- Experience with Git or other version control systems.
Good to Have:
- API testing experience (REST, Postman, REST Assured).
- Knowledge of Docker/Kubernetes.
- Exposure to Agile/Scrum environment.
Skills: Automation testing, Python, Java, ETL, AWS
JOB DETAILS:
* Job Title: Tester III - Software Testing- Playwright + API testing
* Industry: Global digital transformation solutions provide
* Salary: Best in Industry
* Experience: 4 -10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and maintain automated test scripts for web applications using Playwright (see the sketch after this list).
- Perform API testing using industry-standard tools and frameworks.
- Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
- Analyze test results, identify defects, and track them to closure.
- Participate in requirement reviews, test planning, and test strategy discussions.
- Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.
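For concreteness, here is a minimal Python-based sketch of the Playwright-plus-API combination above, runnable with pytest and the pytest-playwright plugin. The base URL, selectors, and endpoints are assumptions, not details of the actual application under test.

```python
"""Hedged sketch: Playwright UI check plus a plain API check under pytest."""
import requests
from playwright.sync_api import Page, expect

BASE_URL = "https://staging.example.com"   # assumed test environment

def test_login_page_renders(page: Page):
    # The `page` fixture comes from the pytest-playwright plugin.
    page.goto(f"{BASE_URL}/login")
    expect(page.get_by_role("button", name="Sign in")).to_be_visible()

def test_health_endpoint():
    # API-level assertion that can sit alongside the UI suite.
    resp = requests.get(f"{BASE_URL}/api/health", timeout=10)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```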
Required Experience:
- Strong hands-on experience in Automation Testing for web-based applications.
- Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
- Solid experience in API testing (Postman, REST Assured, or similar tools).
- Good understanding of software QA methodologies, tools, and processes.
- Ability to write clear, concise test cases and automation scripts.
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.
Good to Have:
- Knowledge of cloud environments (AWS/Azure)
- Experience with version control tools like Git
- Familiarity with Agile/Scrum methodologies
Skills: Automation Testing, SQL, API Testing, SoapUI Testing, Playwright
Numino Labs
Business: Software product engineering services; offices in Pune and Goa.
Clients: Software product companies in the USA.
Business model: Exclusive teams for working on client products; direct and daily interactions with clients
Client
Silicon Valley startup in GenAI with 45M+ in funding.
Product: B2B SaaS.
Core IP: Physics AI foundation model for hardware designers, with a specific focus on semiconductor chip design.
Customers: World's top chip manufacturers
Responsibilities
- Team player: Delivers effectively with teams; interpersonal skills, communication skills, risk management skills
- Technical Leadership: Works with ambiguous requirements, designs solutions, independently drives delivery to customers
- Hands-on coder: Leverages AI to drive implementation across React.js, Python, databases, unit testing, test automation, cloud infrastructure, and CI/CD automation.
Requirements
- Strong computer science fundamentals: data structures & algorithms, networking, RDBMS, and distributed computing
- 8-15 years of experience on Python Stack: Behave, PyTest, Python Generators & async operations, multithreading, context managers, decorators, descriptors
- Python frameworks: FastAPI, Flask, Django, or SQLAlchemy
- Expertise in Microservices, REST/gRPC APIs design, Authentication, Single Sign-on
- Experience in high performance delivering solutions on Cloud
- Some experience in frontend: JavaScript and Next.js/React.js
- Some experience in DevOps, Cloud Infra Automation, Test Automation
Position Title: IT Intern (Full Time)
Department: Information Technology
Work Mode: Work From Home (WFH)
Educational Qualification: B.Tech (IT) / M.Tech (IT)
Shift: Rotational Shifts (6am to 3pm, 2pm to 11pm, and 10pm to 07am)
---
Role Summary
The IT Intern will support day-to-day IT operations and assist in maintaining the organization’s IT infrastructure. This role provides structured exposure to desktop support, cloud platforms, user management, server infrastructure, and IT security practices under the guidance of senior IT team members.
---
Key Responsibilities
· Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
· Support user account management activities using Microsoft Entra ID (formerly Azure Active Directory), Active Directory, and Microsoft 365.
· Assist the IT team in configuring, monitoring, and supporting AWS cloud services, including EC2, S3, IAM, and WorkSpaces (see the sketch after this list).
· Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
· Assist with data backups, basic disaster recovery tasks, and implementation of security procedures in line with company policies.
· Create, update, and maintain technical documentation, SOPs, and knowledge base articles.
· Collaborate with internal teams to support system upgrades, IT infrastructure improvements, and ongoing IT projects.
· Adhere to company IT policies, data security standards, and confidentiality requirements.
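As a small, hedged example of the AWS support work listed above, the script below lists running EC2 instances and flags untagged S3 buckets with boto3. The region and the choice of checks are assumptions, not tasks prescribed by this posting.

```python
"""Hedged sketch: routine boto3 inventory checks an IT intern might assist with."""
import boto3
from botocore.exceptions import ClientError

session = boto3.Session(region_name="ap-south-1")   # assumed region

# Running EC2 instances.
ec2 = session.client("ec2")
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for inst in reservation["Instances"]:
        print("running:", inst["InstanceId"], inst["InstanceType"])

# S3 buckets with no tags (often a housekeeping/policy gap worth flagging).
s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    try:
        s3.get_bucket_tagging(Bucket=bucket["Name"])
    except ClientError:
        print("untagged bucket:", bucket["Name"])
```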
---
Required Skills & Competencies
· Basic understanding of IT infrastructure, networking concepts, and operating systems
· Familiarity with cloud platforms such as AWS and/or Microsoft Azure
· Fundamental knowledge of Active Directory and user access management
· Strong willingness to learn and adapt to new technologies
· Good analytical, problem-solving, and communication skills
· Ability to work independently in a remote environment
---
Technical Requirements
· Personal laptop/desktop with required specifications
· Reliable internet connectivity to support remote work
---
Learning & Development Opportunities
· Hands-on exposure to enterprise IT environments
· Practical experience with cloud technologies and infrastructure support
· Mentorship from experienced IT professionals
· Opportunity to develop technical, documentation, and operational skills
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
About the Role:
We are seeking a highly skilled and motivated individual to join our development team. The ideal candidate will have extensive experience with Node.js, AWS, and MongoDB, along with a strong sense of proactiveness and ownership.
Technical Expertise:
- Architect, design, and develop scalable and efficient backend services using Node.js (Nest.js).
- Design and manage cloud-based infrastructure on AWS, including EC2, ECS, RDS, Lambda, and other services.
- Work with MongoDB to design, implement, and maintain high-performance database solutions.
- Leverage Kafka, Docker and serverless technologies like SST to streamline deployments and infrastructure management.
- Optimize application performance and scalability across the stack.
- Ensure security and compliance standards are met across all development and deployment processes.
Bonus Points:
- Experience with other backend languages such as Python, and hands-on work with Agentic AI
- Security knowledge and best practices.
JOB DETAILS:
- Job Title: Lead II - Software Engineering - React Native, Mobile App Architecture, Performance Optimization & Scalability
- Industry: Global digital transformation solutions provider
- Experience: 7-9 years
- Working Days: 5 days/week
- Job Location: Mumbai
- CTC Range: Best in Industry
Job Description
Job Title
Lead React Native Developer (6–8 Years Experience)
Position Overview
We are looking for a Lead React Native Developer to provide technical leadership for our mobile applications. This role involves owning architectural decisions, setting development standards, mentoring teams, and driving scalable, high-performance mobile solutions aligned with business goals.
Must-Have Skills
- 6–8 years of experience in mobile application development
- Extensive hands-on experience leading React Native projects
- Expert-level understanding of React Native architecture and internals
- Strong knowledge of mobile app architecture patterns
- Proven experience with performance optimization and scalability
- Experience in technical leadership, team management, and mentorship
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Proficiency in modern React Native development practices
- Experience with Expo toolkit and libraries
- Strong understanding of custom hooks development
- Focus on writing clean, maintainable, and scalable code
- Understanding of mobile app lifecycle
- Knowledge of cross-platform design consistency
Good-to-Have Skills
- Experience with microservices architecture
- Knowledge of cloud platforms such as AWS, Firebase, etc.
- Understanding of DevOps practices and CI/CD pipelines
- Experience with A/B testing and feature flag implementation
- Familiarity with machine learning integration in mobile applications
- Exposure to innovation-driven technical decision-making
Skills: React Native, Mobile App Development, DevOps, Machine Learning
******
Notice period - 0 to 15 days only (Need Feb Joiners)
Location: Navi Mumbai, Belapur
Skills Required:
- Deep expertise in modern frontend frameworks - React, Next.js, Vue, or Svelte.
- Proficiency in *TypeScript, JavaScript (ES6+), and functional programming patterns.
- Experience in state management (Zustand, Redux Toolkit, Recoil, Jotai) and component-driven architecture.
- Deep expertise in backend architecture using Python (FastAPI, Django), Node.js (NestJS, Express), or GoLang.
- Strong experience with cloud infrastructure - AWS, GCP, Azure, and containerization (Docker, Kubernetes).
- Proficiency in infrastructure-as-code (Terraform, Pulumi, Ansible).
- Mastery in CI/CD pipelines, GitOps workflows, and deployment automation (GitHub Actions, Jenkins, ArgoCD, Flux).
- Experience building high-performance distributed systems, APIs, and microservices architectures.
- Understanding of event-driven systems, message queues, and streaming platforms (Kafka, RabbitMQ, Redis Streams).
- Familiarity with database design and scaling - PostgreSQL, MongoDB, DynamoDB, TimescaleDB.
- Deep understanding of system observability, tracing, and performance tuning (Prometheus, Grafana, OpenTelemetry).
- Familiarity with AI integration stacks - deploying and scaling LLMs, vector databases (Pinecone, Weaviate, Milvus), and inference APIs (vLLM, Ollama, TensorRT).
- Awareness of DevSecOps practices, zero-trust architecture, and cloud cost optimization.
- Bonus: Hands-on with Rust, WebAssembly, or edge computing platforms (Fly.io, Cloudflare Workers, AWS Greengrass).
What You Bring
First-principles thinking
You understand how systems work at a foundational level. When something breaks, you reason backward from the error to potential causes. You know the difference between a network timeout, a malformed query, a race condition, and a misconfigured environment—even if you haven't memorized the fix.
You should be able to read code in any mainstream language and understand what it's doing.
AI-native workflow
You've already built real things using AI tools. You know how to prompt effectively, how to structure problems so AI can help, how to validate AI output, and when to step in manually.
High agency
You don't wait for permission or detailed specs. You figure out what needs to happen and make it happen. Ambiguity doesn't paralyze you.
Proof of work
Show us what you've built. Live products, GitHub repos, side projects, internal tools—anything that demonstrates you can ship complete systems.
What We Don't Care About
- Degrees or formal credentials
- Years of experience in a specific language or framework
- Whether you came from a "traditional" engineering path
What You'll Get
- Direct line to the CEO
- Autonomy to own large problem spaces
- A front-row seat to how engineering work is evolving
- Colleagues who ship fast and think clearly
Proficiency in Java 8+.
Solid understanding of REST APIs (Spring Boot), microservices, databases (SQL/NoSQL), and caching systems like Redis/Aerospike.
Familiarity with cloud platforms (AWS, GCP, Azure) and DevOps tools (Docker, Kubernetes, CI/CD).
Good understanding of data structures, algorithms, and software design principles.
Role Description
This is a full-time on-site role for a Python Developer located in Pune. The Python Developer will be responsible for back-end web development, software development, and programming using Python. Day-to-day tasks include developing, testing, and maintaining scalable web applications and server-side logic, as well as optimizing performance and integrating user-facing elements with server-side logic. The role also demands collaboration with cross-functional teams to define, design, and ship new features.
Key Responsibilities
- Lead the backend development team, ensuring best practices in coding, architecture, and performance optimization.
- Design, develop, and maintain scalable backend services using Python and FastAPI.
- Architect and optimize databases, ensuring efficient storage and retrieval of data using MongoDB.
- Integrate AI models and data science workflows into enterprise applications (see the sketch after this list).
- Implement and manage AWS cloud services, including Lambda, S3, EC2, and other AWS components.
- Automate deployment pipelines using Jenkins and CI/CD best practices.
- Ensure security and reliability, implementing best practices for authentication, authorization, and data privacy.
- Monitor and troubleshoot system performance, optimizing infrastructure and codebase.
- Collaborate with data scientists, front-end engineers, and product team to build AI-driven solutions.
- Stay up to date with the latest technologies in AI, backend development, and cloud computing.
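To ground the FastAPI/MongoDB/AI-integration items above, here is a minimal sketch of a scoring endpoint persisting results with pymongo. The connection string, collection name, and the placeholder `run_model` function are assumptions for illustration only.

```python
"""Hedged sketch: AI scoring behind FastAPI with MongoDB persistence."""
from fastapi import FastAPI
from pydantic import BaseModel
from pymongo import MongoClient

app = FastAPI()
db = MongoClient("mongodb://localhost:27017")["example_app"]   # assumed URI and db name

class ScoreRequest(BaseModel):
    document_text: str

def run_model(text: str) -> float:
    # Placeholder for a real model / data-science workflow integration.
    return min(1.0, len(text) / 1000)

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    value = run_model(req.document_text)
    db.scores.insert_one({"text_len": len(req.document_text), "score": value})
    return {"score": value}
```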
Required Skills & Qualifications
- 3-4 years of experience in backend development with Python.
- Strong experience in FastAPI framework.
- Proficiency in MongoDB or other NoSQL databases.
- Hands-on experience with AWS services (Lambda, S3, EC2, etc.).
- Experience with Jenkins and CI/CD pipelines.
- Data Science knowledge with experience integrating AI models and data pipelines.
- Strong understanding of RESTful API design, microservices, and event-driven architecture.
- Experience in performance tuning, caching, and security best practices.
- Proficiency in working with Docker and containerized applications.

A real-time Customer Data Platform and cross-channel marketing automation platform that delivers superior experiences, resulting in increased revenue for some of the largest enterprises in the world.
Key Responsibilities:
- Design and develop backend components and sub-systems for large-scale platforms under guidance from senior engineers.
- Contribute to building and evolving the next-generation customer data platform.
- Write clean, efficient, and well-tested code with a focus on scalability and performance.
- Explore and experiment with modern technologies—especially open-source frameworks—and build small prototypes or proof-of-concepts.
- Use AI-assisted development tools to accelerate coding, testing, debugging, and learning while adhering to engineering best practices.
- Participate in code reviews, design discussions, and continuous improvement of the platform.
Qualifications:
- 0–2 years of experience (or strong academic/project background) in backend development with Java.
- Good fundamentals in algorithms, data structures, and basic performance optimizations.
- Bachelor’s or Master’s degree in Computer Science or IT (B.E / B.Tech / M.Tech / M.S) from premier institutes.
Technical Skill Set:
- Strong aptitude and analytical skills with emphasis on problem solving and clean coding.
- Working knowledge of SQL and NoSQL databases.
- Familiarity with unit testing frameworks and writing testable code is a plus.
- Basic understanding of distributed systems, messaging, or streaming platforms is a bonus.
AI-Assisted Engineering (LLM-Era Skills):
- Familiarity with modern AI coding tools such as Cursor, Claude Code, Codex, Windsurf, Opencode, or similar.
- Ability to use AI tools for code generation, refactoring, test creation, and learning new systems responsibly.
- Willingness to learn how to combine human judgment with AI assistance for high-quality engineering outcomes.
Soft Skills & Nice to Have
- Appreciation for technology and its ability to create real business value, especially in data and marketing platforms.
- Clear written and verbal communication skills.
- Strong ownership mindset and ability to execute in fast-paced environments.
- Prior internship or startup experience is a plus.
Job Title : Full Stack Developer
Experience : 6+ Years
Mandatory Tech Stack : Node.js (NestJS), React.js (Next.js), React Native, PostgreSQL, AWS (Hybrid with On-Premise infrastructure), Docker Swarm, and Portainer
Location : Remote
Working Days : Monday to Sunday (Full Week)
Shift : Night Shift
Job Summary :
We are scaling rapidly and looking for a high-impact Full Stack Developer who thrives on solving complex problems across Web, Mobile, and Cloud Infrastructure.
The ideal candidate is hands-on, adaptable, and comfortable working in distributed systems and hybrid cloud environments, delivering end-to-end solutions with ownership and accountability.
Mandatory Technical Skills :
- Backend : Node.js with NestJS
- Frontend (Web) : React.js with Next.js
- Mobile : React Native
- Database : PostgreSQL
- Cloud : AWS (Hybrid with On-Premise infrastructure)
- OS : Linux
- Containers & Orchestration : Docker Swarm
- Container Management : Portainer
🎯 Key Responsibilities :
- Design, develop, and maintain scalable full-stack applications (Web + Mobile)
- Build and manage microservices and RESTful APIs
- Work in distributed and hybrid cloud environments
- Develop cloud-ready solutions and manage deployments
- Handle containerized applications using Docker Swarm & Portainer
- Collaborate closely with Product, DevOps, and Engineering teams
- Ensure application performance, security, and reliability
- Participate in code reviews and follow best engineering practices
- Troubleshoot, debug, and optimize applications across the stack
✅ Required Qualifications :
- Strong hands-on experience with Node.js (NestJS)
- Solid expertise in React.js (Next.js) and React Native
- Experience with PostgreSQL and backend data modeling
- Working knowledge of AWS services in hybrid environments
- Good understanding of Linux systems
- Hands-on experience with Docker Swarm & Portainer
- Strong understanding of microservices architecture
- Ability to manage end-to-end full-stack delivery
⭐ Good-to-Have Skills :
- Experience with CI/CD pipelines
- Exposure to monitoring & logging tools
- Knowledge of event-driven systems
- Experience working in high-availability systems
Job Title : Node.js Developer / Backend Developer
Experience : 4+ Years
Job Location : Mumbai – Andheri
Work Mode : Work From Office (5 Days a Week)
Job Type : Full-time Opportunity
Role Overview :
We are seeking an experienced Node.js / Backend Developer to design, develop, and maintain scalable backend systems.
The ideal candidate will have strong hands-on experience with Node.js, Nest.js, relational and NoSQL databases, and AWS cloud services.
You will work closely with frontend developers, DevOps, and product teams to deliver secure, high-performance, and reliable backend solutions.
Mandatory Skills : Node.js, Nest.js, MongoDB, PostgreSQL, AWS, REST API development, strong backend fundamentals.
Key Responsibilities :
• Design, develop, and maintain scalable backend applications using Node.js & Nest.js
• Build and manage RESTful APIs and backend services
• Work with MongoDB and PostgreSQL for efficient data storage and retrieval
• Develop cloud-ready applications and deploy them on AWS
• Ensure application performance, security, and scalability
• Write clean, well-documented, and maintainable code
• Participate in code reviews and follow best engineering practices
• Troubleshoot, debug, and optimize existing applications
• Collaborate with cross-functional teams for end-to-end delivery
Required Skills & Qualifications :
• 4+ years of experience in Backend / Node.js development
• Strong hands-on experience with Node.js and Nest.js
• Experience working with MongoDB and PostgreSQL
• Good understanding of AWS services (EC2, S3, RDS, etc.)
• Experience building RESTful APIs
• Understanding of backend architecture, design patterns, and best practices
• Strong problem-solving and debugging skills
• Familiarity with version control systems (Git)
Good-to-Have Skills :
• Experience with microservices architecture
• Knowledge of Docker and CI/CD pipelines
• Exposure to message queues or event-driven systems
• Basic understanding of frontend-backend integration
What You’ll Do:
We are looking for a Staff Operations Engineer based in Pune, India who can master both DeepIntent’s data architectures and pharma research and analytics methodologies to make significant contributions to how health media is analyzed by our clients. This role requires an Engineer who not only understands DBA functions but also how they impact research objectives and can work with researchers and data scientists to achieve impactful results.
This role will be in the Engineering Operations team and will require integration and partnership with the Engineering Organization. The ideal candidate is a self-starter: inquisitive, unafraid to take on and learn from challenges, and constantly seeking to improve the facets of the business they manage. The ideal candidate will also need to demonstrate the ability to collaborate and partner with others.
- Serve as the Engineering interface between Analytics and Engineering teams.
- Develop and standardize all interface points for analysts to retrieve and analyze data with a focus on research methodologies and data-based decision-making.
- Optimize queries and data access efficiencies, serve as an expert in how to most efficiently attain desired data points.
- Build “mastered” versions of the data for Analytics-specific querying use cases.
- Establish a formal data practice for the Analytics practice in conjunction with the rest of DeepIntent.
- Interpret analytics methodology requirements and apply them to data architecture to create standardized queries and operations for use by analytics teams.
- Implement DataOps practices.
- Master existing and new Data Pipelines and develop appropriate queries to meet analytics-specific objectives.
- Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
- Operate between Engineers and Analysts to unify both practices for analytics insight creation.
Who You Are:
- 8+ years of experience in Tech Support (specialized in monitoring and maintaining data pipelines).
- Adept in market research methodologies and using data to deliver representative insights.
- Inquisitive, curious, understands how to query complicated data sets, move and combine data between databases.
- Deep SQL experience is a must.
- Exceptional communication skills with the ability to collaborate and translate between technical and non-technical needs.
- English Language Fluency and proven success working with teams in the U.S.
- Experience in designing, developing and operating configurable Data pipelines serving high-volume and velocity data.
- Experience working with public clouds like GCP/AWS.
- Good understanding of software engineering, DataOps, and data architecture, Agile and DevOps methodologies.
- Proficient with SQL, Python or JVM-based language, Bash.
- Experience with any of Apache open-source projects such as Spark, Druid, Beam, Airflow etc. and big data databases like BigQuery, Clickhouse, etc.
- Ability to think big, take bets and innovate, dive deep, hire and develop the best talent, learn and be curious.
- Experience in debugging UI and backend issues will be an added advantage.
Required Skills & Qualifications
● Strong hands-on experience with LLM frameworks and models, including LangChain, OpenAI (GPT-4), and LLaMA
● Proven experience in LLM orchestration, workflow management, and multi-agent system design using frameworks such as LangGraph
● Strong problem-solving skills with the ability to propose end-to-end solutions and contribute at an architectural/system design level
● Experience building scalable AI-backed backend services using FastAPI and asynchronous programming patterns
● Solid experience with cloud infrastructure on AWS, including EC2, S3, and Load Balancers
● Hands-on experience with Docker and containerization for deploying and managing AI/ML applications
● Good understanding of Transformer-based architectures and how modern LLMs work internally
● Strong skills in data processing and analysis using NumPy and Pandas
● Experience with data visualization tools such as Matplotlib and Seaborn for analysis and insights
● Hands-on experience with Retrieval-Augmented Generation (RAG), including document ingestion, embeddings, and vector search pipelines (see the sketch after this list)
● Experience in model optimization and training techniques, including fine-tuning, LoRA, and QLoRA
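Since the list above names RAG, embeddings, and vector search, here is a deliberately minimal sketch of the retrieval step only. The `embed()` function is a stand-in for a real embedding model (OpenAI, sentence-transformers, etc.); every name in it is an assumption, not this employer's pipeline.

```python
"""Hedged sketch: toy RAG retrieval via cosine similarity over chunk embeddings."""
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Ingested document chunks with precomputed embeddings (the "vector store").
chunks = ["Refunds are processed within 5 days.", "Premium plans include SSO."]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query: str, k: int = 1) -> list:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # in a real pipeline this prompt would go to the LLM (GPT-4, LLaMA, etc.)
```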
Nice to Have / Preferred
● Experience designing and operating production-grade AI systems
● Familiarity with cost optimization, observability, and performance tuning for LLM-based applications
● Exposure to multi-cloud or large-scale AI platforms
Description :
About the Role :
We're seeking a dynamic and technically strong Engineering Manager to lead, grow, and inspire our high-performing engineering team. In this role, you'll drive technical strategy, deliver scalable systems, and ensure SolarSquare's platforms continue to delight users at scale. You'll combine hands-on technical expertise with a passion for mentoring engineers, shaping culture, and collaborating across functions to bring bold ideas to life in a fast-paced startup environment.
Responsibilities :
- Lead and manage a team of full stack developers (SDE1 to SDE3), fostering a culture of ownership, technical excellence, and continuous learning.
- Drive the technical vision and architectural roadmap for the MERN stack platform, ensuring scalability, security, and high performance.
- Collaborate closely with product, design, and business teams to align engineering priorities with business goals and deliver impactful products.
- Ensure engineering best practices across code reviews, testing strategies, and deployment pipelines (CI/CD).
- Implement robust observability and monitoring systems to proactively identify and resolve issues in production environments.
- Optimize system performance and cost-efficiency in cloud infrastructure (AWS, Azure, GCP).
- Manage technical debt effectively, balancing long-term engineering health with short-term product needs.
- Recruit, onboard, and develop top engineering talent, creating growth paths for team members.
- Drive delivery excellence by setting clear goals, metrics, and expectations, and ensuring timely execution of projects.
- Advocate for secure coding practices and compliance with data protection standards (e.g., OWASP, GDPR).
Requirements :
- 8 to 12 years of experience in full stack development, with at least 2+ years in a technical leadership or people management role.
- Proven expertise in the MERN stack (MongoDB, Express.js, React.js, Node.js) and strong understanding of distributed systems and microservices.
- Hands-on experience designing and scaling high-traffic web applications.
- Deep knowledge of cloud platforms (AWS, Azure, GCP), containerization (Docker), and orchestration tools (Kubernetes).
- Strong understanding of observability practices and tools (Prometheus, Grafana, ELK, Datadog) for maintaining production-grade systems.
- Track record of building and leading high-performing engineering teams in agile environments.
- Excellent communication and stakeholder management skills, with the ability to align technical efforts with business objectives.
- Experience with cost optimization, security best practices, and performance tuning in cloud-native environments.
Bonus : Prior experience in established product companies, or experience scaling teams at an early-stage startup and designing systems from scratch.
Work Arrangement :
- Flexible work setup, including hybrid options. Monday to Friday.
About CoPoint AI
CoPoint AI is a specialized consulting firm focused on transforming businesses through process improvement, data insights, and technology-driven innovation. We leverage AI technologies, Microsoft cloud platforms, and modern web development frameworks to deliver intelligent, scalable solutions that drive measurable impact for our clients. Our team partners across industries to design and deploy solutions that streamline operations, enhance customer experiences, and enable data-driven growth.
Our Vision
We transform businesses through process improvement and data insights leveraging AI on the Microsoft stack
Our Values
- Be Purposeful: Think Straight, Communicate, Always Do The Right Thing
- In Partnership: With our Team, For our Clients, In our communities.
- Create Impact: Deliver value-based solutions, help individuals achieve their dreams, demand profitable growth.
Role Overview
As a Senior Consultant at CoPoint AI, you will drive end-to-end delivery of both AI-enabled data solutions and modern web applications. You will blend technical expertise in AI, Microsoft platforms, and full-stack web development with business insight to architect and implement impactful solutions across client environments.
Key Responsibilities
- Lead design and implementation of end-to-end data, AI, and web application solutions
- Architect and build responsive, user-friendly web interfaces integrated with enterprise data systems
- Develop and optimize secure, scalable APIs and microservices using cloud-native principles
- Implement AI-powered features in web applications using LLMs, Azure OpenAI, and Cognitive Services
- Guide teams in AI-assisted software development lifecycle improvements
- Build frameworks for responsible AI governance and model monitoring
- Comprehensive understanding of the AI solution landscape
- Leadership experience in AI-enabled digital transformation initiatives
- Expertise in AI adoption strategies and change management
- Ability to translate AI capabilities into measurable business value
- Design multi-model architectures combining analytics, AI, and web experiences
- Act as a subject matter expert in Microsoft Azure and modern web frameworks (e.g., React, Angular, .NET Core)
- Manage project work streams and lead cross-functional delivery teams
- Cultivate and manage client relationships, providing strategic and technical guidance
- Identify and propose innovation opportunities through data and digital experiences
- Mentor junior developers, analysts, and consultants
- Ensure high quality and consistency in solution delivery and user experience
Qualifications
- Deep expertise in Microsoft data technologies (Azure Data Factory, Synapse, Power BI).
- Proven experience implementing enterprise AI solutions on Azure
- Advanced knowledge of large language models and their business applications
- Expertise in AI-enhanced software development methodologies
- Experience with AI model evaluation, validation, and responsible deployment
- Proficiency in developing custom AI solutions using Azure OpenAI, Cognitive Services, and ML services
- Experience integrating AI into existing enterprise applications and data platforms
- Experience managing client expectations and delivering high-quality solutions
- Strong technical leadership and problem-solving capabilities
- Excellent communication and presentation skills
- Ability to anticipate client needs and propose strategic solutions
What should You expect:
- A culture of continuous learning with certification support.
- Clear career advancement pathways.
- Competitive compensation and benefits.
- Flexible work arrangements.
- A collaborative environment that values innovation and creativity.
Ready to shape the future of enterprise technology? Join our team of Microsoft technology experts and make an impact.

Global digital transformation solutions provider.
JOB DETAILS:
Job Role: Lead I - Software Engineering - Java, Spring Boot, Microservices
Industry: Global digital transformation solutions provider
Work Mode: 3 days in office, Hybrid model.
Salary: Best in Industry
Experience: 5-7 years
Location: Trivandrum, Kochi, Thiruvananthapuram
Job Description
Job Title: Senior Java Developer
Experience: 5+ years
Job Summary: We are looking for a Senior Java Developer with strong experience in Spring Boot and Microservices to work on high-performance applications for a leading financial services client. The ideal candidate will have deep expertise in Java backend development, cloud (preferably GCP), and strong problem-solving abilities.
Key Responsibilities:
• Develop and maintain Java-based microservices using Spring Boot
• Collaborate with Product Owners and teams to gather and review requirements
• Participate in design reviews, code reviews, and unit testing
• Ensure application performance, scalability, and security
• Contribute to solution architecture and design documentation
• Support Agile development processes including daily stand-ups and sprint planning
• Mentor junior developers and lead small modules or features
Required Skills:
• Java, Spring Boot, Microservices architecture
• GCP (or other cloud platforms like AWS)
• REST/SOAP APIs, Hibernate, SQL, Tomcat
• CI/CD tools: Jenkins, Bitbucket
• Agile methodologies (Scrum/Kanban)
• Unit testing (JUnit), debugging and troubleshooting
• Good communication and team leadership skills
Preferred Skills:
• Frontend familiarity (Angular, AJAX)
• Experience with API documentation tools (Swagger)
• Understanding of design patterns and UML
• Exposure to Confluence, Jira
Must-Haves
Java/J2EE (5+ years), Spring/Spring Boot (5+ years), Microservices (5+ years), AWS/GCP/Azure (mandatory), CI/CD (Jenkins, SonarQube, Git)
Mandatory Skills Required: Strong proficiency in Java, Spring Boot, microservices, GCP/AWS.
Experience Required: Minimum 5+ years of relevant experience
Notice period - 0 to 15 days only (Immediate or candidates who are serving notice period and who can join by Feb)
Job stability is mandatory
Location: Trivandrum, Kochi
Virtual Interview: 31st Jan-Saturday
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Job Details
- Designation: Principal Engineer – Solr
- Location: Bangalore / Mysore / Remote
- Job Type: Full-time
- Department: Software R&D
Job Summary
We are seeking a highly skilled and experienced Principal Engineer with a strong background in Apache Solr and Java to lead our Engineering and customer-led initiatives. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our search platform while providing expert-level troubleshooting and resolution for critical production issues.
This role will involve designing the architecture for new platforms while reviewing and recommending better approaches for existing ones to drive continuous improvement and efficiency.
Key Responsibilities
- Lead Engineering and support activities for Solr-based search applications, ensuring minimal downtime and optimal performance
- Design and develop the architecture of new platforms while reviewing and recommending better approaches for existing ones
- Regularly work towards enhancing search ranking, query understanding, and retrieval effectiveness
- Diagnose, troubleshoot, and resolve complex technical issues in Solr, Java-based applications, and supporting infrastructure
- Perform deep-dive analysis of logs, performance metrics, and alerts to proactively prevent incidents
- Optimize Solr indexes, queries, and configurations to enhance search performance and reliability (an illustrative query-tuning sketch follows this list)
- Work closely with development, operations, and business teams to drive improvements in system stability and efficiency
- Implement monitoring tools, dashboards, and alerting mechanisms to enhance observability and proactive issue detection
- Apply AI-based search techniques using vector databases, RAG models, NLP, and LLMs
- Collaborate on capacity planning, system scaling, and disaster recovery strategies for mission-critical search systems
- Provide mentorship and technical guidance to junior engineers and support teams
- Drive innovation by tracking the latest trends, emerging technologies, and best practices in AI-based search, Solr, and other search platforms
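For illustration only, here is a minimal Python sketch of the query-tuning work referenced above: it sends an edismax query to a Solr select handler and prints the hit count and reported query time. The host, core name ("products"), and field boosts are placeholder assumptions, not details from this posting.

```python
"""Minimal sketch: running an edismax query against a Solr core for tuning.

Assumptions (not from the posting): a local Solr instance at
http://localhost:8983 with a core named "products" that has title and
description fields. Adjust host, core, and boosts for the real platform.
"""
import requests

SOLR_SELECT = "http://localhost:8983/solr/products/select"  # hypothetical core

def search(text: str, rows: int = 10) -> dict:
    params = {
        "q": text,
        "defType": "edismax",           # dismax-style parsing for user queries
        "qf": "title^4 description^1",  # boost title matches over description
        "fl": "id,title,score",         # return only the fields the UI needs
        "rows": rows,
    }
    resp = requests.get(SOLR_SELECT, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = search("stainless steel pipe fittings")
    header, body = result["responseHeader"], result["response"]
    # QTime is Solr's reported query time in milliseconds -- a first signal
    # when comparing boosts, filters, or schema changes during tuning.
    print(f"hits={body['numFound']} qtime_ms={header['QTime']}")
    for doc in body["docs"]:
        print(doc.get("id"), doc.get("title"), doc.get("score"))
```

Comparing hit counts, ranking order, and QTime across boost or configuration variants is typically the first step before deeper index or JVM profiling.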
Requirement
- 8+ years of experience in software development and production support with a focus on Apache Solr, Java, and databases (Oracle, MySQL, PostgreSQL, etc.)
- Strong understanding of Solr indexing, query execution, schema design, configuration, and tuning
- Experience in designing and implementing scalable system architectures for search platforms
- Proven ability to review and assess existing platform architectures, identifying areas for improvement and recommending better approaches
- Proficiency in Java, Spring Boot, and micro-services architectures
- Experience with Linux / Unix-based environments, shell scripting, and debugging production systems
- Hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Splunk, ELK Stack) and log analysis
- Expertise in troubleshooting performance issues related to Solr, JVM tuning, and memory management
- Familiarity with cloud platforms such as AWS, Azure, or GCP and containerization technologies like Docker / Kubernetes
- Strong analytical and problem-solving skills, with the ability to work under pressure in a fast-paced environment
- Certifications in Solr, Java, or cloud technologies
- Excellent communication and leadership abilities
About Our Benefits
- Competitive salary
- Health insurance
- Retirement plan
- Paid time off
- Training and development opportunities
Company Description
Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaging through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the uncharted path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.
Job Summary
We are looking for a highly skilled AI/ML Engineer with strong experience in LLMs, multi-agent orchestration, and production-grade AI systems. The ideal candidate will have hands-on expertise in Python, LangChain/LangGraph, OpenAI & LLaMA models, and building scalable AI solutions deployed on AWS using containerized architectures. You will be responsible for designing, architecting, and deploying AI/ML solutions that solve complex business problems using modern transformer-based models and Retrieval-Augmented Generation (RAG) techniques.
Key Responsibilities
AI / ML & LLM Development
● Design and implement LLM-powered solutions using OpenAI (GPT-4), LLaMA, and other transformer models
● Build multi-agent systems for orchestration, reasoning, and problem-solving using LangChain and LangGraph
● Develop Retrieval-Augmented Generation (RAG) pipelines using vector databases (see the sketch after this list)
● Fine-tune LLMs using LoRA and QLoRA techniques for domain-specific use cases
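As a rough, framework-agnostic illustration of the retrieve-then-generate flow behind a RAG pipeline, the sketch below uses a tiny in-memory vector store. The embed() and generate() functions are stand-ins for a real embedding model, a vector database, and an LLM call made via LangChain/LangGraph or the OpenAI/LLaMA APIs.

```python
"""Minimal RAG sketch (retrieve-then-generate), framework-agnostic.

Placeholders only: embed(), generate(), and the in-memory store stand in for
a real embedding model, an LLM, and a production vector database.
"""
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: hash-seeded pseudo-embedding so the sketch is runnable.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class InMemoryVectorStore:
    def __init__(self):
        self.docs, self.vecs = [], []

    def add(self, text: str) -> None:
        self.docs.append(text)
        self.vecs.append(embed(text))

    def top_k(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scores = np.array([float(q @ v) for v in self.vecs])  # cosine on unit vectors
        return [self.docs[i] for i in scores.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (OpenAI GPT-4, LLaMA, etc.).
    return f"[LLM answer grounded in a prompt of {len(prompt)} chars]"

def answer(question: str, store: InMemoryVectorStore) -> str:
    context = "\n".join(store.top_k(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return generate(prompt)

if __name__ == "__main__":
    store = InMemoryVectorStore()
    for chunk in ["Invoices are due in 30 days.", "Refunds require manager approval."]:
        store.add(chunk)
    print(answer("When are invoices due?", store))
```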
Architecture & System Design
● Propose end-to-end AI solution architectures at an enterprise level
● Design scalable, fault-tolerant systems using Docker & containerization
● Deploy AI services using FastAPI for high-performance APIs
● Implement orchestration and workflow management for AI pipelines
Cloud & DevOps
● Deploy and manage AI workloads on AWS (EC2, S3, Load Balancers)
● Optimize performance, scalability, and cost of AI/ML systems on cloud
● Work with CI/CD pipelines for ML model deployment
● Data Processing & Visualization
● Perform data analysis and feature engineering using NumPy and Pandas
● Visualize insights using Matplotlib and Seaborn
● Work with structured and unstructured datasets for ML training
Required Skills
● 5+ years of experience in AI/ML Engineering and strong proficiency in Python
● Hands-on experience with LangChain, LangGraph
● Experience with LLMs (OpenAI GPT-4, LLaMA)
● Solid understanding of Transformer architectures
● Expertise in RAG frameworks and vector databases
● Model fine-tuning experience using LoRA / QLoRA
● API development using FastAPI
● Experience with Docker & containerized deployments
● Strong knowledge of AWS (EC2, S3, Load Balancers)
Good to Have
● Experience with multi-agent AI systems
● Exposure to MLOps practices
● Knowledge of prompt engineering and evaluation frameworks
● Experience deploying AI solutions in production environments
Soft Skills
● Strong problem-solving and analytical skills
● Ability to propose architectural-level solutions
● Excellent communication and stakeholder collaboration skills
● Ownership mindset and ability to work independently
What you’ll do
- Build and scale backend services and APIs using Python
- Work on cross-language integrations (Python ↔ PHP)
- Develop frontend features using React (Angular is a plus)
- Deploy, monitor, and manage applications on AWS
- Own features end-to-end: development, performance, and reliability
- Collaborate closely with product, QA, and engineering teams
Tech Stack
- Backend: Python (working knowledge of PHP is a strong plus)
- Frontend: React (Angular is a plus)
- Cloud: AWS
- Version Control: Git / GitHub
Experience
- 5–10 years of professional software development experience
- Strong hands-on experience with Python
- Hands-on experience deploying and managing applications on AWS
- Working knowledge of modern frontend frameworks
Company Description
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Role Description
This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you
Job Location: Kharadi, Pune
Job Type: Full-Time
About Us:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have 10 years of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide our operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the "givers gain" philosophy and strive to provide value in order to seek value. We are committed to delivering top-notch solutions to our clients and are looking for a talented Web UI Developer to join our dynamic team.
Qualifications:
- Strong Experience in JavaScript and React
- Experience in building multi-tier SaaS applications with exposure to micro-services, caching, pub-sub, and messaging technologies
- Experience with design patterns
- Familiarity with UI component libraries (such as Material UI or Bootstrap) and RESTful APIs
- Experience with web frontend technologies such as HTML5, CSS3, LESS, Bootstrap
- A strong foundation in computer science, with competencies in data structures, algorithms, and software design
- Bachelor's / Master's Degree in CS
- Experience in Git is mandatory
- Exposure to AWS, Docker, and CI/CD systems like Jenkins is a plus
Key Responsibilities:
- Lead the architecture, design, and implementation of scalable, secure, and highly available AWS infrastructure leveraging services such as VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, and Secrets Manager.
- Develop and maintain reusable, modular IaC frameworks using Terraform and Terragrunt, and mentor team members on IaC best practices.
- Drive automation of infrastructure provisioning, deployment workflows, and routine operations through advanced Python scripting.
- Take ownership of cost optimization strategy by analyzing usage patterns, identifying savings opportunities, and implementing guardrails across multiple AWS environments (a small boto3 sketch of this kind of analysis follows this list).
- Define and enforce infrastructure governance, including secure access controls, encryption policies, and secret management mechanisms.
- Collaborate cross-functionally with development, QA, and operations teams to streamline and scale CI/CD pipelines for containerized microservices on Kubernetes (EKS).
- Establish monitoring, alerting, and observability practices to ensure platform health, resilience, and performance.
- Serve as a technical mentor and thought leader, guiding junior engineers and shaping cloud adoption and DevOps culture across the organization.
- Evaluate emerging technologies and tools, recommending improvements to enhance system performance, reliability, and developer productivity.
- Ensure infrastructure complies with security, regulatory, and operational standards, and drive initiatives around audit readiness and compliance.
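As a minimal illustration of the usage analysis mentioned above, the boto3 sketch below pulls last month's spend by service from Cost Explorer. It assumes Cost Explorer access is already enabled and that credentials are configured; the look-back window and output format are illustrative, not a prescribed guardrail.

```python
"""Minimal sketch: surfacing recent AWS spend by service with boto3.

Assumes credentials with Cost Explorer (ce) read access are configured.
"""
from datetime import date, timedelta
import boto3

def spend_by_service(days: int = 30) -> list[tuple[str, float]]:
    ce = boto3.client("ce")
    end = date.today()
    start = end - timedelta(days=days)
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    totals: dict[str, float] = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for service, cost in spend_by_service()[:10]:
        print(f"{service:<40s} ${cost:,.2f}")  # top 10 cost drivers
```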
Mandatory Skills & Experience:
- AWS (Advanced Expertise): VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, Secrets Management
- Infrastructure as Code: Extensive experience with Terraform and Terragrunt, including module design and IaC strategy
- Kubernetes: Strong hands-on expertise with Kubernetes (EKS)
- Scripting & Automation: Proficient in Python, with a strong track record of building tools, automating workflows, and integrating cloud services
- Cloud Cost Optimization: Proven ability to analyze cloud spend and implement sustainable cost control strategies
- Leadership: Experience in leading DevOps/infrastructure teams or initiatives, mentoring engineers, and making architecture-level decisions
Nice to Have:
- Experience designing or managing CI/CD pipelines for Kubernetes-based environments
- Backend development background in Python (e.g., FastAPI, Flask)
- Familiarity with monitoring/observability tools such as Prometheus, Grafana, CloudWatch
- Understanding of system performance tuning, capacity planning, and scalability best practices
- Exposure to compliance standards such as SOC 2, HIPAA, or ISO 27001
Forbes Advisor is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.
We do this by combining data-driven content, rigorous product comparisons, and user-first design all built on top of a modern, scalable platform. Our teams operate globally and bring deep expertise across journalism, product, performance marketing, and analytics.
The Role
We are hiring a Senior Data Engineer to help design and scale the infrastructure behind our analytics, performance marketing, and experimentation platforms.
This role is ideal for someone who thrives on solving complex data problems, enjoys owning systems end-to-end, and wants to work closely with stakeholders across product, marketing, and analytics.
You’ll build reliable, scalable pipelines and models that support decision-making and automation at every level of the business.
What you’ll do
● Build, maintain, and optimize data pipelines using Spark, Kafka, Airflow, and Python
● Orchestrate workflows across GCP (GCS, BigQuery, Composer) and AWS-based systems
● Model data using dbt, with an emphasis on quality, reuse, and documentation
● Ingest, clean, and normalize data from third-party sources such as Google Ads, Meta, Taboola, Outbrain, and Google Analytics (see the PySpark sketch after this list)
● Write high-performance SQL and support analytics and reporting teams in self-serve data access
● Monitor and improve data quality, lineage, and governance across critical workflows
● Collaborate with engineers, analysts, and business partners across the US, UK, and India
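An illustrative PySpark sketch of the ingest-and-normalize step called out above: it reads a third-party ad-spend export, standardizes column names and types, and writes a daily aggregate. The bucket paths, column names, and target schema are assumptions for the example; a real pipeline here would be orchestrated by Airflow/Composer and land in BigQuery.

```python
"""Minimal PySpark sketch: normalizing a third-party ad-spend export.

Paths, columns, and schema are illustrative placeholders.
"""
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("normalize_ad_spend").getOrCreate()

raw = spark.read.option("header", True).csv("gs://example-bucket/google_ads/*.csv")

normalized = (
    raw.select(
        F.to_date("day").alias("event_date"),               # unify the date column
        F.lower(F.col("campaign_name")).alias("campaign"),  # case-insensitive joins
        F.col("cost").cast("double").alias("spend_usd"),
        F.col("clicks").cast("long").alias("clicks"),
    )
    .where(F.col("event_date").isNotNull())
    .dropDuplicates(["event_date", "campaign"])             # keep re-runs idempotent
)

(
    normalized.groupBy("event_date")
    .agg(F.sum("spend_usd").alias("spend_usd"), F.sum("clicks").alias("clicks"))
    .write.mode("overwrite")
    .parquet("gs://example-bucket/curated/ad_spend_daily/")
)
```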
What You Bring
● 4+ years of data engineering experience, ideally in a global, distributed team
● Strong Python development skills and experience
● Expert in SQL for data transformation, analysis, and debugging
● Deep knowledge of Airflow and orchestration best practices
● Proficient in dbt (data modeling, testing, release workflows)
● Experience with GCP (BigQuery, GCS, Composer); AWS familiarity is a plus
● Strong grasp of data governance, observability, and privacy standards
● Excellent written and verbal communication skills
Nice to have
● Experience working with digital marketing and performance data, including:
Google Ads, Meta (Facebook), TikTok, Taboola, Outbrain, Google Analytics (GA4)
● Familiarity with BI tools like Tableau or Looker
● Exposure to attribution models, media mix modeling, or A/B testing infrastructure
● Collaboration experience with data scientists or machine learning workflows
Why Join Us
● Monthly long weekends — every third Friday off
● Wellness reimbursement to support your health and balance
● Paid parental leave
● Remote-first with flexibility and trust
● Work with a world-class data and marketing team inside a globally recognized brand
About the Role
We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.
Responsibilities
- Develop and maintain full-stack applications.
- Implement clean, maintainable, and efficient code.
- Collaborate with designers, product managers, and backend engineers.
- Participate in code reviews and debugging.
- Work with REST APIs/GraphQL.
- Contribute to CI/CD pipelines.
- Ability to work independently as well as within a collaborative team environment.
Required Technical Skills
- Strong knowledge of JavaScript/TypeScript.
- Experience with React.js, Next.js.
- Backend experience with Node.js, Express, NestJS.
- Understanding of SQL/NoSQL databases.
- Experience with Git, APIs, debugging tools.
- Cloud familiarity (AWS/GCP/Azure).
AI and System Mindset
Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.
Soft Skills
- Strong problem-solving ability.
- Good communication and teamwork.
- Fast learner and adaptable.
Education
Bachelor's degree in Computer Science / Engineering or equivalent.
Key Responsibilities:
- Develop and maintain web applications using Java, Spring Boot, and Angular (v8+).
- Design and implement RESTful APIs and integrate with front-end components.
- Collaborate with cross-functional teams to gather and analyze requirements.
- Write clean, maintainable, and testable code following best practices.
- Debug, optimize, and enhance application performance.
- Participate in code reviews, unit testing, and deployment activities.
Required Skills:
- Strong programming skills in Java and Spring Boot.
- Hands-on experience with Angular (v8 or above), HTML5, CSS3, and TypeScript.
- Good understanding of REST APIs, JSON, and SQL databases (MySQL/PostgreSQL).
- Experience with Git and Agile development methodologies.
- Excellent communication and teamwork skills.
Good to Have:
- Knowledge of Microservices, Docker, or cloud platforms (AWS/Azure/GCP).
- Familiarity with CI/CD pipelines and unit testing frameworks.
About the Company
We are a well-established and growing software product company with decades of experience delivering innovative and scalable technology solutions. The organization continues to expand year on year through strong market presence, continuous investment in technology, and a culture that promotes learning and collaboration. Exciting growth opportunities lie ahead for motivated professionals.
Job Description
We are looking for a Full Stack Developer to join our engineering team. The ideal candidate will be comfortable working across both front-end and back-end technologies; a stronger focus on either side is acceptable, as long as you are open to contributing across the full stack.
Technology Stack
Front-end:
- JavaScript
- Angular (or good understanding of React, Vue.js, Knockout.js, or similar frameworks)
Back-end:
- C#
- ASP.NET
- Web API
- MVC
- Entity Framework
Database:
- SQL Server (knowledge of NoSQL databases is a plus)
Cloud:
- Microsoft Azure and/or AWS
Key Responsibilities
- Design and develop the overall architecture of the web application
- Implement robust services and APIs to support the application
- Build reusable code and libraries for future use
- Optimize applications for maximum speed and scalability
- Implement security and data protection measures
- Translate UI/UX wireframes into visual and functional elements
- Integrate front-end and back-end components seamlessly
Additional Responsibilities (For Senior / Lead-Level Candidates)
- Participate actively in the full SDLC (design, development, testing, deployment)
- Provide technical analysis and resolve complex issues during delivery
- Conduct code and test case reviews
- Collaborate with product and design teams on innovative solutions
- Convert functional requirements into technical tasks and effort estimates
- Mentor junior developers
Skills & Qualifications
- Bachelor’s degree in Software Engineering or related field
- 3–5 years of relevant experience in Full Stack development
- Strong experience in C#, ASP.NET MVC, Web API, and SQL Server
- Good knowledge of JavaScript and modern front-end frameworks
- Understanding of cloud-native architecture and SaaS applications
- Experience with CI/CD, Docker, and DevOps practices is a plus
- Experience working in cross-functional teams
- Ability to build scalable and robust enterprise-grade solutions
Role: AWS Cloud Engineer (Principal / Senior Level)
Employment Type: Contract (12+ Months)
Location: Fully Remote (USA)
Experience Required: 14+ Years
Company Type: Global Remote Talent & Technology Services Platform
About us:
We are a global remote talent and technology services company, enabling leading organizations worldwide to hire elite engineers and build next-generation products.
Our platform connects companies with highly skilled professionals who work remotely while delivering enterprise-grade innovation across cloud, AI, DevOps, and modern software engineering.
We partner with high-growth startups and Fortune-level enterprises to design, build, and scale mission-critical technology platforms—all in a fully remote, distributed model.
For more details, visit our site: Recruiting Bond (https://recruitingbond.com/)
Role Overview:
We are seeking a highly experienced AWS Cloud Engineer to support one of our global clients in designing, securing, and operating enterprise-scale AWS environments.
This is a hands-on, senior-level role focused on cloud architecture, governance, automation, and Python-based tooling, working closely with distributed DevOps, Security, and Application teams.
Key Responsibilities
- Design and maintain scalable, secure AWS cloud architectures aligned with enterprise best practices
- Build and govern multi-account AWS environments using AWS Control Tower, Organizations, and landing zones
- Implement IAM strategies, including SCPs, identity federation, and least-privilege access models
- Develop and manage Infrastructure as Code (IaC) using Terraform and/or AWS CloudFormation
- Architect and manage AWS networking (VPCs, subnets, routing, security groups, NACLs, Transit Gateway)
- Implement AWS Config, logging, monitoring, and compliance controls
- Automate infrastructure operations and workflows using Python (a small boto3 example follows this list)
- Enable DevOps practices, CI/CD pipelines, and cloud-native deployments
- Troubleshoot complex cloud infrastructure issues and drive root-cause analysis
- Produce and maintain technical documentation, standards, and runbooks
- Act as a trusted cloud advisor to client stakeholders and engineering teams
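A minimal boto3 sketch of the kind of multi-account automation this role involves: it inventories the accounts in an AWS Organization, assuming it runs with credentials in the management (or delegated-admin) account that allow organizations:ListAccounts. The output format is illustrative only.

```python
"""Minimal sketch: inventorying accounts in an AWS Organization with boto3."""
import boto3

def list_org_accounts() -> list[dict]:
    org = boto3.client("organizations")
    accounts = []
    # Paginate: Organizations returns accounts in pages, not all at once.
    for page in org.get_paginator("list_accounts").paginate():
        accounts.extend(page["Accounts"])
    return accounts

if __name__ == "__main__":
    for acct in list_org_accounts():
        # A governance job might feed this into tagging, SCP, or Config checks.
        print(f"{acct['Id']}  {acct['Status']:<10s}  {acct['Name']}")
```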
Required Skills & Qualifications
- 14+ years of overall IT experience, with deep hands-on expertise in AWS
- Strong experience in AWS architecture, security, and governance
- Proven experience with AWS Control Tower and multi-account strategies
- Advanced knowledge of IAM, SCPs, identity management, and access control
- Strong hands-on experience with Terraform and/or CloudFormation
- Solid understanding of AWS networking and VPC design
- Proficiency in Python for automation and cloud operations
- Experience working in DevOps-driven, cloud-native environments
- Strong communication skills and ability to work independently in a fully remote setup
Nice to Have
- AWS Professional or Specialty certifications
- Experience supporting global or enterprise clients
- Exposure to compliance or regulated cloud environments
Why Join Us
- Work fully remote with global teams and top-tier clients
- Long-term 12+ month engagement with extension potential
- Opportunity to influence large-scale cloud platforms
- Be part of a company shaping the future of remote-first engineering
Hope you are doing great!
We have an Urgent opening for a Senior Automation QA professional to join a global life sciences data platform company. Immediate interview slots available.
🔹 Quick Role Overview
- Role: Senior Automation QA
- Location: Pune (Hybrid; 3 days work from office)
- Employment Type: Full-Time
- Experience Required: 5+ Years
- Interview Process: 2–3 Rounds
- Qualification: B.E / B.Tech
- Notice Period: 0–30 Days
📌 Job Description
IntegriChain is the data and business process platform for life sciences manufacturers, delivering visibility into patient access, affordability, and adherence. The platform enables manufacturers to drive gross-to-net savings, ensure channel integrity, and improve patient outcomes.
We are expanding our Engineering team to strengthen our ability to process large volumes of healthcare and pharmaceutical data at enterprise scale.
The Senior Automation QA will be responsible for ensuring software quality by designing, developing, and maintaining automated test frameworks. This role involves close collaboration with engineering and product teams, ownership of test strategy, mentoring junior QA engineers, and driving best practices to improve product reliability and release efficiency.
🎯 Key Responsibilities
- Hands-on QA across UI, API, and Database testing – both Automation & Manual
- Analyze requirements, user stories, and technical documents to design detailed test cases and test data
- Design, build, execute, and maintain automation scripts using BDD (Gherkin), Pytest, and Playwright (a minimal example follows this list)
- Own and maintain QA artifacts: Test Strategy, BRD, defect metrics, leakage reports, quality dashboards
- Work with stakeholders to review and improve testing approaches using data-backed quality metrics
- Ensure maximum feasible automation coverage in every sprint
- Perform functional, integration, and regression testing in Agile & DevOps environments
- Drive Shift-left testing, identifying defects early and ensuring faster closures
- Contribute to enhancing automation frameworks with minimal guidance
- Lead and mentor a QA team (up to 5 members)
- Support continuous improvement initiatives and institutionalize QA best practices
- Act as a problem-solver and strong team collaborator in a fast-paced environment
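A minimal pytest + Playwright sketch of the style of UI automation described above. It assumes the pytest-playwright plugin (which supplies the page fixture) and uses a placeholder URL; real suites here would typically sit behind Gherkin feature files and run in CI via Jenkins.

```python
"""Minimal sketch: a UI smoke test with pytest + Playwright.

Assumes pytest-playwright is installed; the URL and expectations are
placeholders for the actual application under test.
"""
from playwright.sync_api import Page, expect

BASE_URL = "https://example.com"  # placeholder application under test

def test_home_page_loads(page: Page) -> None:
    page.goto(BASE_URL)
    expect(page).to_have_title("Example Domain")          # basic availability check
    expect(page.get_by_role("heading")).to_be_visible()   # key element rendered
```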
🧩 Desired Skills & Competencies
✅ Must-Have:
- 5+ years of experience in test planning, test case design, test data preparation, automation & manual testing
- 3+ years of strong UI & API automation experience using Playwright with Python
- Solid experience in BDD frameworks (Gherkin, Pytest)
- Strong database testing skills (Postgres / Snowflake / MySQL / RDS)
- Hands-on experience with Git and Jenkins (DevOps exposure)
- Working experience with JMeter
- Experience in Agile methodologies (Scrum / Kanban)
- Excellent problem-solving, analytical, communication, and stakeholder management skills
👍 Good to Have:
- Experience testing AWS / Cloud-hosted applications
- Exposure to ETL processes and BI reporting systems
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience with AWS cloud, including at recent companies
- Company background: product companies preferred; exceptions for service-company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV Attachment is mandatory
- Please provide your CTC breakup (Fixed + Variable)
- Are you comfortable with a face-to-face (F2F) interview round?
- Has the candidate filled out the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue); see the DAG sketch after this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
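A minimal Airflow (TaskFlow API, Airflow 2.4+ assumed) sketch of a daily retraining DAG of the kind referenced above. The DAG id, task bodies, and S3 path are placeholders; on AWS the heavy lifting would be delegated to Spark on EMR/Glue and model serving on EKS/ECS rather than done inside the tasks themselves.

```python
"""Minimal Airflow (TaskFlow API) sketch of a daily model-refresh pipeline."""
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["ml"])
def daily_model_refresh():
    @task
    def extract_features() -> str:
        # Placeholder: in practice, trigger an EMR/Glue Spark job and
        # return the S3 prefix of the generated feature set.
        return "s3://example-bucket/features/latest/"

    @task
    def train_model(feature_path: str) -> str:
        # Placeholder: launch training (e.g. a container on EKS) against
        # the features and return a model artifact reference.
        return f"model trained from {feature_path}"

    @task
    def validate_and_deploy(model_ref: str) -> None:
        # Placeholder: run validation gates, then promote the model.
        print(f"deploying {model_ref}")

    validate_and_deploy(train_model(extract_features()))

daily_model_refresh()
```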
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using JupyterHub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
We are looking for a skilled Node.js software developer to work on and scale education ERP solutions. The ideal candidate should have strong hands-on experience with Node.js, databases, and ERP modules. Basic to moderate experience in PHP (Laravel/Core PHP) will be considered an added advantage.
This role offers the benefit of permanent work from home (6 days working).
Key Responsibilities
- Design, develop, and maintain scalable Education ERP modules using Node.js.
- Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fee management, and finance.
- Develop and optimize REST APIs / GraphQL services and handle system integrations.
- Ensure performance, scalability, and security of high-usage ERP systems.
- Collaborate with frontend and product teams for smooth feature delivery.
- Perform code reviews, follow best coding practices, and guide junior developers.
- Continuously explore and suggest improvements using modern backend technologies.
Required Skills & Qualifications
- Strong expertise in Node.js.
- Working knowledge of PHP (Laravel / Core PHP)
- Proficiency in databases: MySQL and MongoDB (PostgreSQL is a plus).
- Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
- Frontend basics: JavaScript, HTML, CSS, and jQuery (React/Vue is a plus).
- Hands-on experience with Git/GitHub, Docker, and CI/CD pipelines.
- Understanding of scalable architecture and secure backend systems.
- 3+ years of overall experience, with at least 2 years working on ERP or large-scale systems.
Preferred Experience
- Prior experience in Education ERP systems.
- Strong understanding of HR, Exams, Inventory, LMS, Admissions, Fees, and Finance modules.
- Experience working on high-traffic or enterprise-level applications.
- Exposure to cloud platforms (AWS / Azure / GCP) is an added advantage.
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is looking for enthusiastic Technology Interns from Tier 1 colleges who are eager to build hands-on experience across web technologies, cloud platforms, and emerging technologies such as AI/ML. This role is ideal for final-year students (2026 pass-outs) or fresh graduates seeking real-world exposure in a fast-growing, technology-driven organization.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final-semester students pursuing graduation (2026 pass-outs) or fresh graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
Technical & Development Skills
- Basic understanding of AI / Machine Learning concepts
- Exposure to AWS (deployment or cloud fundamentals)
- PHP development
- WordPress development and customization
- JavaScript (ES5 / ES6+)
- jQuery
- AJAX calls and asynchronous handling
- Event handling
- HTML5 & CSS3
- Client-side form validation
Work Environment & Tools
- Comfortable working in a remote setup
- Familiarity with collaboration and remote access tools
Additional Requirements (Work-from-Home Setup)
This opportunity promotes a healthy work-life balance with remote work flexibility. Candidates must have the following minimum infrastructure:
- System: Laptop or Desktop (Windows-based)
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
Palcode.ai is an AI-first platform built to solve real, high-impact problems in the construction and preconstruction ecosystem. We work at the intersection of AI, product execution, and domain depth, and are backed by leading global ecosystems.
Role: Full Stack Developer
Industry Type: Software Product
Department: Engineering - Software & QA
Employment Type: Full Time, Permanent
Role Category: Software Development
Education
UG: Any Graduate
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern Microsoft technologies. This role is ideal for final-year students (2026 pass-outs) who want hands-on experience in building scalable web applications while maintaining a healthy work-life balance through remote work opportunities.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final-semester students (2026 pass-outs) or fresh graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
1. Technical Skills (Must Have)
- Experience with .NET Core (.NET 6 / 7 / 8)
- Strong knowledge of C#, including:
- Object-Oriented Programming (OOP) concepts
- async/await
- LINQ
- ASP.NET Core (Web API / MVC)
2. Database Skills
- SQL Server (preferred)
- Writing complex SQL queries, joins, and subqueries
- Stored Procedures, Functions, and Indexes
- Database design and performance tuning
- Entity Framework Core
- Migrations and transaction handling
3. Frontend Skills (Required)
- JavaScript (ES5 / ES6+)
- jQuery
- DOM manipulation
- AJAX calls
- Event handling
- HTML5 & CSS3
- Client-side form validation
4. Security & Performance
- Data validation and exception handling
- Caching concepts (In-memory / Redis – good to have)
5. Tools & Environment
- Visual Studio / VS Code
- Git (GitHub / Azure DevOps)
- Basic knowledge of server deployment
6. Good to Have (Optional)
- Azure or AWS deployment experience
- CI/CD pipelines
- Docker
- Experience with data handling
Additional Requirements (Work-from-Home Setup)
This role supports remote work. Candidates must ensure the following minimum infrastructure requirements:
- Laptop/Desktop: Windows-based system
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. For over 20 years, ARDEM has successfully delivered high-quality outsourcing and automation services to clients across the USA and Canada.
We are growing rapidly and continuously innovating to become a better service provider for our customers. Our mission is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company in the industry.
Senior Full Stack Developer – Analytics Dashboard
Job Summary
We are seeking an experienced Full Stack Developer to design and build a scalable, data-driven analytics dashboard platform. The role involves developing a modern web application that integrates with multiple external data sources, processes large datasets, and presents actionable insights through interactive dashboards.
The ideal candidate should be comfortable working across the full stack and have strong experience in building analytical or reporting systems.
Key Responsibilities
- Design and develop a full-stack web application using modern technologies.
- Build scalable backend APIs to handle data ingestion, processing, and storage.
- Develop interactive dashboards and data visualisations for business reporting.
- Implement secure user authentication and role-based access.
- Integrate with third-party APIs using OAuth and REST protocols.
- Design efficient database schemas for analytical workloads.
- Implement background jobs and scheduled tasks for data syncing.
- Ensure performance, scalability, and reliability of the system.
- Write clean, maintainable, and well-documented code.
- Collaborate with product and design teams to translate requirements into features.
Required Technical Skills
Frontend
- Strong experience with React.js
- Experience with Next.js
- Knowledge of modern UI frameworks (Tailwind, MUI, Ant Design, etc.)
- Experience building dashboards using chart libraries (Recharts, Chart.js, D3, etc.)
Backend
- Strong experience with Node.js (Express or NestJS)
- REST and/or GraphQL API development
- Background job systems (cron, queues, schedulers)
- Experience with OAuth-based integrations
Database
- Strong experience with PostgreSQL
- Data modelling and performance optimisation
- Writing complex analytical SQL queries
DevOps / Infrastructure
- Cloud platforms (AWS)
- Docker and basic containerisation
- CI/CD pipelines
- Git-based workflows
Experience & Qualifications
- 5+ years of professional full stack development experience.
- Proven experience building production-grade web applications.
- Prior experience with analytics, dashboards, or data platforms is highly preferred.
- Strong problem-solving and system design skills.
- Comfortable working in a fast-paced, product-oriented environment.
Nice to Have (Bonus Skills)
- Experience with data pipelines or ETL systems.
- Knowledge of Redis or caching systems.
- Experience with SaaS products or B2B platforms.
- Basic understanding of data science or machine learning concepts.
- Familiarity with time-series data and reporting systems.
- Familiarity with the Meta Ads / Google Ads APIs
Soft Skills
- Strong communication skills.
- Ability to work independently and take ownership.
- Attention to detail and focus on code quality.
- Comfortable working with ambiguous requirements.
Ideal Candidate Profile (Summary)
A senior-level full stack engineer who has built complex web applications, understands data-heavy systems, and enjoys creating analytical products with a strong focus on performance, scalability, and user experience.
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
About the role:
As a DevOps Engineer, you will play a critical role in bridging the gap between development, operations, and security teams to enable fast, secure, and reliable software delivery. With 5+ years of hands-on experience, you will be responsible for designing, implementing, and maintaining scalable, automated, and cloud-native infrastructure solutions.
Key Requirements:
- 5+ years of hands-on experience in DevOps or Cloud Engineering roles.
- Strong expertise in at least one public cloud provider (AWS / Azure / GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Solid experience with Kubernetes and containerized applications.
- Strong knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD).
- Scripting/programming skills in Python, Shell, or Go for automation.
- Hands-on experience with monitoring, logging, and incident management.
- Familiarity with security practices in DevOps (secrets management, IAM, vulnerability scanning).
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Experience: 8+ Years
Work Mode: Remote
Engagement: Full-time / Freelancer
Dual Project: Acceptable
Job Description:
We are looking for an experienced AWS Cloud Engineer II with strong hands-on system engineering expertise in AWS production environments.
Key Responsibilities and Skills:
Hands-on experience in AWS system engineering with a strong focus on Amazon RDS, including performance tuning, backups, restores, Multi-AZ configurations, and read replicas
Strong experience in application troubleshooting across AWS services including EC2, ALB, VPC, and IAM
Expertise in log analysis and monitoring using AWS CloudWatch (a small boto3 metrics sketch is shown below)
Ability to troubleshoot connectivity issues, latency problems, and service dependencies
Experience in end-to-end root cause analysis and production issue resolution
Strong understanding of AWS networking and security best practices
Ability to work independently in a remote setup and handle production-level issues
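A small boto3 sketch of pulling recent RDS CPU metrics from CloudWatch, the kind of first check used during latency or connectivity troubleshooting and root cause analysis. The DB instance identifier and the 80% flag are illustrative assumptions, not values from this posting.

```python
"""Minimal sketch: recent RDS CPU metrics from CloudWatch with boto3."""
from datetime import datetime, timedelta, timezone
import boto3

DB_INSTANCE = "prod-orders-db"  # hypothetical RDS instance identifier

def recent_cpu_average(hours: int = 3) -> list[tuple[datetime, float]]:
    cw = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": DB_INSTANCE}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,                 # 5-minute datapoints
        Statistics=["Average"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    return [(p["Timestamp"], p["Average"]) for p in points]

if __name__ == "__main__":
    for ts, avg in recent_cpu_average():
        flag = "  <-- investigate" if avg > 80 else ""
        print(f"{ts:%H:%M} UTC  cpu={avg:5.1f}%{flag}")
```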
Preferred Qualifications:
Experience working in high-availability and production-critical environments
Strong analytical and problem-solving skills
Good communication skills for collaborating with cross-functional teams
We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines & Microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
Hands-on experience with Microsoft Azure core services including Virtual Machines, storage, networking, and identity management
Strong expertise in Azure RDS / Azure Virtual Desktop (AVD) deployment, configuration, and performance tuning
Solid systems engineering background with Windows Server administration, Active Directory, GPO, DNS, and basic Linux management
Proficiency in automation and scripting, primarily using PowerShell, with working knowledge of Azure CLI and Infrastructure as Code (ARM/Bicep/Terraform)
Strong Full stack/Backend engineer profile
Mandatory (Experience): Must have 2+ years of hands-on experience as a full stack developer (backend-heavy)
Mandatory (Backend Skills): Must have 1.5+ years of strong experience in Python, building REST APIs, and microservices-based architectures
Mandatory (Frontend Skills): Must have hands-on experience with modern frontend frameworks (React or Vue) and JavaScript, HTML, and CSS
Mandatory (Database Skills): Must have solid experience working with relational and NoSQL databases such as MySQL, MongoDB, and Redis
Mandatory (Cloud & Infra): Must have hands-on experience with AWS services including EC2, ELB, AutoScaling, S3, RDS, CloudFront, and SNS
Mandatory (DevOps & Infra): Must have working experience with Linux environments, Apache, CI/CD pipelines, and application monitoring
Mandatory (CS Fundamentals): Must have strong fundamentals in Data Structures, Algorithms, OS concepts, and system design
Mandatory (Company): Product companies (B2B SaaS preferred)
- Hands-on experience implementing and managing DLP solutions in AWS and Azure
- Strong expertise in data classification, labeling, and protection (e.g., Microsoft Purview, AIP)
- Experience designing and enforcing DLP policies across cloud storage, email, endpoints, and SaaS apps
- Proficient in monitoring, investigating, and remediating data leakage incidents

US-based large biotech company with worldwide operations.
Senior Cloud Engineer Job Description
Position Title: Senior Cloud Engineer -- AWS [LONG TERM-CONTRACT POSITION]
Location: Remote [REQUIRES WORKING IN CST TIME ZONE]
Position Overview
The Senior Cloud Engineer will play a critical role in designing, deploying, and managing scalable, secure, and highly available cloud infrastructure across multiple platforms (AWS, Azure, Google Cloud). This role requires deep technical expertise, leadership in cloud strategy, and hands-on experience with automation, DevOps practices, and cloud-native technologies. The ideal candidate will work collaboratively with cross-functional teams to deliver robust cloud solutions, drive best practices, and support business objectives through innovative cloud engineering.
Key Responsibilities
Design, implement, and maintain cloud infrastructure and services, ensuring high availability, performance, and security across multi-cloud environments (AWS, Azure, GCP)
Develop and manage Infrastructure as Code (IaC) using tools such as Terraform, CloudFormation, and Ansible for automated provisioning and configuration
Lead the adoption and optimization of DevOps methodologies, including CI/CD pipelines, automated testing, and deployment processes
Collaborate with software engineers, architects, and stakeholders to architect cloud-native solutions that meet business and technical requirements
Monitor, troubleshoot, and optimize cloud systems for cost, performance, and reliability, using cloud monitoring and logging tools
Ensure cloud environments adhere to security best practices, compliance standards, and governance policies, including identity and access management, encryption, and vulnerability management
Mentor and guide junior engineers, sharing knowledge and fostering a culture of continuous improvement and innovation
Participate in on-call rotation and provide escalation support for critical cloud infrastructure issues
Document cloud architectures, processes, and procedures to ensure knowledge transfer and operational excellence
Stay current with emerging cloud technologies, trends, and best practices
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field, or equivalent work experience
- 6–10 years of experience in cloud engineering or related roles, with a proven track record in large-scale cloud environments
- Deep expertise in at least one major cloud platform (AWS, Azure, Google Cloud) and experience in multi-cloud environments
- Strong programming and scripting skills (Python, Bash, PowerShell, etc.) for automation and cloud service integration
- Proficiency with DevOps tools and practices, including CI/CD (Jenkins, GitLab CI), containerization (Docker, Kubernetes), and configuration management (Ansible, Chef)
- Solid understanding of networking concepts (VPC, VPN, DNS, firewalls, load balancers), system administration (Linux/Windows), and cloud storage solutions
- Experience with cloud security, governance, and compliance frameworks
- Excellent analytical, troubleshooting, and root cause analysis skills
- Strong communication and collaboration abilities, with experience working in agile, interdisciplinary teams
- Ability to work independently, manage multiple priorities, and lead complex projects to completion
Preferred Qualifications
- Relevant cloud certifications (e.g., AWS Certified Solutions Architect, AWS DevOps Engineer, Microsoft AZ-300/400/500, Google Professional Cloud Architect)
- Experience with cloud cost optimization and FinOps practices
- Familiarity with monitoring/logging tools (CloudWatch, Kibana, Logstash, Datadog, etc.)
- Exposure to cloud database technologies (SQL, NoSQL, managed database services)
- Knowledge of cloud migration strategies and hybrid cloud architectures