50+ AWS (Amazon Web Services) Jobs in India
Build, deploy, and maintain production-grade AI/ML solutions for Fortune 500 enterprise clients on Google Cloud Platform. Hands-on role focused on shipping scalable AI systems across GenAI, agentic workflows, traditional ML, and computer vision.
Key Responsibilities:
Generative AI & Agentic Systems
- Design and build GenAI applications (RAG, agentic workflows, multi-agent systems), as sketched after this list
- Develop intelligent systems with memory, planning, and reasoning capabilities
- Implement prompt engineering, context optimization, and evaluation frameworks
- Build observable and reliable multi-agent architectures
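A minimal, illustrative sketch of the retrieval-and-prompt-assembly step behind the RAG work referenced above; TF-IDF stands in for an embedding store, and the LLM call is left as a hypothetical placeholder:

```python
# Minimal RAG sketch: retrieve top-k passages and assemble a grounded prompt.
# TF-IDF stands in for an embedding store; call_llm would be a hypothetical LLM client.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCS = [
    "Invoices over $10,000 require two-level approval.",
    "Refunds are processed within 5 business days.",
    "Premium customers get a dedicated support channel.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    vec = TfidfVectorizer().fit(DOCS + [query])
    doc_m, q_m = vec.transform(DOCS), vec.transform([query])
    scores = cosine_similarity(q_m, doc_m)[0]
    top = scores.argsort()[::-1][:k]
    return [DOCS[i] for i in top]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

if __name__ == "__main__":
    print(build_prompt("How long do refunds take?"))
    # response = call_llm(build_prompt(...))  # hypothetical LLM client call
```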
Traditional ML & Computer Vision
- Develop ML pipelines (forecasting, recommendation, classification, regression)
- Build production-grade computer vision solutions (document AI, image analysis)
- Perform feature engineering, model optimization, and benchmarking
MLOps & Production Engineering
- Own end-to-end ML lifecycle (CI/CD, testing, versioning, deployment)
- Build scalable APIs, microservices, and data pipelines
- Monitor models, detect drift, and implement A/B testing frameworks
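As a hedged illustration of the drift-monitoring work above, one common signal is a two-sample Kolmogorov–Smirnov test between training-time and live feature distributions (the data below is simulated):

```python
# Simple data-drift check: compare a live feature sample against the training
# reference with a two-sample Kolmogorov-Smirnov test (one common drift signal).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time feature values
live = rng.normal(loc=0.3, scale=1.0, size=1_000)        # recent production values

statistic, p_value = stats.ks_2samp(reference, live)
DRIFT_ALPHA = 0.01  # alerting threshold; tune per feature in practice
if p_value < DRIFT_ALPHA:
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.2e}) -> trigger retraining/alert")
else:
    print("No significant drift detected")
```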
Knowledge Solutions
- Architect knowledge graphs and semantic search systems
- Implement hybrid retrieval (vector + keyword search)
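A small sketch of one way hybrid retrieval can be assembled, merging a keyword ranking and a vector ranking with Reciprocal Rank Fusion; the document IDs are hypothetical:

```python
# Hybrid retrieval sketch: merge a keyword (BM25-style) ranking and a vector
# ranking with Reciprocal Rank Fusion. Document IDs here are hypothetical.
def rrf(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]   # from a keyword/BM25 index
vector_hits = ["doc1", "doc9", "doc3"]    # from an embedding / vector index

print(rrf([keyword_hits, vector_hits]))   # doc1 and doc3 float to the top
```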
Client Collaboration
- Present technical solutions to enterprise clients
- Collaborate with architects, data engineers, and business teams
Required Skills & Experience
- 3–6 years of hands-on ML Engineering experience
- Strong Python and software engineering fundamentals
- Experience shipping production ML systems on cloud (GCP preferred)
- Experience across GenAI, Traditional ML, Computer Vision
- Experience with MLOps and RAG-based systems
Preferred
- GCP Professional ML Engineer certification
- Knowledge graphs / semantic search experience
- Experience in regulated industries (Healthcare / BFSI)
- Open-source or technical publications
We are looking for an experienced AI Technical Architect who can design and lead end-to-end AI/ML solutions, define scalable architecture, and guide development teams in building intelligent applications aligned with business goals.
Key Responsibilities:
- Design AI/ML architecture and technical solutions.
- Lead AI strategy, model deployment, and integration.
- Build scalable AI pipelines and cloud-based solutions.
- Work closely with data scientists, developers, and stakeholders.
- Ensure best practices in MLOps, automation, and performance optimization.
- Evaluate new AI technologies and frameworks.
JOB DETAILS:
* Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum, Thiruvananthapuram
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
Skills: java, spring boot, kafka development, cicd, postgresql, gitlab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
Job Summary
We are looking for a skilled AWS DevOps Engineer with 5+ years of experience to design, implement, and manage scalable, secure, and highly available cloud infrastructure. The ideal candidate should have strong expertise in CI/CD, automation, containerization, and cloud-native deployments on Amazon Web Services.
Key Responsibilities
- Design, build, and maintain scalable infrastructure on AWS
- Implement and manage CI/CD pipelines for automated build, test, and deployment
- Automate infrastructure using Infrastructure as Code (IaC) tools
- Monitor system performance, availability, and security
- Manage containerized applications and orchestration platforms
- Troubleshoot production issues and ensure high availability
- Collaborate with development teams for DevOps best practices
- Implement logging, monitoring, and alerting systems
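As a hedged sketch of the monitoring and alerting responsibility above, the snippet below creates a CloudWatch alarm with boto3; the alarm name, instance ID, and SNS topic ARN are placeholders, and configured AWS credentials are assumed:

```python
# Alerting sketch: create a CloudWatch alarm on EC2 CPU utilisation with boto3.
# The alarm name, SNS topic ARN, and instance ID are hypothetical placeholders;
# AWS credentials/region are assumed to be configured in the environment.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="web-tier-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # 5-minute evaluation window
    EvaluationPeriods=3,       # must breach for 3 consecutive periods
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    AlarmDescription="CPU above 80% for 15 minutes",
)
```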
Required Skills
- Strong hands-on experience with AWS services (EC2, S3, RDS, VPC, IAM, Lambda, CloudWatch)
- Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI/CD
- Infrastructure as Code using Terraform / CloudFormation
- Containerization using Docker
- Container orchestration using Kubernetes / EKS
- Scripting knowledge in Python / Bash / Shell
- Experience with monitoring tools (CloudWatch, Prometheus, Grafana)
- Strong understanding of Linux systems and networking
- Experience with Git and version control
Good to Have
- Experience with configuration management tools (Ansible, Chef, Puppet)
- Knowledge of microservices architecture
- Experience with security best practices and DevSecOps
- AWS Certification (Solutions Architect / DevOps Engineer)
- Experience working in Agile/Scrum teams
Job Description
We are looking for a Data Scientist with 3–5 years of experience in data analysis, statistical modeling, and machine learning to drive actionable business insights. This role involves translating complex business problems into analytical solutions, building and evaluating ML models, and communicating insights through compelling data stories. The ideal candidate combines strong statistical foundations with hands-on experience across modern data platforms and cross-functional collaboration.
What will you need to be successful in this role?
Core Data Science Skills
• Strong foundation in statistics, probability, and mathematical modeling
• Expertise in Python for data analysis (NumPy, Pandas, Scikit-learn, SciPy)
• Strong SQL skills for data extraction, transformation, and complex analytical queries
• Experience with exploratory data analysis (EDA) and statistical hypothesis testing
• Proficiency in data visualization tools (Matplotlib, Seaborn, Plotly, Tableau, or Power BI)
• Strong understanding of feature engineering and data preprocessing techniques
• Experience with A/B testing, experimental design, and causal inference
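A minimal sketch of the A/B-testing work listed above, using a Welch two-sample t-test on a simulated continuous metric:

```python
# A/B test sketch: compare a continuous metric (e.g. revenue per session)
# between control and variant with a Welch two-sample t-test. Data is simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=3.0, size=4_000)
variant = rng.normal(loc=10.4, scale=3.0, size=4_000)

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
lift = variant.mean() / control.mean() - 1
print(f"lift={lift:+.1%}, t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at alpha=0.05")
```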
Machine Learning & Analytics
• Strong experience building and deploying ML models (regression, classification, clustering)
• Knowledge of ensemble methods, gradient boosting (XGBoost, LightGBM, CatBoost)
• Understanding of time series analysis and forecasting techniques
• Experience with model evaluation metrics and cross-validation strategies
• Familiarity with dimensionality reduction techniques (PCA, t-SNE, UMAP)
• Understanding of bias-variance tradeoff and model interpretability
• Experience with hyperparameter tuning and model optimization
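A short, illustrative example of cross-validated hyperparameter tuning (listed above) using scikit-learn's GridSearchCV on synthetic data; the parameter grid is arbitrary:

```python
# Hyperparameter tuning sketch: grid search with 5-fold cross-validation over a
# gradient-boosting classifier on synthetic data (illustrative settings only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "learning_rate": [0.05, 0.1],
    "max_depth": [2, 3],
}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    cv=5,
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X, y)
print("best params:", search.best_params_)
print("best CV AUC:", round(search.best_score_, 4))
```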
GenAI & Advanced Analytics
• Working knowledge of LLMs and their application to business problems
• Experience with prompt engineering for analytical tasks
• Understanding of embeddings and semantic similarity for analytics
• Familiarity with NLP techniques (text classification, sentiment analysis, entity extraction)
• Experience integrating AI/ML models into analytical workflows
Data Platforms & Tools
• Experience with cloud data platforms (Snowflake, Databricks, BigQuery)
• Proficiency in Jupyter notebooks and collaborative development environments
• Familiarity with version control (Git) and collaborative workflows
• Experience working with large datasets and distributed computing (Spark/PySpark)
• Understanding of data warehousing concepts and dimensional modeling
• Experience with cloud platforms (AWS, Azure, or GCP)
Business Acumen & Communication
• Strong ability to translate business problems into analytical frameworks
• Experience presenting complex analytical findings to non-technical stakeholders
• Ability to create compelling data stories and visualizations
• Track record of driving business decisions through data-driven insights
• Experience working with cross-functional teams (Product, Engineering, Business)
• Strong documentation skills for analytical methodologies and findings
Good to have
• Experience with deep learning frameworks (TensorFlow, PyTorch, Keras)
• Knowledge of reinforcement learning and optimization techniques
• Familiarity with graph analytics and network analysis
• Experience with MLOps and model deployment pipelines
• Understanding of model monitoring and performance tracking in production
• Knowledge of AutoML tools and automated feature engineering
• Experience with real-time analytics and streaming data
• Familiarity with causal ML and uplift modeling
• Publications or contributions to data science community
• Kaggle competitions or open-source contributions
• Experience in specific domains (finance, healthcare, e-commerce)
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of the company, a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures, as sketched after this list
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
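A minimal vector-search sketch using FAISS, one of the vector stores named above; random vectors stand in for real embeddings, and the faiss-cpu package is assumed to be installed:

```python
# Vector-search sketch with FAISS: cosine similarity via inner product on
# L2-normalised vectors. Random vectors stand in for real embeddings.
import faiss
import numpy as np

dim = 384
rng = np.random.default_rng(0)
doc_vectors = rng.random((1_000, dim), dtype=np.float32)
faiss.normalize_L2(doc_vectors)

index = faiss.IndexFlatIP(dim)      # exact inner-product index
index.add(doc_vectors)

query = rng.random((1, dim), dtype=np.float32)
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)
print("top-5 doc ids:", ids[0], "scores:", scores[0])
```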
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
TITLE - PHP Developer:
JOB LOCATION - Mumbai
Please send resumes to admin at hansinfotech dot in or hr.hansinfotech at gmail, or connect with Hans InfoTech on LinkedIn.
JOB DESCRIPTION
We are looking to fill a PHP Developer position requiring more than 3 years of experience in the skill set mentioned below.
Skills & Qualifications
- Basic understanding of front-end technologies and platforms, such as JavaScript, HTML5, and CSS3. Good understanding of server-side CSS preprocessors, such as LESS and SASS
- Understanding accessibility and security compliance
- User authentication and authorization between multiple systems, servers, and environments
- Integration of multiple data sources and databases into one system
- Management of hosting environment, including database administration and scaling an application to support load changes
- Data migration, transformation, and scripting
- Setup and administration of backups
- Outputting data in different formats
- Understanding differences between multiple delivery platforms such as mobile vs desktop, and optimizing output to match the specific platform
- Creating database schemas that represent and support business processes
- Implementing automated testing platforms and unit tests
- Proficient knowledge of a back-end programming language.
- Proficient understanding of code versioning tools, such as Git
- Proficient understanding of OWASP security principles
- Understanding of “session management” in a distributed server environment
- Qualification: BE (CS/IT), BCA, MCA, BSc (CS/IT), or MSc (CS/IT)
Job Role: Teamcenter Admin
• Teamcenter and CAD (NX) Configuration Management
• Advanced debugging and root-cause analysis beyond L2
• Code fixes and minor defect remediation
• AWS knowledge, which is foundational to our Teamcenter architecture
• Experience supporting weekend and holiday code deployments
• Operational administration (break/fix, handling ticket escalations, problem management)
• Support for project activities
• Deployment and code release support
• Hypercare support following deployment, which is expected to onboard approximately 1,000+ additional users
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit, as sketched after this list.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
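A hedged sketch of the large-file scan implied by the Git LFS step above: walk a workspace, flag files over GitHub's 100 MB limit, and print suggested `git lfs track` patterns. The repository path is a placeholder:

```python
# Pre-migration sketch: walk a working copy, flag files above GitHub's 100 MB
# hard limit, and print suggested `git lfs track` patterns. The repository path
# is a hypothetical placeholder.
import os

LIMIT_BYTES = 100 * 1024 * 1024
REPO_ROOT = "/path/to/perforce/workspace"   # placeholder

large_files = []
for dirpath, dirnames, filenames in os.walk(REPO_ROOT):
    dirnames[:] = [d for d in dirnames if d != ".git"]   # skip Git metadata
    for name in filenames:
        path = os.path.join(dirpath, name)
        if os.path.getsize(path) > LIMIT_BYTES:
            large_files.append(path)

for path in sorted(large_files):
    ext = os.path.splitext(path)[1] or os.path.basename(path)
    print(f'{path} exceeds 100 MB -> consider: git lfs track "*{ext}"')
```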
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Head of Engineering/Senior Product Manager
* Industry: Digital transformation excellence provider
* Salary: Best in Industry
* Experience: 12-20 years
* Location: Mumbai
Job Description
Role Overview
The VP / Head of Technology will lead company’s technology function across engineering, product development, cloud infrastructure, security, and AI-led initiatives. This role focuses on delivering scalable, high-quality technology solutions across company’s core verticals including eCommerce, Procurement & e-Sourcing, ERP integrations, Sustainability/ESG, and Business Services.
This leader will drive execution, ensure technical excellence, modernize platforms, and collaborate closely with business and delivery teams.
Roles and Responsibilities:
Technology Execution & Architecture Leadership
· Own and execute the technology roadmap aligned with business goals.
· Build and maintain scalable architecture supporting multiple verticals.
· Enforce engineering best practices, code quality, performance, and security.
· Lead platform modernization including microservices, cloud-native architecture, API-first systems, and integration frameworks.
Product & Engineering Delivery
· Manage multi-product engineering teams across eCommerce platforms, procurement systems, ERP integrations, analytics, and ESG solutions.
· Own the full SDLC — requirements, design, development, testing, deployment, support.
· Implement Agile, DevOps, CI/CD for faster releases and improved reliability.
· Oversee product/platform interoperability across all company systems.
Vertical-Specific Technology Leadership
Procurement Tech:
· Lead architecture and enhancements of procurement and indirect spend platforms.
· Ensure interoperability with SAP Ariba, Coupa, Oracle, MS Dynamics, etc.
eCommerce:
· Drive development of scalable B2B/B2C commerce platforms, headless commerce, marketplace integrations, and personalization capabilities.
Sustainability/ESG:
· Support development of GHG tracking, reporting systems, and sustainability analytics platforms.
Business Services:
· Enhance operational platforms with automation, workflow management, dashboards, and AI-driven efficiency tools.
Data, Cloud, Security & Infrastructure
· Own cloud infrastructure strategy (Azure/AWS/GCP).
· Ensure adherence to compliance standards (SOC2, ISO 27001, GDPR).
· Lead cybersecurity policies, monitoring, threat detection, and recovery planning.
· Drive observability, cost optimization, and system scalability.
AI, Automation & Innovation
· Integrate AI/ML, analytics, and automation into product platforms and service delivery.
· Build frameworks for workflow automation, supplier analytics, personalization, and operational efficiency.
· Lead R&D for emerging tech aligned to business needs.
Leadership & Team Management
· Lead and mentor engineering managers, architects, developers, QA, and DevOps.
· Drive a culture of ownership, innovation, continuous learning, and performance accountability.
· Build capability development frameworks and internal talent pipelines.
Stakeholder Collaboration
· Partner with Sales, Delivery, Product, and Business Teams to align technology outcomes with customer needs.
· Ensure transparent reporting on project status, risks, and technology KPIs.
· Manage vendor relationships, technology partnerships, and external consultants.
Education, Training, Skills, and Experience Requirements:
Experience & Background
· 16+ years in technology execution roles, including 5–7 years in senior leadership.
· Strong background in multi-product engineering for B2B platforms or enterprise systems.
· Proven delivery experience across: eCommerce, ERP integrations, procurement platforms, ESG solutions, and automation.
Technical Skills
· Expertise in cloud platforms (Azure/AWS/GCP), microservices architecture, API frameworks.
· Strong grasp of procurement tech, ERP integrations, eCommerce platforms, and enterprise-scale systems.
· Hands-on exposure to AI/ML, automation tools, data engineering, and analytics stacks.
· Strong understanding of security, compliance, scalability, performance engineering.
Leadership Competencies
· Execution-focused technology leadership.
· Strong communication and stakeholder management skills.
· Ability to lead distributed teams, manage complexity, and drive measurable outcomes.
· Innovation mindset with practical implementation capability.
Education
· Bachelor’s or Master’s in Computer Science/Engineering or equivalent.
· Additional leadership education (MBA or similar) is a plus, not mandatory.
Travel Requirements
· Occasional travel for client meetings, technology reviews, or global delivery coordination.
Must-Haves
· 10+ years of technology experience, with at least 6 years leading large (50-100+) multi-product engineering teams.
· Must have worked on B2B platforms, with experience in Procurement Tech or Supply Chain.
· Min. 10+ Years of Expertise in Cloud-Native Architecture, Expert-level design in Azure, AWS, or GCP using Microservices, Kubernetes (K8s), and Docker.
· Min. 8+ Years of Expertise in Modern Engineering Practices, Advanced DevOps, CI/CD pipelines, and automated testing frameworks (Selenium, Cypress, etc.).
· Hands-on leadership experience in Security & Compliance.
· Min. 3+ Years of Expertise in AI & Data Engineering, Practical implementation of LLMs, Predictive Analytics, or AI-driven automation
· Strong technology execution leadership, with ownership of end-to-end technology roadmaps aligned to business outcomes.
· Min. 6+ Years of Expertise in B2B eCommerce: architecture of headless commerce, marketplace integrations, and complex B2B catalog management.
· Strong product management exposure
· Proven experience in leading end-to-end team operations
· Relevant experience in product-driven organizations or platforms
· Strong Subject Matter Expertise (SME)
Education: Master's degree.
**************
Joining time / Notice Period: Immediate to 45 days.
Location: Andheri
5 days working (2–3 days' work from office)
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

JOB DETAILS:
* Job Title: Lead II - Software Engineering - AWS, Apache Spark (PySpark/Scala), Apache Kafka
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5-8 years
* Location: Hyderabad
Job Summary
We are seeking a skilled Data Engineer to design, build, and optimize scalable data pipelines and cloud-based data platforms. The role involves working with large-scale batch and real-time data processing systems, collaborating with cross-functional teams, and ensuring data reliability, security, and performance across the data lifecycle.
Key Responsibilities
ETL Pipeline Development & Optimization
- Design, develop, and maintain complex end-to-end ETL pipelines for large-scale data ingestion and processing.
- Optimize data pipelines for performance, scalability, fault tolerance, and reliability.
Big Data Processing
- Develop and optimize batch and real-time data processing solutions using Apache Spark (PySpark/Scala) and Apache Kafka, as sketched below.
- Ensure fault-tolerant, scalable, and high-performance data processing systems.
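A minimal Spark Structured Streaming sketch for the Kafka processing described above; the broker, topic, bucket, and schema are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath:

```python
# Streaming sketch: read JSON events from Kafka with Spark Structured Streaming
# and land them on S3 as Parquet. Broker, topic, bucket, and schema are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-events-to-s3").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
    .select(F.from_json("json", schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://data-lake/raw/orders/")
    .option("checkpointLocation", "s3a://data-lake/checkpoints/orders/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```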
Cloud Infrastructure Development
- Build and manage scalable, cloud-native data infrastructure on AWS.
- Design resilient and cost-efficient data pipelines adaptable to varying data volume and formats.
Real-Time & Batch Data Integration
- Enable seamless ingestion and processing of real-time streaming and batch data sources (e.g., AWS MSK).
- Ensure consistency, data quality, and a unified view across multiple data sources and formats.
Data Analysis & Insights
- Partner with business teams and data scientists to understand data requirements.
- Perform in-depth data analysis to identify trends, patterns, and anomalies.
- Deliver high-quality datasets and present actionable insights to stakeholders.
CI/CD & Automation
- Implement and maintain CI/CD pipelines using Jenkins or similar tools.
- Automate testing, deployment, and monitoring to ensure smooth production releases.
Data Security & Compliance
- Collaborate with security teams to ensure compliance with organizational and regulatory standards (e.g., GDPR, HIPAA).
- Implement data governance practices ensuring data integrity, security, and traceability.
Troubleshooting & Performance Tuning
- Identify and resolve performance bottlenecks in data pipelines.
- Apply best practices for monitoring, tuning, and optimizing data ingestion and storage.
Collaboration & Cross-Functional Work
- Work closely with engineers, data scientists, product managers, and business stakeholders.
- Participate in agile ceremonies, sprint planning, and architectural discussions.
Skills & Qualifications
Mandatory (Must-Have) Skills
- AWS Expertise
- Hands-on experience with AWS Big Data services such as EMR, Managed Apache Airflow, Glue, S3, DMS, MSK, and EC2.
- Strong understanding of cloud-native data architectures.
- Big Data Technologies
- Proficiency in PySpark or Scala Spark and SQL for large-scale data transformation and analysis.
- Experience with Apache Spark and Apache Kafka in production environments.
- Data Frameworks
- Strong knowledge of Spark DataFrames and Datasets.
- ETL Pipeline Development
- Proven experience in building scalable and reliable ETL pipelines for both batch and real-time data processing.
- Database Modeling & Data Warehousing
- Expertise in designing scalable data models for OLAP and OLTP systems.
- Data Analysis & Insights
- Ability to perform complex data analysis and extract actionable business insights.
- Strong analytical and problem-solving skills with a data-driven mindset.
- CI/CD & Automation
- Basic to intermediate experience with CI/CD pipelines using Jenkins or similar tools.
- Familiarity with automated testing and deployment workflows.
Good-to-Have (Preferred) Skills
- Knowledge of Java for data processing applications.
- Experience with NoSQL databases (e.g., DynamoDB, Cassandra, MongoDB).
- Familiarity with data governance frameworks and compliance tooling.
- Experience with monitoring and observability tools such as AWS CloudWatch, Splunk, or Dynatrace.
- Exposure to cost optimization strategies for large-scale cloud data platforms.
Skills: big data, scala spark, apache spark, ETL pipeline development
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
Note: If a candidate can join at short notice, is based in Hyderabad, and fits within the approved budget, we will proceed with an offer.
F2F Interview: 14th Feb 2026
3 days in office, Hybrid model.
💼 Job Title: Full Stack Developer (experienced only)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹10,000 - ₹15,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM (5 days working )
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend )
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, Redux, React.js for front-end development. ✅
- Understanding of server-side languages such as Node.js, Python. ✅
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL. ✅
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Individuals with full-time work experience (1 to 2 years) in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍
Role Overview
We are hiring on behalf of Humming Apps Technologies LLP, which is seeking a Senior Threat Modeler to join its security team and act as a strategic bridge between architecture and defense. This role focuses on proactively identifying vulnerabilities during the design phase to ensure applications, APIs, and cloud infrastructures are secure by design.
The position requires thinking from an attacker’s perspective to analyze trust boundaries, map attack paths, and influence the overall security posture of next-generation AI-driven and cloud-native systems. The goal is not only to detect issues but to prevent risks before implementation.
Key Responsibilities
Architectural Analysis
• Lead deep-dive threat modeling sessions across applications, APIs, microservices, and cloud-native environments
• Perform detailed reviews of system architecture, data flows, and trust boundaries
Threat Modeling Frameworks & Methodologies
• Apply industry-standard frameworks including STRIDE, PASTA, ATLAS, and MITRE ATT&CK
• Identify sophisticated attack vectors and model realistic threat scenarios
Security Design & Risk Mitigation
• Detect weaknesses during the design stage
• Provide actionable and prioritized mitigation recommendations
• Strengthen security posture through secure-by-design principles
Collaborative Security Integration
• Work closely with architects and developers during design and build phases
• Embed security practices directly into the SDLC
• Ensure security is incorporated early rather than retrofitted
Communication & Enablement
• Facilitate threat modeling demonstrations and walkthroughs
• Present findings and risk assessments to stakeholders
• Translate complex technical risks into clear, business-relevant insights
• Educate teams on secure design practices and emerging threats
Required Qualifications
Experience
• 5–10 years of dedicated experience in threat modeling, product security, or application security
Technical Expertise
• Strong understanding of software architecture and distributed systems
• Experience designing and securing RESTful APIs
• Hands-on knowledge of cloud platforms such as AWS, Azure, or GCP
Modern Threat Knowledge
• Expertise in current attack vectors including OWASP Top 10
• Understanding of API-specific threats
• Awareness of emerging risks in AI/LLM-based applications
Tools & Practices
• Practical experience with threat modeling tools
• Proficiency in technical diagramming and system visualization
Communication
• Excellent written and verbal English communication skills
• Ability to collaborate across engineering teams and stakeholders in different time zones
Preferred Qualifications
• Experience in consulting or client-facing professional services roles
• Industry certifications such as CISSP, CSSLP, OSCP, or equivalent
JOB DESCRIPTION:
Role: GRC Consultant
(1–7 Years Experience)
📍 Location: Anna Salai, Mount Road, Chennai, India
🕒 Full-Time |On-site
Pentabay Software Solutions, a fast-growing IT services company based in Chennai, is hiring a GRC Consultant with 1–4 years of experience in cloud security and GRC (Governance, Risk & Compliance). This is an exciting opportunity for early-career professionals to work with modern cloud platforms, global clients, and industry-standard security frameworks.
🔑 Key Responsibilities
Secure Cloud Infrastructure
Design and deploy secure environments ensuring alignment with HIPAA, GDPR, PCI DSS, PHI, and PII standards.
Firewall & Endpoint Protection
Manage and configure firewalls, WAFs, and endpoint security solutions to protect cloud and hybrid infrastructures.
GRC & Compliance Support
Contribute to Governance, Risk, and Compliance (GRC) initiatives by helping develop and implement policies, assess risks, maintain audit trails, and support compliance reporting.
Security Audits & Risk Assessments
Assist in vulnerability assessments, penetration tests, and audits to identify and remediate risks, ensuring continuous improvement of security posture.
✅ Required Skills & Qualifications:
- 1–7 years of experience in cybersecurity, cloud security, or related areas.
- Working knowledge of GRC frameworks, including policy creation, risk identification, and compliance reporting.
- Understanding of HIPAA, GDPR, PCI DSS, and data protection best practices.
- Familiarity with firewall management, network segmentation, and endpoint protection tools.
- Hands-on expertise with cloud platforms: AWS, Azure, GCP.
- Experience working with Canadian or international clients is an added advantage.
- Strong analytical mindset and good written and verbal communication skills
Role Overview:
Challenge convention and work on cutting edge technology that is transforming the way our customers manage their physical, virtual and cloud computing environments. Virtual Instruments seeks highly talented people to join our growing team, where your contributions will impact the development and delivery of our product roadmap. Our award-winning Virtana Platform provides the only real-time, system-wide, enterprise scale solution for providing visibility into performance, health and utilization metrics, translating into improved performance and availability while lowering the total cost of the infrastructure supporting mission-critical applications.
We are seeking an individual with knowledge of Systems Management, Systems Monitoring, and/or Performance Management software and solutions, with insight into integrated infrastructure platforms like Cisco UCS, infrastructure providers like Nutanix, VMware, EMC & NetApp, and public cloud platforms like Google Cloud and AWS, to expand the depth and breadth of Virtana products.
Work Location: Pune/ Chennai
Job Type: Hybrid
Role Responsibilities:
- The engineer will be primarily responsible for design and development of software solutions for the Virtana Platform
- Partner and work closely with team leads, architects and engineering managers to design and implement new integrations and solutions for the Virtana Platform.
- Communicate effectively with people having differing levels of technical knowledge.
- Work closely with Quality Assurance and DevOps teams assisting with functional and system testing design and deployment
- Provide customers with complex application support, problem diagnosis and problem resolution
Required Qualifications:
- Minimum of 4 years of experience in a web-application-centric client-server development environment focused on Systems Management, Systems Monitoring, and Performance Management software.
- Able to understand integrated infrastructure platforms, with experience working with one or more data collection technologies such as SNMP, REST, OTEL, WMI, or WBEM.
- Minimum of 4 years of development experience with a high-level language such as Python, Java, or Go is required.
- Bachelor's (B.E., B.Tech.) or Master's degree (M.E., M.Tech., MCA) in Computer Science, Computer Engineering, or equivalent
- 2 years of development experience in a public cloud environment using Kubernetes, etc. (Google Cloud and/or AWS)
Desired Qualifications:
- Prior experience with other virtualization platforms like OpenShift is a plus
- Prior experience as a contributor to engineering and integration efforts with strong attention to detail and exposure to Open-Source software is a plus
- Demonstrated ability as a strong technical engineer who can design and code with strong communication skills
- Firsthand development experience with the development of Systems, Network and performance Management Software and/or Solutions is a plus
- Ability to use a variety of debugging tools, simulators and test harnesses is a plus
About Virtana:
Virtana delivers the industry's broadest and deepest observability platform, allowing organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more.
Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana's software solutions for over a decade.
Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (FinOps), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) software market is ripe for disruption, and Virtana is uniquely positioned for success.
About us:
Trential is engineering the future of digital identity with W3C Verifiable Credentials—secure, decentralized, privacy-first. We make identity and credentials verifiable anywhere, instantly.
We are looking for a Team lead to architect, build, and scale high-performance web applications that power our core products. You will lead the full development lifecycle—from system design to deployment—while mentoring the team and driving best engineering practices across frontend and backend stacks.
Design & Implement: Lead the design, implementation and management of Trential products.
Lead by example: Be the most senior and impactful engineer on the team, setting the technical bar through your direct contributions.
Code Quality & Best Practices: Enforce high standards for code quality, security, and performance through rigorous code reviews, automated testing, and continuous delivery pipelines.
Standards Adherence: Ensure all solutions comply with relevant open standards like W3C Verifiable Credentials (VCs), Decentralized Identifiers (DIDs) & Privacy Laws, maintaining global interoperability.
Continuous Improvement: Lead the charge to continuously evaluate and improve the products & processes. Instill a culture of metrics-driven process improvement to boost team efficiency and product quality.
Cross-Functional Collaboration: Work closely with the Co-Founders & Product Team to translate business requirements and market needs into clear, actionable technical specifications and stories. Represent Trential in interactions with external stakeholders for integrations.
What we're looking for:
Experience: 5+ years of experience in software development, with at least 2 years as a Technical Lead.
Technical Depth: Deep proficiency in JavaScript and experience in building and operating distributed, fault-tolerant systems.
Cloud & Infrastructure: Hands-on experience with cloud platforms (AWS & GCP) and modern DevOps practices (e.g., CI/CD, Infrastructure as Code, Docker).
Databases: Strong knowledge of SQL/NoSQL databases and data modeling for high-throughput, secure applications.
Preferred Qualifications (Nice to Have)
Identity & Credentials: Knowledge of decentralized identity principles, Verifiable Credentials (W3C VCs), DIDs, and relevant protocols (e.g., OpenID4VC, DIDComm)
Familiarity with data privacy and security standards (GDPR, SOC 2, ISO 27001) and designing systems complying to these laws.
Experience integrating AI/ML models into verification or data extraction workflows
Job Location: Kharadi, Pune
Job Type: Full-Time
About Us:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have 10 years of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide their operations and believe in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the "givers gain" philosophy and strive to provide value in order to seek value. We are committed to delivering top-notch solutions to our clients and are looking for a talented Web UI Developer to join our dynamic team.
Qualifications:
- Strong Experience in JavaScript and React
- Experience in building multi-tier SaaS applications with exposure to micro-services, caching, pub-sub, and messaging technologies
- Experience with design patterns
- Familiarity with UI components library (such as material-UI or Bootstrap) and RESTful APIs
- Experience with web frontend technologies such as HTML5, CSS3, LESS, Bootstrap
- A strong foundation in computer science, with competencies in data structures, algorithms, and software design
- Bachelor's / Master's Degree in CS
- Experience in Git is mandatory
- Exposure to AWS, Docker, and CI/CD systems like Jenkins is a plus
Company Description
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Role Description
This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.
Responsibilities:
● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications
● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions
● Code Reviews: Participate in code reviews to maintain high-quality standards
● Troubleshooting: Debug and resolve application issues in a timely manner
● Testing: Develop and execute unit and integration tests to ensure software reliability
● Optimize: Identify and address performance bottlenecks to enhance application performance
Qualifications & Skills:
● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA
● Familiarity with RESTful APIs and web services
● Proficiency in working with relational databases like MySQL or PostgreSQL
● Practical experience with AWS cloud services and building scalable, microservices-based architectures
● Experience with build tools like Maven or Gradle
● Understanding of version control systems, especially Git
● Strong understanding of object-oriented programming principles and design patterns
● Familiarity with automated testing frameworks and methodologies
● Excellent problem-solving skills and attention to detail
● Strong communication skills and ability to work effectively in a collaborative team environment
Why Join Us?
● Opportunity to work on cutting-edge technology products
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you
Objective of the Role:
The Application Developer’s primary objective is to design, create, and maintain integration solutions using Oracle Integration Cloud (OIC) to streamline business processes and enhance operational efficiency.
Main Role (Overall Accountability):
- Design, develop, and implement integration solutions using Oracle Integration Cloud (OIC) to connect various applications, systems, and services.
- Customize and configure OIC adapters, connectors, and components to meet specific integration requirements.
- Develop RESTful and SOAP web services for data exchange and communication between different systems.
- Demonstrate good knowledge of cloud technologies (e.g., AWS Lambda functions for integration with AWS services), as sketched after this list.
- Collaborate with business analysts and stakeholders to gather requirements and define integration workflows and data flows.
- Perform troubleshooting, debugging, and performance tuning of integration solutions to ensure optimal performance and reliability.
- Develop and maintain documentation for integration processes, interfaces, and configurations.
- Perform code reviews to ensure quality and consistency.
- Ensure adherence to coding standards, development methodologies, and security protocols throughout the software development lifecycle.
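A hedged sketch of the AWS Lambda integration mentioned above: a handler that forwards an incoming event to an OIC REST endpoint. The endpoint URL and token environment variables are hypothetical placeholders, and only the standard library is used so it runs on the stock Python runtime:

```python
# Integration sketch: an AWS Lambda handler that forwards an incoming event to
# an OIC REST endpoint. The endpoint URL and auth token env vars are hypothetical
# placeholders.
import json
import os
import urllib.request

OIC_ENDPOINT = os.environ.get(
    "OIC_ENDPOINT",
    "https://example-oic-host/ic/api/integration/v1/flows/rest/ORDER_SYNC/1.0/orders",  # placeholder
)
AUTH_TOKEN = os.environ.get("OIC_AUTH_TOKEN", "")  # e.g. an OAuth bearer token

def lambda_handler(event, context):
    payload = json.dumps(event).encode("utf-8")
    request = urllib.request.Request(
        OIC_ENDPOINT,
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {AUTH_TOKEN}",
        },
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        status = response.status
        body = response.read().decode("utf-8")
    return {"statusCode": status, "body": body}
```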
Personnel Specification
Education:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Experience:
- 5 or more years of experience in the IT industry.
- Experience in cloud-based integration platforms.
Skills and Abilities:
- Proven experience in designing, developing, and implementing integration solutions using OIC.
- Strong understanding of RESTful and SOAP web services, JSON, and other data formats.
- Experience in cloud-based integration platforms, writing AWS Lambda functions, and creating integrations across various channels.
- Strong knowledge of OIC API integrations.
- Strong understanding of SOAP-based and REST-based services.
- Strong development skills in Java.
- Strong knowledge of authentication methodologies used in integration platforms (OAuth, JWT, Basic Auth, etc.).
- Strong knowledge of OIC GEN2 and GEN3.
Note: This position is for the Bahrain region and is a full-time opportunity.
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3–5 years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Experience: 3–5 years
Notice: Immediate to 15 days
Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
Implement data governance and security best practices to ensure compliance and data integrity.
Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Qualifications :
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
3 - 5 years of prior experience in data engineering, with a focus on designing and building data pipelines.
Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
Responsibilities:
- Design, develop, and maintain efficient and reliable data pipelines.
- Identify and implement process improvements, automating manual tasks and optimizing data delivery.
- Build and maintain the infrastructure for data extraction, transformation, and loading (ETL) from diverse sources using SQL and AWS cloud technologies.
- Develop data tools and solutions to empower our analytics and data science teams, contributing to product innovation.
Qualifications:
Must Have:
- 2-3 years of experience in a Data Engineering role.
- Familiarity with data pipeline and workflow management tools (e.g., Airflow, Luigi, Azkaban).
- Experience with AWS cloud services.
- Working knowledge of object-oriented/functional scripting in Python
- Experience building and optimizing data pipelines and datasets.
- Strong analytical skills and experience working with structured and unstructured data.
- Understanding of data transformation, data structures, dimensional modeling, metadata management, schema evolution, and workload management.
- A passion for building high-quality, scalable data solutions.
Good to have:
- Experience with stream-processing systems (e.g., Spark Streaming, Flink).
- Working knowledge of message queuing, stream processing, and scalable data stores.
- Proficiency in SQL and experience with NoSQL databases like Elasticsearch and Cassandra/MongoDB.
- Experience with big data tools such as HDFS/S3, Spark/Flink, Hive, HBase, Kafka/Kinesis.
Job Description -
Profile: .Net Full Stack Lead
Experience Required: 7–12 Years
Location: Pune, Bangalore, Chennai, Coimbatore, Delhi, Hosur, Hyderabad, Kochi, Kolkata, Trivandrum
Work Mode: Hybrid
Shift: Normal Shift
Key Responsibilities:
- Design, develop, and deploy scalable microservices using .NET Core and C#
- Build and maintain serverless applications using AWS services (Lambda, SQS, SNS)
- Develop RESTful APIs and integrate them with front-end applications
- Work with both SQL and NoSQL databases to optimize data storage and retrieval
- Implement Entity Framework for efficient database operations and ORM
- Lead technical discussions and provide architectural guidance to the team
- Write clean, maintainable, and testable code following best practices
- Collaborate with cross-functional teams to deliver high-quality solutions
- Participate in code reviews and mentor junior developers
- Troubleshoot and resolve production issues in a timely manner
Required Skills & Qualifications:
- 7–12 years of hands-on experience in .NET development
- Strong proficiency in .NET Framework, .NET Core, and C#
- Proven expertise with AWS services (Lambda, SQS, SNS)
- Solid understanding of SQL and NoSQL databases (SQL Server, MongoDB, DynamoDB, etc.)
- Experience building and deploying Microservices architecture
- Proficiency in Entity Framework or EF Core
- Strong knowledge of RESTful API design and development
- Experience with React or Angular is good to have
- Understanding of CI/CD pipelines and DevOps practices
- Strong debugging, performance optimization, and problem-solving skills
- Experience with design patterns, SOLID principles, and best coding practices
- Excellent communication and team leadership skills
• Minimum 4+ years of experience
• Experience in designing, developing, and maintaining backend services using C# 12 and .NET 8 or .NET 9
• Experience in building and operating cloud-native and serverless applications on AWS
• Experience in developing and integrating services using AWS Lambda, API Gateway, DynamoDB, EventBridge, CloudWatch, SQS, SNS, Kinesis, Secrets Manager, S3, serverless architecture models, etc.
• Experience in integrating services using the AWS SDK
• Should be cognizant of the OMS paradigms including Inventory Management, Inventory publish, supply feed processing, control mechanisms, ATP publish, Order Orchestration, workflow set up and customizations, integrations with tax, AVS, payment engines, sourcing algorithms and managing reservations with back orders, schedule mechanisms, flash sales management etc.
• Should have a decent end-to-end knowledge of various Commerce subsystems, including Storefront, Core Commerce back end, Post-Purchase processing, OMS, Store / Warehouse Management processes, Supply Chain and Logistics processes. This is to ascertain the candidate's know-how of the overall Retail landscape of any customer.
• Strong knowledge in Querying in Oracle DB and SQL Server
• Able to read, write, and manage PL/SQL procedures in Oracle.
• Strong debugging, performance tuning and problem solving skills
• Experience with event driven and micro services architectures
1. Candidate must hold the AWS Certified Solutions Architect – Professional certification.
2. Minimum of 10 years of experience in software design, development, and implementation
3. Strong understanding of networking, security, and compliance in AWS environments
4. Proficient in infrastructure design, including high availability, disaster recovery, and scalability.
5. Experience in designing and implementing highly available, fault-tolerant, and scalable cloud solutions.
6. Hands-on experience with IaC tools, scripting languages, and automation.
7. Excellent communication and collaboration skills.
Job Details
- Job Title: SDE-3
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.
Key Responsibilities:
Technical Leadership-
- Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
- Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
- Review code and ensure adherence to best practices, coding standards, and security guidelines.
System Architecture and Design-
- Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
- Own the architecture of core modules and contribute to overall platform scalability and reliability.
- Advocate for and implement microservices architecture, ensuring modularity and reusability.
Problem Solving and Optimization-
- Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
- Optimize database queries and design scalable data storage solutions.
- Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.
Innovation and Continuous Improvement-
- Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
- Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
- Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.
Collaboration and Communication-
- Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
- Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.
Ideal Candidate
- Strong Java Backend Engineer.
- Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
- Must have been SDE-2 for at least 2.5 years
- Hands-on experience with RESTful APIs and microservices architecture
- Strong understanding of distributed systems, multithreading, and async programming
- Experience with relational and NoSQL databases
- Exposure to Kafka/RabbitMQ and Redis/Memcached
- Experience with AWS / GCP / Azure, Docker, and Kubernetes
- Familiar with CI/CD pipelines and modern DevOps practices
- Product companies (B2B SaaS preferred)
- Must have stayed for at least 2 years with each of the previous companies
- Education: B.Tech in Computer Science from Tier 1, Tier 2 colleges
Job Details
- Job Title: Staff Engineer
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 9-12 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Staff Engineer at company, you will play a critical role in defining and driving our backend architecture as we scale globally. You’ll own key systems that handle high volumes of data and transactions, ensuring performance, reliability, and maintainability across distributed environments.
Key Responsibilities-
- Own one or more core applications end-to-end, ensuring reliability, performance, and scalability.
- Lead the design, architecture, and development of complex, distributed systems, frameworks, and libraries aligned with company’s technical strategy.
- Drive engineering operational excellence by defining robust roadmaps for system reliability, observability, and performance improvements.
- Analyze and optimize existing systems for latency, throughput, and efficiency, ensuring they perform at scale.
- Collaborate cross-functionally with Product, Data, and Infrastructure teams to translate business requirements into technical deliverables.
- Mentor and guide engineers, fostering a culture of technical excellence, ownership, and continuous learning.
- Establish and uphold coding standards, conduct design and code reviews, and promote best practices across teams.
- Stay ahead of the curve on emerging technologies, frameworks, and patterns to strengthen company’s technology foundation.
- Contribute to hiring by identifying and attracting top-tier engineering talent.
Ideal Candidate
- Strong staff engineer profile
- Must have 9+ years in backend engineering with Java, Spring/Spring Boot, and microservices, building large and scalable systems
- Must have been SDE-3 / Tech Lead / Lead SE for at least 2.5 years
- Strong in DSA, system design, design patterns, and problem-solving
- Proven experience building scalable, reliable, high-performance distributed systems
- Hands-on with SQL/NoSQL databases, REST/gRPC APIs, concurrency & async processing
- Experience in AWS/GCP, CI/CD pipelines, and observability/monitoring
- Excellent ability to explain complex technical concepts to varied stakeholders
- Product companies (B2B SaaS preferred)
- Must have stayed for at least 2 years with each of the previous companies
- Education: B.Tech in Computer Science from Tier 1, Tier 2 colleges
Job Details
- Job Title: Lead I - Software Engineering - Java, Spring Boot, Microservices
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-7 years
- Employment Type: Full Time
- Job Location: Trivandrum, Chennai, Kochi, Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
Job Title: Senior Java Developer
Experience: 5+ years
Job Summary:
We are looking for a Senior Java Developer with strong experience in Spring Boot and Microservices to work on high-performance applications for a leading financial services client. The ideal candidate will have deep expertise in Java backend development, cloud (preferably GCP), and strong problem-solving abilities.
Key Responsibilities:
• Develop and maintain Java-based microservices using Spring Boot
• Collaborate with Product Owners and teams to gather and review requirements
• Participate in design reviews, code reviews, and unit testing
• Ensure application performance, scalability, and security
• Contribute to solution architecture and design documentation
• Support Agile development processes including daily stand-ups and sprint planning
• Mentor junior developers and lead small modules or features
Required Skills:
• Java, Spring Boot, Microservices architecture
• GCP (or other cloud platforms like AWS)
• REST/SOAP APIs, Hibernate, SQL, Tomcat
• CI/CD tools: Jenkins, Bitbucket
• Agile methodologies (Scrum/Kanban)
• Unit testing (JUnit), debugging and troubleshooting
• Good communication and team leadership skills
Preferred Skills:
• Frontend familiarity (Angular, AJAX)
• Experience with API documentation tools (Swagger)
• Understanding of design patterns and UML
• Exposure to Confluence, Jira
Mandatory Skills Required:
Strong proficiency in Java, Spring Boot, Microservices, GCP/AWS.
Experience Required: Minimum 5+ years of relevant experience
Java/J2EE (5+ years), Spring/Spring Boot (5+ years), Microservices (5+ years), AWS/GCP/Azure (mandatory), CI/CD (Jenkins, SonarQube, Git)
******
Notice period - 0 to 15 days only (immediate joiners or candidates who can join by February)
Job stability is mandatory
Location: Trivandrum, Kochi, Chennai
Virtual Interview - 14th Feb 2026
We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.
What You’ll Do
- Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
- Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
- Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
- Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
- Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
- Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.
What Makes You a Strong Fit
- You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
- You think in systems and second-order effects—not just in ticket-by-ticket outputs.
- You prefer well-reasoned defaults over overengineering.
- You take ownership—not just of code, but of the outcomes it enables.
- You work cleanly, write clear code, and make life easier for those who come after you.
- You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.
Bonus if You Have Experience With
- Building tools or workflows that accelerate other developers.
- Working with AI coding tools and integrating them meaningfully into your workflow.
- Building for SaaS products, especially those with large user bases or self-serve motions.
- Working in small, fast-moving product teams with a high bar for ownership.
Why Join Us
- A small team that values craftsmanship, curiosity, and momentum.
- A product-driven culture where engineering decisions are informed by customer outcomes.
- A chance to work on multiple zero-to-one opportunities with strong PMF.
- No vanity perks—just meaningful work with people who care.
Job Title : Senior Backend Developer (Node.js + AWS + MongoDB)
Experience : 4+ Years
Location : Andheri, Mumbai (Work From Office)
About the Role :
We are looking for a highly skilled Senior Backend Developer with strong expertise in Node.js (NestJS), AWS, and MongoDB to join our growing engineering team.
This role requires someone who takes ownership, is proactive, and enjoys building scalable, high-performance backend systems in a fast-paced environment.
Key Responsibilities :
- Architect, design, and develop scalable backend services using Node.js (NestJS).
- Design and manage cloud infrastructure on AWS Services (EC2, ECS, RDS, Lambda, etc.).
- Develop and maintain high-performance database solutions using MongoDB.
- Work with Kafka, Docker, and serverless frameworks (SST) for efficient deployments.
- Optimize system performance, scalability, and reliability across services.
- Ensure application security, best practices, and compliance standards.
- Collaborate with cross-functional teams to deliver robust product features.
- Take end-to-end ownership of features from design to deployment.
Technical Requirements :
- 4+ years of backend development experience.
- 3+ years of hands-on experience with Node.js.
- 2+ years of hands-on experience with AWS.
- Strong experience with NestJS framework.
- Solid experience with MongoDB and database design.
- Experience with Kafka, Docker, and serverless architecture.
- Understanding of system design, scalability, and performance optimization.
Good to Have (Bonus Skills) :
- Experience with Python or other backend languages.
- Exposure to Agentic AI use cases or implementations.
- Strong understanding of security best practices.
What We’re Looking For :
- Curious mindset and eagerness to learn new technologies.
- Proactive problem solver with strong ownership attitude.
- Strong team player with effective communication skills.
- Positive, energetic, and passionate about building great systems.
Job Title: Dot Net Full Stack Lead
Experience Required:7-12 Years
Location: Pune, Bangalore, Chennai, Coimbatore, Delhi, Hosur, Hyderabad, Kochi, Kolkata, Trivandrum
Job Type: Full-time
About the Role:
We are looking for a skilled .NET Developer with strong AWS cloud experience to join our engineering team. You will be responsible for designing, developing, and maintaining scalable microservices-based applications using .NET technologies and AWS cloud services.
Key Responsibilities:
- Design, develop, and deploy microservices using .NET Core and C#
- Build and maintain serverless applications using AWS Lambda, SQS, and SNS
- Develop RESTful APIs and integrate them with front-end applications
- Work with both SQL and NoSQL databases to optimize data storage and retrieval
- Implement Entity Framework for database operations and ORM
- Write clean, maintainable, and testable code following best practices
- Collaborate with cross-functional teams to deliver high-quality solutions
- Participate in code reviews and contribute to technical documentation
- Troubleshoot and resolve production issues in a timely manner
Mandatory Skills:
- Strong proficiency in .NET Framework and .NET Core
- Expertise in C# programming
- Hands-on experience with AWS services (Lambda, SQS, SNS)
- Solid understanding of SQL and NoSQL databases
- Experience building and deploying Microservices architecture
- Proficiency in Entity Framework or EF Core
- Knowledge of RESTful API design and development
- Understanding of CI/CD pipelines and DevOps practices
Good to Have:
- Experience with React or Angular for full-stack development
- Knowledge of containerization (Docker, Kubernetes)
- Familiarity with other AWS services (EC2, S3, DynamoDB, API Gateway)
- Experience with message queuing and event-driven architecture
- Understanding of SOLID principles and design patterns
- Experience with unit testing and test-driven development (TDD)
Role- Sr Software Developer (Fullstack)
Location- Gurgaon/ Bangalore
Mode- Hybrid
Job Description: Sr Software Developer (Golang expertise) – 5+ years
Role Summary:-
We are seeking an experienced Senior Engineer with strong expertise in Golang and cloud-based web applications. The ideal candidate will work across multiple backend services, contribute to architectural decisions, ensure quality through best practices, and collaborate effectively with cross-functional teams.
Key Responsibilities:-
• Design, develop, and maintain backend services using Golang.
• Work across multiple Golang applications and microservices.
• Have a good understanding of internal enterprise services such as SSO, authorization, and user management.
• Collaborate with cloud engineering teams to build and manage applications on AWS or Azure.
• Follow and enforce backend development best practices including CI/CD, coding guidelines, and code reviews.
• Use tools like SonarQube for static and dynamic code analysis.
• Write high‑quality unit tests and maintain test coverage.
• Document system designs, APIs, and technical workflows.
• Mentor junior team members and contribute to overall team maturity.
Required Skills:-
• Strong, hands‑on experience with Golang across multiple real‑world projects.
• Good experience with Dell cloud
• Good understanding of cloud services (AWS or Azure) for web application development.
• Knowledge of SSO, authorization services, and internal service integrations.
• Excellent communication and collaboration skills.
• Experience with CI/CD pipelines, coding standards, and automated testing.
• Familiarity with code quality tools such as SonarQube.
• Strong documentation skills.
Good-to-Have Skills:-
• Knowledge of Python or JavaScript.
• Understanding of frontend technologies (React.js).
• Experience mentoring or guiding team members.
Note: We are looking for immediate joiners only.
Review Criteria:
- Strong MLOps profile
- 8+ years of DevOps experience and 4+ years in MLOps / ML pipeline automation and production deployments
- 4+ years hands-on experience in Apache Airflow / MWAA managing workflow orchestration in production
- 4+ years hands-on experience in Apache Spark (EMR / Glue / managed or self-hosted) for distributed computation
- Must have strong hands-on experience across key AWS services including EKS/ECS/Fargate, Lambda, Kinesis, Athena/Redshift, S3, and CloudWatch
- Must have hands-on Python for pipeline & automation development
- 4+ years of experience in AWS cloud, including at recent companies
- (Company) - Product companies preferred; Exception for service company candidates with strong MLOps + AWS depth
Preferred:
- Hands-on in Docker deployments for ML workflows on EKS / ECS
- Experience with ML observability (data drift / model drift / performance monitoring / alerting) using CloudWatch / Grafana / Prometheus / OpenSearch.
- Experience with CI / CD / CT using GitHub Actions / Jenkins.
- Experience with JupyterHub/Notebooks, Linux, scripting, and metadata tracking for ML lifecycle.
- Understanding of ML frameworks (TensorFlow / PyTorch) for deployment scenarios.
Job Specific Criteria:
- CV attachment is mandatory
- Please provide the CTC breakup (Fixed + Variable)
- Are you open to a face-to-face (F2F) round?
- Has the candidate filled the Google form?
Role & Responsibilities:
We are looking for a Senior MLOps Engineer with 8+ years of experience building and managing production-grade ML platforms and pipelines. The ideal candidate will have strong expertise across AWS, Airflow/MWAA, Apache Spark, Kubernetes (EKS), and automation of ML lifecycle workflows. You will work closely with data science, data engineering, and platform teams to operationalize and scale ML models in production.
Key Responsibilities:
- Design and manage cloud-native ML platforms supporting training, inference, and model lifecycle automation.
- Build ML/ETL pipelines using Apache Airflow / AWS MWAA and distributed data workflows using Apache Spark (EMR/Glue), as sketched after this list.
- Containerize and deploy ML workloads using Docker, EKS, ECS/Fargate, and Lambda.
- Develop CI/CT/CD pipelines integrating model validation, automated training, testing, and deployment.
- Implement ML observability: model drift, data drift, performance monitoring, and alerting using CloudWatch, Grafana, Prometheus.
- Ensure data governance, versioning, metadata tracking, reproducibility, and secure data pipelines.
- Collaborate with data scientists to productionize notebooks, experiments, and model deployments.
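As a minimal illustration of the Airflow-plus-Spark orchestration described above, the sketch below defines a daily DAG that submits a Spark step to an existing EMR cluster using boto3; the DAG id, cluster id, and S3 script path are placeholders rather than a prescribed setup.

```python
# Hypothetical sketch: an Airflow/MWAA DAG that submits a Spark step to an
# existing EMR cluster. DAG id, cluster id, and script path are placeholders.
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

def submit_spark_step():
    emr = boto3.client("emr")
    emr.add_job_flow_steps(
        JobFlowId="j-EXAMPLE",  # placeholder EMR cluster id
        Steps=[{
            "Name": "feature-build",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://my-bucket/jobs/features.py"],
            },
        }],
    )

with DAG(
    dag_id="ml_feature_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="spark_feature_build", python_callable=submit_spark_step)
```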
Ideal Candidate:
- 8+ years in MLOps/DevOps with strong ML pipeline experience.
- Strong hands-on experience with AWS:
- Compute/Orchestration: EKS, ECS, EC2, Lambda
- Data: EMR, Glue, S3, Redshift, RDS, Athena, Kinesis
- Workflow: MWAA/Airflow, Step Functions
- Monitoring: CloudWatch, OpenSearch, Grafana
- Strong Python skills and familiarity with ML frameworks (TensorFlow/PyTorch/Scikit-learn).
- Expertise with Docker, Kubernetes, Git, CI/CD tools (GitHub Actions/Jenkins).
- Strong Linux, scripting, and troubleshooting skills.
- Experience enabling reproducible ML environments using Jupyter Hub and containerized development workflows.
Education:
- Master’s degree in Computer Science, Machine Learning, Data Engineering, or a related field.
ROLE: AI/ML Senior Developer
Exp: 5 to 8 Years
Location: Bangalore (Onsite)
About ProductNova
ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
1. Product discovery and problem definition
2. User research and product strategy
3. Experience design and rapid prototyping
4. AI-enabled engineering, testing, and platform architecture
5. Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.
Role Overview:
We are seeking an experienced AI / ML Senior Developer with strong hands-on expertise in large language models (LLMs) and AI-driven application development. The ideal candidate will have practical experience working with GPT and Anthropic models, building and training B2B products powered by AI, and leveraging AI-assisted development tools to deliver scalable and intelligent solutions.
Key Responsibilities:
1. Model Analysis & Optimization
Analyze, customize, and optimize GPT and Anthropic-based models to ensure reliability, scalability, and performance for real-world business use cases.
2. AI Product Design & Development
Design and build AI-powered products, including model training, fine-tuning, evaluation, and performance optimization across development lifecycles.
3. Prompt Engineering & Response Quality
Develop and refine prompt engineering strategies to improve model accuracy, consistency, relevance, and contextual understanding.
4. AI Service Integration
Build, integrate, and deploy AI services into applications using modern development practices, APIs, and scalable architectures.
5. AI-Assisted Development Productivity
Leverage AI-enabled coding tools such as Cursor and GitHub Copilot to accelerate development, improve code quality, and enhance efficiency.
6. Cross-Functional Collaboration
Work closely with product, business, and engineering teams to translate business requirements into effective AI-driven solutions.
7. Model Monitoring & Continuous Improvement
Monitor model performance, analyze outputs, and iteratively improve accuracy, safety, and overall system effectiveness.
Qualifications:
1. Hands-on experience analyzing, developing, fine-tuning, and optimizing GPT and Anthropic-based large language models.
2. Strong expertise in prompt design, experimentation, and optimization to enhance response accuracy and reliability.
3. Proven experience building, training, and deploying chatbots or conversational AI systems.
4. Practical experience using AI-assisted coding tools such as Cursor or GitHub Copilot in production environments.
5. Solid programming experience in Python, with strong problem-solving and development fundamentals.
6. Experience working with embeddings, similarity search, and vector databases for retrieval-augmented generation (RAG); see the retrieval sketch after this list.
7. Knowledge of MLOps practices, including model deployment, versioning, monitoring, and lifecycle management.
8. Experience with cloud environments such as Azure, AWS for deploying and managing AI solutions.
9. Experience with APIs, microservices architecture, and system integration for scalable AI applications.
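As a small, assumption-laden illustration of the retrieval step behind RAG (item 6 above), the sketch below ranks document chunks by cosine similarity against a query embedding; the vectors here are random toy data standing in for real embeddings from whatever model is in use.

```python
# Hypothetical sketch of RAG retrieval: rank precomputed chunk embeddings by
# cosine similarity to a query embedding. Vectors below are toy placeholders.
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> list[int]:
    # Normalize so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return list(np.argsort(scores)[::-1][:k])

# Toy usage: 4 document chunks with 5-dimensional "embeddings".
docs = np.random.rand(4, 5)
query = np.random.rand(5)
print(top_k(query, docs, k=2))  # indices of the most similar chunks
```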
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven AI/ML Senior Developer with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
ROLE - TECH LEAD/ARCHITECT with AI Expertise
Experience: 10–15 Years
Location: Bangalore (Onsite)
Company Type: Product-based | AI B2B SaaS
Role Overview
We are looking for a Tech Lead / Architect to drive the end-to-end technical design and development of AI-powered B2B SaaS products. This role requires a strong hands-on technologist who can work closely with ML Engineers and Full Stack Development teams, own the product architecture, and ensure scalability, security, and compliance across the platform.
Key Responsibilities
• Lead the end-to-end architecture and development of AI-driven B2B SaaS products
• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to integrate AI/ML models into production systems
• Define and own the overall product technology stack, including backend, frontend, data, and cloud infrastructure
• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS platforms
• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices
• Ensure data privacy, security, compliance, and governance (SOC2, GDPR, ISO, etc.) across the product
• Take ownership of application security, access controls, and compliance requirements
• Actively contribute hands-on through coding, code reviews, complex feature development, and architectural POCs
• Mentor and guide engineering teams, setting best practices for coding, testing, and system design
• Work closely with Product Management and Leadership to translate business requirements into technical solutions
Qualifications:
• 10–15 years of overall experience in software engineering and product development
• Strong experience building B2B SaaS products at scale
• Proven expertise in system architecture, design patterns, and distributed systems
• Hands-on experience with cloud platforms (Azure, AWS/GCP)
• Solid background in backend technologies (Python / .NET / Node.js / Java) and modern frontend frameworks (React, JS, etc.)
• Experience working with AI/ML teams in deploying and tuning ML models in production environments
• Strong understanding of data security, privacy, and compliance frameworks
• Experience with microservices, APIs, containers, Kubernetes, and cloud-native architectures
• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code
• Excellent communication and leadership skills with the ability to work cross-functionally
• Experience in AI-first or data-intensive SaaS platforms
• Exposure to MLOps frameworks and model lifecycle management
• Experience with multi-tenant SaaS security models
• Prior experience in product-based companies or startups
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
Key Responsibilities
- Develop and maintain custom WordPress themes, plugins, and APIs using PHP, MySQL, HTML, CSS, jQuery, and JavaScript.
- Build and optimize REST APIs and integrate with third-party services.
- Ensure high performance, scalability, and security of WordPress applications.
- Collaborate with Product Managers, UI/UX Designers, QA, and DevOps to deliver high-quality solutions.
- Write clean, testable, and maintainable code following best practices.
- Troubleshoot and resolve WordPress-related technical issues.
- Stay updated on WordPress and web technology trends.
Required Skills & Experience
- 7+ years of experience in PHP and WordPress development.
- Strong expertise in custom theme and plugin development.
- Proficiency in JavaScript, jQuery, AJAX, HTML5, and CSS3.
- Solid experience with MySQL and database optimization.
- Hands-on experience with Git and Agile methodologies.
- Knowledge of WordPress security best practices, SEO, and performance tuning.
- Familiarity with CI/CD pipelines, Docker, and cloud platforms (AWS/GCP) is a plus.
- Experience with multisite or headless WordPress is an advantage.
- Experience with Laravel, Symfony, Yii, and other PHP-based frameworks is a plus.
Nice to have
- Cloudflare Workers (Wrangler, KV/R2, Durable Objects)
- Salesforce OAuth/API experience; HubSpot Forms event hooks; middleware patterns.
- AWS basic understanding
- Cloudflare basic understanding
- Uptime/transaction monitoring via Checkly or other automated systems.
- Entry-level DevOps/networking understanding: HTTP/TLS, CORS, DNS, proxies, caching, request/response debugging (HAR).
Qualifications
- Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
- Proven track record in building and maintaining large-scale WordPress platforms.
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-7 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Location : Bangalore, Hyderabad, Mumbai, and Gurgaon
Responsibilities:
· Designing, building, and operating scalable on-premises or cloud data architecture
· Analyzing business requirements and translating them into technical specifications
· Design, develop, and implement data engineering solutions using DBT on cloud platforms (Snowflake, Databricks)
· Design, develop, and maintain scalable data pipelines and ETL processes
· Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
· Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness
· Implement data governance and security best practices to ensure compliance and data integrity
· Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring
· Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Requirements
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
· Overall 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines
· Experience working with DBT to implement end-to-end data engineering processes on Snowflake and Databricks
· Comprehensive understanding of the Snowflake and Databricks ecosystem
· Strong programming skills in languages like SQL and Python or PySpark.
· Experience with data modeling, ETL processes, and data warehousing concepts.
· Familiarity with implementing CI/CD processes or other orchestration tools is a plus.

We are building VALLI AI SecurePay, an AI-powered fintech and cybersecurity platform focused on fraud detection and transaction risk scoring.
We are looking for an AI / Machine Learning Engineer with strong AWS experience to design, develop, and deploy ML models in a cloud-native environment.
Responsibilities:
- Build ML models for fraud detection and anomaly detection (see the sketch after this list)
- Work with transactional and behavioral data
- Deploy models on AWS (S3, SageMaker, EC2/Lambda)
- Build data pipelines and inference workflows
- Integrate ML models with backend APIs
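As a minimal, illustrative sketch of unsupervised anomaly detection on transaction features (one common approach, not necessarily the one used on this platform), the example below scores synthetic transactions with scikit-learn's IsolationForest; the features, values, and contamination setting are placeholders.

```python
# Hypothetical sketch: unsupervised anomaly scoring for transactions with
# IsolationForest. Features and parameters are illustrative only; a real fraud
# model would use engineered behavioral features and proper evaluation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Toy feature matrix: [amount, txns_last_24h, minutes_since_last_txn]
normal = rng.normal(loc=[50, 3, 120], scale=[20, 1, 60], size=(1000, 3))
suspicious = rng.normal(loc=[900, 25, 2], scale=[100, 5, 1], size=(10, 3))
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)  # lower = more anomalous
flags = model.predict(X)             # -1 = anomaly, 1 = normal
print("Flagged transactions:", int((flags == -1).sum()))
```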
Requirements:
- Strong Python and Machine Learning experience
- Hands-on AWS experience
- Experience deploying ML models in production
- Ability to work independently in a remote setup
Job Type: Contract / Freelance
Duration: 3–6 months (extendable)
Location: Remote (India)
About Us:
MyOperator is a Business AI Operator and a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform.
Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino’s, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI (see the sketch after this list)
- Design and maintain a scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
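As a minimal sketch of the FastAPI-style microservice work listed above, the example below exposes two endpoints backed by an in-memory store; the route names and fields are hypothetical and do not reflect MyOperator's actual APIs.

```python
# Minimal, hypothetical FastAPI sketch: two endpoints over an in-memory store.
# Route names and fields are illustrative only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="campaign-service (sketch)")

class Campaign(BaseModel):
    name: str
    channel: str  # e.g. "whatsapp" or "voice"

_store: dict[int, Campaign] = {}  # stand-in for MySQL/Postgres persistence

@app.post("/campaigns", status_code=201)
def create_campaign(campaign: Campaign) -> dict:
    campaign_id = len(_store) + 1
    _store[campaign_id] = campaign
    return {"id": campaign_id, "name": campaign.name, "channel": campaign.channel}

@app.get("/campaigns/{campaign_id}")
def get_campaign(campaign_id: int) -> dict:
    if campaign_id not in _store:
        raise HTTPException(status_code=404, detail="campaign not found")
    stored = _store[campaign_id]
    return {"id": campaign_id, "name": stored.name, "channel": stored.channel}
```

A service like this could be exercised with pytest and FastAPI's TestClient, in line with the unit-testing expectations above.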
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one
JOB DETAILS:
* Job Title: Lead I - Software Engineering-Kotlin, Java, Spring Boot, Aws
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 5–7 years
* Location: Trivandrum, Thiruvananthapuram
Role Proficiency:
Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities
Skill Examples:
- Explain and communicate the design / development to the customer
- Perform and evaluate test results against product specifications
- Break down complex problems into logical components
- Develop user interfaces business software components
- Use data models
- Estimate time and effort required for developing / debugging features / components
- Perform and evaluate test in the customer or target environment
- Make quick decisions on technical/project related challenges
- Manage a Team mentor and handle people related issues in team
- Maintain high motivation levels and positive dynamics in the team.
- Interface with other teams’ designers and other parallel practices
- Set goals for self and team. Provide feedback to team members
- Create and articulate impactful technical presentations
- Follow high level of business etiquette in emails and other business communication
- Drive conference calls with customers addressing customer questions
- Proactively ask for and offer help
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
- Build confidence with customers by meeting the deliverables on time with quality.
- Estimate the time, effort, and resources required for developing / debugging features / components
- Make appropriate utilization of software / hardware.
- Strong analytical and problem-solving abilities
Knowledge Examples:
- Appropriate software programs / modules
- Functional and technical designing
- Programming languages – proficient in multiple skill clusters
- DBMS
- Operating Systems and software platforms
- Software Development Life Cycle
- Agile – Scrum or Kanban Methods
- Integrated development environment (IDE)
- Rapid application development (RAD)
- Modelling technology and languages
- Interface definition languages (IDL)
- Knowledge of customer domain and deep understanding of sub domain where problem is solved
Additional Comments:
We are seeking an experienced Senior Backend Engineer with strong expertise in Kotlin and Java to join our dynamic engineering team.
The ideal candidate will have a deep understanding of backend frameworks, cloud technologies, and scalable microservices architectures, with a passion for clean code, resilience, and system observability.
You will play a critical role in designing, developing, and maintaining core backend services that power our high-availability e-commerce and promotion platforms.
Key Responsibilities
Design, develop, and maintain backend services using Kotlin (JVM, Coroutines, Serialization) and Java.
Build robust microservices with Spring Boot and related Spring ecosystem components (Spring Cloud, Spring Security, Spring Kafka, Spring Data).
Implement efficient serialization/deserialization using Jackson and Kotlin Serialization.
Develop, maintain, and execute automated tests using JUnit 5, Mockk, and ArchUnit to ensure code quality.
Work with Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB, and Redis for data storage and caching needs.
Deploy and manage services in AWS environments leveraging DynamoDB, Lambdas, and IAM.
Implement CI/CD pipelines with GitLab CI to automate build, test, and deployment processes.
Containerize applications using Docker and integrate monitoring using Datadog for tracing, metrics, and dashboards.
Define and maintain infrastructure as code using Terraform for services including GitLab, Datadog, Kafka, and Optimizely.
Develop and maintain RESTful APIs with OpenAPI (Swagger) and JSON API standards.
Apply resilience patterns using Resilience4j to build fault-tolerant systems.
Adhere to architectural and design principles such as Domain-Driven Design (DDD), Object-Oriented Programming (OOP), and Contract Testing (Pact).
Collaborate with cross-functional teams in an Agile Scrum environment to deliver high-quality features.
Utilize feature flagging tools like Optimizely to enable controlled rollouts.
Mandatory Skills & Technologies:
Languages: Kotlin (JVM, Coroutines, Serialization), Java
Frameworks: Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data)
Serialization: Jackson, Kotlin Serialization
Testing: JUnit 5, Mockk, ArchUnit
Data: Kafka Streams (Avro), Oracle SQL (JDBC, JPA), DynamoDB (NoSQL), Redis (Caching)
Cloud: AWS (DynamoDB, Lambda, IAM)
CI/CD: GitLab CI
Containers: Docker
Monitoring & Observability: Datadog (Tracing, Metrics, Dashboards, Monitors)
Infrastructure as Code: Terraform (GitLab, Datadog, Kafka, Optimizely)
API: OpenAPI (Swagger), REST API, JSON API
Resilience: Resilience4j
Architecture & Practices: Domain-Driven Design (DDD), Object-Oriented Programming (OOP), Contract Testing (Pact), Feature Flags (Optimizely)
Platforms: E-Commerce Platform (CommerceTools), Promotion Engine (Talon.One)
Methodologies: Scrum, Agile
Skills: Kotlin, Java, Spring Boot, AWS
Must-Haves
Kotlin (JVM, Coroutines, Serialization), Java, Spring Boot (Spring Cloud, Spring Security, Spring Kafka, Spring Data), AWS (DynamoDB, Lambda, IAM), Microservices Architecture
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
Virtual Weekend Interview on 7th Feb 2026 - Saturday
5–10 years of experience in backend or full-stack development (Java, C#, Python, or Node.js preferred).
• Design, develop, and deploy full-stack web applications (front-end, back-end, APIs, and databases).
• Build responsive, user-friendly UIs using modern JavaScript frameworks (React, Vue, or Angular).
• Develop robust backend services and RESTful or GraphQL APIs using Node.js, Python, Java, or similar technologies.
• Manage and optimize databases (SQL and NoSQL).
• Collaborate with UX/UI designers, product managers, and QA engineers to refine requirements and deliver solutions.
• Implement CI/CD pipelines and support cloud deployments (AWS, Azure, or GCP).
• Write clean, testable, and maintainable code with appropriate documentation.
• Monitor performance, identify bottlenecks, and troubleshoot production issues.
• Stay up to date with emerging technologies and recommend improvements to tools, processes, and architecture.
• Proficiency in front-end technologies: HTML5, CSS3, JavaScript/TypeScript, and frameworks like React, Vue.js, or Angular.
• Strong experience with server-side programming (Node.js, Python/Django, Java/Spring Boot, or .NET).
• Experience with databases: PostgreSQL, MySQL, MongoDB, or similar.
• Familiarity with API design, microservices architecture, and REST/GraphQL best practices.
• Working knowledge of version control (Git/GitHub) and DevOps pipelines.
• Understanding of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).

🚀 Hiring: Associate Tech Architect / Senior Tech Specialist
🌍 Remote | Contract Opportunity
We’re looking for a seasoned tech professional who can lead the design and implementation of cloud-native data and platform solutions. This is a remote, contract-based role for someone with strong ownership and architecture experience.
🔴 Mandatory & Most Important Skill Set
Hands-on expertise in the following technologies is essential:
✅ AWS – Cloud architecture & services
✅ Python – Backend & data engineering
✅ Terraform – Infrastructure as Code
✅ Airflow – Workflow orchestration
✅ SQL – Data processing & querying
✅ DBT – Data transformation & modeling
💼 Key Responsibilities
- Architect and build scalable AWS-based data platforms
- Design and manage ETL/ELT pipelines
- Orchestrate workflows using Airflow
- Implement cloud infrastructure using Terraform
- Lead best practices in data architecture, performance, and scalability
- Collaborate with engineering teams and provide technical leadership
🎯 Ideal Profile
✔ Strong experience in cloud and data platform architecture
✔ Ability to take end-to-end technical ownership
✔ Comfortable working in a remote, distributed team environment
📄 Role Type: Contract
🌍 Work Mode: 100% Remote
If you have deep expertise in these core technologies and are ready to take on a high-impact architecture role, we’d love to hear from you.
JOB DETAILS:
* Job Title: Associate III - Data Engineering
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-6 years
* Location: Trivandrum, Kochi
Job Description
Job Title: Data Services Engineer – AWS & Snowflake
Job Summary:
As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.
You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.
Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.
Responsibilities:
• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema)
• Optimize query performance using clustering keys, materialized views, and caching
• Develop and maintain Snowflake data warehouses and data marts.
• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks); see the sketch after this list.
• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica)
• Utilize Snowpark and Python/Java for complex transformations
• Implement RBAC, data masking, and row-level security.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with stakeholders to gather data requirements and deliver solutions.
• Ensure data quality, governance, and compliance with industry standards.
• Monitor, troubleshoot, and resolve data pipeline and performance issues.
• Document data architecture, processes, and best practices.
• Support data migration and integration from various sources.
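As an illustrative sketch of the Snowflake-native ELT features mentioned above, the example below drives a COPY INTO load and creates a change-capture stream from Python using the Snowflake connector; the account, credentials, stage, and table names are placeholders.

```python
# Hypothetical sketch: bulk-load staged files into Snowflake and create a
# stream for incremental processing. Account, stage, and table names are
# placeholders; credentials would normally come from a secrets manager.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
try:
    # Load files already landed in an external (S3-backed) stage.
    cur.execute(
        "COPY INTO RAW.ORDERS FROM @RAW.ORDERS_STAGE "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Capture changes on the raw table for downstream ELT steps.
    cur.execute("CREATE STREAM IF NOT EXISTS RAW.ORDERS_STREAM ON TABLE RAW.ORDERS")
finally:
    cur.close()
    conn.close()
```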
Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 3 to 4 years of hands-on experience in data engineering or data services.
• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).
• Strong expertise in Snowflake architecture, development, and optimization.
• Proficiency in SQL and Python for data manipulation and scripting.
• Solid understanding of ETL/ELT processes and data modeling.
• Experience with data integration tools and orchestration frameworks.
• Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
• AWS Glue, AWS Lambda, Amazon Redshift
• Snowflake Data Warehouse
• SQL & Python
Skills: AWS Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse
Must-Haves
AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)
Skills: AWS, AWS lambda, Snowflake, Data engineering, Snowpipe, Data integration tools, orchestration framework
Relevant experience: 4–6 years
Python is mandatory.
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
What You’ll Do:
We’re looking for a Full Stack Software Engineer to join us early, own critical systems, and help shape both the product and the engineering culture from day one.
Responsibilities will include but are not limited to:
- Own end-to-end product development, from user experience to backend integration
- Build and scale a modern SPA using React, TypeScript, Vite, and Tailwind
- Design intuitive, high-trust UIs for finance workflows (payments, approvals, dashboards)
- Collaborate closely with backend systems written in Go via well-designed APIs
- Translate product requirements into clean, maintainable components and state models
- Optimize frontend performance, bundle size, and load times for complex dashboards
- Work directly with founders and design partners to iterate rapidly on product direction
- Establish frontend best practices around architecture, testing, and developer experience
- Contribute across the stack when needed, including API design and data modeling discussions.
What You’ll Need:
- Strong experience with Go in production systems
- Solid backend fundamentals: APIs, distributed systems, concurrency, and data modeling
- Hands-on experience with AWS, including deploying and operating production services
- Deep familiarity with Postgres, including schema design, indexing, and performance considerations
- Comfort working in early-stage environments with ambiguity, ownership, and rapid iteration
- Product mindset — you care about why you’re building something, not just how
- Strong problem-solving skills and the ability to make pragmatic tradeoffs
Set Yourself Apart With:
- Experience with Tailwind or other utility-first CSS frameworks
- Familiarity with design systems and component libraries
- Experience building fintech or enterprise SaaS UIs
- Exposure to AI-powered UX (LLM-driven workflows, assistants, or automation)
- Prior experience as an early engineer or founder, helping shape a product engineering culture from the ground up.
JOB DETAILS:
* Job Title: Lead I - Web API, C# .NET, .NET Core, AWS (Mandatory)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 6–9 years
* Location: Hyderabad
Job Description
Role Overview
We are looking for a highly skilled Senior .NET Developer who has strong experience in building scalable, high‑performance backend services using .NET Core and C#, with hands‑on expertise in AWS cloud services. The ideal candidate should be capable of working in an Agile environment, collaborating with cross‑functional teams, and contributing to both design and development. Experience with React and Datadog monitoring tools will be an added advantage.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using .NET Core and C#.
- Work with AWS services (Lambda, S3, ECS/EKS, API Gateway, RDS, etc.) to build cloud‑native applications.
- Collaborate with architects and senior engineers on solution design and implementation.
- Write clean, scalable, and well‑documented code.
- Use Postman to build and test RESTful APIs.
- Participate in code reviews and provide technical guidance to junior developers.
- Troubleshoot and optimize application performance.
- Work closely with QA, DevOps, and Product teams in an Agile setup.
- (Optional) Contribute to frontend development using React.
- (Optional) Use Datadog for monitoring, logging, and performance metrics.
Required Skills & Experience
- 6+ years of experience in backend development.
- Strong proficiency in C# and .NET Core.
- Experience building RESTful services and microservices.
- Hands‑on experience with AWS cloud platform.
- Solid understanding of API testing using Postman.
- Knowledge of relational databases (SQL Server, PostgreSQL, etc.).
- Strong problem‑solving and debugging skills.
- Experience working in Agile/Scrum teams.
Good to Have
- Experience with React for frontend development.
- Exposure to Datadog for monitoring and logging.
- Knowledge of CI/CD tools (GitHub Actions, Jenkins, AWS CodePipeline, etc.).
- Containerization experience (Docker, Kubernetes).
Soft Skills
- Strong communication and collaboration abilities.
- Ability to work in a fast‑paced environment.
- Ownership mindset with a focus on delivering high‑quality solutions.
Skills
.NET Core, C#, AWS, Postman
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Interview: 7th Feb 2026
First round will be Virtual
2nd round will be F2F
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation testing + Python + AWS)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4–10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Develop, maintain, and execute automation test scripts using Python (see the sketch after this list).
- Build reliable and reusable test automation frameworks for web and cloud-based applications.
- Work with AWS cloud services for test execution, environment management, and integration needs.
- Perform functional, regression, and integration testing as part of the QA lifecycle.
- Analyze test failures, identify root causes, raise defects, and collaborate with development teams.
- Participate in requirement review, test planning, and strategy discussions.
- Contribute to CI/CD setup and integration of automation suites.
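As a minimal sketch of Python-based automation against AWS services, the example below uses pytest and boto3 to invoke a Lambda and verify an S3 artifact; the function name, bucket, key, and expected response shape are hypothetical stand-ins for the system under test.

```python
# Hypothetical sketch: pytest + boto3 checks against AWS resources.
# Function name, bucket, key, and expected payload shape are placeholders.
import json

import boto3
import pytest

LAMBDA_NAME = "order-processor-dev"           # placeholder
BUCKET, KEY = "qa-results-bucket", "smoke/latest.json"

@pytest.fixture(scope="module")
def aws():
    return {"lambda": boto3.client("lambda"), "s3": boto3.client("s3")}

def test_lambda_returns_success(aws):
    resp = aws["lambda"].invoke(
        FunctionName=LAMBDA_NAME,
        Payload=json.dumps({"orderId": "TEST-123"}).encode(),
    )
    body = json.loads(resp["Payload"].read())
    assert resp["StatusCode"] == 200
    assert body.get("status") == "processed"  # assumed response contract

def test_result_file_written(aws):
    head = aws["s3"].head_object(Bucket=BUCKET, Key=KEY)
    assert head["ContentLength"] > 0
```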
Required Experience:
- Strong hands-on experience in Automation Testing.
- Proficiency in Python for automation scripting and framework development.
- Understanding and practical exposure to AWS services (Lambda, EC2, S3, CloudWatch, or similar).
- Good knowledge of QA methodologies, SDLC/STLC, and defect management.
- Familiarity with automation tools/frameworks (e.g., Selenium, PyTest).
- Experience with Git or other version control systems.
Good to Have:
- API testing experience (REST, Postman, REST Assured).
- Knowledge of Docker/Kubernetes.
- Exposure to Agile/Scrum environment.
Skills: Automation testing, Python, Java, ETL, AWS
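As a rough illustration of the automation work this listing describes (Python scripting against AWS services with a PyTest-style framework), here is a minimal sketch. The bucket name, object key, and credentials setup are hypothetical placeholders, not details from the listing.

```python
# Illustrative only: a minimal PyTest + boto3 check against an S3 test bucket.
import boto3
import pytest

TEST_BUCKET = "qa-automation-example-bucket"  # hypothetical bucket name


@pytest.fixture(scope="module")
def s3_client():
    # Assumes AWS credentials are available in the environment (e.g. a CI role).
    return boto3.client("s3")


def test_s3_round_trip(s3_client):
    # Upload a small payload and read it back to verify end-to-end bucket access.
    s3_client.put_object(Bucket=TEST_BUCKET, Key="smoke/ping.txt", Body=b"ping")
    body = s3_client.get_object(Bucket=TEST_BUCKET, Key="smoke/ping.txt")["Body"].read()
    assert body == b"ping"
```

A real suite would take the bucket and region from configuration, add fixtures for environment management, and run as part of the CI/CD integration mentioned above.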
JOB DETAILS:
* Job Title: Tester III - Software Testing- Playwright + API testing
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4–10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and maintain automated test scripts for web applications using Playwright.
- Perform API testing using industry-standard tools and frameworks.
- Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
- Analyze test results, identify defects, and track them to closure.
- Participate in requirement reviews, test planning, and test strategy discussions.
- Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.
Required Experience:
- Strong hands-on experience in Automation Testing for web-based applications.
- Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
- Solid experience in API testing (Postman, REST Assured, or similar tools).
- Good understanding of software QA methodologies, tools, and processes.
- Ability to write clear, concise test cases and automation scripts.
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.
Good to Have:
- Knowledge of cloud environments (AWS/Azure)
- Experience with version control tools like Git
- Familiarity with Agile/Scrum methodologies
Skills: Automation testing, SQL, API testing, SoapUI testing, Playwright
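For orientation only, here is a minimal sketch of the kind of Playwright UI and API checks this listing describes, using Playwright's Python bindings and the `page` fixture from the pytest-playwright plugin; the target URL is a placeholder, not taken from the listing.

```python
# Illustrative only: one UI assertion plus one API-style check sharing a framework.
from playwright.sync_api import Page, expect

BASE_URL = "https://example.com"  # placeholder target, not the real application


def test_homepage_title(page: Page):
    # UI check: the page renders and exposes the expected title.
    page.goto(BASE_URL)
    expect(page).to_have_title("Example Domain")


def test_homepage_responds(page: Page):
    # API-style check reusing Playwright's built-in request context.
    response = page.request.get(BASE_URL)
    assert response.ok
```

The same request context can be pointed at REST endpoints, so UI and API coverage can live in one reusable framework and run inside a CI/CD pipeline.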
Numino Labs
Business: Software product engineering services; offices in Pune and Goa.
Clients: Software product companies in the USA.
Business model: Exclusive teams for working on client products; direct and daily interactions with clients
Client
Silicon Valley startup in GenAI with $45M+ in funding.
Product: B2B SaaS.
Core IP: Physics AI foundation model for hardware designers, with a specific focus on semiconductor chip design.
Customers: World's top chip manufacturers
Responsibilities
- Team player: Delivers effectively within teams; strong interpersonal, communication, and risk-management skills
- Technical leadership: Works with ambiguous requirements, designs solutions, and independently drives delivery to customers
- Hands-on coder: Leverages AI to drive implementation across React.js, Python, databases, unit testing, test automation, cloud infrastructure, and CI/CD automation
Requirements
- Strong computer science fundamentals: data structures & algorithms, networking, RDBMS, and distributed computing
- 8–15 years of experience on the Python stack: Behave, PyTest, Python generators and async operations, multithreading, context managers, decorators, descriptors
- Python frameworks: FastAPI, Flask, Django, or SQLAlchemy
- Expertise in microservices, REST/gRPC API design, authentication, single sign-on
- Experience delivering high-performance solutions on the cloud
- Some experience in frontend: JavaScript and Next.js/React.js
- Some experience in DevOps, Cloud Infra Automation, Test Automation
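As a hedged, minimal sketch of the Python backend work described above (a FastAPI-style microservice with async endpoints), the service name, routes, and model below are hypothetical and purely illustrative.

```python
# Illustrative only: a tiny FastAPI service with async CRUD-style endpoints.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-design-service")  # hypothetical service name


class DesignJob(BaseModel):
    job_id: str
    priority: int = 0


# In-memory store stands in for a real persistence layer (e.g. SQLAlchemy + RDBMS).
JOBS: dict[str, DesignJob] = {}


@app.post("/jobs", status_code=201)
async def create_job(job: DesignJob) -> DesignJob:
    JOBS[job.job_id] = job
    return job


@app.get("/jobs/{job_id}")
async def get_job(job_id: str) -> DesignJob:
    if job_id not in JOBS:
        raise HTTPException(status_code=404, detail="job not found")
    return JOBS[job_id]
```

Assuming the file is saved as example_service.py, it can be run locally with `uvicorn example_service:app --reload`; a production setup would add authentication, a real database, and gRPC or REST contracts as the listing describes.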
About the Role:
We are seeking a highly skilled and motivated individual to join our development team. The ideal candidate will have extensive experience with Node.js, AWS, and MongoDB, along with a proactive, ownership-driven mindset.
Technical Expertise:
- Architect, design, and develop scalable and efficient backend services using Node.js (Nest.js).
- Design and manage cloud-based infrastructure on AWS, including EC2, ECS, RDS, Lambda, and other services.
- Work with MongoDB to design, implement, and maintain high-performance database solutions.
- Leverage Kafka, Docker and serverless technologies like SST to streamline deployments and infrastructure management.
- Optimize application performance and scalability across the stack.
- Ensure security and compliance standards are met across all development and deployment processes.
Bonus Points:
- Experience with other backend languages such as Python, and exposure to Agentic AI
- Security knowledge and best practices.
JOB DETAILS:
- Job Title: Lead II - Software Engineering- React Native - React Native, Mobile App Architecture, Performance Optimization & Scalability
- Industry: Global digital transformation solutions provider
- Experience: 7–9 years
- Working Days: 5 days/week
- Job Location: Mumbai
- CTC Range: Best in Industry
Job Description
Job Title
Lead React Native Developer (6–8 Years Experience)
Position Overview
We are looking for a Lead React Native Developer to provide technical leadership for our mobile applications. This role involves owning architectural decisions, setting development standards, mentoring teams, and driving scalable, high-performance mobile solutions aligned with business goals.
Must-Have Skills
- 6–8 years of experience in mobile application development
- Extensive hands-on experience leading React Native projects
- Expert-level understanding of React Native architecture and internals
- Strong knowledge of mobile app architecture patterns
- Proven experience with performance optimization and scalability
- Experience in technical leadership, team management, and mentorship
- Strong problem-solving and analytical skills
- Excellent communication and collaboration abilities
- Proficiency in modern React Native development practices
- Experience with Expo toolkit and libraries
- Strong understanding of custom hooks development
- Focus on writing clean, maintainable, and scalable code
- Understanding of mobile app lifecycle
- Knowledge of cross-platform design consistency
Good-to-Have Skills
- Experience with microservices architecture
- Knowledge of cloud platforms such as AWS, Firebase, etc.
- Understanding of DevOps practices and CI/CD pipelines
- Experience with A/B testing and feature flag implementation
- Familiarity with machine learning integration in mobile applications
- Exposure to innovation-driven technical decision-making
Skills: React Native, mobile app development, DevOps, machine learning
******
Notice period - 0 to 15 days only (Need Feb Joiners)
Location: Navi Mumbai, Belapur