50+ Windows Azure Jobs in India
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, and CloudFormation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle.
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency in analyzing application, IIS, system, and security logs and CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good understanding of networking (VNet, subnets, Private Link, VNet peering).
• Familiarity with cloud concepts and services including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure.
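The log-analysis proficiency listed above (application, IIS, security logs, and CloudTrail events) often starts out as a small script before a dedicated tool takes over. A minimal sketch, assuming CloudTrail events have already been exported as JSON records; the `errorCode` and `userIdentity` fields are standard CloudTrail ones, but the sample data and threshold are purely illustrative:

```python
import json
from collections import Counter

def flag_access_denied(events, threshold=3):
    """Count AccessDenied errors per IAM identity and flag noisy ones.

    `events` is a list of CloudTrail event records (dicts); any identity
    with `threshold` or more denied calls is returned for review.
    """
    denied = Counter()
    for ev in events:
        if ev.get("errorCode") == "AccessDenied":
            arn = ev.get("userIdentity", {}).get("arn", "unknown")
            denied[arn] += 1
    return {arn: n for arn, n in denied.items() if n >= threshold}

# Toy sample standing in for a real CloudTrail export.
sample = [
    {"eventName": "GetObject", "errorCode": "AccessDenied",
     "userIdentity": {"arn": "arn:aws:iam::123456789012:user/app"}},
] * 4
print(flag_access_denied(sample))
```

In practice the same loop would read from an S3 export or CloudWatch Logs subscription rather than an in-memory list.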
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability building and supporting high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: hybrid, 3 days per week from office
Strong Azure DevOps Engineer Profiles.
Mandatory (Experience 1) – Must have at least 1 year of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
Mandatory (Experience 2) – Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
Mandatory (Experience 3) – Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell.
Mandatory (Experience 4) – Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database.
Mandatory (Experience 5) – Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis.
Mandatory (Note) – Only male candidates will be considered.
Mandatory (Location): The candidate must be currently in Bengaluru.
Job Title: Python Developer (Django / Databricks / Azure)
📍 Location: Bangalore
🕒 Experience: 3–8 years
💼 Employment Type: FTE
🔹 Job Summary:
We are seeking a skilled Python Developer with strong experience in Django, Flask API development, Databricks, and Azure Cloud. The ideal candidate will be responsible for designing scalable backend systems, developing REST APIs, building data pipelines, and working with cloud-based data platforms.
🔹 Key Responsibilities:
✔ Develop and maintain web applications using Django framework
✔ Design and build RESTful APIs using Flask
✔ Develop and optimize data pipelines using Azure Databricks
✔ Integrate applications with Azure services (Blob, Data Factory, SQL, etc.)
✔ Write clean, scalable, and efficient Python code
✔ Collaborate with frontend, DevOps, and data engineering teams
✔ Perform code reviews and ensure best practices
✔ Troubleshoot, debug, and upgrade existing systems
🔹 Required Skills:
- Strong proficiency in Python programming
- Hands-on experience with Django framework
- Experience building Flask-based REST APIs
- Experience working with Azure Databricks
- Knowledge of Azure Cloud services
- Experience with SQL / NoSQL databases
- Understanding of CI/CD and Git workflows
🔹 Good to Have:
- Experience with PySpark
- Knowledge of microservices architecture
- Docker / Kubernetes exposure
- Experience in data engineering projects
Role Overview
The Azure Presales Engineer is responsible for engaging with customers to understand their business and technical requirements and translating them into well-architected Microsoft Azure solutions. This role plays a key part in cloud transformation initiatives by supporting presales activities, building solution proposals, responding to RFPs, and ensuring a smooth transition from presales to delivery.
Key Responsibilities
- Participate in customer discovery sessions to gather technical and business requirements
- Design Azure cloud architectures across IaaS, PaaS, and hybrid environments following best practices
- Prepare technical solution proposals, architectures, BOMs, and presales documentation
- Support RFP and RFQ responses with detailed technical inputs and cost estimations
- Deliver Azure solution demonstrations, workshops, and technical presentations to customers
- Collaborate closely with sales and delivery teams to ensure accurate solution design and handover
- Stay updated with Azure services, licensing models, pricing, and new feature releases
- Work with Microsoft account teams for co-selling opportunities, funding programs, and alignment
- Contribute to reusable presales assets, templates, and solution accelerators
Required Qualifications
- 2–3+ years of experience in Azure cloud engineering or presales roles
- Strong hands-on understanding of Azure core services including compute, storage, networking, security, IAM, monitoring, backup, and disaster recovery
- Experience in preparing technical proposals, SOWs, and solution designs
- Strong communication, presentation, and customer-facing skills
- Ability to translate business needs into effective cloud solutions
- Experience working with or for a Microsoft Partner is a strong plus
Preferred Certifications
- AZ-104, AZ-305, AZ-900, AZ-700, AZ-500 (any relevant Azure certifications)
Job Title: Java Backend Developer
Experience: ~3-6 years (Mid-to-Senior)
Employment Type: Full-time, Permanent
Location : Bangalore
Role Overview
As a Java Backend Developer, you’ll be responsible for designing, developing, and maintaining scalable backend systems and microservices. You’ll work with cross-functional teams to build high-performance distributed services, APIs, and data-driven applications that power business solutions.
Key Responsibilities
- Design and implement microservices and backend components using Java (8+) and Spring Boot.
- Build and consume RESTful APIs and integrate with internal/external services.
- Work with event-driven systems and messaging using Apache Kafka (producers/consumers).
- Develop and optimize databases, including SQL (e.g., MySQL/PostgreSQL) and NoSQL (e.g., MongoDB/Cassandra).
- Participate in CI/CD pipelines, automated builds, and deployments using tools like Git, Maven, Jenkins.
- Ensure code quality through unit and integration testing, documentation, and code reviews.
- Collaborate with frontend developers, QA, DevOps, and product teams following Agile methodologies.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- Proven hands-on experience with Core Java and Spring Boot development.
- Strong understanding of microservices architecture, REST APIs, and distributed systems.
- Experience with message queues/event streaming (Apache Kafka).
- Skilled in relational and NoSQL databases and writing optimized queries.
- Comfortable with CI/CD tools (e.g., Git, Maven, Jenkins) and version control.
- Good problem-solving, debugging, and collaboration skills.
Preferred / Nice-to-Have
- Cloud platform experience (AWS / Azure / GCP).
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Knowledge of performance tuning, caching strategies, observability (metrics/logging).
- Agile/Scrum development experience.
Role Context & Importance
At ARDEM, uninterrupted connectivity is mission-critical. Our remote teams across India process millions of pages and records annually using ARDEM Cloud Platforms, AWS WorkSpaces, and enterprise tools. The Network Engineer plays a central role in ensuring zero downtime for our processing operations, maintaining secure remote access for hundreds of team members, and supporting the cloud and on-premises infrastructure that underpins every client engagement.
Key Responsibilities
1. Remote Desktop & End-User Support
• Provide prompt remote desktop support to ARDEM’s distributed workforce, resolving hardware, software, and network-related issues via AnyDesk and other remote tools.
• Diagnose and resolve connectivity issues affecting access to ARDEM Cloud Platforms and AWS WorkSpaces.
• Support onboarding and configuration of workstations (Windows 14” FHD laptops, minimum i5/8GB RAM) per ARDEM standard specifications.
• Ensure minimum 100 Mbps internet connectivity compliance for remote staff and assist with ISP-related escalations.
2. Identity & Access Management
• Manage and maintain user identities, access policies, and lifecycle operations using Microsoft Entra ID (formerly Azure AD), Active Directory, and the Microsoft 365 Admin Center.
• Configure role-based access controls (RBAC), group policies, and conditional access to protect client data in line with SOC 2 and ISO 27001 requirements.
• Manage Microsoft 365 services including Exchange Online, Teams, SharePoint, and OneDrive for ARDEM’s internal and remote teams.
3. AWS Cloud Services Administration
• Configure, monitor, and support AWS services critical to ARDEM’s cloud operations: EC2, S3, IAM, AWS WorkSpaces, and VPC.
• Manage AWS IAM policies, user roles, and security groups to ensure least-privilege access across cloud environments.
• Monitor cloud resource utilisation, performance metrics, and costs; generate reports and recommend optimisations.
• Support cloud-based remote desktop (AWS WorkSpaces) used by ARDEM’s BPO processing teams.
4. Network Infrastructure & Cisco Hardware
• Configure, manage, and troubleshoot Cisco switches, routers, and firewalls at ARDEM’s processing centres.
• Manage DNS, DHCP, VPN, and VLAN configurations to support secure and high-availability operations.
• Monitor network performance and bandwidth; implement QoS policies to prioritise critical BPO workloads.
• Coordinate with ISPs and hardware vendors to resolve infrastructure issues with minimal service disruption.
5. On-Premises Server Administration
• Maintain Windows Server infrastructure including file servers, application hosting servers, and internal email servers.
• Administer DNS, DHCP, Group Policy, and Active Directory Domain Services (AD DS) across on-premises environments.
• Perform routine health checks, patch management, and capacity planning for on-prem systems.
6. Security, Backup & Disaster Recovery
• Implement and maintain data backup schedules and disaster recovery (DR) procedures in line with ARDEM’s data security policies.
• Support compliance with ARDEM’s ISO 27001-aligned, SOC 2, HIPAA, and GDPR security frameworks through network-level controls.
• Manage VPNs, SSL certificates, endpoint security tools, and encryption at rest/in-transit for all ARDEM platforms.
• Respond to and document security incidents; participate in periodic security audits and remediation activities.
7. Documentation & Knowledge Management
• Create and maintain clear, accurate technical documentation: network diagrams, SOPs, runbooks, and incident logs.
• Build and update the internal IT knowledge base to enable faster issue resolution and reduce repeat incidents.
• Document all changes to infrastructure, cloud configurations, and access policies in accordance with change management protocols.
8. Collaboration & Project Support
• Work closely with ARDEM’s Project Managers, Operations teams, and client-facing staff to resolve IT dependencies impacting BPO delivery.
• Assist with IT infrastructure upgrades, cloud migrations, and automation initiatives that support ARDEM’s growth.
• Participate in rotational shifts to ensure 24/7 coverage aligned with ARDEM’s three-shift processing operations.
Qualifications & Requirements
Education
• B.Tech in Information Technology
Experience
• 3–5 years of professional experience in network support, IT infrastructure management, or cloud administration.
• Proven track record supporting remote or distributed teams in a BPO, IT services, or technology company environment.
Technical Skills – Required
• AWS Cloud Services: EC2, S3, IAM, VPC, AWS WorkSpaces – hands-on configuration and monitoring.
• Microsoft Entra ID (formerly Azure AD), Active Directory, Group Policy, and Microsoft 365 administration.
• Windows Server administration: AD DS, DNS, DHCP, File Services, patch management.
• Cisco networking hardware: switches, routers, firewalls – configuration and troubleshooting.
• VPN, VLAN, SSL, and remote access technologies (AnyDesk, RDP, VPN clients).
• Network monitoring tools and log analysis for proactive issue detection.
• Backup and disaster recovery tools and procedures.
Technical Skills – Preferred
• Experience with ARDEM-type BPO cloud platforms or similar multi-tenant cloud environments.
• Familiarity with security frameworks: ISO 27001, SOC 2, HIPAA, GDPR.
• Exposure to automation scripting (PowerShell, Python) for IT operations tasks.
Certifications (Preferred)
• AWS Cloud Practitioner (CLF-C02)
• CCNA (Cisco Certified Network Associate)
• AWS SysOps Administrator
• MCSE / Windows Server
• Azure Fundamentals (AZ-900)
• ITIL v4 Foundation
• Microsoft 365
Soft Skills
• Strong analytical and systematic troubleshooting skills with a solution-first mindset.
• Excellent written and verbal communication in English; ability to explain technical issues to non-technical stakeholders.
• Ability to work independently and collaboratively in a fully remote, distributed team environment.
• High sense of accountability, punctuality, and commitment to SLAs critical to BPO operations.
• Willingness to work rotational shifts to support ARDEM’s round-the-clock processing operations.
• Responsible for assisting the technology and production teams with client deliverables and receipt.
Mandatory Work-from-Home Equipment Requirements
All candidates must confirm that they meet the following minimum home office specifications before selection:
Device Type: Windows laptop
Operating System: Windows 10 / Windows 11
Screen Size: 14 inches; two monitors preferred
Screen Resolution: FHD (1920 × 1080) or higher
Processor: Intel Core i5 (8th Gen or later) or higher
RAM: Minimum 8 GB (mandatory); 16 GB preferred
Internet Speed: 100 Mbps or higher (dedicated broadband connection)
Remote Tool: AnyDesk (to be installed and configured prior to joining)
Power Backup: UPS / inverter recommended for uninterrupted connectivity
About the role:
We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments. The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.
Required Skills & Experience:
● 3 to 6 years of solid hands-on experience in the VAPT domain
● Solid understanding of Web, Android, and iOS application security
● Experience with DevSecOps tools and integrating security into CI/CD
● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models
● Familiarity with bug bounty programs and responsible disclosure practices
● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.
● Good knowledge of API security
● Scripting experience (Python, Bash, or similar) for automation tasks
Preferred Qualifications:
● OSCP, CEH, AWS Security Specialty, or similar certifications
● Experience working in a regulated environment (e.g., FinTech, InsurTech)
Responsibilities:
● Perform Security reviews, Vulnerability Assessments & Penetration Testing for Web, Android, iOS, and API endpoints
● Perform Threat Modelling to anticipate potential attack vectors and improve security architecture on complex or cross-functional components
● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities
● Conduct secure code reviews and red team assessments
● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines
● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.
● Maintain and manage vulnerability scanning infrastructure
● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes.
● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring
● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines
● Triage bug bounty reports and coordinate remediation with engineering teams
● Act as the primary responder for external security disclosures
● Maintain documentation and metrics related to bug bounty and penetration testing activities
● Collaborate with developers and architects to ensure secure design decisions
● Lead security design reviews for new features and products
● Provide actionable risk assessments and mitigation plans to stakeholders
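One of the responsibilities above, integrating secret scanning into CI/CD, often begins as a lightweight script before a dedicated tool such as the scanners named earlier is adopted. A minimal sketch; the two regex rules are illustrative only (real scanners ship hundreds of patterns), and the sample snippet is made up:

```python
import re

# Illustrative patterns only; production scanners use far broader rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][0-9a-zA-Z]{20,}['\"]"
    ),
}

def scan_text(text):
    """Return (rule_name, matched_string) pairs for suspected secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings

# Hypothetical source line containing a fake AWS access key ID.
snippet = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\n'
print(scan_text(snippet))
```

Wired into a CI step, a non-empty findings list would fail the build and block the commit from merging.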
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to have:
- Prior experience in working with startups or product-based companies
- Experience mentoring tech leads and helping shape engineering culture
- Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture.
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Role Objective
We are looking for a proactive InfoSec Associate to support our compliance and audit functions. You will play a key role in maintaining our ISO standards, handling vendor security assessments, and ensuring our documentation is audit-ready for our banking and NBFC clients.
Key Responsibilities
- Audit Support: Assist in internal and external audits for ISO 27001, SOC 2, and ISO 27701.
- Vendor Compliance: Independently handle and respond to detailed Vendor Security Questionnaires from banks and NBFCs.
- Evidence Management: Collect, organize, and present technical audit evidence from engineering and IT teams.
- Policy & Documentation: Help draft and review Security Policies, SOPs, and ISMS documentation.
- Risk Tracking: Track audit observations and manage the Corrective Action Plan (CAPA) to ensure timely remediation.
- Data Privacy: Assist in aligning internal processes with the DPDP Act and GDPR requirements.
Required Skills & Competencies
- Framework Knowledge: Basic understanding of ISO 27001 and Risk Assessment principles.
- Technical Literacy: Ability to understand AWS/Azure cloud security settings from a compliance standpoint.
- Documentation: High proficiency in organizing audit trails and drafting professional security reports.
- Communication: Comfortable interacting with external auditors and internal technical teams.
Preferred Certifications (Good to Have)
- ISO 27001 Internal Auditor
- CompTIA Security+
- CISA (In-progress/Foundation)
About NonStop io Technologies
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We're seeking an AI/ML Engineer to join our team. As AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
● Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI
● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics
● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
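The Model Training and Evaluation responsibility above names precision and recall as evaluation metrics; as one concrete instance, they can be computed directly from paired label lists. A minimal sketch with hand-made toy labels (libraries such as scikit-learn provide the same metrics ready-made):

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics from parallel 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy ground truth and predictions for illustration.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(precision_recall_f1(y_true, y_pred))
```

Choosing between these metrics depends on the use case: anomaly detection typically optimizes recall (missed incidents are costly), while recommendation systems often favor precision.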
Qualifications & Skills
● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
● Proficiency in programming languages commonly used for AI/ML. Preferably Python
● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
● Strong understanding of machine learning algorithms, statistics, and data structures
● Experience with data preprocessing, data wrangling, and feature engineering
● Knowledge of deep learning architectures, neural networks, and transfer learning
● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.
Key Responsibilities:
• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.
• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.
• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).
• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.
• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.
• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.
• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.
• Optimize models for performance, scalability, and reliability.
• Maintain documentation and promote knowledge sharing within the team.
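The RAG retrieval step described above (embed the query, find the nearest document vectors, hand the hits to the LLM) can be sketched in plain Python. Here cosine similarity over tiny hand-made vectors stands in for a real embedding model and a vector database such as FAISS or Pinecone; the documents and vectors are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, docs, k=2):
    """Return the texts of the k documents most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:k]]

# Hand-made 3-d "embeddings"; a real pipeline would call an embedding model.
docs = [
    {"text": "quarterly revenue report", "vec": [0.9, 0.1, 0.0]},
    {"text": "kubernetes upgrade notes", "vec": [0.0, 0.2, 0.9]},
    {"text": "annual revenue forecast", "vec": [0.8, 0.3, 0.1]},
]
print(retrieve([1.0, 0.2, 0.0], docs, k=2))
```

In a production pipeline the retrieved passages would then be injected into the LLM prompt, and a vector index would replace the linear scan once the corpus grows.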
Mandatory Requirements:
• 4+ years of relevant experience as an AI/ML Engineer.
• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.
• Experience implementing RAG pipelines and prompt engineering techniques.
• Strong programming skills in Python.
• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with vector databases (FAISS, Pinecone, ChromaDB).
• Strong understanding of SQL and database systems.
• Experience integrating AI solutions into BI tools (Power BI, Tableau).
• Strong analytical, problem-solving, and communication skills.
Good to Have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Docker or Kubernetes.
• Exposure to NLP, computer vision, or deep learning use cases.
• Experience in MLOps and CI/CD pipelines
Way2DreamJobs is building a premium cloud mentorship ecosystem focused on real-world Microsoft Azure and Modern Workplace skills.
We are inviting experienced Azure professionals to collaborate as founding weekend mentors for a remote mentorship program.
This is not a traditional full-time job. It is a flexible mentor collaboration model designed for working IT professionals who want to share real enterprise experience.
Responsibilities:
• Conduct weekend mentorship sessions
• Guide learners through practical Azure scenarios
• Support hands-on lab oriented learning
Ideal Profile:
• 4+ years Azure infrastructure experience
• Exposure to Microsoft Intune or M365 device management preferred
• Comfortable guiding professionals in live sessions
Benefits:
• Remote weekend engagement
• Build industry mentor brand authority
• Paid mentorship collaboration (structure discussed during call)
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum (Thiruvananthapuram)
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
8+ years of experience in backend software development
Strong proficiency in Java and Python
Proven experience building scalable APIs and data-driven applications
Hands-on experience with cloud services and distributed systems
Solid understanding of databases, microservices, and API performance optimization
Mandatory Skills: Java API AND AWS
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
The customer currently uses the ELK stack; the goal is to standardize and modernize logs, metrics, and traces using OpenTelemetry while improving visibility, reliability, and operational intelligence.
Observability Architecture & Modernization
· Assess the existing ELK-based observability setup and define a modern observability architecture
· Design and implement standardized logging, metrics, and distributed tracing using OpenTelemetry
· Define observability best practices for cloud-native and Azure-based applications
· Ensure consistent telemetry collection across microservices, APIs, and infrastructure
Logging, Metrics & Tracing
· Instrument applications using OpenTelemetry SDKs (Spring Boot, .NET, Python, JavaScript – as applicable)
· Support Kubernetes and container-based workloads (if applicable)
· Configure and optimize log pipelines, trace exporters, and metric collectors
· Integrate OpenTelemetry with ELK / OpenSearch / Azure Monitor / other backends
· Define SLIs, SLOs, and alerting strategies
· Knowledge of integrating GitHub and Jira metrics into observability as DORA metrics
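The DORA bullet above can be made concrete: given deployment events (e.g. sourced from GitHub) and their merge timestamps, two of the four DORA metrics fall out of simple aggregation. A minimal stdlib-only sketch; the record shapes and field names are illustrative assumptions, not any specific GitHub or Jira API.

```python
from datetime import datetime, timedelta

# Illustrative records; real data would come from GitHub/Jira APIs.
deployments = [
    {"merged_at": datetime(2024, 5, 1, 9, 0), "deployed_at": datetime(2024, 5, 1, 11, 0)},
    {"merged_at": datetime(2024, 5, 2, 9, 0), "deployed_at": datetime(2024, 5, 2, 15, 0)},
    {"merged_at": datetime(2024, 5, 8, 9, 0), "deployed_at": datetime(2024, 5, 8, 10, 0)},
]

def deployment_frequency(deps, days):
    """Average deployments per day over the observation window."""
    return len(deps) / days

def mean_lead_time(deps):
    """Mean time from merge to production deploy (a common lead-time proxy)."""
    total = sum((d["deployed_at"] - d["merged_at"] for d in deps), timedelta())
    return total / len(deps)

print(deployment_frequency(deployments, days=7))  # deployments per day
print(mean_lead_time(deployments))                # average merge-to-deploy delta
```

In practice these aggregates would be exported as metrics (e.g. via an OpenTelemetry collector) so they land on the same dashboards as the service telemetry.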
Operational Excellence
· Improve observability performance, cost efficiency, and data retention strategies
· Create dashboards, runbooks, and documentation
AI-based Anomaly Detection & Triage (Good to Have)
· Design or integrate AI/ML-based anomaly detection for logs, metrics, and traces
· Experience with AIOps capabilities for automated incident triage and insights
Required Technical Skills
Core Observability
· Strong hands-on experience with ELK Stack (Elasticsearch, Logstash, Kibana)
· Deep understanding of logs, metrics, traces, and distributed systems
· Practical experience with OpenTelemetry (Collectors, SDKs, exporters, receivers)
Cloud & Platforms
· Strong experience integrating Microsoft Azure with the observability platform
· Experience integrating Kubernetes / AKS with the observability platform
· Knowledge of Azure monitoring tools (Azure Monitor, Log Analytics, Application Insights)
Soft Skills
· Strong architecture and problem-solving skills
· Clear communication and documentation skills
· Hands-on mindset with an architect-level view
Good to Have / Preferred Skills
· Experience with AIOps / anomaly detection platforms
· Exposure to tools like Prometheus, Grafana, Jaeger, OpenSearch, Datadog, Dynatrace, New Relic (any)
· Experience with incident management, SRE practices, and reliability engineering
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.
Roles and Responsibilities:
● Design, develop, and maintain scalable web applications
● Build responsive and high-performance user interfaces
● Develop secure and efficient backend services and APIs
● Collaborate with product managers, designers, and QA teams to deliver features
● Write clean, maintainable, and testable code
● Participate in code reviews and contribute to engineering best practices
● Optimize applications for speed, performance, and scalability
● Troubleshoot and resolve production issues
● Contribute to architectural decisions and technical improvements.
Requirements:
● 3 to 5 years of experience in full-stack development
● Strong proficiency in frontend technologies such as React, Angular, or Vue
● Solid experience with backend technologies such as Node.js, .NET, Java, or Python
● Experience in building RESTful APIs and microservices
● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server
● Experience with version control systems like Git
● Familiarity with CI/CD pipelines
● Good understanding of cloud platforms such as AWS, Azure, or GCP
● Strong understanding of software design principles and data structures
● Experience with containerization tools such as Docker
● Knowledge of automated testing frameworks
● Experience working in Agile environments
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Company Description
Krish is committed to enabling customers to achieve their technological goals by delivering solutions that combine the right technology, people, and costs. Our approach emphasizes building long-term relationships while ensuring customer success through tailored solutions, leveraging the expertise and integrity of our consultants and robust delivery processes.
Location : Mumbai – Tech Data Office
Experience : 5 - 8 years.
Duration : 1-year contract (extendable)
Job Overview
We are seeking a highly skilled Sales Engineer (L2/L3) with in-depth expertise in Palo Alto Networks solutions. This role requires designing, implementing, and supporting cutting-edge network and security solutions to meet customers' technical and business needs. The ideal candidate will have strong experience in sales engineering and advanced skills in deploying, troubleshooting, and optimizing Palo Alto products and related technologies, with the ability to assist in implementation tasks when required.
Key Responsibilities
Solution Design & Technical Consultation:
- Collaborate with sales teams and customers to understand business and technical requirements.
- Design and propose solutions leveraging Palo Alto Networks technologies, including Next-Generation Firewalls (NGFW), Prisma Access, Panorama, SD-WAN, and Threat Prevention.
- Prepare detailed technical proposals, configurations, and proof-of-concept (POC) demonstrations tailored to client needs.
- Optimize existing customer deployments, ensuring alignment with industry best practices.
Customer Engagement & Implementation:
- Present and demonstrate Palo Alto solutions to stakeholders, addressing technical challenges and business objectives.
- Conduct customer and partner workshops, enablement sessions, and product training.
- Provide post-sales support to address implementation challenges and fine-tune deployments.
- Lead and assist with hands-on implementations of Palo Alto Networks products when required.
Support & Troubleshooting:
- Provide L2-L3 level troubleshooting and issue resolution for Palo Alto Networks products, including advanced debugging and system analysis.
- Assist with upgrades, migrations, and integration of Palo Alto solutions with other security and network infrastructures.
- Develop runbooks, workflows, and documentation for post-sales handover to operations teams.
Partner Enablement & Ecosystem Management:
- Collaborate with channel partners to build technical competency and promote adoption of Palo Alto solutions.
- Support certification readiness and compliance for both internal and partner teams.
- Participate in events, workshops, and seminars to showcase technical expertise.
Skills and Qualifications
Technical Skills:
- Advanced expertise in Palo Alto Networks technologies, including NGFW, Panorama, Prisma Access, SD-WAN, and GlobalProtect.
- Strong knowledge of networking protocols (e.g., TCP/IP, BGP, OSPF) and security frameworks (e.g., Zero Trust, SASE).
- Proficiency in troubleshooting and root-cause analysis for complex networking and security issues.
- Experience with security automation tools and integrations (e.g., API scripting, Ansible, Terraform).
Soft Skills:
- Excellent communication and presentation skills, with the ability to convey technical concepts to diverse audiences.
- Strong analytical and problem-solving skills, with a focus on delivering customer-centric solutions.
- Ability to manage competing priorities and maintain operational discipline under tight deadlines.
Experience:
- 5+ years of experience in sales engineering, solution architecture, or advanced technical support roles in the IT security domain.
- Hands-on experience in designing and deploying large-scale Palo Alto Networks solutions in enterprise environments.
Education and Certifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications such as PCNSA, PCNSE, or equivalent vendor certifications (e.g., CCNP Security, NSE4) are highly preferred.
- Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
- Ability and experience to register and deploy ML/AI/GenAI models via Azure ML Studio.
- Working knowledge of deploying models in AKS clusters.
- Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
- Excellent Python skills – environment setup and dependency management, coding as per best practices, and familiarity with automated code-quality tools such as linters and Black.
- Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
- Experience in orchestrating machine learning pipelines using MLOps best practices.
- Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
- Experience in model monitoring (drift detection and performance monitoring).
- Fundamentals of data engineering.
- Docker-based deployment is good to have.
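The model-monitoring item above (drift detection) is commonly implemented as a distribution-distance check between training data and live inputs; the Population Stability Index is one standard choice. A hedged, stdlib-only sketch; in an Azure ML setup this logic would typically run inside a scheduled monitoring pipeline, and the equal-width bucketing here is an illustrative assumption.

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between two numeric samples.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / buckets or 1.0
    def histogram(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / step), buckets - 1)
            idx = max(idx, 0)  # clamp live values that fall below the training range
            counts[idx] += 1
        # Smooth empty buckets so the log term stays finite.
        return [max(c / len(values), 1e-6) for c in counts]
    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]           # training distribution
live_same = [i / 100 for i in range(100)]          # identical -> PSI of 0
live_shifted = [0.5 + i / 200 for i in range(100)] # shifted -> large PSI

print(psi(baseline, live_same))
print(psi(baseline, live_shifted))
```

A drift alert would then just be a threshold on this value, emitted per feature on each monitoring run.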
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage
● Hands-on experience with CI CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
15+ years in enterprise product development
5+ years in Director/VP-level product leadership
Proven AI/ML product commercialization experience
Expertise in Industry 4.0 (IoT, predictive maintenance, digital twins)
Hands-on experience with AWS/Azure/GCP cloud platforms
Strong architecture experience in microservices ecosystems
Experience implementing MLOps and DevSecOps frameworks
Experience integrating MES, ERP, SCADA, automation platforms
SaaS business model and enterprise software commercialization expertise
Experience leading large engineering/product teams
Job Overview
As a Software Engineer, you will play a crucial role in leading our development efforts, ensuring best practices, and supporting the team on a day-to-day basis. This role requires deep technical knowledge, a proactive mindset, and a commitment to guiding the team in tackling challenging issues. You will work primarily with .NET Core on the backend while also keeping a strategic focus on product security, DevOps, quality assurance, and cloud infrastructure.
Responsibilities
• Forward-Looking Product Development:
o Collaborate with product and engineering teams to align on the technical direction, scalability, and maintainability of the product.
o Proactively consider and address security, performance, and scalability requirements during development.
• Cloud and Infrastructure: Leverage Microsoft Azure for cloud infrastructure, ensuring efficient and secure use of cloud services. Work closely with DevOps to improve deployment processes.
• DevOps & CI/CD: Support the setup and maintenance of CI/CD pipelines, enabling smooth and frequent deployments. Collaborate with the DevOps team to automate and optimize the development process.
• Technical Mentorship: Provide technical guidance and support to team members, helping them solve day-to-day challenges, enhance code quality, and adopt best practices.
• Quality Assurance: Collaborate with QA to ensure thorough testing, automated testing coverage, and overall product quality.
• Product Security: Actively implement and promote security best practices to protect data and ensure compliance with industry standards.
• Documentation & Code Reviews: Promote good coding practices, conduct code reviews, and maintain clear documentation.
Qualifications
• Technical Skills:
o Strong experience with .NET Core for backend development and RESTful API design.
o Hands-on experience with Microsoft Azure services, including but not limited to VMs, databases, application gateways, and user management.
o Familiarity with DevOps practices and tools, particularly CI/CD pipeline configuration and deployment automation.
o Strong knowledge of product security best practices and experience implementing secure coding practices.
o Familiarity with QA processes and automated testing tools is a plus.
o Ability to support team members in solving technical challenges and sharing knowledge effectively.
Preferred Qualifications
- 3+ years of experience in software development, with a strong focus on .NET Core
- Previous experience as a Staff Software Engineer, tech lead, or in a similar hands-on tech role.
- Strong problem-solving skills and ability to work in a fast-paced, startup environment.
What We Offer
- Opportunity to lead and grow within a dynamic and ambitious team.
- Challenging projects that focus on innovation and cutting-edge technology.
- Collaborative work environment with a focus on learning, mentorship, and growth.
- Competitive compensation, benefits, and stock options.
If you’re a proactive, forward-thinking technology leader with a passion for .NET Core and you’re ready to make an impact, we’d love to meet you!
JOB DETAILS:
* Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum, Thiruvananthapuram
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
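The event-driven requirements above center on Kafka's core abstraction: an append-only log that independent consumer groups read at their own committed offsets. A toy, stdlib-only illustration of that idea (this is not the Kafka client API; class and field names are invented for the sketch):

```python
from collections import defaultdict

class MiniLog:
    """Toy append-only event log with per-group offsets -- the core idea
    behind Kafka-style streaming, minus partitions, brokers, and durability."""
    def __init__(self):
        self.events = []
        self.offsets = defaultdict(int)  # consumer group -> next index to read

    def produce(self, event):
        self.events.append(event)

    def consume(self, group, max_events=10):
        start = self.offsets[group]
        batch = self.events[start:start + max_events]
        self.offsets[group] += len(batch)  # commit the offset for this group
        return batch

log = MiniLog()
log.produce({"type": "order_created", "id": 1})
log.produce({"type": "order_paid", "id": 1})

print(log.consume("billing"))    # both events
print(log.consume("billing"))    # empty: offset already committed
print(log.consume("analytics"))  # independent group reads from the start
```

The point of the model is that producers never know who consumes, and each consumer group tracks its own position, which is what decouples microservices in an event-driven architecture.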
Skills: Java, Spring Boot, Kafka development, CI/CD, PostgreSQL, GitLab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
Advanced proficiency in core Java and enterprise Java concepts
Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
Strong expertise in SQL, including database design, query optimization, and performance tuning
Hands-on experience with PostgreSQL or other relational database management systems
Strong experience with Kafka or similar event-driven messaging and streaming platforms
Practical knowledge of CI/CD pipelines using GitLab
Experience with Jenkins for build automation and deployment processes
Strong understanding of GitLab for source code management and DevOps workflows
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
Required Skills:
- Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
The Sr. Consultant, Microsoft AI Solutions drives end-to-end delivery success for assigned Microsoft Copilot and AI solution components. This includes leading solution design activities within engagement scope, aligning stakeholders on requirements and implementation approach, developing delivery-ready artifacts, and executing configuration, integration, and deployment tasks. The role ensures solutions meet security, governance, and operational standards and supports go-live readiness, stabilization, and handoff to operations teams.
This role will also support presales and technical deep-dive sessions (on an as-needed basis) with customers prior to the initiation of delivery engagements, focused on solution feasibility, technical validation, and delivery readiness.
SUMMARY OF ESSENTIAL JOB FUNCTIONS:
Solution Envisioning and Business Alignment
- Lead and support AI solution envisioning activities with customers, including workshops, demonstrations, and technical deep-dive sessions.
- Translate business scenarios and use cases into conceptual solution designs aligned to Microsoft AI products and services via Copilot, Azure, etc.
- Support technical feasibility and delivery readiness assessments prior to delivery initiation, validating platform fit, approach, and constraints.
- Facilitate alignment with customer stakeholders on solution scope, requirements, architecture approach, and success criteria.
- Develop conceptual designs and delivery-aligned solution definitions to guide successful implementation.
Solution Delivery and Execution
- Lead solution design activities within delivery engagements, translating approved concepts into functional and non-functional requirements.
- Configure, build, and implement Microsoft solutions through Microsoft Copilot, Copilot Studio, Power Platform, and supporting Azure services.
- Integrate Copilot solutions with Microsoft 365, Teams, Microsoft Foundry, and existing enterprise systems and workflows.
- Implement identity, security, governance, and access controls aligned to customer and organizational standards.
- Execute testing, validation, and troubleshooting to ensure solution quality and readiness for production use.
- Support deployment, go-live, and stabilization activities to ensure successful adoption.
Communication and Collaboration
- Ability to serve as the primary delivery lead for assigned solution components or workstreams within an engagement.
- Partner with solution architects and project managers to plan, execute, and track delivery milestones.
- Collaborate with customer technical and business teams to drive alignment, decision-making, and adoption throughout the engagement.
- Communicate delivery status, risks, and dependencies to internal and customer stakeholders.
- Support limited presales and technical deep-dive sessions (on an as-needed basis) to enable solution feasibility, technical validation, and delivery readiness.
Continuous Improvement and Delivery Excellence
- Develop and contribute to delivery artifacts including architecture workshop agendas, diagrams, configuration specifications, runbooks, deployment guides, and validation checklists.
- Capture, sanitize, and contribute reusable solution assets, patterns, and implementation guidance to internal repositories.
- Contribute feedback and lessons learned to improve delivery efficiency, consistency, and quality across similar engagements.
- Support initiatives focused on standardizing delivery approaches and accelerating future implementations.
- Stay current on Microsoft Copilot, Microsoft Foundry, and related platform updates to continuously improve delivery practices.
REQUIRED SKILLS AND EXPERIENCE:
· Bachelor’s degree required; advanced degree or relevant certifications preferred.
· 8+ years of experience in consulting, enterprise architecture, or digital transformation with client-facing responsibilities.
· Experience advising senior leaders on AI, Copilot, or cloud modernization initiatives.
· Strong hands-on expertise with Microsoft Copilot and Copilot Studio.
· Experience designing AI-enabled solutions / automations, or integrating with existing business processes leveraging Microsoft 365, Teams, and Azure services.
· Strong understanding of Identity and Access Management design concepts for Microsoft Copilot and AI agents
· Familiar with agent design patterns & data access patterns.
· Familiar with Azure OpenAI, Azure AI Search, Logic Apps, Azure Functions, and integration architectures. Hands-on experience in integrating with one or more of these services through Copilot Studio is preferred.
· Experience in leading and contributing to presales efforts including readiness assessments, envisioning workshops, and proposal development.
· Microsoft certifications in Microsoft Copilot, Power Platform, Azure, and Microsoft AI are preferred.
· Highly organized, detail-oriented, excellent time management skills, and able to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.
· Ability to approach customer requests with a proactive and consultative manner; listen to and understand user requests and needs, and effectively deliver.
· Strong influencing skills to get things done and inspire business transformation.
· Excellent oral, written communication, and presentation skills with an ability to present AI-related concepts to C-Level Executives and non-technical audiences.
· Conflict negotiation and critical thinking skills and agility.
· Ability to travel when needed.
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position is part of company, a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
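Since the 100MB ceiling above is a hard GitHub limit, oversized files need to be found before migration so they can be routed to Git LFS. A minimal sketch (a hypothetical helper, not part of any migration tool) that walks a checked-out workspace and lists LFS candidates:

```python
import os

# GitHub rejects individual files larger than 100 MB; such files must go
# through Git LFS instead.
GITHUB_LIMIT_BYTES = 100 * 1024 * 1024

def find_lfs_candidates(root, limit=GITHUB_LIMIT_BYTES):
    """Return (path, size) pairs for files exceeding `limit` bytes, largest first."""
    candidates = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                candidates.append((path, size))
    return sorted(candidates, key=lambda t: -t[1])

if __name__ == "__main__":
    for path, size in find_lfs_candidates("."):
        print(f"{size / 2**20:8.1f} MB  {path}")
```

The resulting paths can then feed `git lfs track` patterns before the history import.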
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs and performance metrics to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
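As an illustration of the log-analysis work above, a small sketch (plain Python, independent of Splunk; the timestamped log format is an assumption) that tallies log levels and computes an error rate before deciding what to query further:

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> LEVEL message". Lines that don't match
# (stack traces, continuations) are skipped rather than miscounted.
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>INFO|WARN|ERROR)\b")

def level_counts(lines):
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

def error_rate(lines):
    """Fraction of matched lines that are ERROR; 0.0 for empty input."""
    counts = level_counts(lines)
    total = sum(counts.values())
    return counts["ERROR"] / total if total else 0.0
```

A spike in `error_rate` over a rolling window is a cheap first signal before drilling into Splunk or Azure Monitor.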
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Azure, Terraform, GitLab CI, Splunk (Mandatory)
Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, GitLab CI
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
● Participate in the customer’s system design meetings and collect the functional/technical requirements.
● Build and deploy data pipelines for consumption by various teams as needed.
● Skilled in ETL processes and tools.
● Clear understanding of and experience with Python, PySpark, Hive, Airflow, Hadoop, and RDBMS architecture.
● Experience in writing Python programs and SQL queries.
● Experience in SQL query tuning.
● Working knowledge of Apache Kafka for building streaming pipelines.
● Strong knowledge of the Azure data engineering ecosystem.
● Experienced in shell scripting (Unix/Linux).
● Build and maintain data pipelines in Spark/PySpark with SQL and Python.
● Knowledge of Azure technologies is required.
● Good to have: knowledge of Kubernetes and CI/CD concepts.
● Suggest and implement best practices in data integration.
● Guide the QA team in defining system integration tests as needed.
● Split the planned deliverables into tasks and assign them to the team.
● Maintain and deploy the ETL code, following the Agile methodology.
● Work on optimization wherever applicable.
● Good oral, written, and presentation skills.
Preferred Qualifications:
● Degree in Computer Science, IT, or a similar field; a Master’s is a plus.
● Minimum 4+ years working with PySpark, SQL, and Python.
● Minimum 3+ years working with Databricks, Data Factory, and Data Lake.
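The SQL query tuning mentioned above can be shown with a toy example: comparing SQLite's query plan for the same lookup before and after adding an index (SQLite stands in for a warehouse engine here, and the table and column names are invented):

```python
import sqlite3

def tuning_demo():
    """Return (before, after) query-plan text for an indexed vs unindexed lookup."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, user_id INTEGER, ts TEXT)")
    conn.executemany(
        "INSERT INTO events VALUES (?, ?, ?)",
        [(i, i % 100, "2024-01-01") for i in range(1000)],
    )

    def plan(sql):
        # EXPLAIN QUERY PLAN rows carry the human-readable plan in the last column.
        rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
        return " ".join(r[-1] for r in rows)

    query = "SELECT * FROM events WHERE user_id = 42"
    before = plan(query)  # typically a full table SCAN
    conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
    after = plan(query)   # now a SEARCH using the index
    return before, after

if __name__ == "__main__":
    before, after = tuning_demo()
    print("before:", before)
    print("after: ", after)
```

The same habit of reading the plan first, then adding or adjusting indexes, partitioning, or join order, carries over to Spark SQL and warehouse engines.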
Role Overview
We are hiring on behalf of Humming Apps Technologies LLP, which is seeking a Senior Threat Modeler to join its security team and act as a strategic bridge between architecture and defense. This role focuses on proactively identifying vulnerabilities during the design phase to ensure applications, APIs, and cloud infrastructures are secure by design.
The position requires thinking from an attacker’s perspective to analyze trust boundaries, map attack paths, and influence the overall security posture of next-generation AI-driven and cloud-native systems. The goal is not only to detect issues but to prevent risks before implementation.
Key Responsibilities
Architectural Analysis
• Lead deep-dive threat modeling sessions across applications, APIs, microservices, and cloud-native environments
• Perform detailed reviews of system architecture, data flows, and trust boundaries
Threat Modeling Frameworks & Methodologies
• Apply industry-standard frameworks including STRIDE, PASTA, ATLAS, and MITRE ATT&CK
• Identify sophisticated attack vectors and model realistic threat scenarios
Security Design & Risk Mitigation
• Detect weaknesses during the design stage
• Provide actionable and prioritized mitigation recommendations
• Strengthen security posture through secure-by-design principles
Collaborative Security Integration
• Work closely with architects and developers during design and build phases
• Embed security practices directly into the SDLC
• Ensure security is incorporated early rather than retrofitted
Communication & Enablement
• Facilitate threat modeling demonstrations and walkthroughs
• Present findings and risk assessments to stakeholders
• Translate complex technical risks into clear, business-relevant insights
• Educate teams on secure design practices and emerging threats
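As a toy illustration of the STRIDE framework named above, the sketch below maps data-flow-diagram element types to the threat categories that typically apply to them. The applicability table follows common STRIDE-per-element guidance; real modeling sessions refine it per system.

```python
# The six STRIDE threat categories.
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Typical applicability by DFD element type (process, data flow, data store,
# external entity), per common STRIDE-per-element charts.
APPLICABLE = {
    "process": "STRIDE",   # all six categories apply
    "data_flow": "TID",
    "data_store": "TRID",
    "external": "SR",
}

def threats_for(element_type):
    """List the threat-category names to walk through for one DFD element."""
    return [STRIDE[c] for c in APPLICABLE[element_type]]

if __name__ == "__main__":
    for kind in APPLICABLE:
        print(kind, "->", threats_for(kind))
```

Enumerating categories per element is only the starting checklist; the analyst still models concrete attack paths across trust boundaries for each hit.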
Required Qualifications
Experience
• 5–10 years of dedicated experience in threat modeling, product security, or application security
Technical Expertise
• Strong understanding of software architecture and distributed systems
• Experience designing and securing RESTful APIs
• Hands-on knowledge of cloud platforms such as AWS, Azure, or GCP
Modern Threat Knowledge
• Expertise in current attack vectors including OWASP Top 10
• Understanding of API-specific threats
• Awareness of emerging risks in AI/LLM-based applications
Tools & Practices
• Practical experience with threat modeling tools
• Proficiency in technical diagramming and system visualization
Communication
• Excellent written and verbal English communication skills
• Ability to collaborate across engineering teams and stakeholders in different time zones
Preferred Qualifications
• Experience in consulting or client-facing professional services roles
• Industry certifications such as CISSP, CSSLP, OSCP, or equivalent
🚀 Hiring: Data Engineer (Azure)
⭐ Experience: 5+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bangalore
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence
We are looking for a Databricks Data Engineer to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.
🔹 Key Responsibilities
- Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)
- Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing
- Develop Structured Streaming pipelines with watermarking, late data handling & restart safety
- Implement declarative pipelines using Lakeflow
- Design idempotent, replayable pipelines with safe backfills
- Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)
- Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications
- Package and deploy using Databricks Repos & Asset Bundles (CI/CD)
- Ensure governance using Unity Catalog and embedded data quality checks
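The idempotent, replayable pipelines listed above hinge on upserts keyed on a business key, which in Databricks is Delta Lake's MERGE. A miniature sketch of the same principle using SQLite as a stand-in (the `orders` table and columns are illustrative): replaying a batch after a restart or backfill never duplicates rows.

```python
import sqlite3

def make_store():
    """In-memory table with a business key, standing in for a Delta table."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, amount REAL)")
    return conn

def load_batch(conn, rows):
    """Idempotent upsert: insert new keys, overwrite existing ones.
    Mirrors MERGE ... WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT."""
    conn.executemany(
        """INSERT INTO orders (order_id, amount) VALUES (?, ?)
           ON CONFLICT(order_id) DO UPDATE SET amount = excluded.amount""",
        rows,
    )
    conn.commit()
```

Because the write is keyed and overwriting, a failed job can simply be rerun from its checkpoint, and backfills can safely overlap already-loaded ranges.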
✅ Mandatory Skills (Must Have)
- Databricks & Delta Lake (Advanced Optimization & Performance Tuning)
- Structured Streaming & Autoloader Implementation
- Databricks SQL (DBSQL) & Data Modeling for Analytics
- 3+ years hands-on Azure cloud & automation experience.
- Experience managing high-availability enterprise systems.
- Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
- Kubernetes (AKS) & Docker.
- Networking (VPN, DNS, routing, firewalls, NSGs).
- Infra-as-Code (Terraform / Bicep optional).
- Monitoring tools: Azure Monitor, Grafana, Prometheus.
- CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
- Security: Key Vault, certificates, encryption, RBAC.
- Understanding of PostgreSQL/PostGIS networking.
- Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
- Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
- Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
- Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
- Ensure high uptime, DR planning, backup and failover strategies.
- Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
- Enforce security, RBAC, compliance, and audit standards across environments.
- Good to have: knowledge/experience in Linux administration (Ubuntu/Debian).
Job Details
- Job Title: Specialist I - Software Engineering-.Net Fullstack Lead-TVM
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-9 years
- Employment Type: Full Time
- Job Location: Trivandrum, Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
· Minimum 5+ years of experience as a senior/lead .NET developer, including experience of the full development lifecycle and post-live support.
· Significant experience delivering software using Agile iterative delivery methodologies.
· JIRA knowledge preferred.
· Excellent ability to understand requirement/story scope and visualise technical elements required for application solutions.
· Ability to clearly articulate complex problems and solutions in terms that others can understand.
· Extensive experience with .NET backend API development.
· Significant experience of pipeline design, build and enhancement to support release cadence targets, including Infrastructure as Code (preferably Terraform).
· Strong understanding of HTML and CSS, including cross-browser compatibility and performance.
· Excellent knowledge of unit and integration testing techniques.
· Azure knowledge (Web/Container Apps, Azure Functions, SQL Server).
· Kubernetes / Docker knowledge.
· Knowledge of JavaScript UI frameworks, ideally Vue.
· Extensive experience with source control (preferably Git).
· Strong understanding of RESTful services (JSON) and API Design.
· Broad knowledge of Cloud infrastructure (PaaS, DBaaS).
· Experience of mentoring and coaching engineers operating within a co-located environment.
Skills: .Net Fullstack, Azure Cloudformation, Javascript, Angular
Must-Haves:
.Net (5+ years), Agile methodologies, RESTful API design, Azure (Web/Container Apps, Functions, SQL Server), Git source control
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
F2F Weekend Interview on 14th Feb 2026
About This Opportunity
We're seeking a Solution Architect who thrives on autonomy and impact. If you're a tech-savvy innovator ready to architect cloud solutions at scale while working from anywhere in the world, this is your role.
What You'll Lead
Core Responsibilities
- Design scalable cloud architectures that serve as the technical foundation for enterprise applications
- Lead major projects end-to-end, from concept through deployment, owning architectural decisions and outcomes
- Architect high-performance APIs and microservices using modern tech stacks (Node.js, TypeScript, Java, GoLang)
- Integrate cutting-edge AI capabilities into solutions—leveraging Azure AI, OpenAI, and similar platforms to drive competitive advantage
- Own cloud strategy & optimization on Azure, ensuring security, scalability, and cost-efficiency
- Mentor technical teams and guide cross-functional stakeholders toward shared architectural vision
What We're Looking For
- 8+ years in solution architecture, cloud engineering, or equivalent leadership roles
- Technical depth across Node.js, TypeScript, Java, GoLang, and Generative AI frameworks
- Cloud mastery with Azure, Kubernetes, containerization, and CI/CD pipelines
- Leadership mindset: You drive decisions, mentor peers, and own project outcomes
- Communication excellence: You translate complex technical concepts for both engineers and business stakeholders
- Entrepreneurial spirit: You work best with autonomy and take ownership of major initiatives
Why Join Us
✅ 100% Remote – Work from home, a coffee shop, or anywhere globally
✅ Lead Significant Projects – Take ownership of architectural decisions that impact our platform
✅ Tech-Forward Culture – We invest in latest cloud, AI, and DevOps technologies
✅ Founded by Architects – Leadership team with 25+ years of cloud expertise—mentorship built in
✅ Rapid Innovation – We ship fast. You'll see your designs live in production within weeks
✅ Continuous Learning – Access to tools, courses, and conferences to sharpen your craft
Ready to Shape the Future?
If you're energized by architectural challenges and want the freedom to work from anywhere, we'd love to connect.
🌐 Learn more: prismcloudinc.com
Job Details
- Job Title: SDE-3
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.
Key Responsibilities:
Technical Leadership-
- Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
- Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
- Review code and ensure adherence to best practices, coding standards, and security guidelines.
System Architecture and Design-
- Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
- Own the architecture of core modules and contribute to overall platform scalability and reliability.
- Advocate for and implement microservices architecture, ensuring modularity and reusability.
Problem Solving and Optimization-
- Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
- Optimize database queries and design scalable data storage solutions.
- Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.
Innovation and Continuous Improvement-
- Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
- Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
- Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.
Collaboration and Communication-
- Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
- Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.
Ideal Candidate
- Strong Java Backend Engineer.
- Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
- Must have been SDE-2 for at least 2.5 years
- Hands-on experience with RESTful APIs and microservices architecture
- Strong understanding of distributed systems, multithreading, and async programming
- Experience with relational and NoSQL databases
- Exposure to Kafka/RabbitMQ and Redis/Memcached
- Experience with AWS / GCP / Azure, Docker, and Kubernetes
- Familiar with CI/CD pipelines and modern DevOps practices
- Background in product companies (B2B SaaS preferred)
- Has stayed at least 2 years with each previous company
- Education: B.Tech in Computer Science from Tier 1 or Tier 2 colleges
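The async programming called out above is the same fan-out pattern in any language: issue several I/O-bound calls concurrently and gather the results, rather than awaiting them one by one. A short sketch (shown in Python; `fetch_user` is a stand-in for a real network call):

```python
import asyncio

async def fetch_user(user_id):
    """Simulated I/O-bound call (e.g. an HTTP request to a user service)."""
    await asyncio.sleep(0.01)  # stands in for network latency
    return {"id": user_id, "name": f"user-{user_id}"}

async def fetch_all(user_ids):
    # Concurrent fan-out: total latency is roughly one call, not len(user_ids) calls.
    return await asyncio.gather(*(fetch_user(u) for u in user_ids))

if __name__ == "__main__":
    users = asyncio.run(fetch_all([1, 2, 3]))
    print(users)
```

In a Java/Spring codebase the equivalent tools would be `CompletableFuture` or reactive types; the concurrency reasoning is identical.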
Job Title:
Senior Full Stack Developer
Experience: 5 to 7 Years (minimum 5 years of full-stack development experience mandatory)
Location: Bangalore (Onsite)
About ProductNova:
ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.
What We Do
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
● Product discovery and problem definition
● User research and product strategy
● Experience design and rapid prototyping
● AI-enabled engineering, testing, and platform architecture
● Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.
Role Overview
We are looking for a Senior Full Stack Developer with strong expertise in frontend development using React JS, backend microservices architecture in C#/Python, and hands-on experience with AI-enabled development tools. The ideal candidate should be comfortable working in an onsite environment and collaborating closely with cross-functional teams to deliver scalable, high-quality applications.
Key Responsibilities:
• Develop and maintain responsive, high-performance frontend applications using React JS
• Design, develop, and maintain microservices-based backend systems using C# and Python
• Build data layers and databases using MS SQL, Cosmos DB, and PostgreSQL
• Leverage AI-assisted development tools (Cursor / GitHub Copilot) to improve coding efficiency and quality
• Collaborate with product managers, designers, and backend teams to deliver end-to-end solutions
• Write clean, reusable, and well-documented code following best practices
• Participate in code reviews, debugging, and performance optimization
• Ensure application security, scalability, and reliability
Mandatory Technical Skills:
• Strong hands-on experience in React JS (Frontend Coding) – 3+ yrs
• Solid experience in Microservices Architecture C#, Python – 3+ yrs
• Experience building Data Layer and Databases using MS SQL – 2+ yrs
• Practical exposure to AI-enabled development using Cursor or GitHub Copilot – 1yr
• Good understanding of REST APIs and system integration
• Experience with version control systems (Git), ADO
Good to Have:
• Experience with cloud platforms (Azure)
• Knowledge of containerization tools like Docker and Kubernetes
• Exposure to CI/CD pipelines
• Understanding of Agile/Scrum methodologies
Why Join ProductNova
● Work on real-world, high-impact products used at scale
● Collaborate with experienced product, engineering, and AI leaders
● Solve complex problems with ownership and autonomy
● Build AI-first systems, not experimental prototypes
● Grow rapidly in a culture that values clarity, execution, and learning
If you are passionate about building meaningful products, solving hard problems, and shaping the future of AI-driven software, ProductNova offers the environment and challenges to grow your career.
ROLE: AI/ML Senior Developer
Exp: 5 to 8 Years
Location: Bangalore (Onsite)
About ProductNova
ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
1. Product discovery and problem definition
2. User research and product strategy
3. Experience design and rapid prototyping
4. AI-enabled engineering, testing, and platform architecture
5. Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.
Role Overview:
We are seeking an experienced AI / ML Senior Developer with strong hands-on expertise in large language models (LLMs) and AI-driven application development. The ideal candidate will have practical experience working with GPT and Anthropic models, building and training B2B products powered by AI, and leveraging AI-assisted development tools to deliver scalable and intelligent solutions.
Key Responsibilities:
1. Model Analysis & Optimization
Analyze, customize, and optimize GPT and Anthropic-based models to ensure reliability, scalability, and performance for real-world business use cases.
2. AI Product Design & Development
Design and build AI-powered products, including model training, fine-tuning, evaluation, and performance optimization across development lifecycles.
3. Prompt Engineering & Response Quality
Develop and refine prompt engineering strategies to improve model accuracy, consistency, relevance, and contextual understanding.
4. AI Service Integration
Build, integrate, and deploy AI services into applications using modern development practices, APIs, and scalable architectures.
5. AI-Assisted Development Productivity
Leverage AI-enabled coding tools such as Cursor and GitHub Copilot to accelerate development, improve code quality, and enhance efficiency.
6. Cross-Functional Collaboration
Work closely with product, business, and engineering teams to translate business requirements into effective AI-driven solutions.
7. Model Monitoring & Continuous Improvement
Monitor model performance, analyze outputs, and iteratively improve accuracy, safety, and overall system effectiveness.
Qualifications:
1. Hands-on experience analyzing, developing, fine-tuning, and optimizing GPT and Anthropic-based large language models.
2. Strong expertise in prompt design, experimentation, and optimization to enhance response accuracy and reliability.
3. Proven experience building, training, and deploying chatbots or conversational AI systems.
4. Practical experience using AI-assisted coding tools such as Cursor or GitHub Copilot in production environments.
5. Solid programming experience in Python, with strong problem-solving and development fundamentals.
6. Experience working with embeddings, similarity search, and vector databases for retrieval-augmented generation (RAG).
7. Knowledge of MLOps practices, including model deployment, versioning, monitoring, and lifecycle management.
8. Experience with cloud environments such as Azure, AWS for deploying and managing AI solutions.
9. Experience with APIs, microservices architecture, and system integration for scalable AI applications.
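The retrieval step behind the embeddings/RAG qualification above can be sketched in a few lines: rank documents by cosine similarity between a query embedding and stored document embeddings. A real system would use an embedding model and a vector database; the toy vectors and document ids below are made up for illustration.

```python
import math

# Toy RAG retrieval step: rank documents by cosine similarity between a
# query embedding and precomputed document embeddings. Vectors are invented.

DOC_VECTORS = {
    "refund-policy": [0.9, 0.1, 0.0],
    "api-reference": [0.1, 0.9, 0.2],
    "onboarding":    [0.2, 0.3, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(DOC_VECTORS,
                    key=lambda d: cosine(query_vec, DOC_VECTORS[d]),
                    reverse=True)
    return ranked[:k]
```

The retrieved documents would then be injected into the prompt as context before the generation call.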
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven AI/ML Senior Developer with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
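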
ROLE - TECH LEAD/ARCHITECT with AI Expertise
Experience: 10–15 Years
Location: Bangalore (Onsite)
Company Type: Product-based | AI B2B SaaS
About ProductNova
ProductNova is a fast-growing product development organization that partners with
ambitious companies to build, modernize, and scale high-impact digital products. Our teams
of product leaders, engineers, AI specialists, and growth experts work at the intersection of
strategy, technology, and execution to help organizations create differentiated product
portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+
large-scale, AI-powered products and platforms across industries. We specialize in solving
complex business problems through thoughtful product design, robust engineering, and
responsible use of AI.
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply
aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
1. Product discovery and problem definition
2. User research and product strategy
3. Experience design and rapid prototyping
4. AI-enabled engineering, testing, and platform architecture
5. Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are
resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with
customers to iterate based on user feedback and expand products across new use cases,
customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into
viable products, identifying target customers, achieving product-market fit, and supporting
go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying
opportunities to modernize and scale existing products, enter new geographies, and build
entirely new product lines. Our teams enable innovation through AI, platform re-
architecture, and portfolio expansion to support sustained business growth.
Role Overview
We are looking for a Tech Lead / Architect to drive the end-to-end technical design and
development of AI-powered B2B SaaS products. This role requires a strong hands-on
technologist who can work closely with ML Engineers and Full Stack Development teams,
own the product architecture, and ensure scalability, security, and compliance across the
platform.
Key Responsibilities
• Lead the end-to-end architecture and development of AI-driven B2B SaaS products
• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to
integrate AI/ML models into production systems
• Define and own the overall product technology stack, including backend, frontend,
data, and cloud infrastructure
• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS
platforms
• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices
• Ensure data privacy, security, compliance, and governance (SOC2, GDPR, ISO, etc.)
across the product
• Take ownership of application security, access controls, and compliance
requirements
• Actively contribute hands-on through coding, code reviews, complex feature development and architectural POCs
• Mentor and guide engineering teams, setting best practices for coding, testing, and
system design
• Work closely with Product Management and Leadership to translate business
requirements into technical solutions
Qualifications:
• 10–15 years of overall experience in software engineering and product
development
• Strong experience building B2B SaaS products at scale
• Proven expertise in system architecture, design patterns, and distributed systems
• Hands-on experience with cloud platforms (Azure, AWS/GCP)
• Solid background in backend technologies (Python / .NET / Node.js / Java) and
modern frontend frameworks (React, etc.)
• Experience working with AI/ML teams in deploying and tuning ML models into production
environments
• Strong understanding of data security, privacy, and compliance frameworks
• Experience with microservices, APIs, containers, Kubernetes, and cloud-native
architectures
• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code
• Excellent communication and leadership skills with the ability to work cross-
functionally
• Experience in AI-first or data-intensive SaaS platforms
• Exposure to MLOps frameworks and model lifecycle management
• Experience with multi-tenant SaaS security models
• Prior experience in product-based companies or startups
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
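The YAML-based CI/CD pipelines described in the responsibilities above might look like the following minimal `azure-pipelines.yml`. This is a sketch only; the stage/job names, agent image, and script contents are placeholders, not a working deployment.

```yaml
# Minimal Azure DevOps pipeline sketch: build, test, then deploy.
# Stage/job names and script bodies are illustrative placeholders.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildApp
        steps:
          - script: echo "restore and compile here"
            displayName: Build
          - script: echo "run unit tests here"
            displayName: Test
  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - job: DeployApp
        steps:
          - script: echo "deploy artifact here"
            displayName: Deploy
```

Splitting build and deploy into separate stages with `dependsOn` and `condition: succeeded()` is what lets the pipeline gate deployments on passing tests.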
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
Application Architect – .NET
Role Overview
We are looking for a senior, hands-on Application Architect with deep .NET experience who can fix and modernize our current systems and build a strong engineering team over time.
Important – This role is hands-on with an architectural mindset. This person should be comfortable working with legacy systems and able to make and explain tradeoffs.
Key Responsibilities
Application Architecture & Modernization
- Own application architecture across legacy .NET Framework and modern .NET systems
- Review the existing application and drive an incremental modernization approach alongside new feature development as the business grows
- Own the gradual move away from outdated patterns (Web Forms, tightly coupled MVC, legacy UI constructs)
- Define clean API contracts between front-end and backend services
- Identify and resolve performance bottlenecks across code and database layers
- Improve data access patterns, caching strategies, and system responsiveness
Backend, APIs & Integrations
- Design scalable backend services and APIs
- Improve how newer .NET services interact with legacy systems
- Lead integrations with external systems, including Zoho
- Prior experience integrating with Zoho (CRM, Finance, or other modules) is a strong value add
- Experience designing and implementing integrations using EDI standards
Data & Schema Design
- Review existing database schemas and core data structures
- Redesign data models to support growth, and reporting/analytics requirements
- Optimize SQL queries to reduce load on the database engine
Cloud Awareness
- Design applications with cloud deployment in mind (primarily Azure)
- Understand how to use Azure services to improve security, scalability, and availability
- Work with Cloud and DevOps teams to ensure application architecture aligns with cloud best practices
- Push for CI/CD automation so that the team ships code regularly and makes steady progress.
Team Leadership & Best Practices
- Act as a technical leader and mentor for the engineering team
- Help hire, onboard, and grow a team under this role over time.
- Define KPIs and engineering best practices (including focus on documentation)
- Set coding standards, architectural guidelines, and review practices
- Improve testability and long-term health of the codebase
- Raise the overall engineering bar through reviews, coaching, and clear standards
- Create a culture of ownership and quality
Cross-Platform Thinking
- Strong communicator who can convert complex tech topics into business-friendly lingo. Understands the business needs and importance of user experience
- While .NET is the core stack, contribute to architecture decisions across platforms
- Leverages AI tools to accelerate design, coding, reviews, and troubleshooting while maintaining high quality
Skills and Experience
- 12+ years of hands-on experience in application development (preferably on .NET stack)
- Experience leading technical direction while remaining hands-on
- Deep expertise in .NET Framework (4.x) and modern .NET (.NET Core / .NET 6+)
- Must have led a project to modernize a legacy system – preferably moving from .NET Framework to .NET Core.
- Experience with MVC, Web Forms, and legacy UI patterns
- Solid backend and API design experience
- Strong understanding of database design and schema evolution
- Understanding of Analytical systems – OLAP, Data warehousing, data lakes.
- Strong proponent of AI and has extensively used AI tools such as Github Copilot, Cursor, Windsurf, Codex, etc.
- Integration with Zoho would be a plus.
Roles and Responsibilities:
▪ Data Pipeline Development: Build, deploy, and maintain efficient ETL/ELT pipelines using Azure
Data Factory & Azure Synapse Analytics.
▪ We are only looking for senior candidates with over 5 years of relevant experience and ample
client-facing experience.
▪ Finance/insurance experience is also a must.
▪ Data Modelling & Warehousing: Design and optimize data models, warehouses, and lakes for
structured/unstructured data.
▪ SQL & Query Optimization: Write complex SQL queries, optimize performance, and manage
databases.
▪ Python Automation: Develop scripts for data processing, automation, and integration using
Python (Pandas, NumPy).
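The ETL responsibilities above reduce to an extract-transform-load loop. The sketch below uses only the standard library so it runs standalone; a production pipeline would run in ADF/Synapse or use Pandas, and the insurance-style columns and the loss-ratio metric are invented for illustration.

```python
import csv
import io

# Tiny ETL sketch: extract rows from CSV, transform (derive a loss-ratio
# column), load into an in-memory list. Column names and the metric are
# illustrative only; real pipelines would land the output in a warehouse.

RAW = """policy_id,premium,claims
P1,1200,300
P2,800,900
P3,1500,100
"""

def run_pipeline(raw_csv: str):
    rows = list(csv.DictReader(io.StringIO(raw_csv)))          # extract
    out = []
    for r in rows:                                             # transform
        premium, claims = float(r["premium"]), float(r["claims"])
        if premium > 0:                                        # guard bad rows
            out.append({"policy_id": r["policy_id"],
                        "loss_ratio": round(claims / premium, 2)})
    return out                                                 # load (in-memory)

result = run_pipeline(RAW)
```

The same shape (extract, row-wise transform with validation, load) carries over directly to PySpark or Pandas at scale.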
Technical Skills:
▪ Cloud Technologies: Azure Synapse Analytics, Azure Fabric, Azure Databricks and AWS(good to
have)
▪ Knowledge of Python, Pyspark, SQL, ETL concepts
▪ Good understanding of Insurance Operations and KPI reporting is an advantage.
JOB DETAILS:
* Job Title: Associate III - Azure Data Engineer
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-6 years
* Location: Trivandrum, Kochi
Job Description: Azure Data Engineer (4–6 Years Experience)
Job Type: Full-time
Locations: Kochi, Trivandrum
Must-Have Skills
Azure & Data Engineering
- Azure Data Factory (ADF)
- Azure Databricks (PySpark)
- Azure Synapse Analytics
- Azure Data Lake Storage Gen2
- Azure SQL Database
Programming & Querying
- Python (PySpark)
- SQL / Spark SQL
Data Modelling
- Star & Snowflake schema
- Dimensional modelling
Source Systems
- SQL Server
- Oracle
- SAP
- REST APIs
- Flat files (CSV, JSON, XML)
CI/CD & Version Control
- Git
- Azure DevOps / GitHub Actions
Monitoring & Scheduling
- ADF triggers
- Databricks jobs
- Log Analytics
Security
- Managed Identity
- Azure Key Vault
- Azure RBAC / Access Control
Soft Skills
- Strong analytical & problem-solving skills
- Good communication and collaboration
- Ability to work in Agile/Scrum environments
- Self-driven and proactive
Good-to-Have Skills
- Power BI basics
- Delta Live Tables
- Synapse Pipelines
- Real-time processing (Event Hub / Stream Analytics)
- Infrastructure as Code (Terraform / ARM templates)
- Data governance tools like Azure Purview
- Azure Data Engineer Associate (DP-203) certification
Educational Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Skills: Azure Data Factory, Azure Databricks, Azure Synapse, Azure Data Lake Storage
Must-Haves
Azure Data Factory (4-6 years), Azure Databricks/PySpark (4-6 years), Azure Synapse Analytics (4-6 years), SQL/Spark SQL (4-6 years), Git/Azure DevOps (4-6 years)
Skills: Azure, Azure data factory, Python, Pyspark, Sql, Rest Api, Azure Devops
Relevant 4 - 6 Years
python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
JOB DETAILS:
* Job Title: Associate III - Data Engineering
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-6 years
* Location: Trivandrum, Kochi
Job Description
Job Title:
Data Services Engineer – AWS & Snowflake
Job Summary:
As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.
You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.
Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.
Responsibilities:
• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema)
• Optimize query performance using clustering keys, materialized views, and caching
• Develop and maintain Snowflake data warehouses and data marts.
• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks).
• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica)
• Utilize Snowpark and Python/Java for complex transformations
• Implement RBAC, data masking, and row-level security.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with stakeholders to gather data requirements and deliver solutions.
• Ensure data quality, governance, and compliance with industry standards.
• Monitor, troubleshoot, and resolve data pipeline and performance issues.
• Document data architecture, processes, and best practices.
• Support data migration and integration from various sources.
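The Snowpipe/Streams/Tasks workflow named in the responsibilities above could be sketched in Snowflake SQL roughly as follows. All object names (tables, stream, task, warehouse) are invented for illustration; this is a fragment, not a complete deployment.

```sql
-- Capture changes on a staging table and merge them on a schedule.
-- Table, stream, task, and warehouse names are illustrative only.
CREATE OR REPLACE STREAM orders_stream ON TABLE raw.orders;

CREATE OR REPLACE TASK merge_orders
  WAREHOUSE = etl_wh
  SCHEDULE = '5 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('orders_stream')
AS
  MERGE INTO analytics.orders AS t
  USING orders_stream AS s
    ON t.order_id = s.order_id
  WHEN MATCHED THEN UPDATE SET t.amount = s.amount
  WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount);

ALTER TASK merge_orders RESUME;
```

The `WHEN SYSTEM$STREAM_HAS_DATA` guard keeps the task from burning warehouse credits when there are no new changes to process.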
Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 3 to 4 years of hands-on experience in data engineering or data services.
• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).
• Strong expertise in Snowflake architecture, development, and optimization.
• Proficiency in SQL and Python for data manipulation and scripting.
• Solid understanding of ETL/ELT processes and data modeling.
• Experience with data integration tools and orchestration frameworks.
• Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
• AWS Glue, AWS Lambda, Amazon Redshift
• Snowflake Data Warehouse
• SQL & Python
Skills: Aws Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse
Must-Haves
AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)
Skills: AWS, AWS lambda, Snowflake, Data engineering, Snowpipe, Data integration tools, orchestration framework
Relevant 4 - 6 Years
python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation Testing + Python + Azure)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and execute automation test scripts using Python.
- Build and maintain scalable test automation frameworks.
- Work with Azure DevOps for CI/CD, pipeline automation, and test management.
- Perform functional, regression, and integration testing for web and cloud‑based applications.
- Analyze test results, log defects, and collaborate with developers for timely closure.
- Participate in requirement analysis, test planning, and strategy discussions.
- Ensure test coverage, maintain script quality, and optimize automation suites.
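The framework-building responsibility above usually means patterns like page objects. The sketch below illustrates the pattern with a fake driver so it runs standalone; the class and selector names are invented, and a real framework would wire `LoginPage` to a Selenium or Playwright driver instead.

```python
# Sketch of the page-object pattern used in test automation frameworks.
# FakeDriver stands in for a real browser driver so the example runs
# standalone; its method names loosely mirror driver APIs but are invented.

class FakeDriver:
    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def fill(self, selector, value):
        self.fields[selector] = value

    def click(self, selector):
        # Pretend login succeeds when both fields were filled.
        self.logged_in = bool(self.fields.get("#user") and self.fields.get("#pass"))

class LoginPage:
    """Page object: one class per page, one method per user action."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill("#user", user)
        self.driver.fill("#pass", password)
        self.driver.click("#submit")
        return self.driver.logged_in

driver = FakeDriver()
ok = LoginPage(driver).login("alice", "s3cret")
```

Concentrating selectors inside page objects is what keeps suites maintainable: when the UI changes, only the page class changes, not every test.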
Required Experience:
- Strong hands-on expertise in automation testing for web/cloud applications.
- Solid proficiency in Python for creating automation scripts and frameworks.
- Experience working with Azure services and Azure DevOps pipelines.
- Good understanding of QA methodologies, SDLC/STLC, and defect lifecycle.
- Experience with tools like Selenium, PyTest, or similar frameworks (good to have).
- Familiarity with Git or other version control tools.
Good to Have:
- Experience with API testing (REST, Postman, or similar tools)
- Knowledge of Docker/Kubernetes
- Exposure to Agile/Scrum environments
Skills: automation testing, python, java, azure
JOB DETAILS:
* Job Title: Tester III - Software Testing- Playwright + API testing
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and maintain automated test scripts for web applications using Playwright.
- Perform API testing using industry-standard tools and frameworks.
- Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
- Analyze test results, identify defects, and track them to closure.
- Participate in requirement reviews, test planning, and test strategy discussions.
- Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.
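An API test of the kind described above boils down to: call an endpoint, assert on status and body. The sketch below is self-contained, spinning up a local HTTP endpoint with the standard library; in practice the same assertions would target a real service via Postman, REST Assured, or pytest, and the `/health` route and payload are invented.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Self-contained API test sketch: serve a local JSON endpoint, call it,
# and assert on status code and body. Route and payload are invented.

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/health"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    payload = json.loads(resp.read())
server.shutdown()
```

Asserting on both the status code and the parsed body (rather than raw text) is what makes such checks robust to formatting changes.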
Required Experience:
- Strong hands-on experience in Automation Testing for web-based applications.
- Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
- Solid experience in API testing (Postman, REST Assured, or similar tools).
- Good understanding of software QA methodologies, tools, and processes.
- Ability to write clear, concise test cases and automation scripts.
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.
Good to Have:
- Knowledge of cloud environments (AWS/Azure)
- Experience with version control tools like Git
- Familiarity with Agile/Scrum methodologies
Skills: automation testing, sql, api testing, soap ui testing, playwright
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Job Details
- Designation: Principal Engineer – Solr
- Location: Bangalore / Mysore / Remote
- Job Type: Full-time
- Department: Software R&D
Job Summary
We are seeking a highly skilled and experienced Principal Engineer with a strong background in Apache Solr and Java to lead our Engineering and customer-led initiatives. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our search platform while providing expert-level troubleshooting and resolution for critical production issues.
This role will involve designing the architecture for new platforms while reviewing and recommending better approaches for existing ones to drive continuous improvement and efficiency.
Key Responsibilities
- Lead Engineering and support activities for Solr-based search applications, ensuring minimal downtime and optimal performance
- Design and develop the architecture of new platforms while reviewing and recommending better approaches for existing ones
- Regularly work towards enhancing search ranking, query understanding, and retrieval effectiveness
- Diagnose, troubleshoot, and resolve complex technical issues in Solr, Java-based applications, and supporting infrastructure
- Perform deep-dive analysis of logs, performance metrics, and alerts to proactively prevent incidents
- Optimize Solr indexes, queries, and configurations to enhance search performance and reliability
- Work closely with development, operations, and business teams to drive improvements in system stability and efficiency
- Implement monitoring tools, dashboards, and alerting mechanisms to enhance observability and proactive issue detection
- Exposure to AI-based search using vector databases, RAG models, NLP, and LLMs
- Collaborate on capacity planning, system scaling, and disaster recovery strategies for mission-critical search systems
- Provide mentorship and technical guidance to junior engineers and support teams
- Drive innovation by tracking latest trends, emerging technologies, and best practices in AI-based Search, Solr, and other search platforms
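The query-tuning work above often starts with how the select request is assembled: parser choice, field boosts, filter queries, and field limiting. The sketch below only builds the query string with the standard library; the collection fields and boost values are invented, and in production the string would be sent to Solr's `/select` handler.

```python
from urllib.parse import urlencode

# Sketch of assembling a tuned Solr select query: edismax ranking, a filter
# query, and field limiting. Field names and boosts are illustrative only.

def build_solr_query(term: str, category: str) -> str:
    params = {
        "q": term,
        "defType": "edismax",         # query parser supporting per-field boosts
        "qf": "title^3 description",  # boost title matches over description
        "fq": f"category:{category}", # filter query, cached separately by Solr
        "fl": "id,title,score",       # return only the fields actually needed
        "rows": 10,
    }
    return "select?" + urlencode(params)

query = build_solr_query("wireless drill", "power-tools")
```

Moving non-scoring constraints into `fq` rather than `q` lets Solr cache and reuse the filter, which is one of the cheapest reliability and latency wins on a busy cluster.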
Requirements
- 8+ years of experience in software development and production support with a focus on Apache Solr, Java, and databases (Oracle, MySQL, PostgreSQL, etc.)
- Strong understanding of Solr indexing, query execution, schema design, configuration, and tuning
- Experience in designing and implementing scalable system architectures for search platforms
- Proven ability to review and assess existing platform architectures, identifying areas for improvement and recommending better approaches
- Proficiency in Java, Spring Boot, and micro-services architectures
- Experience with Linux / Unix-based environments, shell scripting, and debugging production systems
- Hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Splunk, ELK Stack) and log analysis
- Expertise in troubleshooting performance issues related to Solr, JVM tuning, and memory management
- Familiarity with cloud platforms such as AWS, Azure, or GCP and containerization technologies like Docker / Kubernetes
- Strong analytical and problem-solving skills, with the ability to work under pressure in a fast-paced environment
- Certifications in Solr, Java, or cloud technologies
- Excellent communication and leadership abilities
About Our Benefits
- Competitive salary
- Health insurance
- Retirement plan
- Paid time off
- Training and development opportunities
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more - backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
Role Overview
As Lead – Product Support & IT Infrastructure, you will oversee the technology backbone that supports MIC Global’s products, data operations, and global business continuity. You will manage all aspects of IT infrastructure, system uptime, cybersecurity, and support operations, ensuring that MIC’s platforms remain reliable, secure, and scalable.
This is a pivotal, hands-on leadership role, blending strategic oversight with operational execution. The ideal candidate combines strong technical expertise with a proactive, service-oriented mindset to support both internal teams and external partners.
Key Responsibilities
Infrastructure & Operations
- Oversee all IT infrastructure and operations, including database administration, hosting environments, and production systems.
- Ensure system reliability, uptime, and performance across global deployments.
- Align IT operations with Agile development cycles and product release plans.
- Manage the IT service desk (MiTracker), ensuring timely and high-quality resolution of incidents.
- Drive continuous improvement in monitoring, alerting, and automation processes.
- Lead the development, testing, and maintenance of Disaster Recovery (DR) and Business Continuity Plans (BCP).
- Manage vendor relationships, IT budgets, and monthly cost reporting.
Security & Compliance
- Lead cybersecurity efforts across the organization, developing and implementing comprehensive information security strategies.
- Monitor, respond to, and mitigate security incidents in a timely manner.
- Maintain compliance with industry standards and data protection regulations (e.g., SOC 2, GDPR, ISO27001).
- Prepare regular reports on security incidents, IT costs, and system performance for review with the Head of Technology.
Team & Process Management
- Deliver exceptional customer service by ensuring internal and external technology users are supported effectively.
- Implement strategies to ensure business continuity during absences — including defined backup responsibilities and robust process documentation.
- Promote knowledge sharing and operational excellence across Product Support and IT teams.
- Build and maintain a culture of accountability, responsiveness, and cross-team collaboration.
Required Qualifications
- Azure administration experience and qualifications, such as Microsoft Certified: Azure Administrator Associate or Azure Solutions Architect Expert.
- Strong SQL Server DBA capabilities and experience, including performance tuning, high availability configurations, and certifications like Microsoft Certified: Azure Database Administrator Associate.
- 8+ years of experience in IT infrastructure management, DevOps, or IT operations; experience within product-focused companies (fintech, insurtech, or SaaS environments) is essential.
- Proven experience leading service desk or technical support functions in a 24/7 uptime environment.
- Deep understanding of cloud infrastructure (AWS/Azure/GCP), database administration, and monitoring tools (e.g., Grafana, Datadog, CloudWatch).
- Hands-on experience with security frameworks, incident response, and business continuity planning.
- Strong analytical, problem-solving, and communication skills, with the ability to work cross-functionally.
- Demonstrated leadership in managing teams and implementing scalable IT systems and processes.
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development.
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern Microsoft technologies. This role is ideal for final-year students (2026 pass-outs) who want hands-on experience in building scalable web applications while maintaining a healthy work-life balance through remote work opportunities.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final semester students (2026 pass-outs) or recent graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
1. Technical Skills (Must Have)
- Experience with .NET Core (.NET 6 / 7 / 8)
- Strong knowledge of C#, including:
- Object-Oriented Programming (OOP) concepts
- async/await
- LINQ
- ASP.NET Core (Web API / MVC)
2. Database Skills
- SQL Server (preferred)
- Writing complex SQL queries, joins, and subqueries
- Stored Procedures, Functions, and Indexes
- Database design and performance tuning
- Entity Framework Core
- Migrations and transaction handling
3. Frontend Skills (Required)
- JavaScript (ES5 / ES6+)
- jQuery
- DOM manipulation
- AJAX calls
- Event handling
- HTML5 & CSS3
- Client-side form validation
4. Security & Performance
- Data validation and exception handling
- Caching concepts (In-memory / Redis – good to have)
5. Tools & Environment
- Visual Studio / VS Code
- Git (GitHub / Azure DevOps)
- Basic knowledge of server deployment
6. Good to Have (Optional)
- Azure or AWS deployment experience
- CI/CD pipelines
- Docker
- Experience with data handling
Additional Requirements (Work-from-Home Setup)
This role supports remote work. Candidates must meet the following minimum infrastructure requirements:
- Laptop/Desktop: Windows-based system
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. For over 20 years, ARDEM has successfully delivered high-quality outsourcing and automation services to clients across the USA and Canada.
We are growing rapidly and continuously innovating to become a better service provider for our customers. Our mission is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company in the industry.
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
About the role:
As a DevOps Engineer, you will play a critical role in bridging the gap between development, operations, and security teams to enable fast, secure, and reliable software delivery. With 5+ years of hands-on experience, you will design, implement, and maintain scalable, automated, cloud-native infrastructure solutions.
Key Requirements:
- 5+ years of hands-on experience in DevOps or Cloud Engineering roles.
- Strong expertise in at least one public cloud provider (AWS / Azure / GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Solid experience with Kubernetes and containerized applications.
- Strong knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD).
- Scripting/programming skills in Python, Shell, or Go for automation.
- Hands-on experience with monitoring, logging, and incident management.
- Familiarity with security practices in DevOps (secrets management, IAM, vulnerability scanning).
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Role: Software Development (Senior and Associate)
Experience Level: 4 to 9 Years
Work location: Remote
What you’ll do:
We are seeking a Mid-Level Node.js Developer to join our development team as an individual contributor. You will design, develop, and maintain scalable microservices for diverse client projects, working on enterprise applications that require high performance, reliability, and seamless deployment in containerized environments.
Key Responsibilities:
● Develop and maintain scalable Node.js microservices for diverse client projects
● Implement robust REST APIs with proper error handling and validation
● Write comprehensive unit and integration tests ensuring high code quality
● Design portable, efficient solutions deployable across different client environments
● Collaborate with cross-functional teams and client stakeholders
● Optimize application performance for high-concurrency scenarios
● Implement security best practices for enterprise applications
● Participate in code reviews and maintain coding standards
● Support deployment and troubleshooting in client environments
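The REST API responsibilities above (proper error handling plus validation) boil down to a repeatable handler pattern. The sketch below is a dependency-free illustration of that pattern, not any company's actual code; the route payload and status choices are invented for the example:

```typescript
// Framework-free sketch of a REST handler with input validation
// and structured error responses. Payload shape is hypothetical.
interface HttpResponse { status: number; body: unknown; }

function createUserHandler(rawBody: string): HttpResponse {
  let payload: { name?: unknown };
  try {
    payload = JSON.parse(rawBody);
  } catch {
    return { status: 400, body: { error: "malformed JSON" } };
  }
  if (typeof payload.name !== "string" || payload.name.length === 0) {
    return { status: 422, body: { error: "name is required" } };
  }
  // A real service would persist the user and return its new id.
  return { status: 201, body: { name: payload.name } };
}

console.log(createUserHandler('{"name":"Asha"}').status); // 201
console.log(createUserHandler("not json").status);        // 400
```

In Express or Fastify the same checks typically live in validation middleware so every route gets them for free.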
Must have skills:
Core Technical Expertise:
● Node.js: 4+ years of production experience with Node.js (ES6+, Async/Await, Promises, Event Loop understanding)
● Frameworks: Strong hands-on experience with Express.js, Fastify, or NestJS
● REST API Development: Proven experience designing and implementing RESTful web services and middleware
● JavaScript/TypeScript: Proficient in modern JavaScript (ES6+) and TypeScript for type-safe development
● Testing: Experience with testing frameworks (Jest, Mocha, Chai), unit testing, integration testing, mocking
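Mocking, mentioned in the testing bullet above, often amounts to passing a fake collaborator into the code under test. A minimal hand-rolled sketch (the `MailClient` interface is invented; Jest's `jest.fn()` automates the same recording):

```typescript
// Mocking-by-injection sketch; MailClient is a hypothetical dependency.
interface MailClient { send(to: string, body: string): void; }

function notifyUser(mail: MailClient, user: string): string {
  mail.send(user, "Welcome!");
  return `notified ${user}`;
}

// Hand-rolled mock that records calls instead of sending mail.
const calls: string[] = [];
const fakeMail: MailClient = { send: (to) => { calls.push(to); } };

console.log(notifyUser(fakeMail, "dev@example.com")); // notified dev@example.com
console.log(calls.length); // 1
```

Designing functions to take their dependencies as parameters is what makes this kind of isolated unit testing possible in the first place.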
Microservices & Deployment:
● Containerization: Hands-on Docker experience for packaging and deploying Node.js applications
● Microservices Architecture: Understanding of service decomposition, inter-service communication, event-driven architecture
● Abstraction & Portability: Environment-agnostic design, configuration management (dotenv, config modules)
● Build Tools: NPM/Yarn for dependency management, understanding of package.json
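The "Abstraction & Portability" bullet above (environment-agnostic design via dotenv/config modules) usually looks like a single config loader that reads everything from the environment with sane defaults, so the same container image runs unchanged in any client environment. A minimal sketch, with variable names invented for illustration:

```typescript
// Environment-agnostic config sketch: each setting comes from the
// environment with a default fallback. Names are hypothetical.
interface AppConfig { port: number; logLevel: string; dbUrl: string; }

function loadConfig(
  env: Record<string, string | undefined> = process.env
): AppConfig {
  return {
    port: Number(env.PORT ?? 3000),
    logLevel: env.LOG_LEVEL ?? "info",
    dbUrl: env.DATABASE_URL ?? "postgres://localhost:5432/dev",
  };
}

const cfg = loadConfig({ PORT: "8080" });
console.log(cfg.port, cfg.logLevel); // 8080 info
```

A dotenv file simply populates `process.env` before `loadConfig` runs, so local and deployed environments share one code path.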
Good to have skills:
Advanced Technical:
● Advanced Frameworks: NestJS, Koa.js, Hapi.js
● Orchestration: Kubernetes, Docker Swarm
● Cloud Platforms: Alibaba, Azure, or GCP services and deployment
● Message Brokers: Apache Kafka, RabbitMQ for asynchronous communication
● Databases: Both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra)
● API Gateway: Express Gateway, Kong API Gateway
Development & Operations:
● CI/CD pipelines (Jenkins, GitLab CI/CD)
● Monitoring & Observability (Winston, Morgan, Prometheus, New Relic)
● GraphQL with Apollo Server or similar
● Security best practices (Helmet.js, authentication, authorization)
Client-Facing Experience:
● Experience working in service-based organizations
● Adaptability to different domain requirements
● Understanding of various industry standards and compliance requirements
Why Join Quantiphi?
● Be part of an award-winning Google Cloud partner recognized for innovation and impact.
● Work on cutting-edge GCP-based data engineering and AI projects.
● Collaborate with a global team of data scientists, engineers, and AI experts.
● Access continuous learning, certifications, and leadership development opportunities.
Job Role: Senior .NET Developer
Experience: 8+ years
Notice Period: Immediate
Location: Trivandrum / Kochi
Job Description
Candidates should have 8+ years of experience in the IT industry, with strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud.
Working hours: 8 hours, with 4 hours of overlap with the EST time zone (12 PM - 9 PM). This overlap is mandatory, as meetings happen during these hours.
Responsibilities
☑ Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
☑ Integrate and support third-party APIs and external services
☑ Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
☑ Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
☑ Participate in Agile/Scrum ceremonies and manage tasks using Jira
☑ Understand technical priorities, architectural dependencies, risks, and implementation challenges
☑ Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability.
Primary Skills
8+ years of hands-on development experience with:
☑ C#, .NET Core 6/8+, Entity Framework / EF Core
☑ JavaScript, jQuery, REST APIs
☑ Expertise in MS SQL Server, including:
☑ Complex SQL queries, Stored Procedures, Views, Functions, Cursors, Tables, and User-Defined Types
☑ Skilled in unit testing with XUnit, MSTest
☑ Strong in software design patterns, system architecture, and scalable solution design
☑ Ability to lead and inspire teams through clear communication, technical mentorship, and ownership
☑ Strong problem-solving and debugging capabilities
☑ Ability to write reusable, testable, and efficient code
☑ Develop and maintain frameworks and shared libraries to support large-scale applications
☑ Excellent technical documentation, communication, and leadership skills
☑ Microservices and Service-Oriented Architecture (SOA)
☑ Experience in API Integrations
2+ years of hands-on experience with Azure Cloud Services, including:
☑ Azure Functions
☑ Azure Durable Functions
☑ Azure Service Bus, Event Grid, Storage Queues
☑ Blob Storage, Azure Key Vault, Azure SQL Database
☑ Application Insights, Azure Monitor
Secondary Skills
☑ Familiarity with AngularJS, ReactJS, and other front-end frameworks
☑ Experience with Azure API Management (APIM)
☑ Knowledge of Azure Containerization and Orchestration (e.g., AKS/Kubernetes)
☑ Experience with Azure Data Factory (ADF) and Logic Apps
☑ Exposure to Application Support and operational monitoring
☑ Azure DevOps - CI/CD pipelines (Classic / YAML)
Certifications Required (If Any)
☑ Microsoft Certified: Azure Fundamentals
☑ Microsoft Certified: Azure Developer Associate
☑ Other relevant certifications in Azure, .NET, or Cloud technologies.