50+ Windows Azure Jobs in India
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain: Information Technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum (Thiruvananthapuram)
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
Mandatory Skills: Java, API Development, and AWS
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
The customer currently uses the ELK stack, and the goal is to standardize and modernize logs, metrics, and traces using OpenTelemetry while improving visibility, reliability, and operational intelligence.
Observability Architecture & Modernization
· Assess the existing ELK-based observability setup and define a modern observability architecture
· Design and implement standardized logging, metrics, and distributed tracing using OpenTelemetry
· Define observability best practices for cloud-native and Azure-based applications
· Ensure consistent telemetry collection across microservices, APIs, and infrastructure
Logging, Metrics & Tracing
· Instrument applications using OpenTelemetry SDKs (Spring Boot, .NET, Python, JavaScript, as applicable)
· Support Kubernetes and container-based workloads (if applicable)
· Configure and optimize log pipelines, trace exporters, and metric collectors
· Integrate OpenTelemetry with ELK / OpenSearch / Azure Monitor / other backends
· Define SLIs, SLOs, and alerting strategies
· Knowledge of integrating GitHub and Jira metrics as DORA metrics into the observability platform
Operational Excellence
· Improve observability performance, cost efficiency, and data retention strategies
· Create dashboards, runbooks, and documentation
AI-based Anomaly Detection & Triage (Good to Have)
· Design or integrate AI/ML-based anomaly detection for logs, metrics, and traces
· Experience with AIOps capabilities for automated incident triage and insights
Required Technical Skills
Core Observability
· Strong hands-on experience with ELK Stack (Elasticsearch, Logstash, Kibana)
· Deep understanding of logs, metrics, traces, and distributed systems
· Practical experience with OpenTelemetry (Collectors, SDKs, exporters, receivers)
Cloud & Platforms
· Strong experience integrating Microsoft Azure with an observability platform
· Experience integrating Kubernetes / AKS with an observability platform is a strong plus
· Knowledge of Azure monitoring tools (Azure Monitor, Log Analytics, Application Insights)
Soft Skills
· Strong architecture and problem-solving skills
· Clear communication and documentation skills
· Hands-on mindset with an architect-level view
Good to Have / Preferred Skills
· Experience with AIOps / anomaly detection platforms
· Exposure to tools like Prometheus, Grafana, Jaeger, OpenSearch, Datadog, Dynatrace, New Relic (any)
· Experience with incident management, SRE practices, and reliability engineering
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.
Roles and Responsibilities:
● Design, develop, and maintain scalable web applications
● Build responsive and high-performance user interfaces
● Develop secure and efficient backend services and APIs
● Collaborate with product managers, designers, and QA teams to deliver features
● Write clean, maintainable, and testable code
● Participate in code reviews and contribute to engineering best practices
● Optimize applications for speed, performance, and scalability
● Troubleshoot and resolve production issues
● Contribute to architectural decisions and technical improvements.
Requirements:
● 3 to 5 years of experience in full-stack development
● Strong proficiency in frontend technologies such as React, Angular, or Vue
● Solid experience with backend technologies such as Node.js, .NET, Java, or Python
● Experience in building RESTful APIs and microservices
● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server
● Experience with version control systems like Git
● Familiarity with CI/CD pipelines
● Good understanding of cloud platforms such as AWS, Azure, or GCP
● Strong understanding of software design principles and data structures
● Experience with containerization tools such as Docker
● Knowledge of automated testing frameworks
● Experience working in Agile environments
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Company Description
Krish is committed to enabling customers to achieve their technological goals by delivering solutions that combine the right technology, people, and costs. Our approach emphasizes building long-term relationships while ensuring customer success through tailored solutions, leveraging the expertise and integrity of our consultants and robust delivery processes.
Location: Mumbai (Tech Data Office)
Experience: 5 to 8 years
Duration: 1-year contract (extendable)
Job Overview
We are seeking a highly skilled Sales Engineer (L2/L3) with in-depth expertise in Palo Alto Networks solutions. This role requires designing, implementing, and supporting cutting-edge network and security solutions to meet customers' technical and business needs. The ideal candidate will have strong experience in sales engineering and advanced skills in deploying, troubleshooting, and optimizing Palo Alto products and related technologies, with the ability to assist in implementation tasks when required.
Key Responsibilities
Solution Design & Technical Consultation:
- Collaborate with sales teams and customers to understand business and technical requirements.
- Design and propose solutions leveraging Palo Alto Networks technologies, including Next-Generation Firewalls (NGFW), Prisma Access, Panorama, SD-WAN, and Threat Prevention.
- Prepare detailed technical proposals, configurations, and proof-of-concept (POC) demonstrations tailored to client needs.
- Optimize existing customer deployments, ensuring alignment with industry best practices.
Customer Engagement & Implementation:
- Present and demonstrate Palo Alto solutions to stakeholders, addressing technical challenges and business objectives.
- Conduct customer and partner workshops, enablement sessions, and product training.
- Provide post-sales support to address implementation challenges and fine-tune deployments.
- Lead and assist with hands-on implementations of Palo Alto Networks products when required.
Support & Troubleshooting:
- Provide L2-L3 level troubleshooting and issue resolution for Palo Alto Networks products, including advanced debugging and system analysis.
- Assist with upgrades, migrations, and integration of Palo Alto solutions with other security and network infrastructures.
- Develop runbooks, workflows, and documentation for post-sales handover to operations teams.
Partner Enablement & Ecosystem Management:
- Collaborate with channel partners to build technical competency and promote adoption of Palo Alto solutions.
- Support certification readiness and compliance for both internal and partner teams.
- Participate in events, workshops, and seminars to showcase technical expertise.
Skills and Qualifications
Technical Skills:
- Advanced expertise in Palo Alto Networks technologies, including NGFW, Panorama, Prisma Access, SD-WAN, and GlobalProtect.
- Strong knowledge of networking protocols (e.g., TCP/IP, BGP, OSPF) and security frameworks (e.g., Zero Trust, SASE).
- Proficiency in troubleshooting and root-cause analysis for complex networking and security issues.
- Experience with security automation tools and integrations (e.g., API scripting, Ansible, Terraform).
Soft Skills:
- Excellent communication and presentation skills, with the ability to convey technical concepts to diverse audiences.
- Strong analytical and problem-solving skills, with a focus on delivering customer-centric solutions.
- Ability to manage competing priorities and maintain operational discipline under tight deadlines.
Experience:
- 5+ years of experience in sales engineering, solution architecture, or advanced technical support roles in the IT security domain.
- Hands-on experience in designing and deploying large-scale Palo Alto Networks solutions in enterprise environments.
Education and Certifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications such as PCNSA, PCNSE, or equivalent vendor certifications (e.g., CCNP Security, NSE4) are highly preferred.
- Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
- Ability and experience to register and deploy ML/AI/GenAI models via Azure ML Studio.
- Working knowledge of deploying models in AKS clusters.
- Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
- Excellent Python skills: environment setup and dependency management, coding to best practices, and knowledge of automated code-review tools such as linters and Black.
- Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
- Experience in orchestrating machine learning pipelines using MLOps best practices.
- Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
- Experience in model monitoring (drift detection and performance monitoring).
- Fundamentals of data engineering.
- Docker-based deployment is good to have.
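The drift-detection bullet above can be sketched with one common heuristic, the Population Stability Index (PSI). The bucket count and threshold bands below are a widely used rule of thumb, not a standard, and the sample data is synthetic:

```python
import math

def psi(expected, actual, buckets=4):
    """Population Stability Index between two numeric score samples.

    Common interpretation: PSI < 0.1 ~ stable, 0.1-0.25 ~ moderate drift,
    > 0.25 ~ significant drift (rule of thumb, not a standard).
    """
    exp_sorted = sorted(expected)
    # Quantile-based bucket edges taken from the baseline (training) sample.
    edges = [exp_sorted[int(len(exp_sorted) * i / buckets)]
             for i in range(1, buckets)]

    def frac(sample):
        counts = [0] * buckets
        for x in sample:
            counts[sum(x >= e for e in edges)] += 1
        # Small epsilon avoids log(0) for empty buckets.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]          # uniform scores 0.00-0.99
same = list(baseline)
shifted = [min(x + 0.4, 0.99) for x in baseline]  # scores drifted upward
print(round(psi(baseline, same), 4))   # identical data: PSI is 0
print(psi(baseline, shifted) > 0.25)   # drifted data: flags drift
```

In an Azure ML setup this kind of statistic would typically run inside a scheduled monitoring pipeline and raise an alert when the threshold is crossed.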
Hiring for Azure DevOps Engineer
Experience: 5 to 9 years
Education: BE/B.Tech
Work Location: Noida (Work from Office)
Notice Period: Immediate to 15 days
Skills :
5+ years of hands-on experience in Azure DevOps, cloud deployment, and security.
Strong expertise in designing and implementing CI/CD pipelines using Azure DevOps or similar tools (Jenkins, GitLab CI/CD).
Experience with Azure services such as Azure App Service, AKS, Azure SQL, Azure AD, and networking components (VNet, NSG).
Solid understanding of Infrastructure as Code (IaC) using Terraform, ARM templates, or Azure Bicep.
Experience with containerization technologies like Docker and Kubernetes (AKS preferred).
Knowledge of DevSecOps practices and integrating security into CI/CD pipelines.
Proficiency in scripting and automation using PowerShell, Bash, or Python.
Familiarity with monitoring and logging tools such as Azure Monitor, Log Analytics, Application Insights, Prometheus, Grafana, or ELK stack.
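The CI/CD and AKS skills above can be sketched as a minimal Azure Pipelines definition. All names (service connections, registry, repository, manifest path) are placeholders, not values from this posting; `Docker@2` and `KubernetesManifest@1` are standard Azure DevOps tasks:

```yaml
# Illustrative azure-pipelines.yml: build a container image, deploy to AKS.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildImage
        steps:
          - task: Docker@2
            inputs:
              containerRegistry: my-acr-connection   # placeholder
              repository: my-app                     # placeholder
              command: buildAndPush
              tags: $(Build.BuildId)

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployToAKS
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@1
                  inputs:
                    action: deploy
                    manifests: k8s/deployment.yaml   # placeholder path
```

A real pipeline would add test, security-scanning (DevSecOps), and IaC stages, but the stage/job/task structure is the same.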
About NonStop io Technologies: See the company description above.
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
15+ years in enterprise product development
5+ years in Director/VP-level product leadership
Proven AI/ML product commercialization experience
Expertise in Industry 4.0 (IoT, predictive maintenance, digital twins)
Hands-on experience with AWS/Azure/GCP cloud platforms
Strong architecture experience in microservices ecosystems
Experience implementing MLOps and DevSecOps frameworks
Experience integrating MES, ERP, SCADA, automation platforms
SaaS business model and enterprise software commercialization expertise
Experience leading large engineering/product teams
Responsibilities:
• End-to-end design, development, and deployment of enterprise-grade AI solutions leveraging Azure AI, Google Vertex AI, or comparable cloud platforms.
• Architect and implement advanced AI systems, including agentic workflows, LLM integrations, MCP-based solutions, RAG pipelines, and scalable microservices.
• Oversee the development of Python-based applications, RESTful APIs, data processing pipelines, and complex system integrations.
• Define and uphold engineering best practices, including CI/CD automation, testing frameworks, model evaluation procedures, observability, and operational monitoring.
• Partner closely with product owners and business stakeholders to translate requirements into actionable technical designs, delivery plans, and execution roadmaps.
• Provide hands-on technical leadership, conducting code reviews, offering architectural guidance, and ensuring adherence to security, governance, and compliance standards.
• Communicate technical decisions, delivery risks, and mitigation strategies effectively to senior leadership and cross-functional teams.
Job Overview
As a Software Engineer, you will play a crucial role in leading our development efforts, ensuring best practices, and supporting the team on a day-to-day basis. This role requires deep technical knowledge, a proactive mindset, and a commitment to guiding the team in tackling challenging issues. You will work primarily with .NET Core on the backend while also keeping a strategic focus on product security, DevOps, quality assurance, and cloud infrastructure.
Responsibilities
• Forward-Looking Product Development:
o Collaborate with product and engineering teams to align on the technical direction, scalability, and maintainability of the product.
o Proactively consider and address security, performance, and scalability requirements during development.
• Cloud and Infrastructure: Leverage Microsoft Azure for cloud infrastructure, ensuring efficient and secure use of cloud services. Work closely with DevOps to improve deployment processes.
• DevOps & CI/CD: Support the setup and maintenance of CI/CD pipelines, enabling smooth and frequent deployments. Collaborate with the DevOps team to automate and optimize the development process.
• Technical Mentorship: Provide technical guidance and support to team members, helping them solve day-to-day challenges, enhance code quality, and adopt best practices.
• Quality Assurance: Collaborate with QA to ensure thorough testing, automated testing coverage, and overall product quality.
• Product Security: Actively implement and promote security best practices to protect data and ensure compliance with industry standards.
• Documentation & Code Reviews: Promote good coding practices, conduct code reviews, and maintain clear documentation.
Qualifications
• Technical Skills:
o Strong experience with .NET Core for backend development and RESTful API design.
o Hands-on experience with Microsoft Azure services, including but not limited to VMs, databases, application gateways, and user management.
o Familiarity with DevOps practices and tools, particularly CI/CD pipeline configuration and deployment automation.
o Strong knowledge of product security best practices and experience implementing secure coding practices.
o Familiarity with QA processes and automated testing tools is a plus.
o Ability to support team members in solving technical challenges and sharing knowledge effectively.
Preferred Qualifications
- 3+ years of experience in software development, with a strong focus on .NET Core
- Previous experience as a Staff SE, tech lead, or in a similar hands-on tech role.
- Strong problem-solving skills and ability to work in a fast-paced, startup environment.
What We Offer
- Opportunity to lead and grow within a dynamic and ambitious team.
- Challenging projects that focus on innovation and cutting-edge technology.
- Collaborative work environment with a focus on learning, mentorship, and growth.
- Competitive compensation, benefits, and stock options.
If you're a proactive, forward-thinking technology leader with a passion for .NET Core and you're ready to make an impact, we'd love to meet you!
JOB DETAILS:
* Job Title: Java Lead-Java, MS, Kafka-TVM - Java (Core & Enterprise), Spring/Micronaut, Kafka
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum, Thiruvananthapuram
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
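The event-driven responsibilities above can be sketched with an in-memory stand-in for a Kafka topic partition. No Kafka client API is used here; the class and function names are hypothetical, and only the at-least-once, idempotent-consumer pattern carries over to a real consumer with manual offset commits:

```python
from collections import defaultdict

class InMemoryLog:
    """Stand-in for a Kafka topic partition: an append-only record log."""
    def __init__(self):
        self.records = []

    def append(self, value):
        self.records.append(value)

def consume(log, committed_offset, handler, seen_keys):
    """At-least-once consumer loop with manual offset commits.

    Re-delivery is possible after a crash, so the handler is made
    idempotent by tracking processed keys -- the same pattern applies
    to a real Kafka consumer with auto-commit disabled.
    """
    offset = committed_offset
    while offset < len(log.records):
        key, value = log.records[offset]
        if key not in seen_keys:   # idempotency guard against duplicates
            handler(key, value)
            seen_keys.add(key)
        offset += 1                # "commit" only after processing
    return offset

log = InMemoryLog()
for evt in [("order-1", 100), ("order-2", 250), ("order-1", 100)]:
    log.append(evt)

totals = defaultdict(int)
committed = consume(log, 0,
                    lambda k, v: totals.__setitem__(k, totals[k] + v),
                    set())
print(committed, dict(totals))  # the duplicate order-1 is processed once
```

The design choice worth noting is committing the offset only after the handler succeeds: a crash mid-batch causes re-delivery rather than data loss, which is why the handler must be idempotent.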
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
Skills: Java, Spring Boot, Kafka, CI/CD, PostgreSQL, GitLab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
Required Skills:
- Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
The Sr. Consultant, Microsoft AI Solutions drives end-to-end delivery success for assigned Microsoft Copilot and AI solution components. This includes leading solution design activities within engagement scope, aligning stakeholders on requirements and implementation approach, developing delivery-ready artifacts, and executing configuration, integration, and deployment tasks. The role ensures solutions meet security, governance, and operational standards and supports go-live readiness, stabilization, and handoff to operations teams.
This role will also support presales and technical deep-dive sessions (on an as-needed basis) with customers prior to the initiation of delivery engagements, focused on solution feasibility, technical validation, and delivery readiness.
SUMMARY OF ESSENTIAL JOB FUNCTIONS:
Solution Envisioning and Business Alignment
- Lead and support AI solution envisioning activities with customers, including workshops, demonstrations, and technical deep-dive sessions.
- Translate business scenarios and use cases into conceptual solution designs aligned to Microsoft AI products and services (Copilot, Azure, etc.).
- Support technical feasibility and delivery readiness assessments prior to delivery initiation, validating platform fit, approach, and constraints.
- Facilitate alignment with customer stakeholders on solution scope, requirements, architecture approach, and success criteria.
- Develop conceptual designs and delivery-aligned solution definitions to guide successful implementation.
Solution Delivery and Execution
- Lead solution design activities within delivery engagements, translating approved concepts into functional and non-functional requirements.
- Configure, build, and implement Microsoft solutions through Microsoft Copilot, Copilot Studio, Power Platform, and supporting Azure services.
- Integrate Copilot solutions with Microsoft 365, Teams, Microsoft Foundry, and existing enterprise systems and workflows.
- Implement identity, security, governance, and access controls aligned to customer and organizational standards.
- Execute testing, validation, and troubleshooting to ensure solution quality and readiness for production use.
- Support deployment, go-live, and stabilization activities to ensure successful adoption.
Communication and Collaboration
- Ability to serve as the primary delivery lead for assigned solution components or workstreams within an engagement.
- Partner with solution architects and project managers to plan, execute, and track delivery milestones.
- Collaborate with customer technical and business teams to drive alignment, decision-making, and adoption throughout the engagement.
- Communicate delivery status, risks, and dependencies to internal and customer stakeholders.
- Support limited presales and technical deep-dive sessions (on an as-needed basis) to enable solution feasibility, technical validation, and delivery readiness.
Continuous Improvement and Delivery Excellence
- Develop and contribute to delivery artifacts including architecture workshop agendas, diagrams, configuration specifications, runbooks, deployment guides, and validation checklists.
- Capture, sanitize, and contribute reusable solution assets, patterns, and implementation guidance to internal repositories.
- Contribute feedback and lessons learned to improve delivery efficiency, consistency, and quality across similar engagements.
- Support initiatives focused on standardizing delivery approaches and accelerating future implementations.
- Stay current on Microsoft Copilot, Microsoft Foundry, and related platform updates to continuously improve delivery practices.
REQUIRED SKILLS AND EXPERIENCE:
· Bachelor’s degree required; advanced degree or relevant certifications preferred.
· 8+ years of experience in consulting, enterprise architecture, or digital transformation with client-facing responsibilities.
· Experience advising senior leaders on AI, Copilot, or cloud modernization initiatives.
· Strong hands-on expertise with Microsoft Copilot and Copilot Studio.
· Experience designing AI-enabled solutions / automations, or integrating with existing business processes leveraging Microsoft 365, Teams, and Azure services.
· Strong understanding of Identity and Access Management design concepts for Microsoft Copilot and AI agents
· Familiar with agent design patterns & data access patterns.
· Familiar with Azure OpenAI, Azure AI Search, Logic Apps, Azure Functions, and integration architectures. Hands-on experience in integrating with one or more of these services through Copilot Studio is preferred.
· Experience in leading and contributing to presales efforts including readiness assessments, envisioning workshops, and proposal development.
· Microsoft certifications in Microsoft Copilot, Power Platform, Azure, and Microsoft AI are preferred.
· Highly organized, detail-oriented, excellent time management skills, and able to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.
· Ability to approach customer requests with a proactive and consultative manner; listen to and understand user requests and needs, and effectively deliver.
· Strong influencing skills to get things done and inspire business transformation.
· Excellent oral, written communication, and presentation skills with an ability to present AI-related concepts to C-Level Executives and non-technical audiences.
· Conflict negotiation and critical thinking skills and agility.
· Ability to travel when needed.
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position sits within a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
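The retrieval step behind the RAG architectures mentioned above can be illustrated with a toy in-memory retriever. This is a minimal sketch: a real stack would use an embedding model (e.g. via the OpenAI API) and a vector database such as Pinecone, FAISS, or Weaviate rather than the bag-of-words similarity used here, and the sample documents are invented for illustration.

```python
from collections import Counter
import math

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector.
    A production RAG pipeline would call a real embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query --
    the 'R' in Retrieval-Augmented Generation. The retrieved text
    would then be injected into the LLM prompt as grounding context."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical healthcare snippets standing in for an indexed corpus.
docs = [
    "discharge summary for cardiac patient",
    "clinical note documenting hypertension follow-up",
    "invoice for medical billing services",
]
top = retrieve("summarize the cardiac discharge note", docs, k=1)
```

Swapping `embed` for a real model and `retrieve` for a vector-database query is the only structural change needed to scale this shape up.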
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
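The migration flow described above can be sketched as a small Python helper that assembles the `git p4` clone and Git LFS commands. The depot path, repository directory, and file patterns below are illustrative assumptions, not values from the posting.

```python
import subprocess

GITHUB_MAX_FILE_MB = 100  # GitHub rejects individual files over 100 MB

def build_migration_commands(depot_path, repo_dir, lfs_patterns):
    """Assemble the commands for a Perforce -> Git migration.

    depot_path   -- Perforce depot spec, e.g. "//depot/project" (hypothetical)
    repo_dir     -- local directory for the new Git repository
    lfs_patterns -- glob patterns of large files to move into Git LFS
    """
    commands = [
        # Clone history from Perforce; "@all" imports every changelist.
        ["git", "p4", "clone", f"{depot_path}@all", repo_dir],
        # Enable Git LFS so large binaries stay under GitHub's limit.
        ["git", "lfs", "install"],
    ]
    # Rewrite history so matched files are stored as LFS pointers.
    commands.append(
        ["git", "lfs", "migrate", "import"]
        + [f"--include={p}" for p in lfs_patterns]
        + ["--everything"]
    )
    return commands

def run_migration(depot_path, repo_dir, lfs_patterns):
    """Execute each step, stopping on the first failure."""
    for cmd in build_migration_commands(depot_path, repo_dir, lfs_patterns):
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for cmd in build_migration_commands("//depot/project", "project-git", ["*.bin"]):
        print(" ".join(cmd))
```

Separating command construction from execution keeps the migration plan reviewable before anything touches the depot, which matters when history rewriting is involved.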
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs and performance metrics to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Mandatory: Azure, Terraform, GitLab CI, Splunk
Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, Gitlab Ci
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
We are seeking a seasoned Senior Developer to join our team. The ideal candidate is a C# expert who doesn't just write code but understands how to orchestrate complex business processes using the Microsoft ecosystem. You will be responsible for building scalable backend services, optimizing SQL databases, and leveraging Azure and Power Automate to deliver end-to-end automation solutions.
Responsibilities:
- Design and maintain robust, high-performance applications using C# and .NET Core.
- Write complex SQL queries, stored procedures, and optimize database schemas for performance and security.
- Deploy and manage cloud resources within Azure (App Services, Functions, Logic Apps).
- Design enterprise-level automated workflows using Microsoft Power Automate, including custom connectors to bridge the gap between Power Platform and legacy APIs.
- Provide technical mentorship, conduct code reviews, and ensure best practices in the Software Development Life Cycle (SDLC).
Technical Skills:
- C# / .NET: 8+ years of deep expertise in ASP.NET MVC, Web API, and Entity Framework.
- Database: Advanced proficiency in SQL Server
- Azure: Hands-on experience with Azure cloud architecture and integration services.
- Power Automate: Proven experience building complex flows, handling error logic, and integrating Power Automate with custom-coded environments.
- DevOps: Familiarity with CI/CD pipelines (Azure DevOps or GitHub Actions).
Company Description: Bits in Glass - India
- Industry Leader:
- Bits in Glass (BIG) has been in business for more than 20 years. In 2021, Bits in Glass joined hands with Crochet Technologies, forming a larger organization under the Bits In Glass brand to better serve customers across the globe.
- Offices across three locations in India: Pune, Hyderabad & Chandigarh.
- Specialized Pega partner since 2017, delivering Pega solutions with deep industry expertise and experience.
- Proudly ranked among the top 30 Pega partners, Bits In Glass has been one of the very few sponsors of the annual PegaWorld event.
- Elite Appian partner since 2008, delivering Appian solutions with deep industry expertise and experience.
- Operating in the United States, Canada, United Kingdom, and India.
- Dedicated global Pega CoE to support our customers and internal dev teams.
- Specializes in Databricks, AI, and cloud-based data engineering to help companies transition from manual to automated workflows.
- Employee Benefits:
- Career Growth: Opportunities for career advancement and professional development.
- Challenging Projects: Work on innovative, cutting-edge projects that make a global impact.
- Global Exposure: Collaborate with international teams and clients to broaden your professional network.
- Flexible Work Arrangements: Support for work-life balance through flexible working conditions.
- Comprehensive Benefits: Competitive compensation packages and comprehensive benefits including health insurance, and paid time off.
- Learning Opportunities: Great opportunity to upskill and work on new technologies such as AI-enabled Pega solutions, data engineering, integration, and cloud migration.
- Company Culture:
- Collaborative Environment: Emphasizes teamwork, innovation, and knowledge sharing.
- Inclusive Workplace: Values diversity and fosters an inclusive environment where all ideas are respected.
- Continuous Learning: Encourages professional development through ongoing learning opportunities and certifications.
- Core Values:
- Integrity: Commitment to ethical practices and transparency in all business dealings.
- Excellence: Strive for the highest standards in everything we do.
- Client-Centric Approach: Focus on delivering the best solutions tailored to client needs.
● Participate in the customer’s system design meetings and collect the functional/technical requirements.
● Build and deploy data pipelines for consumption by various teams as needed.
● Skillful in ETL processes and tools.
● Clear understanding of and experience with Python, PySpark, Hive, Airflow, Hadoop, and RDBMS architecture.
● Experience in writing Python programs and SQL queries.
● Experience in SQL query tuning.
● Working knowledge of Apache Kafka in building streaming pipelines.
● Strong knowledge of the Azure data engineering ecosystem.
● Experienced in shell scripting (Unix/Linux).
● Build and maintain data pipelines in Spark/PySpark with SQL and Python.
● Knowledge of Azure technologies is required.
● Good to have knowledge of Kubernetes and CI/CD concepts.
● Suggest and implement best practices in data integration.
● Guide the QA team in defining system integration tests as needed.
● Split the planned deliverables into tasks and assign them to the team.
● Maintain and deploy the ETL code and follow the Agile methodology.
● Work on optimization wherever applicable.
● Good oral, written, and presentation skills.
Preferred Qualifications:
● Degree in Computer Science, IT, or a similar field; a Master’s is a plus.
● Minimum 4+ years working with PySpark, SQL, and Python.
● Minimum 3+ years working with Databricks, Data Factory, and Data Lake.
Role Overview
We are hiring on behalf of Humming Apps Technologies LLP, which is seeking a Senior Threat Modeler to join its security team and act as a strategic bridge between architecture and defense. This role focuses on proactively identifying vulnerabilities during the design phase to ensure applications, APIs, and cloud infrastructures are secure by design.
The position requires thinking from an attacker’s perspective to analyze trust boundaries, map attack paths, and influence the overall security posture of next-generation AI-driven and cloud-native systems. The goal is not only to detect issues but to prevent risks before implementation.
Key Responsibilities
Architectural Analysis
• Lead deep-dive threat modeling sessions across applications, APIs, microservices, and cloud-native environments
• Perform detailed reviews of system architecture, data flows, and trust boundaries
Threat Modeling Frameworks & Methodologies
• Apply industry-standard frameworks including STRIDE, PASTA, ATLAS, and MITRE ATT&CK
• Identify sophisticated attack vectors and model realistic threat scenarios
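The STRIDE framework named above is often applied per element of a data-flow diagram. The sketch below encodes the commonly cited STRIDE-per-element heuristic; the mapping follows general threat-modeling guidance, not anything specific to this role, and real sessions refine it per system.

```python
# STRIDE threat categories.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Classic STRIDE-per-element heuristic: which categories typically
# apply to each data-flow-diagram (DFD) element type.
APPLICABLE = {
    "external_entity": "SR",
    "process": "STRIDE",   # processes are exposed to all six categories
    "data_store": "TRID",
    "data_flow": "TID",
}

def threats_for(element_type):
    """Return the STRIDE threat names to consider for a DFD element type."""
    return [STRIDE[c] for c in APPLICABLE[element_type]]
```

Enumerating each element of the diagram through a table like this gives a session its checklist; the analyst's work is then judging which of the candidate threats are realistic across the system's trust boundaries.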
Security Design & Risk Mitigation
• Detect weaknesses during the design stage
• Provide actionable and prioritized mitigation recommendations
• Strengthen security posture through secure-by-design principles
Collaborative Security Integration
• Work closely with architects and developers during design and build phases
• Embed security practices directly into the SDLC
• Ensure security is incorporated early rather than retrofitted
Communication & Enablement
• Facilitate threat modeling demonstrations and walkthroughs
• Present findings and risk assessments to stakeholders
• Translate complex technical risks into clear, business-relevant insights
• Educate teams on secure design practices and emerging threats
Required Qualifications
Experience
• 5–10 years of dedicated experience in threat modeling, product security, or application security
Technical Expertise
• Strong understanding of software architecture and distributed systems
• Experience designing and securing RESTful APIs
• Hands-on knowledge of cloud platforms such as AWS, Azure, or GCP
Modern Threat Knowledge
• Expertise in current attack vectors including OWASP Top 10
• Understanding of API-specific threats
• Awareness of emerging risks in AI/LLM-based applications
Tools & Practices
• Practical experience with threat modeling tools
• Proficiency in technical diagramming and system visualization
Communication
• Excellent written and verbal English communication skills
• Ability to collaborate across engineering teams and stakeholders in different time zones
Preferred Qualifications
• Experience in consulting or client-facing professional services roles
• Industry certifications such as CISSP, CSSLP, OSCP, or equivalent
🚀 Hiring: Data Engineer ( Azure )
⭐ Experience: 5+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bengaluru
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence
We are looking for a Databricks Data Engineer to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.
🔹 Key Responsibilities
- Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)
- Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing
- Develop Structured Streaming pipelines with watermarking, late data handling & restart safety
- Implement declarative pipelines using Lakeflow
- Design idempotent, replayable pipelines with safe backfills
- Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)
- Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications
- Package and deploy using Databricks Repos & Asset Bundles (CI/CD)
- Ensure governance using Unity Catalog and embedded data quality checks
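The Auto Loader ingestion and Delta Lake maintenance responsibilities above can be sketched as plain option/SQL builders, so the intent is visible without a cluster. This assumes a Databricks runtime; the table names and paths are illustrative.

```python
def autoloader_options(schema_location, fmt="json"):
    """Spark readStream options for Databricks Auto Loader (cloudFiles).
    schema_location stores the inferred schema so evolution can be tracked."""
    return {
        "cloudFiles.format": fmt,
        "cloudFiles.schemaLocation": schema_location,       # schema evolution state
        "cloudFiles.schemaEvolutionMode": "addNewColumns",  # tolerate new fields
    }

def maintenance_sql(table, zorder_cols, retain_hours=168):
    """Delta Lake maintenance: compact small files, co-locate frequently
    filtered columns, and purge stale files (default retention 7 days)."""
    return [
        f"OPTIMIZE {table} ZORDER BY ({', '.join(zorder_cols)})",
        f"VACUUM {table} RETAIN {retain_hours} HOURS",
    ]

# On a Databricks cluster these would be applied roughly like:
#   (spark.readStream.format("cloudFiles")
#        .options(**autoloader_options("/mnt/schemas/orders"))
#        .load("/mnt/raw/orders")
#        .writeStream.option("checkpointLocation", "/mnt/checkpoints/orders")
#        .toTable("bronze.orders"))
```

The checkpoint location in the commented usage is what makes the stream restart-safe, and the schema location is what lets Auto Loader evolve the schema incrementally rather than failing on new columns.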
✅ Mandatory Skills (Must Have)
- Databricks & Delta Lake (Advanced Optimization & Performance Tuning)
- Structured Streaming & Autoloader Implementation
- Databricks SQL (DBSQL) & Data Modeling for Analytics
- 3+ years hands-on Azure cloud & automation experience.
- Experience managing high-availability enterprise systems.
- Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
- Kubernetes (AKS) & Docker.
- Networking (VPN, DNS, routing, firewalls, NSGs).
- Infra-as-Code (Terraform / Bicep optional).
- Monitoring tools: Azure Monitor, Grafana, Prometheus.
- CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
- Security: Key Vault, certificates, encryption, RBAC.
- Understanding of PostgreSQL/PostGIS networking.
- Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
- Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
- Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
- Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
- Ensure high uptime, DR planning, backup and failover strategies.
- Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
- Enforce security, RBAC, compliance, and audit standards across environments.
- Good to have knowledge/experience in Linux administration (Ubuntu/Debian).
Job Details
- Job Title: Specialist I - Software Engineering-.Net Fullstack Lead-TVM
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-9 years
- Employment Type: Full Time
- Job Location: Trivandrum, Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
· Minimum 5 years of experience as a senior/lead .NET developer, covering the full development lifecycle, including post-live support.
· Significant experience delivering software using Agile iterative delivery methodologies.
· JIRA knowledge preferred.
· Excellent ability to understand requirement/story scope and visualise technical elements required for application solutions.
· Ability to clearly articulate complex problems and solutions in terms that others can understand.
· Extensive experience with .NET backend API development.
· Significant experience of pipeline design, build and enhancement to support release cadence targets, including Infrastructure as Code (preferably Terraform).
· Strong understanding of HTML and CSS, including cross-browser compatibility and performance.
· Excellent knowledge of unit and integration testing techniques.
· Azure knowledge (Web/Container Apps, Azure Functions, SQL Server).
· Kubernetes / Docker knowledge.
· Knowledge of JavaScript UI frameworks, ideally Vue.
· Extensive experience with source control (preferably Git).
· Strong understanding of RESTful services (JSON) and API Design.
· Broad knowledge of Cloud infrastructure (PaaS, DBaaS).
· Experience of mentoring and coaching engineers operating within a co-located environment.
Skills: .Net Fullstack, Azure Cloudformation, Javascript, Angular
Must-Haves:
.Net (5+ years), Agile methodologies, RESTful API design, Azure (Web/Container Apps, Functions, SQL Server), Git source control
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
F2F Weekend Interview on 14th Feb 2026
About This Opportunity
We're seeking a Solution Architect who thrives on autonomy and impact. If you're a tech-savvy innovator ready to architect cloud solutions at scale while working from anywhere in the world, this is your role.
What You'll Lead
Core Responsibilities
- Design scalable cloud architectures that serve as the technical foundation for enterprise applications
- Lead major projects end-to-end, from concept through deployment, owning architectural decisions and outcomes
- Architect high-performance APIs and microservices using modern tech stacks (Node.js, TypeScript, Java, GoLang)
- Integrate cutting-edge AI capabilities into solutions—leveraging Azure AI, OpenAI, and similar platforms to drive competitive advantage
- Own cloud strategy & optimization on Azure, ensuring security, scalability, and cost-efficiency
- Mentor technical teams and guide cross-functional stakeholders toward shared architectural vision
What We're Looking For
- 8+ years in solution architecture, cloud engineering, or equivalent leadership roles
- Technical depth across Node.js, TypeScript, Java, GoLang, and Generative AI frameworks
- Cloud mastery with Azure, Kubernetes, containerization, and CI/CD pipelines
- Leadership mindset: You drive decisions, mentor peers, and own project outcomes
- Communication excellence: You translate complex technical concepts for both engineers and business stakeholders
- Entrepreneurial spirit: You work best with autonomy and take ownership of major initiatives
Why Join Us
✅ 100% Remote – Work from home, a coffee shop, or anywhere globally
✅ Lead Significant Projects – Take ownership of architectural decisions that impact our platform
✅ Tech-Forward Culture – We invest in latest cloud, AI, and DevOps technologies
✅ Founded by Architects – Leadership team with 25+ years of cloud expertise—mentorship built in
✅ Rapid Innovation – We ship fast. You'll see your designs live in production within weeks
✅ Continuous Learning – Access to tools, courses, and conferences to sharpen your craft
Ready to Shape the Future?
If you're energized by architectural challenges and want the freedom to work from anywhere, we'd love to connect.
🌐 Learn more: prismcloudinc.com
Job Details
- Job Title: SDE-3
- Industry: Technology
- Domain - Information technology (IT)
- Experience Required: 5-8 years
- Employment Type: Full Time
- Job Location: Bengaluru
- CTC Range: Best in Industry
Role & Responsibilities
As a Software Development Engineer - 3, Backend Engineer at company, you will play a critical role in architecting, designing, and delivering robust backend systems that power our platform. You will lead by example, driving technical excellence and mentoring peers while solving complex engineering problems. This position offers the opportunity to work with a highly motivated team in a fast-paced and innovative environment.
Key Responsibilities:
Technical Leadership-
- Design and develop highly scalable, fault-tolerant, and maintainable backend systems using Java and related frameworks.
- Provide technical guidance and mentorship to junior developers, fostering a culture of learning and growth.
- Review code and ensure adherence to best practices, coding standards, and security guidelines.
System Architecture and Design-
- Collaborate with cross-functional teams, including product managers and frontend engineers, to translate business requirements into efficient technical solutions.
- Own the architecture of core modules and contribute to overall platform scalability and reliability.
- Advocate for and implement microservices architecture, ensuring modularity and reusability.
Problem Solving and Optimization-
- Analyze and resolve complex system issues, ensuring high availability and performance of the platform.
- Optimize database queries and design scalable data storage solutions.
- Implement robust logging, monitoring, and alerting systems to proactively identify and mitigate issues.
Innovation and Continuous Improvement-
- Stay updated on emerging backend technologies and incorporate relevant advancements into our systems.
- Identify and drive initiatives to improve codebase quality, deployment processes, and team productivity.
- Contribute to and advocate for a DevOps culture, supporting CI/CD pipelines and automated testing.
Collaboration and Communication-
- Act as a liaison between the backend team and other technical and non-technical teams, ensuring smooth communication and alignment.
- Document system designs, APIs, and workflows to maintain clarity and knowledge transfer across the team.
Ideal Candidate
- Strong Java Backend Engineer.
- Must have 5+ years of backend development with strong focus on Java (Spring / Spring Boot)
- Must have been SDE-2 for at least 2.5 years
- Hands-on experience with RESTful APIs and microservices architecture
- Strong understanding of distributed systems, multithreading, and async programming
- Experience with relational and NoSQL databases
- Exposure to Kafka/RabbitMQ and Redis/Memcached
- Experience with AWS / GCP / Azure, Docker, and Kubernetes
- Familiar with CI/CD pipelines and modern DevOps practices
- Experience in product companies (B2B SaaS preferred)
- Has stayed at least 2 years with each previous company
- Education: B.Tech in Computer Science from a Tier 1 or Tier 2 college
We’re building a suite of SaaS products for WordPress professionals—each with a clear product-market fit and the potential to become a $100M+ business. As we grow, we need engineers who go beyond feature delivery. We’re looking for someone who wants to build enduring systems, make practical decisions, and help us ship great products with high velocity.
What You’ll Do
- Work with product, design, and support teams to turn real customer problems into thoughtful, scalable solutions.
- Design and build robust backend systems, services, and APIs that prioritize long-term maintainability and performance.
- Use AI-assisted tooling (where appropriate) to explore solution trees, accelerate development, and reduce toil.
- Improve velocity across the team by building reusable tools, abstractions, and internal workflows—not just shipping isolated features.
- Dig into problems deeply—whether it's debugging a performance issue, streamlining a process, or questioning a product assumption.
- Document your decisions clearly and communicate trade-offs with both technical and non-technical stakeholders.
What Makes You a Strong Fit
- You’ve built and maintained real-world software systems, ideally at meaningful scale or complexity.
- You think in systems and second-order effects—not just in ticket-by-ticket outputs.
- You prefer well-reasoned defaults over overengineering.
- You take ownership—not just of code, but of the outcomes it enables.
- You work cleanly, write clear code, and make life easier for those who come after you.
- You’re curious about the why, not just the what—and you’re comfortable contributing to product discussions.
Bonus if You Have Experience With
- Building tools or workflows that accelerate other developers.
- Working with AI coding tools and integrating them meaningfully into your workflow.
- Building for SaaS products, especially those with large user bases or self-serve motions.
- Working in small, fast-moving product teams with a high bar for ownership.
Why Join Us
- A small team that values craftsmanship, curiosity, and momentum.
- A product-driven culture where engineering decisions are informed by customer outcomes.
- A chance to work on multiple zero-to-one opportunities with strong PMF.
- No vanity perks—just meaningful work with people who care.
Job Title:
Senior Full Stack Developer
Experience: 5 to 7 years (minimum 5 years of full-stack development experience mandatory)
Location: Bangalore (Onsite)
About ProductNova:
ProductNova is a fast-growing product development organization that partners with ambitious companies to build, modernize, and scale high-impact digital products. Our teams of product leaders, engineers, AI specialists, and growth experts work at the intersection of strategy, technology, and execution to help organizations create differentiated product portfolios and accelerate business outcomes.
Founded in early 2023, ProductNova has successfully designed, built, and launched 20+ large-scale, AI-powered products and platforms across industries. We specialize in solving complex business problems through thoughtful product design, robust engineering, and responsible use of AI.
What We Do
Product Development
We design and build user-centric, scalable, AI-native B2B SaaS products that are deeply aligned with business goals and long-term value creation.
Our end-to-end product development approach covers the full lifecycle:
● Product discovery and problem definition
● User research and product strategy
● Experience design and rapid prototyping
● AI-enabled engineering, testing, and platform architecture
● Product launch, adoption, and continuous improvement
From early concepts to market-ready solutions, we focus on building products that are resilient, scalable, and ready for real-world adoption. Post-launch, we work closely with customers to iterate based on user feedback and expand products across new use cases, customer segments, and markets.
Growth & Scale
For early-stage companies and startups, we act as product partners—shaping ideas into viable products, identifying target customers, achieving product-market fit, and supporting go-to-market execution, iteration, and scale.
For established organizations, we help unlock the next phase of growth by identifying opportunities to modernize and scale existing products, enter new geographies, and build entirely new product lines. Our teams enable innovation through AI, platform re-architecture, and portfolio expansion to support sustained business growth.
Role Overview
We are looking for a Senior Full Stack Developer with strong expertise in frontend development using React JS, backend microservices architecture in C#/Python, and hands-on experience with AI-enabled development tools. The ideal candidate should be comfortable working in an onsite environment and collaborating closely with cross-functional teams to deliver scalable, high-quality applications.
Key Responsibilities:
• Develop and maintain responsive, high-performance frontend applications using React JS
• Design, develop, and maintain microservices-based backend systems using C# and Python
• Build and maintain the data layer and databases using MS SQL, Cosmos DB, and PostgreSQL
• Leverage AI-assisted development tools (Cursor / GitHub Copilot) to improve coding efficiency and quality
• Collaborate with product managers, designers, and backend teams to deliver end-to-end solutions
• Write clean, reusable, and well-documented code following best practices
• Participate in code reviews, debugging, and performance optimization
• Ensure application security, scalability, and reliability
Mandatory Technical Skills:
• Strong hands-on experience in React JS (Frontend Coding) – 3+ yrs
• Solid experience in Microservices Architecture C#, Python – 3+ yrs
• Experience building Data Layer and Databases using MS SQL – 2+ yrs
• Practical exposure to AI-enabled development using Cursor or GitHub Copilot – 1+ yr
• Good understanding of REST APIs and system integration
• Experience with version control systems (Git) and Azure DevOps (ADO)
Good to Have:
• Experience with cloud platforms (Azure)
• Knowledge of containerization tools like Docker and Kubernetes
• Exposure to CI/CD pipelines
• Understanding of Agile/Scrum methodologies
Why Join ProductNova
● Work on real-world, high-impact products used at scale
● Collaborate with experienced product, engineering, and AI leaders
● Solve complex problems with ownership and autonomy
● Build AI-first systems, not experimental prototypes
● Grow rapidly in a culture that values clarity, execution, and learning
If you are passionate about building meaningful products, solving hard problems, and shaping the future of AI-driven software, ProductNova offers the environment and challenges to grow your career.
ROLE: AI/ML Senior Developer
Exp: 5 to 8 Years
Location: Bangalore (Onsite)
Role Overview:
We are seeking an experienced AI / ML Senior Developer with strong hands-on expertise in large language models (LLMs) and AI-driven application development. The ideal candidate will have practical experience working with GPT and Anthropic models, building and training B2B products powered by AI, and leveraging AI-assisted development tools to deliver scalable and intelligent solutions.
Key Responsibilities:
1. Model Analysis & Optimization
Analyze, customize, and optimize GPT and Anthropic-based models to ensure reliability, scalability, and performance for real-world business use cases.
2. AI Product Design & Development
Design and build AI-powered products, including model training, fine-tuning, evaluation, and performance optimization across development lifecycles.
3. Prompt Engineering & Response Quality
Develop and refine prompt engineering strategies to improve model accuracy, consistency, relevance, and contextual understanding.
4. AI Service Integration
Build, integrate, and deploy AI services into applications using modern development practices, APIs, and scalable architectures.
5. AI-Assisted Development Productivity
Leverage AI-enabled coding tools such as Cursor and GitHub Copilot to accelerate development, improve code quality, and enhance efficiency.
6. Cross-Functional Collaboration
Work closely with product, business, and engineering teams to translate business requirements into effective AI-driven solutions.
7. Model Monitoring & Continuous Improvement
Monitor model performance, analyze outputs, and iteratively improve accuracy, safety, and overall system effectiveness.
Qualifications:
1. Hands-on experience analyzing, developing, fine-tuning, and optimizing GPT and Anthropic-based large language models.
2. Strong expertise in prompt design, experimentation, and optimization to enhance response accuracy and reliability.
3. Proven experience building, training, and deploying chatbots or conversational AI systems.
4. Practical experience using AI-assisted coding tools such as Cursor or GitHub Copilot in production environments.
5. Solid programming experience in Python, with strong problem-solving and development fundamentals.
6. Experience working with embeddings, similarity search, and vector databases for retrieval-augmented generation (RAG).
7. Knowledge of MLOps practices, including model deployment, versioning, monitoring, and lifecycle management.
8. Experience with cloud environments such as Azure, AWS for deploying and managing AI solutions.
9. Experience with APIs, microservices architecture, and system integration for scalable AI applications.
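As a rough illustration of the embeddings and similarity-search qualification above, here is a minimal, dependency-free Python sketch of the retrieval step at the heart of RAG. The three-dimensional vectors and document names are invented for the example; a real system would use model-generated embeddings and a vector database rather than a brute-force scan.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, doc_vecs, k=2):
    # Rank documents by similarity to the query; a vector database
    # (e.g. FAISS, pgvector) does this at scale with ANN indexes.
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 3-dimensional "embeddings"; real ones come from an embedding model.
docs = {"refund-policy": [0.9, 0.1, 0.0],
        "shipping-faq":  [0.1, 0.8, 0.1],
        "api-docs":      [0.0, 0.2, 0.9]}
query = [0.85, 0.15, 0.05]
print(top_k(query, docs))  # the closest documents feed the LLM prompt in RAG
```

In a production RAG pipeline the retrieved documents would be concatenated into the model's context window before generation.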
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven AI/ML Senior Developer with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
ROLE - TECH LEAD/ARCHITECT with AI Expertise
Experience: 10–15 Years
Location: Bangalore (Onsite)
Company Type: Product-based | AI B2B SaaS
Role Overview
We are looking for a Tech Lead / Architect to drive the end-to-end technical design and development of AI-powered B2B SaaS products. This role requires a strong hands-on technologist who can work closely with ML Engineers and Full Stack Development teams, own the product architecture, and ensure scalability, security, and compliance across the platform.
Key Responsibilities
• Lead the end-to-end architecture and development of AI-driven B2B SaaS products
• Collaborate closely with ML Engineers, Data Scientists, and Full Stack Developers to integrate AI/ML models into production systems
• Define and own the overall product technology stack, including backend, frontend, data, and cloud infrastructure
• Design scalable, resilient, and high-performance architectures for multi-tenant SaaS platforms
• Drive cloud-native deployments (Azure) using modern DevOps and CI/CD practices
• Ensure data privacy, security, compliance, and governance (SOC 2, GDPR, ISO, etc.) across the product
• Take ownership of application security, access controls, and compliance requirements
• Actively contribute hands-on through coding, code reviews, complex feature development, and architectural POCs
• Mentor and guide engineering teams, setting best practices for coding, testing, and system design
• Work closely with Product Management and Leadership to translate business requirements into technical solutions
Qualifications:
• 10–15 years of overall experience in software engineering and product development
• Strong experience building B2B SaaS products at scale
• Proven expertise in system architecture, design patterns, and distributed systems
• Hands-on experience with cloud platforms (Azure, AWS/GCP)
• Solid background in backend technologies (Python / .NET / Node.js / Java) and modern frontend frameworks (React, etc.)
• Experience working with AI/ML teams to deploy and tune ML models in production environments
• Strong understanding of data security, privacy, and compliance frameworks
• Experience with microservices, APIs, containers, Kubernetes, and cloud-native architectures
• Strong working knowledge of CI/CD pipelines, DevOps, and infrastructure as code
• Excellent communication and leadership skills with the ability to work cross-functionally
• Experience in AI-first or data-intensive SaaS platforms
• Exposure to MLOps frameworks and model lifecycle management
• Experience with multi-tenant SaaS security models
• Prior experience in product-based companies or startups
Why Join Us
• Build cutting-edge AI-powered B2B SaaS products
• Own architecture and technology decisions end-to-end
• Work with highly skilled ML and Full Stack teams
• Be part of a fast-growing, innovation-driven product organization
If you are a results-driven Technical Lead with a passion for developing innovative products that drive business growth, we invite you to join our dynamic team at ProductNova.
JOB DETAILS:
* Job Title: DevOps Engineer (Azure)
* Industry: Technology
* Salary: Best in Industry
* Experience: 2-5 years
* Location: Bengaluru, Koramangala
Review Criteria
- Strong Azure DevOps Engineer Profiles.
- Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
- Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
- Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
- Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
- Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Preferred
- Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
- Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
- Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline
Role & Responsibilities
- Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
- Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
- Implement Git branching strategies and automate release workflows.
- Develop scripts using Bash, Python, or PowerShell for DevOps automation.
- Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
- Collaborate with dev and QA teams in an Agile/Scrum environment.
- Maintain documentation, runbooks, and participate in root cause analysis.
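The scripting and monitoring responsibilities above can be pictured with a small Python sketch of the kind of automation a DevOps engineer writes: scan application logs for error spikes and flag components that need attention. The log line format and alert threshold here are assumptions for the example; a real setup would feed such signals into Grafana, Datadog, or an alerting pipeline.

```python
import re
from collections import Counter

ERROR_RE = re.compile(r"\b(ERROR|FATAL)\b")

def error_counts(log_lines):
    # Count ERROR/FATAL occurrences per component, assuming a
    # hypothetical "<level> <component>: <message>" line format.
    counts = Counter()
    for line in log_lines:
        if ERROR_RE.search(line):
            component = line.split()[1].rstrip(":")
            counts[component] += 1
    return counts

def breaches(counts, threshold=3):
    # Components whose error count crosses an (illustrative) alert threshold.
    return sorted(c for c, n in counts.items() if n >= threshold)

log = ["INFO api: started",
       "ERROR db: connection refused",
       "ERROR db: connection refused",
       "ERROR db: timeout",
       "ERROR api: 502 from upstream"]
print(breaches(error_counts(log)))
```

A script like this would typically run as a scheduled pipeline step or sidecar, with breaches posted to an incident channel.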
Ideal Candidate
- 2–5 years of experience as an Azure DevOps Engineer.
- Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
- Experience with Microsoft Azure (OCI/AWS exposure is a plus).
- Working knowledge of SQL Server, PostgreSQL, or Oracle.
- Good scripting, troubleshooting, and communication skills.
- Bonus: Docker, Kubernetes, Terraform, Ansible experience.
- Comfortable with WFO (Koramangala, Bangalore).
Application Architect – .NET
Role Overview
We are looking for a senior, hands-on Application Architect with deep .NET experience who can fix and modernize our current systems and build a strong engineering team over time.
Important: this is a hands-on role with an architectural mindset. The person should be comfortable working with legacy systems and able to make and explain trade-offs.
Key Responsibilities
Application Architecture & Modernization
- Own application architecture across legacy .NET Framework and modern .NET systems
- Review the existing application and drive an incremental modernization approach alongside new feature development, in line with the company's business growth
- Own the gradual move away from outdated patterns (Web Forms, tightly coupled MVC, legacy UI constructs)
- Define clean API contracts between front-end and backend services
- Identify and resolve performance bottlenecks across code and database layers
- Improve data access patterns, caching strategies, and system responsiveness
- Strong proponent of AI who has extensively used AI tools such as GitHub Copilot, Cursor, Windsurf, Codex, etc.
Backend, APIs & Integrations
- Design scalable backend services and APIs
- Improve how newer .NET services interact with legacy systems
- Lead integrations with external systems, including Zoho
- Prior experience integrating with Zoho (CRM, Finance, or other modules) is a strong value add
- Experience designing and implementing integrations using EDI standards
Data & Schema Design
- Review existing database schemas and core data structures
- Redesign data models to support growth and reporting/analytics requirements
- Optimize SQL queries to reduce execution load on the database engine
Cloud Awareness
- Design applications with cloud deployment in mind (primarily Azure)
- Understand how to use Azure services to improve security, scalability, and availability
- Work with Cloud and DevOps teams to ensure application architecture aligns with cloud best practices
- Push for CI/CD automation so that the team ships code regularly and makes steady progress
Team Leadership & Best Practices
- Act as a technical leader and mentor for the engineering team
- Help hire, onboard, and grow a team under this role over time.
- Define KPIs and engineering best practices (including focus on documentation)
- Set coding standards, architectural guidelines, and review practices
- Improve testability and long-term health of the codebase
- Raise the overall engineering bar through reviews, coaching, and clear standards
- Create a culture of ownership and quality
Cross-Platform Thinking
- Strong communicator who can convert complex tech topics into business-friendly lingo. Understands the business needs and importance of user experience
- While .NET is the core stack, contribute to architecture decisions across platforms
- Leverages AI tools to accelerate design, coding, reviews, and troubleshooting while maintaining high quality
Skills and Experience
- 12+ years of hands-on experience in application development (preferably on .NET stack)
- Experience leading technical direction while remaining hands-on
- Deep expertise in .NET Framework (4.x) and modern .NET (.NET Core / .NET 6+)
- Must have led a project to modernize a legacy system, preferably moving from .NET Framework to .NET Core
- Experience with MVC, Web Forms, and legacy UI patterns
- Solid backend and API design experience
- Strong understanding of database design and schema evolution
- Understanding of Analytical systems – OLAP, Data warehousing, data lakes.
- Strong proponent of AI who has extensively used AI tools such as GitHub Copilot, Cursor, Windsurf, Codex, etc.
- Integration with Zoho would be a plus.
Roles and Responsibilities:
- Data Pipeline Development: Build, deploy, and maintain efficient ETL/ELT pipelines using Azure Data Factory and Azure Synapse Analytics.
- Data Modelling & Warehousing: Design and optimize data models, warehouses, and lakes for structured/unstructured data.
- SQL & Query Optimization: Write complex SQL queries, optimize performance, and manage databases.
- Python Automation: Develop scripts for data processing, automation, and integration using Python (Pandas, NumPy).
Note: We are only looking for senior candidates with over 5 years of relevant experience and ample client-facing exposure. Finance/insurance experience is also a must.
Technical Skills:
- Cloud Technologies: Azure Synapse Analytics, Azure Fabric, Azure Databricks, and AWS (good to have)
- Knowledge of Python, PySpark, SQL, and ETL concepts
- Good understanding of insurance operations and KPI reporting is an advantage.
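As a rough illustration of the Python automation described above, the sketch below performs a small ETL-style cleanup using only the standard library: drop incomplete records, then aggregate by a dimension. The CSV columns and data are invented for the example, and a production script would typically use Pandas (`dropna` plus `groupby`) over real source files.

```python
import csv
import io

# Hypothetical raw extract; in practice this would be read from a file or lake.
RAW = """policy_id,premium,region
P001,1200,south
P002,,north
P003,950,south
"""

def transform(raw_csv):
    # Minimal ETL-style cleanup: drop rows with missing premiums and
    # sum premiums per region (what Pandas' groupby().sum() would do).
    totals = {}
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if not row["premium"]:
            continue  # skip incomplete records
        totals[row["region"]] = totals.get(row["region"], 0) + int(row["premium"])
    return totals

print(transform(RAW))
```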
JOB DETAILS:
* Job Title: Associate III - Azure Data Engineer
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-6 years
* Location: Trivandrum, Kochi
Job Description: Azure Data Engineer (4–6 Years Experience)
Job Type: Full-time
Locations: Kochi, Trivandrum
Must-Have Skills
Azure & Data Engineering
- Azure Data Factory (ADF)
- Azure Databricks (PySpark)
- Azure Synapse Analytics
- Azure Data Lake Storage Gen2
- Azure SQL Database
Programming & Querying
- Python (PySpark)
- SQL / Spark SQL
Data Modelling
- Star & Snowflake schema
- Dimensional modelling
Source Systems
- SQL Server
- Oracle
- SAP
- REST APIs
- Flat files (CSV, JSON, XML)
CI/CD & Version Control
- Git
- Azure DevOps / GitHub Actions
Monitoring & Scheduling
- ADF triggers
- Databricks jobs
- Log Analytics
Security
- Managed Identity
- Azure Key Vault
- Azure RBAC / Access Control
Soft Skills
- Strong analytical & problem-solving skills
- Good communication and collaboration
- Ability to work in Agile/Scrum environments
- Self-driven and proactive
Good-to-Have Skills
- Power BI basics
- Delta Live Tables
- Synapse Pipelines
- Real-time processing (Event Hub / Stream Analytics)
- Infrastructure as Code (Terraform / ARM templates)
- Data governance tools like Azure Purview
- Azure Data Engineer Associate (DP-203) certification
Educational Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
Skills: Azure Data Factory, Azure Databricks, Azure Synapse, Azure Data Lake Storage
Must-Haves
Azure Data Factory (4-6 years), Azure Databricks/PySpark (4-6 years), Azure Synapse Analytics (4-6 years), SQL/Spark SQL (4-6 years), Git/Azure DevOps (4-6 years)
Skills: Azure, Azure data factory, Python, Pyspark, Sql, Rest Api, Azure Devops
Relevant experience: 4-6 years
Python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
JOB DETAILS:
* Job Title: Associate III - Data Engineering
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-6 years
* Location: Trivandrum, Kochi
Job Description
Job Title:
Data Services Engineer – AWS & Snowflake
Job Summary:
As a Data Services Engineer, you will be responsible for designing, developing, and maintaining robust data solutions using AWS cloud services and Snowflake.
You will work closely with cross-functional teams to ensure data is accessible, secure, and optimized for performance.
Your role will involve implementing scalable data pipelines, managing data integration, and supporting analytics initiatives.
Responsibilities:
• Design and implement scalable and secure data pipelines on AWS and Snowflake (Star/Snowflake schema)
• Optimize query performance using clustering keys, materialized views, and caching
• Develop and maintain Snowflake data warehouses and data marts.
• Build and maintain ETL/ELT workflows using Snowflake-native features (Snowpipe, Streams, Tasks).
• Integrate Snowflake with cloud platforms (AWS, Azure, GCP) and third-party tools (Airflow, dbt, Informatica)
• Utilize Snowpark and Python/Java for complex transformations
• Implement RBAC, data masking, and row-level security.
• Optimize data storage and retrieval for performance and cost-efficiency.
• Collaborate with stakeholders to gather data requirements and deliver solutions.
• Ensure data quality, governance, and compliance with industry standards.
• Monitor, troubleshoot, and resolve data pipeline and performance issues.
• Document data architecture, processes, and best practices.
• Support data migration and integration from various sources.
Qualifications:
• Bachelor’s degree in Computer Science, Information Technology, or a related field.
• 3 to 4 years of hands-on experience in data engineering or data services.
• Proven experience with AWS data services (e.g., S3, Glue, Redshift, Lambda).
• Strong expertise in Snowflake architecture, development, and optimization.
• Proficiency in SQL and Python for data manipulation and scripting.
• Solid understanding of ETL/ELT processes and data modeling.
• Experience with data integration tools and orchestration frameworks.
• Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
• AWS Glue, AWS Lambda, Amazon Redshift
• Snowflake Data Warehouse
• SQL & Python
Skills: Aws Lambda, AWS Glue, Amazon Redshift, Snowflake Data Warehouse
Must-Haves
AWS data services (4-6 years), Snowflake architecture (4-6 years), SQL (proficient), Python (proficient), ETL/ELT processes (solid understanding)
Skills: AWS, AWS lambda, Snowflake, Data engineering, Snowpipe, Data integration tools, orchestration framework
Relevant experience: 4-6 years
Python is mandatory
******
Notice period - 0 to 15 days only (Feb joiners’ profiles only)
Location: Kochi
F2F Interview 7th Feb
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation Testing + Python + Azure)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and execute automation test scripts using Python.
- Build and maintain scalable test automation frameworks.
- Work with Azure DevOps for CI/CD, pipeline automation, and test management.
- Perform functional, regression, and integration testing for web and cloud‑based applications.
- Analyze test results, log defects, and collaborate with developers for timely closure.
- Participate in requirement analysis, test planning, and strategy discussions.
- Ensure test coverage, maintain script quality, and optimize automation suites.
Required Experience:
- Strong hands-on expertise in automation testing for web/cloud applications.
- Solid proficiency in Python for creating automation scripts and frameworks.
- Experience working with Azure services and Azure DevOps pipelines.
- Good understanding of QA methodologies, SDLC/STLC, and defect lifecycle.
- Experience with tools like Selenium, PyTest, or similar frameworks (good to have).
- Familiarity with Git or other version control tools.
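The scripting requirements above can be pictured with a minimal PyTest-style sketch. The validator being tested is hypothetical, but the same pattern (plain `test_`-prefixed functions with bare asserts, discovered automatically by PyTest) scales up to Selenium page checks or API assertions.

```python
# Minimal PyTest-style checks for a (hypothetical) status-code validator;
# runnable via `pytest <file>.py` or as plain Python.

def is_success(status_code):
    # Function under test: any HTTP 2xx status counts as success.
    return 200 <= status_code < 300

def test_success_codes():
    assert is_success(200) and is_success(204)

def test_failure_codes():
    assert not is_success(404) and not is_success(500)

if __name__ == "__main__":
    # Fallback runner when PyTest is not installed.
    test_success_codes()
    test_failure_codes()
    print("all checks passed")
```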
Good to Have:
- Experience with API testing (REST, Postman, or similar tools)
- Knowledge of Docker/Kubernetes
- Exposure to Agile/Scrum environments
Skills: automation testing, python, java, azure
JOB DETAILS:
* Job Title: Tester III - Software Testing- Playwright + API testing
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4-10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and maintain automated test scripts for web applications using Playwright.
- Perform API testing using industry-standard tools and frameworks.
- Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
- Analyze test results, identify defects, and track them to closure.
- Participate in requirement reviews, test planning, and test strategy discussions.
- Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.
Required Experience:
- Strong hands-on experience in Automation Testing for web-based applications.
- Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
- Solid experience in API testing (Postman, REST Assured, or similar tools).
- Good understanding of software QA methodologies, tools, and processes.
- Ability to write clear, concise test cases and automation scripts.
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.
Good to Have:
- Knowledge of cloud environments (AWS/Azure)
- Experience with version control tools like Git
- Familiarity with Agile/Scrum methodologies
Skills: automation testing, sql, api testing, soap ui testing, playwright
About Unilog
Unilog is the only connected product content and eCommerce provider serving the Wholesale Distribution, Manufacturing, and Specialty Retail industries. Our flagship CX1 Platform is at the center of some of the most successful digital transformations in North America. CX1 Platform’s syndicated product content, integrated eCommerce storefront, and automated PIM tool simplify our customers' path to success in the digital marketplace.
With more than 500 customers, Unilog is uniquely positioned as the leader in eCommerce and product content for Wholesale Distribution, Manufacturing, and Specialty Retail.
Unilog’s Mission Statement
At Unilog, our mission is to provide purpose-built connected product content and eCommerce solutions that empower our customers to succeed in the face of intense competition. By virtue of living our mission, we are able to transform the way Wholesale Distributors, Manufacturers, and Specialty Retailers go to market. We help our customers extend a digital version of their business and accelerate their growth.
Job Details
- Designation: Principal Engineer – Solr
- Location: Bangalore / Mysore / Remote
- Job Type: Full-time
- Department: Software R&D
Job Summary
We are seeking a highly skilled and experienced Principal Engineer with a strong background in Apache Solr and Java to lead our Engineering and customer-led initiatives. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our search platform while providing expert-level troubleshooting and resolution for critical production issues.
This role will involve designing the architecture for new platforms while reviewing and recommending better approaches for existing ones to drive continuous improvement and efficiency.
Key Responsibilities
- Lead Engineering and support activities for Solr-based search applications, ensuring minimal downtime and optimal performance
- Design and develop the architecture of new platforms while reviewing and recommending better approaches for existing ones
- Continuously enhance search ranking, query understanding, and retrieval effectiveness
- Diagnose, troubleshoot, and resolve complex technical issues in Solr, Java-based applications, and supporting infrastructure
- Perform deep-dive analysis of logs, performance metrics, and alerts to proactively prevent incidents
- Optimize Solr indexes, queries, and configurations to enhance search performance and reliability
- Work closely with development, operations, and business teams to drive improvements in system stability and efficiency
- Implement monitoring tools, dashboards, and alerting mechanisms to enhance observability and proactive issue detection
- Apply AI-based search techniques, including vector databases, RAG models, NLP, and LLMs
- Collaborate on capacity planning, system scaling, and disaster recovery strategies for mission-critical search systems
- Provide mentorship and technical guidance to junior engineers and support teams
- Drive innovation by tracking latest trends, emerging technologies, and best practices in AI-based Search, Solr, and other search platforms
Requirements
- 8+ years of experience in software development and production support with a focus on Apache Solr, Java, and databases (Oracle, MySQL, PostgreSQL, etc.)
- Strong understanding of Solr indexing, query execution, schema design, configuration, and tuning
- Experience in designing and implementing scalable system architectures for search platforms
- Proven ability to review and assess existing platform architectures, identifying areas for improvement and recommending better approaches
- Proficiency in Java, Spring Boot, and micro-services architectures
- Experience with Linux / Unix-based environments, shell scripting, and debugging production systems
- Hands-on experience with monitoring tools (e.g., Prometheus, Grafana, Splunk, ELK Stack) and log analysis
- Expertise in troubleshooting performance issues related to Solr, JVM tuning, and memory management
- Familiarity with cloud platforms such as AWS, Azure, or GCP and containerization technologies like Docker / Kubernetes
- Strong analytical and problem-solving skills, with the ability to work under pressure in a fast-paced environment
- Certifications in Solr, Java, or cloud technologies
- Excellent communication and leadership abilities
About Our Benefits
- Competitive salary
- Health insurance
- Retirement plan
- Paid time off
- Training and development opportunities
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more - backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
Role Overview
As Lead – Product Support & IT Infrastructure, you will oversee the technology backbone that supports MIC Global’s products, data operations, and global business continuity. You will manage all aspects of IT infrastructure, system uptime, cybersecurity, and support operations, ensuring that MIC’s platforms remain reliable, secure, and scalable.
This is a pivotal, hands-on leadership role, blending strategic oversight with operational execution. The ideal candidate combines strong technical expertise with a proactive, service-oriented mindset to support both internal teams and external partners.
Key Responsibilities
Infrastructure & Operations
- Oversee all IT infrastructure and operations, including database administration, hosting environments, and production systems.
- Ensure system reliability, uptime, and performance across global deployments.
- Align IT operations with Agile development cycles and product release plans.
- Manage the IT service desk (MiTracker), ensuring timely and high-quality resolution of incidents.
- Drive continuous improvement in monitoring, alerting, and automation processes.
- Lead the development, testing, and maintenance of Disaster Recovery (DR) and Business Continuity Plans (BCP).
- Manage vendor relationships, IT budgets, and monthly cost reporting.
Security & Compliance
- Lead cybersecurity efforts across the organization, developing and implementing comprehensive information security strategies.
- Monitor, respond to, and mitigate security incidents in a timely manner.
- Maintain compliance with industry standards and data protection regulations (e.g., SOC 2, GDPR, ISO27001).
- Prepare regular reports on security incidents, IT costs, and system performance for review with the Head of Technology.
Team & Process Management
- Deliver exceptional customer service by ensuring internal and external technology users are supported effectively.
- Implement strategies to ensure business continuity during absences — including defined backup responsibilities and robust process documentation.
- Promote knowledge sharing and operational excellence across Product Support and IT teams.
- Build and maintain a culture of accountability, responsiveness, and cross-team collaboration.
Required Qualifications
- Azure administration experience and qualifications, such as Microsoft Certified: Azure Administrator Associate or Azure Solutions Architect Expert.
- Strong SQL Server DBA capabilities and experience, including performance tuning, high availability configurations, and certifications like Microsoft Certified: Azure Database Administrator Associate.
- 8+ years of experience in IT infrastructure management, DevOps, or IT operations; experience within product-focused companies (fintech, insurtech, or SaaS environments) is essential.
- Proven experience leading service desk or technical support functions in a 24/7 uptime environment.
- Deep understanding of cloud infrastructure (AWS/Azure/GCP), database administration, and monitoring tools (e.g., Grafana, Datadog, CloudWatch).
- Hands-on experience with security frameworks, incident response, and business continuity planning.
- Strong analytical, problem-solving, and communication skills, with the ability to work cross-functionally.
- Demonstrated leadership in managing teams and implementing scalable IT systems and processes.
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development.
About the Role
We are looking for a motivated Full Stack Developer with 2–5 years of hands-on experience in building scalable web applications. You will work closely with senior engineers and product teams to develop new features, improve system performance, and ensure high-quality code delivery.
Responsibilities
- Develop and maintain full-stack applications.
- Implement clean, maintainable, and efficient code.
- Collaborate with designers, product managers, and backend engineers.
- Participate in code reviews and debugging.
- Work with REST APIs/GraphQL.
- Contribute to CI/CD pipelines.
- Work independently as well as within a collaborative team environment.
Required Technical Skills
- Strong knowledge of JavaScript/TypeScript.
- Experience with React.js, Next.js.
- Backend experience with Node.js, Express, NestJS.
- Understanding of SQL/NoSQL databases.
- Experience with Git, APIs, and debugging tools.
- Cloud familiarity (AWS/GCP/Azure).
AI and System Mindset
Experience working with AI-powered systems is a strong plus. Candidates should be comfortable integrating AI agents, third-party APIs, and automation workflows into applications, and should demonstrate curiosity and adaptability toward emerging AI technologies.
Soft Skills
- Strong problem-solving ability.
- Good communication and teamwork.
- Fast learner and adaptable.
Education
Bachelor's degree in Computer Science / Engineering or equivalent.
Job Title: Technology Intern
Location: Remote (India)
Shift Timings:
- 5:00 PM – 2:00 AM
- 6:00 PM – 3:00 AM
Compensation: Stipend
Job Summary
ARDEM is seeking highly motivated Technology Interns from Tier 1 colleges who are passionate about software development and eager to work with modern Microsoft technologies. This role is ideal for final-year students (2026 pass-outs) who want hands-on experience in building scalable web applications while maintaining a healthy work-life balance through remote work opportunities.
Eligibility & Qualifications
- Education:
- B.Tech (Computer Science) / M.Tech (Computer Science)
- Tier 1 colleges preferred
- Final semester students (2026 pass-outs) or recent graduates
- Experience Level: Fresher
- Communication: Excellent English communication skills (verbal & written)
Skills Required
1. Technical Skills (Must Have)
- Experience with .NET Core (.NET 6 / 7 / 8)
- Strong knowledge of C#, including:
- Object-Oriented Programming (OOP) concepts
- async/await
- LINQ
- ASP.NET Core (Web API / MVC)
2. Database Skills
- SQL Server (preferred)
- Writing complex SQL queries, joins, and subqueries
- Stored Procedures, Functions, and Indexes
- Database design and performance tuning
- Entity Framework Core
- Migrations and transaction handling
3. Frontend Skills (Required)
- JavaScript (ES5 / ES6+)
- jQuery
- DOM manipulation
- AJAX calls
- Event handling
- HTML5 & CSS3
- Client-side form validation
4. Security & Performance
- Data validation and exception handling
- Caching concepts (In-memory / Redis – good to have)
5. Tools & Environment
- Visual Studio / VS Code
- Git (GitHub / Azure DevOps)
- Basic knowledge of server deployment
6. Good to Have (Optional)
- Azure or AWS deployment experience
- CI/CD pipelines
- Docker
- Experience with data handling
Additional Requirements (Work-from-Home Setup)
This role supports remote work. Candidates must ensure the following minimum infrastructure requirements:
- Laptop/Desktop: Windows-based system
- Operating System: Windows
- Screen Size: Minimum 14 inches
- Screen Resolution: Full HD (1920 × 1080)
- Processor: Intel i5 or higher
- RAM: Minimum 8 GB (Mandatory)
- Software: AnyDesk
- Internet Speed: 100 Mbps or higher
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. For over 20 years, ARDEM has successfully delivered high-quality outsourcing and automation services to clients across the USA and Canada.
We are growing rapidly and continuously innovating to become a better service provider for our customers. Our mission is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company in the industry.
Role Overview
The Azure Presales Engineer is responsible for engaging with customers to understand their business and technical requirements and translating them into well-architected Microsoft Azure solutions. This role plays a key part in cloud transformation initiatives by supporting presales activities, building solution proposals, responding to RFPs, and ensuring a smooth transition from presales to delivery.
Key Responsibilities
- Participate in customer discovery sessions to gather technical and business requirements
- Design Azure cloud architectures across IaaS, PaaS, and hybrid environments following best practices
- Prepare technical solution proposals, architectures, BOMs, and presales documentation
- Support RFP and RFQ responses with detailed technical inputs and cost estimations
- Deliver Azure solution demonstrations, workshops, and technical presentations to customers
- Collaborate closely with sales and delivery teams to ensure accurate solution design and handover
- Stay updated with Azure services, licensing models, pricing, and new feature releases
- Work with Microsoft account teams for co-selling opportunities, funding programs, and alignment
- Contribute to reusable presales assets, templates, and solution accelerators
Required Qualifications
- 2–3+ years of experience in Azure cloud engineering or presales roles
- Strong hands-on understanding of Azure core services including compute, storage, networking, security, IAM, monitoring, backup, and disaster recovery
- Experience in preparing technical proposals, SOWs, and solution designs
- Strong communication, presentation, and customer-facing skills
- Ability to translate business needs into effective cloud solutions
- Experience working with or for a Microsoft Partner is a strong plus
Preferred Certifications
- AZ-104, AZ-305, AZ-900, AZ-700, AZ-500 (any relevant Azure certifications)
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
About the role:
As a DevOps Engineer, you will play a critical role in bridging the gap between development, operations, and security teams to enable fast, secure, and reliable software delivery. With 5+ years of hands-on experience, the engineer is responsible for designing, implementing, and maintaining scalable, automated, and cloud-native infrastructure solutions.
Key Responsibilities:
- 5+ years of hands-on experience in DevOps or Cloud Engineering roles.
- Strong expertise in at least one public cloud provider (AWS / Azure / GCP).
- Proficiency in Infrastructure as Code (IaC) tools (Terraform, Ansible, Pulumi, or CloudFormation).
- Solid experience with Kubernetes and containerized applications.
- Strong knowledge of CI/CD tools (Jenkins, GitHub Actions, GitLab CI, Azure DevOps, ArgoCD).
- Scripting/programming skills in Python, Shell, or Go for automation.
- Hands-on experience with monitoring, logging, and incident management.
- Familiarity with security practices in DevOps (secrets management, IAM, vulnerability scanning).
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
Role: Software Development (Senior and Associate)
Experience Level: 4 to 9 Years
Work location: Remote
What you’ll do:
We are seeking a Mid-Level Node.js Developer to join our development team as an individual contributor. You will design, develop, and maintain scalable microservices for diverse client projects, working on enterprise applications that require high performance, reliability, and seamless deployment in containerized environments.
Key Responsibilities:
● Develop and maintain scalable Node.js microservices for diverse client projects
● Implement robust REST APIs with proper error handling and validation
● Write comprehensive unit and integration tests ensuring high code quality
● Design portable, efficient solutions deployable across different client environments
● Collaborate with cross-functional teams and client stakeholders
● Optimize application performance for high-concurrency scenarios
● Implement security best practices for enterprise applications
● Participate in code reviews and maintain coding standards
● Support deployment and troubleshooting in client environments
Must have skills:
Core Technical Expertise:
● Node.js: 4+ years of production experience with Node.js (ES6+, Async/Await, Promises, Event Loop understanding)
● Frameworks: Strong hands-on experience with Express.js, Fastify, or NestJS
● REST API Development: Proven experience designing and implementing RESTful web services and middleware
● JavaScript/TypeScript: Proficient in modern JavaScript (ES6+) and TypeScript for type-safe development
● Testing: Experience with testing frameworks (Jest, Mocha, Chai), unit testing, integration testing, mocking
Microservices & Deployment:
● Containerization: Hands-on Docker experience for packaging and deploying Node.js applications
● Microservices Architecture: Understanding of service decomposition, inter-service communication, and event-driven architecture
● Abstraction & Portability: Environment-agnostic design, configuration management (dotenv, config modules)
● Build Tools: NPM/Yarn for dependency management, understanding of package.json
Good to have skills:
Advanced Technical:
● Advanced Frameworks: NestJS, Koa.js, Hapi.js
● Orchestration: Kubernetes, Docker
● Cloud Platforms: Alibaba, Azure, or GCP services and deployment
● Message Brokers: Apache Kafka, RabbitMQ for asynchronous communication
● Databases: Both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra)
● API Gateway: Express Gateway, Kong API Gateway
Development & Operations:
● CI/CD pipelines (Jenkins, GitLab CI/CD)
● Monitoring & Observability (Winston, Morgan, Prometheus, New Relic)
● GraphQL with Apollo Server or similar
● Security best practices (Helmet.js, authentication, authorization)
Client-Facing Experience:
● Experience working in service-based organizations
● Adaptability to different domain requirements
● Understanding of various industry standards and compliance requirements
Why Join Quantiphi?
● Be part of an award-winning Google Cloud partner recognized for innovation and impact.
● Work on cutting-edge GCP-based data engineering and AI projects.
● Collaborate with a global team of data scientists, engineers, and AI experts.
● Access continuous learning, certifications, and leadership development opportunities.
Job Details
- Job Role: Senior Dot Net Developer
- Experience: 8+ years
- Notice Period: Immediate
- Location: Trivandrum / Kochi
Job Description
Candidates should have 8+ years of experience in the IT industry, with strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client, and the resource should be hands-on, with experience in coding and Azure Cloud.
Working hours: 8 hours, with 4 hours of overlap during the EST time zone (12 PM – 9 PM). These overlap hours are mandatory, as meetings happen during this window.
Responsibilities
Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
- Integrate and support third-party APIs and external services
- Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack
- Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
- Participate in Agile/Scrum ceremonies and manage tasks using Jira
- Understand technical priorities, architectural dependencies, risks, and implementation challenges
- Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability
Primary Skills
8+ years of hands-on development experience with:
- C#, .NET Core 6/8+, Entity Framework / EF Core
- JavaScript, jQuery, REST APIs
- Expertise in MS SQL Server, including complex SQL queries, Stored Procedures, Views, Functions, Packages, Cursors, Tables, and Object Types
- Skilled in unit testing with XUnit, MSTest
- Strong in software design patterns, system architecture, and scalable solution design
- Ability to lead and inspire teams through clear communication, technical mentorship, and ownership
- Strong problem-solving and debugging capabilities
- Ability to write reusable, testable, and efficient code
- Develop and maintain frameworks and shared libraries to support large-scale applications
- Excellent technical documentation, communication, and leadership skills
- Microservices and Service-Oriented Architecture (SOA)
- Experience in API integrations
2+ years of hands-on experience with Azure Cloud Services, including:
- Azure Functions
- Azure Durable Functions
- Azure Service Bus, Event Grid, Storage Queues
- Blob Storage, Azure Key Vault, SQL Azure
- Application Insights, Azure Monitoring
Secondary Skills
- Familiarity with AngularJS, ReactJS, and other front-end frameworks
- Experience with Azure API Management (APIM)
- Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes)
- Experience with Azure Data Factory (ADF) and Logic Apps
- Exposure to application support and operational monitoring
- Azure DevOps – CI/CD pipelines (Classic / YAML)
Certifications Required (If Any)
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Other relevant certifications in Azure, .NET, or Cloud technologies
📍 Position: IT Intern
👩💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)
🎓 Qualification: B.Tech (IT) / M.Tech (IT) only
📌 Mode: Remote (WFH)
⏳ Shift: Willingness to work in night/rotational shifts
🗣 Communication: Excellent English
Key Responsibilities:
- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.
- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).
- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.
- Help create and update technical documentation and knowledge base articles.
- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
💻 Technical Requirements:
- Laptop with i5 or higher processor
- Reliable internet connectivity with 100 Mbps speed
About Us
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
About the Team
As a Lead Data Specialist at MIC Global, you will play a key role in transforming data into actionable insights that inform strategic and operational decisions. You will work closely with Product, Engineering, and Business teams to analyze trends, build dashboards, and ensure that data pipelines and reporting structures are accurate, automated, and scalable.
This is a hands-on, analytical, and technically focused role ideal for someone experienced in data analytics and engineering practices. You will use SQL, Python, and modern BI tools to interpret large datasets, support pricing models, and help shape the data-driven culture across MIC Global
Key Roles and Responsibilities
Data Analytics & Insights
- Analyze complex datasets to identify trends, patterns, and insights that support business and product decisions.
- Partner with Product, Operations, and Finance teams to generate actionable intelligence on customer behavior, product performance, and risk modeling.
- Contribute to the development of pricing models, ensuring accuracy and commercial relevance.
- Deliver clear, concise data stories and visualizations that drive executive and operational understanding.
- Develop analytical toolkits for underwriting, pricing and claims
Data Engineering & Pipeline Management
- Design, implement, and maintain reliable data pipelines and ETL workflows.
- Write clean, efficient scripts in Python for data cleaning, transformation, and automation.
- Ensure data quality, integrity, and accessibility across multiple systems and environments.
- Work with Azure data services to store, process, and manage large datasets efficiently.
Business Intelligence & Reporting
- Develop, maintain, and optimize dashboards and reports using Power BI (or similar tools).
- Automate data refreshes and streamline reporting processes for cross-functional teams.
- Track and communicate key business metrics, providing proactive recommendations.
Collaboration & Innovation
- Collaborate with engineers, product managers, and business leads to align analytical outputs with company goals.
- Support the adoption of modern data tools and agentic AI frameworks to improve insight generation and automation.
- Continuously identify opportunities to enhance data-driven decision-making across the organization.
Ideal Candidate Profile
- 10+ years of relevant experience in data analysis or business intelligence, ideally within product-based SaaS, fintech, or insurance environments.
- Proven expertise in SQL for data querying, manipulation, and optimization.
- Hands-on experience with Python for data analytics, automation, and scripting.
- Strong proficiency in Power BI, Tableau, or equivalent BI tools.
- Experience working in Azure or other cloud-based data ecosystems.
- Solid understanding of data modeling, ETL processes, and data governance.
- Ability to translate business questions into technical analysis and communicate findings effectively.
Preferred Attributes
- Experience in insurance or fintech environments, especially operations and claims analytics.
- Exposure to agentic AI and modern data stack tools (e.g., dbt, Snowflake, Databricks).
- Strong attention to detail, analytical curiosity, and business acumen.
- Collaborative mindset with a passion for driving measurable impact through data.
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development
About Us
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
About the Team
We're seeking a mid-level Data Engineer with strong DBA experience to join our insurtech data analytics team. This role focuses on supporting various teams including infrastructure, reporting, and analytics. You'll be responsible for SQL performance optimization, building data pipelines, implementing data quality checks, and helping teams with database-related challenges. You'll work closely with the infrastructure team on production support, assist the reporting team with complex queries, and support the analytics team in building visualizations and dashboards.
Key Roles and Responsibilities
Database Administration & Optimization
- Support infrastructure team with production database issues and troubleshooting
- Debug and resolve SQL performance issues, identify bottlenecks, and optimize queries
- Optimize stored procedures, functions, and views for better performance
- Perform query tuning, index optimization, and execution plan analysis
- Design and develop complex stored procedures, functions, and views
- Support the reporting team with complex SQL queries and database design
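The query-tuning and index-optimization work above can be illustrated with a small, database-agnostic sketch: an index trades a one-time build cost for keyed lookups instead of full scans. The row shape and sizes here are hypothetical, used only to show the trade-off.

```python
# Illustrative sketch (not tied to any specific database engine): the core
# idea behind index optimization is replacing a full scan with a keyed lookup.

rows = [{"id": i, "policy": f"POL-{i:05d}"} for i in range(10_000)]

def full_scan(rows, policy):
    # O(n): touches every row, like a table scan without a usable index
    return [r for r in rows if r["policy"] == policy]

# "Building an index": a one-time O(n) pass that makes lookups O(1)
index = {r["policy"]: r for r in rows}

def indexed_lookup(index, policy):
    return index.get(policy)

assert full_scan(rows, "POL-00042")[0]["id"] == 42
assert indexed_lookup(index, "POL-00042")["id"] == 42
```

The same reasoning drives execution-plan analysis in SQL Server: a plan showing a clustered index scan on a selective predicate is usually a candidate for a covering index.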
Data Engineering & Pipelines
- Design and build ETL/ELT pipelines using Azure Data Factory and Python
- Implement data quality checks and validation rules before data enters pipelines
- Develop data integration solutions to connect various data sources and systems
- Create automated data validation, quality monitoring, and alerting mechanisms
- Develop Python scripts for data processing, transformation, and automation
- Build and maintain data models to support reporting and analytics requirements
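The "data quality checks before data enters pipelines" point above can be sketched in plain Python. The column names (`policy_id`, `premium`) are hypothetical stand-ins, not MIC Global's actual schema.

```python
# A minimal pre-ingest validation sketch: split incoming records into
# clean rows and rejected rows (with reasons) before pipeline entry.
# Field names and rules are illustrative assumptions.

def validate_row(row):
    """Return a list of rule violations for one incoming record."""
    errors = []
    if not row.get("policy_id"):
        errors.append("policy_id is required")
    premium = row.get("premium")
    if not isinstance(premium, (int, float)) or premium < 0:
        errors.append("premium must be a non-negative number")
    return errors

def partition(rows):
    """Split records into (clean, rejected) before they enter the pipeline."""
    clean, rejected = [], []
    for row in rows:
        errs = validate_row(row)
        (rejected if errs else clean).append((row, errs))
    return [r for r, _ in clean], rejected

clean, rejected = partition([
    {"policy_id": "P1", "premium": 120.0},
    {"policy_id": "", "premium": -5},
])
```

In an Azure Data Factory pipeline, the same rules would typically run in a pre-ingest activity, with rejected rows routed to a quarantine table for the alerting mechanism described above.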
Support & Collaboration
- Help data analytics team build visualizations and dashboards by providing data models and queries
- Support reporting team with data extraction, transformation, and complex reporting queries
- Collaborate with development teams to support application database requirements
- Provide technical guidance and best practices for database design and query optimization
Azure & Cloud
- Work with Azure services including Azure SQL Database, Azure Data Factory, Azure Storage, Azure Functions, and Azure ML
- Implement cloud-based data solutions following Azure best practices
- Support cloud database migrations and optimizations
- Work with Agentic AI concepts and tools to build intelligent data solutions
Ideal Candidate Profile
Essential
- 5-8 years of experience in data engineering and database administration
- Strong expertise in MS SQL Server (2016+) administration and development
- Proficient in writing complex SQL queries, stored procedures, functions, and views
- Hands-on experience with Microsoft Azure services (Azure SQL Database, Azure Data Factory, Azure Storage)
- Strong Python scripting skills for data processing and automation
- Experience with ETL/ELT design and implementation
- Knowledge of database performance tuning, query optimization, and indexing strategies
- Experience with SQL performance debugging tools (XEvents, Profiler, or similar)
- Understanding of data modeling and dimensional design concepts
- Knowledge of Agile methodology and experience working in Agile teams
- Strong problem-solving and analytical skills
- Understanding of Agentic AI concepts and tools
- Excellent communication skills and ability to work with cross-functional teams
Desirable
- Knowledge of insurance or financial services domain
- Experience with Azure ML and machine learning pipelines
- Experience with Azure DevOps and CI/CD pipelines
- Familiarity with data visualization tools (Power BI, Tableau)
- Experience with NoSQL databases (Cosmos DB, MongoDB)
- Knowledge of Spark, Databricks, or other big data technologies
- Azure certifications (Azure Data Engineer Associate, Azure Database Administrator Associate)
- Experience with version control systems (Git, Azure Repos)
Tech Stack
- MS SQL Server 2016+, Azure SQL Database, Azure Data Factory, Azure ML, Azure Storage, Azure Functions, Python, T-SQL, Stored Procedures, ETL/ELT, SQL Performance Tools (XEvents, Profiler), Agentic AI Tools, Azure DevOps, Power BI, Agile, Git
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development

JOB DETAILS:
Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering
Industry: Global digital transformation solutions provider
Work Mode: Hybrid
Salary: Best in Industry
Experience: 6-8 years
Location: Hyderabad
Job Description:
• Experience in Microsoft web development technologies such as Web API and SOAP/XML
• C#/.NET/.NET Core and ASP.NET web application experience
• Cloud-based development experience in AWS or Azure
• Knowledge of cloud architecture and technologies
• Support/Incident management experience in a 24/7 environment
• SQL Server and SSIS experience
• DevOps experience with GitHub and Jenkins CI/CD pipelines, or similar
• Windows Server 2016/2019+ and SQL Server 2019+ experience
• Experience of the full software development lifecycle
• You will write clean, scalable code, with a view towards design patterns and security best practices
• Understanding of Agile methodologies, working within the Scrum framework
• AWS knowledge
Must-Haves
C#/.NET/.NET Core (experienced), ASP.NET Web application (experienced), SQL Server/SSIS (experienced), DevOps (Github/Jenkins CI/CD), Cloud architecture (AWS or Azure)
.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)
Mandatory skills: .NET Core with Azure or AWS experience
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Drive - 17th Jan
We are looking for a passionate DevOps Engineer who can support deployments and monitor the performance of our Production, QE, and Staging environments. Applicants should have a strong understanding of UNIX internals and be able to clearly articulate how they work. Knowledge of shell scripting and security aspects is a must. Any experience with infrastructure as code is a big plus. The key responsibility of the role is to manage deployments, security, and support of business solutions. Experience with Postgres, ELK, NodeJS, NextJS, and Ruby on Rails is a huge plus. At VakilSearch, experience doesn't matter; the passion to produce change does.
Responsibilities and Accountabilities:
- As part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the infra components of VakilSearch’s product which are hosted in cloud services & on-prem facility
- Design, build tools and framework that support deploying and managing our platform & Exploring new tools, technologies, and processes to improve speed, efficiency, and scalability
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restore of different Env
- Manage resources in a cost-effective, innovative manner, including assisting subordinates in the effective use of resources and tools
- Resolve incidents as escalated from Monitoring tools and Business Development Team
- Implement and follow security guidelines, both policy and technology to protect our data
- Identify root causes of issues, develop long-term solutions to fix recurring issues, and document them
- Willingness to perform production operations activities, even at night when required
- Ability to automate [Scripts] recurring tasks to increase velocity and quality
- Ability to manage and deliver multiple project phases at the same time
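The "automate recurring tasks" responsibility above can be sketched as a small Python script. The log directory and the 14-day retention window are illustrative assumptions, not VakilSearch policy; the dry-run default is a common safety pattern for scripts that delete files.

```python
# A hedged sketch of the kind of recurring task worth scripting:
# pruning log files older than a retention window.
import os
import time

def prune_old_logs(log_dir, max_age_days=14, dry_run=True):
    """Return files older than max_age_days; delete them unless dry_run."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            stale.append(path)
            if not dry_run:
                os.remove(path)  # only destructive when explicitly asked
    return stale
```

Wired to cron (or a systemd timer) with `dry_run=False`, this is the "automate to increase velocity and quality" idea in miniature: one audited script instead of a manual monthly cleanup.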
I Qualification(s):
- Experience in working with Linux Server, DevOps tools, and Orchestration tools
- Linux, AWS, GCP, Azure, CompTIA, and other certifications are a value-add
II Experience Required in DevOps Aspects:
- Length of Experience: Minimum 1-4 years of experience
- Nature of Experience:
- Experience in Cloud deployments, Linux administration[ Kernel Tuning is a value add ], Linux clustering, AWS, virtualization, and networking concepts [ Azure, GCP value add ]
- Experience in deployment solutions CI/CD like Jenkins, GitHub Actions [ Release Management is a value add ]
- Hands-on experience in any of the configuration management IaC tools like Chef, Terraform, and CloudFormation [ Ansible & Puppet is a value add ]
- Administration, Configuring and utilizing Monitoring and Alerting tools like Prometheus, Grafana, Loki, ELK, Zabbix, Datadog, etc
- Experience with containerization and orchestration tools like Docker and Kubernetes [ Docker Swarm is a value add ]
- Good scripting skills in at least one interpreted language - Shell/Bash scripting or Ruby/Python/Perl
- Experience in Database applications like PostgreSQL, MongoDB & MySQL [DataOps]
- Good at version control and source code management systems like Git and GitHub
- Experience in Serverless [ Lambda/GCP cloud function/Azure function ]
- Experience in Web Server Nginx, and Apache
- Knowledge in Redis, RabbitMQ, ELK, REST API [ MLOps Tools is a value add ]
- Knowledge in Puma, Unicorn, Gunicorn & Yarn
- Hands-on VMWare ESXi/Xencenter deployments is a value add
- Experience in Implementing and troubleshooting TCP/IP networks, VPN, Load Balancing & Web application firewalls
- Deploying, Configuring, and Maintaining Linux server systems ON premises and off-premises
- Code Quality like SonarQube is a value-add
- Test Automation like Selenium, JMeter, and JUnit is a value-add
- Experience in Heroku and OpenStack is a value-add
- Experience in identifying inbound and outbound threats and resolving them
- Knowledge of CVE & applying the patches for OS, Ruby gems, Node, and Python packages
- Documenting the Security fix for future use
- Establish cross-team collaboration with security built into the software development lifecycle
- Forensics and Root Cause Analysis skills are mandatory
- Weekly Sanity Checks of the on-prem and off-prem environment
III Skill Set & Personality Traits required:
- An understanding of programming languages such as Ruby, NodeJS, ReactJS, Perl, Java, Python, and PHP
- Good written and verbal communication skills to facilitate efficient and effective interaction with peers, partners, vendors, and customers
IV Age Group: 21 – 36 Years
V Cost to the Company: As per industry standards
Hi Connections! 👋 Welcome to 2026! 🎉
Starting the new year with an exciting opportunity!
Deqode IS HIRING! 💻
Hiring: .Net Developer
⭐ Experience: 4+ Years
⭐ Work Mode: Remote
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
🔧 Role Overview
We are looking for passionate .NET Developers to design, develop, and maintain scalable microservices for enterprise-grade applications. You’ll work closely with cross-functional teams and clients on high-performance, cloud-native solutions.
🛠️ Key Responsibilities
✅ Build and maintain scalable .NET microservices
✅ Develop secure, high-quality RESTful Web APIs
✅ Write unit and integration tests to ensure code quality
✅ Optimize performance and implement caching strategies
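The caching-strategy responsibility above is language-agnostic; a minimal TTL (time-to-live) cache sketch, written in Python here for brevity, shows the core idea. In .NET this would typically map to `IMemoryCache` with an absolute expiration.

```python
# A minimal TTL cache sketch: entries expire after a fixed lifetime
# and stale entries are evicted lazily on read.
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:
            del self._store[key]  # evict stale entry on read
            return None
        return value
```

The lazy-eviction choice keeps writes cheap; a production cache would also bound total size, which is why managed caches expose both expiration and size limits.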
💫 Must-Have Skills
✅ 4+ years of experience with .NET Core / .NET 5+ & C#
✅ Strong hands-on experience with ASP.NET Core Web API & EF Core
✅ REST API development & middleware implementation
✅ Solid understanding of SOLID principles & design patterns
✅ Unit testing experience (xUnit, NUnit, MSTest, Moq)