50+ Windows Azure Jobs in India
Key responsibilities
• Design, build, and maintain robust CI/CD pipelines using Azure DevOps Services (Azure Pipelines) and Git-based workflows.
• Implement and manage infrastructure as code (IaC) using ARM templates, Bicep, and/or Terraform for repeatable environment provisioning.
• Containerize applications (Docker) and manage container orchestration platforms such as AKS (Azure Kubernetes Service).
• Automate build, test, release, and rollback processes; integrate automated testing and quality gates into pipelines.
• Monitor and improve platform reliability and observability using logging and monitoring tools (e.g., Azure Monitor, Application Insights, Prometheus, Grafana).
• Drive platform security and compliance through pipeline controls, secrets management (Key Vault / Vault), and secure configuration practices.
• Implement cost-optimization and governance for Azure resources (tags, policies, budgets).
• Troubleshoot build/release failures, production incidents, and performance bottlenecks; perform root-cause analysis and implement permanent fixes.
• Mentor developers in Git workflows, pipeline authoring, best practices for IaC, and cloud-native design.
• Maintain clear documentation: runbooks, deployment playbooks, architecture diagrams, and pipeline templates.
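The quality-gate item above can be made concrete with a small script. This is a minimal sketch, assuming a Cobertura-style coverage report; the file name, report format, and 80% threshold are illustrative, not prescribed by the role.

```python
# Illustrative coverage quality gate (report format and threshold are
# assumptions): parse a Cobertura-style report and decide whether the
# build may proceed. In a pipeline step, a non-zero exit on failure
# blocks the release stage from running on an under-tested build.
import xml.etree.ElementTree as ET

def coverage_gate(report_xml: str, minimum: float = 0.80) -> bool:
    """Return True when the report's overall line-rate meets the minimum."""
    root = ET.fromstring(report_xml)
    line_rate = float(root.get("line-rate", "0"))
    return line_rate >= minimum

# A pipeline step would read the real report file and call, e.g.:
#   sys.exit(0 if coverage_gate(open("coverage.xml").read()) else 1)
```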
Required skills & experience
• 4+ years hands-on experience working with Azure and cloud-native application delivery.
• Deep experience with Azure DevOps (Repos, Pipelines, Artifacts, Boards).
• Strong IaC skills with Terraform, ARM templates, or Bicep.
• Solid experience with CI/CD design and YAML pipeline authoring.
• Practical knowledge of containerization (Docker) and Kubernetes — preferably AKS.
• Scripting skills: PowerShell, Bash, and/or Python for automation.
• Experience with Git workflows (branching strategies, PRs, code reviews).
• Familiarity with configuration management and secrets management (Azure Key Vault, HashiCorp Vault).
• Understanding of networking, identity (Azure AD), and security fundamentals in Azure.
• Strong troubleshooting, debugging, and incident response skills.
• Good collaboration and communication skills; ability to work across teams.
Certification
AZ-400 (Microsoft Certified: DevOps Engineer Expert), AZ-104, AZ-305, or HashiCorp Terraform Associate.
About TVARIT
TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE). With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.
Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.
Key Responsibilities
· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.
· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.
· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following the Medallion architecture.
· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.
· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.
· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.
· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.
· Utilize Docker and Kubernetes for scalable data processing.
· Collaborate with automation team, data scientists and engineers to provide clean, structured data for AI/ML models.
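As a toy illustration of the pre-processing steps named above (deduplication and scaling), here is the same logic on plain Python rows; in the role itself this would run distributed via PySpark DataFrame operations such as dropDuplicates. Function names and data are made up for illustration.

```python
# Toy, non-distributed versions of two pre-processing steps mentioned
# in the responsibilities: deduplication and min-max scaling.
def deduplicate(rows, key):
    """Keep the first row seen for each value of `key`."""
    seen, out = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

def min_max_scale(values):
    """Scale a numeric column into [0, 1]; constant columns map to 0.0."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in values]
```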
Desired Skills and Qualifications
· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.
· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.
· 2+ years of team leadership experience.
· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).
· Experience in containerization (Docker, Kubernetes).
· Strong analytical and problem-solving skills with attention to detail.
· Good to have: MLOps and DevOps experience, including model lifecycle management.
· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.
· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.
📍 Position: IT Intern
👩💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)
🎓 Qualification: B.Tech (IT) / M.Tech (IT) only
📌 Mode: Remote (WFH)
⏳ Shift: Willingness to work in night/rotational shifts
🗣 Communication: Excellent English
Key Responsibilities:
- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
- Support user account management activities using Microsoft Entra ID (Azure AD), Active Directory, and Microsoft 365.
- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).
- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.
- Help create and update technical documentation and knowledge base articles.
- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
💻 Technical Requirements:
- Laptop with an i5 or higher processor
- Reliable internet connectivity with at least 100 Mbps speed
Job Description:
We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems.
This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production.
You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.
Key Responsibilities
AI & Agentic Infrastructure
- Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows.
- Build scalable runtime environments for LLM orchestration frameworks.
- Enable deployment of AI copilots, assistants, and autonomous decision systems.
Common frameworks may include:
- LangChain
- LlamaIndex
- AutoGPT
LLMOps & AI Model Lifecycle
Design and manage LLMOps pipelines for the full lifecycle of large language models:
- Model deployment
- Prompt management
- Versioning
- Evaluation and testing
- Model monitoring
Integrate with AI platforms such as:
- Azure Machine Learning
- Amazon SageMaker
- Vertex AI
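One slice of the lifecycle above, prompt versioning, can be sketched as follows. The class and method names are hypothetical, not a specific platform's API; the idea is that prompts are stored immutably under incrementing version numbers so deployments can pin, compare, and roll back exact prompt text.

```python
# Hypothetical prompt-versioning registry; illustrative only, not the
# API of Azure ML, SageMaker, or Vertex AI.
class PromptRegistry:
    def __init__(self):
        self._store = {}  # prompt name -> list of versioned prompt strings

    def register(self, name: str, text: str) -> int:
        """Store a new immutable version; version numbers start at 1."""
        versions = self._store.setdefault(name, [])
        versions.append(text)
        return len(versions)

    def get(self, name: str, version: int = None) -> str:
        """Fetch a pinned version, or the latest when none is given."""
        versions = self._store[name]
        return versions[-1] if version is None else versions[version - 1]
```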
Retrieval-Augmented Generation (RAG) Infrastructure
Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs.
Responsibilities include:
- Document ingestion pipelines
- Embedding generation workflows
- Knowledge indexing
- Query orchestration
- Retrieval optimization
- Support scalable semantic search architectures.
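The retrieval step above can be sketched in a few lines: rank document embeddings by cosine similarity to the query embedding and return the top-k chunks to pass to the LLM. A real system would use a vector database and learned embeddings; the two-dimensional vectors here are hand-made for illustration.

```python
# Toy retrieval step of a RAG pipeline: brute-force cosine-similarity
# ranking over an in-memory index of (chunk_text, embedding) pairs.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the top-k chunk texts most similar to the query embedding."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```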
Vector Database & Knowledge Infrastructure
Deploy and manage vector databases used for AI applications and semantic retrieval.
Common technologies include:
- Pinecone
- Weaviate
- Milvus
- FAISS
Responsibilities include:
- Index optimization
- Query latency tuning
- Scalable embedding storage
- Hybrid search architecture
Multi-Cloud AI Infrastructure
Design and maintain AI-ready infrastructure across:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
Key responsibilities include:
- GPU infrastructure management
- Distributed training environments
- Hybrid cloud integrations with on-prem data centers
- Infrastructure scaling for AI workloads
Data Platforms & Integration
- Support deployment and optimization of data lakes, data warehouses, and streaming platforms.
- Work with data engineering teams to ensure secure and scalable data infrastructure.
Cloud Architecture & Infrastructure
- Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud.
- Build hybrid cloud architectures integrating on-premise environments with cloud platforms.
- Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads.
DevOps, Platform Engineering & Automation
Build automated cloud infrastructure using modern DevOps practices.
Tools may include:
- Terraform
- Docker
- Kubernetes
- GitHub Actions
Responsibilities include:
- Infrastructure as Code (IaC)
- Automated deployments
- CI/CD pipelines for AI models and services
- Platform reliability and scalability
AI Observability & Monitoring
Implement observability frameworks to monitor AI systems in production.
This includes:
- Model performance monitoring
- Prompt evaluation
- Hallucination detection
- Latency and throughput analysis
- Cost monitoring for LLM usage
Tools may include:
- Arize AI
- WhyLabs
- Weights & Biases
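The cost-monitoring item above largely reduces to token accounting. A minimal sketch, with made-up per-1K-token rates (not any provider's actual pricing):

```python
# Illustrative LLM cost estimator; the default rates are placeholders,
# not real provider pricing.
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  in_rate_per_1k: float = 0.01,
                  out_rate_per_1k: float = 0.03) -> float:
    """Return the estimated USD cost of one LLM call from token counts."""
    return ((prompt_tokens / 1000) * in_rate_per_1k
            + (completion_tokens / 1000) * out_rate_per_1k)
```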
Security, Governance & Responsible AI
Ensure AI systems follow strong governance and security practices.
Responsibilities include:
- Data privacy and compliance
- Model governance frameworks
- Secure model deployment
- Monitoring model bias and drift
- AI risk management
Support enterprise frameworks for Responsible AI and AI compliance.
Data & Security
- Experience with data lake architectures, distributed storage, and ETL pipelines
- Knowledge of data security, encryption, IAM, and compliance frameworks
- Familiarity with AI governance and responsible AI practices
Required Skills
Cloud & Infrastructure
- Strong experience in Azure (must have), AWS or GCP
- Hybrid and multi-cloud architecture
- GPU infrastructure management
DevOps & Automation
- Kubernetes
- Docker
- Terraform
- CI/CD pipelines
AI / ML Platforms
- MLOps pipelines
- Model deployment
- Model monitoring
AI Application Infrastructure
- Vector databases
- RAG pipelines
- LLM orchestration frameworks
Programming
Experience in one or more languages:
- Python
- Go
- Java
- TypeScript
Preferred Qualifications
- Experience building AI copilots or autonomous agents
- Knowledge of GPU infrastructure and distributed model training
- Familiarity with AI evaluation frameworks, model monitoring, drift detection, and AI observability
- Experience building enterprise AI platforms
Education & Experience
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 4–8+ years experience in cloud infrastructure, DevOps, or platform engineering
- Experience working in data-driven or AI-focused environments
What Success Looks Like
- Reliable ML model deployment pipelines
- Reliable infrastructure for LLMs and AI agents
- Scalable RAG knowledge platforms
- Efficient multi-cloud infrastructure management
- Fast deployment cycles for AI products
- Secure and scalable AI-ready cloud platforms
- Strong automation and governance across cloud and AI systems
About Us:
We are hiring for a pre-seed funded startup called Zeromoblt (https://zeromoblt.com/), a high-agency Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.
Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.
What You'll Do
- Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
- Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
- Craft scalable React UIs that power ops dashboards and parent-facing apps.
- Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
- Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
- Shape the engineering roadmap: propose, prioritise, and ship features with founders.
- Mentor juniors while executing solo on high-impact bets—no layers, just results.
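The "dynamic route optimization" problem mentioned above admits classic heuristics such as greedy nearest-neighbor, sketched below on 2-D coordinates. This is a textbook baseline under simplified assumptions, not Zeromoblt's actual algorithm.

```python
# Greedy nearest-neighbor routing heuristic: from the depot, always
# visit the closest unvisited stop next. Fast and simple, but not
# optimal; real routing systems layer traffic, time windows, and
# capacity constraints on top of baselines like this.
import math

def nearest_neighbor_route(depot, stops):
    """Return stops ordered by the nearest-neighbor heuristic."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(current, s))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route
```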
We're Looking For
- 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
- Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
- Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
- Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
- Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.
This Role Is Not For You If…
- You need structured roadmaps, PM hand-holding, or big-tech process.
- Comfort > impact: stable salary over equity upside and chaos.
- You've never worn all hats (dev, ops, product) in a resource-constrained environment.
Why Join Us
- Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
- Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
- Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
- Hungry to Leap? Apply now!
Job Title : Senior DevOps Engineer (Only Mumbai Candidates)
Experience : 5+ Years
Location : Mumbai (On-site)
Notice Period : Immediate to 15 Days
Interview Process : 1 Internal Round + 1 Client Round
Mandatory Skills :
Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.
Role Overview :
We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.
The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.
Key Responsibilities :
- Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
- Deploy and manage microservices on Kubernetes clusters.
- Build and maintain Infrastructure as Code using Terraform and Helm.
- Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
- Implement GitOps workflows using ArgoCD or FluxCD.
- Ensure secure, scalable, and reliable DevOps architecture.
- Implement monitoring and logging using Prometheus, Grafana, or ELK.
Good to Have :
- Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Hiring: IT Operations & Helpdesk Engineer (3–5 Years)
📍 Location: Bangalore / Hybrid
We are looking for a hands-on IT Operations Engineer who will anchor our internal IT helpdesk while managing servers, backups, DR drills, and cloud infrastructure. This role is responsible for day-to-day IT stability across endpoints, servers, and Azure environments.
Key Activities
Internal IT Helpdesk (Primary Anchor)
· Act as the single point of contact for internal IT support.
· Resolve L1/L2 issues (laptops, OS, network, access, software installs).
· Manage onboarding/offboarding IT setup.
· Track tickets, SLAs, and recurring issues.
Infrastructure & Servers
· Install and maintain Windows & Linux servers.
· Maintain the centralized IT asset inventory.
· Support manual and automated application deployments.
· Handle patching, upgrades, performance monitoring.
Cloud Administration (Azure)
· Manage VMs, storage, networking.
· Maintain access controls and security configurations.
Backup & DR Readiness
· Manage and test backup processes.
· Conduct periodic DR drills to support organizational continuity standards.
· Maintain recovery runbooks and documentation.
What We’re Looking For
· Strong Windows Server & Linux hands-on experience.
· Experience managing Azure Cloud infrastructure.
· Practical backup & restore execution experience.
· Strong troubleshooting mindset.
· Process-driven and documentation disciplined.
· Comfortable working with DevOps & Cyber Security teams.
Impact of This Role
· Stable internal IT operations.
· DR-tested infrastructure.
· Reduced downtime and faster issue resolution.
· Strong operational hygiene in a growing environment.
Software Engineer (Backend) – Kotlin & React
About Us
We are a high-agency startup building elegant technological solutions to real-world problems.
Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.
We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.
Role Overview
As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.
This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.
Key Responsibilities
System Development & Architecture
- Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
- Architect systems that are robust, high-performance, and production-ready.
- Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.
Full Stack Development
- Build fast, maintainable front-end applications using React.
- Ensure seamless integration between frontend systems and backend services.
Cloud Infrastructure
- Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
- Implement scalable deployment pipelines, monitoring, and infrastructure optimization.
Product & Technical Collaboration
- Work closely with founders and product stakeholders to translate business problems into technical solutions.
- Contribute actively to product and engineering roadmap decisions.
Performance Optimization
- Continuously improve system performance, scalability, and reliability.
- Implement efficient algorithms and system optimizations to gain a technical advantage.
Engineering Excellence
- Write clean, well-tested, and maintainable code.
- Maintain strong engineering standards across the codebase.
Required Skills & Qualifications
We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.
Core Requirements
- Strong computer science fundamentals (Data Structures, Algorithms, System Design).
- Experience with Kotlin or JVM languages such as Java or Scala.
- Experience building modern React applications.
- Hands-on experience with cloud platforms (AWS / Azure / GCP).
- Experience designing and deploying scalable distributed systems.
- Strong problem-solving and analytical thinking.
Preferred / Bonus Skills
- Experience with Google Maps APIs or geospatial integrations.
- Prior startup experience.
- Contributions to open-source projects.
- Personal side projects demonstrating strong engineering ability.
Ideal Candidate
You will thrive in this role if you:
- Take ownership of problems, not just tasks.
- Are comfortable working in high-ambiguity environments.
- Have a builder mindset and enjoy creating systems from scratch.
- Learn quickly and execute with speed and precision.
This Role May Not Be For You If
- You prefer strict task assignments and detailed specifications before starting work.
- You want to focus only on coding tickets without product involvement.
- You prefer large teams with multiple layers of management.
Why Join Us
- Build 0 → 1 products with massive ownership.
- Work in a flat organization with no unnecessary hierarchy.
- Collaborate directly with founders and core product builders.
- Your contributions will have immediate and visible impact.
- Flexible remote work environment.
- Opportunity to shape the technology, culture, and future of the company.
If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.
Location: Bangalore
Experience required: 7-10 years.
Key skills: .NET core, ASP .NET, Microsoft Azure, MVC, AWS
"At Pace Wisdom Solutions, our .NET team is a dynamic and collaborative group of experts specializing in end-to-end development. With a focus on both front-end and back-end technologies, we leverage the robust .NET framework and Azure to deliver innovative and scalable solutions. Our agile approach ensures adaptability to industry changes, empowering us to provide clients with cutting-edge and tailored applications."
We are seeking a highly skilled and experienced Senior .NET Developer with a minimum of 7 years of hands-on experience. The ideal candidate will possess expertise in both front-end and back-end development, with a strong background in MVC architecture and exposure to Microsoft Azure technologies. The role requires an individual who can work independently, lead a team effectively, and contribute to the successful delivery of projects.
Engineering Culture at Pace Wisdom:
We foster a collaborative and communicative environment where engineers are empowered to share ideas freely. Teamwork is paramount, and we believe the best solutions come from diverse perspectives. We are committed to promoting from within, providing clear career paths and mentorship opportunities to help our engineers reach their full potential. Our culture prioritizes continuous learning and growth, offering a safe space to experiment, innovate, and refine your skills.
Responsibilities:
• Create scalable solutions by understanding business requirements, writing code, and testing according to best practices.
• Own deliverables and collaborate with the team, including our customers, QA, design, and other stakeholders, to drive successful project delivery.
• Advocate for and mentor teams on best practices around documentation, unit testing, code reviews, etc.
• Comply with security policies and processes.
Qualifications:
• 7-10 years of professional experience in developing applications using .NET framework, .NET Core, Azure Services, Entity Framework
• Good knowledge of common software architecture design patterns, Object Oriented Programming, Data structures, Algorithms, Database design patterns and other best practices.
• Exposure to Cloud technologies (AWS, Azure, Google Cloud - at least one of them)
• Exposure to developing SPA on React, Angular or VueJS
• Experience with microservices and messaging systems (RabbitMQ/Kafka)
• Proven ability to lead and mentor development teams.
• Effective communication and interpersonal skills.
About the Company:
Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.
We engage with our clients at various stages:
• Right from the idea stage to scope out business requirements.
• Design & architect the right solution and define tangible milestones.
• Setup dedicated and on-demand tech teams for agile delivery.
• Take accountability for successful deployments to ensure efficient go-to-market Implementations.
Pace Wisdom has been working with Fortune 500 Enterprises and growth-stage startups/SMEs since 2012. We also work as an extended Tech team and at times we have played the role of a Virtual CTO too. We believe in building lasting relationships and providing value-add every time and going beyond business.
Position Title : Senior Data Engineer (Founding Member) - Insurtech Startup
Location : Hyderabad (Onsite)
Immediate to 15 days Joiners
Experience : 5 to 13 Years
Role Summary
We are looking for a Senior Data Engineer who will play a foundational role in:
- Client onboarding from a data perspective
- Understanding complex insurance data flows
- Designing secure, scalable ingestion pipelines
- Establishing strong data modeling and governance standards
This role sits at the intersection of technology, data architecture, security, and business onboarding.
Key Responsibilities
- Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
- Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
- Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
- Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
- Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
- Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
- Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
- Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
- Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
- Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
- Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards
Required Technical Skills
- Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
- Platforms: Azure, AWS, Databricks, Snowflake
- ETL / Orchestration: Airflow or similar frameworks
- Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
- Visualization Exposure: Power BI
- Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
- Integrations: APIs, real-time data streaming, ML model integration exposure
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 5+ years of experience in data engineering or similar roles
- Strong ability to align technical solutions with business objectives
- Excellent communication and stakeholder management skills
What We Offer
- Direct collaboration with the core US data leadership team
- High ownership and trust to manage the function end-to-end
- Exposure to a global environment with advanced tools and best practices
We are seeking a seasoned Senior Developer to join our team. The ideal candidate is a C# expert who doesn't just write code but understands how to orchestrate complex business processes using the Microsoft ecosystem. You will be responsible for building scalable backend services, optimizing SQL databases, and leveraging Azure and Power Automate to deliver end-to-end automation solutions.
Responsibilities:
- Design and maintain robust, high-performance applications using C# and .NET Core.
- Write complex SQL queries, stored procedures, and optimize database schemas for performance and security.
- Deploy and manage cloud resources within Azure (App Services, Functions, Logic Apps).
- Design enterprise-level automated workflows using Microsoft Power Automate, including custom connectors to bridge the gap between Power Platform and legacy APIs.
- Provide technical mentorship, conduct code reviews, and ensure best practices in the Software Development Life Cycle (SDLC).
Technical Skills:
- C# / .NET: 8+ years of deep expertise in ASP.NET MVC, Web API, and Entity Framework.
- Database: Advanced proficiency in SQL Server
- Azure: Hands-on experience with Azure cloud architecture and integration services.
- Power Automate: Proven experience building complex flows, handling error logic, and integrating Power Automate with custom-coded environments.
- DevOps: Familiarity with CI/CD pipelines (Azure DevOps or GitHub Actions).
Company Description: Bits in Glass - India
- Industry Leader:
- Bits in Glass (BIG) has been in business for more than 20 years. In 2021, Bits in Glass joined hands with Crochet Technologies, forming a larger organization under the Bits In Glass brand to better serve customers across the globe.
- Offices across three locations in India: Pune, Hyderabad & Chandigarh.
- Specialized Pega partner since 2017, delivering Pega solutions with deep industry expertise and experience.
- Proudly ranked among the top 30 Pega partners, Bits In Glass has been one of the very few sponsors of the annual PegaWorld event.
- Elite Appian partner since 2008, delivering Appian solutions with deep industry expertise and experience.
- Operating in the United States, Canada, United Kingdom, and India.
- Dedicated global Pega CoE to support our customers and internal dev teams.
- Specializes in Databricks, AI, and cloud-based data engineering to help companies transition from manual to automated workflows.
- Employee Benefits:
- Career Growth: Opportunities for career advancement and professional development.
- Challenging Projects: Work on innovative, cutting-edge projects that make a global impact.
- Global Exposure: Collaborate with international teams and clients to broaden your professional network.
- Flexible Work Arrangements: Support for work-life balance through flexible working conditions.
- Comprehensive Benefits: Competitive compensation packages and comprehensive benefits including health insurance, and paid time off.
- Learning Opportunities: Great opportunity to upskill yourself and work on new technologies like AI-enabled Pega solutions, data engineering, integration, cloud migration, etc.
- Company Culture:
- Collaborative Environment: Emphasizes teamwork, innovation, and knowledge sharing.
- Inclusive Workplace: Values diversity and fosters an inclusive environment where all ideas are respected.
- Continuous Learning: Encourages professional development through ongoing learning opportunities and certifications.
- Core Values:
- Integrity: Commitment to ethical practices and transparency in all business dealings.
- Excellence: Strive for the highest standards in everything we do.
- Client-Centric Approach: Focus on delivering the best solutions tailored to client needs.
Job Description
We are looking for a skilled .NET Full-Stack Developer with expertise in .NET, React.js, and AWS/Azure to join our development team. The ideal candidate should have strong programming skills and experience building scalable web applications using modern technologies.
Key Responsibilities
- Develop and maintain scalable applications using .NET Core.
- Design and implement Microservices architecture and RESTful APIs.
- Build responsive and dynamic user interfaces using React.js.
- Integrate frontend applications with backend APIs.
- Deploy and manage applications on AWS/Azure.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following best development practices.
Required Skills
- Strong experience in .NET development.
- Hands-on experience with Microservices architecture and API development.
- Experience working with React.js, including API integration and design principles.
- Experience with AWS/Azure.
About the Role
Applix is looking for a Python Software Engineer with strong Azure cloud experience to build and operate AI-powered applications and agentic workflows. The engineer will work closely with our enterprise client teams to develop, deploy, and maintain AI solutions running on the Azure platform.
This role combines Python application development, AI platform integration, and cloud deployment responsibilities.
Key Responsibilities
- Build and maintain Python-based services and AI agents
- Develop and manage agentic workflows and automation pipelines
- Deploy and monitor applications on Azure cloud services
- Integrate with Azure AI services such as Azure OpenAI and Azure Document Intelligence
- Manage application deployments using Azure App Services or equivalent cloud platforms
- Monitor system performance, logs, and reliability in production environments
- Work with engineering teams to ensure scalable and secure deployments
- Support CI/CD pipelines and DevOps practices for application delivery
Experience
3–8 years of relevant experience in software engineering and cloud development.
Required Skills
- Strong programming experience in Python
- Experience deploying applications on Microsoft Azure
- Familiarity with Azure App Services or equivalent cloud services
- Understanding of cloud deployment, monitoring, and DevOps practices
- Experience building APIs, automation workflows, or backend services
- Good problem-solving ability and communication skills
- Experience with Azure OpenAI
- Experience with Azure Document Intelligence
- Familiarity with Azure AI Foundry or AI platform services
- Exposure to LLM-based applications or AI workflows
- Experience with CI/CD pipelines and cloud automation
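The automation pipelines and Azure AI integrations described above usually need defensive plumbing around remote calls. A minimal retry-with-backoff helper, sketched in plain Python (the function and "flaky model" names are illustrative, not from any Azure SDK):

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.01):
    """Invoke fn(), retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky model endpoint: fails twice, then succeeds.
state = {"calls": 0}
def flaky_model_call():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient 503")
    return "completion"

print(call_with_retries(flaky_model_call))  # prints: completion
```

In a real agentic workflow the same wrapper would sit around the actual service client call, with delays tuned to the service's rate limits.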
FULL STACK DEVELOPER
JOB DESCRIPTION – FULL STACK DEVELOPER
Location: Bangalore
Key Responsibilities:
Establish processes, SLAs, and escalation protocols for the support & maintenance of web applications
Manage stakeholders with effective communication and collaborate with cross-functional teams to address issues and maintain business continuity.
Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and the latest tech such as Agentic AI and GitHub Copilot.
Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups
Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team
Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows
Assist Software Designer/Implementers with the creation of detailed software design specifications
Participate in the system specification review process to ensure system requirements can be translated into valid software architecture
Integrate internal and external product designs into a cohesive user experience
Identify and keep track of metrics that indicate how software is performing
Handle technical and non-technical queries from the development team and stakeholders
Ensure that all development practices follow best practices and any relevant policies / procedures
Other Duties: Maintain project reporting including dashboards, status reports, roadmaps, burn-down, velocity, and resource utilization.
Own the technical solution and ensure all technical aspects are implemented as designed.
Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning across a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability
Flexibility to work in rotational shifts
Required Qualifications
Previous experience leading full-stack technology projects with scrum teams, including stakeholder management
BTech or MTech in Computer Science or a related field
3-5 years of experience.
Required Knowledge, Skills and Abilities (include any required computer skills, certifications, licenses, languages, etc.):
Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material UI, Bootstrap, TypeScript, SCSS, microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, Agentic AI, GitHub Copilot
Azure DevOps, design systems, micro frontends, data science
Stakeholder management & excellent communication skills.
Must have skills
React - 3 years
React Native - 3 years
Redux - 1 year
Material UI - 1 year
TypeScript - 1 year
Bootstrap - 1 year
Microservices - 2 years
SQL - 1 year
Azure - 1 year
Nice to have skills
.NET Core - 3 years
.NET 8 - 3 years
AWS - 1 year
LINQ - 1 year
Description
We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years’ experience).
Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:
- Machine Learning model development
- Scalable data pipeline development (ETL/ELT)
- Python and SQL
- Cloud platforms such as Azure/AWS/Databricks
- ML deployment environments (SageMaker, Azure ML, etc.)
Kindly note:
- Location: Pune (Work From Office)
- Immediate joiners preferred
While sharing profiles, please ensure the following details are included:
- Current CTC
- Expected CTC
- Notice Period
- Current Location
- Confirmation on Pune WFO comfort
Must have skills
Machine Learning - 6 years
Python - 6 years
ETL(Extract, Transform, Load) - 6 years
SQL - 6 years
Azure - 6 years
Hiring: Cloud Engineer – MLOps Platform 🚨
📍 Location: Bangalore
🧠 Experience: 5–8 Years
We are looking for an experienced Cloud Engineer to support ML teams and drive end-to-end automation for model deployment across modern cloud platforms.
🔹 Tech Stack:
Azure | Databricks | AKS | ARO | Terraform | MLflow | CI/CD
🔹 Key Responsibilities:
• Build and maintain CI/CD and Continuous Training (CT) pipelines using Azure DevOps, GitHub Actions, or Jenkins.
• Deploy Databricks jobs, MLflow models, and microservices on AKS / ARO environments.
• Automate infrastructure using Terraform and GitOps practices.
• Manage Databricks workspaces, AKS clusters, and networking configurations.
• Implement monitoring, logging, and alerting systems for ML workloads.
• Ensure cloud security, governance, and cost optimization best practices.
🔹 Required Skills:
✔ Strong hands-on experience with Azure, AKS, ARO, and Databricks
✔ Experience with MLflow and Kubernetes-based deployments
✔ Proficiency in Python and Bash / PowerShell scripting
✔ Strong understanding of cloud security, infrastructure automation, and distributed systems
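A Continuous Training pipeline like the one described above typically gates model promotion on a metric comparison before deployment. One possible shape of that gate, in plain Python (metric names and the tolerance value are illustrative assumptions, not a prescribed MLflow API):

```python
def should_promote(candidate, production, tolerance=0.01):
    """Promote a retrained model only if it does not regress,
    beyond a small tolerance, on any metric tracked for production."""
    return all(candidate[m] >= production[m] - tolerance for m in production)

prod = {"auc": 0.91, "recall": 0.80}
cand = {"auc": 0.92, "recall": 0.795}  # tiny recall dip, within tolerance
print(should_promote(cand, prod))  # prints: True
```

In practice the metric dictionaries would come from the MLflow tracking server, and a failed gate would stop the CD stage rather than raise.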
Senior Software Engineer (.NET & Azure)
Location: Remote/ Pune / Mohali / Hyderabad
We are looking for a Senior Software Engineer with 5-7 years of professional experience to join our engineering team. You will be a key contributor to our cloud ecosystem, focusing on building high-performance, REST-based ASP.NET Core applications. A major part of this role involves leveraging the Azure platform to ensure our infrastructure is secure, observable, and scalable.
Key Responsibilities:
- API Development: Design and develop scalable, high-availability RESTful APIs using ASP.NET Core.
- Clean Architecture: Apply advanced Dependency Injection (DI) and SOLID principles to maintain a decoupled and testable codebase.
- Identity & Security: Manage Azure App Registrations, including Service Principals, OAuth2 permissions, and Client Secrets, to secure cross-service communication.
- Observability: Implement and monitor application health using Azure Application Insights, creating custom telemetry and alerts to proactively resolve issues.
- Collaboration: Partner with data engineers and front-end teams to integrate diverse cloud components into a cohesive user experience.
Technical Requirements (Must-Have):
- Core Development: 5-7 years of deep expertise in C# and ASP.NET Core (Web API).
- Design Patterns: Expert-level understanding of the Dependency Injection pattern (Autofac, Microsoft.Extensions.DependencyInjection, etc.).
- Azure Infrastructure: Hands-on experience with:
- App Registrations (Identity/Security)
- Application Insights (Monitoring/Telemetry)
- Azure App Services (Deployment/Hosting)
- API Standards: Strong experience in RESTful architecture and JSON-based communication.
Good to Have:
- NoSQL Mastery: Experience with Azure Cosmos DB (document modeling, partitioning, and RU management).
- Big Data Integration: Experience with Azure Databricks (Spark, notebooks, or data processing pipelines).
- Full-Stack Capability: Proficiency in Angular (v14+) or modern React to build and maintain UI components.
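The dependency-injection emphasis above is language-agnostic. A minimal constructor-injection sketch, shown in Python for brevity even though the role itself is C#/ASP.NET Core (all class names are made up):

```python
class SmtpEmailSender:
    """Production dependency (a stand-in; no real SMTP here)."""
    def send(self, to, body):
        return f"smtp->{to}:{body}"

class SignupService:
    def __init__(self, email_sender):
        # The dependency arrives through the constructor, so the
        # service never constructs its own collaborator.
        self._email_sender = email_sender

    def register(self, user):
        return self._email_sender.send(user, "welcome")

class FakeSender:
    """Test double injected in place of the real sender."""
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))
        return "queued"

svc = SignupService(FakeSender())
print(svc.register("ada"))  # prints: queued
```

This is the same decoupling that containers like Autofac or Microsoft.Extensions.DependencyInjection automate: the composition root picks the concrete sender, and tests swap in a fake.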
About the role
Applix is seeking a highly skilled Senior Power BI Developer to join our Hyderabad office on a full-time, work-from-office basis. In this role, you will work directly with Caterpillar’s global analytics and GCIO BI Services teams to design, develop, and maintain enterprise-grade Power BI reports, dashboards, scorecards, and advanced data visualizations. You will operate as a member of a Project/Scrum team within Caterpillar’s technology environment, engaging with business partners and internal support teams to provide data visualization development services for a wide variety of projects and business needs.
The ideal candidate combines deep Power BI expertise with strong backend data engineering skills, and can champion BI COE standards while partnering closely with data scientists, business analysts, and IT professionals across Caterpillar’s global operations. A minimum 5-hour daily overlap with US Central Time is required to ensure seamless collaboration with onshore stakeholders and end users.
Key responsibilities
- Design and develop enterprise-grade Power BI dashboards, reports, and scorecards aligned to business needs.
- Implement BI COE standards, governance, security (RLS), and best practices across BI tools and environments.
- Build and optimize data models, DAX calculations, SQL queries, and data transformation pipelines.
- Enhance performance using aggregation, incremental refresh, storage modes, and query optimization techniques.
- Collaborate with business stakeholders, data engineers, and data scientists to deliver actionable insights.
- Support documentation, training, troubleshooting, and continuous improvement initiatives.
- Drive advanced analytics adoption, CI/CD practices, and mentor junior team members.
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Science, Industrial Engineering, or a related field.
- 5+ years of hands-on experience in Power BI development, including reports, dashboards, enterprise scorecards, and paginated reports.
- Expert-level proficiency in Power BI Desktop, Power BI Service, Power BI Report Server, and Power BI Report Builder.
- Expert working knowledge of DAX, Power Query (M language), and data modeling best practices (star schema, snowflake schema, dimensional modeling).
- Strong backend skills with SQL Server, Azure SQL Database, Azure Synapse Analytics, or Snowflake – including writing complex T-SQL queries, stored procedures, CTEs, and window functions.
- 3+ years of experience in relational database design, data modeling, and structured query language (SQL).
- Hands-on experience with Azure Data Factory (ADF), Azure Data Lake, or similar ETL/ELT tools.
- Experience working in Agile/Scrum methodology, with tools like Azure DevOps, Jira, or ServiceNow.
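The star-schema modeling called out above reduces to facts joined to dimensions. A toy example using Python's built-in sqlite3 module (table and column names are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'parts'), (2, 'service');
    INSERT INTO fact_sales  VALUES (1, 100.0), (1, 50.0), (2, 70.0);
""")
# The classic star-schema query: aggregate the fact table, slice by a dimension.
rows = con.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)  # [('parts', 150.0), ('service', 70.0)]
```

A Power BI model built this way (facts with numeric measures, conformed dimensions for slicing) is also what makes DAX aggregations and RLS filters straightforward.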
Caterpillar-Specific Experience (Strongly Preferred)
- Prior experience within Caterpillar’s BI ecosystem, GCIO BI Services, and governance frameworks.
- Familiarity with multi-tool BI environments (Power BI, Tableau, ThoughtSpot, Cognos, BOBJ).
- Exposure to Caterpillar’s Azure cloud infrastructure, data lakes, and enterprise platforms.
- Understanding of BI COE standards, data governance, naming conventions, and security protocols.
- Domain experience in manufacturing, heavy equipment, construction, or mining industries.
- Experience managing complex, enterprise-grade BI applications integrating multiple data sources.
Preferred Qualifications
- Microsoft PL-300 certification or equivalent.
- Experience with Microsoft Fabric, Azure Databricks, Snowflake/Snowpark.
- Working knowledge of Python or R for advanced analytics.
- Experience with Microsoft Power Platform (Power Apps, Power Automate).
- Knowledge of SSAS Tabular models and XMLA endpoints.
- Experience implementing CI/CD for Power BI using Azure DevOps.
- Familiarity with ETL tools (SnapLogic, SSIS).
- Prior consulting or client-facing delivery experience.
What we offer
- Opportunity to work on high-impact analytics projects for Caterpillar Inc. - a Fortune 100 global leader with $67B+ in annual revenue.
- Direct engagement with Caterpillar’s GCIO BI Services organization and US-based leadership teams.
- Collaborative, innovation-driven work culture at Applix’s Hyderabad office with a team focused on enterprise BI excellence.
- Competitive compensation and benefits package aligned with market standards.
- Career growth with exposure to cutting-edge Microsoft data technologies, Snowflake, and enterprise-scale BI solutions.
- Learning and development support, including Microsoft certification sponsorship (PL-300, DP-500, etc.).
- Opportunity to contribute to Caterpillar’s BI Centre of Excellence standards and shape analytics best practices.
Job Summary:
We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
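The unit-testing-with-mocks responsibility above (JUnit plus a mocking framework in this stack) follows the same test-double pattern in any language; sketched here with Python's unittest.mock for brevity (the charge/gateway names are invented):

```python
from unittest import mock

def charge(gateway, amount):
    """Code under test: validates input, then delegates to an injected gateway."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    return gateway.submit(amount)

gateway = mock.Mock()
gateway.submit.return_value = "accepted"

assert charge(gateway, 10) == "accepted"
gateway.submit.assert_called_once_with(10)  # verify the collaboration
print("mock-based test passed")
```

The Java equivalent with JUnit and Mockito is structurally identical: stub the collaborator's return value, call the unit, then verify the interaction.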
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Hiring: Security Engineer
Company: Pentabay Softwares
Location: Anna Salai, Mount Road
Mode: Full-time
Pentabay Softwares INC is looking for a proactive Security Engineer (2–7 Years Exp) to fortify our global digital solutions. As we scale our footprint in the Healthcare IT sector, you will play a critical role in safeguarding sensitive data (ePHI) and ensuring our cloud-native architectures are resilient against evolving threats.
The Mission
You will be the architect of our defense, bridging the gap between high-speed development and rigorous security standards. Your day-to-day will involve "shifting security left" by embedding DevSecOps practices into our CI/CD pipelines and leading our compliance efforts for SOC 2, ISO 27001, and HIPAA.
Key Responsibilities
Defense & Architecture: Design and maintain secure cloud (AWS/Azure/GCP) and on-prem environments. Implement IAM policies, Zero Trust frameworks, and robust secrets management.
Offensive Testing: Conduct regular vulnerability assessments (VAPT), penetration testing, and code reviews using tools like Burp Suite and Nessus.
DevSecOps & Automation: Integrate SAST/DAST/SCA scanning into engineering workflows. Automate security tasks using Python or Bash.
Incident Response: Monitor SIEM tools (Splunk/CrowdStrike), respond to threats, and develop risk mitigation strategies.
Healthcare Compliance (Plus): Ensure data integrity for HL7/FHIR APIs and maintain HIPAA/HITECH audit readiness for healthcare clients.
What You Bring
Experience: 2–7 years in Information/Application Security with a strong grasp of the OWASP Top 10 and threat modeling (STRIDE).
Technical Depth: Proficiency in network/endpoint security, PKI, encryption standards (TLS/SSL), and container security (Docker/Kubernetes).
Compliance Knowledge: Familiarity with NIST, GDPR, and SOC 2 frameworks.
Tools: Hands-on experience with Metasploit, Wireshark, and Infrastructure-as-Code (Terraform).
Bonus Points: Industry certifications like OSCP, CISSP, or CEH, and experience in Healthcare IT workflows.
Experience in the auditing space (ISO 27001, ISO 9001) preferred.
Why Pentabay?
At Pentabay, we offer more than just a job; we offer a security-first engineering culture.
Growth: A dedicated learning budget for certifications and conferences.
Impact: Work on cutting-edge Healthcare projects that demand the highest levels of data privacy.
Send resumes to: sandhiya.m at pentabay.com
Job opportunity for Developer - Python Full Stack with Siemens at Bangalore.
Interview Process:
1st round of interview - F2F (in-Person)-Technical
2nd round of interview – F2F /Virtual Interview - Technical
3rd round of interview – Virtual Interview – Technical + HR
Job Title / Designation: Developer - Python Full Stack
Employment Type: Full Time, Permanent
Location: Bangalore
Experience: 3-5 Years
Job Description: Developer - Python Full Stack
We are looking for a Python full-stack expert who has proven 5+ years of experience developing automation solutions in Linux-based environments. You should be capable of developing Python-based web applications or automation solutions, with excellent knowledge of database handling and decent knowledge of Kubernetes-based deployment environments.
Required Skills:
- Solid experience in Python back-end technology
- Sound experience in web application development
- Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
- Strong understanding of software design patterns and testing principles
- Ability to learn and adapt to working with multiple programming languages.
- Experience with Docker, ArgoCD, Kubernetes, and Terraform
- Understanding of ETL processes to extract data from different data sources is a plus.
- Proven experience in Linux development environments using Python.
- Excellent knowledge of interacting with database systems (SQL, NoSQL) and web services (REST)
- Experienced in establishing an optimized CI/CD environment relevant to the project.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Excellent debugging skills/strategies.
- Excellent communication skills
- Experienced in working in an Agile environment.
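REST interaction, as listed above, can be demonstrated end-to-end without any framework using Python's WSGI convention, which is the interface Django and Flask apps ultimately expose to their servers (the /health route here is a made-up example):

```python
import json

def app(environ, start_response):
    """Minimal WSGI application returning a JSON body."""
    body = json.dumps({"path": environ["PATH_INFO"], "ok": True}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Drive the app in-process, exactly as a WSGI server or test client would.
captured = {}
def start_response(status, headers):
    captured["status"] = status

result = app({"PATH_INFO": "/health"}, start_response)
print(captured["status"], json.loads(result[0]))
```

Calling the app directly with a hand-built environ dict is also how framework test clients work under the hood, which makes endpoints unit-testable without a running server.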
Nice to have
- Good Knowledge in eclipse IDE, developed add-ons/ plugins on eclipse Platform.
- Knowledge of 93K Semiconductor test platforms
- Good know-how of agile management tools like Jira, Azure DevOps.
- Good knowledge of RHEL
- Knowledge of JIRA administration
About the Role
We're seeking a Junior .NET Developer with 2 years of experience to join our insurtech team. This role offers an opportunity to work with cloud technologies and contribute to our existing codebase and cloud migration initiatives.
Key Responsibilities
- Write clean, maintainable code using C# and .NET (.NET Core, ASP.NET, Web API)
- Develop new features and participate in microservices architecture development
- Write unit and integration tests to ensure code quality
- Work with MS SQL Server - write Stored Procedures, Views, and Functions
- Support Azure cloud integration and automated deployment pipelines using Azure DevOps
- Collaborate with infrastructure teams and senior architects on migration initiatives
- Estimate work, break down deliverables, and deliver to deadlines
- Take ownership of your work with focus on quality and continuous improvement
Requirements
Essential
- 2 years of experience with C# and .NET development
- Strong understanding of OOP concepts and Design Patterns
- MS SQL Server programming experience
- Experience working on critical projects
- Self-starter with strong problem-solving and analytical skills
- Excellent communication and ability to work independently and in teams
Desirable
- Microsoft Azure experience (App Service, Functions, SQL Database, Service Bus)
- Knowledge of distributed systems and microservices architecture
- DevOps and CI/CD pipeline experience (Azure DevOps preferred)
- Front-end development with HTML5, CSS, JavaScript, React
Tech Stack
C#, .NET Framework, WPF, WCF, REST & SOAP APIs, MS SQL Server 2016+, Microsoft Azure, HTML5, CSS, JavaScript, React, Azure DevOps, TFS, GitHub
5+ years of experience as a Senior Analytics or Data Engineer building pipelines, developing data models, and improving BI infrastructure, ideally at a SaaS company.
Core Stack: Expert-level knowledge of SQL and Python.
Platform Expertise: Expert-level knowledge of Snowflake, Azure, Fivetran, and dbt.
Orchestration: Hands-on expertise with at least one orchestration framework such as Airflow, Prefect, or Dagster.
Technical Skills: Solid understanding of data modeling concepts, specifically star schemas and normalized vs. denormalized structures.
Workflow Development: Experience building ELT/ETL workflows and integrating APIs or webhooks.
Analytical Mindset: Proven ability to translate ambiguous business questions into structured analyses.
Soft Skills: Excellent communication skills with the ability to articulate technical problems to non-technical audiences.
Agility: Comfortable working in an agile environment.
Education: Bachelor's Degree in Engineering,
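The ELT/ETL workflow experience above centres on transform steps like the following pure-Python sketch (field names are invented; in a real pipeline this logic would live inside a dbt model or an Airflow task):

```python
def transform(rows):
    """The 'T' of a toy ETL step: normalize strings, coerce types,
    and drop records missing required fields."""
    out = []
    for r in rows:
        if r.get("amount") is None:
            continue  # skip incomplete records rather than fail the batch
        out.append({
            "customer": r["customer"].strip().lower(),
            "amount": round(float(r["amount"]), 2),
        })
    return out

raw = [{"customer": " Acme ", "amount": "10.5"},
       {"customer": "Beta", "amount": None}]
print(transform(raw))  # [{'customer': 'acme', 'amount': 10.5}]
```

Keeping the transform a pure function of its input rows is what makes such steps easy to unit test and safe to re-run idempotently.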
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, and CloudFormation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency in analyzing application, IIS, system, and security logs, as well as CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good Understanding of networking (VNET, subnet, private link, VNET peering).
• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability building and supporting high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: 3 days’ work from office
Strong Azure DevOps Engineer Profiles.
Mandatory (Experience 1) – Must have minimum 1+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
Mandatory (Experience 2) – Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
Mandatory (Experience 3) – Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell.
Mandatory (Experience 4) – Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database.
Mandatory (Experience 5) – Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root-cause analysis.
Mandatory (Note) - Only Male candidates are considered.
Mandatory (Location): The candidate must be currently in Bengaluru.
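Monitoring stacks like the Grafana/Prometheus tooling named above evaluate alert rules of roughly this shape. A plain-Python sketch of an "N consecutive samples over threshold" rule (metric names and thresholds are illustrative):

```python
def fires(samples, threshold, min_consecutive=3):
    """True once `min_consecutive` successive samples exceed `threshold`
    — the usual guard against alerting on a single noisy spike."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= min_consecutive:
            return True
    return False

cpu = [42, 91, 95, 97, 60]   # three consecutive readings above 90
print(fires(cpu, 90))        # prints: True
```

Prometheus expresses the same idea declaratively with an alert rule's `for:` duration; the point is that sustained breach, not a single sample, triggers the incident workflow.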
Job Title: Python Developer (Django / Databricks / Azure)
📍 Location: Bangalore
🕒 Experience: 3–8 Years
💼 Employment Type: FTE
🔹 Job Summary:
We are seeking a skilled Python Developer with strong experience in Django, Flask API development, Databricks, and Azure Cloud. The ideal candidate will be responsible for designing scalable backend systems, developing REST APIs, building data pipelines, and working with cloud-based data platforms.
🔹 Key Responsibilities:
✔ Develop and maintain web applications using Django framework
✔ Design and build RESTful APIs using Flask
✔ Develop and optimize data pipelines using Azure Databricks
✔ Integrate applications with Azure services (Blob, Data Factory, SQL, etc.)
✔ Write clean, scalable, and efficient Python code
✔ Collaborate with frontend, DevOps, and data engineering teams
✔ Perform code reviews and ensure best practices
✔ Troubleshoot, debug, and upgrade existing systems
🔹 Required Skills:
- Strong proficiency in Python programming
- Hands-on experience with Django framework
- Experience building Flask-based REST APIs
- Experience working with Azure Databricks
- Knowledge of Azure Cloud services
- Experience with SQL / NoSQL databases
- Understanding of CI/CD and Git workflows
🔹 Good to Have:
- Experience with PySpark
- Knowledge of microservices architecture
- Docker / Kubernetes exposure
- Experience in data engineering projects
Job Title: Java Backend Developer
Experience: ~3-6 years (Mid-to-Senior)
Employment Type: Full-time, Permanent
Location : Bangalore
Role Overview
As a Java Backend Developer, you’ll be responsible for designing, developing, and maintaining scalable backend systems and microservices. You’ll work with cross-functional teams to build high-performance distributed services, APIs, and data-driven applications that power business solutions.
Key Responsibilities
- Design and implement microservices and backend components using Java (8+) and Spring Boot.
- Build and consume RESTful APIs and integrate with internal/external services.
- Work with event-driven systems and messaging using Apache Kafka (producers/consumers).
- Develop and optimize databases, including SQL (e.g., MySQL/PostgreSQL) and NoSQL (e.g., MongoDB/Cassandra).
- Participate in CI/CD pipelines, automated builds, and deployments using tools like Git, Maven, Jenkins.
- Ensure code quality through unit and integration testing, documentation, and code reviews.
- Collaborate with frontend developers, QA, DevOps, and product teams following Agile methodologies.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- Proven hands-on experience with Core Java and Spring Boot development.
- Strong understanding of microservices architecture, REST APIs, and distributed systems.
- Experience with message queues/event streaming (Apache Kafka).
- Skilled in relational and NoSQL databases and writing optimized queries.
- Comfortable with CI/CD tools (e.g., Git, Maven, Jenkins) and version control.
- Good problem-solving, debugging, and collaboration skills.
Preferred / Nice-to-Have
- Cloud platform experience (AWS / Azure / GCP).
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Knowledge of performance tuning, caching strategies, observability (metrics/logging).
- Agile/Scrum development experience.
About the role:
We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments. The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.
Required Skills & Experience:
● 3 to 6 years of solid hands-on experience in the VAPT domain
● Solid understanding of Web, Android, and iOS application security
● Experience with DevSecOps tools and integrating security into CI/CD
● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models
● Familiarity with bug bounty programs and responsible disclosure practices
● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.
● Good knowledge of API security
● Scripting experience (Python, Bash, or similar) for automation tasks
Preferred Qualifications:
● OSCP, CEH, AWS Security Specialty, or similar certifications
● Experience working in a regulated environment (e.g., FinTech, InsurTech)
Responsibilities:
● Perform security reviews, vulnerability assessments, and penetration testing for Web, Android, iOS, and API endpoints
● Perform threat modelling to anticipate potential attack vectors and improve security architecture on complex or cross-functional components
● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities
● Conduct secure code reviews and red team assessments
● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines
● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.
● Maintain and manage vulnerability scanning infrastructure
● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes
● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring
● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines
● Triage bug bounty reports and coordinate remediation with engineering teams
● Act as the primary responder for external security disclosures
● Maintain documentation and metrics related to bug bounty and penetration testing activities
● Collaborate with developers and architects to ensure secure design decisions
● Lead security design reviews for new features and products
● Provide actionable risk assessments and mitigation plans to stakeholders
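One of the simpler duties above, secret scanning in CI/CD, can be sketched in a few lines of Python. The regex patterns below are illustrative samples only; production scanners such as gitleaks or trufflehog ship far larger, entropy-aware rulesets:

```python
import re

# Illustrative detector patterns (not a production ruleset)
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for every suspected secret in the text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Wired into a pre-commit hook or a pipeline quality gate, a check like this blocks the commit or build when findings are non-empty.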
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to have:
- Prior experience in working with startups or product-based companies
- Experience mentoring tech leads and helping shape engineering culture
- Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture.
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Role Objective
We are looking for a proactive InfoSec Associate to support our compliance and audit functions. You will play a key role in maintaining our ISO standards, handling vendor security assessments, and ensuring our documentation is audit-ready for our banking and NBFC clients.
Key Responsibilities
- Audit Support: Assist in internal and external audits for ISO 27001, SOC2, and ISO 27701.
- Vendor Compliance: Independently handle and respond to detailed Vendor Security Questionnaires from banks and NBFCs.
- Evidence Management: Collect, organize, and present technical audit evidence from engineering and IT teams.
- Policy & Documentation: Help draft and review Security Policies, SOPs, and ISMS documentation.
- Risk Tracking: Track audit observations and manage the Corrective Action Plan (CAPA) to ensure timely remediation.
- Data Privacy: Assist in aligning internal processes with the DPDP Act and GDPR requirements.
Required Skills & Competencies
- Framework Knowledge: Basic understanding of ISO 27001 and Risk Assessment principles.
- Technical Literacy: Ability to understand AWS/Azure cloud security settings from a compliance standpoint.
- Documentation: High proficiency in organizing audit trails and drafting professional security reports.
- Communication: Comfortable interacting with external auditors and internal technical teams.
Preferred Certifications (Good to Have)
- ISO 27001 Internal Auditor
- CompTIA Security+
- CISA (In-progress/Foundation)
Brief Description:
We're seeking an AI/ML Engineer to join our team. As AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
● Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI
● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics
● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
Qualifications & Skills
● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
● Proficiency in programming languages commonly used for AI/ML. Preferably Python
● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
● Strong understanding of machine learning algorithms, statistics, and data structures
● Experience with data preprocessing, data wrangling, and feature engineering
● Knowledge of deep learning architectures, neural networks, and transfer learning
● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.
Key Responsibilities:
• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.
• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.
• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).
• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.
• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.
• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.
• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.
• Optimize models for performance, scalability, and reliability.
• Maintain documentation and promote knowledge sharing within the team.
Mandatory Requirements:
• 4+ years of relevant experience as an AI/ML Engineer.
• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.
• Experience implementing RAG pipelines and prompt engineering techniques.
• Strong programming skills in Python.
• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with vector databases (FAISS, Pinecone, ChromaDB).
• Strong understanding of SQL and database systems.
• Experience integrating AI solutions into BI tools (Power BI, Tableau).
• Strong analytical, problem-solving, and communication skills.
Good to Have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Docker or Kubernetes.
• Exposure to NLP, computer vision, or deep learning use cases.
• Experience in MLOps and CI/CD pipelines
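The retrieval half of a RAG pipeline mentioned above reduces to nearest-neighbour search over embeddings. A dependency-free sketch with toy 3-dimensional vectors follows; in practice the embeddings come from a model and live in a vector database such as FAISS or Pinecone, and the retrieved text is injected into the LLM prompt:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy corpus of (document, pre-computed embedding) pairs (illustrative only)
corpus = [
    ("quarterly revenue report", [0.9, 0.1, 0.0]),
    ("kubernetes upgrade notes", [0.0, 0.2, 0.9]),
    ("sales forecast summary",   [0.8, 0.3, 0.1]),
]

def retrieve(query_embedding: list[float], k: int = 2) -> list[str]:
    """Return the k corpus documents most similar to the query embedding."""
    ranked = sorted(corpus,
                    key=lambda doc: cosine_similarity(query_embedding, doc[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

A vector database replaces the linear scan with an approximate index, but the ranking idea is the same.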
Way2DreamJobs is building a premium cloud mentorship ecosystem focused on real-world Microsoft Azure and Modern Workplace skills.
We are inviting experienced Azure professionals to collaborate as founding weekend mentors for a remote mentorship program.
This is not a traditional full-time job. It is a flexible mentor collaboration model designed for working IT professionals who want to share real enterprise experience.
Responsibilities:
• Conduct weekend mentorship sessions
• Guide learners through practical Azure scenarios
• Support hands-on lab oriented learning
Ideal Profile:
• 4+ years Azure infrastructure experience
• Exposure to Microsoft Intune or M365 device management preferred
• Comfortable guiding professionals in live sessions
Benefits:
• Remote weekend engagement
• Build industry mentor brand authority
• Paid mentorship collaboration (structure discussed during call)
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum/ Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
8+ years of experience in backend software development
Strong proficiency in Java and Python
Proven experience building scalable APIs and data-driven applications
Hands-on experience with cloud services and distributed systems
Solid understanding of databases, microservices, and API performance optimization
Mandatory Skills: Java API AND AWS
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
Brief Description:
We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.
Roles and Responsibilities:
● Design, develop, and maintain scalable web applications
● Build responsive and high-performance user interfaces
● Develop secure and efficient backend services and APIs
● Collaborate with product managers, designers, and QA teams to deliver features
● Write clean, maintainable, and testable code
● Participate in code reviews and contribute to engineering best practices
● Optimize applications for speed, performance, and scalability
● Troubleshoot and resolve production issues
● Contribute to architectural decisions and technical improvements.
Requirements:
● 3 to 5 years of experience in full-stack development
● Strong proficiency in frontend technologies such as React, Angular, or Vue
● Solid experience with backend technologies such as Node.js, .NET, Java, or Python
● Experience in building RESTful APIs and microservices
● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server
● Experience with version control systems like Git
● Familiarity with CI/CD pipelines
● Good understanding of cloud platforms such as AWS, Azure, or GCP
● Strong understanding of software design principles and data structures
● Experience with containerization tools such as Docker
● Knowledge of automated testing frameworks
● Experience working in Agile environments
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
- Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
- Ability and experience to register and deploy ML/AI/GenAI models via Azure ML Studio.
- Working knowledge of deploying models in AKS clusters.
- Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
- Excellent Python skills – environment setup and dependency management, coding as per best practices, and knowledge of automatic code review tools like linting and Black.
- Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
- Experience in orchestrating machine learning pipelines using MLOps best practices.
- Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
- Experience in model monitoring (drift detection and performance monitoring).
- Fundamentals of data engineering.
- Docker-based deployment is good to have.
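The drift-detection requirement above can be illustrated with a Population Stability Index (PSI) check in plain Python. The ten-bucket split and the rough 0.2 alert threshold are common conventions, not a fixed standard:

```python
import math

def psi(expected: list[float], actual: list[float], buckets: int = 10) -> float:
    """Population Stability Index between a training (expected) and a live
    (actual) score distribution. Higher values mean more drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0

    def fractions(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / width), buckets - 1)  # clamp above training max
            idx = max(idx, 0)                              # clamp below training min
            counts[idx] += 1
        # Small floor avoids log(0) for empty buckets
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A PSI above roughly 0.2 is a common signal to investigate retraining; managed options such as Azure ML's data drift monitors implement the same idea end to end.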
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage
● Hands-on experience with CI CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
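The alerting side of the monitoring stack above boils down to evaluating a rule over a sliding window of health checks. A tool-agnostic sketch follows; the window size and 50% failure threshold are arbitrary choices for the example (in Prometheus this would be an alerting rule over a `rate()` expression):

```python
from collections import deque

class ErrorRateAlert:
    """Fire when more than `threshold` of the last `window` checks failed."""

    def __init__(self, window: int = 10, threshold: float = 0.5):
        self.results = deque(maxlen=window)  # deque drops the oldest result automatically
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record one health-check result; return True if the alert should fire."""
        self.results.append(ok)
        if len(self.results) < self.results.maxlen:
            return False  # not enough history yet
        failures = sum(1 for r in self.results if not r)
        return failures / len(self.results) > self.threshold
```

Waiting for a full window before firing is one simple way to avoid paging on a single transient failure.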
15+ years in enterprise product development
5+ years in Director/VP-level product leadership
Proven AI/ML product commercialization experience
Expertise in Industry 4.0 (IoT, predictive maintenance, digital twins)
Hands-on experience with AWS/Azure/GCP cloud platforms
Strong architecture experience in microservices ecosystems
Experience implementing MLOps and DevSecOps frameworks
Experience integrating MES, ERP, SCADA, automation platforms
SaaS business model and enterprise software commercialization expertise
Experience leading large engineering/product teams
JOB DETAILS:
* Job Title: Java Lead - Java (Core & Enterprise), Spring/Micronaut, Kafka - Trivandrum
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 9 to 12 years
* Location: Trivandrum, Thiruvananthapuram
Job Description
Experience
- 9+ years of experience in Java-based backend application development
- Proven experience building and maintaining enterprise-grade, scalable applications
- Hands-on experience working with microservices and event-driven architectures
- Experience working in Agile and DevOps-driven development environments
Mandatory Skills
- Advanced proficiency in core Java and enterprise Java concepts
- Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
- Strong expertise in SQL, including database design, query optimization, and performance tuning
- Hands-on experience with PostgreSQL or other relational database management systems
- Strong experience with Kafka or similar event-driven messaging and streaming platforms
- Practical knowledge of CI/CD pipelines using GitLab
- Experience with Jenkins for build automation and deployment processes
- Strong understanding of GitLab for source code management and DevOps workflows
Responsibilities
- Design, develop, and maintain robust, scalable, and high-performance backend solutions
- Develop and deploy microservices using Spring or Micronaut frameworks
- Implement and integrate event-driven systems using Kafka
- Optimize SQL queries and manage PostgreSQL databases for performance and reliability
- Build, implement, and maintain CI/CD pipelines using GitLab and Jenkins
- Collaborate with cross-functional teams including product, QA, and DevOps to deliver high-quality software solutions
- Ensure code quality through best practices, reviews, and automated testing
Good-to-Have Skills
- Strong problem-solving and analytical abilities
- Experience working with Agile development methodologies such as Scrum or Kanban
- Exposure to cloud platforms such as AWS, Azure, or GCP
- Familiarity with containerization and orchestration tools such as Docker or Kubernetes
Skills: Java, Spring Boot, Kafka development, CI/CD, PostgreSQL, GitLab
Must-Haves
Java Backend (9+ years), Spring Framework/Micronaut, SQL/PostgreSQL, Kafka, CI/CD (GitLab/Jenkins)
Advanced proficiency in core Java and enterprise Java concepts
Strong hands-on experience with Spring Framework and/or Micronaut for building scalable backend applications
Strong expertise in SQL, including database design, query optimization, and performance tuning
Hands-on experience with PostgreSQL or other relational database management systems
Strong experience with Kafka or similar event-driven messaging and streaming platforms
Practical knowledge of CI/CD pipelines using GitLab
Experience with Jenkins for build automation and deployment processes
Strong understanding of GitLab for source code management and DevOps workflows
*******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: only Trivandrum
F2F Interview on 21st Feb 2026
Job Description -
Profile: Senior ML Lead
Experience Required: 10+ Years
Work Mode: Remote
Key Responsibilities:
- Design end-to-end AI/ML architectures including data ingestion, model development, training, deployment, and monitoring
- Evaluate and select appropriate ML algorithms, frameworks, and cloud platforms (Azure, Snowflake)
- Guide teams in model operationalization (MLOps), versioning, and retraining pipelines
- Ensure AI/ML solutions align with business goals, performance, and compliance requirements
- Collaborate with cross-functional teams on data strategy, governance, and AI adoption roadmap
Required Skills:
- Strong expertise in ML algorithms, Linear Regression, and modeling fundamentals
- Proficiency in Python with ML libraries and frameworks
- MLOps: CI/CD/CT pipelines for ML deployment with Azure
- Experience with OpenAI/Generative AI solutions
- Cloud-native services: Azure ML, Snowflake
- 8+ years in data science with at least 2 years in solution architecture role
- Experience with large-scale model deployment and performance tuning
Good-to-Have:
- Strong background in Computer Science or Data Science
- Azure certifications
- Experience in data governance and compliance
The Sr. Consultant, Microsoft AI Solutions drives end-to-end delivery success for assigned Microsoft Copilot and AI solution components. This includes leading solution design activities within engagement scope, aligning stakeholders on requirements and implementation approach, developing delivery-ready artifacts, and executing configuration, integration, and deployment tasks. The role ensures solutions meet security, governance, and operational standards and supports go-live readiness, stabilization, and handoff to operations teams.
This role will also support presales and technical deep-dive sessions (on an as-needed basis) with customers prior to the initiation of delivery engagements, focused on solution feasibility, technical validation, and delivery readiness.
SUMMARY OF ESSENTIAL JOB FUNCTIONS:
Solution Envisioning and Business Alignment
- Lead and support AI solution envisioning activities with customers, including workshops, demonstrations, and technical deep-dive sessions.
- Translate business scenarios and use cases into conceptual solution designs aligned to Microsoft AI products and services (Copilot, Azure, etc.)
- Support technical feasibility and delivery readiness assessments prior to delivery initiation, validating platform fit, approach, and constraints.
- Facilitate alignment with customer stakeholders on solution scope, requirements, architecture approach, and success criteria.
- Develop conceptual designs and delivery-aligned solution definitions to guide successful implementation.
Solution Delivery and Execution
- Lead solution design activities within delivery engagements, translating approved concepts into functional and non-functional requirements.
- Configure, build, and implement Microsoft solutions through Microsoft Copilot, Copilot Studio, Power Platform, and supporting Azure services.
- Integrate Copilot solutions with Microsoft 365, Teams, Microsoft Foundry, and existing enterprise systems and workflows.
- Implement identity, security, governance, and access controls aligned to customer and organizational standards.
- Execute testing, validation, and troubleshooting to ensure solution quality and readiness for production use.
- Support deployment, go-live, and stabilization activities to ensure successful adoption.
Communication and Collaboration
- Serve as the primary delivery lead for assigned solution components or workstreams within an engagement.
- Partner with solution architects and project managers to plan, execute, and track delivery milestones.
- Collaborate with customer technical and business teams to drive alignment, decision-making, and adoption throughout the engagement.
- Communicate delivery status, risks, and dependencies to internal and customer stakeholders.
- Support limited presales and technical deep-dive sessions (on an as-needed basis) to enable solution feasibility, technical validation, and delivery readiness.
Continuous Improvement and Delivery Excellence
- Develop and contribute to delivery artifacts including architecture workshop agendas, diagrams, configuration specifications, runbooks, deployment guides, and validation checklists.
- Capture, sanitize, and contribute reusable solution assets, patterns, and implementation guidance to internal repositories.
- Contribute feedback and lessons learned to improve delivery efficiency, consistency, and quality across similar engagements.
- Support initiatives focused on standardizing delivery approaches and accelerating future implementations.
- Stay current on Microsoft Copilot, Microsoft Foundry, and related platform updates to continuously improve delivery practices.
REQUIRED SKILLS AND EXPERIENCE:
· Bachelor’s degree required; advanced degree or relevant certifications preferred.
· 8+ years of experience in consulting, enterprise architecture, or digital transformation with client-facing responsibilities.
· Experience advising senior leaders on AI, Copilot, or cloud modernization initiatives.
· Strong hands-on expertise with Microsoft Copilot and Copilot Studio.
· Experience designing AI-enabled solutions / automations, or integrating with existing business processes leveraging Microsoft 365, Teams, and Azure services.
· Strong understanding of Identity and Access Management design concepts for Microsoft Copilot and AI agents
· Familiar with agent design patterns & data access patterns.
· Familiar with Azure OpenAI, Azure AI Search, Logic Apps, Azure Functions, and integration architectures. Hands-on experience in integrating with one or more of these services through Copilot Studio is preferred.
· Experience in leading and contributing to presales efforts including readiness assessments, envisioning workshops, and proposal development.
· Microsoft certifications in Microsoft Copilot, Power Platform, Azure, and Microsoft AI are preferred.
· Highly organized, detail-oriented, excellent time management skills, and able to effectively prioritize tasks in a fast-paced, high-volume, and evolving work environment.
· Ability to approach customer requests with a proactive and consultative manner; listen to and understand user requests and needs, and effectively deliver.
· Strong influencing skills to get things done and inspire business transformation.
· Excellent oral, written communication, and presentation skills with an ability to present AI-related concepts to C-Level Executives and non-technical audiences.
· Conflict negotiation and critical thinking skills and agility.
· Ability to travel when needed.
JOB DETAILS:
* Job Title: Principal Data Scientist
* Industry: Healthcare
* Salary: Best in Industry
* Experience: 6-10 years
* Location: Bengaluru
Preferred Skills: Generative AI, NLP & ASR, Transformer Models, Cloud Deployment, MLOps
Criteria:
- Candidate must have 7+ years of experience in ML, Generative AI, NLP, ASR, and LLMs (preferably healthcare).
- Candidate must have strong Python skills with hands-on experience in PyTorch/TensorFlow and transformer model fine-tuning.
- Candidate must have experience deploying scalable AI solutions on AWS/Azure/GCP with MLOps, Docker, and Kubernetes.
- Candidate must have hands-on experience with LangChain, OpenAI APIs, vector databases, and RAG architectures.
- Candidate must have experience integrating AI with EHR/EMR systems, ensuring HIPAA/HL7/FHIR compliance, and leading AI initiatives.
Job Description
Principal Data Scientist
(Healthcare AI | ASR | LLM | NLP | Cloud | Agentic AI)
Job Details
- Designation: Principal Data Scientist (Healthcare AI, ASR, LLM, NLP, Cloud, Agentic AI)
- Location: Hebbal Ring Road, Bengaluru
- Work Mode: Work from Office
- Shift: Day Shift
- Reporting To: SVP
- Compensation: Best in the industry (for suitable candidates)
Educational Qualifications
- Ph.D. or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field
- Technical certifications in AI/ML, NLP, or Cloud Computing are an added advantage
Experience Required
- 7+ years of experience solving real-world problems using:
- Natural Language Processing (NLP)
- Automatic Speech Recognition (ASR)
- Large Language Models (LLMs)
- Machine Learning (ML)
- Preferably within the healthcare domain
- Experience in Agentic AI, cloud deployments, and fine-tuning transformer-based models is highly desirable
Role Overview
This position sits within a healthcare division of Focus Group specializing in medical coding and scribing.
We are building a suite of AI-powered, state-of-the-art web and mobile solutions designed to:
- Reduce administrative burden in EMR data entry
- Improve provider satisfaction and productivity
- Enhance quality of care and patient outcomes
Our solutions combine cutting-edge AI technologies with live scribing services to streamline clinical workflows and strengthen clinical decision-making.
The Principal Data Scientist will lead the design, development, and deployment of cognitive AI solutions, including advanced speech and text analytics for healthcare applications. The role demands deep expertise in generative AI, classical ML, deep learning, cloud deployments, and agentic AI frameworks.
Key Responsibilities
AI Strategy & Solution Development
- Define and develop AI-driven solutions for speech recognition, text processing, and conversational AI
- Research and implement transformer-based models (Whisper, LLaMA, GPT, T5, BERT, etc.) for speech-to-text, medical summarization, and clinical documentation
- Develop and integrate Agentic AI frameworks enabling multi-agent collaboration
- Design scalable, reusable, and production-ready AI frameworks for speech and text analytics
Model Development & Optimization
- Fine-tune, train, and optimize large-scale NLP and ASR models
- Develop and optimize ML algorithms for speech, text, and structured healthcare data
- Conduct rigorous testing and validation to ensure high clinical accuracy and performance
- Continuously evaluate and enhance model efficiency and reliability
Cloud & MLOps Implementation
- Architect and deploy AI models on AWS, Azure, or GCP
- Deploy and manage models using containerization, Kubernetes, and serverless architectures
- Design and implement robust MLOps strategies for lifecycle management
Integration & Compliance
- Ensure compliance with healthcare standards such as HIPAA, HL7, and FHIR
- Integrate AI systems with EHR/EMR platforms
- Implement ethical AI practices, regulatory compliance, and bias mitigation techniques
Collaboration & Leadership
- Work closely with business analysts, healthcare professionals, software engineers, and ML engineers
- Implement LangChain, OpenAI APIs, vector databases (Pinecone, FAISS, Weaviate), and RAG architectures
- Mentor and lead junior data scientists and engineers
- Contribute to AI research, publications, patents, and long-term AI strategy
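The vector-database and RAG item above reduces to a retrieve-then-prompt loop. The sketch below is purely illustrative: a toy bag-of-words `embed` stands in for a real embedding model, a Python list stands in for a vector database such as Pinecone or FAISS, and the clinical note strings are invented; only the retrieval shape carries over.

```python
from math import sqrt

def embed(text: str) -> dict[str, float]:
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    counts: dict[str, float] = {}
    for token in text.lower().split():
        counts[token] = counts.get(token, 0.0) + 1.0
    return counts

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

notes = [
    "patient reports chest pain radiating to left arm",
    "routine follow-up for seasonal allergies",
    "ECG shows ST elevation consistent with myocardial infarction",
]
# RAG step: retrieved context is prepended to the LLM prompt
context = retrieve("chest pain cardiac workup", notes, k=2)
prompt = "Answer using only this context:\n" + "\n".join(context)
```

In a production stack the embedding call, the similarity search, and the prompt assembly are each replaced by the managed service (embedding API, vector store, LLM endpoint), but the data flow is the same.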
Required Skills & Competencies
- Expertise in Machine Learning, Deep Learning, and Generative AI
- Strong Python programming skills
- Hands-on experience with PyTorch and TensorFlow
- Experience fine-tuning transformer-based LLMs (GPT, BERT, T5, LLaMA, etc.)
- Familiarity with ASR models (Whisper, Canary, wav2vec, DeepSpeech)
- Experience with text embeddings and vector databases
- Proficiency in cloud platforms (AWS, Azure, GCP)
- Experience with LangChain, OpenAI APIs, and RAG architectures
- Knowledge of agentic AI frameworks and reinforcement learning
- Familiarity with Docker, Kubernetes, and MLOps best practices
- Understanding of FHIR, HL7, HIPAA, and healthcare system integrations
- Strong communication, collaboration, and mentoring skills
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
JOB DETAILS:
* Job Title: Lead I - Azure, Terraform, GitLab CI
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 3-5 years
* Location: Trivandrum/Pune
Job Description
Job Title: DevOps Engineer
Experience: 4–8 Years
Location: Trivandrum & Pune
Job Type: Full-Time
Mandatory skills: Azure, Terraform, GitLab CI, Splunk
Job Description
We are looking for an experienced and driven DevOps Engineer with 4 to 8 years of experience to join our team in Trivandrum or Pune. The ideal candidate will take ownership of automating cloud infrastructure, maintaining CI/CD pipelines, and implementing monitoring solutions to support scalable and reliable software delivery in a cloud-first environment.
Key Responsibilities
- Design, manage, and automate Azure cloud infrastructure using Terraform.
- Develop scalable, reusable, and version-controlled Infrastructure as Code (IaC) modules.
- Implement monitoring and logging solutions using Splunk, Azure Monitor, and Dynatrace.
- Build and maintain secure and efficient CI/CD pipelines using GitLab CI or Harness.
- Collaborate with cross-functional teams to enable smooth deployment workflows and infrastructure updates.
- Analyze system logs and performance metrics to troubleshoot and optimize performance.
- Ensure infrastructure security, compliance, and scalability best practices are followed.
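The log-analysis responsibility above usually starts with a severity breakdown, the plain-text equivalent of a basic Splunk `stats count by level` query. The sketch below is a minimal stand-in: the timestamp-level-message log format, the field names, and the sample lines are all assumptions for illustration.

```python
import re
from collections import Counter

# Assumed log shape: "<iso-timestamp> <LEVEL> <message>"
LOG_PATTERN = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def error_summary(lines: list[str]) -> dict[str, int]:
    """Count log lines per severity level, the first step of most triage queries."""
    counts: Counter[str] = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m:
            counts[m.group("level")] += 1
    return dict(counts)

sample = [
    "2024-05-01T10:00:00Z INFO service started",
    "2024-05-01T10:00:01Z ERROR connection refused by upstream",
    "2024-05-01T10:00:02Z ERROR connection refused by upstream",
    "2024-05-01T10:00:03Z WARN retrying in 5s",
]
```

Splunk, Azure Monitor, and Dynatrace each expose this aggregation natively; the value of knowing the raw version is being able to validate what the tool reports.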
Mandatory Skills
Candidates must have hands-on experience with the following technologies:
- Azure – Cloud infrastructure management and deployment
- Terraform – Infrastructure as Code for scalable provisioning
- GitLab CI – Pipeline development, automation, and integration
- Splunk – Monitoring, logging, and troubleshooting production systems
Preferred Skills
- Experience with Harness (for CI/CD)
- Familiarity with Azure Monitor and Dynatrace
- Scripting proficiency in Python, Bash, or PowerShell
- Understanding of DevOps best practices, containerization, and microservices architecture
- Exposure to Agile and collaborative development environments
Skills Summary
Azure, Terraform, GitLab CI, Splunk (Mandatory) Additional: Harness, Azure Monitor, Dynatrace, Python, Bash, PowerShell
Skills: Azure, Splunk, Terraform, GitLab CI
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum/Pune
Company Description:
NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.
Brief Description:
NonStop io is seeking a proficient .NET Developer to join our growing team. You will be responsible for developing, enhancing, and maintaining scalable applications using .NET technologies. This role involves working on a healthcare-focused product and requires strong problem-solving skills, attention to detail, and a passion for software development.
Responsibilities:
- Design, develop, and maintain applications using .NET Core/.NET Framework, C#, and related technologies
- Write clean, scalable, and efficient code while following best practices
- Develop and optimize APIs and microservices
- Work with SQL Server and other databases to ensure high performance and reliability
- Collaborate with cross-functional teams, including UI/UX designers, QA, and DevOps
- Participate in code reviews and provide constructive feedback
- Troubleshoot, debug, and enhance existing applications
- Ensure compliance with security and performance standards, especially for healthcare-related applications
Qualifications & Skills:
- Strong experience in .NET Core/.NET Framework and C#
- Proficiency in building RESTful APIs and microservices architecture
- Experience with Entity Framework, LINQ, and SQL Server
- Familiarity with front-end technologies like React, Angular, or Blazor is a plus
- Knowledge of cloud services (Azure/AWS) is a plus
- Experience with version control (Git) and CI/CD pipelines
- Strong understanding of object-oriented programming (OOP) and design patterns
- Prior experience in healthcare tech or working with HIPAA-compliant systems is a plus
Why Join Us?
- Opportunity to work on a cutting-edge healthcare product
- A collaborative and learning-driven environment
- Exposure to AI and software engineering innovations
- Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Role Overview
We are hiring on behalf of Humming Apps Technologies LLP, which is seeking a Senior Threat Modeler to join its security team and act as a strategic bridge between architecture and defense. This role focuses on proactively identifying vulnerabilities during the design phase to ensure applications, APIs, and cloud infrastructures are secure by design.
The position requires thinking from an attacker’s perspective to analyze trust boundaries, map attack paths, and influence the overall security posture of next-generation AI-driven and cloud-native systems. The goal is not only to detect issues but to prevent risks before implementation.
Key Responsibilities
Architectural Analysis
• Lead deep-dive threat modeling sessions across applications, APIs, microservices, and cloud-native environments
• Perform detailed reviews of system architecture, data flows, and trust boundaries
Threat Modeling Frameworks & Methodologies
• Apply industry-standard frameworks including STRIDE, PASTA, ATLAS, and MITRE ATT&CK
• Identify sophisticated attack vectors and model realistic threat scenarios
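The STRIDE step above is often bootstrapped from the classic per-element mapping in Microsoft's threat-modeling guidance (external entities attract Spoofing/Repudiation; processes attract all six categories; data stores and data flows a subset). The sketch below encodes that mapping; the element names in `diagram` are hypothetical, and a real session would refine these candidate lists against actual trust boundaries.

```python
# STRIDE-per-element: which threat categories conventionally apply to each
# data-flow-diagram element type, per classic Microsoft threat-modeling guidance.
STRIDE_BY_ELEMENT = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"},
    "data_store": {"Tampering", "Repudiation",
                   "Information Disclosure", "Denial of Service"},
    "data_flow": {"Tampering", "Information Disclosure", "Denial of Service"},
}

def enumerate_threats(elements: dict[str, str]) -> dict[str, set[str]]:
    """Map each named element in a data-flow diagram to its candidate STRIDE threats."""
    return {name: STRIDE_BY_ELEMENT[kind] for name, kind in elements.items()}

# Hypothetical three-element diagram for illustration
diagram = {"user": "external_entity", "api_gateway": "process", "orders_db": "data_store"}
threats = enumerate_threats(diagram)
```

The output is a starting checklist, not a finding: each candidate threat still needs to be argued for or dismissed against the concrete architecture.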
Security Design & Risk Mitigation
• Detect weaknesses during the design stage
• Provide actionable and prioritized mitigation recommendations
• Strengthen security posture through secure-by-design principles
Collaborative Security Integration
• Work closely with architects and developers during design and build phases
• Embed security practices directly into the SDLC
• Ensure security is incorporated early rather than retrofitted
Communication & Enablement
• Facilitate threat modeling demonstrations and walkthroughs
• Present findings and risk assessments to stakeholders
• Translate complex technical risks into clear, business-relevant insights
• Educate teams on secure design practices and emerging threats
Required Qualifications
Experience
• 5–10 years of dedicated experience in threat modeling, product security, or application security
Technical Expertise
• Strong understanding of software architecture and distributed systems
• Experience designing and securing RESTful APIs
• Hands-on knowledge of cloud platforms such as AWS, Azure, or GCP
Modern Threat Knowledge
• Expertise in current attack vectors including OWASP Top 10
• Understanding of API-specific threats
• Awareness of emerging risks in AI/LLM-based applications
Tools & Practices
• Practical experience with threat modeling tools
• Proficiency in technical diagramming and system visualization
Communication
• Excellent written and verbal English communication skills
• Ability to collaborate across engineering teams and stakeholders in different time zones
Preferred Qualifications
• Experience in consulting or client-facing professional services roles
• Industry certifications such as CISSP, CSSLP, OSCP, or equivalent
🚀 Hiring: Data Engineer ( Azure )
⭐ Experience: 5+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bangalore
⭐ Work Mode:- Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence
We are looking for a Databricks Data Engineer to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.
🔹 Key Responsibilities
- Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)
- Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing
- Develop Structured Streaming pipelines with watermarking, late data handling & restart safety
- Implement declarative pipelines using Lakeflow
- Design idempotent, replayable pipelines with safe backfills
- Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)
- Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications
- Package and deploy using Databricks Repos & Asset Bundles (CI/CD)
- Ensure governance using Unity Catalog and embedded data quality checks
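The "idempotent, replayable pipelines" item above comes down to merging by key instead of blindly appending, so that re-running a failed batch (or a backfill) cannot create duplicates. On Databricks this is typically a Delta `MERGE INTO`; the plain-Python sketch below uses a dict as a stand-in for the target table purely to show the invariant.

```python
def upsert(target: dict[str, dict], batch: list[dict], key: str = "id") -> None:
    """Merge a batch into the target keyed by `key`; replaying a batch is a no-op."""
    for record in batch:
        target[record[key]] = record  # last write wins, never duplicated

store: dict[str, dict] = {}
batch = [{"id": "a", "value": 1}, {"id": "b", "value": 2}]
upsert(store, batch)
upsert(store, batch)  # replay after a failure: the result is unchanged
```

The same reasoning motivates checkpointing in Autoloader/Structured Streaming: offsets recorded per micro-batch let a restarted job resume exactly where it stopped instead of reprocessing and double-writing.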
✅ Mandatory Skills (Must Have)
- Databricks & Delta Lake (Advanced Optimization & Performance Tuning)
- Structured Streaming & Autoloader Implementation
- Databricks SQL (DBSQL) & Data Modeling for Analytics
- 3+ years hands-on Azure cloud & automation experience.
- Experience managing high-availability enterprise systems.
- Microsoft Azure (AKS, VNets, App Gateway, Load Balancers).
- Kubernetes (AKS) & Docker.
- Networking (VPN, DNS, routing, firewalls, NSGs).
- Infra-as-Code (Terraform / Bicep optional).
- Monitoring tools: Azure Monitor, Grafana, Prometheus.
- CI/CD: Azure DevOps, GitLab/Jenkins (added advantage).
- Security: Key Vault, certificates, encryption, RBAC.
- Understanding of PostgreSQL/PostGIS networking.
- Design and manage Azure infrastructure (VMs, VNets, NSGs, Load Balancers, AKS, Storage).
- Deploy and maintain AKS workloads for NiFi, PostGIS, and microservices.
- Architect secure network topology including VNet peering, VPNs, Private Endpoints, DNS & Zero Trust policies.
- Implement monitoring and alerting using Azure Monitor, Log Analytics, Grafana & Prometheus.
- Ensure high uptime, DR planning, backup and failover strategies.
- Automate deployments with Azure DevOps, Helm, ArgoCD & GitOps principles.
- Enforce security, RBAC, compliance, and audit standards across environments.
- Good to have: knowledge/experience in Linux administration (Ubuntu/Debian).
Job Details
- Job Title: Specialist I - Software Engineering-.Net Fullstack Lead-TVM
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-9 years
- Employment Type: Full Time
- Job Location: Trivandrum, Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
· Senior/lead .NET developer with a minimum of 5 years' experience, including the full development lifecycle and post-live support.
· Significant experience delivering software using Agile iterative delivery methodologies.
· JIRA knowledge preferred.
· Excellent ability to understand requirement/story scope and visualise technical elements required for application solutions.
· Ability to clearly articulate complex problems and solutions in terms that others can understand.
· Extensive experience with .NET backend API development.
· Significant experience of pipeline design, build and enhancement to support release cadence targets, including Infrastructure as Code (preferably Terraform).
· Strong understanding of HTML and CSS, including cross-browser compatibility and performance.
· Excellent knowledge of unit and integration testing techniques.
· Azure knowledge (Web/Container Apps, Azure Functions, SQL Server).
· Kubernetes / Docker knowledge.
· Knowledge of JavaScript UI frameworks, ideally Vue.
· Extensive experience with source control (preferably Git).
· Strong understanding of RESTful services (JSON) and API Design.
· Broad knowledge of Cloud infrastructure (PaaS, DBaaS).
· Experience of mentoring and coaching engineers operating within a co-located environment.
Skills: .NET Fullstack, Azure CloudFormation, JavaScript, Angular
Must-Haves:
.Net (5+ years), Agile methodologies, RESTful API design, Azure (Web/Container Apps, Functions, SQL Server), Git source control
******
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Trivandrum
F2F Weekend Interview on 14th Feb 2026