Role Overview
The Azure Presales Engineer is responsible for engaging with customers to understand their business and technical requirements and translating them into well-architected Microsoft Azure solutions. This role plays a key part in cloud transformation initiatives by supporting presales activities, building solution proposals, responding to RFPs, and ensuring a smooth transition from presales to delivery.
Key Responsibilities
- Participate in customer discovery sessions to gather technical and business requirements
- Design Azure cloud architectures across IaaS, PaaS, and hybrid environments following best practices
- Prepare technical solution proposals, architectures, BOMs, and presales documentation
- Support RFP and RFQ responses with detailed technical inputs and cost estimations
- Deliver Azure solution demonstrations, workshops, and technical presentations to customers
- Collaborate closely with sales and delivery teams to ensure accurate solution design and handover
- Stay updated with Azure services, licensing models, pricing, and new feature releases
- Work with Microsoft account teams for co-selling opportunities, funding programs, and alignment
- Contribute to reusable presales assets, templates, and solution accelerators
Required Qualifications
- 2–3+ years of experience in Azure cloud engineering or presales roles
- Strong hands-on understanding of Azure core services including compute, storage, networking, security, IAM, monitoring, backup, and disaster recovery
- Experience in preparing technical proposals, SOWs, and solution designs
- Strong communication, presentation, and customer-facing skills
- Ability to translate business needs into effective cloud solutions
- Experience working with or for a Microsoft Partner is a strong plus
Preferred Certifications
- AZ-104, AZ-305, AZ-900, AZ-700, AZ-500 (any relevant Azure certifications)
Description
As a Power Apps Developer, you will be at the forefront of crafting innovative, low‑code solutions that streamline business processes and empower end‑users across the organization. You will collaborate closely with functional analysts, business stakeholders, and fellow developers to translate complex requirements into intuitive, scalable applications on the Microsoft Power Platform. The role offers a dynamic environment where continuous learning is encouraged, providing access to the latest Power Apps features, Azure services, and integration techniques. You will contribute to a culture of knowledge sharing, participate in code reviews, and mentor junior team members, ensuring high‑quality deliverables that drive operational efficiency and measurable business impact.
Requirements:
- 5–15 years of experience developing enterprise‑grade solutions using Microsoft Power Apps, Power Automate, and Power BI.
- Strong proficiency in Canvas and Model‑Driven apps, Common Data Service (Dataverse), and integration with Azure services (e.g., Azure Functions, Logic Apps).
- Solid understanding of relational databases, SQL, and data modeling concepts.
- Experience with JavaScript, TypeScript, and RESTful APIs for extending Power Apps functionality.
- Excellent problem‑solving abilities, strong communication skills, and a collaborative mindset.
- Relevant certifications such as Microsoft Power Platform Developer Associate (PL‑400) are a plus.
Roles and Responsibilities:
- Design, develop, and deploy custom Power Apps solutions that meet business requirements and adhere to best practices.
- Create and maintain automated workflows using Power Automate to streamline repetitive tasks and improve efficiency.
- Integrate Power Apps with external systems via connectors, APIs, and Azure services to ensure seamless data flow.
- Perform performance tuning, debugging, and troubleshooting of applications to ensure optimal user experience.
- Collaborate with business analysts and stakeholders to gather requirements, provide technical guidance, and deliver prototypes.
- Conduct code reviews, enforce governance standards, and contribute to the development of a reusable component library.
- Stay updated with the latest Power Platform releases, evaluate new features, and recommend adoption strategies.
- Provide training and mentorship to junior developers and end‑users to foster platform adoption.
Must have skills
Power Apps - 5 years
Microsoft Power Automate - 1 year
Nice to have skills
Canvas Apps Development and Scripting - 4 years
SQL - 2 years
SharePoint APIs - 1 year
Power Fx - 2 years
C# - 3 years
RESTful APIs - 2 years
🚀 Hiring: AI/ML and Gen AI Engineer
⭐ Experience: 5+ Years
⭐ Work Mode: Remote
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
🌟 About the Role
We are looking for a highly skilled AI/ML Software Engineer to design, build, and productionize enterprise-grade AI solutions. This role focuses on Generative AI, RAG systems, and AI agent–driven automation, with deployment on Microsoft Azure.
You will collaborate with cross-functional teams including architects, engineers, and business stakeholders to deliver scalable and secure AI solutions that create real business impact.
🔑 Mandatory Skills (Must Have)
- ✅ Azure AI Ecosystem (Azure Machine Learning, Azure OpenAI, Cognitive Services)
- ✅ Generative AI & RAG Systems (vector embeddings, retrieval pipelines)
- ✅ Strong Software Engineering + MLOps (CI/CD, containerization, scalable deployments)
💼 Key Responsibilities
- Design, develop, and deploy AI/ML models in production environments
- Build and optimize RAG-based applications and AI agent workflows
- Develop scalable data pipelines and integrate with enterprise systems
- Implement MLOps practices for continuous deployment and monitoring
- Work with big data tools to process large-scale datasets
- Ensure security, scalability, and performance of AI systems
- Collaborate with stakeholders to translate business problems into AI solutions
🧠 Required Experience & Skills
- 5–8 years of hands-on experience in AI/ML development
- Strong programming and software engineering expertise
- Experience with Azure services (ML, Data Lake, OpenAI, Cognitive Services)
- Knowledge of vector databases and embedding models
- Experience with Databricks, Azure Data Factory, or Kafka
- Familiarity with multi-agent systems / agentic AI frameworks
- Proficiency in TensorFlow, PyTorch, Keras, or Scikit-learn
- Background in NLP, Computer Vision, or Deep Learning
- Experience with SQL/NoSQL databases and ETL pipelines
- Strong analytical and problem-solving skills
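The RAG work described above hinges on one step: embedding a query, scoring it against stored document vectors, and handing the top matches to the generator. A minimal sketch of that retrieval step follows; it uses a toy bag-of-words "embedding" purely for illustration, whereas a real system would use a trained embedding model (e.g. via Azure OpenAI) and a vector database.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' -- a stand-in for a real
    embedding model such as one served by Azure OpenAI."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    """Rank documents by similarity to the query and return the top-k --
    the retrieval step that feeds context into the generator prompt."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Azure OpenAI provides hosted GPT models",
    "Databricks runs Spark workloads",
    "Vector embeddings enable semantic retrieval",
]
print(retrieve("semantic retrieval with embeddings", docs, k=1))
# ['Vector embeddings enable semantic retrieval']
```

In production the `retrieve` result is concatenated into the LLM prompt; the scoring and storage would live in a vector database rather than in-process lists.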
Senior Data Engineer (Azure Databricks)
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
- Work extensively with PySpark notebooks within Databricks for data processing and transformation
- Build and optimize batch data processing workflows
- Develop and manage data integrations using Azure Functions and Logic Apps
- Write efficient and optimized SQL queries for data extraction and transformation
Required Skills:
- Strong hands-on experience with Azure Databricks, PySpark, and SQL
- Experience working with batch processing frameworks
- Proficiency in building and managing data pipelines in Azure ecosystem
Good to Have:
- Experience with Python
Mandatory Requirement:
- Candidate must have hands-on experience working with PySpark notebooks in Databricks
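The transformation work in a Databricks notebook usually reduces to a filter/group/aggregate shape. As a rough sketch of that shape, without requiring a Spark cluster, here is the same logic in plain Python, with the approximate PySpark equivalent noted in a comment (the column names are illustrative):

```python
# A Spark-free stand-in for a typical Databricks notebook cell: the same
# filter/group/aggregate shape you would express with PySpark DataFrames.
from collections import defaultdict

rows = [
    {"region": "east", "amount": 120.0, "status": "ok"},
    {"region": "east", "amount": 80.0,  "status": "failed"},
    {"region": "west", "amount": 200.0, "status": "ok"},
    {"region": "east", "amount": 50.0,  "status": "ok"},
]

def transform(rows):
    """Keep valid rows and total amounts per region
    (~ df.filter(col('status') == 'ok').groupBy('region').sum('amount'))."""
    totals = defaultdict(float)
    for r in rows:
        if r["status"] == "ok":          # drop failed records
            totals[r["region"]] += r["amount"]
    return dict(totals)

print(transform(rows))   # {'east': 170.0, 'west': 200.0}
```

In a real pipeline this cell would read from and write to Delta tables, and the aggregation would run distributed across the cluster.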
Job Description
Position Title: IT Intern (Full Time)
Department: Information Technology
Work Mode: Work From Home (WFH)
Educational Qualification: B.Tech (IT) / M.Tech (IT)
Shift: Rotational Shifts (6am to 3pm, 2pm to 11pm, and 10pm to 7am)
Role Summary
The IT Intern will support day-to-day IT operations and assist in maintaining the organization’s IT infrastructure. This role provides structured exposure to desktop support, cloud platforms, user management, server infrastructure, and IT security practices under the guidance of senior IT team members.
Key Responsibilities
- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
- Support user account management activities using Microsoft Entra ID (formerly Azure Active Directory), Active Directory, and Microsoft 365.
- Assist the IT team in configuring, monitoring, and supporting AWS cloud services, including EC2, S3, IAM, and WorkSpaces.
- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
- Assist with data backups, basic disaster recovery tasks, and implementation of security procedures in line with company policies.
- Create, update, and maintain technical documentation, SOPs, and knowledge base articles.
- Collaborate with internal teams to support system upgrades, IT infrastructure improvements, and ongoing IT projects.
- Adhere to company IT policies, data security standards, and confidentiality requirements.
Required Skills & Competencies
- Basic understanding of IT infrastructure, networking concepts, and operating systems
- Familiarity with cloud platforms such as AWS and/or Microsoft Azure
- Fundamental knowledge of Active Directory and user access management
- Strong willingness to learn and adapt to new technologies
- Good analytical, problem-solving, and communication skills
- Ability to work independently in a remote environment
Technical Requirements
- Personal laptop/desktop with required specifications
- Reliable internet connectivity to support remote work
Learning & Development Opportunities
- Hands-on exposure to enterprise IT environments
- Practical experience with cloud technologies and infrastructure support
- Mentorship from experienced IT professionals
- Opportunity to develop technical, documentation, and operational skills
We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.
This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.
Responsibilities
- Design, develop, and implement machine learning models and algorithms to solve complex business problems.
- Collaborate with data scientists to transition models from research and development into production-ready systems.
- Build and maintain scalable data pipelines for ML model training and inference using Databricks.
- Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
- Deploy and manage ML models in production environments on Azure, leveraging services such as:
- Azure Machine Learning
- Azure Kubernetes Service (AKS)
- Azure Functions
- Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
- Ensure the reliability, performance, and scalability of ML systems in production.
- Monitor model performance, detect model drift, and implement retraining strategies.
- Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
- Document model architecture, data flows, and operational procedures.
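Detecting model drift, as the responsibilities above call for, comes down to comparing a live feature distribution against the training-time baseline. A minimal sketch using a standardized mean shift follows; production setups would typically use PSI or KS tests and managed monitors (e.g. Azure ML data drift detection), and all thresholds here are illustrative.

```python
import statistics

def drift_score(baseline, live):
    """Standardized mean shift between training-time and live feature
    values; real systems often use PSI or KS tests instead."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma if sigma else float("inf")

def has_drifted(baseline, live, threshold=2.0):
    """Flag drift when the live mean sits more than `threshold`
    baseline standard deviations away from the training mean."""
    return drift_score(baseline, live) > threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]   # feature values seen at training time
stable   = [10.2, 9.8, 10.1]              # similar distribution
shifted  = [25.0, 26.0, 24.5]             # clearly drifted

print(has_drifted(baseline, stable))      # False
print(has_drifted(baseline, shifted))     # True
```

A drift flag like this would feed the retraining strategy: alert first, then trigger an automated retraining pipeline once drift persists.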
Qualifications
Education
- Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.
Experience
- Minimum 3+ years of professional experience as an ML Engineer or in a similar role.
Required Skills
- Strong proficiency in Python for data manipulation, machine learning, and scripting.
- Hands-on experience with machine learning frameworks, such as:
- Scikit-learn
- TensorFlow
- PyTorch
- Keras
- Demonstrated experience with MLflow for:
- Experiment tracking
- Model management
- Model deployment
- Proven experience working with Microsoft Azure cloud services, specifically:
- Azure Machine Learning
- Azure Databricks
- Related compute and storage services
- Solid experience with Databricks for:
- Data processing
- ETL pipelines
- ML model development
- Strong understanding of MLOps principles and practices, including:
- CI/CD for ML
- Model versioning
- Model monitoring
- Model retraining
- Experience with containerization and orchestration technologies, including:
- Docker
- Kubernetes (especially AKS)
- Familiarity with SQL and data warehousing concepts.
- Experience working with large datasets and distributed computing frameworks.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
Nice-to-Have Skills
- Experience with other cloud platforms (AWS or GCP).
- Knowledge of big data technologies such as Apache Spark.
- Experience with Azure DevOps for CI/CD pipelines.
- Familiarity with real-time inference patterns and streaming data.
- Understanding of Responsible AI principles, including fairness, explainability, and privacy.
Certifications (Preferred)
- Microsoft Certified: Azure AI Engineer Associate
- Databricks Certified Machine Learning Associate (or higher)
🚀 Hiring: Data Engineer (Azure) at Deqode
⭐ Experience: 5+ Years
📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bangalore
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
⭐ Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence
We are looking for a Databricks Data Engineer (Azure) to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.
🔹 Key Responsibilities
✅ Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)
✅ Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing
✅ Develop Structured Streaming pipelines with watermarking, late data handling & restart safety
✅ Implement declarative pipelines using Lakeflow
✅ Design idempotent, replayable pipelines with safe backfills
✅ Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)
✅ Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications
✅ Package and deploy using Databricks Repos & Asset Bundles (CI/CD)
✅ Ensure governance using Unity Catalog and embedded data quality checks
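The "idempotent, replayable pipelines with safe backfills" requirement is usually met with Delta Lake's MERGE INTO: writes are keyed on a business key, so replaying a batch lands on the same rows instead of duplicating them. A Delta-free sketch of why that makes reruns safe (table and column names are illustrative):

```python
# MERGE-style upsert sketch: keying writes on a business key makes a
# rerun (or backfill) update the same rows instead of appending
# duplicates -- the property that makes the pipeline idempotent.
def merge(target, batch, key="id"):
    """Upsert batch rows into target by key (~ Delta Lake MERGE INTO)."""
    for row in batch:
        target[row[key]] = row
    return target

table = {}
batch = [{"id": 1, "amount": 10}, {"id": 2, "amount": 20}]

merge(table, batch)
merge(table, batch)          # replay the same batch: no duplicates
print(len(table))            # 2
```

The same keying discipline is what lets a failed Structured Streaming micro-batch restart from its checkpoint without double-counting.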
✅ Mandatory Skills (Must Have)
👉 Databricks & Delta Lake (Advanced Optimization & Performance Tuning)
👉 Structured Streaming & Autoloader Implementation
👉 Databricks SQL (DBSQL) & Data Modeling for Analytics
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are seeking a highly skilled QA Automation Engineer with strong expertise in Java and Selenium to join our growing engineering team. The ideal candidate will play a key role in designing, developing, and maintaining scalable test automation frameworks while ensuring high product quality across releases.
Roles and Responsibilities:
● Design, develop, and maintain robust automation frameworks using Java and Selenium
● Build automated test scripts for web applications and integrate them into CI/CD pipelines
● Collaborate closely with developers, product managers, and business analysts to understand requirements and define effective test strategies
● Participate in sprint planning, requirement reviews, and technical discussions
● Perform root cause analysis for defects and work with engineering teams for resolution
● Improve automation coverage and reduce manual regression effort
● Ensure test environments, test data, and execution reports are maintained and documented
● Mentor junior QA engineers and promote best practices in automation
● Develop, execute, and maintain comprehensive test plans and test cases for manual and automated testing
● Perform functional, regression, performance, and security testing to ensure software quality
● Design and develop automated test scripts using tools such as Selenium, Appium, or similar frameworks
● Identify, document, and track software defects, working closely with development teams for resolution
● Ensure test coverage by working closely with developers, product managers, and other stakeholders
● Establish and maintain continuous integration (CI) and continuous deployment (CD) pipelines for test automation
● Conduct API testing using tools like Postman or RestAssured
● Collaborate with cross-functional teams to enhance the overall quality of the product
● Stay up to date with the latest industry trends and best practices in QA methodologies and automation frameworks
Requirements:
● 5 to 7 years of experience in QA automation
● Strong hands-on experience with Java and Selenium WebDriver
● Experience in building or enhancing automation frameworks from scratch
● Good understanding of TestNG or JUnit
● Experience with Maven or Gradle
● Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or similar
● Strong understanding of Agile/Scrum methodology
● Experience with API testing tools such as REST Assured or Postman is a plus
● Knowledge of version control systems such as Git
● Strong understanding of the software testing life cycle (STLC) and defect lifecycle management
● Relevant certifications in software testing (e.g., ISTQB) are desirable but not required
● Solid understanding of software testing principles, methodologies, and techniques
● Strong analytical and problem-solving skills
● Strong attention to detail and a commitment to delivering high-quality software
● Good communication and collaboration skills, with the ability to work effectively in a team environment
Good to Have:
● Experience with performance testing tools
● Exposure to cloud platforms such as AWS or Azure
● Knowledge of containerization tools like Docker
● Experience in BDD frameworks such as Cucumber.
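The automation frameworks this role builds typically center on the Page Object pattern: tests talk to a page class, never to raw locators. The JD's stack is Java + Selenium, but the pattern is language-agnostic; here is a Python sketch with a stubbed driver standing in for Selenium WebDriver so it runs without a browser (all locators and credentials are illustrative):

```python
# Page Object pattern sketch. `FakeDriver` stands in for a Selenium
# WebDriver so the sketch runs without a browser; in a real framework
# the page class would wrap driver.find_element(...) calls.
class FakeDriver:
    def __init__(self):
        self.fields, self.logged_in = {}, False
    def type(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        if locator == "login-btn" and self.fields.get("password") == "s3cret":
            self.logged_in = True

class LoginPage:
    """Encapsulates the locators and actions for one page, so tests
    stay readable and locator changes are fixed in one place."""
    def __init__(self, driver):
        self.driver = driver
    def login(self, user, password):
        self.driver.type("username", user)
        self.driver.type("password", password)
        self.driver.click("login-btn")
        return self.driver.logged_in

driver = FakeDriver()
print(LoginPage(driver).login("qa-user", "s3cret"))   # True
```

Swapping `FakeDriver` for a real `webdriver.Chrome()` instance (and the string locators for `By.ID` selectors) turns the same structure into a runnable Selenium test.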
Why Join Us?
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Brief Description:
We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.
Roles and Responsibilities:
● Design, implement, and manage CI/CD pipelines for multiple environments
● Automate infrastructure provisioning using Infrastructure as Code tools
● Manage and optimize cloud infrastructure on AWS, Azure, or GCP
● Monitor system performance, availability, and security
● Implement logging, monitoring, and alerting solutions
● Collaborate with development teams to streamline release processes
● Troubleshoot production issues and ensure high availability
● Implement containerization and orchestration solutions such as Docker and Kubernetes
● Enforce DevOps best practices across the engineering lifecycle
● Ensure security compliance and data protection standards are maintained
Requirements:
● 4 to 7 years of experience in DevOps or Site Reliability Engineering
● Strong experience with cloud platforms such as AWS, Azure, or GCP (relevant certifications are a great advantage)
● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps
● Experience working in microservices architecture
● Exposure to DevSecOps practices
● Experience in cost optimization and performance tuning in cloud environments
● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM
● Strong knowledge of containerization using Docker
● Experience with Kubernetes in production environments
● Good understanding of Linux systems and shell scripting
● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog
● Strong troubleshooting and debugging skills
● Understanding of networking concepts and security best practices
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Role: ML Engineer
Location: Remote
Experience: 5+ Years
𝗞𝗲𝘆 𝗦𝗸𝗶𝗹𝗹𝘀 Required:
• Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines
• Model deployment & versioning via Azure ML
• MLflow for experiment tracking & model lifecycle management
• MLOps best practices — orchestration, CI/CD, model monitoring
• Strong Python skills (Linting, Black, dependency management)
• Drift detection & performance monitoring
• Docker-based deployment (good to have)
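The "MLflow for experiment tracking & model lifecycle management" item boils down to recording params and metrics per run and promoting the best run to a registry. As a rough stand-in for what MLflow records (real work uses `mlflow.start_run()`, `log_param()`, `log_metric()`, and the Model Registry; this toy class only illustrates the shape):

```python
# Minimal stand-in for MLflow-style experiment tracking: each run logs
# its params and metrics, and the "registry" keeps the best version.
# All hyperparameter names and metric values here are illustrative.
class Tracker:
    def __init__(self):
        self.runs, self.registry = [], None

    def log_run(self, params, metrics):
        run = {"version": len(self.runs) + 1, "params": params, "metrics": metrics}
        self.runs.append(run)
        # promote the run with the best accuracy to the model registry
        if self.registry is None or metrics["accuracy"] > self.registry["metrics"]["accuracy"]:
            self.registry = run
        return run["version"]

t = Tracker()
t.log_run({"lr": 0.1},  {"accuracy": 0.81})
t.log_run({"lr": 0.01}, {"accuracy": 0.87})
t.log_run({"lr": 1.0},  {"accuracy": 0.55})
print(t.registry["version"])   # 2
```

In an Azure ML + MLflow setup the same flow is: track runs against a workspace, register the winning model version, and deploy that registered version to AKS.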
Job Details
- Job Title: Full Stack Engineer
- Industry: SaaS
- Function: Information Technology
- Experience Required: 5-7 years
- Working Days: 6 days
- Employment Type: Full Time
- Job Location: Bangalore
- CTC Range: Best in Industry
Preferred Skills: TypeScript, Node.js, MongoDB, RESTful APIs, React.js
Criteria
Candidate should have at least 4 years of professional experience as a Full Stack Engineer
Hands-on experience with both React.js and Node.js
Solid understanding of MongoDB
Should have experience in RESTful APIs
Should be from a startup or scale-up company
Should have good experience in TypeScript
Strong understanding of asynchronous programming patterns
Preferred: candidates from SaaS/Software/IT Services startups or scale-up companies
Job Description
The Role:
We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for the company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.
You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.
What You’ll Own
1. Full Stack Development
● Design, develop, test, and deploy robust and scalable web applications.
● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.
● Contribute to frontend feature development and integration.
● Participate in feature planning, estimation, and execution.
2. Backend & API Engineering
● Design and develop RESTful APIs and backend services.
● Implement asynchronous workflows and scalable microservice architectures.
● Ensure performance, reliability, and security of backend systems.
● Implement authentication, authorization, and data protection best practices.
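The asynchronous workflows called for above usually mean fanning out independent I/O calls concurrently instead of awaiting them one by one. The JD's runtime is Node.js (where this is `Promise.all`); the same pattern in Python's asyncio, with `fetch` as a hypothetical stand-in for a DB query or downstream API call:

```python
import asyncio

async def fetch(source, delay):
    """Stand-in for an I/O-bound call (DB query, downstream API)."""
    await asyncio.sleep(delay)
    return f"{source}:ok"

async def handler():
    # Fan out independent I/O concurrently (~ Promise.all in Node.js);
    # total latency is the slowest call, not the sum of all calls.
    return await asyncio.gather(
        fetch("users", 0.01),
        fetch("orders", 0.02),
        fetch("inventory", 0.01),
    )

print(asyncio.run(handler()))   # ['users:ok', 'orders:ok', 'inventory:ok']
```

The design choice is the same in either runtime: fan out only calls that are truly independent, and keep sequencing for calls whose inputs depend on earlier results.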
3. Database Design & Optimization
● Design and manage MongoDB schemas using Mongoose.
● Optimize queries and database performance for scale.
● Ensure data integrity and efficient data access patterns.
4. Frontend Collaboration & Integration
● Collaborate with frontend developers to integrate React components and APIs seamlessly.
● Ensure responsive, high-performing application behavior.
5. System Design & Scalability
● Contribute to system architecture and technical design discussions.
● Design scalable, maintainable, and future-ready solutions.
● Optimize applications for speed and scalability.
6. Product & Cross-Functional Collaboration
● Work closely with product and design teams to deliver high-quality features in rapid iterations.
● Participate in the full development lifecycle—from concept to deployment and maintenance.
7. Code Quality & Best Practices
● Write clean, testable, and maintainable code.
● Follow Git-based version control and code review best practices.
● Contribute to improving engineering standards and workflows.
What We’re Looking For
Must-Haves
● 4+ years of professional experience as a Full Stack Engineer or similar role.
● Strong proficiency in JavaScript and TypeScript.
● Hands-on experience with Node.js and Express.js.
● Solid understanding of MongoDB and Mongoose.
● Experience building and consuming RESTful APIs and microservices.
● Strong understanding of asynchronous programming patterns.
● Good grasp of system design principles and application architecture.
● Experience with Git and version control best practices.
● Bachelor’s degree in Computer Science, Engineering, or a related field.
Good-to-Have / Preferred
● Frontend development experience with React.js.
● Exposure to Three.js or similar 3D/visualization libraries.
● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).
● Knowledge of Docker and containerization workflows.
● Experience with testing frameworks (Jest, Mocha, etc.).
● Familiarity with CI/CD pipelines and automated deployments.
Tools You’ll Use
● Backend: Node.js, Express.js, TypeScript
● Frontend: React.js (preferred)
● Database: MongoDB, Mongoose
● Version Control: Git, GitHub / GitLab
● Cloud & DevOps: AWS / GCP / Azure, Docker
● Collaboration: Google Workspace, Notion, Slack
Key Metrics You’ll Own
● Code quality, performance, and scalability
● Timely delivery of features and releases
● System reliability and reduction in production issues
● Contribution to architectural improvements
Why company
● Work on impactful, product-driven tech platforms.
● High-ownership role with end-to-end engineering exposure.
● Opportunity to work with modern technologies and evolving architectures.
● Collaborative startup culture with strong learning and growth opportunities.
Description
Power BI JD
Mandatory:
• 5+ years of Power BI Report development experience.
• Building Analysis Services reporting models.
• Developing visual reports, KPI scorecards, and dashboards using Power BI Desktop.
• Connecting data sources, importing data, and transforming data for Business intelligence.
• Analytical thinking for translating data into informative reports and visuals.
• Capable of implementing row-level security on data along with an understanding of application security layer models in Power BI.
• Proficient in writing DAX queries in Power BI Desktop.
• Expert in using advanced-level calculations on the data set.
• Responsible for design methodology and project documentaries.
• Should be able to develop tabular and multidimensional models that are compatible with data warehouse standards.
• Very good communication skills; must be able to discuss requirements effectively with client teams and internal teams.
• Experience working with Microsoft Business Intelligence Stack having Power BI, SSAS, SSRS, and SSIS
• Must have experience with BI tools and systems such as Power BI, Tableau, and SAP.
• Must have 3-4 years of experience in data-specific roles.
• Have knowledge of database fundamentals such as multidimensional database design, relational database design, and more
• Knowledge of all the Power BI products (Power BI Premium, Power BI Report Server, Power BI Service, Power Query, etc.)
• Strong grip on data analytics
• Interact with customers to understand their business problems and provide best-in-class analytics solutions
• Proficient in SQL and Query performance tuning skills
• Understand data governance, quality and security and integrate analytics with these corporate platforms
• Attention to detail and ability to deliver accurate client outputs
• Experience of working with large and multiple datasets / data warehouses
• Ability to derive insights from data and analysis and create presentations for client teams
• Experience with performance optimization of the dashboards
• Interact with UX/UI designers to create best-in-class visualizations for the business, harnessing all product capabilities.
• Resilience under pressure and against deadlines.
• Proactive attitude and an open outlook.
• Strong analytical problem-solving skills
• Skill in identifying data issues and anomalies during the analysis
• Strong business acumen and a demonstrated aptitude for analytics that incites action
• Ability to execute on design requirements defined by business
• Ability to understand required Power BI functionality from wireframes/ requirement documents
• Ability to architect and design reporting solutions based on client needs.
• Being able to communicate with internal/external customers, desire to develop communication and client-facing skills.
• Ability to work seamlessly with MS Excel, including working knowledge of pivot tables and related functions
Good to have:
• Experience in working with Azure and connecting Synapse with Tableau
• Demonstrate strength in data modelling, ETL development, and data warehousing
• Knowledge of leading large-scale data warehousing and analytics projects using Azure, Synapse, MS SQL DB
• Good knowledge of building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets
• Knowledge of the Supply Chain domain.
About ARDEM
ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.
Job Summary:
We are looking for a skilled Full Stack Developer with strong expertise in .NET Core, modern frontend frameworks, and Microsoft Azure Cloud. The candidate will be responsible for building scalable applications, designing APIs, and contributing to cloud-native architecture.
Key Responsibilities:
- Develop and maintain backend services using .NET Core 6+
- Design and implement RESTful APIs (OData knowledge is a plus)
- Build microservices-based architecture and scalable systems
- Develop frontend applications using Angular or React
- Work with relational databases and optimize performance
- Design and deploy applications on Microsoft Azure
- Implement CI/CD pipelines using Azure DevOps/Jenkins
- Follow best practices including SOLID principles, TDD, and Agile methodologies
Required Skills:
- Strong experience in .NET Core, Web APIs, and microservices
- Hands-on experience with Angular or React
- Experience with Azure Cloud and DevOps practices
- Strong database and query optimization skills
- Good understanding of scalable architecture and design patterns
Soft Skills:
- Strong problem-solving and analytical skills
- Good communication and collaboration abilities
- Ability to work in a fast-paced environment
Job Overview
As a Software Engineer, you will play a crucial role in leading our development efforts, ensuring best practices, and supporting the team on a day-to-day basis. This role requires deep technical knowledge, a proactive mindset, and a commitment to guiding the team in tackling challenging issues. You will work primarily with .NET Core on the backend while also keeping a strategic focus on product security, DevOps, quality assurance, and cloud infrastructure.
Responsibilities
• Forward-Looking Product Development:
o Collaborate with product and engineering teams to align on the technical direction, scalability, and maintainability of the product.
o Proactively consider and address security, performance, and scalability requirements during development.
• Cloud and Infrastructure: Leverage Microsoft Azure for cloud infrastructure, ensuring efficient and secure use of cloud services. Work closely with DevOps to improve deployment processes.
• DevOps & CI/CD: Support the setup and maintenance of CI/CD pipelines, enabling smooth and frequent deployments. Collaborate with the DevOps team to automate and optimize the development process.
• Technical Mentorship: Provide technical guidance and support to team members, helping them solve day-to-day challenges, enhance code quality, and adopt best practices.
• Quality Assurance: Collaborate with QA to ensure thorough testing, automated testing coverage, and overall product quality.
• Product Security: Actively implement and promote security best practices to protect data and ensure compliance with industry standards.
• Documentation & Code Reviews: Promote good coding practices, conduct code reviews, and maintain clear documentation.
Qualifications
• Technical Skills:
o Strong experience with .NET Core for backend development and RESTful API design.
o Hands-on experience with Microsoft Azure services, including but not limited to VMs, databases, application gateways, and user management.
o Familiarity with DevOps practices and tools, particularly CI/CD pipeline configuration and deployment automation.
o Strong knowledge of product security best practices and experience implementing secure coding practices.
o Familiarity with QA processes and automated testing tools is a plus.
o Ability to support team members in solving technical challenges and sharing knowledge effectively.
Preferred Qualifications
- 4+ years of experience in software development, with a strong focus on .NET Core
- Previous experience as a Staff SE, tech lead, or in a similar hands-on tech role.
- Strong problem-solving skills and ability to work in a fast-paced, startup environment.
What We Offer
- Opportunity to lead and grow within a dynamic and ambitious team.
- Challenging projects that focus on innovation and cutting-edge technology.
- Collaborative work environment with a focus on learning, mentorship, and growth.
- Competitive compensation, benefits, and stock options.
If you’re a proactive, forward-thinking technology leader with a passion for .NET Core and you’re ready to make an impact, we’d love to meet you!
Description
Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.
Requirements:
- 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
- Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
- Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
- Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
- Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
- Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.
Roles and Responsibilities:
- Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
- Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
- Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
- Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
- Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
- Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
- Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
Budget:
- Job Type: payroll
- Experience Range: 1–15 years
📍 Position: IT Intern
👩💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)
🎓 Qualification: B.Tech (IT) / M.Tech (IT) only
📌 Mode: Remote (WFH)
⏳ Shift: Willingness to work in night/rotational shifts
🗣 Communication: Excellent English
Key Responsibilities:
- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
- Support user account management activities using Microsoft Entra ID (Azure AD), Active Directory, and Microsoft 365.
- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).
- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.
- Help create and update technical documentation and knowledge base articles.
- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.
💻 Technical Requirements:
- Laptop with an i5 or higher processor
- Reliable internet connectivity with 100 Mbps speed
About Tarento:
Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.
We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.
Job Summary:
We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Key Responsibilities
4+ years of experience in designing, developing, and maintaining backend applications using Java (8, 17, 21).
Build scalable RESTful APIs and backend services using Spring Boot and Spring MVC.
Implement secure authentication and authorization using Spring Security (JWT/OAuth2).
Develop and maintain microservices-based architectures.
Work with Spring Data JPA / Hibernate for database interactions.
Implement configuration management using YAML and Properties files.
Integrate event-driven messaging systems and streaming platforms.
Work with MongoDB for data storage and optimize query performance.
Implement logging, monitoring, and troubleshooting for production systems.
Integrate and work with Azure cloud services including:
Event Hub
Key Vault
Storage Accounts
Databricks
Azure AD authentication (Service Principal, Managed Identity, Federated Credentials)
Collaborate with DevOps and cloud teams for deployment and monitoring.
Ensure application performance, scalability, and reliability.
Key responsibilities
• Design, build, and maintain robust CI/CD pipelines using Azure DevOps Services (Azure Pipelines) and Git-based workflows.
• Implement and manage infrastructure as code (IaC) using ARM templates, Bicep, and/or Terraform for repeatable environment provisioning.
• Containerize applications (Docker) and manage container orchestration platforms such as AKS (Azure Kubernetes Service).
• Automate build, test, release, and rollback processes; integrate automated testing and quality gates into pipelines.
• Monitor and improve platform reliability and observability using logging and monitoring tools (e.g., Azure Monitor, Application Insights, Prometheus, Grafana).
• Drive platform security and compliance through pipeline controls, secrets management (Key Vault / Vault), and secure configuration practices.
• Implement cost-optimization and governance for Azure resources (tags, policies, budgets).
• Troubleshoot build/release failures, production incidents, and performance bottlenecks; perform root-cause analysis and implement permanent fixes.
• Mentor developers in Git workflows, pipeline authoring, best practices for IaC, and cloud-native design.
• Maintain clear documentation: runbooks, deployment playbooks, architecture diagrams, and pipeline templates.
Required skills & experience
• 4+ years hands-on experience working with Azure and cloud-native application delivery.
• Deep experience with Azure DevOps (Repos, Pipelines, Artifacts, Boards).
• Strong IaC skills with Terraform, ARM templates, or Bicep.
• Solid experience with CI/CD design and YAML pipeline authoring.
• Practical knowledge of containerization (Docker) and Kubernetes — preferably AKS.
• Scripting skills: PowerShell, Bash, and/or Python for automation.
• Experience with Git workflows (branching strategies, PRs, code reviews).
• Familiarity with configuration management and secrets management (Azure Key Vault, HashiCorp Vault).
• Understanding of networking, identity (Azure AD), and security fundamentals in Azure.
• Strong troubleshooting, debugging, and incident response skills.
• Good collaboration and communication skills; ability to work across teams.
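As an illustration of the YAML pipeline authoring this role calls for, a minimal Azure Pipelines definition might look like the sketch below. All names (stage, job, environment) and the build/deploy steps are hypothetical placeholders, not a prescribed setup:

```yaml
# azure-pipelines.yml — minimal build-and-deploy sketch (illustrative only)
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: docker build -t myapp:$(Build.BuildId) .
            displayName: 'Build container image'
          - script: echo "Run unit tests here as a quality gate"
            displayName: 'Test'
  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployToAKS
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "kubectl apply / helm upgrade would run here"
```

A real pipeline would add quality gates, secret retrieval from Key Vault, and approvals on the production environment, as described in the responsibilities above.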
Certification
AZ-400 (Microsoft Certified: DevOps Engineer Expert), AZ-104, AZ-305, or HashiCorp Terraform Associate.
About TVARIT
TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE), etc. With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.
Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.
Key Responsibilities
· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.
· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.
· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.
· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.
· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.
· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.
· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.
· Utilize Docker and Kubernetes for scalable data processing.
· Collaborate with automation team, data scientists and engineers to provide clean, structured data for AI/ML models.
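The pre-processing steps listed above (cleaning, deduplication, normalization) can be sketched in plain Python for illustration; the record fields are hypothetical, and a production pipeline would express the same logic on PySpark DataFrames:

```python
# Minimal pre-processing sketch: cleaning, deduplication, and min-max scaling.
# Field names (machine_id, timestamp, reading) are made up for illustration.

def preprocess(records):
    # Cleaning: drop rows with missing sensor readings
    cleaned = [r for r in records if r.get("reading") is not None]

    # Deduplication: keep the first record per (machine_id, timestamp) key
    seen, deduped = set(), []
    for r in cleaned:
        key = (r["machine_id"], r["timestamp"])
        if key not in seen:
            seen.add(key)
            deduped.append(r)

    # Normalization: min-max scale readings into [0, 1]
    values = [r["reading"] for r in deduped]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant columns
    for r in deduped:
        r["scaled"] = (r["reading"] - lo) / span
    return deduped

rows = [
    {"machine_id": "M1", "timestamp": 1, "reading": 10.0},
    {"machine_id": "M1", "timestamp": 1, "reading": 10.0},  # duplicate
    {"machine_id": "M1", "timestamp": 2, "reading": None},  # missing value
    {"machine_id": "M2", "timestamp": 1, "reading": 30.0},
]
result = preprocess(rows)
print(len(result), [r["scaled"] for r in result])  # 2 [0.0, 1.0]
```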
Desired Skills and Qualifications
· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.
· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.
· 2 years of team handling experience.
· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).
· Experience in containerization (Docker, Kubernetes).
· Strong analytical and problem-solving skills with attention to detail.
· Good to have: MLOps and DevOps experience, including model lifecycle management.
· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.
· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.
Job Description:
We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems.
This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production.
You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.
Key Responsibilities
AI & Agentic Infrastructure
- Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows.
- Build scalable runtime environments for LLM orchestration frameworks.
- Enable deployment of AI copilots, assistants, and autonomous decision systems.
Common frameworks may include:
- LangChain
- LlamaIndex
- AutoGPT
LLMOps & AI Model Lifecycle
Design and manage LLMOps pipelines for the full lifecycle of large language models:
- Model deployment
- Prompt management
- Versioning
- Evaluation and testing
- Model monitoring
Integrate with AI platforms such as:
- Azure Machine Learning
- Amazon SageMaker
- Vertex AI
Retrieval-Augmented Generation (RAG) Infrastructure
Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs.
Responsibilities include:
- Document ingestion pipelines
- Embedding generation workflows
- Knowledge indexing
- Query orchestration
- Retrieval optimization
- Support scalable semantic search architectures.
Vector Database & Knowledge Infrastructure
Deploy and manage vector databases used for AI applications and semantic retrieval.
Common technologies include:
- Pinecone
- Weaviate
- Milvus
- FAISS
Responsibilities include:
- Index optimization
- Query latency tuning
- Scalable embedding storage
- Hybrid search architecture
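Behind the vector-database responsibilities above sits one core operation: nearest-neighbour search over embeddings. A toy cosine-similarity retrieval in plain Python (the 3-dimensional vectors and document names are invented for illustration) shows the idea; the engines listed above add approximate-nearest-neighbour indexes so it scales beyond a linear scan:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical tiny "index"; production embeddings have hundreds or
# thousands of dimensions and come from an embedding model.
index = {
    "claims_faq": [0.9, 0.1, 0.0],
    "pricing_doc": [0.1, 0.9, 0.1],
    "hr_policy": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    # Rank documents by similarity to the query embedding, return top-k names
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print(retrieve([0.8, 0.2, 0.0]))  # ['claims_faq', 'pricing_doc']
```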
Multi-Cloud AI Infrastructure
Design and maintain AI-ready infrastructure across:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
Key responsibilities include:
- GPU infrastructure management
- Distributed training environments
- Hybrid cloud integrations with on-prem data centers
- Infrastructure scaling for AI workloads
Data Platforms & Integration
- Support deployment and optimization of data lakes, data warehouses, and streaming platforms.
- Work with data engineering teams to ensure secure and scalable data infrastructure.
Cloud Architecture & Infrastructure
- Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud.
- Build hybrid cloud architectures integrating on-premise environments with cloud platforms.
- Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads.
DevOps, Platform Engineering & Automation
Build automated cloud infrastructure using modern DevOps practices.
Tools may include:
- Terraform
- Docker
- Kubernetes
- GitHub Actions
Responsibilities include:
- Infrastructure as Code (IaC)
- Automated deployments
- CI/CD pipelines for AI models and services
- Platform reliability and scalability
AI Observability & Monitoring
Implement observability frameworks to monitor AI systems in production.
This includes:
- Model performance monitoring
- Prompt evaluation
- Hallucination detection
- Latency and throughput analysis
- Cost monitoring for LLM usage
Tools may include:
- Arize AI
- WhyLabs
- Weights & Biases
Security, Governance & Responsible AI
Ensure AI systems follow strong governance and security practices.
Responsibilities include:
- Data privacy and compliance
- Model governance frameworks
- Secure model deployment
- Monitoring model bias and drift
- AI risk management
Support enterprise frameworks for Responsible AI and AI compliance.
Data & Security
- Experience with data lake architectures, distributed storage, and ETL pipelines
- Knowledge of data security, encryption, IAM, and compliance frameworks
- Familiarity with AI governance and responsible AI practices
Required Skills
Cloud & Infrastructure
- Strong experience in Azure (must have), AWS or GCP
- Hybrid and multi-cloud architecture
- GPU infrastructure management
DevOps & Automation
- Kubernetes
- Docker
- Terraform
- CI/CD pipelines
AI / ML Platforms
- MLOps pipelines
- Model deployment
- Model monitoring
AI Application Infrastructure
- Vector databases
- RAG pipelines
- LLM orchestration frameworks
Programming
Experience in one or more languages:
- Python
- Go
- Java
- TypeScript
Preferred Qualifications
- Experience building AI copilots or autonomous agents
- Knowledge of GPU infrastructure and distributed model training
- Familiarity with AI evaluation frameworks
- Familiarity with model monitoring, drift detection, and AI observability
- Experience building enterprise AI platforms
Education & Experience
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 4–8+ years of experience in cloud infrastructure, DevOps, or platform engineering
- Experience working in data-driven or AI-focused environments
What Success Looks Like
- Reliable ML model deployment pipelines and infrastructure for LLMs and AI agents
- Scalable RAG knowledge platforms
- Efficient multi-cloud infrastructure management
- Fast deployment cycles for AI products
- Secure and scalable AI-ready cloud platforms
- Strong automation and governance across cloud and AI systems
Job Title : Senior DevOps Engineer (Only Mumbai Candidates)
Experience : 5+ Years
Location : Mumbai (On-site)
Notice Period : Immediate to 15 Days
Interview Process : 1 Internal Round + 1 Client Round
Mandatory Skills :
Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.
Role Overview :
We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.
The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.
Key Responsibilities :
- Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
- Deploy and manage microservices on Kubernetes clusters.
- Build and maintain Infrastructure as Code using Terraform and Helm.
- Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
- Implement GitOps workflows using ArgoCD or FluxCD.
- Ensure secure, scalable, and reliable DevOps architecture.
- Implement monitoring and logging using Prometheus, Grafana, or ELK.
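As a sketch of the Infrastructure-as-Code work described above, a Terraform fragment for a managed Kubernetes cluster on Azure might look like the following. Resource names, region, and VM sizes are hypothetical, and a real module would parameterize them:

```hcl
# main.tf — illustrative sketch of provisioning AKS with Terraform
resource "azurerm_resource_group" "platform" {
  name     = "rg-platform-prod"
  location = "Central India"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-platform-prod"
  location            = azurerm_resource_group.platform.location
  resource_group_name = azurerm_resource_group.platform.name
  dns_prefix          = "platform"

  default_node_pool {
    name       = "default"
    node_count = 3
    vm_size    = "Standard_D4s_v5"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

In the GitOps model this role calls for, such code would live in version control, be applied through CI/CD, and serve as the cluster onto which ArgoCD or FluxCD reconciles application manifests.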
Good to Have :
- Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Hiring: IT Operations & Helpdesk Engineer (3–5 Years)
📍 Location: Bangalore / Hybrid
We are looking for a hands-on IT Operations Engineer who will anchor our internal IT helpdesk while managing servers, backups, DR drills, and cloud infrastructure. This role is responsible for day-to-day IT stability across endpoints, servers, and Azure environments.
Key Activities
Internal IT Helpdesk (Primary Anchor)
· Act as the single point of contact for internal IT support.
· Resolve L1/L2 issues (laptops, OS, network, access, software installs).
· Manage onboarding/offboarding IT setup.
· Track tickets, SLAs, and recurring issues.
Infrastructure & Servers
· Install and maintain Windows & Linux servers.
· Maintain the centralized IT asset inventory.
· Support manual and automated application deployments.
· Handle patching, upgrades, and performance monitoring.
Cloud Administration (Azure)
· Manage VMs, storage, networking.
· Maintain access controls and security configurations.
Backup & DR Readiness
· Manage and test backup processes.
· Conduct periodic DR drills to support organizational continuity standards.
· Maintain recovery runbooks and documentation.
What We’re Looking For
· Strong Windows Server & Linux hands-on experience.
· Experience managing Azure Cloud infrastructure.
· Practical backup & restore execution experience.
· Strong troubleshooting mindset.
· Process-driven and documentation disciplined.
· Comfortable working with DevOps & Cyber Security teams.
Impact of This Role
· Stable internal IT operations.
· DR-tested infrastructure.
· Reduced downtime and faster issue resolution.
· Strong operational hygiene in a growing environment.
Location: Bangalore
Experience required: 7-10 years.
Key skills: .NET Core, ASP.NET, Microsoft Azure, MVC, AWS
"At Pace Wisdom Solutions, our .NET team is a dynamic and collaborative group of experts specializing in end-to-end development. With a focus on both front-end and back-end technologies, we leverage the robust .NET framework and Azure to deliver innovative and scalable solutions. Our agile approach ensures adaptability to industry changes, empowering us to provide clients with cutting-edge and tailored applications."
We are seeking a highly skilled and experienced Senior .NET Developer with a minimum of 7 years of hands-on experience. The ideal candidate will possess expertise in both front-end and back-end development, with a strong background in MVC architecture and exposure to Microsoft Azure technologies. The role requires an individual who can work independently, lead a team effectively, and contribute to the successful delivery of projects.
Engineering Culture at Pace Wisdom:
We foster a collaborative and communicative environment where engineers are empowered to share ideas freely. Teamwork is paramount, and we believe the best solutions come from diverse perspectives. We are committed to promoting from within, providing clear career paths and mentorship opportunities to help our engineers reach their full potential. Our culture prioritizes continuous learning and growth, offering a safe space to experiment, innovate, and refine your skills.
Responsibilities:
• Create scalable solutions by understanding business requirements, writing code, and testing according to best practices.
• Take ownership and collaborate with the team, including our customers, QA, design, and other stakeholders, to drive successful project delivery.
• Advocate and mentor teams to follow best practices around documentation, unit testing, code reviews, etc.
• Comply with security policies and processes.
Qualifications:
• 7-10 years of professional experience in developing applications using .NET framework, .NET Core, Azure Services, Entity Framework
• Good knowledge of common software architecture design patterns, Object Oriented Programming, Data structures, Algorithms, Database design patterns and other best practices.
• Exposure to Cloud technologies (AWS, Azure, Google Cloud - at least one of them)
• Exposure to developing SPA on React, Angular or VueJS
• Experience with microservices and messaging systems (RabbitMQ/Kafka)
• Proven ability to lead and mentor development teams.
• Effective communication and interpersonal skills.
About the Company:
Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.
We engage with our clients at various stages:
• Right from the idea stage to scope out business requirements.
• Design & architect the right solution and define tangible milestones.
• Setup dedicated and on-demand tech teams for agile delivery.
• Take accountability for successful deployments to ensure efficient go-to-market implementations.
Pace Wisdom has been working with Fortune 500 Enterprises and growth-stage startups/SMEs since 2012. We also work as an extended Tech team and at times we have played the role of a Virtual CTO too. We believe in building lasting relationships and providing value-add every time and going beyond business.
Position Title: Senior Data Engineer (Founding Member) - Insurtech Startup
Location: Hyderabad (On-site)
Immediate to 15-day joiners
Experience: 5–13 Years
Role Summary
We are looking for a Senior Data Engineer who will play a foundational role in:
- Client onboarding from a data perspective
- Understanding complex insurance data flows
- Designing secure, scalable ingestion pipelines
- Establishing strong data modeling and governance standards
This role sits at the intersection of technology, data architecture, security, and business onboarding.
Key Responsibilities
- Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
- Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
- Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
- Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
- Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
- Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
- Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
- Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
- Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
- Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
- Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards
Required Technical Skills
- Core Stack: Python, advanced SQL (complex joins, window functions, performance tuning), PySpark
- Platforms: Azure, AWS, Databricks, Snowflake
- ETL / Orchestration: Airflow or similar frameworks
- Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
- Visualization Exposure: Power BI
- Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
- Integrations: APIs, real-time data streaming, ML model integration exposure
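A minimal sketch of the window-function style of SQL called out above, using Python's bundled sqlite3 driver (window functions need SQLite 3.25+); the claims table and its values are invented for illustration:

```python
import sqlite3

# Hypothetical claims table: rank each claim within its contract by amount.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (contract_id TEXT, claim_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("C1", "K1", 500.0), ("C1", "K2", 1200.0), ("C2", "K3", 300.0)],
)

# Window functions give a per-contract ranking plus a per-contract total
# without any self-join.
rows = conn.execute("""
    SELECT contract_id, claim_id,
           RANK() OVER (PARTITION BY contract_id ORDER BY amount DESC) AS rnk,
           SUM(amount) OVER (PARTITION BY contract_id) AS contract_total
    FROM claims
    ORDER BY contract_id, rnk
""").fetchall()

for row in rows:
    print(row)
```

The same PARTITION BY pattern carries over directly to PySpark's `Window` API.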
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 5+ years of experience in data engineering or similar roles
- Strong ability to align technical solutions with business objectives
- Excellent communication and stakeholder management skills
What We Offer
- Direct collaboration with the core US data leadership team
- High ownership and trust to manage the function end-to-end
- Exposure to a global environment with advanced tools and best practices
Job Description
We are looking for a skilled .NET Full Stack Developer with expertise in .NET, React.js, and AWS/Azure to join our development team. The ideal candidate should have strong programming skills and experience building scalable web applications using modern technologies.
Key Responsibilities
- Develop and maintain scalable applications using .NET Core.
- Design and implement Microservices architecture and RESTful APIs.
- Build responsive and dynamic user interfaces using React.js.
- Integrate frontend applications with backend APIs.
- Deploy and manage applications on AWS/Azure.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Write clean, maintainable, and efficient code following best development practices.
Required Skills
- Strong experience in .NET development.
- Hands-on experience with Microservices architecture and API development.
- Experience working with React.js, including API integration and design principles.
- Experience with AWS / Azure.
JOB DESCRIPTION – FULL STACK DEVELOPER
Location: Bangalore
Key Responsibilities:
Establish processes, SLAs, and escalation protocols for the support & maintenance of web applications
Manage stakeholders with effective communication and collaborate with cross-functional teams to address issues and maintain business continuity.
Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and the latest tech such as agentic AI and GitHub Copilot.
Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups
Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team
Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows
Assist Software Designer/Implementers with the creation of detailed software design specifications
Participate in the system specification review process to ensure system requirements can be translated into valid software architecture
Integrate internal and external product designs into a cohesive user experience
Identify and keep track of metrics that indicate how software is performing
Handle technical and non-technical queries from the development team and stakeholders
Ensure that all development practices follow best practices and any relevant policies / procedures
Other Duties:
Maintain project reporting including dashboards, status reports, road maps, burn-down, velocity, and resource utilization.
Own the technical solution and ensure all technical aspects are implemented as designed.
Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning across a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability
Flexible to work in rotational shift
Required Qualification
Previous experience leading full stack technology projects with scrum teams and stakeholder management
BTech or MTech in computer science or a related field
3-5 years of experience.
Required Knowledge, Skills and Abilities: (include any required computer skills, certifications, licenses, languages, etc.)
Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material UI, Bootstrap, TypeScript, SCSS, Microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, agentic AI, GitHub Copilot
Azure DevOps, Design Systems, Micro frontends, Data Science
Stakeholder management & excellent communication skills.
Must have skills
React - 3 years
React Native - 3 years
Redux - 1 year
Material UI - 1 year
Typescript - 1 year
Bootstrap - 1 year
Microservices - 2 years
SQL - 1 year
Azure - 1 year
Nice to have skills
.NET Core - 3 years
.NET 8 - 3 years
AWS - 1 year
LINQ - 1 year
Description
We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years’ experience).
Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:
- Machine Learning model development
- Scalable data pipeline development (ETL/ELT)
- Python and SQL
- Cloud platforms such as Azure/AWS/Databricks
- ML deployment environments (SageMaker, Azure ML, etc.)
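As a rough illustration of the scalable ETL/ELT pipeline skill listed above, here is a toy extract-transform-load flow in pure Python; the column names, the derived feature, and the quarantine rule are hypothetical:

```python
import csv
import io

def extract(raw_csv: str):
    """Extract: parse raw CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: cast types, drop invalid records, derive a feature."""
    out = []
    for r in rows:
        try:
            amt = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine bad rows, not drop them
        out.append({"id": r["id"], "amount": amt, "high_value": amt > 1000})
    return out

def load(rows, sink: list):
    """Load: append to a sink (a warehouse table in practice)."""
    sink.extend(rows)
    return len(rows)

raw = "id,amount\nA1,1500\nA2,oops\nA3,200\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2 valid rows survive the transform
```

In production the same three stages would be orchestrated (e.g., by Airflow) and the transform would run on Spark rather than in-process.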
Kindly note:
- Location: Pune (Work From Office)
- Immediate joiners preferred
While sharing profiles, please ensure the following details are included:
- Current CTC
- Expected CTC
- Notice Period
- Current Location
- Confirmation on Pune WFO comfort
Must have skills
Machine Learning - 6 years
Python - 6 years
ETL(Extract, Transform, Load) - 6 years
SQL - 6 years
Azure - 6 years
Hiring: Cloud Engineer – MLOps Platform 🚨
📍 Location: Bangalore
🧠 Experience: 5–8 Years
We are looking for an experienced Cloud Engineer to support ML teams and drive end-to-end automation for model deployment across modern cloud platforms.
🔹 Tech Stack:
Azure | Databricks | AKS | ARO | Terraform | MLflow | CI/CD
🔹 Key Responsibilities:
• Build and maintain CI/CD and Continuous Training (CT) pipelines using Azure DevOps, GitHub Actions, or Jenkins.
• Deploy Databricks jobs, MLflow models, and microservices on AKS / ARO environments.
• Automate infrastructure using Terraform and GitOps practices.
• Manage Databricks workspaces, AKS clusters, and networking configurations.
• Implement monitoring, logging, and alerting systems for ML workloads.
• Ensure cloud security, governance, and cost optimization best practices.
🔹 Required Skills:
✔ Strong hands-on experience with Azure, AKS, ARO, and Databricks
✔ Experience with MLflow and Kubernetes-based deployments
✔ Proficiency in Python and Bash / PowerShell scripting
✔ Strong understanding of cloud security, infrastructure automation, and distributed systems
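One piece of the Continuous Training (CT) pipelines mentioned above is a promotion gate deciding whether a retrained model replaces the production one. This is a simplified sketch; in practice the metrics would come from an MLflow model registry, and the thresholds here are arbitrary examples:

```python
def should_promote(candidate_auc: float, production_auc: float,
                   min_auc: float = 0.70, min_gain: float = 0.01) -> bool:
    """Gate a retrained model: promote only if it clears an absolute quality
    floor AND beats the current production model by a minimum margin."""
    return candidate_auc >= min_auc and (candidate_auc - production_auc) >= min_gain

print(should_promote(0.81, 0.78))   # True: clears the floor and the margin
print(should_promote(0.79, 0.785))  # False: improvement is below min_gain
```

A CI/CD stage (Azure DevOps, GitHub Actions, or Jenkins) would call a check like this and only run the AKS/ARO deployment step when it returns True.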
About the role
Applix is seeking a highly skilled Senior Power BI Developer to join our Hyderabad office on a full-time, work-from-office basis. In this role, you will work directly with Caterpillar’s global analytics and GCIO BI Services teams to design, develop, and maintain enterprise-grade Power BI reports, dashboards, scorecards, and advanced data visualizations. You will operate as a member of a Project/Scrum team within Caterpillar’s technology environment, engaging with business partners and internal support teams to provide data visualization development services for a wide variety of projects and business needs.
The ideal candidate combines deep Power BI expertise with strong backend data engineering skills, and can champion BI COE standards while partnering closely with data scientists, business analysts, and IT professionals across Caterpillar’s global operations. A minimum 5-hour daily overlap with US Central Time is required to ensure seamless collaboration with onshore stakeholders and end users.
Key responsibilities
- Design and develop enterprise-grade Power BI dashboards, reports, and scorecards aligned to business needs.
- Implement BI COE standards, governance, security (RLS), and best practices across BI tools and environments.
- Build and optimize data models, DAX calculations, SQL queries, and data transformation pipelines.
- Enhance performance using aggregation, incremental refresh, storage modes, and query optimization techniques.
- Collaborate with business stakeholders, data engineers, and data scientists to deliver actionable insights.
- Support documentation, training, troubleshooting, and continuous improvement initiatives.
- Drive advanced analytics adoption, CI/CD practices, and mentor junior team members.
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Science, Industrial Engineering, or a related field.
- 5+ years of hands-on experience in Power BI development, including reports, dashboards, enterprise scorecards, and paginated reports.
- Expert-level proficiency in Power BI Desktop, Power BI Service, Power BI Report Server, and Power BI Report Builder.
- Expert working knowledge of DAX, Power Query (M language), and data modeling best practices (star schema, snowflake schema, dimensional modeling).
- Strong backend skills with SQL Server, Azure SQL Database, Azure Synapse Analytics, or Snowflake – including writing complex T-SQL queries, stored procedures, CTEs, and window functions.
- 3+ years of experience in relational database design, data modeling, and structured query language (SQL).
- Hands-on experience with Azure Data Factory (ADF), Azure Data Lake, or similar ETL/ELT tools.
- Experience working in Agile/Scrum methodology, with tools like Azure DevOps, Jira, or ServiceNow.
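The star-schema modeling and CTE/window-function requirements above can be sketched with a miniature dimension/fact layout; SQLite stands in for SQL Server/Synapse here, and the tables and values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One dimension table and one fact table: the smallest star schema.
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (product_key INTEGER, qty INTEGER, revenue REAL);
    INSERT INTO dim_product VALUES (1, 'Excavator'), (2, 'Loader');
    INSERT INTO fact_sales VALUES (1, 2, 500000.0), (1, 1, 250000.0), (2, 3, 360000.0);
""")

# A CTE aggregates the fact table, then joins to the dimension for labels --
# the shape of query a star-schema Power BI dataset pushes to the warehouse.
rows = conn.execute("""
    WITH sales AS (
        SELECT product_key, SUM(qty) AS units, SUM(revenue) AS revenue
        FROM fact_sales GROUP BY product_key
    )
    SELECT p.name, s.units, s.revenue
    FROM sales s JOIN dim_product p ON p.product_key = s.product_key
    ORDER BY s.revenue DESC
""").fetchall()
print(rows)
```

Keeping measures on the fact table and labels on conformed dimensions is also what lets DAX measures in the Power BI model stay simple.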
Caterpillar-Specific Experience (Strongly Preferred)
- Prior experience within Caterpillar’s BI ecosystem, GCIO BI Services, and governance frameworks.
- Familiarity with multi-tool BI environments (Power BI, Tableau, ThoughtSpot, Cognos, BOBJ).
- Exposure to Caterpillar’s Azure cloud infrastructure, data lakes, and enterprise platforms.
- Understanding of BI COE standards, data governance, naming conventions, and security protocols.
- Domain experience in manufacturing, heavy equipment, construction, or mining industries.
- Experience managing complex, enterprise-grade BI applications integrating multiple data sources.
Preferred Qualifications
- Microsoft PL-300 certification or equivalent.
- Experience with Microsoft Fabric, Azure Databricks, Snowflake/Snowpark.
- Working knowledge of Python or R for advanced analytics.
- Experience with Microsoft Power Platform (Power Apps, Power Automate).
- Knowledge of SSAS Tabular models and XMLA endpoints.
- Experience implementing CI/CD for Power BI using Azure DevOps.
- Familiarity with ETL tools (SnapLogic, SSIS).
- Prior consulting or client-facing delivery experience.
What we offer
- Opportunity to work on high-impact analytics projects for Caterpillar Inc. - a Fortune 100 global leader with $67B+ in annual revenue.
- Direct engagement with Caterpillar’s GCIO BI Services organization and US-based leadership teams.
- Collaborative, innovation-driven work culture at Applix’s Hyderabad office with a team focused on enterprise BI excellence.
- Competitive compensation and benefits package aligned with market standards.
- Career growth with exposure to cutting-edge Microsoft data technologies, Snowflake, and enterprise-scale BI solutions.
- Learning and development support, including Microsoft certification sponsorship (PL-300, DP-500, etc.).
- Opportunity to contribute to Caterpillar’s BI Centre of Excellence standards and shape analytics best practices.
Job Summary:
We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.
Key Responsibilities:
- Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
- Implement and maintain RESTful APIs, ensuring high performance and scalability.
- Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
- Develop and manage Docker containers, enabling efficient development and deployment pipelines.
- Integrate messaging services like Apache Kafka into microservice architectures.
- Design and maintain data models using PostgreSQL or other SQL databases.
- Implement unit testing using JUnit and mocking frameworks to ensure code quality.
- Develop and execute API automation tests using Cucumber or similar tools.
- Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
- Work with Kubernetes for orchestrating containerized services.
- Utilize Couchbase or similar NoSQL technologies when necessary.
- Participate in code reviews, design discussions, and contribute to best practices and standards.
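To show the core idea behind integrating a messaging system like Kafka into a microservice, here is a language-agnostic sketch in Python of at-least-once consumption from an append-only log; the in-memory log is a toy stand-in and not the Kafka client API:

```python
class InMemoryLog:
    """Toy stand-in for a Kafka partition: an append-only record log with offsets."""
    def __init__(self):
        self.records = []

    def append(self, value):
        self.records.append(value)

def consume(log, committed_offset, process):
    """At-least-once consumption: advance (commit) the offset only AFTER
    processing succeeds, so a crash mid-batch replays records rather than
    losing them. Processing must therefore be idempotent."""
    offset = committed_offset
    while offset < len(log.records):
        process(log.records[offset])
        offset += 1  # "commit" after successful processing
    return offset

log = InMemoryLog()
for v in ["claim-1", "claim-2", "claim-3"]:
    log.append(v)

seen = []
new_offset = consume(log, 0, seen.append)
print(new_offset, seen)
```

The same commit-after-process discipline applies whether the consumer is this toy loop or a Spring Kafka listener with manual acknowledgment.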
Required Skills & Qualifications:
- Strong experience in Java (11 or above) and Spring Boot framework.
- Solid understanding of microservices architecture and deployment on Azure.
- Hands-on experience with Docker, and exposure to Kubernetes.
- Proficiency in Kafka, with real-world project experience.
- Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
- Experience in writing unit tests using JUnit and mocking tools.
- Experience with Cucumber or similar frameworks for API automation testing.
- Exposure to CI/CD tools, DevOps processes, and Git-based workflows.
Nice to Have:
- Azure certifications (e.g., Azure Developer Associate)
- Familiarity with Couchbase or other NoSQL databases.
- Familiarity with other cloud providers (AWS, GCP)
- Knowledge of observability tools (Prometheus, Grafana, ELK)
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent verbal and written communication.
- Ability to work in an agile environment and contribute to continuous improvement.
Why Join Us:
- Work on cutting-edge microservice architectures
- Strong learning and development culture
- Opportunity to innovate and influence technical decisions
- Collaborative and inclusive work environment
Hi folks, we are currently hiring for a Security Engineer.
Hiring: Security Engineer
Company : Pentabay Softwares
Location: Anna Salai, Mount Road
Mode: Full-time
Pentabay Softwares INC is looking for a proactive Security Engineer (2–7 Years Exp) to fortify our global digital solutions. As we scale our footprint in the Healthcare IT sector, you will play a critical role in safeguarding sensitive data (ePHI) and ensuring our cloud-native architectures are resilient against evolving threats.
The Mission
You will be the architect of our defense, bridging the gap between high-speed development and rigorous security standards. Your day-to-day will involve "shifting security left" by embedding DevSecOps practices into our CI/CD pipelines and leading our compliance efforts for SOC 2, ISO 27001, and HIPAA.
Key Responsibilities
Defense & Architecture: Design and maintain secure cloud (AWS/Azure/GCP) and on-prem environments. Implement IAM policies, Zero Trust frameworks, and robust secrets management.
Offensive Testing: Conduct regular vulnerability assessments (VAPT), penetration testing, and code reviews using tools like Burp Suite and Nessus.
DevSecOps & Automation: Integrate SAST/DAST/SCA scanning into engineering workflows. Automate security tasks using Python or Bash.
Incident Response: Monitor SIEM tools (Splunk/CrowdStrike), respond to threats, and develop risk mitigation strategies.
Healthcare Compliance (Plus): Ensure data integrity for HL7/FHIR APIs and maintain HIPAA/HITECH audit readiness for healthcare clients.
What You Bring
Experience: 2–7 years in Information/Application Security with a strong grasp of the OWASP Top 10 and threat modeling (STRIDE).
Technical Depth: Proficiency in network/endpoint security, PKI, encryption standards (TLS/SSL), and container security (Docker/Kubernetes).
Compliance Knowledge: Familiarity with NIST, GDPR, and SOC 2 frameworks.
Tools: Hands-on experience with Metasploit, Wireshark, and Infrastructure-as-Code (Terraform).
Bonus Points: Industry certifications like OSCP, CISSP, or CEH, and experience in Healthcare IT workflows.
Experience in the auditing space (ISO 27001, ISO 9001) preferred
Why Pentabay?
At Pentabay, we offer more than just a job; we offer a security-first engineering culture.
Growth: A dedicated learning budget for certifications and conferences.
Impact: Work on cutting-edge Healthcare projects that demand the highest levels of data privacy.
Send resumes to : sandhiya.m at pentabay.com
Job opportunity for Developer -Python Full Stack with Siemens at Bangalore.
Interview Process:
1st round of interview - F2F (in-Person)-Technical
2nd round of interview – F2F /Virtual Interview - Technical
3rd round of interview – Virtual Interview – Technical + HR
Job Title / Designation: Developer -Python Full Stack
Employment Type: Full Time, Permanent
Location: Bangalore
Experience: 3-5 Years
Job Description: Developer – Python Full Stack
We are looking for a Python full-stack expert with proven 5+ years of experience developing automation solutions in Linux-based environments. You should be capable of developing Python-based web applications or automation solutions, and have excellent knowledge of DB handling and decent knowledge of Kubernetes-based deployment environments.
Required Skills:
- Solid experience in Python back-end technology
- Sound experience in web application development
- Decent knowledge and experience in UI development using JavaScript, React/Angular, or a related tech stack.
- Strong understanding of software design patterns and testing principles
- Ability to learn and adapt to working with multiple programming languages.
- Experience with Docker, ArgoCD, Kubernetes, and Terraform
- Understanding of ETL processes to extract data from different data sources is a plus.
- Proven experience in Linux development environments using Python.
- Excellent knowledge in interacting with database systems (SQL, NoSQL) and webservices (REST)
- Experienced in establishing an optimized CI / CD environment relevant to the project.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Excellent debugging skills/strategies.
- Excellent communication skills
- Experienced in working in an Agile environment.
Nice to have
- Good knowledge of the Eclipse IDE; experience developing add-ons/plugins on the Eclipse platform.
- Knowledge of 93K Semiconductor test platforms
- Good know-how of agile management tools like Jira, Azure DevOps.
- Good knowledge of RHEL
- Knowledge of JIRA administration
About the Role
We're seeking a Junior .NET Developer with 2 years of experience to join our insurtech team. This role offers an opportunity to work with cloud technologies and contribute to our existing codebase and cloud migration initiatives.
Key Responsibilities
- Write clean, maintainable code using C# and .NET Framework (.NET Core, ASP.NET, web API)
- Develop new features and participate in microservices architecture development
- Write unit and integration tests to ensure code quality
- Work with MS SQL Server - write Stored Procedures, Views, and Functions
- Support Azure cloud integration and automated deployment pipelines using Azure DevOps
- Collaborate with infrastructure teams and senior architects on migration initiatives
- Estimate work, break down deliverables, and deliver to deadlines
- Take ownership of your work with focus on quality and continuous improvement
Requirements
Essential
- 2 years of experience with C# and .NET development
- Strong understanding of OOP concepts and Design Patterns
- MS SQL Server programming experience
- Experience working on critical projects
- Self-starter with strong problem-solving and analytical skills
- Excellent communication and ability to work independently and in teams
Desirable
- Microsoft Azure experience (App Service, Functions, SQL Database, Service Bus)
- Knowledge of distributed systems and microservices architecture
- DevOps and CI/CD pipeline experience (Azure DevOps preferred)
- Front-end development with HTML5, CSS, JavaScript, React
Tech Stack
C#, .NET Framework, WPF, WCF, REST & SOAP APIs, MS SQL Server 2016+, Microsoft Azure, HTML5, CSS, JavaScript, React, Azure DevOps, TFS, GitHub
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, or CloudFormation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency in analyzing application, IIS, system, and security logs, as well as CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good Understanding of networking (VNET, subnet, private link, VNET peering).
• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability building and supporting high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: 3 days’ work from office
Strong Azure DevOps Engineer Profiles.
Mandatory (Experience 1) – Must have minimum 1+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
Mandatory (Experience 2) – Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
Mandatory (Experience 3) – Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell
Mandatory (Experience 4) – Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database
Mandatory (Experience 5) – Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis
Mandatory (Note) - Only Male candidates are considered.
Mandatory (Location): The candidate must be currently in Bengaluru.
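As a small illustration of the monitoring and alerting experience asked for above, this sketch evaluates a rolling error-rate threshold in pure Python; the window size and threshold are arbitrary, and in practice a rule like this would live in Grafana/Prometheus or Datadog rather than application code:

```python
from collections import deque

class ErrorRateAlert:
    """Fire when the error rate over the last `window` requests exceeds
    `threshold` -- the shape of a typical alerting rule."""
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # sliding window of True/False
        self.threshold = threshold

    def record(self, is_error: bool) -> bool:
        """Record one request outcome; return True if the alert should fire."""
        self.samples.append(is_error)
        rate = sum(self.samples) / len(self.samples)
        return rate > self.threshold

alert = ErrorRateAlert(window=10, threshold=0.2)
fired = [alert.record(i % 3 == 0) for i in range(10)]  # roughly 1/3 errors
print(fired[-1])  # True: 40% error rate exceeds the 20% threshold
```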
Job Title: Python Developer (Django / Databricks / Azure)
📍 Location: Bangalore
🕒 Experience: 3–8 Years
💼 Employment Type: FTE
🔹 Job Summary:
We are seeking a skilled Python Developer with strong experience in Django, Flask API development, Databricks, and Azure Cloud. The ideal candidate will be responsible for designing scalable backend systems, developing REST APIs, building data pipelines, and working with cloud-based data platforms.
🔹 Key Responsibilities:
✔ Develop and maintain web applications using Django framework
✔ Design and build RESTful APIs using Flask
✔ Develop and optimize data pipelines using Azure Databricks
✔ Integrate applications with Azure services (Blob, Data Factory, SQL, etc.)
✔ Write clean, scalable, and efficient Python code
✔ Collaborate with frontend, DevOps, and data engineering teams
✔ Perform code reviews and ensure best practices
✔ Troubleshoot, debug, and upgrade existing systems
🔹 Required Skills:
- Strong proficiency in Python programming
- Hands-on experience with Django framework
- Experience building Flask-based REST APIs
- Experience working with Azure Databricks
- Knowledge of Azure Cloud services
- Experience with SQL / NoSQL databases
- Understanding of CI/CD and Git workflows
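A minimal REST endpoint of the kind the role calls for, written against the stdlib WSGI interface rather than Flask or Django so the sketch stays dependency-free; the route and payload are hypothetical:

```python
import json
from wsgiref.util import setup_testing_defaults

# Hypothetical in-memory store standing in for a database.
ITEMS = {"1": {"id": "1", "name": "pipeline-config"}}

def app(environ, start_response):
    """Tiny WSGI app: GET /items/<id> returns JSON, anything else is 404."""
    path = environ.get("PATH_INFO", "")
    if environ.get("REQUEST_METHOD") == "GET" and path.startswith("/items/"):
        item = ITEMS.get(path.rsplit("/", 1)[-1])
        if item:
            start_response("200 OK", [("Content-Type", "application/json")])
            return [json.dumps(item).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b'{"error": "not found"}']

# Exercise the app directly, the way a Flask test client would.
environ = {}
setup_testing_defaults(environ)
environ["PATH_INFO"] = "/items/1"
status_holder = []
result = app(environ, lambda status, headers: status_holder.append(status))
print(status_holder[0], result[0])
```

Flask and Django ultimately speak this same WSGI contract, so the routing and status-code logic transfers directly.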
🔹 Good to Have:
- Experience with PySpark
- Knowledge of microservices architecture
- Docker / Kubernetes exposure
- Experience in data engineering projects
Job Title: Java Backend Developer
Experience: ~3-6 years (Mid-to-Senior)
Employment Type: Full-time, Permanent
Location : Bangalore
Role Overview
As a Java Backend Developer, you’ll be responsible for designing, developing, and maintaining scalable backend systems and microservices. You’ll work with cross-functional teams to build high-performance distributed services, APIs, and data-driven applications that power business solutions.
Key Responsibilities
- Design and implement microservices and backend components using Java (8+) and Spring Boot.
- Build and consume RESTful APIs and integrate with internal/external services.
- Work with event-driven systems and messaging using Apache Kafka (producers/consumers).
- Develop and optimize databases, including SQL (e.g., MySQL/PostgreSQL) and NoSQL (e.g., MongoDB/Cassandra).
- Participate in CI/CD pipelines, automated builds, and deployments using tools like Git, Maven, Jenkins.
- Ensure code quality through unit and integration testing, documentation, and code reviews.
- Collaborate with frontend developers, QA, DevOps, and product teams following Agile methodologies.
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- Proven hands-on experience with Core Java and Spring Boot development.
- Strong understanding of microservices architecture, REST APIs, and distributed systems.
- Experience with message queues/event streaming (Apache Kafka).
- Skilled in relational and NoSQL databases and writing optimized queries.
- Comfortable with CI/CD tools (e.g., Git, Maven, Jenkins) and version control.
- Good problem-solving, debugging, and collaboration skills.
Preferred / Nice-to-Have
- Cloud platform experience (AWS / Azure / GCP).
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Knowledge of performance tuning, caching strategies, observability (metrics/logging).
- Agile/Scrum development experience.
About the role:
We are looking for a skilled and driven Security Engineer to join our growing security team. This role requires a hands-on professional who can evaluate and strengthen the security posture of our applications and infrastructure across Web, Android, iOS, APIs, and cloud-native environments.
The ideal candidate will also lead technical triage from our bug bounty program, integrate security into the DevOps lifecycle, and contribute to building a security-first engineering culture.
Required Skills & Experience:
● 3 to 6 years of solid hands-on experience in the VAPT domain
● Solid understanding of Web, Android, and iOS application security
● Experience with DevSecOps tools and integrating security into CI/CD
● Strong knowledge of cloud platforms (AWS/GCP/Azure) and their security models
● Familiarity with bug bounty programs and responsible disclosure practices
● Familiarity with tools like Burp Suite, MobSF, OWASP ZAP, Terraform, Checkov, etc.
● Good knowledge of API security
● Scripting experience (Python, Bash, or similar) for automation tasks
Preferred Qualifications:
● OSCP, CEH, AWS Security Specialty, or similar certifications
● Experience working in a regulated environment (e.g., FinTech, InsurTech)
Responsibilities:
● Perform security reviews, vulnerability assessments, and penetration testing for Web, Android, iOS, and API endpoints
● Perform threat modelling to anticipate potential attack vectors and improve the security architecture of complex or cross-functional components
● Identify and remediate OWASP Top 10 and mobile-specific vulnerabilities
● Conduct secure code reviews and red team assessments
● Integrate SAST, DAST, SCA, and secret scanning tools into CI/CD pipelines
● Automate security checks using tools like SonarQube, Snyk, Trivy, etc.
● Maintain and manage vulnerability scanning infrastructure
● Perform security assessments of AWS, Azure, and GCP environments, with an emphasis on container security, particularly for Docker and Kubernetes
● Implement guardrails for IAM, network segmentation, encryption, and cloud monitoring
● Contribute to infrastructure hardening for containers, Kubernetes, and virtual machines
● Triage bug bounty reports and coordinate remediation with engineering teams
● Act as the primary responder for external security disclosures
● Maintain documentation and metrics related to bug bounty and penetration testing activities
● Collaborate with developers and architects to ensure secure design decisions
● Lead security design reviews for new features and products
● Provide actionable risk assessments and mitigation plans to stakeholders
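The "integrate SAST, DAST, SCA, and secret scanning into CI/CD" responsibility above usually reduces to a small gate script: parse the scanner's report and fail the pipeline when severe findings appear. A hedged sketch assuming a generic JSON report shape (the field names are illustrative, not any specific tool's schema):

```python
import json

def security_gate(report_json, fail_on=frozenset({"CRITICAL", "HIGH"})):
    """Return (passed, blocking_findings) for a CI security gate.

    report_json: JSON string with a list of findings, each carrying
    a "severity" field -- an assumed, tool-agnostic shape.
    """
    findings = json.loads(report_json)
    blocking = [f for f in findings if f.get("severity") in fail_on]
    return (len(blocking) == 0, blocking)

# Example report, e.g. converted from a scanner's native output.
report = json.dumps([
    {"id": "CVE-2024-0001", "severity": "HIGH"},
    {"id": "CVE-2024-0002", "severity": "LOW"},
])
passed, blocking = security_gate(report)
print(passed)                          # False -- one HIGH finding blocks
print([f["id"] for f in blocking])
```

In a real pipeline this script would exit non-zero on failure so the CI stage is marked red; tools like Trivy or Snyk emit JSON reports that can be adapted into this kind of gate.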
About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning.
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members.
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to have:
- Prior experience in working with startups or product-based companies
- Experience mentoring tech leads and helping shape engineering culture
- Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Role Objective
We are looking for a proactive InfoSec Associate to support our compliance and audit functions. You will play a key role in maintaining our ISO standards, handling vendor security assessments, and ensuring our documentation is audit-ready for our banking and NBFC clients.
Key Responsibilities
- Audit Support: Assist in internal and external audits for ISO 27001, SOC2, and ISO 27701.
- Vendor Compliance: Independently handle and respond to detailed Vendor Security Questionnaires from banks and NBFCs.
- Evidence Management: Collect, organize, and present technical audit evidence from engineering and IT teams.
- Policy & Documentation: Help draft and review Security Policies, SOPs, and ISMS documentation.
- Risk Tracking: Track audit observations and manage the Corrective Action Plan (CAPA) to ensure timely remediation.
- Data Privacy: Assist in aligning internal processes with the DPDP Act and GDPR requirements.
Required Skills & Competencies
- Framework Knowledge: Basic understanding of ISO 27001 and Risk Assessment principles.
- Technical Literacy: Ability to understand AWS/Azure cloud security settings from a compliance standpoint.
- Documentation: High proficiency in organizing audit trails and drafting professional security reports.
- Communication: Comfortable interacting with external auditors and internal technical teams.
Preferred Certifications (Good to Have)
- ISO 27001 Internal Auditor
- CompTIA Security+
- CISA (In-progress/Foundation)
Brief Description:
We're seeking an AI/ML Engineer to join our team. As AI/ML Engineer, you will be responsible for designing, developing, and implementing artificial intelligence (AI) and machine learning (ML) solutions to solve real-world business problems. You will work closely with engineering teams, including software engineers, domain experts, and product managers, to deploy and integrate Applied AI/ML solutions into the products that are being built at NonStop io. Your role will involve researching cutting-edge algorithms and data processing techniques, and implementing scalable solutions to drive innovation and improve the overall user experience.
Responsibilities
● Applied AI/ML engineering: building engineering solutions on top of the AI/ML tooling available in the industry today, e.g., engineering APIs around OpenAI models
● AI/ML Model Development: Design, develop, and implement machine learning models and algorithms that address specific business challenges, such as natural language processing, computer vision, recommendation systems, anomaly detection, etc.
● Data Preprocessing and Feature Engineering: Cleanse, preprocess, and transform raw data into suitable formats for training and testing AI/ML models. Perform feature engineering to extract relevant features from the data
● Model Training and Evaluation: Train and validate AI/ML models using diverse datasets to achieve optimal performance. Employ appropriate evaluation metrics to assess model accuracy, precision, recall, and other relevant metrics
● Data Visualization: Create clear and insightful data visualizations to aid in understanding data patterns, model behaviour, and performance metrics
● Deployment and Integration: Collaborate with software engineers and DevOps teams to deploy AI/ML models into production environments and integrate them into various applications and systems
● Data Security and Privacy: Ensure compliance with data privacy regulations and implement security measures to protect sensitive information used in AI/ML processes
● Continuous Learning: Stay updated with the latest advancements in AI/ML research, tools, and technologies, and apply them to improve existing models and develop novel solutions
● Documentation: Maintain detailed documentation of the AI/ML development process, including code, models, algorithms, and methodologies for easy understanding and future reference.
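The model-evaluation responsibility above names precision and recall; both fall out directly from prediction/label pairs. A dependency-free toy for binary classification (in practice a library such as scikit-learn would compute these):

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(p, r)   # tp=2, fp=1, fn=1 -> precision 2/3, recall 2/3
```

Precision answers "of what the model flagged, how much was right"; recall answers "of what was actually positive, how much did the model catch" -- the trade-off between them drives most threshold-tuning decisions.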
Qualifications & Skills
● Bachelor's, Master's, or PhD in Computer Science, Data Science, Machine Learning, or a related field. Advanced degrees or certifications in AI/ML are a plus
● Proven experience as an AI/ML Engineer, Data Scientist, or related role, ideally with a strong portfolio of AI/ML projects
● Proficiency in programming languages commonly used for AI/ML. Preferably Python
● Familiarity with popular AI/ML libraries and frameworks, such as TensorFlow, PyTorch, scikit-learn, etc.
● Familiarity with popular AI/ML models such as GPT-3, GPT-4, Llama 2, BERT, etc.
● Strong understanding of machine learning algorithms, statistics, and data structures
● Experience with data preprocessing, data wrangling, and feature engineering
● Knowledge of deep learning architectures, neural networks, and transfer learning
● Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for scalable AI/ML deployment
● Solid understanding of software engineering principles and best practices for writing maintainable and scalable code
● Excellent analytical and problem-solving skills, with the ability to think critically and propose innovative solutions
● Effective communication skills to collaborate with cross-functional teams and present complex technical concepts to non-technical stakeholders
We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.
Key Responsibilities:
• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.
• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.
• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).
• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.
• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.
• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.
• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.
• Optimize models for performance, scalability, and reliability.
• Maintain documentation and promote knowledge sharing within the team.
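The RAG and semantic-search responsibilities above rest on one core step: embed documents and a query into vectors, then rank documents by similarity. A dependency-free toy using bag-of-words vectors in place of real embeddings (in production this would be an embedding model plus a vector database such as FAISS, Pinecone, or ChromaDB):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query -- the 'R' in RAG.
    The retrieved passages would then be placed into the LLM prompt."""
    scored = sorted(docs, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return scored[:k]

docs = [
    "quarterly revenue grew due to strong cloud sales",
    "the office cafeteria menu changes on mondays",
    "churn forecasting model flags at-risk accounts",
]
top = retrieve("why did revenue grow this quarter", docs, k=1)
print(top[0])   # the revenue document ranks first
```

Swapping `embed` for a real embedding model and the sorted scan for an approximate-nearest-neighbour index is what turns this toy into a production RAG retriever.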
Mandatory Requirements:
• 4+ years of relevant experience as an AI/ML Engineer.
• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.
• Experience implementing RAG pipelines and prompt engineering techniques.
• Strong programming skills in Python.
• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with vector databases (FAISS, Pinecone, ChromaDB).
• Strong understanding of SQL and database systems.
• Experience integrating AI solutions into BI tools (Power BI, Tableau).
• Strong analytical, problem-solving, and communication skills.
Good to Have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Docker or Kubernetes.
• Exposure to NLP, computer vision, or deep learning use cases.
• Experience in MLOps and CI/CD pipelines
Way2DreamJobs is building a premium cloud mentorship ecosystem focused on real-world Microsoft Azure and Modern Workplace skills.
We are inviting experienced Azure professionals to collaborate as founding weekend mentors for a remote mentorship program.
This is not a traditional full-time job. It is a flexible mentor collaboration model designed for working IT professionals who want to share real enterprise experience.
Responsibilities:
• Conduct weekend mentorship sessions
• Guide learners through practical Azure scenarios
• Support hands-on lab oriented learning
Ideal Profile:
• 4+ years Azure infrastructure experience
• Exposure to Microsoft Intune or M365 device management preferred
• Comfortable guiding professionals in live sessions
Benefits:
• Remote weekend engagement
• Build industry mentor brand authority
• Paid mentorship collaboration (structure discussed during call)
Job Details
- Job Title: Lead Software Engineer - Java, Python, API Development
- Industry: Global digital transformation solutions provider
- Domain: Information Technology (IT)
- Experience Required: 8-10 years
- Employment Type: Full Time
- Job Location: Pune & Trivandrum/ Thiruvananthapuram
- CTC Range: Best in Industry
Job Description
Job Summary
We are seeking a Lead Software Engineer with strong hands-on expertise in Java and Python to design, build, and optimize scalable backend applications and APIs. The ideal candidate will bring deep experience in cloud technologies, large-scale data processing, and leading the design of high-performance, reliable backend systems.
Key Responsibilities
- Design, develop, and maintain backend services and APIs using Java and Python
- Build and optimize Java-based APIs for large-scale data processing
- Ensure high performance, scalability, and reliability of backend systems
- Architect and manage backend services on cloud platforms (AWS, GCP, or Azure)
- Collaborate with cross-functional teams to deliver production-ready solutions
- Lead technical design discussions and guide best practices
Requirements
- 8+ years of experience in backend software development
- Strong proficiency in Java and Python
- Proven experience building scalable APIs and data-driven applications
- Hands-on experience with cloud services and distributed systems
- Solid understanding of databases, microservices, and API performance optimization
Nice to Have
- Experience with Spring Boot, Flask, or FastAPI
- Familiarity with Docker, Kubernetes, and CI/CD pipelines
- Exposure to Kafka, Spark, or other big data tools
Skills
Java, Python, API Development, Data Processing, AWS Backend
Must-Haves
Java (8+ years), Python (8+ years), API Development (8+ years), Cloud Services (AWS/GCP/Azure), Database & Microservices
Mandatory Skills: Java, API development, and AWS
Notice period: 0 to 15 days only
Job stability is mandatory
Location: Pune, Trivandrum
Brief Description:
We are looking for a passionate and experienced Full Stack Engineer to join our engineering team. The ideal candidate will have strong experience in both frontend and backend development, with the ability to design, build, and scale high-quality applications. You will collaborate with cross-functional teams to deliver robust and user-centric solutions.
Roles and Responsibilities:
● Design, develop, and maintain scalable web applications
● Build responsive and high-performance user interfaces
● Develop secure and efficient backend services and APIs
● Collaborate with product managers, designers, and QA teams to deliver features
● Write clean, maintainable, and testable code
● Participate in code reviews and contribute to engineering best practices
● Optimize applications for speed, performance, and scalability
● Troubleshoot and resolve production issues
● Contribute to architectural decisions and technical improvements.
Requirements:
● 3 to 5 years of experience in full-stack development
● Strong proficiency in frontend technologies such as React, Angular, or Vue
● Solid experience with backend technologies such as Node.js, .NET, Java, or Python
● Experience in building RESTful APIs and microservices
● Strong understanding of databases such as PostgreSQL, MySQL, MongoDB, or SQL Server
● Experience with version control systems like Git
● Familiarity with CI/CD pipelines
● Good understanding of cloud platforms such as AWS, Azure, or GCP
● Strong understanding of software design principles and data structures
● Experience with containerization tools such as Docker
● Knowledge of automated testing frameworks
● Experience working in Agile environments
Why Join Us?
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethic and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
- Strong experience in Azure – mainly Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines.
- Experience registering and deploying ML/AI/GenAI models via Azure ML Studio.
- Working knowledge of deploying models in AKS clusters.
- Design and implement data processing, training, inference, and monitoring pipelines using Azure ML.
- Excellent Python skills: environment setup and dependency management, coding to best practices, and familiarity with automated code-quality tools such as linters and Black.
- Experience with MLflow for model experiments, logging artifacts and models, and monitoring.
- Experience in orchestrating machine learning pipelines using MLOps best practices.
- Experience in DevOps with CI/CD knowledge (Git in Azure DevOps).
- Experience in model monitoring (drift detection and performance monitoring).
- Fundamentals of data engineering.
- Docker-based deployment is good to have.
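The drift-detection requirement above can be illustrated with the Population Stability Index (PSI), a common score comparing a feature's training distribution against live traffic. A minimal sketch (the 0.1/0.2 thresholds are a common rule of thumb; in this stack, Azure ML's built-in data-drift monitors would normally handle this):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected/actual: lists of bin proportions summing to ~1.
    Rule of thumb: PSI < 0.1 stable, 0.1-0.2 moderate, > 0.2 drifted.
    """
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # avoid log(0) on empty bins
        total += (a - e) * math.log(a / e)
    return total

train_bins = [0.25, 0.25, 0.25, 0.25]   # feature distribution at training
stable = [0.24, 0.26, 0.25, 0.25]       # live traffic, barely changed
drifted = [0.05, 0.15, 0.30, 0.50]      # live traffic, shifted heavily

print(psi(train_bins, stable) < 0.1)    # True: no alert
print(psi(train_bins, drifted) > 0.2)   # True: raise a drift alert
```

A monitoring pipeline would compute this per feature on a schedule, log the scores (e.g., to MLflow), and alert or trigger retraining when the threshold is crossed.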
15+ years in enterprise product development
5+ years in Director/VP-level product leadership
Proven AI/ML product commercialization experience
Expertise in Industry 4.0 (IoT, predictive maintenance, digital twins)
Hands-on experience with AWS/Azure/GCP cloud platforms
Strong architecture experience in microservices ecosystems
Experience implementing MLOps and DevSecOps frameworks
Experience integrating MES, ERP, SCADA, automation platforms
SaaS business model and enterprise software commercialization expertise
Experience leading large engineering/product teams