
50+ Windows Azure Jobs in India

Apply to 50+ Windows Azure Jobs on CutShort.io. Find your next job, effortlessly. Browse Windows Azure Jobs and apply today!

Redtring
Keshav Senthil
Posted by Keshav Senthil
Hyderabad
1 - 3 yrs
₹8L - ₹12L / yr
Kotlin
Java
Spring Boot
React.js
Amazon Web Services (AWS)
+6 more

Software Engineer (Backend) – Kotlin & React

About Us

We are a high-agency startup building elegant technological solutions to real-world problems.

Our mission is to build world-class systems from scratch that are lean, fast, and intelligent. We are currently operating in stealth mode, developing deeply technical products involving Kotlin, React, Azure, AWS, GCP, Google Maps integrations, and algorithmically intensive backends.

We are building a team of builders — not ticket takers. If you want to design systems, make real decisions, and own your work end-to-end, this is the place for you.

Role Overview

As a Software Engineer, you will take full ownership of building and scaling critical product systems. You will work directly with the founding team to transform complex real-world problems into scalable technical solutions.

This role is ideal for engineers who enjoy thinking deeply about systems, writing clean code, and building products from 0 → 1.

Key Responsibilities

System Development & Architecture

  • Design, develop, and maintain scalable backend services, primarily using Kotlin or JVM-based languages (Java/Scala).
  • Architect systems that are robust, high-performance, and production-ready.
  • Apply strong data structures, algorithms, and system design principles to solve complex engineering challenges.

Full Stack Development

  • Build fast, maintainable front-end applications using React.
  • Ensure seamless integration between frontend systems and backend services.

Cloud Infrastructure

  • Design and manage cloud architecture using AWS, Azure, and/or Google Cloud Platform (GCP).
  • Implement scalable deployment pipelines, monitoring, and infrastructure optimization.

Product & Technical Collaboration

  • Work closely with founders and product stakeholders to translate business problems into technical solutions.
  • Contribute actively to product and engineering roadmap decisions.

Performance Optimization

  • Continuously improve system performance, scalability, and reliability.
  • Implement efficient algorithms and system optimizations to gain a technical advantage.

Engineering Excellence

  • Write clean, well-tested, and maintainable code.
  • Maintain strong engineering standards across the codebase.

Required Skills & Qualifications

We value capability and ownership over years of experience. Whether you have 10 years of experience or none, what matters is your ability to build and solve hard problems.

Core Requirements

  • Strong computer science fundamentals (Data Structures, Algorithms, System Design).
  • Experience with Kotlin or JVM languages such as Java or Scala.
  • Experience building modern React applications.
  • Hands-on experience with cloud platforms (AWS / Azure / GCP).
  • Experience designing and deploying scalable distributed systems.
  • Strong problem-solving and analytical thinking.

Preferred / Bonus Skills

  • Experience with Google Maps APIs or geospatial integrations.
  • Prior startup experience.
  • Contributions to open-source projects.
  • Personal side projects demonstrating strong engineering ability.

Ideal Candidate

You will thrive in this role if you:

  • Take ownership of problems, not just tasks.
  • Are comfortable working in high-ambiguity environments.
  • Have a builder mindset and enjoy creating systems from scratch.
  • Learn quickly and execute with speed and precision.

This Role May Not Be For You If

  • You prefer strict task assignments and detailed specifications before starting work.
  • You want to focus only on coding tickets without product involvement.
  • You prefer large teams with multiple layers of management.

Why Join Us

  • Build 0 → 1 products with massive ownership.
  • Work in a flat organization with no unnecessary hierarchy.
  • Collaborate directly with founders and core product builders.
  • Your contributions will have immediate and visible impact.
  • Flexible remote work environment.
  • Opportunity to shape the technology, culture, and future of the company.

If you are passionate about building powerful systems, solving complex problems, and owning your work, we would love to hear from you.


https://loopx.redstring.co.in/form/69e4c676e8ea0f6af4ed066e

Bell Techlogix
Pemmraju VenkatVandita
Posted by Pemmraju VenkatVandita
Hyderabad
5 - 10 yrs
₹15L - ₹20L / yr
Generative AI
Microsoft Windows Azure
Python
SQL
Windows Azure
+1 more

The AI Data Engineer will be responsible for designing, building, and operating scalable data pipelines and curated data assets that power machine learning, generative AI, and intelligent automation solutions in an SLA-driven managed services environment. This role focuses on data ingestion, transformation, governance, and operational reliability across cloud and hybrid environments, enabling use cases such as knowledge retrieval (RAG), conversational AI, predictive analytics, and AI-assisted service management. The ideal candidate combines strong data engineering fundamentals with an understanding of AI workload requirements, including quality, lineage, privacy, and performance.

 

Key Responsibilities 

  • Design, build, and operate production-grade data pipelines that support AI/ML and generative AI workloads in managed services environments
  • Develop curated, analytics-ready datasets and data products to enable model training, grounding, feature generation, and AI search/retrieval
  • Implement data ingestion patterns for structured and unstructured sources (APIs, databases, files, event streams, documents)
  • Build and maintain transformation workflows with strong testing and validation
  • Enable Retrieval-Augmented Generation (RAG) by preparing document corpora, chunking strategies, metadata enrichment, and vector indexing patterns
  • Integrate data pipelines with application services
  • Support ITSM and enterprise workflow data needs, including ServiceNow data integration, CMDB/incident data quality improvements, and automation enablement
  • Implement observability for data pipelines (monitoring, alerting, SLAs/SLOs) and perform root cause analysis for pipeline failures or data quality incidents
  • Apply data governance and security best practices
  • Collaborate with ML Engineers, DevOps/SRE, and solution architects to operationalize end-to-end AI solutions
  • Contribute to reusable patterns, templates, and standards within the Bell Techlogix AI Center of Excellence
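
The RAG-enablement bullet above starts with splitting source documents into overlapping chunks and attaching retrieval metadata before anything is embedded or indexed. A minimal pure-Python sketch of that step (the 50-word window, 10-word overlap, and record fields are illustrative choices, not prescribed by the posting):

```python
def chunk_text(text: str, max_words: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word-window chunks for vector indexing."""
    words = text.split()
    if len(words) <= max_words:
        return [" ".join(words)]
    step = max_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

def enrich(chunks: list[str], source: str) -> list[dict]:
    """Attach retrieval metadata (source document, position) to each chunk."""
    return [{"id": f"{source}-{i}", "source": source, "text": c}
            for i, c in enumerate(chunks)]

doc = " ".join(f"word{i}" for i in range(120))
records = enrich(chunk_text(doc), "kb-article-1")
```

Each record would then be embedded and written to a vector index; the overlap keeps sentences that straddle a chunk boundary retrievable from either side.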

 

Required Qualifications 

  • Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience
  • 5+ years of experience in data engineering, analytics engineering, or platform data operations
  • Strong proficiency in SQL and Python; experience with data modeling and dimensional concepts
  • Hands-on experience with Azure data services (e.g., Data Factory, Synapse, Databricks, Storage, Key Vault) or equivalent cloud tooling
  • Experience building reliable pipelines with scheduling, dependency management, and automated testing/validation
  • Experience supporting production data platforms with incident management, troubleshooting, and root cause analysis
  • Understanding of data security, privacy, and governance principles in enterprise environments

 

Preferred Qualifications 

  • Experience enabling AI/ML workloads: feature engineering, training data preparation, and integration with Azure Machine Learning
  • Experience with unstructured data processing for generative AI
  • Familiarity with vector databases or vector search and RAG patterns
  • Experience with event streaming and messaging
  • Familiarity with the ServiceNow data model and integration patterns (Table API, export, CMDB/ITSM reporting)
  • Relevant certifications (Microsoft Azure Data Engineer, Azure AI Engineer, Databricks)

Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 12 yrs
Upto ₹40L / yr (Varies)
Windows Azure
Kubernetes
Terraform
DevOps

Role & Responsibilities

  • Design, develop, and deliver automation solutions to enhance platform functionality and reliability.
  • Deploy, manage, and maintain Azure cloud infrastructure ensuring high availability, scalability, and security.
  • Champion and implement Infrastructure as Code (IaC) practices using Terraform.
  • Build and maintain containerized environments using Kubernetes (AKS).
  • Develop self-service, self-healing, monitoring, and alerting systems for cloud platforms.
  • Automate development, testing, and deployment workflows using CI/CD pipelines.
  • Integrate DevOps tools such as Git, Jenkins/Azure DevOps, SonarQube, Artifactory, and Docker to streamline delivery pipelines.
  • Ensure platform observability through monitoring, logging, and alerting frameworks.

Requirements

  • 6+ years of experience in Cloud / DevOps / Platform Engineering roles.
  • Strong hands-on experience with Microsoft Azure cloud infrastructure.
  • Experience working with Azure services such as compute, networking, storage, identity, and messaging services.
  • Expertise in container orchestration using Kubernetes (AKS).
  • Strong experience implementing Infrastructure as Code using Terraform.
  • Familiarity with cloud-native and microservices architecture patterns.
  • Experience with relational and NoSQL databases such as PostgreSQL or Cassandra.

Additional Skills

  • Strong Linux administration and troubleshooting skills.
  • Programming or scripting experience in Bash, Python, Java, or similar languages.
  • Hands-on experience with CI/CD tools such as Jenkins, Git, Maven, or Azure DevOps.
  • Experience managing multi-region or high-availability cloud environments.
  • Familiarity with Agile / Scrum / DevOps practices and collaboration tools.


Deqode

purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Python
Large Language Models (LLM)
FastAPI
Windows Azure
CI/CD

👉 Job Title: Senior Backend Developer

🌟 Experience: 5-7 Years

💡 Location: Bangalore

👉 Notice Period: Immediate joiners

💡 Work Mode: 5 days work from office

(Candidates serving notice period are preferred)


Role Summary

We are seeking a Senior Backend Developer with strong expertise in Python and FastAPI to build scalable, high-performance backend systems integrated with LLM technologies on Azure. The role involves designing distributed systems, optimizing data pipelines, and ensuring secure, enterprise-grade applications.


Key Responsibilities

  • Develop backend services using Python & FastAPI (async, middleware)
  • Build high-concurrency, scalable systems and microservices
  • Work with Azure services and event-driven architectures
  • Optimize MongoDB & Redis for performance
  • Integrate LLM APIs (OpenAI, Gemini, Claude)
  • Implement security (JWT, encryption, API management)

Mandatory Skills (Top 3)

  1. Strong Python backend development with FastAPI
  2. Hands-on experience with Microsoft Azure cloud
  3. Experience in building scalable distributed/microservices systems


Good to Have

  • Docker, Kubernetes, CI/CD
  • LLM frameworks (LangChain, vector DBs)
  • Monitoring tools and real-time data processing


Deqode

purvisha Bhavsar
Posted by purvisha Bhavsar
Bengaluru (Bangalore)
5 - 7 yrs
₹4L - ₹12L / yr
Java
Python
NodeJS (Node.js)
Windows Azure
Google Cloud Platform (GCP)
+3 more

👉 Job Title: Backend Developer

🌟 Experience: 5-7 Years

💡 Location: Bangalore

👉 Notice Period: Immediate joiners

💡 Work Mode: 5 days work from office

(Candidates serving notice period are preferred)


Role Summary

We are looking for a Backend Engineer to join the Platform Implementation Team, responsible for building scalable, secure, and high-performance backend systems for a multi-cloud Data & AI platform. You will design microservices, develop REST APIs, and enable seamless data integration across enterprise systems like CRM and ERP.


💫 Key Responsibilities

✅ Design and develop scalable microservices and RESTful APIs

✅ Build event-driven architectures for asynchronous processing

✅ Integrate backend systems with cloud platforms (GCP/Azure)

✅ Ensure secure, reliable, and optimized data handling

✅ Collaborate with cross-functional teams (UI, Data, Platform)

✅ Follow best practices in coding, testing, CI/CD, and containerization


💫 Mandatory Skills (Top 3)

✅ Strong backend programming experience (Python / Node.js / Java)

✅ Expertise in API development & Microservices architecture

✅ Hands-on experience with Cloud platforms (GCP or Azure)





Redtring
Keshav Senthil
Posted by Keshav Senthil
Hyderabad
3 - 6 yrs
₹15L - ₹20L / yr
Java
Kotlin
Amazon Web Services (AWS)
Redis
Apache Kafka
+7 more

About Us:


We are hiring for Zeromoblt (https://zeromoblt.com/), a pre-seed-funded, high-agency Hyderabad-based startup revolutionizing student transportation with lean, intelligent tech stacks.


Our mission: architect world-class systems from scratch—fast, scalable, and algorithmically sharp—using Kotlin, React, AWS (EC2, IoT, IAM), Google Maps, and multi-cloud setups. Stealth mode operations mean you're building 0→1 products with founders, not fixing tickets.


What You'll Do

  • Lead end-to-end ownership of complex systems: design, build, deploy, monitor, and iterate at scale.
  • Architect high-performance backends in Kotlin (or JVM langs) that handle real-time routing and IoT data.
  • Craft scalable React UIs that power ops dashboards and parent-facing apps.
  • Drive cloud decisions across AWS, Azure/GCP—optimising costs for our bootstrap runway.
  • Apply DSA/system design to solve hard problems like dynamic route optimization and predictive scaling.
  • Shape the engineering roadmap: propose, prioritise, and ship features with founders.
  • Mentor juniors while executing solo on high-impact bets—no layers, just results.
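
The dynamic route optimization mentioned above is an open-ended problem; a nearest-neighbour greedy heuristic is one simple baseline to reason from (the stop names and planar coordinates are invented for illustration — real routing would use road distances, not straight lines):

```python
import math

def nearest_neighbor_route(depot: tuple[float, float],
                           stops: dict[str, tuple[float, float]]) -> list[str]:
    """Greedy TSP heuristic: always drive to the closest unvisited stop."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    route, here = [], depot
    remaining = dict(stops)
    while remaining:
        name = min(remaining, key=lambda n: dist(here, remaining[n]))
        route.append(name)
        here = remaining.pop(name)
    return route

stops = {"A": (0.0, 1.0), "B": (5.0, 5.0), "C": (0.0, 2.0)}
order = nearest_neighbor_route((0.0, 0.0), stops)  # visits A, then C, then B
```

Greedy gives a fast, explainable baseline; production systems typically layer 2-opt improvement or a proper solver on top of it.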


We're Looking For

  • 3-6 years of hands-on engineering where you've owned and shipped production systems (prove it with code/stories).
  • Elite CS fundamentals: advanced DSA, system design (distributed systems a must), design patterns.
  • Mastery of Kotlin/Java + modern React; real AWS experience (EC2, IAM, CLI—you know our stack).
  • Proven "leap-taker": startup grit, side projects, or open-source that screams hunger.
  • Figure-it-out velocity: you thrive in chaos, learn our domain overnight, and deliver 10x faster than peers.


This Role Is Not For You If…

  • You need structured roadmaps, PM hand-holding, or big-tech process.
  • Comfort > impact: stable salary over equity upside and chaos.
  • You've never worn all hats (dev, ops, product) in a resource-constrained environment.


Why Join Us

  • Massive ownership: lead tech for 10k+ students, direct founder access, shape ZeroMoblt's scale.
  • Flat, high-trust team: flexible Hyderabad/remote, no bureaucracy.
  • Hungry culture: we hire hustlers scaling from 700 to 10k students—your wins are visible daily.
  • Hungry to Leap? Apply now!
Planview

Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
10+ yrs
Upto ₹57L / yr (Varies)
Site survey
Google Cloud Platform (GCP)
Linux administration
Microsoft Windows Server administration
CI/CD
+11 more

Company Overview:

Planview has one mission: to build the future of connected work with market-leading portfolio management and work management solutions. Planview is a recognized innovator and industry leader; our solutions enable organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Our solutions span every class of work, resource, and organization to address the varying needs of diverse and distributed teams, departments, and enterprises.


As a Sr CloudOps Engineer II, you will oversee teams of Engineers and be a champion for configuration management, technologies in the cloud, and continuous improvement. You will work closely with global leaders to ensure that our applications, infrastructure, and processes are scalable, secure, and supportable. By leveraging your production experience and development skills you will work hand in hand with Engineers (Dev, DevOps, DBOps) to design and implement solutions that improve delivery of value to customers, reduce costs, and eliminate toil.


Responsibilities (What you will do):

  • Guide the professional development of Engineers and support the teams to accomplish business goals
  • Work closely with leaders in Israel to align on priorities and architect, deliver, and manage our products
  • Build systems that are secure, scalable, and self-healing.
  • Manage and improve deployment pipelines.
  • Triage and remediate production issues.
  • Participate in on-call rotations for escalations.


Qualifications (What you will bring):

  • Bachelor's degree in CS or equivalent experience in a related field.
  • 2+ years managing Engineering teams.
  • 8+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment
  • 5+ years administering Linux and Windows environments.
  • 3+ years programming / scripting experience (e.g., Python, JavaScript, PowerShell)
  • Strong technical knowledge of OSs (Linux and Windows), virtualization, storage systems, networking, and firewall implementations
  • Experience maintaining production environments on-premise (90%) and in the cloud (10%) (e.g., AWS, Google Cloud, Azure)
  • Solid understanding of networking principles and how they apply to data flow and security.
  • Experience automating deployments of cloud-based services (e.g., AWS EC2 / RDS, Docker, Kubernetes)
  • Experience managing CI/CD infrastructures, with strong proficiency in platforms like Bitbucket and Jenkins to streamline deployment pipelines and ensure efficient software delivery.
  • Management of resources using Infrastructure as Code tools (e.g., CloudFormation, Terraform, Chef)
  • Knowledge of observability tools such as LogicMonitor, New Relic, Prometheus, and Coralogix, as well as their implementation.
  • Experience working within Agile and Lean software development teams.
  • Experience working in globally distributed teams.
  • Ability to see the big picture and manage risks.
Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore), Mumbai, Trivandrum
4 - 7 yrs
Upto ₹35L / yr (Varies)
Large Language Models (LLM) tuning
Deep Learning
Google Cloud Platform (GCP)
Google Vertex AI
Windows Azure
+2 more

Build, deploy, and maintain production-grade AI/ML solutions for Fortune 500 enterprise clients on Google Cloud Platform. Hands-on role focused on shipping scalable AI systems across GenAI, agentic workflows, traditional ML, and computer vision.


Key Responsibilities:


Generative AI & Agentic Systems

  • Design and build GenAI applications (RAG, agentic workflows, multi-agent systems)
  • Develop intelligent systems with memory, planning, and reasoning capabilities
  • Implement prompt engineering, context optimization, and evaluation frameworks
  • Build observable and reliable multi-agent architectures

Traditional ML & Computer Vision

  • Develop ML pipelines (forecasting, recommendation, classification, regression)
  • Build production-grade computer vision solutions (document AI, image analysis)
  • Perform feature engineering, model optimization, and benchmarking

MLOps & Production Engineering

  • Own end-to-end ML lifecycle (CI/CD, testing, versioning, deployment)
  • Build scalable APIs, microservices, and data pipelines
  • Monitor models, detect drift, and implement A/B testing frameworks

Knowledge Solutions

  • Architect knowledge graphs and semantic search systems
  • Implement hybrid retrieval (vector + keyword search)
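
Hybrid retrieval, as listed above, blends a lexical score with a vector-similarity score before ranking. A toy sketch with a hand-rolled cosine similarity (the two-dimensional embeddings, documents, and 0.5 weighting are all invented for illustration; real systems use BM25 and learned embeddings):

```python
import math

def keyword_score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query: str, query_vec: list[float], docs, alpha: float = 0.5):
    """Blend lexical and vector scores; alpha weights the keyword side."""
    scored = [(alpha * keyword_score(query, text)
               + (1 - alpha) * cosine(query_vec, vec), name)
              for name, text, vec in docs]
    return [name for _, name in sorted(scored, reverse=True)]

docs = [
    ("faq",   "reset your password via the portal", [1.0, 0.0]),
    ("intro", "welcome guide for new employees",    [0.0, 1.0]),
]
ranking = hybrid_rank("password reset", [0.9, 0.1], docs)
```

The keyword side rewards exact term matches that embeddings can miss (IDs, error codes), while the vector side catches paraphrases; the blend weight is usually tuned per corpus.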

Client Collaboration

  • Present technical solutions to enterprise clients
  • Collaborate with architects, data engineers, and business teams

Required Skills & Experience

  • 3–6 years of hands-on ML Engineering experience
  • Strong Python and software engineering fundamentals
  • Experience shipping production ML systems on cloud (GCP preferred)
  • Experience across GenAI, Traditional ML, Computer Vision
  • MLOps experience and RAG-based systems

Preferred

  • GCP Professional ML Engineer certification
  • Knowledge graphs / semantic search experience
  • Experience in regulated industries (Healthcare / BFSI)
  • Open-source or technical publications
Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Upto ₹45L / yr (Varies)
MLOps
Python
Databricks
Windows Azure
Amazon Web Services (AWS)

We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.

This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.


Responsibilities

  • Design, develop, and implement machine learning models and algorithms to solve complex business problems.
  • Collaborate with data scientists to transition models from research and development into production-ready systems.
  • Build and maintain scalable data pipelines for ML model training and inference using Databricks.
  • Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
  • Deploy and manage ML models in production environments on Azure, leveraging services such as:
      • Azure Machine Learning
      • Azure Kubernetes Service (AKS)
      • Azure Functions
  • Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
  • Ensure the reliability, performance, and scalability of ML systems in production.
  • Monitor model performance, detect model drift, and implement retraining strategies.
  • Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
  • Document model architecture, data flows, and operational procedures.
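
The drift-monitoring responsibility above is often implemented by comparing the serving distribution of a feature against its training baseline; the Population Stability Index (PSI) is one common statistic for this. A stdlib-only sketch (the bucket edges, sample data, and the conventional 0.2 alert threshold are illustrative, not from the posting):

```python
import math

def psi(expected: list[float], actual: list[float], edges: list[float]) -> float:
    """Population Stability Index between a baseline and a live sample."""
    def fractions(values):
        counts = [0] * (len(edges) + 1)
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bucket index for v
        total = len(values)
        # Small floor avoids log(0) when a bucket is empty.
        return [max(c / total, 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]           # training-time feature values
stable   = [0.1 * i + 0.01 for i in range(100)]    # live values, nearly identical
shifted  = [0.1 * i + 5.0 for i in range(100)]     # live values, mean moved by 5
edges = [2.5, 5.0, 7.5]

drift_ok = psi(baseline, stable, edges)    # small: no retraining needed
drift_bad = psi(baseline, shifted, edges)  # large: trigger retraining
```

A rule of thumb treats PSI below 0.1 as stable and above 0.2 as significant drift; in an MLOps pipeline the check runs on a schedule and a breach kicks off the retraining workflow.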

Qualifications

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.

Experience

  • 3+ years of professional experience as an ML Engineer or in a similar role.

Required Skills

  • Strong proficiency in Python for data manipulation, machine learning, and scripting.
  • Hands-on experience with machine learning frameworks, such as:
      • Scikit-learn
      • TensorFlow
      • PyTorch
      • Keras
  • Demonstrated experience with MLflow for:
      • Experiment tracking
      • Model management
      • Model deployment
  • Proven experience working with Microsoft Azure cloud services, specifically:
      • Azure Machine Learning
      • Azure Databricks
      • Related compute and storage services
  • Solid experience with Databricks for:
      • Data processing
      • ETL pipelines
      • ML model development
  • Strong understanding of MLOps principles and practices, including:
      • CI/CD for ML
      • Model versioning
      • Model monitoring
      • Model retraining
  • Experience with containerization and orchestration technologies, including:
      • Docker
      • Kubernetes (especially AKS)
  • Familiarity with SQL and data warehousing concepts.
  • Experience working with large datasets and distributed computing frameworks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Nice-to-Have Skills

  • Experience with other cloud platforms (AWS or GCP).
  • Knowledge of big data technologies such as Apache Spark.
  • Experience with Azure DevOps for CI/CD pipelines.
  • Familiarity with real-time inference patterns and streaming data.
  • Understanding of Responsible AI principles, including fairness, explainability, and privacy.

Certifications (Preferred)

  • Microsoft Certified: Azure AI Engineer Associate
  • Databricks Certified Machine Learning Associate (or higher) 
Inteliment Technologies

Ariba Khan
Posted by Ariba Khan
Pune
6 - 10 yrs
Upto ₹20L / yr (Varies)
Python
ETL
Snowflake
Spark
PowerBI
+3 more

About the company:

At Inteliment, we help organizations turn data into powerful decisions. With two decades of proven expertise, we work with global customers to solve complex business problems using advanced data and analytics solutions. Our ACE – Analytical Centre of Excellence brings together some of the best minds in data engineering, analytics, and AI to build next-generation decision intelligence platforms. If you are passionate about data engineering, modern data platforms, and solving real business problems, this role will give you the opportunity to work on global enterprise data ecosystems. 


About the role

We are seeking a highly skilled Data Architect with strong hands-on expertise in Data Engineering and/or Data Visualization tools, having 6+ years of experience in the pure Data Analytics domain. The ideal candidate will be responsible for architecting scalable data solutions, guiding technical teams, and ensuring robust data pipelines, analytics frameworks, and visualization ecosystems aligned with business objectives. 


Requirements:

  • Bachelor’s or master’s degree in Computer Science, Information Technology, or a related field.
  • 6+ years of hands-on experience in the Data Analytics domain.
  • Strong experience in designing enterprise data solutions.
  • Proven experience in handling large-scale data systems.
  • Experience in client-facing roles is preferred.
  • Certifications in a related field will be an added advantage.

Technical Skills

✔ Data Engineering Stack

  • Python / PySpark / SQL
  • ETL Tools (e.g., Informatica, Talend, SSIS, or equivalent)
  • Cloud Platforms (AWS / Azure / GCP)
  • Data Warehousing (Snowflake, Redshift, BigQuery, etc.)
  • Big Data Technologies (Spark, Hadoop – preferred)

✔ Visualization & BI Tools (At least one advanced tool mandatory)

  • Power BI
  • Tableau
  • Qlik
  • Looker or equivalent

✔ Database Technologies

  • SQL (MySQL, PostgreSQL, SQL Server, Oracle)
  • NoSQL (MongoDB, Cassandra – preferred)

✔ Additional Preferred Skills

  • Data Modeling (Star/Snowflake schema)
  • API integrations
  • CI/CD for data pipelines
  • Version control (Git)
  • Agile methodology exposure

Soft Skills

  • Leadership: Strong leadership and mentoring capabilities to guide technical teams.
  • Communication: Excellent communication skills for collaborating with cross-functional teams and stakeholders.
  • Problem-Solving: Analytical mindset with a keen attention to detail.
  • Adaptability: Ability to manage shifting priorities and requirements effectively.
  • Team Collaboration: Strong interpersonal skills for fostering a collaborative work environment.


Responsibilities:

✔ Solution Architecture & Design

  • Design end-to-end data architecture solutions including data ingestion, transformation, storage, and visualization.
  • Architect scalable and high-performance data pipelines.
  • Define best practices, standards, and governance frameworks for data analytics projects.

✔ Data Engineering

  • Build and optimize ETL/ELT pipelines.
  • Work with structured and unstructured datasets.
  • Design and implement data lakes, data warehouses, and modern data platforms.
  • Ensure data quality, integrity, and performance tuning.
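
The data-quality bullet above can be made concrete with a tiny gate on a pipeline: load rows, then count constraint violations (null foreign keys, duplicate business keys) before publishing downstream. A sqlite3 sketch (the `fact_sales` table, its columns, and the sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fact_sales (order_id INTEGER, customer_id INTEGER, amount REAL)")
# Row 3 has a null customer_id; order_id 3 appears twice.
rows = [(1, 101, 250.0), (2, 102, 99.5), (3, None, 40.0), (3, 103, 40.0)]
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", rows)

def quality_report(conn: sqlite3.Connection) -> dict:
    """Basic pipeline gates: null foreign keys and duplicate business keys."""
    null_fk = conn.execute(
        "SELECT COUNT(*) FROM fact_sales WHERE customer_id IS NULL").fetchone()[0]
    dupes = conn.execute(
        "SELECT COUNT(*) FROM (SELECT order_id FROM fact_sales "
        "GROUP BY order_id HAVING COUNT(*) > 1)").fetchone()[0]
    return {"null_customer_id": null_fk, "duplicate_order_id": dupes}

report = quality_report(conn)
```

In a real ETL/ELT job the same checks run against the warehouse after each load, and a nonzero count fails the run instead of silently publishing bad data.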

✔ Data Visualization & Analytics

  • Architect and implement enterprise-level dashboards and reporting solutions.
  • Define data models optimized for BI tools.
  • Guide teams in building intuitive, performance-driven visualizations.
  • Translate business requirements into scalable analytics solutions.

✔ Technical Leadership

  • Provide technical direction to data engineers, BI developers, and analysts.
  • Conduct code reviews and enforce architectural standards.
  • Collaborate with cross-functional teams including business stakeholders and delivery teams.
  • Mentor junior team members and drive capability building.

✔ Stakeholder Engagement

  • Participate in client discussions, solution presentations, and requirement workshops.
  • Provide effort estimations and solution proposals.
  • Act as a technical escalation point. 
Superclaims
Akshith Daithala
Posted by Akshith Daithala
Hyderabad
1 - 3 yrs
₹5L - ₹7.5L / yr
Python
FastAPI
PostgreSQL
SQLAlchemy
LangGraph
+11 more

About Superclaims

Superclaims modernizes health insurance claims adjudication with intelligent automation. We help insurers and TPAs replace manual, document-heavy workflows with faster, more accurate decisions at scale.


Role: Python Backend Developer

We are looking for a Python Backend Developer who is excited to build AI-powered automation products in a fast-paced startup environment.


What you'll do

- Build and maintain scalable backend systems and APIs

- Develop intelligent data extraction pipelines using AI/ML

- Design and implement agentic workflows with LangGraph

- Design efficient database schemas and optimize queries in PostgreSQL

- Integrate and work with LLMs (OpenAI, Gemini, or similar)

- Collaborate with product, frontend, and data teams to deliver end-to-end features

- Write clean, tested, and well-documented code


Must-have skills

- Strong proficiency in Python and a modern web framework (FastAPI or similar)

- Experience with PostgreSQL and an ORM (SQLAlchemy preferred)

- Solid understanding of RESTful API design and best practices

- Hands-on experience or strong familiarity with LangGraph

- Experience working with LLMs (OpenAI, Gemini, or similar providers)

- Comfort with Git/version control and collaborative development workflows


Nice-to-have skills

- Experience with Docker and containerized deployments

- Knowledge of Redis for caching or background tasks

- Exposure to cloud platforms (GCP, AWS, or Azure)

- Experience with vector databases and retrieval-augmented generation

- Basic prompt engineering skills

- Experience with object storage (S3/MinIO)


What we're looking for

- 1+ years of Python backend development experience (open to exceptional freshers)

- Fast learner with genuine curiosity about AI/ML and automation

- Prior startup experience preferred

- Ownership mindset, bias for action, and comfort with ambiguity

- Ready to relocate to Hyderabad (work location)


How to apply

Please share:

- Your resume

- GitHub/Portfolio link

- A brief note on why you're interested in AI-powered automation and Superclaims

Arcis India
Sarita Jena
Posted by Sarita Jena
Mumbai
6 - 8 yrs
₹12L - ₹20L / yr
Java
Spring Boot
Quarkus
Microservices
Webservices
+17 more

6 + years of hands-on development experience and in-depth knowledge of , Spring Java, Spring boot, Quarkus and nice to have front-end technologies like Angular, React JS

● Excellent Engineering skills in designing and implementing scalable solutions

● Good knowledge of CI/CD Pipeline with strong focus on TDD

Strong communication skills and ownership

● Exposure to Cloud, Kubernetes, Docker, Microservices is highly desired.

● Experience in working on public cloud environments like AWS, Azure, GCP w.r.t. solutions development, deployment & adoption of cloud-based technology components like IaaS / PaaS offerings

● Proficiency in PL/SQL and Database development.

● Strong in J2EE & OOPS design patterns.

Read more
Mumbai
6 - 12 yrs
₹10L - ₹25L / yr
Windows Azure
cloud

Job Summary

The Azure L3 Cloud Support Engineer is a subject-matter expert responsible for resolving the most complex cloud infrastructure and platform issues. This role serves as the final escalation point for Azure-related incidents, drives root cause analysis, improves platform reliability, and provides architectural guidance. The engineer works closely with cloud architects, security, and Microsoft Support to ensure stability, scalability, and security of enterprise Azure environments.

Key Responsibilities

• Act as Level 3 / final escalation point for Azure incidents and service outages
• Diagnose and resolve high-severity, complex, and recurring Azure issues
• Perform deep-dive troubleshooting across:
  o Azure Compute (VMs, VMSS, AKS)
  o Networking (VNet peering, Azure Firewall, Load Balancer, Application Gateway, ExpressRoute)
  o Storage (Blob, File, Disk, performance & latency issues)
  o PaaS services (App Services, Functions, Azure SQL, Cosmos DB)
• Lead root cause analysis (RCA) and implement long-term corrective actions
• Design and review high availability, disaster recovery, and resiliency architectures
• Optimize cloud environments for performance, cost, and security
• Define and enforce Azure best practices, standards, and governance
• Support and mentor L1/L2 engineers; create advanced knowledge articles
• Collaborate with Microsoft Premier / Unified Support on critical cases
• Review and approve production changes and complex deployments
• Contribute to automation using PowerShell, Azure CLI, ARM/Bicep, Terraform
• Participate in major incident management and post-incident reviews
• Provide architectural input for new cloud initiatives and migrations

Required Skills & Qualifications

• 6–10+ years of IT infrastructure experience with strong Azure specialization
• Expert-level knowledge of Azure IaaS, PaaS, and networking
• Deep understanding of:
  o Cloud security (RBAC, Entra ID, Key Vault, Defender for Cloud)
  o Hybrid connectivity and identity
  o Monitoring and observability (Azure Monitor, Log Analytics, App Insights)
• Strong scripting and automation skills (PowerShell preferred)
• Experience handling enterprise-scale production environments
• Strong troubleshooting, analytical, and decision-making skills
• Ability to lead during outages and communicate with stakeholders

Read more
Mid Size Product Engineering Services Company

Mid Size Product Engineering Services Company

Agency job
via Vidpro Consultancy Services by Vidyadhar Reddy
Remote, Bengaluru (Bangalore), Chennai, Hyderabad
20 - 26 yrs
₹65L - ₹120L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+18 more

This role will report to the Chief Technology Officer


You Will Be Responsible For


* Driving decision-making on enterprise architecture and component-level software design to ensure the timely build and delivery of our software platforms.

* Leading a team in building a high-performing and scalable SaaS product.

* Conducting code reviews to maintain code quality and follow best practices

* Developing DevOps practices that promote automation, including asset creation, enterprise strategy definition, and team training

* Developing and building microservices leveraging cloud services

* Working on application security aspects

* Driving innovation within the engineering team, translating product roadmaps into clear development priorities, architectures, and timely release plans to drive business growth.

* Creating a culture of innovation that enables the continued growth of individuals and the company

* Working closely with Product and Business teams to build winning solutions

* Leading talent management, including hiring, developing, and retaining a world-class team


Ideal Profile


* You possess a Degree in Engineering or a related field and have at least 20 years of experience as a Software Engineer, with 10+ years of experience leading teams and at least 4 years of experience building a SaaS / Fintech platform.

* Proficiency in MERN / Java / Full Stack.

* You have led a team in optimizing the performance and scalability of a product

* You have extensive experience with DevOps environments and CI/CD practices and can train teams.

* You're a hands-on leader, visionary, and problem solver with a passion for excellence.

* You can work in fast-paced environments and communicate asynchronously with geographically distributed teams.


What's on Offer?


* Exciting opportunity to drive the Engineering efforts of a reputed organisation

* Work alongside & learn from best in class talent

* Competitive compensation + ESOPs

Read more
Quantiphi

at Quantiphi

3 candid answers
1 video
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
6 - 10 yrs
Upto ₹40L / yr (Varies)
Windows Azure
databricks
Data Structures
Data engineering

We are hiring an Associate Technical Architect with strong expertise in Azure-based data platforms to design scalable data lakes, data warehouses, and enterprise data pipelines, while working with global teams.


Key Responsibilities

  • Design and implement scalable data lake, data warehouse, and lakehouse architectures on Azure
  • Build resilient data pipelines using Azure services
  • Architect and optimize cloud-based data platforms
  • Improve large-scale data processing and query performance
  • Collaborate with engineering teams, QA, product managers, and stakeholders
  • Communicate technical roadmap, risks, and mitigation strategies


Must-Have Skills:


  • 6+ years of experience in Azure Data Engineering / Data Architecture

Azure Data Platform

  • Experience with Azure Data Factory
  • Hands-on with Azure Databricks and PySpark
  • Experience with Azure Data Lake Storage
  • Knowledge of Azure Synapse or Azure SQL for data warehousing

Programming & Data Skills

  • Strong programming skills in Python and PySpark
  • Advanced SQL with query optimization and performance tuning
  • Experience building ETL / ELT data pipelines

Data Architecture Knowledge

  • Understanding of MPP databases
  • Knowledge of partitioning, indexing, and performance optimization
  • Experience with data modeling (dimensional, normalized, lakehouse)

Cloud Fundamentals

  • Azure security, networking, scalability, and disaster recovery
  • Experience with on-premise to Azure migrations

Certification (Preferred)

  • Azure Data Engineer or Azure Solutions Architect certification

Good-to-Have Skills

  • Domain experience in FSI, Retail, or CPG
  • Exposure to data governance tools
  • Experience with BI tools such as Power BI or Tableau
  • Familiarity with Terraform, CI/CD pipelines, or Azure DevOps
  • Experience with NoSQL databases such as Cosmos DB or MongoDB

Soft Skills

  • Strong problem-solving and analytical thinking
  • Good communication and stakeholder management
  • Ability to translate technical concepts into business outcomes
  • Experience working with global or distributed teams
Read more
Quanteon Solutions

at Quanteon Solutions

1 recruiter
DurgaPrasad Sannamuri
Posted by DurgaPrasad Sannamuri
Hyderabad
6 - 10 yrs
₹10L - ₹30L / yr
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconReact Native
skill iconAngular (2+)
SQL
+14 more

Key Requirements / Skills

  • 6+ years of overall experience in software development with strong expertise in building scalable web applications.
  • 2+ years of experience as a Technical Lead, managing development teams and driving project delivery.
  • Strong technical decision-making ability, including architecture design, technology selection, and implementation of best practices.
  • Front-end expertise: Strong experience in React, JavaScript, TypeScript, and building responsive and user-friendly UI/UX.
  • Back-end development: Hands-on experience with Node.js, RESTful APIs, API design, and server-side architecture.
  • AI/ML knowledge: Experience in implementing AI/ML models or integrating AI-based solutions to solve business problems.
  • Cloud & DevOps exposure: Experience with AWS/Azure, understanding of CI/CD pipelines, and cloud-based deployments.
  • Code quality & best practices: Experience in code reviews, Git version control, and ensuring maintainable and secure code.
  • Team leadership: Ability to mentor developers, guide technical discussions, and collaborate across teams.
  • Strong communication skills to effectively interact with technical and non-technical stakeholders.
  • Experience working in high-compliance environments such as healthcare systems is a plus.


Education Qualifications:

  • B.Tech/M.Tech in CSE/IT/AI/ML from a good university
Read more
Bengaluru (Bangalore)
15 - 25 yrs
₹3L - ₹20L / yr
Channel Sales
Google Cloud Platform (GCP)
skill iconAmazon Web Services (AWS)
Windows Azure
SaaS
+1 more

Job Description - Manager Sales

Minimum 15 years of experience

Should have experience in sales of the Cloud IT SaaS product portfolio which Savex deals with

Team management experience, leading the cloud business and its teams

Sales manager - Cloud Solutions

Reporting to Sr Management

Good personality

Distribution background

Keen on Channel partners

Good database of OEMs and channel partners.

Age group - 35 to 45yrs

Male Candidate

Good communication

B2B Channel Sales

Location - Bangalore


If interested reply with cv and below details


Total exp -

Current ctc - 

Exp ctc - 

Np -

Current location - 

Qualification - 

Total exp Channel Sales -

What are the Cloud IT products, you have done sales for? 

What is the Annual revenue generated through Sales?

Read more
AI Industry

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
5 - 15 yrs
₹18L - ₹25L / yr
PowerBI
SQL
Mobile App Development
Windows App Development
Scripting
+12 more

Description

As a Power Apps Developer, you will be at the forefront of crafting innovative, low‑code solutions that streamline business processes and empower end‑users across the organization. You will collaborate closely with functional analysts, business stakeholders, and fellow developers to translate complex requirements into intuitive, scalable applications on the Microsoft Power Platform. The role offers a dynamic environment where continuous learning is encouraged, providing access to the latest Power Apps features, Azure services, and integration techniques. You will contribute to a culture of knowledge sharing, participate in code reviews, and mentor junior team members, ensuring high‑quality deliverables that drive operational efficiency and measurable business impact.


Requirements:

  • 5–15 years of experience developing enterprise‑grade solutions using Microsoft Power Apps, Power Automate, and Power BI.
  • Strong proficiency in Canvas and Model‑Driven apps, Common Data Service (Dataverse), and integration with Azure services (e.g., Azure Functions, Logic Apps).
  • Solid understanding of relational databases, SQL, and data modeling concepts.
  • Experience with JavaScript, TypeScript, and RESTful APIs for extending Power Apps functionality.
  • Excellent problem‑solving abilities, strong communication skills, and a collaborative mindset.
  • Relevant certifications such as Microsoft Power Platform Developer Associate (PL‑400) are a plus.


Roles and Responsibilities:

  • Design, develop, and deploy custom Power Apps solutions that meet business requirements and adhere to best practices.
  • Create and maintain automated workflows using Power Automate to streamline repetitive tasks and improve efficiency.
  • Integrate Power Apps with external systems via connectors, APIs, and Azure services to ensure seamless data flow.
  • Perform performance tuning, debugging, and troubleshooting of applications to ensure optimal user experience.
  • Collaborate with business analysts and stakeholders to gather requirements, provide technical guidance, and deliver prototypes.
  • Conduct code reviews, enforce governance standards, and contribute to the development of a reusable component library.
  • Stay updated with the latest Power Platform releases, evaluate new features, and recommend adoption strategies.
  • Provide training and mentorship to junior developers and end‑users to foster platform adoption.


Must have skills

Power apps - 5 years

Microsoft Power Automate - 1 year


Nice to have skills

Canvas App Development and Scripting - 4 years

Canvas Apps Development - 4 years

SQL - 2 years

SharePoint APIs - 1 year

Power Fx - 2 years

C Sharp - 3 years

RESTful API - 2 years


Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Remote only
5 - 8 yrs
₹7L - ₹25L / yr
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Windows Azure
CI/CD
Retrieval Augmented Generation (RAG)
+1 more

🚀 Hiring: AI/ML and Gen AI Engineer

⭐ Experience: 5+ Years

⭐ Work Mode:- Remote

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


🌟 About the Role

We are looking for a highly skilled AI/ML Software Engineer to design, build, and productionize enterprise-grade AI solutions. This role focuses on Generative AI, RAG systems, and AI agent–driven automation, with deployment on Microsoft Azure.

You will collaborate with cross-functional teams including architects, engineers, and business stakeholders to deliver scalable and secure AI solutions that create real business impact.


🔑 Mandatory Skills (Must Have)

  • Azure AI Ecosystem (Azure Machine Learning, Azure OpenAI, Cognitive Services)
  • Generative AI & RAG Systems (vector embeddings, retrieval pipelines)
  • Strong Software Engineering + MLOps (CI/CD, containerization, scalable deployments)
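The retrieval step at the heart of the RAG systems named above can be reduced to ranking documents by the cosine similarity of their embeddings against a query embedding. A minimal sketch, assuming hand-made toy vectors in place of real model embeddings (in production these would come from an embedding model such as Azure OpenAI and live in a vector store):

```python
# Toy retrieval step of a RAG pipeline: rank documents by cosine similarity
# of their embedding vectors against a query embedding. The vectors here are
# tiny hand-made stand-ins for real model embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    # docs: list of (doc_id, embedding) pairs
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

docs = [
    ("refund-policy", [0.9, 0.1, 0.0]),
    ("shipping-faq",  [0.1, 0.9, 0.1]),
    ("returns-guide", [0.8, 0.2, 0.1]),
]
# The two documents most aligned with the query vector are retrieved,
# then passed to the LLM as grounding context.
print(top_k([1.0, 0.0, 0.0], docs))
```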


💼 Key Responsibilities

  • Design, develop, and deploy AI/ML models in production environments
  • Build and optimize RAG-based applications and AI agent workflows
  • Develop scalable data pipelines and integrate with enterprise systems
  • Implement MLOps practices for continuous deployment and monitoring
  • Work with big data tools to process large-scale datasets
  • Ensure security, scalability, and performance of AI systems
  • Collaborate with stakeholders to translate business problems into AI solutions


🧠 Required Experience & Skills

  • 5–8 years of hands-on experience in AI/ML development
  • Strong programming and software engineering expertise
  • Experience with Azure services (ML, Data Lake, OpenAI, Cognitive Services)
  • Knowledge of vector databases and embedding models
  • Experience with Databricks, Azure Data Factory, or Kafka
  • Familiarity with multi-agent systems / agentic AI frameworks
  • Proficiency in TensorFlow, PyTorch, Keras, or Scikit-learn
  • Background in NLP, Computer Vision, or Deep Learning
  • Experience with SQL/NoSQL databases and ETL pipelines
  • Strong analytical and problem-solving skills 


Read more
Inspiron Labs

at Inspiron Labs

1 candid answer
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
8yrs+
Upto ₹13L / yr (Varies)
Windows Azure
SQL
PySpark
skill iconPython

Senior Data Engineer (Azure Databricks)


Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
  • Work extensively with PySpark notebooks within Databricks for data processing and transformation
  • Build and optimize batch data processing workflows
  • Develop and manage data integrations using Azure Functions and Logic Apps
  • Write efficient and optimized SQL queries for data extraction and transformation
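The pipelines above follow a filter-transform-aggregate shape. A dependency-free sketch of that shape (the real code would use PySpark DataFrame operations such as filter/groupBy/agg; the field names below are invented):

```python
# Dependency-free sketch of the filter-transform-aggregate pattern these
# Databricks/PySpark pipelines implement at scale. Field names are invented.
from collections import defaultdict

def transform(records):
    # Keep valid rows, then aggregate revenue per region.
    totals = defaultdict(float)
    for r in records:
        if r.get("amount") is None or r["amount"] < 0:
            continue  # basic data-quality filter
        totals[r["region"]] += float(r["amount"])
    return dict(totals)

rows = [
    {"region": "south", "amount": 120.0},
    {"region": "north", "amount": 80.0},
    {"region": "south", "amount": -5.0},   # rejected by the quality check
    {"region": "north", "amount": None},   # rejected
]
print(transform(rows))
```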

Required Skills:

  • Strong hands-on experience with Azure Databricks, PySpark, and SQL
  • Experience working with batch processing frameworks
  • Proficiency in building and managing data pipelines in Azure ecosystem

Good to Have:

  • Experience with Python

Mandatory Requirement:

  • Candidate must have hands-on experience working with PySpark notebooks in Databricks


Read more
Deqode

at Deqode

1 recruiter
purvisha Bhavsar
Posted by purvisha Bhavsar
Pune
5 - 6 yrs
₹4L - ₹10L / yr
Windows Azure
skill iconPython
PySpark
ADF
databricks
+2 more

🚀 Hiring: Data Engineer (Azure) at Deqode

⭐ Experience: 5+ Years

📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Delhi, Bangalore

⭐ Work Mode:- Hybrid

⏱️ Notice Period: Immediate Joiners

(Only immediate joiners & candidates serving notice period)


⭐ Hiring: Databricks Data Engineer – Lakeflow | Streaming | DBSQL | Data Intelligence

We are looking for a Databricks Data Engineer (Azure) to build reliable, scalable, and governed data pipelines powering analytics, operational reporting, and the Data Intelligence Layer.


🔹 Key Responsibilities

✅ Build optimized batch pipelines using Delta Lake (partitioning, OPTIMIZE, Z-ORDER, VACUUM)

✅ Implement incremental ingestion using Databricks Autoloader with schema evolution & checkpointing

✅ Develop Structured Streaming pipelines with watermarking, late data handling & restart safety

✅ Implement declarative pipelines using Lakeflow

✅ Design idempotent, replayable pipelines with safe backfills

✅ Optimize Spark workloads (AQE, skew handling, shuffle & join tuning)

✅ Build curated datasets for Databricks SQL (DBSQL), dashboards & downstream applications

✅ Package and deploy using Databricks Repos & Asset Bundles (CI/CD)

✅ Ensure governance using Unity Catalog and embedded data quality checks
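The "idempotent, replayable pipelines with safe backfills" item boils down to merging by business key rather than blindly appending: applying the same batch twice leaves the target unchanged. Delta Lake's MERGE INTO provides this at scale; the dict below is only a stand-in for the target table:

```python
# Sketch of why merge/upsert-by-key makes a pipeline idempotent and safe to
# replay: the same batch applied twice produces the same target state,
# unlike an append, which would duplicate rows on every backfill.

def merge_batch(target, batch, key="id"):
    for row in batch:
        target[row[key]] = row  # insert or overwrite by business key
    return target

batch = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]
table = {}
merge_batch(table, batch)
once = dict(table)          # snapshot after the first run
merge_batch(table, batch)   # replay / backfill of the same batch
print(table == once)        # idempotent: the replay changed nothing
```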


✅ Mandatory Skills (Must Have)

👉 Databricks & Delta Lake (Advanced Optimization & Performance Tuning)

👉 Structured Streaming & Autoloader Implementation

👉 Databricks SQL (DBSQL) & Data Modeling for Analytics

Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
5 - 7 yrs
Best in industry
skill iconJava
Selenium
Selenium Web driver
CI/CD
Appium
+11 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are seeking a highly skilled QA Automation Engineer with strong expertise in Java and Selenium to join our growing engineering team. The ideal candidate will play a key role in designing, developing, and maintaining scalable test automation frameworks while ensuring high product quality across releases.


Roles and Responsibilities:

● Design, develop, and maintain robust automation frameworks using Java and Selenium

● Build automated test scripts for web applications and integrate them into CI CD pipelines

● Collaborate closely with developers, product managers, and business analysts to understand requirements and define effective test strategies

● Participate in sprint planning, requirement reviews, and technical discussions

● Perform root cause analysis for defects and work with engineering teams for resolution

● Improve automation coverage and reduce manual regression effort

● Ensure test environments, test data, and execution reports are maintained and documented

● Mentor junior QA engineers and promote best practices in automation

● Develop, execute, and maintain comprehensive test plans and test cases for manual and automated testing

● Perform functional, regression, performance, and security testing to ensure software quality

● Design and develop automated test scripts using tools such as Selenium, Appium, or similar frameworks

● Identify, document, and track software defects, working closely with development teams for resolution

● Ensure test coverage by working closely with developers, product managers, and other stakeholders

● Establish and maintain continuous integration (CI) and continuous deployment (CD) pipelines for test automation

● Conduct API testing using tools like Postman or RestAssured

● Collaborate with cross-functional teams to enhance the overall quality of the product

● Stay up to date with the latest industry trends and best practices in QA methodologies and automation frameworks
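Much of Selenium automation rests on the explicit-wait idea (WebDriverWait): poll a condition until it holds or a timeout elapses, rather than sleeping a fixed time. A library-free sketch of that pattern:

```python
# The explicit-wait pattern behind Selenium's WebDriverWait, sketched
# without the library: poll a condition until it holds or time runs out.
import time

def wait_until(condition, timeout=2.0, interval=0.05):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Simulated "element appears after a short delay"
appeared_at = time.monotonic() + 0.2
print(wait_until(lambda: time.monotonic() >= appeared_at))
```

In real Selenium code the lambda would check `driver.find_elements(...)` or an element's visibility; the polling loop is what removes flaky fixed sleeps from a suite.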


Requirements:

● 5 to 7 years of experience in QA automation

● Strong hands-on experience with Java and Selenium WebDriver

● Experience in building or enhancing automation frameworks from scratch

● Good understanding of TestNG or JUnit

● Experience with Maven or Gradle

● Familiarity with CI CD tools such as Jenkins, GitHub Actions, or similar

● Strong understanding of Agile Scrum methodology

● Experience with API testing tools such as Rest Assured or Postman is a plus

● Knowledge of version control systems like Git

● Strong analytical and problem-solving skills

● Strong understanding of software testing life cycle (STLC) and defect lifecycle management

● Relevant certifications in software testing (e.g., ISTQB) are desirable but not required

● Solid understanding of software testing principles, methodologies, and techniques

● Strong attention to detail and a commitment to delivering high-quality software

● Good communication and collaboration skills, with the ability to work effectively in a team environment


Good to Have:

● Experience with performance testing tools

● Exposure to cloud platforms such as AWS or Azure

● Knowledge of containerization tools like Docker

● Experience in BDD frameworks such as Cucumber.


Why Join Us?

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Read more
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
skill iconAmazon Web Services (AWS)
Terraform
Windows Azure
Google Cloud Platform (GCP)
+9 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained
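A recurring building block of the troubleshooting and high-availability work above is retry with exponential backoff, used for deployment health checks and flaky dependencies. A minimal sketch (the sleep is stubbed via a parameter so the example is deterministic; real code would pass `time.sleep`):

```python
# Retry-with-exponential-backoff, a staple of deployment health checks in
# CI/CD pipelines. The sleep function is injectable so this sketch runs
# instantly and deterministically.

def retry(op, attempts=5, base_delay=0.5, sleep=lambda s: None):
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except Exception:
            if attempt == attempts:
                raise
            sleep(delay)
            delay *= 2  # exponential backoff: 0.5s, 1s, 2s, ...

calls = {"n": 0}
def flaky_health_check():
    # Fails twice, then reports healthy -- mimics a service warming up.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("service not ready")
    return "healthy"

print(retry(flaky_health_check))  # succeeds on the third attempt
```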


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP; relevant certifications are a great advantage

● Hands-on experience with CI CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Anywhere
6 - 10 yrs
₹10L - ₹25L / yr
skill iconMachine Learning (ML)
Windows Azure
skill iconPython

Role: ML Engineer

Location: Remote

Experience: 5+ Years


𝗞𝗲𝘆 𝗦𝗸𝗶𝗹𝗹𝘀 Required:

• Azure ML Studio, AKS, Blob Storage, ADF, ADO Pipelines

• Model deployment & versioning via Azure ML

• MLflow for experiment tracking & model lifecycle management

• MLOps best practices — orchestration, CI/CD, model monitoring

• Strong Python skills (Linting, Black, dependency management)

• Drift detection & performance monitoring

• Docker-based deployment (good to have)
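The drift-detection item can be illustrated with the simplest possible check: compare a live feature window against the training baseline using a standardized mean difference and flag drift past a threshold. Production monitoring would use richer tests (PSI, Kolmogorov-Smirnov) over many features; this is only a sketch of the idea:

```python
# Minimal feature-drift check: flag when a live window's mean has moved
# more than `threshold` baseline standard deviations from the training
# baseline. Real monitoring stacks use richer statistics (PSI, KS test).
import statistics

def drifted(baseline, window, threshold=1.0):
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    if sigma == 0:
        return statistics.mean(window) != mu
    return abs(statistics.mean(window) - mu) / sigma > threshold

baseline = [10, 11, 9, 10, 12, 10, 11, 9]
print(drifted(baseline, [10, 11, 10, 9]))    # similar window: no drift
print(drifted(baseline, [25, 27, 26, 24]))   # shifted window: drift
```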

Read more
SAAS Industry

SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹20L - ₹25L / yr
TypeScript
skill iconNodeJS (Node.js)
skill iconJavascript
skill iconMongoDB
RESTful APIs
+20 more

Job Details

Job Title: Full Stack Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-7 years

- Working Days: 6 days

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: TypeScript, NodeJS, mongodb, RESTful APIs, React.js

 

Criteria

Candidate should have at least 4 years of professional experience as a Full Stack Engineer

Hands-on experience with both React.js and Node.js

Solid understanding of MongoDB

Should have experience in RESTful APIs

Should be from a startup or scale-up company

Should have good experience in Typescript

Strong understanding of asynchronous programming patterns

Preferred candidates from SAAS/Software/IT Services based startups or scale-up companies

 

Job Description

The Role:

We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.

You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.

 

What You’ll Own

1. Full Stack Development

● Design, develop, test, and deploy robust and scalable web applications.

● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.

● Contribute to frontend feature development and integration.

● Participate in feature planning, estimation, and execution.

 

2. Backend & API Engineering

● Design and develop RESTful APIs and backend services.

● Implement asynchronous workflows and scalable microservice architectures.

● Ensure performance, reliability, and security of backend systems.

● Implement authentication, authorization, and data protection best practices.

 

3. Database Design & Optimization

● Design and manage MongoDB schemas using Mongoose.

● Optimize queries and database performance for scale.

● Ensure data integrity and efficient data access patterns.

 

4. Frontend Collaboration & Integration

● Collaborate with frontend developers to integrate React components and APIs seamlessly.

● Ensure responsive, high-performing application behavior.

 

5. System Design & Scalability

● Contribute to system architecture and technical design discussions.

● Design scalable, maintainable, and future-ready solutions.

● Optimize applications for speed and scalability.

 

6. Product & Cross-Functional Collaboration

● Work closely with product and design teams to deliver high-quality features in rapid iterations.

● Participate in the full development lifecycle—from concept to deployment and maintenance.

 

7. Code Quality & Best Practices

● Write clean, testable, and maintainable code.

● Follow Git-based version control and code review best practices.

● Contribute to improving engineering standards and workflows.

 

What We’re Looking For

Must-Haves

● 4+ years of professional experience as a Full Stack Engineer or similar role.

● Strong proficiency in JavaScript and TypeScript.

● Hands-on experience with Node.js and Express.js.

● Solid understanding of MongoDB and Mongoose.

● Experience building and consuming RESTful APIs and microservices.

● Strong understanding of asynchronous programming patterns.

● Good grasp of system design principles and application architecture.

● Experience with Git and version control best practices.

● Bachelor’s degree in Computer Science, Engineering, or a related field.
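The "asynchronous programming patterns" requirement is about running independent I/O-bound calls concurrently and gathering their results. Shown here with Python's asyncio rather than the role's Node.js, where the same shape is async/await with Promise.all:

```python
# Concurrent fan-out/gather, the core async pattern the role asks for.
# asyncio.gather runs the coroutines concurrently and preserves the
# argument order in its result list.
import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)       # stands in for a network call
    return f"{name}:done"

async def main():
    # Concurrent, not sequential: total wall time ~= the slowest call.
    return await asyncio.gather(
        fetch("users", 0.02),
        fetch("orders", 0.01),
        fetch("stock", 0.015),
    )

print(asyncio.run(main()))
```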

 

Good-to-Have / Preferred

● Frontend development experience with React.js.

● Exposure to Three.js or similar 3D/visualization libraries.

● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).

● Knowledge of Docker and containerization workflows.

● Experience with testing frameworks (Jest, Mocha, etc.).

● Familiarity with CI/CD pipelines and automated deployments.

 

Tools You’ll Use

● Backend: Node.js, Express.js, TypeScript

● Frontend: React.js (preferred)

● Database: MongoDB, Mongoose

● Version Control: Git, GitHub / GitLab

● Cloud & DevOps: AWS / GCP / Azure, Docker

● Collaboration: Google Workspace, Notion, Slack

 

Key Metrics You’ll Own

● Code quality, performance, and scalability

● Timely delivery of features and releases

● System reliability and reduction in production issues

● Contribution to architectural improvements

 

Why company

● Work on impactful, product-driven tech platforms.

● High-ownership role with end-to-end engineering exposure.

● Opportunity to work with modern technologies and evolving architectures.

● Collaborative startup culture with strong learning and growth opportunities.

 

Read more
Software and consulting company

Software and consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹14L - ₹17L / yr
PowerBI
Business Intelligence (BI)
Business Analysis
skill iconData Analytics
Data Visualization
+15 more

Description

Power BI JD


Mandatory:

• 5+ years of Power BI Report development experience.

• Building Analysis Services reporting models.

• Developing visual reports, KPI scorecards, and dashboards using Power BI desktop.

• Connecting data sources, importing data, and transforming data for Business intelligence.

• Analytical thinking for translating data into informative reports and visuals.

• Capable of implementing row-level security on data along with an understanding of application security layer models in Power BI.

• Strong command of writing DAX queries in Power BI Desktop.

• Expert in using advanced-level calculations on the data set.

• Responsible for design methodology and project documentation.

• Should be able to develop tabular and multidimensional models that are compatible with data warehouse standards.

• Very good communication skills; must be able to discuss requirements effectively with client teams and with internal teams.

• Experience working with the Microsoft Business Intelligence stack: Power BI, SSAS, SSRS, and SSIS.

• Must have experience with BI tools and systems such as Power BI, Tableau, and SAP.

• Must have 3–4 years of experience in data-specific roles.

• Have knowledge of database fundamentals such as multidimensional database design, relational database design, and more

• Knowledge of all the Power BI products (Power BI Premium, Power BI Report Server, Power BI Service, Power Query, etc.)

• Strong grip on data analytics

• Interact with customers to understand their business problems and provide best-in-class analytics solutions

• Proficient in SQL and Query performance tuning skills

• Understand data governance, quality and security and integrate analytics with these corporate platforms

• Attention to detail and ability to deliver accurate client outputs

• Experience of working with large and multiple datasets / data warehouses

• Ability to derive insights from data and analysis and create presentations for client teams

• Experience with performance optimization of the dashboards

• Interact with UX/UI designers to create best-in-class visualizations for the business, harnessing all product capabilities.

• Resilience under pressure and against deadlines.

• Proactive attitude and an open outlook.

• Strong analytical problem-solving skills

• Skill in identifying data issues and anomalies during the analysis

• Strong business acumen and a demonstrated aptitude for analytics that incite action

• Ability to execute on design requirements defined by business

• Ability to understand required Power BI functionality from wireframes/ requirement documents

• Ability to architect and design reporting solutions based on client needs.

• Ability to communicate with internal/external customers and a desire to develop communication and client-facing skills.

• Ability to work seamlessly with MS Excel, with working knowledge of pivot tables and related functions.


Good to have:

• Experience working with Azure and connecting Azure Synapse with Tableau

• Demonstrate strength in data modelling, ETL development, and data warehousing

• Knowledge of leading large-scale data warehousing and analytics projects using Azure, Synapse, MS SQL DB

• Good knowledge of building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets

• Good to have knowledge of Supply Chain Domain.

Read more
Oddr

at Oddr

Bisman Gill
Posted by Bisman Gill
Remote only
4yrs+
Upto ₹45L / yr (Varies)
ASP.NET
Windows Azure
Problem solving
skill iconReact.js

Job Overview

As a Software Engineer, you will play a crucial role in leading our development efforts, ensuring best practices, and supporting the team on a day-to-day basis. This role requires deep technical knowledge, a proactive mindset, and a commitment to guiding the team in tackling challenging issues. You will work primarily with .NET Core on the backend while also keeping a strategic focus on product security, DevOps, quality assurance, and cloud infrastructure.


Responsibilities

• Forward-Looking Product Development:

o Collaborate with product and engineering teams to align on the technical direction, scalability, and maintainability of the product.

o Proactively consider and address security, performance, and scalability requirements during development.

• Cloud and Infrastructure: Leverage Microsoft Azure for cloud infrastructure, ensuring efficient and secure use of cloud services. Work closely with DevOps to improve deployment processes.

• DevOps & CI/CD: Support the setup and maintenance of CI/CD pipelines, enabling smooth and frequent deployments. Collaborate with the DevOps team to automate and optimize the development process.

• Technical Mentorship: Provide technical guidance and support to team members, helping them solve day-to-day challenges, enhance code quality, and adopt best practices.

• Quality Assurance: Collaborate with QA to ensure thorough testing, automated testing coverage, and overall product quality.

• Product Security: Actively implement and promote security best practices to protect data and ensure compliance with industry standards.

• Documentation & Code Reviews: Promote good coding practices, conduct code reviews, and maintain clear documentation.

Qualifications

• Technical Skills:

o Strong experience with .NET Core for backend development and RESTful API design.

o Hands-on experience with Microsoft Azure services, including but not limited to VMs, databases, application gateways, and user management.

o Familiarity with DevOps practices and tools, particularly CI/CD pipeline configuration and deployment automation.

o Strong knowledge of product security best practices and experience implementing secure coding practices.

o Familiarity with QA processes and automated testing tools is a plus.

o Ability to support team members in solving technical challenges and sharing knowledge effectively.

Preferred Qualifications

  • 4+ years of experience in software development, with a strong focus on .NET Core
  • Previous experience as a Staff SE, tech lead, or in a similar hands-on tech role.
  • Strong problem-solving skills and ability to work in a fast-paced, startup environment.

What We Offer

  • Opportunity to lead and grow within a dynamic and ambitious team.
  • Challenging projects that focus on innovation and cutting-edge technology.
  • Collaborative work environment with a focus on learning, mentorship, and growth.
  • Competitive compensation, benefits, and stock options.

If you’re a proactive, forward-thinking technology leader with a passion for .NET Core and you’re ready to make an impact, we’d love to meet you!
Read more
AI Recruiting Platform

AI Recruiting Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
1 - 15 yrs
₹70L - ₹99L / yr
MySQL
skill iconPython
Microservices
API
skill iconJava
+18 more

Description

Join company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server‑side applications, ensuring high performance and reliability for millions of users. You’ll collaborate closely with cross‑functional product, front‑end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting‑edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of company's mission to streamline hiring solutions.


Requirements:

  • 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
  • Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
  • Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
  • Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
  • Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.


Roles and Responsibilities:

  • Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
  • Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
  • Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
  • Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
  • Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
  • Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
  • Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
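The RESTful API responsibilities above come down to routing (method, path) pairs to handlers over a data store. A toy, framework-free Python sketch of that dispatch logic (the resource name, fields, and data are all illustrative, not this company's actual API):

```python
import json

# Hypothetical in-memory store standing in for a real database.
CANDIDATES = {1: {"id": 1, "name": "Asha"}, 2: {"id": 2, "name": "Ravi"}}

def handle(method: str, path: str, body: str = "{}") -> tuple[int, dict]:
    """Dispatch a (method, path) pair the way a REST framework's router would."""
    parts = path.strip("/").split("/")
    if method == "GET" and parts == ["candidates"]:
        return 200, {"items": list(CANDIDATES.values())}
    if method == "GET" and len(parts) == 2 and parts[0] == "candidates":
        item = CANDIDATES.get(int(parts[1]))
        return (200, item) if item else (404, {"error": "not found"})
    if method == "POST" and parts == ["candidates"]:
        payload = json.loads(body)
        new_id = max(CANDIDATES, default=0) + 1
        CANDIDATES[new_id] = {"id": new_id, **payload}
        return 201, CANDIDATES[new_id]
    return 404, {"error": "no route"}  # simplified: no 405 handling

print(handle("GET", "/candidates/1"))
```

A real service would add validation, authentication, and persistence; frameworks like FastAPI or Express provide this routing declaratively.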


Budget:

  • Job Type: payroll
  • Experience Range: 1–15 years


Read more
Tarento Group

at Tarento Group

3 candid answers
1 recruiter
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
6yrs+
Upto ₹32L / yr (Varies)
skill iconJava
skill iconSpring Boot
Microservices
Windows Azure
RESTful APIs
+2 more

About Tarento:

 

Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions.

 

We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose.


Job Summary:

We are seeking a highly skilled and self-driven Senior Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.


Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.


Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.


Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)


Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.


Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Read more
Service based company

Service based company

Agency job
via Codemind Staffing Solutions by Krishna kumar
Chennai
4 - 6 yrs
₹10L - ₹18L / yr
skill iconJava
J2EE
skill iconSpring Boot
Microservices
RESTful APIs
+8 more

Key Responsibilities

4+ years of experience designing, developing, and maintaining backend applications using Java (8, 17, or 21).

Build scalable RESTful APIs and backend services using Spring Boot and Spring MVC.

Implement secure authentication and authorization using Spring Security (JWT/OAuth2).

Develop and maintain microservices-based architectures.

Work with Spring Data JPA / Hibernate for database interactions.

Implement configuration management using YAML and Properties files.

Integrate event-driven messaging systems and streaming platforms.

Work with MongoDB for data storage and optimize query performance.

Implement logging, monitoring, and troubleshooting for production systems.

Integrate and work with Azure cloud services including:

Event Hub

Key Vault

Storage Accounts

Databricks

Azure AD authentication (Service Principal, Managed Identity, Federated Credentials)

Collaborate with DevOps and cloud teams for deployment and monitoring.

Ensure application performance, scalability, and reliability.
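The JWT-based authentication mentioned above is implemented via Spring Security in this stack, but the token mechanics underneath are simple: a base64url-encoded header and payload signed with HMAC-SHA256. A standard-library-only Python sketch of those mechanics (this is not the Spring API, and a hard-coded secret is for illustration only, not production key management):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with the padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes):
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered token or wrong key
    padded = body + "=" * (-len(body) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42", "role": "admin"}, b"secret-key")
print(verify_jwt(token, b"secret-key"))
```

Real deployments add registered claims such as `exp` and `iss`, and check them on every request.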


Read more
Service based company

Service based company

Agency job
via Codemind Staffing Solutions by Krishna kumar
Chennai
4 - 7 yrs
₹10L - ₹18L / yr
DevOps
Microsoft Windows Azure
Windows Azure
skill iconDocker
skill iconKubernetes
+4 more

Key responsibilities

• Design, build, and maintain robust CI/CD pipelines using Azure DevOps Services (Azure Pipelines) and Git-based workflows.

• Implement and manage infrastructure as code (IaC) using ARM templates, Bicep, and/or Terraform for repeatable environment provisioning.

• Containerize applications (Docker) and manage container orchestration platforms such as AKS (Azure Kubernetes Service).

• Automate build, test, release, and rollback processes; integrate automated testing and quality gates into pipelines.

• Monitor and improve platform reliability and observability using logging and monitoring tools (e.g., Azure Monitor, Application Insights, Prometheus, Grafana).

• Drive platform security and compliance through pipeline controls, secrets management (Key Vault / Vault), and secure configuration practices.

• Implement cost-optimization and governance for Azure resources (tags, policies, budgets).

• Troubleshoot build/release failures, production incidents, and performance bottlenecks; perform root-cause analysis and implement permanent fixes.

• Mentor developers in Git workflows, pipeline authoring, best practices for IaC, and cloud-native design.

• Maintain clear documentation: runbooks, deployment playbooks, architecture diagrams, and pipeline templates. 

Required skills & experience

• 4+ years hands-on experience working with Azure and cloud-native application delivery.

• Deep experience with Azure DevOps (Repos, Pipelines, Artifacts, Boards).

• Strong IaC skills with Terraform, ARM templates, or Bicep.

• Solid experience with CI/CD design and YAML pipeline authoring.

• Practical knowledge of containerization (Docker) and Kubernetes — preferably AKS.

• Scripting skills: PowerShell, Bash, and/or Python for automation.

• Experience with Git workflows (branching strategies, PRs, code reviews).

• Familiarity with configuration management and secrets management (Azure Key Vault, HashiCorp Vault).

• Understanding of networking, identity (Azure AD), and security fundamentals in Azure.

• Strong troubleshooting, debugging, and incident response skills.

• Good collaboration and communication skills; ability to work across teams.

Certification

AZ-400 (Microsoft Certified: DevOps Engineer Expert), AZ-104, AZ-305, or HashiCorp Terraform Associate.

 


Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
DrSoumya Sahadevan
Posted by DrSoumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
PySpark
databricks
+2 more

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE), etc. With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.

· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

· Utilize Docker and Kubernetes for scalable data processing.

· Collaborate with automation team, data scientists and engineers to provide clean, structured data for AI/ML models.
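The pre-processing step above (cleaning, deduplication, scaling) can be sketched on a single machine; a real pipeline would express the same logic as PySpark transformations over a cluster. Field names and values here are made up:

```python
def preprocess(readings: list[dict]) -> list[dict]:
    """Deduplicate sensor readings and min-max scale the 'temp' field to [0, 1]."""
    # Deduplicate on (sensor, timestamp), keeping the first occurrence.
    seen, unique = set(), []
    for r in readings:
        key = (r["sensor"], r["ts"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    # Min-max scaling: (x - min) / (max - min).
    temps = [r["temp"] for r in unique]
    lo, hi = min(temps), max(temps)
    span = hi - lo or 1.0  # guard against a constant column
    return [{**r, "temp_scaled": (r["temp"] - lo) / span} for r in unique]

rows = [
    {"sensor": "s1", "ts": 1, "temp": 20.0},
    {"sensor": "s1", "ts": 1, "temp": 20.0},  # exact duplicate, dropped
    {"sensor": "s2", "ts": 1, "temp": 30.0},
]
print(preprocess(rows))
```

In PySpark the same steps would be `dropDuplicates(["sensor", "ts"])` followed by a windowed or aggregated min/max for the scaling.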


Desired Skills and Qualifications

· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

· Proficiency in PySpark, Azure Databricks, Python, Apache Spark, etc.

· 2 years of team handling experience.

· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time-series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

· Experience in containerization (Docker, Kubernetes).

· Strong analytical and problem-solving skills with attention to detail.

· Good to have: MLOps and DevOps experience, including model lifecycle management.

· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.

Read more
Ampera Technologies
Faisal AshrafNomani
Posted by Faisal AshrafNomani
Bengaluru (Bangalore), Chennai
4 - 10 yrs
Best in industry
AWS
Windows Azure
Google Cloud Platform (GCP)
Large Language Models (LLM)
AI Agents
+2 more

Job Description:

 

We are seeking a Cloud & AI Platform Engineer to design and operate AI-native infrastructure that supports large-scale machine learning, generative AI, and agentic AI systems. 

This role will focus on building secure, scalable, and automated multi-cloud platforms across AWS, Azure, GCP, and hybrid on-prem environments, enabling teams to deploy LLMs, AI agents, and data-driven applications reliably in production. 

You will work at the intersection of cloud engineering, MLOps, LLMOps, DevOps, and data infrastructure, helping build platforms that support RAG pipelines, vector search, AI model lifecycle management, and AI observability.

 

Key Responsibilities

AI & Agentic Infrastructure 

  • Design infrastructure to support agentic AI systems, autonomous agents, and multi-agent workflows. 
  • Build scalable runtime environments for LLM orchestration frameworks. 
  • Enable deployment of AI copilots, assistants, and autonomous decision systems. 

Common frameworks may include: 

  • LangChain 
  • LlamaIndex 
  • AutoGPT 

 

LLMOps & AI Model Lifecycle 

Design and manage LLMOps pipelines for the full lifecycle of large language models: 

  • Model deployment 
  • Prompt management 
  • Versioning 
  • Evaluation and testing 
  • Model monitoring 

Integrate with AI platforms such as: 

  • Azure Machine Learning 
  • Amazon SageMaker 
  • Vertex AI 

 

Retrieval-Augmented Generation (RAG) Infrastructure 

Design and optimize RAG pipelines that integrate enterprise knowledge with LLMs. 

Responsibilities include: 

  • Document ingestion pipelines 
  • Embedding generation workflows 
  • Knowledge indexing 
  • Query orchestration 
  • Retrieval optimization 
  • Support scalable semantic search architectures. 
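The document ingestion step above typically starts with chunking: splitting source text into overlapping windows before embedding, so that facts spanning a chunk boundary still appear intact in at least one chunk. A minimal sketch (window and overlap sizes are illustrative):

```python
def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into fixed-size character windows with overlap."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "Retrieval-augmented generation grounds an LLM's answers in enterprise documents."
pieces = chunk(doc, size=40, overlap=10)
print(len(pieces), pieces[0])
```

Production pipelines usually chunk on token or sentence boundaries rather than raw characters, but the overlap principle is the same.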

 

Vector Database & Knowledge Infrastructure 

Deploy and manage vector databases used for AI applications and semantic retrieval. 

Common technologies include: 

  • Pinecone 
  • Weaviate 
  • Milvus 
  • FAISS 

Responsibilities include: 

  • Index optimization 
  • Query latency tuning 
  • Scalable embedding storage 
  • Hybrid search architecture 
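Under the hood, the vector databases listed above answer one query: given an embedding, return the nearest stored embeddings by similarity. A brute-force cosine-similarity sketch of that retrieval (FAISS, Weaviate, and Milvus replace this linear scan with approximate indexes such as IVF or HNSW to keep latency low at scale; the 3-dimensional vectors here are toy values):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids whose embeddings are most similar to the query."""
    scored = sorted(
        ((doc, cosine(query, vec)) for doc, vec in index.items()),
        key=lambda pair: pair[1], reverse=True)
    return [doc for doc, _ in scored][:k]

index = {
    "invoice-faq": [0.9, 0.1, 0.0],
    "onboarding-guide": [0.1, 0.9, 0.1],
    "security-policy": [0.0, 0.2, 0.9],
}
print(top_k([0.8, 0.2, 0.0], index, k=2))
```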

 

Multi-Cloud AI Infrastructure 

Design and maintain AI-ready infrastructure across: 

  • Amazon Web Services 
  • Microsoft Azure 
  • Google Cloud Platform 

Key responsibilities include: 

  • GPU infrastructure management 
  • Distributed training environments 
  • Hybrid cloud integrations with on-prem data centers 
  • Infrastructure scaling for AI workloads 

 

Data Platforms & Integration 

  • Support deployment and optimization of data lakes, data warehouses, and streaming platforms. 
  • Work with data engineering teams to ensure secure and scalable data infrastructure. 

 

Cloud Architecture & Infrastructure 

  • Design and implement scalable multi-cloud infrastructure across Azure, AWS, and Google Cloud. 
  • Build hybrid cloud architectures integrating on-premise environments with cloud platforms. 
  • Implement high availability, disaster recovery, and auto-scaling architectures for AI workloads. 

 

DevOps, Platform Engineering & Automation 

Build automated cloud infrastructure using modern DevOps practices. 

Tools may include: 

  • Terraform 
  • Docker 
  • Kubernetes 
  • GitHub Actions 

Responsibilities include: 

  • Infrastructure as Code (IaC) 
  • Automated deployments 
  • CI/CD pipelines for AI models and services 
  • Platform reliability and scalability 

 

AI Observability & Monitoring 

Implement observability frameworks to monitor AI systems in production. 

This includes: 

  • Model performance monitoring 
  • Prompt evaluation 
  • Hallucination detection 
  • Latency and throughput analysis 
  • Cost monitoring for LLM usage 

Tools may include: 

  • Arize AI 
  • WhyLabs 
  • Weights & Biases 
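The latency and throughput analysis above is usually reported as nearest-rank percentiles (p50/p95), since averages hide tail behaviour. A small sketch with made-up request latencies:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile, as commonly used for latency SLO dashboards."""
    ranked = sorted(samples)
    rank = math.ceil(pct / 100 * len(ranked))
    return ranked[max(rank, 1) - 1]

# Hypothetical per-request LLM latencies in seconds; note the outlier.
latencies = [0.8, 0.9, 1.1, 1.2, 1.3, 1.4, 1.6, 2.0, 2.2, 9.5]
print(percentile(latencies, 50), percentile(latencies, 95))
```

Here the median is 1.3 s while p95 is dominated by the 9.5 s outlier, which is exactly the kind of signal tools like Arize or Weights & Biases surface automatically.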

 

Security, Governance & Responsible AI 

Ensure AI systems follow strong governance and security practices. 

Responsibilities include: 

  • Data privacy and compliance 
  • Model governance frameworks 
  • Secure model deployment 
  • Monitoring model bias and drift 
  • AI risk management 

Support enterprise frameworks for Responsible AI and AI compliance. 

 

Data & Security 

  • Experience with data lake architectures, distributed storage, and ETL pipelines 
  • Knowledge of data security, encryption, IAM, and compliance frameworks 
  • Familiarity with AI governance and responsible AI practices 

 

 

Required Skills 

Cloud & Infrastructure 

  • Strong experience in Azure (must have), AWS or GCP 
  • Hybrid and multi-cloud architecture 
  • GPU infrastructure management 

DevOps & Automation 

  • Kubernetes 
  • Docker 
  • Terraform 
  • CI/CD pipelines 

AI / ML Platforms 

  • MLOps pipelines 
  • Model deployment 
  • Model monitoring 

AI Application Infrastructure 

  • Vector databases 
  • RAG pipelines 
  • LLM orchestration frameworks 

Programming 

Experience in one or more languages: 

  • Python 
  • Go 
  • Java 
  • TypeScript 

 

 

 

Preferred Qualifications 

  • Experience building AI copilots or autonomous agents 
  • Knowledge of distributed model training and GPU infrastructure 
  • Familiarity with AI evaluation frameworks, model monitoring, drift detection, and AI observability 
  • Experience building enterprise AI platforms 

 

Education & Experience 

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field 
  • 4–8+ years experience in cloud infrastructure, DevOps, or platform engineering 
  • Experience working in data-driven or AI-focused environments 

 

 

What Success Looks Like 

  • Reliable deployment pipelines and infrastructure for ML models, LLMs, and AI agents 
  • Scalable RAG knowledge platforms 
  • Efficient multi-cloud infrastructure management and fast deployment cycles for AI products 
  • Secure and scalable AI-ready cloud platforms 
  • Strong automation and governance across cloud and AI systems 


Read more
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
skill iconKubernetes
+12 more

Job Title : Senior DevOps Engineer (Only Mumbai Candidates)

Experience : 5+ Years

Location : Mumbai (On-site)

Notice Period : Immediate to 15 Days

Interview Process : 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Read more
Gieom Business Solutions Pvt Ltd
Remote only
3 - 5 yrs
₹5L - ₹7L / yr
Microsoft Windows Server
Windows Azure
DevOps
Linux/Unix

Hiring: IT Operations & Helpdesk Engineer (3–5 Years)

📍 Location: [Bangalore / Hybrid] 


We are looking for a hands-on IT Operations Engineer who will anchor our internal IT helpdesk while managing servers, backups, DR drills, and cloud infrastructure. This role is responsible for day-to-day IT stability across endpoints, servers, and Azure environments.


Key Activities

Internal IT Helpdesk (Primary Anchor)

·       Act as the single point of contact for internal IT support.

·       Resolve L1/L2 issues (laptops, OS, network, access, software installs).

·       Manage onboarding/offboarding IT setup.

·       Track tickets, SLAs, and recurring issues.

Infrastructure & Servers

·       Install and maintain Windows & Linux servers.

·       Maintain the centralized IT asset inventory.

·       Support manual and automated application deployments.

·       Handle patching, upgrades, performance monitoring.

Cloud Administration (Azure)

·       Manage VMs, storage, networking.

·       Maintain access controls and security configurations.

Backup & DR Readiness

·       Manage and test backup processes.

·       Conduct periodic DR drills to support organizational continuity standards.

·       Maintain recovery runbooks and documentation.

 

What We’re Looking For

·       Strong Windows Server & Linux hands-on experience.

·       Experience managing Azure Cloud infrastructure.

·       Practical backup & restore execution experience.

·       Strong troubleshooting mindset.

·       Process-driven and documentation disciplined.

·       Comfortable working with DevOps & Cyber Security teams.

Impact of This Role

·       Stable internal IT operations.

·       DR-tested infrastructure.

·       Reduced downtime and faster issue resolution.

·       Strong operational hygiene in a growing environment.

Read more
Pace Wisdom Solutions
Bengaluru (Bangalore)
7 - 10 yrs
₹15L - ₹30L / yr
skill icon.NET
ASP.NET
ASP.NET MVC
MVC Framework
skill iconAmazon Web Services (AWS)
+1 more

Location: Bangalore

Experience required: 7-10 years.

Key skills: .NET core, ASP .NET, Microsoft Azure, MVC, AWS


"At Pace Wisdom Solutions, our .NET team is a dynamic and collaborative group of experts specializing in end-to-end development. With a focus on both front-end and back-end technologies, we leverage the robust .NET framework and Azure to deliver innovative and scalable solutions. Our agile approach ensures adaptability to industry changes, empowering us to provide clients with cutting-edge and tailored applications."


We are seeking a highly skilled and experienced Senior .NET Developer with a minimum of 7 years of hands-on experience. The ideal candidate will possess expertise in both front-end and back-end development, with a strong background in MVC architecture and exposure to Microsoft Azure technologies. The role requires an individual who can work independently, lead a team effectively, and contribute to the successful delivery of projects.


Engineering Culture at Pace Wisdom:

We foster a collaborative and communicative environment where engineers are empowered to share ideas freely. Teamwork is paramount, and we believe the best solutions come from diverse perspectives. We are committed to promoting from within, providing clear career paths and mentorship opportunities to help our engineers reach their full potential. Our culture prioritizes continuous learning and growth, offering a safe space to experiment, innovate, and refine your skills.


Responsibilities:

• Create scalable solutions by understanding business requirements, writing code, and testing according to best practices.

• Own deliverables and collaborate with the team, including our customers, QA, design, and other stakeholders, to drive successful project delivery.

• Advocate and mentor teams to follow best practices around documentation, unit testing, code reviews, etc.

• Comply with security policies and processes.


Qualifications:

• 7-10 years of professional experience in developing applications using .NET framework, .NET Core, Azure Services, Entity Framework

• Good knowledge of common software architecture design patterns, Object Oriented Programming, Data structures, Algorithms, Database design patterns and other best practices.

• Exposure to Cloud technologies (AWS, Azure, Google Cloud - at least one of them)

• Exposure to developing SPA on React, Angular or VueJS

• Experience with microservices and messaging systems (RabbitMQ/Kafka)

• Proven ability to lead and mentor development teams.

• Effective communication and interpersonal skills.


About the Company:

Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.


We engage with our clients at various stages:

• Right from the idea stage to scope out business requirements.

• Design & architect the right solution and define tangible milestones.

• Setup dedicated and on-demand tech teams for agile delivery.

• Take accountability for successful deployments to ensure efficient go-to-market implementations.


Pace Wisdom has been working with Fortune 500 enterprises and growth-stage startups/SMEs since 2012. We also work as an extended tech team, and at times we have played the role of a Virtual CTO. We believe in building lasting relationships, providing value-add every time, and going beyond business.

HireTo
Rishita Sharma
Posted by Rishita Sharma
Hyderabad
5 - 13 yrs
₹15L - ₹30L / yr
snowflake
skill iconPython
SQL
Windows Azure
databricks
+4 more

Position Title: Senior Data Engineer (Founding Member) - Insurtech Startup

Location: Hyderabad (Onsite)

Immediate to 15-day joiners

Experience: 5 to 13 years

Role Summary

We are looking for a Senior Data Engineer who will play a foundational role in:

  • Client onboarding from a data perspective
  • Understanding complex insurance data flows
  • Designing secure, scalable ingestion pipelines
  • Establishing strong data modeling and governance standards

This role sits at the intersection of technology, data architecture, security, and business onboarding.


Key Responsibilities

  • Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
  • Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
  • Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
  • Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
  • Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
  • Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
  • Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
  • Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
  • Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
  • Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
  • Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards

Required Technical Skills

  • Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
  • Platforms: Azure, AWS, Databricks, Snowflake
  • ETL / Orchestration: Airflow or similar frameworks
  • Data Modeling: Star/snowflake schema, dimensional modeling, OLAP/OLTP
  • Visualization Exposure: Power BI
  • Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
  • Integrations: APIs, real-time data streaming, ML model integration exposure
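To make the window-function skill above concrete, here is a minimal sketch of a per-contract running total — the kind of per-partition aggregate common in insurance claims reporting — using Python's built-in sqlite3 module. The `claims` table and its columns are purely illustrative, not from any real schema:

```python
import sqlite3

# Hypothetical claims table: compute a running paid total per contract,
# ordered by claim date, via a SQL window function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claims (contract_id TEXT, claim_date TEXT, paid REAL);
INSERT INTO claims VALUES
  ('C1', '2024-01-10', 100.0),
  ('C1', '2024-02-05', 250.0),
  ('C2', '2024-01-20', 75.0);
""")
rows = conn.execute("""
SELECT contract_id,
       claim_date,
       SUM(paid) OVER (PARTITION BY contract_id
                       ORDER BY claim_date) AS running_paid
FROM claims
ORDER BY contract_id, claim_date
""").fetchall()
for r in rows:
    print(r)
# ('C1', '2024-01-10', 100.0)
# ('C1', '2024-02-05', 350.0)
# ('C2', '2024-01-20', 75.0)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern carries over directly to Snowflake, Databricks SQL, and PySpark's `Window` API.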

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Strong ability to align technical solutions with business objectives
  • Excellent communication and stakeholder management skills

What We Offer

  • Direct collaboration with the core US data leadership team
  • High ownership and trust to manage the function end-to-end
  • Exposure to a global environment with advanced tools and best practices
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Bengaluru (Bangalore)
7 - 14 yrs
₹3L - ₹30L / yr
skill icon.NET
skill iconC#
skill iconReact.js
Windows Azure

Job Description

We are looking for a skilled .NET Full Stack Developer with expertise in .NET, React.js, and AWS/Azure to join our development team. The ideal candidate should have strong programming skills and experience building scalable web applications using modern technologies.

Key Responsibilities

  • Develop and maintain scalable applications using .NET Core.
  • Design and implement Microservices architecture and RESTful APIs.
  • Build responsive and dynamic user interfaces using React.js.
  • Integrate frontend applications with backend APIs.
  • Deploy and manage applications on AWS/Azure.
  • Collaborate with cross-functional teams to define, design, and deliver new features.
  • Write clean, maintainable, and efficient code following best development practices.

Required Skills

  • Strong experience in .NET development.
  • Hands-on experience with Microservices architecture and API development.
  • Experience working with React.js, including API integration and design principles.
  • Experience with AWS / Azure


Software and consulting company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹14L / yr
skill icon.NET
React.js
skill iconAmazon Web Services (AWS)
Unit testing
skill iconReact Native
+22 more

FULL STACK DEVELOPER

JOB DESCRIPTION – FULL STACK DEVELOPER 

Location: Bangalore 

 

Key Responsibilities      

Establish processes, SLAs, and escalation protocols for the support & maintenance of web applications       

Manage stakeholders with effective communication and collaborate with cross-functional teams to address issues and maintain business continuity.

Design, implement, unit test, and build business applications using React, React Native, .NET Core, .NET 8, and Azure/AWS, leveraging an agile methodology and the latest tech like Agentic AI and GitHub Copilot.

Facilitate scrum ceremonies including sprint planning, retrospectives, reviews, and daily stand-ups.

Facilitate discussion, assessment of alternatives or different approaches, decision making, and conflict resolution within the development team       

Develop and administer CI/CD pipelines in cloud-hosted Git repositories, and source control artifacts via Git in alignment with common branching strategies and workflows    

Assist Software Designer/Implementers with the creation of detailed software design specifications      

Participate in the system specification review process to ensure system requirements can be translated into valid software architecture       

Integrate internal and external product designs into a cohesive user experience       

Identify and keep track of metrics that indicate how software is performing     

Handle technical and non-technical queries from the development team and stakeholders      

Ensure that all development practices follow best practices and any relevant policies / procedures 

 

Other Duties: Maintain project reporting, including dashboards, status reports, road maps, burn down, velocity, and resource utilization.

Own the technical solution and ensure all technical aspects are implemented as designed.

Partner with the customer success team and aid in triaging and troubleshooting customer support issues spanning across a range of software components, infrastructure, integrations, and services, some of which target 24/7/365 availability     

Flexibility to work in rotational shifts.

 

Required Qualification     

Previous experience leading full stack technology projects with scrum teams and stakeholder management.

BTech or MTech in Computer Science or a related field.

3-5 years of experience.  

 

Required Knowledge, Skills, and Abilities:

Proficiency in .NET Core/.NET 8, React, React Native, Redux, Material UI, Bootstrap, TypeScript, SCSS, microservices, EF, LINQ, SQL, Azure/AWS, CI/CD, Agile, Agentic AI, and GitHub Copilot.

Azure DevOps, design systems, micro frontends, data science.

Stakeholder management & excellent communication skills.    

 

Must have skills

React - 3 years

React Native - 3 years

Redux - 1 year

Material UI - 1 year

TypeScript - 1 year

Bootstrap - 1 year

Microservices - 2 years

SQL - 1 year

Azure - 1 year

 

Nice to have skills

.NET Core - 3 years

.NET 8 - 3 years

AWS - 1 year

LINQ - 1 year

Generative AI Persona platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹15L - ₹20L / yr
skill iconMachine Learning (ML)
skill iconPython
ETL
skill iconData Science
ELT
+6 more

Description

We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years’ experience).

 

Please find the detailed Job Description attached for your reference. We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)

 

Kindly note:

  • Location: Pune (Work From Office)
  • Immediate joiners preferred

 

While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort

 

Must have skills

Machine Learning - 6 years

Python - 6 years

ETL(Extract, Transform, Load) - 6 years

SQL - 6 years

Azure - 6 years

 

Thinqor
sai patel
Posted by sai patel
Bengaluru (Bangalore)
5 - 8 yrs
₹15L - ₹20L / yr
MLOps
Windows Azure
skill iconKubernetes
aks
aro
+3 more

 Hiring: Cloud Engineer – MLOps Platform 🚨

📍 Location: Bangalore

🧠 Experience: 5–8 Years

We are looking for an experienced Cloud Engineer to support ML teams and drive end-to-end automation for model deployment across modern cloud platforms.

🔹 Tech Stack:

Azure | Databricks | AKS | ARO | Terraform | MLflow | CI/CD

🔹 Key Responsibilities:

• Build and maintain CI/CD and Continuous Training (CT) pipelines using Azure DevOps, GitHub Actions, or Jenkins.

• Deploy Databricks jobs, MLflow models, and microservices on AKS / ARO environments.

• Automate infrastructure using Terraform and GitOps practices.

• Manage Databricks workspaces, AKS clusters, and networking configurations.

• Implement monitoring, logging, and alerting systems for ML workloads.

• Ensure cloud security, governance, and cost optimization best practices.

🔹 Required Skills:

✔ Strong hands-on experience with Azure, AKS, ARO, and Databricks

✔ Experience with MLflow and Kubernetes-based deployments

✔ Proficiency in Python and Bash / PowerShell scripting

✔ Strong understanding of cloud security, infrastructure automation, and distributed systems
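The monitoring and alerting responsibility above usually involves retrying failed health checks or alert deliveries with exponential backoff. A minimal, tool-agnostic sketch in Python — the function name and defaults are illustrative assumptions, not from any specific monitoring product:

```python
import random

def backoff_schedule(retries, base=1.0, cap=60.0, jitter=False):
    """Return exponential backoff delays (seconds) for a sequence of
    retries, doubling each attempt and capping at `cap`. With jitter
    enabled, each delay is randomized ("full jitter") to avoid
    thundering-herd retries across many workloads."""
    delays = []
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays

print(backoff_schedule(5))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

In practice the same schedule would drive `time.sleep()` between probe attempts, or be configured declaratively in the alerting tool itself.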

Applix

at Applix

3 candid answers
Eman Khan
Posted by Eman Khan
Hyderabad
3 - 6 yrs
₹10L - ₹18L / yr
PowerBI
SQL
Snowflake schema
Windows Azure
Snowflake
+1 more

About the role

Applix is seeking a highly skilled Data Engineer (Power BI) to join our Hyderabad office on a full-time, work-from-office basis. In this role, you will work directly with Caterpillar’s global analytics and GCIO BI Services teams to design, develop, and maintain enterprise-grade Power BI reports, dashboards, scorecards, and advanced data visualizations. You will operate as a member of a Project/Scrum team within Caterpillar’s technology environment, engaging with business partners and internal support teams to provide data visualization development services for a wide variety of projects and business needs.


The ideal candidate combines deep Power BI expertise with strong backend data engineering skills, and can champion BI COE standards while partnering closely with data scientists, business analysts, and IT professionals across Caterpillar’s global operations. A minimum 5-hour daily overlap with US Central Time is required to ensure seamless collaboration with onshore stakeholders and end users.


Key responsibilities

  • Design and develop enterprise-grade Power BI dashboards, reports, and scorecards aligned to business needs.
  • Implement BI COE standards, governance, security (RLS), and best practices across BI tools and environments.
  • Build and optimize data models, DAX calculations, SQL queries, and data transformation pipelines.
  • Enhance performance using aggregation, incremental refresh, storage modes, and query optimization techniques.
  • Collaborate with business stakeholders, data engineers, and data scientists to deliver actionable insights.
  • Support documentation, training, troubleshooting, and continuous improvement initiatives.
  • Drive advanced analytics adoption, CI/CD practices, and mentor junior team members.


Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Science, Industrial Engineering, or a related field.
  • 3+ years of hands-on experience in Power BI development, including reports, dashboards, enterprise scorecards, and paginated reports.
  • Expert-level proficiency in Power BI Desktop, Power BI Service, Power BI Report Server, and Power BI Report Builder.
  • Expert working knowledge of DAX, Power Query (M language), and data modeling best practices (star schema, snowflake schema, dimensional modeling).
  • Strong backend skills with SQL Server, Azure SQL Database, Azure Synapse Analytics, or Snowflake – including writing complex T-SQL queries, stored procedures, CTEs, and window functions.
  • 3+ years of experience in relational database design, data modeling, and structured query language (SQL).
  • Hands-on experience with Azure Data Factory (ADF), Azure Data Lake, or similar ETL/ELT tools.
  • Experience working in Agile/Scrum methodology, with tools like Azure DevOps, Jira, or ServiceNow.


Caterpillar-Specific Experience (Strongly Preferred)

  • Prior experience within Caterpillar’s BI ecosystem, GCIO BI Services, and governance frameworks.
  • Familiarity with multi-tool BI environments (Power BI, Tableau, ThoughtSpot, Cognos, BOBJ).
  • Exposure to Caterpillar’s Azure cloud infrastructure, data lakes, and enterprise platforms.
  • Understanding of BI COE standards, data governance, naming conventions, and security protocols.
  • Domain experience in manufacturing, heavy equipment, construction, or mining industries.
  • Experience managing complex, enterprise-grade BI applications integrating multiple data sources.


Preferred Qualifications

  • Microsoft PL-300 certification or equivalent.
  • Experience with Microsoft Fabric, Azure Databricks, Snowflake/Snowpark.
  • Working knowledge of Python or R for advanced analytics.
  • Experience with Microsoft Power Platform (Power Apps, Power Automate).
  • Knowledge of SSAS Tabular models and XMLA endpoints.
  • Experience implementing CI/CD for Power BI using Azure DevOps.
  • Familiarity with ETL tools (SnapLogic, SSIS).
  • Prior consulting or client-facing delivery experience.


What we offer

  • Opportunity to work on high-impact analytics projects for Caterpillar Inc. - a Fortune 100 global leader with $67B+ in annual revenue.
  • Direct engagement with Caterpillar’s GCIO BI Services organization and US-based leadership teams.
  • Collaborative, innovation-driven work culture at Applix’s Hyderabad office with a team focused on enterprise BI excellence.
  • Competitive compensation and benefits package aligned with market standards.
  • Career growth with exposure to cutting-edge Microsoft data technologies, Snowflake, and enterprise-scale BI solutions.
  • Learning and development support, including Microsoft certification sponsorship (PL-300, DP-500, etc.).
  • Opportunity to contribute to Caterpillar’s BI Centre of Excellence standards and shape analytics best practices.
Tarento Group

at Tarento Group

3 candid answers
1 recruiter
Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
4yrs+
Best in industry
skill iconJava
skill iconSpring Boot
Microservices
Windows Azure
RESTful APIs
+5 more

Job Summary:

We are seeking a highly skilled and self-driven Java Backend Developer with strong experience in designing and deploying scalable microservices using Spring Boot and Azure Cloud. The ideal candidate will have hands-on expertise in modern Java development, containerization, messaging systems like Kafka, and knowledge of CI/CD and DevOps practices.

Key Responsibilities:

  • Design, develop, and deploy microservices using Spring Boot on Azure cloud platforms.
  • Implement and maintain RESTful APIs, ensuring high performance and scalability.
  • Work with Java 11+ features including Streams, Functional Programming, and Collections framework.
  • Develop and manage Docker containers, enabling efficient development and deployment pipelines.
  • Integrate messaging services like Apache Kafka into microservice architectures.
  • Design and maintain data models using PostgreSQL or other SQL databases.
  • Implement unit testing using JUnit and mocking frameworks to ensure code quality.
  • Develop and execute API automation tests using Cucumber or similar tools.
  • Collaborate with QA, DevOps, and other teams for seamless CI/CD integration and deployment pipelines.
  • Work with Kubernetes for orchestrating containerized services.
  • Utilize Couchbase or similar NoSQL technologies when necessary.
  • Participate in code reviews, design discussions, and contribute to best practices and standards.

Required Skills & Qualifications:

  • Strong experience in Java (11 or above) and Spring Boot framework.
  • Solid understanding of microservices architecture and deployment on Azure.
  • Hands-on experience with Docker, and exposure to Kubernetes.
  • Proficiency in Kafka, with real-world project experience.
  • Working knowledge of PostgreSQL (or any SQL DB) and data modeling principles.
  • Experience in writing unit tests using JUnit and mocking tools.
  • Experience with Cucumber or similar frameworks for API automation testing.
  • Exposure to CI/CD tools, DevOps processes, and Git-based workflows.

Nice to Have:

  • Azure certifications (e.g., Azure Developer Associate)
  • Familiarity with Couchbase or other NoSQL databases.
  • Familiarity with other cloud providers (AWS, GCP)
  • Knowledge of observability tools (Prometheus, Grafana, ELK)

Soft Skills:

  • Strong problem-solving and analytical skills.
  • Excellent verbal and written communication.
  • Ability to work in an agile environment and contribute to continuous improvement.

Why Join Us:

  • Work on cutting-edge microservice architectures
  • Strong learning and development culture
  • Opportunity to innovate and influence technical decisions
  • Collaborative and inclusive work environment
Pentabay Softwares

at Pentabay Softwares

1 recruiter
Sandhiya M
Posted by Sandhiya M
Chennai
1 - 5 yrs
₹2L - ₹8L / yr
ISO9001
ISO27001
Security Information and Event Management (SIEM)
Cyber Security
skill iconAmazon Web Services (AWS)
+4 more

Hi Folks, we are currently Hiring for Security Engineer.


Hiring: Security Engineer

Company: Pentabay Softwares

Location: Anna Salai, Mount Road

Mode: Full-time


Pentabay Softwares INC is looking for a proactive Security Engineer (2–7 Years Exp) to fortify our global digital solutions. As we scale our footprint in the Healthcare IT sector, you will play a critical role in safeguarding sensitive data (ePHI) and ensuring our cloud-native architectures are resilient against evolving threats.


The Mission

You will be the architect of our defense, bridging the gap between high-speed development and rigorous security standards. Your day-to-day will involve "shifting security left" by embedding DevSecOps practices into our CI/CD pipelines and leading our compliance efforts for SOC 2, ISO 27001, and HIPAA.


Key Responsibilities


Defense & Architecture: Design and maintain secure cloud (AWS/Azure/GCP) and on-prem environments. Implement IAM policies, Zero Trust frameworks, and robust secrets management.

Offensive Testing: Conduct regular vulnerability assessments (VAPT), penetration testing, and code reviews using tools like Burp Suite and Nessus.

DevSecOps & Automation: Integrate SAST/DAST/SCA scanning into engineering workflows. Automate security tasks using Python or Bash.

Incident Response: Monitor SIEM tools (Splunk/CrowdStrike), respond to threats, and develop risk mitigation strategies.

Healthcare Compliance (Plus): Ensure data integrity for HL7/FHIR APIs and maintain HIPAA/HITECH audit readiness for healthcare clients.


What You Bring


Experience: 2–7 years in Information/Application Security with a strong grasp of the OWASP Top 10 and threat modeling (STRIDE).

Technical Depth: Proficiency in network/endpoint security, PKI, encryption standards (TLS/SSL), and container security (Docker/Kubernetes).

Compliance Knowledge: Familiarity with NIST, GDPR, and SOC 2 frameworks.

Tools: Hands-on experience with Metasploit, Wireshark, and Infrastructure-as-Code (Terraform).

Bonus Points: Industry certifications like OSCP, CISSP, or CEH, and experience in Healthcare IT workflows.

Experience in the auditing space (ISO 27001, ISO 9001) preferred.


Why Pentabay?

At Pentabay, we offer more than just a job; we offer a security-first engineering culture.

Growth: A dedicated learning budget for certifications and conferences.

Impact: Work on cutting-edge Healthcare projects that demand the highest levels of data privacy.


Send resumes to : sandhiya.m at pentabay.com

Siemens
Bengaluru (Bangalore)
3 - 5 yrs
₹5L - ₹15L / yr
skill iconReact.js
skill iconPython
CI/CD
DevOps
Windows Azure
+4 more

Job opportunity for Developer -Python Full Stack with Siemens at Bangalore.


Interview Process:

 

1st round of interview - F2F (in-Person)-Technical

2nd round of interview – F2F /Virtual Interview - Technical

3rd round of interview – Virtual Interview – Technical + HR


Job Title / Designation: Developer -Python Full Stack

Employment Type: Full Time, Permanent

Location: Bangalore

Experience: 3-5 Years

Job Description: Developer - Python Full Stack

 

We are looking for a Python full stack expert who has proven 5+ years of experience in developing automation solutions in Linux-based environments. You should be capable of developing Python-based web applications or automation solutions, and have excellent knowledge of database handling and decent knowledge of Kubernetes-based deployment environments.

 

Required Skills:

 

  • Solid experience in Python back-end technology
  • Sound experience in web application development
  • Decent knowledge and experience in UI development using JavaScript, React/Angular or related tech stack.
  • Strong understanding of software design patterns and testing principles
  • Ability to learn and adapt to working with multiple programming languages.
  • Experience with Docker, ArgoCD, Kubernetes, and Terraform
  • Understanding of ETL processes to extract data from different data sources is a plus.
  • Proven experience in Linux development environments using Python.
  • Excellent knowledge in interacting with database systems (SQL, NoSQL) and webservices (REST)
  • Experienced in establishing an optimized CI/CD environment relevant to the project.
  • Good knowledge of repository management tools like Git, Bitbucket, etc.
  • Excellent debugging skills/strategies.
  • Excellent communication skills
  • Experienced in working in an Agile environment.

 

Nice to have

 

  • Good knowledge of the Eclipse IDE; developed add-ons/plugins on the Eclipse platform.
  • Knowledge of 93K Semiconductor test platforms
  • Good know-how of agile management tools like Jira, Azure DevOps.
  • Good knowledge of RHEL
  • Knowledge of JIRA administration 


Digital solutions and services company

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 7 yrs
₹17L - ₹23L / yr
skill iconMachine Learning (ML)
skill iconPython
ETL
skill iconData Science
SQL
+5 more

Data Scientist or Senior Machine Learning Engineer


We are currently hiring for the position of Data Scientist/ Senior Machine Learning Engineer (6–7 years' experience).


Please find the detailed Job Description attached for your reference.

We are looking for candidates with strong experience in:

  • Machine Learning model development
  • Scalable data pipeline development (ETL/ELT)
  • Python and SQL
  • Cloud platforms such as Azure/AWS/Databricks
  • ML deployment environments (SageMaker, Azure ML, etc.)


Kindly note:

  • Location: Pune (Work from Office)
  • Immediate joiners preferred


While sharing profiles, please ensure the following details are included:

  • Current CTC
  • Expected CTC
  • Notice Period
  • Current Location
  • Confirmation on Pune WFO comfort


Must have Skills

  • Machine Learning - 6 Years
  • Python - 6 Years
  • ETL (Extract, Transform, Load) - 6 Years
  • SQL - 6 Years
  • Azure - 6 Years


Request you to share relevant profiles at the earliest. Looking forward to your support.

MIC Global

at MIC Global

3 candid answers
1 product
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2yrs+
Upto ₹15L / yr (Varies)
skill icon.NET
skill iconC#
Object Oriented Programming (OOPs)
Design patterns
MS SQLServer
+4 more

About the Role 

We're seeking a Junior .NET Developer with 2 years of experience to join our insurtech team. This role offers an opportunity to work with cloud technologies and contribute to our existing codebase and cloud migration initiatives. 


Key Responsibilities 

  • Write clean, maintainable code using C# and .NET (.NET Core, ASP.NET, Web API)
  • Develop new features and participate in microservices architecture development 
  • Write unit and integration tests to ensure code quality 
  • Work with MS SQL Server - write Stored Procedures, Views, and Functions 
  • Support Azure cloud integration and automated deployment pipelines using Azure DevOps 
  • Collaborate with infrastructure teams and senior architects on migration initiatives 
  • Estimate work, break down deliverables, and deliver to deadlines 
  • Take ownership of your work with focus on quality and continuous improvement 


Requirements 

Essential 

  • 2 years of experience with C# and .NET development 
  • Strong understanding of OOP concepts and Design Patterns 
  • MS SQL Server programming experience 
  • Experience working on critical projects 
  • Self-starter with strong problem-solving and analytical skills 
  • Excellent communication and ability to work independently and in teams 

Desirable 

  • Microsoft Azure experience (App Service, Functions, SQL Database, Service Bus) 
  • Knowledge of distributed systems and microservices architecture 
  • DevOps and CI/CD pipeline experience (Azure DevOps preferred) 
  • Front-end development with HTML5, CSS, JavaScript, React 


Tech Stack 

C#, .NET Framework, WPF, WCF, REST & SOAP APIs, MS SQL Server 2016+, Microsoft Azure, HTML5, CSS, JavaScript, React, Azure DevOps, TFS, GitHub

Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
7 - 10 yrs
₹20L - ₹40L / yr
DevOps
skill iconAmazon Web Services (AWS)
CI/CD
Linux/Unix
skill iconGitHub
+19 more

Description

SRE Engineer


Role Overview 

As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.


Responsibilities and Deliverables

• Manage, monitor, and maintain highly available systems (Windows and Linux)

• Analyze metrics and trends to ensure rapid scalability.

• Address routine service requests while identifying ways to automate and simplify.

• Create infrastructure as code using Terraform, ARM Templates, and CloudFormation.

• Maintain data backups and disaster recovery plans.

• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.

• Adhere to security best practices through all stages of the software development lifecycle

• Follow and champion ITIL best practices and standards.

• Become a resource for emerging and existing cloud technologies with a focus on AWS.


Organizational Alignment

• Reports to the Senior SRE Manager

• This role involves close collaboration with DevOps, DBA, and security teams.


Technical Proficiencies

• Hands-on experience with AWS is a must-have.

• Proficiency in analyzing application, IIS, system, and security logs and CloudTrail events

• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus

• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.

• Experience maintaining and administering Windows, Linux, and Kubernetes.

• Experience in automation using scripting languages such as Bash, PowerShell, or Python.

• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.

• Experience with SQL Server database maintenance and administration is preferred.

• Good Understanding of networking (VNET, subnet, private link, VNET peering).

• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure
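Since log analysis is a core proficiency for this role, here is a minimal sketch of the kind of triage an SRE might script: counting HTTP status codes and an error rate from access-log lines. The log format and field positions are a simplified, hypothetical assumption; real IIS/W3C schemas vary:

```python
from collections import Counter

# Hypothetical access-log lines in a simplified W3C-like format:
# date time method path status
log_lines = [
    "2024-05-01 10:00:01 GET /api/orders 200",
    "2024-05-01 10:00:02 GET /api/orders 500",
    "2024-05-01 10:00:03 POST /api/orders 201",
    "2024-05-01 10:00:04 GET /health 200",
]

# The status code is the last whitespace-separated field on each line.
status_counts = Counter(line.rsplit(" ", 1)[-1] for line in log_lines)
error_rate = status_counts["500"] / len(log_lines)

print(status_counts)  # Counter({'200': 2, '500': 1, '201': 1})
print(error_rate)     # 0.25
```

The same aggregation is what observability tools like New Relic or DataDog compute at scale; scripting it by hand is useful for ad-hoc investigation on a single host.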


Experience

• 7+ years of experience in SRE or System Administration role

• Demonstrated ability to build and support high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)

• 3+ years of experience working with cloud technologies including AWS, Azure.

• 1+ years of experience working with container technology including Docker and Kubernetes.

• Comfortable using Scrum, Kanban, or Lean methodologies.


Education

• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.


Additional Job Details:

• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST

• Interview process: 3 technical rounds

• Work model: 3 days per week work from office


Talent Pro
Mayank choudhary
Posted by Mayank choudhary
Bengaluru (Bangalore)
1 - 3 yrs
₹4L - ₹5L / yr
Windows Azure

Strong Azure DevOps Engineer Profiles.

Mandatory (Experience 1) – Must have minimum 1+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).

Mandatory (Experience 2) – Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.

Mandatory (Experience 3) – Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell.

Mandatory (Experience 4) – Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database.

Mandatory (Experience 5) – Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis.

Mandatory (Note) - Only Male candidates are considered.

Mandatory (Location): The candidate must be currently in Bengaluru.
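For a flavor of the YAML-based CI/CD requirement above, here is a minimal `azure-pipelines.yml` sketch with separate build/test and deploy stages. The stage and job names are placeholders, and the `script` steps stand in for real build, test, and deployment commands.

```yaml
# Minimal Azure Pipelines sketch: build/test stage followed by a deploy stage.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: echo "build the application here"
            displayName: Build
          - script: echo "run the test suite here"
            displayName: Run tests
  - stage: Deploy
    dependsOn: Build          # deploy only after Build succeeds
    jobs:
      - job: DeployJob
        steps:
          - script: echo "deploy the artifact here"
            displayName: Deploy
```

A real pipeline would replace the `echo` steps with build tasks and publish/consume artifacts between the stages.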

SPGConsulting
Anitha K
Posted by Anitha K
Bengaluru (Bangalore)
3 - 5 yrs
₹4L - ₹15L / yr
Python
Django
Flask
API
Databricks
+1 more

Job Title: Python Developer (Django / Databricks / Azure)

📍 Location: Bangalore

🕒 Experience: 3–8 Years

💼 Employment Type: FTE

🔹 Job Summary:

We are seeking a skilled Python Developer with strong experience in Django, Flask API development, Databricks, and Azure Cloud. The ideal candidate will be responsible for designing scalable backend systems, developing REST APIs, building data pipelines, and working with cloud-based data platforms.

🔹 Key Responsibilities:

✔ Develop and maintain web applications using Django framework

✔ Design and build RESTful APIs using Flask

✔ Develop and optimize data pipelines using Azure Databricks

✔ Integrate applications with Azure services (Blob, Data Factory, SQL, etc.)

✔ Write clean, scalable, and efficient Python code

✔ Collaborate with frontend, DevOps, and data engineering teams

✔ Perform code reviews and ensure best practices

✔ Troubleshoot, debug, and upgrade existing systems

🔹 Required Skills:

  • Strong proficiency in Python programming
  • Hands-on experience with Django framework
  • Experience building Flask-based REST APIs
  • Experience working with Azure Databricks
  • Knowledge of Azure Cloud services
  • Experience with SQL / NoSQL databases
  • Understanding of CI/CD and Git workflows

🔹 Good to Have:

  • Experience with PySpark
  • Knowledge of microservices architecture
  • Docker / Kubernetes exposure
  • Experience in data engineering projects