ADFS Jobs in Bangalore (Bengaluru)


Apply to 11+ ADFS Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest ADFS Job opportunities across top companies like Google, Amazon & Adobe.

EZEU (OPC) India Pvt Ltd
Bengaluru (Bangalore), Pune
10 - 15 yrs
₹20L - ₹40L / yr
Windows Azure
AZ-300
AZ-301
Azure
Active Directory
+4 more
AZ-300 / AZ-301 certification is a must.

Azure, Azure AD, ADFS, Azure AD Connect, Microsoft Identity management

Azure, Architecture, solution designing, Subscription Design

Technology Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
2 - 5 yrs
₹4L - ₹5L / yr
DevOps
Windows Azure
CI/CD
MySQL
Python
+12 more

JOB DETAILS:

  • Job Title: DevOps Engineer (Azure)
  • Industry: Technology
  • Salary: Best in Industry
  • Experience: 2-5 years
  • Location: Bengaluru, Koramangala

Review Criteria

  • Strong Azure DevOps Engineer Profiles.
  • Must have minimum 2+ years of hands-on experience as an Azure DevOps Engineer with strong exposure to Azure DevOps Services (Repos, Pipelines, Boards, Artifacts).
  • Must have strong experience in designing and maintaining YAML-based CI/CD pipelines, including end-to-end automation of build, test, and deployment workflows.
  • Must have hands-on scripting and automation experience using Bash, Python, and/or PowerShell.
  • Must have working knowledge of databases such as Microsoft SQL Server, PostgreSQL, or Oracle Database.
  • Must have experience with monitoring, alerting, and incident management using tools like Grafana, Prometheus, Datadog, or CloudWatch, including troubleshooting and root cause analysis.
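For reference, the YAML-based Azure DevOps pipelines this role centers on look roughly like the sketch below. The project layout, test command, and environment name are illustrative assumptions, not details from the posting:

```yaml
# azure-pipelines.yml -- minimal build/test/deploy sketch (placeholder names)
trigger:
  branches:
    include: [main]

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: pip install -r requirements.txt
            displayName: Install dependencies
          - script: pytest --junitxml=results.xml
            displayName: Run tests
          - task: PublishTestResults@2
            inputs:
              testResultsFiles: 'results.xml'
  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - deployment: DeployToStaging
        environment: staging   # approvals/gates are attached to the environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Deploying build $(Build.BuildId)"
```

A real pipeline would typically add artifact publishing and environment approvals on top of this skeleton.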

 

Preferred

  • Knowledge of containerisation and orchestration tools such as Docker and Kubernetes.
  • Knowledge of Infrastructure as Code and configuration management tools such as Terraform and Ansible.
  • Preferred (Education) – BE/BTech / ME/MTech in Computer Science or related discipline

 

Role & Responsibilities

  • Build and maintain Azure DevOps YAML-based CI/CD pipelines for build, test, and deployments.
  • Manage Azure DevOps Repos, Pipelines, Boards, and Artifacts.
  • Implement Git branching strategies and automate release workflows.
  • Develop scripts using Bash, Python, or PowerShell for DevOps automation.
  • Monitor systems using Grafana, Prometheus, Datadog, or CloudWatch and handle incidents.
  • Collaborate with dev and QA teams in an Agile/Scrum environment.
  • Maintain documentation, runbooks, and participate in root cause analysis.
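As a concrete (hypothetical) illustration of the scripting side of this role, here is a minimal Python triage helper that counts ERROR lines per component in application logs. The log format is an assumption made for the example, not something specified in the posting:

```python
import re
from collections import Counter

# Assumed log line format for illustration: "LEVEL component: message"
LINE = re.compile(r"^(?P<level>[A-Z]+)\s+(?P<component>[\w-]+):")

def error_summary(log_lines):
    """Count ERROR lines per component -- a typical first step in triage."""
    counts = Counter()
    for line in log_lines:
        m = LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("component")] += 1
    return dict(counts)

log = [
    "INFO api: request ok",
    "ERROR db: connection refused",
    "ERROR db: connection refused",
    "ERROR api: upstream timeout",
]
print(error_summary(log))  # {'db': 2, 'api': 1}
```

In practice a script like this would feed an alert or a runbook step rather than a print statement.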

 

Ideal Candidate

  • 2–5 years of experience as an Azure DevOps Engineer.
  • Strong hands-on experience with Azure DevOps CI/CD (YAML) and Git.
  • Experience with Microsoft Azure (OCI/AWS exposure is a plus).
  • Working knowledge of SQL Server, PostgreSQL, or Oracle.
  • Good scripting, troubleshooting, and communication skills.
  • Bonus: Docker, Kubernetes, Terraform, Ansible experience.
  • Comfortable with WFO (Koramangala, Bangalore).


Wissen Technology
Posted by Tony Tom
Bengaluru (Bangalore)
2 - 6 yrs
Best in industry
Terraform
Python
Linux/Unix
Infrastructure
Docker
+5 more

GCP Cloud Engineer:

  • Proficiency in infrastructure as code (Terraform).
  • Scripting and automation skills (e.g., Python, Shell); Python is a must.
  • Collaborate with teams across the company (i.e., network, security, operations) to build complete cloud offerings.
  • Design Disaster Recovery and backup strategies to meet application objectives.
  • Working knowledge of Google Cloud
  • Working knowledge of various tools, open-source technologies, and cloud services
  • Experience working on Linux-based infrastructure.
  • Excellent problem-solving and troubleshooting skills
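Since Terraform proficiency leads this list, a minimal sketch of GCP infrastructure as code may be useful context. The project ID, resource names, and CIDR range below are placeholders, not details from the posting:

```hcl
# Illustrative Terraform for a GCP network; values are placeholders.
provider "google" {
  project = "example-project"
  region  = "asia-south1"
}

resource "google_compute_network" "vpc" {
  name                    = "demo-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "demo-subnet"
  network       = google_compute_network.vpc.id
  ip_cidr_range = "10.0.0.0/24"
  region        = "asia-south1"
}
```

Modules, remote state, and a plan/apply pipeline would sit on top of a skeleton like this.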


Biofourmis
Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Expertise with observability tools like ELK, Prometheus, Grafana, Loki
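The Prometheus/Grafana monitoring mentioned above is normally driven by declarative alerting rules. A minimal sketch, where the metric name and thresholds are assumptions for illustration:

```yaml
# prometheus-rules.yml -- illustrative alerting rule (assumed metric/thresholds)
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
```

Rules like this typically route through Alertmanager to on-call, with Grafana dashboards built over the same metrics.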


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

Gipfel & Schnell Consultings Pvt Ltd
Posted by Aravind Kumar
Bengaluru (Bangalore)
6 - 12 yrs
₹20L - ₹40L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+13 more

Job Description:


  • Drive end-to-end automation from GitHub/GitLab/BitBucket to deployment, observability, and enabling SRE activities
  • Guide operations support (setup, configuration, management, troubleshooting) of digital platforms and applications
  • Solid understanding of DevSecOps workflows that support CI, CS, CD, CM, CT
  • Deploy, configure, and manage SaaS and PaaS cloud platforms and applications
  • Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
  • DevOps programming: writing scripts; building operations/server instance/app/DB monitoring tools; setting up and managing the continuous build and dev project management environment (Jenkins X/GitHub Actions/Tekton, Git, Jira); designing secure networks, systems, and application architectures
  • Collaborating with cross-functional teams to ensure secure product development
  • Disaster recovery, network forensics analysis, and pen-testing solutions
  • Planning, researching, and developing security policies, standards, and procedures
  • Awareness training of the workforce on information security standards, policies, and best practices
  • Installation and use of firewalls, data encryption, and other security products and procedures
  • Maturity in understanding compliance, policy, and cloud governance, and the ability to identify and execute automation
  • At Wesco, we discuss solutions more than problems. We celebrate innovation and creativity.

ValueLabs
Posted by Thulan Kumar
Bengaluru (Bangalore), Chennai
5 - 9 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
  • Bachelor of Computer Science or Equivalent Education
  • At least 5 years of experience in a relevant technical position.
  • Azure and/or AWS experience
  • Strong in CI/CD concepts and technologies like GitOps (Argo CD)
  • Hands-on experience with DevOps Tools (Jenkins, GitHub, SonarQube, Checkmarx)
  • Experience with Helm Charts for package management
  • Strong in Kubernetes, OpenShift, and Container Network Interface (CNI)
  • Experience with programming and scripting languages (Spring Boot, NodeJS, Python)
  • Strong container image management experience using Docker and distroless concepts
  • Familiarity with Shared Libraries for code reuse and modularity
  • Excellent communication skills (verbal, written, and presentation)
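To illustrate the distroless image management mentioned above: a typical approach is a multi-stage build that ends in a shell-less runtime image. The Spring Boot artifact name and paths below are assumptions for the example, not from the posting:

```dockerfile
# Illustrative multi-stage build for a Spring Boot service (placeholder names).
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY . .
RUN mvn -q package -DskipTests

# Distroless runtime: no shell or package manager, smaller attack surface.
FROM gcr.io/distroless/java17-debian12
COPY --from=build /src/target/app.jar /app.jar
USER nonroot
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

The same pattern packages via a Helm chart for deployment to Kubernetes/OpenShift.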


Note: Looking for immediate joiners only.

IntelliFlow Solutions Pvt Ltd
Posted by Divyashree Abhilash
Remote, Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹12L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+3 more
IntelliFlow.ai is a next-gen technology SaaS Platform company providing tools for companies to design, build and deploy enterprise applications with speed and scale. It innovates and simplifies the application development process through its flagship product, IntelliFlow. It allows business engineers and developers to build enterprise-grade applications to run frictionless operations through rapid development and process automation. IntelliFlow is a low-code platform to make business better with faster time-to-market and succeed sooner.

Looking for an experienced candidate with strong development and programming experience. Preferred knowledge:

  • Cloud computing (i.e. Kubernetes, AWS, Google Cloud, Azure)
  • Coming from a strong development background and has programming experience with Java and/or NodeJS (other programming languages such as Groovy/python are a big bonus)
  • Proficient with Unix systems and bash
  • Proficient with git/GitHub/GitLab/bitbucket

 

Desired skills-

  • Docker
  • Kubernetes
  • Jenkins
  • Experience in any scripting language (Python, Shell Scripting, JavaScript)
  • NGINX / Load Balancer
  • Splunk / ETL tools
Kutumb
Posted by Dimpy Mehra
Bengaluru (Bangalore)
2 - 4 yrs
₹15L - ₹30L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Kutumb is the first and largest communities platform for Bharat. We are growing at an exponential trajectory. More than 1 Crore users use Kutumb to connect with their community. We are backed by world-class VCs and angel investors. We are growing and looking for exceptional Infrastructure Engineers to join our Engineering team.

 

More on this here - https://kutumbapp.com/why-join-us.html

 

We’re excited if you have:

  • Recent experience designing and building unified observability platforms that enable companies to use the sometimes-overwhelming amount of available data (metrics, logs, and traces) to determine quickly if their application or service is operating as desired
  • Expertise in deploying and using open-source observability tools in large-scale environments, including Prometheus, Grafana, ELK (ElasticSearch + Logstash + Kibana), Jaeger, Kiali, and/or Loki
  • Familiarity with open standards like OpenTelemetry, OpenTracing, and OpenMetrics
  • Familiarity with Kubernetes and Istio as the architecture on which the observability platform runs, and how they integrate and scale. Additionally, the ability to contribute improvements back to the joint platform for the benefit of all teams
  • Demonstrated customer engagement and collaboration skills to curate custom dashboards and views, and identify and deploy new tools, to meet their requirements
  • The drive and self-motivation to understand the intricate details of a complex infrastructure environment 
  • Using CICD tools to automatically perform canary analysis and roll out changes after passing automated gates (think Argo & keptn)
  • Hands-on experience working with AWS 
  • Bonus points for knowledge of ETL pipelines and Big data architecture
  • Great problem-solving skills & takes pride in your work
  • Enjoys building scalable and resilient systems, with a focus on systems that are robust by design and suitably monitored
  • Abstracting all of the above into as simple of an interface as possible (like Knative) so developers don't need to know about it unless they choose to open the escape hatch
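The automated canary rollouts referenced above (think Argo) are usually expressed declaratively. A minimal Argo Rollouts sketch, where the service name, image, weights, and pause durations are placeholders:

```yaml
# Illustrative Argo Rollouts canary strategy (placeholder names/values).
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-service
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demo-service
  template:
    metadata:
      labels:
        app: demo-service
    spec:
      containers:
        - name: demo-service
          image: registry.example.com/demo-service:v2
  strategy:
    canary:
      steps:
        - setWeight: 20          # shift 20% of traffic to the new version
        - pause: {duration: 5m}  # automated analysis gate can run here
        - setWeight: 50
        - pause: {duration: 5m}
```

An AnalysisTemplate querying Prometheus would normally gate the pauses instead of fixed durations.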

 

What you’ll be doing:

  • Design and build automation around the chosen tools to make onboarding new services easy for developers (dashboards, alerts, traces, etc)
  • Demonstrate great communication skills in working with technical and non-technical audiences
  • Contribute new open-source tools and/or improvements to existing open-source tools back to the CNCF ecosystem

 

Tools we use:

Kops, Argo, Prometheus/ Loki/ Grafana, Kubernetes, AWS, MySQL/ PostgreSQL, Apache Druid, Cassandra, Fluentd, Redis, OpenVPN, MongoDB, ELK

 

What we offer:

  • High pace of learning
  • Opportunity to build the product from scratch
  • High autonomy and ownership
  • A great and ambitious team to work with
  • Opportunity to work on something that really matters
  • Top of the class market salary and meaningful ESOP ownership
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
Job Description:

About BootLabs

https://www.bootlabs.in/

- We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions.
- We are obsessed with anything "CLOUD". Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation and managed cloud services.
- We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:


Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.

  • AWS
        Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
        Data: RDS, DynamoDB, Elasticsearch
        Workload: EC2, EKS, Lambda, etc.
  • Azure
        Networking: VNET, VNET Peering
        Data: Azure MySQL, Azure MSSQL, etc.
        Workload: AKS, Virtual Machines, Azure Functions
  • GCP
        Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
        Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
        Workload: GKE, Instances, App Engine, Batch, etc.

  • Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
  • Kubernetes experience (EKS/AKS/GKE) or Ansible experience: basics like pods, deployments, networking, and service mesh; has used a package manager like Helm.
  • Scripting experience (Bash/Python): automation in pipelines when required, system services.
  • Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines, and version the code.

Optional:

  • Experience in any programming language is not required but is appreciated.
  • Good experience in Git, SVN or any other code management tool is required.
  • DevSecOps tools (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
  • Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
Acqueon Technology
Posted by Rishav Rahul
Bengaluru (Bangalore)
8 - 12 yrs
₹25L - ₹35L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+2 more
  • Design, Develop, deploy, and run operations of infrastructure services in the Acqueon AWS cloud environment
  • Manage uptime of Infra & SaaS Application
  • Implement application performance monitoring to ensure platform uptime and performance
  • Building scripts for operational automation and incident response
  • Handle schedule and processes surrounding cloud application deployment
  • Define, measure, and meet key operational metrics including performance, incidents and chronic problems, capacity, and availability
  • Lead the deployment, monitoring, maintenance, and support of operating systems (Windows, Linux)
  • Build out lifecycle processes to mitigate risk and ensure platforms remain current, in accordance with industry standard methodologies
  • Run incident resolution within the environment, facilitating teamwork with other departments as required
  • Automate the deployment of new software to cloud environment in coordination with DevOps engineers
  • Work closely with Presales, understand customer requirement to deploy in Production
  • Lead and mentor a team of operations engineers
  • Drive the strategy to evolve and modernize existing tools and processes to enable highly secure and scalable operations
  • AWS infrastructure management, provisioning, cost management and planning
  • Prepare RCA incident reports for internal and external customers
  • Participate in product engineering meetings to ensure product features and patches comply with cloud deployment standards
  • Troubleshoot and analyse performance issues and customer reported incidents working to restore services within the SLA
  • Monthly SLA Performance reports

As a Cloud Operations Manager at Acqueon, you will need:

  • 8 years’ progressive experience managing IT infrastructure and global cloud environments such as AWS, GCP (must)
  • 3-5 years management experience leading a Cloud Operations / Site Reliability / Production Engineering team working with globally distributed teams in a fast-paced environment
  • 3-5 years’ experience in IaC (Terraform, K8s)
  • 3+ years end-to-end incident management experience
  • Experience with communicating and presenting to all stakeholders
  • Experience with Cloud Security compliance and audits
  • Detail-oriented. The ideal candidate is one who naturally digs as deep as they need to understand the why
  • Knowledge on GCP will be added advantage
  • Manage and monitor customer instances for uptime and reliability
  • Staff scheduling and planning to ensure 24x7x365 coverage for cloud operations
  • Customer facing, excellent communication skills, team management, troubleshooting
Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
+7 more
Job Title: DevOps Engineer | Work Experience: 3-7 years | Qualification: B.E / M.Tech | Location: Bangalore, India

About Pramata

Pramata’s unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications on the Pramata cloud-based customer digitization platform. Pramata’s customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California, and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works

Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing and other systems, and delivers it in the context of a particular user’s role and responsibilities. This is done through Pramata’s unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth and ensures that this data remains consistent, accessible and highly secure.

The opportunity - What you get to do

You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment & application management, and day-to-day support of development teams. You will manage the development of capabilities to achieve higher automation, quality and performance in:

  • Automated build and deployment management, release management, on-demand environment configuration & automation, and configuration and change management
  • Production environment support - application monitoring, performance management and production support of mission-critical applications, including application and system uptime and remote diagnostics
  • Security - ensure that the highly sensitive data from our customers is secure at all times
  • Instrumenting applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
  • High availability and disaster recovery - build and maintain systems that are designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place
  • Automating provisioning and integration tasks as required to deploy new code
  • Monitoring - proactive steps to monitor complex interdependent systems to ensure that issues are being identified and addressed in real time

Skills required:

  • Excellent communicator with great interpersonal skills, driving clarity about the intricate systems
  • Hands-on experience in application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
  • Good understanding of software application builds, configuration management and deployments
  • Strong scripting skills (Shell, Ruby, Python, Perl, etc.) and a passion for automation
  • Comfortable with collaboration, open communication and reaching across functional borders
  • Advanced problem-solving and task break-down ability

Additional Skills (good to have but not mandatory):

  • In-depth understanding of and experience working with cloud platforms (e.g. AWS, Azure, Google Cloud)
  • Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
  • Able to work under pressure and solve problems using an analytical approach; decisive, fast-moving, with a positive attitude

Minimum Qualifications:

  • Bachelor’s Degree in Computer Science or a related field
  • Background in technology operations for Linux-based applications with 2-4 years of experience in enterprise software
  • Strong programming skills in Python, Shell or Java
  • Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
  • Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS