ADFS Jobs in Bangalore (Bengaluru)


Apply to 11+ ADFS Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest ADFS Job opportunities across top companies like Google, Amazon & Adobe.

EZEU (OPC) India Pvt Ltd
Bengaluru (Bangalore), Pune
10 - 15 yrs
₹20L - ₹40L / yr
Windows Azure
AZ-300
AZ-301
Azure
Active Directory
+4 more
AZ-300 and AZ-301 certifications are a must

Azure, Azure AD, ADFS, Azure AD Connect, Microsoft Identity management

Azure, Architecture, solution designing, Subscription Design

Global Digital Transformation Solutions Provider


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Biofourmis

Posted by Roopa Ramalingamurthy
Remote only
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Summary:

We are looking for a highly skilled and experienced DevOps Engineer who will be responsible for the deployment, configuration, and troubleshooting of various infrastructure and application environments. The candidate must have a proficient understanding of CI/CD pipelines, container orchestration, and cloud services, with experience in AWS services like EKS, EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment. The DevOps Engineer will be responsible for monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration, among other tasks. They will also work with application teams on infrastructure design and issues, and architect solutions to optimally meet business needs.


Responsibilities:

  • Deploy, configure, and troubleshoot various infrastructure and application environments
  • Work with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc., in a highly available and scalable production environment
  • Monitor, automate, troubleshoot, secure, maintain users, and report on infrastructure and applications
  • Collaborate with application teams on infrastructure design and issues
  • Architect solutions that optimally meet business needs
  • Implement CI/CD pipelines and automate deployment processes
  • Disaster recovery and infrastructure restoration
  • Restore/Recovery operations from backups
  • Automate routine tasks
  • Execute company initiatives in the infrastructure space
  • Use observability tools like ELK, Prometheus, Grafana, and Loki


Qualifications:

  • Proficient understanding of CI/CD pipelines, container orchestration, and various cloud services
  • Experience with AWS services like EC2, ECS, EBS, ELB, S3, Route 53, RDS, ALB, etc.
  • Experience in monitoring, automation, troubleshooting, security, user management, reporting, migrations, upgrades, disaster recovery, and infrastructure restoration
  • Experience in architecting solutions that optimally meet business needs
  • Experience with scripting languages (e.g., Shell, Python) and infrastructure as code (IaC) tools (e.g., Terraform, CloudFormation)
  • Strong understanding of system concepts like high availability, scalability, and redundancy
  • Ability to work with application teams on infrastructure design and issues
  • Excellent problem-solving and troubleshooting skills
  • Experience with automation of routine tasks
  • Good communication and interpersonal skills


Education and Experience:

  • Bachelor's degree in Computer Science or a related field
  • 5 to 10 years of experience as a DevOps Engineer or in a related role
  • Experience with observability tools like ELK, Prometheus, Grafana


Working Conditions:

The DevOps Engineer will work in a fast-paced environment, collaborating with various application teams, stakeholders, and management. They will work both independently and in teams, and they may need to work extended hours or be on call to handle infrastructure emergencies.


Note: This is a remote role. The team member is expected to be in the Bangalore office for one week each quarter.

Top 3 Fintech Startup


Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
3 - 4 yrs
₹4L - ₹12L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+4 more
Section 1
- Responsible for building, managing, and maintaining deployment pipelines and developing self-service tooling for managing Git, Linux, Kubernetes, Docker, CI/CD & pipelining, etc., in cloud infrastructure
- Responsible for building and managing the DevOps agile toolchain
- Responsible for working as an integrator between developer teams and various cloud infrastructures.

Section 2

- Responsibilities include helping the development team with best practices, provisioning monitoring, troubleshooting, optimizing and tuning, automating and improving deployment and release processes.

Section 3

- Responsible for maintaining application security, with periodic tracking and upgrading of package dependencies in coordination with the respective developer teams.
- Responsible for packaging and containerization of deploy units, and strategizing it in coordination with the developer team
Section 4
- Setting up tools and required infrastructure. Defining and setting development, test, release, update, and support processes for DevOps operation
- Responsible for documentation of the process.
- Responsible for leading projects with end-to-end execution

Qualification: Bachelor of Engineering / MCA, preferably with AWS Cloud certification

Ideal Candidate:
- Has 2-4 years of experience, with AWS certification and DevOps experience.
- Is under 30 years of age, self-motivated, and enthusiastic.
- Is interested in building a sustainable DevOps platform with maximum automation.
- Is interested in learning and being challenged on a day-to-day basis.
- Takes ownership of tasks and is willing to take the necessary action to get them done.
- Can solve complex problems.
- Is honest about the quality of their work and comfortable taking ownership of both success and failure.
Reqroots

Posted by Dhanalakshmi D
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client in Bangalore (permanent role).

Experience: 4+ Yrs

Responsibilities:

• As part of a team, you will design, develop, and maintain scalable multi-cloud DevOps blueprints.

• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering & legacy application modernization

• Continuously improve the CI/CD pipeline, tools, processes, procedures, and systems relating to Developer Productivity

• Collaborate continuously with the product development teams to implement CI/CD pipeline.

• Contribute to the subject matter on Developer Productivity, DevOps, Infrastructure Automation best practices.


Mandatory Skills:

• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.

• Strong scripting skills (Java or Python) are a must.

• Experience with automation tools such as Ansible, Chef, Puppet etc.

• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle

• Hands-on working experience in developing or deploying microservices is a must.

• Hands-on working experience with at least one of the popular cloud infrastructures such as AWS / Azure / GCP / Red Hat OpenStack is a must.

• Knowledge about microservices hosted in leading cloud environments

• Experience with containerizing applications (Docker preferred) is a must

• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.

• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software


Desirable Skills:

• Experience working with Secret management services such as HashiCorp Vault is desirable.

• Experience working with Identity and access management services such as Okta, Cognito is desirable.

• Experience with monitoring systems such as Prometheus, Grafana is desirable.


Educational Qualifications and Experience:

• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)

• 4 to 6 years of hands-on experience in server-side application development & DevOps

Agency job
via Molecular Connections by Molecular Connections
Bengaluru (Bangalore)
2 - 4 yrs
₹5L - ₹10L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+2 more

We are looking to fill the role of Kubernetes Engineer. To join our growing team, please review the list of responsibilities and qualifications below.

Kubernetes Engineer Responsibilities

  • Install, configure, and maintain Kubernetes clusters.
  • Develop Kubernetes-based solutions.
  • Improve Kubernetes infrastructure.
  • Work with other engineers to troubleshoot Kubernetes issues.

Kubernetes Engineer Requirements & Skills

  • Kubernetes administration experience, including installation, configuration, and troubleshooting
  • Kubernetes development experience
  • Linux/Unix experience
  • Strong analytical and problem-solving skills
  • Excellent communication and interpersonal skills
  • Ability to work independently and as part of a team
ValueLabs

Posted by Thulan Kumar
Bengaluru (Bangalore), Chennai
5 - 9 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
  • Bachelor of Computer Science or Equivalent Education
  • At least 5 years of experience in a relevant technical position.
  • Azure and/or AWS experience
  • Strong in CI/CD concepts and technologies like GitOps (Argo CD)
  • Hands-on experience with DevOps Tools (Jenkins, GitHub, SonarQube, Checkmarx)
  • Experience with Helm Charts for package management
  • Strong in Kubernetes, OpenShift, and Container Network Interface (CNI)
  • Experience with programming and scripting languages (Spring Boot, NodeJS, Python)
  • Strong container image management experience using Docker and distroless concepts
  • Familiarity with Shared Libraries for code reuse and modularity
  • Excellent communication skills (verbal, written, and presentation)


Note: Looking for immediate joiners only.

Vume Interactive

Posted by Shweta Jaiswal
Bengaluru (Bangalore), Hyderabad
5 - 7 yrs
₹3L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+5 more

Key Responsibilities:

  • Work with the development team to plan, execute and monitor deployments
  • Capacity planning for product deployments
  • Adopt best practices for deployment and monitoring systems
  • Ensure the SLAs for performance and uptime are met
  • Constantly monitor systems, suggest changes to improve performance and decrease costs.
  • Ensure the highest standards of security



Key Competencies (Functional):

 

  • Proficiency in coding in at least one scripting language (e.g., Bash, Python)
  • Has personally managed a fleet of servers (> 15)
  • Understands the different environments: production, deployment, and staging
  • Has worked in microservice / service-oriented architecture systems
  • Has worked with automated deployment systems (Ansible / Chef / Puppet)
  • Can write MySQL queries
Toast

Posted by Rahul Jain
Remote, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+4 more

Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.


At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.


About this roll* (Responsibilities) 

  • Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
  • Partner with development teams to improve services through rigorous testing and release procedures
  • Participate in system design consulting, platform management, and capacity planning
  • Create sustainable systems and services through automation and uplift
  • Balance feature development speed and reliability with well-defined service level objectives


Troubleshooting and Supporting Escalations:

  • Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
  • Implement strategies to increase system reliability and performance through on-call rotation and process optimization
  • Perform blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again
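The SLO language in the responsibilities above is usually made concrete as an error budget: the fraction of requests an SLO permits to fail, tracked against actual failures. As a minimal sketch (the 99.9% target, the helper name, and the request counts are illustrative assumptions, not figures from this posting):

```python
def error_budget_remaining(total_requests, failed_requests, slo=0.999):
    """Return the fraction of the error budget still unspent (negative if blown)."""
    if total_requests == 0:
        return 1.0  # no traffic, no budget consumed
    # Budget expressed in requests: how many failures the SLO permits.
    allowed_failures = total_requests * (1 - slo)
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return 1 - failed_requests / allowed_failures

# With a 99.9% SLO, 1,000,000 requests allow ~1,000 failures;
# 250 observed failures leave roughly 75% of the budget.
```

A remaining budget near zero (or negative) is the usual signal to slow feature releases and prioritize reliability work, which is the trade-off the "balance feature development speed and reliability" bullet describes.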


Do you have the right ingredients? (Requirements)


  • Extensive industry experience, with at least 7 years in SRE and/or DevOps roles
  • Polyglot technologist/generalist with a thirst for learning
  • Deep understanding of cloud and microservice architecture and the JVM
  • Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
  • Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
  • Experience with cloud computing technologies (AWS preferred)



Bread puns are encouraged but not required

APT Portfolio

Posted by Ankita Pachauri
Delhi, Gurugram, Bengaluru (Bangalore)
10 - 15 yrs
₹50L - ₹70L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+13 more

A.P.T Portfolio is a high-frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.


As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:

  • Private Cloud - Design & maintain a high-performance, reliable network architecture to support HPC applications.
  • Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
  • Security - Implement best security practices and a data isolation policy between different divisions internally.
  • Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
  • Storage Solution - Optimize storage solutions like NetApp, EMC, and Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
  • NFS - Implement and optimize the latest version of NFS for our use case.
  • Public Cloud - Drive AWS/Google Cloud utilization in the firm to increase efficiency, improve collaboration, and reduce cost. Maintain the environment for our existing use cases, and further explore potential areas of using public cloud within the firm.
  • Backups - Identify and automate backup of all crucial data/binaries/code etc. in a secured manner, at such intervals as warranted by the use case. Ensure that recovery from backup is tested and seamless.
  • Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
  • Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
  • Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
  • Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
  • Audit access logs on devices. Use third-party tools to put in place a monitoring mechanism for early detection of any suspicious activity.
  • Maintain all third-party tools used for development and collaboration - this includes maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo.


Qualifications 

  • Bachelor's or Master's degree, preferably in CSE/IT
  • 10+ years of relevant experience in a sysadmin function
  • Strong knowledge of IT infrastructure, Linux, networking, and grid computing
  • Strong grasp of automation & data management tools
  • Proficient in scripting languages, including Python


Desirables

  • Professional attitude; a cooperative, mature approach to work; focused, structured, and well-considered; strong troubleshooting skills
  • Exhibits a high level of individual initiative and ownership; collaborates effectively with other team members

 

APT Portfolio is an equal opportunity employer

Pramata Knowledge Solutions
Posted by Seena Narayanan
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹16L / yr
DevOps
Automation
Programming
Linux/Unix
Software deployment
+7 more
Job Title: DevOps Engineer
Work Experience: 3-7 years
Qualification: B.E / M.Tech
Location: Bangalore, India

About Pramata

Pramata's unique, industry-proven offering combines the digitization of critical customer data currently locked in unstructured and obscure sources, then converts that data into high-quality, actionable information accessible through one or multiple applications via the Pramata cloud-based customer digitization platform. Pramata's customers are some of the largest companies in the world, including CenturyLink, Comcast Business, FICO, HPE, Microsoft, NCR, Novelis, and Truven Health IBM. Pramata has helped these companies and more find millions of dollars in revenue, ensure regulatory and pricing compliance, and enable risk identification and management across their customer, partner, and even supplier bases. Pramata is headquartered near San Francisco, California and has its Product Engineering and Solutions Delivery Center in Bangalore, India.

How Pramata Works

Pramata extracts essential intelligence about customer relationships from complex, negotiated contracts, simplifies it from legalese into plain English, synthesizes it with data from CRM, CLM, billing, and other systems, and delivers it in the context of a particular user's role and responsibilities. This is done through Pramata's unique Digitization-as-a-Service (DaaS) process, which transforms unstructured and diverse data into accurate, timely, and meaningful digital information stored in the Pramata Digital Intelligence Hub. The Hub keeps the information centralized as one single, shared source of truth while ensuring that this data remains consistent, accessible, and highly secure.

The Opportunity - What You Get to Do

You will be instrumental in bringing automation to development and testing pipelines, release management, configuration management, environment & application management, and day-to-day support of development teams. You will manage the development of capabilities to achieve higher automation, quality, and performance in:

  • Automated build and deployment management, release management, on-demand environment configuration & automation, and configuration and change management
  • Production environment support - application monitoring, performance management, and production support of mission-critical applications, including application and system uptime and remote diagnostics
  • Security - ensure that the highly sensitive data from our customers is secure at all times
  • Instrumenting applications for performance baselines and to aid rapid diagnostics and resolution in case of system issues
  • High availability and disaster recovery - build and maintain systems designed to provide 99.9% uptime and ensure that disaster recovery mechanisms are in place
  • Automating provisioning and integration tasks as required to deploy new code
  • Monitoring - proactive steps to monitor complex interdependent systems to ensure that issues are identified and addressed in real time

Skills Required:

  • Excellent communicator with great interpersonal skills, driving clarity about intricate systems
  • Hands-on experience with application infrastructure technologies like Linux (RHEL), MySQL, Apache, Nginx, Phusion Passenger, Redis, etc.
  • Good understanding of software application builds, configuration management, and deployments
  • Strong scripting skills in Shell, Ruby, Python, Perl, etc., with a passion for automation
  • Comfortable with collaboration, open communication, and reaching across functional borders
  • Advanced problem-solving and task break-down ability

Additional Skills (Good to Have but Not Mandatory):

  • In-depth understanding of, and experience working with, any cloud platform (e.g., AWS, Azure, Google Cloud)
  • Experience using configuration management tools like Chef, Puppet, Capistrano, Ansible, etc.
  • Ability to work under pressure and solve problems using an analytical approach; decisive, fast-moving, with a positive attitude

Minimum Qualifications:

  • Bachelor's degree in Computer Science or a related field
  • Background in technology operations for Linux-based applications, with 2-4 years of experience in enterprise software
  • Strong programming skills in Python, Shell, or Java
  • Experience with one or more of the following configuration management tools: Ansible, Chef, Salt, Puppet
  • Experience with one or more of the following databases: PostgreSQL, MySQL, Oracle, RDS