ADFS Jobs in Bangalore (Bengaluru)


Apply to 11+ ADFS Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest ADFS Job opportunities across top companies like Google, Amazon & Adobe.

EZEU (OPC) India Pvt Ltd
Bengaluru (Bangalore), Pune
10 - 15 yrs
₹20L - ₹40L / yr
Windows Azure
AZ-300
AZ-301
Azure
Active Directory
+4 more
AZ-300 / AZ-301 certification is a must.

Azure, Azure AD, ADFS, Azure AD Connect, Microsoft Identity management

Azure, Architecture, solution designing, Subscription Design

Reqroots
7 recruiters
Posted by Dhanalakshmi D
Bengaluru (Bangalore)
4 - 6 yrs
₹10L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

We are looking for a Sr. Software Engineer (DevOps) for a reputed client in Bangalore; this is a permanent role.

Experience: 4+ years

Responsibilities:

• As part of a team, you will design, develop, and maintain scalable multi-cloud DevOps blueprints.

• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering and legacy application modernization.

• Continuously improve the CI/CD pipeline, tools, processes, procedures, and systems relating to developer productivity.

• Collaborate continuously with the product development teams to implement the CI/CD pipeline.

• Contribute subject-matter expertise on developer productivity, DevOps, and infrastructure automation best practices.


Mandatory Skills:

• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.

• Strong scripting skills (Java or Python) are a must.

• Experience with automation tools such as Ansible, Chef, Puppet etc.

• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle

• Hands-on working experience in developing or deploying microservices is a must.

• Hands-on working experience with at least one of the popular cloud infrastructures such as AWS / Azure / GCP / Red Hat OpenStack is a must.

• Knowledge about microservices hosted in leading cloud environments

• Experience with containerizing applications (Docker preferred) is a must

• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.

• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software


Desirable Skills:

• Experience working with Secret management services such as HashiCorp Vault is desirable.

• Experience working with Identity and access management services such as Okta, Cognito is desirable.

• Experience with monitoring systems such as Prometheus, Grafana is desirable.


Educational Qualifications and Experience:

• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)

• 4 to 6 years of hands-on experience in server-side application development & DevOps

EnterpriseMinds
2 recruiters
Posted by phani kalyan
Bengaluru (Bangalore)
7 - 9 yrs
₹10L - ₹35L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+5 more
  • Candidates should have good platform experience on Azure with Terraform.
  • The DevOps engineer needs to help developers create the pipelines and K8s deployment manifests.
  • Good to have experience migrating data from AWS to Azure.
  • Manage and automate infrastructure using Terraform. Jenkins is the key CI/CD tool we use, and it will be used to run these Terraform workflows.
  • VMs are to be provisioned and managed on Azure Cloud.
  • Good hands-on experience with networking on the cloud is required.
  • Ability to set up databases on VMs as well as managed DBs; cloud-hosted microservices must be properly set up to communicate with the DB services.
  • Kubernetes, Storage, Key Vault, networking (load balancing and routing), and VMs are the key areas of infrastructure expertise, and are essential.
  • The requirement is to administer the Kubernetes cluster end to end (application deployment, managing namespaces, load balancing, policy setup, using blue-green/canary deployment models, etc.).
  • Experience in AWS is desirable.
  • Python experience is optional; however, PowerShell is mandatory.
  • Know-how on the use of GitHub.
  • Administration of Azure Kubernetes Service.
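As a hedged sketch of the K8s deployment-manifest work described above — the app name and image tag here are hypothetical, not from the posting — a minimal Deployment can be generated programmatically before a pipeline applies it:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    In a real pipeline this would be serialized to YAML/JSON and applied
    by a CI/CD stage (e.g. Jenkins running kubectl or Terraform).
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels,
            # or the apiserver rejects the manifest.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{"name": name, "image": image}],
                },
            },
        },
    }

manifest = deployment_manifest("demo-app", "demo-app:1.0.0")
print(json.dumps(manifest, indent=2))
```

Keeping the manifest as data like this makes it easy for a pipeline to template the image tag per environment before applying.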
Toast
Posted by Rahul Jain
Remote, Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+4 more

Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.


At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.


About this roll* (Responsibilities) 

  • Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
  • Partner with development teams to improve services through rigorous testing and release procedures
  • Participate in system design consulting, platform management, and capacity planning
  • Create sustainable systems and services through automation and uplift
  • Balance feature development speed and reliability with well-defined service level objectives
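The last point — balancing delivery speed against reliability via service level objectives — is commonly made concrete with an error budget. As an illustrative sketch (not part of the posting), the allowed downtime for an availability target can be computed directly:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for an availability SLO over a window.

    slo: availability target as a fraction, e.g. 0.999 for "three nines".
    """
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

# A 99.9% SLO over a 30-day window allows about 43.2 minutes of downtime.
print(error_budget_minutes(0.999))
```

Once the budget for the window is spent, the team shifts effort from feature work to reliability work until the budget recovers.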


Troubleshooting and Supporting Escalations:

  • Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
  • Implement strategies to increase system reliability and performance through on-call rotation and process optimization
  • Perform blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again


Do you have the right ingredients? (Requirements)


  • Extensive industry experience, with at least 7 years in SRE and/or DevOps roles
  • Polyglot technologist/generalist with a thirst for learning
  • Deep understanding of cloud and microservice architecture and the JVM
  • Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
  • Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
  • Experience with cloud computing technologies (AWS preferred)



Bread puns are encouraged but not required

HappyFox
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+12 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, visit https://www.happyfox.com/

 

Responsibilities:

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Manage and patch servers running Unix-based operating systems like Ubuntu Linux.
  • Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
  • Implement consistent observability, deployment and IaC setups.
  • Patch production systems to fix security/performance issues.
  • Actively respond to escalations/incidents in the production environment from customers or the support team.
  • Mentor other infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
  • Build and manage development infrastructure and CI/CD pipelines for our teams to ship and test code faster.
  • Participate in infrastructure security audits.
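Much of the escalation-response work above comes down to small automation scripts of the kind this posting asks for. As a hypothetical sketch (the helper and retry counts are illustrative, not HappyFox's actual tooling), a health check with exponential backoff might look like:

```python
import time
from typing import Callable

def check_with_backoff(check: Callable[[], bool],
                       retries: int = 4,
                       base_delay: float = 1.0) -> bool:
    """Run a health check, retrying with exponential backoff (1s, 2s, 4s, ...).

    Returns True as soon as the check passes, False once all retries fail.
    """
    for attempt in range(retries):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))
    return False

# Example: a fake check that only succeeds on the third attempt.
calls = {"n": 0}
def flaky() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

print(check_with_backoff(flaky, retries=4, base_delay=0.01))  # True
```

Exponential backoff keeps a flapping service from being hammered by the monitor while it recovers.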

 

Requirements:

  • At least 5 years of experience in handling/building Production environments in AWS.
  • At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
  • Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash, etc.
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK.
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly.
  • Bonus points if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.

 

Acceldata
5 recruiters
Posted by Richa Kukar
Bengaluru (Bangalore)
3 - 10 yrs
₹15L - ₹50L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+1 more

Job Description :

Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data pipelines at Petabyte scale. Our customers include a Fortune 500 company, one of Asia's largest telecom companies, and a unicorn fintech startup. We are lean, hungry, customer-obsessed, and growing fast. Our Engineering team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.

Roles & responsibilities:

  • Champion engineering and operational excellence.
  • Establish a solid infrastructure framework and excellent development and deployment processes.
  • Provide technical guidance to both your team members and your peers from the development team.
  • Work closely with the development teams to gather system requirements, new service proposals and large system improvements, and come up with infrastructure architecture leading to stable, well-monitored, performant and secure systems.
  • Be part of and help create a positive work environment based on accountability.
  • Communicate across functions and drive engineering initiatives.
  • Initiate cross team collaboration with product development teams to develop high quality, polished products and services.

Must haves:

  • 5+ years of professional experience developing and launching software products on the cloud.
  • Basic understanding of Java/Go programming.
  • Good understanding of container technologies/orchestration platforms (e.g., Docker, Kubernetes).
  • Deep understanding of AWS or any cloud.
  • Good understanding of data stores like Postgres, Redis, Kafka, and Elasticsearch.
  • Good understanding of operating systems.
  • Strong technical background with a track record of individual technical accomplishments.
  • Ability to handle multiple competing priorities in a fast-paced environment.
  • Ability to establish credibility with smart engineers quickly.
  • Most importantly, the ability and urge to learn new things.
  • B.Tech/M.Tech in Computer Science or a related technical field.

Good to Have:

  • Hands-on knowledge of Configuration Management and Deployment tools like – Ansible, Terraform etc.
  • Proficient in scripting, and Git and Git workflows
  • Experience in developing Continuous Integration/ Continuous Delivery pipelines
  • Knowledge of Big Data systems.
Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
skill iconDocker
skill iconKubernetes
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
+3 more
Job Description:

About BootLabs

https://www.bootlabs.in/

-We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions. 
-We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
-With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation and managed cloud services.
-We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
-We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:


Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.

  • AWS
      Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
      Data: RDS, DynamoDB, Elasticsearch
      Workload: EC2, EKS, Lambda, etc.
  • Azure
      Networking: VNET, VNET Peering
      Data: Azure MySQL, Azure MSSQL, etc.
      Workload: AKS, Virtual Machines, Azure Functions
  • GCP
      Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
      Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
      Workload: GKE, Instances, App Engine, Batch, etc.

Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.
Kubernetes experience (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking and service mesh. Experience using a package manager like Helm.
Scripting experience (Bash/Python); automation in pipelines when required; system services.
Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines and version the code.
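Versioning infrastructure modules, as mentioned above, usually means cutting semantic-version tags from the pipeline. A minimal, hypothetical helper (not part of the posting) for bumping a "major.minor.patch" string:

```python
def bump(version: str, part: str = "patch") -> str:
    """Bump one component of a semantic version string ("major.minor.patch").

    Bumping a component resets all lower-order components to zero.
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

print(bump("1.4.2"))           # 1.4.3
print(bump("1.4.2", "minor"))  # 1.5.0
print(bump("1.4.2", "major"))  # 2.0.0
```

A CI job would typically run a helper like this on merge, tag the repo, and publish the module under the new version.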

Optional:

Experience in any programming language is not required but is appreciated.
Good experience in Git, SVN or any other code management tool is required.
DevSecOps tools (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
Acqueon Technology
3 recruiters
Posted by Rishav Rahul
Bengaluru (Bangalore)
8 - 12 yrs
₹25L - ₹35L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+2 more
  • Design, Develop, deploy, and run operations of infrastructure services in the Acqueon AWS cloud environment
  • Manage uptime of Infra & SaaS Application
  • Implement application performance monitoring to ensure platform uptime and performance
  • Building scripts for operational automation and incident response
  • Handle schedule and processes surrounding cloud application deployment
  • Define, measure, and meet key operational metrics including performance, incidents and chronic problems, capacity, and availability
  • Lead the deployment, monitoring, maintenance, and support of operating systems (Windows, Linux)
  • Build out lifecycle processes to mitigate risk and ensure platforms remain current, in accordance with industry standard methodologies
  • Run incident resolution within the environment, facilitating teamwork with other departments as required
  • Automate the deployment of new software to cloud environment in coordination with DevOps engineers
  • Work closely with Presales, understand customer requirement to deploy in Production
  • Lead and mentor a team of operations engineers
  • Drive the strategy to evolve and modernize existing tools and processes to enable highly secure and scalable operations
  • AWS infrastructure management, provisioning, cost management and planning
  • Prepare RCA incident reports for internal and external customers
  • Participate in product engineering meetings to ensure product features and patches comply with cloud deployment standards
  • Troubleshoot and analyse performance issues and customer reported incidents working to restore services within the SLA
  • Produce monthly SLA performance reports
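The monthly SLA reporting duty above reduces to simple arithmetic: availability is uptime over total time for the reporting window. A hypothetical sketch (not Acqueon's actual reporting code):

```python
def monthly_availability(downtime_minutes: float, days: int = 30) -> float:
    """Availability percentage for a month given total incident downtime."""
    total = days * 24 * 60  # minutes in the reporting window
    return 100.0 * (total - downtime_minutes) / total

# 90 minutes of downtime in a 30-day month is roughly 99.79% availability.
print(round(monthly_availability(90), 2))  # 99.79
```

A real report would sum downtime per service from incident records before applying this formula and comparing against the contracted SLA.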

As a Cloud Operations Manager at Acqueon, you will need:

  • 8 years’ progressive experience managing IT infrastructure and global cloud environments such as AWS, GCP (must)
  • 3-5 years management experience leading a Cloud Operations / Site Reliability / Production Engineering team working with globally distributed teams in a fast-paced environment
  • 3-5 years' experience in IaC (Terraform, K8s)
  • 3+ years end-to-end incident management experience
  • Experience with communicating and presenting to all stakeholders
  • Experience with Cloud Security compliance and audits
  • Detail-oriented. The ideal candidate is one who naturally digs as deep as they need to understand the why
  • Knowledge on GCP will be added advantage
  • Manage and monitor customer instances for uptime and reliability
  • Staff scheduling and planning to ensure 24x7x365 coverage for cloud operations
  • Customer facing, excellent communication skills, team management, troubleshooting
Banyan Data Services
1 recruiter
Posted by Sathish Kumar
Bengaluru (Bangalore)
2 - 10 yrs
₹5L - ₹15L / yr
Java
Python
Spark
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+3 more

Cloud Software Engineer

Notice Period: 45 days / Immediate Joining

 

Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.

 

We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer, addressing next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software as a service, and cloud services to create a niche in the market.

 

Roles and Responsibilities

· Work on a wide variety of engineering projects including data visualization, web services, data engineering, web portals, SDKs, and integrations in numerous languages, frameworks, and cloud platforms

· Apply continuous delivery practices to deliver high-quality software and value as early as possible.

· Work in collaborative teams to build new experiences

· Participate in the entire cycle of software consulting and delivery from ideation to deployment

· Integrating multiple software products across cloud and hybrid environments

· Developing processes and procedures for software applications migration to the cloud, as well as managed services in the cloud

· Migrating existing on-premises software applications to cloud leveraging a structured method and best practices

 

Desired Candidate Profile: *** freshers can also apply ***

 

· 2+ years of experience with one or more development languages such as Java, Python, or Spark.

· 1+ years of experience with private/public/hybrid cloud model design, implementation, orchestration, and support.

· Certification or completed training in any one of the cloud environments like AWS, GCP, Azure, Oracle Cloud, or DigitalOcean.

· Strong problem-solvers who are comfortable in unfamiliar situations, and can view challenges through multiple perspectives

· Driven to develop technical skills for oneself and team-mates

· Hands-on experience with cloud computing and/or traditional enterprise datacentre technologies, i.e., network, compute, storage, and virtualization.

· Possess at least one cloud-related certification from AWS, Azure, or equivalent

· Ability to write high-quality, well-tested code and comfort with Object-Oriented or functional programming patterns

· Past experience quickly learning new languages and frameworks

· Ability to work with a high degree of autonomy and self-direction

www.banyandata.com

Remote only
3 - 8 yrs
₹7L - ₹16L / yr
DevOps
Windows Azure
C++
Node.js
Docker
+1 more
This is a very interesting job. Our ~30-year-old company seeks a generalist: somebody who can take on a variety of tasks. For example: updating our state-of-the-art logistics systems, moving more of our infrastructure to Azure, and helping our Fortune 50 clients get the best value from our software.
Dunya Labs
2 recruiters
Posted by Muralidhar BS
Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹20L / yr
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
Agile/Scrum
DevOps Specialist

Dunya Labs is a deep tech product company currently focused on building infrastructure, developer tooling and middleware for deploying scalable blockchain applications. We combine a theoretical research team with a product team to lead cutting-edge developments in the blockchain space. We are seeking a DevOps Specialist to design and support our infrastructure environments.

Key Responsibilities:

  • Support our continuous integration processes that run on various platforms
  • Design entire application environments that can be fully automated or replicated, including network, compute, and data stores
  • Develop solutions by working with product managers, scrum masters, architects, developers and business stakeholders
  • Create and enhance Continuous Integration automation across multiple platforms including Java, Node.js, and Swift
  • Create and enhance Continuous Deployment automation built on Docker and Kubernetes
  • Create and enhance dynamic monitoring and alerting solutions using industry-leading services
  • Develop automation to ensure security across a geographically dispersed hosting environment
  • Leverage new technology paradigms (e.g., serverless, containers, microservices)

Qualifications:

  • Bachelor's or Master's degree in Information Systems, Information Technology, Computer Science or Engineering, or equivalent experience
  • Good exposure to Agile software development and DevOps practices such as Infrastructure as Code (IaC), Continuous Integration and automated deployment
  • Expertise with Continuous Integration and Continuous Delivery (CI/CD)
  • Architecting, designing and developing applications on PCF
  • Designing and building application and serverless technologies
  • Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
  • Strong communication and analytical/problem-solving skills; should be a team player

Experience:

  • Managing large production environments in Cloud & Operational Support
  • Automating infrastructure deployment, management and monitoring
  • Hands-on with DevOps technologies
  • Troubleshooting and problem solving in infrastructure issues

Skills and Competencies:

  • Linux/Unix Administration
  • Infrastructure automation with Chef/Puppet/Ansible/Terraform
  • Cloud operations & services: AWS/GCP/Azure (AWS preferable)
  • Networking: Unix networking and understanding of cloud networking (AWS VPC, NAT, firewalls, subnets; TCP/IP and HTTP(S) protocols)
  • Source code control: GitHub / GitLab
  • CI/CD: Jenkins
  • Container technologies: Docker and Kubernetes
  • Monitoring tools: Nagios/CloudWatch/Prometheus
  • SQL and NoSQL databases: MySQL/PostgreSQL and MongoDB
  • Scripting through Python/Ruby/Shell
  • Software: Apache Web Server, Tomcat, Nginx, Kafka, Zookeeper etc.
  • Security: HTTPS/TLS, certificates, digital signatures, VPN, firewalls, AWS Security Groups, IAM, DMZ architecture

Dunya Labs is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis as protected by applicable law.