11+ HAProxy Jobs in Bangalore (Bengaluru) | HAProxy Job openings in Bangalore (Bengaluru)
Apply to 11+ HAProxy Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest HAProxy Job opportunities across top companies like Google, Amazon & Adobe.

KaiOS is a mobile operating system for smart feature phones that stormed the scene to become the 3rd largest mobile OS globally (2nd in India ahead of iOS). We are on 100M+ devices in 100+ countries. We recently closed a Series B round with Cathay Innovation, Google and TCL.
What we are looking for:
- BE/B-Tech in Computer Science or related discipline
- 3+ years of commercial DevOps / infrastructure experience in a cross-functional, delivery-focused agile environment; previous experience as a software engineer and the ability to see into and reason about the internals of an application is a major plus
- Extensive experience designing, delivering and operating highly scalable resilient mission-critical backend services serving millions of users; experience operating and evolving multi-regional services a major plus
- Strong engineering skills; proficiency in programming languages
- Experience in Infrastructure-as-code tools, for example Terraform, CloudFormation
- Proven ability to execute on a technical and product roadmap
- Extensive experience with iterative development and delivery
- Extensive experience with observability for operating robust distributed systems – instrumentation, log shipping, alerting, etc. (see the metrics sketch after this list)
- Extensive experience with test and deployment automation, CI / CD
- Extensive experience with infrastructure-as-code and tooling, such as Terraform
- Extensive experience with AWS, including services such as ECS, Lambda, DynamoDB, Kinesis, EMR, Redshift, Elasticsearch, RDS, CloudWatch, CloudFront, IAM, etc.
- Strong knowledge of backend and web technology; infrastructure experience with at-scale modern data technology a major plus
- Deep expertise in server-side security concerns and best practices
- Fluency with at least one programming language. We mainly use Golang, Python and JavaScript so far
- Outstanding analytical thinking, problem solving skills and attention to detail
- Excellent verbal and written communication skills
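As a small illustration of the instrumentation and alerting work described in the list above, here is a minimal Python/boto3 sketch that publishes a custom CloudWatch metric and attaches an alarm to it; the namespace, metric name, dimensions and SNS topic are hypothetical, not values from this listing.

```python
import boto3

# Hypothetical namespace/metric/dimension names, for illustration only.
cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_data(
    Namespace="Backend/Auth",
    MetricData=[{
        "MetricName": "FailedLogins",
        "Dimensions": [{"Name": "Service", "Value": "auth-api"}],
        "Unit": "Count",
        "Value": 3,
    }],
)

# Alert when the metric breaches a threshold (SNS topic ARN is a placeholder).
cloudwatch.put_metric_alarm(
    AlarmName="auth-api-failed-logins",
    Namespace="Backend/Auth",
    MetricName="FailedLogins",
    Dimensions=[{"Name": "Service", "Value": "auth-api"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:oncall"],
)
```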
Requirements
Designation: DevOps Engineer
Location: Bangalore or Hong Kong
Experience: 4 to 7 Years
Notice period: 30 days or Less
General Description:
Owns all technical aspects of software development for assigned applications.
Participates in the design and development of systems & application programs.
Functions as a senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation.
Required Skills:
In-depth experience configuring and administering EKS clusters in AWS.
In-depth experience configuring **Datadog** in AWS environments, especially in **EKS**.
In-depth understanding of OpenTelemetry and configuration of **OpenTelemetry Collectors** (see the instrumentation sketch after this list).
In-depth knowledge of observability concepts and strong troubleshooting experience.
Experience in implementing comprehensive monitoring and logging solutions in AWS using **CloudWatch**.
Experience in **Terraform** and Infrastructure as code.
Experience in **Helm**
Strong scripting skills in Shell and/or Python.
Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.
Must have a good understanding of cloud concepts (Storage /compute/network).
Experience collaborating with several cross-functional teams to architect observability pipelines for GCP services such as GKE, Cloud Run, BigQuery, etc.
Experience with Git and GitHub.
Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
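A minimal, illustrative sketch of the OpenTelemetry configuration work referenced in the list above: a Python service instrumented with the OpenTelemetry SDK, exporting spans over OTLP to a Collector. The Collector endpoint and service name are assumptions, not values from this listing.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Assumed in-cluster Collector Service (e.g. deployed to an "observability" namespace).
exporter = OTLPSpanExporter(endpoint="otel-collector.observability:4317", insecure=True)

provider = TracerProvider(resource=Resource.create({"service.name": "payments-api"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order"):
    pass  # application work; spans are batched and shipped to the Collector
```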
Responsibilities
Provisioning and de-provisioning AWS accounts for internal customers
Work alongside systems and development teams to support the transition and operation of client websites/applications in and out of AWS.
Deploying, managing, and operating AWS environments
Identifying appropriate use of AWS operational best practices
Estimating AWS costs and identifying operational cost control mechanisms
Keep technical documentation up to date
Proactively keep up to date on AWS services and developments
Create (where appropriate) automation to streamline provisioning and de-provisioning processes (see the account-provisioning sketch after this list)
Lead certain data/service migration projects
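As a rough sketch of automating the account provisioning referenced in the responsibilities above, a Python/boto3 example using AWS Organizations; the account name and email are placeholders, and because CreateAccount is asynchronous the request status is polled afterwards.

```python
import boto3

org = boto3.client("organizations")

# Placeholder account details for an internal customer.
resp = org.create_account(
    Email="team-sandbox@example.com",
    AccountName="team-sandbox",
    IamUserAccessToBilling="DENY",
)

# CreateAccount is asynchronous; check the request state before handing over.
request_id = resp["CreateAccountStatus"]["Id"]
status = org.describe_create_account_status(CreateAccountRequestId=request_id)
print(status["CreateAccountStatus"]["State"])  # IN_PROGRESS / SUCCEEDED / FAILED
```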
Job Requirements
Experience provisioning, operating, and maintaining systems running on AWS
Experience with Azure/AWS.
Capabilities to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project
Experience with application/data migration to/from AWS
Experience with NGINX and the HTTP protocol.
Experience with configuration and management software such as Git
Strong analytical and problem-solving skills
Deployment experience using common AWS technologies such as VPC, regionally distributed EC2 instances, Docker, and more (see the EC2 sketch after this list).
Ability to work in a collaborative environment
Detail-oriented, strong work ethic and high standard of excellence
A fast learner and achiever who sets high personal goals
Must be able to work on multiple projects and consistently meet project deadlines
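A small, illustrative Python/boto3 sketch of the VPC/EC2 deployment work referenced above: launch a tagged instance into an existing subnet. The AMI, subnet and security group IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Placeholder IDs; in practice these come from your VPC/network layer.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc12345def67890",
    SecurityGroupIds=["sg-0abc12345def67890"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "web-nginx-1"}],
    }],
)
print(resp["Instances"][0]["InstanceId"])
```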
About BootLabs
https://www.bootlabs.in/
-We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions.
-We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
-With a product mindset, we enable start-ups and enterprises with cloud transformation, cloud migration, end-to-end automation and managed cloud services.
-We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
-We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.
Technical Skills:
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data and workload management.
- AWS
Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elasticsearch
Workload: EC2, EKS, Lambda, etc.
- Azure
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, Virtual Machines, Azure Functions
- GCP
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating and configuration.
• Kubernetes (EKS/AKS/GKE) or Ansible experience, covering basics like pods, deployments, networking and service mesh. Has used a package manager like Helm.
• Scripting experience (Bash/Python), automation in pipelines when required, system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines and version the code (see the Pulumi sketch after this section).
Optional:
• Experience in any programming language is not required but is appreciated.
• Good experience in GIT, SVN or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure and code.
• Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
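Since the list above mentions Pulumi alongside Terraform and CloudFormation, here is a minimal Pulumi (Python) infrastructure-as-code sketch; the bucket name and tags are illustrative, and the stack would be applied with `pulumi up` from a versioned repo or pipeline.

```python
import pulumi
import pulumi_aws as aws

# Illustrative S3 bucket for pipeline artifacts; names and tags are placeholders.
artifacts = aws.s3.Bucket(
    "pipeline-artifacts",
    acl="private",
    tags={"managed-by": "pulumi", "team": "platform"},
)

pulumi.export("artifacts_bucket", artifacts.id)
```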
The DevOps Engineer's core responsibilities include automated configuration and management of infrastructure, and continuous integration and delivery of distributed systems at scale in a hybrid environment.
Must-Have:
● You have 4-10 years of experience in DevOps
● You have experience in managing IT infrastructure at scale
● You have experience in automating deployment of distributed systems and provisioning infrastructure at scale
● You have in-depth, hands-on experience with Linux and Linux-based systems, and Linux scripting
● You have experience with server hardware, networking and firewalls
● You have experience in source code management, configuration management, continuous integration, continuous testing, continuous monitoring
● You have experience with CI/CD and related tools
● You have experience with monitoring tools like ELK, Grafana, Prometheus (see the Prometheus sketch after this list)
● You have experience with containerization, container orchestration, management
● Have a penchant for solving complex and interesting problems.
● Worked in startup-like environments with high levels of ownership and commitment.
● BTech, MTech or Ph.D. in Computer Science or related Technical Discipline
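A minimal sketch of the monitoring instrumentation mentioned in the list above, using the Python prometheus_client library to expose metrics for Prometheus to scrape; the metric names and port are assumptions.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        with LATENCY.time():
            time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
        REQUESTS.labels(status="200").inc()
```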
Role Purpose:
As a DevOps engineer, you should be strong in both the Dev and Ops parts of DevOps. We are looking for someone who has a deep understanding of systems architecture, understands core CS concepts well, and is able to reason about system behaviour rather than merely working with the toolset of the day. We believe that only such a person will be able to set a compelling direction for the team and excite those around them.
If you are someone who fits the description above, you will find that the rewards are well worth the high bar. Being one of the early hires of the Bangalore office, you will have a significant impact on the culture and the team; you will work with a set of energetic and hungry peers who will challenge you, and you will have considerable international exposure and opportunity for impact across departments.
Responsibilities
- Deployment, management, and administration of web services in a public cloud environment
- Design and develop solutions for deploying highly secure, highly available, performant and scalable services in elastically provisioned environments
- Design and develop continuous integration and continuous deployment solutions from development through production
- Own all operational aspects of running web services including automation, monitoring and alerting, reliability and performance
- Have direct impact on running a business by thinking about innovative solutions to operational problems
- Drive solutions and communication for production impacting incidents
- Running technical projects and being responsible for project-level deliveries
- Partner well with engineering and business teams across continents
Required Qualifications
- Bachelor’s or advanced degree in Computer Science or closely related field
- 4 - 6 years of professional experience in DevOps, with at least 1-2 years in Linux / Unix
- Very strong in core CS concepts around operating systems, networks, and systems architecture including web services
- Strong scripting experience in Python and Bash
- Deep experience administering, running and deploying AWS based services
- Solid experience with Terraform, Packer and Docker or their equivalents
- Knowledge of security protocols and certificate infrastructure.
- Strong debugging, troubleshooting, and problem solving skills
- Broad experience with cloud-hosted applications including virtualization platforms, relational and non-relational data stores, reverse proxies, and orchestration platforms
- Curiosity, continuous learning and drive to continually raise the bar
- Strong partnering and communication skills
Preferred Qualifications
- Past experience as a senior developer or application architect strongly preferred.
- Experience building continuous integration and continuous deployment pipelines
- Experience with ZooKeeper, Consul, HAProxy, the ELK Stack, Kafka, PostgreSQL (see the Kafka sketch after this list).
- Experience working with, and preferably designing, a system compliant with a security framework (PCI DSS, ISO 27000, HIPAA, SOC 2, ...)
- Experience with AWS orchestration services such as ECS and EKS.
- Experience working with AWS ML pipeline services like AWS SageMaker
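As a small illustration of the Kafka item in the preferred qualifications above, a Python sketch using the kafka-python client to publish JSON events; the broker addresses and topic name are hypothetical.

```python
import json

from kafka import KafkaProducer  # kafka-python

# Hypothetical brokers and topic.
producer = KafkaProducer(
    bootstrap_servers=["kafka-1:9092", "kafka-2:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",  # wait for in-sync replicas before acknowledging
)

producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()
```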
Role : SRE
Experience : 4 - 8 Years
- Experience in building, deploying and operating cloud solutions on Kubernetes
- Strong expertise administering and scaling Kubernetes on bare metal; CKA preferred
- Expertise in K8s interfaces (CNI, CSI, CRI) and service meshes
- Hands-on experience in DevOps or automation development
- Demonstrable knowledge of TCP/IP, Linux operating system internals, filesystems, disk/storage technologies and storage protocols.
- Experience working with Helm Charts and building out Infrastructure As Code (IaC)
- Experience in writing software to automate orchestration tasks at scale; we commonly use Python, Go, and Shell scripting (see the Kubernetes-client sketch after this list)
- Knowledge of systems (Linux, GNU tooling), networking (OSI model, DNS, routing) and virtualization vs containerization
- Expertise in CI/CD tooling for cloud-based applications specifically Terraform / CloudFormation, Jenkins and Git
- Architected CNF Orchestration with Kubernetes
- Strong understanding of the principles of 12-factor apps and modern containerized microservices
- Plan for reliability by designing systems to work across our multi-region and multi-cloud environments
- Experience developing and using Application & Integration stacks/tools such as Kafka, Spring Cloud, Apache Camel, Kubernetes, Docker, Redis, Knative, and NoSQL
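A brief sketch of the kind of orchestration automation described in the list above, using the official Kubernetes Python client to flag deployments that are missing replicas; it assumes a kube-config (or in-cluster credentials) is available.

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Flag deployments whose available replicas lag behind the desired count.
for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    available = dep.status.available_replicas or 0
    if available < desired:
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: {available}/{desired} available")
```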
A.P.T Portfolio is a high-frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor or Nomad for processing and scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement security best practices and a data isolation policy between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize the latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backup of all crucial data/binaries/code etc. in a secure manner, at intervals warranted by the use case. Ensure that recovery from backup is tested and seamless (see the backup sketch after this list).
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test and roll out new operating systems for all production, simulation and desktop environments. Work closely with developers to highlight the new performance-enhancement capabilities of new versions.
- Configuration management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third party tools to put in a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - This shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, build tools like Jenkins/Bamboo, etc.
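As an illustration of the backup automation described above, a Python/boto3 sketch that ships an archive to S3 under a dated key and verifies the object landed; the bucket, key layout and local path are hypothetical.

```python
import datetime

import boto3

s3 = boto3.client("s3")
bucket = "apt-backups"  # hypothetical bucket
key = f"git/{datetime.date.today():%Y-%m-%d}/repos.tar.gz"

s3.upload_file("/var/backups/repos.tar.gz", bucket, key)

# Verify the object exists before marking the backup run successful.
head = s3.head_object(Bucket=bucket, Key=key)
print(f"backup stored: {key} ({head['ContentLength']} bytes)")
```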
Qualifications
- Bachelor's or Master's level degree, preferably in CSE/IT
- 10+ years of relevant experience in sys-admin function
- Must have strong knowledge of IT infrastructure, Linux, networking and grid computing.
- Must have a strong grasp of automation & data management tools.
- Proficient in scripting languages and Python
Desirables
- Professional attitude; a co-operative and mature approach to work; must be focused, structured and well-considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
Job Description:
Mandatory Skills:
Should have strong working experience with Cloud technologies like AWS and Azure.
Should have strong working experience with CI/CD tools like Jenkins and Rundeck.
Must have experience with configuration management tools like Ansible.
Must have working knowledge on tools like Terraform.
Must be good at Scripting Languages like shell scripting and python.
Should have expertise in DevOps practices and should have demonstrated the ability to apply that knowledge across diverse projects and teams.
Preferable skills:
Experience with tools like Docker, Kubernetes, Puppet, JIRA, GitLab and JFrog.
Experience in scripting languages like Groovy.
Experience with GCP
Summary & Responsibilities:
Write build pipelines and IaC (ARM templates, Terraform or CloudFormation); see the CloudFormation sketch after this list.
Develop Ansible playbooks to install and configure various products.
Implement Jenkins and Rundeck jobs (and pipelines).
Must be a self-starter and be able to work well in a fast paced, dynamic environment
Work independently and resolve issues with minimal supervision.
Strong desire to learn new technologies and techniques
Strong communication (written / verbal ) skills
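A minimal Python/boto3 sketch of driving CloudFormation from a pipeline job, as referenced in the responsibilities above: validate a template, create the stack and wait for completion. The template file and stack name are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="ap-south-1")

# Placeholder template; in a pipeline this would come from the versioned repo.
with open("network.yaml") as f:
    template_body = f.read()

cfn.validate_template(TemplateBody=template_body)
cfn.create_stack(
    StackName="demo-network",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cfn.get_waiter("stack_create_complete").wait(StackName="demo-network")
```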
Qualification:
Bachelor's degree in Computer Science or equivalent.
4+ years of experience in DevOps and AWS.
2+ years of experience in Python, Shell scripting and Azure.
We are a growth-oriented, dynamic, multi-national startup, so those who are looking for that startup excitement, dynamism and buzz are in the right place. Read on -
FrontM (http://www.frontm.com/) is an edge AI company with a platform that is redefining how businesses and people in remote and isolated environments (maritime, aviation, mining...) collaborate and drive smart decisions.
The successful candidate will lead the back-end architecture, working alongside the VP of Delivery, the CTO and the CEO.
The problem you will be working on:
- Take ownership of AWS cloud infrastructure
- Oversee tech ops with hands-on CI/CD and administration
- Develop Node.js and Java backend system procedures for stability, scale and performance
- Understand FrontM platform roadmap and contribute to planning strategic and tactical capabilities
- Integrate APIs and abstractions for complex requirements
Who you are:
- You are an experienced Cloud Architect and back end developer
- You have experience creating AWS serverless (Lambda), EC2 and MongoDB backends (see the Lambda sketch after this list)
- You have extensive CI/CD and DevOps experience
- You can take ownership of continuous server uptime, maintenance, stability and performance
- You can lead a team of backend developers and architects
- You are a die-hard problem solver and never-say-no person
- You have 10+ years experience
- You have a strong command of the English language
- You have the ability to initiate and lead teams working with senior management
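As a small illustration of the serverless backend work mentioned in the list above, a minimal AWS Lambda handler for an API Gateway proxy integration; it is written in Python for consistency with the other sketches on this page (the role itself mentions Node.js and Java), and the echo behaviour is purely illustrative.

```python
import json


def handler(event, context):
    """Minimal AWS Lambda entry point behind an API Gateway proxy integration."""
    body = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"echo": body}),
    }
```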
Additional benefits
- Generous pay package, flexible for the right candidate
- Career development and growth planning
- Entrepreneurial environment that nurtures and promotes innovation
- Multi-national team with an enjoyable culture
We'd love to talk to you if you find this interesting and would like to join us on our exciting journey.
