11+ Load balancing Jobs in Chennai | Load balancing Job openings in Chennai

What we are looking for
Work closely with product & engineering groups to identify and document
infrastructure requirements.
Design infrastructure solutions balancing requirements, operational
constraints and architecture guidelines.
Implement infrastructure including network connectivity, virtual machines
and monitoring.
Implement and follow security guidelines, both policy and technical, to
protect our customers.
Resolve incidents as escalated from monitoring solutions and lower tiers.
Identify root cause for issues and develop long term solutions to fix recurring
issues.
Automate recurring tasks to increase velocity and quality.
Partner with the engineering team to build software tolerance for
infrastructure failure or issues.
Research emerging technologies, trends and methodologies and enhance
existing systems and processes.
Qualifications
Master's/Bachelor's degree in Computer Science, Computer Engineering,
Electrical Engineering, or a related technical field, and two years of experience
in software/systems or a related area.
5+ years overall experience.
Work experience must have included:
Proven track record of deploying, configuring and maintaining Ubuntu server
systems on-premises and in the cloud.
Minimum of 4 years’ experience designing, implementing and troubleshooting
TCP/IP networks, VPN, Load Balancers & Firewalls.
Minimum 3 years of experience working in public clouds like AWS & Azure.
Hands-on experience with any of the configuration management tools like Ansible,
Chef & Puppet.
Strong in performing production operation activities.
Experience with container & container orchestration tools like Kubernetes and Docker
Swarm is a plus.
Good with source code management tools like Bitbucket, Git.
Configuring and utilizing monitoring and alerting tools.
Scripting to automate infrastructure and operational processes.
Hands on work to secure networks and systems.
Sound problem resolution, judgment, negotiating and decision making skills
Ability to manage and deliver multiple project phases at the same time
Strong analytical and organizational skills
Excellent written and verbal communication skills
Interview focus areas
Networks, systems, monitoring
AWS (EC2, S3, VPC)
Problem solving, scripting, network design, systems administration and
troubleshooting scenarios
Culture fit, agility, bias for action, ownership, communication
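The incident-resolution and root-cause duties above lend themselves to small automation scripts. A minimal Python sketch, assuming a hypothetical log-line format, that counts ERROR lines per service to point triage at the noisiest component:

```python
import re
from collections import Counter

# Hypothetical log line format: "2024-01-15T10:02:11 ERROR payment-svc Timeout calling upstream"
LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<level>\w+)\s+(?P<service>\S+)\s+(?P<message>.*)$")

def top_error_sources(lines, n=3):
    """Count ERROR lines per service so triage starts at the noisiest component."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("service")] += 1
    return counts.most_common(n)

sample = [
    "2024-01-15T10:02:11 ERROR payment-svc Timeout calling upstream",
    "2024-01-15T10:02:12 INFO auth-svc Login ok",
    "2024-01-15T10:02:13 ERROR payment-svc Timeout calling upstream",
    "2024-01-15T10:02:14 ERROR search-svc Index unavailable",
]
print(top_error_sources(sample))  # [('payment-svc', 2), ('search-svc', 1)]
```

In practice the regex would be adapted to whatever format the monitoring stack emits; the counting logic stays the same.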
Key Responsibilities:
- Build and Automation: Utilize Gradle for building and automating software projects. Ensure efficient and reliable build processes.
- Scripting: Develop and maintain scripts using Python and Shell scripting to automate tasks and improve workflow efficiency.
- CI/CD Tools: Implement and manage Continuous Integration and Continuous Deployment (CI/CD) pipelines using tools such as Harness, GitHub Actions, Jenkins, and other relevant technologies. Ensure seamless integration and delivery of code changes.
- Cloud Platforms: Demonstrate proficiency in working with cloud platforms including OpenShift, Azure, and Google Cloud Platform (GCP). Deploy, manage, and monitor applications in cloud environments.
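The CI/CD pipelines above share one core behavior: stages run in order, and a failing stage aborts the rest. A minimal Python sketch of that behavior (the stage names and shell commands are placeholders, not any specific tool's API):

```python
import subprocess

# Minimal sketch of a sequential CI pipeline: each stage is a shell command,
# and a failing stage stops the run, mirroring how Jenkins / GitHub Actions
# abort a job when a step exits non-zero.
def run_pipeline(stages):
    results = {}
    for name, cmd in stages:
        proc = subprocess.run(cmd, shell=True)
        results[name] = proc.returncode
        if proc.returncode != 0:
            break  # abort remaining stages, like a failed CI step
    return results

outcome = run_pipeline([
    ("lint", "true"),    # placeholder commands; real stages would run linters/tests
    ("test", "false"),   # 'false' exits 1, simulating a failing test stage
    ("deploy", "true"),  # never reached because 'test' failed
])
print(outcome)  # {'lint': 0, 'test': 1}
```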
Share Cv to
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three
- Configure, optimize, document, and support the infrastructure components of software products (which are hosted in colocated facilities and cloud services such as AWS)
- Design and build tools and frameworks that support deployment and management of platforms
- Design, build, and deliver cloud computing solutions, hosted services, and underlying software infrastructures
- Build core functionality of our cloud-based platform product, deliver secure, reliable services and construct third party integrations
- Assist in coaching application developers on proper DevOps techniques for building scalable applications in the microservices paradigm
- Foster collaboration with software product development and architecture teams to ensure releases are delivered with repeatable and auditable processes
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments
- Work independently across multiple platforms and applications to understand dependencies
- Evaluate new tools, technologies, and processes to improve speed, efficiency, and scalability of continuous integration environments
- Design and architect solutions for existing client-facing applications as they are moved into cloud environments such as AWS
- Competencies
- Full understanding of scripting and automated process management in languages such as Shell, Ruby and/or Python
- Working knowledge of SCM tools such as Git, GitHub, Bitbucket, etc.
- Working knowledge of Amazon Web Services and related APIs
- Ability to deliver and manage web or cloud-based services
- General familiarity with monitoring tools
- General familiarity with configuration/provisioning tools such as Terraform
- Experience
- Experience working within an Agile type environment
- 4+ years of experience with cloud-based provisioning (Azure, AWS, Google), monitoring, troubleshooting, and related DevOps technologies
- 4+ years of experience with containerization/orchestration technologies like Rancher, Docker and Kubernetes
DevOps Lead Engineer
We are seeking a skilled DevOps Lead Engineer with 8 to 10 years of experience who handles the entire DevOps lifecycle and is accountable for implementation of the process. A DevOps Lead Engineer is responsible for automating all the manual tasks for developing and deploying code and data to implement continuous deployment and continuous integration frameworks. They are also responsible for maintaining high availability of production and non-production work environments.
Essential Requirements (must have):
• Bachelor's degree, preferably in Engineering.
• Solid 5+ years of experience with AWS, DevOps, and related technologies
Skills Required:
Cloud Performance Engineering
• Performance scaling in a Micro-Services environment
• Horizontal scaling architecture
• Containerization (such as Docker) & Deployment
• Container Orchestration (such as Kubernetes) & Scaling
DevOps Automation
• End to end release automation.
• Solid experience in DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, Ansible, CFN etc.
• Solid experience in Infra Automation (Infrastructure as Code), Deployment, and Implementation.
• Candidates must possess experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.
• Strong scripting knowledge
• Strong analytical and problem-solving skills.
• Cloud and On-prem deployments
Infrastructure Design & Provisioning
• Infra provisioning.
• Infrastructure Sizing
• Infra Cost Optimization
• Infra security
• Infra monitoring & site reliability.
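Horizontal scaling decisions like those called for above usually reduce to a ratio of observed load to target load. A sketch using the ratio-based formula the Kubernetes Horizontal Pod Autoscaler documents, desired = ceil(current × usage/target), clamped to replica bounds (the function name and defaults are ours):

```python
import math

def desired_replicas(current_replicas, current_cpu_util, target_cpu_util,
                     min_replicas=1, max_replicas=10):
    """Horizontal-scaling sketch: scale replica count by the ratio of observed
    to target utilization, then clamp to the configured bounds."""
    desired = math.ceil(current_replicas * current_cpu_util / target_cpu_util)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(4, current_cpu_util=90, target_cpu_util=60))  # 6
print(desired_replicas(4, current_cpu_util=20, target_cpu_util=60))  # 2
```

A real autoscaler adds stabilization windows and tolerance bands around this core formula to avoid flapping.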
Job Responsibilities:
• Responsible for creating software deployment strategies that are essential for successful
deployment of software in the work environment and provide a stable environment for
quality delivery.
• The DevOps Lead Engineer is accountable for designing, building, configuring, and optimizing
automation systems that help to execute business web and data infrastructure platforms.
• The DevOps Lead Engineer is involved in creating technology infrastructure, automation tools,
and maintaining configuration management.
• The Lead DevOps Engineer oversees and leads the activities of the DevOps team. They are
accountable for conducting training sessions for the juniors in the team, mentoring, career
support. They are also answerable for the architecture and technical leadership of the complete
DevOps infrastructure.
As an MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and
DevOps practices within multi-disciplinary delivery teams
Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment
workflows used by data scientists (ML pipelines) and data engineers (Data pipelines)
Shape and support next generation technology that enables scaling ML products and platforms. Bring
expertise in cloud to enable ML use case development, including MLOps
Our Tech Stack-
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python,
Terraform, GitHub Actions, MLflow, Node.js, React, TypeScript amongst others in our projects
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP) with strong knowledge of cloud services integration, and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, Gitlab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and
container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and
libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g.,
deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tooling (e.g., experiment tracking,
model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
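As an illustration of the intermediate SQL level mentioned above, a self-contained example with Python's stdlib sqlite3 driver, combining a JOIN with GROUP BY aggregation (the table and column names are invented for illustration):

```python
import sqlite3

# In-memory database: models and their experiment runs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE runs (id INTEGER PRIMARY KEY, model_id INTEGER, accuracy REAL);
    CREATE TABLE models (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO models VALUES (1, 'baseline'), (2, 'tuned');
    INSERT INTO runs VALUES (1, 1, 0.82), (2, 1, 0.84), (3, 2, 0.91);
""")
# JOIN + GROUP BY: run count and mean accuracy per model, best first.
rows = conn.execute("""
    SELECT m.name, COUNT(r.id) AS n_runs, ROUND(AVG(r.accuracy), 2) AS avg_acc
    FROM models m JOIN runs r ON r.model_id = m.id
    GROUP BY m.name ORDER BY avg_acc DESC
""").fetchall()
print(rows)  # [('tuned', 1, 0.91), ('baseline', 2, 0.83)]
```

The same query shape carries over directly to MySQL or Postgres; only the connection setup changes.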

DevOps Architect at Altimetrik
Experience: 10-12+ years of relevant experience in DevOps
Locations: Bangalore, Chennai, Pune, Hyderabad, Jaipur.
Qualification:
• Bachelor's or advanced degree in Computer Science, Software Engineering or equivalent is required.
• Certifications in specific areas are desired
Technical Skillset: Skills Proficiency level
- Build tools (Ant or Maven) - Expert
- CI/CD tool (Jenkins or GitHub CI/CD) - Expert
- Cloud DevOps (AWS CodeBuild, CodeDeploy, CodePipeline etc.) or Azure DevOps - Expert
- Infrastructure as Code (Terraform, Helm charts etc.) - Expert
- Containerization (Docker, Docker Registry) - Expert
- Scripting (Linux) - Expert
- Cluster deployment (Kubernetes) & maintenance - Expert
- Programming (Java) - Intermediate
- Application types for DevOps (streaming like Spark, Kafka; big data like Hadoop etc.) - Expert
- Artifactory (JFrog) - Expert
- Monitoring & Reporting (Prometheus, Grafana, PagerDuty etc.) - Expert
- Ansible, MySQL, PostgreSQL - Intermediate
• Source Control (like Git, Bitbucket, Svn, VSTS etc)
• Continuous Integration (like Jenkins, Bamboo, VSTS )
• Infrastructure Automation (like Puppet, Chef, Ansible)
• Deployment Automation & Orchestration (like Jenkins, VSTS, Octopus Deploy)
• Container Concepts (Docker)
• Orchestration (Kubernetes, Mesos, Swarm)
• Cloud (like AWS, Azure, Google Cloud, OpenStack)
Roles and Responsibilities
• The DevOps architect should automate processes with the proper tools.
• Developing appropriate DevOps channels throughout the organization.
• Evaluating, implementing and streamlining DevOps practices.
• Establishing a continuous build environment to accelerate software deployment and development processes.
• Engineering general and effective processes.
• Helping operations and developer teams solve their problems.
• Supervising, examining and handling technical operations.
• Providing DevOps processes and operations.
• Capacity to handle teams with a leadership attitude.
• Must possess excellent automation skills and the ability to drive initiatives to automate processes.
• Building strong cross-functional leadership skills and working together with the operations and engineering teams to make sure that systems are scalable and secure.
• Excellent knowledge of software development and software testing methodologies along with configuration management practices in Unix and Linux-based environments.
• Possess sound knowledge of cloud-based environments.
• Experience in handling automated deployment CI/CD tools.
• Must possess excellent knowledge of infrastructure automation tools (Ansible, Chef, and Puppet).
• Hands-on experience working with Amazon Web Services (AWS).
• Must have strong expertise in operating Linux/Unix environments and scripting languages like Python, Perl, and Shell.
• Ability to review deployment and delivery pipelines i.e., implement initiatives to minimize chances of failure, identify bottlenecks and troubleshoot issues.
• Previous experience in implementing continuous delivery and DevOps solutions.
• Experience in designing and building solutions to move data and process it.
• Must possess expertise in any of the coding languages depending on the nature of the job.
• Experience with containers and container orchestration tools (AKS, EKS, OpenShift, Kubernetes, etc)
• Experience with version control systems a must (GIT an advantage)
• Belief in "Infrastructure as Code" (IaC), including experience with open-source tools such as Terraform
• Treats best practices for security as a requirement, not an afterthought
• Extensive experience with version control systems like GitLab and their use in release management, branching, merging, and integration strategies
• Experience working with Agile software development methodologies
• Proven ability to work on cross-functional Agile teams
• Mentor other engineers in best practices to improve their skills
• Creating suitable DevOps channels across the organization.
• Designing efficient practices.
• Delivering comprehensive best practices.
• Managing and reviewing technical operations.
• Ability to work independently and as part of a team.
• Exceptional communication skills, be knowledgeable about the latest industry trends, and highly innovative
The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
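Establishing an uptime SLA, as described above, starts with converting the percentage into a downtime budget for the measurement window; the alerting platform then tracks consumption against that budget. A small sketch of the arithmetic (the function name is ours, not a standard API):

```python
def downtime_budget_seconds(sla_percent, window_days=30):
    """Translate an uptime SLA into the downtime allowed over the window,
    i.e. the error budget that telemetry and alerting enforce."""
    window_seconds = window_days * 24 * 3600
    return window_seconds * (1 - sla_percent / 100)

# 99.9% uptime over a 30-day window allows ~43.2 minutes of downtime.
print(round(downtime_budget_seconds(99.9) / 60, 1))  # 43.2
```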
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working in HIPAA / HITRUST frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.

Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics
2-5 years of experience in any programming language (polyglot preferred) & system operations
• Awareness of DevOps & Agile methodologies
• Proficient in leveraging CI and CD tools to automate testing and deployment
• Experience working in an agile and fast-paced environment
• Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure)
• Cloud networking knowledge: should understand VPCs, NATs, and routers
• Contributions to open source are a plus
• Good written communication skills are a must; contributions to technical blogs / whitepapers will be an added advantage
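The VPC and routing knowledge called for above can be illustrated with Python's stdlib ipaddress module: carving a VPC CIDR into subnets and locating which subnet an instance's private IP belongs to (the 10.0.0.0/16 range is an example, not a real environment):

```python
import ipaddress

# Example VPC: carve 10.0.0.0/16 into 256 /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

def subnet_for(ip):
    """Return the /24 subnet containing this IP, or None if outside the VPC."""
    addr = ipaddress.ip_address(ip)
    for net in subnets:
        if addr in net:
            return str(net)
    return None

print(subnet_for("10.0.3.17"))    # 10.0.3.0/24
print(subnet_for("192.168.1.1"))  # None
```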
Requirements
You will make an ideal candidate if you have:
- Experience of building a range of services in a cloud service provider
- Expert understanding of DevOps principles and Infrastructure as Code concepts and techniques
- Strong understanding of CI/CD tools (Jenkins, Ansible, GitHub)
- Managed an infrastructure that involved 50+ hosts/networks
- 3+ years of Kubernetes experience & 5+ years of experience in native services such as Compute (virtual machines), Containers (AKS), Databases, DevOps, Identity, Storage & Security
- Experience in engineering solutions on a cloud foundation platform using Infrastructure as Code methods (e.g. Terraform)
- Security and compliance, e.g. IAM and cloud compliance/auditing/monitoring tools
- Customer/stakeholder focus; ability to build strong relationships with application teams, cross-functional IT and global/local IT teams
- Good leadership and teamwork skills - works collaboratively in an agile environment
- Operational effectiveness - delivers solutions that align to approved design patterns and security standards
- Excellent skills in at least one of the following: Python, Ruby, Java, JavaScript, Go, Node.js
- Experienced in full automation and configuration management
- A track record of constantly looking for ways to do things better and an excellent understanding of the mechanisms necessary to successfully implement change
- Set and achieved challenging short-, medium- and long-term goals which exceeded the standards in their field
- Excellent written and spoken communication skills; an ability to communicate with impact, ensuring complex information is articulated in a meaningful way to wide and varied audiences
- Built effective networks across business areas, developing relationships based on mutual trust and encouraging others to do the same
- A successful track record of delivering complex projects and/or programmes, utilizing appropriate techniques and tools to ensure and measure success
- A comprehensive understanding of risk management and proven experience of ensuring own/others' compliance with relevant regulatory processes

Essential Skills:
- Demonstrable cloud service provider experience - infrastructure build and configuration of a variety of services including compute, DevOps, databases, storage & security
- Demonstrable experience of Linux administration and scripting, preferably Red Hat
- Experience working with Continuous Integration (CI), Continuous Delivery (CD) and continuous testing tools
- Experience working within an Agile environment
- Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Node.js
- Server administration (either Linux or Windows)
- Automation scripting (using tools such as Terraform, Ansible etc.)
- Ability to quickly acquire new skills and tools

Required Skills:
- Linux & Windows Server Certification
One of our US-based clients is looking for a DevOps professional who can handle technical work as well as trainings for them in the US.
If you are hired, you will be sent to the US to work from there. The training to technical work ratio will be 70% to 30% respectively.
The company will sponsor the US visa.
If you are an experienced DevOps professional who has also delivered professional trainings, feel free to connect with us for more.
Implement integrations requested by customers
Deploy updates and fixes
Provide Level 2 technical support
Build tools to reduce occurrences of errors and improve customer experience
Develop software to integrate with internal back-end systems
Perform root cause analysis for production errors
Investigate and resolve technical issues
Develop scripts to automate visualization
Design procedures for system troubleshooting and maintenance
Multiple clouds [AWS/Azure/GCP] hands-on experience
Good experience with Docker implementation at scale.
Kubernetes implementation and orchestration.
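One common "tool to reduce occurrences of errors" from the responsibilities above is a retry wrapper with exponential backoff, so transient failures don't surface as customer-facing errors. A minimal sketch (the flaky operation and delay values are placeholders):

```python
import time

def retry_with_backoff(op, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Retry a flaky operation, doubling the delay between attempts."""
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except Exception:
            if attempt == attempts:
                raise  # out of retries; surface the error for root-cause analysis
            sleep(delay)
            delay *= 2  # exponential backoff between attempts

# Simulated transient failure: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, sleep=lambda _: None))  # prints: ok
```

Production versions typically add jitter to the delay and retry only on error types known to be transient.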
Your skills and experience should cover:
- 5+ years of experience developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch and WAF (Web Application Firewall).
- Amazon Web Services (AWS) Certified Developer - Associate is required; Amazon Web Services (AWS) DevOps Engineer - Professional is preferred.
- 5+ years of experience using one or more modern programming languages (Python, Node.js).
- Hands-on experience migrating data to the AWS cloud platform
- Experience with Scrum/Agile methodology.
- Good understanding of core AWS services, uses, and basic AWS architecture best practices (including security and scalability)
- Experience with AWS data storage tools.
- Experience configuring and implementing AWS tools such as CloudWatch, CloudTrail and direct system logs for monitoring.
- Experience working with Git, or similar tools.
- Ability to communicate and represent AWS recommendations and standards.

The following areas are highly advantageous:
- Experience with Docker
- Experience with PostgreSQL database