

Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub Actions/Jenkins, Kubernetes, Helm, and ArgoCD (a minimal deployment-automation sketch follows this list).
• Deploy and manage Kubernetes clusters on AWS.
• Configure and maintain Envoy Proxy and Cert-Manager to manage traffic routing and TLS certificates across application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in on-call rotation and provide support for production issues.
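As a rough illustration of the pipeline automation this role describes, here is a minimal Python sketch that wraps the Helm and kubectl CLIs to upgrade a release and wait for the rollout. It assumes the CLIs are installed and the current kube-context points at the target cluster; the release name, chart path, and namespace are hypothetical placeholders, not part of the posting.

```python
"""Minimal deployment-automation sketch (illustrative only)."""
import subprocess
import sys

RELEASE = "example-api"          # hypothetical Helm release
CHART = "./charts/example-api"   # hypothetical chart path
NAMESPACE = "staging"            # hypothetical namespace

def run(cmd: list[str]) -> None:
    """Run a CLI command, echoing it first and failing fast on errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy(image_tag: str) -> None:
    # Install or upgrade the release with the freshly built image tag.
    run([
        "helm", "upgrade", "--install", RELEASE, CHART,
        "--namespace", NAMESPACE, "--create-namespace",
        "--set", f"image.tag={image_tag}",
    ])
    # Block until the Deployment managed by the chart has rolled out.
    run([
        "kubectl", "rollout", "status", f"deployment/{RELEASE}",
        "--namespace", NAMESPACE, "--timeout", "300s",
    ])

if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "latest")
```

In a GitOps setup with ArgoCD, the same step would typically be replaced by committing the new tag to an environment repository and letting ArgoCD reconcile it.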
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering or a related field.
• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS.
• Experience with monitoring tools such as Datadog and ELK.
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
Kindly apply at https://www.wohlig.com/careers

Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning and that every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When you join Egnyte, you’re not just landing a new career; you become part of a team of Egnyters who are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best-in-class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data, and we observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work, from start to finish, are integral. Our engineers are part of the process from design to code, to test, to deployment and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and what project you are working on. The ideal candidate should be able to take a complex problem and execute it end to end, and to mentor and set higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable and high-performance solutions.
• Take part in large projects such as migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, and from monoliths to microservices.
• Proactively make our organization and technology better.
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge and mentor junior team members while continuing to learn and gain new skills.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge of and experience with deploying and maintaining Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment.
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions.
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience in Cloud migrations projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.

If this sounds like you, we’d love to speak with you.
Skills and Qualifications:
Strong experience with continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, TravisCI, or GitLab CI.
Proficiency in scripting languages such as Python, Bash, or Ruby.
Knowledge of infrastructure automation tools such as Ansible, Puppet, or Terraform.
Experience with cloud platforms such as AWS, Azure, or GCP.
Knowledge of container and orchestration tools such as Docker, Kubernetes, or OpenShift.
Experience with version control systems such as Git.
Familiarity with Agile methodologies and practices.
Understanding of networking concepts and principles.
Knowledge of database technologies such as MySQL, MongoDB, or PostgreSQL.
Good understanding of security and data protection principles.
Roles and responsibilities:
● Building and setting up new development tools and infrastructure
● Working on ways to automate and improve development and release processes
● Deploy updates and fixes
● Helping to ensure information security best practices
● Provide Level 2 technical support
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
At Egnyte we build and maintain our flagship software: a secure content platform used by companies like Red Bull and Yamaha.
We store, analyze, organize, and secure billions of files and petabytes of data with millions of users. We observe more than 1M API requests per minute on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our Engineers are part of the process from design to code, to test, to deployment, and back again for further iterations.
We have 300+ engineers spread across the US, Poland, and India.
You will be part of the Site Reliability Engineering Team. This role involves the design, scale, performance tuning, monitoring, and administration activities on the various databases (majority on MySQL).
Your day-to-day at Egnyte:
- Build, scale, and administer a large fleet of MySQL servers spread over multiple data centers with a focus on performance, scale, and high availability.
- Monitor and troubleshoot critical performance bottlenecks for MySQL databases before they cause downtime.
- Review and assess the impact of database schema design and topology changes prior to their implementation.
- Ensure that databases are secured, maintained, backed up, and highly available.
- Review stress-testing results and provide recommendations to development teams.
- Automate anomaly detection to surface databases experiencing failures, IOPS spikes, deadlocks, and other issues (a minimal sketch follows this list).
- Automate management tasks, streamline processes, and perform standard administrative functions.
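To make the anomaly-detection point concrete, here is a hedged Python sketch that polls a few SHOW GLOBAL STATUS counters with PyMySQL and flags hosts crossing simple thresholds. The hosts, credentials, counters chosen, and thresholds are hypothetical placeholders, not values from the posting.

```python
"""Anomaly-detection sketch for a MySQL fleet (illustrative only)."""
import pymysql

HOSTS = ["db1.example.internal", "db2.example.internal"]  # hypothetical fleet
THRESHOLDS = {"Threads_running": 100, "Innodb_row_lock_waits": 500}  # example limits

def fetch_status(host: str) -> dict[str, int]:
    """Return the numeric GLOBAL STATUS counters we care about."""
    conn = pymysql.connect(host=host, user="monitor", password="***", connect_timeout=5)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS")
            rows = dict(cur.fetchall())
        return {k: int(rows[k]) for k in THRESHOLDS if k in rows}
    finally:
        conn.close()

def scan_fleet() -> None:
    for host in HOSTS:
        try:
            status = fetch_status(host)
        except pymysql.MySQLError as exc:
            print(f"[ALERT] {host}: unreachable ({exc})")
            continue
        for counter, limit in THRESHOLDS.items():
            value = status.get(counter, 0)
            if value > limit:
                print(f"[ALERT] {host}: {counter}={value} exceeds {limit}")

if __name__ == "__main__":
    scan_fleet()
```

In practice the counters would be shipped to a time-series store and alerting handled there; the point is only to show the kind of per-host check that gets automated across the fleet.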
About you:
- Understanding of MySQL’s (5.7+) underlying storage engines
- Knowledge of Performance and scalability issues with MySQL
- Strong experience with MySQL HA using Orchestrator/ProxySQL/Consul/Pacemaker
- Experience with configuration management like Puppet/Ansible
- Knowledge of limitations in MySQL and their workarounds in contrast to other popular relational databases
- Automation experience with Python, Ruby, or Perl and SQL scripting
- Analytical skills necessary to troubleshoot errors and performance issues on a large array of MySQL clusters spread over multiple data centers.
Due to the nature of the role, it would be nice if you have also:
- Experience in other distributed systems like Redis, Elasticsearch, Memcached.
- Experience in managing a large fleet of database servers.
- Knowledge of HA and scalability issues with PostgreSQL
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Be responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning; a knack for benchmarking and optimization is expected.
● Hire, develop, and cultivate a high-performing and reliable cloud support team.
● Build and operate complex CI/CD pipelines at scale.
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking (a minimal Pub/Sub sketch follows this list).
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in line with company security policies and cloud security best practices.
● Ensure scaled database setup and monitoring with near-zero downtime.
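As a small illustration of working with the GCP services listed above, here is a hedged Python sketch that publishes a deployment event to a Pub/Sub topic with the official client library. The project ID and topic name are hypothetical placeholders; credentials are assumed to come from the standard Application Default Credentials flow.

```python
"""Pub/Sub publishing sketch (illustrative only)."""
from google.cloud import pubsub_v1

PROJECT_ID = "example-project"   # hypothetical GCP project
TOPIC_ID = "deployment-events"   # hypothetical topic

def publish_event(message: str) -> str:
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)
    # publish() returns a future; result() blocks until the server
    # acknowledges the message and returns its message ID.
    future = publisher.publish(topic_path, message.encode("utf-8"))
    return future.result()

if __name__ == "__main__":
    print("published message", publish_event("deploy finished: example-api v1.2.3"))
```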
Role – DevOps
Experience: 3–6 years
Roles & Responsibilities –
- 3-6 years of experience in deploying and managing highly scalable, fault-resilient systems
- Strong experience in container orchestration and server automation tools such as Kubernetes, Google Container Engine, Docker Swarm, Ansible, and Terraform
- Strong experience with Linux-based infrastructures, Linux/Unix administration, AWS, Google Cloud, and Azure
- Strong experience with data stores such as MySQL, Hadoop, Elasticsearch, Redis, Cassandra, and MongoDB.
- Knowledge of languages such as Java, JavaScript, Python, PHP, Groovy, and Bash.
- Experience in configuring CI/CD pipelines using Jenkins, GitLab CI, or Travis CI.
- Proficient in technologies such as Docker, Kafka, Raft, and Vagrant
- Experience in implementing queueing services such as RabbitMQ, Beanstalkd, and Amazon SQS (a minimal publishing sketch follows this list); knowledge of the Elastic Stack is a plus.
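To make the queueing-service requirement concrete, here is a hedged Python sketch that declares a durable queue and publishes a persistent message with the pika RabbitMQ client. The broker host, queue name, and message body are hypothetical placeholders.

```python
"""RabbitMQ publishing sketch (illustrative only)."""
import pika

def publish_task(body: str, queue: str = "tasks") -> None:
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    try:
        channel = connection.channel()
        # Durable queue so messages survive a broker restart.
        channel.queue_declare(queue=queue, durable=True)
        channel.basic_publish(
            exchange="",                 # default exchange routes by queue name
            routing_key=queue,
            body=body.encode("utf-8"),
            properties=pika.BasicProperties(delivery_mode=2),  # persistent message
        )
    finally:
        connection.close()

if __name__ == "__main__":
    publish_task("resize-image job-42")
```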
We are a global expert in cloud consulting and service management, focusing exclusively on the Cloud DevOps space. In short, we strive to be at the forefront of this era of digital disruption by being dynamic, agile, and cohesive in providing businesses the solutions they need to reach the next level. Our expert team of engineers, programmers, designers, and business development professionals is the foundation of our firm, fused with cutting-edge technology. Nimble IT Consulting is invested in research and analysis of current and upcoming trends, be it technology, business value, or user experience, and we dedicate our efforts tirelessly to be at the pinnacle of quality standards, devising solutions that industry leaders not only approve of and follow, but depend on. Read more about us: https://nimbleitconsulting.com
What we are looking for
A DevOps Engineer to join our team and provide consulting services to our clients. Below is the technology stack we are interested in:
Technical skills
- Expertise in implementing and managing DevOps CI/CD pipelines (using either Jenkins or Azure DevOps).
- At least one AWS or Azure certification.
- Terraform scripting.
- Hands-on experience with Git, source code management, and release management.
- Experience with DevOps automation tools; well versed in DevOps principles and Agile frameworks.
- Working knowledge of scripting using shell, Python, Gradle, YAML, Ansible, Puppet, or Chef.
- Working knowledge of build systems for various technologies, such as npm, Maven, etc.
- Experience with and a good understanding of cloud platforms such as AWS, Azure, or Google Cloud.
- Hands-on knowledge of Docker and Kubernetes is required.
- Strong troubleshooting skills with a proven ability to resolve complex technical issues. Experience working with ticketing tools (Jira and ServiceNow).
- A programming language such as Java, Go, or Node.js is nice to have.
- Work permit for the United Kingdom (Tier 2 visa); the total duration of the visa will be 5 years (first 2 years, then a 3-year extension).
- At the end of the 5 years you will be eligible for British citizenship by applying for Indefinite Leave to Remain in the UK.
- Learn new technologies - We won’t ever expect you to do the same thing day in, day out; we want to give you the chance to explore the latest techniques to solve challenging technical problems and help you become the best developer you can be.
- Join a growing agile team that is consistently delivering.
- Technical Development Program
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, colour, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation (a minimal health-check sketch follows the requirements below)
- Performing root cause analysis for production errors
Requirements:
- Proficient in Linux Commands line and troubleshooting.
- Proficient in AWS Services. Deployment, Monitoring and troubleshooting applications in AWS.
- Hands-on experience with CI tooling preferably with Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
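As a rough example of the Bash/Python automation mentioned in the responsibilities above, here is a hedged, stdlib-only Python sketch that probes a health endpoint behind the load balancer and scans the Nginx error log for recent upstream errors. The URL and log path are hypothetical placeholders.

```python
"""Post-deploy health-check sketch (illustrative only)."""
import sys
import urllib.error
import urllib.request
from pathlib import Path

HEALTH_URL = "https://app.example.com/healthz"   # hypothetical endpoint
ERROR_LOG = Path("/var/log/nginx/error.log")     # typical Nginx error log path

def check_health() -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.URLError as exc:
        print(f"health check failed: {exc}")
        return False

def recent_upstream_errors(limit: int = 5) -> list[str]:
    """Return the last few log lines mentioning upstream failures, if the log exists."""
    if not ERROR_LOG.exists():
        return []
    lines = ERROR_LOG.read_text(errors="replace").splitlines()
    return [line for line in lines if "upstream" in line][-limit:]

if __name__ == "__main__":
    healthy = check_health()
    for line in recent_upstream_errors():
        print("nginx:", line)
    sys.exit(0 if healthy else 1)
```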
Must Have:
Linux, CI/CD (Jenkins), AWS, scripting (Bash/shell, Python, Go), Nginx, Docker.
Good to have
Configuration management (Ansible or a similar tool), a logging tool (ELK or similar), a monitoring tool (Nagios or similar), IaC (Terraform, CloudFormation).
Exposure to development and implementation practices in a modern systems environment, together with experience working in a project team using industry methodologies, e.g. Agile, continuous delivery, etc.
- At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
- Strong understanding of how to secure AWS environments and meet compliance requirements (a minimal audit sketch follows this list)
- Experience using DevOps methodology and Infrastructure as Code
- Automation / CI/CD tools – Bitbucket Pipelines, Jenkins
- Infrastructure as code – Terraform, CloudFormation, etc.
- Strong experience deploying and managing infrastructure with Terraform
- Automated provisioning and configuration management – Ansible, Chef, Puppet
- Experience with Docker, GitHub, Jenkins, ELK and deploying applications on AWS
- Improve CI/CD processes, support software builds and CI/CD of the development departments
- Develop, maintain, and optimize automated deployment code for development, test, staging and production environments
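As a small, hedged example of the AWS security point referenced above, here is a boto3 sketch that flags security groups allowing inbound SSH from 0.0.0.0/0. Credentials and region resolution follow boto3 defaults; the script is an illustration of the kind of compliance check meant, not a prescribed tool.

```python
"""Security-group audit sketch (illustrative only)."""
import boto3

def open_ssh_groups() -> list[str]:
    """Return IDs of security groups that allow port 22 from anywhere."""
    ec2 = boto3.client("ec2")
    flagged = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for group in page["SecurityGroups"]:
            for perm in group.get("IpPermissions", []):
                from_port = perm.get("FromPort")
                to_port = perm.get("ToPort")
                # A rule with no FromPort covers all ports; otherwise check the range.
                covers_ssh = from_port is None or from_port <= 22 <= (to_port or 65535)
                world_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
                if covers_ssh and world_open:
                    flagged.append(group["GroupId"])
                    break
    return flagged

if __name__ == "__main__":
    for group_id in open_ssh_groups():
        print("world-open SSH:", group_id)
```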

