
· Strong knowledge of Windows and Linux
· Experience working with version control systems like Git
· Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes and ELK
· Basic understanding of SQL commands
· Experience working with Azure DevOps

Job Description
We are looking for an experienced software engineer with a strong background in DevOps and handling
traffic & infrastructure at scale.
Responsibilities :
Work closely with product engineers to implement scalable and highly reliable systems.
Scale existing backend systems to handle ever-increasing amounts of traffic and new product
requirements.
Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery/Deployment practices.
Build and operate infrastructure to support the website, backend cluster and ML projects in the organization.
Monitor and track the performance and reliability of our services and software to meet promised SLAs.
You are the right fit if you have:
1+ years of experience working on distributed systems and shipping high-quality product features on
schedule
Experience with Python including Object Oriented programming
Container administration and development utilizing Kubernetes, Docker, Mesos, or similar
Infrastructure automation through Terraform, Chef, Ansible, Puppet, Packer or similar
Knowledge of cloud compute technologies, network monitoring
Experience with Cloud Orchestration frameworks, development and SRE support of these systems
Experience with CI/CD pipelines including VCS (Git, SVN, etc.), GitLab Runners, Jenkins
Working with or supporting production, test and development environments for medium to large user bases
Installing and configuring application servers and database servers
Experience in developing scripts to automate software deployments and installations
Experience in a 24x7 high-availability production environment
Ability to arrive at the best solution by seeing the big picture rather than focusing on minor details; root cause analysis
Mandatory skills: Shell/Bash Scripting, Unix, Linux, Docker, Kubernetes, AWS, Jenkins, Git
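"Developing scripts to automate software deployments" in practice often means a small, careful wrapper around shell commands that retries transient failures. A minimal sketch in Python; the step commands and retry policy here are hypothetical illustrations, not taken from any posting above:

```python
import subprocess
import time

def run_deploy_steps(steps, run=None, retries=3, delay=1.0):
    """Run each deployment step in order, retrying transient failures.

    `steps` is a list of shell commands (hypothetical examples below);
    `run` is injectable so the retry logic can be tested without a shell.
    """
    if run is None:
        run = lambda cmd: subprocess.run(cmd, shell=True, check=True)
    for cmd in steps:
        for attempt in range(1, retries + 1):
            try:
                run(cmd)
                break  # step succeeded, move on to the next one
            except Exception:
                if attempt == retries:
                    raise  # give up after the final attempt
                time.sleep(delay)

# Hypothetical pipeline: pull the image, restart the service, smoke-test.
STEPS = [
    "docker pull registry.example.com/app:latest",
    "systemctl restart app",
    "curl -fsS http://localhost:8080/healthz",
]
```

Making `run` injectable keeps the retry behaviour unit-testable, which matters when the script itself is part of a 24x7 production toolchain.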
- Job Title - DevOps Engineer
- Reports Into - Lead DevOps Engineer
- Location - India
A Little Bit about Kwalee….
Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.
What’s In It For You?
- Hybrid working - 3 days in the office, 2 days remote/WFH is the norm
- Flexible working hours - we trust you to choose how and when you work best
- Profit sharing scheme - we win, you win
- Private medical cover - delivered through BUPA
- Life Assurance - for long term peace of mind
- On site gym - take care of yourself
- Relocation support - available
- Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars
- Pitch and make your own games on Creative Wednesdays! (https://www.kwalee.com/blog/inside-kwalee/what-are-creative-wednesdays/)
Are You Up To The Challenge?
As a DevOps Engineer you have a passion for automation, security and building reliable, expandable systems. You develop scripts and tools to automate deployment tasks, monitor critical aspects of the operation, and resolve engineering problems and incidents, collaborating with architects and developers to help create platforms for the future.
Your Team Mates
The DevOps team works closely with game developers and front-end and back-end server developers, making, updating and monitoring application stacks in the cloud. Each team member has specific responsibilities, with their own projects to manage and their own ideas on how the projects should work. Everyone strives for the most efficient, secure and automated delivery of application code and supporting infrastructure.
What Does The Job Actually Involve?
- Find ways to automate tasks and monitoring systems to continuously improve our systems.
- Develop scripts and tools to make our infrastructure resilient and efficient.
- Understand our applications and services and keep them running smoothly.
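The "automate tasks and monitoring systems" responsibility above can be sketched as a tiny threshold checker. Metric names and limits here are made up for illustration; a real monitor would feed this from an agent or metrics API:

```python
def check_thresholds(metrics, limits):
    """Compare sampled metrics against alert limits.

    Returns a sorted list of human-readable alerts for every metric at or
    above its limit; metrics without a configured limit are ignored.
    """
    alerts = []
    for name, limit in limits.items():
        value = metrics.get(name)
        if value is not None and value >= limit:
            alerts.append(f"{name}: {value} >= {limit}")
    return sorted(alerts)

# Hypothetical sample: disk usage is over its limit, CPU is not.
sample = {"cpu_pct": 40, "disk_pct": 93}
limits = {"cpu_pct": 90, "disk_pct": 90}
```

A cron job or sidecar that calls this and pushes non-empty results to a pager is often the first automation step before adopting a full monitoring stack.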
Your Hard Skills
- Minimum 1 year of experience in a DevOps engineering role
- Deep experience with Linux and Unix systems
- Basic networking knowledge (named, nginx, etc.)
- Some coding experience (Python, Ruby, Perl, etc.)
- Experience with common automation tools (e.g. Chef, Terraform)
- AWS experience is a plus
- A creative mindset motivated by challenges and constantly striving for the best
Your Soft Skills
Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues.
We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.
A Little More About Kwalee
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.
Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.
We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France.
We have a truly global team making games for a global audience, and it’s paying off: Kwalee has been voted Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers’ Association) and our games have been downloaded in every country on earth - including Antarctica!
Required Competencies:
- 3+ years experience in automating application and database deployments using most of the above-mentioned technologies
- Strong experience in .NET and MS SQL
- Ability to quickly learn and implement new tools/technologies
- Ability to excel within an "Agile" environment
- Infrastructure automation is a plus
Roles and Responsibilities:
- Application Deployments - Azure DevOps YAML build pipelines and classic release pipelines, PowerShell and bash scripts, Docker containers
- Database Deployments - DACPAC
- SCM - BitBucket
- Infrastructure - Windows Servers, Linux Servers, SQL Server, Azure SQL and many more Azure resources
- Application Types - Web APIs, Web Forms, Windows Services, Task Scheduler Jobs, SQL Server Agent jobs
- Development/Test Stack - VueJS, .NET Framework, .NET Core, Python, TypeScript, PowerBI, SSIS, SQL Server, NUnit, XUnit, Selenium, Postman, Sentry
- Currently exploring ARM, Terraform and Pulumi for infrastructure automation
- Automate application/database builds and deployments and write scripts to automate repetitive tasks
- Optimize and improve existing builds/deployments
- Deploy applications/databases to different environments
- Setup/configure infrastructure on Azure
- Create/merge branches in git
- Help with debugging post-deployment issues
- Managing access to BitBucket, Sentry, VMs and Azure resources
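The "Create/merge branches in git" duty above is commonly scripted so release branches follow one convention. A minimal sketch; the `release/<semver>` naming scheme is an assumption for illustration, not the posting's actual convention:

```python
import re

# Hypothetical convention: release branches look like 'release/1.4.0'.
RELEASE_RE = re.compile(r"^release/\d+\.\d+\.\d+$")

def release_branch(version):
    """Build and validate a release branch name from a semver string.

    Raises ValueError for anything that does not match the assumed
    'release/<major>.<minor>.<patch>' pattern, so automation fails fast
    before it ever calls `git branch`.
    """
    name = f"release/{version}"
    if not RELEASE_RE.match(name):
        raise ValueError(f"not a valid release version: {version!r}")
    return name
```

Validating the name first keeps a subsequent `git branch`/`git push` step from littering the repository with typo branches.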
Wolken Software provides a suite of AI-enabled, SaaS 2.0 cloud-native applications for Customer Service and Enterprise Solutions, namely Wolken Service Desk, Wolken's IT Service Management, and Wolken's HR Case Management. We have replaced incumbents like Salesforce, ServiceNow, Zendesk, etc. at various Fortune 500 and Fortune 1000 companies.
JD:
AWS: 7-10 years of experience using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, IAM, CloudWatch) to develop and maintain AWS-based cloud solutions, with emphasis on best-practice cloud security.
DevOps:
Solid experience as a DevOps Engineer in a 24x7 uptime AWS environment, including automation experience with configuration management tools.
Strong scripting skills and automation skills.
Expertise in Linux system administration.
Beneficial to have
- Basic DB administration experience (MySQL)
- Working knowledge of some of the major open-source web containers and servers like Apache, Tomcat and Nginx.
- Understanding network topologies and common network protocols and services (DNS, HTTP(S), SSH, FTP, SMTP).
- Experience in Docker, Ansible & Python.
Role & Responsibilities:
- Deploying, automating, maintaining and managing AWS cloud-based production system, to ensure the availability, performance, scalability and security of production systems.
- Build, release and configuration management of production systems.
- System troubleshooting and problem-solving across platform and application domains.
- Ensuring critical system security using best-in-class cloud security solutions.
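"Ensuring critical system security using best-in-class cloud security solutions" usually starts with simple audits, such as flagging security-group rules open to the whole internet. A sketch over plain dicts that mimic the shape of EC2 ingress rules; a real audit would fetch them via the AWS API rather than hard-code them:

```python
def open_to_world(rules):
    """Flag ingress rules whose CIDR is open to the internet (0.0.0.0/0).

    `rules` is a list of dicts shaped like EC2 security-group ingress
    entries; returns (FromPort, ToPort) pairs for each offending rule.
    """
    findings = []
    for rule in rules:
        for rng in rule.get("IpRanges", []):
            if rng.get("CidrIp") == "0.0.0.0/0":
                findings.append((rule.get("FromPort"), rule.get("ToPort")))
    return findings
```

Run nightly, a check like this catches the classic mistake of SSH (port 22) being opened to the world during debugging and never closed again.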
- VMware Horizon / View, VMware vSphere, NSX
- Azure Cloud VDI/AVD - Azure Virtual Desktop
- VMware Virtualization
- Hyper-converged Infrastructure, vSAN, storage devices (SAN/NAS, etc.)
Roles and Responsibilities
- Designing, implementing, testing and deployment of the virtual desktop infrastructure
- Facilitating the transition of the VDI solution to Operations, providing operational and end user support when required.
- Acting as the single point of contact for all technical engagements on the VDI infrastructure.
- Creation of documented standard processes and procedures for all aspects of VDI infrastructure, administration and management.
- Working with the vendor in assessing the VDI infrastructure architecture from a deployment, performance, security and compliance perspective.
- Knowledge of security best practices and understanding of vulnerability assessments.
- Providing mentoring and guidance to other VDI Administrators.
- Document designs, development plans and operations procedures for the VDI solution.
As an MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices, and work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
Shape and support next-generation technology that enables scaling ML products and platforms, bringing expertise in cloud to enable ML use case development, including MLOps.
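The ML/data pipelines described above are, at their core, ordered stages threaded over data with a traceable record of what ran. A minimal stage-runner sketch; the stage names and transforms are hypothetical stand-ins for real feature/train/eval steps (frameworks like Kedro or Airflow provide production versions of this idea):

```python
def run_pipeline(stages, data):
    """Run named pipeline stages in order, threading data through.

    Each stage is a (name, fn) pair; the returned log records which
    stages executed, giving the traceability a CI/CD'd pipeline needs.
    """
    log = []
    for name, fn in stages:
        data = fn(data)
        log.append(name)
    return data, log

# Hypothetical stages standing in for real preprocessing steps.
stages = [
    ("clean", lambda xs: [x for x in xs if x is not None]),
    ("scale", lambda xs: [x / 10 for x in xs]),
]
```

Keeping stages as plain (name, function) pairs makes each one unit-testable in isolation, which is exactly what a CI pipeline exercises before deployment.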
Our Tech Stack
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP) with strong knowledge of cloud services integration, and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, Gitlab, Argo)
• Experience with modern development methods and tooling: Containers (e.g., docker) and
container orchestration (K8s), CI/CD tools (e.g., Circle CI, Jenkins, GitHub actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAGs tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs, including automated testing frameworks and libraries such as pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
- Strong communication skills (written and verbal)
- Responsive, reliable and results oriented with the ability to execute on aggressive plans
- A background in software development, with experience of working in an agile product software development environment
- An understanding of modern deployment tools (Git, Bitbucket, Jenkins, etc.), workflow tools (Jira, Confluence) and practices (Agile (SCRUM), DevOps, etc.)
- Expert-level experience with AWS tools, technologies and associated APIs - IAM, CloudFormation, CloudWatch, AMIs, SNS, EC2, EBS, EFS, S3, RDS, VPC, ELB, Route 53, Security Groups, Lambda, etc.
- Hands on experience with Kubernetes (EKS preferred)
- Strong DevOps skills across CI/CD and configuration management using Jenkins, Ansible, Terraform, Docker.
- Experience provisioning and spinning up AWS Clusters using Terraform, Helm, Helm Charts
- Ability to work across multiple projects simultaneously
- Ability to manage and work with teams and customers across the globe
Implement DevOps capabilities in cloud offerings using CI/CD toolsets and automation
Define and set development, test, release, update and support processes for DevOps operations
Troubleshoot and fix code bugs
Coordinate and communicate within the team and with the client team
Select and deploy appropriate CI/CD tools
Strive for continuous improvement and build a continuous integration, continuous delivery and continuous deployment pipeline (CI/CD pipeline)
Pre-requisite skills required:
Experience working on Linux based infrastructure
Experience scripting in at least 2 languages (e.g. Bash plus Python or Ruby)
Working knowledge of various tools, open-source technologies, and cloud services
Experience with Docker, AWS (EC2, S3, IAM, EKS, Route 53), Ansible, Helm, Terraform
Experience with building, maintaining, and deploying Kubernetes environments and
applications
Experience with build and release automation and dependency management; implementing
CI/CD
Clear fundamentals in DNS, HTTP, HTTPS, microservices, monoliths, etc.
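The scripting and HTTP fundamentals listed above often meet in one everyday task: summarizing web-server access logs. A small Python sketch; it assumes the default Nginx/Apache combined log format, where the status code is the field right after the quoted request:

```python
import collections

def status_counts(log_lines):
    """Count HTTP status codes in combined-format access-log lines.

    Assumes the status code immediately follows the quoted request
    ("GET / HTTP/1.1" 200 ...); malformed lines are skipped.
    """
    counts = collections.Counter()
    for line in log_lines:
        try:
            after_request = line.split('" ', 1)[1]
            status = after_request.split()[0]
        except IndexError:
            continue  # no quoted request on this line, skip it
        counts[status] += 1
    return dict(counts)
```

Piped from `tail -n 10000 access.log`, a counter like this gives a fast first answer to "are we serving 5xx right now?" before reaching for heavier tooling.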
- Automate and maintain ML and Data pipelines at scale
- Collaborate with Data Scientists and Data Engineers on feature development teams to containerize and build out deployment pipelines for new modules
- Maintain and expand our on-prem deployments with spark clusters
- Design, build and optimize applications containerization and orchestration with Docker and Kubernetes and AWS or Azure
- 5 years of IT experience in data-driven or AI technology products
- Understanding of ML Model Deployment and Lifecycle
- Extensive experience with Apache Airflow for MLOps workflow automation
- Experience in building and automating data pipelines
- Experience in working on Spark Cluster architecture
- Extensive experience with Unix/Linux environments
- Experience with standard concepts and technologies used in CI/CD build, deployment pipelines using Jenkins
- Strong experience in Python and PySpark and building required automation (using standard technologies such as Docker, Jenkins, and Ansible).
- Experience with Kubernetes or Docker Swarm
- Working technical knowledge of current systems software, protocols, and standards, including firewalls, Active Directory, etc.
- Basic knowledge of Multi-tier architectures: load balancers, caching, web servers, application servers, and databases.
- Experience with various virtualization technologies and multi-tenant, private and hybrid cloud environments.
- Hands-on software and hardware troubleshooting experience.
- Experience documenting and maintaining configuration and process information.
- Basic knowledge of machine learning frameworks: TensorFlow, Caffe/Caffe2, PyTorch
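The "ML Model Deployment and Lifecycle" requirement above centers on stage transitions: a model version moves staging → production, and the version it replaces is demoted. A sketch over a plain dict standing in for a real registry; tools like MLflow expose similar promote semantics through their own APIs, and the stage names here are illustrative:

```python
def promote(registry, model, version, stage):
    """Move a model version to a new stage, demoting any current holder.

    `registry` maps model name -> {version: stage}. Only one version may
    occupy a given stage at a time; the displaced version is archived.
    """
    allowed = {"staging", "production", "archived"}
    if stage not in allowed:
        raise ValueError(f"unknown stage: {stage!r}")
    for v, s in registry.get(model, {}).items():
        if s == stage and v != version:
            registry[model][v] = "archived"  # demote the previous holder
    registry.setdefault(model, {})[version] = stage
    return registry
```

Archiving rather than deleting the displaced version keeps rollback cheap, which is the practical heart of a model lifecycle policy.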

