
A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading and investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls that allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement security best practices, including data-isolation policies between different internal divisions.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, and Quobyte for analytical jobs. Monitor their performance daily to identify issues early.
- NFS - Implement and optimize the latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate secure backups of all crucial data, binaries, code, etc., at intervals warranted by each use case. Ensure that recovery from backup is tested and seamless.
- Access Control - Maintain passwordless access control and improve security over time. Minimize automated-job failures due to unsuccessful logins.
- Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
- Configuration management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams and internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for this.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third party tools to put in a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - This includes maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, build tools like Jenkins/Bamboo, etc.
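The backup bullet above deliberately leaves the retention schedule open ("at intervals warranted by each use case"). One common way to pin it down is a grandfather-father-son rotation. Below is a minimal pure-Python sketch of such a policy; the function name and parameters are illustrative assumptions, not part of the role description:

```python
from datetime import date, timedelta

def gfs_keep(dates, daily=7, weekly=4, monthly=12):
    """Grandfather-father-son retention sketch: keep the newest `daily`
    backups, the newest backup from each of the most recent `weekly`
    weeks, and the newest backup from each of the most recent `monthly`
    months. Returns the set of dates to retain."""
    dates = sorted(set(dates), reverse=True)  # newest first
    keep = set(dates[:daily])                 # daily tier
    weeks, months = set(), set()
    for d in dates:
        week = d - timedelta(days=d.weekday())  # Monday of d's week
        if week not in weeks and len(weeks) < weekly:
            weeks.add(week)
            keep.add(d)                       # newest backup that week
        month = (d.year, d.month)
        if month not in months and len(months) < monthly:
            months.add(month)
            keep.add(d)                       # newest backup that month
    return keep
```

Everything not returned by such a helper is eligible for deletion; the same logic extends naturally to yearly tiers or per-team overrides.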
Qualifications
- Bachelor's or Master's degree, preferably in CSE/IT
- 10+ years of relevant experience in a sysadmin function
- Must have strong knowledge of IT infrastructure, Linux, networking, and grid computing.
- Must have a strong grasp of automation and data-management tools.
- Proficient in scripting languages, particularly Python
Desirables
- Professional attitude; cooperative and mature approach to work; focused, structured, and well considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, and collaborate effectively with other team members.
APT Portfolio is an equal opportunity employer

Job Title: Sr DevOps Engineer
Location: Bengaluru, India (hybrid work type)
Reports to: Sr Engineering Manager
About Our Client :
We are a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Our Client's IoT solutions provide real-time visibility and actionable insights for logistics and supply-chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without costly infrastructure.
About the role : We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.
What You'll Do 🛠️
- Cloud Platform Management: Administer and optimize AWS resources, ensuring efficient billing and cost management.
- Billing & Cost Optimization: Monitor and optimize cloud spending.
- Containerization & Orchestration: Deploy and manage containerized applications and orchestrate them.
- Database Management: Deploy, manage, and optimize database instances and their lifecycles.
- Authentication Solutions: Implement and manage authentication systems.
- Backup & Recovery: Implement robust backup and disaster-recovery strategies, including Kubernetes cluster and database backups.
- Monitoring & Alerting: Set up and maintain robust monitoring and alerting for application and infrastructure health, and integrate with billing dashboards.
- Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
- Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
- Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their various application stacks.
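Two of the bullets above (billing monitoring and billing-dashboard integration) boil down to aggregating cost line items and flagging anomalies. A minimal pure-Python sketch of that pattern; the record shapes and function names are illustrative assumptions, not any real billing-export format:

```python
from collections import defaultdict

def spend_by_service(line_items):
    """Sum cost line items (dicts with 'service' and 'cost' keys,
    an assumed shape) into a per-service total."""
    totals = defaultdict(float)
    for item in line_items:
        totals[item["service"]] += item["cost"]
    return dict(totals)

def flag_spikes(previous, current, threshold=0.20):
    """Return {service: growth_ratio} for services whose spend grew
    more than `threshold` (20% by default) versus the prior period."""
    flagged = {}
    for svc, cost in current.items():
        base = previous.get(svc, 0.0)
        if base > 0 and (cost - base) / base > threshold:
            flagged[svc] = round((cost - base) / base, 2)
    return flagged
```

In practice the line items would come from a cloud cost export and the flagged services would feed a dashboard or alert, but the aggregation logic is the same.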
What You'll Bring 💼
- Minimum of 4 years of experience in a DevOps or SRE role.
- Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
- Solid understanding of Linux fundamentals and command-line tools.
- Extensive experience with CI/CD tools, particularly GitLab CI.
- Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
- Proven experience deploying and managing microservices.
- Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
- Experience with Identity and Access management solutions like Keycloak.
- Experience implementing backup and recovery solutions.
- Familiarity with optimizing scaling, ideally with Karpenter.
- Proficiency in scripting (Python, Bash).
- Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
- Excellent problem-solving and communication skills.
Bonus Points ➕
- Basic understanding of MQTT or general IoT concepts and protocols.
- Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
- Knowledge of specific AWS services relevant to application stacks.
- Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
- AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).
Why this role:
• You will help build the company from the ground up, shaping our culture and having an impact from Day 1 as part of the foundational team.
Job Description:
Infilect is a GenAI company pioneering the use of Image Recognition in Consumer Packaged Goods retail.
We are looking for a Senior DevOps Engineer to be responsible and accountable for the smooth running of our cloud, AI workflows, and AI-based computer systems. The candidate will also supervise the implementation and maintenance of the company's computing needs, including the in-house GPU & AI servers and AI workloads.
Responsibilities
- Understanding and automating AI-based deployments and AI-based workflows
- Implementing various development, testing, and automation tools, and IT infrastructure
- Manage cloud, computer systems, and other IT assets.
- Strive for continuous improvement and build continuous integration and continuous deployment pipelines (CI/CD)
- Design, develop, implement, and coordinate systems, policies, and procedures for Cloud and on-premise systems
- Ensure the security of data, network access, and backup systems
- Act in alignment with user needs and system functionality to contribute to organizational policy
- Identify problematic areas, perform RCA and implement strategic solutions in time
- Preserve assets, information security, and control structures
- Handle the monthly/annual cloud budget and ensure cost-effectiveness
Requirements and skills
- Well versed in automation tools such as Docker, Kubernetes, Puppet, Ansible etc.
- Working knowledge of Python, the SQL database stack, or any full stack with relevant tools.
- Understanding of agile development, CI/CD, sprints, code reviews, Git, and GitHub/Bitbucket workflows
- Well versed with the ELK stack or any other logging, monitoring, and analysis tools
- Proven working experience of 2+ years as a DevOps/tech lead/IT manager or in relevant positions
- Excellent knowledge of technical management, information analysis, and computer hardware/software systems
- Hands-on experience with computer networks, network administration, and network installation
- Knowledge of ISO/SOC Type II implementation will be a plus
- BE/B.Tech/ME/M.Tech in Computer Science, IT, Electronics or a similar field
We are seeking a skilled and proactive Kubernetes Administrator with strong hands-on experience in managing Red Hat OpenShift environments. The ideal candidate will have a solid background in Kubernetes administration, ArgoCD, and Jenkins.
This role demands a self-motivated, quick learner who can confidently manage OpenShift-based infrastructure in production environments, communicate effectively with stakeholders, and escalate issues promptly when needed.
Key Skills & Qualifications
- Strong experience with Red Hat OpenShift and Kubernetes administration (OpenShift or Kubernetes certification a plus).
- Proven expertise in managing containerized workloads on OpenShift platforms.
- Experience with ArgoCD, GitLab CI/CD, and Helm for deployment automation.
- Ability to troubleshoot issues in high-pressure production environments.
- Strong communication and customer-facing skills.
- Quick learner with a positive attitude toward problem-solving.
● Auditing, monitoring, and improving existing infrastructure components of a highly available, scaled product on the cloud with Ubuntu servers
● Running daily maintenance tasks and improving them with automation where possible
● Deploying new components, servers, and other infrastructure when needed
● Coming up with innovative ways to automate tasks
● Working with telecom carriers to obtain rates and destinations and updating them regularly on the system
● Working with Docker containers, Tinc, iptables, HAProxy, etcd, MySQL, MongoDB, CouchDB, and Ansible
You would bring the following skills to our team:
● Expertise with Docker containers and their networking, Tinc, iptables, HAProxy, etcd, and Ansible
● Extensive experience with setup, maintenance, monitoring, backup, and replication with MySQL
● Expertise with Ubuntu servers, the OS, and server-level networking
● Good experience working with MongoDB and CouchDB
● Good with networking tools
● Open-source server monitoring solutions like Nagios, Zabbix, etc.
● Worked on highly scaled, distributed applications running on datacenter Ubuntu VPS instances
● Innovative, out-of-the-box thinker with multitasking skills, working efficiently in a small team
● Working knowledge of a scripting language like Bash, Node, or Python
● It would be an advantage to have experience with calling platforms like FreeSWITCH, OpenSIPS, or Kamailio, and basic knowledge of the SIP protocol
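Much of the monitoring work described above (Nagios/Zabbix-style checks against services such as HAProxy, MySQL, or etcd) starts from a simple TCP liveness probe. A minimal Python sketch of that probe; the function name is an illustrative assumption:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within
    `timeout` seconds -- the same liveness signal a Nagios/Zabbix-style
    TCP check uses for services like HAProxy, MySQL, or etcd."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS failures alike.
        return False
```

A cron job or monitoring agent would call this per service and raise an alert on False; real checks usually layer a protocol-level handshake on top of the bare connect.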
Type, Location
Full Time @ Anywhere in India
Desired Experience
2+ years
Job Description
What You’ll Do
● Deploy, automate and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, Digital Ocean & Google Cloud Platform.
● Take charge of DevOps activities for CI/CD with the latest tech stacks.
● Acquire industry-recognized professional cloud certifications (AWS/Google) in the capacity of developer or architect.
● Devise multi-region technical solutions.
● Implement the DevOps philosophy and strategy across different domains in the organisation.
● Build automation at various levels, including code deployment, to streamline the release process
● Be responsible for the architecture of cloud services
● Monitor the infrastructure 24x7
● Use programming/scripting in your day-to-day work
● Have shell experience - for example, PowerShell on Windows or Bash on *nix
● Use a version control system, preferably Git
● Be hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO)
● Scalability, HA and troubleshooting of web-scale applications.
● Infrastructure-As-Code tools like Terraform, CloudFormation
● CI/CD systems such as Jenkins, CircleCI
● Container technologies such as Docker, Kubernetes, OpenShift
● Monitoring and alerting systems: e.g. NewRelic, AWS CloudWatch, Google StackDriver, Graphite, Nagios/ICINGA
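The monitoring and alerting systems listed above share a core rule shape: fire an alert only after a threshold is breached for several consecutive samples, so a one-off spike doesn't page anyone. A minimal Python sketch of that logic; it is illustrative and not any specific tool's API:

```python
def breached(samples, threshold, consecutive=3):
    """Alerting-rule sketch: return True only when `consecutive`
    successive samples exceed `threshold`, damping one-off spikes
    the way a 'for'-duration clause does in most monitoring systems."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False
```

With CPU samples [10, 95, 40, 96, 97, 98] and a threshold of 90, only the final run of three breaches trips the rule; the isolated 95 does not.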
What you bring to the table
● Hands-on experience in cloud compute services, Cloud Functions, networking, load balancing, and autoscaling.
● Hands-on with GCP/AWS compute & networking services, i.e. Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, networking (VPC, firewall, load balancer), Cloud SQL, and Datastore.
● DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB, or other NoSQL systems
● Configuration management tools such as Ansible/Chef/Puppet
Bonus if you have…
● Basic understanding of networking (routing, switching, DNS) and storage
● Basic understanding of protocols such as UDP/TCP
● Basic understanding of cloud computing
● Basic understanding of cloud computing models like SaaS and PaaS
● Basic understanding of Git or any other source-code repository
● Basic understanding of databases (SQL/NoSQL)
● Great problem-solving skills
● Good communication
● Adaptability and eagerness to learn

As part of the Cloud Platform / Devops team at Upswing, you will get to work on building state-of-the-art infrastructure for the future. You will also be –
- Building Infrastructure on AWS driven through terraform and building automation tools for deployment, infrastructure management, and observability stack
- Building and Scaling on Kubernetes
- Ensuring the Security of Upswing Cloud Infra
- Building Security Checks and automation to improve overall security posture
- Building automation stack for components like JVM-based applications, Apache Pulsar, MongoDB, PostgreSQL, Reporting Infra, etc.
- Mentoring people across teams to enable best practices
- Mentoring and guiding team members to upskill and helping them develop world-class fintech infrastructure
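The "Building Security Checks and automation" bullet above can be made concrete with a small posture check, for example flagging ingress rules open to the world on non-web ports. The sketch below runs on plain dicts whose shape loosely mirrors what a cloud API might return; the field names and structure are illustrative assumptions:

```python
def open_to_world(security_groups):
    """Security-posture sketch: flag ingress rules open to 0.0.0.0/0
    on ports other than 80/443. Each group is a plain dict with an
    'id' and a list of 'ingress' rules ({'port', 'cidrs'}) -- an
    assumed shape, loosely modelled on cloud security-group APIs."""
    findings = []
    for sg in security_groups:
        for rule in sg.get("ingress", []):
            world_open = "0.0.0.0/0" in rule.get("cidrs", [])
            if world_open and rule["port"] not in (80, 443):
                findings.append((sg["id"], rule["port"]))
    return findings
```

Run on a nightly schedule, the findings list feeds an alert or ticket; extending it to IPv6 (`::/0`) and port ranges follows the same pattern.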
What will you do if you join us?
- Write a lot of code
- Engage in a lot of cross-team collaboration to independently drive forward infrastructure initiatives and DevOps practices across the org
- Take ownership of existing, ongoing, and future initiatives
- Plan architecture for upcoming infrastructure
- Build for scale, resiliency, and security
- Introduce DevOps and cloud best practices in the team
- Mentor new/junior team members and eventually build your own team
You should have
- Curiosity for on-the-job learning and experimenting with new technologies and ideas
- A strong background in Linux environment
- Strong programming skills and experience
- Strong experience in Cloud technologies, Security and Networking concepts, Multi-cloud environments, etc.
- Experience with at least one scripting language (GoLang/Python/Ruby/Groovy)
- Experience in Terraform is highly desirable but not mandatory
- Experience with Kubernetes and Docker is required
- Understanding of the Java Technologies and Stack
- Any other Devops related experience will be considered
ApnaComplex is one of India’s largest and fastest-growing PropTech disruptors within the Society & Apartment Management business. The SaaS based B2C platform is headquartered out of India’s tech start-up hub, Bangalore, with branches in 6 other cities. It currently empowers 3,600 Societies, managing over 6 Lakh Households in over 80 Indian cities to effortlessly manage all aspects of running large complexes seamlessly.
ApnaComplex is part of ANAROCK Group. ANAROCK Group is India's leading specialized real estate services company having diversified interests across the real estate value chain.
If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ApnaComplex is the place for you.
Must have-
- Knowledge of Docker
- Knowledge of Terraform
- Knowledge of AWS
Good to have -
- Kubernetes
- Scripting languages: PHP, Go, and Python
- Webserver knowledge
- Logging and monitoring experience
- Test, build, design, deploy, and maintain a continuous integration and continuous delivery process using tools like Jenkins, Maven, Git, etc.
- Build and maintain highly available production systems.
- Must know how to choose the tools and technologies that best fit the business needs.
- Develop software to integrate with internal back-end systems.
- Investigate and resolve technical issues.
- Problem-solving attitude.
- Ability to automate testing, deployment, and monitoring of code.
- Work in close coordination with the development and operations teams so that the application performs in line with customer expectations.
- Lead and guide the team in identifying and implementing new technologies.
Skills that will help you build a success story with us
- An ability to quickly understand and solve new problems
- Strong interpersonal skills
- Excellent data interpretation
- Context-switching
- Intrinsically motivated
- A tactical and strategic track record for delivering research-driven results
Quick Glances:
- What to look for at ApnaComplex: https://www.apnacomplex.com/why-apnacomplex
- Who are we? A glimpse of ApnaComplex, know us better: https://www.linkedin.com/company/1070467/admin/
- ApnaComplex in the media, visit our media page: https://www.apnacomplex.com/media-buzz
ANAROCK Ethos - Values Over Value:
Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.
We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.
Do Your Thng
DYT - Do Your Thing, is an app, where all social media users can share brands they love with their followers and earn money while doing so! We believe everyone is an influencer. Our aim is to democratise social media and allow people to be rewarded for the content they post. How does DYT help you? It accelerates your career through collaboration opportunities with top brands and gives you access to a community full of experts in the influencer space.
Role: DevOps
Job Description:
We are looking for experienced DevOps Engineers to join our engineering team. The candidate will work with our engineers and interact with the tech team to deliver high-quality web applications for a product.
Required Experience
- DevOps Engineer with 2+ years of experience in development and production operations support for Linux & Windows-based applications and cloud deployments (AWS/Google Cloud stack)
- Experience working with Continuous Integration and Continuous Deployment Pipeline
- Exposure to managing LAMP stack-based applications
- Experience in resource-provisioning automation using tools such as CloudFormation, Terraform, and ARM templates.
- Experience working closely with clients, understanding their requirements, and designing and implementing quality solutions to meet their needs.
- Ability to take ownership of the work carried out
- Experience coordinating with the rest of the team to deliver well-architected, high-quality solutions.
- Experience deploying Docker based applications
- Experience with AWS services.
- Excellent verbal and written communication skills
Desired Experience
- Exposure to AWS, Google Cloud, and Azure
- Experience in Jenkins, Ansible, and Terraform
- Experience building monitoring tools and responding to alarms triggered in the production environment
- Willingness to quickly become a member of the team and to do what it takes to get the job done
- Ability to work well in a fast-paced environment and listen and learn from stakeholders
- Demonstrate a strong work ethic and incorporate company values in your everyday work.
As a DevOps Engineer Consultant, you will be responsible for continuous integration, continuous development, and continuous delivery, with a strong understanding of the business-driven software integration and delivery approach. You will report to the Technical Lead.
Responsibilities & Duties
• Ideate and create CI and CD processes and documentation for them.
• Ideate and create code maintenance using VisualSVN/Jenkins.
• Design and implement new learning tools or knowledge-sharing.
Job requirements:
• Should be able to research and design a code-maintenance process from scratch.
• Should be able to research and design a continuous integration process from scratch.
• Should be able to research and design a continuous development process from scratch.
• Should be able to research and design a continuous delivery process from scratch.
• Should have worked on InstallShield for creating installers.
• In-depth understanding of principles and best practices of Software Configuration Management (SCM) in Agile, Scrum, and Waterfall methodologies.
• Experienced in Windows and Linux environments. Good knowledge and understanding of database and application server administration in a global production environment.
• Should have a good understanding and knowledge of Windows and Linux server deployment
• Should have a good understanding and knowledge of application hosting on Windows IIS
• Experienced in VisualSVN, GitLab CI, and Jenkins for CI and for end-to-end automation of all builds and CD, mostly with products developed using .NET technology.
• Experienced in working with version control systems like Git, and in using source-code management client tools like Git Bash, GitHub, and GitLab.
• Experience in using Maven/Ant/Bamboo as build tools for building deployable artifacts.
• Knowledge of using routed protocols: FTP, SFTP, SSH, HTTP, HTTPS, and Connect:Direct.
• Experienced in deploying database changes to Oracle, DB2, MSSQL, and MySQL databases.
• Work experience in support of multiple platforms: Windows, UNIX, Linux, Ubuntu.
• Managed multiple environments for both production and non-production, where primary objectives included automation, build-out, integration, and cost control.
• Expertise in troubleshooting problems that arise while building, deploying, and in production support.
• Good understanding of creating and managing various development and build platforms and deployment strategies.
• Excellent knowledge of Application Lifecycle Management, Change & Release Management, and the ITIL process
• Exposure to all aspects of the software development life cycle (SDLC), such as analysis, planning, development, testing, implementation, and post-production analysis of projects.
• Good interaction with developers, managers, and team members to coordinate job tasks, and a strong commitment to work.
• Documented daily meetings, build reports, release notes, and many other day-to-day documents and status reports.
• Excellent communication, interpersonal, intuitive, analytical, and leadership skills; works efficiently in both independent and team environments.
• Enjoys working on all types of planned and unplanned issues/tasks.
• Implementing GitLab CI, GitLab, Docker, Maven, etc.
• Should have knowledge of Docker containers, which can be utilised in the deployment process.
• Good interpersonal skills and a team-working attitude; takes initiative and is very proactive in solving problems and providing the best solutions.
• Integrating various version control tools, build tools, and deployment methodologies (scripting) into Jenkins (or any other tool) to create end-to-end orchestrated build cycles.
• Troubleshoot build issues and performance, and generate metrics on the master's performance along with job usage.
• Design and develop build and packaging tools for continuous integration builds and reporting. Automate the build and release cycles.
• Coordinate all build and release activities; ensure release processes are well documented; manage source-control repositories, including branching and tagging.
• Maintain the product release process, including generating and delivering release packages, generating various metrics for tracking issues against releases, and the means of tracking compatibility among products.
• Maintained and managed cloud & test environments and automation for QA, Product Management, and Product Support
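The release-coordination duties above (branching, tagging, generating release packages) usually hinge on a consistent versioning scheme. A minimal semantic-version bump helper in Python; the function name and scheme are illustrative, not taken from this posting:

```python
def bump(version, part):
    """Semantic-version bump helper for tagging releases.
    `version` is a 'major.minor.patch' string; `part` selects
    which component to increment, resetting the lower ones."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")
```

A release pipeline would call this to compute the next tag (for example, `bump("1.4.2", "minor")` gives `"1.5.0"`) before creating the Git tag and packaging artifacts.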
Job Title: Senior Cloud Infrastructure Engineer (AWS)
Department & Team: Technology
Location: India / UK / Ukraine
Reporting To: Infrastructure Services Manager

Role Purpose:
The purpose of the role is to ensure high systems availability across a multi-cloud environment, enabling the business to continue meeting its objectives.
This role will be mostly AWS / Linux focused but will include a requirement to understand comparative solutions in Azure.
The role holder should want to remain fully hands-on while taking on Team Lead responsibilities in future.
Client’s cloud strategy is based around a dual vendor solutioning model, utilising AWS and Azure services. This enables us to access more technologies and helps mitigate risks across our infrastructure.
The Infrastructure Services Team is responsible for the delivery and support of all infrastructure used by Client twenty-four hours a day, seven days a week. The team’s primary function is to install, maintain, and implement all infrastructure-based systems, both On Premise and Cloud Hosted. The Infrastructure Services group already consists of three teams:
1. Network Services Team – responsible for the IP network and its associated components
2. Platform Services Team – responsible for server and storage systems
3. Database Services Team – responsible for all databases
This role will report directly to the Infrastructure Services Manager and will have responsibility for the day-to-day running of the multi-cloud environment, as well as playing a key part in designing best-practice solutions. It will enable the Client business to achieve its stated objectives by playing a key role in the Infrastructure Services Team to achieve world-class benchmarks of customer service and support.

Responsibilities:
Operations
· Deliver end-to-end technical and user support across all platforms (on-premise, Azure, AWS)
· Day-to-day, fully hands-on OS management responsibilities (Windows and Linux operating systems)
· Ensure robust server-patching schedules are in place and meticulously followed to help reduce security-related incidents.
· Contribute to continuous-improvement efforts around cost optimisation, security enhancement, performance optimisation, operational efficiency, and innovation.
· Take an ownership role in delivering technical projects, ensuring best-practice methods are followed.
· Design and deliver solutions around the concept of "Planning for Failure". Ensure all solutions are deployed to withstand system/AZ failure.
· Work closely with Cloud Architects and the Infrastructure Services Manager to identify and eliminate "waste" across cloud platforms.
· Assist several internal DevOps teams with the day-to-day running of pipeline management, and drive standardisation where possible.
· Ensure all Client data, in all forms, is backed up in a cost-efficient way.
· Use the appropriate monitoring tools to ensure all cloud and on-premise services are continuously monitored.
· Drive utilisation of the most efficient methods of resource deployment (Terraform, CloudFormation, Bootstrap)
· Drive the adoption, across the business, of serverless, open-source, and cloud-native technologies where applicable.
· Ensure system documentation remains up to date and is designed according to AWS/Azure best-practice templates.
· Participate in detailed architectural discussions, calling on internal/external subject-matter experts as needed, to ensure solutions are designed for successful deployment.
· Take part in regular discussions with business executives to translate their needs into technical and operational plans.
· Engage with vendors regularly to verify solutions and troubleshoot issues.
· Design and deliver technology workshops to other departments in the business.
· Take the initiative for improvement of service delivery.
· Ensure that Client delivers a service that resonates with customers' expectations and sets Client apart from its competitors.
· Help design the infrastructure and processes necessary to support the recovery of critical technology and systems in line with the business's contingency plans.
· Continually assess working practices and review them with a view to improving quality and reducing costs.
· Champion the case for new technology, and ensure new technologies are investigated and proposals put forward regarding suitability and benefit.
· Motivate and inspire the rest of the infrastructure team, and undertake the steps necessary to raise competence and capability as required.
· Help develop a culture of ownership and quality throughout the Infrastructure Services team.

Skills & Experience:
· AWS Certified Solutions Architect – Professional – REQUIRED
· Microsoft Azure Fundamentals AZ-900 – REQUIRED as the minimum Azure certification
· Red Hat Certified Engineer (RHCE) – REQUIRED
· Must be able to demonstrate working knowledge of designing, implementing, and maintaining best-practice AWS solutions (and, to a lesser extent, Azure).
· Proven examples of ownership of large AWS project implementations in enterprise settings.
· Experience managing the monitoring of infrastructure/applications using tools including CloudWatch, SolarWinds, New Relic, etc.
· Must have practical working knowledge of driving cost optimisation, security enhancement, and performance optimisation.
· Solid understanding and experience of transitioning IaaS solutions to serverless technology
· Must have working production knowledge of deploying infrastructure as code using Terraform.
· Must be able to demonstrate security best practice when designing solutions in AWS.
· Working knowledge of optimising network traffic performance and delivering high availability while keeping a check on costs.
· Working experience of on-premise-to-cloud migrations
· Experience of data-centre technology infrastructure development and management
· Must have experience working in a DevOps environment
· Good working knowledge of WAN connectivity and how it interacts with the various entry-point options into AWS, Azure, etc.
· Working knowledge of server and storage devices
· Working knowledge of MySQL and SQL Server / cloud-native databases (RDS/Aurora)
· Experience of carrier-grade networking, on-premise and cloud
· Experience in virtualisation technologies
· Experience in ITIL and project management
· Providing senior support to the Service Delivery team.
· Good understanding of new and emerging technologies
· Excellent presentation skills for both internal and external audiences
· The ability to share your specific expertise with the rest of the Technology group
· Experience with MVNO or a Network Operations background within the telecoms industry (optional)
· Working knowledge of one or more European languages (optional)

Behavioural Fit:
· Professional appearance and manner
· High personal drive; results-oriented; makes things happen; "can do" attitude
· Can work and adapt within a highly dynamic and growing environment
· Team player; effective at building close working relationships with others
· Effectively manages diversity within the workplace
· Strong focus on service delivery and the needs and satisfaction of internal clients
· Able to see issues from a global, regional, and corporate perspective
· Able to effectively plan and manage large projects
· Excellent communication and interpersonal skills at all levels
· Strong analytical, presentation, and training skills
· Innovative and creative
· Demonstrates technical leadership
· Visionary and strategic view of technology enablers (creative and innovative)
· High verbal and written communication ability; able to influence effectively at all levels
· Possesses the technical expertise and knowledge to lead by example and input into technical debates
· Depth and breadth of experience in infrastructure technologies
· Enterprise mentality and global mindset
· Sense of humour

Role Key Performance Indicators:
· Design and deliver repeatable, best-in-class cloud solutions.
· Pro-actively monitor service quality and take action to scale operational services in line with business growth.
· Generate operating efficiencies, to be agreed with the Infrastructure Services Manager.
· Establish a "best in sector" level of operational service delivery and insight.
· Help create an effective team.










