About the client:
Asia’s largest global sports media property in history, with a global broadcast to 150+ countries. As the world’s largest martial arts organization, it is a celebration of Asia’s greatest cultural treasure and its deep-rooted Asian values of integrity, humility, honor, respect, courage, discipline, and compassion. It has achieved some of the highest TV ratings and social media engagement metrics across Asia with its unique brand of Asian values, world-class athletes, and world-class production. Broadcast partners include Turner Sports, Star India, TV Tokyo, Fox Sports, ABS-CBN, Astro, ClaroSports, Bandsports, StarTimes, Premier Sports, Thairath TV, Skynet, Mediacorp, OSN, and more. Institutional investors include Sequoia Capital, Temasek Holdings, GIC, Iconiq Capital, Greenoaks Capital, and Mission Holdings. It currently has offices in Singapore, Tokyo, Los Angeles, Shanghai, Milan, Beijing, Bangkok, Manila, Jakarta, and Bangalore.
Position: DevOps Engineer – SDE3
As part of the engineering team, you would be expected to have deep technology expertise with a passion for building highly scalable products. This is a unique opportunity where you can impact the lives of people across 150+ countries!
Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploy and maintain in-house and customer systems, ensuring high availability, performance, and optimal cost.
• Automate build pipelines and ensure the right architecture for CI/CD.
• Work with engineering leaders to ensure cloud security
• Develop standard operating procedures for various facets of Infrastructure services (CI/CD, Git Branching, SAST, Quality gates, Auto Scaling)
• Perform and automate regular backups of servers and databases. Ensure rollback and restore capabilities are real-time and zero-downtime.
• Lead the entire DevOps charter for ONE Championship. Mentor other DevOps engineers. Ensure industry standards are followed.
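The backup-automation bullet above is representative of the day-to-day scope. A minimal Python sketch of what such automation might look like (the database, paths, and retention policy are hypothetical, not part of the role's actual stack):

```python
import datetime
import os

def backup_command(db_name, backup_dir, now=None):
    """Build a timestamped pg_dump command for a nightly database backup."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    stamp = now.strftime("%Y%m%dT%H%M%SZ")
    target = os.path.join(backup_dir, f"{db_name}-{stamp}.dump")
    # Custom-format (-Fc) dumps support selective and parallel restores.
    return ["pg_dump", "-Fc", "-f", target, db_name]

def prune_old_backups(filenames, keep=7):
    """Return the backup files to delete, keeping the `keep` most recent.

    Relies on the timestamp in the filename sorting lexicographically.
    """
    ordered = sorted(filenames, reverse=True)
    return ordered[keep:]
```

In practice a cron job or pipeline stage would run the returned command and delete the pruned files; restore drills would then verify the rollback path.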
Requirements
• 5+ years of overall experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or an equivalent stream from an institute of repute
• Experience in Azure is a must. AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, Alertmanager, New Relic
• Good knowledge of source code control (git)
• Expertise in Continuous Integration and Continuous Deployment setup using Azure Pipeline or Jenkins
• Strong experience in programming languages. Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL & NoSQL databases
• Strong Linux fundamentals
• Experience in SonarQube, Locust, and BrowserStack is a plus
Job Overview:
You will work with engineering and development teams to integrate and develop cloud solutions and virtualized deployments of a software-as-a-service product. This requires understanding the software system architecture as well as its performance and security requirements. The DevOps Engineer is also expected to have expertise in available cloud solutions and services, administration of virtual machine clusters, performance tuning and configuration of cloud computing resources, configuration of security, and scripting and automation of monitoring functions. This position requires deploying and managing multiple virtual clusters and working with compliance organizations to support security audits. The design and selection of cloud computing solutions that are reliable, robust, extensible, and easy to migrate are also important.
Experience:
- Experience working on billing and budgets for a GCP project - MUST
- Experience working on optimizations on GCP based on vendor recommendations - NICE TO HAVE
- Experience in implementing the recommendations on GCP
- Architect Certifications on GCP - MUST
- Excellent communication skills (both verbal & written) - MUST
- Excellent documentation skills on processes and steps and instructions- MUST
- At least 2 years of experience on GCP.
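To illustrate the billing-and-budgets responsibility above: in GCP, budget alerts are normally configured in Cloud Billing with Pub/Sub notifications, but the underlying threshold logic is simple. A minimal sketch, with all numbers hypothetical:

```python
def budget_alerts(spend_to_date, monthly_budget, thresholds=(0.5, 0.9, 1.0)):
    """Return the budget thresholds (as fractions) that spend has crossed.

    Mirrors the 50%/90%/100% alert tiers commonly configured on a
    Cloud Billing budget; thresholds here are illustrative defaults.
    """
    ratio = spend_to_date / monthly_budget
    return [t for t in thresholds if ratio >= t]
```

A notification handler would page or open a ticket for each newly crossed threshold rather than re-alerting on every check.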
Basic Qualifications:
- Bachelor’s/Master’s Degree in Engineering OR Equivalent.
- Extensive scripting or programming experience (Shell Script, Python).
- Extensive experience working with CI/CD (e.g. Jenkins).
- Extensive experience working with GCP, Azure, or Cloud Foundry.
- Experience working with databases (PostgreSQL, Elasticsearch).
- A minimum of 2 years of experience on GCP, with certification.
Benefits:
- Competitive salary.
- Work from anywhere.
- Learning and gaining experience rapidly.
- Reimbursement for basic working set up at home.
- Insurance (including top-up insurance for COVID).
Location:
Remote - work from anywhere.
Ideal joining preferences:
Immediate or 15 days
Docker, Kubernetes Engineer – Remote (Pan India)
ABOUT US
Established in 2009, Ashnik is a leading open-source solutions and consulting company in Southeast Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open-source technologies. Our offerings form a critical part of digital transformation, big data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises adopting open-source software and are known for their thought leadership.
THE POSITION
Ashnik is looking for a talented and passionate Technical Consultant to be part of the training team and work with customers on DevOps solutions. You will be responsible for implementation and consulting work for customers across SEA and India. We are looking for candidates with personal qualities like:
- Passion for working for different customers and different environments.
- Excellent communication and articulation skills
- Aptitude for learning new technology and willingness to understand technologies they are not directly working on.
- Willingness to travel within and outside the country.
- Ability to independently work at the customer site and navigate through different teams.
SKILLS AND EXPERIENCE
ESSENTIAL SKILLS
- 3+ years of experience, with a B.E/B.Tech, MCA, or other graduate degree in a technical field.
- Must have prior experience with Docker containers, Swarm, and/or Kubernetes.
- Understanding of Operating System, processes, networking, and containers.
- Experience/exposure in writing and managing Dockerfiles and creating container images.
- Experience working on the Linux platform.
- Ability to perform installations and system/software configuration on Linux.
- Should be aware of networking and SSL/TLS basics.
- Should be aware of tools used in complex enterprise IT infra e.g. LDAP, AD, Centralized Logging solution etc.
A preferred candidate will have:
- Prior experience with open source Kubernetes or another container management platform like OpenShift, EKS, ECS etc.
- CNCF Kubernetes Certified Administrator/Developer
- Experience in container monitoring using Prometheus, Datadog etc.
- Experience with CI/CD tools and their usage
- Knowledge of scripting language e.g., shell scripting
RESPONSIBILITIES
First 2 months:
- Get an in-depth understanding of containers, Docker Engine and runtime, Swarm and Kubernetes.
- Get hands-on experience with various features of Mirantis Kubernetes Engine and Mirantis Secure Registry.
After 2 months:
- Work with Ashnik’s pre-sales and solution architects to deploy Mirantis Kubernetes Engine for customers
- Work with customer teams and provide them guidance on how Kubernetes Platform can be integrated with other tools in their environment (CI/CD, identity management, storage etc)
- Work with customers to containerize their applications.
- Write Dockerfile for customers during the implementation phase.
- Help customers design their network and security policy for effective management of applications deployed using Swarm and/or Kubernetes.
- Help customer design their deployments either with Swarm services or Kubernetes Deployments.
- Work with pre-sales and sales team to help customers during their evaluation of Mirantis Kubernetes platform
- Conduct workshops for customers as needed for technical hand-holding and technical handover.
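For candidates unfamiliar with the Dockerfile-writing work described above, a toy Python sketch that renders a minimal Dockerfile (the base image, workdir, and entrypoint are placeholders, not a customer specification):

```python
def render_dockerfile(base_image, workdir, entrypoint):
    """Render a minimal Dockerfile for containerizing an application.

    Real customer Dockerfiles would add dependency installation,
    non-root users, and multi-stage builds; this shows only the skeleton.
    """
    lines = [
        f"FROM {base_image}",      # pinned base image
        f"WORKDIR {workdir}",      # working directory inside the container
        "COPY . .",                # copy the application source
        f'ENTRYPOINT ["{entrypoint}"]',
    ]
    return "\n".join(lines) + "\n"
```

Generating Dockerfiles from a template like this keeps per-customer images consistent during an implementation phase.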
Package: up to 30 L
Office: Mumbai
Responsibilities
Provisioning and de-provisioning AWS accounts for internal customers
Work alongside systems and development teams to support the transition and operation of client websites/applications in and out of AWS.
Deploying, managing, and operating AWS environments
Identifying appropriate use of AWS operational best practices
Estimating AWS costs and identifying operational cost control mechanisms
Keep technical documentation up to date
Proactively keep up to date on AWS services and developments
Create (where appropriate) automation, in order to streamline provisioning and de-provisioning processes
Lead certain data/service migration projects
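As a flavor of the cost-estimation duty listed above, a minimal sketch. The rates are illustrative, not actual AWS pricing; real figures come from the AWS pricing pages or Cost Explorer:

```python
def monthly_instance_cost(hourly_rate_usd, hours_per_month=730, count=1):
    """Estimate on-demand monthly cost for a fleet of identical instances.

    730 is the conventional average number of hours in a month.
    """
    return round(hourly_rate_usd * hours_per_month * count, 2)

def savings_from_rightsizing(current_rate, target_rate, count):
    """Monthly savings from moving a fleet to a cheaper instance type."""
    return (monthly_instance_cost(current_rate, count=count)
            - monthly_instance_cost(target_rate, count=count))
```

Estimates like this feed the "operational cost control" conversation: rightsizing, reserved capacity, and scheduling non-production fleets off-hours.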
Job Requirements
Experience provisioning, operating, and maintaining systems running on AWS
Experience with Azure/AWS.
Capabilities to provide AWS operations and deployment guidance and best practices throughout the lifecycle of a project
Experience with application/data migration to/from AWS
Experience with NGINX and the HTTP protocol.
Experience with configuration management software such as Git
Strong analytical and problem-solving skills
Deployment experience using common AWS technologies like VPC, regionally distributed EC2 instances, Docker, and more.
Ability to work in a collaborative environment
Detail-oriented, strong work ethic and high standard of excellence
A fast learner and achiever who sets high personal goals
Must be able to work on multiple projects and consistently meet project deadlines
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools like Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Work on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve a successful implementation of integrated solutions within the portfolio.
- Have the necessary technical and professional expertise.
What are we looking for?
- Minimum 5-12 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipeline.
- Experience in DevOps automation tools. Well versed with DevOps Frameworks, and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience with and a good understanding of any cloud like AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with Jira is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle concepts while maintaining quality, interact and share your ideas, and learn a great deal at work. Work with a team of highly talented young professionals and enjoy the benefits of being here.
We are
It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products - Plum, Empuls, and Compass. It works with more than 2,000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, it is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We assure you, however, that we will attempt to maintain a reasonable time window for successfully closing this requirement. Candidates will be kept informed and updated on their feedback and application status.
- Good experience in AWS services like Elastic Compute Cloud (EC2), IAM, RDS, API Gateway, Cognito, etc.
- Experience using Git, SonarQube, Ansible, Nexus, Nagios, etc.
- Strong experience in creating, importing, and launching volumes with security groups, auto-scaling, load balancers, and fault tolerance.
- Experience in configuring Jenkins jobs with related plugins for building, testing, and continuous deployment to accomplish complete CI/CD.
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS Cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.
Key responsibility area
- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics and alerting management.
- Creation of Bash/Python scripts for automation
- Performing root cause analysis for production errors.
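The scripting and root-cause-analysis duties above can be sketched with a small Python example that tallies error lines per logger and flags noisy ones. The log format (timestamp, logger, level, message) and the paging threshold are assumptions for illustration:

```python
import re
from collections import Counter

ERROR_RE = re.compile(r"\b(ERROR|CRITICAL)\b")

def error_counts(log_lines):
    """Count ERROR/CRITICAL occurrences per logger as a first RCA signal."""
    counts = Counter()
    for line in log_lines:
        if ERROR_RE.search(line):
            # Assume the logger name is the second whitespace-separated field.
            parts = line.split()
            logger = parts[1] if len(parts) > 1 else "unknown"
            counts[logger] += 1
    return counts

def should_alert(counts, threshold=5):
    """Return loggers whose error volume crosses the paging threshold."""
    return sorted(name for name, n in counts.items() if n >= threshold)
```

In a real deployment this logic lives in the alerting layer (e.g. Prometheus rules or an ELK watcher) rather than an ad-hoc script, but the triage idea is the same.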
Requirement
- 2 years of experience as a Team Lead.
- Good command of Kubernetes.
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services; deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as NGINX and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios. Graylog, Splunk, Prometheus, and Grafana are a plus.
Must Have:
Linux, CI/CD (Jenkins), AWS, scripting (Bash/shell, Python, Go), NGINX, Docker.
Good to have
Configuration management (Ansible or a similar tool), logging (ELK or similar), monitoring (Nagios or similar), IaC (Terraform, CloudFormation).
MLOps Engineer
Required Candidate profile :
- 3+ years’ experience in developing continuous integration and deployment (CI/CD) pipelines (e.g. Jenkins, GitHub Actions) and bringing ML models into CI/CD pipelines
- Candidate with strong Azure expertise
- Exposure to productionizing models
- Candidate should have complete knowledge of the Azure ecosystem, especially in the area of DE
- Candidate should have prior experience designing, building, testing, and maintaining machine learning infrastructure to empower data scientists to rapidly iterate on model development
- Develop continuous integration and deployment (CI/CD) pipelines on top of Azure, including Azure ML, MLflow, and Azure DevOps
- Proficient knowledge of Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- End-to-end production experience with Azure ML and Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Worked on model drift (concept drift, data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, and optimize ML and AI services, maintaining it all in an infrastructure-as-code, everything-in-version-control manner.
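One common way to quantify the data drift mentioned above is the Population Stability Index (PSI). A minimal pure-Python sketch, assuming bin fractions are already computed from the training and serving distributions (Azure ML also offers its own dataset-drift monitors):

```python
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Population Stability Index between two binned distributions.

    A common rule of thumb: PSI < 0.1 is stable, 0.1-0.25 indicates
    moderate drift, and > 0.25 indicates significant drift that may
    warrant retraining. Bins are clipped by `eps` to avoid log(0).
    """
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total
```

A drift monitor would compute PSI per feature on a schedule and page or trigger a retraining pipeline when the threshold is crossed.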
One of our US-based clients is looking for a DevOps professional who can handle technical work as well as training for them in the US.
If you are hired, you will be sent to the US to work from there. The training-to-technical-work ratio will be 70% to 30%, respectively.
The company will sponsor a US visa.
If you are an experienced DevOps professional who has also delivered professional training, feel free to connect with us for more details.
Implement integrations requested by customers
Deploy updates and fixes
Provide Level 2 technical support
Build tools to reduce occurrences of errors and improve customer experience
Develop software to integrate with internal back-end systems
Perform root cause analysis for production errors
Investigate and resolve technical issues
Develop scripts to automate visualization
Design procedures for system troubleshooting and maintenance
Hands-on experience with multiple clouds (AWS/Azure/GCP)
Good experience with Docker implementation at scale
Kubernetes implementation and orchestration
- 2+ years of demonstrable experience leading site reliability and performance in large-scale, high-traffic environments
- 2+ years of hands-on experience as a DevOps engineer
- Strong leadership, communication and interpersonal skills geared to getting things done
- Developing themselves and the talent within their charge – fostering and creating opportunities for the team
- Strong understanding of SRE concepts and the DevOps culture. Set the direction and strategy for your team, and help shape the overall SRE program for the company
- Able to lead resolution of complicated technical issues and communicate status updates/RCAs to management and customers.
- Own site stability, performance, capacity planning, DevOps recruitment.