The candidate must have 1-3 years of experience in the domain. The responsibilities include:
- Deploy systems in Linux-based environments using Docker
- Manage & maintain the production environment
- Deploy updates and fixes
- Provide Level 1 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
- Experience working on Linux-based infrastructure
- Excellent understanding of the MERN stack, Docker & Nginx
- Experience configuring and managing databases such as MongoDB
- Excellent troubleshooting skills
- Experience working with AWS/Azure/GCP
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
- Experience with CI/CD pipelines
- Fantastic Work Culture & Opportunity to work with the Founding Team
- Competitive Salary & Benefits
- Work from Office (Location- Noida)
- Be a part of the process of scaling a young startup to a global company.
- Flexible work schedule & complete ownership of responsibilities.
- A cool working environment that will make each day at Voiceoc full of fun & new learning.
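Several of the responsibilities above (performing root cause analysis for production errors, building tools to reduce error occurrences) typically begin as small scripts. Below is a minimal, hypothetical sketch that tallies error signatures from application logs so RCA can start with the noisiest component; the log format and component names are illustrative, not from any real system:

```python
import re
from collections import Counter

# Hypothetical log format: "<timestamp> ERROR <component>: <message>"
ERROR_RE = re.compile(r"\bERROR\s+(\w+):")

def top_error_components(log_lines, n=3):
    """Count errors per component so RCA can start with the noisiest one."""
    counts = Counter()
    for line in log_lines:
        match = ERROR_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts.most_common(n)

log = [
    "2024-01-01T00:00:01 INFO api: started",
    "2024-01-01T00:00:02 ERROR db: connection refused",
    "2024-01-01T00:00:03 ERROR db: connection refused",
    "2024-01-01T00:00:04 ERROR cache: miss storm",
]
print(top_error_components(log))  # [('db', 2), ('cache', 1)]
```

In practice the lines would be streamed from a log aggregator rather than held in a list.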
Wolken Software provides a suite of AI-enabled, SaaS 2.0 cloud-native applications for Customer Service and Enterprise Solutions, namely Wolken Service Desk, Wolken's IT Service Management, and Wolken's HR Case Management. We have replaced incumbents like Salesforce, ServiceNow, Zendesk, etc. at various Fortune 500 and Fortune 1000 companies.
AWS: 7-10 years experience with using a broad range of AWS technologies (e.g. EC2, RDS, ELB, S3, VPC, IAM, CloudWatch) to develop and maintain AWS-based cloud solutions, with emphasis on best practice cloud security.
Solid experience as a DevOps Engineer in a 24x7 uptime AWS environment, including automation experience with configuration management tools.
Strong scripting and automation skills.
Expertise in Linux system administration.
Beneficial to have:
- Basic DB administration experience (MySQL)
- Working knowledge of some of the major open-source web containers & servers like Apache, Tomcat, and Nginx.
- Understanding of network topologies and common network protocols and services (DNS, HTTP(S), SSH, FTP, SMTP).
- Experience in Docker, Ansible & Python.
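The Linux administration and scripting requirements above often translate into small monitoring utilities. A minimal sketch that flags filesystems nearing capacity; the mounts and byte counts are synthetic stand-ins for what `shutil.disk_usage` would report on a live host:

```python
def over_threshold(mounts, threshold=0.9):
    """Given {mount: (total_bytes, used_bytes)}, return mounts at or above threshold."""
    return sorted(
        mount
        for mount, (total, used) in mounts.items()
        if used / total >= threshold
    )

# Synthetic sample; on a real host, values would come from shutil.disk_usage(mount).
sample = {"/": (100_000, 95_000), "/data": (1_000_000, 400_000)}
print(over_threshold(sample))  # ['/']
```

A real check would run from cron or a monitoring agent and alert rather than print.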
Role & Responsibilities:
- Deploying, automating, maintaining and managing AWS cloud-based production systems to ensure the availability, performance, scalability and security of production systems.
- Build, release and configuration management of production systems.
- System troubleshooting and problem-solving across platform and application domains.
- Ensuring critical system security using best-in-class cloud security solutions.
- Good knowledge of at least one language (C#, Java, Python, Go, PHP, Node.js)
- Solid experience with application and infrastructure architectures
- Design and plan cloud solution architecture
- Design for security, networking, and compliance
- Analyze and optimize technical and business processes
- Ensure solution and operational reliability
- Manage and provision cloud infrastructure
- Manage IaaS, PaaS, and SaaS solutions
- Design strategies for cloud governance, migration, cloud operations, and DevOps
- Design highly scalable, available, and reliable cloud applications
- Build and test applications
- Deploy applications on cloud
- Integrate applications with cloud services
- Architect-level certification in any major cloud (AWS, GCP, Azure)
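Behind design bullets like "highly scalable, available, and reliable cloud applications" sits basic capacity arithmetic. A toy sketch (all numbers illustrative) for sizing an instance fleet against a target request rate while reserving spare headroom:

```python
import math

def instances_needed(target_rps, per_instance_rps, headroom=0.3):
    """Instances required to serve target_rps while keeping `headroom` capacity spare."""
    usable_rps = per_instance_rps * (1 - headroom)
    return math.ceil(target_rps / usable_rps)

# 10,000 RPS target, 800 RPS per instance, 30% headroom -> 560 usable RPS each.
print(instances_needed(10_000, 800))  # 18
```

The same arithmetic usually feeds an autoscaling policy rather than a fixed fleet size.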
- Job Title - DevOps Engineer
- Reports Into - Senior Data Science Core Developer
- Location - Hybrid/ Bangalore
A Little Bit about Kwalee….
Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.
What’s In It For You?
- Hybrid working - 3 days in the office, 2 days remote/ WFH is the norm
- Flexible working hours - we trust you to choose how and when you work best
- Profit sharing scheme - we win, you win
- Private medical cover - delivered through BUPA
- Life Assurance - for long term peace of mind
- On site gym - take care of yourself
- Relocation support - available
- Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars
- Pitch and make your own games on Creative Wednesdays!
Are You Up To The Challenge?
As a DevOps Engineer you have a passion for automation, security and building reliable, expandable systems. You develop scripts and tools to automate deployment tasks and monitor critical aspects of the operation, and you resolve engineering problems and incidents. You collaborate with architects and developers to help create platforms for the future.
Your Team Mates
The Data Science team is central in developing the technology behind the growth and monetisation of our games. We are a cross-functional team of analysts, engineers and data scientists, and we work closely with the larger engineering team to deliver products spanning our modern, cloud-first tech stack. As a DevOps Engineer in this team you will support the analysts and scientists as their dedicated DevOps expert.
What Does The Job Actually Involve?
- Find ways to automate tasks and monitoring systems to continuously improve our systems.
- Develop scripts and tools to make our infrastructure resilient and efficient.
- Understand our applications and services and keep them running smoothly.
Your Hard Skills
- Minimum 1 year of experience in a DevOps engineering role
- Deep experience with Linux and Unix systems
- Basic networking knowledge (named/BIND, Nginx, etc.)
- Some coding experience (Python, Ruby, Perl, etc.)
- Experience with common automation tools (e.g. Chef, Terraform)
- AWS experience is a plus
- A creative mindset motivated by challenges and constantly striving for the best
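Automation scripts of the kind described above have to tolerate transient failures (flaky APIs, briefly unavailable hosts). A minimal retry-with-exponential-backoff sketch; the `flaky` stub is a hypothetical task that succeeds on the third attempt:

```python
import time

def retry(task, attempts=3, base_delay=0.01):
    """Run task, retrying with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return task()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"count": 0}
def flaky():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry(flaky)
print(result, calls["count"])  # ok 3
```

Production versions usually add jitter to the delay so many retrying clients don't synchronise.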
Your Soft Skills
Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues.
We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.
A Little More About Kwalee
Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.
Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.
We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France.
We have a truly global team making games for a global audience, and it’s paying off: - Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers’ Association) and our games have been downloaded in every country on earth - including Antarctica!
We are looking for a candidate who is an experienced DevOps Engineer with excellent working knowledge of cloud platforms (AWS, Azure, GCP) and the Docker/Kubernetes ecosystem. You will be expected to interact with clients on a daily basis to discuss and share plans, outcomes, etc. You should have a strong technical flair, a passion for learning new technologies and finding innovative solutions to problems, and the ability to keep abreast of emerging technologies.
- Bridging the gaps between core infra, security, QA, and development team
- Owning the end-to-end Availability, Performance, and Capacity of applications and their infrastructure and creating/maintaining the respective observability with Prometheus/New Relic/ELK/Loki.
- Providing 24x7 infra & app support, building processes, and documenting "tribal" knowledge along the way.
- Mentor and train L1 engineers and continually improve app and infra support processes.
- Managing application deployment on platforms - automate and improve development and release processes.
- Creating, managing and maintaining datastores & data platform infra using IaC.
- Owning and onboarding new applications with the production readiness review process.
- Managing the SLO/Error Budgets/Alerts and performing root cause analysis for production errors.
- Working with Core Infra, Dev and Product teams to define SLO/Error Budgets/Alerts.
- Working with the Dev team to have an in-depth understanding of the application architecture and its bottlenecks.
- Identifying observability gaps in application & infrastructure and working with stakeholders to fix them
- Managing outages and doing detailed RCA with developers and identifying ways to avoid that situation.
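The SLO/error-budget responsibilities above rest on simple arithmetic: an availability SLO implies a fixed allowance of failures per measurement window. A toy sketch with illustrative numbers:

```python
def error_budget_remaining(slo, total_requests, failed_requests):
    """Failures still allowed in this window before the SLO is breached."""
    allowed_failures = total_requests * (1 - slo)
    return allowed_failures - failed_requests

# A 99.9% SLO over 1M requests allows 1,000 failures; 250 have occurred.
print(round(error_budget_remaining(0.999, 1_000_000, 250)))  # 750
```

A negative result means the budget is exhausted, which is typically the trigger to freeze risky releases.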
Skills and Qualifications:
- Minimum 3 years of experience managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills
- Experience in troubleshooting, managing, and deploying containerized environments using Docker/containerd, Kubernetes is a must.
- Must be proficient with Helm, with experience in a service mesh such as Istio or Linkerd.
- Must be very hands-on in managing and troubleshooting the Kubernetes environment.
- Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc) and good knowledge of shell scripting.
- Extensive knowledge in DNS, TCP/IP, UDP, GRPC, Routing, and Load Balancing.
- Expertise in GitOps and Infrastructure as Code tools such as Terraform, and in configuration management tools such as Chef, Puppet, SaltStack, or Ansible.
- Expertise in Google Cloud (GCP) and/or other relevant Cloud Infrastructure solutions like AWS or Azure.
- Experience in building the CI/CD pipelines with tools such as Jenkins, GitLab, Spinnaker, Argo, etc.
- Experience with database administration is a must.
- Sufficient knowledge of Git.
- A collaborative spirit with the ability to work across disciplines to influence, learn and deliver.
- A deep understanding of computer science, software development, and networking principles.
BE/B.Tech/M.Tech in Computer Science or Equivalent
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
AWS:
  Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
  Data: RDS, DynamoDB, Elasticsearch
  Workload: EC2, EKS, Lambda, etc.
Azure:
  Networking: VNET, VNET Peering
  Data: Azure MySQL, Azure MSSQL, etc.
  Workload: AKS, Virtual Machines, Azure Functions
GCP:
  Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
  Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
  Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
• Kubernetes (EKS/AKS/GKE) or Ansible experience: basics such as Pods, Deployments, networking, and service mesh; use of a package manager such as Helm.
• Scripting experience (Bash/Python): automation in pipelines when required, system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.
• Experience in any programming language is not required but is appreciated.
• Good experience in Git, SVN, or another code management tool is required.
• DevSecops tools like (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure and code.
• Observability tools (Opensource: Prometheus, Elasticsearch, OpenTelemetry; Paid: Datadog, 24/7, etc)
●Extensive experience with Linux, including familiarity with C, UNIX system calls, low-level O/S and network protocols, and block, file, and object storage protocols.
●Experience of using a modern configuration management system (examples such as Ansible, Salt Stack, Puppet, or Chef) to automate the management of a large-scale Linux deployment.
●Effective troubleshooting skills across hardware, O/S, network, and storage.
●Ability to write robust, maintainable code in Python and/or Perl.
●Experience working in a large, multi-national enterprise in any industry vertical, showing experience of communicating and collaborating in globally distributed teams.
●Enthusiasm for modern development tools and practices including Git, Jenkins, automated testing, and continuous integration.
●Experience of designing, implementing and supporting large scale production IaaS platforms.
●Knowledge of building and managing Docker containers in a secure manner.
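Idempotency is the core contract behind the configuration management tools listed above (Ansible, Salt Stack, Puppet, Chef): applying the same desired state twice must change nothing the second time. A minimal Python sketch of that pattern for a single file; the path and content are illustrative:

```python
import os
import tempfile

def ensure_file(path, content):
    """Converge `path` to `content`; return True only when a change was needed."""
    try:
        with open(path) as f:
            if f.read() == content:
                return False  # already converged, nothing to do
    except FileNotFoundError:
        pass
    with open(path, "w") as f:
        f.write(content)
    return True

path = os.path.join(tempfile.mkdtemp(), "app.conf")
print(ensure_file(path, "workers=4\n"))  # True  (file created)
print(ensure_file(path, "workers=4\n"))  # False (second run is a no-op)
```

Real tools extend this same check-then-act shape to packages, services, users, and permissions.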
You will be working on our client’s massive scale Infrastructure and CloudOps requirements. You will be directly working on the customer's cloud environment and managing their product infra life cycle.
What you will do (Responsibilities):
Understand how the product works and how it is used by the customers
Day-to-day operational support of product infrastructure on all three major public cloud services
Interact with customers on/off-site to troubleshoot issues, provide workarounds by leveraging your troubleshooting skills
Create and manage processes for the secure operation of customer cloud environments
System (Windows, Linux, and SQL databases) and network administration
Use incident management tools for incident analysis and troubleshooting. Identify, escalate, and communicate issues in a timely manner.
Contribute to building a knowledge base centered on known incidents/defects, Frequently Asked Questions, resolved issues, applying lessons learned and previous resolutions to new incidents.
Build hosting environments for clients all over the world
Collaborate with product managers and engineers to ensure that critical and time-sensitive projects run smoothly and achieve the business outcome
On-call responsibilities to respond to emergency situations and scheduled maintenance
Contribute to and maintain documentation for systems, processes, procedures, and infrastructure configuration.
What you bring (Skills):
Minimum 2 years of relevant experience in CloudOps (mainly MS Azure), infrastructure management, and support
Strong Windows Systems, SQL Database, and Network administration skills
Strong understanding of cloud applications, concepts, and the storage, compute, and networking components
Readiness to work in 24x7 rotational shift model
Scripting in PowerShell, Linux shell, Python or C#
Proactive and self-motivated, committed to achieving deadlines, meeting SLAs, and producing results
Excellent customer service and people skills. Excellent analytical, written, and oral communication and relationship building skills
Ability to be a good listener, and to understand customer issues. Ability to provide innovative workarounds or design a solution to fix a customer’s problem
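The 24x7 rotational shift model mentioned above is easy to reason about as modular arithmetic over shifts since a fixed epoch. A hypothetical sketch; the team names, epoch, and weekly cadence are all illustrative:

```python
from datetime import date

def on_call(engineers, day, epoch=date(2024, 1, 1), shift_days=7):
    """Return the engineer on call for `day` in a fixed-length rotation."""
    shifts_elapsed = (day - epoch).days // shift_days
    return engineers[shifts_elapsed % len(engineers)]

team = ["asha", "ben", "chitra"]
print(on_call(team, date(2024, 1, 1)))   # asha (shift 0)
print(on_call(team, date(2024, 1, 8)))   # ben  (shift 1)
print(on_call(team, date(2024, 1, 22)))  # asha (shift 3 wraps around)
```

Scheduling tools layer overrides and handoff notifications on top of exactly this calculation.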
Great if you know (Skills):
Comfortable using TFS (Team Foundation Server) and Azure DevOps workflows
Comfortable with Visual Studio and SQL Server Data Tools
Exposure to deployment using Azure cloud console and ARM templates
Experience in handling Azure AD & RBAC related activities will be a plus
Comfortable with basic DevOps concepts like CI/CD, and Infra-automation
A higher degree of autonomy, startup culture & small teams
Opportunities to become an expert in emerging technologies
Remote working options for the right maturity level
Competitive salary & family benefits
Performance-based career advancement
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.
We are a DevOps-focused organization, helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration, and delivery.
Benefits Working With Us:
- Health & Wellbeing
- Learn & Grow
- Celebrate Achievements
- Financial Wellbeing
- Medical and accidental cover
- Flexible working hours
- Sports club & much more
In-depth knowledge and hands-on experience with all of the AWS services and other similar cloud services
Strong knowledge of core architectural concepts including distributed computing , scalability, availability, and performance to recommend the best backend solutions for our products
Preferred AWS Certifications:
- AWS Solutions Architect Professional/Associate
- AWS DevOps Engineer Professional
- AWS SysOps Administrator Associate
- AWS Developer Associate
- ITCAN is looking for an AWS Solution Architect who will be responsible for the development of scalable, optimized, and reliable backend solutions using AWS services for all our products. You will ensure that our products consume AWS services in the most effective manner. Therefore, a commitment to collaborative problem solving, sophisticated design, and quality products is important.
- Analyse requirements and devise innovative, efficient, and cost-effective architecture using AWS components and services that ensure scalability, availability, and high performance.
- Develop automation and deployment utilities using Ruby, Bash, and shell scripting, and implement CI/CD pipelines using Jenkins, CodeDeploy, Git, CodePipeline, CodeCommit, etc., to ensure seamless deployment with no downtime.
- Redesign architectures end-to-end seamlessly by working through major software upgrades such as Apache.
- Ensure an always-running network with the ability to set up redundant DNS systems with failover capabilities.
- Ensure the AWS services consumed are aligned with best practices to ensure higher availability and security along with optimal cost utilization.
- Using AWS-managed services, implement ELK systems end-to-end.
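The "redundant DNS systems with failover capabilities" bullet above boils down to probing endpoints in priority order and routing to the first healthy one. A minimal sketch with hypothetical hostnames; in a real setup the health map would be fed by monitoring probes and the answer served as a DNS record:

```python
def pick_endpoint(priority_order, health):
    """Return the first healthy endpoint, as a DNS failover policy would."""
    for endpoint in priority_order:
        if health.get(endpoint, False):
            return endpoint
    raise RuntimeError("no healthy endpoint available")

priority = ["primary.example.com", "secondary.example.com"]
health = {"primary.example.com": False, "secondary.example.com": True}
print(pick_endpoint(priority, health))  # secondary.example.com
```

Managed DNS services implement this as health-checked failover routing policies with configurable TTLs.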
Currently, Indus OS has a user base of over 12 million on the back of 10+ smartphone brand partnerships with leading OEMs such as Samsung, Gionee, iTel, Micromax, Intex, Karbonn, and others. The Indus platform is available in English & 23 Indian regional languages and is intended to digitally connect the next 1 billion people in emerging markets.
Indus App Bazaar: Indus also has its very own app marketplace called Indus App Bazaar, with over 400,000 applications. Indus App Bazaar is India's largest indigenous app store, available in 12 languages and designed to suit the specific requirements of Indian consumers. The recent partnership with Samsung will see Indus App Bazaar power Samsung's new Galaxy Apps Store in all Samsung devices. With this partnership, we will be able to reach a user base of over 150 million by 2021.
Indus OS in News:
Financial Express: http://bit.ly/29FDQg5
Economic Times: http://bit.ly/2ayTvD4
- Developing automation for the various deployments core to our business
- Documenting run books for various processes / improving knowledge bases
- Identifying technical issues, communicating and recommending solutions
- Miscellaneous support (user account, VPN, network, etc)
- Develop continuous integration / deployment strategies
- Production systems deployment/monitoring/optimization
- Management of staging/development environments
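One common continuous-deployment strategy behind bullets like "develop continuous integration / deployment strategies" is blue/green: deploy the new release to the idle color, then switch traffic over. A toy state-machine sketch; the color names and version labels are illustrative:

```python
def idle_color(live):
    """The color not serving traffic is the deployment target."""
    return "green" if live == "blue" else "blue"

def cutover(state):
    """Deploy the new version to the idle color, then shift traffic to it."""
    target = idle_color(state["live"])
    state["versions"][target] = state["new_version"]
    state["live"] = target  # traffic switch; old color stays warm for rollback
    return state

state = {"live": "blue", "versions": {"blue": "v1", "green": None}, "new_version": "v2"}
state = cutover(state)
print(state["live"], state["versions"])  # green {'blue': 'v1', 'green': 'v2'}
```

Keeping the previous color warm is what makes rollback a traffic switch rather than a redeploy.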
What you know :
- Ability to work with a wide variety of open source technologies and tools
- Ability to code/script (Python, Ruby, Bash)
- Experience with systems and IT operations
- Comfortable with frequent incremental code testing and deployment
- Strong grasp of automation tools (Chef, Packer, Ansible, or others)
- Experience with cloud infrastructure and bare metal systems
- Experience optimizing infrastructure for high availability and low latencies
- Experience with instrumenting systems for monitoring and reporting purposes
- Well versed in software configuration management systems (git, others)
- Experience with cloud providers (AWS or other) and tailoring apps for cloud deployment
- Data management skills
- Degree in Computer Engineering or Computer Science or 2-5 years equivalent experience in systems administration or DevOps roles.
- Work conducted is focused on business outcomes
- Can work in an environment with a high level of autonomy (at the individual and team level)
- Comfortable working in an open, collaborative environment, reaching across functional teams.
Our Offering :
- True start-up experience - no bureaucracy and a ton of tough decisions that have a real impact on the business from day one.
- The camaraderie of an amazingly talented team that is working tirelessly to build a great OS for India and surrounding markets.
- Awesome benefits, social gatherings etc.
- Work with intelligent, fun and interesting people in a dynamic start-up environment.