Puppet Jobs in Bangalore (Bengaluru)
Role Description:
● Own, deploy, configure, and manage infrastructure environments and/or applications in both private and public clouds through cross-technology administration (OS, databases, virtual networks), scripting, and monitoring of automation execution.
● Manage incidents with a focus on service restoration.
● Act as the primary point of contact for all compute, network, storage, security, or
automation incidents/requests.
● Manage rollout of patches and release management schedule and implementation.
Technical experience:
● Strong knowledge of scripting languages such as Bash, Python, and Golang.
● Expertise in using command line tools and shells
● Strong working knowledge of Linux/UNIX and related applications
● Knowledge in implementing DevOps and having an inclination towards automation.
● Sound knowledge of infrastructure-as-code approaches with Puppet, Chef, Ansible, Terraform, or Helm (preference for Terraform, Ansible, and Helm).
● Must have strong experience in technologies such as Docker, Kubernetes, OpenShift,
etc.
● Working with REST/gRPC/GraphQL APIs
● Knowledge in networking, firewalls, network automation
● Experience with Continuous Delivery pipelines - Jenkins/JenkinsX/ArgoCD/Tekton.
● Experience with Git, GitHub, and related tools
● Experience in at least one public cloud provider
Skills/Competencies
● Foundation: OS (Linux/Unix) & network concepts and troubleshooting
● Automation: Bash or Python or Golang
● CI/CD & Config Management: Jenkins, Ansible, ArgoCD, Helm, Chef/Puppet, Git/GitHub
● Infrastructure as Code: Terraform
● Platform: Docker, K8s, VMs
● Databases: MySQL, PostgreSQL; datastores (MongoDB, Redis, Aerospike) good to have
● Security: Vulnerability Management and Golden Image
● Cloud: Deep working knowledge on any public cloud (GCP preferable)
● Monitoring Tools: Prometheus, Grafana, New Relic
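Purely as an illustration of the monitoring item above: a minimal Python exporter sketch using the prometheus_client library, exposing a single gauge for Prometheus to scrape. The metric name, port, and values are hypothetical placeholders, not part of any specific role.

    from prometheus_client import Gauge, start_http_server
    import random
    import time

    # Hypothetical metric; in practice this would reflect a real measurement.
    QUEUE_DEPTH = Gauge("app_queue_depth", "Current depth of the work queue")

    if __name__ == "__main__":
        start_http_server(8000)  # exposes /metrics for Prometheus to scrape
        while True:
            QUEUE_DEPTH.set(random.randint(0, 50))  # replace with real data
            time.sleep(15)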
One of India's largest unicorn e-commerce companies is looking to grow its tech team. It has recently raised Rs. 350 Cr, is backed by some of the industry's veterans, and is expanding quickly with over 10,000 employees.
Designation: Senior DevOps Engineer
Location: Bangalore
Experience: 3-6 years
Experience / Skills Required:
• 2+ years of experience with installation, configuration and management of Linux
systems.
• Experience with AWS.
• Linux/Unix: writing shell scripts and knowledge of Linux commands.
• Experience with PHP.
• IT Infrastructure - Setting up Tools, Servers and Database for application
deployment, monitoring and operations.
• Knowledge of setting up distributed systems such as Kafka, Solr, and Aerospike, including in the cloud.
• Build process - Jenkins and other build and deployment tools.
• Servers - app servers (Nginx) and databases (MySQL).
• Tools - Git, Puppet, Chef, Ansible.
• Knowledge of Docker.
• Knowledge of tools like New Relic and log management tools.
Roles and Responsibilities:
• Show responsibility in handling critical production issues, if needed, respond and
support troubleshooting of issues during nights and weekends.
• Assist the developers and on calls for other teams with post mortem, follow up
and review of issues affecting production availability.
• Excellent communication and strong application troubleshooting and problem-
solving skills.
- Develop runbooks for newly submitted development requirements
- Actively perform enhancements and deployments for customer automation enablement
- Support StackStorm and maintain the standard runbook development procedures
- Troubleshoot day-to-day issues for runbook execution failures.
- Will support software developers building product enhancements, repair automation scripts, tools, and provide technical analysis of any failures.
Requirements:
The candidate should have below experiences:
- Developer of Grade 6, Grade 7 or above for automation development role
- Should have hands-on knowledge of scripting (PowerShell, Bash, etc.).
- Should have hands-on knowledge of developing use cases/scripts in Python/YAML.
- Should have good experience working on Unix operating systems, preferably RHEL or CentOS.
- Should have good knowledge of installing applications on Unix operating systems and troubleshooting them.
- Should have integration experience with ITSM tools such as ServiceNow, Helix, etc. (see the sketch after this list).
- Should have good infrastructure knowledge to be able to present network/authentication-level prerequisites for implementing automation solutions.
- Preference would be given to candidates having experience with tools like Chef, StackStorm, and Puppet.
- Basic knowledge of databases (MongoDB, SQL, etc.) is required.
- Should be comfortable working in our Bangalore office from Day 1.
- Should possess good communication skills.
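To illustrate the ITSM integration item above (referenced in that list): a hedged Python sketch that queries ServiceNow's Table API with the requests library. The instance URL, credentials, and query are placeholders; a real runbook would pull credentials from a secrets store.

    import requests

    # Placeholders: instance URL and credentials are illustrative only.
    INSTANCE = "https://example.service-now.com"
    AUTH = ("automation_user", "change-me")

    def open_p1_incidents():
        """Return open priority-1 incidents via the ServiceNow Table API."""
        resp = requests.get(
            f"{INSTANCE}/api/now/table/incident",
            params={"sysparm_query": "active=true^priority=1", "sysparm_limit": 10},
            auth=AUTH,
            headers={"Accept": "application/json"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["result"]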
Job Description:
• Contribute to customer discussions in collecting the requirement
• Engage in internal and customer POCs to realize the potential solutions envisaged for the customers.
• Design/develop/migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and its integrations with applications and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects.
Qualification:
• Maintain current, high-level technical knowledge of the entire VMware product portfolio and future product direction, along with in-depth knowledge where required.
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST APIs and/or Python programming; TypeScript/Node.js backend experience.
• Experience with Kubernetes
• Familiarity with DevOps tools like Ansible, Puppet, Terraform
• End to end experience in Architecture, Design and Development of VMware Cloud Automation suite with good exposure to VMware products and/or Solutions.
• Hands-on experience in automation, coding, debugging and release.
• Sound process knowledge from requirement gathering, implementation, deployment and Support.
• Experience in working with global teams, customers and partners with solid communication skills.
• VMware CMA certification would be a plus
• Academic background in MS/BE/B.Tech in IT/CS/ECE/EE would be preferred.
- Work towards improving the following four verticals for the company's workflows and products: scalability, availability, security, and cost.
- Help in provisioning, managing, optimizing cloud infrastructure in AWS (IAM, EC2, RDS, CloudFront, S3, ECS, Lambda, ELK etc.)
- Work with the development teams to design scalable, robust systems using cloud architecture for both 0-to-1 and 1-to-100 products.
- Drive technical initiatives and architectural service improvements.
- Be able to predict problems and implement solutions that detect and prevent outages.
- Mentor/manage a team of engineers.
- Design solutions with failure scenarios in mind to ensure reliability.
- Document rigorously to keep track of all changes/upgrades to the infrastructure and share knowledge with the rest of the team
- Identify vulnerabilities during development and provide actionable information that empowers developers to remediate them
- Automate the build and testing processes to consistently integrate code (a minimal sketch follows this list)
- Manage changes to documents, software, images, large web sites, and other collections of code, configuration, and metadata among disparate teams
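As a sketch of the build-and-test automation item above: a minimal Python CI step that runs the test suite and an image build, failing the pipeline on any non-zero exit. The pytest and docker commands are assumptions about the stack, not a prescribed toolchain.

    import subprocess
    import sys

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode

    def main():
        steps = [
            ["python", "-m", "pytest", "-q"],          # unit tests
            ["docker", "build", "-t", "app:ci", "."],  # container image build
        ]
        for step in steps:
            if run(step) != 0:
                sys.exit(1)  # non-zero exit marks the CI job as failed

    if __name__ == "__main__":
        main()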
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository, DevOps tools like Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-host cloud environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters and apply patches when necessary.
- Work on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve a successful implementation of integrated solutions within the portfolio.
- Have the necessary technical and professional expertise.
What are we looking for?
- Minimum 5-12 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipeline.
- Experience in DevOps automation tools. Well versed with DevOps Frameworks, and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience with and a good understanding of any cloud such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with Jira is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining the quality of your work, interact and share your ideas, and learn a great deal on the job. Work with a team of highly talented young professionals and enjoy the benefits of being here.
We are
It is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, it offers a suite of three products - Plum, Empuls, and Compass - and works with more than 2,000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, it is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We will, however, attempt to maintain a reasonable time window for successfully closing this requirement. Candidates will be kept informed and updated on the feedback and application status.
About Vyapar:
We are a technology and innovation company in the fintech space, delivering business accounting software to Micro, Small & Medium Enterprises (MSMEs). With more than 5 million users across 140 countries, we are one of the fastest growing companies in this space. We take the complexity out of invoicing, inventory management, and accounting, making it so simple that small businesses can spend less time on manual bookkeeping and more time focusing on the areas of business that matter.
Role Summary:
Vyapar's Engineering team builds the technology platform that eases and digitizes our customers' bookkeeping and enables the transition of cumbersome accounting data from general bookkeeping to a digitized always available resource.
We are looking for a highly motivated and experienced DevOps Engineer to develop the infrastructure, manage the operations of the services and work on software engineering tasks of design and development of systems that increase the reliability, scalability and reduce operational overhead through automation.
Key Responsibilities:
- Design and implement an infrastructure for delivering and running web, mobile applications.
- Scale and optimize a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems.
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack.
- Support several Linux servers running our SaaS platform stack on AWS and Kubernetes.
- Define and set processes to identify performance bottlenecks & scaling pitfalls.
- Manage robust monitoring and alerting infrastructure (a minimal Kubernetes health-check sketch follows this list).
- Sharp and tenacious troubleshooting skills with the ability to identify and fix issues in DevOps processes and product environments.
- Have an "Automate Everything" mindset to support scalable growth.
- Handle challenging problems with a positive "can do" attitude.
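As a minimal sketch of the monitoring responsibility referenced above, assuming the official kubernetes Python client and a reachable cluster: it flags deployments whose ready replicas fall short of the desired count. The namespace is a placeholder.

    from kubernetes import client, config

    def unhealthy_deployments(namespace="default"):
        config.load_kube_config()  # use load_incluster_config() inside a pod
        apps = client.AppsV1Api()
        bad = []
        for dep in apps.list_namespaced_deployment(namespace).items:
            desired = dep.spec.replicas or 0
            ready = dep.status.ready_replicas or 0
            if ready < desired:
                bad.append((dep.metadata.name, ready, desired))
        return bad

    if __name__ == "__main__":
        for name, ready, desired in unhealthy_deployments():
            print(f"{name}: {ready}/{desired} replicas ready")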
Desired Skills And Requirements
- Must have
- Experience in developing/operating large scale cloud service.
- DevOps exposure and knowledge of tools such as Terraform, Ansible, Jenkins.
- Experience in container technologies such as Kubernetes, Docker, Registry etc.
- Strong technical background in Linux, Virtualization, and Public Cloud.
- Strong technical background in cloud networking, storage, and security.
- Strong technical knowledge of monitoring, logging and tracing.
- Familiar with MySQL
- Excellent problem solving and analytical skills.
- Good to have
- Programming/scripting in Python.
- Possess knowledge of integrating security into GCP and AWS infrastructure.
- Have managed at least one latency-critical real-time data pipeline that ingested & served millions of events.
Experience:
- Must have 3 years of experience in handling cloud infrastructure on AWS and deployment automation.
- Extensive Experience in AWS.
- Experience with CI/CD tools and automation.
- Experience in Python coding is a plus.
- Experience deploying applications on public/private/hybrid cloud infrastructure - ie, AWS.
- Experience working with containerization, infrastructure, and orchestration frameworks to incorporate setting up ephemeral and permanent environments as part of our CI/CD workflow using Docker, Terraform, Kubernetes, etc.
- Strong infrastructure experience such as virtualization technologies, operating systems, platform migration, data management and networking.
- Understanding of setting up code quality/scanning tools and incorporating them into CI/CD pipelines.
- Extensive experience working with configuration management tools such as Puppet, Chef, Ansible, etc.
- Familiarity with developing in CloudBees/Jenkins pipelines.
- Top-notch communication and influence skills; must be able to work effectively across the entire department.
Education:
- A full-time B.E/ B.Tech Degree from a recognized university.
- AWS DevOps Certification is a plus.
Job Description
DevOps – Technical Skills
- Excellent understanding of at least one programming language: Ruby, Python, or Java
- Good understanding and hands-on experience in Shell/Bash/YAML scripting
- Experience with CI/CD pipelines and automation processes across different tech stacks and cloud providers (AWS/Azure)
- Experience in Maven and Git workflows
- Hands-on experience working with containerization and orchestration tools such as Docker and Kubernetes
- Good knowledge of any DevOps automation tools like Chef, Ansible, Terraform, Puppet, Fabric, etc.
- Experience managing stakeholders and external interfaces and setting up tools and required infrastructure
- Hands on experience in any Cloud infrastructure like AWS or Azure.
- Strong knowledge and hands on experience in Unix OS
- Able to Identify improvements within existing processes to reduce technical debt.
- Experience in network, server, application status monitoring and troubleshooting.
- Possess good problem solving and debugging skills. Troubleshoot issues and coordinate with development teams to streamline builds.
- Exposure to DevSecOps and Agile principles is a plus
- Good communication and interpersonal skills
Experience: 5 to 7 years
Designation: Assistant Manager
Location: Pune, Bangalore, Mumbai
Notice Period: Immediate to 1 month
Roles and Responsibilities:
- Work towards uptime of Network / Infrastructure to achieve 99.99% availability
- Build dashboards around monitoring
- Work along with the Engineering team to help with long-term infrastructure/network automation needs.
- Deploy infrastructure as code and automate as much as possible
- Responsible for on-call support on a rotation basis
- Responsible for networks, virtual machines, and Kubernetes clusters
Desired Profile:
- 5+ years of DevOps experience managing production Infrastructure via code.
- Networking knowledge in routing, firewall, BGP
- Strong Linux administration skills.
- Experience in architecting around single points of failure and implementing HA solutions end to end.
- Experience with AAA implementation
- Experience with Managing K8s infrastructure
Knowledge/Skills/Abilities:
- Tooling includes, but is not limited to Ansible, Terraform, Docker, and Kubernetes
- Expected proficiency in all tooling throughout the stack
- Excellent Python, bash, and scripting fundamentals
- Experience with modern web services architectures
- GCP/AWS/Azure Certification is an added advantage
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure, and delivers data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer, which address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm/Mesos/Kubernetes/OpenStack)
· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engine, caching, etc.
· A professional certification in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multi-national client time zones.
Will be involved in solution designing from the conceptual stages through the development cycle and deployment.
Be involved in development operations and support internal teams
Improve infrastructure uptime, performance, resilience, reliability through automation
Willing to learn new technologies and work on research-orientated projects
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
www.banyandata.com
As part of the engineering team, you would be expected to have
deep technology expertise with a passion for building highly scalable products.
This is a unique opportunity where you can impact the lives of people across 150+
countries!
Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploying and maintaining in-house/customer systems ensuring high availability,
performance and optimal cost.
• Automate build pipelines, ensuring the right architecture for CI/CD
• Work with engineering leaders to ensure cloud security
• Develop standard operating procedures for various facets of Infrastructure
services (CI/CD, Git Branching, SAST, Quality gates, Auto Scaling)
• Perform & automate regular backups of servers & databases (a minimal sketch follows this list). Ensure rollback and restore capabilities are real-time and zero-downtime.
• Lead the entire DevOps charter for ONE Championship. Mentor other DevOps
engineers. Ensure industry standards are followed.
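As a sketch of the backup automation item referenced above, assuming a PostgreSQL source and the azure-storage-blob SDK: it dumps a database with pg_dump and uploads the dump to a blob container. The host, database, container name, and connection-string handling are placeholders, not part of the posting.

    import datetime
    import subprocess
    from azure.storage.blob import BlobServiceClient

    def backup_postgres(conn_str, host="db.internal", dbname="appdb"):
        stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%S")
        dump_file = f"/tmp/{dbname}-{stamp}.dump"
        # Custom-format dump so pg_restore can be used for recovery.
        subprocess.run(["pg_dump", "-h", host, "-F", "c", "-f", dump_file, dbname],
                       check=True)
        blob = BlobServiceClient.from_connection_string(conn_str).get_blob_client(
            container="db-backups", blob=f"{dbname}/{stamp}.dump")
        with open(dump_file, "rb") as fh:
            blob.upload_blob(fh, overwrite=True)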
Requirements
• Overall 5+ years of experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or equivalent streams from an institute of repute
• Experience in Azure is a must. AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using
Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, Alertmanager, and New Relic
• Good knowledge of source code control (git)
• Expertise in Continuous Integration and Continuous Deployment setup using Azure Pipelines or Jenkins
• Strong experience in programming languages. Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL & NoSQL databases
• Strong Linux fundamentals
• Experience in SonarQube, Locust & BrowserStack is a plus
As a Scala Developer, you are part of the development of the core applications using the Micro Service paradigm. You will join an Agile team, working closely with our product owner, building and delivering a set of Services as part of our order management and fulfilment platform. We deliver value to our business with every release, meaning that you will immediately be able to contribute and make a positive impact.
Our approach to technology is to use the right tool for the job and, through good software engineering practices such as TDD and CI/CD, to build high-quality solutions that are built with a view to maintenance.
Requirements
The Role:
- Build high-quality applications and HTTP based services.
- Work closely with technical and non-technical colleagues to ensure the services we build meet the needs of the business.
- Support development of a good understanding of business requirements and corresponding technical specifications.
- Actively contribute to planning, estimation and implementation of team work.
- Participate in code review and mentoring processes.
- Identify and plan improvements to our services and systems.
- Monitor and support production services and systems.
- Keep up with industry trends and new tools, technologies & development methods with a view to adopting best practices that fit the team and promote adoption more widely.
Relevant Skills & Experience:
The following skills and experience are relevant to the role and we are looking for someone who can hit the ground running in these areas.
- Web service application development in Scala (essential)
- Functional Programming (essential)
- API development and microservice architecture (essential)
- Patterns for building scalable, performant, distributed systems (essential)
- Databases – we use PostgreSQL (essential)
- Common libraries – we use Play, Cats and Slick (essential)
- Strong communication and collaboration skills (essential)
- Performance profiling and analysis of JVM based applications
- Messaging frameworks and patterns
- Testing frameworks and tools
- Docker, virtualisation and cloud computing – we use AWS and Vmware
- Javascript including common frameworks such as React, Angular, etc
- Linux systems administration
- Configuration tooling such as Puppet and Ansible
- Continuous delivery tools and environments
- Agile software delivery
- Troubleshooting and diagnosing complex production issues
Benefits
- Fun, happy and politics-free work culture built on the principles of lean and self organisation.
- Work with large scale systems powering global businesses.
- Competitive salary and benefits.
Note: We are looking for immediate joiners and expect the selected candidate to join within 15 days. Buyout reimbursement is available for applicants with a 30-to-60-day notice period who are ready to join within 15 days.
To build on our success, we are looking for smart, conscientious software developers who want to work in a friendly, engaging environment and take our platform and products forward. In return, you will have the opportunity to work with the latest technologies, frameworks & methodologies in service development in an environment where we value collaboration and learning opportunities.
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.
●Evidence of successful development/engineering team leadership experience.
●Experience of communicating and collaborating in globally distributed teams.
●Ability to write robust, maintainable code in Python and/or Perl.
●Extensive knowledge of Linux, including familiarity with C, UNIX system calls, and low-level O/S and network protocols, as well as block, file, and object storage protocols.
●Experience of using a modern configuration management system, such as Salt Stack, Puppet, or Chef.
●Effective troubleshooting skills across hardware, O/S, network, and storage.
Skills Desired
●Enthusiasm for modern dev tools & practices including Git, Jenkins, automated testing, and continuous integration.
●Management of external vendor resources
●Extensive experience of Linux, including familiarity with C, UNIX system calls, and low-level O/S and network protocols, as well as block, file, and object storage protocols.
●Experience of using a modern configuration management system (examples such as Ansible, Salt Stack, Puppet, or Chef) to automate the management of a large-scale Linux deployment.
●Effective troubleshooting skills across hardware, O/S, network, and storage.
Skills Desired
●Ability to write robust, maintainable code in Python and/or Perl.
●Experience working in a large, multi-national enterprise in any industry vertical, showing experience of communicating and collaborating in globally distributed teams.
●Enthusiasm for modern development tools and practices including Git, Jenkins, automated testing, and continuous integration.
●Experience of designing, implementing and supporting large scale production IaaS platforms.
●Knowledge of building and managing Docker containers in a secure manner.
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities:
This is a highly accountable role and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensure regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• First point of contact for handling customer (may be internal stakeholders) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Develop tooling and processes to drive and improve customer experience, create playbooks.
• Eliminate manual tasks via configuration management.
• Intelligently migrate services from one AWS region to other AWS regions.
• Create, implement and maintain security policies to ensure ISO/ GDPR / SOC / PCI compliance.
• Verify infrastructure Automation meets compliance goals and is current with disaster recovery plan.
• Evangelize configuration management and automation to other product developers.
• Keep themselves updated with upcoming technologies to maintain state-of-the-art infrastructure.
Required Candidate profile :
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.)
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc. (see the CloudWatch sketch after this list).
• Hands on Experience in Docker is a big plus.
• Experience working in an Agile, fast paced, DevOps environment.
• Strong Knowledge in DB such as MongoDB / MySQL / DynamoDB / Redis / Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, Nagios, etc.
• Fluency with version control systems, with a preference for Git
• Strong Linux-based infrastructure and Linux administration experience
• Experience with installing and configuring application servers such as WebLogic, JBoss, and Tomcat.
• Hands-on in logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and train others on technical and procedural topics.
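As an illustration of the CloudWatch item referenced above: a minimal boto3 sketch that creates a CPU alarm on an EC2 instance. The instance ID, region, threshold, and SNS topic ARN are placeholders.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

    # Alarm when average CPU exceeds 80% for two consecutive 5-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="web-1-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
    )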
Job Responsibilities:
This role requires you to work on Linux systems and their associated services which provide the capability for IG to run their trading platform. Team responsibilities include daily troubleshooting and resolution of incidents, operational maintenance, and support for proactive and preventative analysis of Production and Development systems.
1. Managing the Linux infrastructure and web technologies:
   i. Patching and upgrades of Red Hat Linux OS and server firmware.
   ii. General Red Hat Linux system administration and networking.
   iii. Troubleshooting and issue resolution of OS and network stack incidents.
   iv. Configuration management using Puppet and version control (see the sketch after this list).
   v. Systems monitoring and availability.
   vi. Web applications and application routing.
   vii. Web-site infrastructure, content delivery, and security.
2. Day-to-day responsibilities will include completing service requests, responding to Incidents and Problems as they arise, as well as providing day-to-day support and troubleshooting for Production and Development systems.
3. Create a run book of operational processes and follow a support matrix of products.
4. Ensure internal handovers are completed and all OS documentation is updated.
5. Troubleshoot system issues, plan for future capacity, and monitor systems performance.
6. Proactively monitor the Linux platform and own these tools/dashboards.
7. Work with the delivery and engineering teams to develop the platform and technologies, striving to automate where possible.
8. Continuously improve the team, tools, and processes; support regular agile releases of applications and architectural improvements.
9. The role includes participating in a team rota to provide out-of-hours support.
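As a sketch of the Puppet configuration-management item referenced above: a small Python wrapper around a standard puppet agent no-op run. With --detailed-exitcodes, puppet agent exits 0 when the node matches the catalog, 2 when changes would be applied, and 4 or 6 on failures; how the drift is reported here is an assumption, not part of the role description.

    import subprocess

    def puppet_drift_check():
        result = subprocess.run(
            ["puppet", "agent", "--test", "--noop", "--detailed-exitcodes"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            print("Node is in sync with the catalog")
        elif result.returncode == 2:
            print("Drift detected: changes would be applied")
        else:
            print(f"Puppet run failed (exit {result.returncode}):\n{result.stderr}")

    if __name__ == "__main__":
        puppet_drift_check()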
Person Specification:
Ability / Expertise
This position is suited to an engineer with at least 8 years of Red Hat Linux / CentOS systems administration experience who is looking to broaden their range of technologies and work using modern tools and techniques.
We are looking for someone with the right attitude:
• Eager to learn new technologies, tools, and techniques alongside applying their existing skills and judgment.
• Pragmatic approach to balancing different work priorities such as incidents, requests, and troubleshooting.
• Can-do/proactive in improving the environments around them.
• Sets the desired goal and the plans to achieve it.
• Proud of their achievements and keen to improve further.
This will be a busy role in a team so the successful candidate’s behaviors will need to strongly align with our values:
• Champion the client: customer service is a passion, cultivates trust, has clarity and communicates well, works with pace and momentum
• Lead the way: innovative and resilient, strong learning agility and curiosity
• Love what we do: conscientiousness - has high self-discipline, carefulness, thoroughness, and organization; flexible and adaptable
The successful candidate will be able to relate to the statements above and give examples that back them up. We believe that previous achievements signpost a good fit at IG.
Qualifications
Essential:
• At least 4 years' systems administration experience with Red Hat Enterprise Linux / CentOS 5/6/7 managed through a Satellite infrastructure.
• Managed an estate of 1000+ hosts and performed general system administration, networking, backup and restore, monitoring, and troubleshooting functions on that estate.
• 1 year of experience with scripting languages (Bash/Perl/Ruby) and automating tasks with Puppet and Red Hat Satellite. Experience with custom RPM generation.
• Strong analytical and troubleshooting skills. You will have resolved complex systems issues in your last role and have a solid understanding of the tools needed to do so.
• Excellent Communication (Listening, speaking, the transmission of concepts with/without examples, etc).
• Calm under pressure and work to tight deadlines. You will have brought critical production systems back to life.
Roles and Responsibilities
- Managing Availability, Performance, Capacity of infrastructure and applications.
- Building and implementing observability for applications health/performance/capacity.
- Optimizing On-call rotations and processes.
- Documenting “tribal” knowledge.
- Managing infra platforms like Mesos/Kubernetes, CI/CD, observability (Prometheus/New Relic/ELK), cloud platforms (AWS/Azure), databases, and data platform infrastructure
- Providing help in onboarding new services with a production readiness review process.
- Providing reports on service SLOs/error budgets/alerts and operational overhead (see the error-budget sketch after this list).
- Working with Dev and Product teams to define SLO/Error Budgets/Alerts.
- Working with Dev team to have in depth understanding of the application architecture
and its bottlenecks.
- Identifying observability gaps in product services and infrastructure and working with stakeholders to fix them.
- Managing outages, doing detailed RCAs with developers, and identifying ways to avoid such situations.
- Managing/Automating upgrades of the infrastructure services.
- Automate toil work.
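As a sketch of the error-budget reporting referenced above: the arithmetic behind a simple SLO report. The SLO, request counts, and output fields are illustrative numbers only.

    def error_budget_report(slo=0.999, total_requests=10_000_000, failed_requests=6_500):
        budget_ratio = 1 - slo                     # e.g. 0.1% of requests may fail
        allowed_failures = total_requests * budget_ratio
        burn = failed_requests / allowed_failures  # fraction of the budget consumed
        return {
            "allowed_failures": int(allowed_failures),
            "failed_requests": failed_requests,
            "budget_consumed_pct": round(burn * 100, 1),
            "budget_remaining_pct": round((1 - burn) * 100, 1),
        }

    if __name__ == "__main__":
        print(error_budget_report())  # roughly 65% of the budget consumed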
Experience & Skills
- 6+ years of total experience
- Experience as an SRE/DevOps/Infrastructure Engineer on large scale microservices and infrastructure.
- A collaborative spirit with the ability to work across disciplines to influence, learn, and
deliver.
- A deep understanding of computer science, software development, and networking principles.
- Demonstrated experience with languages, such as Python, Java, Golang etc.
- Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
- Extensive experience in DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
- Expertise in GitOps and infrastructure-as-code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, and Ansible.
- Expertise of Amazon Web Services (AWS) and/or other relevant Cloud Infrastructure
solutions like Microsoft Azure or Google Cloud.
- Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker,
Argo etc.
- Experience in managing and deploying containerized environments using Docker,
Mesos/Kubernetes is a plus.
- Mandatory: Docker, AWS, Linux, Kubernetes or ECS
- Prior experience provisioning and spinning up AWS Clusters / Kubernetes
- Production experience to build scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor and optimise production database servers
- Prior work with Cloud Monitoring tools (Nagios, Cacti, CloudWatch etc.)
- Experience with Docker, Kubernetes, Mesos, NoSQL databases (DynamoDB, Cassandra, MongoDB, etc)
- Other Open Source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge on Linux Environment.
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects.
- Working knowledge of configuration management (Chef, Puppet, or Ansible preferred) and continuous integration tools (Jenkins preferred)
- Experience in handling large production deployments and infrastructure.
- DevOps based infrastructure and application deployments experience.
- Working knowledge of the AWS network architecture including designing VPN solutions between regions and subnets
- Hands-on knowledge with the AWS AMI architecture including the development of machine templates and blueprints
- Should be able to validate that the environment meets all security and compliance controls.
- Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, Cost Management Platform.
- Proven written and verbal communication skills.
- Understands and can serve as the technical team lead to oversee the build of the Cloud environment based on customer requirements.
- Previous NOC experience.
- Client Facing Experience with excellent Customer Communication and Documentation Skills
● Responsible for development and implementation of Cloud solutions.
● Responsible for achieving automation & orchestration of tools (Puppet/Chef)
● Monitoring the product's security & health (Datadog/New Relic)
● Managing and maintaining databases (Mongo & Postgres)
● Automating infrastructure using AWS services like CloudFormation
● Provide evidence in Infrastructure Security Audits
● Migrating to container technologies (Docker/Kubernetes)
● Should have knowledge of serverless concepts (AWS Lambda); a minimal handler sketch follows this list
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
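As the minimal handler sketch referenced in the serverless item above, assuming Python on AWS Lambda behind an API Gateway proxy integration; the event shape and response fields follow that convention, and the greeting logic is purely illustrative.

    import json

    def lambda_handler(event, context):
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }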
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas
in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshots projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month Rupees 2,500 will be credited to a Sodexo meal card
Work with the engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation, scalability,
and ensuring maximum availability of server infrastructure
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, and Cassandra.
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and
hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network access and use.
• Design, configure and test computer hardware, networking software, and operating system
software.
• Recommend changes to improve systems and network configurations, and determine hardware or
software requirements related to such changes.