Job Description
BUDGET: 20 LPA (MAX)
What you will do - Key Responsibilities
- The DevOps architect will be responsible for testing, QA, and debugging support across the various server-side and Java software/servers for products developed or procured by the company. They will debug software-integration and on-field deployment issues, and suggest both quick work-arounds ("hacks") and structured solutions/approaches.
- Responsible for scaling the architecture to 10M+ users.
- Will work closely with other team members, including web developers, software developers, application engineers, and product managers, to test and deploy existing products for the various specialists and personnel using the software.
- Will act as Team Lead when necessary to coordinate and organize individual effort toward a successful completion/demo of an application.
- Will be solely responsible for application approval before demos to clients, sponsors, and investors.
Essential Requirements
- Understands the ins and outs of Docker and Kubernetes
- Can architect complex cloud-based solutions using multiple products on either AWS or GCP
- Has a solid understanding of cryptography and secure communication
- Knows their way around Unix systems and can comfortably write complex shell scripts
- Has a solid understanding of processes and thread scheduling at the OS level
- Skilled with Ruby, Python, or similar scripting languages
- Experienced with installing and managing multiple GPUs spread across multiple machines
- At least 5 years of experience managing large server deployments
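As a flavour of the Unix scripting this role calls for, here is a minimal illustrative sketch (in Python rather than shell, with made-up sample data): a helper that parses `ps aux`-style output and flags processes above a memory threshold.

```python
def top_memory_processes(ps_output: str, mem_threshold: float = 1.0):
    """Parse `ps aux`-style text; return (command, %MEM) pairs above a threshold."""
    results = []
    for line in ps_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split(None, 10)  # 11 columns; COMMAND keeps its arguments
        if len(fields) < 11:
            continue
        mem = float(fields[3])  # %MEM column
        if mem >= mem_threshold:
            results.append((fields[10], mem))
    return sorted(results, key=lambda pair: -pair[1])

# Made-up sample output for illustration.
sample = """USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 16852 1024 ? Ss 10:00 0:01 /sbin/init
app 42 2.5 4.2 90000 43000 ? Sl 10:01 1:02 java -jar server.jar
app 43 0.3 0.4 20000 4000 ? S 10:01 0:05 bash worker.sh
"""
print(top_memory_processes(sample))  # [('java -jar server.jar', 4.2)]
```

The same idea is a one-liner in awk; the point is comfort turning raw tool output into actionable data.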
Category
DevOps Engineer (IT & Networking)
Expertise
- DevOps: 3 years (Intermediate)
- Python: 2 years
- AWS: 3 years (Intermediate)
- Docker: 3 years (Intermediate)
- Kubernetes: 3 years (Intermediate)
Classplus is India's largest B2B ed-tech start-up, enabling 1 Lac+ educators and content creators to create their digital identity with their own branded apps. Starting in 2018, we have grown more than 10x in the last year, into India's fastest-growing video learning platform.
Over the years, marquee investors like Tiger Global, Surge, GSV Ventures, Blume, Falcon Capital, RTP Global, and Chimera Ventures have supported our vision. Thanks to our awesome and dedicated team, we achieved a major milestone in March this year when we secured our Series D funding.
Now as we go global, we are super excited to have new folks on board who can take the rocketship higher🚀. Do you think you have what it takes to help us achieve this? Find Out Below!
What will you do?
• Define the overall process, which includes building a team for DevOps activities and ensuring that infrastructure changes are reviewed from an architecture and security perspective
• Create standardized tooling and templates for development teams to create CI/CD pipelines
• Ensure infrastructure is created and maintained using Terraform
• Work with various stakeholders to design and implement infrastructure changes to support new feature sets in various product lines
• Maintain transparency and clear visibility of the costs associated with various product verticals and environments, and work with stakeholders to plan and implement optimizations
• Spearhead continuous experimentation and innovation initiatives to optimize the infrastructure for uptime, availability, latency, and cost
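Cost visibility per product vertical usually starts with consistent resource tagging. A minimal sketch (the tag keys `vertical`/`env` and the billing rows are hypothetical, not from this posting) of aggregating spend by tags:

```python
from collections import defaultdict

def costs_by_vertical(resources):
    """Aggregate monthly cost per (product vertical, environment) from resource tags."""
    totals = defaultdict(float)
    for resource in resources:
        tags = resource.get("tags", {})
        key = (tags.get("vertical", "untagged"), tags.get("env", "unknown"))
        totals[key] += resource["monthly_cost"]
    return dict(totals)

# Hypothetical billing export rows.
billing = [
    {"monthly_cost": 1200.0, "tags": {"vertical": "video", "env": "prod"}},
    {"monthly_cost": 300.0, "tags": {"vertical": "video", "env": "staging"}},
    {"monthly_cost": 800.0, "tags": {"vertical": "payments", "env": "prod"}},
    {"monthly_cost": 50.0, "tags": {}},  # untagged spend surfaces explicitly
]
print(costs_by_vertical(billing))
```

Surfacing the "untagged" bucket explicitly is the usual lever for driving tagging discipline across teams.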
You should apply, if you
1. Are a seasoned veteran: Have managed infrastructure at scale running web apps, microservices, and data pipelines using tools and languages like JavaScript (NodeJS), Go, Python, Java, Erlang, Elixir, C++, or Ruby (experience in any one of them is enough)
2. Are a perfectionist: Have a strong bias for automation and take the time to think about the right way to solve a problem rather than reaching for quick fixes or band-aids
3. Bring your A-game: Have hands-on experience designing/implementing infrastructure with GCP services like Compute, Database, Storage, Load Balancers, API Gateway, Service Mesh, Firewalls, Message Brokers, Monitoring, and Logging, plus experience setting up backups, patching, and DR planning
4. Are up with the times: Have expertise in one or more cloud platforms (Amazon Web Services, Google Cloud Platform, or Microsoft Azure), and have experience creating and managing infrastructure entirely through a tool like Terraform
5. Have it all at your fingertips: Have experience building CI/CD pipelines using Jenkins and Docker for applications running mainly on Kubernetes, plus hands-on experience managing and troubleshooting applications running on K8s
6. Have nailed the data storage game: Good knowledge of relational and NoSQL databases (MySQL, Mongo, BigQuery, Cassandra…)
7. Bring that extra zing: Have the ability to program/script, and strong fundamentals in Linux and networking
8. Know your toys: A good understanding of microservices architecture and Big Data technologies, plus experience with highly available distributed systems, scaling data-store technologies, and creating multi-tenant and self-hosted environments, is a plus
Being Part of the Clan
At Classplus, you’re not an “employee” but a part of our “Clan”. So, you can forget about being bound by the clock as long as you’re crushing it workwise😎. Add to that some passionate people working with and around you, and what you get is the perfect work vibe you’ve been looking for!
It doesn’t matter how long your journey has been or your position in the hierarchy (we don’t do Sirs and Ma’ams); you’ll be heard, appreciated, and rewarded. One can say, we have a special place in our hearts for the Doers! ✊🏼❤️
Are you a go-getter with the chops to nail what you do? Then this is the place for you.
Multiplier enables companies to employ anyone, anywhere, in a few clicks. Our SaaS platform combines the multi-local complexities of hiring & paying employees anywhere in the world, and automates everything. We are passionate about creating a world where people can get a job they love, without having to leave the people they love.

We are an early-stage start-up with a "Day one" attitude, and we are building a team that will make Multiplier the market leader in this space. Every day is an exciting one at Multiplier right now, because we are figuring out a real problem in the market and building a first-of-its-kind product around it. We are looking for smart and talented people who will add to our collective energy and share the same excitement in making Multiplier a big deal. We are headquartered in Singapore, but our team is remote.
What will I be doing? 👩💻👨💻
Owning and managing our cloud infrastructure on AWS.
Working as part of product development from inception to launch, and owning deployment pipelines through to site reliability.
Ensuring a highly available production site with proper alerting, monitoring, and security in place.
Creating an efficient environment for product development teams to build, test, and deploy features quickly, by providing multiple environments for testing and staging.
Using infrastructure as code and the best DevOps methods and tools to innovate and keep improving.
Creating an automation culture and adding automation wherever it is needed.
What do I need? 🤓
4 years of industry experience in a similar DevOps role, preferably as part of a SaaS product team. You can demonstrate the significant impact your work has had on the product and/or the team.
Deep knowledge of AWS and the services available. 2 years of experience building complex architecture on cloud infrastructure.
Exceptional understanding of containerisation technologies and Docker. Hands-on experience with Kubernetes, AWS ECS, and AWS EKS.
Experience with Terraform or any other infrastructure-as-code solution.
Comfortable using at least one high-level programming language such as Java, JavaScript, or Python. Hands-on experience scripting in Bash, Groovy, and others.
Good understanding of security in web technologies and cloud infrastructure.
Ability to work on, and enjoy solving, problems of a very complex nature.
Willingness to quickly learn and use new technologies or frameworks.
Clear and responsive communication.
The candidate must demonstrate a high level of ownership, integrity, and leadership skills, and be flexible and adaptive with a strong desire to learn & excel.
Required Skills:
- Strong experience working with tools and platforms like Helm charts, CircleCI, Jenkins, and/or Codefresh
- Excellent knowledge of AWS offerings around Cloud and DevOps
- Strong expertise in containerization platforms like Docker and container orchestration platforms like Kubernetes & Rancher
- Should be familiar with leading Infrastructure as Code tools such as Terraform, CloudFormation, etc.
- Strong experience in Python, Shell Scripting, Ansible, and Terraform
- Good command of monitoring tools like Datadog, Zabbix, ELK, Grafana, CloudWatch, Stackdriver, Prometheus, JFrog, Nagios, etc.
- Experience with Linux/Unix systems administration.
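Working with monitoring stacks like Prometheus often means handling its text exposition format directly. A minimal, simplified sketch (the metric names and payload are made up, and real payloads can carry timestamps this parser ignores):

```python
def parse_prom_metrics(text: str) -> dict:
    """Parse a (simplified) Prometheus text-format payload into {metric: value}.

    Skips HELP/TYPE comment lines and assumes no trailing timestamps.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")  # value is the last whitespace-separated token
        metrics[name] = float(value)
    return metrics

# Made-up scrape payload.
payload = """# HELP process_cpu_seconds_total Total user and system CPU time.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 12.47
http_requests_total{method="get",code="200"} 1027
"""
metrics = parse_prom_metrics(payload)
print(metrics["process_cpu_seconds_total"])  # 12.47
```

In practice you would use a client library or PromQL rather than hand-parsing, but being able to read the raw format is handy when debugging scrape targets.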
The candidate must have 2-3 years of experience in the domain. The responsibilities include:
● Deploying system on Linux-based environment using Docker
● Manage & maintain the production environment
● Deploy updates and fixes
● Provide Level 1 technical support
● Build tools to reduce occurrences of errors and improve customer experience
● Develop software to integrate with internal back-end systems
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
● Develop scripts to automate visualization
● Design procedures for system troubleshooting and maintenance
● Experience working on Linux-based infrastructure
● Excellent understanding of the MERN stack, Docker & Nginx (Node.js is good to have)
● Configuring and managing databases such as Mongo
● Excellent troubleshooting skills
● Experience working with AWS/Azure/GCP
● Working knowledge of various tools, open-source technologies, and cloud services
● Awareness of critical DevOps concepts and Agile principles
● Experience with CI/CD pipelines
As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable.
Specific responsibilities will include:
- Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible.
- Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications.
- Ensuring that the organization's systems are secure and compliant with industry standards.
- Collaborating with software developers to design and implement infrastructure as code.
- Providing mentorship and technical guidance to team members.
- Troubleshooting and resolving technical issues in collaboration with other IT professionals.
- Participating in the development and maintenance of the organization's disaster recovery and incident response plans.
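Day-to-day Kubernetes management like the deployment and scaling above often gets wrapped in small, reviewable automation. A minimal sketch (the deployment name and defaults are hypothetical) that builds a `kubectl scale` command and only executes it when asked:

```python
import subprocess

def scale_deployment(name, replicas, namespace="default", dry_run=True):
    """Build (and optionally execute) a `kubectl scale` command for a Deployment."""
    if replicas < 0:
        raise ValueError("replicas must be non-negative")
    cmd = ["kubectl", "scale", f"deployment/{name}",
           f"--replicas={replicas}", "-n", namespace]
    if dry_run:
        return cmd  # return the command for review instead of running it
    return subprocess.run(cmd, check=True, capture_output=True, text=True)

print(scale_deployment("web-api", 5))
```

The dry-run default keeps the helper safe to test without a cluster; in a GitOps setup the same change would instead go through a manifest commit.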
To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment.
DevOps Engineer
The DevOps team is one of the core technology teams of Lumiq.ai and is responsible for managing network activities, automating Cloud setups and application deployments. The team also interacts with our customers to work out solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how you can use various technologies to improve user experience, then Lumiq is the place of opportunities.
Job Description
- Explore the newest innovations in scalable and distributed systems.
- Help design the architecture of the project, solutions to existing problems, and future improvements.
- Make the cloud infrastructure and services smart by implementing automation and trigger-based solutions.
- Interact with Data Engineers and Application Engineers to create continuous integration and deployment frameworks and pipelines.
- Play around with large clusters on different clouds to tune your jobs or to learn.
- Research new technologies, prove out concepts, and plan how to integrate or upgrade.
- Be part of discussions on other projects, to learn or to help.
Requirements
- 2+ years of experience as a DevOps Engineer.
- You understand everything from physical networking to software-defined networking.
- You like containers and open-source orchestration systems like Kubernetes and Mesos.
- Experience securing systems by creating robust access policies and enforcing network restrictions.
- Knowledge of how applications work, which is essential for designing distributed systems.
- Experience contributing to open-source projects, having discussed shortcomings or problems with the community on several occasions.
- You understand that provisioning a virtual machine is not DevOps.
- You know you are not a SysAdmin but a DevOps Engineer: the person behind developing operations so the system runs efficiently and scalably.
- Exposure to private clouds, subnets, VPNs, peering, and load balancers, and hands-on experience working with them.
- You check the logs before screaming about an error.
- Multiple screens make you more efficient.
- You are a doer who doesn't say the word "impossible".
- You understand the value of documenting your work.
- You understand the Big Data ecosystem and how you can leverage the cloud for it.
- You know these buddies: #airflow, #aws, #azure, #gcloud, #docker, #kubernetes, #mesos, #acs
BlueOptima’s vision is to become the global reference for the optimisation of the performance of Software Engineers across all industries. We provide industry-leading objective metrics in software development. We enable large organisations to deliver better software, faster and at lower cost, with technology that pushes the limits of what has been done before.
We are a global company which has consistently doubled in headcount and revenue YoY, with no external investment. We currently are located in 4 countries: London (our HQ), Mexico, India and the US. A total number of 250+ employees (and increasing every day) from 34 different nationalities and with over 25 languages spoken.
We promote an open-minded environment and encourage our employees to create their own success story in this high-performance environment.
Location: Bangalore
Department: DevOps
Job Summary:
We are looking for skilled and talented engineers to join our Platform team and directly contribute to Continuous Delivery, improving the state of the art in CI/CD and observability within BlueOptima.
As a Senior DevOps Engineer, you will define and outline CI/CD-related aspects, collaborate with application teams on training and enforcing CI/CD best practices, and directly implement, maintain, and consult on the observability and monitoring framework that supports the needs of multiple internal stakeholders.
Your team: The Platform team at BlueOptima works across product lines and is responsible for providing a scalable technology platform that the Product team uses to build their applications, improve their performance, or even improve the SDLC by improving the application delivery pipeline.
The Platform team is also responsible for driving technology adoption across the product development team. The team works on components that are common across product lines, such as IAM (Identity & Access Management), auto-scaling, APM (Application Performance Monitoring), and CI/CD.
Responsibilities and tasks:
- Define & outline CI/CD and related aspects
- Own & improve the state of the build process to reduce manual intervention
- Own & improve the state of deployment to make it 100% automated
- Define guidelines and standards for the automated testing required for a good CI/CD pipeline, and ensure alignment on an ongoing basis (includes artifact generation, promotions, etc.)
- Automate deployment and rollback in the production environment
- Collaborate with engineering teams, application developers, management, and infrastructure teams to assess near- and long-term monitoring needs, and provide them with tooling to improve the observability of applications in production
- Keep an eye on emerging observability tools, trends, and methodologies, and continuously enhance our existing systems and processes
- Choose the right set of tools for a given problem and apply them across the applications concerned
- Collaborate with the application teams on the following:
  - Defining and enforcing a logging standard
  - Defining the metrics applications should track, and helping application teams visualise them in Grafana (or similar tools)
  - Defining alerts for application health monitoring in production
  - Tooling like APM, E2E, etc.
- Continuously improve the state of the art on the above
- Assist in scheduling and hosting regular tool-training sessions to improve tool adoption and best practices, making sure training materials are maintained
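A logging standard like the one described above usually means structured, machine-parseable output. A minimal sketch of one possible baseline (the `service` field and logger names are hypothetical, not BlueOptima's actual standard), using Python's stdlib `logging`:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON line, a common baseline for a logging standard."""

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # `service` arrives via the `extra` kwarg; default keeps output well-formed
            "service": getattr(record, "service", "unknown"),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("payments")
log.addHandler(handler)
log.warning("retry budget exhausted", extra={"service": "billing-api"})
```

One JSON object per line keeps records trivially ingestible by aggregators like ELK or Splunk, and makes "define metrics/alerts on logs" a query problem rather than a regex problem.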
Qualifications
What You Need to Succeed at BlueOptima:
- Minimum bachelor's degree in Computer Science or equivalent
- Demonstrable years of experience with implementation, operations, maintenance of IT systems and/or administration of software functions in multi-platform and multi-system environments.
- At least 1 year of experience leading or mentoring a small team.
- Demonstrable experience having developed containerized application components, using docker or similar solutions in previous roles
- Have extensive experience with metrics and logging libraries and aggregators, data analysis and visualization tools.
- Experience in defining, creating, and supporting monitoring dashboards
- 2+ years of experience with CI tools and building pipelines using Jenkins.
- 2+ years of experience with monitoring and observability tools and methodologies, using products such as Grafana, Prometheus, ElasticSearch, Splunk, AppDynamics, Dynatrace, Nagios, Graphite, Datadog, etc.
- Ability to write and read simple scripts using Python / shell scripting.
- Familiarity with configuration-management tools such as Ansible.
- Ability to work autonomously with minimal supervision
- Strong oral and written communication skills
Additional information
Why join our team?
Culture and Growth:
- Global team with a creative, innovative and welcoming mindset.
- Rapid career growth and opportunity to be an outstanding and visible contributor to the company's success.
- Freedom to create your own success story in a high-performance environment.
- Training programs and Personal Development Plans for each employee
Benefits:
- 32 days of holidays - this includes public and religious holidays
- Contributions to your Provident Fund which can be matched by the company above the statutory minimum as agreed
- Private Medical Insurance provided by the company
- Gratuity payments
- Claim Mobile/Internet expenses and Professional Development costs
- Leave Travel Allowance
- Flexible Work from Home policy - 2 days home p/w
- International travel opportunities
- Global annual meet-up (the most recent meetups were held in Cancun and Thailand, Oct 2022)
- High-quality equipment (ergonomic chairs and 32" screens)
- Pet friendly offices
- Creche Policy for working parents.
- Paternity and Maternity leave.
Stay connected with us on LinkedIn (https://www.linkedin.com/company/blueoptima) or keep an eye on our career page (https://www.blueoptima.com/careers) for future opportunities!
- Must have a minimum of 3 years of experience in managing AWS resources and automating CI/CD pipelines.
- Strong scripting skills in PowerShell, Python, or Bash, and the ability to build and administer CI/CD pipelines.
- Knowledge of infrastructure tools like CloudFormation, Terraform, and Ansible.
- Experience with microservices and/or event-driven architecture.
- Experience using containerization technologies (Docker, ECS, Kubernetes, Mesos or Vagrant).
- Strong practical Windows and Linux system administration skills in the cloud.
- Understanding of DNS, NFS, TCP/IP and other protocols.
- Knowledge of secure SDLC, OWASP top 10 and CWE/SANS top 25.
- Deep understanding of WebSockets and how they function. Hands-on experience with ElastiCache, Redis, ECS, or EKS. Installation, configuration, and management of Apache or Nginx web servers and Apache Tomcat application servers; configuring SSL certificates; setting up reverse proxies.
- Exposure to RDBMS (MySQL, SQL Server, Aurora, etc.) is a plus.
- Exposure to programming languages like JAVA, PHP, SQL is a plus.
- AWS Developer or AWS SysOps Administrator certification is a plus.
- AWS Solutions Architect Certification experience is a plus.
- Experience building Blue/Green, Canary, or other zero-downtime deployment strategies; an advanced understanding of VPC, EC2, Route53, IAM, and Lambda is a plus.
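Canary and Blue/Green strategies both come down to controlled traffic shifting between two stacks. A minimal sketch of the shifting schedule itself (the step percentages are an illustrative choice, not a prescribed rollout plan):

```python
def canary_weights(steps=(5, 25, 50, 100)):
    """Yield (blue %, green %) traffic splits for a gradual shift to the green stack."""
    for green in steps:
        if not 0 <= green <= 100:
            raise ValueError("each step must be a percentage")
        yield (100 - green, green)

# Each pair would be applied to, e.g., Route53 weighted records or an ALB
# target-group split, with health checks gating progression between steps.
for blue, green in canary_weights():
    print(f"blue={blue}% green={green}%")
```

Blue/Green is the degenerate case of this schedule: a single step straight to `(0, 100)`, with instant rollback by restoring `(100, 0)`.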
- Develop and maintain IaC using Terraform and Ansible
- Draft design documents that translate requirements into code.
- Deal with challenges associated with scale.
- Assume responsibilities from technical design through technical client support.
- Manage expectations with internal stakeholders and context-switch in a fast paced environment.
- Thrive in an environment that uses Elasticsearch extensively.
- Keep abreast of technology and contribute to the engineering strategy.
- Champion best development practices and provide mentorship
An AWS Certified Engineer with strong skills in:
- Terraform
- Ansible
- *nix and shell scripting
- Elasticsearch
- Circle CI
- CloudFormation
- Python
- Packer
- Docker
- Prometheus and Grafana
- Challenges of scale
- Production support
- Sharp analytical and problem-solving skills.
- Strong sense of ownership.
- Demonstrable desire to learn and grow.
- Excellent written and oral communication skills.
- Mature collaboration and mentoring abilities.