About Quantela
We are a technology company that offers outcomes business models. We empower our customers with the right digital infrastructure to deliver greater economic, social, and environmental outcomes for their constituents.
When the company was founded in 2015, we specialized in smart city technology alone. Today, working with cities and towns, utilities, and public venues, our team of 280+ experts offers a vast array of outcomes business models through technologies like digital advertising, smart lighting, smart traffic, and digitized citizen services.
We pride ourselves on our agility, innovation, and passion to use technology for a higher purpose. Unlike other technology companies, we tailor our offerings (what we can digitize) and the business model (how we partner with our customers to deliver that digitization) to drive measurable impact where our customers need it most. Over the last several months alone, we have helped customers deliver outcomes like faster medical response times to save lives, reduced traffic congestion to keep cities moving, and new revenue streams to tackle societal issues like homelessness.
We are headquartered in Billerica, Massachusetts, in the United States, with offices across Europe and Asia.
The company has been recognized with the World Economic Forum’s ‘Technology Pioneers’ award in 2019 and CRN’s IoT Innovation Award in 2020.
For the latest news and updates, please visit us at www.quantela.com.
Overview of the Role
The ideal candidate should be able to automate infrastructure and microservices deployments using automation tools, and should have experience operating Kubernetes clusters in production, both in the cloud and on-premises.
Key Responsibilities
- 6+ years of overall experience, including operating production Kubernetes clusters in both cloud and on-premises environments.
- Build monitoring that alerts on symptoms rather than on outages (see the sketch after this list).
- Migrate VM-based applications to Kubernetes clusters.
- Automate infrastructure component provisioning.
- Document every task, so that findings turn into repeatable actions and then into automation.
- Follow the Agile process and plan the work accordingly.
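As an illustration of the "alert on symptoms" point above, here is a minimal sketch using the prometheus_client Python library; the service, metric names, and error rate are hypothetical and not part of any existing Quantela system:

```python
# A minimal sketch of symptom-based instrumentation with prometheus_client
# (pip install prometheus-client). The service, metric names, and error
# rate below are hypothetical; alert rules would target these metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUEST_LATENCY = Histogram("request_latency_seconds", "Request latency in seconds")
REQUEST_ERRORS = Counter("request_errors_total", "Total failed requests")

@REQUEST_LATENCY.time()
def handle_request():
    # Simulated work; a real service would handle live traffic here.
    time.sleep(random.uniform(0.01, 0.2))
    if random.random() < 0.05:          # hypothetical 5% error rate
        REQUEST_ERRORS.inc()

if __name__ == "__main__":
    start_http_server(8000)             # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```

Alert rules can then fire on rising latency or error rate (what users actually feel) rather than only when a host is already down.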
Must have Skills
- Knowledge of container solutions like Docker and Kubernetes, and an understanding of virtualization concepts.
- Experience in Configuration Management Tools like Ansible, Chef.
- Experience with Terraform, CloudFormation, or other infrastructure-as-code tools.
- Experience with CI/CD in Jenkins.
- Good knowledge of AWS/Azure cloud environments.
- Good hands-on experience configuring and administering web servers like Nginx, Apache, and Tomcat.
- Experience working with and maintaining package management systems (e.g., Artifactory, APT).
- Knowledge of scripting in PowerShell/Python/Bash.
- Experience building automation and pipelines (integration, testing, deployment)
- Experience with Docker containers (images and registry management); see the sketch after this list.
- Understanding of metrics collectors such as Metricbeat, Heartbeat or Prometheus is good to have.
- Ability to work collaboratively on a cross-functional team with a wide range of experience levels
- Ability to analyze existing services and identify technical debt to work on.
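For the Docker image and registry bullet referenced above, a minimal sketch using the Docker SDK for Python; the registry URL, image tag, and credentials are placeholders:

```python
# A minimal sketch of image and registry management with the Docker SDK for
# Python (pip install docker). The registry, image tag, and credentials are
# placeholders, and a Dockerfile is assumed in the current directory.
import docker

REGISTRY = "registry.example.com"              # hypothetical private registry
REPOSITORY = f"{REGISTRY}/myapp"

client = docker.from_env()                     # talks to the local Docker daemon

# Build the image from the local Dockerfile and tag it for the registry.
image, build_logs = client.images.build(path=".", tag=f"{REPOSITORY}:1.0")

# Authenticate and push the tagged image.
client.login(username="ci-bot", password="<token>", registry=REGISTRY)
for line in client.images.push(REPOSITORY, tag="1.0", stream=True, decode=True):
    print(line)
```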
Desired Background
Bachelor's/Master's degree in Computer Science or Computer Applications
Similar jobs
- Configure, optimize, document, and support the infrastructure components of software products (which are hosted in colocated facilities and cloud services such as AWS)
- Design and build tools and frameworks that support deployment and management of platforms
- Design, build, and deliver cloud computing solutions, hosted services, and underlying software infrastructures
- Build core functionality of our cloud-based platform product, deliver secure, reliable services and construct third party integrations
- Assist in coaching application developers on proper DevOps techniques for building scalable applications in the microservices paradigm
- Foster collaboration with software product development and architecture teams to ensure releases are delivered with repeatable and auditable processes
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments
- Work independently across multiple platforms and applications to understand dependencies
- Evaluate new tools, technologies, and processes to improve speed, efficiency, and scalability of continuous integration environments
- Design and architect solutions for existing client-facing applications as they are moved into cloud environments such as AWS
Competencies
- Full understanding of scripting and automated process management in languages such as Shell, Ruby, and/or Python
- Working knowledge of SCM tools such as Git, GitHub, Bitbucket, etc.
- Working knowledge of Amazon Web Services and related APIs
- Ability to deliver and manage web or cloud-based services
- General familiarity with monitoring tools
- General familiarity with configuration/provisioning tools such as Terraform
Experience
- Experience working within an Agile type environment
- 4+ years of experience with cloud-based provisioning (Azure, AWS, Google), monitoring, troubleshooting, and related DevOps technologies (see the provisioning sketch after this list)
- 4+ years of experience with containerization/orchestration technologies like Rancher, Docker and Kubernetes
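As a small, hedged illustration of the cloud-based provisioning referenced in the list above, a boto3 sketch that launches a single EC2 instance; the AMI ID, key pair, and tags are placeholders:

```python
# A minimal provisioning sketch with boto3 (pip install boto3). The AMI ID,
# key pair, and tag values are placeholders; credentials come from the
# standard AWS credential chain (environment, profile, or instance role).
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="devops-demo",                # placeholder key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "provisioning-demo"}],
    }],
)
print("Launched:", instances[0].id)
```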
About the role:
We are seeking a highly skilled Azure DevOps Engineer with a strong background in backend development to join our rapidly growing team. The ideal candidate will have a minimum of 4 years of experience and extensive experience in building and maintaining CI/CD pipelines, automating deployment processes, and optimizing infrastructure on Azure. Additionally, expertise in backend technologies and development frameworks is required to collaborate effectively with the development team in delivering scalable and efficient solutions.
Responsibilities
- Collaborate with development and operations teams to implement continuous integration and deployment processes.
- Automate infrastructure provisioning, configuration management, and application deployment using tools such as Ansible and Jenkins.
- Design, implement, and maintain Azure DevOps pipelines for continuous integration and continuous delivery (CI/CD); a minimal example of triggering such a pipeline appears after this list.
- Develop and maintain build and deployment pipelines, ensuring that they are scalable, secure, and reliable.
- Monitor and maintain the health of the production infrastructure, including load balancers, databases, and application servers.
- Automate the software development and delivery lifecycle, including code building, testing, deployment, and release.
- Familiarity with Azure CLI, Azure REST APIs, Azure Resource Manager templates, Azure billing/cost management, and the Azure Management Console.
- Must have experience in at least one programming language (Java, .NET, Python).
- Ensure high availability of the production environment by implementing disaster recovery and business continuity plans.
- Build and maintain monitoring, alerting, and trending operational tools (CloudWatch, New Relic, Splunk, ELK, Grafana, Nagios).
- Stay up to date with new technologies and trends in DevOps and make recommendations for improvements to existing processes and infrastructure.
- Contribute to backend development projects, ensuring robust and scalable solutions.
- Work closely with the development team to understand application requirements and provide technical expertise in backend architecture.
- Design and implement database schemas.
- Identify and implement opportunities for performance optimization and scalability of backend systems.
- Participate in code reviews, architectural discussions, and sprint planning sessions.
- Stay updated with the latest Azure technologies, tools, and best practices to continuously improve our development and deployment processes.
- Mentor junior team members and provide guidance and training on best practices in DevOps.
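As a hedged illustration of working with Azure DevOps pipelines programmatically (referenced in the list above), here is a sketch that queues a pipeline run via the Azure DevOps REST API using a personal access token. The organization, project, pipeline ID, and api-version value are assumptions; verify the endpoint against the current Azure DevOps REST API documentation before relying on it:

```python
# A minimal sketch that triggers an Azure DevOps pipeline run via the REST
# API using a personal access token (PAT). Organization, project, pipeline
# ID, and the api-version value are placeholders/assumptions; check the
# current Azure DevOps REST API docs before relying on them.
import requests

ORG = "my-org"            # placeholder organization
PROJECT = "my-project"    # placeholder project
PIPELINE_ID = 42          # placeholder pipeline id
PAT = "<personal-access-token>"

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
    f"{PIPELINE_ID}/runs?api-version=7.1"
)

# PATs are sent via HTTP basic auth with an empty username.
resp = requests.post(url, auth=("", PAT), json={})
resp.raise_for_status()
print("Queued run:", resp.json().get("id"))
```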
Required Qualifications
- BS/MS in Computer Science, Engineering, or a related field
- 4+ years of experience as an Azure DevOps Engineer (or similar role), with experience in backend development.
- Strong understanding of CI/CD principles and practices.
- Expertise in Azure DevOps services, including Azure Pipelines, Azure Repos, and Azure Boards.
- Experience with infrastructure automation tools like Terraform or Ansible.
- Proficient in scripting languages like PowerShell or Python.
- Experience with Linux and Windows server administration.
- Strong understanding of backend development principles and technologies.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Problem-solving and analytical skills.
- Experience with industry frameworks and methodologies: ITIL/Agile/Scrum/DevOps
- Excellent problem-solving, critical thinking, and communication skills.
- Experience working in a product-based company.
What we offer:
- Competitive salary and benefits package
- Opportunity for growth and advancement within the company
- Collaborative, dynamic, and fun work environment
- Possibility to work with cutting-edge technologies and innovative projects
Now, more than ever, the Toast team is committed to our customers. We’re taking steps to help restaurants navigate these unprecedented times with technology, resources, and community. Our focus is on building a restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love. And because our technology is purpose-built for restaurants by restaurant people, restaurants can trust that we’ll deliver on their needs for today while investing in experiences that will power their restaurant of the future.
At Toast, our Site Reliability Engineers (SREs) are responsible for keeping all customer-facing services and other Toast production systems running smoothly. SREs are a blend of pragmatic operators and software craftspeople who apply sound software engineering principles, operational discipline, and mature automation to our environments and our codebase. Our decisions are based on instrumentation and continuous observability, as well as predictions and capacity planning.
About this roll* (Responsibilities)
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Participate in system design consulting, platform management, and capacity planning
- Create sustainable systems and services through automation and uplift
- Balance feature development speed and reliability with well-defined service level objectives
Troubleshooting and Supporting Escalations:
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding (see the sketch after this list)
- Diagnose performance bottlenecks and implement optimizations across infrastructure, databases, web, and mobile applications
- Implement strategies to increase system reliability and performance through on-call rotation and process optimization
- Run blameless RCAs on incidents and outages, aggressively looking for answers that will prevent the incident from ever happening again
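A minimal sketch of gathering operating-system metrics as referenced above, using the psutil library; the thresholds and output format are illustrative only:

```python
# A minimal sketch of gathering host-level metrics for fault finding and
# performance tuning, using psutil (pip install psutil). Thresholds and the
# output format are illustrative only.
import psutil

cpu = psutil.cpu_percent(interval=1)               # % CPU over a 1s sample
mem = psutil.virtual_memory()                      # RAM usage snapshot
disk = psutil.disk_usage("/")                      # root filesystem usage

print(f"cpu={cpu:.1f}% mem={mem.percent:.1f}% disk={disk.percent:.1f}%")

# Flag obvious pressure so it can be correlated with application symptoms.
if cpu > 90 or mem.percent > 90 or disk.percent > 90:
    print("WARNING: host resource pressure detected")
```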
Do you have the right ingredients? (Requirements)
- Extensive industry experience with at least 7+ years in SRE and/or DevOps roles
- Polyglot technologist/generalist with a thirst for learning
- Deep understanding of cloud and microservice architecture and the JVM
- Experience with tools such as APM, Terraform, Ansible, GitHub, Jenkins, and Docker
- Experience developing software or software projects in at least four languages, ideally including two of Go, Python, and Java
- Experience with cloud computing technologies (AWS preferred)
Bread puns are encouraged but not required
**THIS IS A 100% WORK FROM OFFICE ROLE**
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
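A minimal sketch of the kind of automated health check mentioned above, using the requests library; the service names, URLs, and timeout are placeholders:

```python
# A minimal automated health-check sketch using requests
# (pip install requests). Service names, URLs, and the timeout are placeholders.
import sys
import requests

SERVICES = {
    "api": "https://api.example.com/healthz",    # hypothetical endpoints
    "web": "https://www.example.com/healthz",
}

failed = []
for name, url in SERVICES.items():
    try:
        resp = requests.get(url, timeout=5)
        ok = resp.status_code == 200
    except requests.RequestException:
        ok = False
    print(f"{name}: {'OK' if ok else 'FAIL'}")
    if not ok:
        failed.append(name)

# A non-zero exit code lets a scheduler or monitoring agent raise an alert.
sys.exit(1 if failed else 0)
```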
ROLE and RESPONSIBILITIES:
• Understanding customer requirements and project KPIs
• Implementing various development, testing, automation tools, and IT infrastructure
• Planning the team structure, activities, and involvement in project management
• Managing stakeholders and external interfaces
• Setting up tools and required infrastructure
• Defining and setting development, test, release, update, and support processes for DevOps operation
• Have the technical skill to review, verify, and validate the software code developed in the project
• Troubleshooting issues and fixing code bugs
• Monitoring the processes during the entire lifecycle for adherence, and updating or creating new processes to drive improvement and minimize waste
• Encouraging and building automated processes wherever possible
• Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
• Incident management and root cause analysis
• Coordination and communication within the team and with customers
• Selecting and deploying appropriate CI/CD tools
• Strive for continuous improvement and build a continuous integration, continuous delivery, and continuous deployment pipeline (CI/CD pipeline)
• Mentoring and guiding the team members
• Monitoring and measuring customer experience and KPIs
• Managing periodic reporting on the progress to the management and the customer
Essential Skills and Experience
Technical Skills
• Proven 3+ years of experience as a DevOps engineer
• A bachelor’s degree or higher qualification in computer science
• The ability to code and script in multiple languages and automation frameworks like Python, C#, Java, Perl, Ruby, SQL Server, NoSQL, and MySQL
• An understanding of security best practices, and of automating security testing and updating in CI/CD (continuous integration, continuous deployment) pipelines
• An ability to deploy monitoring and logging infrastructure using appropriate tooling
• Proficiency in container frameworks
• Mastery in the use of infrastructure automation toolsets like Terraform, Ansible, and command line interfaces for Microsoft Azure, Amazon AWS, and other cloud platforms
• Certification in Cloud Security
• An understanding of various operating systems
• A strong focus on automation and agile development
• Excellent communication and interpersonal skills
• An ability to work in a fast-paced environment and handle multiple projects simultaneously
OTHER INFORMATION
The DevOps Engineer will also be expected to demonstrate their commitment:
• to gedu values and regulations, including the equal opportunities policy.
• to gedu’s Social, Economic and Environmental responsibilities, to minimise environmental impact in the performance of the role, and to actively contribute to the delivery of gedu’s Environmental Policy.
• to their Health and Safety responsibilities to ensure their contribution to a safe and secure working environment for staff, students, and other visitors to the campus.
- Experience using AWS (that’s just common sense)
- Experience designing and building web environments on AWS, which includes working with services like EC2, ELB, RDS, and S3
- Experience building and maintaining cloud-native applications
- A solid background in Linux/Unix and Windows server system administration
- Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube
- Experience installing and configuring different application servers such as JBoss, Tomcat, and WebLogic
- Experience using monitoring solutions like CloudWatch, ELK Stack, and Prometheus (see the CloudWatch sketch after this list)
- An understanding of writing Infrastructure-as-Code (IaC), using tools like CloudFormation or Terraform
- Knowledge of one or more of the programming languages most used in today’s cloud computing (e.g., SQL and XML for data, R and Clojure for math, Haskell and Erlang for functional programming, and Python and Go for procedural programming)
- Experience in troubleshooting distributed systems
- Proficiency in script development and scripting languages
- The ability to be a team player
- The ability and skill to train other people in procedural and technical topics
- Strong communication and collaboration skills
As a special aside, an AWS engineer who works in DevOps should also have experience with:
- The theory, concepts, and real-world application of Continuous Delivery (CD), which requires familiarity with tools like AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline
- An understanding of automation
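Connecting the CloudWatch monitoring point above to code, a hedged boto3 sketch that creates a CPU-utilization alarm; the instance ID, SNS topic ARN, and thresholds are placeholders:

```python
# A minimal sketch that creates a CloudWatch alarm with boto3. The instance
# ID, SNS topic ARN, and threshold values are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="demo-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    AlarmDescription="Alert when average CPU exceeds 80% for 10 minutes",
)
print("Alarm created")
```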
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
o AWS
Networking: VPC, VPC Peering, Transit Gateway, RouteTables, SecurityGroups, etc.
Data: RDS, DynamoDB, ElasticSearch
Workload: EC2, EKS, Lambda, etc.
o Azure
Networking: VNET, VNET Peering,
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, VirtualMachines, AzureFunctions
o GCP
Networking: VPC, VPC Peering, Firewall, Flowlogs, Routes, Static and External IP Addresses
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (Gitlab/Github/Jenkins) including runner setup, templating and configuration.
• Kubernetes experience (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking, and service mesh; has used a package manager like Helm.
• Scripting experience (Bash/Python): automation in pipelines when required, and system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code (see the sketch after this list).
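Since the infrastructure-automation bullet above names Pulumi alongside Terraform and CloudFormation, here is a minimal Pulumi program in Python; the resource names and tags are placeholders, and it assumes the pulumi and pulumi_aws packages with a configured AWS stack:

```python
# __main__.py -- a minimal Pulumi (Python) infrastructure-as-code sketch.
# Assumes `pulumi` and `pulumi_aws` are installed and a stack/backend is
# configured; the bucket name and tags are placeholders.
import pulumi
import pulumi_aws as aws

# A versioned S3 bucket, managed as code so changes are reviewed like code.
artifacts = aws.s3.Bucket(
    "build-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"team": "devops", "managed-by": "pulumi"},
)

# Exported outputs can be consumed by pipelines or other stacks.
pulumi.export("artifacts_bucket", artifacts.id)
```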
Optional
• Experience in any programming language is not required but is appreciated.
• Good experience in Git, SVN, or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure, and code.
• Observability tools (Opensource: Prometheus, Elasticsearch, OpenTelemetry; Paid: Datadog, 24/7, etc)
Introduction
Synapsica (www.synapsica.com) is a series-A funded HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body, and should not have to rely on cryptic two-liners given to them as a diagnosis.
Towards this aim, we are building an artificial intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, YCombinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls
Your Roles and Responsibilities
The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.
Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.
Primary Responsibilities
- Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
- Optimizations and execution of the CI/CD pipelines of multiple products and timely promotion of the releases to production environments
- Ensuring that mission critical applications are deployed and optimised for high availability, security & privacy compliance and disaster recovery.
- Strategize, implement and verify secure coding techniques, integrate code security tools for Continuous Integration
- Ensure analysis, efficiency, responsiveness, scalability and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
- Technical documentation through all stages of development
- Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation
Requirements
- Minimum of 6 years of experience with DevOps tools.
- Working experience with Linux and with container orchestration and management technologies (Docker, Kubernetes, EKS, ECS, …); see the Deployment sketch after this list.
- Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
- Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
- Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
- Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
- DevOps mindset and experience with Agile/Scrum methodology
- Basic knowledge of storage and databases (SQL and NoSQL)
- Good understanding of networking technologies, HAProxy, firewalling and security.
- Experience in Security vulnerability scans and remediation
- Experience in API security and credentials management
- Worked on Microservice configurations across dev/test/prod environments
- Ability to quickly adopt new languages and technologies
- A strong team player attitude with excellent communication skills.
- Very high sense of ownership.
- Deep interest and passion for technology
- Ability to plan projects, execute them and meet the deadline
- Excellent verbal and written English communication.
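For the container-orchestration requirement noted in the list above, a minimal sketch using the official Kubernetes Python client to create a Deployment; the image, labels, and namespace are placeholders, and it assumes a reachable cluster with a local kubeconfig:

```python
# A minimal sketch that creates a Kubernetes Deployment with the official
# Python client (pip install kubernetes). Image, labels, and namespace are
# placeholders; it assumes a reachable cluster and a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()            # use load_incluster_config() inside a pod
apps = client.AppsV1Api()

labels = {"app": "demo-web"}
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
print("Deployment created")
```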
- Design, develop, deploy, and operate infrastructure services in the Acqueon AWS cloud environment
- Manage uptime of infrastructure and SaaS applications
- Implement application performance monitoring to ensure platform uptime and performance
- Building scripts for operational automation and incident response
- Handle schedule and processes surrounding cloud application deployment
- Define, measure, and meet key operational metrics including performance, incidents and chronic problems, capacity, and availability
- Lead the deployment, monitoring, maintenance, and support of operating systems (Windows, Linux)
- Build out lifecycle processes to mitigate risk and ensure platforms remain current, in accordance with industry standard methodologies
- Run incident resolution within the environment, facilitating teamwork with other departments as required
- Automate the deployment of new software to cloud environment in coordination with DevOps engineers
- Work closely with Presales to understand customer requirements for production deployment
- Lead and mentor a team of operations engineers
- Drive the strategy to evolve and modernize existing tools and processes to enable highly secure and scalable operations
- AWS infrastructure management, provisioning, cost management, and planning (see the cost-reporting sketch after this list)
- Prepare RCA incident reports for internal and external customers
- Participate in product engineering meetings to ensure product features and patches comply with cloud deployment standards
- Troubleshoot and analyse performance issues and customer-reported incidents, working to restore services within the SLA
- Prepare monthly SLA performance reports
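For the cost-management responsibility referenced above, a hedged sketch that pulls month-to-date spend by service via the AWS Cost Explorer API with boto3; the date range and grouping are illustrative:

```python
# A minimal cost-reporting sketch using the AWS Cost Explorer API via boto3.
# The date range and grouping are illustrative; Cost Explorer must be enabled
# on the account, and each API request incurs a small charge.
import boto3
from datetime import date

ce = boto3.client("ce", region_name="us-east-1")

today = date.today()
start = today.replace(day=1).isoformat()   # first day of the current month
end = today.isoformat()

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{service}: ${float(amount):.2f}")
```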
As a Cloud Operations Manager at Acqueon, you will need…
- 8 years’ progressive experience managing IT infrastructure and global cloud environments such as AWS, GCP (must)
- 3-5 years management experience leading a Cloud Operations / Site Reliability / Production Engineering team working with globally distributed teams in a fast-paced environment
- 3-5 years’ experience in IaC (Terraform, Kubernetes)
- 3+ years end-to-end incident management experience
- Experience with communicating and presenting to all stakeholders
- Experience with Cloud Security compliance and audits
- Detail-oriented. The ideal candidate is one who naturally digs as deep as they need to understand the why
- Knowledge on GCP will be added advantage
- Manage and monitor customer instances for uptime and reliability
- Staff scheduling and planning to ensure 24x7x365 coverage for cloud operations
- Customer-facing role requiring excellent communication, team management, and troubleshooting skills