Responsibilities:
- Provide daily support for the resolution of escalated tickets and act as a liaison to business and technical leads to ensure issues are resolved in a timely manner.
- Resolve incidents and support production system deployments.
- Suggest fixes to complex issues through thorough analysis of the root cause and impact of the defect.
- Support and deliver within Continuous Integration/Continuous Delivery pipelines.
- Prioritise workload, providing timely and accurate resolutions.
- Perform production support activities, including issue assignment, analysis, and resolution within the specified SLAs.
- Understand Linux: SSH into a Linux box, locate and read web server logs, etc. (see the sketch after this list).
- Understand web applications well enough to troubleshoot issues.
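A minimal sketch of the kind of log triage the Linux responsibility above involves, assuming SSH key access to the host; the hostname, log path, and status codes are placeholders, not details from this posting:

```python
#!/usr/bin/env python3
"""Sketch: SSH into a box and scan the web server log for recent 5xx errors."""
import subprocess

HOST = "appserver01.example.com"          # hypothetical host
LOG_PATH = "/var/log/nginx/access.log"    # hypothetical log location

def recent_server_errors(host: str, log_path: str, lines: int = 500) -> list[str]:
    """Tail the access log over SSH and keep only 5xx responses."""
    cmd = ["ssh", host, f"tail -n {lines} {log_path}"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # Combined log format puts the status code after the quoted request string.
    return [ln for ln in out.splitlines() if any(f" {code} " in ln for code in ("500", "502", "503"))]

if __name__ == "__main__":
    for line in recent_server_errors(HOST, LOG_PATH):
        print(line)
```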
Requirements:
- Programming experience with Python is good to have.
- Java and JavaScript development experience would be an added advantage.
- You should not be afraid to do some development as well as DevOps work.
- Clear written and oral communication is a must.

About Firebucks
● Good understanding of how the web works
● Experience with at least one language such as Java or Python
● Good with Shell scripting
● Experience with *nix-based operating systems
● Experience with Kubernetes (k8s) and containers
● Fairly good understanding of AWS/GCP/Azure
● Troubleshoot and fix outages and performance issues in infrastructure stack
● Identify gaps and design automation tools for all feasible infrastructure functions
● Good verbal and written communication skills
● Drive the team's SLAs/SLOs
Benefits
This is an opportunity to work on a fairly complex set of systems and improve
them. You will get a chance to learn things like “how to think about code
simplicity”, “how to write for maintainability” and several other things.
● Comprehensive health insurance policy.
● Flexible working hours and a very friendly work environment.
● Flexibility to work either in the office (post Covid) or remotely.
Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions.
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
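A minimal sketch of the IaC-driven provisioning described in the last responsibility, here driving AWS CloudFormation from Python with boto3 (the same idea applies to Terraform via plan/apply); the stack name and template file are hypothetical:

```python
"""Sketch: provision a CloudFormation stack from Python, assuming boto3 and AWS credentials."""
import boto3

def deploy_stack(stack_name: str, template_path: str) -> None:
    cf = boto3.client("cloudformation")
    with open(template_path) as f:
        body = f.read()
    cf.create_stack(
        StackName=stack_name,
        TemplateBody=body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM resources
    )
    # Block until CloudFormation reports CREATE_COMPLETE.
    cf.get_waiter("stack_create_complete").wait(StackName=stack_name)

if __name__ == "__main__":
    deploy_stack("devops-l2-network", "network.yaml")  # hypothetical names
```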
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.
Company Overview
Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability.
Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices.
- Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency.
- Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users.
- Implement and manage CI/CD pipelines to automate software delivery and ensure code quality.
- Manage and configure cloud-based infrastructure services to optimize performance and cost.
- Collaborate with development teams to design and implement scalable, reliable, and secure applications.
- Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues (see the sketch after this list).
- Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data.
- Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability.
- Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability.
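As one illustration of the monitoring and alerting responsibility above, a minimal boto3 sketch that alarms on API 5xx errors in CloudWatch; the alarm name, load balancer dimension, and SNS topic ARN are placeholders:

```python
"""Sketch: CloudWatch alarm on 5xx errors, assuming boto3 and AWS credentials."""
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="api-5xx-errors",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-api/0123456789abcdef"}],  # placeholder
    Statistic="Sum",
    Period=60,                   # one-minute evaluation windows
    EvaluationPeriods=5,         # five consecutive breaching minutes
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],  # placeholder topic
)
```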
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role
- Strong understanding of cloud computing principles and experience with AWS
- Experience building and supporting complex CI/CD pipelines using GitHub
- Experience building and supporting infrastructure as code using Terraform
- Proficiency in scripting and automation tools
- Solid understanding of networking concepts and protocols
- Understanding of security best practices and experience implementing security controls in cloud environments
- Knowledge of modern security and compliance requirements such as SOC 2, HIPAA, and HITRUST is a solid advantage.
● Auditing, monitoring, and improving existing infrastructure components of a highly available, scaled product running on cloud-based Ubuntu servers
● Running daily maintenance tasks and improving them through automation where possible (see the sketch after this list)
● Deploying new components, servers, and other infrastructure when needed
● Coming up with innovative ways to automate tasks
● Working with telecom carriers to obtain rates and destinations, and updating them regularly on the system
● Working with Docker containers, Tinc, iptables, HAProxy, etcd, MySQL, MongoDB, CouchDB, and Ansible
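A minimal sketch of turning one such daily maintenance task into automation, assuming a local MySQL instance with credentials in ~/.my.cnf; the database name and backup directory are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: nightly MySQL dump as a repeatable, automatable task."""
import datetime
import gzip
import pathlib
import subprocess

BACKUP_DIR = pathlib.Path("/var/backups/mysql")   # hypothetical location
DATABASE = "billing"                               # hypothetical database

def nightly_dump(database: str) -> pathlib.Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()
    target = BACKUP_DIR / f"{database}-{stamp}.sql.gz"
    # --single-transaction gives a consistent InnoDB snapshot without locking tables.
    dump = subprocess.run(
        ["mysqldump", "--single-transaction", database],
        capture_output=True, check=True,
    )
    with gzip.open(target, "wb") as f:
        f.write(dump.stdout)
    return target

if __name__ == "__main__":
    print(f"wrote {nightly_dump(DATABASE)}")
```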
You would bring the below skills to our team:
● Expertise with Docker containers and container networking, Tinc, iptables, HAProxy, etcd, and Ansible
● Extensive experience with setup, maintenance, monitoring, backup, and replication of MySQL
● Expertise with Ubuntu servers, OS administration, and server-level networking
● Good experience working with MongoDB and CouchDB
● Good with networking tools
● Open-source server monitoring solutions such as Nagios and Zabbix
● Experience with highly scaled, distributed applications running on datacenter Ubuntu VPS instances
● Innovative, out-of-the-box thinker with multitasking skills who works efficiently in a small team
● Working knowledge of a scripting language such as Bash, Node, or Python
● Experience with calling platforms such as FreeSWITCH, OpenSIPS, or Kamailio, and basic knowledge of the SIP protocol, would be an advantage
About Quantela
We are a technology company that offers outcomes business models. We empower our customers with the right digital infrastructure to deliver greater economic, social, and environmental outcomes for their constituents.
When the company was founded in 2015, we specialized in smart cities technology alone. Today, working with cities and towns, utilities, and public venues, our team of 280+ experts offers a vast array of outcomes business models through technologies like digital advertising, smart lighting, smart traffic, and digitized citizen services.
We pride ourselves on our agility, innovation, and passion for using technology for a higher purpose. Unlike other technology companies, we tailor our offerings (what we can digitize) and the business model (how we partner with our customers to deliver that digitization) to drive measurable impact where our customers need it most. Over the last several months alone, we have helped customers deliver outcomes like faster medical response times to save lives, reduced traffic congestion to keep cities moving, and new revenue streams to tackle societal issues like homelessness.
We are headquartered in Billerica, Massachusetts in the United States with offices across Europe, and Asia.
The company has been recognized with the World Economic Forum’s ‘Technology Pioneers’ award in 2019 and CRN’s IoT Innovation Award in 2020.
For the latest news and updates, please visit us at www.quantela.com
Overview of the Role
The ideal candidate should have the automation skills to provision infrastructure and deploy microservices through automation tools, and should have experience handling Kubernetes clusters in production, both cloud and on-premises.
Key Responsibilities
- 6+ years of overall experience, including handling production Kubernetes clusters in both cloud and on-premises environments.
- Build monitoring that alerts on symptoms rather than on outages (see the sketch after this list).
- Migrate VM-based applications to Kubernetes clusters.
- Automate infrastructure component provisioning.
- Document every task; findings turn into repeatable actions and then into automation.
- Follow the Agile process and plan the work accordingly.
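A minimal sketch of what "alert on symptoms rather than outages" means in practice: probe what users experience (errors, latency) instead of whether a process happens to be running. The endpoint URL and latency budget below are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: a symptom-based health probe using only the standard library."""
import time
import urllib.error
import urllib.request

ENDPOINT = "https://shop.example.com/healthz"   # hypothetical user-facing endpoint
LATENCY_BUDGET_S = 0.5                          # hypothetical latency budget

def probe(url: str) -> tuple[bool, str]:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            server_error = resp.status >= 500
    except urllib.error.URLError as exc:
        return False, f"request failed: {exc}"
    latency = time.monotonic() - start
    if server_error:
        return False, "server error response"
    if latency > LATENCY_BUDGET_S:
        return False, f"slow response: {latency:.2f}s"
    return True, f"healthy in {latency:.2f}s"

if __name__ == "__main__":
    healthy, detail = probe(ENDPOINT)
    print(("OK" if healthy else "ALERT") + ": " + detail)
```

In a real setup the same probe would feed an alerting system rather than printing, but the principle of alerting on user-visible symptoms is the same.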
Must have Skills
- Knowledge of container solutions such as Docker and Kubernetes, and an understanding of virtualization concepts.
- Experience with configuration management tools such as Ansible and Chef.
- Experience with Terraform, CloudFormation, or other infrastructure-as-code tools.
- Experience with CI/CD in Jenkins.
- Good knowledge of AWS/Azure cloud environments.
- Good hands-on experience configuring and administering web servers such as Nginx, Apache, and Tomcat.
- Experience working with and maintaining package management systems (e.g., Artifactory, APT).
- Knowledge of scripting in PowerShell, Python, or Bash.
- Experience building automation and pipelines (integration, testing, deployment)
- Experience with Docker containers (images and registry management).
- Understanding of metrics collectors such as Metricbeat, Heartbeat or Prometheus is good to have.
- Ability to work collaboratively on a cross-functional team with a wide range of experience levels
- Ability to analyze existing services and identify technical debt to work on.
Desired Background
Bachelor's/Master's degree in Computer Science or Computer Applications
Experience: 8 to 10 years; notice period: 0 to 20 days
Job Description:
- Provision GCP resources based on the architecture design and features aligned with business objectives
- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization
- Assist IT/business users in resolving GCP service-related issues
- Provide guidelines for cluster automation and migration approaches and techniques, including ingesting, storing, processing, analyzing, and exploring/visualizing data
- Provision GCP resources for data engineering and data science projects
- Assist with automated data ingestion, data migration, and transformation (good to have)
- Assist with deployment and troubleshooting of applications in Kubernetes
- Establish connections and credibility in addressing business needs through the design and operation of cloud-based data solutions
Key Responsibilities / Tasks:
- Building complex CI/CD pipelines for cloud-native PaaS services such as databases, messaging, storage, and compute in Google Cloud Platform
- Building deployment pipelines with GitHub CI (Actions)
- Writing Terraform code to deploy infrastructure as code
- Working with deployment and troubleshooting of Docker, GKE, OpenShift, and Cloud Run
- Working with Cloud Build, Cloud Composer, and Dataflow
- Configuring software to be monitored by AppDynamics
- Configuring Stackdriver (Cloud Logging and Monitoring) in GCP (see the sketch after this list)
- Working with Splunk, Kibana, Prometheus, and Grafana to set up dashboards
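A minimal sketch of the Cloud Logging (formerly Stackdriver) side of this, assuming the google-cloud-logging package and Application Default Credentials are available; the logger calls and messages are illustrative only:

```python
"""Sketch: routing application logs to Cloud Logging via the standard logging module."""
import logging

import google.cloud.logging

# Attach a Cloud Logging handler to Python's standard logging module.
client = google.cloud.logging.Client()
client.setup_logging(log_level=logging.INFO)

# From here on, ordinary logging calls are shipped to Cloud Logging,
# where they can feed log-based metrics and alerting policies.
logging.info("deployment finished")
logging.error("payment callback failed for order 1234")  # placeholder message
```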
Your skills, experience, and qualifications:
- Total experience of 5+ years in DevOps, with at least 4 years of experience in Google Cloud and GitHub CI.
- Should have strong experience in microservices/APIs.
- Should have strong experience with DevOps tools such as GitHub CI, TeamCity, Jenkins, and Helm.
- Should know application deployment and testing strategies on Google Cloud Platform.
- Defining and setting development, test, release, update, and support processes for DevOps operations
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
- Excellent understanding of Java
- Knowledge of Kafka, ZooKeeper, Hazelcast, and Pub/Sub is nice to have.
- Understanding of cloud networking and security, such as software-defined networking/firewalls, virtual networks, and load balancers.
- Understanding of cloud identity and access management.
- Understanding of the compute runtime and the differences between native compute, virtual machines, and containers.
- Configuration and management of databases such as Oracle, Cloud SQL, and Cloud Spanner.
- Excellent troubleshooting skills.
- Working knowledge of various open-source tools and technologies.
- Awareness of critical Agile concepts and principles.
- Certification as a Google Professional Cloud DevOps Engineer is desirable.
- Experience with Agile/SCRUM environment.
- Familiar with Agile Team management tools (JIRA, Confluence)
- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)
- Good communication skills
- Pro-active team player
- Comfortable working in multi-disciplinary, self-organized teams
- Professional knowledge of English
- Differentiators: knowledge/experience about
About the company:
Our client is a B2B2C Web3 tech startup founded by IITB graduates experienced in retail, e-commerce, and fintech.
Vision: Our client aims to change the way customers, creators, and retail investors interact and transact with brands of all shapes and sizes, essentially becoming a Web3, brand-driven social e-commerce and investment platform.
Role Description
We are looking for a DevOps Engineer responsible for managing cloud technologies, deployment automation, and CI/CD.
Key Responsibilities
Building and setting up new development tools and infrastructure
Understanding the needs of stakeholders and conveying this to developers
Working on ways to automate and improve development and release processes
Testing and examining code written by others and analyzing results
Ensuring that systems are safe and secure against cybersecurity threats
Identifying technical problems and developing software updates and ‘fixes’
Working with software developers and software engineers to ensure that development follows established processes and works as intended
Planning out projects and being involved in project management decisions
Required Skills and Qualifications
BE / MCA / B.Sc-IT / B.Tech in Computer Science or a related field.
4+ years of overall development experience.
Strong understanding of cloud deployment and setup.
Hands-on experience with tools like Jenkins, Gradle etc.
Deploy updates and fixes.
Provide Level 2 technical support.
Build tools to reduce occurrences of errors and improve customer experience.
Perform root cause analysis for production errors.
Investigate and resolve technical issues.
Develop scripts to automate deployment (see the sketch after this list).
Design procedures for system troubleshooting and maintenance.
Proficient with git and git workflows.
Working knowledge of databases and SQL.
Problem-solving attitude.
Collaborative team spirit
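A minimal sketch of the kind of deployment script this posting asks for, assuming a single Docker host; the image, container name, and health endpoint are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: pull a new image, replace the running container, and verify the service responds."""
import subprocess
import time
import urllib.request

IMAGE = "registry.example.com/shop/web:latest"   # hypothetical image
CONTAINER = "shop-web"                           # hypothetical container name
HEALTH_URL = "http://localhost:8080/healthz"     # hypothetical health endpoint

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

def deploy() -> None:
    sh("docker", "pull", IMAGE)
    # Remove any old container; ignore errors if none exists yet.
    subprocess.run(["docker", "rm", "-f", CONTAINER], check=False)
    sh("docker", "run", "-d", "--name", CONTAINER, "-p", "8080:8080", IMAGE)
    # Give the app a moment, then confirm it answers before declaring success.
    for _ in range(10):
        time.sleep(3)
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
                if resp.status == 200:
                    print("deploy OK")
                    return
        except OSError:
            continue
    raise SystemExit("deploy failed health check")

if __name__ == "__main__":
    deploy()
```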
Regards
Team Merito
About the Company
- 💰 Early-stage, ed-tech, funded, growing, growing fast
- 🎯 Mission Driven: Make Indonesia competitive on a global scale
- 🥅 Build the best educational content and technology to advance STEM education
- 🥇 Students-First approach
- 🇮🇩 🇮🇳 Teams in India and Indonesia
Skillset 🧗🏼♀️
- You primarily identify as a DevOps/Infrastructure engineer and are comfortable working with systems and cloud-native services on AWS
- You can design, implement, and maintain secure and scalable infrastructure delivering cloud-based services
- You have experience operating and maintaining production systems in a Linux based public cloud environment
- You are familiar with cloud-native concepts - Containers, Lambdas, Orchestration (ECS, Kubernetes)
- You’re in love with system metrics and strive to help deliver improvements to systems all the time
- You can think in terms of Infrastructure as Code to build tools for automating deployment, monitoring, and operations of the platform
- You can be on-call once every few weeks to provide application support, incident management, and troubleshooting
- You’re fairly comfortable with GIT, AWS CLI, python, docker CLI, in general, all things CLI. Oh! Bash scripting too!
- You have high integrity, and you are reliable
What you can expect from us 👌🏼
☮️ Mentorship, growth, great work culture
- Mentorship and continuous improvement are a part of the team’s DNA. We have a battle-tested robust growth framework. You will have people to look up to and people looking up to you
- We are a people-first, high-trust, high-autonomy team
- We live in the TDD, Pair Programming, First Principles world
🌏 Remote done right
- Distributed does not mean working in isolation, feeling alone, being buried in Zoom calls
- Our leadership team has been WFH for 10+ years now and we know how remote teams work. This will be a place to belong
- A good balance between deep focussed work and collaborative work ⚖️
🖥️ Friendly, humane interview process
- 30-minute alignment check and screening call
- A short take-home coding assignment, no more than 2-3 hours. Time is precious
- Pair programming interview. Collaborate, work together. No sitting behind a desk and judging
- In-depth engineering discussion around your skills and career so far
- System design and architecture interview for seniors
What we ask from you👇🏼
- Bring your software engineering — both individual brilliance and collaborative skills
- Bring your good nature — we're building a team that supports each other
- Be vested or interested in the company vision
Roles and Responsibilities
- 5 - 8 years of experience in Infrastructure setup on Cloud, Build/Release Engineering, Continuous Integration and Delivery, Configuration/Change Management.
- Good experience with Linux/Unix administration and moderate to significant experience administering relational databases such as PostgreSQL, etc.
- Experience with Docker and related tools (Cassandra, Rancher, Kubernetes etc.)
- Experience working with configuration management tools (Ansible, Chef, Puppet, Terraform, etc.) is a plus.
- Experience with cloud technologies like Azure
- Experience with monitoring and alerting (TICK, ELK, Nagios, PagerDuty)
- Experience with distributed systems and related technologies (NSQ, RabbitMQ, SQS, etc.) is a plus
- Experience with scaling data store technologies (PostgreSQL, Scylla, Redis) is a plus
- Experience with SSH Certificate Authorities and Identity Management (Netflix BLESS) is a plus
- Experience with multi-domain SSL cert provisioning (Let's Encrypt) is a plus
- Experience with chaos engineering or similar methodologies is a plus
We are looking for an experienced software engineer with a strong background in DevOps and handling traffic & infrastructure at scale.
Responsibilities :
Work closely with product engineers to implement scalable and highly reliable systems.
Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery.
Build and operate infrastructure to support the website, backend clusters, and ML projects in the organization.
Monitor and track the performance and reliability of our services and software to meet promised SLAs.
2+ years of experience working on distributed systems and shipping high-quality product features on schedule
Intimate knowledge of the whole web stack (Front end, APIs, database, networks etc.)
Ability to build highly scalable, robust, and fault-tolerant services and stay up-to-date with the latest architectural trends
Experience with container based deployment, microservices, in-memory caches, relational databases, key-value stores
Hands-on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, RDS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch)
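As one illustration of the AWS/ECS tooling named above, a minimal boto3 sketch that forces a new service deployment and waits for it to stabilize; the cluster and service names are placeholders:

```python
"""Sketch: roll out a new ECS deployment and wait for it to settle, assuming boto3 and AWS credentials."""
import boto3

ecs = boto3.client("ecs")

# Force a new deployment so tasks are replaced with the latest task definition/image.
ecs.update_service(
    cluster="prod",          # hypothetical cluster
    service="web-api",       # hypothetical service
    forceNewDeployment=True,
)

# Block until the service reaches a steady state (desired == running, deployments settled).
ecs.get_waiter("services_stable").wait(cluster="prod", services=["web-api"])
print("web-api is stable")
```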









