
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you will troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities:
This is a highly accountable role and the candidate must meet the following professional expectations:
⢠Owning and improving the scalability and reliability of our products.
⢠Working directly with product engineering and infrastructure teams.
⢠Designing and developing various monitoring system tools.
⢠Accountable for developing deployment strategies and build configuration management.
⢠Deploying and updating system and application software.
⢠Ensure regular, effective communication with team members and cross-functional resources.
⢠Maintaining a positive and supportive work culture.
⢠First point of contact for handling customer (may be internal stakeholders) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
⢠Develop tooling and processes to drive and improve customer experience, create playbooks.
⢠Eliminate manual tasks via configuration management.
⢠Intelligently migrate services from one AWS region to other AWS regions.
⢠Create, implement and maintain security policies to ensure ISO/ GDPR / SOC / PCI compliance.
⢠Verify infrastructure Automation meets compliance goals and is current with disaster recovery plan.
⢠Evangelize configuration management and automation to other product developers.
⢠Keep himself updated with upcoming technologies to maintain the state of the art infrastructure.
Required Candidate Profile:
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.)
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands-on experience with Docker is a big plus.
• Experience working in an Agile, fast-paced DevOps environment.
• Strong knowledge of databases such as MongoDB, MySQL, DynamoDB, Redis, or Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, and Nagios.
• Fluency with version control systems, with a preference for Git.
• Strong Linux-based infrastructure and Linux administration skills.
• Experience installing and configuring application servers such as WebLogic, JBoss, and Tomcat.
• Hands-on experience with logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and train others on technical and procedural topics.

Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions.
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
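One way to picture the IaC responsibility above: an AWS CloudFormation template is just structured JSON, so generating one programmatically is a short exercise. The sketch below is purely illustrative (the logical resource ID and bucket name are made up), not a production template.

```python
import json

def s3_bucket_template(bucket_name):
    """Render a minimal CloudFormation template declaring one S3 bucket.
    Real templates would add encryption, versioning, and access policies."""
    return json.dumps({
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {  # logical resource ID, chosen for this example
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }, indent=2)
```

Terraform expresses the same resource in HCL; either way, the point of IaC is that the infrastructure definition lives in version control and is reviewed like any other code.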
Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.
About AiSensy
AiSensy is a WhatsApp-based marketing & engagement platform helping businesses like Skullcandy, Vivo, Rentomojo, Physicswallah, and Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ businesses with WhatsApp engagement & marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High impact, as businesses drive 25-80% of their revenue using the AiSensy platform
- Mission-driven, growth-stage startup backed by Marsshot.vc, Bluelotus.vc & 50+ angel investors
Now, we're looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users.
What You'll Do (Key Responsibilities)
CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
Cloud & Security:
- Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging.
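A typical shape for the "automate repetitive tasks" bullet is a retry-with-backoff wrapper around a health probe. This is a hedged sketch only (the probe is any callable you supply, such as an HTTP ping or a DB query), not AiSensy's actual tooling.

```python
import time

def probe_with_backoff(probe, attempts=4, base_delay=0.01):
    """Call `probe()` until it returns True, sleeping base_delay * 2**n
    between failures (exponential backoff). Returns False if every
    attempt fails."""
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return False
```

Exponential backoff matters in practice because hammering a struggling service at a fixed interval tends to prolong the outage it is checking for.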
What We're Looking For (Must-Have Skills)
- Version Control: Proficiency in Git (GitLab/GitHub/Bitbucket)
- CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
- Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
- Containerization & Orchestration: Experience with Docker & Kubernetes
- Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
- Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
- Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
- AWS certifications: Solutions Architect, DevOps Engineer, Security, Networking
- Experience with Microsoft/Linux/F5 technologies
- Hands-on knowledge of database servers
- Experience with AWS SageMaker and AWS Bedrock
At Egnyte we build and maintain our flagship software: a secure content platform used by companies like Red Bull and Yamaha.
We store, analyze, organize, and secure billions of files and petabytes of data with millions of users. We observe more than 1M API requests per minute on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work from start to finish are integral. Our Engineers are part of the process from design to code, to test, to deployment, and back again for further iterations.
We have 300+ engineers spread across the US, Poland, and India.
You will be part of our DevOps Team working closely with our DBA team in automating, monitoring, and scaling our massive MySQL cluster. Previous MySQL experience is a plus.
Your day-to-day at Egnyte
- Designing, building, and maintaining cloud environments (using Terraform, Puppet or Kubernetes)
- Migrating services to cloud-based environments
- Collaborating with software developers and DBAs to create a reliable and scalable infrastructure for our product.
About you
- 2+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes
- Programming prowess (Python, Java, Ruby, Golang, or JavaScript)
- Experience with databases (MySQL, Postgres, RDS/Aurora, or others)
- Experience with public cloud services (GCP/AWS/Azure)
- Good understanding of the Linux Operating System on the administration level
- Preferably, experience with HA solutions: our tools of choice include Orchestrator, ProxySQL, HAProxy, Corosync & Pacemaker, etc.
- Experience with metric-based monitoring solutions (Cloud: CloudWatch/Stackdriver, On-prem: InfluxDB/OpenTSDB/Prometheus)
- Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude)
This company is a network of the world's best developers, offering full-time, long-term remote software jobs with better compensation and career growth. We enable our clients to accelerate their cloud offerings and capitalize on the cloud. We have our own IoT/AI platform, and we provide professional services on that platform to build custom clouds for our clients' IoT devices. We also build mobile apps and run 24x7 DevOps/site reliability engineering for our clients.
We are looking for very hands-on SRE (Site Reliability Engineering) engineers with 3 to 6 years of experience. The person will be part of a team responsible for designing and implementing automation from scratch for medium- to large-scale cloud infrastructure and providing 24x7 services to our North American and European customers. This also includes ensuring near-100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
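"~100% uptime" always translates into a concrete error budget, which is the working currency of SRE roles like this one. A quick back-of-the-envelope helper (illustrative only):

```python
def allowed_downtime_minutes(slo, period_days=30):
    """Minutes of downtime an availability SLO permits over the period.
    A 99.9% SLO over 30 days leaves roughly a 43-minute budget."""
    return (1 - slo) * period_days * 24 * 60
```

At 99.99% the monthly budget shrinks to about 4.3 minutes, which is why each extra "nine" changes how a team must design for failure rather than merely react to it.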
This person MUST have:
- B.E. in Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting and setting up Linux environments, with the ability to write shell scripts for any given requirement.
- 1+ years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred.
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Minimum 3 years of experience as an SRE automation engineer building, running, and maintaining production sites. Not looking for candidates whose experience is limited to L1/L2 or Build & Deploy roles.
Location:
- Remotely, anywhere in India
Timings:
- 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
- 7+ years of experience in System Administration, Networking, Automation, Monitoring
- Excellent problem solving, analytical skills and technical troubleshooting skills
- Experience managing systems deployed in public cloud platforms (Microsoft Azure, AWS or Google Cloud)
- Experience implementing and maintaining CI/CD pipelines (Jenkins, Concourse, etc.)
- Linux experience (Ubuntu, Red Hat, CentOS): sysadmin work, bash scripting
- Experience setting up monitoring (Datadog, Splunk, etc.)
- Experience in Infrastructure Automation tools like Terraform
- Experience in Package Manager for Kubernetes like Helm Charts
- Experience with databases and data storage (Oracle, MongoDB, PostgreSQL, ELK stack)
- Experience with Docker
- Experience with orchestration technologies (Kubernetes or DC/OS)
- Familiar with Agile Software Development
Hi,
Greetings from ToppersEdge.com India Pvt Ltd
We have job openings for our client. Kindly find the details below:
Work Location: Bengaluru (currently remote; candidates should later relocate to Bengaluru)
Shift Timings – general shift
Job Type – permanent position
Experience – 3-7 years
Candidates should be from a product-based company only.
Job Description
We are looking to expand our DevOps team. This team is responsible for writing scripts to set up infrastructure to support 24x7 availability of the Netradyne services. The team also sets up monitoring and alerting to troubleshoot issues reported in multiple environments, triages production issues, and provides appropriate and timely responses to customers.
Requirements
- B Tech/M Tech/MS in Computer Science or a related field from a reputed university.
- Total industry experience of around 3-7 years.
- Programming experience in Python, Ruby, Perl or equivalent is a must.
- Good knowledge and experience of configuration management tool (like Ansible, etc.)
- Good knowledge and experience of provisioning tools (like Terraform, etc.)
- Good knowledge and experience with AWS.
- Experience with setting up CI/CD pipelines.
- Experience managing, in an individual capacity, multiple live SaaS applications with high volume, high load, low latency, and high availability (24x7).
- Experience setting up web servers like Apache, application servers like Tomcat/WebSphere, and databases (RDBMS and NoSQL).
- Good knowledge of UNIX (Linux) administration tools.
- Good knowledge of security best practices and knowledge of relevant tools (Firewalls, VPN) etc.
- Good knowledge of networking concepts and UNIX administration tools.
- Ability to troubleshoot issues quickly is required.

Contract to hire
Total 8 years of experience, with 4 years relevant.
• Experience building and deploying software in the cloud, preferably on Google Cloud Platform (GCP)
• Sound knowledge of building infrastructure as code with Terraform
• Comfortable with test-driven development, testing frameworks, and building CI/CD pipelines with the GitLab version control software
• Strong containerisation skills with Docker, Kubernetes, and Helm
• Familiar with GitLab, systems integration, and BDD
• Solid networking skills, e.g. IP, DNS, VPN, HTTP/HTTPS
• Scripting experience (Bash, Python, etc.)
• Experience in Linux/Unix administration
• Experience with agile methods and practices (Scrum, Kanban, Continuous Integration, Pair Programming, TDD)
JD:
• 10+ years of overall industry experience
• 5+ years of cloud experience
• 2+ years of architect experience
• Varied background preferred, spanning both systems and development
  - Experience working with applications, not pure infra experience
• Azure experience – strong background using Azure for application migrations
• Terraform experience – automation technologies should appear in job experience
• Hands-on experience delivering in the cloud
• Must have job experience designing solutions for customers
• IaaS cloud architect: workload migrations to AWS and/or Azure
• Experience with security architecture considerations
• CI/CD experience
• Proven track record of application migrations.


DevOps Engineer Skills:
- Building a scalable and highly available infrastructure for data science
- Knowledge of data science project workflows
- Hands-on with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
- Working knowledge of Jenkins or a similar tool
Responsibilities:
- Own all the ML cloud infrastructure (AWS)
- Help build out an entire CI/CD ecosystem with auto-scaling
- Work with a testing engineer to design testing methodologies for ML APIs
- Research and implement new technologies
- Help with cost optimization of infrastructure
- Knowledge sharing
Nice to Have:
- Develop APIs for machine learning
- Can write Python servers for ML systems with API frameworks
- Understanding of task queue frameworks like Celery
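The auto-scaling responsibility above usually reduces to a target-tracking rule. As a hedged illustration (this is the same proportional formula Kubernetes' Horizontal Pod Autoscaler uses, clamped to bounds; the parameter names here are made up for the sketch):

```python
import math

def desired_replicas(current, observed_metric, target_metric, min_r=1, max_r=20):
    """Proportional target-tracking scaling: grow or shrink the replica
    count by the observed/target load ratio, then clamp to [min_r, max_r]."""
    raw = math.ceil(current * observed_metric / target_metric)
    return max(min_r, min(max_r, raw))
```

For ML serving the "metric" is often requests per replica or GPU utilisation; the clamp keeps a runaway metric from scaling the cluster, and the bill, without bound.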

Radical is a platform connecting data, medicine and people -- through machine learning, and usable, performant products. Software has never been the strong suit of the medical industry -- and we are changing that. We believe that the same sophistication and performance that powers our daily needs through millions of consumer applications -- be it your grocery, your food delivery or your movie tickets -- when applied to healthcare, has a massive potential to transform the industry, and positively impact lives of patients and doctors. Radical works with some of the largest hospitals and public health programmes in India, and has a growing footprint both inside the country and abroad.
As a DevOps Engineer at Radical, you will:
Work closely with all stakeholders in the healthcare ecosystem - patients, doctors, paramedics and administrators - to conceptualise and bring to life the ideal set of products that add value to their time
Work alongside Software Developers and ML Engineers to solve problems and assist in architecture design
Work on systems which have an extraordinary emphasis on capturing data that can help build better workflows, algorithms and tools
Work on high performance systems that deal with several million transactions, multi-modal data and large datasets, with a close attention to detail
Weāre looking for someone who has:
Familiarity and experience with writing working, well-documented and well-tested scripts, Dockerfiles, Puppet/Ansible/Chef/Terraform scripts.
Proficiency with scripting languages like Python and Bash.
Knowledge of systems deployment and maintenance, including setting up CI/CD, working alongside Software Developers, and monitoring logs, dashboards, etc.
Experience integrating with a wide variety of external tools and services
Experience navigating AWS and leveraging appropriate services and technologies rather than DIY solutions (e.g., containerisation or Elastic Beanstalk rather than hosting an application directly on EC2)
Itās not essential, but great if you have:
An established track record of deploying and maintaining systems.
Experience with microservices and decomposition of monolithic architectures
Proficiency in automated tests.
Proficiency with the Linux ecosystem
Experience in deploying systems to production on cloud platforms such as AWS
The position is open now, and we are onboarding immediately.
Please write to us with an updated resume, and one thing you would like us to see as part of your application. This one thing can be anything that you think makes you stand apart among candidates.
Radical is based out of Delhi NCR, India, and we look forward to working with you!
We're looking for people who may not know all the answers, but are obsessive about finding them, and who take pride in the code that they write. We are more interested in the ability to learn fast and think rigorously, and in people who aren't afraid to challenge assumptions and take large bets -- only to work hard and prove themselves correct. You're encouraged to apply even if your experience doesn't precisely match the job description. Join us.

