
2. Kubernetes Engineer
DevOps Systems Engineer with experience in Docker containers, Docker Swarm, Docker Compose, Ansible, Jenkins and other tools. This role requires applying container best practices in design, development and implementation.
At least 4 years of experience in DevOps, with knowledge of:
- Docker
- Docker Cloud / Containerization
- DevOps Best Practices
- Distributed Applications
- Deployment Architecture
- AWS experience (at a minimum)
- Exposure to Kubernetes / Serverless Architecture
Skills:
- 3-7+ years of experience in DevOps Engineering
- Strong experience with Docker Containers, Implementing Docker Containers, Container Clustering
- Experience with Docker Swarm, Docker Compose, Docker Engine
- Experience provisioning and managing VMs (virtual machines)
- Experience with/strong knowledge of network topologies and network research
- Jenkins, BitBucket, Jira
- Ansible or other Automation Configuration Management System tools
- Scripting and programming in languages such as Bash, Perl, Python, AWK, sed, PHP and shell
- Linux systems administration: Red Hat
Additional Preference:
Security, SSL configuration, Best Practices

Key Responsibilities:
- Design, implement, and maintain scalable, secure, and cost-effective infrastructure on AWS and Azure
- Set up and manage CI/CD pipelines for smooth code integration and delivery using tools like GitHub Actions, Bitbucket Runners, AWS Code build/deploy, Azure DevOps, etc.
- Containerize applications using Docker and manage orchestration with Kubernetes, ECS, Fargate, AWS EKS, Azure AKS.
- Manage and monitor production deployments to ensure high availability and performance
- Implement and manage CDN solutions using AWS CloudFront and Azure Front Door for optimal content delivery and latency reduction
- Define and apply caching strategies at application, CDN, and reverse proxy layers for performance and scalability
- Set up and manage reverse proxies and Cloudflare WAF to ensure application security and performance
- Implement infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates
- Administer and optimize databases (RDS, PostgreSQL, MySQL, etc.) including backups, scaling, and monitoring
- Configure and maintain VPCs, subnets, routing, VPNs, and security groups for secure and isolated network setups
- Implement monitoring, logging, and alerting using tools like CloudWatch, Grafana, ELK, or Azure Monitor
- Collaborate with development and QA teams to align infrastructure with application needs
- Troubleshoot infrastructure and deployment issues efficiently and proactively
- Ensure cloud cost optimization and usage tracking
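The caching-strategies responsibility above spans CDN, reverse-proxy and application layers; all of them come down to bounding how long a value may be reused. As a small illustration of the application-layer case, a minimal TTL cache sketch in Python (all names here are illustrative, not tied to any product in this posting):

```python
import time

class TTLCache:
    """Minimal in-memory cache with a per-entry time-to-live (illustrative only)."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict stale entries on read
            return default
        return value

cache = TTLCache(ttl_seconds=0.05)
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))   # fresh hit
time.sleep(0.1)
print(cache.get("user:42"))   # expired -> None
```

Real deployments would push the same idea to the edge via `Cache-Control` headers and CDN TTLs rather than an in-process dict.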
Required Skills & Experience:
- 3-4 years of hands-on experience in a DevOps role
- Strong expertise with both AWS and Azure cloud platforms
- Proficient in Git, branching strategies, and pull request workflows
- Deep understanding of CI/CD concepts and experience with pipeline tools
- Proficiency in Docker, container orchestration (Kubernetes, ECS/EKS/AKS)
- Good knowledge of relational databases and experience in managing DB backups, performance, and migrations
- Experience with networking concepts including VPC, subnets, firewalls, VPNs, etc.
- Experience with Infrastructure as Code tools (Terraform preferred)
- Strong working knowledge of CDN technologies: AWS CloudFront and Azure Front Door
- Understanding of caching strategies: edge caching, browser caching, API caching, and reverse proxy-level caching
- Experience with Cloudflare WAF, reverse proxy setups, SSL termination, and rate-limiting
- Familiarity with Linux system administration, scripting (Bash, Python), and automation tools
- Working knowledge of monitoring and logging tools
- Strong troubleshooting and problem-solving skills
Good to Have (Bonus Points):
- Experience with serverless architecture (e.g., AWS Lambda, Azure Functions)
- Exposure to cost monitoring tools like CloudHealth, Azure Cost Management
- Experience with compliance/security best practices (SOC2, ISO, etc.)
- Familiarity with Service Mesh (Istio, Linkerd) and API gateways
- Knowledge of Secrets Management tools (e.g., HashiCorp Vault, AWS Secrets Manager)
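The secrets-management bullet above usually translates to keeping credentials out of code and injecting them at runtime (via Vault, AWS Secrets Manager, or similar). A minimal sketch of the consuming side, with a made-up variable name and a fail-fast check rather than a silent default:

```python
import os

def require_secret(name: str) -> str:
    """Fetch a secret from the environment; raise instead of silently defaulting."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"required secret {name!r} is not set")
    return value

# Stand-in for a value that Vault / Secrets Manager would inject in production.
os.environ["DB_PASSWORD"] = "example-only"
print(require_secret("DB_PASSWORD"))
```

Failing fast at startup is generally preferable to discovering a missing credential mid-request.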
5+ years of experience in Cloud/DevOps roles.
Strong hands-on experience with AWS architecture, operations & automation (70% focus).
Solid Kubernetes/EKS administration experience (30% focus).
IaC experience (Terraform preferred).
Scripting (Python / Bash).
CI/CD tools (Jenkins, GitLab, GitHub Actions).
Experience working with BFSI or Managed Service projects is mandatory
Job Responsibilities:
- Managing and maintaining the efficient functioning of containerized applications and systems within an organization
- Design, implement, and manage scalable Kubernetes clusters in cloud or on-premise environments
- Develop and maintain CI/CD pipelines to automate infrastructure and application deployments, and track all automation processes
- Implement workload automation using configuration management tools, as well as infrastructure as code (IaC) approaches for resource provisioning
- Monitor, troubleshoot, and optimize the performance of Kubernetes clusters and underlying cloud infrastructure
- Ensure high availability, security, and scalability of infrastructure through automation and best practices
- Establish and enforce cloud security standards, policies, and procedures; work with agile methodologies
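The monitor-and-optimize duties above often reduce in practice to a poll-with-backoff loop around a health probe. A sketch with a stubbed probe (a real one would hit an HTTP health endpoint or query kubectl instead):

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=0.01):
    """Poll check() with exponential backoff; return True once it passes."""
    for attempt in range(attempts):
        if check():
            return True
        time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    return False

calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3  # stub: the service "recovers" on the third poll

print(wait_until_healthy(flaky_probe))
```

The same pattern underlies Kubernetes readiness probes, which express the retry schedule declaratively rather than in code.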
Primary Requirements:
- Kubernetes: Proven experience in managing Kubernetes clusters (min. 2-3 years)
- Linux/Unix: Proficiency in administering complex Linux infrastructures and services
- Infrastructure as Code: Hands-on experience with CM tools like Ansible, as well as knowledge of resource provisioning with Terraform or other cloud-based utilities
- CI/CD Pipelines: Expertise in building and monitoring complex CI/CD pipelines to manage the build, test, packaging, containerization and release processes of software
- Scripting & Automation: Strong scripting and process automation skills in Bash, Python
- Monitoring Tools: Experience with monitoring and logging tools (Prometheus, Grafana)
- Version Control: Proficient with Git and familiar with GitOps workflows.
- Security: Strong understanding of security best practices in cloud and containerized environments.
Skills/Traits that would be an advantage:
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
Roles & Responsibilities:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Knowledge in Linux/Unix Administration and Python/Shell Scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Knowledge of deployment automation, continuous integration and continuous deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture and caching mechanisms
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication skills, judgment and decision-making skills
We are looking for an experienced DevOps professional. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes that empower developers to deploy and release their code seamlessly.
Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
- Understanding accessibility and security compliance (depending on the specific project)
- User authentication and authorization between multiple systems, servers, and environments
- Integration of multiple data sources and databases into one system
- Understanding fundamental design principles behind a scalable application
- Configuration management tools (Ansible/Chef/Puppet), cloud service providers (AWS/DigitalOcean); Docker + Kubernetes ecosystem is a plus
- Should be able to make key decisions for our infrastructure, networking and security
- Manipulation of shell scripts during migration and DB connection
- Monitor production server health across parameters (CPU load, physical memory, swap memory) and set up a monitoring tool, such as Nagios, to watch production servers
- Create alerts and configure monitoring of specified metrics to manage cloud infrastructure efficiently
- Set up and manage VPCs and subnets; make connections between different zones; block suspicious IPs/subnets via ACLs
- Create and manage AMIs/snapshots/volumes; upgrade/downgrade AWS resources (CPU, memory, EBS)
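On Linux, the server-health parameters mentioned above (physical memory, swap memory) are exposed through /proc/meminfo. A small sketch that parses that file's format from a sample string, so it runs anywhere; a real monitor would read the file itself:

```python
SAMPLE_MEMINFO = """\
MemTotal:       16303428 kB
MemAvailable:   11237120 kB
SwapTotal:       2097148 kB
SwapFree:        2097148 kB
"""

def parse_meminfo(text):
    """Turn 'Key:  value kB' lines into a dict of kilobyte counts."""
    stats = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        stats[key] = int(rest.split()[0])
    return stats

stats = parse_meminfo(SAMPLE_MEMINFO)
print(stats["MemAvailable"], stats["SwapFree"])
```

An alerting rule would then compare `MemAvailable` or `SwapFree` against thresholds and page when they dip too low.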
The candidate will be responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams.
- Strong knowledge of configuration management tools such as Ansible, Chef and Puppet
- Extensive experience with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error-log reports
- Experienced in troubleshooting, backup, and recovery
- Excellent knowledge of cloud service providers such as AWS and DigitalOcean
- Good knowledge of the Docker and Kubernetes ecosystem
- Proficient understanding of code versioning tools, such as Git
- Must have experience working in an automated environment
- Good knowledge of Amazon Web Services architecture, including Amazon EC2, Amazon S3 (and Amazon Glacier), Amazon VPC and Amazon CloudWatch
- Scheduling jobs using crontab; creating swap memory
- Proficient knowledge of access management (IAM)
- Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
- Should have good knowledge of GCP
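The crontab item above relies on the standard five-field schedule syntax (minute, hour, day of month, month, day of week). A tiny sketch that labels each field of a sample entry; the script path is hypothetical:

```python
FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def describe_cron(entry):
    """Split a crontab line into its five time fields plus the command."""
    parts = entry.split(None, 5)
    schedule = dict(zip(FIELDS, parts[:5]))
    schedule["command"] = parts[5]
    return schedule

# "at 02:30 every day, run the (hypothetical) backup script"
print(describe_cron("30 2 * * * /usr/local/bin/backup.sh"))
```

Such an entry would normally be installed with `crontab -e` rather than parsed by hand; the parser here is only to make the field layout explicit.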
Educational Qualifications
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in a relevant field
Experience: 2-6 years
- Automate deployments of infrastructure components and repetitive tasks.
- Drive changes strictly via the infrastructure-as-code methodology.
- Promote the use of source control for all changes including application and system-level changes.
- Design & Implement self-recovering systems after failure events.
- Participate in system sizing and capacity planning of various components.
- Create and maintain technical documents such as installation/upgrade MOPs.
- Coordinate & collaborate with internal teams to facilitate installation & upgrades of systems.
- Support 24x7 availability for corporate sites & tools.
- Participate in rotating on-call schedules.
- Actively involved in researching, evaluating & selecting new tools & technologies.
- Cloud computing – AWS, OCI, OpenStack
- Automation/Configuration management tools such as Terraform & Chef
- Atlassian tools administration (JIRA, Confluence, Bamboo, Bitbucket)
- Scripting languages - Ruby, Python, Bash
- Systems administration experience – Linux (Red Hat), Mac, Windows
- SCM systems - Git
- Build tools - Maven, Gradle, Ant, Make
- Networking concepts - TCP/IP, Load balancing, Firewall
- High-Availability, Redundancy & Failover concepts
- SQL scripting & queries - DML, DDL, stored procedures
- Decisive and ability to work under pressure
- Prioritizing workload and multi-tasking ability
- Excellent written and verbal communication skills
- Database systems – Postgres, Oracle, or other RDBMS
- Mac automation tools - JAMF or other
- Atlassian Datacenter products
- Project management skills
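The SQL bullet above distinguishes DDL from DML. A self-contained SQLite sketch showing one statement of each kind (the table and values are made up, and SQLite itself does not support stored procedures, so that part is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the schema
conn.execute(
    "CREATE TABLE deploys (id INTEGER PRIMARY KEY, service TEXT, version TEXT)"
)

# DML: modify and read the data, using parameter substitution
conn.execute(
    "INSERT INTO deploys (service, version) VALUES (?, ?)", ("api", "1.4.2")
)
conn.commit()

row = conn.execute("SELECT service, version FROM deploys").fetchone()
print(row)  # ('api', '1.4.2')
```

Against Postgres or Oracle the statements would be the same shape; only the driver and the parameter placeholder style change.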
Qualifications
- 3+ years of hands-on experience in the field or related area
- Requires MS or BS in Computer Science or equivalent field
About the job
👉 TL;DR: We at Sarva Labs Inc. are looking for experienced Site Reliability Engineers to join our team. As a Protocol Developer, you will handle assets in data centers across Asia, Europe and the Americas for the World’s First Context-Aware Peer-to-Peer Network enabling Web4.0. We are looking for the person who will take ownership of DevOps, establish proper deployment processes, work with engineering teams and hustle through the Main Net launch.
About Us 🚀
Imagine if each user had their own chain with each transaction being settled by a dynamic group of nodes who come together and settle that interaction with near immediate finality without a volatile gas cost. That’s MOI for you, Anon.
Visit https://www.sarva.ai/ to know more about who we are as a company
Visit https://www.moi.technology/ to know more about the technology and team!
Visit https://www.moi-id.life/ , https://www.moibit.io/ , https://www.moiverse.io/ to know more
Read our developer documentation at https://apidocs.moinet.io/
What you'll do 🛠
- You will take over the ownership of DevOps, establish proper deployment processes and work with engineering teams to ensure an appropriate degree of automation for component assembly, deployment, and rollback strategies in medium to large scale environments
- Monitor components to proactively prevent system component failure, and enable the engineering team on system characteristics that require improvement
- You will ensure the uninterrupted operation of components through proactive resource management and activities such as security/OS/Storage/application upgrades
You'd fit in 💯 if you...
- Familiar with any of these providers: AWS, GCP, DO, Azure, RedSwitches, Contabo, Hetzner, Server4you, Velia, Psychz, Tier and so on
- Experience in virtualizing bare metals using Openstack / VMWare / Similar is a PLUS
- Seasoned in building and managing VMs, Containers and clusters across the continents
- Confident in making best use of Docker, Kubernetes with stateful set deployment, autoscaling, rolling update, UI dashboard, replications, persistent volume, ingress
- Must have experience deploying in multi-cloud environments
- Working knowledge on automation tools such as Terraform, Travis, Packer, Chef, etc.
- Working knowledge on Scalability in a distributed and decentralised environment
- Familiar with Apache, Rancher, Nginx, SELinux/Ubuntu 18.04 LTS/CentOS 7 and RHEL
- Monitoring tools like PM2, Grafana and so on
- Hands-on with ELK stack/similar for log analytics
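Several of the Kubernetes items above (stateful set deployment, persistent volumes, replication) meet in a single StatefulSet manifest. A minimal, illustrative sketch; every name and the image are placeholders:

```yaml
# Minimal StatefulSet sketch (names and image are placeholders)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db        # headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: postgres:16     # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Rolling updates and autoscaling then operate on this object: a StatefulSet replaces pods one ordinal at a time, each reattaching its own volume.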
🌱 Join Us
- Flexible work timings
- We’ll set you up with your workspace. Work out of our Villa which has a lake view!
- Competitive salary/stipend
- Generous equity options (for full-time employees)
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our Clients. We specialise in Cloud Data Engineering, AI/Machine Learning and advanced cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What we believe?
1. Best practices are overrated
○ Implementing best practices can only make one ‘average’.
2. Honesty and Transparency
○ We believe in naked truth. We do what we tell and tell what we do.
3. Client Partnership
○ Client - Vendor relationship: No. We partner with clients instead.
○ And our sales team comprises 100% of our clients.
How we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by
HAPPIER.
1. Humble: Happy people don’t carry ego around. We listen to understand; not to
respond.
2. Adaptable: We are comfortable with uncertainty. And we accept changes well. As
that’s what life's about.
3. Positive: We are super positive about work & life in general. We love to forget and
forgive. We don’t hold grudges. We don’t have time or adequate space for it.
4. Passionate: We are as passionate about the great street-food vendor across the
street as about Tesla’s new model and so on. Passion is what drives us to work and
makes us deliver the quality we deliver.
5. Innovative: Innovate or Die. We love to challenge the status quo.
6. Experimental: We encourage curiosity & making mistakes.
7. Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
1. Love for cloud: When was the last time your dinner entailed an act on “How would
‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did
the ‘Sheldon’ version of the same thing.
2. Passion for sales: When was the last time you stopped at a remote gas station while on vacation, and ended up helping the gas station owner SaaS-ify his 7 gas stations across other geographies?
3. Compassion for customers: You listen more than you speak. When you do speak,
people feel the need to listen.
4. Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on the cloud?’
Your bucket of undertakings:
This position will be responsible for consulting with clients and proposing architectural solutions to help move and improve infra from on-premise to cloud, or to help optimize cloud spend when moving from one public cloud to another.
1. Be the first one to experiment with new-age cloud offerings; help define best practices as a thought leader for cloud, automation & DevOps; and be a solution visionary and technology expert across multiple channels.
2. Continually augment skills and learn new tech as the technology and client needs
evolve
3. Use your experience in Google Cloud Platform, AWS or Microsoft Azure to build hybrid-cloud solutions for customers.
4. Provide leadership to project teams, and facilitate the definition of project
deliverables around core Cloud based technology and methods.
5. Define tracking mechanisms and ensure IT standards and methodology are met;
deliver quality results.
6. Participate in technical reviews of requirements, designs, code and other artifacts
7. Identify and keep abreast of new technical concepts in Google Cloud Platform
8. Security, Risk and Compliance - Advise customers on best practices around access
management, network setup, regulatory compliance and related areas.
Accomplishment Set
● Passionate, persuasive, articulate Cloud professional capable of quickly establishing
interest and credibility
● Good business judgment, a comfortable, open communication style, and a
willingness and ability to work with customers and teams.
● Strong service attitude and a commitment to quality.
● Highly organised and efficient.
● Confident working with others to inspire a high-quality standard.
Education, Experience, etc.
1. Is education overrated? Yes, we believe so. However, there is no way to locate you otherwise. So unfortunately we might have to look for a Bachelor's or Master's degree in engineering from a reputed institute, or you should have been programming since the age of 12. And the latter is better; we will find you faster if you specify the latter in some manner. It's not just the degree: we are not too thrilled by tech certifications either ... :)
2. To reiterate: Passion to tech-awesome, insatiable desire to learn the latest of the
new-age cloud tech, highly analytical aptitude and a strong ‘desire to deliver’ outlives
those fancy degrees!
3. 1 - 5 years of experience with at least 2 - 3 years of hands-on experience in Cloud
Computing (AWS/GCP/Azure) and IT operational experience in a global enterprise
environment.
4. Good analytical, communication, problem solving, and learning skills.
5. Knowledge of programming against cloud platforms such as Google Cloud Platform
Roles and Responsibilities
- 5 - 8 years of experience in Infrastructure setup on Cloud, Build/Release Engineering, Continuous Integration and Delivery, Configuration/Change Management.
- Good experience with Linux/Unix administration and moderate to significant experience administering relational databases such as PostgreSQL, etc.
- Experience with Docker and related tools (Cassandra, Rancher, Kubernetes etc.)
- Experience of working in Config management tools (Ansible, Chef, Puppet, Terraform etc.) is a plus.
- Experience with cloud technologies like Azure
- Experience with monitoring and alerting (TICK, ELK, Nagios, PagerDuty)
- Experience with distributed systems and related technologies (NSQ, RabbitMQ, SQS, etc.) is a plus
- Experience with scaling data store technologies (PostgreSQL, Scylla, Redis) is a plus
- Experience with SSH Certificate Authorities and Identity Management (Netflix BLESS) is a plus
- Experience with multi-domain SSL certs and provisioning (Let's Encrypt) is a plus
- Experience with chaos engineering or similar methodologies is a plus
Must-Have’s:
- Hands-on DevOps (Git, Ansible, Terraform, Jenkins, Python/Ruby)
Job Description:
- Knowledge of DevOps CI/CD pipelines
- Understanding of version control systems like Git, including branching and merging strategies
- Knowledge of continuous delivery and integration tools like Jenkins and GitHub
- Knowledge of developing code using Ruby or Python, and Java or PHP
- Knowledge of writing Unix shell (bash, ksh) scripts
- Knowledge of automation/configuration management using Ansible, Terraform, Chef or Puppet
- Experience and willingness to keep learning in a Linux environment
- Ability to provide after-hours support as needed for emergency or urgent situations
Nice to have’s:
- Proficient with container-based products like Docker and Kubernetes
- Excellent communication skills (verbal and written)
- Able to work in a team and be a team player
- Knowledge of PHP, MySQL, Apache and other open source software
- BA/BS in computer science or similar







