About the job
TL;DR: Sarva Labs Inc. is looking for experienced Site Reliability Engineers to join our team. In this role, you will manage assets in data centers across Asia, Europe, and the Americas for the world's first context-aware peer-to-peer network enabling Web4.0. We are looking for someone who will take ownership of DevOps, establish proper deployment processes, work with the engineering teams, and hustle through the Main Net launch.

About Us
Imagine if each user had their own chain, with each transaction settled by a dynamic group of nodes who come together and settle that interaction with near-immediate finality, without volatile gas costs. That's MOI for you, Anon.

Visit https://www.sarva.ai/ to learn more about who we are as a company.
Visit https://www.moi.technology/ to learn more about the technology and team!
Visit https://www.moi-id.life/, https://www.moibit.io/, and https://www.moiverse.io/ to learn more.
Read our developer documentation at https://apidocs.moinet.io/

What you'll do
- Take ownership of DevOps, establish proper deployment processes, and work with engineering teams to ensure an appropriate degree of automation for component assembly, deployment, and rollback strategies in medium- to large-scale environments
- Monitor components to proactively prevent system component failures, and advise the engineering team on system characteristics that need improvement
- Ensure uninterrupted operation of components through proactive resource management and activities such as security/OS/storage/application upgrades

You'd fit in if you...
- Are familiar with providers such as AWS, GCP, DO, Azure, RedSwitches, Contabo, Hetzner, Server4You, Velia, Psychz, Tier, and so on
- Have experience virtualizing bare metal using OpenStack / VMware / similar (a plus)
- Are seasoned in building and managing VMs, containers, and clusters across continents
- Are confident making the best use of Docker and Kubernetes, including StatefulSet deployments, autoscaling, rolling updates, the UI dashboard, replication, persistent volumes, and ingress
- Have experience deploying in multi-cloud environments (must-have)
- Have working knowledge of automation tools such as Terraform, Travis, Packer, Chef, etc.
- Have working knowledge of scalability in distributed and decentralised environments
- Are familiar with Apache, Rancher, Nginx, and SELinux on Ubuntu 18.04 LTS / CentOS 7 / RHEL
- Know monitoring tools like PM2, Grafana, and so on
- Are hands-on with the ELK stack or similar for log analytics
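Rolling updates come up twice in the list above. Purely as an illustration (not part of the posting, and the function name is ours), the surge/unavailability arithmetic that Kubernetes Deployments apply during a rolling update can be sketched in a few lines; percentages resolve against the replica count, with maxSurge rounding up and maxUnavailable rounding down:

```python
import math

def rolling_update_bounds(replicas: int, max_surge: str, max_unavailable: str) -> dict:
    """Resolve Kubernetes-style rolling-update settings to absolute pod counts.

    maxSurge rounds up and maxUnavailable rounds down when given as
    percentages, matching the Deployment controller's documented rounding.
    """
    def resolve(value: str, round_up: bool) -> int:
        if value.endswith("%"):
            frac = replicas * int(value[:-1]) / 100
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return {
        "max_pods_during_rollout": replicas + surge,
        "min_ready_pods_during_rollout": replicas - unavailable,
    }

# With 10 replicas and the default 25%/25% settings, a rollout may run up
# to 13 pods while keeping at least 8 ready.
print(rolling_update_bounds(10, "25%", "25%"))
```

The same arithmetic explains why a rollout with maxUnavailable of 0 always needs surge capacity: the old pods cannot be taken down before their replacements are ready.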

Join Us
- Flexible work timings
- We'll set you up with your workspace. Work out of our villa, which has a lake view!
- Competitive salary/stipend
- Generous equity options (for full-time employees)
About Sarva Labs Inc
At Sarva Labs, we bring personalization to open networks through full user ownership and complete control of every dimension of users' digital interactions. Sarva Labs is developing the world's first protocol to bring personalization to open networks and enable an Internet for value transfer: MOI - My Own Internet.
MOI enables complete user ownership and control, empowers the new Internet of value, introduces a personalised multidimensional value structure for participants measured using TDU (Total Digital Utility), and integrates context as a foundational computational dimension of P2P networks.
Job Title: Cloud Engineer - Azure DevOps

Job Location: Mumbai (Andheri East)

About the company:
MIRACLE HUB CLIENT is a predictive analytics and artificial intelligence company headquartered in Boston, US, with offices across the globe. We build prediction models and algorithms to solve high-priority business problems. Working across multiple industries, we have designed and developed breakthrough analytic products and decision-making tools by leveraging predictive analytics, AI, machine learning, and deep domain expertise.

Skill-sets Required:
- Azure Architecture
- DevOps Expert
- Infrastructure as Code
- Automate CI/CD pipelines
- Security and Risk Compliance
- Validate Tech Design

Job Role:
- Create a well-informed cloud strategy and manage the adoption of Azure-based architecture
- Develop and organize automated cloud systems
- Work with other teams on continuous integration and continuous deployment pipelines to deliver solutions
- Work closely with IT security to monitor the company's cloud privacy
Desired Candidate Profile:

- Bachelor's degree in computer science, computer engineering, or a relevant field.
- A minimum of 3 years' experience in a similar role.
- Strong knowledge of database structure systems and data mining.
- Excellent organizational and analytical abilities.
- Outstanding problem solver.
- Immediate joining (a notice period of 1 month is also acceptable)
- Excellent English communication and presentation skills, both verbal and written
- Charismatic, competitive, and enthusiastic personality with negotiation skills

Compensation: 12-15 LPA with a minimum of 5 years of experience (or as per last drawn salary).

Are you an experienced Infrastructure/DevOps Engineer looking for an exciting remote opportunity to design, automate, and scale modern cloud environments? We're seeking a skilled engineer with strong expertise in Terraform and DevOps practices to join our growing team. If you're passionate about automation, cloud infrastructure, and CI/CD pipelines, we'd love to hear from you!
Key Responsibilities:
- Design, implement, and manage cloud infrastructure using Terraform (IaC).
- Build and maintain CI/CD pipelines for seamless application deployment.
- Ensure scalability, reliability, and security of cloud-based systems.
- Collaborate with developers and QA to optimize environments and workflows.
- Automate infrastructure provisioning, monitoring, and scaling.
- Troubleshoot infrastructure and deployment issues quickly and effectively.
- Stay up to date with emerging DevOps tools, practices, and cloud technologies.
Requirements:
- 5+ years of professional experience in DevOps or Infrastructure Engineering.
- Strong expertise in Terraform and Infrastructure as Code (IaC).
- Hands-on experience with AWS / Azure / GCP (at least one cloud platform).
- Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, etc.).
- Experience with Docker, Kubernetes, and container orchestration.
- Strong knowledge of Linux systems, networking, and security best practices.
- Familiarity with monitoring & logging tools (Prometheus, Grafana, ELK, etc.).
- Scripting experience (Bash, Python, or similar).
- Excellent problem-solving skills and ability to work in remote teams.
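Terraform and Infrastructure as Code are the core of this role. As a toy illustration of ours (not part of the job description, and the names are invented), the "plan" step at the heart of IaC tools is a diff between desired and actual state:

```python
def plan(desired: dict, actual: dict) -> dict:
    """Compute a Terraform-plan-style action set from two state mappings.

    Keys are resource names, values are their attribute dicts. Resources
    only in `desired` are created, only in `actual` are destroyed, and
    present in both with differing attributes are updated.
    """
    create = sorted(desired.keys() - actual.keys())
    destroy = sorted(actual.keys() - desired.keys())
    update = sorted(k for k in desired.keys() & actual.keys()
                    if desired[k] != actual[k])
    return {"create": create, "update": update, "destroy": destroy}

# Hypothetical state: 'web' exists only in config, 'db' only in the cloud.
desired = {"vpc": {"cidr": "10.0.0.0/16"}, "web": {"type": "t3.micro"}}
actual = {"vpc": {"cidr": "10.0.0.0/16"}, "db": {"type": "t3.small"}}
print(plan(desired, actual))
# → {'create': ['web'], 'update': [], 'destroy': ['db']}
```

Real tools add dependency ordering, providers, and state locking on top, but the diff-then-apply loop is the part that makes infrastructure changes reviewable.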
Perks and Benefits:
- Competitive salary with remote work flexibility.
- Opportunity to work with global clients on modern infrastructure.
- Growth and learning opportunities in cutting-edge DevOps practices.
- Collaborative team culture that values automation and innovation.
Job Description
We are seeking a seasoned DevOps Architect to join our dynamic team. The ideal candidate should possess a deep understanding of DevOps principles, system design, and architecture, with a focus on creating robust and scalable infrastructure solutions through automation. This role requires a candidate with hands-on experience in development, testing, and deployment processes. Additionally, the candidate should have a minimum of 5 years of experience in DevOps operations and should be proficient in team management, coordination, problem-solving, troubleshooting, and technical expertise.
About the company:
A rapidly growing omni-channel luxury retailer with eight stores across Mumbai, Delhi, Kolkata and a global e-commerce platform servicing 65+ countries worldwide. The 18-year-old company is an established market leader with considerable brand equity.
Location: Prabhadevi, Mumbai
Key Responsibilities:
- System Design and Architecture: Develop robust and scalable system designs that align with business requirements and industry best practices.
- Automation: Implement automation solutions to streamline processes and enhance system reliability.
- Development, Testing, and Deployment: Oversee the entire software development lifecycle, from code creation to testing and deployment.
- Coordination and Issue Resolution: Collaborate with cross-functional teams, resolve technical issues, and ensure smooth project execution.
- Troubleshooting: Apply your technical expertise to diagnose and resolve complex system issues efficiently.
- Interpersonal Skills: Communicate effectively with team members, stakeholders, and management to ensure project success.
- Ecommerce (B2C) Expertise: Bring in-depth knowledge of Ecommerce (B2C) operations to tailor DevOps solutions to our specific needs.
- Infrastructure Automation: Design and implement infrastructure automation tools and workflows to support CI/CD initiatives.
- CI/CD Pipeline Management: Build and operate complex CI/CD pipelines at scale, ensuring efficient software delivery.
- Cloud Expertise: Possess knowledge of handling GCP/AWS clouds, optimizing cloud resources, and managing cloud-based applications.
- Cybersecurity: Ensure that systems are safe and secure against cybersecurity threats, implementing best practices for data protection and compliance.
Requirements
Qualifications:
- Bachelor's degree in Computer Science or related field (Master's preferred).
- Minimum 5 years of hands-on experience in DevOps operations.
- Has worked to ensure system reliability, scale & performance in high growth environments.
- Experienced in designing and implementing scalable and robust IT solutions.
- Strong technical background and proficiency in DevOps tools and practices.
- Experience with Ecommerce (B2C) platforms is mandatory.
- Excellent team management, coordination, and interpersonal skills.
- Proficiency in troubleshooting and issue resolution.
- Familiarity with the latest open-source technologies.
- Expertise in CI/CD pipeline management.
- Knowledge of GCP/AWS cloud services.
- Understanding of cybersecurity best practices.
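CI/CD pipeline management appears in both the responsibilities and the qualifications above. As a generic sketch of ours (not this company's pipeline, and the names are illustrative), the fail-fast stage semantics shared by Jenkins, GitHub Actions, and similar tools:

```python
from typing import Callable

def run_pipeline(stages: list) -> dict:
    """Run named (name, step) stages in order; a failing stage marks every
    later stage as skipped, mirroring fail-fast CI behaviour."""
    results = {}
    failed = False
    for name, step in stages:
        if failed:
            results[name] = "skipped"
        elif step():
            results[name] = "passed"
        else:
            results[name] = "failed"
            failed = True
    return results

print(run_pipeline([
    ("build", lambda: True),
    ("test", lambda: False),   # simulated test failure
    ("deploy", lambda: True),  # never runs: earlier stage failed
]))
# → {'build': 'passed', 'test': 'failed', 'deploy': 'skipped'}
```

Keeping deploy gated behind build and test in this way is exactly what "building and operating complex CI/CD pipelines at scale" protects: broken artifacts never reach an environment.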
Benefits
- Group Mediclaim cover: 2.5 L sum assured (employee + spouse + 2 children) & Group Personal Accident: 5 L sum assured.
- Rewards & Recognition programmes
Head Kubernetes
Docker, Kubernetes Engineer - Remote (Pan India)

ABOUT US
Established in 2009, Ashnik is a leading open-source solutions and consulting company in Southeast Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecture, and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open-source technologies. Our offerings form a critical part of digital transformation, Big Data platforms, cloud and web acceleration, and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent, and HashiCorp as their key partner in the region. Our team members bring decades of experience in giving enterprises the confidence to adopt open-source software and are known for their thought leadership.

THE POSITION
Ashnik is looking for a talented and passionate Technical Consultant to be part of the training team and work with customers on DevOps solutions. You will be responsible for implementation and consulting work for customers across SEA and India. We are looking for people with qualities like:
- Passion for working with different customers in different environments
- Excellent communication and articulation skills
- Aptitude for learning new technology and willingness to understand technologies they are not directly working on
- Willingness to travel within and outside the country
- Ability to work independently at customer sites and navigate across different teams
SKILLS AND EXPERIENCE
ESSENTIAL SKILLS
- 3+ years of experience with a B.E/B.Tech, MCA, or graduate degree with higher education in a technical field
- Prior experience with Docker containers, Swarm, and/or Kubernetes (must-have)
- Understanding of operating systems, processes, networking, and containers
- Experience/exposure in writing and managing Dockerfiles and creating container images
- Experience working on the Linux platform
- Ability to perform installations and system/software configuration on Linux
- Awareness of networking and SSL/TLS basics
- Awareness of tools used in complex enterprise IT infrastructure, e.g. LDAP, AD, centralized logging solutions, etc.
A preferred candidate will have:
- Prior experience with open-source Kubernetes or another container management platform like OpenShift, EKS, ECS, etc.
- CNCF Certified Kubernetes Administrator/Developer certification
- Experience in container monitoring using Prometheus, Datadog, etc.
- Experience with CI/CD tools and their usage
- Knowledge of a scripting language, e.g. shell scripting
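Writing and managing Dockerfiles is listed as an essential skill. As a small, hedged sketch (the rules chosen below are our own illustration, not Ashnik's checklist), this is the kind of sanity check a consultant might automate before handing a Dockerfile to a customer:

```python
def lint_dockerfile(text: str) -> list:
    """Flag a few common Dockerfile smells. Intentionally minimal:
    real linters such as hadolint cover far more rules."""
    warnings = []
    lines = [l.strip() for l in text.splitlines()
             if l.strip() and not l.strip().startswith("#")]
    if not lines or not lines[0].upper().startswith("FROM"):
        warnings.append("first instruction should be FROM")
    for line in lines:
        upper = line.upper()
        if upper.startswith("FROM") and (":" not in line or line.endswith(":latest")):
            warnings.append("pin the base image to a specific tag, not latest/untagged")
        if upper.startswith("ADD "):
            warnings.append("prefer COPY over ADD unless extracting archives")
    return warnings

# An unpinned base image plus ADD triggers both example rules.
print(lint_dockerfile("FROM ubuntu\nADD app /app\n"))
```

Automating even trivial checks like these keeps implementation-phase Dockerfiles consistent across many customer teams.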
RESPONSIBILITIES
First 2 months:
- Get an in-depth understanding of containers, Docker Engine and runtime, Swarm, and Kubernetes
- Get hands-on experience with various features of Mirantis Kubernetes Engine and Mirantis Secure Registry
After 2 months:
- Work with Ashnik's pre-sales and solution architects to deploy Mirantis Kubernetes Engine for customers
- Work with customer teams and guide them on how the Kubernetes platform can be integrated with other tools in their environment (CI/CD, identity management, storage, etc.)
- Work with customers to containerize their applications
- Write Dockerfiles for customers during the implementation phase
- Help customers design their network and security policy for effective management of applications deployed using Swarm and/or Kubernetes
- Help customers design their deployments with either Swarm services or Kubernetes Deployments
- Work with the pre-sales and sales teams to help customers during their evaluation of the Mirantis Kubernetes platform
- Conduct workshops for customers as needed for technical hand-holding and technical handover
Package: up to 30 L
Office: Mumbai
JOB RESPONSIBILITIES:
- Responsible for the design, implementation, and continuous improvement of automated CI/CD infrastructure
- Displays technical leadership and oversight of implementation and deployment planning, system integration, ongoing data validation processes, quality assurance, delivery, operations, and sustainability of technical solutions
- Responsible for designing topology to meet requirements for uptime, availability, scalability, robustness, fault tolerance, and security
- Implements proactive measures for automated detection and resolution of recurring operational issues
- Leads the operational support team to manage incidents, document root causes, and track preventive measures
- Identifies and deploys cybersecurity measures by continuously validating/fixing vulnerability assessment reports and managing risk
- Responsible for the design and development of tools and installation procedures
- Develops and maintains accurate estimates, timelines, project plans, and status reports
- Organizes and maintains packaging and deployment of various internal modules and third-party vendor libraries
- Responsible for the employment, timely performance evaluation, counselling, development, and discipline of assigned employees
- Participates in calls and meetings with customers, vendors, and internal teams on a regular basis
- Performs infrastructure cost analysis and optimization

SKILLS & ABILITIES

Experience: Minimum of 10 years of experience, with good technical knowledge of build, release, and systems engineering

Technical Skills:
- Experience with DevOps toolchains such as Docker, Rancher, Kubernetes, Bitbucket
- Experience with Apache, Nginx, Tomcat, Prometheus, Grafana
- Ability to learn/use a wide variety of open-source technologies and tools
- Sound understanding of cloud technologies, preferably AWS
- Linux, Windows, scripting, configuration management, build and release engineering
- 6 years of experience in DevOps practices, with a good understanding of DevOps and Agile principles
- Good scripting skills (Python/Perl/Ruby/Bash)
- Experience with standard continuous integration tools (Jenkins/Bitbucket Pipelines)
- Work with software configuration management systems (Puppet/Chef/Salt/Ansible)
- Microsoft Office Suite (Word, Excel, PowerPoint, Visio, Outlook) and other business productivity tools
- Working knowledge of HSM and PKI (good to have)

Location:
- Bangalore
Experience:
- 10+ years
- Experience with Linux
- Experience using Python or shell scripting (for automation)
- Hands-on experience implementing CI/CD processes
- Experience working with one cloud platform (AWS, Azure, or Google)
- Experience working with configuration management tools such as Ansible and Chef
- Experience working with the containerization tool Docker
- Experience working with the container orchestration tool Kubernetes
- Experience in source control management, including SVN and/or Bitbucket and GitHub
- Experience with setup and management of monitoring tools like Nagios, Sensu, and Prometheus, or other popular tools
- Hands-on experience in Linux, scripting languages, and AWS is mandatory
- Troubleshoot and triage development and production issues
About the Company
Blue Sky Analytics is a climate-tech startup that combines the power of AI and satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll be figuring out how to deploy applications, ensuring high availability and fault tolerance, along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can go up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code: CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions.
- Advanced hold on AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization: Docker, Kubernetes, ECS.
- Experience with managed services like database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
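Scaling applications "up and down on command" follows well-known arithmetic; for instance, the Kubernetes Horizontal Pod Autoscaler computes desired replicas as ceil(currentReplicas × currentMetric / targetMetric). Sketched below as a simplification of ours (the real controller also applies tolerances and stabilization windows):

```python
import math

def desired_replicas(current: int, metric_value: float, target_value: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Kubernetes HPA core formula: ceil(current * metric / target),
    clamped to the configured replica bounds."""
    desired = math.ceil(current * metric_value / target_value)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 80% CPU against a 50% target → scale out to 7.
print(desired_replicas(4, metric_value=80.0, target_value=50.0))
# → 7
```

The same formula scales back in when load drops, which is what makes target-tracking autoscaling a good fit for bursty data-ingestion pipelines like the ones described above.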
Benefits
- Work from anywhere: work by the beach or from the mountains.
- Open source at heart: we are building a community you can use, contribute to, and collaborate on.
- Own a slice of the pie: possibility of becoming an owner by investing in ESOPs.
- Flexible timings: fit your work around your lifestyle.
- Comprehensive health cover: health cover for you and your dependents to keep you tension-free.
- Work machine of choice: buy a device and own it after completing a year at BSA.
- Quarterly retreats: yes, there's work, but then there's all the non-work fun, aka the retreat!
- Yearly vacations: take time off to rest and get ready for the next big assignment by availing the paid leaves.
Location: Pune
Experience: 1.5 to 3 years
Payroll: Direct with client
Salary Range: 3 to 5 lacs (depending on existing salary)
Role and Responsibility
• Good understanding of and experience with AWS CloudWatch for EC2, other Amazon Web Services and resources, and other sources
• Collect and store logs
• Monitor and store logs
• Analyze logs
• Configure alarms
• Configure dashboards
• Preparation of and adherence to SOPs and documentation
• Good understanding of AWS in a DevOps context
• Experience with AWS services (EC2, ECS, CloudWatch, VPC, networking)
• Experience with a variety of infrastructure, application, and log monitoring tools, e.g. Prometheus, Grafana
• Familiarity with Docker, Linux, and Linux security
• Knowledge of and experience with container-based architectures like Docker
• Experience performing troubleshooting on AWS services
• Experience configuring services in AWS like EC2, S3, ECS
• Experience with Linux system administration and engineering skills on cloud infrastructure
• Knowledge of load balancers, firewalls, and network switching components
• Knowledge of Internet-based technologies: TCP/IP, DNS, HTTP, SMTP, and networking concepts
• Knowledge of security best practices
• Comfortable supporting production environments 24x7
• Strong communication skills
Job Description

- Develop best practices for the team, and take responsibility for the architecture
- Deliver solutions and documentation operations that meet the engineering department's quality standards
- Participate in production outages, handle complex issues, and work towards resolution
- Develop custom tools and integrations with existing tools to increase engineering productivity

Required Experience and Expertise

- Good knowledge of Terraform; someone who has worked on large TF code bases
- Deep understanding of Terraform best practices and writing TF modules
- Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3, and IAM; a cost-aware mindset towards cloud services
- Deep understanding of kernel, networking, and OS fundamentals

NOTICE PERIOD: max 30 days
- Azure DevOps: working experience with Azure YAML pipelines. (Note: some candidates say they have worked with YAML, but for Jenkins rather than Azure DevOps.)
- Azure: infrastructure automation using Terraform/ARM templates. (Note: some candidates say they have worked with Terraform, but for AWS rather than Azure. Please confirm Terraform experience for Azure infrastructure automation.)
- PowerShell scripting to automate and deploy .NET applications.









