About the job
TL;DR: We at Sarva Labs Inc. are looking for experienced Site Reliability Engineers to join our team. In this role, you will handle assets in data centers across Asia, Europe, and the Americas for the World's First Context-Aware Peer-to-Peer Network enabling Web4.0. We are looking for someone who will take ownership of DevOps, establish proper deployment processes, work with engineering teams, and hustle through the MainNet launch.
About Us
Imagine if each user had their own chain, with each transaction settled by a dynamic group of nodes who come together and settle that interaction with near-immediate finality and no volatile gas cost. That's MOI for you, Anon.
Visit https://www.sarva.ai/ to know more about who we are as a company
Visit https://www.moi.technology/ to know more about the technology and team!
Visit https://www.moi-id.life/, https://www.moibit.io/, and https://www.moiverse.io/ to know more
Read our developer documentation at https://apidocs.moinet.io/
What you'll do
- Take ownership of DevOps, establish proper deployment processes, and work with engineering teams to ensure an appropriate degree of automation for component assembly, deployment, and rollback strategies in medium- to large-scale environments
- Monitor components to proactively prevent system failures, and advise the engineering team on system characteristics that require improvement
- Ensure the uninterrupted operation of components through proactive resource management and activities such as security, OS, storage, and application upgrades
You'd fit in if you...
- Are familiar with any of these providers: AWS, GCP, DO, Azure, RedSwitches, Contabo, Hetzner, Server4you, Velia, Psychz, Tier, and so on
- Have experience virtualizing bare-metal servers using OpenStack, VMware, or similar (a PLUS)
- Are seasoned in building and managing VMs, containers, and clusters across continents
- Are confident making the best use of Docker and Kubernetes: StatefulSet deployments, autoscaling, rolling updates, the UI dashboard, replication, persistent volumes, and ingress
- Have experience deploying in multi-cloud environments (must have)
- Have working knowledge of automation tools such as Terraform, Travis, Packer, Chef, etc.
- Have working knowledge of scalability in distributed and decentralised environments
- Are familiar with Apache, Rancher, Nginx, and SELinux on Ubuntu 18.04 LTS, CentOS 7, and RHEL
- Have used monitoring tools like PM2, Grafana, and so on
- Are hands-on with the ELK stack or similar for log analytics
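The Kubernetes items above (StatefulSet deployments, rolling updates, persistent volumes) can be sketched as a minimal manifest. This is an illustrative example only: the `moi-node` name, image, port, and storage size are hypothetical placeholders, not the project's actual deployment configuration.

```yaml
# Hypothetical StatefulSet sketch: a 3-replica stateful service with stable
# pod identity, one persistent volume per pod, and a rolling update strategy.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: moi-node              # placeholder name
spec:
  serviceName: moi-node       # headless service giving each pod stable DNS
  replicas: 3
  updateStrategy:
    type: RollingUpdate       # replaces pods one at a time, highest ordinal first
  selector:
    matchLabels:
      app: moi-node
  template:
    metadata:
      labels:
        app: moi-node
    spec:
      containers:
        - name: node
          image: registry.example.com/moi-node:latest   # placeholder image
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /var/lib/node
  volumeClaimTemplates:       # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

A HorizontalPodAutoscaler and an Ingress resource would typically sit alongside a manifest like this to cover the autoscaling and ingress items in the list.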
Join Us
- Flexible work timings
- We'll set you up with your workspace. Work out of our villa, which has a lake view!
- Competitive salary/stipend
- Generous equity options (for full-time employees)
About Sarva Labs Inc
We at Sarva Labs bring personalization to open networks through full user ownership and complete control of all dimensions of their digital interactions. Sarva Labs is developing the World's First Protocol to bring personalization to Open Networks and enable an Internet for Value Transfer. MOI - MY OWN INTERNET.
MOI enables complete user ownership and control, empowers the New Internet of Value, introduces a new personalised multidimensional value structure for participants measured using TDU (Total Digital Utility), and integrates context as a foundational computational dimension of P2P networks.
Similar jobs
- Proven work experience as an Azure DevOps engineer.
- Strong proficiency in PowerShell scripting.
- Experience with Azure DevOps and YAML scripting.
- Experience with Microsoft Azure cloud.
- Hands-on experience with Microsoft Azure services, including Azure Automation Account, Azure Functions, Azure Webapp, and Azure Storage.
- Develop and maintain Azure DevOps pipelines for continuous integration and deployment.
- Implement automation solutions using PowerShell scripting to streamline processes and workflows.
- Collaborate with development and operations teams to ensure smooth deployment and operation of applications on Azure cloud infrastructure.
- Azure AZ-400 certification is an added advantage.
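The Azure DevOps pipeline and YAML items above can be illustrated with a minimal `azure-pipelines.yml` sketch. All names here are hypothetical: the service connection, web app, and resource group are placeholders, not a real environment.

```yaml
# Hypothetical azure-pipelines.yml sketch: trigger on pushes to main, run an
# inline PowerShell automation step, then deploy via the Azure CLI task.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: PowerShell@2              # inline PowerShell scripting step
    inputs:
      targetType: inline
      script: |
        Write-Host "Running build automation"
  - task: AzureCLI@2                # deploy using a named service connection
    inputs:
      azureSubscription: my-service-connection   # placeholder connection name
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        az webapp deploy --name my-webapp --resource-group my-rg --src-path app.zip
```

In practice a build stage would produce the `app.zip` artifact before the deploy step runs.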
We are now seeking a talented and motivated individual to contribute to our product in the Cloud data protection space. The ability to clearly comprehend customer needs in a cloud environment, excellent troubleshooting skills, and the ability to focus on problem resolution until completion are required.
Responsibilities Include:
Review proposed feature requirements
Create test plan and test cases
Analyze performance, diagnosis, and troubleshooting
Enter and track defects
Interact with customers, partners, and development teams
Researching customer issues and product initiatives
Provide input for service documentation
Required Skills:
Bachelor's degree in Computer Science, Information Systems or related discipline
3+ years' experience inclusive of Software as a Service and/or DevOps engineering experience
Experience with AWS services like VPC, EC2, RDS, SES, ECS, Lambda, S3, ELB
Experience with technologies such as REST, Angular, Messaging, Databases, etc.
Strong troubleshooting skills and issue isolation skills
Possess excellent communication skills (written and verbal English)
Must be able to work as an individual contributor within a team
Ability to think outside the box
Experience in configuring infrastructure
Knowledge of CI / CD
Desirable skills:
Programming skills in scripting languages (e.g., python, bash)
Knowledge of Linux administration
Knowledge of testing tools/frameworks: TestNG, Selenium, etc
Knowledge of Identity and Security
Ask any CIO about corporate data and they'll happily share all the work they've done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you'll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn't matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It's a target for both cybercriminals and regulators, but securing it is incredibly difficult. It's the data challenge of our generation.
Existing approaches aren't doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What's needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That's our mission. Concentric's semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)

Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with public clouds like AWS
- Operating and monitoring cloud infrastructure on AWS
- Primary focus on building, implementing, and managing operational support
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure
- Expert at one of the scripting languages - Python, shell, etc.
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and services monitoring
- Proven experience with CI/CD pipelines and handling database-upgrade-related issues
- Good understanding of, and experience working with, containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable.
Specific responsibilities will include:
- Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible.
- Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications.
- Ensuring that the organization's systems are secure and compliant with industry standards.
- Collaborating with software developers to design and implement infrastructure as code.
- Providing mentorship and technical guidance to team members.
- Troubleshooting and resolving technical issues in collaboration with other IT professionals.
- Participating in the development and maintenance of the organization's disaster recovery and incident response plans.
To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment.
Azure DevOps
On-premises to Azure migration
Docker, Kubernetes
Terraform, CI/CD pipeline
9+ Location
Location - BG, Hyderabad, Remote, Hybrid
Budget - up to 30 LPA
Profile: DevOps Engineer
Experience: 5-8 Yrs
Notice Period: Immediate to 30 Days
Job Description:
Technical Experience (Must Have):
Cloud: Azure
DevOps Tool: Terraform, Ansible, Github, CI-CD pipeline, Docker, Kubernetes
Network: Cloud Networking
Scripting Language: Any/All - Shell Script, PowerShell, Python
OS: Linux (Ubuntu, RHEL, etc.)
Database: MongoDB
Professional Attributes: Excellent communication, written, presentation, and problem-solving skills.
Experience: Minimum of 5-8 years of experience in Cloud Automation and Application
Additional Information (Good to have):
Microsoft Azure Fundamentals AZ-900
Terraform Associate
Docker
Certified Kubernetes Administrator
Role:
Building and maintaining tools to automate application and infrastructure deployment, and to monitor operations.
Design and implement cloud solutions which are secure, scalable, resilient, monitored, auditable, and cost-optimized.
Implementing transformation from an as-is state to the future state.
Coordinating with other members of the DevOps team, Development, Test, and other teams to enhance and optimize existing processes.
Provide systems support; implement monitoring, logging, and alerting solutions that enable the production systems to be monitored.
Writing Infrastructure as Code (IaC) using industry-standard tools and services.
Writing application deployment automation using industry-standard deployment and configuration tools.
Design and implement continuous delivery pipelines that serve the purpose of provisioning and operating client test as well as production environments.
Implement and stay abreast of Cloud and DevOps industry best practices and tooling.
- 2 years of experience in DevOps.
- Hands-on knowledge and experience with version control tools such as Git and SVN.
- Experience working with Apache, Nginx, JBoss, and Tomcat servers.
- Understanding of load balancing technologies.
- Knowledge of containerization technologies such as Docker and Kubernetes.
- Strong in GitLab CI/CD.
- Passionate about resolving reliability issues and identifying mitigation strategies going forward.
- Basic awareness but strong interest to learn, work, and grow in the DevOps area.
- DevOps/Operations, Dev, or QA background with some awareness or practical experience working on Cloud (AWS services: ELB, VPC, S3, RDS), Python/Shell scripting, Jenkins, Docker, Kubernetes.
- Knowledge of Infrastructure-as-Code concepts using AWS, Terraform, CloudFormation.
- Should be good in Unix and aware of basic networking concepts (DNS, DHCP, VPN, NAT, TCP/IP).
- Proficiency in scripting languages including Bash, Python.
- Help increase system performance with a focus on high availability and scalability.
- Propose, scope, design, and implement various infrastructure architectures.
- Strong knowledge of configuration management tools.
- Keep up to date on modern technologies and trends and advocate for their inclusion within products when it makes sense.
- Ability to learn and apply new technologies through self-learning.
- Experience with system backup & recovery.
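The GitLab CI/CD item above can be illustrated with a minimal `.gitlab-ci.yml` sketch. The stage layout, images, and the Kubernetes deployment target are hypothetical placeholders; only the `$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHORT_SHA` variables are standard GitLab predefined variables.

```yaml
# Hypothetical .gitlab-ci.yml sketch: build and push a Docker image, run
# tests, then roll the new tag out to Kubernetes behind a manual gate.
stages: [build, test, deploy]

build-image:
  stage: build
  image: docker:24
  services: [docker:24-dind]          # Docker-in-Docker for image builds
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

unit-tests:
  stage: test
  image: python:3.12                  # placeholder test image
  script:
    - pip install -r requirements.txt
    - pytest

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/app app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  environment: production
  when: manual                        # require a manual gate before rollout
```

A real pipeline would also configure registry login and a kubeconfig for the deploy job.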
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools like Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Working on Docker images and maintaining Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Working on Cloud security tools to keep applications secured.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
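The Ansible automation and cluster-patching responsibilities above can be sketched as a minimal playbook. The inventory group, paths, and timeout are hypothetical placeholders, assuming Debian/Ubuntu hosts.

```yaml
# Hypothetical Ansible playbook sketch: patch a group of Ubuntu hosts one at
# a time and reboot only the ones that require it.
- name: Rolling OS patching
  hosts: app_servers                  # placeholder inventory group
  become: true
  serial: 1                           # patch one host at a time
  tasks:
    - name: Apply pending package upgrades
      ansible.builtin.apt:
        upgrade: dist
        update_cache: true

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_flag

    - name: Reboot if the kernel or core libraries changed
      ansible.builtin.reboot:
        reboot_timeout: 600
      when: reboot_flag.stat.exists
```

`serial: 1` is what turns this into a rolling operation: each host is fully patched and back in service before the next one starts.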
What are we looking for?
- Minimum 4-6 years of experience in IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience in DevOps automation tools, and well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience and good understanding of any cloud such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Familiarity with the Jira tool is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment will enable you to juggle between concepts yet maintain the quality of content, interact, share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. Since you may take time to review this opportunity, we will wait around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager. We assure you that we will try to maintain a reasonable time window for closing this requirement, and candidates will be kept informed and updated on their application status.