About the job
TL;DR: We at Sarva Labs Inc. are looking for experienced Site Reliability Engineers to join our team. In this role, you will handle assets in data centers across Asia, Europe, and the Americas for the world's first context-aware peer-to-peer network enabling Web 4.0. We are looking for the person who will take ownership of DevOps, establish proper deployment processes, work with engineering teams, and hustle through the Main Net launch.
About Us
Imagine if each user had their own chain, with each transaction settled by a dynamic group of nodes who come together and settle that interaction with near-immediate finality, without a volatile gas cost. That's MOI for you, Anon.
Visit https://www.sarva.ai/ to know more about who we are as a company
Visit https://www.moi.technology/ to know more about the technology and team!
Visit https://www.moi-id.life/ , https://www.moibit.io/ , https://www.moiverse.io/ to know more
Read our developer documentation at https://apidocs.moinet.io/
What you'll do
- You will take ownership of DevOps, establish proper deployment processes, and work with engineering teams to ensure an appropriate degree of automation for component assembly, deployment, and rollback strategies in medium- to large-scale environments
- Monitor components to proactively prevent system component failures, and inform the engineering team about system characteristics that require improvement
- You will ensure the uninterrupted operation of components through proactive resource management and activities such as security, OS, storage, and application upgrades
You'd fit in if you...
- Are familiar with any of these providers: AWS, GCP, DO, Azure, RedSwitches, Contabo, Hetzner, Server4You, Velia, Psychz, Tier, and so on
- Have experience virtualizing bare-metal servers using OpenStack/VMware or similar (a plus)
- Are seasoned in building and managing VMs, containers, and clusters across continents
- Are confident making the best use of Docker and Kubernetes: StatefulSet deployments, autoscaling, rolling updates, UI dashboards, replication, persistent volumes, ingress
- Have experience deploying in multi-cloud environments (must-have)
- Have working knowledge of automation tools such as Terraform, Travis, Packer, Chef, etc.
- Have working knowledge of scalability in distributed and decentralised environments
- Are familiar with Apache, Rancher, Nginx, SELinux, Ubuntu 18.04 LTS, CentOS 7, and RHEL
- Know monitoring tools like PM2, Grafana, and so on
- Are hands-on with the ELK stack or similar for log analytics
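As a toy illustration of the Kubernetes rolling-update items in the list above: maxSurge and maxUnavailable bound how many pods may exist and how many must stay ready during an update. This is a simplified sketch of the arithmetic only, not the actual controller code:

```python
import math

def rolling_update_bounds(replicas: int, max_surge: str, max_unavailable: str):
    """Compute the pod-count bounds a Deployment respects during a
    rolling update. Percentages are resolved the way Kubernetes does:
    maxSurge rounds up, maxUnavailable rounds down."""
    def resolve(value: str, round_up: bool) -> int:
        if value.endswith("%"):
            frac = replicas * int(value[:-1]) / 100
            return math.ceil(frac) if round_up else math.floor(frac)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    # During the update, total pods stay <= replicas + surge
    # and ready pods stay >= replicas - unavailable.
    return replicas + surge, replicas - unavailable

# With 10 replicas, maxSurge=25%, maxUnavailable=25%:
# at most 13 pods exist and at least 8 stay ready at any moment.
print(rolling_update_bounds(10, "25%", "25%"))  # (13, 8)
```

The rounding directions matter: rounding maxSurge up and maxUnavailable down guarantees the update can always make progress even for small replica counts.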
Join Us
- Flexible work timings
- We'll set you up with your workspace. Work out of our villa, which has a lake view!
- Competitive salary/stipend
- Generous equity options (for full-time employees)

About Sarva Labs Inc
We at Sarva Labs bring personalization to open networks through full user ownership and complete control of all dimensions of users' digital interactions. Sarva Labs is developing the world's first protocol to bring personalization to open networks and enable an Internet for value transfer: MOI, My Own Internet.
MOI enables complete user ownership and control, empowers the new Internet of Value, introduces a personalised multidimensional value structure for participants measured using TDU (Total Digital Utility), and integrates context as a foundational computational dimension of P2P networks.

A strong proficiency in at least one scripting language (e.g., Python, Bash, PowerShell) is required.
Candidates must be able to design, write, and implement complex automation logic, not just basic scripts.
Proven experience in automating DevOps processes, environment provisioning, and configuration management is essential.
Cloud Platform (AWS Preferred):
- Extensive hands-on experience with Amazon Web Services (AWS) is highly preferred.
Candidates must be able to demonstrate expert-level knowledge of core AWS services and articulate their use cases.
Excellent debugging and problem-solving skills within the AWS ecosystem are mandatory. The ability to diagnose and resolve issues efficiently is a key requirement.
Infrastructure as Code (IaC - Terraform Preferred):
- Expert-level knowledge and practical experience with Terraform are required.
Candidates must have a deep understanding of how to write scalable, modular, and reusable Terraform code.
Containerization and Orchestration (Kubernetes Preferred):
- Advanced, hands-on experience with Kubernetes is mandatory.
- Candidates must be proficient in solving complex, production-level issues related to deployments, networking, and cluster management.
- A solid foundational knowledge of Docker is required.
Who you are
- The possibility of having massive societal impact: our software touches the lives of hundreds of millions of people
- Solving hard governance and societal challenges
- Working directly with central and state government leaders and other dignitaries
- Mentorship from world-class people and rich ecosystems
Position: Architect or Technical Lead - DevOps
Location: Bangalore
Role:
Strong architecture and system-design skills for multi-tenant, multi-region, redundant, and highly available mission-critical systems.
A clear understanding of core cloud platform technologies across public and private clouds.
Lead DevOps practices for the continuous integration and continuous deployment pipeline and IT operations, including scaling, metrics, and running day-to-day operations from development to production for the platform.
Implement the non-functional requirements needed for world-class reliability, scale, security, and cost-efficiency throughout the product development lifecycle.
Drive a culture of automation, and self-service enablement for developers.
Work with information security leadership and technical team to automate security into the platform and services.
Meet infrastructure SLAs and compliance control points of cloud platforms.
Define and contribute to initiatives to continually improve Solution Delivery processes.
Improve the organisation's capability to build, deliver, and scale software as a service on the cloud
Interface with Engineering, Product and Technical Architecture group to meet joint objectives.
You are deeply motivated and have experience solving hard-to-crack societal challenges, with the stamina to see them all the way through.
You must have 7+ years of hands-on experience with DevOps, MSA, and the Kubernetes platform.
You must have 2+ years of strong Kubernetes experience and must hold the CKA (Certified Kubernetes Administrator) certification.
You should be proficient in the tools/technologies involved in a microservices architecture deployed in a multi-cloud Kubernetes environment.
You should have hands-on experience architecting and building CI/CD pipelines from code check-in to production.
You should have strong knowledge of Docker, Jenkins pipeline scripting, Helm charts, Kubernetes objects and manifests, and Prometheus, and demonstrate hands-on knowledge of multi-cloud computing, storage, and networking.
You should have experience designing, provisioning, and administrating using best practices across multiple public cloud offerings (Azure, AWS, GCP) and/or private cloud offerings (OpenStack, VMware, Nutanix, NIC, bare-metal infra).
You should have experience setting up logging, monitoring cloud application performance, error rates, and error budgets, and tracking adherence to SLOs and SLAs.
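Tracking error budgets against SLOs, as the last requirement above describes, starts with simple arithmetic. A minimal Python sketch; the request-based SLO model and the numbers are illustrative assumptions, not from the posting:

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a request-based SLO.
    slo is the availability target, e.g. 0.999 for 'three nines'."""
    allowed_failures = (1 - slo) * total_requests
    if allowed_failures == 0:
        # A 100% SLO leaves no budget to spend.
        return 0.0
    return 1 - failed_requests / allowed_failures

# A 99.9% SLO over 1,000,000 requests allows ~1,000 failures;
# 250 failures so far leaves 75% of the budget.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # 0.75
```

A negative return value means the budget is exhausted, which is typically the signal to freeze feature rollouts and prioritize reliability work.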
What You will do
We're looking for bright senior DevOps engineers who are deeply experienced individual contributors and dynamic team leaders to join our dedicated DevOps Practice Group. As a senior DevOps practitioner at Zemoso, you'll use your expertise to provide DevSecOps services to our customers and to our in-house dev teams.
Basic Qualifications
- Bachelor's degree in computer science, engineering, or equivalent experience.
- 15+ years of overall IT experience
- 10+ years of DevOps experience with 5+ years of it on Azure platform
- Relevant Azure Certification(s)
Must-have experience
- IaC tools: ARM/Bicep templates, Azure DevOps
- GitHub repos and Git tools
- Scripting languages: Python, PowerShell, Bash
- CI/CD design and build management
- Container tools: Docker and Kubernetes
- Azure Policy and permissions provisioning
- IT experience in design, development (Java Spring Boot), automated testing, REST APIs, microservices, pub/sub architecture
- Costing, budgeting, forecasting, and monitoring cloud expenses
- Architecture drawings, proposals, and presentations
- Stakeholder management
- Prototyping
Nice-to-have experience
- IaC tools: CloudFormation, Terraform
- Build tools: Apache Maven, Jenkins
- Networking: DNS, TCP/IP, HTTP, CDN, and VPN
- Security: NSG, firewall and Azure policies, HTTPS, SSL certificates, KMS keys
- Authentication (IAM): SAML 2.0, OpenID Connect, OAuth 2.0, Keycloak-based IdAM, LDAP and AD, and/or 3rd-party SSO
- Access: RBAC and ABAC
- Load balancers: Apache, Nginx, HAProxy
- Encryption: PKI infrastructure, certificate authorities, OCSP, CRL, CSRs, x.509 certificate structures, PKCS#12 certificate containers
Additionally, we're looking for individuals who:
- Will assist in the rollout and deployment of new product features, and deploy applications via automation with configuration management tools
- Will build applications and tools to reduce barriers, decrease friction and speed up delivery of products
- Can effectively articulate technical challenges and solutions.
- Have strong collaborative instincts and interpersonal skills; must enjoy and work well with a wide variety of stakeholders in a complex organization
- Will manage individual project priorities with guidance, meeting deadlines and deliverables.
Benefits
- Competitive salary.
- Work from anywhere.
- Learning and gaining experience rapidly.
- Reimbursement for basic working set up at home.
- Insurance (including top-up insurance for COVID).
Location
Remote - work from anywhere.
About us:
Zemoso Technologies is a Software Product-Market Fit Studio that brings Silicon Valley-style rapid prototyping and rapid application builds to entrepreneurs and corporate innovation. We offer innovation as a service, working on ideas from scratch and taking them to the product-market fit stage using Design Thinking -> Lean Execution -> Agile methodology.
We were featured as one of Deloitte's 50 fastest-growing tech companies in India three times (2016, 2018, and 2019). We were also featured in the Deloitte Technology Fast 500 Asia Pacific in both 2016 and 2018.
We are located in Hyderabad, India, and Dallas, US. We have recently incorporated another office in Waterloo, Canada.
Our founders have had past successes: they founded a decision management company acquired by SAP AG (now part of the HANA big data stack and NetWeaver BPM), were part of the early engineering team at Zoho (a leading billion-dollar SaaS player), and bring private equity experience.
Marquee customers along with some exciting start-ups are part of our clientele.
ApnaComplex is one of India's largest and fastest-growing PropTech disruptors within the society and apartment management business. The SaaS-based B2C platform is headquartered in India's tech start-up hub, Bangalore, with branches in 6 other cities. It currently empowers 3,600 societies, managing over 6 lakh households in over 80 Indian cities, to effortlessly manage all aspects of running large complexes.
ApnaComplex is part of ANAROCK Group. ANAROCK Group is India's leading specialized real estate services company having diversified interests across the real estate value chain.
If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ApnaComplex is the place for you.
Must have:
- Knowledge of Docker
- Knowledge of Terraform
- Knowledge of AWS
Good to have:
- Kubernetes
- Scripting languages: PHP, Go, and Python
- Web server knowledge
- Logging and monitoring experience
- Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
- Build and maintain highly available production systems.
- Must know how to choose the tools and technologies that best fit the business needs.
- Develop software to integrate with internal back-end systems.
- Investigate and resolve technical issues.
- Problem-solving attitude.
- Ability to automate testing, deployment, and monitoring of code.
- Work in close coordination with the development and operations teams so that the application performs in line with the customer's expectations.
- Lead and guide the team in identifying and implementing new technologies.
Skills that will help you build a success story with us
- An ability to quickly understand and solve new problems
- Strong interpersonal skills
- Excellent data interpretation
- Context-switching
- Intrinsically motivated
- A tactical and strategic track record for delivering research-driven results
Quick Glances:
- What to look for at ApnaComplex: https://www.apnacomplex.com/why-apnacomplex
- Who we are, a glimpse of ApnaComplex: https://www.linkedin.com/company/1070467/admin/
- ApnaComplex in the media: https://www.apnacomplex.com/media-buzz
ANAROCK Ethos - Values Over Value:
Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.
We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling to improve the uptime, reliability, and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points: hands-on experience with Nginx, Postgres, Postfix, Redis, or Mongo.
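Automation scripts of the kind this role calls for typically wrap flaky cloud-API calls in retries. A self-contained Python sketch; the backoff schedule is illustrative and omits the jitter a production version would add:

```python
import time

def retry(fn, attempts=5, base_delay=0.5, exceptions=(Exception,)):
    """Call fn, retrying on failure with exponential backoff:
    0.5s, 1s, 2s, 4s between attempts by default."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Example: a flaky operation that succeeds on the third try.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky, base_delay=0.01))  # ok
```

Restricting `exceptions` to the transient error types of the API being called avoids retrying on genuine bugs.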

Job Description
Intuitive is the fastest-growing top-tier cloud solutions and services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while also developing your skills and furthering your career working with some of the largest customers.
Key Responsibilities and Must-have skills:
- Lead the pre-sales (25%) to post-sales (75%) efforts building Public/Hybrid Cloud solutions working collaboratively with Intuitive and client technical and business stakeholders
- Be a customer advocate with an obsession for excellence, delivering measurable success for Intuitive's customers with secure, scalable, highly available cloud architectures that leverage AWS Cloud services
- Experience in analyzing customer's business and technical requirements, assessing existing environment for Cloud enablement, advising on Cloud models, technologies and risk management strategies
- Apply creative thinking/approach to determine technical solutions that further business goals and align with corporate technology strategies
- Extensive experience building Well-Architected solutions in line with the AWS Cloud Adoption Framework (DevOps/DevSecOps, Database/Data Warehouse/Data Lake, App Modernization/Containers, Security, Governance, Risk, Compliance, Cost Management, and Operational Excellence)
- Experience with application discovery, preferably with tools like Cloudscape, to discover application configurations, databases, filesystems, and application dependencies
- Experience with Well-Architected Reviews, cloud readiness assessments, and defining migration patterns (MRA/MRP) for application migration, e.g. re-host, re-platform, re-architect, etc.
- Experience in architecting and deploying AWS Landing Zone architecture with CI/CD pipeline
- Experience with the architecture and design of AWS cloud services to address scalability, performance, HA, security, availability, compliance, backup and DR, automation, alerting and monitoring, and cost
- Hands-on experience in migrating applications to AWS leveraging proven tools and processes including migration, implementation, cutover and rollback plans and execution
- Hands-on experience deploying various AWS services, e.g. EC2, S3, VPC, RDS, Security Groups, etc., using either manual methods or IaC (IaC preferred)
- Hands-on experience writing cloud automation scripts/code using tools such as Ansible, Terraform, CloudFormation templates (AWS CFT), etc.
- Hands-on experience with application build/release processes and CI/CD pipelines
- Deep understanding of Agile processes (planning, stand-ups, retros, etc.), and the ability to interact with cross-functional teams, i.e. Development, Infrastructure, Security, Performance Engineering, and QA
Additional Requirements:
- Work with Technology leadership to grow the Cloud & DevOps practice. Create cloud practice collateral
- Work directly with sales teams to improve and help them drive the sales for Cloud & DevOps practice
- Assist Sales and Marketing team in creating sales and marketing collateral
- Write whitepapers and technology blogs to be published on social media and Intuitive website
- Create case studies for projects successfully executed by Intuitive delivery team
- Conduct sales enablement sessions to coach sales team on new offerings
- Flexibility with work hours, supporting customer requirements and collaboration with global delivery teams
- Flexibility with Travel as required for Pre-sales/Post-sales, Design workshops, War-room Migration events and customer meetings
- Strong passion for modern technology exploration and development
- Excellent written and verbal communication, presentation, and collaboration skills; team leadership skills
- Experience with Multi-cloud (Azure, GCP, OCI) is a big plus
- Experience with VMware Cloud Foundation as well as Advanced Windows and Linux Engineering is a big plus
- Experience with On-prem Data Engineering (Database, Data Warehouse, Data Lake) is a big plus
JOB RESPONSIBILITIES:
- Responsible for design, implementation, and continuous improvement on automated CI/CD infrastructure
- Displays technical leadership and oversight of implementation and deployment planning, system integration, ongoing data validation processes, quality assurance, delivery, operations, and sustainability of technical solutions
- Responsible for designing topology to meet requirements for uptime, availability, scalability, robustness, fault tolerance & security
- Implement proactive measures for automated detection and resolution of recurring operational issues
- Lead the operational support team to manage incidents, document root causes, and track preventive measures
- Identify and deploy cybersecurity measures by continuously validating and fixing vulnerability assessment findings and managing risk
- Responsible for the design and development of tools and installation procedures
- Develops and maintains accurate estimates, timelines, project plans, and status reports
- Organize and maintain packaging and deployment of various internal modules and third-party vendor libraries
- Responsible for the employment, timely performance evaluation, counselling, employee development, and discipline of assigned employees.
- Participates in calls and meetings with customers, vendors, and internal teams on regular basis.
- Perform infrastructure cost analysis and optimization
SKILLS & ABILITIES

Experience: Minimum of 10 years of experience, with good technical knowledge of build, release, and systems engineering
Technical Skills:
- Experience with DevOps toolchains such as Docker, Rancher, Kubernetes, Bitbucket
- Experience with Apache, Nginx, Tomcat, Prometheus, Grafana
- Ability to learn/use a wide variety of open-source technologies and tools
- Sound understanding of cloud technologies, preferably AWS
- Linux, Windows, Scripting, Configuration Management, Build and Release Engineering
- 6 years of experience in DevOps practices, with a good understanding of DevOps and Agile principles
- Good scripting skills (Python/Perl/Ruby/Bash)
- Experience with standard continuous integration tools Jenkins/Bitbucket Pipelines
- Work on software configuration management systems (Puppet/Chef/Salt/Ansible)
- Microsoft Office Suite (Word, Excel, PowerPoint, Visio, Outlook) and other business productivity tools
- Working knowledge of HSM and PKI (good to have)
Location:
- Bangalore
Experience:
- 10+ years.
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos Healthcare is the democratization of cancer care, in a participatory fashion with existing health providers, researchers, and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments, and to have India become a leader in oncology research. Karkinos will be with the patient every step of the way: to advise them, connect them to the best specialists, and coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- Critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses, and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end to end CI / CD process and follow through with other team members to ensure high quality and predictable delivery
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that the rest of the engineering teams can use.
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development/operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience with public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry); preferably GCP.
- Experience managing infrastructure for distributed NoSQL systems (Kafka/MongoDB), containers, and microservices, with deployment and service orchestration using Kubernetes.
- Experience with and a good understanding of Kubernetes, service meshes (Istio preferred), API gateways, network proxies, etc.
- Experience setting up infrastructure for central monitoring, with the ability to debug and trace.
- Experience and deep understanding of Cloud Networking and Security
- Experience in continuous integration and delivery (Jenkins/Maven, GitHub/GitLab).
- Strong scripting-language knowledge, such as Python or shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
- Develop and deliver the automation software required for building and improving the functionality, reliability, availability, and manageability of applications and cloud platforms
- Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
- Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platforms and tools
- Automate the development and test automation processes through a CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
- Build a container hosting platform using Kubernetes
- Introduce new cloud technologies, tools, and processes to keep innovating in the commerce area to drive greater business value.
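The synthetic-monitoring idea in the list above reduces to probing an endpoint and alerting when a failure threshold is crossed. A minimal, framework-free Python sketch of the alerting decision; the 3-of-5 failure threshold is an illustrative assumption, not from the posting:

```python
from collections import deque

class SyntheticMonitor:
    """Track the last `window` probe results and signal an alert
    when `threshold` or more of them failed."""
    def __init__(self, window=5, threshold=3):
        self.results = deque(maxlen=window)  # sliding window of probes
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record one probe result; return True if now alerting."""
        self.results.append(ok)
        failures = sum(1 for r in self.results if not r)
        return failures >= self.threshold

monitor = SyntheticMonitor()
for ok in [True, False, False, True, False]:
    alerting = monitor.record(ok)
print(alerting)  # True: 3 of the last 5 probes failed
```

Windowed thresholds like this suppress one-off blips while still catching sustained degradation; real probes would wrap an HTTP health check and push the alert to a pager or chat channel.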
Skills Required:
- Excellent written and verbal communication skills; a good listener.
- Proficiency in deploying and maintaining cloud-based infrastructure services (AWS, GCP, Azure; good hands-on experience in at least one of them)
- Well versed in service-oriented architecture, cloud-based web services architecture, design patterns, and frameworks.
- Good knowledge of cloud services for compute, storage, networking, messaging (e.g. SNS, SQS), and automation (e.g. CFT/Terraform).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with systems management/automation tools (Puppet/Chef/Ansible, Terraform)
- Strong Linux system administration experience with excellent troubleshooting and problem-solving skills
- Hands-on experience with languages (Bash/Python/Core Java/Scala)
- Experience with CI/CD pipelines (Jenkins, Git, Maven, etc.)
- Experience integrating solutions in a multi-region environment
- Self-motivated; learns quickly and delivers results with minimal supervision
- Experience with Agile/Scrum/DevOps software development methodologies.
Nice to Have:
- Experience setting up the Elasticsearch, Logstash, Kibana (ELK) stack.
- Experience working with large-scale data.
- Experience with monitoring tools such as Splunk, Nagios, Grafana, Datadog, etc.
- Previous experience working with distributed architectures like Hadoop, MapReduce, etc.

