


Role
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and adopt DevOps practices across the company.
You will also help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
This is a hybrid role, and the person will also be expected to do some application-level programming in their downtime.
Responsibilities
- Deployment, automation, management, and maintenance of production systems.
- Ensuring availability, performance, security, and scalability of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the AWS platform (a minimal sketch follows this list).
- Establish and configure SaaS infrastructure in an agile way by storing infrastructure as code and employing automated configuration-management tools, with the goal of being able to re-provision environments at any point in time.
- Be accountable for proper backup and disaster recovery procedures.
- Drive operational cost reductions through service optimizations and demand based auto scaling.
- Have on-call responsibilities.
- Perform root-cause analysis for production errors.
- Use open-source technologies and tools to accomplish specific use cases encountered within the project.
- Use coding languages or scripting methodologies to solve problems with custom workflows.
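For illustration only, here is a minimal Python (boto3) sketch of the kind of AWS monitoring work described above: publishing a custom CloudWatch metric and alarming on it. The namespace, metric name, region, and SNS topic ARN are placeholders, not part of our actual stack.

```python
# Hypothetical sketch: publish a custom CloudWatch metric and alarm on it.
# Namespace, metric name, region, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Publish one data point for an application-level health metric.
cloudwatch.put_metric_data(
    Namespace="MyApp/Health",
    MetricData=[{
        "MetricName": "FailedHealthChecks",
        "Value": 1.0,
        "Unit": "Count",
    }],
)

# Alarm if the metric averages above zero over a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="myapp-failed-health-checks",
    Namespace="MyApp/Health",
    MetricName="FailedHealthChecks",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```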
Requirements
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Prior experience as a software developer in a couple of high-level programming languages.
- Extensive experience with a JavaScript-based framework, since we will be deploying Node.js services to AWS Lambda (Serverless); a smoke-test sketch follows this list.
- Extensive experience with web servers such as Nginx/Apache
- Strong Linux system administration background.
- Ability to present and communicate the architecture in a visual form.
- Strong knowledge of AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, NAT gateway, DynamoDB)
- Experience maintaining and deploying highly-available, fault-tolerant systems at scale (~1 lakh, i.e. ~100,000, users a day)
- A drive towards automating repetitive tasks (e.g., scripting via Bash, Python, Ruby, etc.)
- Expertise with Git
- Experience implementing CI/CD (e.g. Jenkins, TravisCI)
- Strong experience with databases such as MySQL and with NoSQL stores such as Elasticsearch, Redis, and/or MongoDB.
- Stellar troubleshooting skills with the ability to spot issues before they become problems.
- Up to date with industry trends and IT ops best practices, and able to identify the ones we should implement.
- Time and project management skills, with the capability to prioritize and multitask as needed.
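Since the role deploys Node.js services to AWS Lambda, a post-deploy smoke test is a typical repetitive task worth scripting. Below is a minimal Python (boto3) sketch; the function name, region, and expected response shape are hypothetical.

```python
# Hypothetical post-deploy smoke test for a Lambda-backed service.
# The function name, region, and expected response shape are placeholders.
import json
import sys

import boto3

lambda_client = boto3.client("lambda", region_name="ap-south-1")

response = lambda_client.invoke(
    FunctionName="my-service-healthcheck",   # placeholder name
    InvocationType="RequestResponse",
    Payload=json.dumps({"action": "ping"}).encode(),
)

payload = json.loads(response["Payload"].read())
if response["StatusCode"] != 200 or payload.get("status") != "ok":
    print(f"Smoke test failed: {payload}")
    sys.exit(1)

print("Smoke test passed")
```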

About Mosaic Wellness
Similar jobs
Hi,
We are looking for a candidate with experience in DevSecOps.
Please find the JD below for your reference.
Responsibilities:
Execute shell scripts for seamless automation and system management.
Implement infrastructure as code using Terraform for AWS, Kubernetes, Helm, kustomize, and kubectl.
Oversee AWS security groups, VPC configurations, and utilize Aviatrix for efficient network orchestration.
Contribute to the OpenTelemetry Collector for enhanced observability.
Implement microsegmentation using AWS native resources and Aviatrix for commercial routes.
Enforce policies through Open Policy Agent (OPA) integration (see the sketch after this list).
Develop and maintain comprehensive runbooks for standard operating procedures.
Utilize packet tracing for network analysis and security optimization.
Apply OWASP tools and practices for robust web application security.
Integrate container vulnerability scanning tools seamlessly within CI/CD pipelines.
Define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaborate with software and platform engineers to infuse security principles into DevOps teams.
Regularly monitor and report project status to the management team.
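To make the OPA item above concrete: policy decisions are usually queried over OPA's REST data API. The Python sketch below assumes an OPA server on localhost:8181 and a hypothetical `kubernetes.admission` policy package that returns a list of deny messages; it illustrates only the call pattern, not our actual policies.

```python
# Hypothetical OPA policy check via the REST data API.
# Assumes an OPA server at localhost:8181 and a policy package
# named "kubernetes.admission" that returns a list of deny messages.
import json
import urllib.request

manifest = {
    "kind": "Deployment",
    "metadata": {"name": "web", "namespace": "default"},
    "spec": {"template": {"spec": {"hostNetwork": True}}},
}

req = urllib.request.Request(
    "http://localhost:8181/v1/data/kubernetes/admission",
    data=json.dumps({"input": manifest}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read()).get("result", {})

denials = result.get("deny", [])
if denials:
    print("Policy violations:", denials)
else:
    print("Manifest allowed")
```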
Qualifications:
Proficient in shell scripting and automation.
Strong command of Terraform, AWS, Kubernetes, Helm, kustomize, and kubectl.
Deep understanding of AWS security practices, VPC configurations, and Aviatrix.
Familiarity with OpenTelemetry for observability and OPA for policy enforcement.
Experience in packet tracing for network analysis.
Practical application of OWASP tools and web application security.
Integration of container vulnerability-scanning tools within CI/CD pipelines (a sketch follows this list).
Proven ability to define security requirements for source code repositories, binary repositories, and secrets managers in CI/CD pipelines.
Collaboration expertise with DevOps teams for security integration.
Regular monitoring and reporting capabilities.
Site Reliability Engineering experience.
Hands-on proficiency with source code management tools, especially Git.
Cloud platform expertise (AWS, Azure, or GCP) with hands-on experience in deploying and managing applications.
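As one way to satisfy the container-scanning qualification, a CI job can gate on a scanner's exit code. The Python sketch below wraps the Trivy CLI; the image name is a placeholder and the severity threshold is only an example.

```python
# Hypothetical CI gate around the Trivy CLI.
# Trivy exits non-zero (via --exit-code) when findings at or above the
# requested severities are present; the image name is a placeholder.
import subprocess
import sys

IMAGE = "registry.example.com/my-service:latest"  # placeholder image

result = subprocess.run(
    [
        "trivy", "image",
        "--severity", "HIGH,CRITICAL",
        "--exit-code", "1",
        IMAGE,
    ],
    check=False,
)

if result.returncode != 0:
    print(f"Vulnerability scan failed for {IMAGE}; blocking the build.")
    sys.exit(result.returncode)

print("Image passed vulnerability scan")
```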
Please send across your updated profile.
Responsibilities:
- Install, configure, and maintain Kubernetes clusters.
- Develop Kubernetes-based solutions.
- Improve Kubernetes infrastructure.
- Work with other engineers to troubleshoot Kubernetes issues (a troubleshooting sketch follows this list).
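As a small illustration of this troubleshooting work, the sketch below uses the official Python Kubernetes client to list pods that are not Running or Succeeded. It assumes a local kubeconfig with read access; the logic is deliberately minimal.

```python
# Hypothetical troubleshooting helper: list unhealthy pods cluster-wide.
# Assumes a local kubeconfig (e.g. ~/.kube/config) with read access.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

HEALTHY_PHASES = {"Running", "Succeeded"}

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in HEALTHY_PHASES:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```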
Kubernetes Engineer Requirements & Skills
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Linux/Unix experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
Responsibilities:
- Design, implement, and maintain cloud infrastructure solutions on Microsoft Azure, with a focus on scalability, security, and cost optimization.
- Collaborate with development teams to streamline the deployment process, ensuring smooth and efficient delivery of software applications.
- Develop and maintain CI/CD pipelines using tools like Azure DevOps, Jenkins, or GitLab CI to automate build, test, and deployment processes.
- Utilize infrastructure-as-code (IaC) principles to create and manage infrastructure deployments using Terraform, ARM templates, or similar tools (a plan-gate sketch follows this list).
- Manage and monitor containerized applications using Azure Kubernetes Service (AKS) or other container orchestration platforms.
- Implement and maintain monitoring, logging, and alerting solutions for cloud-based infrastructure and applications.
- Troubleshoot and resolve infrastructure and deployment issues, working closely with development and operations teams.
- Ensure high availability, performance, and security of cloud infrastructure and applications.
- Stay up-to-date with the latest industry trends and best practices in cloud infrastructure, DevOps, and automation.
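As a concrete, deliberately simplified illustration of the IaC and CI/CD items above, a pipeline step often runs `terraform plan` as a gate before `apply`. The Python sketch below relies on Terraform's documented `-detailed-exitcode` behaviour (0 = no changes, 1 = error, 2 = changes present); the working directory is a placeholder.

```python
# Hypothetical CI step: run `terraform plan` and interpret its exit code.
# With -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes present.
import subprocess
import sys

WORKDIR = "infra/azure/prod"  # placeholder Terraform directory

plan = subprocess.run(
    ["terraform", f"-chdir={WORKDIR}", "plan", "-detailed-exitcode", "-input=false"],
    check=False,
)

if plan.returncode == 0:
    print("No infrastructure changes detected.")
elif plan.returncode == 2:
    print("Changes detected; review the plan before applying.")
else:
    print("terraform plan failed.")
    sys.exit(plan.returncode)
```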
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- Minimum of four years of proven experience working as a DevOps Engineer or similar role, with a focus on cloud infrastructure and deployment automation.
- Strong expertise in Microsoft Azure services, including but not limited to Azure Virtual Machines, Azure App Service, Azure Storage, Azure Networking, Azure Security, and Azure Monitor.
- Proficiency in infrastructure-as-code (IaC) tools such as Terraform or ARM templates.
- Hands-on experience with containerization and orchestration platforms, preferably Azure Kubernetes Service (AKS) or Docker Swarm.
- Solid understanding of CI/CD principles and experience with relevant tools such as Azure DevOps, Jenkins, or GitLab CI.
- Experience with scripting languages like PowerShell, Bash, or Python for automation tasks.
- Strong problem-solving and troubleshooting skills with a proactive and analytical mindset.
- Excellent communication and collaboration skills, with the ability to work effectively in a team environment.
- Azure certifications (e.g., Azure Administrator, Azure DevOps Engineer, Azure Solutions Architect) are a plus.
Who we are :
Stanza Living is India's largest and fastest growing tech-enabled, managed accommodation company that delivers a hospitality-led living experience to migrant students and young working professionals across India. We have a full-stack business model that focuses on the design, development and delivery of daily living solutions tailored to the young consumers' lifestyle. From smartly-planned residences and a host of amenities and services for hassle-free living to exclusive community engagement programmes - everything is seamlessly integrated through technology to ensure the highest consumer delight.
Today, we are :
- India's largest managed accommodation company, with 75,000+ beds under management across 25+ cities
- Most capitalized player in the managed accommodation space, backed by global marquee investors - Falcon Edge, Equity International, Sequoia Capital, Matrix Partners, Accel Partners
- Recognized as the Best Real Estate Tech Company globally in 2020 by the leading analysis firm Tracxn
- LinkedIn Top Startup to Work for - 2019
Objectives of this role :
- Work in tandem with our engineering team to identify and implement the most optimal cloud-based solutions for the company
- Define and document best practices and strategies regarding application deployment and infrastructure maintenance
- Provide guidance, thought leadership, and mentorship to developer teams to build their cloud competencies
- Ensure application performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
- Manage cloud environments in accordance with company security guidelines
Job Description :
- Excellent understanding of the AWS cloud platform
- Strong knowledge of AWS services, their design, and their configuration in enterprise systems
- Good knowledge of Kubernetes configuration and Docker
- Understanding the needs of the business for defining AWS system specifications
- Understand Architecture Requirements and ensure effective support activities
- Evaluating and choosing suitable AWS services and suggesting methods for integration
- Overseeing assigned programs and guiding the team members
- Providing assistance when technical problems arise
- Making sure the agreed infrastructure and architecture are implemented
- Addressing the technical concerns, suggestions, and ideas
- Configure monitoring systems to make sure they meet business goals as well as user requirements
- Excellent knowledge of AWS IaaS Layer
- Ability to lead & implement PS workloads or POCs
- Ensure continual knowledge management
About Us - Celebal Technologies is a premier software services company in the fields of Data Science, Big Data, and Enterprise Cloud. Celebal Technologies helps you discover your competitive advantage by employing intelligent data solutions built on cutting-edge technology that can bring massive value to your organization. The core offerings are around "Data to Intelligence", wherein we leverage data to extract intelligence and patterns, thereby facilitating smarter and quicker decision-making for clients. With Celebal Technologies, which understands the core value of modern analytics for the enterprise, we help businesses improve their business intelligence and become more data-driven in architecting solutions.
Key Responsibilities
• As a part of the DevOps team, you will be responsible for configuration, optimization, documentation, and support of the CI/CD components.
• Creating and managing build and release pipelines with Azure DevOps and Jenkins.
• Assist in planning and reviewing application architecture and design to promote an efficient deployment process.
• Troubleshoot server performance issues and handle the continuous integration system (a build-status sketch follows this list).
• Automate infrastructure provisioning using ARM Templates and Terraform.
• Monitor and Support deployment, Cloud-based and On-premises Infrastructure.
• Diagnose and develop root cause solutions for failures and performance issues in the production environment.
• Deploy and manage infrastructure for production applications.
• Configure security best practices for applications and infrastructure.
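As one example of handling the CI system, build status can be pulled from Jenkins' JSON API. The Python sketch below assumes a hypothetical Jenkins URL and job name and omits authentication, which a real setup would add.

```python
# Hypothetical check of a Jenkins job's last build via the JSON API.
# The Jenkins URL and job name are placeholders; real setups would add
# an Authorization header with a user:token pair.
import json
import urllib.request

JENKINS_URL = "https://jenkins.example.com"   # placeholder
JOB_NAME = "my-app-deploy"                    # placeholder

req = urllib.request.Request(f"{JENKINS_URL}/job/{JOB_NAME}/lastBuild/api/json")

with urllib.request.urlopen(req) as resp:
    build = json.loads(resp.read())

print(f"Build #{build['number']} result: {build['result']}")
```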
Essential Requirements
• Good hands-on experience with cloud platforms like Azure, AWS, and GCP (preferably Azure).
• Strong knowledge of CI/CD principles.
• Strong work experience with CI/CD implementation tools like Azure DevOps, TeamCity, Octopus Deploy, AWS CodeDeploy, and Jenkins.
• Experience writing automation scripts with PowerShell, Bash, Python, etc.
• Experience with GitHub, JIRA, Confluence, and continuous integration (CI) systems.
• Understanding of secure DevOps practices
Good to Have -
• Knowledge of scripting languages such as PowerShell, Bash
• Experience with project management and workflow methodologies and tools such as Agile, Scrum/Kanban, and Jira.
• Experience with build technologies and cloud services (Jenkins, TeamCity, Azure DevOps, Bamboo, AWS CodeDeploy).
• Strong communication skills and the ability to explain protocols and processes to the team and management.
• Must be able to handle multiple tasks and adapt to a constantly changing environment.
• Must have a good understanding of SDLC.
• Knowledge of Linux, Windows server, Monitoring tools, and Shell scripting.
• Self-motivated, demonstrating the ability to deliver with new technologies under minimal supervision.
• Organized and flexible, with the analytical ability to solve problems creatively.
Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data platforms at petabyte scale. Our customers are Fortune 500 companies, including Asia's largest telecom company, a unicorn fintech startup in India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that provides insight into companies' data operations and allows them to focus on delivering data reliably, with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability across every dimension (compute, data, and pipeline) of a cloud or on-premises data platform.
Position Summary-
This role supports customer implementations of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation and production issues. The role involves significant interaction with the client's data engineering team, so good communication skills are expected.
Required experience
- 6-7 years of experience providing engineering support for data domains, data pipelines, and data engineers.
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
- Experience setting up enterprise security solutions, including Active Directory, firewalls, SSL certificates, Kerberos KDC servers, etc. (a certificate-expiry sketch follows this list).
- Basic understanding of SQL
- Experience working with technologies like S3; Kubernetes experience preferred.
- Databricks/Hadoop/Kafka experience preferred but not required
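Because the role touches SSL certificate setup, a routine operational check is certificate expiry. The sketch below is a minimal Python example using only the standard library; the hostname is a placeholder.

```python
# Hypothetical check of an endpoint's TLS certificate expiry date.
# Uses only the standard library; the hostname is a placeholder.
import socket
import ssl
from datetime import datetime, timezone

HOST, PORT = "data-platform.example.com", 443  # placeholder endpoint

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

expires = datetime.fromtimestamp(
    ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
)
days_left = (expires - datetime.now(timezone.utc)).days
print(f"{HOST}: certificate expires in {days_left} days")
```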
- Equal Experts is an innovative software delivery consultancy specializing in the delivery of custom software solutions for blue-chip enterprise and public sector clients across a range of industry sectors.
- We deliver market-leading propositions across the digital, online and mobile channels, and are recognized for our leadership in the application of Agile and Lean delivery methods to assure delivery.
- We are focused on hiring DevOps Engineers with skills in AWS, Automation tools, CI/CD tools and the likes.
- The DevOps Consultant will be working with an on-site client team and will be expected to mentor and share their skills and knowledge with the existing client developers.
Your Responsibilities Include:
- Continuous Delivery of quality software created by our project teams. Ensuring smooth build and release with continuous integration.
- Configuration Management - Design and implementation of deployment strategies for multiple projects.
- Virtualization and Infrastructure Provisioning.
- Maintenance and upgrade of our cloud environment (AWS).
- Version control and source code administration.
- Administration of Web Servers, Application Servers and Servlet Containers
- Setting up and managing the Automation efforts for multiple projects.
Technical Expertise:
- Should have worked on Agile projects featuring weekly iterations and releases.
- Should have extensive hands-on experience with:
- Continuous Integration tools like Jenkins.
- Configuration Management tools like Terraform or Ansible.
- Cloud computing using AWS, EC2.
- Hands-on experience on Kubernetes.
- Virtualization tools like Vagrant, Docker, or VMware (a container-status sketch follows this list).
- You have an active presence in the DevOps community through your blogs, Stack Overflow and GitHub profiles.
- You are passionate about DevOps. You love to mentor people and evangelize about best practices and innovations in DevOps.
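As a tiny illustration of the container tooling above, the sketch below uses the Docker SDK for Python to report running containers and their status on the local daemon. It assumes the `docker` package is installed and the daemon socket is reachable; it is a sketch, not part of any client environment.

```python
# Hypothetical local check: report running containers and their status.
# Requires the Docker SDK for Python (pip install docker) and access to
# the local Docker daemon socket.
import docker

client = docker.from_env()

for container in client.containers.list():
    print(f"{container.name}: {container.status} ({container.image.tags})")
```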

Cloud Software Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company headquartered in San Jose, California. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-premises, and all cloud platforms. Our engagement model is built on standard DevOps practices and the SRE model.
We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, Software as a Service, and cloud services to create a niche in the market.
Roles and Responsibilities
· Work on a wide variety of engineering projects, including data visualization, web services, data engineering, web portals, SDKs, and integrations in numerous languages, frameworks, and cloud platforms
· Apply continuous delivery practices to deliver high-quality software and value as early as possible.
· Work in collaborative teams to build new experiences
· Participate in the entire cycle of software consulting and delivery from ideation to deployment
· Integrating multiple software products across cloud and hybrid environments
· Developing processes and procedures for software applications migration to the cloud, as well as managed services in the cloud
· Migrating existing on-premises software applications to cloud leveraging a structured method and best practices
Desired Candidate Profile : *** freshers can also apply ***
· 2+ years of experience with one or more development languages such as Java, Python, or Spark.
· 1+ year of experience with private/public/hybrid cloud model design, implementation, orchestration, and support.
· Certification in, or completed training on, any one of the cloud environments such as AWS, GCP, Azure, Oracle Cloud, or DigitalOcean.
· Strong problem-solvers who are comfortable in unfamiliar situations, and can view challenges through multiple perspectives
· Driven to develop technical skills for oneself and team-mates
· Hands-on experience with cloud computing and/or traditional enterprise datacentre technologies, i.e., network, compute, storage, and virtualization.
· Possess at least one cloud-related certification from AWS, Azure, or equivalent
· Ability to write high-quality, well-tested code and comfort with Object-Oriented or functional programming patterns
· Past experience quickly learning new languages and frameworks
· Ability to work with a high degree of autonomy and self-direction
www.banyandata.com


