DevOps Engineer
Multiplier enables companies to employ anyone, anywhere, in a few clicks. Our
SaaS platform handles the multi-local complexities of hiring and paying employees
anywhere in the world, and automates everything. We are passionate about
creating a world where people can get a job they love without having to leave the
people they love.
We are an early-stage startup with a "Day one" attitude, and we are building a
team that will make Multiplier the market leader in this space. Every day at
Multiplier is an exciting one right now, because we are solving a real problem in
the market and building a first-of-its-kind product around it. We are looking for
smart and talented people who will add to our collective energy and share our
excitement in making Multiplier a big deal. We are headquartered in
Singapore, but our team is remote.
What will I be doing?
Owning and managing our cloud infrastructure on AWS.
Working as part of product development from inception to launch, and owning
deployment pipelines through to site reliability.
Ensuring a highly available production site with proper alerting, monitoring, and
security in place.
Creating an efficient environment for product development teams to build, test
and deploy features quickly by providing multiple environments for testing and
staging.
Using infrastructure as code and the best DevOps methods and tools to
innovate and keep improving.
Creating an automation culture and adding automation wherever it is needed.
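Alerting on availability, as in the responsibilities above, often boils down to tracking an SLO error budget. A minimal, hypothetical Python sketch (the SLO target, request counts, and alert threshold are illustrative, not Multiplier specifics):

```python
# Hypothetical sketch: page when most of an SLO error budget is spent.
# All thresholds and numbers here are illustrative.

def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a given availability SLO.

    slo: target availability, e.g. 0.999 for "three nines".
    """
    if total_requests == 0:
        return 1.0  # no traffic, nothing spent
    allowed_failures = (1 - slo) * total_requests
    if allowed_failures == 0:
        return 0.0 if failed_requests else 1.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)

def should_alert(slo: float, total: int, failed: int, threshold: float = 0.2) -> bool:
    """Page when less than `threshold` of the error budget remains."""
    return error_budget_remaining(slo, total, failed) < threshold

# Example: 1M requests at a 99.9% SLO allow ~1,000 failures.
print(should_alert(0.999, 1_000_000, 900))  # 10% budget left -> alert
print(should_alert(0.999, 1_000_000, 100))  # 90% budget left -> quiet
```

In practice the same rule would be expressed in a monitoring system (e.g. a Prometheus alerting rule) rather than application code; the sketch only shows the arithmetic.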
What do I need?
4+ years of industry experience in a similar DevOps role, preferably as part of
a SaaS product team. You can demonstrate the significant impact your work has
had on the product and/or the team.
Deep knowledge of AWS and its available services. 2+ years of experience
building complex architectures on cloud infrastructure.
Exceptional understanding of containerisation technologies and Docker. Hands-on
experience with Kubernetes, AWS ECS, and AWS EKS.
Experience with Terraform or any other infrastructure as code solutions.
Able to comfortably use at least one high-level programming language such
as Java, JavaScript, or Python. Hands-on experience scripting in Bash,
Groovy, and others.
Good understanding of security in web technologies and cloud infrastructure.
Enjoy working on and solving problems of a very complex nature.
Willingness to quickly learn and use new technologies or frameworks.
Clear and responsive communication.
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes
- At least 5 years of experience in cloud technologies (AWS and Azure) and development.
- Experience implementing DevOps practices and tools in areas such as CI/CD with Jenkins, environment automation, release automation, virtualization, infrastructure as code, and metrics tracking.
- Hands-on experience in DevOps tools configuration in different environments.
- Strong knowledge of working with DevOps design patterns, processes, and best practices
- Hands-on experience in setting up build pipelines.
- Prior working experience in system administration or architecture in Windows or Linux.
- Must have experience with Git (Bitbucket, GitHub, GitLab)
- Hands-on experience with Jenkins pipeline scripting.
- Hands-on knowledge of one scripting language (NAnt, Perl, Python, Shell, or PowerShell)
- Configuration level skills in tools like SonarQube (or similar tools) and Artifactory.
- Expertise in virtual infrastructure (VMware, VirtualBox, QEMU, KVM, or Vagrant) and environment automation/provisioning using SaltStack/Ansible/Puppet/Chef
- Deploying, automating, maintaining, and managing Azure cloud-based production systems, including monitoring capacity.
- Good to have experience in migrating code repositories from one source control to another.
- Hands-on experience with Docker containers and orchestration-based deployments like Kubernetes, Service Fabric, Docker Swarm.
- Must have good communication and problem-solving skills
- Understanding of maintenance of existing systems (virtual machines), Linux stack
- Experience running, operating, and maintaining Kubernetes pods
- Strong Scripting skills
- Experience in AWS
- Knowledge of configuring/optimizing open source tools like Kafka, etc.
- Strong automation maintenance - ability to identify opportunities to speed up build and deploy process with strong validation and automation
- Optimizing and standardizing monitoring and alerting.
- Experience with Google Cloud Platform
- Experience/knowledge in Python will be an added advantage
- Experience with tools like Jenkins, Kubernetes, Terraform, etc.
Ask any CIO about corporate data and theyāll happily share all the work theyāve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and youāll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didnāt matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. Itās a target for both cybercriminals and regulators but securing it is incredibly difficult. Itās the data challenge of our generation.
Existing approaches arenāt doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. Whatās needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
Thatās our mission. Concentricās semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter and fast learner, with a desire to work in a startup environment
- Experience working with public clouds like AWS
- Operating and monitoring cloud infrastructure on AWS
- Primary focus on building, implementing, and managing operational support
- Designing, developing, and troubleshooting automation scripts (configuration/infrastructure as code or others) for managing infrastructure
- Expert in at least one scripting language (Python, shell, etc.)
- Experience with Nginx/HAProxy, the ELK stack, Ansible, Terraform, the Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and services monitoring
- Proven experience with CI/CD pipelines and handling database-upgrade-related issues
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
About RaRa Delivery
Not just a delivery companyā¦
RaRa Delivery is revolutionising instant delivery for e-commerce in Indonesia through data-driven logistics.
RaRa Delivery is making instant and same-day deliveries scalable and cost-effective by leveraging a differentiated operating model and real-time optimisation technology. RaRa makes it possible for anyone, anywhere to get same-day delivery in Indonesia. While others are focusing on "one-to-one" deliveries, the company has developed proprietary, real-time batching tech to do "many-to-many" deliveries within a few hours. RaRa is already in partnership with some of the top e-commerce players in Indonesia like Blibli, Sayurbox, Kopi Kenangan, and many more.
We are a distributed team, with the company headquartered in Singapore 🇸🇬, core operations in Indonesia 🇮🇩, and the technology team based out of India 🇮🇳
Future of eCommerce Logistics.
- Data-driven logistics company that is bringing a same-day delivery revolution to Indonesia 🇮🇩
- Revolutionising delivery as an experience
- Empowering D2C Sellers with logistics as the core technology
- Build and maintain CI/CD tools and pipelines.
- Designing and managing highly scalable, reliable, and fault-tolerant infrastructure & networking that forms the backbone of distributed systems at RaRa Delivery.
- Continuously improve code quality, product execution, and customer delight.
- Communicate, collaborate and work effectively across distributed teams in a global environment.
- Strengthen teams across the product by sharing your knowledge
- Contribute to improving team relatedness, and help build a culture of camaraderie.
- Continuously refactor applications to ensure high-quality design
- Pair with team members on functional and non-functional requirements and spread design philosophy and goals across the team
- Excellent Bash and scripting fundamentals, with hands-on scripting experience in programming languages such as Python, Ruby, Golang, etc.
- Good understanding of distributed system fundamentals and ability to troubleshoot issues in a larger distributed infrastructure
- Working knowledge of the TCP/IP stack, internet routing, and load balancing
- Basic understanding of cluster orchestrators and schedulers (Kubernetes)
- Deep knowledge of Linux as a production environment, container technologies (e.g. Docker), Infrastructure as Code such as Terraform, and Kubernetes administration at large scale.
- Have worked on production distributed systems and have an understanding of microservices architecture, RESTful services, CI/CD.
Responsibilities:
- Writing and maintaining automation for deployments across various clouds (AWS/Azure/GCP)
- Bring a passion to stay on top of DevOps trends, experiment, and learn new CI/CD technologies.
- Creating architecture diagrams and documentation for the various components
- Building tools and automation to improve the system's observability, availability, reliability, performance/latency, monitoring, and emergency response
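Observability work like the above usually starts with latency percentiles. A small, illustrative Python sketch using the nearest-rank method (the sample latencies are made up, not RaRa data):

```python
# Illustrative sketch: summarising request latencies into the percentiles
# dashboards typically alert on. Sample values are hypothetical.

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile (p in [0, 100]) over a non-empty sample list."""
    ordered = sorted(samples)
    # Nearest-rank: ceil(p/100 * n), clamped to at least 1.
    rank = max(1, -(-len(ordered) * p // 100))  # ceil via negated floor division
    return ordered[int(rank) - 1]

latencies_ms = [12, 15, 11, 250, 14, 13, 16, 12, 900, 14]
summary = {f"p{p}": percentile(latencies_ms, p) for p in (50, 95, 99)}
print(summary)
```

Real systems compute these over streaming data with sketches (e.g. HDR histograms) rather than sorting raw samples, but the definition is the same.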
Requirements:
- 3 - 5 years of professional experience as a DevOps / System Engineer.
- Strong systems administration and troubleshooting skills with Linux.
- Experience with CI/CD best practices and tooling, preferably Jenkins, CircleCI.
- Hands-on experience with Cloud platforms such as AWS/Azure/GCP or private cloud environments.
- Experience with and understanding of modern container orchestration; well-versed in containerised applications (Docker, Docker Compose, Docker Swarm, Kubernetes).
- Experience in Infrastructure as code development using Terraform.
- Basic networking knowledge (VLAN, subnets, VPC) and web servers like Nginx and Apache.
- Experience handling different SQL and NoSQL databases (PostgreSQL, MySQL, Mongo).
- Experience with the Git version control software.
- Proficiency in any programming or scripting language such as shell script, Python, Golang.
- Strong interpersonal and communication skills; ability to work in a team environment.
- AWS / Kubernetes Certifications: AWS Certified Solutions Architect / CKA.
- Setup and management of a Kubernetes cluster, including writing Dockerfiles.
- Experience working in and advocating for agile environments.
- Knowledge in Microservice architecture.
The AWS Cloud/DevOps Engineer will be working with the engineering team and focusing on AWS infrastructure and automation. A key part of the role is championing and leading infrastructure as code. The Engineer will work closely with the Manager of Operations and DevOps to build, manage, and automate our AWS infrastructure.
Duties & Responsibilities:
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
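The cost-analysis duty above can be illustrated with a toy roll-up. The roles, fleet, and hourly rates below are hypothetical, not actual AWS pricing:

```python
# Hypothetical sketch of a monthly cost roll-up for a small fleet.
# Rates and roles are made-up illustrations, not AWS price-list data.

HOURLY_RATES = {"web": 0.0416, "worker": 0.0832, "db": 0.166}  # USD/hr, hypothetical

def monthly_cost(fleet: dict, hours: int = 730) -> float:
    """Estimated monthly cost of a fleet given as {role: instance_count}."""
    return sum(HOURLY_RATES[role] * count * hours for role, count in fleet.items())

def costliest_role(fleet: dict) -> str:
    """Role whose single instance removal would save the most per month."""
    candidates = {r: HOURLY_RATES[r] for r, n in fleet.items() if n > 0}
    return max(candidates, key=candidates.get)

fleet = {"web": 4, "worker": 2, "db": 1}
print(round(monthly_cost(fleet), 2))
print(costliest_role(fleet))
```

In practice this data would come from AWS Cost Explorer or billing exports rather than a hard-coded table; the sketch only shows the shape of the analysis.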
Qualifications:
- At least 1-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Expertise using Chef for configuration management
- Hands-on experience deploying and managing infrastructure with Terraform
- Solid foundation of networking and Linux administration
- Experience with CI/CD, Docker, GitLab, Jenkins, ELK, and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Strong bias for action and ownership
Our client is a call management solutions company, which helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI and cloud-based call operating facility that is affordable as well as feature-optimized. The advanced features offered like call recording, IVR, toll-free numbers, call tracking, etc are based on automation and enhances the call handling quality and process, for each client as per their requirements. They service over 6,000 business clients including large accounts like Flipkart and Uber.
- Being involved in Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, Database management, Backups, and Monitoring.
- Ensuring reliable operation of CI/CD pipelines
- Orchestrating the provisioning, load balancing, configuration, monitoring, and billing of resources in the cloud environment in a highly automated manner
- Logging, metrics, and alerting management.
- Creating Dockerfiles
- Creating Bash/Python scripts for automation.
- Performing root cause analysis for production errors.
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in AWS services; deploying, monitoring, and troubleshooting applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
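Root cause analysis of production errors, mentioned above, often starts with log triage. A hedged Python sketch that buckets access-log lines by status class, assuming an nginx-style log format (the sample lines are invented):

```python
# Hypothetical sketch: grouping access-log lines by HTTP status class
# as a first triage step. Assumes an nginx/Apache combined-log-like format.
import re
from collections import Counter

LOG_LINE = re.compile(r'"\w+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

def status_classes(lines: list) -> Counter:
    """Count responses by status class (2xx/4xx/5xx...)."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            counts[m.group("status")[0] + "xx"] += 1
    return counts

sample = [
    '1.2.3.4 - - [x] "GET /api/orders HTTP/1.1" 200 512',
    '1.2.3.4 - - [x] "GET /api/orders HTTP/1.1" 500 88',
    '5.6.7.8 - - [x] "POST /login HTTP/1.1" 401 14',
]
print(status_classes(sample))
```

A spike in the 5xx bucket relative to total traffic is the usual trigger for digging into individual traces; tools like the ELK stack or Graylog do this aggregation at scale.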
MLOps Engineer
Required Candidate profile :
- 3+ years' experience developing continuous integration and deployment (CI/CD) pipelines (e.g. Jenkins, GitHub Actions) and bringing ML models into CI/CD pipelines
- Candidate with strong Azure expertise
- Experience productionizing models
- Candidate should have complete knowledge of the Azure ecosystem, especially in the area of DE
- Candidate should have prior experience designing, building, testing, and maintaining machine learning infrastructure to empower data scientists to rapidly iterate on model development
- Develop continuous integration and deployment (CI/CD) pipelines on top of Azure, including AzureML, MLflow, and Azure DevOps
- Proficient knowledge of Git, Docker and containers, and Kubernetes
- Familiarity with Terraform
- E2E production experience with Azure ML, Azure ML Pipelines
- Experience with the Azure ML extension for Azure DevOps
- Worked on model drift (concept drift, data drift), preferably on Azure ML
- Candidate will be part of a cross-functional team that builds and delivers production-ready data science projects. You will work with team members and stakeholders to creatively identify, design, and implement solutions that reduce operational burden, increase reliability and resiliency, ensure disaster recovery and business continuity, enable CI/CD, optimize ML and AI services, and maintain it all in an infrastructure-as-code, everything-in-version-control manner.
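Model drift monitoring, listed above, can be illustrated in its simplest form as a mean-shift check of a live feature against its training baseline. Real drift monitors (e.g. Azure ML data drift detection) use richer statistics; all values below are made up:

```python
# Toy sketch of data-drift detection: how far has the live feature mean
# moved from the training baseline, in baseline standard deviations?
# Threshold and sample values are illustrative only.
import statistics

def drift_score(baseline: list, live: list) -> float:
    """Absolute shift of the live mean, in baseline standard deviations."""
    mu, sigma = statistics.mean(baseline), statistics.stdev(baseline)
    if sigma == 0:
        return float("inf") if statistics.mean(live) != mu else 0.0
    return abs(statistics.mean(live) - mu) / sigma

def has_drifted(baseline, live, threshold: float = 2.0) -> bool:
    """Flag drift when the mean has shifted more than `threshold` sigmas."""
    return drift_score(baseline, live) > threshold

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]
print(has_drifted(baseline, [10.2, 9.8, 10.4]))   # similar distribution -> False
print(has_drifted(baseline, [30.0, 31.0, 29.5]))  # shifted distribution -> True
```

Production pipelines would run such a check per feature on a schedule and route alerts into retraining workflows rather than printing results.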
- Responsible for the development and implementation of cloud solutions.
- Responsible for achieving automation & orchestration of tools (Puppet/Chef)
- Monitoring the product's security & health (Datadog/New Relic)
- Managing and maintaining databases (Mongo & Postgres)
- Automating infrastructure using AWS services like CloudFormation
- Providing evidence in infrastructure security audits
- Migrating to container technologies (Docker/Kubernetes)
- Should have knowledge of serverless concepts (AWS Lambda)
- Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
What you bring:
- Problem-solving skills that enable you to identify the best solutions.
- Team collaboration and flexibility at work.
- Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
- Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
- Dynamic, diverse, inclusive startup environment driven by transparency and velocity
- Bright, open, sunny working environment and collaborative office space
- Convenient office locations in Redwood City, Hyderabad, and Bangalore tech hubs
- Competitive salaries and company equity, and a focus on developing world-class talent operations
- Comprehensive health insurance available (medical) for you and your family
- Unlimited leave with manager approval and a 3-month paid sabbatical after 3 years of service
- CEO moonshot projects with cash awards every quarter
- Upskilling and learning support, including via paid conferences, online courses, and certifications
- Rupees 2,500 credited to your Sodexo meal card every month