As an MLOps Engineer at QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices, and work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams
Choose and use the right cloud services, DevOps tooling, and ML tooling so the team can produce high-quality code and release to production.
Build modern, scalable, and secure CI/CD pipelines to automate the development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines)
Shape and support next-generation technology that enables scaling ML products and platforms, bringing cloud expertise to enable ML use-case development, including MLOps
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects
Key Skills:
• Expert hands-on knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (Kubernetes), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., building APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature stores)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or above preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
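To give a concrete flavor of the Python 3 and pytest skills listed above, here is a minimal sketch; the function, its version scheme, and the test are illustrative assumptions, not part of any QuantumBlack codebase:

```python
# Hypothetical helper of the kind an MLOps engineer might unit-test with pytest.
# The "MAJOR.MINOR.PATCH" model-version scheme here is an illustrative assumption.

def parse_model_version(version: str) -> tuple:
    """Parse a 'MAJOR.MINOR.PATCH' model version string into a tuple of ints."""
    parts = version.split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"invalid model version: {version!r}")
    return tuple(int(p) for p in parts)

# pytest discovers and runs functions named test_*; run with `pytest <file>.py`.
def test_parse_model_version():
    assert parse_model_version("1.4.2") == (1, 4, 2)
```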
Wissen Technology is hiring a DevOps Engineer
Required:
- 4 to 10 years of relevant experience in DevOps
- Must have hands-on experience with AWS, Kubernetes, and CI/CD pipelines
- Good to have exposure to GitHub or GitLab
- Open to working from Chennai
- Work mode will be Hybrid
Company profile:
Company Name: Wissen Technology
Group of companies in India: Wissen Technology & Wissen Infotech
Work Location: Chennai
Website: www.wissen.com
Wissen thought leadership: https://lnkd.in/gvH6VBaU
LinkedIn: https://lnkd.in/gnK-vXjF
An Azure DevOps engineer should have a deep understanding of container principles and hands-on experience with Docker.
They should also be able to set up and manage clusters using Azure Kubernetes Service (AKS), and understand API management, Azure Key Vault, ACR, and networking concepts such as virtual networks, subnets, NSGs, and route tables. Awareness of at least one API gateway such as Apigee, Kong, or Azure APIM is a must, along with strong experience with IaC technologies such as Terraform, ARM/Bicep templates, GitHub pipelines, and Sonar.
- Designing DevOps strategies: Recommending strategies for migrating and consolidating DevOps tools, designing an Agile work management approach, and creating a secure development process
- Implementing DevOps development processes: Designing version control strategies, integrating source control, and managing build infrastructure
- Managing application configuration and secrets: Ensuring system and infrastructure availability, stability, scalability, and performance
- Automating processes: Overseeing code releases and deployments with an emphasis on continuous integration and delivery
- Collaborating with teams: Working with architects and developers to ensure smooth code integration, and collaborating with development and operations teams to define pipelines
- Documentation: Producing detailed development architecture designs, setting up the DevOps tools, and working with the CI/CD specialist to integrate the automated CI and CD pipelines with those tools
- Ensuring security and compliance/DevSecOps: Managing code quality and security policies
- Troubleshooting issues: Investigating issues and responding to customer queries
- Core Skills: As described in the overview above: a deep understanding of container principles and hands-on Docker experience, the ability to set up and manage AKS clusters, knowledge of API management, Azure Key Vault, ACR, and networking (virtual networks, subnets, NSGs, route tables), awareness of at least one of Apigee, Kong, or Azure APIM, and strong experience with IaC technologies such as Terraform, ARM/Bicep templates, GitHub pipelines, and Sonar
- Additional Skills: Self-starter with the ability to execute tasks on time; excellent communication skills; ability to come up with multiple solutions to problems; able to interact with client-side experts to resolve issues by providing correct pointers; excellent debugging skills; ability to break down tasks into smaller steps
Only apply via this link: https://loginext.hire.trakstar.com/jobs/fk025uh?source=
LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering - DevOps or Senior Database Administrator to lead the development and operations efforts for the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience building secure, high-performing, and scalable infrastructure, and experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging, and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, and Alibaba Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations to automate daily tasks
- Ensure High Availability and Auto-failover with minimum or no manual interventions
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix Administration and Python/Shell Scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation and Continuous Integration/Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch, and Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in query analysis, performance tuning, and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication, judgment, and decision-making skills
- Excellent leadership skills
The candidates should have:
· Strong knowledge of Windows and Linux OS
· Experience working with version control systems like Git
· Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and the ELK stack
· Basic understanding of SQL commands
· Experience working with Azure DevOps
Bachelor's degree in Computer Science or a related field, or equivalent work experience
Strong understanding of cloud infrastructure and services, such as AWS, Azure, or Google Cloud Platform
Experience with infrastructure as code tools such as Terraform or CloudFormation
Proficiency in scripting languages such as Python, Bash, or PowerShell
Familiarity with DevOps methodologies and tools such as Git, Jenkins, or Ansible
Strong problem-solving and analytical skills
Excellent communication and collaboration skills
Ability to work independently and as part of a team
Willingness to learn new technologies and tools as required
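As an example of the scripting proficiency the list above asks for, here is a small retry helper of the kind that shows up in everyday DevOps automation; all names and the delay values are illustrative assumptions:

```python
import time

def retry(fn, attempts=3, delay=0.01):
    """Call fn(); on exception, retry up to `attempts` times with a fixed delay."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Simulate a flaky operation that succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(retry(flaky))  # prints "ok" after two retried failures
```

In real scripts the fixed delay is usually replaced by exponential backoff with jitter, but the control flow is the same.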
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, processing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM) worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams based in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- 4+ years of relevant work experience in a DevOps role
- 3+ years of experience designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
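The monitoring-and-alerting experience listed above usually reduces to logic like the following consecutive-breach check; the threshold and window are illustrative assumptions, not values from any of the named tools:

```python
# Sketch of the alerting logic behind tools like Nagios or CloudWatch alarms:
# fire an alert only when N consecutive samples exceed a threshold, which
# avoids paging on a single transient spike.

def should_alert(samples, threshold=80.0, consecutive=3):
    """Return True if `consecutive` samples in a row exceed `threshold`."""
    run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0
        if run >= consecutive:
            return True
    return False

print(should_alert([70, 85, 90, 95, 60]))  # True: three consecutive samples above 80
```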
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
Experience with Linux infrastructure, open-source databases (MySQL, Postgres, etc.), CI/CD tools (Jenkins, GitLab, Nexus Repository), Jira, Agile workflows, and Kanban.
- Solid understanding of IP networking and TCP/IP
- 7+ years of work experience in IT
- 4+ years of experience as a DevOps Engineer
- Good interpersonal skills and written and oral communication skills
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience designing systems, monitoring metrics, and reading graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native level, both in speech and writing.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
DevOps Engineer
Company Introduction
CometChat (https://www.cometchat.com/) harnesses the power of chat by helping thousands of businesses around the world create customized in-app messaging experiences. Our products allow developers to seamlessly add voice, video, and text chat to their websites and mobile apps so that their users can communicate with each other, resulting in a unified customer experience, increased engagement and retention, and revenue growth.
In 2019, CometChat was selected into the exclusive Techstars Boulder Accelerator. CometChat (industry: CPaaS, communication-platform-as-a-service) has also been listed among the top 10 best SaaS companies by G2 Crowd. With solid financials, strong organic growth, and increasing interest in developer-tool-focused companies (from the market and from top technical talent), we're heading into an exciting period of growth and acceleration. CometChat (https://www.crunchbase.com/organization/cometchat) is backed by seasoned investors such as iSeed Ventures, Range Ventures, Silicon Badia, eonCapital, and Matchstick Ventures.
A global business from the start, we have 60+ team members across our Denver and Mumbai offices serving over 50,000 customers around the world. We’ve had an exciting journey so far, and we know this is just the beginning!
CometChat’s Mission
Enable meaningful connections between real people in an increasingly digital world.
CometChat’s Products
CometChat offers a robust suite of cloud-hosted text, voice, and video options that meet businesses where they are, whether they need drag-and-drop plugins that can be ready within 30 minutes or want more advanced features and can invest development resources to launch the experience that will best serve their users.
● Quickly build a reliable and full-featured chat experience into any mobile or web app
● Fully customizable SDKs and APIs designed to help companies ship faster
At every step, CometChat helps customers solve complex infrastructure, performance, and security challenges, regardless of the platform. But there is so much more! With over 20 ready-to-use extensions, customers can build an experience and get the data, analysis, and insights they need to drive their business forward.
CometChat’s solutions are perfect for every kind of chat including:
● Social community – Allowing people in online communities to interact without moving the conversation to another platform
● Marketplace – Enabling communications between buyers and sellers
● Events – Bringing thousands of users together to interact without diminishing the quality of the experience
● Telemedicine – Making connections between patients and providers more accessible
● Dating – Keeping people engaged while they connect with one another
● And more!
CometChat is committed to fostering a culture of innovation & collaboration. Our people are our strength so we respect and nurture their individual talent and potential. Join us if you are looking to be a part of a high growth team!
Position Overview & Priorities:
The DevOps Engineer will be responsible for the effective provisioning, installation/configuration, operation, and maintenance of systems and software using Infrastructure as Code. This can include provisioning cloud instances, streamlining deployments, configuring virtual instances, and scaling out DB servers.
Primary responsibilities:
- Oversight of all server environments, from Dev through Production.
- Work on an infrastructure that is 100% on AWS.
- Work on CI/CD tooling which is used to build and deploy code to our cloud.
- Assist with day-to-day issue management.
- Work on internal tooling which simplifies workflows.
- Research, design and implement solutions for fault tolerance, monitoring, performance enhancement, capacity optimization, and configuration management of systems and applications.
Work Location:
We operate on a Hybrid model: you choose where you work from, remotely or from our offices. Currently, our talent is spread across 14 different cities globally.
Prioritized Experiences and Capabilities:
- 2-4 years of experience working as a DevOps Engineer or currently practicing the DevOps methodology
- Experience in AWS Infrastructure
- Hands-on experience with Infrastructure as Code (CloudFormation/Terraform, Puppet/Chef/Ansible)
- Strong background in Linux/Unix Administration
- DevOps automation with CI/CD, a pipeline that enforces proper versioning and branching practices
- Experience with Docker and Kubernetes
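The "proper versioning and branching practices" bullet above is often enforced as an automated CI check. A sketch in Python, where the branch-naming convention itself is an illustrative assumption, not a CometChat standard:

```python
import re

# Hypothetical convention: long-lived branches (main, develop) plus typed
# short-lived branches like feature/add-login or hotfix/cve-2024-patch.
BRANCH_PATTERN = re.compile(r"^(main|develop|(feature|bugfix|hotfix|release)/[a-z0-9._-]+)$")

def branch_is_valid(name: str) -> bool:
    """Return True if a branch name follows the assumed naming convention."""
    return bool(BRANCH_PATTERN.match(name))

print(branch_is_valid("feature/add-login"))  # True
print(branch_is_valid("my-random-branch"))   # False
```

In practice a check like this runs as an early pipeline step and fails the build with a descriptive message rather than just returning a boolean.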
You will be responsible for:
1. Setting up and maintaining cloud (AWS/GCP/Azure) and Kubernetes clusters, and automating their operation
2. All operational aspects of the Devtron platform, including maintenance, upgrades, and automation
3. Providing Kubernetes expertise to facilitate smooth and fast customer onboarding onto the Devtron platform
Responsibilities:
1. Manage the Devtron platform on multiple Kubernetes clusters
2. Design and embed industry best practices for online services, including disaster recovery, business continuity, monitoring/alerting, and service health measurement
3. Provide operational support for day-to-day activities involving the deployment of services
4. Identify opportunities for improving the security, reliability, and scalability of the platform
5. Facilitate smooth and fast customer onboarding onto the Devtron platform
6. Drive customer engagement
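The "service health measurement" responsibility above typically starts with availability arithmetic like the following; the SLO target and probe counts are illustrative assumptions, not Devtron figures:

```python
def availability(successes: int, total: int) -> float:
    """Percentage of successful health probes; vacuously 100% with no probes."""
    return 100.0 * successes / total if total else 100.0

slo_target = 99.9  # illustrative SLO target
uptime = availability(successes=99_985, total=100_000)
print(f"{uptime:.3f}% (meets {slo_target}% SLO: {uptime >= slo_target})")
```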
Requirements:
● Bachelor's Degree in Computer Science or a related field.
● 2+ years working as a DevOps engineer
● Proficient in 1 or more programming languages (e.g. Python, Go, Ruby).
● Familiar with shell scripts, Linux commands, network fundamentals
● Understanding of large scale distributed systems
● Basic understanding of cloud computing (AWS/GCP/Azure)
Preferred Qualifications:
● Great analytical and interpersonal skills
● Passion for creating efficient, reliable, reusable programs/scripts
● Excited about technology, with a strong interest in learning about and playing with the latest technologies and doing POCs
● Strong customer focus, ownership, urgency, and drive
● Knowledge of and experience with cloud-native tools like Prometheus, Kubernetes, Docker, and Grafana