Bito is a startup using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers use Bito to increase their productivity by 31%, making more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), now worth well over $1B. We are looking to apply those learnings, learn a lot along with you, and do something even more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company fully remote, with our main teams clustered by time zone in the US and India. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible (see the sketch after this list)
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
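For a sense of what Ansible-driven configuration management can look like when wrapped in Python, here is a minimal, hypothetical sketch using the ansible-runner library; the directory layout and playbook name are assumptions for illustration, not part of this posting:

```python
# Hypothetical sketch: invoke an Ansible playbook from Python via
# ansible-runner. The private_data_dir layout and playbook name are
# placeholders; ansible-runner looks for the playbook under
# <private_data_dir>/project/ by default.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="/tmp/ansible-demo",  # assumed runner directory
    playbook="site.yml",                   # assumed playbook name
)
print(f"status={result.status} rc={result.rc}")
for event in result.events:
    # Each event is a dict describing one step of the playbook run.
    print(event.get("event"))
```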
Requirements:
- 4+ years of relevant work experience in a DevOps role
- 3+ years of experience designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of the core services of at least one major cloud, plus big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery of configuration automation toolsets such as Ansible, Chef, etc.
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, Cloudwatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
- Work from anywhere
- Flexible work timings
- Competitive compensation, including stock options
- A chance to work in the exciting generative AI space
- Quarterly team offsite events
Experience: 3+ years in Cloud Architecture
About the Company:
The company is a global leader in secure payments and trusted transactions. It is at the forefront of the digital revolution that is shaping new ways of paying, living, doing business, and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Its innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible, and support social transformation.
Cloud Architect / Lead
- Role Overview
- A senior engineer with a strong background and experience in cloud-related technologies and architectures, who can design target cloud architectures to transform existing architectures together with the in-house team, and can configure and build cloud architectures hands-on while guiding others
- Key Knowledge
- 3-5+ years of experience with AWS, GCP, or Azure technologies
- Certification on one or more of the major cloud platforms is preferred
- Strong hands-on experience with technologies such as Terraform, Kubernetes, Docker, and container orchestration
- Ability to guide and lead internal agile teams on cloud technology
- Background in the financial services industry, or similar experience with critical operations
We're Hiring: DevOps Tech Lead with 7-9 Years of Experience! 🚀
Are you a seasoned DevOps professional with a passion for cloud technologies and automation? We have an exciting opportunity for a DevOps Tech Lead to join our dynamic team at our Gurgaon office.
🏢 ZoomOps Technology Solutions Private Limited
📍 Location: Gurgaon
💼 Full-time position
🔧 Key Skills & Requirements:
✔ 7-9 years of hands-on experience in DevOps roles
✔ Proficiency in Cloud Platforms like AWS, GCP, and Azure
✔ Strong background in Solution Architecture
✔ Expertise in writing automation scripts using Python and Bash (see the sketch after this list)
✔ Ability to manage IaC and configuration management tools such as Terraform, Ansible, and Pulumi
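As a purely illustrative example of the Python automation scripting mentioned above (the tag convention, region, and behavior are assumptions, not requirements from this posting), a small boto3 script that stops running EC2 instances missing a required tag:

```python
# Hypothetical example: stop EC2 instances that lack an "owner" tag.
# Assumes AWS credentials are configured in the environment (e.g. via
# AWS_PROFILE); the region and tag key are illustrative choices.
import boto3

REQUIRED_TAG = "owner"  # assumed tagging convention

def stop_untagged_instances(region: str = "us-east-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    untagged = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])
    if untagged:
        ec2.stop_instances(InstanceIds=untagged)
        print(f"Stopped untagged instances: {untagged}")

if __name__ == "__main__":
    stop_untagged_instances()
```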
Responsibilities:
🔹 Lead and mentor the DevOps team, driving innovation and best practices
🔹 Design and implement robust CI/CD pipelines for seamless software delivery
🔹 Architect and optimize cloud infrastructure for scalability and efficiency
🔹 Automate manual processes to enhance system reliability and performance
🔹 Collaborate with cross-functional teams to drive continuous improvement
Join us to work on exciting projects and make a significant impact in the tech space!
Apply now and take the next step in your DevOps career!
We are looking for an excellent, experienced person in the DevOps field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes to empower developers to deploy and release their code seamlessly.
Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
- Understanding of accessibility and security compliance (depending on the specific project)
- User authentication and authorization between multiple systems, servers, and environments
- Integration of multiple data sources and databases into one system
- Understanding of the fundamental design principles behind a scalable application
- Configuration management tools (Ansible/Chef/Puppet) and cloud service providers (AWS/DigitalOcean); the Docker + Kubernetes ecosystem is a plus
- Ability to make key decisions for our infrastructure, networking, and security
- Writing and modifying shell scripts during migrations and for DB connections
- Monitoring production server health across parameters (CPU load, physical memory, swap memory) and setting up monitoring tools such as Nagios
- Creating alerts and configuring monitoring of specified metrics to manage cloud infrastructure efficiently (a sketch follows this list)
- Setting up and managing VPCs and subnets, connecting different zones, and blocking suspicious IPs/subnets via ACLs
- Creating and managing AMIs, snapshots, and volumes; upgrading/downgrading AWS resources (CPU, memory, EBS)
- Managing microservices at scale and maintaining the compute and storage infrastructure for various product teams
- Strong knowledge of configuration management tools such as Ansible, Chef, and Puppet
- Experience with change-tracking tools such as Jira, log analysis, and maintaining documentation of production server error-log reports
- Experience in troubleshooting, backup, and recovery
- Excellent knowledge of cloud service providers such as AWS and DigitalOcean
- Good knowledge of the Docker and Kubernetes ecosystem
- Proficient understanding of code versioning tools, such as Git
- Must have experience working in an automated environment
- Good knowledge of AWS services such as Amazon EC2, Amazon S3 (including Amazon Glacier), Amazon VPC, and Amazon CloudWatch
- Scheduling jobs using crontab and creating swap memory
- Proficient knowledge of access management (IAM)
- Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
- Good knowledge of GCP
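As a minimal, hypothetical sketch of the alerting work listed above, here is boto3 creating a CloudWatch CPU alarm; the instance ID, thresholds, and SNS topic ARN are placeholders, not details from this posting:

```python
# Hypothetical sketch: create a CloudWatch alarm on EC2 CPU utilization.
# The instance ID, SNS topic ARN, and thresholds below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # evaluate 5-minute averages
    EvaluationPeriods=2,       # two consecutive breaches trigger the alarm
    Threshold=80.0,            # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```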
Educational Qualifications
B.Tech (IT), M.Tech, MBA (IT), BCA, MCA, or any degree in a relevant field
Experience: 2-6 years
What you will do:
- Handling configuration management, web services architectures, DevOps implementation, build & release management, database management, backups, and monitoring
- Logging, metrics and alerting management
- Creating Dockerfiles
- Performing root cause analysis for production errors
What you need to have:
- 12+ years of experience in software development/QA/software deployment, with 5+ years of experience managing high-performing teams
- Proficiency in VMware, AWS, and cloud application development and deployment
- Good knowledge of Java and Node.js
- Experience working with RESTful APIs, JSON, etc.
- Experience with unit/functional test automation is a plus
- Experience with MySQL, MongoDB, Redis, RabbitMQ
- Proficiency in Jenkins, Ansible, and Terraform/Chef/Ant
- Proficiency in Linux-based operating systems
- Proficiency with cloud infrastructure such as Docker and Kubernetes
- Strong problem solving and analytical skills
- Good written and oral communication skills
- Sound understanding of areas of Computer Science such as algorithms, data structures, object-oriented design, and databases
- Proficiency in monitoring and observability
- Good experience with AWS services such as Elastic Compute Cloud (EC2), IAM, RDS, API Gateway, Cognito, etc.
- Experience using Git, SonarQube, Ansible, Nexus, Nagios, etc.
- Strong experience creating, importing, and launching volumes with security groups, auto-scaling, load balancers, and fault-tolerant configurations
- Experience configuring Jenkins jobs with related plugins for building, testing, and continuous deployment to achieve complete CI/CD (see the sketch after this list)
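As a small, hedged illustration of driving Jenkins from automation code, here is a sketch using the python-jenkins client; the server URL, credentials, job name, and parameter are hypothetical placeholders:

```python
# Hypothetical sketch: trigger a parameterized Jenkins build and inspect it.
# Requires the python-jenkins package; the URL, credentials, and job name
# are placeholders, not a real deployment.
import jenkins

server = jenkins.Jenkins(
    "http://jenkins.example.com:8080",
    username="ci-bot",
    password="api-token",
)

# Queue a build of a (hypothetical) pipeline job with one parameter.
server.build_job("app-ci-pipeline", parameters={"GIT_BRANCH": "main"})

# Fetch basic job metadata to confirm the job exists and see the last build.
info = server.get_job_info("app-ci-pipeline")
print(info["lastBuild"])
```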
We (the Software Engineering team) are looking for a motivated, experienced person with a data-driven approach to join our Distribution Team in Bangalore to help design, execute, and improve our test sets and infrastructure for producing high-quality Hadoop software.
A Day in the Life
You will be part of a team that makes sure our releases are predictable and deliver high value to the customer. This team is responsible for automating and maintaining our test harness, and making test results reliable and repeatable.
You will:
- work on making our distributed software stack more resilient to high-scale endurance runs and customer simulations
- provide valuable fixes to our product development teams for the issues you've found during exhaustive test runs
- work with product and field teams to make sure our customer simulations match expectations and can provide valuable feedback to our customers
- work with amazing people - we are a fun and smart team, including many of the top luminaries in Hadoop and related open source communities. We frequently interact with the research community, collaborate with engineers at other top companies, and host cutting-edge researchers for tech talks.
- do innovative work - Cloudera pushes the frontier of big data and distributed computing, as our track record shows. We work on high-profile open source projects, interacting daily with engineers at other exciting companies, speaking at meet-ups, etc.
- be a part of a great culture - a transparent and open meritocracy. Everybody is always thinking of better ways to do things and coming up with ideas that make a difference. We build our culture to be the best workplace of our careers.
You have:
- strong knowledge of at least one of the following languages: Java / Python / Scala / C++ / C#
- hands-on experience with at least one of the following configuration management tools: Ansible, Chef, Puppet, Salt
- confidence with Linux environments
- the ability to identify critical weak spots in distributed software systems
- experience in developing automated test cases and test plans (a sketch follows this list)
- the ability to deal with distributed systems
- solid interpersonal skills conducive to a distributed environment
- the ability to work independently on multiple tasks
- a self-driven and motivated attitude, with a strong work ethic and a passion for problem solving
- the drive to innovate, automate, and break the code
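By way of illustration only (not Cloudera's actual test harness), here is a minimal pytest-style automated test case for a hypothetical retry helper of the kind used to make runs against flaky distributed services repeatable; every name in it is invented for the example:

```python
# Hypothetical sketch of an automated test case, in the spirit of the
# testing work described above. The retry helper and tests are illustrative.
import pytest

def retry(fn, attempts=3, exceptions=(ConnectionError,)):
    """Call fn, retrying up to `attempts` times on the given exceptions."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except exceptions as exc:
            last_error = exc
    raise last_error

def test_retry_succeeds_after_transient_failures():
    calls = {"count": 0}

    def flaky():
        calls["count"] += 1
        if calls["count"] < 3:
            raise ConnectionError("transient failure")
        return "ok"

    assert retry(flaky, attempts=3) == "ok"
    assert calls["count"] == 3

def test_retry_raises_after_exhausting_attempts():
    def always_failing():
        raise ConnectionError("down")

    with pytest.raises(ConnectionError):
        retry(always_failing, attempts=2)
```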
The right person in this role has an opportunity to make a huge impact at Cloudera and add value to our future decisions. If this position has piqued your interest and you have what we described - we invite you to apply! An adventure in data awaits.
Role and responsibilities
- Expertise in AWS (most typical services), Docker, and Kubernetes
- Strong scripting knowledge, strong DevOps automation, good at Linux
- Hands-on with CI/CD (CircleCI preferred, but any CI/CD tool will do); strong understanding of GitHub
- Strong understanding of AWS networking; strong with security and certificates
Nice-to-have skills
- Involved in Product Engineering
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines, and cloud infrastructure to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to work with the latest tech in big data engineering, novel machine learning pipelines, and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes, and methodologies
- Experience with AWS CloudFormation and the AWS CLI is essential (see the sketch after this list)
- The ability to work to project deadlines efficiently and with minimal guidance
- A positive attitude and enjoyment of working within a globally distributed team
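As a hedged sketch of working with CloudFormation programmatically (via boto3, one alternative to the AWS CLI), here is a minimal inline template deployed as a stack; the stack name and bucket resource are placeholders for illustration:

```python
# Hypothetical sketch: create a CloudFormation stack from an inline template.
# The stack name and the single S3 bucket resource are placeholders.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="example-stack",
    TemplateBody=json.dumps(template),
)

# Block until the stack finishes creating (or fails).
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="example-stack")
print("Stack created")
```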
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, serverless stacks, and container infrastructure
- An interest in healthcare and medical sectors
- A technical degree with 4+ years of infrastructure and automation experience