About BootLabs
https://www.bootlabs.in/
- We are a boutique tech consulting partner, specializing in cloud-native solutions.
- We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises with cloud transformation, cloud migration, end-to-end automation, and managed cloud services.
- We are eager to research, discover, automate, adapt, empower, and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.
Technical Skills:
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
- AWS
Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elasticsearch
Workload: EC2, EKS, Lambda, etc.
- Azure
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, Virtual Machines, Azure Functions
- GCP
Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience with any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
• Kubernetes (EKS/AKS/GKE) or Ansible experience, covering basics such as pods, deployments, networking, and service mesh; have used a package manager such as Helm.
• Scripting experience (Bash/Python), including pipeline automation where required and system services (see the sketch after these lists).
• Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.
Optional:
• Experience in any programming language is not required but is appreciated.
• Good experience with Git, SVN, or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure, and code.
• Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
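A minimal sketch of the Bash/Python scripting referenced above: a small boto3 script that inventories running EC2 instances per VPC, as might run as a pipeline step. It assumes AWS credentials are already configured in the environment; the region name is a placeholder.

```python
# Sketch: inventory running EC2 instances per VPC (assumes configured AWS credentials).
import boto3

def list_running_instances(region="ap-south-1"):  # placeholder region
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    filters = [{"Name": "instance-state-name", "Values": ["running"]}]
    inventory = {}
    for page in paginator.paginate(Filters=filters):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                vpc_id = instance.get("VpcId", "no-vpc")
                inventory.setdefault(vpc_id, []).append(instance["InstanceId"])
    return inventory

if __name__ == "__main__":
    for vpc, instances in list_running_instances().items():
        print(f"{vpc}: {len(instances)} running instance(s): {', '.join(instances)}")
```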
Job Summary:
We are seeking a skilled DevOps Engineer to manage and optimise the infrastructure of AWS-hosted, single-tenant MEAN stack applications. The role involves automating deployments, ensuring high availability, and implementing monitoring and security best practices.
Key Responsibilities:
- Manage and optimise AWS resources (EC2, S3, SQS, SNS, CodeDeploy, Auto Scaling Groups, API Gateway, Load Balancer, Lambda, VPC, etc.).
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or CodePipeline.
- Implement containerisation using Docker and Kubernetes (EKS).
- Use Terraform or CloudFormation for infrastructure automation.
- Monitor and log application performance with Prometheus, Grafana, and CloudWatch (a minimal CloudWatch example follows this list).
- Ensure security using IAM, WAF, and other AWS tools.
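A minimal sketch of one monitoring piece mentioned above: publishing a custom application metric to CloudWatch with boto3. The namespace, metric, and dimension names are hypothetical; Prometheus/Grafana wiring is outside this snippet.

```python
# Sketch: push a custom application metric to CloudWatch (names are placeholders).
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

def publish_queue_depth(queue_name: str, depth: int) -> None:
    """Publish a single data point under a custom namespace."""
    cloudwatch.put_metric_data(
        Namespace="MeanStack/App",  # hypothetical custom namespace
        MetricData=[{
            "MetricName": "QueueDepth",
            "Dimensions": [{"Name": "QueueName", "Value": queue_name}],
            "Value": depth,
            "Unit": "Count",
        }],
    )

publish_queue_depth("orders", 42)
```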
Required Skills:
- Proficiency in AWS services.
- Experience with CI/CD tools and processes.
- Hands-on with Docker, Kubernetes, and IaC tools.
- Strong scripting skills (Bash, Python).
- Expertise in monitoring and logging tools.
Who We Are
Grip is building a new category of investment options for the new age of Indian investors. Millennials don’t communicate, shop, pay, entertain, or work like the previous generation, so why should they invest the same way?
Started in June 2020, Grip has seen 20% month-on-month growth to become one of India’s fastest-growing destinations for alternative investments. Today, Grip offers multiple investment options providing 8-16% annual returns and 1-60 month tenures as the country’s only multi-asset (lease financing, inventory financing, corporate bonds, start-up equity, commercial real estate) investment platform. With a minimum investment size of INR 20,000, Grip is democratizing investment options that have previously only been available to the largest funds and family offices.
Finance and technology (FinTech) is what we do, but people are at the core of our mission. From client-facing roles to technology, and everywhere in between, you’ll work alongside a diverse team that loves to solve problems, think creatively, and fly the plane as we continue to build it.
In the News
- Money Control
- Hindu Business
What We Can Offer You
● Young and fast-growing company with a healthy work-life balance
● Great culture based on the following core values
○ Courage
○ Transparency
○ Ownership
○ Customer Obsession
○ Celebration
● Lean structure and no micromanaging. You get to own your work
● The company has just turned two, so you get a seat on a rocket ship that’s just taking off!
● High focus on Learning & Development and monetary support for relevant upskilling
● Competitive compensation along with equity ownership for wealth creation
What You’ll Do
● Design cloud infrastructure that is secure, scalable, and highly available on AWS
● Work collaboratively with software engineering to define infrastructure and deployment requirements
● Provision, configure and maintain AWS cloud infrastructure defined as code
● Ensure configuration and compliance with configuration management tools
● Administer and troubleshoot Linux based systems
● Troubleshoot problems across a wide array of services and functional areas
● Build and maintain operational tools for deployment, monitoring, and analysis of AWS
infrastructure and systems
● Perform infrastructure cost analysis and optimization (a small Cost Explorer sketch follows this list)
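A minimal sketch of the cost-analysis task above, using the AWS Cost Explorer API through boto3. It assumes Cost Explorer is enabled on the account and the caller has ce:GetCostAndUsage permission; the dates are illustrative.

```python
# Sketch: per-service monthly cost report via Cost Explorer (dates are illustrative).
import boto3

ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:,.2f}")
```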
Your Superpowers
● At least 3-5 years of experience building and maintaining AWS infrastructure
(VPC,EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront,
S3,RDS,Elasticbeanstalk)
● Strong understanding of how to secure AWS environments and meet
compliance requirements
● Expertise using Chef/CloudFormation/Ansible for configuration management
● Hands-on experience deploying and managing infrastructure with Terraform
● Solid foundation of networking and Linux administration
● Experience with Docker, GitHub, Kubernetes, ELK, and deploying applications on AWS
● Ability to learn/use a wide variety of open source technologies and tools
● Strong bias for action and ownership
As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable.
Specific responsibilities will include:
- Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible.
- Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications (see the sketch after this list).
- Ensuring that the organization's systems are secure and compliant with industry standards.
- Collaborating with software developers to design and implement infrastructure as code.
- Providing mentorship and technical guidance to team members.
- Troubleshooting and resolving technical issues in collaboration with other IT professionals.
- Participating in the development and maintenance of the organization's disaster recovery and incident response plans.
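A minimal sketch of the Kubernetes monitoring work described above, using the official Kubernetes Python client to flag deployments whose ready replica count lags the desired count. The kubeconfig location and namespace are assumptions.

```python
# Sketch: flag deployments that are not fully rolled out (assumes a reachable kubeconfig).
from kubernetes import client, config

def unhealthy_deployments(namespace: str = "default"):
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    problems = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            problems.append((dep.metadata.name, ready, desired))
    return problems

if __name__ == "__main__":
    for name, ready, desired in unhealthy_deployments():
        print(f"{name}: {ready}/{desired} replicas ready")
```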
To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment.
Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines, including change data capture (CDC), to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
• 2-5 years of experience in any programming language (polyglot preferred) and system operations
• Awareness of DevOps and Agile methodologies
• Proficient in leveraging CI and CD tools to automate testing and deployment
• Experience working in an agile and fast-paced environment
• Hands-on knowledge of at least one cloud platform (AWS/GCP/Azure)
• Cloud networking knowledge: should understand VPCs, NATs, and routers
• Contributions to open source are a plus
• Good written communication skills are a must; contributions to technical blogs/whitepapers will be an added advantage
We are looking for candidates who have development experience and have delivered CI/CD-based projects. They should have good hands-on experience with the Jenkins master-slave architecture and with AWS native services such as CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, and should have set up cross-platform CI/CD pipelines spanning different cloud platforms or a mix of on-premises and cloud environments.
Job Description:
- Hands-on with the AWS (Amazon Web Services) cloud, its DevOps services, and CloudFormation.
- Experience interacting with customers.
- Excellent communication skills.
- Hands-on experience creating and managing Jenkins jobs, including Groovy scripting (see the sketch after this list).
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience with Docker and Kubernetes, including writing Dockerfiles.
- Hands on with version control systems like GitHub, Gitlab, TFS, BitBucket, etc.
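A minimal sketch of Jenkins job automation from Python: triggering a parameterized build over the Jenkins REST API. The server URL, job name, and credentials are placeholders, and depending on the security configuration a CSRF crumb may also be required.

```python
# Sketch: queue a parameterized Jenkins build over its REST API (all names are placeholders).
import requests

JENKINS_URL = "https://jenkins.example.com"   # placeholder
JOB_NAME = "deploy-service"                   # placeholder
AUTH = ("ci-user", "api-token")               # placeholder user/API token

def trigger_build(environment: str) -> int:
    response = requests.post(
        f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters",
        params={"ENVIRONMENT": environment},
        auth=AUTH,
        timeout=30,
    )
    response.raise_for_status()
    return response.status_code  # 201 means the build was queued

print(trigger_build("staging"))
```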
We are front runners of the technological revolution with an inexhaustible passion for technology! DevOn is the technical organization that originated from Prowareness. We are the company at the forefront of leading DevOps transformations and setting up High Performance Distributed DevOps teams with leading companies worldwide. DevOn helps market leaders to take the next step in software delivery. We consist of a dynamic team, in which personal growth is central!
About You
You have 6+ years of experience in AWS infrastructure automation. This is a fantastic opportunity to work in a fast-paced operations environment and to develop your career in cloud technologies, particularly Amazon Web Services.
You will build and monitor CI/CD pipelines in the AWS cloud for a highly scalable backend application built on the Java platform. We need someone who can troubleshoot, diagnose, and rectify system service issues.
You're cloud-native, with Terraform as your key orchestration tool for infrastructure as code.
You're comfortable driving. You prefer to own your work streams and enjoy working autonomously to progress towards your goals.
You provide incredible support to the team. You sweat the small stuff but keep the big picture in mind. You know that pair programming can give better results.
An ideal candidate is:
This is a key role within our DevOps team and will involve working as part of a collaborative agile team in a shared-services DevOps organization to support and deliver innovative technology solutions that directly align with the delivery of business value and an enhanced customer experience. The primary objective is to support the Amazon Web Services hosted environment, ensure continuous availability, and work closely with development teams to ensure best value for money and effective estate management.
- Set up CI/CD pipelines from scratch, along with integration of appropriate quality gates.
- Expert-level knowledge of the AWS cloud. Provision and configure infrastructure as code using Terraform.
- Build and configure Kubernetes-based infrastructure, networking policies, LBs, and cluster security. Define autoscaling and cost strategies.
- Automate the build of containerized systems with CI/CD tooling, Helm charts, and more
- Manage deployments and rollbacks of applications
- Implement monitoring and metrics with CloudWatch and New Relic (a minimal CloudWatch example follows this list).
- Troubleshoot and optimize containerized workload deployments for clients
- Automate operational tasks, and assist in the transition to service ownership models.
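A minimal sketch of the CloudWatch side of the monitoring work above: codifying a CPU alarm with boto3. The alarm name, Auto Scaling group, and SNS topic ARN are placeholders; New Relic setup is not covered here.

```python
# Sketch: create a CloudWatch CPU alarm for a node group (all identifiers are placeholders).
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # placeholder region

cloudwatch.put_metric_alarm(
    AlarmName="eks-nodes-high-cpu",                                              # placeholder
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "eks-node-group"}],    # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],              # placeholder ARN
)
```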
- Strong experience with the Java programming language or DevOps on Google Cloud.
- Strong communication skills.
- Experience in Agile methodologies
- Certification as a Google Cloud Professional Data Engineer will be an added advantage.
- Experience with Google Cloud Platform.
- Experience in Java or DevOps.
Required Key Skills:
- Excellent verbal and written communication and interpersonal skills.
- Ability to work independently and within a team environment.
- Interpersonal skills
- GCP, Cloud, Programming
- Agile
- Java programming language or DevOps experience.
CTC: 4L - 7L
Must Have Skills
- AWS Solutions Architect and/or DevOps certification, Professional preferred
- BS level technical degree or equivalent experience; Computer Science or Engineering background preferred
- Hands-on technical expertise with Amazon Web Services (AWS), including but not limited to EC2, VPC, IAM, Security Groups, ELB/NLB/ALB, Internet Gateway, S3, EBS, and EFS.
- Experience in migrating and deploying applications to the cloud, re-engineering applications for the cloud, and setting up OS and application environments in virtualized cloud infrastructure
- DevOps automation, CI/CD, infrastructure/services provisioning, application deployment and configuration
- DevOps toolsets including Ansible, Jenkins, XL Deploy, and XL Release
- Deployment and configuration of Java/WildFly, Spring Boot, JavaScript/Node.js, and Ruby applications and middleware
- Scripting in shell (Bash) and extensive Python (see the sketch after this list)
- Agile software development
- Excellent written and verbal communication, presentation, and collaboration skills
- Team leadership skills
- Linux (RHEL) administration/engineering
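A minimal sketch of the Python scripting called out above, applied to IAM: a quick hygiene check that lists users without an MFA device. It assumes the caller has iam:ListUsers and iam:ListMFADevices permissions.

```python
# Sketch: flag IAM users with no MFA device (assumes suitable IAM read permissions).
import boto3

iam = boto3.client("iam")

def users_without_mfa():
    flagged = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not devices:
                flagged.append(user["UserName"])
    return flagged

print("Users without MFA:", users_without_mfa())
```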
Experience:
- Deploying, configuring, and supporting large scale monolithic and microservices based SaaS applications
- Working as both an infrastructure and application migration specialist
- Identifying and documenting application requirements for network, F5, IAM, and security groups
- Implementing DevOps practices such as infrastructure as code, continuous integration, and automated deployment
- Working with technology leadership to understand business goals and requirements
- Experience with continuous integration tools
- Experience with configuration management platforms
- Writing and diagnosing issues with complex shell scripts
- Strong practical application development experience on Linux and Windows-based systems
● Responsible for the development and implementation of cloud solutions.
● Responsible for automation and orchestration using tools such as Puppet/Chef
● Monitoring the product's security and health (Datadog/New Relic)
● Managing and maintaining databases (Mongo and Postgres)
● Automating infrastructure using AWS services like CloudFormation
● Provide evidence in infrastructure security audits
● Migrating to container technologies (Docker/Kubernetes)
● Should have knowledge of serverless concepts (AWS Lambda); see the sketch after this list
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
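A minimal sketch of the serverless concept mentioned above: an AWS Lambda handler that reacts to S3 object-created event notifications and logs each uploaded object. The event shape follows the standard S3 notification format; everything else is illustrative.

```python
# Sketch: Lambda handler for S3 object-created notifications (illustrative only).
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # S3 put-event notifications arrive as a list of records
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        logger.info("New object uploaded: s3://%s/%s", bucket, key)
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```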
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas
in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leave with manager approval and a 3-month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support, including paid conferences, online courses, and certifications
● INR 2,500 credited every month to a Sodexo meal card
● Develop and deliver automation software required for building & improving the functionality, reliability, availability, and manageability of applications and cloud platforms
● Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
● Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platform and tools
● Automate the development and test automation processes through CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
● Build container hosting-platform using Kubernetes
● Introduce new cloud technologies, tools, and processes to keep innovating in the commerce area and drive greater business value.
Skills Required:
● Excellent written and verbal communication skills and a good listener.
● Proficiency in deploying and maintaining Cloud based infrastructure services (AWS, GCP, Azure – good hands-on experience in at least one of them)
● Well versed with service-oriented architecture, cloud-based web services architecture, design patterns and frameworks.
● Good knowledge of cloud-related services like compute, storage, network, messaging (e.g., SNS, SQS) and automation (e.g., CFT/Terraform); a small SQS example follows this list.
● Experience with relational SQL and NoSQL databases, including Postgres and
Cassandra.
● Experience in systems management/automation tools (Puppet/Chef/Ansible, Terraform)
● Strong Linux system administration experience with excellent troubleshooting and problem-solving skills
● Hands-on experience with languages (Bash/Python/Core Java/Scala)
● Experience with CI/CD pipeline (Jenkins, Git, Maven etc)
● Experience integrating solutions in a multi-region environment
● Self-motivated; learns quickly and delivers results with minimal supervision
● Experience with Agile/Scrum/DevOps software development methodologies.
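A minimal sketch of the messaging piece (SQS) listed above: sending, receiving, and deleting a message with boto3. The queue URL and region are placeholders.

```python
# Sketch: basic SQS send/receive/delete round trip (queue URL is a placeholder).
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # placeholder region
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="deployment-finished")

messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in messages.get("Messages", []):
    print("Received:", msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```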
Nice to Have:
● Experience in setting up the Elasticsearch, Logstash, Kibana (ELK) stack.
● Experience working with large-scale data.
● Experience with Monitoring tools such as Splunk, Nagios, Grafana, DataDog etc.
● Prior experience working with distributed architectures like Hadoop, MapReduce, etc.