
DevOps CI/CD
∙Managing large-scale AWS deployments using Infrastructure as Code (IaC) and k8s developer tools
∙Managing build/test/deployment of very large-scale systems, bridging between developers and live stacks
∙Actively troubleshoot issues that arise in development and production
∙Owning, learning, and deploying software in support of customer-facing applications
∙Help establish DevOps best practices
∙Actively work to reduce system costs
∙Work with open-source technologies, helping to ensure their robustness and security
∙Actively work with CI/CD, Git, and other component parts of the build and deployment system
∙Leading skills with AWS cloud stack
∙Proven implementation experience with Infrastructure as Code at scale (Terraform, Terragrunt, Flux, Helm charts) – see the sketch after this list
∙Proven experience with Kubernetes at scale
∙Proven experience with cloud management tools beyond the AWS console (k9s, Lens)
∙Strong communicator who people want to work with – must be thought of as the ultimate collaborator
∙Solid team player
∙Strong experience with Linux-based infrastructures and AWS
∙Strong experience with databases such as MySQL, Redshift, Elasticsearch, Mongo, and others
∙Strong knowledge of JavaScript and Git
∙Agile practitioner
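For illustration, here is a minimal Python sketch of the kind of IaC deployment step described above; it simply shells out to the Terraform and Helm CLIs, and the directory layout, release name, chart path, and namespace are hypothetical.

# Minimal sketch of an IaC deployment step: apply a Terraform stack, then roll out
# a Helm release. Paths, release/chart names, and the namespace are placeholders.
import subprocess

def run(cmd, cwd=None):
    # Fail fast so a broken plan never reaches the cluster.
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)

def deploy(env: str):
    tf_dir = f"infra/{env}"                      # assumed repo layout
    run(["terraform", "init", "-input=false"], cwd=tf_dir)
    run(["terraform", "apply", "-auto-approve", "-input=false"], cwd=tf_dir)
    run([
        "helm", "upgrade", "--install", "web-api",   # hypothetical release name
        "charts/web-api",                            # hypothetical chart path
        "--namespace", env,
        "--create-namespace",
        "-f", f"charts/web-api/values-{env}.yaml",
        "--wait",
    ])

if __name__ == "__main__":
    deploy("staging")

In practice a step like this would normally run inside the CI/CD pipeline rather than from a workstation, with Terraform state kept in a remote backend.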

What does a successful Senior DevOps Engineer do at Fiserv?
This role focuses on contributing to and enhancing our DevOps environment within the Issuer Solution group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines.
• Implement various development, testing, and automation tools and IT infrastructure.
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps, with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker containers
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configuration, and persistent volumes (see the sketch after this list)
• Good understanding of infrastructure as code
• Ability to write and update documentation
• Demonstrate a logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
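To make the hands-on items above concrete, here is a minimal Python sketch that builds and pushes a Docker image and then points a Kubernetes Deployment at the new tag; it assumes the docker and kubernetes client packages, and the registry, image, namespace, and Deployment names are hypothetical.

# Sketch: build and push a Docker image, then update a Kubernetes Deployment to the
# new tag. Registry, image, namespace, and Deployment names are placeholders.
import docker                            # pip install docker
from kubernetes import client, config    # pip install kubernetes

def build_and_push(tag: str) -> str:
    repo = "registry.example.com/payments-api"    # hypothetical registry/repository
    image = f"{repo}:{tag}"
    d = docker.from_env()
    d.images.build(path=".", tag=image)
    d.images.push(repo, tag=tag)
    return image

def roll_out(image: str):
    config.load_kube_config()            # uses the current kubeconfig context
    apps = client.AppsV1Api()
    # Strategic-merge patch: only the container image changes.
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "payments-api", "image": image}]}}}}
    apps.patch_namespaced_deployment(name="payments-api", namespace="dev", body=patch)

if __name__ == "__main__":
    roll_out(build_and_push("1.2.3"))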
What we would prefer you to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Should have worked on Agile projects
DevOps & Automation:
- Experience with CI/CD tools such as Azure DevOps (YAML pipelines), Git, and GitHub. Capable of automating build, test, and deployment processes to streamline application delivery.
- Hands-on experience with Infrastructure as Code (IaC) tools such as Bicep (preferred), Terraform, Ansible, and ARM templates (see the sketch below).
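As a rough illustration of the Bicep point above, here is a minimal Python sketch that drives a Bicep deployment through the Azure CLI; it assumes az is installed and logged in, and the resource group, template file, and parameter are hypothetical.

# Sketch: deploy a Bicep template to a resource group via the Azure CLI.
# Resource group, template file, and parameter values are placeholders.
import subprocess

def deploy_bicep(resource_group: str, template: str, environment: str):
    subprocess.run([
        "az", "deployment", "group", "create",
        "--resource-group", resource_group,
        "--template-file", template,      # the CLI compiles .bicep files transparently
        "--parameters", f"environment={environment}",
    ], check=True)

if __name__ == "__main__":
    deploy_bicep("rg-app-dev", "main.bicep", "dev")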
Cloud Services & Architecture:
- Experience in Azure Cloud services, including Web Apps, AKS, Application Gateway, APIM, and Logic Apps.
- Good understanding of cloud design patterns, security best practices, and cost optimization strategies.
Scripting & Automation:
- Experience in developing and maintaining automation scripts using PowerShell to manage, monitor, and support applications.
- Familiar with Azure CLI, REST APIs, and automating workflows using Azure DevOps Pipelines (see the sketch below).
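As one hedged example of the Azure REST automation mentioned above, this Python sketch lists resource groups through the Azure Resource Manager REST API; it assumes the azure-identity and requests packages, and the subscription ID is a placeholder.

# Sketch: call the Azure Resource Manager REST API with a token from azure-identity.
# The subscription ID is a placeholder.
import requests
from azure.identity import DefaultAzureCredential    # pip install azure-identity

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"   # placeholder

def list_resource_groups():
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
    resp = requests.get(
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        "/resourcegroups?api-version=2021-04-01",
        headers={"Authorization": f"Bearer {token.token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [rg["name"] for rg in resp.json()["value"]]

if __name__ == "__main__":
    print(list_resource_groups())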
Data Integration & ADF:
- Working knowledge or basic hands-on experience with Azure Data Factory (ADF), focusing on developing and managing data pipelines and workflows.
- Knowledge of data integration practices, including ETL/ELT processes and data transformations.
Application Management & Monitoring:
- Ability to provide comprehensive support for both new and legacy applications.
- Proficient in managing and monitoring application performance using tools like Azure Monitor, Log Analytics, and Application Insights (see the sketch after this list).
- Understanding of application security principles and best practices.
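A minimal sketch of the kind of Log Analytics check used for application monitoring, assuming the azure-monitor-query package; the workspace ID and the table queried are placeholders.

# Sketch: query a Log Analytics workspace for recent application exceptions.
# Workspace ID and the KQL table are placeholders.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient      # pip install azure-monitor-query

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"   # placeholder

def recent_exception_count() -> int:
    client = LogsQueryClient(DefaultAzureCredential())
    response = client.query_workspace(
        workspace_id=WORKSPACE_ID,
        query="AppExceptions | summarize count()",
        timespan=timedelta(hours=1),
    )
    return int(response.tables[0].rows[0][0])

if __name__ == "__main__":
    print("exceptions in the last hour:", recent_exception_count())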
Database Skills:
- Basic experience with SQL and Azure SQL, including database backups, restores, and application data management.
We are looking for a DevOps Lead to join our team.
Responsibilities
• A technology professional who understands software development and can solve IT operational and deployment challenges using software engineering tools and processes. This position requires an understanding of both software development (Dev) and deployment operations (Ops)
• Identify manual processes and automate them using various DevOps automation tools
• Maintain the organization’s growing cloud infrastructure
• Monitor and maintain DevOps environment stability
• Collaborate with distributed Agile teams to define technical requirements and resolve technical design issues
• Orchestrating builds and test setups using Docker and Kubernetes.
• Participate in designing and building Kubernetes, Cloud, and on-prem environments for maximum performance, reliability and scalability
• Share business and technical learnings with the broader engineering and product organization, while adapting approaches for different audiences
Requirements
• Candidates applying for this position should possess at least 5 years of work experience as a DevOps Engineer.
• Candidates should have experience with the ELK stack, Kubernetes, and Docker.
• Solid experience in the AWS environment.
• Should have experience with monitoring tools like Datadog or New Relic.
• Minimum of 5 years experience with code repository management, code merge and quality checks, continuous integration, and automated deployment & management using tools like Jenkins, SVN, Git, Sonar, and Selenium.
• Candidates must possess ample knowledge and experience in system automation, deployment, and implementation.
• Candidates must possess experience in using Linux and Jenkins, and ample experience in configuring and automating monitoring tools.
• Candidates should also possess experience with the software development process and with tools and languages such as SaaS, Python, Java, MongoDB, shell scripting, PostgreSQL, and Git.
• Candidates should demonstrate knowledge of handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others (see the sketch after this list).
• Should have experience in GitLab CI.
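As a small illustration of keeping an eye on a distributed data system such as Elasticsearch, here is a hedged Python sketch that checks cluster health over the standard REST API; the endpoint URL is hypothetical.

# Sketch: check Elasticsearch cluster health via its REST API and flag anything
# that is not "green". The endpoint URL is a placeholder.
import requests

ES_URL = "http://elasticsearch.internal:9200"    # placeholder endpoint

def cluster_is_healthy() -> bool:
    health = requests.get(f"{ES_URL}/_cluster/health", timeout=10).json()
    print(f"status={health['status']} nodes={health['number_of_nodes']} "
          f"unassigned_shards={health['unassigned_shards']}")
    return health["status"] == "green"

if __name__ == "__main__":
    raise SystemExit(0 if cluster_is_healthy() else 1)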
Roles and Responsibilities
Navi Mumbai | 1-3 years
Rejolut is among the fastest-growing and award-winning tech companies, working on leading technologies such as Blockchain, Machine Learning & Artificial Intelligence, complex mobile & web apps, IoT, etc. Rejolut is a venture-backed company with clients in several countries, including Malaysia Airlines, gba global, my-earth, biomes, Dlg-hub, etc.
We are looking for tech geeks who have hands-on experience and are in love with building scalable, distributed, and large web/mobile products and tech solutions. He/She must be an excellent problem solver with a passion to self-learn and implement web technologies (frontend + backend). He/She will be responsible for the architecture design, code review, and technology build and deployment activities of the product.
Key Skills For DevOps Engineer:
Background in Linux/Unix systems
Experience with DevOps techniques and philosophies
Experience with automation of code builds and deployments
Knowledge/Experience of CI/CD pipelines to a wide variety of virtual environments in private and public cloud providers (Jenkins/Hudson, Docker, Kubernetes, Azure, AWS)
Knowledge/Experience of software configuration management systems and source code version control systems (Jenkins, Bitbucket, Consul, Vagrant, Chef, Puppet, Gerrit)
Passion to work in an exciting, fast-paced environment
Self-starter who can implement with minimal guidance
High Availability: Load Balancing (ELB), Reverse Proxies, CDNs etc.
AWS core components (or their GCP/Azure equivalents) and their management: EC2, ELB, NAT, VPC, IAM roles and policies, EBS and S3, CloudFormation, ElastiCache, Route53, etc. (see the sketch after this list)
Experience with Kubernetes and Docker is a plus
Strong interpersonal and communication skills
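As a rough sketch of managing the AWS components listed above, the following Python/boto3 snippet creates or updates a CloudFormation stack from a local template; the stack name, template file, and parameter values are hypothetical.

# Sketch: create or update a CloudFormation stack with boto3.
# Stack name, template file, and parameter values are placeholders.
import boto3
from botocore.exceptions import ClientError

def deploy_stack(name: str, template_path: str):
    cfn = boto3.client("cloudformation")
    with open(template_path) as f:
        body = f.read()
    kwargs = dict(
        StackName=name,
        TemplateBody=body,
        Capabilities=["CAPABILITY_NAMED_IAM"],   # needed when the template creates IAM roles
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": "dev"}],
    )
    try:
        cfn.create_stack(**kwargs)
        waiter = cfn.get_waiter("stack_create_complete")
    except ClientError as err:
        if err.response["Error"]["Code"] != "AlreadyExistsException":
            raise
        cfn.update_stack(**kwargs)
        waiter = cfn.get_waiter("stack_update_complete")
    waiter.wait(StackName=name)

if __name__ == "__main__":
    deploy_stack("web-vpc-dev", "templates/vpc.yaml")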
Job Responsibilities:
Gather and analyse cloud infrastructure requirements
Automate
Support existing infrastructure, analyse problem areas and come up with solutions
Optimise stack performance and costs
An eye for monitoring: the ideal candidate should be able to look at complex infrastructure and figure out what to monitor and how (see the sketch below).
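To make the monitoring point concrete, here is a minimal boto3 sketch that creates a CloudWatch CPU alarm on an EC2 instance; the alarm name, instance ID, and SNS topic ARN are hypothetical.

# Sketch: create a CloudWatch alarm on EC2 CPU utilisation and notify an SNS topic.
# Instance ID and topic ARN are placeholders.
import boto3

def create_cpu_alarm(instance_id: str, topic_arn: str):
    boto3.client("cloudwatch").put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,                  # 5-minute datapoints
        EvaluationPeriods=3,         # sustained for 15 minutes
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[topic_arn],
    )

if __name__ == "__main__":
    create_cpu_alarm("i-0123456789abcdef0",
                     "arn:aws:sns:us-east-1:123456789012:ops-alerts")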
Rejolut - As a Career Differentiator
- We are a young and dynamic team, obsessed with solving futuristic and evolutionary business problems at scale with next-generation technologies like blockchain, crypto, and machine learning. We focus on empowering people across the globe to be technically efficient, making advancements in technology and providing new capabilities that were previously thought impossible.
- We provide exposure to higher learning opportunities so that you can work on complex, cutting-edge technologies like React, React Native, Flutter, NodeJS, Python, Go, Svelte, and WebAssembly. We have strong expertise in blockchain and crypto technology, working with networks like Hedera Hashgraph, Tezos, BlockApps, Algorand, and Cardano.
- The company is backed by two technology co-founders, well-versed in consumer applications; their work has been downloaded millions of times, and they have held leadership positions at companies like Samsung, Purplle, and Loylty Rewardz.
Benefits :
> Health Insurance
> Fast growth and more visibility into the company
> Experience to work on the latest technology
> Competitive Learning Environment with supportive co-workers
> Employee friendly HR Policies
> Paid leave up to certain limits
> Competitive salaries & Bonuses
> Liberal working atmosphere
> Get mentored by the best in the industry
Schedule:
Day Shift/Flexible working hours
Monday to Friday
- Must have a minimum of 3 years of experience in managing AWS resources and automating CI/CD pipelines.
- Strong scripting skills in PowerShell, Python, or Bash; must be able to build and administer CI/CD pipelines.
- Knowledge of infrastructure tools like CloudFormation, Terraform, and Ansible.
- Experience with microservices and/or event-driven architecture.
- Experience using containerization technologies (Docker, ECS, Kubernetes, Mesos or Vagrant).
- Strong practical Windows and Linux system administration skills in the cloud.
- Understanding of DNS, NFS, TCP/IP and other protocols.
- Knowledge of secure SDLC, OWASP top 10 and CWE/SANS top 25.
- Deep understanding of WebSockets and how they work. Hands-on experience with ElastiCache, Redis, ECS, or EKS. Installation, configuration, and management of Apache or Nginx web servers and Apache Tomcat application servers; configuring SSL certificates and setting up reverse proxies.
- Exposure to RDBMS (MySQL, SQL Server, Aurora, etc.) is a plus.
- Exposure to programming languages like Java, PHP, and SQL is a plus.
- AWS Developer or AWS SysOps Administrator certification is a plus.
- AWS Solutions Architect Certification experience is a plus.
- Experience building Blue/Green, Canary, or other zero-downtime deployment strategies, and an advanced understanding of VPC, EC2, Route53, IAM, and Lambda is a plus (see the sketch below).
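One hedged way to picture a blue/green cut-over is re-weighting a pair of Route 53 records; in this Python/boto3 sketch the hosted-zone ID, record name, and target DNS names are hypothetical.

# Sketch: blue/green cut-over by re-weighting two Route 53 weighted records.
# Hosted zone ID, record name, and target DNS names are placeholders.
import boto3

def shift_traffic(zone_id: str, record: str, blue_weight: int, green_weight: int):
    def change(identifier, target, weight):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record,
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": target}],
            },
        }
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [
            change("blue", "blue.app.example.com", blue_weight),
            change("green", "green.app.example.com", green_weight),
        ]},
    )

if __name__ == "__main__":
    # Send all traffic to green, keep blue on standby for rollback.
    shift_traffic("Z0000000EXAMPLE", "app.example.com", blue_weight=0, green_weight=100)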
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you’ll be figuring out how to deploy applications ensuring high availability and fault tolerance, along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can scale up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PostgreSQL database cluster.
- Maintain the alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code: CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions (see the sketch after this list).
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization: Docker, Kubernetes, ECS.
- Experience with managed services like database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
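As a small illustration of the CI/CD item above, here is a hedged Python/boto3 sketch that kicks off a CodePipeline run and waits for it to finish; the pipeline name is hypothetical.

# Sketch: start a CodePipeline execution and poll until it completes.
# The pipeline name is a placeholder.
import time
import boto3

def run_pipeline(name: str) -> str:
    cp = boto3.client("codepipeline")
    execution_id = cp.start_pipeline_execution(name=name)["pipelineExecutionId"]
    while True:
        status = cp.get_pipeline_execution(
            pipelineName=name, pipelineExecutionId=execution_id
        )["pipelineExecution"]["status"]
        if status not in ("InProgress", "Stopping"):
            return status             # e.g. Succeeded, Failed, Stopped, Superseded
        time.sleep(15)

if __name__ == "__main__":
    print(run_pipeline("data-ingestion-pipeline"))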
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's also the non-work/fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment using your paid leave.
Position Level: Senior Engineer
Company Overview:
AskSid.ai is a four-year-old, fast-growing start-up based in Bangalore, co-founded by two ex-Mindtree employees, each with 20+ years of experience. We were rated the No. 1 emerging SaaS company in India and won the NASSCOM Emerge 50 League of 10 award in 2019. We were also rated the most innovative AI company in India for 2020 by CII and Accenture Ventures. As a growing company, we are looking for passionate engineers who aspire to build world-class technology products of internet scale.
Job purpose:
Set up, optimize, and maintain Kubernetes clusters on Microsoft Azure Cloud.
Responsibilities
● Set up, maintain, optimize, and secure various Kubernetes clusters on MS Azure Cloud
● Set up and maintain containers, container availability, auto-scalability, storage management, DNS, and proxy setup; maintain firewalls, app gateways, and load balancers on MS Azure Cloud
● Build and manage backup, restore, and DR activities
Knowledge and skills
Education and Experience
- Engineering in computer science
- 3-5 years of experience in setup and management of Kubernetes infrastructure
- Expert-level analytical and problem-solving skills
- Ability to communicate clearly in English
- Microsoft Azure
- Kubernetes, AKS services as well as custom clusters on bare-metal infrastructure (see the sketch after this list)
- Linux internals & services
- Docker, Docker Registry
- NGINX, Load Balancing, Firewall, Security, PKI
- Shell & AWK scripting, Azure templates & scripting, Python scripting
- Knowledge of NoSQL Databases
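A minimal sketch of the Kubernetes/AKS work described above, using the kubernetes Python client to attach a HorizontalPodAutoscaler to a deployment; it assumes the kubeconfig already points at the AKS cluster (for example via az aks get-credentials), and the namespace and deployment name are hypothetical.

# Sketch: attach a HorizontalPodAutoscaler to an existing Deployment so it scales on CPU.
# Namespace and deployment name are placeholders; kubeconfig is assumed to target the AKS cluster.
from kubernetes import client, config

def create_hpa(namespace: str, deployment: str):
    config.load_kube_config()
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name=f"{deployment}-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name=deployment
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    )
    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace=namespace, body=hpa
    )

if __name__ == "__main__":
    create_hpa("prod", "search-api")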
Job Brief:
We are looking for candidates who have development experience and have delivered CI/CD-based projects. They should have good hands-on experience with Jenkins master/agent architecture and with AWS native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, and should have experience setting up cross-platform CI/CD pipelines that span different cloud platforms or on-premises and cloud environments.
Job Location:
Pune.
Job Description:
- Hands on with AWS (Amazon Web Services) Cloud with DevOps services and CloudFormation.
- Experience interacting with customers.
- Excellent communication skills.
- Hands-on experience creating and managing Jenkins jobs and Groovy scripting (see the sketch after this list).
- Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, Powershell, Python.
- Experience in automation tools like Terraform, Ansible, Chef, Puppet.
- Excellent troubleshooting skills.
- Experience in Docker and Kubernetes, including creating Dockerfiles.
- Hands-on with version control systems like GitHub, GitLab, TFS, Bitbucket, etc.
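As a hedged example of driving Jenkins from automation, this Python sketch triggers a parameterised job via the python-jenkins package; the server URL, credentials, and job name are hypothetical.

# Sketch: trigger a parameterised Jenkins job and report the queue item number.
# Server URL, credentials, and job name are placeholders.
import jenkins            # pip install python-jenkins

def trigger_build():
    server = jenkins.Jenkins(
        "https://jenkins.example.com", username="ci-bot", password="api-token"
    )
    queue_id = server.build_job("deploy-to-dev", {"GIT_BRANCH": "main"})
    print("queued build, queue item:", queue_id)

if __name__ == "__main__":
    trigger_build()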
Responsibilities
- Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
- Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
- Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
- Participating in on-call escalation to troubleshoot customer-facing issues
- Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
- Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
Skills
- Should have strong experience (a couple of years) leading a DevOps team: planning and defining the DevOps roadmap and executing it along with the team
- Familiarity with AWS cloud and JSON templates, Python, and AWS CloudFormation templates
- Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, and REST APIs
- Design and implement system architecture with AWS cloud
- Develop automation scripts using ARM templates, Ansible, Chef, Python, and PowerShell
- Knowledge of AWS services and cloud design patterns
- Knowledge of cloud fundamentals like autoscaling and serverless
- Experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, and CI/CD pipeline setup
- CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution
- Expert knowledge of Windows, Linux, and macOS with at least 5-6 years of system administration experience
- Should have strong skills in using the Jira tool
- Should have knowledge of managing CI/CD pipelines on public cloud deployments using AWS
- Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation (see the sketch after this list)
- Experience in monitoring tools like Pingdom, Nagios, etc.
- Experience in reverse proxy services like Nginx and Apache
- Desirable: experience with Bitbucket and version control tools like Git/SVN
- Experience with manual/automated testing of application deployments is desired
- Experience in database technologies such as PostgreSQL, MySQL
- Knowledge of Helm and Terraform
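To round out the EKS items above, here is a minimal Python/boto3 sketch that reports the state of an EKS cluster and its managed node groups; the cluster name is hypothetical.

# Sketch: report the status of an EKS cluster and its managed node groups.
# The cluster name is a placeholder.
import boto3

def describe_eks(cluster_name: str):
    eks = boto3.client("eks")
    cluster = eks.describe_cluster(name=cluster_name)["cluster"]
    print(f"{cluster_name}: status={cluster['status']} version={cluster['version']}")
    for ng in eks.list_nodegroups(clusterName=cluster_name)["nodegroups"]:
        detail = eks.describe_nodegroup(
            clusterName=cluster_name, nodegroupName=ng
        )["nodegroup"]
        print(f"  nodegroup {ng}: status={detail['status']} "
              f"desired={detail['scalingConfig']['desiredSize']}")

if __name__ == "__main__":
    describe_eks("platform-eks")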

