What will you do?
- Set up and manage applications with automation, DevOps, and CI/CD tools.
- Deploy, maintain, and monitor infrastructure and services.
- Automate code and infrastructure deployments.
- Tune, optimize, and keep systems up to date.
- Design and implement deployment strategies.
- Set up infrastructure on cloud platforms such as AWS, Azure, Google Cloud, IBM Cloud, and DigitalOcean, as required.

About the Role
We are looking for an experienced and driven DevOps Engineer to join our team and take ownership of building, maintaining, and optimizing cloud infrastructure. This role is ideal for professionals with strong experience in AWS, Kubernetes, CI/CD, Docker, and Terraform, who thrive in fast-paced, agile environments.
Key Responsibilities
- Cloud Infrastructure: Design, deploy, and manage scalable infrastructure on AWS
- Infrastructure as Code (IaC): Use Terraform for automating cloud resource provisioning
- Containerization: Manage and deploy Docker containers and Kubernetes clusters
- CI/CD Pipeline Management: Build and maintain pipelines using Jenkins, Bitbucket CI/CD, ArgoCD, etc.
- Automation: Automate deployment processes using tools like Ansible and Terraform
- Monitoring & Troubleshooting: Set up proactive monitoring and alerting systems
- Cross-Functional Collaboration: Partner with development teams to align infrastructure with application needs
- Security: Ensure security best practices across infrastructure and deployments
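The monitoring bullet above comes down to codifying operational judgment as rules. As a minimal, hypothetical sketch of the alerting side, with invented metric names and thresholds (a real setup would typically express these as Prometheus/Alertmanager rules):

```python
# Minimal sketch of alerting-rule evaluation for proactive monitoring.
# Metric names and thresholds here are invented for illustration only.

def evaluate_alerts(metrics, thresholds):
    """Return the sorted names of metrics that breach their alert threshold."""
    return sorted(
        name for name, value in metrics.items()
        if name in thresholds and value >= thresholds[name]
    )

if __name__ == "__main__":
    current = {"cpu_percent": 92.0, "disk_percent": 63.0, "error_rate": 0.07}
    limits = {"cpu_percent": 85.0, "disk_percent": 90.0, "error_rate": 0.05}
    print(evaluate_alerts(current, limits))  # ['cpu_percent', 'error_rate']
```

The same comparison logic is what a monitoring system applies continuously against scraped metrics before firing an alert.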
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 8+ years of hands-on experience in a DevOps or similar role
- Strong expertise in AWS services (EC2, S3, VPC, IAM, etc.)
- Proficiency with Kubernetes, Docker, and container orchestration
- Advanced experience with CI/CD tools (Jenkins, Bitbucket, ArgoCD)
- Working knowledge of Terraform, CloudFormation, and scripting languages (Python, Bash)
- Solid understanding of networking and cloud security principles
Please Apply - https://zrec.in/RZ7zE?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues that stand in the way of successfully implementing DevOps practices, and we work with the respective teams to fix them and increase overall productivity. We also run training sessions for developers on the importance of DevOps.

Our services include DevOps, DevSecOps, FinOps, cost optimization, CI/CD, observability, cloud security, containerization, cloud migration, site reliability, performance optimization, SIEM and SecOps, serverless automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance. We assess a company's technology architecture, security, governance, compliance, and DevOps maturity, and we help optimize cloud costs, streamline the technology architecture, and set up processes that improve the availability and reliability of its websites and applications. We set up tools for monitoring, logging, and observability, and we focus on bringing the DevOps culture into the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer (GCP)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediate
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals on deploying, automating, and managing the software infrastructure. As a DevOps Engineer, you will also be responsible for setting up CI/CD pipelines, monitoring, and cloud infrastructure.
Below is a detailed description of the roles, responsibilities, and expectations for the position.
Tech Stack:
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and Kubernetes architecture.
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC), using Terraform to manage cloud resources.
- ArgoCD: Experience in continuous deployment, using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in GCP; experience with AWS or Azure is an added advantage.
- Debugging and Troubleshooting: Proficiency in identifying and resolving complex issues in a distributed environment, from networking problems to misconfigurations in infrastructure or application components.
Key Responsibilities:
- CI/CD and configuration management
- Performing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting
- Containerizing applications on the Kubernetes platform
- Capacity planning for each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum
- Setting up the right set of security measures
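The failover/DR/backup responsibility above usually includes enforcing a retention policy. A toy sketch of keeping only the newest N backups (the timestamped naming scheme is an invented assumption for this example):

```python
# Hypothetical backup-retention sketch: given timestamped backup names,
# keep the N most recent and report the rest as candidates for deletion.

def backups_to_prune(backup_names, keep=7):
    """Return backups older than the newest `keep`, sorted oldest-first.

    Assumes names sort chronologically (e.g. ISO dates embedded in them).
    """
    newest_first = sorted(backup_names, reverse=True)
    return sorted(newest_first[keep:])
```

In practice this logic runs on a schedule against the backup store (object storage, snapshots), with deletion gated behind the DR policy.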
Ideal Candidate Profile:
- A graduate or postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience in a Linux environment.
- Strong interest in working in our tech stack
- Excellent communication skills
- Works with minimal supervision and loves to operate as a self-starter
- Hands-on experience with at least one scripting language (Bash, Python, Go, etc.)
- Experience with version control systems like Git
- Strong experience with GCP.
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the different layers of an architecture in a production environment
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, e.g., Vault, Vagrant, Terraform, Consul, VirtualBox (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus/Grafana/Elastic APM.
- Experience with logging tools like ELK/Loki.
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
Job Description:
- Integrate gates into the CI/CD pipeline and push all flaws/issues to developers' IDEs (as far left as possible): ideally while the code is in the repository, but required by the time it reaches the artifact repository.
- Demonstrable experience in containerization (Docker) and orchestration (Kubernetes)
- Experience setting up self-managed Kubernetes clusters, without using managed cloud offerings like EKS
- Experience working with AWS and managing AWS services: EC2, S3, CloudFront, VPC, SNS, Lambda, AWS Auto Scaling, AWS IAM, RDS, EBS, Kinesis, SQS, DynamoDB, ElastiCache, Redshift, CloudWatch, Amazon Inspector.
- Familiarity with Linux and UNIX systems (e.g., CentOS, RedHat) and command-line system administration (Bash, Vim, SSH).
- Hands-on experience in configuration management of server farms (using tools such as Puppet, Chef, Ansible, etc.).
- Demonstrated understanding of ITIL methodologies; ITIL v3 or v4 certification
- Kubernetes CKA or CKAD certification is nice to have
- Excellent communication skills
- Open to working in the EST time zone
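The "gates" bullet above describes failing a pipeline when scanner findings are too severe. A minimal, hypothetical sketch of such a severity gate (the severity scale and finding shape are assumptions for illustration, not any specific scanner's API):

```python
# Toy CI/CD quality gate: block the pipeline if any security finding
# is at or above a configured severity. The severity ladder below is
# an invented example, not a specific tool's schema.

SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate_passes(findings, block_at="high"):
    """Return True if no finding meets or exceeds the blocking severity."""
    cutoff = SEVERITY_ORDER[block_at]
    return all(SEVERITY_ORDER[f["severity"]] < cutoff for f in findings)
```

In a real pipeline this check would run against parsed scanner output, exiting non-zero so the CI stage fails and the issue is pushed back to the developer.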

- Provides free and subscription-based website and email services hosted and operated at data centres in Mumbai and Hyderabad.
- Serves a global audience and customer base through sophisticated content delivery networks.
- Operates a service infrastructure using the latest web-services technologies, alongside a very large storage infrastructure.
- Provides virtualized infrastructure that allows seamless migration and the addition of services for scalability.
- A pioneer and one of the earliest adopters of the public cloud and NoSQL big data stores, for more than a decade.
- Provides innovative internet services, working with multiple technologies (PHP, Java, Node.js, Python, and C++) to scale services as needed.
- Has internet-infrastructure peering arrangements with all major and minor ISPs and telecom service providers.
- Has mail traffic exchange agreements with major internet services.
Job Details:
- This position offers a competitive professional opportunity to both experienced and aspiring engineers. The company's technology and operations groups are managed by senior professionals with deep subject-matter expertise.
- The company believes in an open work environment that offers mentoring and learning opportunities, with an informal and flexible work culture that allows professionals to actively participate in and contribute to the success of our services and business.
- You will be part of a team that keeps the business running for cloud products and services used 24/7 by the company's consumers and enterprise customers around the world. You will be asked to help operate, maintain, and provide escalation support for the cloud infrastructure that powers all of the company's cloud offerings.
Job Role:
- As a senior engineer, your role grows as you gain experience in our operations. After an induction program, we facilitate a hands-on learning experience to get you into the role as quickly as possible.
- The systems engineer role also requires candidates to research and recommend innovative, automated approaches to system administration tasks.
- The work culture allows seamless integration with different product engineering teams. The teams work together and share responsibility for triage in complex operational situations. The candidate is expected to stay up to date on best practices and to help evolve processes, both for the resilience of services and for compliance.
- You will support both production and non-production environments to ensure system updates and expected service levels. You will specifically handle 24/7 L2 and L3 oversight for incident response, and you must have an excellent understanding of the end-to-end support process, from the client through the different support escalation levels.
- The role also requires the discipline to create, update, and maintain process documents based on operational incidents and on the technologies and tools used to resolve issues.
Qualification and Experience:
- A graduate degree or senior diploma in engineering or technology, with some or all of the following:
- Knowledge of and work experience with KVM, AWS (Glacier, S3, EC2), RabbitMQ, Fluentd, Syslog, and Nginx is preferred
- Installation and tuning of web servers, PHP, Java servlets, and memory-based databases for scalability and performance
- Knowledge of email protocols such as SMTP, POP3, and IMAP, along with experience in the maintenance and administration of MTAs such as Postfix and qmail, will be an added advantage
- Must have knowledge of monitoring tools, trend analysis, networking technologies, security tools, and troubleshooting.
- Knowledge of analyzing and mitigating security issues and threats is certainly desirable.
- Knowledge of agile development/SDLC processes, and hands-on participation in planning sprints and managing daily scrums, is desirable.
- Preferably, programming experience in Shell, Python, Perl, or C.
Designation: DevOps Engineer
Urgently required (notice period of 15 days maximum).
Location: Mumbai
Experience: 3-5 years
Package Offered: Rs. 4,00,000 to Rs. 7,00,000 per annum
DevOps Engineer Job Description:
Responsibilities
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer & client experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
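The root-cause-analysis responsibility above typically starts by grouping production errors by signature so the dominant failure surfaces first. A toy illustration (the log format here is invented for the sketch):

```python
# First step of log-based RCA: group lines marked ERROR by message text
# and rank by frequency. Assumes a simple "<timestamp> ERROR <message>"
# format invented for this example.

from collections import Counter

def top_errors(log_lines, n=3):
    """Return the n most frequent ERROR messages as (message, count) pairs."""
    errors = (
        line.split("ERROR", 1)[1].strip()
        for line in log_lines
        if "ERROR" in line
    )
    return Counter(errors).most_common(n)
```

Real deployments delegate this to log aggregation (e.g., ELK), but the triage idea is the same: count, rank, then dig into the top signature.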
Requirements
- Work experience as a DevOps Engineer or similar software engineering role
- Work experience on AWS
- Good knowledge of Ruby or Python
- Working knowledge of databases and SQL
- Problem-solving attitude
- Team spirit
- BSc in Computer Science, Engineering or relevant field
- Demonstrated experience with AWS
- Knowledge of servers, networks, storage, client-server systems, and firewalls
- Strong expertise in Windows and/or Linux operating systems, including system architecture and design, as well as experience supporting and troubleshooting stability and performance issues
- Thorough understanding of and experience with virtualization technologies (e.g., VMWare/Hyper-V)
- Knowledge of core network services such as DHCP, DNS, IP routing, VLANs, layer 2/3 routing, and load balancing is required
- Experience reading, writing, or modifying PowerShell, Bash scripts, and Python code
- Experience using Git
- Working knowledge of software-defined lifecycles, product packaging, and deployments
- PostgreSQL or Oracle database administration (backup, restore, tuning, monitoring, management)
- At least two AWS Associate certifications (Solutions Architect, DevOps Engineer, or SysOps Administrator)
- At least one AWS Professional certification (Solutions Architect or DevOps Engineer)
- AWS: S3, Redshift, DynamoDB, EC2, VPC, Lambda, CloudWatch, etc.
- Big data: Databricks, Cloudera, Glue, and Athena
- DevOps: Jenkins, Bitbucket
- Automation: Terraform, CloudFormation, Python, and shell scripting; experience automating AWS infrastructure with Terraform.
- Experience in database technologies is a plus.
- Knowledge of all aspects of DevOps (source control, continuous integration, deployments, etc.)
- Proficiency in security implementation best practices: IAM policies, KMS encryption, secrets management, network security groups, etc.
- Experience working in a Scrum environment
About Us:
Varivas is a community that allows users to create, support, and recommend content. The mission of Varivas is to give the community freedom, ease, and complete control over its content.
Varivas is an early-stage startup looking for its first few members.
https://www.varivas.community/
Become part of a core team of an upcoming startup building an exciting product from scratch.
Your Responsibilities:
- Write scalable backend code and audit existing backend code for sanity, performance, and security
- Design and implement on-demand, scalable, performant applications
- Define and set development, test, release, update, and support processes for DevOps operations
- Incident management and root-cause analysis
- Select and deploy appropriate CI/CD tools for various test automation frameworks
- Monitor processes across the entire lifecycle, encouraging and building automated processes wherever possible
- Identify and deploy cybersecurity measures by continuously performing vulnerability assessment and risk management
- Achieve repeatability, fast recovery, and best practices, and delegate proper ownership permissions to teams
- Analyze the technology currently in use and develop plans and processes for improvement and expansion, for better cost and efficiency
- Collaborate with and assist QA in performing load tests and assessing performance optimizations
- Write and maintain DevOps process documentation in tools such as Confluence
We are looking for someone who:
- Is ready to take complete ownership of the whole infrastructure at Varivas
- Has previous experience building and maintaining a live production app
- Has an eye for detail
- Is a problem solver and a perpetual learner
- Possesses a positive, solution-oriented mindset
- Has good problem-solving and troubleshooting skills
- Can build high-performing web and native applications that are robust and easy to maintain
Our Current DevOps stack:
- GitHub/Bitbucket
- Azure cloud storage
- MongoDB Atlas
- JIRA
- Selenium and Cypress test automation
- Heroku/DigitalOcean
- GoDaddy
- Google Analytics/Search Console
- SendGrid
- sentry.io
- hotjar
What we can offer:
- Payment for work done (of course 🙂)
- Remote work from anywhere
- Part-time or full-time engagement
- Complete flexibility of working hours
- Complete freedom and ownership of your work
- No meetings, standups, or daily status group calls (we prefer asynchronous communication, like Slack)
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI and satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, containers, etc. As part of our core development crew, you'll figure out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that alerts across multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can spin up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge-scale data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available, scalable PostgreSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
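The ingestion-pipeline bullet above implies processing data in bounded batches rather than loading everything at once. A small, generic sketch of that batching pattern (framework-agnostic; real pipelines would layer this over queues or streaming reads):

```python
# Generic chunked-ingestion helper: consume any iterable in fixed-size
# batches so memory use stays bounded regardless of input size.

def batched(items, size):
    """Yield successive lists of at most `size` items from an iterable."""
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly short, batch
        yield batch
```

Each yielded batch can then be processed, checkpointed, and dispatched independently, which is what makes retries and parallelism tractable at scale.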
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code: CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions.
- Advanced command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization: Docker, Kubernetes, ECS.
- Experience with managed services like database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate with.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly retreats: Yes, there's work, but then there's all the non-work fun, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by using your paid leave.
We are looking for a self-motivated and goal-oriented candidate to lead in architecting, developing, deploying, and maintaining first-class, highly scalable, highly available SaaS platforms.
This is a very hands-on role. You will have a significant impact on Wenable's success.
Technical Requirements:
8+ years of SaaS and cloud architecture and development experience with frameworks and technologies such as:
- AWS, Google Cloud, Azure, and/or others
- Kafka, RabbitMQ, Redis, MongoDB, Cassandra, Elasticsearch
- Docker, Kubernetes, Helm, Terraform, Mesos, VMs, and/or similar orchestration, scaling, and deployment frameworks
- Protocol Buffers and JSON modeling
- CI/CD utilities like Jenkins, CircleCI, etc.
- Log aggregation systems like Graylog or ELK
- Additional development tools typically used in orchestration and automation, like Python, Shell, etc.
- Strong background in security best practices
- Strong software development skills are a plus
Leadership Requirements:
- Strong written and verbal skills; this role entails significant coordination, both internally and externally.
- Ability to lead projects with blended teams, on/offshore, of various sizes.
- Ability to report to executive and leadership teams.
- Must be data-driven and objective/goal-oriented.

