
- Solve complex Cloud Infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly-available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD: find bottlenecks in the software delivery pipeline and fix them with developers so they can deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility (a small sketch of this kind of automation follows this list).
- Evaluate cutting edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks, enabling them to move fast. (In simple words, you'll be a bridge between tech, operations, and product.)
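For flavour, here is a minimal, hypothetical sketch of the alerting automation mentioned above: a boto3 call that creates a CloudWatch CPU alarm. The region, instance ID, and SNS topic ARN are placeholders, not details from this posting.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Alarm when average CPU on one EC2 instance stays above 80% for two 5-minute periods.
# Instance ID and SNS topic ARN below are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="api-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```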
Skills required:
Must have:
- Deep understanding of open source DevOps tools.
- Scripting experience in one or more among Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, CloudFormation, SQS, etc.)
- Knowledge of distributed system deployment.
- Deployed and orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Set up monitoring and alerting systems for services using the ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with queue-based systems (see the SQS sketch after this list).
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies.
- Implemented central logging.
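As an illustration of the queue-based work listed above, here is a minimal boto3 SQS sketch; the region, queue name, and message payload are invented for the example.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
queue_url = sqs.create_queue(QueueName="demo-jobs")["QueueUrl"]   # placeholder queue

# Producer: enqueue a task
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"task": "resize", "id": 42}))

# Consumer: long-poll, process, then delete the message
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing", json.loads(msg["Body"]))
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```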
Good to have:
- Experience dealing with personal information (PI) security.
- Experience conducting internal audits and assisting with external audits.
- Experience implementing solutions on-premises.
- Experience with blockchain.
- Experience with Private Cloud setup.
Required Experience:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & Automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or similar version control systems.
- Deep understanding of at least one open-source distributed system (Kafka, Redis, etc.); see the caching sketch after this list.
- Ownership attitude is a must.
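To illustrate the caching and Redis experience mentioned above, here is a minimal cache-aside sketch with redis-py; the key layout, TTL, and the stubbed database call are assumptions for the example, not requirements from this posting.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: int) -> dict:
    # Stand-in for a real database query
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                  # cache hit
        return json.loads(cached)
    user = fetch_user_from_db(user_id)      # cache miss: load and populate
    r.setex(key, 300, json.dumps(user))     # expire after 5 minutes
    return user
```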
We offer a suite of memberships and subscriptions to spice up your lifestyle. We believe in a genuine work-life balance: working hard doesn't mean clocking in extra hours, it means having the zeal to contribute the best of your talents. Our people culture shapes benefits that help you feel confident and happy every day, whether you'd like to skill up, go off the grid, attend your favourite events, or stay on top of your fitness. We have you covered:
- Health Memberships
- Sports Subscriptions
- Entertainment Subscriptions
- Key Conferences and Event Passes
- Learning Stipend
- Team Lunches and Parties
- Travel Reimbursements
- ESOPs
That's what we think will brighten up your personal life, as a gesture of thanks for lending us your talents.
Join us to be part of our exciting journey to build one Digital Identity Platform!

About BlueCloud
BlueCloud proudly stands at the forefront of data and analytics, GenAI, and cloud transformation. As a leading Snowflake partner, we offer our clients tailored solutions using the latest cloud technologies. Our suite of premium services includes digital strategy, data engineering and analytics, digital services, and cloud operations—all offered at competitive rates, thanks to our unique flexible delivery model.
At BlueCloud, we provide a dedicated Innovation team for strategic and execution support to implement effective business strategies and technology solutions for our diverse clientele, which spans Fortune 500 companies, mid-market firms, and startups. Our commitment is steadfast in fostering customer success within the dynamic GenAI landscape. We prioritize innovation, exceptional customer service, and the cultivation of employee engagement.
For deeper insight into our capabilities and the real-world impact we create, we invite you to visit our website at www.blue.cloud.
Job Description
We are seeking a DevOps Engineer with 3+ years of experience and strong expertise in Google Cloud Platform (GCP) to design, automate, and manage scalable cloud infrastructure. The role involves building CI/CD pipelines, implementing Infrastructure as Code, and ensuring high availability, security, and performance of cloud-native applications.
Key Responsibilities
- Design, deploy, and manage GCP infrastructure using best practices
- Implement Infrastructure as Code (IaC) using Terraform
- Build and maintain CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Manage containerized workloads using Docker and Kubernetes (GKE); see the GKE sketch after this list
- Configure and manage GCP networking (VPCs, Subnets, VPN, Cloud Interconnect, Load Balancers, Firewall rules)
- Implement monitoring and logging using Cloud Monitoring and Cloud Logging
- Ensure high availability, scalability, and security of applications
- Troubleshoot production issues and perform root cause analysis
- Collaborate with development and product teams to improve deployment and reliability
- Optimize cloud cost, performance, and security
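As a small illustration of working with GKE workloads, here is a sketch using the official Kubernetes Python client to check deployment health; it assumes cluster credentials have already been written to kubeconfig (e.g. via gcloud), and the namespace is a placeholder.

```python
from kubernetes import client, config

# Assumes `gcloud container clusters get-credentials <cluster>` has populated ~/.kube/config
config.load_kube_config()

apps = client.AppsV1Api()
for dep in apps.list_namespaced_deployment(namespace="default").items:
    ready = dep.status.ready_replicas or 0
    print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")
```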
Required Skills & Qualifications
- 3+ years of experience as a DevOps / Cloud Engineer
- Strong hands-on experience with Google Cloud Platform (GCP)
- Experience with Terraform for GCP resource provisioning
- CI/CD experience with Jenkins / GitHub Actions
- Hands-on experience with Docker and Kubernetes (GKE)
- Good understanding of Linux and shell scripting
- Knowledge of cloud networking concepts (TCP/IP, DNS, Load Balancers)
- Experience with monitoring, logging, and alerting
Good to Have
- Experience with Hybrid or Multi-cloud architectures
- Knowledge of DevSecOps practices
- Experience with SRE concepts
- GCP certifications (Associate Cloud Engineer / Professional DevOps Engineer)
Why Join Us
- Work on modern GCP-based cloud infrastructure
- Opportunity to design and own end-to-end DevOps pipelines
- Learning and growth opportunities in cloud and automation
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
Lead the pre-sales (25%) to post-sales (75%) efforts building Public/Hybrid Cloud solutions, working collaboratively with Intuitive and client technical and business stakeholders
Be a customer advocate with an obsession for excellence, delivering measurable success for Intuitive's customers with secure, scalable, highly available cloud architectures that leverage AWS Cloud services
Experience in analyzing customer's business and technical requirements, assessing existing environment for Cloud enablement, advising on Cloud models, technologies, and risk management strategies
Apply creative thinking/approach to determine technical solutions that further business goals and align with corporate technology strategies
Extensive experience building Well-Architected solutions in line with the AWS Cloud Adoption Framework (DevOps/DevSecOps, Database/Data Warehouse/Data Lake, App Modernization/Containers, Security, Governance, Risk, Compliance, Cost Management, and Operational Excellence)
Experience with application discovery, preferably with tools like Cloudscape, to discover application configurations, databases, filesystems, and application dependencies
Experience with Well Architected Review, Cloud Readiness Assessments and defining migration patterns (MRA/MRP) for application migration e.g. Re-host, Re-platform, Re-architect etc
Experience in architecting and deploying AWS Landing Zone architecture with CI/CD pipeline
Experience in the architecture and design of AWS cloud services to address scalability, performance, HA, security, availability, compliance, backup and DR, automation, alerting and monitoring, and cost
Hands-on experience in migrating applications to AWS leveraging proven tools and processes including migration, implementation, cutover and rollback plans and execution
Hands-on experience in deploying various AWS services e.g. EC2, S3, VPC, RDS, Security Groups etc. using either manual or IaC, IaC is preferred
Hands-on experience in writing cloud automation scripts/code such as Ansible, Terraform, CloudFormation templates (AWS CFT), etc. (see the CloudFormation sketch after this list)
Hands-on experience with application build/release processes and CI/CD pipelines
Deep understanding of Agile processes (planning, stand-ups, retros, etc.), and the ability to interact with cross-functional teams, i.e. Development, Infrastructure, Security, Performance Engineering, and QA
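For illustration, a minimal boto3 sketch of driving CloudFormation from code, in the spirit of the IaC experience listed above; the region, stack name, template file, and parameter key are hypothetical and depend on the actual template.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

with open("landing-zone-vpc.yaml") as f:   # hypothetical template file
    template_body = f.read()

# Create the stack and block until CloudFormation reports completion
cfn.create_stack(
    StackName="demo-vpc",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "CidrBlock", "ParameterValue": "10.0.0.0/16"}],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cfn.get_waiter("stack_create_complete").wait(StackName="demo-vpc")
print("stack created")
```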
Additional Requirements:
Work with Technology leadership to grow the Cloud & DevOps practice. Create cloud practice collateral
Work directly with sales teams to help them improve and drive sales for the Cloud & DevOps practice
Assist Sales and Marketing team in creating sales and marketing collateral
Write whitepapers and technology blogs to be published on social media and Intuitive website
Create case studies for projects successfully executed by Intuitive delivery team
Conduct sales enablement sessions to coach sales team on new offerings
Flexibility with work hours to support customer requirements and collaborate with global delivery teams
Flexibility with Travel as required for Pre-sales/Post-sales, Design workshops, War-room Migration events and customer meetings
Strong passion for modern technology exploration and development
Excellent written and verbal communication, presentation, and collaboration skills; team leadership skills
Experience with Multi-cloud (Azure, GCP, OCI) is a big plus
Experience with VMware Cloud Foundation as well as Advanced Windows and Linux Engineering is a big plus
Experience with On-prem Data Engineering (Database, Data Warehouse, Data Lake) is a big plus
Job Description
• Minimum 3+ years of experience in DevOps with the AWS platform
• Strong AWS knowledge and experience
• Experience using CI/CD automation tools (Git, Jenkins) and configuration/deployment tools (Puppet/Chef/Ansible)
• Experience with IaC tools such as Terraform
• Excellent experience in operating a container orchestration cluster (Kubernetes, Docker)
• Significant experience with Linux operating system environments
• Experience with infrastructure scripting solutions such as Python/Shell scripting
• Must have experience in designing an infrastructure automation framework.
• Good experience in setting up monitoring tools and dashboards (Grafana/Kafka)
• Excellent problem-solving, Log Analysis and troubleshooting skills
• Experience in setting up centralized logging for systems (EKS, EC2) and applications (see the logging sketch after this list)
• Process-oriented with great documentation skills
• Ability to work effectively within a team and with minimal supervision
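To give a flavour of the centralized-logging work mentioned above, a minimal boto3 CloudWatch Logs sketch; the region, log group prefix, and group name are placeholders.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# List log groups that EKS/EC2 agents ship into (prefix is illustrative)
for group in logs.describe_log_groups(logGroupNamePrefix="/aws/eks/")["logGroups"]:
    print(group["logGroupName"], group.get("storedBytes", 0))

# Pull recent ERROR lines from one application log group (placeholder name)
events = logs.filter_log_events(
    logGroupName="/aws/eks/demo-cluster/application",
    filterPattern="ERROR",
    limit=20,
)
for e in events["events"]:
    print(e["timestamp"], e["message"])
```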
- As a DevOps Engineer, you need to have strong experience in CI/CD pipelines.
- Set up development, testing, and automation tools, and IT infrastructure
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Selecting and deploying appropriate CI/CD tools
- Deploy and maintain CI/CD pipelines across multiple environments (Mobile, Web APIs & AI/ML)
Required skills & experience:
- 3+ years of experience as DevOps Engineer and strong working knowledge in CI/CD pipelines
- Experience administering and deploying development CI/CD using Git, BitBucket, CodeCommit, Jira, Jenkins, Maven, Gradle, etc. (see the Jenkins sketch after this list)
- Strong knowledge of Linux-based infrastructures and AWS/Azure/GCP environments
- Working knowledge of AWS (IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, etc.)
- Experience with Docker containerization and clustering (Kubernetes/ECS)
- Experience with Android source (AOSP) cloning, builds, and automation ecosystems
- Knowledge of scripting languages such as Python, Shell, Groovy, Bash, etc
- Familiar with Android ROM development and build process
- Knowledge of Agile Software Development methodologies
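As an illustration of CI/CD automation against Jenkins, a small sketch that triggers a parameterized build over the Jenkins REST API; the server URL, job name, credentials, and parameter are placeholders.

```python
import requests

JENKINS_URL = "https://jenkins.example.com"   # placeholder server
JOB = "mobile-app-build"                      # placeholder job name
AUTH = ("ci-user", "api-token")               # Jenkins username + API token

# Trigger a parameterized build; Jenkins answers 201 with the queue item in the Location header
resp = requests.post(
    f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
    params={"BRANCH": "develop"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print("queued at:", resp.headers.get("Location"))
```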
• Drive the architectural design, solution planning and feasibility study on Cloud Computing Infrastructure.
• Deliver new IT services and exploit current infrastructure technologies.
• Drive the infrastructure roadmaps and planning for adopting cloud infrastructure in the long run.
• Conduct research and make recommendations on suitable cloud platforms & services.
• Advise on and implement cloud best practices.
Job Requirements:
Desired understanding of the following: VPC, EC2, S3, IAM, Route 53, Lambda, Billing, AWS MySQL, Kinesis, API Gateway, CloudWatch, EBS, AMI, RDS, DynamoDB, ELB, Lightsail, Kubernetes, Docker, NAT Gateway
Education & Experience:
• 3 to 5 years related work experience
• Bachelor’s degree in Computer Science, Information Technology or related field
• Solid experience in infrastructure architecture solutions design
• Solid knowledge in AWS/Google Cloud
• Experience in managing implementations on public clouds (AWS/Google Cloud)
• Excellent analytical and problem-solving skills
• Good command of written and spoken English.
• Certification for AWS/Google Cloud Architect – Associate level
REVOS is a smart micro-mobility platform that works with enterprises across the automotive shared mobility value chain to enable and accelerate their smart vehicle journeys. Founded in 2017, it aims to empower all 2- and 3-wheeler vehicles through AI-integrated IoT solutions that make them smart, safe, and connected. We are backed by investors like USV and Prime Venture.
Duties and Responsibilities :
- Automating various tasks in cloud operations, deployment, monitoring, and performance optimization for big data stack.
- Build, release, and configuration management of production systems.
- System troubleshooting and problem-solving across platform and application domains.
- Suggesting architecture improvements, recommending process improvements.
- Evaluate new technology options and vendor products.
- Function well in a fast-paced, rapidly-changing environment
- Communicate effectively with people at all levels of the organization
Qualifications and Required Skills:
- Overall 3+ years of experience in various software engineering roles.
- 3+ years of experience in building applications and tools in any tech stack, preferably deployed on cloud
- The most recent 3 years of experience must be in serverless/cloud-native development on AWS (preferred) or Azure
- Expertise in at least one programming language (Node.js or Python preferred)
- Must have hands-on experience using AWS/Azure SDKs/APIs (see the Lambda sketch after this list).
- Must have experience in deploying, releasing, and managing production systems
- MCA or a degree in engineering in Computer Science, IT, or Electronics stream
- Ability to work independently without supervision
- Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization
- Experience in deploying highly complex, distributed transaction processing systems.
- Stay abreast with new innovations and the latest technology trends and explore ways of leveraging these for improving the product in alignment with the business.
- As a component owner, where the component has impact across multiple platforms (5-10-member team), work with customers to gather their requirements and deliver the project end to end.
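For flavour, a minimal serverless sketch matching the serverless/SDK experience listed above: an AWS Lambda handler written for an API Gateway proxy integration. The event shape assumes the standard proxy integration; the payload fields are illustrative.

```python
import json

def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```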
Required Experience, Skills, and Qualifications
- 5+ years of experience as a DevOps Engineer. Experience with the Golang cycle is a plus
- At least one End to End CI/CD Implementation experience
- Excellent problem-solving and debugging skills in the DevOps area
- Good understanding of containerization (Docker/Kubernetes)
- Hands-on build/package tool experience
- Experience with AWS services: Glue, Athena, Lambda, EC2, RDS, EKS/ECS, ALB, VPC, SSM, Route 53
- Experience with setting up CI/CD pipelines for Glue jobs, Athena, and Lambda functions (see the Athena sketch after this list)
- Experience architecting interaction with services and application deployments on AWS
- Experience with Groovy and writing Jenkinsfile
- Experience with repository management, code scanning/linting, secure scanning tools
- Experience with deployments and application configuration on Kubernetes
- Experience with microservice orchestration tools (e.g. Kubernetes, Openshift, HashiCorp Nomad)
- Experience with time-series and document databases (e.g. Elasticsearch, InfluxDB, Prometheus)
- Experience with message buses (e.g. Apache Kafka, NATS)
- Experience with key-value stores and service discovery mechanisms (e.g. Redis, HashiCorp Consul, etc)
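As an illustration of automating Athena from code (the kind of step a Glue/Athena CI/CD pipeline would script), a minimal boto3 sketch; the region, database, query, and results bucket are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Start a query against a Glue-catalogued table (names are placeholders)
qid = athena.start_query_execution(
    QueryString="SELECT status, count(*) FROM events GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://demo-athena-results/"},
)["QueryExecutionId"]

# Athena has no built-in waiter, so poll until the query finishes
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)
print("query finished with state:", state)
```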
As a DevOps engineer, you will be responsible for:
- Automated provisioning of infrastructure in AWS/Azure/OpenStack environments.
- Creation of CI/CD pipelines to ensure smooth delivery of projects.
- Proactive monitoring of the overall infrastructure (logs, resources, etc.)
- Deployment of application to various cloud environments.
- Should be able to lead/guide a team towards achieving goals and meeting the milestones defined.
- Practice and implement best practices in every aspect of project deliverable.
- Keeping yourself up to date with new frameworks and tools and enabling the team to use them.
Skills Required
- Experience in automation of CI/CD processes using tools such as Git, Gerrit, Jenkins, CircleCI, Azure Pipelines, GitLab
- Experience in working with AWS and Azure platforms and cloud-native automation tools such as AWS CloudFormation and Azure Resource Manager.
- Experience in monitoring solutions such as ELK Stack, Splunk, Nagios, Zabbix, Prometheus
- Web Server/Application Server deployments and administration.
- Good Communication, Team Handling, Problem-solving, Work Ethic, and Creativity.
- Work experience of at least 1 year in the following is mandatory.
If you do not have the relevant experience, please do not apply.
- Any cloud provider (AWS, GCP, Azure, OpenStack)
- Any of the configuration management tools (Ansible, Chef, Puppet, Terraform, Powershell DSC)
- Scripting languages (PHP, Python, Shell, Bash, etc.)
- Docker or Kubernetes
- Troubleshoot and debug infrastructure, network, and operating system issues.
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA applications, High Performance Compute, and storage environments, consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the company's innovation. The following figures give an idea of the scale: the landscape has 75,000+ cores, 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are globally organized to cover 24/7 support for the chip hardware design and software design projects.
Since the landscape is too complex to manage in the traditional way, our strategy is to transform our R&D IT design infrastructure into "software-defined datacenters". This transformation entails a different way of working and a different mindset (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That's why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software-defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.
Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team in designing, developing, and maintaining automation products with playbooks (YAML) and modules (Python/PowerShell) in tools like Ansible Tower and ServiceNow (see the module sketch after this list)
• Test and verify the code produced by the team (including your own) to continuously
improve and refactor
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with ServiceNow developers in your team to provide the best intuitive
end-user experience possible for the virtual machine OS deployments
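To illustrate the playbook/module work mentioned above, here is a minimal custom Ansible module written in Python; the module's behaviour (reporting whether a path exists on the host) is invented for the example and is not part of this role description.

```python
#!/usr/bin/python
# Minimal custom Ansible module: reports whether a given path exists on the host.
import os

from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            path=dict(type="str", required=True),
        ),
        supports_check_mode=True,
    )
    path = module.params["path"]
    module.exit_json(changed=False, path=path, exists=os.path.exists(path))


if __name__ == "__main__":
    main()
```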
We are looking for a DevOps engineer/consultant with the following characteristics:
• Master's or Bachelor's degree
• You are a technical, creative, analytical, and open-minded engineer who is eager to
learn and not afraid to take initiative.
• Your favorite t-shirt has “Linux” or “RedHat” printed on it at least once.
• Linux guru: You have great knowledge of Linux servers (RedHat), RedHat Satellite 6,
and other RedHat products
• Experience in Infrastructure services, e.g., Networking, DNS, LDAP, SMTP
• DevOps mindset: You are a team-player that is eager to develop and maintain cool
products to automate/optimize processes in a complex IT infrastructure and are able
to build and maintain productive working relationships
• You have great English communication skills, both verbally and in writing.
• Willingness to work outside business hours to support the platform for critical R&D
applications
Other competences we value, but are not strictly mandatory:
• Experience with agile development methods, like Scrum, and conviction of their
power to deliver products with immense (business) value.
• “Security” is your middle name, and you are always challenging yourself and your
colleagues to design and develop new solutions that are as tightly secured as possible.
• Mastery of automation and orchestration with tools like Ansible Tower (or
comparable), and comfort developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version
control and branching strategies with Git, and preferably have worked with GitLab
before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python and tools
like Selenium.
• Enthusiasm for cloud platforms like Azure & AWS.
• Background in and affinity with R&D
Opening for a Java Developer with DevOps experience
Experience required: 5 to 10 years
Essential Required Skills:
Familiarity with Version Control such as GitHub, BitBucket
- Java programmer (Liferay, Alfresco experience is a plus)
- AWS
- Ops (Ansible, Apache, Python, Terraform)
- Effective communication skills
- An analytical bent of mind and problem-solving aptitude
- Good time management skills
- Curiosity for learning
- Patience
Roles & Responsibilities:
- Candidate with good hands-on exposure to AWS, Cloud, DevOps, Ansible, Docker, Jenkins.
- Strong proficiency in Linux, Open Source, Web based and Cloud based environments (ability to use open source technologies and tools)
- Strong scripting and automation (bash, Perl, common Linux utils), strong grasp of automation tools a plus.
- Strong debugging skills (OS, scripting, Web based technologies), SQL, Java and Database concepts are a plus
- Apache, nginx, git, svn, GNU tools
- Must have exposure to grep, awk, sed, Git, SVN
- Scripting (bash, python)
- API related skills (REST, and any other like google, aws, atlassian)
- Web based technology
- Strong Unix Skills
- Java programming and coding (Spring Boot, microservices; Liferay, Alfresco experience is a plus)
- Proficient in AWS
- Ops (Ansible, Apache, Python, Terraform)
Benefits
- Cash Rewards & Recognition on Monthly Basis
- Work-Life Balance (Flexible Working Hours)
- Five-Day Work Week
