
Intuitive is a fast-growing, top-tier Cloud Solutions and Services company supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's world-class global technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
Job Description :
- Extensive experience with Kubernetes (EKS/GKE) and Kubernetes ecosystem tooling, e.g., Prometheus, ArgoCD, Grafana, Istio, etc.
- Extensive AWS/GCP core infrastructure skills
- Infrastructure/IaC automation and integration with Terraform
- Kubernetes resource engineering and management
- Experience with DevOps tools, CI/CD pipelines, and release management
- Good at creating documentation (runbooks, design documents, implementation plans)
Linux Experience :
- Namespaces
- Virtualization
- Containers
Networking Experience :
- Virtual networking
- Overlay networks
- VXLAN, GRE
Kubernetes Experience :
Should have experience bringing up a Kubernetes cluster manually, without using the kubeadm tool.
Observability :
Experience in observability is a plus.
Cloud Automation :
Familiarity with cloud platforms, particularly AWS, and DevOps tools like Jenkins, Terraform, etc. (a brief cluster health-check sketch follows this section).
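For the Kubernetes and observability items above, here is a minimal sketch of the kind of automation involved, using the official Kubernetes Python client. It assumes a reachable cluster and a valid kubeconfig; it is illustrative only and not part of the role description.

```python
# Minimal cluster health probe using the official Kubernetes Python client
# (pip install kubernetes). Assumes a reachable cluster and a valid kubeconfig.
from kubernetes import client, config

def report_unready_pods():
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        ready = all(cs.ready for cs in (pod.status.container_statuses or []))
        if not ready:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} is not ready "
                  f"(phase={pod.status.phase})")

if __name__ == "__main__":
    report_unready_pods()
```

A check like this is typically wired into Prometheus or a CI job rather than run by hand.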

About GradRight
Our vision is to be the world’s leading Ed-Fin Tech company dedicated to making higher education accessible and affordable to all. Our mission is to drive transparency and accountability in the global higher education sector and create significant impact using the power of technology, data science and collaboration.
GradRight is the world’s first SaaS ecosystem that brings together students, universities and financial institutions in an integrated manner. It enables students to find and fund high return college education, universities to engage and select the best-fit students and banks to lend in an effective and efficient manner.
In the last three years, we have enabled students to get the best deals on over $2.8 billion in loan requests and facilitated more than $350 million in loan disbursements. GradRight won the HSBC Fintech Innovation Challenge, supported by the Ministry of Electronics & IT, Government of India, and was among the top 7 global finalists in the PIEoneer Awards, UK.
GradRight’s team possesses extensive domestic and international experience in the launch and scale-up of premier higher education institutions. It is led by alumni of IIT Delhi, BITS Pilani, IIT Roorkee, ISB Hyderabad and University of Pennsylvania. GradRight is a Delaware, USA registered company with a wholly owned subsidiary in India.
About the Role
We are looking for a passionate DevOps Engineer with hands-on experience in AWS cloud infrastructure, containerization, and orchestration. The ideal candidate will be responsible for building, automating, and maintaining scalable cloud solutions, ensuring smooth CI/CD pipelines, and supporting development and operations teams.
Core Responsibilities
Design, implement, and manage scalable, secure, and highly available infrastructure on AWS.
Build and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, or GitHub Actions.
Containerize applications using Docker and manage deployments with Kubernetes (EKS, self-managed, or other distributions).
Monitor system performance, availability, and security using tools like CloudWatch, Prometheus, Grafana, and the ELK/EFK stack (see the sketch after this list).
Collaborate with development teams to optimize application performance and deployment processes.
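As a hedged illustration of the monitoring responsibility above, here is a minimal boto3 sketch that publishes a custom health metric to CloudWatch. The namespace, dimension, and service name are hypothetical, and it assumes AWS credentials with cloudwatch:PutMetricData.

```python
# Publish a custom service-health metric to CloudWatch with boto3.
# Namespace, dimension, and service names below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def publish_health(service_name: str, healthy: bool) -> None:
    cloudwatch.put_metric_data(
        Namespace="Custom/ServiceHealth",  # hypothetical namespace
        MetricData=[{
            "MetricName": "Healthy",
            "Dimensions": [{"Name": "Service", "Value": service_name}],
            "Value": 1.0 if healthy else 0.0,
            "Unit": "Count",
        }],
    )

publish_health("checkout-api", healthy=True)  # example invocation
```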
Required Skills & Experience
3–4 years of professional experience as a DevOps Engineer or similar role.
Strong expertise in AWS services (EC2, S3, RDS, Lambda, VPC, IAM, CloudWatch, EKS, etc.).
Hands-on experience with Docker and Kubernetes (EKS or self-hosted clusters).
Proficiency in CI/CD pipeline design and automation.
Experience with Infrastructure as Code (Terraform / AWS CloudFormation).
Solid understanding of Linux/Unix systems and shell scripting.
Knowledge of monitoring, logging, and alerting tools.
Familiarity with networking concepts (DNS, Load Balancing, Security Groups, Firewalls).
Basic programming/scripting experience in Python, Bash, or Go.
Nice to Have
Exposure to microservices architecture and service mesh (Istio/Linkerd).
Knowledge of serverless (AWS Lambda, API Gateway).
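For the serverless item above, a minimal sketch of an AWS Lambda handler behind an API Gateway proxy integration. The event shape and route are illustrative, not a prescribed design.

```python
# Minimal Lambda handler for an API Gateway proxy integration.
import json

def lambda_handler(event, context):
    # API Gateway proxy integrations pass the HTTP request details in `event`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```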
Are you an experienced Infrastructure/DevOps Engineer looking for an exciting remote opportunity to design, automate, and scale modern cloud environments? We’re seeking a skilled engineer with strong expertise in Terraform and DevOps practices to join our growing team. If you’re passionate about automation, cloud infrastructure, and CI/CD pipelines, we’d love to hear from you!
Key Responsibilities:
- Design, implement, and manage cloud infrastructure using Terraform (IaC).
- Build and maintain CI/CD pipelines for seamless application deployment.
- Ensure scalability, reliability, and security of cloud-based systems.
- Collaborate with developers and QA to optimize environments and workflows.
- Automate infrastructure provisioning, monitoring, and scaling (see the sketch after this list).
- Troubleshoot infrastructure and deployment issues quickly and effectively.
- Stay up to date with emerging DevOps tools, practices, and cloud technologies.
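One way the provisioning automation above might look, sketched as a thin Python driver around the Terraform CLI. It assumes `terraform` is on PATH and that the working directory (a placeholder here) holds a valid configuration.

```python
# Thin, non-interactive wrapper around the Terraform CLI for CI-style runs.
# Assumes `terraform` is installed and ./infra (placeholder) holds the config.
import subprocess

def terraform_apply(workdir: str = "./infra") -> None:
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    subprocess.run(["terraform", "plan", "-input=false", "-out=tfplan"],
                   cwd=workdir, check=True)
    subprocess.run(["terraform", "apply", "-input=false", "tfplan"],
                   cwd=workdir, check=True)

if __name__ == "__main__":
    terraform_apply()
```

In practice this logic usually lives in the CI/CD pipeline itself (Jenkins, GitHub Actions, GitLab CI/CD) rather than a standalone script.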
Requirements:
- 5+ years of professional experience in DevOps or Infrastructure Engineering.
- Strong expertise in Terraform and Infrastructure as Code (IaC).
- Hands-on experience with AWS / Azure / GCP (at least one cloud platform).
- Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI/CD, etc.).
- Experience with Docker, Kubernetes, and container orchestration.
- Strong knowledge of Linux systems, networking, and security best practices.
- Familiarity with monitoring & logging tools (Prometheus, Grafana, ELK, etc.).
- Scripting experience (Bash, Python, or similar).
- Excellent problem-solving skills and ability to work in remote teams.
Perks and Benefits:
- Competitive salary with remote work flexibility.
- Opportunity to work with global clients on modern infrastructure.
- Growth and learning opportunities in cutting-edge DevOps practices.
- Collaborative team culture that values automation and innovation.
Experience - 2+ Years
Requirements:
● Should have at least 2 years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
Bito is a startup using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Over 100,000 developers already use Bito to increase their productivity by 31%, making more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM) worth well over $1B. We are looking to apply those learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc.
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- 4+ years of relevant work experience in a DevOps role
- 3+ years of experience designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of core cloud services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery of configuration automation toolsets such as Ansible, Chef, etc.
- Proficiency with Jira, Confluence, and the Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, CloudWatch, New Relic, etc. (see the alarm sketch after this list)
- Proven ability to manage and prioritize multiple diverse projects simultaneously
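As a hedged example of the monitoring and alerting bullet above, a boto3 sketch that creates a CloudWatch CPU alarm. The instance ID and SNS topic ARN are placeholders.

```python
# Create a CloudWatch alarm on EC2 CPU utilization with boto3.
# Instance ID and SNS topic ARN below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder ARN
)
```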
What do we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events

Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators, but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with public clouds like AWS
- Operating and monitoring cloud infrastructure on AWS
- Primary focus on building, implementing, and managing operational support
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure
- Expert in at least one scripting language: Python, shell, etc.
- Experience with Nginx/HAProxy, the ELK stack, Ansible, Terraform, the Prometheus-Grafana stack, etc. (a small exporter sketch follows this list)
- Handling load monitoring, capacity planning, and service monitoring
- Proven experience with CI/CD pipelines and handling database upgrade-related issues
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
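For the Prometheus-Grafana item above, a minimal sketch of a custom Prometheus exporter using the prometheus_client library. The metric name, port, and the random value standing in for a real measurement are all illustrative.

```python
# Minimal Prometheus exporter: exposes a gauge on /metrics for scraping.
# Metric name, port, and the random stand-in value are illustrative.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("app_queue_depth", "Current depth of the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        queue_depth.set(random.randint(0, 100))  # replace with a real measurement
        time.sleep(15)
```

Grafana would then chart the scraped series and drive alert rules from it.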
CI/CD tools (Jenkins/Bamboo/TeamCity/CircleCI), DevSecOps pipelines, cloud services (AWS/Azure/GCP), Ansible, Terraform, Docker, Helm, CloudFormation templates, web server deployment & configuration, database (SQL/NoSQL) deployment & configuration, Git, Artifactory, monitoring tools (Nagios, Grafana, Prometheus, etc.), application logs (ELK/EFK, Splunk, etc.), API gateways, security tools, Vault (a Docker build-and-push sketch follows the role details below).
Experience : 8 to 9 Years.
Location : Pune / Nasik or willing to relocate to these locations.
Notice Period : Immediate or Within a week.
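Relating to the Docker and registry items in the skills list above, an illustrative build-and-push sketch using the Docker SDK for Python. The registry host and repository are placeholders, and it assumes a local Docker daemon plus a prior `docker login`.

```python
# Build a local Dockerfile and push the image to a (placeholder) registry
# using the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

# Build the image from the current directory's Dockerfile.
image, build_logs = client.images.build(path=".", tag="registry.example.com/team/app:1.0.0")

# Push it to the private registry (assumes `docker login` has been done).
for line in client.images.push("registry.example.com/team/app", tag="1.0.0",
                               stream=True, decode=True):
    print(line)
```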
Who we are :
- Winjit is a global, agile, and innovative organization established with an aim to create value and impact for its employees, customers and all stakeholders.
- We are a group of technology enthusiasts with a mission to make a positive impact on businesses by leveraging our expertise in Artificial Intelligence, Machine Learning, IoT and Blockchain.
Who you are :
- Growth oriented, a tech geek with a curiosity to know and explore.
- A solution-oriented individual.
- A strong decision maker.
- Detail-oriented, obsessed with pixel perfection, and able to convert a designer's vision into a working application.
- Someone who aspires to travel globally to solve exciting business challenges.
What you possess :
- Relevant experience in architecting, designing, and implementing Microsoft Azure private, public and hybrid cloud-based solutions.
- Broad understanding of cloud computing technologies, business drivers, and emerging computing trends.
- Knowledge of and experience in Microsoft Azure.
- Knowledge or good understanding of Cloud Services & Concepts (SaaS, PaaS, IaaS).
- Recent experience working with CI/CD processes and tooling, including code repositories such as GitLab/GitHub and continuous integration.
- Strong scripting abilities (PowerShell) and ability to use APIs to automate tooling, delivery pipelines and operational support.
- Keen interest in future technologies and the strategic direction of software and systems delivery to continually improve the service offered by SaaS plaza.
- Experience in application assessment to determine the best fit for cloud services.
- Good communication, presenting, collaboration and stakeholder management skills.
What you will do :
- Own & lead the technical customer engagement, assessments, architectural design sessions, improvement sessions on existing architectures, implementation projects, proofs of concepts, etc.
- Work in project teams to architect, develop and deploy new cloud solutions for customers.
- Optimize the deployment of cloud solutions using DevOps tooling.
- Embed, automate operational tasks on cloud solutions.
- Provide 2nd & 3rd line support on delivered cloud solutions.
- Build and support DevOps CI/CD automation to implement cloud infrastructure that enables engineers to self-serve for most operational tasks and achieve rapid change.
- Develop strategies and best practices for efficient development.
- Mentor the team and lead them towards growth.
With hands-on experience in :
- Microsoft Azure (a must).
- CI/CD pipeline creation.
- PowerShell scripting.
- ARM templates.
- Terraform (see the Azure sketch below).
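Alongside the PowerShell/ARM/Terraform skills above, a hedged sketch of basic Azure automation using the Azure SDK for Python (azure-identity and azure-mgmt-resource). The subscription ID and resource group name are placeholders.

```python
# Create (or update) an Azure resource group with the Azure SDK for Python.
# Subscription ID and resource group name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()  # uses env vars, Azure CLI login, or managed identity
client = ResourceManagementClient(credential, "00000000-0000-0000-0000-000000000000")

rg = client.resource_groups.create_or_update(
    "rg-devops-example",          # hypothetical resource group name
    {"location": "westeurope"},
)
print(rg.name, rg.location)
```

The equivalent step is usually expressed declaratively in an ARM template or Terraform; the SDK route shown here suits ad-hoc glue automation.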
Why Winjit :
- Winjit understands that our team has a life outside work as well.
We offer :
- Flexible working culture.
- Holistic development of its team members through its wellness and growth-oriented programs.
- Opportunities to upskill and cross-skill through advanced technical certifications.
- Global on-site exposure.
- Be a musician, stand-up comic, writer, or athlete while working at Winjit, as we value your passion!
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead (and/or CTO) to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks (a minimal sketch follows the responsibilities list below). Lastly, you’ll build our CI pipeline and train and guide the team in DevOps practices.
Responsibilities
- Implement and own the CI.
- Manage CD tooling.
- Implement and maintain monitoring and alerting.
- Build and maintain highly available production systems.
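A minimal sketch of the automated health checks mentioned above, using the requests library. The endpoints are placeholders; in a real setup the results would feed the alerting stack rather than stdout.

```python
# Poll a set of (placeholder) health endpoints and exit non-zero on failure,
# so the script can run from cron or a CI job and trigger alerts.
import requests

ENDPOINTS = [
    "https://api.example.com/healthz",  # hypothetical services
    "https://web.example.com/healthz",
]

def check_all() -> bool:
    healthy = True
    for url in ENDPOINTS:
        try:
            ok = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            ok = False
        healthy &= ok
        print(f"{url}: {'ok' if ok else 'FAILING'}")
    return healthy

if __name__ == "__main__":
    raise SystemExit(0 if check_all() else 1)
```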
Qualification: B.Tech in IT
Your skills and experience should cover:
- 5+ years of experience developing, deploying, and debugging solutions on the AWS platform using services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch, and WAF (Web Application Firewall).
- AWS Certified Developer (Associate) is required; AWS Certified DevOps Engineer (Professional) is preferred.
- 5+ years of experience using one or more modern programming languages (Python, Node.js).
- Hands-on experience migrating data to the AWS cloud platform (see the sketch at the end of this posting).
- Experience with Scrum/Agile methodology.
- Good understanding of core AWS services, their uses, and basic AWS architecture best practices (including security and scalability).
- Experience with AWS data storage tools.
- Experience configuring and implementing AWS tools such as CloudWatch and CloudTrail, and directing system logs to them for monitoring.
- Experience working with Git or similar tools.
- Ability to communicate and represent AWS recommendations and standards.
The following areas are highly advantageous:
- Experience with Docker
- Experience with the PostgreSQL database
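Tied to the data-migration bullet above, a hedged boto3 sketch that copies a local export into S3. The bucket name, key prefix, and file path are placeholders, and it assumes credentials with s3:PutObject on the target bucket.

```python
# Upload a local export file to a (placeholder) S3 bucket with boto3.
import boto3

s3 = boto3.client("s3")

def upload_export(local_path: str, bucket: str = "example-migration-bucket") -> None:
    key = f"exports/{local_path.rsplit('/', 1)[-1]}"
    s3.upload_file(local_path, bucket, key)  # boto3 handles multipart for large files
    print(f"uploaded {local_path} to s3://{bucket}/{key}")

if __name__ == "__main__":
    upload_export("/tmp/customers.csv")  # example invocation with a placeholder path
```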


