
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
Job Description:
- Extensive experience with Kubernetes (EKS/GKE) and its ecosystem tooling, e.g., Prometheus, ArgoCD, Grafana, Istio
- Extensive AWS/GCP core infrastructure skills
- Infrastructure/IaC automation and integration with Terraform
- Kubernetes resources engineering and management (see the sketch after this list)
- Experience with DevOps tools, CI/CD pipelines, and release management
- Good at creating documentation (runbooks, design documents, implementation plans)
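As an illustration of the Kubernetes resources engineering and management listed above, here is a minimal sketch using the official kubernetes Python client; the namespace and the health criterion are illustrative assumptions, not requirements of the role.

```python
# Minimal sketch: report Deployments whose ready replica count lags the desired
# count, using the official kubernetes Python client (pip install kubernetes).
# The target namespace is an illustrative assumption.
from kubernetes import client, config

def list_unhealthy_deployments(namespace="default"):
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    unhealthy = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            unhealthy.append(f"{dep.metadata.name}: {ready}/{desired} ready")
    return unhealthy

if __name__ == "__main__":
    for line in list_unhealthy_deployments():
        print(line)
```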
Linux Experience:
- Namespace
- Virtualization
- Containers
Networking Experience:
- Virtual networking
- Overlay networks
- VXLANs, GRE
Kubernetes Experience:
Should have experience in bringing up a Kubernetes cluster manually, without using the kubeadm tool (a minimal bring-up sketch follows below).
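For orientation, a heavily simplified sketch of what a non-kubeadm bring-up involves is shown below: starting etcd, then kube-apiserver, then the controller-manager and scheduler. Certificate paths, CIDRs, and kubeconfig locations are placeholder assumptions; a real manual bring-up also requires PKI generation, kubeconfigs, kubelet registration, and a CNI plugin.

```python
# Heavily simplified sketch of a manual (non-kubeadm) control-plane bring-up.
# In production these components would run under systemd; all paths and CIDRs
# here are placeholders.
import subprocess

PKI = "/etc/kubernetes/pki"  # assumed certificate directory

components = [
    ["etcd",
     "--data-dir=/var/lib/etcd",
     "--listen-client-urls=http://127.0.0.1:2379",
     "--advertise-client-urls=http://127.0.0.1:2379"],
    ["kube-apiserver",
     "--etcd-servers=http://127.0.0.1:2379",
     "--service-cluster-ip-range=10.96.0.0/16",
     f"--tls-cert-file={PKI}/apiserver.crt",
     f"--tls-private-key-file={PKI}/apiserver.key",
     f"--client-ca-file={PKI}/ca.crt",
     f"--service-account-key-file={PKI}/sa.pub",
     f"--service-account-signing-key-file={PKI}/sa.key",
     "--service-account-issuer=https://kubernetes.default.svc"],
    ["kube-controller-manager", "--kubeconfig=/etc/kubernetes/controller-manager.conf"],
    ["kube-scheduler", "--kubeconfig=/etc/kubernetes/scheduler.conf"],
]

procs = [subprocess.Popen(cmd) for cmd in components]
for p in procs:
    print("started", p.args[0], "pid", p.pid)
```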
Observability:
Experience in observability is a plus.
Cloud automation:
Familiarity with cloud platforms, specifically AWS, and DevOps tools like Jenkins, Terraform, etc.
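To make the Terraform/Jenkins combination above concrete, here is a minimal sketch that drives the standard Terraform CLI workflow (init/plan/apply) from Python, as a CI step might; the module directory is an illustrative assumption.

```python
# Minimal sketch: run the standard Terraform CLI workflow non-interactively,
# e.g. from a Jenkins pipeline step. The working directory is a placeholder.
import subprocess

def terraform(workdir, *args):
    # -input=false keeps the run non-interactive; applying a saved plan file
    # needs no interactive approval.
    return subprocess.run(
        ["terraform", f"-chdir={workdir}", *args],
        check=True, capture_output=True, text=True,
    )

workdir = "infra/aws"  # assumed module directory
terraform(workdir, "init", "-input=false")
plan = terraform(workdir, "plan", "-input=false", "-out=tfplan")
print(plan.stdout)
terraform(workdir, "apply", "-input=false", "tfplan")
```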

REVIEW CRITERIA:
MANDATORY:
- Strong Senior/Lead DevOps Engineer Profile
- Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It's an individual contributor (IC) role
PREFERRED:
- Proficiency in scripting languages (Bash, Python) for automation and operational tasks.
- Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations (see the sketch after this list).
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Candidates from the NCR region only (no outstation candidates).
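As a small illustration of the IAM/GuardDuty familiarity mentioned above, the sketch below runs two read-only checks with boto3; it assumes AWS credentials are already configured and is not a complete audit.

```python
# Minimal read-only sketch: flag IAM users without MFA and confirm GuardDuty
# is enabled in the current region. Assumes boto3 credentials are configured.
import boto3

iam = boto3.client("iam")
guardduty = boto3.client("guardduty")

# Flag console users without an MFA device attached (first page only, for brevity).
for user in iam.list_users()["Users"]:
    mfa = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
    if not mfa:
        print(f"IAM user without MFA: {user['UserName']}")

# Confirm GuardDuty has at least one detector enabled in this region.
detectors = guardduty.list_detectors()["DetectorIds"]
print("GuardDuty detectors:", detectors or "none enabled")
```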
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC):
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
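Although networking like this would typically be expressed in Terraform per the bullets above, here is a minimal boto3 sketch of creating, accepting, and routing to a VPC peering connection, to keep the examples in Python; all IDs, the region, and the CIDR are placeholders.

```python
# Minimal sketch: create, accept, and route to a VPC peering connection.
# VPC/route-table IDs, region, and CIDR below are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa0000000000aaa",      # requester VPC (placeholder)
    PeerVpcId="vpc-0bbb0000000000bbb",  # accepter VPC (placeholder)
)["VpcPeeringConnection"]

ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"]
)

# Send traffic destined for the peer VPC's CIDR through the peering connection.
ec2.create_route(
    RouteTableId="rtb-0ccc0000000000ccc",  # placeholder
    DestinationCidrBlock="10.1.0.0/16",    # peer VPC CIDR (placeholder)
    VpcPeeringConnectionId=peering["VpcPeeringConnectionId"],
)
```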
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access (see the sketch after this list).
- Optimize performance of containerized and microservices applications.
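The sketch below shows only the Kubernetes side of the IRSA item above: a ServiceAccount annotated with an IAM role ARN so pods that use it receive role-scoped AWS credentials. The namespace, names, and ARN are placeholders, and the IAM side (OIDC provider and role trust policy) is not shown.

```python
# Minimal sketch: create a ServiceAccount annotated for IRSA with the official
# kubernetes Python client. Namespace, name, and role ARN are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

sa = client.V1ServiceAccount(
    metadata=client.V1ObjectMeta(
        name="app-irsa",       # placeholder
        namespace="payments",  # placeholder
        annotations={
            "eks.amazonaws.com/role-arn":
                "arn:aws:iam::111122223333:role/app-irsa-role"  # placeholder
        },
    )
)
core.create_namespaced_service_account(namespace="payments", body=sa)
```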
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
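As one concrete example of the alerting above, the sketch below creates a CloudWatch alarm on a user-facing symptom (ALB 5xx count) rather than on a host being down; the load balancer dimension, threshold, and SNS topic ARN are placeholder assumptions.

```python
# Minimal sketch: CloudWatch alarm on ALB 5xx responses, notifying an SNS topic.
# Dimension value, threshold, and topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="alb-5xx-symptom",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/prod-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=5,
    Threshold=50,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:oncall-alerts"],  # placeholder
)
```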
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Must have expertise in MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Experience with Apache Airflow and Spark.
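Since Airflow pipelines are themselves Python, here is a minimal sketch of a two-task DAG of the kind implied above; the DAG id, schedule, and commands are placeholders and assume Airflow 2.x.

```python
# Minimal Airflow 2.x sketch: an extract task followed by a load task.
# DAG id, schedule, and bash commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_snowflake_load",  # placeholder
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="python extract.py")
    load = BashOperator(task_id="load", bash_command="python load_to_snowflake.py")
    extract >> load
```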
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES, etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/Bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Senior DevOps Engineer
Experience: Minimum 5 years of relevant experience
Key Responsibilities:
• Hands-on experience with AWS tools and CI/CD pipelines, Red Hat Linux
• Strong expertise in DevOps practices and principles
• Experience with infrastructure automation and configuration management
• Excellent problem-solving skills and attention to detail
Nice to Have:
• Red Hat certification
About Quantela
We are a technology company that offers outcomes business models. We empower our customers with the right digital infrastructure to deliver greater economic, social, and environmental outcomes for their constituents.
When the company was founded in 2015, we specialized in smart cities technology alone. Today, working with cities and towns, utilities, and public venues, our team of 280+ experts offers a vast array of outcomes business models through technologies like digital advertising, smart lighting, smart traffic, and digitized citizen services.
We pride ourselves on our agility, innovation, and passion to use technology for a higher purpose. Unlike other technology companies, we tailor our offerings (what we can digitize) and the business model (how we partner with our customer to deliver that digitization) to drive measurable impact where our customers need it most. Over the last several months alone, we have served customers to deliver outcomes like increased medical response times to save lives; reduced traffic congestion to keep cities moving and created new revenue streams to tackle societal issues like homelessness.
We are headquartered in Billerica, Massachusetts in the United States with offices across Europe, and Asia.
The company has been recognized with the World Economic Forum’s ‘Technology Pioneers’ award in 2019 and CRN’s IoT Innovation Award in 2020.
For latest news and updates please visit us at www.quantela.com
Overview of the Role
The ideal candidate should have the automation skills to provision infrastructure and deploy microservices through automation tools, and should have handled Kubernetes clusters in production, both in the cloud and on-premises.
Key Responsibilities
- Overall 6+ years of experience; should have handled production Kubernetes clusters in both cloud and on-premises environments.
- Build monitoring that alerts on symptoms rather than on outages (see the Prometheus sketch after this list).
- Migrate VM-based applications to Kubernetes clusters.
- Automate infrastructure component provisioning.
- Document every task; turn your findings into repeatable actions and then into automation.
- Follow the Agile process and plan the work accordingly.
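To illustrate symptom-based monitoring from the list above, the sketch below queries Prometheus's HTTP API for the 5xx error ratio, a user-visible symptom, instead of checking whether a host is up. The Prometheus URL, metric names, and threshold are placeholders; in practice this logic would normally live in a Prometheus alerting rule rather than a script.

```python
# Minimal sketch: check a symptom (5xx error ratio) via the Prometheus HTTP API.
# URL, metric names, and threshold are placeholder assumptions.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"  # placeholder
QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[5m])) '
    "/ sum(rate(http_requests_total[5m]))"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

error_ratio = float(result[0]["value"][1]) if result else 0.0
if error_ratio > 0.05:  # 5% threshold (placeholder)
    print(f"ALERT: 5xx error ratio {error_ratio:.2%} exceeds threshold")
else:
    print(f"OK: 5xx error ratio {error_ratio:.2%}")
```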
Must have Skills
- Knowledge of container solutions like Docker and Kubernetes, and an understanding of virtualization concepts.
- Experience with configuration management tools like Ansible and Chef.
- Experience with Terraform, CloudFormation, or other infrastructure-as-code tools.
- Experience with CI/CD in Jenkins.
- Good knowledge of AWS/Azure cloud environments.
- Good hands-on experience with web servers such as Nginx, Apache, and Tomcat (configuration and administration).
- Experience working with and maintaining package management systems (e.g., Artifactory, APT).
- Knowledge of scripting in PowerShell/Python/Bash.
- Experience building automation and pipelines (integration, testing, deployment)
- Experience with Docker containers (images and registry management).
- Understanding of metrics collectors such as Metricbeat, Heartbeat or Prometheus is good to have.
- Ability to work collaboratively on a cross-functional team with a wide range of experience levels
- Ability to analyze existing services and identify technical debt to work on.
Desired Background
Bachelor's/Master's degree in Computer Science or Computer Applications
Job Summary: We are looking for a senior DevOps engineer to help us build functional systems that improve customer experience. They will be responsible for deploying product updates, identifying production issues and implementing integrations that meet our customers' needs.
Key Responsibilities
- Utilise various open source technologies & build independent web based tools, microservices and solutions
- Write deployment scripts (see the deployment-script sketch after this list)
- Configure and manage data sources like MySQL, Mongo, Elasticsearch, etc.
- Configure and deploy pipelines for various microservices using CI/CD tools
- Automated server monitoring setup & HA adherence
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Coordination and communication within the team and with customers where integrations are required
- Work with company personnel to define technical problems and requirements, determine solutions, and implement those solutions.
- Work with product team to design automated pipelines to support SaaS delivery and operations in cloud platforms.
- Review and act on the service requests, infrastructure requests, and incidents logged by our implementation teams and clients; identify, analyse, and resolve infrastructure vulnerabilities and application deployment issues.
- Modify and improve existing systems; suggest process improvements and implement them.
- Collaborate with software engineers to help them deploy and operate different systems, and help automate and streamline the company's operations and processes.
- Developing interface simulators and designing automated module deployments.
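As a sketch of the deployment scripting mentioned above: build and push a Docker image, update the Kubernetes Deployment, and wait for the rollout. The registry, image, namespace, and deployment names are placeholders.

```python
# Simplified deployment-script sketch: docker build/push, then a kubectl
# image update and rollout wait. All names below are placeholders.
import subprocess
import sys

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy(tag):
    image = f"registry.example.com/orders-service:{tag}"  # placeholder registry/repo
    run("docker", "build", "-t", image, ".")
    run("docker", "push", image)
    run("kubectl", "set", "image", "deployment/orders-service",
        f"orders-service={image}", "-n", "prod")           # placeholder namespace
    run("kubectl", "rollout", "status", "deployment/orders-service", "-n", "prod")

if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "latest")
```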
Key Skills
- Bachelor's degree in software engineering, computer science, information technology, or information systems.
- 3+ years of experience in managing Linux based cloud microservices infrastructure (AWS, GCP or Azure)
- Hands-on experience with databases including MySQL.
- Experience with OS tuning and optimization for running databases and other scalable microservice solutions
- Proficient working with git repositories and git workflows
- Able to setup and manage CI/CD pipelines
- Excellent troubleshooting, working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
- Sense of ownership and pride in your performance and its impact on company’s success
- Critical thinker and problem-solving skills
- Extensive experience in DevOps engineering, team management, and collaboration.
- Ability to install and configure software, gather test-stage data, and perform debugging.
- Ability to ensure smooth software deployment by writing script updates and running diagnostics.
- Proficiency in documenting processes and monitoring various metrics.
- Advanced knowledge of best practices related to data encryption and cybersecurity.
- Ability to keep up with software development trends and innovation.
- Exceptional interpersonal and communication skills
Experience:
- Must have 4+ years of experience as a DevOps Engineer in a SaaS product-based company
About SuperProcure
SuperProcure is a leading logistics and supply chain management solutions provider that aims to bring efficiency, transparency, and process optimization across the globe with the help of technology and data. SuperProcure started its journey in 2017 to help companies digitize their logistics operations. We created industry-recognized products which are now being used by 150+ companies like Tata Consumer Products, ITC, Flipkart, Tata Chemicals, PepsiCo, L&T Constructions, GMM Pfaudler, Havells, and others. They help achieve real-time visibility, 100% audit adherence and transparency, a 300% improvement in team productivity, up to 9% savings in freight costs, and many more benefits. SuperProcure is determined to make the lives of logistics teams easier, add value, and help establish a fair and beneficial process for the company.
SuperProcure is backed by IndiaMart and incubated under IIMCIP & Lumis, Supply Chain Labs. SuperProcure was also recognized as one of the Top 50 Emerging Start-ups of India at the NASSCOM Product Conclave organized in Bengaluru and was a part of the recently launched National Logistics Policy by the Prime Minister of India. More details about our journey can be found here
Life @ SuperProcure
SuperProcure operates in an extremely innovative, entrepreneurial, analytical, and problem-solving work culture. Every team member is fully motivated and committed to the company's vision and believes in getting things done. In our organization, every employee is the CEO of what he/she does; from conception to execution, the work needs to be thought through.
Our people are the core of our organization, and we believe in empowering them and making them a part of the daily decision-making, which impacts the business and shapes the company's overall strategy. They are constantly provided with resources, mentorship, and support from our highly energetic teams and leadership. SuperProcure is extremely inclusive and believes in collective success.
Looking for a bland, routine 9-6 job? PLEASE DO NOT APPLY. Looking for a job where you wake up and add significant value to a $180 billion logistics industry every day? DO APPLY.
OTHER DETAILS
- Engagement : Full Time
- No. of openings : 1
- CTC: 12-20 LPA
Our Client is an IT infrastructure services company, focused and specialized in delivering solutions and services on Microsoft products and technologies. They are a Microsoft partner and cloud solution provider. Our Client's objective is to help small, mid-sized as well as global enterprises to transform their business by using innovation in IT, adapting to the latest technologies and using IT as an enabler for business to meet business goals and continuous growth.
With focused and experienced management and a strong team of IT Infrastructure professionals, they are adding value by making IT Infrastructure a robust, agile, secure and cost-effective service to the business. As an independent IT Infrastructure company, they provide their clients with unbiased advice on how to successfully implement and manage technology to complement their business requirements.
- Providing on-call support within a high availability production environment
- Logging issues
- Providing Complex problem analysis and resolution for technical and application issues
- Supporting and collaborating with team members
- Running system updates
- Monitoring and responding to system alerts
- Developing and running system health checks
- Applying industry standard practices across the technology estate
- Performing system reviews
- Reviewing and maintaining infrastructure configuration
- Diagnosing performance issues and network bottlenecks
- Collaborating within geographically distributed teams
- Supporting software development infrastructure through continuous integration and delivery standards
- Working closely with developers and QA teams as part of a customer support centre
- Project delivery work, either individually or in conjunction with other teams, external suppliers, or contractors
- Ensuring maintenance of the technical environments to meet current standards
- Ensuring compliance with appropriate industry and security regulations
- Providing support to Development and Customer Support teams
- Managing the hosted infrastructure through vendor engagement
- Managing third-party software licensing, ensuring compliance
- Delivering new technologies as agreed by the business
What you need to have:
- Experience working within a technical operations environment relevant to associated skills stated.
- Be proficient in:
- Linux, zsh/bash/similar
- ssh, tmux/screen/similar
- vim/emacs/similar
- Computer networking
- Have a reasonable working knowledge of:
- Cloud infrastructure, preferably GCP
- One or more programming/ scripting languages
- Git
- Docker
- Web services and web servers
- Databases, relational and NoSQL
- Some familiarity with:
- Puppet, Ansible
- Terraform
- GitHub, CircleCI, Kubernetes
- Scripting languages: Shell
- Databases: Cassandra, Postgres, MySQL or CloudSQL
- Agile working practices including Scrum and Kanban
- Private & public cloud hosting environments
- Strong technology interests with a positive ‘can do’ attitude
- Be flexible and adaptable to changing priorities
- Be good at planning and organising their own time and able to meet targets and deadlines without supervision
- Excellent written and verbal communication skills.
- Approachable with both colleagues and team members
- Be resourceful and practical with an ability to respond positively and quickly to technical and business challenges
- Be persuasive, articulate and influential, but down to earth and friendly with own team and colleagues
- Have an ability to establish relationships quickly and to work effectively either as part of a team or individually
- Be customer focused with both internal and external customers
- Be capable of remaining calm under pressure
- Technically minded with good problem resolution skills and a systematic manner
- Excellent documentation skills
- Prepared to participate in an out-of-hours support rota
About Us
We have grown over 1400% in revenues in the last year.
Interface.ai provides an Intelligent Virtual Assistant (IVA) to financial institutions (FIs) to automate calls and customer inquiries across multiple channels and engage their customers with financial insights and upsell/cross-sell.
Our IVA is transforming financial institutions’ call centers from a cost to a revenue center.
Our core technology is built 100% in-house with several breakthroughs in Natural Language Understanding. Our parser is built based on zero-shot learning that helps us to launch industry-specific IVA that can achieve over 90% accuracy on Day-1.
We are 45 people strong, with employees spread across India and US locations. Many of them come from ML teams at Apple, Microsoft, and Salesforce in the US, along with enterprise architects with 20+ years of experience building large-scale systems. Our India team consists of people from ISB, IIMs, and many who have previously been part of early-stage startups.
We are a fully remote team.
Founders come from Banking and Enterprise Technology backgrounds with previous experience scaling companies from scratch to $50M+ in revenues.
As a Site Reliability Engineer you will be in charge of:
- Designing, analyzing and troubleshooting large-scale distributed systems
- Engaging in cross-functional team discussions on design, deployment, operation, and maintenance, in a fast-moving, collaborative setup
- Building automation scripts to validate the stability, scalability, and reliability of interface.ai’s products & services as well as enhance interface.ai’s employees’ productivity
- Debugging and optimizing code and automating routine tasks
- Troubleshoot and diagnose issues (hardware or software), propose and implement solutions to ensure they occur with reduced frequency
- Perform the periodic on-call duty to handle security, availability, and reliability of interface.ai’s products
- You will write good code and follow solid engineering practices
Requirements
You can be a great fit if you are:
- Extremely self-motivated
- Able to learn quickly
- Growth Mindset (read this if you don't know what it means: https://www.amazon.com/Mindset-Psychology-Carol-S-Dweck/dp/0345472322)
- Emotional Maturity (read this if you don't know what it means: https://medium.com/@krisgage/15-signs-of-emotional-maturity-38b1a2ab9766)
- Passionate about the possibilities at the intersection of AI + Banking
- Worked in a startup of 5 to 30 employees
- Developer with a strong interest in systems Design. You will be building, maintaining, and scaling our cloud infrastructure through software tooling and automation.
- 4-8 years of industry experience developing and troubleshooting large-scale infrastructure on the cloud
- Have a solid understanding of system availability, latency, and performance
- Strong programming skills in at least one major programming language and the ability to learn new languages as needed
- Strong System/network debugging skills
- Experience with management/automation tools such as Terraform/Puppet/Chef/Salt
- Experience with setting up production-level monitoring and telemetry
- Expertise in container management & AWS
- Experience with Kubernetes is a plus
- Experience building CI/CD pipelines
- Experience working with WebSockets, Redis, Postgres, Elasticsearch, Logstash
- Experience working in an agile team environment and proficient understanding of code versioning tools, such as Git.
- Ability to effectively articulate technical challenges and solutions.
- Proactive outlook for ways to make our systems more reliable
- Install, configure, performance-tune, and monitor web, app, and database servers.
- Install, set up, and manage Java, PHP, and NodeJS stacks with software load balancers.
- Install, set up, and administer MySQL, Mongo, Elasticsearch & PostgreSQL DBs (see the health-check sketch after this list).
- Install, set up, and maintain monitoring solutions such as Nagios and Zabbix.
- Design and implement DevOps processes for new projects following the department's objectives of automation.
- Collaborate on projects with development teams to provide recommendations, support and guidance.
- Work towards full automation, monitoring, virtualization and containerization.
- Create and maintain tools for deployment, monitoring and operations.
- Automation of processes in a scalable and easy to understand way that can be detailed and understood through documentation.
- Develop and deploy software that will help drive improvements towards the availability, performance, efficiency, and security of services.
- Maintain 24/7 availability for responsible systems and be open to on-call rotation.
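To illustrate the database administration items above, here is a minimal health-check sketch using pymongo and psycopg2 (both assumed installed); the connection strings are placeholders, and real credentials would come from a secrets manager rather than being hard-coded.

```python
# Minimal database health-check sketch with pymongo and psycopg2.
# Connection strings are placeholders.
from pymongo import MongoClient
import psycopg2

def check_mongo(uri="mongodb://localhost:27017"):  # placeholder URI
    MongoClient(uri, serverSelectionTimeoutMS=3000).admin.command("ping")
    return "mongo: ok"

def check_postgres(dsn="dbname=app user=monitor host=localhost"):  # placeholder DSN
    with psycopg2.connect(dsn, connect_timeout=3) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            cur.fetchone()
    return "postgres: ok"

for check in (check_mongo, check_postgres):
    try:
        print(check())
    except Exception as exc:  # report the failure but keep checking the rest
        print(f"{check.__name__} FAILED: {exc}")
```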
About the company:
Tathastu, the next-generation innovation labs is Future Group’s initiative to provide a new-age retail experience - combining the physical with digital and enhancing it with data. We are creating next-generation consumer interactions by combining AI/ML, Data Science, and emerging technologies with consumer platforms.
The E-Commerce vertical under Tathastu has developed online consumer platforms for Future Group's portfolio of retail brands - Easyday, Big Bazaar, Central, Brand Factory, aLL, Clarks, Coverstory. Backed by our network of offline stores, we have built a new retail platform that merges our online & offline retail streams. We use data to power all our decisions across our products and build internal tools to help us scale our impact with a small, closely-knit team.
Our widespread store network, robust logistics, and technology capabilities have made it possible to launch a ‘2-Hour Delivery Promise’ on every product across fashion, food, FMCG, and home products for orders placed online through the Big Bazaar mobile app and portal. This makes Big Bazaar the first retailer in the country to offer instant home delivery on almost every consumer product ordered online.
Job Responsibilities:
- You’ll streamline and automate the software development and infrastructure management processes and play a crucial role in executing high-impact initiatives and continuously improving processes to increase the effectiveness of our platforms.
- You’ll translate complex use cases into discrete technical solutions in platform architecture, design and coding, functionality, usability, and optimization.
- You will drive automation in repetitive tasks and configuration management, and deliver comprehensive automated tests to debug/troubleshoot AWS cloud-based systems and Big Data applications.
- You’ll continuously discover, evaluate, and implement new technologies to maximize the development and operational efficiency of the platforms.
- You’ll determine the metrics that will define technical and operational success and constantly track such metrics to fine-tune the technology stack of the organization.
Experience: 4 to 8 Yrs
Qualification: B.Tech / MCA
Required Skills:
- Experience with Linux/UNIX systems administration and Amazon Web Services (AWS).
- Infrastructure as Code (Terraform), Kubernetes and container orchestration, web servers (Nginx, Apache), application servers (Tomcat, Node.js, ...), document stores and relational databases (AWS RDS - MySQL).
- Site Reliability Engineering patterns and visibility/performance/availability monitoring (CloudWatch, Prometheus)
- Background in and happy to work hands-on with technical troubleshooting and performance tuning.
- Supportive and collaborative personality - ability to influence and drive progress with your peers
Our Technology Stack:
- Docker/Kubernetes
- Cloud (AWS)
- Python/GoLang Programming
- Microservices
- Automation Tools
DevOps Engineer Skills:
- Building a scalable and highly available infrastructure for data science
- Knows data science project workflows
- Hands-on with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker (see the MLflow sketch after this list)
- Working knowledge of Jenkins or a similar tool
Responsibilities:
- Owns all the ML cloud infrastructure (AWS)
- Help build out an entire CI/CD ecosystem with auto-scaling
- Work with a testing engineer to design testing methodologies for ML APIs
- Ability to research & implement new technologies
- Help with cost optimizations of infrastructure
- Knowledge sharing
Nice to Have:
- Develop APIs for machine learning
- Can write Python servers for ML systems with API frameworks
- Understanding of task queue frameworks like Celery
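As a small illustration of the ML deployment tooling listed above, here is a minimal MLflow experiment-tracking sketch; the tracking URI, experiment name, parameters, metric values, and artifact path are all placeholders.

```python
# Minimal MLflow tracking sketch: log parameters, a metric, and an artifact
# for one run. Server URI, names, and values are placeholders.
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # placeholder server
mlflow.set_experiment("demand-forecast")                # placeholder experiment

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "xgboost")
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("rmse", 12.4)   # placeholder result
    mlflow.log_artifact("model.pkl")  # assumes this file exists locally
```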