
AWS Cloud Infrastructure Architect / Delivery Lead
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive's global, world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career with some of the largest customers.
JD:
Lead the pre-sales (25%) to post-sales (75%) efforts building Public/Hybrid Cloud solutions, working collaboratively with Intuitive and client technical and business stakeholders
Be a customer advocate with an obsession for excellence, delivering measurable success for Intuitive's customers with secure, scalable, highly available cloud architectures that leverage AWS Cloud services
Experience in analyzing customers' business and technical requirements, assessing existing environments for Cloud enablement, and advising on Cloud models, technologies, and risk management strategies
Apply creative thinking to determine technical solutions that further business goals and align with corporate technology strategies
Extensive experience building Well-Architected solutions in line with the AWS Cloud Adoption Framework (DevOps/DevSecOps, Database/Data Warehouse/Data Lake, App Modernization/Containers, Security, Governance, Risk, Compliance, Cost Management, and Operational Excellence)
Experience with application discovery, preferably with tools like Cloudscape, to discover application configurations, databases, filesystems, and application dependencies
Experience with Well-Architected Reviews, Cloud Readiness Assessments, and defining migration patterns (MRA/MRP) for application migration, e.g., re-host, re-platform, re-architect
Experience in architecting and deploying AWS Landing Zone architecture with a CI/CD pipeline
Experience in the architecture and design of AWS cloud services to address scalability, performance, high availability, security, compliance, backup and DR, automation, alerting and monitoring, and cost
Hands-on experience in migrating applications to AWS leveraging proven tools and processes, including migration, implementation, cutover, and rollback plans and execution
Hands-on experience in deploying various AWS services, e.g., EC2, S3, VPC, RDS, Security Groups, either manually or via IaC; IaC is preferred (a minimal scripted sketch follows this list)
Hands-on experience in writing cloud automation scripts/code with tools such as Ansible, Terraform, or CloudFormation Templates (AWS CFT)
Hands-on experience with application build/release processes and CI/CD pipelines
Deep understanding of Agile processes (planning, stand-ups, retros, etc.) and the ability to interact with cross-functional teams, i.e., Development, Infrastructure, Security, Performance Engineering, and QA
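For illustration only, here is a minimal sketch of the kind of scripted AWS provisioning referenced above, using boto3 (assumes configured AWS credentials; the region, bucket, and security-group names are hypothetical; real engagements would typically codify this in Terraform or CloudFormation):

```python
# Minimal sketch: scripted AWS provisioning with boto3.
# Assumes AWS credentials are configured (e.g., via `aws configure`);
# the region, bucket, and security-group names are hypothetical.
import boto3

REGION = "us-west-2"  # hypothetical region

def provision():
    s3 = boto3.client("s3", region_name=REGION)
    s3.create_bucket(
        Bucket="example-landing-zone-logs",  # hypothetical name
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )

    ec2 = boto3.client("ec2", region_name=REGION)
    # Without VpcId this targets the account's default VPC.
    sg = ec2.create_security_group(
        GroupName="example-web-sg",  # hypothetical name
        Description="Allow HTTPS only",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

if __name__ == "__main__":
    provision()
```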
Additional Requirements:
Work with Technology leadership to grow the Cloud & DevOps practice. Create cloud practice collateral
Work directly with sales teams to improve and help them drive the sales for Cloud & DevOps practice
Assist Sales and Marketing team in creating sales and marketing collateral
Write whitepapers and technology blogs to be published on social media and Intuitive website
Create case studies for projects successfully executed by Intuitive delivery team
Conduct sales enablement sessions to coach sales team on new offerings
Flexibility with work hours to support customers' requirements and collaboration with global delivery teams
Flexibility with Travel as required for Pre-sales/Post-sales, Design workshops, War-room Migration events and customer meetings
Strong passion for modern technology exploration and development
Excellent written and verbal communication, presentation, and collaboration skills; team leadership skills
Experience with Multi-cloud (Azure, GCP, OCI) is a big plus
Experience with VMware Cloud Foundation as well as Advanced Windows and Linux Engineering is a big plus
Experience with On-prem Data Engineering (Database, Data Warehouse, Data Lake) is a big plus

Job Description: DevOps Engineer
Location: Bangalore / Hybrid / Remote
Company: LodgIQ
Industry: Hospitality / SaaS / Machine Learning
About LodgIQ
Headquartered in New York, LodgIQ delivers a revolutionary B2B SaaS platform to the
travel industry. By leveraging machine learning and artificial intelligence, we enable precise
forecasting and optimized pricing for hotel revenue management. Backed by Highgate
Ventures and Trilantic Capital Partners, LodgIQ is a well-funded, high-growth startup with a
global presence.
Role Summary:
We are seeking a Senior DevOps Engineer with 5+ years of strong hands-on experience in
AWS, Kubernetes, CI/CD, infrastructure as code, and cloud-native technologies. This
role involves designing and implementing scalable infrastructure, improving system
reliability, and driving automation across our cloud ecosystem.
Key Responsibilities:
• Architect, implement, and manage scalable, secure, and resilient cloud infrastructure on AWS
• Lead DevOps initiatives including CI/CD pipelines, infrastructure automation, and monitoring
• Deploy and manage Kubernetes clusters and containerized microservices
• Define and implement infrastructure as code using Terraform/CloudFormation
• Monitor production and staging environments using tools like CloudWatch, Prometheus, and Grafana (a small instrumentation sketch follows this list)
• Support MongoDB and MySQL database administration and optimization
• Ensure high availability, performance tuning, and cost optimization
• Guide and mentor junior engineers, and enforce DevOps best practices
• Drive system security, compliance, and audit readiness in cloud environments
• Collaborate with engineering, product, and QA teams to streamline release processes
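To give a flavor of the monitoring work, a minimal sketch that exposes custom application metrics for Prometheus to scrape, using the prometheus_client library (the metric names and port are illustrative, not part of the actual LodgIQ stack):

```python
# Minimal sketch: exposing custom metrics for Prometheus to scrape.
# Requires `pip install prometheus-client`; names and port are illustrative.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Current work queue depth")

def handle_request():
    REQUESTS.inc()
    QUEUE_DEPTH.set(random.randint(0, 10))  # stand-in for a real queue depth

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
        time.sleep(1)
```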
Required Qualifications:
• 5+ years of DevOps/Infrastructure experience in production-grade environments
• Strong expertise in AWS services: EC2, EKS, IAM, S3, RDS, Lambda, VPC, etc.
• Proven experience with Kubernetes and Docker in production
• Proficient with Terraform, CloudFormation, or similar IaC tools
• Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or similar
• Advanced scripting in Python, Bash, or Go
• Solid understanding of networking, firewalls, DNS, and security protocols
• Exposure to monitoring and logging stacks (e.g., ELK, Prometheus, Grafana)
• Experience with MongoDB and MySQL in cloud environments
Preferred Qualifications:
• AWS Certified DevOps Engineer or Solutions Architect
• Experience with service mesh (Istio, Linkerd), Helm, or ArgoCD
• Familiarity with Zero Downtime Deployments, Canary Releases, and Blue/Green Deployments (a small health-gate sketch follows this list)
• Background in high-availability systems and incident response
• Prior experience in a SaaS, ML, or hospitality-tech environment
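To make the canary idea concrete, a small hypothetical health-gate sketch: poll a canary's health endpoint and only signal promotion if it stays healthy. The URL and thresholds are invented; a real pipeline would shift traffic via the load balancer or service mesh on success:

```python
# Hypothetical sketch: health-gating a canary before promotion.
# The URL and thresholds are made up; a real pipeline would act on the
# result by shifting traffic at the load balancer or service mesh.
import time

import requests

CANARY_URL = "https://canary.example.com/healthz"  # hypothetical endpoint

def canary_is_healthy(checks: int = 10, interval_s: float = 5.0) -> bool:
    for _ in range(checks):
        try:
            resp = requests.get(CANARY_URL, timeout=3)
            if resp.status_code != 200:
                return False  # any bad status fails the gate
        except requests.RequestException:
            return False      # network errors fail the gate
        time.sleep(interval_s)
    return True

if __name__ == "__main__":
    print("promote" if canary_is_healthy() else "roll back")
```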
Tools and Technologies You’ll Use:
• Cloud: AWS
• Containers: Docker, Kubernetes, Helm
• CI/CD: Jenkins, GitHub Actions
• IaC: Terraform, CloudFormation
• Monitoring: Prometheus, Grafana, CloudWatch
• Databases: MongoDB, MySQL
• Scripting: Bash, Python
• Collaboration: Git, Jira, Confluence, Slack
Why Join Us?
• Competitive salary and performance bonuses.
• Remote-friendly work culture.
• Opportunity to work on cutting-edge tech in AI and ML.
• Collaborative, high-growth startup environment.
• For more information, visit http://www.lodgiq.com
GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell); Python is a must.
- Collaborate with teams across the company (e.g., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives (a small backup sketch follows this list).
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux-based infrastructure.
- Excellent problem-solving and troubleshooting skills
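As one concrete (and purely illustrative) pairing of the Python and Google Cloud requirements, a minimal backup sketch using the google-cloud-storage client library (assumes application-default credentials; the bucket name is hypothetical):

```python
# Minimal sketch: backing up a file to Google Cloud Storage.
# Requires `pip install google-cloud-storage` and application-default
# credentials; the bucket name is a hypothetical example.
from google.cloud import storage

def backup_file(local_path: str, bucket_name: str = "example-dr-backups"):
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(local_path)        # object key mirrors the local path
    blob.upload_from_filename(local_path)
    print(f"uploaded {local_path} to gs://{bucket_name}/{local_path}")

if __name__ == "__main__":
    backup_file("etc/app/config.yaml")    # hypothetical file to back up
```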
Job Description:
• Drive end-to-end automation from GitHub/GitLab/Bitbucket to deployment, observability, and enabling SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of digital platforms and applications
• Solid understanding of DevSecOps workflows that support CI, CS, CD, CM, CT
• Deploy, configure, and manage SaaS and PaaS cloud platforms and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
• DevOps programming: writing scripts; building operations/server instance/app/DB monitoring tools
• Set up and manage continuous build and dev project management environments: Jenkins X/GitHub Actions/Tekton, Git, Jira
• Design secure networks, systems, and application architectures
• Collaborate with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Plan, research, and develop security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and best practices
• Installation and use of firewalls, data encryption, and other security products and procedures (a small encryption sketch follows this list)
• Maturity in understanding compliance, policy, and cloud governance, and the ability to identify and execute automation
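On the data-encryption point, a minimal sketch using the cryptography library's Fernet recipe for symmetric, authenticated encryption (in practice the key would come from a secrets manager or KMS, never be generated inline):

```python
# Minimal sketch: symmetric, authenticated encryption with Fernet.
# Requires `pip install cryptography`; in practice the key would be
# stored in a secrets manager / KMS, not generated inline like this.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # 32-byte urlsafe-base64 key
f = Fernet(key)

token = f.encrypt(b"customer record")  # ciphertext + auth tag + timestamp
plaintext = f.decrypt(token)           # raises InvalidToken on tampering
assert plaintext == b"customer record"
```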
• At Wesco, we discuss solutions more than problems. We celebrate innovation and creativity.
Job description
Problem Statement and Solution
Only 10% of India speaks English and 90% speak over 25 languages and 1000s of dialects. The internet has largely been in English. A good part of India is now getting internet connectivity thanks to cheap smartphones and Jio. The non-English speaking internet users will balloon to about 600 million users out of the total 750 million internet users in India by 2020. This will make the vernacular segment one of the largest segments in the world - almost 2x the size of the US population. The vernacular segment has very few products that they can use on the internet.
One large human need is that of sharing thoughts and connecting with people of the same community on the basis of language and common interests. Twitter serves this need globally but the experience is mostly in English. There’s a large unaddressed need for these vernacular users to express themselves in their mother tongue and connect with others from their community. Koo is a solution to this problem.
About Koo
Koo was founded in March 2020, as a micro-blogging platform in both Indian languages and English, which gives a voice to the millions of Indians who communicate in Indian languages.
Currently available in Assamese, Bengali, English, Hindi, Kannada, Marathi, Tamil and Telugu, Koo enables people from across India to express themselves online in their mother tongues. In a country where under 10% of the population speaks English as a native language, Koo meets the need for a social media platform that can deliver an immersive language experience to an Indian user, thereby enabling them to connect and interact with each other. The recently introduced ‘Talk to Type’ enables users to leverage the voice assistant to share their thoughts without having to type. In August 2021, Koo crossed 10 million downloads, in just 16 months of launch.
Koo has been available in Nigeria since June 2021.
Founding Team
Koo was founded by veteran internet entrepreneurs Aprameya Radhakrishna (CEO, TaxiForSure) and Mayank Bidawatka (co-founder, Goodbox; core team, redBus).
Technology Team & Culture
The technology team comprises sharp coders, technology geeks, and people who have been entrepreneurs or are entrepreneurial and extremely passionate about technology. Talent comes from the likes of Google, Walmart, redBus, and Dailyhunt. Anyone joining the technology team will have a lot to learn from their peers and mentors. Download our Android app and take a look at what we've built. The technology stack comprises a wide variety of cutting-edge technologies like Kotlin, Java 15, Reactive Programming, MongoDB, Cassandra, Kubernetes, AWS, NodeJS, Python, ReactJS, Redis, Aerospike, ML, Deep Learning, etc. We believe in giving a lot of independence and autonomy to ownership-driven individuals.
Technology skill sets required for a matching profile
- Experience of 3 to 7 years in a DevOps role, with at least one stint at a fast-paced startup (mandatory).
- Mandatory experience with containers, Kubernetes (EKS, set up from scratch), Istio, and microservices.
- Sound knowledge of technologies like Terraform, automation scripts, cron jobs, etc. Must have worked toward infrastructure as code, e.g., setting up new environments entirely from code.
- Knowledge of industry standards around monitoring, alerting, self-healing, high availability, auto-scaling, etc.
- Exhaustive experience with various cloud technologies (especially on AWS) like SQS, SNS, Elasticsearch, ElastiCache, Elastic Transcoder, VPC, Subnets, Security Groups, etc.
- Must have set up stable CI/CD pipelines capable of zero-downtime deployments using rolling updates, blue/green, or canary modes (see the rollout sketch after this list).
- Experience with VPN and LDAP solutions for secure login to infrastructure and providing SSO.
- Mastery of deploying and troubleshooting all layers of an application, from network, frontend, and backend to databases (MongoDB, Redis, Postgres, Cassandra, Elasticsearch, Aerospike, etc.).
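For the zero-downtime requirement, a hypothetical sketch using the official Kubernetes Python client: patch a Deployment's image and wait for the rollout to finish. The deployment name, namespace, and image are made up; `kubectl rollout status` does the same job from the CLI:

```python
# Hypothetical sketch: rolling update via the Kubernetes Python client.
# Requires `pip install kubernetes` and a working kubeconfig; the
# deployment name, namespace, and image are made-up examples.
import time

from kubernetes import client, config

def rolling_update(name="api", namespace="prod",
                   image="registry.example.com/api:v2"):
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Patch the first container's image; the Deployment's RollingUpdate
    # strategy then replaces pods gradually (zero downtime, given probes).
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": name, "image": image}]}}}}
    apps.patch_namespaced_deployment(name, namespace, patch)

    while True:
        dep = apps.read_namespaced_deployment(name, namespace)
        desired = dep.spec.replicas or 1
        if ((dep.status.updated_replicas or 0) == desired and
                (dep.status.available_replicas or 0) == desired):
            print("rollout complete")
            return
        time.sleep(5)

if __name__ == "__main__":
    rolling_update()
```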
Devops Engineer
Roles and Responsibilities:
As a DevOps Engineer, you’ll be responsible for ensuring that our products can be seamlessly deployed on infrastructure, whether it is on-prem or on public clouds.
- Create, Manage and Improve CI / CD pipelines to ensure our Platform and Applications can be deployed seamlessly
- Evaluate, Debug, and Integrate our products with various Enterprise systems & applications
- Build metrics, monitoring, logging, configurations, analytics and alerting for performance and security across all endpoints and applications
- Build and manage infrastructure-as-code deployment tooling, solutions, microservices and support services on multiple cloud providers and on-premises
- Ensure reliability, availability and security of our infrastructure and products
- Update our processes and design new processes as needed to optimize performance
- Automate our processes in compliance with our security requirements
- Manage code deployments, fixes, updates, and related processes
- Manage environments where we deploy our product, both to multiple clouds that we control and to client-managed environments
- Work with CI and CD tools, and source control such as Git and SVN.
DevOps Engineer
Skills/Requirements:
- 2+ years of experience in DevOps, SRE or equivalent positions
- Experience working with Infrastructure as Code / Automation tools
- Experience in deploying, analysing, and debugging on multiple environments (AWS, Azure, private clouds, data centres, etc.), Linux/Unix administration, and databases such as MySQL, PostgreSQL, NoSQL, DynamoDB, Cosmos DB, MongoDB, Elasticsearch, and Redis (both managed instances and self-installed).
- Knowledge of scripting languages such as Python, PowerShell and / or Bash.
- Hands-on experience with the following is a must: Docker, Kubernetes, ELK Stack (see the log-query sketch after this list)
- Hands-on experience with at least three of the following: Terraform, AWS CloudFormation, Jenkins, Wazuh SIEM, Ansible, Ansible Tower, Puppet, Chef
- Good troubleshooting skills with the ability to spot issues.
- Strong communication skills and documentation skills.
- Experience with deployments with Fortune 500 or other large Global Enterprise clients is a big plus
- Experience with participating in an ISO27001 certification / renewal cycle is a plus.
- Understanding of Information Security fundamentals and compliance requirements
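Since the ELK stack appears as a must-have above, a minimal sketch that pulls recent error logs from Elasticsearch with the official Python client (the host, index pattern, and field names are hypothetical and depend on how logs are indexed):

```python
# Minimal sketch: pulling recent error logs from Elasticsearch.
# Requires `pip install elasticsearch`; the host, index pattern, and
# field names are hypothetical and depend on the log pipeline.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical host

resp = es.search(
    index="app-logs-*",
    query={"bool": {"must": [
        {"match": {"level": "ERROR"}},
        {"range": {"@timestamp": {"gte": "now-15m"}}},
    ]}},
    size=20,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message", ""))
```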
Work From Home
Startup background is preferred
Company Location: Noida
- Hands-on knowledge of various CI/CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/GitHub, SonarQube), including setting up automated build-deployment pipelines.
- Very good knowledge of scripting tools and languages such as Shell, Perl, or Python, plus YAML/Groovy, and build tools such as Maven/Gradle.
- Hands-on knowledge of containerization and orchestration tools such as Docker, OpenShift, and Kubernetes.
- Good knowledge of configuration management tools such as Ansible and Puppet/Chef, and experience setting up monitoring tools (Splunk/Geneos/New Relic/ELK).
- Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
- Hands-on knowledge of Cloud technology (preferably GCP), including various computing services and infrastructure setup using Terraform.
- Should have a basic understanding of networking, certificate management, Identity and Access Management, and information security/encryption concepts.
- Should support day-to-day tasks related to platform and environment upkeep, such as upgrades, patching, migration, and system/interface integration.
- Should have experience working in an Agile-based SDLC delivery model, multi-tasking, and supporting multiple systems/apps.
- Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
- Should have worked on standard release, change and incident management tools such as ServiceNow/Remedy or similar
Q2 is seeking a team-focused Lead Release Engineer with a passion for managing releases, to ensure we release quality software developed using Agile Scrum methodology. Working within the Development team, the Release Manager will operate in a fast-paced environment with Development, Test Engineering, IT, Product Management, Design, Implementations, Support, and other internal teams to drive efficiency, transparency, quality, and predictability in our software delivery pipeline.
RESPONSIBILITIES:
- Provide leadership on cross-functional development focused software release process.
- Management of the product release cycle to new and existing clients including the build release process and any hotfix releases
- Support end-to-end process for production issue resolution including impact analysis of the issue, identifying the client impacts, tracking the fix through dev/testing and deploying the fix in various production branches.
- Work with engineering team to understand impacts of branches and code merges.
- Identify, communicate, and mitigate release delivery risks.
- Measure and monitor progress to ensure product features are delivered on time.
- Lead recurring release reporting/status meetings to include discussion around release scope, risks and challenges.
- Responsible for planning, monitoring, executing, and implementing the software release strategy.
- Establish completeness criteria for the release of successfully tested software components and their dependencies to gate the delivery of releases to Implementation groups
- Serve as a liaison between business units to guarantee smooth and timely delivery of software packages to our Implementations and Support teams
- Create and analyze operational trends and data used for decision making, root cause analysis and performance measurement.
- Build partnerships, work collaboratively, and communicate effectively to achieve shared objectives.
- Make improvements to processes to improve the experience and delivery for internal and external customers.
- Responsible for ensuring that all security, availability, confidentiality and privacy policies and controls are adhered to.
EXPERIENCE AND KNOWLEDGE:
- Bachelor’s degree in Computer Science, or related field or equivalent experience.
- Minimum 4 years of related experience in a product release management role.
- Excellent understanding of software delivery lifecycle.
- Technical Background with experience in common Scrum and Agile practices preferred.
- Deep knowledge of software development processes, CI/CD pipelines and Agile Methodology
- Experience with tools like Jenkins, Bitbucket, Jira and Confluence.
- Familiarity with enterprise software deployment architecture and methodologies.
- Proven ability in building effective partnership with diverse groups in multiple locations/environments
- Ability to convey technical concepts to business-oriented teams.
- Capable of assessing and communicating risks and mitigations while managing ambiguity.
- Experience managing customer and internal expectations while understanding the organizational and customer impact.
- Strong organizational, process, leadership, and collaboration skills.
- Strong verbal, written, and interpersonal skills.
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The
candidate will be responsible for automating the deployment of cloud infrastructure and services to
support application development and hosting (architecting, engineering, deploying, and operationally
managing the underlying logical and physical cloud computing infrastructure).
Location: Bangalore
Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization.
● Hiring, developing, and cultivating a high-performing and reliable cloud support team.
● Building and operating complex CI/CD pipelines at scale.
● Work with GCP services: Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, and networking in general (a small Pub/Sub sketch follows this section).
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and best practices in cloud security.
● Ensure scaled database setup/monitoring with near-zero downtime.
Key Skills:
● Hands-on software development experience in Python, NodeJS, or Java
● 5+ years of Linux/Unix administration, monitoring, reliability, and security of Linux-based, online, high-traffic services and Web/eCommerce properties
● 5+ years of production experience in large-scale cloud-based infrastructure (GCP preferred)
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud: EC2, S3 buckets, RDS
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools
● Hands-on experience with configuration management (Chef/Ansible)
● Experience in designing High Availability infrastructure and planning for Disaster Recovery solutions
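As a small, purely illustrative taste of the GCP service work above, a sketch that publishes a message to Pub/Sub with the official client library (the project and topic IDs are placeholders):

```python
# Minimal sketch: publishing a message to Google Cloud Pub/Sub.
# Requires `pip install google-cloud-pubsub` and application-default
# credentials; the project and topic IDs are placeholders.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "orders")  # placeholders

# Attributes must be strings; the payload is raw bytes.
future = publisher.publish(topic_path, b"order-created", order_id="1234")
print("published message id:", future.result())  # blocks until acked
```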
1. Should have worked with AWS, Docker, and Kubernetes.
2. Should have worked with a scripting language.
3. Should know how to monitor system performance, CPU, and memory (see the sketch after this list).
4. Should be able to troubleshoot.
5. Should have knowledge of automated deployment.
6. Proficient in one programming language; Python preferred.
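For point 3, a minimal sketch of CPU/memory monitoring with psutil (the thresholds are arbitrary examples, and a real setup would feed an alerting system rather than print):

```python
# Minimal sketch: basic CPU/memory monitoring with psutil.
# Requires `pip install psutil`; the thresholds are arbitrary examples.
import psutil

def check_system(cpu_limit=80.0, mem_limit=90.0):
    cpu = psutil.cpu_percent(interval=1)   # sampled over 1 second
    mem = psutil.virtual_memory().percent
    print(f"cpu={cpu:.1f}% mem={mem:.1f}%")
    if cpu > cpu_limit or mem > mem_limit:
        print("ALERT: resource usage over threshold")  # hook alerting here

if __name__ == "__main__":
    check_system()
```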
Skill: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain and optimize existing infrastructure (a small scaling sketch follows this list).
➢ Conceptualize, architect, and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect, and build a containerized infrastructure using Docker, Mesosphere, or similar platforms.
➢ Work with developers to institute systems, policies, and workflows that allow for rollback of deployments.
➢ Triage releases of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist developers and on-calls for other teams with post-mortems, follow-up, and review of issues affecting production availability.
➢ Establish and enforce systems monitoring tools and standards.
➢ Establish and enforce Risk Assessment policies and standards.
➢ Establish and enforce Escalation policies and standards.
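Tying back to the first bullet, a hypothetical sketch that nudges an EC2 Auto Scaling group's desired capacity from a utilization figure, via boto3 (the group name and thresholds are made up; production setups would normally use target-tracking scaling policies instead):

```python
# Hypothetical sketch: adjusting an Auto Scaling group's desired capacity.
# Assumes AWS credentials; the group name and thresholds are made up.
# Production setups would normally use target-tracking scaling policies.
import boto3

asg = boto3.client("autoscaling", region_name="us-east-1")
GROUP = "example-web-asg"  # hypothetical group name

def scale_for_utilization(cpu_percent: float):
    desc = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])
    group = desc["AutoScalingGroups"][0]
    desired = group["DesiredCapacity"]

    if cpu_percent > 75 and desired < group["MaxSize"]:
        desired += 1   # scale out under load
    elif cpu_percent < 25 and desired > group["MinSize"]:
        desired -= 1   # scale in when idle

    asg.set_desired_capacity(AutoScalingGroupName=GROUP,
                             DesiredCapacity=desired)

if __name__ == "__main__":
    scale_for_utilization(cpu_percent=80.0)
```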
