
Intuitive is the fastest-growing top-tier cloud solutions and services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Excellent communication skills
Open to working in the EST time zone (6 pm to 3 am)
Technical Skills:
· In-depth understanding of DevSecOps processes and governance
· Understanding of various branching strategies
· Hands-on experience with various testing and scanning tools (e.g., SonarQube, Snyk, Black Duck)
· Expertise with one or more CI/CD platforms (e.g., Azure DevOps, GitLab, GitHub Actions)
· Expertise within one CSP and working knowledge of a second (Azure, AWS, GCP)
· Proficient with Terraform
· Hands-on experience with Kubernetes
· Proficient with Git version control
· Hands-on experience with monitoring/observability tools (Splunk, Datadog, Dynatrace, etc.)
· Hands-on experience with configuration management platforms (Chef, SaltStack, Ansible, etc.)
· Hands-on experience with GitOps
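The GitOps item above boils down to one idea: Git holds the desired state, and a controller continuously reconciles live state toward it. As an illustrative sketch only (the dicts are hypothetical stand-ins for Kubernetes manifests, not a real controller or any specific tool's API):

```python
# Toy GitOps-style reconcile step: diff desired state (from Git) against
# live state and compute the actions a controller would take.
# The dict contents below are hypothetical stand-ins for manifests.

def reconcile(desired: dict, live: dict) -> list:
    """Return (action, name) pairs needed to converge live -> desired."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))
        elif live[name] != spec:
            actions.append(("update", name))
    for name in live:
        if name not in desired:
            actions.append(("delete", name))
    return actions

desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
live = {"web": {"replicas": 2}, "worker": {"replicas": 1}}
print(reconcile(desired, live))
```

Real tools (Argo CD, Flux) run this loop continuously against a cluster; the point here is only the diff-and-converge shape of the workflow.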

About Intuitive Technology Partners
Your challenge
As a DevOps Engineer, you’re responsible for automating the deployment of our software solutions. You interact with software engineers, functional product managers, and ICT professionals daily. Using your technical skills, you provide internal tooling for development and QA teams around the globe.
We believe in an integrated approach, where every team member is involved in all steps of the software development life cycle: analysis, architectural design, programming, and maintenance. We expect you to be the proud owner of your work and take responsibility for it.
Together with a tight-knit group of 5-6 team players, you develop, maintain and support key elements of our infrastructure:
- Continuous integration and production systems
- Release and build management
- Package management
- Containerization and orchestration
Your team
As our new DevOps Engineer, you’ll be part of a large, fast-growing, international team located in Belgium (Antwerp, Ghent, Wavre), Spain (Barcelona), Ukraine (Lviv), and the US (Atlanta). Software Development creates leading software solutions that make a difference to our customers. We make smart, robust, and scalable software to solve complex supply chain planning challenges.
Your profile
We are looking for someone who meets the following qualifications:
- A bachelor’s or master’s degree in a field related to Computer Science.
- Pride in developing high-quality solutions and taking responsibility for their maintenance.
- A minimum of 6 years' experience in a similar role.
- Good knowledge of the following technologies: Kubernetes, PowerShell or bash scripting, Jenkins, Azure Pipelines or similar automation systems, Git.
- Familiarity with the Cloud Native Landscape. Terraform, Ansible, and Helm are tools we use daily.
- Supportive towards users.
Bonus points if you have:
- A background in DevOps, ICT, or technical support.
- Customer support experience or other relevant work experience, including internships.
- Understanding of Windows networks and Active Directory.
- Experience with migrating applications to the cloud.
- Programming skills.
Soft skills
Teamwork
Pragmatic attitude
Passionate
Analytical thinker
Tech Savvy
Fast Learner
Hard skills
Kubernetes
CI/CD
Git
PowerShell
Your future
At OMP, we’re eager to find your best career fit. Our talent management program supports your personal development and empowers you to build a career in line with your ambitions.
Many of our team members who start as DevOps Engineers grow into roles in DevOps/Cloud architecture, project management, or people management.
Experience: 8-10 years
Notice period: 15 days max
Must-haves*
1. Knowledge of database/NoSQL DB hosting fundamentals (RDS multi-AZ, DynamoDB, MongoDB, and such)
2. Knowledge of different storage platforms on AWS (EBS, EFS, FSx) and mounting persistent volumes with Docker containers
3. In-depth knowledge of security principles on AWS (WAF, DDoS protection, Security Groups, NACLs, IAM groups, and SSO)
4. Knowledge of CI/CD platforms is required (Jenkins, GitHub Actions, etc.), including migration of AWS CodePipeline pipelines to GitHub Actions
5. Knowledge of a wide variety of AWS services (SNS, SES, SQS, Athena, Kinesis, S3, ECS, EKS, etc.) is required
6. Knowledge of an Infrastructure as Code tool is required. We use CloudFormation (Terraform is a plus); ideally, we would like to migrate from CloudFormation to Terraform
7. Setting CloudWatch alarms and SMS/email/Slack alerts
8. Some knowledge of configuring a monitoring tool such as Prometheus, Dynatrace, etc. (we currently use Datadog and CloudWatch)
9. Experience with any CDN provider's configuration (Cloudflare, Fastly, or CloudFront)
10. Experience with either Python or Go as a scripting language
11. Experience with Git branching strategies
12. Container hosting knowledge on both Windows and Linux
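For must-have 7, a CloudWatch alarm that notifies an SNS topic (which can fan out to SMS/email/Slack) is configured through a handful of parameters. A hedged sketch of the parameter dict as it would be passed to boto3's `put_metric_alarm` — the alarm name, threshold, instance ID, and topic ARN are illustrative placeholders, and the actual API call (which needs AWS credentials) is shown commented out:

```python
# Build the parameter dict for a CloudWatch CPU-utilization alarm that
# notifies an SNS topic. All identifiers below are placeholders.

def cpu_alarm_params(instance_id: str, topic_arn: str, threshold: float = 80.0) -> dict:
    return {
        "AlarmName": f"high-cpu-{instance_id}",
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,               # evaluate over 5-minute periods
        "EvaluationPeriods": 2,      # require 2 consecutive breaches
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # SNS topic fans the alert out
    }

params = cpu_alarm_params("i-0123456789abcdef0",
                          "arn:aws:sns:us-east-1:123456789012:alerts")
# With credentials configured, this would be:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**params)
print(params["AlarmName"])
```

Slack delivery is typically an SNS subscription to a Lambda or an AWS Chatbot channel; the alarm itself only knows about the topic ARN.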
The list below is *nice to have*:
1. Experience integrating code quality tools (SonarQube, Netsparker, etc.) with CI/CD
2. Kubernetes
3. CDNs other than CloudFront (Cloudflare, Fastly, etc.)
4. Collaboration with multiple teams
5. GitOps
Position: Senior DevOps Engineer (Azure Cloud Infra & Application deployments)
Location: Hyderabad
Hiring a Senior DevOps Engineer with 2 to 5 years of experience.
Primary Responsibilities
Strong programming experience in PowerShell and batch scripts.
Strong expertise in Azure DevOps, GitLab, CI/CD, Jenkins, GitHub Actions, and Azure infrastructure.
Strong experience configuring infrastructure and deploying applications with Kubernetes, Docker and Helm charts, App Services, serverless, SQL Database, cloud services, and container deployments.
Continuous integration, deployment, and version control (Git/ADO).
Strong experience managing and configuring RBAC, managed identities, and security best practices for cloud environments.
Strong verbal and written communication skills.
Experience with agile development processes.
Good analytical skills.
Additional Responsibilities
Familiarity with various design and architecture patterns.
Work with modern frameworks and design patterns.
Experience with cloud applications on Azure/AWS. Should have experience developing solutions and plugins, and should have used XRM Toolbox/ToolKit.
Experience with Customer Portal, FetchXML, Power Apps, and Power Automate is good to have.


**THIS IS A 100% WORK FROM OFFICE ROLE**
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
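The automated health checks mentioned above can be sketched as a small classifier over an endpoint's response; the thresholds, status names, and the example URL are illustrative assumptions, not any particular monitoring tool's API:

```python
# Minimal health-check classifier: map an HTTP status code and response
# latency to a health state. Thresholds are illustrative assumptions.

def classify(status_code: int, latency_ms: float, slow_ms: float = 500.0) -> str:
    if status_code >= 500:
        return "down"
    if status_code >= 400 or latency_ms > slow_ms:
        return "degraded"
    return "healthy"

# A periodic checker would probe an endpoint and feed the result in, e.g.:
# import time, urllib.request
# start = time.monotonic()
# resp = urllib.request.urlopen("https://example.com/health", timeout=5)
# print(classify(resp.status, (time.monotonic() - start) * 1000))

print(classify(200, 120.0))   # healthy
print(classify(200, 900.0))   # degraded
print(classify(503, 50.0))    # down
```

In practice this logic lives inside a monitoring agent or a Prometheus alerting rule; the sketch just shows the status/latency gating that such checks encode.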
ROLE and RESPONSIBILITIES:
• Understanding customer requirements and project KPIs
• Implementing various development, testing, automation tools, and IT infrastructure
• Planning the team structure, activities, and involvement in project management
• Managing stakeholders and external interfaces
• Setting up tools and required infrastructure
• Defining and setting development, test, release, update, and support processes for DevOps operations
• Reviewing, verifying, and validating the software code developed in the project
• Troubleshooting and fixing code bugs
• Monitoring processes throughout the lifecycle for adherence, and updating or creating new processes to drive improvement and minimize waste
• Encouraging and building automated processes wherever possible
• Identifying and deploying cybersecurity measures by continuously performing
vulnerability assessment and risk management
• Incident management and root cause analysis
• Coordination and communication within the team and with customers
• Selecting and deploying appropriate CI/CD tools
• Striving for continuous improvement and building a continuous integration, continuous delivery, and continuous deployment pipeline (CI/CD pipeline)
• Mentoring and guiding the team members
• Monitoring and measuring customer experience and KPIs
• Managing periodic reporting on the progress to the management and the customer
Essential Skills and Experience
Technical Skills
• Proven 3+ years of experience as a DevOps engineer
• A bachelor's degree or higher qualification in computer science
• The ability to code and script in multiple languages, such as Python, C#, Java, Perl, and Ruby, and to work with databases such as SQL Server, MySQL, and NoSQL stores
• An understanding of security best practices and of automating security testing and updates in CI/CD (continuous integration, continuous deployment) pipelines
• The ability to deploy monitoring and logging infrastructure using appropriate tools
• Proficiency in container frameworks
• Mastery in the use of infrastructure automation toolsets like Terraform, Ansible, and command line interfaces for Microsoft Azure, Amazon AWS, and other cloud platforms
• Certification in Cloud Security
• An understanding of various operating systems
• A strong focus on automation and agile development
• Excellent communication and interpersonal skills
• An ability to work in a fast-paced environment and handle multiple projects
simultaneously
OTHER INFORMATION
The DevOps Engineer will also be expected to demonstrate their commitment:
• to gedu values and regulations, including the equal opportunities policy;
• to gedu's social, economic, and environmental responsibilities, minimising environmental impact in the performance of the role and actively contributing to the delivery of gedu's Environmental Policy;
• to their health and safety responsibilities, ensuring their contribution to a safe and secure working environment for staff, students, and other visitors to the campus.
- Provision dev/test/prod infrastructure as code using IaC tooling
- Good knowledge of Terraform
- In-depth knowledge of security and IAM/role-based access control in Azure, and management of Azure Application/Network Security Groups, Azure Policy, and Azure Management Groups and Subscriptions
- Experience with Azure and GCP compute, storage, and networking (we can also consider GCP)
- Experience in working with ADLS Gen2, Databricks and Synapse Workspace
- Experience supporting cloud development pipelines using Git, CI/CD tooling, Terraform and other Infrastructure as Code tooling as appropriate
- Configuration management (e.g., Jenkins, Ansible, Git, etc.)
- General automation including Azure CLI, or Python, PowerShell and Bash scripting
- Experience with Continuous Integration/Continuous Delivery models
- Knowledge of and experience in resolving configuration issues
- Understanding of software and infrastructure architecture
- Experience in PaaS, Terraform, and AKS
- Monitoring, alerting, and logging tools, and build/release processes
- Understanding of computing technologies across Windows and Linux
Hands on experience in:
- Deploying, managing, securing, and patching enterprise applications at large scale in the cloud, preferably AWS
- Leading end-to-end DevOps projects with modern tools, encompassing both applications and infrastructure
- AWS CodeDeploy, CodeBuild, Jenkins, SonarQube
- Incident management and root cause analysis
- Strong understanding of immutable infrastructure and Infrastructure as Code concepts; participating in capacity planning and provisioning of new resources; importing already-deployed infrastructure into IaC
- Utilizing AWS services such as EC2, S3, IAM, Route 53, RDS, VPC, NAT/Internet Gateways, Lambda, Load Balancers, CloudWatch, and API Gateway
- Managing multi-cluster container environments on AWS ECS (ECS on EC2 and Fargate, with service discovery using Route 53)
- Monitoring/analytics tools like Nagios/Datadog and logging tools like Logstash/Sumo Logic
- Simple Notification Service (SNS)
- Version control systems: Git, GitLab, Bitbucket
- Participating in security audits of cloud infrastructure
- Exceptional documentation and communication skills
- Readiness to work in shifts
- Knowledge of Akamai is a plus
- Microsoft Azure is a plus
- Adobe AEM is a plus
- AWS Certified DevOps Engineer - Professional is a plus
- 3+ years experience leading a team of DevOps engineers
- 8+ years experience managing DevOps for large engineering teams developing cloud-native software
- Strong in networking concepts
- In-depth knowledge of AWS and cloud architectures/services.
- Experience within the container and container orchestration space (Docker, Kubernetes)
- Passion for CI/CD pipeline using tools such as Jenkins etc.
- Familiarity with configuration management and IaC tools like Ansible, Terraform, etc.
- Proven record of measuring and improving DevOps metrics
- Familiarity with observability tools and experience setting them up
- Passion for building tools and productizing services that empower development teams.
- Excellent knowledge of Linux command-line tools and ability to write bash scripts.
- Strong in Unix/Linux administration and management
KEY ROLES/RESPONSIBILITIES:
- Own and manage the entire cloud infrastructure
- Create the entire CI/CD pipeline to build and release
- Explore new technologies and tools and recommend those that best fit the team and organization
- Own and manage the site reliability
- Strong decision-making skills and metric-driven approach
- Mentor and coach other team members
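The "create the entire CI/CD pipeline to build and release" responsibility above amounts to running ordered, gated stages. A minimal sketch of that shape; the stage names and callables are illustrative placeholders, not any specific CI system's API:

```python
# Toy CI/CD pipeline runner: execute stages in order and stop on the
# first failure, mimicking how a build/release pipeline gates each step.

def run_pipeline(stages):
    """stages: list of (name, callable) pairs; returns (succeeded, results)."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            return False, results  # fail fast: later stages are skipped
    return True, results

stages = [
    ("build", lambda: True),    # placeholder for compile/package
    ("test", lambda: True),     # placeholder for the test suite
    ("deploy", lambda: True),   # placeholder for the release step
]
ok, results = run_pipeline(stages)
print(ok, results)
```

Jenkins, GitHub Actions, and Azure Pipelines express the same gating declaratively (stages/jobs with dependencies); the fail-fast ordering is the property this sketch demonstrates.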
Job role
Anaxee is India's REACH Engine! To provide access across India, we need to build highly scalable technology, which needs scalable cloud infrastructure. We're seeking an experienced cloud engineer with expertise in AWS (Amazon Web Services), GCP (Google Cloud Platform), networking, security, and database management, who will manage, maintain, and monitor our cloud platforms and ensure their security.
You will be surrounded by people who are smart and passionate about the work they are doing.
Every day will bring new and exciting challenges to the job.
Job Location: Indore | Full Time | Experience: 1 year and Above | Salary ∝ Expertise | Rs. 1.8 LPA to Rs. 2.64 LPA
About the company:
Anaxee Digital Runners is building India's largest last-mile Outreach & data collection network of Digital Runners (shared feet-on-street, tech-enabled) to help Businesses & Consumers reach the remotest parts of India, on-demand.
We want to make REACH across India (remotest places), as easy as ordering pizza, on-demand. Already serving 11000 pin codes (57% of India) | Anaxee is one of the very few venture-funded startups in Central India | Website: www.anaxee.com
Important: Check out our company pitch (6 min video) to understand this goal - https://www.youtube.com/watch?v=7QnyJsKedz8
Responsibilities (You will enjoy the process):
#Triage and troubleshoot issues on AWS and GCP, and participate in a rotating on-call schedule to address urgent issues quickly
#Develop and leverage expert-level knowledge of supported applications and platforms in support of project teams (architecture guidance, implementation support) or business units (analysis).
#Monitoring the process on production runs, communicating the information to the advisory team, and raising production support issues to the project team.
#Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
#Developing and implementing technical efforts to design, build, and deploy AWS and GCP applications at the direction of lead architects, including large-scale data processing and advanced analytics
#Participate in all aspects of the SDLC for AWS and GCP solutions, including planning, requirements, development, testing, and quality assurance
#Troubleshoot incidents, identify root cause, fix, and document problems, and implement preventive measures
#Educate teams on the implementation of new cloud-based initiatives, providing associated training as required
#Build and maintain operational tools for deployment, monitoring, and analysis of AWS and GCP infrastructure and systems; Design, deploy, maintain, automate & troubleshoot virtual servers and storage systems, firewalls, and Load Balancers in our hybrid cloud environment (AWS and GCP)
What makes a great DevOps Engineer (Cloud) for Anaxee:
#Candidate must have sound knowledge, and hands-on experience, in GCP (Google Cloud Platform) and AWS (Amazon Web Services)
#Good hands-on experience with the Linux operating system and common distributions, e.g., Ubuntu, CentOS, RHEL/Red Hat
#1+ years of experience in the industry
#Bachelor's degree preferred with Science/Maths background (B.Sc/BCA/B.E./B.Tech)
#Enthusiasm to learn new software, take ownership, and a genuine curiosity about related domains such as cloud, hosting, programming, software development, and security
#Demonstrable skills troubleshooting a wide range of technical problems at the application and system level, plus strong organizational skills and an eye for detail
#Prior knowledge of risk-chain is an added advantage
#AWS/GCP certifications are a plus
#Previous startup experience would be a huge plus.
The ideal candidate must be experienced in cloud-based tech, with a firm grasp of emerging technologies, platforms, and applications, and the ability to customize them to help our business become more secure and efficient. From day one, you'll have an immediate impact on the day-to-day efficiency of our IT operations, and an ongoing impact on our overall growth.
What we offer
#Startup Flexibility
#Exciting challenges to learn, grow, and implement new ideas
#ESOPs (Employee Stock Ownership Plans)
#Great working atmosphere in a comfortable office
#And an opportunity to get associated with a fast-growing VC-funded startup.
What happens after you apply?
You will receive an acknowledgment email with company details.
If you are shortlisted, our HR team will get in touch with you (call, email, WhatsApp) within a couple of days.
All remaining information will then be communicated via our AMS.
Our expectations before/after you click “Apply Now”
Read about Anaxee: http://www.anaxee.com/
Watch this six-minute pitch to get a better understanding of what we are into: https://www.youtube.com/watch?v=7QnyJsKedz8
Let's dive into detail (Company Presentation): https://bit.ly/anaxee-deck-brands
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines, and cloud infrastructure, to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines, and highly scalable backend development. The successful candidates will be working in a team of highly skilled and experienced developers, data scientists, and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes, and methodologies
- Experience with AWS CloudFormation and the AWS CLI is essential
- The ability to work to project deadlines efficiently and with minimum guidance
- A positive attitude and enjoys working within a global distributed team
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, Serverless stacks and containers infrastructure
- An interest in healthcare and medical sectors
- A technical degree with 4+ years of infrastructure and automation experience

Required Skills and Experience
- 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
- 4+ years of experience in continuous integration/deployment, and software tooling development experience with Python, shell scripts, etc.
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge and experience with cloud orchestration tools such as AWS CloudFormation, Terraform, etc.
- Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
Good to have:
- Strong understanding of security concepts and methodologies, such as SSH, public-key encryption, access credentials, and certificates, and the ability to apply them.
- Knowledge of administering databases such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.

