
Must Have Skills
- AWS Solutions Architect and/or DevOps certification, Professional preferred
- BS level technical degree or equivalent experience; Computer Science or Engineering background preferred
- Hands-on technical expertise with Amazon Web Services (AWS), including but not limited to EC2, VPC, IAM, security groups, ELB/NLB/ALB, internet gateways, S3, EBS, and EFS.
- Experience migrating and deploying applications to the cloud, re-engineering applications for the cloud, and setting up OS and application environments in virtualized cloud infrastructure
- DevOps automation, CI/CD, infrastructure/services provisioning, application deployment and configuration
- DevOps toolsets including Ansible, Jenkins, XL Deploy, and XL Release
- Deployment and configuration of Java/WildFly, Spring Boot, JavaScript/Node.js, and Ruby applications and middleware
- Extensive scripting in shell (Bash) and Python
- Agile software development
- Excellent written and verbal communication, presentation, and collaboration skills
- Team leadership skills
- Linux (RHEL) administration/engineering
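Hands-on AWS work of the kind listed above often begins with small boto3 scripts. A minimal sketch (assuming boto3 is installed and AWS credentials are configured; all names illustrative) that inventories EC2 instances and their security groups:

```python
def summarize_instances(reservations):
    """Flatten EC2 describe_instances() output into simple rows.

    `reservations` is the "Reservations" list from a describe_instances
    response; returns one dict per instance with its id, state, and the
    names of its security groups.
    """
    rows = []
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            rows.append({
                "id": instance["InstanceId"],
                "state": instance["State"]["Name"],
                "security_groups": [
                    sg["GroupName"] for sg in instance.get("SecurityGroups", [])
                ],
            })
    return rows


def inventory(ec2_client):
    """Collect a full inventory using a boto3 EC2 client (paginated)."""
    rows = []
    paginator = ec2_client.get_paginator("describe_instances")
    for page in paginator.paginate():
        rows.extend(summarize_instances(page["Reservations"]))
    return rows
```

With credentials in place, `inventory(boto3.client("ec2"))` returns one row per instance; the client is passed in so the parsing logic stays testable offline.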
Experience:
- Deploying, configuring, and supporting large-scale monolithic and microservices-based SaaS applications
- Working as both an infrastructure and application migration specialist
- Identifying and documenting application requirements for network, F5, IAM, and security groups
- Implementing DevOps practices such as infrastructure as code, continuous integration, and automated deployment
- Working with technology leadership to understand business goals and requirements
- Experience with continuous integration tools
- Experience with configuration management platforms
- Writing and diagnosing issues with complex shell scripts
- Strong practical application development experience on Linux and Windows-based systems
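Deployment support of the kind described above often reduces to a health-check loop with exponential backoff. A minimal Python sketch; the URL and endpoint are hypothetical, not part of any posting:

```python
import time
import urllib.request


def backoff_delays(retries, base=1.0, cap=30.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ... capped."""
    return [min(base * (2 ** i), cap) for i in range(retries)]


def wait_until_healthy(url, retries=5, sleep=time.sleep):
    """Poll a health endpoint until it returns HTTP 200 or retries run out."""
    for delay in backoff_delays(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # connection refused / timeout: service not up yet
        sleep(delay)
    return False
```

The `sleep` parameter is injectable so the loop can be exercised in tests without real waits, e.g. `wait_until_healthy("http://app.internal/healthz", sleep=lambda d: None)`.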

About Intuitive Technology Partners
1. Candidate must be from a product-based company with experience handling large-scale production traffic.
2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).
3. Candidate must have solid experience with Kubernetes.
4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef/Puppet; Prometheus, Grafana, etc. are an add-on.
5. Candidate must be an individual contributor with strong ownership.
6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.
7. Candidate must have working knowledge of Go/Python and Java.
8. Candidate should have working experience with a cloud platform (AWS)
9. Candidate should have a minimum of 1.5 years' tenure per organization and a clear reason for relocation
About the Role:
We are looking for a skilled AWS DevOps Engineer to join our Cloud Operations team in Bangalore. This hybrid role is ideal for someone with hands-on experience in AWS and a strong background in application migration from on-premises to cloud environments. You'll play a key role in driving cloud adoption, optimizing infrastructure, and ensuring seamless cloud operations.
Key Responsibilities:
- Manage and maintain AWS cloud infrastructure and services.
- Lead and support application migration projects from on-prem to cloud.
- Automate infrastructure provisioning using Infrastructure as Code (IaC) tools.
- Monitor cloud environments and optimize cost, performance, and reliability.
- Collaborate with development, operations, and security teams to implement DevOps best practices.
- Troubleshoot and resolve infrastructure and deployment issues.
Required Skills:
- 3–5 years of experience in an AWS cloud environment.
- Proven experience with on-premises to cloud application migration.
- Strong understanding of AWS core services (EC2, VPC, S3, IAM, RDS, etc.).
- Solid scripting skills (Python, Bash, or similar).
Good to Have:
- Experience with Terraform for Infrastructure as Code.
- Familiarity with Kubernetes for container orchestration.
- Exposure to CI/CD tools like Jenkins, GitLab, or AWS CodePipeline.
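Cost optimization, one of the responsibilities above, often starts by flagging underutilized instances. A minimal, illustrative helper; in practice the CPU datapoints would come from CloudWatch `GetMetricStatistics`:

```python
def underutilized(cpu_by_instance, threshold_pct=10.0):
    """Flag instances whose average CPU sits below the threshold.

    `cpu_by_instance` maps instance id -> list of CPU datapoints
    (percent), e.g. collected from CloudWatch. Instances with no
    datapoints are skipped; ids are returned sorted for stable output.
    """
    flagged = [
        instance_id
        for instance_id, points in cpu_by_instance.items()
        if points and sum(points) / len(points) < threshold_pct
    ]
    return sorted(flagged)
```

Flagged instances are candidates for downsizing or termination, subject to review.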
5 to 10 years of software development & coding experience
Experience with Infrastructure as Code development (automation, CI/CD); AWS CloudFormation, AWS CodeBuild, and AWS CodeDeploy are must-haves
Experience troubleshooting AWS policy- or permissions-related errors during resource deployments
Programming experience, preferably in Python, PowerShell, and Bash
Experience with application build automation tools like Apache Maven, Jenkins, Concourse, and Git supporting continuous integration / continuous deployment (CI/CD); GitHub and GitHub Actions for deployments are must-have skills (Maven, Jenkins, etc. are nice to have)
Configuration management experience (Chef, Puppet, or Ansible)
Experience working in a development shop, or hands-on SDLC experience
Familiarity with writing software and test plans, and with automating and releasing using modern development methods
AWS certification at an appropriate level
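The CloudFormation-plus-boto3 combination above can be sketched as follows; the template, stack name, and bucket name are illustrative, and the snippet assumes boto3 is installed with credentials configured:

```python
import json

# A minimal CloudFormation template: one S3 bucket. The bucket name
# "example-artifacts-bucket" is illustrative, not a real resource.
TEMPLATE = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-artifacts-bucket"},
        }
    },
}


def deploy_stack(cfn_client, stack_name, template=TEMPLATE):
    """Create the stack and block until creation completes.

    `cfn_client` is a boto3 CloudFormation client, passed in so the
    function can be exercised with a stub in tests.
    """
    cfn_client.create_stack(
        StackName=stack_name,
        TemplateBody=json.dumps(template),
    )
    waiter = cfn_client.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)
```

Against a real account this would be `deploy_stack(boto3.client("cloudformation"), "demo-stack")`; the waiter raises if stack creation fails or rolls back.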
Description
Do you dream about code every night? If so, we’d love to talk to you about a new product that we’re making to enable delightful testing experiences at scale for development teams who build modern software solutions.
What You'll Do
Troubleshooting and analyzing technical issues raised by internal and external users.
Working with Monitoring tools like Prometheus / Nagios / Zabbix.
Developing automation in one or more technologies such as Terraform, Ansible, CloudFormation, Puppet, or Chef (preferred).
Monitor infrastructure alerts and take proactive action to avoid downtime and customer impacts.
Working closely with the cross-functional teams to resolve issues.
Testing, building, designing, deploying, and maintaining continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
Working in close coordination with the development and operations teams to ensure application performance meets customer expectations.
What you should have
Bachelor’s or Master’s degree in computer science or any related field.
3–6 years of experience with Linux/Unix and cloud computing techniques.
Familiar with working on cloud and datacenter for enterprise customers.
Hands-on experience with Linux, Windows, and macOS, and with Batch/AppleScript/Bash scripting.
Experience with various databases such as MongoDB, PostgreSQL, MySQL, MSSQL.
Familiar with AWS technologies like EC2, S3, Lambda, IAM, etc.
Ability to choose the tools and technologies that best fit business needs.
Experience in developing and maintaining CI/CD processes using tools like Git, GitHub, Jenkins etc.
Excellent organizational skills to adapt to a constantly changing technical environment
Bito is a startup using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, making more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM), worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something even more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of core cloud services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery of configuration automation toolsets such as Ansible, Chef, etc.
- Proficiency with Jira, Confluence, and Git toolset
- Experience with monitoring and alerting tools such as Nagios, Grafana, Graphite, CloudWatch, New Relic, etc.
- Proven ability to manage and prioritize multiple diverse projects simultaneously
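Monitoring and alerting of the kind listed above often reduces to threshold rules over current metric values. A minimal, tool-agnostic sketch (metric names illustrative):

```python
def evaluate_alerts(metrics, rules):
    """Evaluate simple threshold rules against current metric values.

    `metrics` maps metric name -> latest value; `rules` maps metric
    name -> (operator, threshold) where operator is ">" or "<".
    Returns the sorted names of metrics that breach their rule;
    metrics with no current value are skipped.
    """
    ops = {">": lambda v, t: v > t, "<": lambda v, t: v < t}
    breached = []
    for name, (op, threshold) in rules.items():
        value = metrics.get(name)
        if value is not None and ops[op](value, threshold):
            breached.append(name)
    return sorted(breached)
```

Real systems like Nagios or CloudWatch alarms layer evaluation periods and notification routing on top, but the core comparison is the same.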
What we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
- Work from anywhere
- Flexible work timings
- Competitive compensation, including stock options
- A chance to work in the exciting generative AI space
· Quarterly team offsite events
- 7+ years of experience in System Administration, Networking, Automation, Monitoring
- Excellent problem solving, analytical skills and technical troubleshooting skills
- Experience managing systems deployed in public cloud platforms (Microsoft Azure, AWS or Google Cloud)
- Experience implementing and maintaining CI/CD pipelines (Jenkins, Concourse, etc.)
- Linux experience with Ubuntu, Red Hat, and CentOS (sysadmin, Bash scripting)
- Experience setting up monitoring (Datadog, Splunk, etc.)
- Experience in Infrastructure Automation tools like Terraform
- Experience with a Kubernetes package manager such as Helm
- Experience with databases and data storage (Oracle, MongoDB, PostgreSQL, ELK stack)
- Experience with Docker
- Experience with orchestration technologies (Kubernetes or DC/OS)
- Familiar with Agile Software Development
We are looking for an experienced DevOps (development and operations) professional to join our growing organization. In this position, you will be responsible for finding and reporting bugs in web and mobile apps and assisting senior DevOps engineers in managing infrastructure projects and processes. Keen attention to detail, problem-solving abilities, and a solid knowledge base are essential.
As a DevOps engineer, you will work in a Kubernetes-based microservices environment.
Experience with Microsoft Azure and Kubernetes is preferred, but not mandatory.
Ultimately, you will ensure that our products, applications, and systems work correctly.
Responsibilities:
- Detect and track software defects and inconsistencies
- Apply quality engineering principles throughout the Agile product lifecycle
- Handle code deployments in all environments
- Monitor metrics and develop ways to improve
- Consult with peers for feedback during testing stages
- Build, maintain, and monitor configuration standards
- Maintain day-to-day management and administration of projects
- Manage CI and CD tools with team
- Follow all best practices and procedures as established by the company
- Provide support and documentation
Required Technical and Professional Expertise
- Minimum 2+ years of DevOps experience
- Experience in SaaS infrastructure development and web apps
- Experience in delivering microservices at scale; designing microservices solutions
- Proven cloud experience delivering applications on Azure
- Proficiency in configuration management tools such as Ansible, Terraform, Puppet, Chef, Salt, etc.
- Hands-on experience with networking/network configuration, application performance monitoring, container performance, and security
- Understanding of Kubernetes and Python, along with scripting languages like Bash/shell
- Good to have: experience with Linux internals, Linux packaging, release engineering (branching, versioning, tagging), artifact repositories (Artifactory, Nexus), and CI/CD tooling (Concourse CI, Travis, Jenkins)
- Must be a proactive person
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting edge technologies
- An ambitious individual who can work under their own direction toward agreed targets and goals, with a creative approach to work
- An intuitive individual with the ability to manage change and proven time-management skills
- Proven interpersonal skills, contributing to team efforts and accomplishing related results as needed
At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum which breaks conventional market research turnaround times.
SurveySensum is becoming a great tool not only for capturing feedback but also for extracting useful insights through quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback, which challenges us to rethink conventional software development design principles. The team likes to grind and helps each other through tough situations.
Day to day responsibilities include:
- Work on code deployments via Bitbucket, AWS CodeDeploy, and manual processes
- Work on Linux/Unix operating systems and multi-technology application patching
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers.
- Create and modify scripts or applications to perform tasks
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
- Easing developers' lives so they can focus on business logic rather than deployment and maintenance
- Managing sprint releases
- Educating the team on best practices
- Finding ways to avoid human error and save time by automating the processes using Terraform, CloudFormation, Bitbucket pipelines, CodeDeploy, scripting
- Implementing cost-effective measures in the cloud and minimizing existing costs
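Automating deployments with CodeDeploy, as described above, can be sketched with boto3; the application, deployment group, and bucket names are illustrative, and the client is passed in so the helper stays testable:

```python
def trigger_deployment(cd_client, app, group, bucket, key):
    """Kick off a CodeDeploy deployment from an S3 revision bundle.

    `cd_client` is a boto3 CodeDeploy client; returns the deployment id
    so callers can poll or wait on it.
    """
    response = cd_client.create_deployment(
        applicationName=app,
        deploymentGroupName=group,
        revision={
            "revisionType": "S3",
            "s3Location": {"bucket": bucket, "key": key, "bundleType": "zip"},
        },
    )
    return response["deploymentId"]
```

In practice this runs at the end of a Bitbucket pipeline after the build artifact has been uploaded to S3, e.g. `trigger_deployment(boto3.client("codedeploy"), "survey-app", "prod", "releases-bucket", "release-42.zip")`.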
Skills and prerequisites
- OOP knowledge
- Problem-solving nature
- Willingness to do R&D
- Works with the team and supports their queries patiently
- Bringing new ideas to the table; staying up to date
- Prioritizing solutions over problems
- Willing to learn and experiment
- Techie at heart
- Git basics
- Basic AWS or any cloud platform – creating and managing EC2, Lambda, IAM, S3, etc.
- Basic Linux handling
- Docker and orchestration (Great to have)
- Scripting – python (preferably)/bash
- GCP cloud experience (mandatory)
- CI/CD: Azure DevOps
- IaC tools – Terraform
- Experience with IAM / Access Management within cloud
- Networking / Firewalls
- Kubernetes / Helm / Istio
Experience: 12 - 20 years
Responsibilities :
The Cloud Solution Architect/Engineer specializing in migrations is a cloud role in the project delivery cycle, with hands-on experience migrating customers to the cloud.
Demonstrated experience in cloud infrastructure project deals for hands on migration to public clouds such as Azure.
Strong background in Linux/Unix and/or Windows administration
Ability to use a wide variety of open-source technologies.
Work closely with architects and customer technical teams to migrate applications to the Azure cloud in an architect role.
Mentor and monitor the junior developers and track their work.
Design according to best practices and industry-standard coding practices
Ensure services are built for performance, scalability, fault tolerance and security with reusable patterns.
Recommend best practices and standards for Azure migrations
Define coding best practices for high performance and guide the team in adopting them
Skills:
Mandatory:
Experience with cloud migration technologies such as Azure Migrate
Azure trained / certified architect – Associate or Professional Level
Understanding of hybrid cloud solutions and experience integrating public cloud into traditional hosting/delivery models
Strong understanding of cloud migration techniques and workflows (on-premises to cloud platforms)
Configuration, migration, and deployment experience with Azure application technologies
High Availability and Disaster recovery implementations
Experience architecting and deploying multi-tiered applications.
Experience building and deploying multi-tier, scalable, and highly available applications using Java, Microsoft and Database technologies
Experience in performance tuning, including load balancing, web servers, content delivery networks, and caching (content and API)
Experience in large scale data center migration
Experience of implementing architectural governance and proactively managing issues and risks throughout the delivery lifecycle.
Good familiarity with the disciplines of enterprise software development such as configuration & release management, source code & version controls, and operational considerations such as monitoring and instrumentation
Experience in consulting or service provider roles (internal or external)
Experience using database technologies like Oracle, MySQL and understanding of NoSQL is preferred.
Experience in designing or implementing data warehouse solutions is highly preferred.
Experience in automation/configuration management using Puppet, Chef, Ansible, Saltstack, Bosh, Terraform or an equivalent.
Experience with source code management tools such as GitHub, GitLab, Bitbucket or equivalent
Experience with SQL and NoSQL databases such as MySQL.
Solid understanding of networking and core Internet Protocols such as TCP/IP, DNS, SMTP, HTTP and routing in distributed networks.
A working understanding of coding and scripting languages such as PHP, Python, Perl, and/or Ruby.
A working understanding with CI/CD tools such as Jenkins or equivalent
A working understanding of scheduling and orchestration with tools such as Kubernetes, Mesos, Swarm, or equivalent.
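Large-scale migration planning of the kind this role covers often starts by ordering applications into dependency-respecting waves. A minimal, illustrative planner (application names hypothetical):

```python
def migration_waves(dependencies):
    """Group applications into migration waves by dependency order.

    `dependencies` maps app -> set of apps it depends on. Each wave
    contains apps whose dependencies have all been migrated in earlier
    waves; raises ValueError on a dependency cycle.
    """
    remaining = {app: set(deps) for app, deps in dependencies.items()}
    waves = []
    migrated = set()
    while remaining:
        # Apps whose every dependency is already migrated can go now.
        ready = sorted(a for a, d in remaining.items() if d <= migrated)
        if not ready:
            raise ValueError("dependency cycle: " + ", ".join(sorted(remaining)))
        waves.append(ready)
        migrated.update(ready)
        for app in ready:
            del remaining[app]
    return waves
```

For example, `migration_waves({"web": {"api"}, "api": {"db"}, "db": set()})` migrates the database first, then the API, then the web tier; real migration planning adds sizing, cutover windows, and rollback plans on top of this ordering.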










