
AWS/DevOps Automation Engineer - Hyderabad - WFO
at a software development company in HR applications
(Candidates from service-based companies may apply; looking for automation (shell or Python scripting))
Shift: US East Coast or West Coast hours (2:30 PM to 10:30 PM India time, or 5:00 PM to 2:00 AM India time)
Experience: 5 to 8 years
Salary: Up to 25 LPA
Hyderabad-based candidates preferred!
Immediate joiners preferred!
Role Objective:
- Ability to identify processes where efficiency could be improved via automation
- Ability to research, prototype, iterate and test automation solutions
- Good technical understanding of cloud service offerings, with a sound appreciation of the associated business processes.
- Ability to build & maintain a strong working relationship with other Technical teams using the agile methodology (internal and external), Infrastructure Partners and Service Engagement Managers.
- Ability to shape and co-ordinate delivery of key initiatives to deliver improvements in stability
- Good understanding of the cost of end-to-end service provision, and delivery of associated savings.
- Knowledge of web security principles
- Strong Linux experience – comfortable working from command line
- Some networking knowledge (routing, DNS)
- Knowledge of HA and DR concepts and experience implementing them
- Working with the team to analyse and design infrastructure with 99.99% uptime.
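The 99.99% uptime target above implies a very small downtime allowance. As a rough illustration (the 30-day and 365-day windows below are assumptions, not part of the posting), the arithmetic can be sketched as:

```python
# Allowed downtime implied by an availability SLA.
def allowed_downtime_minutes(sla: float, window_hours: float) -> float:
    """Minutes of downtime permitted at `sla` availability over `window_hours`."""
    return (1.0 - sla) * window_hours * 60.0

# 99.99% over a 30-day month: about 4.3 minutes of downtime.
monthly = allowed_downtime_minutes(0.9999, 30 * 24)
# 99.99% over a 365-day year: about 52.6 minutes.
yearly = allowed_downtime_minutes(0.9999, 365 * 24)
print(f"monthly budget: {monthly:.1f} min, yearly budget: {yearly:.1f} min")
```

At that tolerance, manual failover is usually too slow, which is why the role pairs the uptime goal with HA/DR design and automation.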
Qualifications:
- Infrastructure automation through DevOps scripting (e.g., Python, Ruby, PowerShell, Java, shell) or previous software development experience
- Experience in building and managing production cloud environments from the ground up.
- Hands-on, working experience with primary AWS services (EC2, VPC, RDS, Route53, S3)
- Knowledge of repository management (GitHub, SVN)
- Solid understanding of web application architecture and RDBMS (SQL Server preferred).
- Experience with IT compliance and risk management requirements is a bonus (e.g., security, privacy, HIPAA, SOX).
- Strong logical, analytical and problem-solving skills with excellent communication skills.
- Degree in computer science, MIS, engineering, or equivalent, with 5+ years of experience.
- Willingness to work in rotational shifts (including nights)
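As a hedged sketch of the shell/Python automation this role centres on, the snippet below audits resources for missing tags before acting on them. The tag keys, instance records, and `missing_tags` helper are all invented for illustration; real data would come from a cloud SDK or the AWS CLI rather than an inline list.

```python
# Audit resources for required tags before automation touches them.
REQUIRED_TAGS = {"Owner", "Environment"}

def missing_tags(resource: dict) -> set:
    """Return the required tag keys absent from a resource's tag map."""
    return REQUIRED_TAGS - set(resource.get("Tags", {}))

# Inlined sample data standing in for an API response.
instances = [
    {"Id": "i-0123", "Tags": {"Owner": "hr-apps", "Environment": "prod"}},
    {"Id": "i-0456", "Tags": {"Owner": "hr-apps"}},
]

untagged = {r["Id"]: sorted(missing_tags(r)) for r in instances if missing_tags(r)}
print(untagged)  # {'i-0456': ['Environment']}
```

The same pattern (fetch, diff against policy, report or remediate) generalizes to most of the "identify processes where efficiency could be improved via automation" work the posting describes.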
Perks and benefits:
- Health & Wellness
- Paid time off
- Learning at work
- Fun at work
- Night shift allowance
- Comp off
- Pick-up and drop facility available up to a certain distance

Similar jobs
GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, shell); Python is a must.
- Collaborate with teams across the company (e.g., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux based infrastructure.
- Excellent problem-solving and troubleshooting skills
Must have:
a. Background working with Startups
b. Good knowledge of Kubernetes & Docker
c. Background working in Azure
What you’ll be doing
- Ensure that our applications and environments are stable, scalable, secure and performing as expected.
- Proactively engage and work in alignment with cross-functional colleagues to understand their requirements, contributing to and providing suitable supporting solutions.
- Develop and introduce systems to aid and facilitate rapid growth, including implementation of deployment policies, design and implementation of new procedures, configuration management, and planning for patches and capacity upgrades
- Observability: ensure suitable levels of monitoring and alerting are in place to keep engineers aware of issues.
- Establish runbooks and procedures to keep outages to a minimum. Jump in before users notice that things are off track, then automate it for the future.
- Automate everything so that nothing is ever done manually in production.
- Identify and mitigate reliability and security risks. Make sure we are prepared for peak times, DDoS attacks, and fat fingers.
- Troubleshoot issues across the whole stack - software, applications and network.
- Manage individual project priorities, deadlines, and deliverables as part of a self-organizing team.
- Learn and unlearn every day by exchanging knowledge and new insights, conducting constructive code reviews, and participating in retrospectives.
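The "jump in before users notice, then automate it" idea above often starts with something as small as a retrying health probe. A minimal sketch, assuming an invented `probe` callable and illustrative backoff values:

```python
import time

def check_with_backoff(probe, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Retry a health probe with exponential backoff; True on first success."""
    for attempt in range(attempts):
        if probe():
            return True
        sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return False

# Example: a probe that succeeds on its third call.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3

ok = check_with_backoff(flaky_probe, sleep=lambda _: None)  # skip real sleeps in the demo
print(ok, calls["n"])  # True 3
```

Injecting `sleep` keeps the helper testable; in production the same loop would typically page only after the retries are exhausted.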
Requirements
- 2+ years of extensive experience in Linux server administration, including patching, packaging (rpm), performance tuning, networking, user management, and security.
- 2+ years of implementing systems that are highly available, secure, scalable, and self-healing on the Azure cloud platform
- Strong understanding of networking, especially in cloud environments, along with a good understanding of CI/CD.
- Prior experience implementing industry standard security best practices, including those recommended by Azure
- Proficiency with Bash, and any high-level scripting language.
- Basic working knowledge of observability stacks like ELK, Prometheus, Grafana, SigNoz, etc.
- Proficiency with Infrastructure as Code and Infrastructure Testing, preferably using Pulumi/Terraform.
- Hands-on experience in building and administering VMs and Containers using tools such as Docker/Kubernetes.
- Excellent communication skills, spoken as well as written, with a demonstrated ability to articulate technical problems and projects to all stakeholders.
We are seeking an experienced Lead DevOps Engineer with deep expertise in Kubernetes infrastructure design and implementation. This role requires someone who can architect, build, and manage enterprise-grade Kubernetes clusters from the ground up. You’ll lead modernization initiatives, shape infrastructure strategy, and work with cutting-edge cloud-native technologies.
🚀 Key Responsibilities
Infrastructure Design & Implementation
- Architect and design enterprise-grade Kubernetes clusters across AWS, Azure, and GCP.
- Build production-ready Kubernetes infrastructure with HA, scalability, and security best practices.
- Implement Infrastructure as Code with Terraform, Helm, and GitOps workflows.
- Set up monitoring, logging, and observability for Kubernetes workloads.
- Design and execute backup and disaster recovery strategies for containerized applications.
Leadership & Team Management
- Lead a team of 3–4 DevOps engineers, providing technical mentorship.
- Drive best practices in containerization, orchestration, and cloud-native development.
- Collaborate with development teams to optimize deployment strategies.
- Conduct code reviews and maintain infrastructure quality standards.
- Build knowledge-sharing culture with documentation and training.
Operational Excellence
- Manage and scale CI/CD pipelines integrated with Kubernetes.
- Implement security policies (RBAC, network policies, container scanning).
- Optimize cluster performance and cost-efficiency.
- Automate operations to minimize manual interventions.
- Ensure 99.9% uptime for production workloads.
Strategic Planning
- Define the infrastructure roadmap aligned with business needs.
- Evaluate and adopt new cloud-native technologies.
- Perform capacity planning and cloud cost optimization.
- Drive risk assessment and mitigation strategies.
🛠 Must-Have Technical Skills
Kubernetes Expertise
- 6+ years of hands-on Kubernetes experience in production.
- Deep knowledge of Kubernetes architecture (etcd, API server, scheduler, kubelet).
- Advanced Kubernetes networking (CNI, Ingress, Service mesh).
- Strong grasp of Kubernetes storage (CSI, PVs, StorageClasses).
- Experience with Operators and Custom Resource Definitions (CRDs).
Infrastructure as Code
- Terraform (advanced proficiency).
- Helm (developing and managing complex charts).
- Config management tools (Ansible, Chef, Puppet).
- GitOps workflows (ArgoCD, Flux).
Cloud Platforms
- Hands-on experience with at least 2 of the following:
- AWS: EKS, EC2, VPC, IAM, CloudFormation
- Azure: AKS, VNets, ARM templates
- GCP: GKE, Compute Engine, Deployment Manager
CI/CD & DevOps Tools
- Jenkins, GitLab CI, GitHub Actions, Azure DevOps
- Docker (advanced optimization and security practices)
- Container registries (ECR, ACR, GCR, Docker Hub)
- Strong Git workflows and branching strategies
Monitoring & Observability
- Prometheus & Grafana (metrics and dashboards)
- ELK/EFK stack (centralized logging)
- Jaeger/Zipkin (tracing)
- AlertManager (intelligent alerting)
💡 Good-to-Have Skills
- Service Mesh (Istio, Linkerd, Consul)
- Serverless (Knative, OpenFaaS, AWS Lambda)
- Running databases in Kubernetes (Postgres, MongoDB operators)
- ML pipelines (Kubeflow, MLflow)
- Security tools (Aqua, Twistlock, Falco, OPA)
- Compliance (SOC2, PCI-DSS, GDPR)
- Python/Go for automation
- Advanced Shell scripting (Bash/PowerShell)
🎓 Qualifications
- Bachelor’s in Computer Science, Engineering, or related field.
- Certifications (preferred):
- Certified Kubernetes Administrator (CKA)
- Certified Kubernetes Application Developer (CKAD)
- Cloud provider certifications (AWS/Azure/GCP).
Experience
- 6–7 years of DevOps/Infrastructure engineering.
- 4+ years of Kubernetes in production.
- 2+ years in a lead role managing teams.
- Experience with large-scale distributed systems and microservices.
What you'll get
- The possibility of massive societal impact: our software touches the lives of hundreds of millions of people.
- Solving hard governance and societal challenges
- Working directly with central and state government leaders and other dignitaries
- Mentorship from world-class people and rich ecosystems
Position : Architect or Technical Lead - DevOps
Location : Bangalore
Role:
Strong knowledge of architecture and system design for multi-tenant, multi-region, redundant, and highly-available mission-critical systems.
Clear understanding of core cloud platform technologies across public and private clouds.
Lead DevOps practices for Continuous Integration and Continuous Deployment pipelines and IT operations: scaling, metrics, and running day-to-day operations from development to production for the platform.
Implement the non-functional requirements needed for world-class reliability, scale, security, and cost-efficiency throughout the product development lifecycle.
Drive a culture of automation, and self-service enablement for developers.
Work with information security leadership and technical team to automate security into the platform and services.
Meet infrastructure SLAs and compliance control points of cloud platforms.
Define and contribute to initiatives to continually improve Solution Delivery processes.
Improve the organisation's capability to build, deliver, and scale software as a service on the cloud.
Interface with Engineering, Product and Technical Architecture group to meet joint objectives.
You are deeply motivated, have knowledge of solving hard-to-crack societal challenges, and have the stamina to see it all the way through.
You must have 7+ years of hands-on experience with DevOps, microservices architecture (MSA), and the Kubernetes platform.
You must have 2+ years of strong Kubernetes experience and must hold the CKA (Certified Kubernetes Administrator) certification.
You should be proficient in the tools and technologies involved in a microservices architecture deployed in a multi-cloud Kubernetes environment.
You should have hands-on experience in architecting and building CI/CD pipelines from code check-in until production.
You should have strong knowledge of Docker, Jenkins pipeline scripting, Helm charts, Kubernetes objects and manifests, and Prometheus, and demonstrate hands-on knowledge of multi-cloud computing, storage, and networking.
You should have experience in designing, provisioning, and administering using best practices across multiple public cloud offerings (Azure, AWS, GCP) and/or private cloud offerings (OpenStack, VMware, Nutanix, NIC, bare-metal infra).
You should have experience in setting up logging, monitoring Cloud application performance, error rates, and error budgets while tracking adherence to SLOs and SLAs.
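The error-budget tracking mentioned above reduces to simple arithmetic for a request-based SLO. A hedged sketch (the 99.9% SLO and request counts are examples, not figures from the posting):

```python
def error_budget_remaining(slo: float, total_requests: int, failed_requests: int) -> float:
    """Fraction of the error budget still unspent for a request-based SLO."""
    budget = (1.0 - slo) * total_requests  # failures the SLO permits
    if budget == 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / budget)

# A 99.9% SLO over 1,000,000 requests permits ~1,000 failures;
# 250 observed failures leaves about 75% of the budget unspent.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # 0.75
```

Teams typically gate risky changes (releases, migrations) on how much of this budget remains in the current window.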
Intuitive is now hiring for DevOps Consultant for full-time employment
DevOps Consultant
Work Timings: 6:30 AM IST – 3:30 PM IST (HKT 9:00 AM – 6:00 PM)
Key Skills / Requirements:
- Mandatory
- Integrating Jenkins pipelines with Terraform and Kubernetes
- Writing Jenkins JobDSL and Jenkinsfiles using Groovy
- Automation and integration with AWS using Python and AWS CLI
- Writing ad-hoc automation scripts using Bash and Python
- Configuring container-based application deployment to Kubernetes
- Git and branching strategies
- Integration of tests in pipelines
- Build and release management
- Deployment strategies (e.g., canary, blue/green)
- TLS/SSL certificate management
- Beneficial
- Service Mesh
- Deployment of Java Spring Boot based applications
- Automated credential rotation using AWS Secrets Manager
- Log management and dashboard creation using Splunk
- Application monitoring using AppDynamics
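The deployment strategies listed under the mandatory skills (canary, blue/green) boil down to a traffic-shifting decision loop. A minimal sketch, with invented step sizes and error threshold:

```python
# Canary rollout decision loop: shift traffic in steps, roll back if the
# canary's error rate exceeds a threshold. Steps and threshold are
# illustrative, not a recommendation.
def canary_rollout(error_rate_at, steps=(5, 25, 50, 100), max_error_rate=0.01):
    """Return ('promoted', 100) or ('rolled_back', failing_step_percent)."""
    for pct in steps:
        if error_rate_at(pct) > max_error_rate:
            return ("rolled_back", pct)
    return ("promoted", 100)

# Healthy canary: error rate stays low at every traffic step.
print(canary_rollout(lambda pct: 0.002))  # ('promoted', 100)
# Unhealthy canary: errors spike once it takes 50% of traffic.
print(canary_rollout(lambda pct: 0.05 if pct >= 50 else 0.002))  # ('rolled_back', 50)
```

In practice the `error_rate_at` check would query a metrics backend between steps; blue/green is the degenerate case of a single 0%-to-100% step with the old environment kept warm for rollback.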
If your profile matches the requirements, share your resume at anitha.katintuitvedotcloud
Regards,
Anitha. K
TAG Specialist
Internshala is a dot com business with the heart of dot org.
We are a technology company on a mission to equip students with relevant skills & practical exposure through internships, fresher jobs, and online trainings. Imagine a world full of freedom and possibilities. A world where you can discover your passion and turn it into your career. A world where your practical skills matter more than your university degree. A world where you do not have to wait till 21 to taste your first work experience (and get a rude shock that it is nothing like you had imagined it to be). A world where you graduate fully assured, fully confident, and fully prepared to stake a claim on your place in the world.
At Internshala, we are making this dream a reality!
👩🏻💻 Your responsibilities would include-
- Building and maintaining operational tools for monitoring and analysis of AWS infrastructure and systems
- Actively monitoring the health and performance of all systems and performing benchmarking and tuning of system applications and operating systems
- Setting up container orchestration using Kubernetes or other orchestration system for a monolithic application
- Continually working with development engineers to design the best system architectures and solutions
- Troubleshooting and resolving issues in our development, test, and production environments
- Maintaining reliability of the system and being on-call for mission-critical systems
- Performing infrastructure cost analysis and optimization
- Ensure systems’ compliance with operational risk standards (e.g. network, firewall, OS, logging, monitoring, availability, resiliency)
- Building, mentoring and leading a team of young professionals, if the need arises
🍒 You will get-
- A chance to build and lead an awesome team working on one of the best recruitment and online trainings products in the world that impact millions of lives for the better
- Awesome colleagues & a great work environment
- Loads of autonomy and freedom in your work
💯 You fit the bill if-
- You are proficient with bash, git and git workflows
- You have 3-5 years of experience as a DevOps Engineer or similar software engineering role
- You have excellent attention to detail
- AWS certification preferred but not mandatory
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization
● Hiring, developing, and cultivating a high-performing and reliable cloud support team
● Building and operating complex CI/CD pipelines at scale
● Work with GCP Services, Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, Networking
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and best practices in cloud security.
● Ensuring scaled database setup and monitoring with near-zero downtime
Location: Amravati, Maharashtra 444605, INDIA
We are looking for a Kubernetes Cloud Engineer with experience in the deployment and administration of Hyperledger Fabric blockchain applications. While you will work on the Truscholar blockchain-based platform (both Hyperledger Fabric and INDY versions), if you combine rich Kubernetes experience with strong DevOps skills, we will still be keen to talk to you.
Responsibilities
● Deploy Hyperledger Fabric (BEVEL SETUP) applications on Kubernetes
● Monitoring Kubernetes system
● Implement and improve monitoring and alerting
● Build and maintain highly available blockchain systems on Kubernetes
● Implement an auto-scaling system for our Kubernetes nodes
● Detailed design & development of SSI & ZKP solutions
● Act as a liaison between the Infra, Security, Business & QA teams for end-to-end integration and DevOps pipeline adoption.
Technical Skills
● Experience with AWS EKS Kubernetes Service, Container Instances, Container Registry and microservices (or similar experience on AZURE)
● Hands on with automation tools like Terraform, Ansible
● Ability to deploy Hyperledger Fabric in Kubernetes environment is highly desirable
● Hyperledger Fabric/INDY (or other blockchain) development, architecture, integration, application experience
● Distributed consensus systems such as Raft
● Continuous Integration and automation skills, including GitLab Actions
● Microservices architectures, cloud-native architectures, event-driven architectures, APIs, Domain-Driven Design
● Being a Certified Hyperledger Fabric Administrator would be an added advantage
Skill Set
● Understanding of Blockchain Networks
● Docker Products
● Amazon Web Services (AWS)
● Go (Programming Language)
● Hyperledger Fabric/INDY
● Gitlab
● Kubernetes
● Smart Contracts
Who We are:
Truscholar is a state-of-the-art digital credential issuance and verification platform running as blockchain infrastructure, as an instance of the Hyperledger Indy framework. Our solution helps universities, institutes, edtech and e-learning platforms, professional training academies, corporate employee training and certification programmes, and event management organisations (managing exhibitions, trade fairs, sporting events, seminars, and webinars) issue credentials to their learners, employees, or participants. The digital certificates, badges, or transcripts generated are immutable, shareable, and verifiable, thereby building an individual's Knowledge Passport. Our platform has been architected to function as a single Self-Sovereign Identity Wallet for the next decade, keeping personal data privacy guidelines in mind.
Why Now?
The Startup venture, which was conceived as an idea while two founders were pursuing a Blockchain Technology Management Course, has received tremendous applause and appreciation from mentors and investors, and has been able to roll out the product within a year and comfortably complete the product market fit stage. Truscholar has entered a growth stage, and is searching for young, creative, and bright individuals to join the team and make Truscholar a preferred global product within the next 36 months.
Our Work Culture:
With our innovation, open communication, agile thought process, and will to achieve, we are a very passionate group of individuals driving the company's growth. As a result of their commitment to the company's development narrative, we believe in offering a work environment with clear metrics to support workers' individual progress and networking within the fraternity.
Our Vision:
To become the "Intel Inside" of the education world by powering all academic credentials across the globe and assisting students in charting their digital academic passports.
Advantage Location Amravati, Maharashtra, INDIA
Amid businesses in India realising the advantages of the work-from-home (WFH) concept in the backdrop of the Coronavirus pandemic, there has been a major shift of the workforce towards tier-2 cities.
Amravati, also called Ambanagri, is a city of immense cultural and religious importance and a beautiful tier-2 city of Maharashtra; it is also called the cultural capital of the Vidarbha region. The cost of living is lower, the work-life balance is better, the air is more breathable, there are fewer traffic bottlenecks, and housing remains affordable, compared to the congested metro cities of India. We firmly believe that tier-2 cities are the future talent hubs and job-creation centres. Our conviction has been borne out by the fact that tier-2 cities have made great strides in salary levels thanks to significant investments in building excellent physical and social infrastructure.

2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands it very clearly
3. Deep understanding of microservice design; aware of centralized caching (Redis) and centralized configuration (Consul/ZooKeeper)
4. Hands-on experience working with containers and their orchestration using Kubernetes
5. Hands-on experience with the Linux and Windows operating systems
6. Worked on NoSQL databases like Cassandra, Aerospike, Mongo, or Couchbase; central logging, monitoring, and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Good knowledge of network security, security architecture, and secure SDLC practices
Job Description :
- The engineer should be highly motivated, able to work independently, and able to guide other engineers within and outside the team.
- The engineer should possess varied software skills in shell scripting, Linux, Oracle Database, WebLogic, Git, Ant, Hudson, Jenkins, Docker, and Maven.
- Work is super fun, non-routine, and challenging, involving the application of advanced skills in the area of specialization.
Key responsibilities :
- Design, develop, troubleshoot and debug software programs for Hudson monitoring, installing Cloud Software, Infra and cloud application usage monitoring.
Required Knowledge :
- Source configuration management, Docker, Puppet, Ansible, application and server monitoring, AWS, database administration, Kubernetes, log monitoring, CI/CD; design and implement build, deployment, and configuration management; improve infrastructure development; scripting languages.
- Good written and verbal communication skills
Qualification :
Education and Experience : Bachelors/Masters in Computer Science
Open to 24x7 shifts
