11+ Amazon Route 53 Jobs in Pune | Amazon Route 53 Job openings in Pune
Apply to 11+ Amazon Route 53 Jobs in Pune on CutShort.io. Explore the latest Amazon Route 53 Job opportunities across top companies like Google, Amazon & Adobe.
1. Experience: 3-6 years
2. Extensive expertise in the following areas of AWS development:
3. Amazon DynamoDB, Amazon RDS, and Amazon APIs; AWS Elastic Beanstalk and AWS CloudFormation
4. Lambda, Kinesis, CodeCommit, CodePipeline
5. Leveraging AWS SDKs to interact with AWS services from the application
6. Writing code that optimizes the performance of the AWS services used by the application
7. Developing RESTful API interfaces
8. Code-level application security (IAM roles, credentials, encryption, etc.)
9. Programming language: Python or .NET; programming with AWS APIs
10. General troubleshooting and debugging
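Point 5 above, using AWS SDKs from application code, might look like the following minimal sketch with boto3 (the AWS SDK for Python). The table name and item schema are illustrative assumptions, and the actual network call is left as a comment so the snippet runs without AWS credentials.

```python
import json

def build_put_item(device_id, ts, payload):
    """Prepare the argument dict for a DynamoDB put_item call.

    Uses the low-level attribute-value format ({"S": ...}, {"N": ...}).
    Table name "telemetry" is a hypothetical example.
    """
    return {
        "TableName": "telemetry",
        "Item": {
            "device_id": {"S": device_id},
            "ts": {"N": str(ts)},          # DynamoDB numbers travel as strings
            "payload": {"S": json.dumps(payload)},
        },
    }

# With real credentials the item would be written with:
#   import boto3
#   boto3.client("dynamodb").put_item(**build_put_item("dev-1", 1700000000, {"ok": True}))
args = build_put_item("dev-1", 1700000000, {"temp": 21.5})
```

Separating payload construction from the client call like this also makes the code testable without touching AWS.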
FlytBase is a Physical AI platform powering autonomous drones and robots across industrial sites. Our software enables 24/7 operations in critical infrastructure like solar farms, ports, oil refineries, and more.
We're building intelligent autonomy — not just automation — and security is core to that vision.
What You’ll Own
You’ll lead and build the backbone of our AI-native drone orchestration platform — used by global industrial giants for autonomous operations.
Expect to:
Design and manage multi-region, multi-cloud infrastructure (AWS, Kubernetes, Terraform, Docker)
Own infrastructure provisioning through GitOps, Ansible, Helm, and IaC
Set up observability stacks (Prometheus, Grafana) and write custom alerting rules
Build for Zero Trust security — logs, secrets, audits, access policies
Lead incident response, postmortems, and playbooks to reduce MTTR
Automate and secure CI/CD pipelines with SAST, DAST, image hardening
Script your way out of toil using Python, Bash, or LLM-based agents
Work alongside dev, platform, and product teams to ship secure, scalable systems
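Custom alerting rules of the kind mentioned above usually reduce to a threshold over a windowed metric. Below is a minimal Python sketch of that logic with an assumed 5% error-rate threshold; Prometheus expresses the same idea declaratively in its rule files rather than in application code.

```python
from collections import deque

def error_rate(samples):
    """Fraction of failed requests across a window of (ok_count, err_count) samples."""
    ok = sum(s[0] for s in samples)
    err = sum(s[1] for s in samples)
    total = ok + err
    return err / total if total else 0.0

def should_alert(samples, threshold=0.05):
    """Fire when the windowed error rate exceeds the threshold (assumed value)."""
    return error_rate(samples) > threshold

# Sliding window of the last 5 scrape intervals (illustrative data)
window = deque(maxlen=5)
for sample in [(95, 5), (90, 10), (80, 20)]:
    window.append(sample)
```

Here the window's cumulative error rate is 35/300 ≈ 11.7%, so the condition fires.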
What We’re Looking For:
You’ve probably done a lot of this already:
3–5+ years in DevOps / DevSecOps for high-availability SaaS or product infra
Hands-on with Kubernetes, Terraform, Docker, and cloud-native tooling
Strong in Linux internals, OS hardening, and network security
Built and owned CI/CD pipelines, IaC, and automated releases
Written scripts (Python/Bash) that saved your team hours
Familiar with SOC 2, ISO 27001, threat detection, and compliance work
Bonus if you’ve:
Played with LLMs or AI agents to streamline ops, or built bots that monitor, patch, or auto-deploy.
What It Means to Be a Flyter
AI-native instincts: You don’t just use AI — you think in it. Your terminal window has a co-pilot.
Ownership without oversight: You own outcomes, not tasks. No one micromanages you here.
Joy in complexity: Security + infra + scale = your happy place.
Radical candor: You give and receive sharp feedback early — and grow faster because of it.
Loops over lines: we prioritize continuous feedback, iteration, and learning over one-way execution or rigid, linear planning.
H3: Happy. Healthy. High-Performing. We believe long-term performance stems from an environment where you feel emotionally fulfilled, physically well, and deeply motivated.
Systems > Heroics: We value well-designed, repeatable systems over last-minute firefighting or one-off effort.
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
Work on data migration tasks in AWS environments.
Monitor and improve database performance; automate key performance indicators and reports.
Collaborate with cross-functional teams to support data integration and delivery requirements.
Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
Strong experience with MySQL, complex SQL queries, and stored procedures.
Hands-on experience with AWS Glue, PySpark, and ETL processes.
Good understanding of AWS ecosystem and migration strategies.
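Handling 100+ million-record tables, as described above, typically means processing in batches with keyset pagination rather than OFFSET, which degrades linearly as the offset grows. A sketch of the pattern follows, using the standard-library sqlite3 module to stand in for MySQL; the schema and batch size are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i * 1.5,) for i in range(1, 11)])  # 10 toy rows

def iter_batches(conn, batch_size=4):
    """Yield rows in id order using keyset pagination: seek past the last
    seen id instead of paging with OFFSET."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, amount FROM orders WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]  # resume after the last id we saw

batches = list(iter_batches(conn))
```

The same `WHERE id > ? ORDER BY id LIMIT ?` shape works in MySQL, provided the pagination column is indexed.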
We are looking for a proficient, passionate, and skilled DevOps Specialist. You will have the opportunity to build an in-depth understanding of our client's business problems and then implement organizational strategies to resolve them.
Skills required: Strong knowledge and experience of cloud infrastructure (AWS, Azure, or GCP), systems, network design, and cloud migration projects. Strong knowledge and understanding of CI/CD processes and tools (Jenkins/Azure DevOps) is a must. Strong knowledge and understanding of Docker and Kubernetes is a must. Strong knowledge of Python, along with one more language (Shell, Groovy, or Java). Strong prior experience using automation tools like Ansible and Terraform. Ability to architect systems, infrastructure, and platforms using cloud services. Strong communication skills and a demonstrated ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting-edge tools/technologies in DevOps
Knowledge-focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Position: Cloud Analyst Level 1
Location: Pune
Experience: 1.5 to 3 years
Payroll: Direct with client
Salary Range: 3 to 5 lakhs (depending on current salary)
Role and Responsibility
• Good understanding of and experience with AWS CloudWatch for EC2, Amazon Web Services resources, and other sources
• Collect and store logs
• Monitor and store logs
• Analyze logs
• Configure alarms
• Configure dashboards
• Preparation and following of SOPs and documentation
• Good understanding of AWS in DevOps
• Experience with AWS services (EC2, ECS, CloudWatch, VPC, networking)
• Experience with a variety of infrastructure, application, and log monitoring tools, e.g. Prometheus and Grafana
• Familiarity with Docker, Linux, and Linux security
• Knowledge of and experience with container-based architectures like Docker
• Experience troubleshooting AWS services
• Experience configuring AWS services such as EC2, S3, and ECS
• Experience with Linux system administration and engineering skills on cloud infrastructure
• Knowledge of load balancers, firewalls, and network switching components
• Knowledge of Internet-based technologies: TCP/IP, DNS, HTTP, SMTP, and networking concepts
• Knowledge of security best practices
• Comfortable supporting production environments 24x7
• Strong communication skills
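Configuring a CloudWatch alarm, as listed above, comes down to a PutMetricAlarm call. The sketch below only builds the parameter dict (the alarm name, period, and threshold are assumptions), so it runs without AWS access; with boto3 you would then pass it as `boto3.client("cloudwatch").put_metric_alarm(**params)`.

```python
def cpu_alarm_params(instance_id, threshold=80.0):
    """Parameters for a CloudWatch alarm on EC2 CPU utilization.

    Fires when average CPU exceeds `threshold` percent for two
    consecutive 5-minute periods (illustrative settings).
    """
    return {
        "AlarmName": f"high-cpu-{instance_id}",       # hypothetical naming scheme
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                                # seconds per datapoint
        "EvaluationPeriods": 2,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = cpu_alarm_params("i-0123456789abcdef0")
```

Keeping alarm definitions in code like this makes them reviewable and repeatable across instances, rather than hand-configured in the console.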
Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
Participating in on-call escalation to troubleshoot customer-facing issues
Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
Skills
Should have strong experience (a couple of years) leading a DevOps team: planning and defining the DevOps roadmap and executing it along with the team
Familiarity with the AWS cloud and JSON templates, Python, and AWS CloudFormation templates
Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, the AWS CLI, and REST APIs
Design and implement system architecture on the AWS cloud
Develop automation using scripts, ARM templates, Ansible, Chef, Python, and PowerShell
Knowledge of AWS services and cloud design patterns
Knowledge of cloud fundamentals like autoscaling and serverless
Have experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, plus CI/CD pipeline setup
CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, and WireMock or another mocking solution
Expert knowledge of Windows, Linux, or macOS with at least 5-6 years of system administration experience
Should have strong skills in using Jira
Should have knowledge in managing the CI/CD pipeline on public cloud deployments using AWS
Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and CloudFormation.
Experience in monitoring tools like Pingdom, Nagios, etc.
Experience in reverse proxy services like Nginx and Apache
Desirable: experience with Bitbucket and version control tools like Git/SVN
Experience with manual/automated testing of application deployments is desired
Experience in database technologies such as PostgreSQL, MySQL
3+ years of relevant experience with DevOps tools Jenkins, Ansible, Chef etc
3+ years of experience in continuous integration/deployment and software tools development experience with Python and shell scripts etc
Building and running Docker images and deployment on Amazon ECS
Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
Knowledge of and experience in cloud orchestration tools such as AWS CloudFormation, Terraform, etc.
Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
Understanding of IAM, RBAC, NACLs, and KMS
Good communication skills
Good to have:
Strong understanding of security concepts and methodologies, and the ability to apply them: SSH, public-key encryption, access credentials, certificates, etc.
Knowledge of administering databases such as MongoDB.
Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
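The "security as code" and IAM points above can be illustrated by generating a least-privilege policy document programmatically; the bucket name and action set here are hypothetical examples.

```python
import json

def s3_read_only_policy(bucket):
    """Build an IAM policy document granting read-only access to one bucket.

    ListBucket applies to the bucket ARN itself; GetObject applies to the
    objects under it, hence the two Resource entries.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

# Serialized form, ready to attach to a role via IaC or the AWS API
policy_doc = json.dumps(s3_read_only_policy("example-artifacts"))
```

Generating policies from code keeps them version-controlled and reviewable, which is the point of treating security as code.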
Responsibilities
Work with Leads and Architects in designing and implementation of technical infrastructure, platform, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
Establish and promote DevOps thinking, guidelines, best practices, and standards.
Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
Profile - Sr. DevOps Engineer
Experience: 5-8 Years
The DevOps team at Druva is chartered with developing the infrastructure code that is foundational to deploying and operating Druva's SaaS service. The DevOps team additionally enables Druva engineers to rapidly innovate by building tools that provide a simple, fast, and robust developer experience by simulating a cloud in a box.
Our focus centers on creating tooling that streamlines development, testing, building, integration, packaging, and deployment of mutable and immutable artifacts.
DevOps engineers are involved in the full life cycle of the application. You will be responsible for the design and implementation of the application's build, release, deployment, and configuration activities, as well as contribute to defining the deployment architecture of Druva's SaaS service.
You will automate and streamline our operations and the processes involved in those activities. You will leverage existing tools and technologies, preferably open source ones, to build the infrastructure applications needed to support deployment, operation, and monitoring of Druva's SaaS service.
At the same time, you won't limit yourself from building such tools whenever off-the-shelf tools aren't adequate.
You will continuously focus on improving the deployment design and troubleshoot and resolve issues in our dev, test and production environments.
Qualifications
- 5-8 years of experience designing and developing large-scale infrastructure applications that help deploy and smoothly operate a SaaS service.
- Experience with a wide variety of open source tools and technologies relevant to deployment on a cloud, including deployment frameworks like Docker Swarm, containers, and Kubernetes or equivalent, is a must.
- Experience with configuration management using Salt, Puppet, Chef or equivalent
- Experience working with AWS is an added advantage.
- Strong expertise with Bash scripting, Python, or equivalent.
- Strong grasp of automation tools and ability to develop them as needed.
- Experience with continuous integration and continuous deployment (CI/CD) and associated automation
Title: DevOps Engineer
Location: Pune, India
Overview
appOrbit is an emerging startup enabling customers to accelerate the digital transformation of their businesses. Our vision is to make it possible for enterprises to deploy and manage the end-to-end life cycle of any application (legacy, cloud-native, Windows, Linux) on any infrastructure (virtual machines, containers, bare metal) across any cloud (public, private, hybrid). Our platform is already helping several dozen customers realize this vision.
appOrbit is looking for a build & release engineer who will develop and maintain the release pipeline that will be the backbone for delivering our cutting edge products. You will be architecting and building tools that will make the end-to-end release process seamless and efficient. You will have the opportunity to work on bleeding edge open source technologies, participating in the open source community and contributing to it. This is a high visibility and high impact role.
Responsibilities:
•Implement Continuous Integration and Continuous Delivery on complex software development products using Git, Jenkins, Maven, Docker, Chef on Cloud.
•Required to author technical design of functional specifications and progress the solution from design through the software development life-cycle to implementation
•Develop and execute plans and report status
Qualifications:
•Bachelor’s degree or higher in Computer Science or related field
•4-6 years of experience in software development; build/CM experience preferred
•Experience in Bash/Shell/Python/Ruby scripting
•Excellent git experience (Multiple branch strategies)
•Expertise in supporting OSS & third party tools, including updating and patching
•Must have basic Linux system administration skills
Preferred Skills:
•Conceptual Creativity
•Autonomous and self-driven
•Work experience in agile software development teams.
•Excellent communicator.
Benefits:
•Fun, creative and fast-paced working environment
•Terrific medical and accident insurance plans
•Pantry stocked with snacks and beverages
•Flexible time-off with generous paid holidays