11+ Amazon Route 53 Jobs in Pune | Amazon Route 53 Job openings in Pune
Apply to 11+ Amazon Route 53 Jobs in Pune on CutShort.io. Explore the latest Amazon Route 53 Job opportunities across top companies like Google, Amazon & Adobe.
1. Experience: 3-6 years
2. Extensive expertise in the below areas of AWS development
3. Amazon DynamoDB, Amazon RDS, Amazon APIs, AWS Elastic Beanstalk, and AWS CloudFormation
4. Lambda, Kinesis, CodeCommit, CodePipeline
5. Leveraging AWS SDKs to interact with AWS services from the application
6. Writing code that optimizes the performance of AWS services used by the application
7. Developing with RESTful API interfaces
8. Code-level application security (IAM roles, credentials, encryption, etc.)
9. Programming language: Python or .NET; programming with AWS APIs
10. General troubleshooting and debugging
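Code-level security work of the kind item 8 describes usually means expressing least-privilege access as IAM policy documents. A minimal sketch in Python using only the standard library; the bucket name is a hypothetical placeholder, not a real resource:

```python
import json

def make_readonly_s3_policy(bucket_name):
    """Build a least-privilege IAM policy document granting read-only S3 access."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",       # the bucket itself (for ListBucket)
                    f"arn:aws:s3:::{bucket_name}/*",     # objects in the bucket (for GetObject)
                ],
            }
        ],
    }

# Serialize for attachment to a role via the AWS SDK or CLI.
policy_json = json.dumps(make_readonly_s3_policy("example-app-data"), indent=2)
```

The same document could then be passed to an SDK call or CLI command that attaches it to a role.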
Hands-on knowledge of various CI/CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/GitHub, SonarQube), including setting up automated build-deployment pipelines.
Very good knowledge of scripting tools and languages such as Shell, Perl, or Python, YAML/Groovy, and build tools such as Maven/Gradle.
Hands-on knowledge in containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
Good knowledge of configuration management tools such as Ansible or Puppet/Chef, and experience setting up monitoring tools (Splunk/Geneos/New Relic/ELK).
Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
Hands-on knowledge on Cloud technology (preferably GCP) including various computing services and infrastructure setup using Terraform.
Should have a basic understanding of networking, certificate management, Identity and Access Management, and information security/encryption concepts.
• Should support day-to-day tasks related to platform and environments upkeep such as upgrades, patching, migration and system/interfaces integration.
• Should have experience working in an Agile-based SDLC delivery model, multi-tasking, and supporting multiple systems/apps.
• Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
Should have worked on standard release, change, and incident management tools such as ServiceNow/Remedy or similar.
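Infrastructure setup with Terraform, as mentioned above, is often templated programmatically; Terraform also accepts JSON-syntax configuration files (`*.tf.json`), so a short Python script can generate one. A minimal sketch; the GCP resource names and values are hypothetical placeholders:

```python
import json

# Terraform reads JSON-syntax configuration (*.tf.json) alongside HCL.
# Generating it from Python is one simple way to template infrastructure.
config = {
    "resource": {
        "google_compute_instance": {
            "ci_runner": {                       # hypothetical logical name
                "name": "ci-runner-1",
                "machine_type": "e2-medium",
                "zone": "asia-south1-a",
            }
        }
    }
}

# Write the file that `terraform plan` / `terraform apply` would pick up.
with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)
```

Running `terraform plan` in the same directory would then show the instance to be created.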
We are looking for candidates with development experience who have delivered CI/CD-based projects. Candidates should have good hands-on experience with Jenkins Master-Slave architecture, have used AWS native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline, and have experience setting up cross-platform CI/CD pipelines spanning different cloud platforms or mixed on-premise and cloud environments.
Job Description:
Hands-on with AWS (Amazon Web Services) Cloud, DevOps services, and CloudFormation.
Experience interacting with customers.
Excellent communication skills.
Hands-on experience creating and managing Jenkins jobs, and Groovy scripting.
Experience in setting up Cloud Agnostic and Cloud Native CI/CD Pipelines.
Experience in Maven.
Experience in scripting languages like Bash, PowerShell, Python.
Experience in automation tools like Terraform, Ansible, Chef, Puppet.
Excellent troubleshooting skills.
Experience with Docker and Kubernetes, including writing Dockerfiles.
Hands-on with version control systems like GitHub, GitLab, TFS, Bitbucket, etc.
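"Creating Dockerfiles" as listed above comes down to a short build recipe; for illustration, here is a minimal one for a Python web app, written out from a Python script. The base image tag and the `app.py` entrypoint are hypothetical:

```python
# A minimal Dockerfile for a Python web app, emitted as a string for
# illustration; the base image and entrypoint are assumptions.
dockerfile = """\
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
"""

with open("Dockerfile", "w") as f:
    f.write(dockerfile)
```

Copying `requirements.txt` before the rest of the source lets Docker cache the dependency-install layer across code-only changes.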
Job Location:
Pune.
This role requires a balance between hands-on infrastructure-as-code deployments as well as involvement in operational architecture and technology advocacy initiatives across the Numerator portfolio.
Responsibilities
Selects, develops, and evaluates local personnel to ensure the efficient operation of the function
Leads and mentors local DevOps team members throughout the organisation.
Stays current with industry standards.
Work across engineering team to help define scope and task assignments
Participate in code reviews, design work and troubleshooting across business functions, multiple teams and product groups to help communicate, document and address infrastructure issues.
Look for innovative ways to improve observability, monitoring of large scale systems over a variety of technologies across the Numerator organization.
Participate in the creation of training material, helping teams embrace a culture of DevOps with self-healing and self-service ecosystems. This includes discovery, testing and integration of third party solutions in product roadmaps.
Lead by example and evangelize DevOps best practices within the team and within the organization and product teams in Numerator.
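Observability improvements like those above often start from simple statistical checks on service metrics. A minimal anomaly-detection sketch in Python with the standard library; the latency numbers and the 3-sigma threshold are made-up illustrations:

```python
import statistics

def is_anomalous(history, latest, threshold=3.0):
    """Flag a sample that deviates more than `threshold` standard
    deviations from the recent history (a toy monitoring check)."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # Flat history: any deviation at all is anomalous.
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Recent request latencies in milliseconds (illustrative data).
latencies_ms = [102, 98, 101, 99, 100, 103, 97, 100]
```

With this history (mean 100 ms, sample stdev 2 ms), `is_anomalous(latencies_ms, 250)` returns True while `is_anomalous(latencies_ms, 104)` does not; production systems would layer windowing and seasonality on top of this idea.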
Technical Skills
2+ years of experience in cloud-based systems in an SRE or DevOps position
Professional and positive approach, self-motivated, strong in building relationships, team player, dynamic, creative with the ability to work on own initiatives.
Excellent oral and written communication skills.
Availability to participate in after-hours on-call support with your fellow engineers and help improve a team’s on-call process where necessary.
Strong analytical and problem solving mindset combined with experience of troubleshooting large-scale systems.
Working knowledge of networking, operating systems, and packaging/build systems, e.g. Amazon Linux, Ubuntu, pip and npm, Terraform, Ansible, etc.
Strong working knowledge of Serverless and Kubernetes based environments in AWS, Azure and Google Cloud Platform (GCP).
Experience in managing highly redundant data stores, file systems and services both in the cloud and on-premise including both data transfer, redundancy and cost-management.
Ability to quickly stand up AWS or other cloud-based platform services in isolation or within product environments to test out a variety of solutions or concepts before developing production-ready solutions with the product teams.
Bachelor's, Master's, or doctoral degree in Computer Science or a related field, or equivalent work experience.
Define and introduce security best practices, identify gaps in infrastructure and come up with solutions
Design and implement tools and software to manage Sibros’ infrastructure
Stay hands-on, write and review code and documentation, debug and root cause issues in production environment
Minimum Qualifications
Experience in Infrastructure as Code (IaC) to manage multi-cloud environments using cloud agnostic tools like Terraform or Ansible
Passionate about security and have good understanding of industry best practices
Experience in programming languages like Python or Golang, and enjoy automating everything using code
Good skills and intuition for root-causing issues in production environments
Preferred Qualifications
Experience in database and network management
Experience in defining security policies and best practices
Experience in managing a large scale multi cloud environment
Knowledge of SOC, GDPR or ISO 27001 security compliance standards is a plus
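Identifying infrastructure gaps, as the responsibilities above describe, often begins with automated audits of policy documents for over-broad grants. A toy sketch in Python; the policy document below is a hypothetical example, not a real account's policy:

```python
def find_wildcard_grants(policy):
    """Return Allow statements that grant '*' actions or '*' resources,
    a common first check when auditing IAM-style policies."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        if "*" in stmt.get("Action", []) or "*" in stmt.get("Resource", []):
            flagged.append(stmt)
    return flagged

# Hypothetical policy: one scoped grant, one over-broad grant.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": ["arn:aws:s3:::logs/*"]},
        {"Effect": "Allow", "Action": ["*"], "Resource": ["*"]},
    ]
}
```

Running the check flags only the second statement; real audits (e.g. for SOC or ISO 27001 evidence) would add many more rules, but the shape is the same.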
Equal Employment Opportunity
Sibros is committed to a policy of equal employment opportunity. We recruit, employ, train, compensate, and promote without regard to race, color, age, sex, ancestry, marital status, religion, national origin, disability, sexual orientation, veteran status, present or past history of mental disability, genetic information or any other classification protected by state or federal law.
Designing and building infrastructure to support AWS, Azure, and GCP-based Cloud services and infrastructure.
Creating and utilizing tools to monitor our applications and services in the cloud including system health indicators, trend identification, and anomaly detection.
Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud.
Participating in on-call escalation to troubleshoot customer-facing issues
Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance.
Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime.
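Capacity forecasting of the kind described above can start as simply as fitting a line to recent usage and extrapolating. A toy sketch in Python; the usage numbers are made up for illustration:

```python
def forecast(usage, days_ahead):
    """Least-squares linear fit over daily samples, extrapolated
    `days_ahead` past the last observation (a toy capacity forecast)."""
    n = len(usage)
    mean_x = (n - 1) / 2
    mean_y = sum(usage) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(usage)) / sum(
        (x - mean_x) ** 2 for x in range(n)
    )
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + days_ahead)

# Illustrative daily storage usage in GB, growing 10 GB/day.
usage_gb = [100, 110, 120, 130, 140]
```

For this perfectly linear series, `forecast(usage_gb, 3)` extrapolates to 170 GB; real forecasts would account for noise and seasonality, but the alerting idea ("when do we hit the quota?") is the same.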
Skills
Should have several years of experience leading a DevOps team, and in planning, defining, and executing a DevOps roadmap together with the team
Familiarity with AWS cloud, JSON templates, Python, and AWS CloudFormation templates
Designing solutions using one or more AWS features, tools, and technologies such as EC2, EBS, Glacier, S3, ELB, CloudFormation, Lambda, CloudWatch, VPC, RDS, Direct Connect, AWS CLI, and REST APIs
Design and implement system architecture with the AWS cloud
Develop automation scripts: ARM templates, Ansible, Chef, Python, PowerShell
Knowledge of AWS services and cloud design patterns
Knowledge of cloud fundamentals like autoscaling and serverless
Have experience with DevOps and Infrastructure as Code: AWS environment and application automation utilizing CloudFormation and third-party tools, plus CI/CD pipeline setup.
CI experience with the following is a must: Jenkins, Bitbucket/Git, Nexus or Artifactory, SonarQube, WireMock or another mocking solution
Expert knowledge of Windows, Linux, and macOS, with at least 5-6 years of system administration experience
Should have strong skills in using Jira
Should have knowledge in managing the CI/CD pipeline on public cloud deployments using AWS
Should have strong skills in using tools like Jenkins, Docker, Kubernetes (AWS EKS, Azure AKS), and Cloudformation.
Experience in monitoring tools like Pingdom, Nagios, etc.
Experience in reverse proxy services like Nginx and Apache
Desirable: experience with Bitbucket and version control tools like Git/SVN
Experience with manual/automated testing of application deployments is desired
Experience in database technologies such as PostgreSQL, MySQL
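CloudFormation work like that listed above revolves around template documents; a template can be built as a plain Python dict and serialized to JSON. A minimal sketch; the bucket's logical ID and tag are hypothetical:

```python
import json

# A minimal CloudFormation template: one S3 bucket resource.
# The logical ID "ArtifactBucket" and the tag values are assumptions.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "Tags": [{"Key": "team", "Value": "devops"}],
            },
        }
    },
}

# Body suitable for a create-stack / deploy call.
template_body = json.dumps(template, indent=2)
```

The serialized body is what a stack-creation call or `aws cloudformation deploy` would consume.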
What you will do
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in a production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
  o Automating infra creation
  o Providing easy-to-use solutions to the engineering team
• Conduct research on, test, and implement new metrics collection systems that can be reused and applied as engineering best practices
  o Update our processes and design new processes as needed
  o Establish DevOps engineering team best practices
  o Stay current with industry trends and source new ways for our business to improve
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments
• Mentor new DevOps engineers
What you will bring
• The desire to work in a fast-paced environment
• 5+ years' experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience with maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipelines (GitHub, Bitbucket, Artifactory, etc.)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud infrastructure architectures, designs, and recommendations
  o AWS services like S3, CloudFront, Kubernetes, RDS, and data warehouses, to come up with architectures/suggestions for new use cases
• Test our system integrity, implemented designs, application developments, and other processes related to infrastructure, making improvements as needed
Good to have
• Experience with code quality tools, static or dynamic code analysis, and compliance, and undertaking and resolving issues identified from vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
Hands-on experience in the following is a must: Unix, Python, and shell scripting.
Hands-on experience in creating infrastructure on the AWS cloud platform is a must.
Must have experience in industry standard CI/CD tools like Git/BitBucket, Jenkins, Maven, Artifactory and Chef.
Must be good at these DevOps tools:
Platforms: Ubuntu, CentOS
Version Control Tools: Git, CVS
Build Tools: Maven and Gradle
CI Tools: Jenkins
Hands-on experience with Analytics tools, ELK stack.
Knowledge of Java will be an advantage.
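Working with an ELK stack, as listed above, usually involves normalizing raw application logs into structured records before indexing. A small Python sketch; the log format here is a made-up example, not any particular application's format:

```python
import re

# Parse "<timestamp> <LEVEL> <message>" lines into dicts, the kind of
# normalization typically done before shipping logs to Elasticsearch.
LOG_RE = re.compile(r"^(?P<ts>\S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def parse_line(line):
    """Return a structured record for a matching line, else None."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

record = parse_line("2024-05-01T10:00:00Z ERROR connection refused")
```

In practice a shipper like Logstash or Filebeat applies equivalent grok patterns, but the parsing idea is the same.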
Roles & Responsibilities
Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort.
Ability to help debug and optimise code and automate routine tasks.
Should have excellent communication skills
Experience in dealing with difficult situations and making decisions with a sense of urgency.
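A CI/CD flow like the one described above is essentially a DAG of stages; one way to sketch the ordering problem is with the standard library's topological sorter. The stage names and dependencies below are hypothetical:

```python
from graphlib import TopologicalSorter

# Each stage maps to the set of stages it depends on (a toy pipeline DAG).
stages = {
    "build": set(),
    "unit-test": {"build"},
    "package": {"unit-test"},
    "deploy-staging": {"package"},
    "integration-test": {"deploy-staging"},
    "deploy-prod": {"integration-test"},
}

# A valid execution order: every stage appears after its dependencies.
order = list(TopologicalSorter(stages).static_order())
```

Real CI servers compute exactly this kind of ordering (plus parallelism for independent branches) from a pipeline definition.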
Profile - Sr. DevOps Engineer
Experience: 5-8 Years
The DevOps team at Druva is chartered with developing infrastructure code that is foundational to the deployment and operations of Druva's SaaS service. The DevOps team additionally enables Druva engineers to rapidly innovate by building tools that provide a simple, fast, and robust developer experience by simulating a cloud in a box.
Our focus centers on creating tooling that streamlines development, testing, building, integration, packaging, and deployment of mutable and immutable artifacts.
DevOps engineers are involved in the full life cycle of the application. You will be responsible for the design and implementation of the application's build,
release, deployment, and configuration activities, as well as contribute to defining the deployment architecture of Druva's SaaS service.
You will automate and streamline our operations and the processes involved in those activities. You will leverage existing tools
and technologies, preferably open-source ones, to build the infrastructure applications needed to support deployment, operation, and monitoring of Druva's SaaS service.
At the same time, you won't limit yourself from building such tools whenever off-the-shelf tools aren't adequate.
You will continuously focus on improving the deployment design and troubleshoot and resolve issues in our dev, test and production environments.
Qualifications
- 5-8 years of experience in designing and developing large-scale infrastructure applications that help deploy and smoothly operate a SaaS service.
- Experience with a wide variety of open-source tools and technologies relevant to deployment on a cloud, including containers and orchestration frameworks like Docker Swarm, Kubernetes, or equivalent, is a must.
- Experience with configuration management using Salt, Puppet, Chef or equivalent
- Experience working with AWS is an added advantage.
- Strong expertise with Bash scripting, Python, or equivalent.
- Strong grasp of automation tools and ability to develop them as needed.
- Experience with continuous integration and continuous deployment (CI/CD) and associated automation
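The core primitive behind configuration management tools like Salt, Puppet, or Chef is an idempotent "ensure this state" operation. A toy sketch of one such operation in Python; this is an illustration of the idea, not any tool's actual API, and the config line used below is hypothetical:

```python
def ensure_line(path, line):
    """Append `line` to `path` unless it is already present.
    Returns True if the file was changed (idempotent, like Ansible's lineinfile)."""
    try:
        with open(path) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    if line in lines:
        return False  # desired state already holds; do nothing
    with open(path, "a") as f:
        f.write(line + "\n")
    return True
```

Calling it twice with the same arguments changes the file only once; that converge-to-desired-state property is what makes configuration runs safe to repeat.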