Requirements:
● Should have at least 2 years of DevOps experience
● Should have experience with Kubernetes (see the illustrative sketch after this list)
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
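As a rough, illustrative sketch of the day-to-day Kubernetes work this role implies, the snippet below uses the official Python client to flag pods that are not in a healthy phase. The kubeconfig context is an assumption, not a detail from this posting.

```python
# Minimal sketch: list pods that are not Running/Succeeded, using the official
# Python client (pip install kubernetes). Assumes a kubeconfig is available.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```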
We are looking to fill the role of AWS DevOps Engineer. To join our growing team, please review the list of responsibilities and qualifications.
Responsibilities:
- Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS); see the inventory sketch after this list
- Balance hardware, network, and software layers to arrive at a scalable and maintainable solution that meets requirements for uptime, performance, and functionality
- Monitor server applications and use tools and log files to troubleshoot and resolve problems
- Maintain 99.99% availability of the web and integration services
- Anticipate, identify, mitigate, and resolve issues relating to client-facing infrastructure
- Monitor, analyse, and predict trends for system performance, capacity, efficiency, and reliability, and recommend enhancements to better meet client SLAs and standards
- Research and recommend innovative and automated approaches for system administration and DevOps tasks
- Deploy and decommission client environments for multi-tenant and single-tenant hosted applications, following established processes and procedures and updating them as needed
- Follow and develop CPA change control processes for modifications to systems and associated components
- Practice configuration management, including maintenance of component inventory and related documentation per company policies and procedures
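As one hedged example of the AWS tooling referenced above, the sketch below uses boto3 to build a quick inventory of running EC2 instances. The region and the reliance on a Name tag are placeholders, not details from this posting.

```python
# Minimal sketch: inventory running EC2 instances with their Name tag.
# Assumes AWS credentials are configured; the region is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            print(instance["InstanceId"], instance["InstanceType"], tags.get("Name", "-"))
```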
Qualifications:
- Git/GitHub version control tools
- Linux and/or Windows virtualisation (VMware, Xen, KVM, VirtualBox)
- Cloud computing (AWS, Google App Engine, Rackspace Cloud)
- Application Servers, servlet containers and web servers (WebSphere, Tomcat)
- Bachelor's/Master's degree and 2+ years of experience in software development
- Must have experience with AWS VPC networking and security
Objectives:
- Building and setting up new development tools and infrastructure
- Working on ways to automate and improve development and release processes
- Testing code written by others and analyzing results
- Ensuring that systems are safe and secure against cybersecurity threats
- Identifying technical problems and developing software updates and ‘fixes’
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Planning out projects and being involved in project management decisions
Daily and Monthly Responsibilities:
- Deploy updates and fixes
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors (a log-triage sketch follows this list)
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
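To give a flavour of the root-cause-analysis scripting mentioned above, here is a small, hypothetical log-triage sketch. The log file name and the ERROR line format are assumptions rather than anything specified in the posting.

```python
# Minimal sketch: count the most frequent ERROR signatures in a log file.
# The file name and the log format are hypothetical.
import re
from collections import Counter
from pathlib import Path

ERROR_RE = re.compile(r"ERROR\s+(?P<source>[\w.]+):\s*(?P<message>.+)")

def summarize_errors(log_path: str, top_n: int = 10) -> None:
    counts = Counter()
    for line in Path(log_path).read_text().splitlines():
        match = ERROR_RE.search(line)
        if match:
            counts[(match["source"], match["message"].strip())] += 1
    for (source, message), count in counts.most_common(top_n):
        print(f"{count:5d}  {source}  {message}")

if __name__ == "__main__":
    summarize_errors("app.log")
```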
Skills and Qualifications:
- BSc or higher degree in Computer Science, Software Engineering, or a relevant field
- 3+ years of experience as a DevOps Engineer or similar software engineering role
- Proficient with git and git workflows
- Good logical skills and knowledge of programming concepts (OOP, data structures)
- Working knowledge of databases and SQL
- Problem-solving attitude
- Collaborative team spirit
- 5+ years of experience in DevOps including automated system configuration, application deployment, and infrastructure-as-code.
- Advanced Linux system administration abilities.
- Real-world experience managing large-scale AWS or GCP environments. Multi-account management a plus.
- Experience with managing production environments on AWS or GCP.
- Solid understanding of CI/CD pipelines using GitHub, CircleCI/Jenkins, and JFrog Artifactory/Nexus.
- Experience with configuration management tools such as Ansible, Puppet, or Chef is a must.
- Experience in at least one scripting language (Shell, Python, etc.).
- Experience in containerization using Docker and orchestration using Kubernetes/EKS/GKE is a must (a container-status sketch follows this list).
- Solid understanding of SSL and DNS.
- Experience deploying and running open-source monitoring/graphing solutions such as Prometheus and Grafana.
- Basic understanding of networking concepts.
- Always adhere to security best practices.
- Knowledge of big data (Hadoop/Druid) systems administration is a plus.
- Knowledge of managing and running databases (MySQL/MariaDB/PostgreSQL) is an added advantage.
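As a hedged illustration of the Docker experience listed above, the sketch below uses the Docker SDK for Python to flag containers that are not running. It assumes a local Docker daemon and is not part of the posting itself.

```python
# Minimal sketch: report containers that are not in the "running" state.
# Assumes the Docker SDK for Python (pip install docker) and a local daemon.
import docker

client = docker.from_env()

for container in client.containers.list(all=True):
    if container.status != "running":
        print(f"{container.name}: {container.status}")
```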
What you get to do
- Work with development teams to build and maintain cloud environments to specifications developed closely with multiple teams. Support and automate the deployment of applications into those environments
- Diagnose and resolve occurring, latent, and systemic reliability issues across the entire stack: hardware, software, application, and network. Work closely with development teams to troubleshoot and resolve application and service issues
- Continuously improve Conviva SaaS services and infrastructure for availability, performance and security
- Implement security best practices – primarily patching of operating systems and applications
- Automate everything. Build proactive monitoring and alerting tools (a minimal health-check sketch follows this list). Provide standards, documentation, and coaching to developers.
- Participate in 12x7 on-call rotations
- Work with third-party service/support providers for installations, support-related calls, problem resolution, etc.
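As a rough sketch of the proactive monitoring and alerting tools mentioned above, the script below polls a couple of health endpoints and exits non-zero on failure, which makes it easy to wire into cron or a pipeline. The endpoint URLs are hypothetical.

```python
# Minimal sketch: poll health endpoints and exit non-zero if any check fails.
# Endpoint URLs are hypothetical; requires the requests library.
import sys
import requests

ENDPOINTS = {
    "web": "https://example.com/healthz",
    "api": "https://api.example.com/healthz",
}

def check(name: str, url: str) -> bool:
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"[ALERT] {name}: request failed ({exc})")
        return False
    if resp.status_code != 200:
        print(f"[ALERT] {name}: HTTP {resp.status_code}")
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if all(check(n, u) for n, u in ENDPOINTS.items()) else 1)
```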
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with Public Clouds like AWS
- Operating and Monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing and managing operational support
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure.
- Expert in at least one scripting language (Python, Shell, etc.)
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, the Prometheus-Grafana stack, etc. (a minimal Prometheus query sketch follows this list)
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and handling database upgrade related issues.
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
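As a hedged example of working with the Prometheus-Grafana stack mentioned above, the sketch below queries the Prometheus HTTP API for scrape targets whose `up` metric is 0. The server address is a placeholder.

```python
# Minimal sketch: ask Prometheus which scrape targets are currently down.
# The Prometheus address is a placeholder; requires the requests library.
import requests

PROM_URL = "http://prometheus.internal:9090"

def instances_down() -> list[str]:
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up == 0"}, timeout=5)
    resp.raise_for_status()
    return [r["metric"].get("instance", "<unknown>") for r in resp.json()["data"]["result"]]

if __name__ == "__main__":
    for instance in instances_down():
        print(f"DOWN: {instance}")
```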
**THIS IS A 100% WORK FROM OFFICE ROLE**
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
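As an illustrative sketch of a CI pipeline step of the kind described here, the script below runs a couple of hypothetical build/test commands in order and fails fast on the first error. The exact commands would depend on the team's stack.

```python
# Minimal sketch: run pipeline steps sequentially and stop on the first failure.
# The commands are hypothetical placeholders for the team's real build/test steps.
import subprocess
import sys

STEPS = [
    ["python", "-m", "pytest", "-q"],
    ["docker", "build", "-t", "myapp:ci", "."],
]

def run_steps() -> int:
    for cmd in STEPS:
        print("::", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"Step failed with exit code {result.returncode}")
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_steps())
```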
ROLE and RESPONSIBILITIES:
• Understanding customer requirements and project KPIs
• Implementing various development, testing, automation tools, and IT infrastructure
• Planning the team structure, activities, and involvement in project management activities.
• Managing stakeholders and external interfaces
• Setting up tools and required infrastructure
• Defining and setting development, test, release, update, and support processes for DevOps operation
• Have the technical skill to review, verify, and validate the software code developed in the project.
• Applying troubleshooting techniques and fixing code bugs
• Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes to improve them and minimize waste
• Encouraging and building automated processes wherever possible
• Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
• Incident management and root cause analysis
• Coordination and communication within the team and with customers
• Selecting and deploying appropriate CI/CD tools
• Strive for continuous improvement and build a continuous integration, continuous delivery, and continuous deployment (CI/CD) pipeline
• Mentoring and guiding the team members
• Monitoring and measuring customer experience and KPIs
• Managing periodic reporting on the progress to the management and the customer
Essential Skills and Experience
Technical Skills:
• Proven 3+ years of experience in a DevOps role
• A bachelor’s degree or higher qualification in computer science
• The ability to code and script in multiple languages and automation frameworks such as Python, C#, Java, Perl, and Ruby, and to work with databases such as SQL Server, MySQL, and NoSQL stores
• An understanding of security best practices and of automating security testing and updates in CI/CD (continuous integration, continuous deployment) pipelines
• The ability to deploy monitoring and logging infrastructure using standard tooling
• Proficiency in container frameworks
• Mastery in the use of infrastructure automation toolsets like Terraform, Ansible, and command line interfaces for Microsoft Azure, Amazon AWS, and other cloud platforms
• Certification in Cloud Security
• An understanding of various operating systems
• A strong focus on automation and agile development
• Excellent communication and interpersonal skills
• An ability to work in a fast-paced environment and handle multiple projects simultaneously
OTHER INFORMATION
The DevOps Engineer will also be expected to demonstrate their commitment:
• to gedu values and regulations, including equal opportunities policy.
• to gedu's Social, Economic and Environmental responsibilities, minimising environmental impact in the performance of the role and actively contributing to the delivery of gedu's Environmental Policy.
• to their Health and Safety responsibilities to ensure their contribution to a safe and secure working environment for staff, students, and other visitors to the campus.
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless, microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning; a knack for benchmarking and optimization is valued
● Hiring, developing, and cultivating a high-performing and reliable cloud support team
● Building and operating complex CI/CD pipelines at scale
● Work with GCP Services, Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud Storage, Networking
● Collaborate with Product Management and Product Engineering teams to drive excellence in Google Cloud products and features.
● Ensure efficient data storage and processing functions in accordance with company security policies and cloud security best practices.
● Ensure scaled database setup and monitoring with near-zero downtime
We are looking for an experienced DevOps (Development and Operations) professional to join our growing organization. In this position, you will be responsible for finding and reporting bugs in web and mobile apps and for assisting the Sr. DevOps engineer in managing infrastructure projects and processes. Keen attention to detail, problem-solving abilities, and a solid knowledge base are essential.
As a DevOps engineer, you will work in a Kubernetes-based microservices environment.
Experience with the Microsoft Azure cloud and Kubernetes is preferred, but not mandatory.
Ultimately, you will ensure that our products, applications and systems work correctly.
Responsibilities:
- Detect and track software defects and inconsistencies
- Apply quality engineering principles throughout the Agile product lifecycle
- Handle code deployments in all environments
- Monitor metrics and develop ways to improve
- Consult with peers for feedback during testing stages
- Build, maintain, and monitor configuration standards
- Maintain day-to-day management and administration of projects
- Manage CI and CD tools with team
- Follow all best practices and procedures as established by the company
- Provide support and documentation
Required Technical and Professional Expertise
- Minimum 2+ years of DevOps experience
- Have experience in SaaS infrastructure development and Web Apps
- Experience in delivering microservices at scale; designing microservices solutions
- Proven Cloud experience/delivery of applications on Azure
- Proficient in configuration management tools such as Ansible, Terraform, Puppet, Chef, Salt, etc.
- Hands-on experience in networking/network configuration, application performance monitoring, container performance, and security.
- Understanding of Kubernetes and Python, along with scripting languages like Bash/shell
- Good to have: experience in Linux internals, Linux packaging, release engineering (branching, versioning, tagging), artifact repositories (Artifactory, Nexus), and CI/CD tooling (Concourse CI, Travis, Jenkins)
- Must be a proactive person
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work
- An intuitive individual with an ability to manage change and proven time management skills
- Proven interpersonal skills; contributes to team effort by accomplishing related results as needed
We are front runners of the technological revolution with an inexhaustible passion for technology! DevOn is the technical organization that originated from Prowareness. We are the company at the forefront of leading DevOps transformations and setting up High Performance Distributed DevOps teams with leading companies worldwide. DevOn helps market leaders to take the next step in software delivery. We consist of a dynamic team, in which personal growth is central!
About You
You have 6+ years of experience in AWS infra Automation. This is a fantastic opportunity to work in a fast-paced operations environment and to develop your career in Cloud technologies, particularly Amazon Web Services.
You will build and monitor CI/CD pipelines in the AWS cloud for a highly scalable backend application built on the Java platform. We need someone who can troubleshoot, diagnose, and rectify system and service issues.
You're cloud native and use Terraform as the key orchestration tool for infrastructure as code.
You're comfortable driving. You prefer to own your work streams and enjoy working in autonomy to progress towards your goals.
You provide incredible support to the team. You sweat the small stuff but keep the big picture in mind. You know that pair programming can give better results.
The ideal candidate:
This is a key role within our DevOps team and will involve working as part of a collaborative agile team in a shared services DevOps organization to support and deliver innovative technology solutions that directly align with the delivery of business value and enhanced customer experience. The primary objective is to provide support to Amazon Web Services hosted environment, ensure continuous availability, working closely with development teams to ensure best value for money, and effective estate management.
- Set up a CI/CD pipeline from scratch, along with integration of appropriate quality gates.
- Expert-level knowledge of the AWS cloud; provision and configure infrastructure as code using Terraform
- Build and configure Kubernetes-based infrastructure, networking policies, LBs, and cluster security. Define autoscaling and cost strategies.
- Automate the build of containerized systems with CI/CD tooling, Helm charts, and more
- Manage deployments and rollbacks of applications
- Implement monitoring and metrics with CloudWatch and New Relic (a minimal CloudWatch alarm sketch follows this list)
- Troubleshoot and optimize containerized workload deployments for clients
- Automate operational tasks, and assist in the transition to service ownership models.
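As a hedged sketch of the CloudWatch monitoring referenced above, the snippet below creates a simple CPU alarm with boto3. The instance ID, SNS topic, and region are placeholders, not details from this posting.

```python
# Minimal sketch: alarm when average CPU on one instance stays above 80% for 10 minutes.
# Instance ID, SNS topic ARN, and region are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-1",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],
)
```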
Job Summary
Creates, modifies, and maintains software applications individually or as part of a team. Provides technical leadership on a team, including training and mentoring of other team members. Provides technology and architecture direction for the team, department, and organization.
Essential Duties & Responsibilities
- Develops software applications and supporting infrastructure using established coding standards and methodologies
- Sets an example for software quality through multiple levels of automated tests, including but not limited to unit, API, end-to-end, and load tests
- Self-starter and self-organized - able to work without supervision
- Develops tooling, test harnesses and innovative solutions to understand and monitor the quality of the product
- Develops infrastructure as code to reliably deploy applications on demand or through automation
- Understands cloud managed services and builds scalable and secure applications using them
- Creates proof of concepts for new ideas that answer key questions of feasibility, desirability, and viability
- Work with other technical leaders to establish coding standards, development best practices and technology direction
- Performs thorough code reviews that promote better understanding throughout the team
- Work with architects, designers, business analysts and others to design and implement high quality software solutions
- Builds intuitive user interfaces with the end user persona in mind using front end frameworks and styling
- Assist product owners in backlog grooming, story breakdown and story estimation
- Collaborate and communicate effectively with team members and other stakeholders throughout the organization
- Document software changes for use by other engineers, quality assurance and documentation specialists
- Master the technologies, languages, and practices used by the team and project assigned
- Train others in the technologies, languages, and practices used by the team
- Troubleshoots, instruments, and debugs existing software, resolving root causes of defective behavior
- Guide the team in setting up the infrastructure in the cloud.
- Set up the security protocols for the cloud infrastructure
- Works with the team in setting up the data hub in the cloud
- Create dashboards for visibility into the various interactions between the cloud services (a minimal CloudWatch dashboard sketch follows this list)
- Other duties as assigned
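As an illustrative sketch of the cloud-service dashboards mentioned in the duties above, the snippet below publishes a one-widget CloudWatch dashboard with boto3. The dashboard name, function name, and region are assumptions, not details from this posting.

```python
# Minimal sketch: publish a one-widget CloudWatch dashboard tracking Lambda invocations.
# Dashboard name, function name, and region are placeholders.
import json
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

dashboard_body = {
    "widgets": [
        {
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "metrics": [["AWS/Lambda", "Invocations", "FunctionName", "my-service"]],
                "period": 300,
                "stat": "Sum",
                "region": "us-east-1",
                "title": "Lambda invocations",
            },
        }
    ]
}

cloudwatch.put_dashboard(DashboardName="ops-overview", DashboardBody=json.dumps(dashboard_body))
```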
Experience and Education
- BA/BS in Computer Science, a related field or equivalent work experience
Minimum Qualifications
- Mastered advanced programming concepts, including object-oriented programming
- Mastered technologies and tools utilized by team and project assigned
- Able to train others on general programming concepts and specific technologies
- Minimum 8 years’ experience developing software applications
Skills/Knowledge
- Must be expert in advanced programming skills and database technology
- Must be expert in at least one technology and/or language and proficient in multiple technologies and languages:
- (Specific languages needed will vary based on development department or project)
- .Net Core, C#, Java, SQL, JavaScript, Typescript, Python
- Additional desired skills:
- Single-Page Applications, Angular (v9), Ivy, RXJS, NGRX, HTML5, CSS/SASS, Web Components, Atomic Design
- Test First approach, Test Driven Development (TDD), Automated testing (Protractor, Jasmine), Newman Postman, artillery.io
- Microservices, Terraform, Jenkins, Jupyter Notebook, Docker, NPM, Yarn, Nuget, NodeJS, Git/Gerrit, LaunchDarkly
- Amazon Web Services (AWS), Lambda, S3, Cognito, Step Functions, SQS, IAM, Cloudwatch, Elasticache
- Database Design, Optimization, Replication, Partitioning/Sharding, NoSQL, PostgreSQL, MongoDB, DynamoDB, Elasticsearch, PySpark, Kafka
- Agile, Scrum, Kanban, DevSecOps
- Strong problem-solving skills
- Outstanding communications and interpersonal skills
- Strong organizational skills and ability to multi-task
- Ability to track software issues to successful resolution
- Ability to work in a collaborative fast paced environment
- Setting up a complex AWS data storage hub
- Well versed in setting up infrastructure security for the interactions between the planned components
- Experienced in setting up dashboards for analyzing the various operations in the AWS infra setup.
- Ability to learn new development languages quickly and apply that knowledge effectively