Role & Responsibilities
- Application Architecture: Design and implement the application environment
- Manage the configuration and operation of client-based (on-premise) computer operating systems
- Monitor the system daily and respond immediately to security or usability concerns
- Create and monitor disaster recovery (DR) plans for all servers.
- Respond to help desk requests and assign team members to resolve them
- Monitor and maintain server functionality and address security issues.
- Administer infrastructure, including firewalls, databases, malware protection software and other processes
- Automate configuration management using Ansible, Puppet, Chef, or an equivalent tool
- Manage and administer servers, networks, and applications such as DNS, FTP, and Web servers.
- Troubleshoot in-house network issues and fix them.
- Provide solutions to complex problems on the integration of various technologies
- Design plans as well as lead initiatives for the optimization and restructuring of network architecture
- Monitor the environmental conditions of data centers and cloud servers to ensure they are optimal for servers, routers, and other devices
- Collaborate with IT staff, sales, and data center managers to develop an action plan for improved operations
- Conduct inspections on power and cooling systems to ensure they are operational and efficient
- Resolve operational, infrastructure or hardware incidents in a data center and cloud servers.
- Monitor and maintain company assets
- Infra-team management and skills enhancement (training) plans and execution
Skills
- In-depth knowledge of the Linux Operating System
- Expertise in Shell and/or Python scripting
- In-depth knowledge of CI/CD tools such as Jenkins or GitLab
- Basic knowledge of monitoring tools such as Zabbix or Nagios
- Expertise in at least one major cloud provider, such as AWS, Google Cloud, or Microsoft Azure
- Strong experience with SQL and MySQL
- A working understanding of code and scripting (PHP, Python, Angular, and Node.js)
- Ability to use a wide variety of open-source technologies
- Knowledge of best practices and IT operations
- Basic experience with VMware
- Advanced knowledge of system vulnerabilities and security issues
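A check of the kind the monitoring bullet above describes is easy to prototype before wiring it into Zabbix or Nagios. The sketch below follows the Nagios plugin convention (exit 0 = OK, 1 = WARNING, 2 = CRITICAL); the 80%/90% thresholds and the root mount point are illustrative assumptions, not values from the posting.

```python
import shutil

def classify(used_pct, warn=80.0, crit=90.0):
    """Map a disk-usage percentage to a Nagios-style status label and exit code.

    Thresholds are hypothetical defaults; real deployments tune them per host.
    """
    if used_pct >= crit:
        return "CRITICAL", 2
    if used_pct >= warn:
        return "WARNING", 1
    return "OK", 0

def check_disk(path="/"):
    """Return the one-line status message and exit code for a mount point."""
    total, used, _free = shutil.disk_usage(path)
    pct = 100.0 * used / total
    label, code = classify(pct)
    return f"DISK {label} - {pct:.1f}% used on {path}", code

message, exit_code = check_disk("/")
print(message)
```

A real plugin would end with `sys.exit(exit_code)` so the Nagios scheduler can read the status from the process exit code; a Zabbix agent item would typically consume the printed value instead.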

About Excelledia Digital Innovation
- Responsible for building, managing, and maintaining deployment pipelines and developing self-service tooling for managing Git, Linux, Kubernetes, Docker, and CI/CD pipelines in cloud infrastructure
- Responsible for building and managing the DevOps agile toolchain
- Responsible for working as an integrator between developer teams and various cloud infrastructures.
- Responsibilities include helping the development team with best practices, provisioning, monitoring, troubleshooting, optimizing and tuning, and automating and improving deployment and release processes.
- Responsible for maintaining application security, with periodic tracking and upgrading of package dependencies in coordination with the respective developer teams.
- Responsible for packaging and containerizing deployable units, and for defining the deployment strategy in coordination with the developer team
- Setting up tools and required infrastructure; defining and setting up development, test, release, update, and support processes for the DevOps operation
- Responsible for documenting the process.
- Responsible for leading projects with end-to-end execution
Qualification: Bachelor of Engineering / MCA, preferably with AWS Cloud certification
Ideal Candidate -
- has 2-4 years of experience, with AWS certification and DevOps experience.
- is under 30 years of age, self-motivated, and enthusiastic.
- is interested in building a sustainable DevOps platform with maximum automation.
- is interested in learning and being challenged on a day-to-day basis.
- can take ownership of tasks and is willing to take the necessary action to get them done.
- can solve complex problems.
- is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.
Location: Remote
Job Description:
- Strong hands-on knowledge of Azure DevOps.
- Mandatory skills: Azure DevOps, Docker, Kubernetes
- Skills required: Terraform, Git, Jenkins, CI/CD pipelines, YAML, shell scripting, Python, Gradle, Maven
- Only developer-experience profiles are required; admin roles are not.
Main tasks
- Supervision of the CI/CD process for automated builds and deployments of web services, web applications, and desktop tools in cloud and container environments
- Responsibility for the operations side of a DevOps organization, especially for development at LS telcom in the context of container technology and orchestration, e.g. with Kubernetes
- Installation, operation, and monitoring of web applications in cloud data centers, both for development and testing and for operating LS's own productive cloud service
- Implementation of installations of the LS system solution, especially in the container context
- Introduction, maintenance, and improvement of installation solutions for LS development in desktop and server environments as well as in the cloud and with on-premise Kubernetes
- Maintenance of the system installation documentation and delivery of training
- Execution of internal software tests and support of involved teams and stakeholders
- Hands-on experience with Azure DevOps.
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
- Experience in software
- Installation and administration of Linux and Windows systems, including networking and firewalling aspects
- Experience with build and deployment automation using tools like Jenkins, Gradle, Argo, or similar, as well as system scripting (Bash, PowerShell, etc.)
- Interest in the operation and monitoring of applications in virtualized and containerized environments, in the cloud and on-premises
- Server environments, especially application, web, and database servers
- Knowledge of VMware/K3D/Rancher is an advantage
- Good spoken and written knowledge of English
ketteQ is a supply chain planning and automation platform. We are looking for an experienced AWS DevOps Engineer to help manage AWS infrastructure and automation. This job comes with an attractive compensation package and work-from-home and flex-time benefits. You will get to work on projects for large global brands with a highly experienced team based in the US and India. If you are a high-energy, motivated, self-starting individual, then this could be a fantastic opportunity for you. Candidates must meet the following requirements:
Duties & Responsibilities
- Deployment, automation, management, and maintenance of AWS cloud-based production system
- Build a deployment pipeline for AWS and Salesforce
- Design cloud infrastructure that is secure, scalable, and highly available on AWS
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure, and maintain AWS cloud infrastructure defined as CloudFormation templates
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
- Perform infrastructure cost analysis and optimization
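For the "infrastructure defined as CloudFormation templates" duty above, it helps to remember that a template is just structured data. Below is a minimal sketch that builds a template with a single S3 bucket resource; the logical ID and bucket name are placeholders, not anything from the posting.

```python
import json

def s3_bucket_template(logical_id, bucket_name):
    """Return a minimal CloudFormation template with one S3 bucket resource.

    The bucket name is a placeholder; in practice it must be globally unique.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Illustrative template with a single S3 bucket.",
        "Resources": {
            logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

template = s3_bucket_template("ArtifactBucket", "my-example-artifacts")
print(json.dumps(template, indent=2))
```

Saved to a file, such a template deploys with `aws cloudformation deploy --template-file template.json --stack-name <name>`; larger stacks are usually generated or validated the same way before deployment.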
Requirements
- At least 5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, Fargate, S3, CloudFormation)
- Strong understanding of how to secure AWS environments and meet compliance requirements
- Solid foundation of networking and Linux administration
- Experience with Docker, GitHub, Jenkins, CloudFormation, and deploying applications on AWS
- Ability to learn/use a wide variety of open source technologies and tools
- Database experience to help with monitoring and performance; PostgreSQL experience preferred
- AWS certification preferred
Education
- Bachelor's degree in Engineering or a related field

Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
- 2-5 years of experience in any programming language (polyglot preferred) and system operations
- Awareness of DevOps and Agile methodologies
- Proficient in leveraging CI and CD tools to automate testing and deployment
- Experience working in an agile and fast-paced environment
- Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure)
- Cloud networking knowledge: should understand VPCs, NATs, and routers
- Contributions to open source are a plus
- Good written communication skills are a must; contributions to technical blogs/whitepapers will be an added advantage
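The cloud-networking requirement above (VPCs, NATs, routers) is easy to self-test: Python's standard `ipaddress` module handles exactly the CIDR arithmetic that VPC subnet planning requires. The CIDR ranges and instance IP below are hypothetical examples, not values from the posting.

```python
import ipaddress

# Hypothetical VPC CIDR and two subnets carved out of it, as in AWS/GCP planning.
vpc = ipaddress.ip_network("10.0.0.0/16")
public_subnet = ipaddress.ip_network("10.0.1.0/24")
private_subnet = ipaddress.ip_network("10.0.2.0/24")

# Subnets must fall inside the VPC range and must not overlap each other.
assert public_subnet.subnet_of(vpc) and private_subnet.subnet_of(vpc)
assert not public_subnet.overlaps(private_subnet)

# An individual instance IP can be tested for subnet membership directly.
instance_ip = ipaddress.ip_address("10.0.2.15")
print(instance_ip in private_subnet)  # True
```

The same membership and overlap checks are handy in automation that validates security-group sources or route-table entries before applying them.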
Devops Engineer Position - 3+ years
Kubernetes, Helm - 3+ years (dev & administration)
Monitoring platform setup experience - Prometheus, Grafana
Azure/ AWS/ GCP Cloud experience - 1+ years.
Ansible/Terraform/Puppet - 1+ years
CI/CD - 3+ years
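For the Prometheus/Grafana experience listed above, the heart of a monitoring setup is a scrape target that serves metrics in Prometheus's text exposition format. In practice you would use the official `prometheus_client` library; the sketch below hand-renders a counter only to show what a scrape returns, and the metric name and labels are made up.

```python
def render_counter(name, help_text, samples):
    """Render a counter in the Prometheus text exposition format.

    `samples` maps tuples of (label, value) pairs to counter values.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples.items():
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical request counters split by HTTP method.
body = render_counter(
    "http_requests_total",
    "Total HTTP requests handled.",
    {(("method", "GET"),): 42, (("method", "POST"),): 7},
)
print(body)
```

A Prometheus server scraping an endpoint that returns this body would ingest two series of `http_requests_total`, which Grafana can then chart with a `rate()` query.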
Location: Bengaluru
Department: DevOps
We are looking for extraordinary infrastructure engineers to build a world-class cloud platform that scales to millions of users. You must have experience building key portions of a highly scalable infrastructure using Amazon AWS and should know EC2, S3, and EMR like the back of your hand. You must enjoy working in a fast-paced startup and enjoy wearing multiple hats to get the job done.
Responsibilities
● Manage the AWS server farm; own AWS infrastructure automation and support
● Own production deployments in multiple AWS environments
● Own the end-to-end backend engineering infrastructure charter, including DevOps, global deployment, and security and compliance according to the latest practices; guide the team in debugging production issues and write best-of-breed code
● Drive "engineering excellence" (defects, productivity through automation, performance of products, etc.) through clearly defined metrics
● Stay current with the latest tools, technology ideas, and methodologies; share knowledge by clearly articulating results and ideas to key decision makers
● Hire, mentor, and retain a very talented team
Requirements
● B.S. or M.S. in Computer Science or a related field (math, physics, engineering)
● 5-8 years of experience maintaining infrastructure systems / DevOps
● Enjoy working with technologies like nginx, HAProxy, Postgres, AWS, Ansible, Docker, Nagios, or Graphite
● Deployment automation experience with Puppet/Chef/Ansible/SaltStack
● Work with small, tightly knit product teams that function cohesively to move as quickly as possible
● Determination to provide reliable and fault-tolerant systems to the application developers who consume them
● Experience developing Java/C++ backend systems is a huge plus
● Be a strong team player
Preferred
● Deep working knowledge of Linux servers and networked environments
● Thorough understanding of distributed systems and the protocols they use, including TCP/IP, RESTful APIs, SQL, and NoSQL
● Experience managing a NoSQL database (Cassandra) is a huge plus
- 7-10 years of experience with secure SDLC/DevSecOps practices, such as automating security processes within the CI/CD pipeline.
- At least 4 years of experience designing and securing data lake and web applications deployed to AWS and Azure; scripting/automation skills in Python, Shell, YAML, and JSON
- At least 4 years of hands-on experience with the software development lifecycle, Agile project management (e.g. Jira, Confluence), source code management (e.g. Git), build automation (e.g. Jenkins), code linting and code quality (e.g. SonarQube), and test automation (e.g. Selenium)
- Hands-on, solid understanding of Amazon Web Services and Azure-based infrastructure and applications
- Experience writing CloudFormation templates and working with Jenkins, Kubernetes, Docker, and microservice application architecture and deployment.
- Strong know-how of VA/PT (vulnerability assessment / penetration testing) integration in the CI/CD pipeline.
- Experience in handling financial solutions & customer-facing applications
Roles
- Accelerate enterprise cloud adoption while enabling rapid and stable delivery of capabilities using continuous integration and continuous deployment principles, methodologies, and technologies
- Manage & deliver diverse cloud [AWS, Azure, GCP] DevSecOps journeys
- Identify, prototype, engineer, and deploy emerging software engineering methodologies and tools
- Maximize automation and enhance DevSecOps pipelines and other tasks
- Define and promote enterprise software engineering and DevSecOps standards, practices, and behaviors
- Operate and support a suite of enterprise DevSecOps services
- Implement security automation to shorten the loop between the development and deployment processes.
- Support project teams to adopt & integrate the DevSecOps environment
- Managing application vulnerabilities, data security, encryption, tokenization, access management, secure SDLC, SAST/DAST
- Coordinate with development and operations teams for practical automation solutions and custom flows.
- Own DevSecOps initiatives by providing objective, practical and relevant ideas, insights, and advice.
- Act as release gatekeeper, with an understanding of the OWASP Top 10 list of vulnerabilities, NIST SP 800-xx, NVD, CVSS scoring, and related concepts
- Build workflows to ensure a successful DevSecOps journey for various enterprise applications.
- Understand the strategic direction to reach business goals across multiple projects & teams
- Collaborate with development teams to understand project deliverables and promote DevSecOps culture
- Formulate & deploy cloud automation strategies and tools
Skills
- Knowledge of the DevSecOps culture and principles.
- An understanding of cloud technologies & components
- A flair for programming languages such as Shell, Python, and JavaScript
- Strong teamwork and communication skills.
- Knowledge of threat modeling and risk assessment techniques.
- Up-to-date knowledge of cybersecurity threats, current best practices, and the latest software.
- An understanding of programs such as Puppet, Chef, ThreatModeler, Checkmarx, Immunio, and Aqua.
- Strong know-how of Kubernetes, Docker, AWS, Azure-based deployments
- On-the-job learning of new programming languages, automation tools, and deployment architectures
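The release-gatekeeper responsibility in this posting often reduces to a severity policy over scanner findings. The sketch below maps CVSS v3 base scores to their qualitative ratings (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0) and applies a hypothetical "nothing above Medium ships" policy; the CVE identifiers and the policy itself are placeholders, not anything mandated here.

```python
def cvss_severity(score):
    """Map a CVSS v3 base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

def release_gate(findings, max_allowed="Medium"):
    """Pass/fail a release against a hypothetical maximum-severity policy."""
    order = ["None", "Low", "Medium", "High", "Critical"]
    limit = order.index(max_allowed)
    blockers = [f for f in findings if order.index(cvss_severity(f["score"])) > limit]
    return len(blockers) == 0, blockers

# Placeholder findings, as a scanner might report them in a CI stage.
ok, blockers = release_gate([
    {"id": "CVE-2024-0001", "score": 9.8},
    {"id": "CVE-2024-0002", "score": 4.3},
])
print(ok, [b["id"] for b in blockers])
```

Wired into a pipeline stage, a non-empty `blockers` list would fail the build and route the findings back to the owning team.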
● Develop and deliver automation software required for building & improving the functionality, reliability, availability, and manageability of applications and cloud platforms
● Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
● Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platform and tools
● Automate the development and test automation processes through CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
● Build container hosting-platform using Kubernetes
● Introduce new cloud technologies, tools, and processes to keep innovating in the commerce area to drive greater business value.
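The container-platform bullets above ultimately come down to generating and applying Kubernetes objects. As a minimal sketch, a Deployment manifest is just structured data; the name and image below are placeholders, and in an IaC workflow such manifests would normally live as YAML checked into Git.

```python
import json

def deployment_manifest(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest as plain data.

    `name` and `image` are placeholders; real manifests add probes,
    resource requests/limits, and rollout settings.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("checkout", "registry.example.com/checkout:1.0")
print(json.dumps(manifest, indent=2))
```

`kubectl apply -f -` accepts the JSON form directly (YAML is a superset of JSON), so generated manifests like this can be piped straight from a CI step.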
Skills Required:
● Excellent written and verbal communication skills and a good listener.
● Proficiency in deploying and maintaining Cloud based infrastructure services (AWS, GCP, Azure – good hands-on experience in at least one of them)
● Well versed with service-oriented architecture, cloud-based web services architecture, design patterns and frameworks.
● Good knowledge of cloud services like compute, storage, network, messaging (e.g. SNS, SQS), and automation (e.g. CFT/Terraform).
● Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
● Experience in systems management/automation tools (Puppet/Chef/Ansible, Terraform)
● Strong Linux System Admin Experience with excellent troubleshooting and problem solving skills
● Hands-on experience with languages (Bash/Python/Core Java/Scala)
● Experience with CI/CD pipeline (Jenkins, Git, Maven etc)
● Experience integrating solutions in a multi-region environment
● Self-motivated; learns quickly and delivers results with minimal supervision
● Experience with Agile/Scrum/DevOps software development methodologies.
Nice to Have:
● Experience in setting up the Elasticsearch, Logstash, Kibana (ELK) stack.
● Experience working with large-scale data.
● Experience with monitoring tools such as Splunk, Nagios, Grafana, DataDog, etc.
● Previous experience working with distributed architectures like Hadoop, MapReduce, etc.

