
• Must have strong work experience with cloud environments: AWS / Azure / GCP
• Must have strong work experience (2+ years) developing IaC (e.g., Terraform)
• Must have strong work experience in Ansible development and deployment.
• Bachelor’s degree with a background in math will be a PLUS.
• Must have 8+ years of experience with a mix of Linux and Windows systems in a medium-to-large business environment.
• Must have command-line fluency and shell scripting experience in a mix of Linux and Windows environments.
• Must enjoy working in small, fast-paced teams.
• Identify opportunities for improvement in existing processes and automate them using Ansible flows (a minimal sketch follows this list).
• Fine-tune performance and resolve operational issues that arise with automation flows.
• Experience administering container management systems like Kubernetes would be a plus.
• Certification in Red Hat or any other Linux variant will be a BIG PLUS.
• Fluent in the use of Microsoft Office Applications (Outlook / Word / Excel).
• Possess a strong aptitude for automating standard/routine tasks and completing them on time.
• Experience with automation and configuration control systems like Puppet or Chef is a plus.
• Experience with Docker and Kubernetes (or an equivalent container orchestration platform) is nice to have.
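To make the Ansible requirement above concrete, here is a minimal Python sketch that wraps the ansible-playbook CLI to automate a routine task; it assumes Ansible is installed, and the playbook and inventory names (site.yml, inventory.ini) are placeholders rather than files from this posting.

    # Minimal sketch: run an Ansible playbook from Python (assumes Ansible is installed).
    import subprocess
    import sys

    def run_playbook(playbook: str, inventory: str) -> int:
        """Run ansible-playbook and return its exit code."""
        # ansible-playbook exits non-zero if any host fails or is unreachable.
        result = subprocess.run(
            ["ansible-playbook", "-i", inventory, playbook],
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        # site.yml and inventory.ini are hypothetical placeholder names.
        sys.exit(run_playbook("site.yml", "inventory.ini"))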

Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub Actions/Jenkins, Kubernetes, Helm, and ArgoCD.
• Deploy and manage Kubernetes clusters on AWS (see the pod health-check sketch after this list).
• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in on-call rotation and provide support for production issues.
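As an informal illustration of day-to-day cluster management, the sketch below reports non-Running pods using the official Kubernetes Python client (pip install kubernetes); it assumes a reachable cluster and a valid kubeconfig, and the "default" namespace is just an example.

    # Minimal sketch: list pods that are not in the Running phase.
    from kubernetes import client, config

    def report_unhealthy_pods(namespace: str = "default") -> None:
        config.load_kube_config()  # reads ~/.kube/config
        v1 = client.CoreV1Api()
        for pod in v1.list_namespaced_pod(namespace).items:
            if pod.status.phase != "Running":
                print(f"{pod.metadata.name}: {pod.status.phase}")

    if __name__ == "__main__":
        report_unhealthy_pods()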
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering or a related field.
• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS (a minimal boto3 sketch follows this list).
• Experience with monitoring tools such as Datadog and ELK.
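As a small illustration of working with a cloud provider such as AWS, the sketch below lists EC2 instances with boto3 (pip install boto3); it assumes AWS credentials are already configured, and the region is an arbitrary example.

    # Minimal sketch: list EC2 instance IDs and states.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])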
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
Kindly apply at https://www.wohlig.com/careers
LogiNext is looking for a technically savvy and passionate Junior DevOps Engineer to support the development and operations efforts on the product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure.
You should have knowledge of building secure, high-performing, and scalable infrastructure, experience automating and streamlining development operations and processes, and mastery of troubleshooting and resolving issues in non-production and production environments.
Responsibilities:
• Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
• Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
• Support several Linux servers running our SaaS platform stack on AWS, Azure, GCP
• Define and build processes to identify performance bottlenecks and scaling pitfalls
• Manage robust monitoring and alerting infrastructure (see the CloudWatch sketch after this list)
• Explore new tools to improve development operations
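As a rough sketch of the monitoring and alerting responsibility above, the snippet below publishes a custom metric to AWS CloudWatch with boto3; the namespace, metric name, and value are illustrative only, not part of any real stack.

    # Minimal sketch: push a custom metric that an alarm or dashboard could use.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_data(
        Namespace="SaaS/AppHealth",      # hypothetical namespace
        MetricData=[{
            "MetricName": "QueueDepth",  # hypothetical metric
            "Value": 42.0,
            "Unit": "Count",
        }],
    )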
Requirements:
• Bachelor’s degree in Computer Science, Information Technology, or a related field
• 0 to 1 years of experience in designing and maintaining high-volume, scalable micro-services architecture on cloud infrastructure
• Knowledge of Linux/Unix administration and Python/shell scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, Azure
• Knowledge of deployment automation and Continuous Integration/Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch Monitoring, Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, caching mechanisms
• Experience in enterprise application development, maintenance, and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication skills, judgment, and decision-making skills
Kubernetes Engineer
We are looking for a DevOps Systems Engineer with experience in Docker containers, Docker Swarm, Docker Compose, Ansible, Jenkins, and other tools. As part of this role, you will apply container best practices in design, development, and implementation.
At least 4 years of experience in DevOps with knowledge of:
- Docker
- Docker Cloud / Containerization
- DevOps Best Practices
- Distributed Applications
- Deployment Architecture
- AWS experience (at a minimum)
- Exposure to Kubernetes / Serverless Architecture
Skills:
- 3-7+ years of experience in DevOps Engineering
- Strong experience with Docker containers, implementing Docker containers, and container clustering (a minimal Docker SDK sketch appears at the end of this posting)
- Experience with Docker Swarm, Docker Compose, and Docker Engine
- Experience provisioning and managing VMs (virtual machines)
- Experience with and strong knowledge of network topologies and network research
- Jenkins, BitBucket, Jira
- Ansible or other Automation Configuration Management System tools
- Scripting & Programming using languages such as BASH, Perl, Python, AWK, SED, PHP, Shell
- Linux Systems Administration -: Redhat
Additional Preference:
Security, SSL configuration, Best Practices
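As an informal illustration of the Docker skills this posting lists, the sketch below drives a local Docker daemon with the Docker SDK for Python (pip install docker); the alpine image and echo command are arbitrary examples.

    # Minimal sketch: run a throwaway container and list running containers.
    import docker

    client = docker.from_env()  # connects to the local Docker daemon
    output = client.containers.run("alpine:latest", "echo hello", remove=True)
    print(output.decode().strip())
    for c in client.containers.list():
        print(c.name, c.status)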
Role Description:
● Own, deploy, configure, and manage infrastructure environment and/or applications in
both private and public cloud through cross-technology administration (OS, databases,
virtual networks), scripting, and monitoring automation execution.
● Manage incidents with a focus on service restoration.
● Act as the primary point of contact for all compute, network, storage, security, or
automation incidents/requests.
● Manage rollout of patches and release management schedule and implementation.
Technical experience:
● Strong knowledge of scripting languages such as Bash, Python, and Golang.
● Expertise in using command line tools and shells
● Strong working knowledge of Linux/UNIX and related applications
● Experience implementing DevOps practices and an inclination toward automation.
● Sound knowledge of infrastructure-as-code approaches with Puppet, Chef, Ansible, or Terraform, and Helm (preference for Terraform, Ansible, and Helm).
● Must have strong experience in technologies such as Docker, Kubernetes, OpenShift,
etc.
● Working with REST/gRPC/GraphQL APIs (a minimal REST sketch follows this list)
● Knowledge in networking, firewalls, network automation
● Experience with Continuous Delivery pipelines - Jenkins/JenkinsX/ArgoCD/Tekton.
● Experience with Git, GitHub, and related tools
● Experience in at least one public cloud provider
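As a minimal illustration of working with REST APIs, the sketch below issues a GET request with the requests library (pip install requests); the httpbin.org endpoint is a public placeholder, not an API from this role.

    # Minimal sketch: call a REST endpoint and parse the JSON response.
    import requests

    resp = requests.get("https://httpbin.org/get", params={"q": "health"}, timeout=5)
    resp.raise_for_status()  # raise on 4xx/5xx
    print(resp.json()["args"])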
Skills/Competencies
● Foundation: OS (Linux/Unix) and networking concepts and troubleshooting
● Automation: Bash or Python or Golang
● CI/CD & Config Management: Jenkin, Ansible, ArgoCD, Helm, Chef/Puppet, Git/GitHub
● Infra as a Code: Terraform
● Platform: Docker, K8s, VMs
● Databases: MySQL, PostgreSQL; datastores (MongoDB, Redis, Aerospike) good to have
● Security: Vulnerability Management and Golden Image
● Cloud: Deep working knowledge on any public cloud (GCP preferable)
● Monitoring Tools: Prometheus, Grafana, New Relic (a minimal exporter sketch follows)
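As a small illustration of the Prometheus/Grafana stack above, the sketch below exposes two toy metrics with the prometheus-client library (pip install prometheus-client); the metric names and port are arbitrary examples.

    # Minimal sketch: expose /metrics for a Prometheus server to scrape.
    import random
    import time
    from prometheus_client import Counter, Gauge, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests handled")
    QUEUE = Gauge("app_queue_depth", "Current queue depth")

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:
            REQUESTS.inc()
            QUEUE.set(random.randint(0, 10))
            time.sleep(1)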
Acceldata is creating the Data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate Data platforms at Petabyte scale. Our customers are Fortune 500 companies including Asia's largest telecom company, a unicorn fintech startup of India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.
Position Summary:
This role will support customer implementations of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation production issues. The role involves significant interaction with the client's data engineering team, so the candidate is expected to have good communication skills.
Required experience
- 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues
- Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL (a minimal data-quality check sketch follows this list)
- Experience working with technologies like S3; Kubernetes experience preferred.
- Databricks/Hadoop/Kafka experience preferred but not required
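As an informal sketch of the kind of data-quality check this role troubleshoots, the snippet below validates a row count using SQLite from the standard library so it runs anywhere; the table and threshold are illustrative and not part of the actual product.

    # Minimal sketch: a row-count data-quality check against an in-memory table.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER, payload TEXT)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", [(i, "ok") for i in range(5)])

    (count,) = conn.execute("SELECT COUNT(*) FROM events").fetchone()
    MIN_EXPECTED = 1  # e.g. a pipeline batch should never be empty
    print(f"events row count = {count}: {'PASS' if count >= MIN_EXPECTED else 'FAIL'}")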
As a DevOps Engineer with experience in Kubernetes, you will be responsible for leading and managing a team of DevOps engineers in the design, implementation, and maintenance of the organization's infrastructure. You will work closely with software developers, system administrators, and other IT professionals to ensure that the organization's systems are efficient, reliable, and scalable.
Specific responsibilities will include:
- Leading the team in the development and implementation of automation and continuous delivery pipelines using tools such as Jenkins, Terraform, and Ansible (a minimal Terraform-wrapper sketch appears at the end of this description).
- Managing the organization's infrastructure using Kubernetes, including deployment, scaling, and monitoring of applications.
- Ensuring that the organization's systems are secure and compliant with industry standards.
- Collaborating with software developers to design and implement infrastructure as code.
- Providing mentorship and technical guidance to team members.
- Troubleshooting and resolving technical issues in collaboration with other IT professionals.
- Participating in the development and maintenance of the organization's disaster recovery and incident response plans.
To be successful in this role, you should have strong leadership skills and experience with a variety of DevOps and infrastructure tools and technologies. You should also have excellent communication and problem-solving skills, and be able to work effectively in a fast-paced, dynamic environment.
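As a loose illustration of the Terraform-based automation mentioned in the responsibilities, the sketch below wraps the Terraform CLI from Python; it assumes terraform is on PATH and that the working directory holds a Terraform configuration, and it deliberately stops short of apply.

    # Minimal sketch: drive terraform init/plan from a pipeline script.
    import subprocess
    import sys

    def terraform(*args: str) -> None:
        # check=True raises CalledProcessError on a non-zero exit code.
        subprocess.run(["terraform", *args], check=True)

    if __name__ == "__main__":
        try:
            terraform("init", "-input=false")
            terraform("plan", "-input=false", "-out=tfplan")
            # A real pipeline would gate the apply step behind review/approval:
            # terraform("apply", "-input=false", "tfplan")
        except subprocess.CalledProcessError as exc:
            sys.exit(exc.returncode)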

Experience with Linux
Experience using Python or shell scripting (for automation)
Hands-on experience implementing CI/CD processes
Experience working with at least one cloud platform (AWS, Azure, or Google Cloud)
Experience working with configuration management tools such as Ansible and Chef
Experience working with the containerization tool Docker
Experience working with the container orchestration tool Kubernetes
Experience in source control management, including SVN, Bitbucket, and/or GitHub
Experience with setup and management of monitoring tools like Nagios, Sensu, and Prometheus, or other popular tools
Hands-on experience in Linux, a scripting language, and AWS is mandatory
Troubleshoot and triage development and production issues

Requirements and Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 4+ years of experience
- Excellent analytical and problem-solving skills
- Strong knowledge of Linux systems and internals
- Programming experience in Python/Shell scripting
- Strong AWS skills with knowledge of EC2, VPC, S3, RDS, CloudFront, Route 53, etc.
- Experience in containerization (Docker) and container orchestration (Kubernetes)
- Experience in DevOps & CI/CD tools such as Git, Jenkins, Terraform, Helm
- Experience with SQL and NoSQL databases such as MySQL, MongoDB, and Elasticsearch (a minimal MongoDB sketch follows this list)
- Debugging and troubleshooting skills using tools such as strace, tcpdump, etc
- Good understanding of networking protocols and security concerns (VPN, VPC, IG, NAT, AZ, Subnet)
- Experience with monitoring and data analysis tools such as Prometheus, EFK, etc
- Good communication & collaboration skills and attention to details
- Participation in rotating on-call duties
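As a tiny illustration of the NoSQL experience listed above, the sketch below writes and reads a document with pymongo (pip install pymongo); the connection string, database, and collection names are placeholders and assume a MongoDB instance on localhost.

    # Minimal sketch: insert and fetch a document from MongoDB.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=2000)
    users = client["appdb"]["users"]  # hypothetical database/collection
    users.insert_one({"name": "test", "active": True})
    print(users.find_one({"active": True}))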

- Perform architectural analysis and design enterprise-level systems.
- Design and simulate tools for the reliable delivery of systems.
- Design, develop, and maintain systems, processes, and procedures that deliver a high-quality service design.
- Work with other team members and other departments to establish healthy communication and information flow.
- Deliver a high-performing solution architecture that can support the development efforts of the business.
- Plan, design, and configure typical business solutions as needed.
- Prepare technical documents and presentations for multiple solution areas.
- Ensure that configuration management best practices are carried out as needed.
- Analyze customer specifications and recommend the best products for the platform.
Requirements
- AWS Solution Architect with 9-10 years of experience.
- Responsible for managing applications on public cloud (AWS) infrastructure.
- Responsible for large migrations of applications from VMs to cloud/cloud-native.
- Responsible for setting up monitoring for cloud/cloud-native infrastructure and applications (a minimal CloudWatch alarm sketch follows this list).
- MUST: AWS Solution Architect Professional certification.
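As a rough illustration of setting up monitoring for AWS-hosted applications, the sketch below creates a CloudWatch alarm on EC2 CPU utilization with boto3; the alarm name and threshold are illustrative examples, and AWS credentials are assumed to be configured.

    # Minimal sketch: alarm when average CPU stays above 80% for two periods.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="high-cpu-example",  # hypothetical alarm name
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Statistic="Average",
        Period=300,                    # seconds per evaluation period
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
    )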


