11+ Logstash Jobs in Chennai | Logstash Job openings in Chennai
Apply to 11+ Logstash Jobs in Chennai on CutShort.io. Explore the latest Logstash Job opportunities across top companies like Google, Amazon & Adobe.
DESIRED SKILLS AND EXPERIENCE
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
3-5 years of overall experience, with at least 1-2 years of hands-on experience designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere (a short Python-on-AWS sketch follows this list)
o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
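Purely as an illustration of the scripting-plus-cloud combination this role asks for (Python scripting against AWS), here is a minimal sketch. It assumes the boto3 SDK is installed and AWS credentials are already configured; the region and tag names are placeholders, not part of the posting.

```python
import boto3  # AWS SDK for Python

# Hypothetical region; replace with the one your infrastructure runs in.
ec2 = boto3.client("ec2", region_name="ap-south-1")

def list_running_instances():
    """Print the ID and Name tag of every running EC2 instance."""
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                print(instance["InstanceId"], tags.get("Name", "<unnamed>"))

if __name__ == "__main__":
    list_running_instances()
```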
We are looking for a highly skilled DevOps/Cloud Engineer with over 6 years of experience in infrastructure automation, cloud platforms, networking, and security. If you are passionate about designing scalable systems and love solving complex cloud and DevOps challenges—this opportunity is for you.
Key Responsibilities
- Design, deploy, and manage cloud-native infrastructure using Kubernetes (K8s), Helm, Terraform, and Ansible
- Automate provisioning and orchestration workflows for cloud and hybrid environments
- Manage and optimize deployments on AWS, Azure, and GCP for high availability and cost efficiency
- Troubleshoot and implement advanced network architectures including VPNs, firewalls, load balancers, and routing protocols
- Implement and enforce security best practices: IAM, encryption, compliance, and vulnerability management
- Collaborate with development and operations teams to improve CI/CD workflows and system observability
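As a non-authoritative illustration of the Kubernetes-facing responsibilities above, a small health-check utility built on the official Kubernetes Python client might look like the sketch below; kubeconfig access is assumed and the namespace is a placeholder.

```python
from kubernetes import client, config  # pip install kubernetes

def report_deployment_health(namespace: str = "default") -> None:
    """Print ready vs. desired replica counts for every Deployment in a namespace."""
    config.load_kube_config()  # assumes a local kubeconfig with cluster access
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        status = "OK" if ready == desired else "DEGRADED"
        print(f"{dep.metadata.name}: {ready}/{desired} ready [{status}]")

if __name__ == "__main__":
    report_deployment_health()
```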
Required Skills & Qualifications
- 6+ years of experience in DevOps, Infrastructure as Code (IaC), and cloud-native systems
- Expertise in Helm, Terraform, and Kubernetes
- Strong hands-on experience with AWS and Azure
- Solid understanding of networking, firewall configurations, and security protocols
- Experience with CI/CD tools like Jenkins, GitHub Actions, or similar
- Strong problem-solving skills and a performance-first mindset
Why Join Us?
- Work on cutting-edge cloud infrastructure across diverse industries
- Be part of a collaborative, forward-thinking team
- Flexible hybrid work model – work from anywhere while staying connected
- Opportunity to take ownership and lead critical DevOps initiatives
Hiring for the below position with one of our premium clients
Role: Senior DevOps Engineer
Exp: 7+ years
Location: Chennai
Key skills: DevOps, Cloud, Python scripting
Description:
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
7-9 years of overall experience, with at least 3-4 years of hands-on experience designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere
o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
If interested, kindly apply!
- Bachelor's degree in Computer Science or equivalent education
- At least 5 years of experience in a relevant technical position.
- Azure and/or AWS experience
- Strong in CI/CD concepts and technologies like GitOps (Argo CD)
- Hands-on experience with DevOps Tools (Jenkins, GitHub, SonarQube, Checkmarx)
- Experience with Helm Charts for package management
- Strong in Kubernetes, OpenShift, and Container Network Interface (CNI)
- Experience with programming languages and frameworks (Spring Boot, NodeJS, Python)
- Strong container image management experience using Docker and distroless concepts
- Familiarity with Shared Libraries for code reuse and modularity
- Excellent communication skills (verbal, written, and presentation)
Note: Looking for immediate joiners only.
Bachelor's degree in Computer Science or a related field, or equivalent work experience
Strong understanding of cloud infrastructure and services, such as AWS, Azure, or Google Cloud Platform
Experience with infrastructure as code tools such as Terraform or CloudFormation
Proficiency in scripting languages such as Python, Bash, or PowerShell
Familiarity with DevOps methodologies and tools such as Git, Jenkins, or Ansible
Strong problem-solving and analytical skills
Excellent communication and collaboration skills
Ability to work independently and as part of a team
Willingness to learn new technologies and tools as required
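To illustrate the infrastructure-as-code requirement above (Terraform or CloudFormation), here is a minimal, hedged sketch that drives CloudFormation from Python via boto3; the stack name, bucket name and region are placeholders, not part of the posting.

```python
import json
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")  # placeholder region

# Minimal template: a single S3 bucket. Real templates would live in version control.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-artifact-bucket-12345"},  # placeholder name
        }
    },
}

# Create the stack; CloudFormation provisions and tracks the resources it declares.
cloudformation.create_stack(
    StackName="demo-artifact-stack",
    TemplateBody=json.dumps(template),
)
```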
As an MLOps Engineer at QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (Data pipelines).
Shape and support next generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., Circle CI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform) and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
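As one hedged example of the experiment-tracking side of the MLOps tooling mentioned in the skills above, a minimal MLflow logging sketch could look like this; the tracking URI, experiment name and metric values are placeholders rather than anything specified by the role.

```python
import mlflow  # pip install mlflow

# Hypothetical tracking server; without this, a local ./mlruns directory is used.
mlflow.set_tracking_uri("http://mlflow.example.internal:5000")
mlflow.set_experiment("churn-model")

with mlflow.start_run(run_name="baseline"):
    # Parameters and metrics would normally come from the actual training loop.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_auc", 0.87)
    mlflow.log_metric("val_loss", 0.34)
```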

DevOps Architect at Altimetrik
Experience: 10-12+ years of relevant DevOps experience
Locations: Bangalore, Chennai, Pune, Hyderabad, Jaipur.
Qualification:
• Bachelor's or advanced degree in Computer Science, Software Engineering or equivalent is required.
• Certifications in specific areas are desired
Technical Skillset (skill - proficiency level):
- Build tools (Ant or Maven) - Expert
- CI/CD tool (Jenkins or Github CI/CD) - Expert
- Cloud DevOps (AWS CodeBuild, CodeDeploy, Code Pipeline etc) or Azure DevOps. - Expert
- Infrastructure As Code (Terraform, Helm charts etc.) - Expert
- Containerization (Docker, Docker Registry) - Expert
- Scripting (linux) - Expert
- Cluster deployment (Kubernetes) & maintenance - Expert
- Programming (Java) - Intermediate
- Application Types for DevOps (Streaming like Spark, Kafka, Big data like Hadoop etc) - Expert
- Artifactory (JFrog) - Expert
- Monitoring & Reporting (Prometheus, Grafana, PagerDuty etc.) - Expert
- Ansible, MySQL, PostgreSQL - Intermediate
• Source Control (like Git, Bitbucket, Svn, VSTS etc)
• Continuous Integration (like Jenkins, Bamboo, VSTS )
• Infrastructure Automation (like Puppet, Chef, Ansible)
• Deployment Automation & Orchestration (like Jenkins, VSTS, Octopus Deploy)
• Container Concepts (Docker)
• Orchestration (Kubernetes, Mesos, Swarm)
• Cloud (like AWS, Azure, Google Cloud, OpenStack)
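As a hedged illustration of the Prometheus-oriented monitoring expertise in the skill lists above, a tiny custom exporter written with the official Python client might look like the sketch below; the port, metric name and the way the value is obtained are all assumptions for illustration.

```python
import random
import time

from prometheus_client import Gauge, start_http_server  # pip install prometheus-client

# Hypothetical gauge; in practice the value would come from a real health probe.
QUEUE_DEPTH = Gauge("demo_queue_depth", "Number of jobs waiting in the demo queue")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<host>:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # placeholder measurement
        time.sleep(15)
```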
Roles and Responsibilities
• The DevOps architect should automate processes with appropriate tools.
• Developing appropriate DevOps channels throughout the organization.
• Evaluating, implementing and streamlining DevOps practices.
• Establishing a continuous build environment to accelerate software deployment and development processes.
• Engineering general and effective processes.
• Helping operations and development teams solve their problems.
• Supervising, examining and handling technical operations.
• Providing DevOps processes and operations.
• Capacity to manage teams with a leadership attitude.
• Must possess excellent automation skills and the ability to drive initiatives to automate processes.
• Building strong cross-functional leadership skills and working together with the operations and engineering teams to make sure that systems are scalable and secure.
• Excellent knowledge of software development and software testing methodologies, along with configuration management practices in Unix and Linux-based environments.
• Possess sound knowledge of cloud-based environments.
• Experience in handling automated deployment CI/CD tools.
• Must possess excellent knowledge of infrastructure automation tools (Ansible, Chef, and Puppet).
• Hands-on experience in working with Amazon Web Services (AWS).
• Must have strong expertise in operating Linux/Unix environments and scripting languages like Python, Perl, and Shell.
• Ability to review deployment and delivery pipelines i.e., implement initiatives to minimize chances of failure, identify bottlenecks and troubleshoot issues.
• Previous experience in implementing continuous delivery and DevOps solutions.
• Experience in designing and building solutions to move data and process it.
• Must possess expertise in any of the coding languages depending on the nature of the job.
• Experience with containers and container orchestration tools (AKS, EKS, OpenShift, Kubernetes, etc)
• Experience with version control systems a must (Git an advantage)
• Belief in "Infrastructure as Code" (IaC), including experience with open-source tools such as Terraform
• Treats best practices for security as a requirement, not an afterthought
• Extensive experience with version control systems like GitLab and their use in release management, branching, merging, and integration strategies
• Experience working with Agile software development methodologies
• Proven ability to work on cross-functional Agile teams
• Mentor other engineers in best practices to improve their skills
• Creating suitable DevOps channels across the organization.
• Designing efficient practices.
• Delivering comprehensive best practices.
• Managing and reviewing technical operations.
• Ability to work independently and as part of a team.
• Exceptional communication skills, knowledge of the latest industry trends, and a highly innovative mindset
The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
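For the alerting and telemetry responsibilities above, one minimal sketch (assuming boto3; the region, instance ID and SNS topic ARN below are placeholders) is a programmatically created CloudWatch alarm.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # placeholder region

# Alarm when average CPU on one instance stays above 80% for 10 minutes,
# notifying a (hypothetical) on-call SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-alerts"],  # placeholder ARN
)
```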
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working in HIPAA / HITRUST frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.
Requirements
You will make an ideal candidate if you have:
- Experience of building a range of services in a cloud service provider
- Expert understanding of DevOps principles and Infrastructure as Code concepts and techniques
- Strong understanding of CI/CD tools (Jenkins, Ansible, GitHub)
- Managed an infrastructure that involved 50+ hosts/network
- 3+ years of Kubernetes experience & 5+ years of experience in native services such as Compute (virtual machines), Containers (AKS), Databases, DevOps, Identity, Storage & Security
- Experience in engineering solutions on a cloud foundation platform using Infrastructure as Code methods (e.g., Terraform)
- Security and compliance, e.g. IAM and cloud compliance/auditing/monitoring tools
- Customer/stakeholder focus: ability to build strong relationships with application teams, cross-functional IT and global/local IT teams
- Good leadership and teamwork skills - works collaboratively in an agile environment
- Operational effectiveness - delivers solutions that align to approved design patterns and security standards
- Excellent skills in at least one of the following: Python, Ruby, Java, JavaScript, Go, Node.js
- Experienced in full automation and configuration management
- A track record of constantly looking for ways to do things better and an excellent understanding of the mechanisms necessary to successfully implement change
- Set and achieved challenging short-, medium- and long-term goals which exceeded the standards in their field
- Excellent written and spoken communication skills; an ability to communicate with impact, ensuring complex information is articulated in a meaningful way to wide and varied audiences
- Built effective networks across business areas, developing relationships based on mutual trust and encouraging others to do the same
- A successful track record of delivering complex projects and/or programmes, utilizing appropriate techniques and tools to ensure and measure success
- A comprehensive understanding of risk management and proven experience of ensuring own/others' compliance with relevant regulatory processes
Essential Skills:
- Demonstrable cloud service provider experience - infrastructure build and configuration of a variety of services including compute, DevOps, databases, storage & security
- Demonstrable experience of Linux administration and scripting, preferably Red Hat
- Experience of working with Continuous Integration (CI), Continuous Delivery (CD) and continuous testing tools
- Experience working within an Agile environment
- Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Node.js
- Server administration (either Linux or Windows)
- Automation scripting (using tools such as Terraform, Ansible, etc.)
- Ability to quickly acquire new skills and tools
Required Skills:
- Linux & Windows Server Certification
Minimum 4 years of experience
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, Github, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem-solving skills; good verbal and written communication skills
- Good knowledge of Linux environments (Red Hat, etc.)
- Good in shell scripting
- Good to have: cloud technologies such as AWS, GCP and Azure
Your skills and experience should cover:
- 5+ years of experience with developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch and WAF (Web Application Firewall).
- Amazon Web Services (AWS) Certified Developer - Associate is required; Amazon Web Services (AWS) DevOps Engineer - Professional is preferred.
- 5+ years of experience using one or more modern programming languages (Python, Node.js).
- Hands-on experience migrating data to the AWS cloud platform.
- Experience with Scrum/Agile methodology.
- Good understanding of core AWS services, uses, and basic AWS architecture best practices (including security and scalability).
- Experience with AWS data storage tools.
- Experience configuring and implementing AWS tools such as CloudWatch and CloudTrail, and directing system logs to them for monitoring.
- Experience working with Git or similar tools.
- Ability to communicate and represent AWS recommendations and standards.
The following areas are highly advantageous:
- Experience with Docker
- Experience with PostgreSQL database
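As a hedged illustration of the data-migration and S3 experience this role calls for, a minimal boto3 sketch for copying a local directory into S3 might look like this; the bucket name and source path are placeholders, and AWS credentials are assumed to be configured.

```python
from pathlib import Path

import boto3

s3 = boto3.client("s3")

def upload_directory(local_dir: str, bucket: str, prefix: str = "") -> None:
    """Upload every file under local_dir to s3://bucket/prefix, preserving relative paths."""
    root = Path(local_dir)
    for path in root.rglob("*"):
        if path.is_file():
            key = f"{prefix}{path.relative_to(root).as_posix()}"
            s3.upload_file(str(path), bucket, key)
            print(f"uploaded {path} -> s3://{bucket}/{key}")

if __name__ == "__main__":
    # Bucket name and source directory are placeholders.
    upload_directory("/data/export", "my-migration-bucket", "legacy/")
```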




