AWS Elastic Beanstalk Jobs in Chennai


Apply to 11+ AWS Elastic Beanstalk Jobs in Chennai on CutShort.io. Explore the latest AWS Elastic Beanstalk Job opportunities across top companies like Google, Amazon & Adobe.

Chennai
4 - 8 yrs
₹5L - ₹15L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
HIPAA
+4 more
Witmer Health Technologies is looking for a Development-Security-Operations (DevSecOps) Technical Specialist to join the team in designing, developing, deploying, monitoring, and gatekeeping reliable, high-trust, HIPAA/HITRUST-compliant digital platforms for mental health.

The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
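As a rough illustration of the SLA bullet above: an uptime target translates directly into a monthly downtime "error budget" that telemetry and alerting can enforce. The sketch below is illustrative only; the SLA figures are hypothetical examples, not Witmer's actual targets.

```python
# Illustrative sketch (not part of the posting): translate an uptime
# SLA percentage into a monthly downtime "error budget" in minutes,
# which telemetry and alerting thresholds can then enforce.
# The SLA figures used below are hypothetical examples.

def downtime_budget_minutes(sla_percent: float, days: int = 30) -> float:
    """Allowed downtime, in minutes, over a period of `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

if __name__ == "__main__":
    for sla in (99.0, 99.9, 99.99):
        budget = downtime_budget_minutes(sla)
        print(f"{sla}% uptime allows {budget:.1f} min/month of downtime")
```

For example, a 99.9% monthly SLA over 30 days leaves roughly 43.2 minutes of allowed downtime.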
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working with HIPAA / HITRUST frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company that develops novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workplace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.
Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dharati Thakkar
Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
7 - 10 yrs
₹21L - ₹30L / yr
Perforce
DevOps
Git
GitHub
Python
+7 more

JOB DETAILS:

* Job Title: Specialist I - DevOps Engineering

* Industry: Global Digital Transformation Solutions Provider

* Salary: Best in Industry

* Experience: 7-10 years

* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram

 

Job Description

Job Summary:

As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.

The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.

 

Key Responsibilities:

  • Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
  • Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
  • Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
  • Define migration scope — determine how much history to migrate and plan the repository structure.
  • Manage branch renaming and repository organization for optimized post-migration workflows.
  • Collaborate with development teams to determine migration points and finalize migration strategies.
  • Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
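As an illustration of the large-file step above, one pre-migration check can be sketched as a small script that walks a cloned working tree and emits `.gitattributes` patterns for Git LFS. This is a minimal sketch, not the team's actual tooling: the 100 MB threshold matches GitHub's documented per-file limit, while the helper names (`find_oversized`, `lfs_attribute_lines`) are hypothetical.

```python
# Minimal pre-migration sketch: find files over GitHub's 100 MB
# per-file limit in a working tree and suggest .gitattributes lines
# for Git LFS tracking. Helper names are hypothetical, not the
# team's actual tooling.
import os

GITHUB_LIMIT_BYTES = 100 * 1024 * 1024  # GitHub rejects pushes containing larger files

def find_oversized(root: str, limit: int = GITHUB_LIMIT_BYTES) -> list[str]:
    """Return sorted repo-relative paths of files exceeding `limit` bytes."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip Git metadata
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) > limit:
                hits.append(os.path.relpath(path, root))
    return sorted(hits)

def lfs_attribute_lines(paths: list[str]) -> list[str]:
    """One .gitattributes line per oversized file, marking it for Git LFS."""
    return [f"{p} filter=lfs diff=lfs merge=lfs -text" for p in paths]
```

Running such a check before the migration (and `git lfs migrate` or equivalent on the flagged paths) avoids push failures after history conversion.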

 

Required Qualifications:

  • Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
  • Hands-on experience with P4-Fusion.
  • Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
  • Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
  • Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
  • Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
  • Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
  • Familiarity with CI/CD pipeline integration to validate workflows post-migration.
  • Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
  • Excellent communication and collaboration skills for cross-team coordination and migration planning.
  • Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.

 

Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools

 

Must-Haves

Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)

Wissen Technology

Posted by Sukanya Mohan
Chennai
7 - 12 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
CI/CD
+1 more

Greetings!


Wissen Technology is hiring for Kubernetes Lead/Admin.


Required:

  • 7+ years of relevant experience in Kubernetes
  • Must have hands-on experience with implementation, CI/CD pipelines, EKS architecture, ArgoCD, and StatefulSet services
  • Good to have: exposure to scripting languages
  • Should be open to working from Chennai
  • Work mode will be hybrid


Company profile:


Company Name : Wissen Technology

Group of companies in India : Wissen Technology & Wissen Infotech

Work Location - Bangalore

Website : www.wissen.com

Wissen Thought leadership : https://www.wissen.com/articles/

LinkedIn: https://www.linkedin.com/company/wissen-technology

NovacisDigital
Posted by Vidhyasagar G
Chennai
6 - 10 yrs
₹12L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Jenkins
+6 more

DevOps Lead Engineer


We are seeking a skilled DevOps Lead Engineer with 8 to 10 years of experience who can handle the entire DevOps lifecycle and is accountable for implementing the process. A DevOps Lead Engineer is responsible for automating all the manual tasks for developing and deploying code and data, in order to implement continuous deployment and continuous integration frameworks. They are also responsible for maintaining high availability of production and non-production work environments.


Essential Requirements (must have):


• Bachelor's degree, preferably in Engineering.

• Solid 5+ years of experience with AWS, DevOps, and related technologies


Skills Required:


Cloud Performance Engineering

• Performance scaling in a Micro-Services environment

• Horizontal scaling architecture

• Containerization (such as Docker) & Deployment

• Container Orchestration (such as Kubernetes) & Scaling


DevOps Automation

• End-to-end release automation.

• Solid experience in DevOps tools like Git, Jenkins, Docker, Kubernetes, Terraform, Ansible, CloudFormation, etc.

• Solid experience in infra automation (Infrastructure as Code), deployment, and implementation.

• Experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.

• Strong scripting knowledge

• Strong analytical and problem-solving skills.

• Cloud and On-prem deployments


Infrastructure Design & Provisioning

• Infra provisioning.

• Infrastructure Sizing

• Infra Cost Optimization

• Infra security

• Infra monitoring & site reliability.


Job Responsibilities:


• Responsible for creating software deployment strategies that are essential for successful deployment of software in the work environment and for providing a stable environment for quality delivery.

• The DevOps Lead Engineer is accountable for designing, building, configuring, and optimizing automation systems that help execute business web and data infrastructure platforms.

• The DevOps Lead Engineer is involved in creating technology infrastructure and automation tools, and maintaining configuration management.

• The Lead DevOps Engineer oversees and leads the activities of the DevOps team. They are accountable for conducting training sessions for juniors in the team, mentoring, and career support. They are also answerable for the architecture and technical leadership of the complete DevOps infrastructure.

US Based Product MNC

Agency job
via People First Consultants by Naveed Mohd
Chennai
7 - 13 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+10 more

DESIRED SKILLS AND EXPERIENCE

  • Strong analytical and problem-solving skills

  • Ability to work independently, learn quickly and be proactive

  • 3-5 years overall, and at least 1-2 years of hands-on experience in designing and managing DevOps Cloud infrastructure

  • Experience must include a combination of:

    ◦ Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)

    ◦ Ability to write and maintain code in at least one scripting language (Python preferred)

    ◦ Practical knowledge of shell scripting

    ◦ Cloud knowledge – AWS, VMware vSphere

    ◦ Good understanding and familiarity with Linux

    ◦ Networking knowledge – Firewalls, VPNs, Load Balancers

    ◦ Web/Application servers, Nginx, JVM environments

    ◦ Virtualization and containers – Xen, KVM, QEMU, Docker, Kubernetes, etc.

    ◦ Familiarity with logging systems – Logstash, Elasticsearch, Kibana

    ◦ Git, Jenkins, Jira

HappyFox

Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
7 - 15 yrs
₹15L - ₹20L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+9 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, visit https://www.happyfox.com/

 

Responsibilities

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research, build, and implement systems, services and tooling to improve uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Implement consistent observability, deployment and IaC setups
  • Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
  • Hire/Mentor other Infrastructure engineers and review their work to continuously ship improvements to production infrastructure and its tooling.
  • Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
  • Lead infrastructure security audits

 

Requirements

  • At least 7 years of experience in handling/building Production environments in AWS.
  • At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Experience in security hardening of infrastructure, systems and services.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK
  • Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.

 

 

Nexevo Technologies

Posted by DN Abhi
Tamil Nadu, Chennai, Coimbatore
3 - 4 yrs
₹3L - ₹6L / yr
Laravel
PHP
JavaScript
RESTful APIs
MVC Framework
+3 more

Laravel Developer Responsibilities:

  • Discussing project aims with the client and development team.
  • Designing and building web applications using Laravel.
  • Troubleshooting issues in the implementation and debug builds.
  • Working with front-end and back-end developers on projects.
  • Testing functionality for users and the backend.
  • Ensuring that integrations run smoothly.
  • Scaling projects based on client feedback.
  • Recording and reporting on work done in Laravel.
  • Maintaining web-based applications.
  • Presenting work in meetings with clients and management.

Laravel Developer Requirements:

  • A degree in programming, computer science, or a related field.
  • Experience working with PHP, performing unit testing, and managing APIs such as REST.
  • A solid understanding of application design using Laravel.
  • Knowledge of database design and querying using SQL.
  • Proficiency in HTML and JavaScript.
  • Practical experience using the MVC architecture.
  • A portfolio of applications and programs to your name.
  • Problem-solving skills and critical mindset.
  • Great communication skills.
  • The desire and ability to learn.

Note: We are hiring Tamil Nadu candidates only.
Enterprise-grade streaming integration with intelligence platform

Agency job
via Jobdost by Mamatha A
Chennai
2 - 5 yrs
₹5L - ₹15L / yr
DevOps
Docker
Kubernetes
CI/CD
Go Programming (Golang)
+13 more

Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.

Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.

  • 2-5 years of experience in any programming language (polyglot preferred) and system operations
  • Awareness of DevOps and Agile methodologies
  • Proficient in leveraging CI and CD tools to automate testing and deployment
  • Experience working in an agile and fast-paced environment
  • Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure)
  • Cloud networking knowledge: should understand VPCs, NATs, and routers
  • Contributions to open source are a plus
  • Good written communication skills are a must; contributions to technical blogs / whitepapers will be an added advantage

SquareShiftco

Posted by Gowtham M
Remote, Chennai
7 - 15 yrs
₹15L - ₹30L / yr
DevOps
Docker
Kubernetes
CI/CD
Ansible
+4 more

Requirements

You will make an ideal candidate if you have: 

  • Experience of building a range of Services in a Cloud Service provider

  • Expert understanding of DevOps principles and Infrastructure as a Code concepts and techniques

  • Strong understanding of CI/CD tools (Jenkins, Ansible, GitHub)

  • Managed an infrastructure that involved 50+ hosts/network 

  • 3+ years of Kubernetes experience & 5+ years of experience in Native services such as Compute (virtual machines), Containers (AKS), Databases, DevOps, Identity, Storage & Security

  • Experience in engineering solutions on cloud foundation platform using Infrastructure As Code methods (eg. Terraform)

  • Security and Compliance, e.g. IAM and cloud compliance/auditing/monitoring tools

  • Customer/stakeholder focus. Ability to build strong relationships with Application teams, cross functional IT and global/local IT teams

  • Good leadership and teamwork skills - Works collaboratively in an agile environment

  • Operational effectiveness - delivers solutions that align to approved design patterns and security standards

  • Excellent skills in at least one of the following: Python, Ruby, Java, JavaScript, Go, Node.js

  • Experienced in full automation and configuration management

  • A track record of constantly looking for ways to do things better and an excellent understanding of the mechanism necessary to successfully implement change

  • Set and achieved challenging short, medium and long term goals which exceeded the standards in their field

  • Excellent written and spoken communication skills; an ability to communicate with impact, ensuring complex information is articulated in a meaningful way to wide and varied audiences

  • Built effective networks across business areas, developing relationships based on mutual trust and encouraging others to do the same

  • A successful track record of delivering complex projects and/or programmes, utilizing appropriate techniques and tools to ensure and measure success

  • A comprehensive understanding of risk management and proven experience of ensuring own/others' compliance with relevant regulatory processes

 

Essential Skills :

  • Demonstrable Cloud service provider experience - infrastructure build and configurations of a variety of services including compute, devops, databases, storage & security

  • Demonstrable experience of Linux administration and scripting preferably Red Hat

  • Experience of working with Continuous Integration (CI), Continuous Delivery (CD) and continuous testing tools

  • Experience working within an Agile environment

  • Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Node.js

  • Server administration (either Linux or Windows)

  • Automation scripting (using tools such as Terraform, Ansible, etc.)

  • Ability to quickly acquire new skills and tools

Required Skills :

  • Linux & Windows Server Certification

Consulting and Product Engineering Company

Agency job
via Exploro Solutions by Sapna Prabhudesai
Hyderabad, Bengaluru (Bangalore), Pune, Chennai
8 - 12 yrs
₹7L - ₹30L / yr
DevOps
Terraform
Docker
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
+1 more

Job Description:

 

○ Develop best practices for the team, and be responsible for architecture, solutions, and documentation operations in order to meet the engineering department's quality and standards

○ Participate in production outages, handle complex issues, and work towards resolution

○ Develop custom tools and integrations with existing tools to increase engineering productivity

 

 

Required Experience and Expertise

 

○ Good knowledge of Terraform, including experience working on large TF code bases.

○ Deep understanding of Terraform best practices and writing TF modules.

○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3, and IAM. Cost-aware mindset towards cloud services.

○ Deep understanding of Kernel, Networking and OS fundamentals

NOTICE PERIOD - Max - 30 days

Omega IT Resources

Posted by Ashok Chinta
Chennai
5 - 8 yrs
₹6L - ₹12L / yr
Amazon Web Services (AWS)
DevOps
AWS Lambda
AWS CloudFormation
AWS RDS
+1 more

Your skills and experience should cover:

  • 5+ years of experience with developing, deploying, and debugging solutions on the AWS platform using AWS services such as S3, IAM, Lambda, API Gateway, RDS, Cognito, CloudTrail, CodePipeline, CloudFormation, CloudWatch, and WAF (Web Application Firewall).

  • Amazon Web Services (AWS) Certified Developer (Associate) is required; AWS Certified DevOps Engineer (Professional) is preferred.

  • 5+ years of experience using one or more modern programming languages (Python, Node.js).

  • Hands-on experience migrating data to the AWS cloud platform

  • Experience with Scrum/Agile methodology.

  • Good understanding of core AWS services, uses, and basic AWS architecture best practices (including security and scalability)

  • Experience with AWS Data Storage Tools.

  • Experience configuring and implementing AWS tools such as CloudWatch and CloudTrail, and directing system logs for monitoring.

  • Experience working with Git, or similar tools.

  • Ability to communicate and represent AWS Recommendations and Standards.
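As a minimal illustration of the Lambda and API Gateway experience listed above, here is a sketch of a Python Lambda handler using the API Gateway proxy-integration event shape. The route and greeting logic are hypothetical examples, not part of the role.

```python
# Minimal sketch of an AWS Lambda handler behind API Gateway
# (proxy integration). The event fields follow the API Gateway
# v1 proxy format; the greeting logic is purely illustrative.
import json

def lambda_handler(event, context):
    """Return an API Gateway-compatible response for GET /hello?name=..."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

API Gateway forwards the query string in `queryStringParameters` (or `null` when absent, hence the `or {}` guard), and expects a dict with `statusCode`, `headers`, and a string `body` in return.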

 

The following areas are highly advantageous:

  • Experience with Docker

  • Experience with PostgreSQL database
