AWS Elastic Beanstalk Jobs in Bangalore (Bengaluru)


Apply to 2+ AWS Elastic Beanstalk Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest AWS Elastic Beanstalk Job opportunities across top companies like Google, Amazon & Adobe.

AI Powered Logistics Company

Agency job
via Recruiting Bond by Pavan Kumar
Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹36L / yr
DevOps
Kubernetes
MongoDB
Python
Docker
+35 more

Job Title: Senior DevOps Engineer

Location: Bengaluru, India (Hybrid)

Reports to: Senior Engineering Manager


About Our Client:

Our client is a solution-based, fast-paced tech company with a team that thrives on collaboration and innovative thinking. Their IoT solutions provide real-time visibility and actionable insights for logistics and supply chain management. Cloud-based, AI-enhanced metrics coupled with patented hardware optimize processes, inform strategic decision-making, and enable intelligent supply chains without costly infrastructure.


About the role: We're looking for a passionate DevOps Engineer to optimize our software delivery and infrastructure. You'll build and maintain CI/CD pipelines for our microservices, automate infrastructure, and ensure our systems are reliable, scalable, and secure. If you thrive on enhancing performance and fostering operational excellence, this role is for you.


What You'll Do 🛠️

  • Cloud Platform Management: Administer and optimize AWS resources.
  • Billing & Cost Optimization: Monitor and optimize cloud spending, ensuring efficient billing and cost management (see the sketch after this list).
  • Containerization & Orchestration: Containerize applications and manage their deployment and orchestration.
  • Database Management: Deploy, manage, and optimize database instances and their lifecycles.
  • Authentication Solutions: Implement and manage authentication systems.
  • Backup & Recovery: Implement robust backup and disaster recovery strategies for Kubernetes clusters and databases.
  • Monitoring & Alerting: Set up and maintain robust monitoring and alerting for application and infrastructure health, and integrate it with billing dashboards.
  • Automation & Scripting: Automate repetitive tasks and infrastructure provisioning.
  • Security & Reliability: Implement best practices and ensure system performance and security across all deployments.
  • Collaboration & Support: Work closely with development teams, providing DevOps expertise and support for their application stacks.
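
To give a feel for the billing and cost-optimization work above, here is a minimal, hypothetical Python sketch (assuming boto3 is installed and AWS credentials with Cost Explorer access are configured; nothing here is prescribed by the role) that pulls yesterday's per-service AWS spend, the kind of script that might feed a cost dashboard or alert:

# Hypothetical sketch: report yesterday's AWS spend per service via Cost Explorer.
# Assumes boto3 is installed and credentials allow ce:GetCostAndUsage.
import datetime
import boto3

def daily_service_costs():
    today = datetime.date.today()
    yesterday = today - datetime.timedelta(days=1)
    ce = boto3.client("ce")
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": yesterday.isoformat(), "End": today.isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    costs = {}
    for day in response["ResultsByTime"]:
        for group in day["Groups"]:
            service = group["Keys"][0]
            costs[service] = costs.get(service, 0.0) + float(
                group["Metrics"]["UnblendedCost"]["Amount"]
            )
    return costs

if __name__ == "__main__":
    # Print services from most to least expensive.
    for service, amount in sorted(daily_service_costs().items(), key=lambda kv: -kv[1]):
        print(f"{service}: ${amount:.2f}")

A real setup would more likely push these figures into CloudWatch or Grafana and alert on thresholds rather than print them, but the underlying Cost Explorer call is the same.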


What You'll Bring 💼

  • Minimum of 4 years of experience in a DevOps or SRE role.
  • Strong proficiency in AWS Cloud, including services like Lambda, IoT Core, ElastiCache, CloudFront, and S3.
  • Solid understanding of Linux fundamentals and command-line tools.
  • Extensive experience with CI/CD tools, particularly GitLab CI.
  • Hands-on experience with Docker and Kubernetes, specifically AWS EKS.
  • Proven experience deploying and managing microservices.
  • Expertise in database deployment, optimization, and lifecycle management (MongoDB, PostgreSQL, and Redis).
  • Experience with Identity and Access Management solutions such as Keycloak.
  • Experience implementing backup and recovery solutions.
  • Familiarity with optimizing Kubernetes cluster autoscaling, ideally with Karpenter.
  • Proficiency in scripting (Python, Bash).
  • Experience with monitoring tools such as Prometheus, Grafana, AWS CloudWatch, Elastic Stack.
  • Excellent problem-solving and communication skills. 


Bonus Points ➕

  • Basic understanding of MQTT or general IoT concepts and protocols.
  • Direct experience optimizing React.js (Next.js), Node.js (Express.js, Nest.js) or Python (Flask) deployments in a containerized environment.
  • Knowledge of specific AWS services relevant to application stacks.
  • Contributions to open-source projects related to Kubernetes, MongoDB, or any of the mentioned frameworks.
  • AWS Certifications (AWS Certified DevOps Engineer, AWS Certified Solutions Architect, AWS Certified SysOps Administrator, AWS Certified Advanced Networking).


Why this role:

  • You will help build the company from the ground up, shaping our culture and having an impact from Day 1 as part of the foundational team.

NeoGenCode Technologies Pvt Ltd
Posted by Akshay Patil
Bengaluru (Bangalore)
3 - 6 yrs
₹5L - ₹14L / yr
Elasticsearch
Logstash
Kibana
AWS Elastic Beanstalk
Real-time data ingestion
+3 more

Job Title: Junior ELK Data Engineer

Experience Required: 3+ Years

Location: Bangalore (Work From Office / Hybrid as per project requirement)

Job Type: Full-Time

Joining: Immediate joiners only


Job Summary:

We are seeking a Junior ELK Data Engineer with over 3 years of hands-on experience in the Elastic Stack (Elasticsearch, Logstash, Kibana, and Beats).

The ideal candidate will help design, develop, and optimize scalable data ingestion, indexing, and visualization solutions, contributing to the development of high-performance observability and analytics platforms for real-time monitoring and analysis.


Mandatory Skills:

Elastic Stack (Elasticsearch, Logstash, Kibana, Beats), real-time data ingestion, dashboard development, log processing, search optimization, system observability.


Key Responsibilities:

  • Build and maintain data pipelines using Logstash, Beats, and Elasticsearch for real-time log ingestion and processing (see the sketch after this list).
  • Design and develop Kibana dashboards for effective visualization and alerting across various data sources.
  • Optimize indexing strategies for large-scale distributed systems to ensure high search performance and reliability.
  • Collaborate with DevOps and SRE teams to enable effective observability and monitoring solutions.
  • Analyze system performance and troubleshoot issues related to logging and monitoring pipelines.
  • Assist in configuring and maintaining ELK stack components in production and development environments.
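
As a rough, hypothetical illustration of the ingestion work above (not part of the job description): a minimal Python sketch that bulk-indexes parsed log events with the official Elasticsearch client, assuming a cluster at http://localhost:9200 and an illustrative index name logs-app. In practice this flow typically runs through Logstash or Beats pipelines rather than a hand-written script:

# Hypothetical sketch: bulk-index parsed log events into Elasticsearch.
# Assumes the "elasticsearch" Python client is installed and a cluster is
# reachable at http://localhost:9200; the index name and fields are illustrative.
from datetime import datetime, timezone
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def log_actions(lines, index="logs-app"):
    # Turn raw "LEVEL message" lines into bulk-indexing actions.
    for line in lines:
        level, _, message = line.partition(" ")
        yield {
            "_index": index,
            "_source": {
                "@timestamp": datetime.now(timezone.utc).isoformat(),
                "level": level,
                "message": message,
            },
        }

sample = ["ERROR payment-service request timed out", "INFO order 42 shipped"]
indexed, _errors = helpers.bulk(es, log_actions(sample))
print(f"indexed {indexed} documents")

Kibana dashboards and alerts are then built on top of indices like this, with mappings and lifecycle policies tuned to the expected query patterns.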

Preferred Qualifications:

  • Experience in handling distributed systems logs and metrics at scale.
  • Familiarity with scripting (Python/Shell) for data manipulation or automation.
  • Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.
  • Understanding of containerized environments like Docker/Kubernetes.