AWS Developer

at one of the Big 4 IT companies in India

Hyderabad
6 - 15 yrs
₹0L - ₹18L / yr
icon
Skills
Amazon Web Services (AWS)
AWS Lambda
API
Must Have Skills:
  1. API
  2. AWS


We need a strong Amazon Web Services (AWS) developer with experience building APIs using Lambda functions. Strong familiarity with API design and with deploying APIs on AWS is mandatory.
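For illustration, the Lambda-backed API work described here typically starts with a handler wired to an API Gateway proxy integration. A minimal sketch; the `name` parameter and response shape are illustrative, not from the posting:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration.

    Reads an optional 'name' query-string parameter and returns a JSON
    body in the shape API Gateway expects from a proxy-integrated function.
    """
    params = (event or {}).get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Deployed behind API Gateway this would answer HTTP requests; locally the handler is just a function you can invoke with a test event.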

Similar jobs

Lead DevOps

at MyOperator - VoiceTree Technologies

Founded 2010  •  Product  •  100-500 employees  •  Bootstrapped
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
CI/CD
Troubleshooting
Monitoring
Databases
Bash
Python
RCA
Linux administration
Nginx
Jenkins
Team leadership
Remote only
4 - 8 yrs
₹12L - ₹15L / yr
Job description
The role requires you to design development pipelines from the ground up, create Dockerfiles, and design and operate highly available systems in AWS cloud environments. It also involves configuration management, web services architectures, DevOps implementation, database management, backups, and monitoring.

Key responsibility area

- Ensure reliable operation of CI/CD pipelines
- Orchestrate the provisioning, load balancing, configuration, monitoring, and billing of cloud resources in a highly automated manner
- Manage logging, metrics, and alerting
- Create Bash/Python scripts for automation
- Perform root cause analysis for production errors
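The Bash/Python automation mentioned above often boils down to small glue scripts. A hedged sketch of a threshold-alerting helper; the metric names and limits are made up for illustration:

```python
def check_thresholds(metrics, limits):
    """Compare observed metrics against alert limits.

    Returns the subset of metrics that breached their limit. A toy
    version of the threshold-alerting glue scripts this role automates;
    real setups would feed the result into an alerting channel.
    """
    return {
        name: value
        for name, value in metrics.items()
        if name in limits and value >= limits[name]
    }
```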


Requirement
- 2+ years of experience as a team lead.
- Good command of Kubernetes.
- Proficient with the Linux command line and troubleshooting.
- Proficient in AWS services: deployment, monitoring, and troubleshooting of applications in AWS.
- Hands-on experience with CI tooling, preferably Jenkins.
- Proficient in deployment using Ansible.
- Knowledge of Infrastructure-as-Code tools such as Terraform and AWS CloudFormation.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Scripting languages: Bash, Python, Groovy.
- Experience with logging, monitoring, and alerting tools such as ELK (Elasticsearch, Logstash, Kibana) or Nagios; Graylog, Splunk, Prometheus, or Grafana is a plus.


Must have:
Linux, CI/CD (Jenkins), AWS, scripting (Bash/Shell, Python, Go), Nginx, Docker.
Good to have:
Configuration management (Ansible or similar), a logging stack (ELK or similar), a monitoring tool (Nagios or similar), IaC (Terraform, CloudFormation).
Job posted by
Sanmeet Singh Sahni

DevOps Engineer

at BarRaiser

Founded 2020  •  Product  •  20-100 employees  •  Raised funding
DevOps
Docker
Kubernetes
Terraform
Scripting
Amazon Web Services (AWS)
CI/CD
Python
PostgreSQL
Linux/Unix
Infrastructure management
Jenkins
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹8L - ₹15L / yr
Responsibilities
● Create and manage the company’s technology infrastructure
● Deploy updates and fixes
● Build tools to reduce occurrences of errors and improve customer experience
● Develop software to integrate with internal back-end systems
● Perform root cause analysis for production errors
● Create and maintain CI/CD systems
● Investigate and resolve technical issues
● Develop scripts to automate visualization
● Design procedures for system troubleshooting and maintenance
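Root cause analysis for production errors usually starts with triaging logs. A minimal sketch of the kind of script this role might automate; the log format and the threshold are assumptions:

```python
import re
from collections import Counter

def summarize_errors(log_lines, threshold=3):
    """Count ERROR occurrences per component and flag noisy ones.

    A toy log-triage helper for root cause analysis: scans lines for
    'ERROR <component>' and returns components at or above the threshold.
    """
    pattern = re.compile(r"ERROR\s+(\S+)")
    counts = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            counts[match.group(1)] += 1
    return {name: n for name, n in counts.items() if n >= threshold}
```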

Requirements
● 2+ years of experience as a DevOps Engineer or similar software engineering role
● At least 1 year of experience with AWS (ECS, EC2, VPC, ALB, and other services) is a must
● Excellent understanding of scripting using Ruby, Python, Perl, or Java
● Experience configuring and managing databases such as PostgreSQL and MySQL
● Working knowledge of various tools, open-source technologies
● Awareness of critical concepts in DevOps and Agile principles
● Experience with CI/CD tools like Docker, Kubernetes, Jenkins is a plus
● Strong analytical and troubleshooting skills
● Strong fundamentals on Operating systems - Linux, Ubuntu
● Experience with Terraform and configuration management systems is a plus
● Self-motivated and excellent attention to detail
● Highly customer-focused, accountable, responsive and collaborative
● Experience working on Linux based infrastructure
Job posted by
Akanksh Gupta

DevOps Engineer

at Bytequark

Founded 2020  •  Product  •  20-100 employees  •  Profitable
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Hyderabad
3 - 4 yrs
₹3L - ₹5L / yr
Roles & Responsibilities
Implementing various development, testing, automation tools, and IT infrastructure
Planning the team structure, activities, and involvement in project management activities.
Managing stakeholders and external interfaces
Setting up tools and required infrastructure
Defining and setting development, test, release, update, and support processes for DevOps operation
Have the technical skill to review, verify, and validate the software code developed in the project.
Troubleshooting techniques and fixing the code bugs
Monitoring processes across the entire lifecycle for adherence, and updating or creating processes to drive improvement and minimize waste
Encouraging and building automated processes wherever possible
Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
Incident management and root cause analysis
Coordination and communication within the team and with customers
Selecting and deploying appropriate CI/CD tools
Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment (CI/CD) pipelines
Mentoring and guiding the team members
Monitoring and measuring customer experience and KPIs
Managing periodic reporting on the progress to the management and the customer
Job posted by
Tarun M

Senior DevOps Engineer

at CoLearn

Founded 2020  •  Product  •  100-500 employees  •  Raised funding
Docker
Kubernetes
DevOps
Git
Linux/Unix
CI/CD
Python
Amazon Web Services (AWS)
Remote only
5 - 8 yrs
₹30L - ₹50L / yr

About the Company

  • 💰 Early-stage, ed-tech, funded, growing, growing fast
  • 🎯 Mission Driven: Make Indonesia competitive on a global scale
  • 🥅 Build the best educational content and technology to advance STEM education
  • 🥇 Students-First approach
  • 🇮🇩 🇮🇳 Teams in India and Indonesia

 

Skillset 🧗🏼‍♀️

  • You primarily identify as a DevOps/Infrastructure engineer and are comfortable working with systems and cloud-native services on AWS
  • You can design, implement, and maintain secure and scalable infrastructure delivering cloud-based services
  • You have experience operating and maintaining production systems in a Linux based public cloud environment
  • You are familiar with cloud-native concepts - Containers, Lambdas, Orchestration (ECS, Kubernetes)
  • You’re in love with system metrics and strive to help deliver improvements to systems all the time
  • You can think in terms of Infrastructure as Code to build tools for automating deployment, monitoring, and operations of the platform
  • You can be on-call once every few weeks to provide application support, incident management, and troubleshooting
  • You’re fairly comfortable with Git, the AWS CLI, Python, and the Docker CLI; in general, all things CLI. Oh! Bash scripting too!
  • You have high integrity, and you are reliable
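The Infrastructure-as-Code bullet above ultimately means emitting declarative templates. A toy sketch that assembles a CloudFormation-style template as plain Python data; the resource and bucket names are illustrative, and real stacks would be deployed via the AWS CLI or an SDK:

```python
import json

def make_bucket_template(bucket_name):
    """Build a minimal CloudFormation-style template for an S3 bucket.

    Demonstrates treating infrastructure as data: the template is an
    ordinary dict serialized to JSON, so it can be versioned, diffed,
    and generated programmatically.
    """
    return json.dumps({
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }, indent=2)
```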

 

What you can expect from us 👌🏼

 

☮️ Mentorship, growth, great work culture

  • Mentorship and continuous improvement are a part of the team’s DNA. We have a battle-tested robust growth framework. You will have people to look up to and people looking up to you
  • We are a people-first, high-trust, high-autonomy team
  • We live in the TDD, Pair Programming, First Principles world

 

🌏 Remote done right

  • Distributed does not mean working in isolation, feeling alone, being buried in Zoom calls
  • Our leadership team has been WFH for 10+ years now and we know how remote teams work. This will be a place to belong
  • A good balance between deep, focused work and collaborative work ⚖️

 

🖥️ Friendly, humane interview process

  • 30-minute alignment check and screening call
  • A short take-home coding assignment, no more than 2-3 hours. Time is precious
  • Pair programming interview. Collaborate, work together. No sitting behind a desk and judging
  • In-depth engineering discussion around your skills and career so far
  • System design and architecture interview for seniors

 

What we ask from you👇🏼

  • Bring your software engineering — both individual brilliance and collaborative skills
  • Bring your good nature — we're building a team that supports each other
  • Be vested or interested in the company vision
Job posted by
Saroj Sahoo

DevOps Engineer

at Planet Spark

DevOps
Docker
Chef
Terraform
Amazon Web Services (AWS)
NCR (Delhi | Gurgaon | Noida)
1 - 5 yrs
₹4L - ₹12L / yr

The AWS Cloud/DevOps Engineer will work with the engineering team, focusing on AWS infrastructure and automation. A key part of the role is championing and leading infrastructure as code. The engineer will work closely with the Manager of Operations and DevOps to build, manage, and automate our AWS infrastructure.

Duties & Responsibilities:

  • Design cloud infrastructure that is secure, scalable, and highly available on AWS
  • Work collaboratively with software engineering to define infrastructure and deployment requirements
  • Provision, configure and maintain AWS cloud infrastructure defined as code
  • Ensure configuration and compliance with configuration management tools
  • Administer and troubleshoot Linux based systems
  • Troubleshoot problems across a wide array of services and functional areas
  • Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
  • Perform infrastructure cost analysis and optimization

Qualifications:

  • At least 1-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
  • Strong understanding of how to secure AWS environments and meet compliance requirements
  • Expertise using Chef for configuration management
  • Hands-on experience deploying and managing infrastructure with Terraform
  • Solid foundation of networking and Linux administration
  • Experience with CI-CD, Docker, GitLab, Jenkins, ELK and deploying applications on AWS
  • Ability to learn/use a wide variety of open source technologies and tools
  • Strong bias for action and ownership
Job posted by
Maneesh Dhooper

DevOps Engineer

at Wheelseye Technology India Pvt Ltd.

Founded 2017  •  Product  •  100-500 employees  •  Raised funding
Kubernetes
DevOps
Ansible
Docker
ECS
Amazon Web Services (AWS)
EKS
NCR (Delhi | Gurgaon | Noida)
3 - 8 yrs
₹15L - ₹30L / yr
About WheelsEye
Logistics is complex: layered with multiple stakeholders, unorganized, almost completely offline, and riddled with deep-rooted problems. Although the industry contributes 14% of GDP, little focus has been put on it and little progress has been made in solving its problems.
Fleet owner sits at the centre of the supply chain, responsible for the most complex logistics implementation yet least enabled. WheelsEye is a logistics company, rebuilding logistics infrastructure around fleet owners.
Currently, Wheelseye is offering technology to empower truck fleet owners. Our software helps automate operations, secure fleet, save costs, improve on-time performance and streamline their business.
We are a young and energetic team of IIT graduates, with alumni from Shuttl, Snapdeal, Ola, Zomato, ClearTax, and BlackBuck bringing rich industry and technology experience. It is our constant endeavour to create and evolve data-driven software solutions that conquer logistics business problems and inspire smarter data-driven decision making.

What’s exciting about WheelsEye?
● Work on a real Indian problem of scale
● Impact lives of 5.5 cr fleet owners, drivers and their families in a meaningful way
● Different from current market players, heavily focused and built around truck owners
● Problem solving and learning oriented organization
● Audacious goals, high speed and action orientation
● Opportunity to scale the organization across country
● Opportunity to build and execute the culture
● Contribute to and become a part of the action plan for building the tech, finance and service infrastructure for logistics industry
● It’s Tough!

Key Responsibilities:
● At least 3 years of experience working as a DevOps engineer, with vast experience in systems automation, orchestration, deployment, and implementation.
● Experience with tools such as MySQL, Git, Python, shell scripting, and MongoDB.
● Demonstrated experience scaling distributed data systems such as Hadoop, Elasticsearch, and Cassandra.
● Keen understanding of monitoring solutions for all layers of web infrastructure.
● Experience with monitoring tools such as Nagios and Grafana.
● Skilled in configuring, maintaining, and securing Linux systems, and in scripting languages such as Shell and Ruby.
● Skilled in infrastructure automation tools such as Chef and Ansible.
● Ability to lead a team of 3-4 people and design their career paths.

Founders:
• Anshul Mimani (Ex-Shuttl | IIT Kharagpur)
• Manish Somani (Ex-Shuttl | IIT Roorkee)
Job posted by
Aishwarya Priyam

Devops Engineer, Plum & Empuls

at xoxoday

Founded 2012  •  Product  •  500-1000 employees  •  Raised funding
DevOps
Docker
Kubernetes
Terraform
Jenkins
Amazon Web Services (AWS)
Ansible
Remote only
4 - 6 yrs
₹20L - ₹25L / yr

What is the role?

As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.

 

Key Responsibilities

  • Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multiple cloud hosting environments.
  • Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
  • Working on Docker images and maintaining Kubernetes clusters.
  • Develop and maintain the automation scripts using Ansible or other available tools.
  • Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
  • Working on Cloud security tools to keep applications secured.
  • Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.

What are we looking for?

  • Minimum 4-6 years of experience in the IT industry.
  • Expertise in implementing and managing DevOps CI/CD pipelines.
  • Experience with DevOps automation tools; well versed in DevOps frameworks and Agile.
  • Working knowledge of scripting using Shell, Python, or Terraform, plus Ansible, Puppet, or Chef.
  • Experience with, and a good understanding of, a cloud such as AWS, Azure, or Google Cloud.
  • Knowledge of Docker and Kubernetes is required.
  • Proficient troubleshooting skills, with proven ability to resolve complex technical issues.
  • Experience working with ticketing tools.
  • Middleware or database knowledge is desirable.
  • Experience with Jira is a plus.
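The CI/CD pipeline expertise listed above can be pictured as ordered stages that gate one another. A toy Python model of that flow; real Jenkins or UCD pipelines add retries, artifacts, and notifications, and the stage names here are illustrative:

```python
def run_pipeline(stages):
    """Run named pipeline stages in order, stopping at the first failure.

    Each stage is a (name, callable) pair whose callable returns truthy
    on success. Returns the (name, ok) results for the stages that ran,
    mirroring how a CI server halts a pipeline on a failed stage.
    """
    results = []
    for name, step in stages:
        ok = bool(step())
        results.append((name, ok))
        if not ok:
            break
    return results
```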


What can you look for?

A wholesome opportunity in a fast-paced environment will enable you to juggle between concepts yet maintain the quality of content, interact, share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.



Way forward

We look forward to connecting with you. We will take around 3-5 days to review the collected applications before screening and lining up discussions with the hiring manager. We will aim to keep the overall process within a reasonable time window, and candidates will be kept informed of their feedback and application status.

Job posted by
Aishwarya Anand
AWS CloudFormation
cloud automation
site reliability
Ansible
Terraform
Amazon Web Services (AWS)
Python
JIRA
Perl
Powershell
Bash
Groovy
Remote only
5 - 11 yrs
₹10L - ₹17L / yr
  • 5+ years of software development or site reliability engineering or equivalent experience
  • Skilled at problem solving, algorithms, and data structures
  • Building tools and scripting frameworks from scratch
  • Working with Cloud Automation tools like CloudFormation, Terraform, CDK, aws-cli
  • Scripting languages like Python, Groovy, PowerShell, Bash, Perl etc.
  • Configuration automation using Ansible or equivalent tools
  • Exposure to Windows, Linux administration skills
  • Project management tools like Jira, Trello
  • Prior experience in dealing with Datastore technologies like Postgres, MySQL, SQL, DynamoDB is desirable
  • Familiarity with basic networking, security and cloud engineering concepts
  • Team player who is eager to help others to succeed through mentoring and leading by example
  • Highly collaborative with effective written and verbal communication skills
Job posted by
Mohammad Farooq Shaik

DevOps Engineer

at Zoop.one

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Kubernetes
DevOps
Docker
Amazon Web Services (AWS)
Jenkins
Ansible
Nginx
Python
Shell Scripting
NodeJS (Node.js)
Pune
2 - 4 yrs
₹5L - ₹8L / yr
Role and Responsibilities:
- Solve complex Cloud Infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly-available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD. Find bottlenecks in the software delivery pipeline. Fix bottlenecks with developers to help them deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility.
- Evaluate cutting edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks enabling them to move fast. (In simple words you'll be a bridge between tech, operations & product)

Skills required:

Must have:
- Deep understanding of open source DevOps tools.
- Scripting experience in one or more among Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, Cloud Formation, SQS, etc)
- Knowledge of distributed system deployment.
- Deployed and Orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Setup monitoring and alert systems for services using ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with Queue based systems.
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies.
- Implemented central logging.
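"Implemented caching infrastructure and policies" above refers to patterns like time-based expiry. A minimal TTL cache sketch, purely illustrative and not a production cache:

```python
import time

class TTLCache:
    """Tiny time-based cache illustrating a simple expiry policy.

    Entries expire ttl seconds after being set. The clock is injectable
    so the policy can be tested without sleeping.
    """
    def __init__(self, ttl=60.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self._store = {}

    def set(self, key, value):
        # Store the value alongside its absolute expiry time.
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires = item
        if self.clock() >= expires:
            # Lazily evict on read once the entry has expired.
            del self._store[key]
            return default
        return value
```

Production systems would typically lean on Redis or Memcached with per-key TTLs instead; this just shows the policy.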

Good to have:
- Experience dealing with PI information security.
- Experience conducting internal Audits and assisting External Audits.
- Experience implementing solutions on-premise.
- Experience with blockchain.
- Experience with Private Cloud setup.

Required Experience:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & Automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or similar version control systems.
- Deep understanding of at least one open-source distributed systems (Kafka, Redis, etc)
- Ownership attitude is a must.

What’s attractive about us?

We offer a suite of memberships and subscriptions to spice up your lifestyle. We believe in true work-life balance and satisfaction: working hard doesn’t mean clocking in extra hours, it means having the zeal to contribute the best of your talents. Our people-first culture shapes benefits that help you feel confident and happy every single day, whether you’d like to skill up, go off the grid, attend your favourite events, or be an epitome of fitness. We have you covered round and about.
  • Health Memberships 
  • Sports Subscriptions 
  • Entertainment Subscriptions 
  • Key Conferences and Event Passes
  • Learning Stipend 
  • Team Lunches and Parties 
  • Travel Reimbursements 
  • ESOPs 

That’s what we think would brighten up your personal life, as a gesture for helping us with your talents.

Join us to be a part of our exciting journey to build one Digital Identity Platform!
Job posted by
Gunjan G

Data Engineer

at Codalyze Technologies

Founded 2016  •  Products & Services  •  20-100 employees  •  Profitable
Apache Hive
Hadoop
Scala
Spark
Amazon Web Services (AWS)
Java
Python
Mumbai
3 - 9 yrs
₹5L - ₹12L / yr
Job Overview :

Your mission is to help lead the team toward creating solutions that improve the way our business is run. Your knowledge of design, development, coding, testing, and application programming will help your team raise their game, meeting your standards as well as satisfying both business and functional requirements. Your expertise across technology domains will be counted on to set strategic direction and solve complex, mission-critical problems, internally and externally. Your quest to embrace leading-edge technologies and methodologies inspires your team to follow suit.

Responsibilities and Duties :

- As a Data Engineer you will be responsible for developing data pipelines for numerous applications handling all kinds of data: structured, semi-structured, and unstructured. Big data knowledge, especially in Spark and Hive, is highly preferred.

- Work in a team and provide proactive technical oversight; advise development teams, fostering re-use, design for scale, stability, and operational efficiency of data/analytical solutions

Education level :

- Bachelor's degree in Computer Science or equivalent

Experience :

- Minimum 3+ years relevant experience working on production grade projects experience in hands on, end to end software development

- Expertise in application, data and infrastructure architecture disciplines

- Expert designing data integrations using ETL and other data integration patterns

- Advanced knowledge of architecture, design and business processes

Proficiency in :

- Modern programming languages like Java, Python, Scala

- Big Data technologies Hadoop, Spark, HIVE, Kafka

- Writing decently optimized SQL queries

- Orchestration and deployment tools like Airflow & Jenkins for CI/CD (Optional)

- Responsible for design and development of integration solutions with Hadoop/HDFS, Real-Time Systems, Data Warehouses, and Analytics solutions

- Knowledge of system development lifecycle methodologies, such as waterfall and AGILE.

- An understanding of data architecture and modeling practices and concepts, including entity-relationship diagrams, normalization, abstraction, denormalization, dimensional modeling, and metadata modeling practices.

- Experience generating physical data models and the associated DDL from logical data models.

- Experience developing data models for operational, transactional, and operational reporting, including the development of or interfacing with data analysis, data mapping, and data rationalization artifacts.

- Experience enforcing data modeling standards and procedures.

- Knowledge of web technologies, application programming languages, OLTP/OLAP technologies, data strategy disciplines, relational databases, data warehouse development and Big Data solutions.

- Ability to work collaboratively in teams and develop meaningful relationships to achieve common goals

Skills :

Must Know :

- Core big-data concepts

- Spark - PySpark/Scala

- A data integration tool such as Pentaho, NiFi, or SSIS (at least one)

- Handling of various file formats

- Cloud platform - AWS/Azure/GCP

- Orchestration tool - Airflow
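"Writing decently optimized SQL queries" from the proficiency list above mostly means pushing aggregation into the engine rather than into application code. A small sketch using Python's built-in SQLite; the schema and data are illustrative, and the same GROUP BY / ORDER BY pattern carries over to Hive or Spark SQL:

```python
import sqlite3

def top_events(rows, limit=2):
    """Load (user, event) rows into an in-memory SQLite table and
    return the most frequent events as (event, count) pairs.

    Aggregating in SQL keeps the work inside the engine, which is the
    core habit behind writing efficient queries at scale.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE events (user TEXT, event TEXT)")
    con.executemany("INSERT INTO events VALUES (?, ?)", rows)
    cur = con.execute(
        "SELECT event, COUNT(*) AS n FROM events "
        "GROUP BY event ORDER BY n DESC, event LIMIT ?",
        (limit,),
    )
    result = cur.fetchall()
    con.close()
    return result
```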
Job posted by
Aishwarya Hire