Lead DevOps Engineer
Synapsica Technologies Pvt Ltd
Posted by Human Resources
6 - 10 yrs
₹15L - ₹40L / yr
Bengaluru (Bangalore)
Skills
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
Microservices
SQL
NOSQL Databases
API

Introduction

Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni of IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while remaining affordable. Every patient has the right to know exactly what is happening in their body, without having to rely on a cryptic two-liner handed to them as a diagnosis.

Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls

 

Your Roles and Responsibilities

The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.

Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.

 

 

Primary Responsibilities

  • Provide strategies and create pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
  • Optimize and execute the CI/CD pipelines of multiple products, and promote releases to production environments in a timely manner.
  • Ensure that mission-critical applications are deployed and optimised for high availability, security and privacy compliance, and disaster recovery.
  • Strategize, implement, and verify secure coding techniques; integrate code-security tools into continuous integration.
  • Ensure the efficiency, responsiveness, scalability, and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
  • Maintain technical documentation through all stages of development.
  • Establish strong relationships with, and proactively communicate with, team members and individuals across the organisation.
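The pipeline-promotion responsibility above is essentially a gating decision: a build moves to production only when every pre-production check has passed. A minimal sketch, with hypothetical check names (not Synapsica's actual pipeline):

```python
# Minimal sketch of a release-promotion gate. The required check names and
# the release dict are illustrative, not a real pipeline configuration.

def ready_for_production(checks: dict) -> bool:
    """A release is promotable only if every gating check has passed."""
    required = {"unit_tests", "security_scan", "staging_smoke_test"}
    missing = required - checks.keys()
    if missing:
        # A check that never ran is not the same as a failed check.
        raise ValueError(f"checks not yet reported: {sorted(missing)}")
    return all(checks[name] for name in required)

release = {"unit_tests": True, "security_scan": True, "staging_smoke_test": False}
print(ready_for_production(release))  # the failed smoke test blocks promotion
```

In a real CI/CD system the same logic would live in the pipeline definition (e.g. stage dependencies) rather than a script, but the gating principle is identical.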

 

Requirements

  • Minimum of 6 years of experience with DevOps tools.
  • Working experience with Linux and container orchestration and management technologies (Docker, Kubernetes, EKS, ECS, …).
  • Hands-on experience with infrastructure-as-code solutions (CloudFormation, Terraform, Ansible, etc.).
  • Background in building and maintaining CI/CD pipelines (GitLab CI, Jenkins, CircleCI, GitHub Actions, etc.).
  • Experience with the HashiCorp stack (Vault, Packer, Nomad, etc.).
  • Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana, etc.).
  • DevOps mindset and experience with Agile/Scrum methodology.
  • Basic knowledge of storage and databases (SQL and NoSQL).
  • Good understanding of networking technologies, HAProxy, firewalling, and security.
  • Experience in security vulnerability scans and remediation.
  • Experience in API security and credentials management.
  • Experience with microservice configurations across dev/test/prod environments.
  • Ability to quickly adapt to new languages and technologies.
  • A strong team-player attitude with excellent communication skills.
  • A very high sense of ownership.
  • Deep interest and passion for technology.
  • Ability to plan projects, execute them, and meet deadlines.
  • Excellent verbal and written English communication.

About Synapsica Technologies Pvt Ltd

Founded: 2017
Type: Services
Stage: Raised funding

About

Synapsica is a HealthTech and teleradiology firm founded by alumni of AIIMS Delhi, IIT Kharagpur, and IIM Ahmedabad, with a vision to increase the accessibility of diagnostic services to every corner of the world. We are developing artificial intelligence for radiodiagnosis to aid radiologists by reducing errors, improving efficiency, and automating monotonous work. Presently, our radiologist and tech panel has some of the best talent from top institutes in the country. We use NLP, computer vision, and deep-learning-based software products to assist diagnosis in X-Ray, CT, and MRI scans. Our machine-learning automation enables workflow management at diagnostic centres, bringing efficiency to the entire process of stakeholder coordination, disease interpretation, and report generation. Through our efforts we hope to bring accountability, standardization, efficiency, and hence better patient care to the medical field.

Tech stack

Kubernetes
Amazon Web Services (AWS)
Docker
Python
Firebase
MongoDB
React.js
NodeJS (Node.js)
Machine Learning (ML)


Connect with the team

Amrita Yadav
Meenakshi Singh
Kuldeep Singh Chauhan
Rachayeta Singla
Human Resources

Company social profiles

LinkedIn · Twitter · Facebook

Similar jobs

Ride-hailing Industry
Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps
Python
Shell Scripting
Kubernetes
Terraform
+15 more

JOB DETAILS:

- Job Title: Lead DevOps Engineer
- Industry: Ride-hailing
- Experience: 6-9 years
- Working Days: 5 days/week
- Work Mode: Onsite
- Job Location: Bangalore
- CTC Range: Best in industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1. Candidate must be from a product-based or scalable app-based start-up, with experience handling large-scale production traffic.
2. Minimum 6 years of experience working as a DevOps/Infrastructure Consultant.
3. Candidate must have 2 years of experience as a lead (handling a team of at least 3 to 4 members).
4. Must own end-to-end infrastructure, from non-prod to prod environments, including self-managed DBs.
5. Candidate must have hands-on experience performing database migration from scratch.
6. Must have a firm hold on the container orchestration tool Kubernetes.
7. Should have expertise in configuration management tools like Ansible, Terraform, Chef/Puppet.
8. Understanding of programming languages like Go/Python and Java.
9. Working experience with databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.
10. Working experience on a cloud platform: AWS.
11. Candidate should have a minimum of 1.5 years of stability per organization, and a clear reason for relocation.

 

Description

Job Summary:

As a DevOps Engineer at the company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure, from non-prod to prod environments, including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep uptime above 99.99%

● Understand the bigger picture and sail through ambiguity

● Scale technology with cost and observability in mind, and manage end-to-end processes

● Understand the DevOps philosophy and evangelize its principles across the organization

● Bring strong communication and collaboration skills to break down silos
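For a sense of scale, the 99.99% uptime target above leaves a very small downtime budget. A quick back-of-the-envelope calculation (the window sizes are illustrative):

```python
# Back-of-the-envelope error budget implied by a 99.99% ("four nines")
# uptime target: only 0.01% of the window may be spent down.

def downtime_budget_minutes(availability: float, days: int) -> float:
    """Minutes of allowed downtime over a window of `days` days."""
    return (1.0 - availability) * days * 24 * 60

monthly = downtime_budget_minutes(0.9999, 30)   # ~4.32 minutes per 30-day month
yearly = downtime_budget_minutes(0.9999, 365)   # ~52.6 minutes per year
print(round(monthly, 2), round(yearly, 1))
```

That budget has to cover planned maintenance as well as incidents, which is why responsibilities like zero-downtime deployment and fast recovery matter at this availability level.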

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 6 years of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef/Puppet

● Strong problem-solving skills and the ability to write scripts in any scripting language

● Understanding of programming languages like Go/Python and Java

● Comfortable working with databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka

 

What’s there for you?

The company’s team handles everything – infra, tooling, and a bunch of self-managed databases. For example:

● 150+ microservices with an event-driven architecture across different tech stacks (Golang/Java/Node)

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data in self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Information Technology Services
Agency job
via Jobdost by Sathish Kumar
Pune
5 - 8 yrs
₹10L - ₹30L / yr
Java
Python
Javascript
Scala
Docker
+5 more
Sr. DevOps Software Engineer:

Preferred Education & Experience: Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.

• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms, viz., Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute or Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity, & Compliance (or) equivalent demonstrable cloud platform experience.
• Well-versed, with demonstrable working experience, in API management, API gateway, service mesh, identity & access management, and data protection & encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in languages they have studied.
• Well-versed in storage, networks, and storage networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as business applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
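Test Driven Development, named in the listing above, means writing a failing test before the code that satisfies it. A minimal illustration in Python (`parse_version` is a made-up example, not part of any tool listed):

```python
# Minimal TDD illustration: the tests below were (conceptually) written
# first and failed ("red"); the function is just enough code to make them
# pass ("green"). `parse_version` is hypothetical.

def parse_version(tag: str) -> tuple:
    """Turn a release tag like 'v1.4.2' into a numerically comparable tuple."""
    major, minor, patch = tag.lstrip("v").split(".")
    return (int(major), int(minor), int(patch))

# The tests that drove the implementation above:
assert parse_version("v1.4.2") == (1, 4, 2)
assert parse_version("2.0.10") == (2, 0, 10)
assert parse_version("v1.10.0") > parse_version("v1.9.9")  # numeric, not lexicographic
print("all tests pass")
```

The last assertion is the kind that catches a naive string comparison ("1.10.0" < "1.9.9" lexicographically), which is exactly the bug a test-first workflow surfaces early.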
Required Experience: 5+ Years
Job Location: Remote/Pune
Pion Global Solutions LTD
Posted by Sheela P
Mumbai
3 - 16 yrs
₹3L - ₹15L / yr
DevOps
Amazon Web Services (AWS)
Linux/Unix

Looking for a DevOps Engineer for the Mumbai location.
Webtiga Private limited
Bengaluru (Bangalore)
6 - 9 yrs
Best in industry
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
ELK

We are looking for a DevOps Lead to join our team.


Responsibilities


• A technology professional who understands software development and can solve IT operational and deployment challenges using software engineering tools and processes. This position requires an understanding of both software development (Dev) and deployment operations (Ops).

• Identify manual processes and automate them using various DevOps automation tools

• Maintain the organization’s growing cloud infrastructure

• Monitor and maintain DevOps environment stability

• Collaborate with distributed Agile teams to define technical requirements and resolve technical design issues

• Orchestrating builds and test setups using Docker and Kubernetes.

• Participate in designing and building Kubernetes, Cloud, and on-prem environments for maximum performance, reliability and scalability

• Share business and technical learnings with the broader engineering and product organization, while adapting approaches for different audiences


Requirements


• Candidates working for this position should possess at least 5 years of work experience as a DevOps Engineer.

• Candidate should have experience in ELK stack, Kubernetes, and Docker.

• Solid experience in the AWS environment.

• Should have experience in monitoring tools like DataDog or Newrelic.

• Minimum of 5 years of experience with code repository management, code merge and quality checks, continuous integration, and automated deployment & management using tools like Jenkins, SVN, Git, Sonar, and Selenium.

• Candidates must possess ample knowledge and experience in system automation, deployment, and implementation.

• Candidates must possess experience in using Linux and Jenkins, and ample experience in configuring and automating monitoring tools.

• Candidates should also possess experience with the software development process and with tools and languages like SaaS, Python, Java, MongoDB, shell scripting, PostgreSQL, and Git.

• Candidates should demonstrate knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.

• Should have experience in GitLab CI.

F5 Networks
Gopi Daggumilli
Posted by Gopi Daggumilli
Hyderabad
5 - 10 yrs
Best in industry
Docker
Kubernetes
DevOps
OpenStack
OpenShift
+16 more

POSITION SUMMARY:

We are looking for a passionate, high-energy individual to help build and manage the infrastructure network that powers the Product Development Labs for F5 Inc. The F5 Infra Engineer plays a critical role in our Product Development team by providing valuable services and tools for the F5 Hyderabad Product Development Lab. The Infra team supports both production systems and customized/flexible testing environments used by Test and Product Development teams. As an Infra Engineer, you’ll have the opportunity to work with cutting-edge technology and talented individuals. The ideal candidate will have experience in private and public cloud (AWS, Azure, GCP), OpenStack, storage, backup, VMware, KVM, Xen, and Hyper-V hypervisor server administration, plus networking and automation in a data-center operations environment at global enterprise scale with Kubernetes and OpenShift container platforms.

                                                                                                

EXPERIENCE

7–9+ years – Software Engineer III

 

PRIMARY RESPONSIBILITIES:

  • Drive the design, Project Build, Infrastructure setup, monitoring, measurements, and improvements around the quality of services Provided, Network and Virtual Instances service from OpenStack, VMware VIO, Public and private cloud and DevOps environments.

  • Work closely with customers to understand their requirements and deliver on agreed timelines.

  • Work closely with F5 architects and vendors to understand emerging technologies and F5 Product Roadmap and how they would benefit the Infra team and its users.

  • Work closely with the Team and complete the deliverables on-time

  • Consult with testers, application, and service owners to design scalable, supportable network infrastructure to meet usage requirements.

  • Assume ownership for large/complex systems projects; mentor Lab Network Engineers in the best practices for ongoing maintenance and scaling of large/complex systems.

  • Drive automation efforts for the configuration and maintainability of the public/private Cloud.  

  • Lead product selection for replacement or new technologies

  • Address user tickets in a timely manner for the covered services

  • Responsible for deploying, managing, and supporting production and pre-production environments for our core systems and services.

  • Migration and consolidations of infrastructure

  • Design and implement major service and infrastructure components.

  • Research, investigate and define new areas of technology to enhance existing service or new service directions.

  • Evaluate performance of services and infrastructure; tune, re-evaluate the design and implementation of current source code and system configuration.

  • Create and maintain scripts and tools to automate the configuration, usability and troubleshooting of the supported applications and services.

  • Ability to take ownership on activities and new initiatives.

  • Provide global infra support from India to product development teams.

  • On-call support on a rotational basis with global turnaround across time zones.

  • Vendor management for all latest hardware and software evaluations; keep systems up to date.

 

KNOWLEDGE, SKILLS AND ABILITIES:

  • Have an in-depth multi-disciplined knowledge of Storage, Compute, Network, DevOps technologies and latest cutting-edge technologies.

  • Multi-cloud - AWS, Azure, GCP, OpenStack, DevOps Operations

  • IaaS- Infrastructure as a service, Metal as service, Platform service

  • Storage – Dell EMC, NetApp, Hitachi, Qumulo and Other storage technologies

  • Hypervisors – (VMware, Hyper-V, KVM, Xen and AHV)

  • DevOps – Kubernetes, OpenShift, Docker, and other container and orchestration platforms

  • Automation – scripting experience in Python/Shell/Golang, full-stack development, and application deployment

  • Tools – Jenkins, Splunk, Kibana, Terraform, Bitbucket, Git, CI/CD configuration

  • Datacenter Operations – Racking, stacking, cable matrix, Solution Design and Solutions Architect 

  • Networking Skills –   Cisco/Arista Switches, Routers, Experience on Cable matrix design and pathing (Fiber/copper)

  • Experience in SAN/NAS storage – (EMC/Qumulo/NetApp & others)

  • Experience with Red Hat Ceph storage.

  • A working knowledge of Linux, Windows, and Hypervisor Operating Systems and virtual machine technologies

  • SME - subject matter expert for all cutting-edge technologies

  • Data-center architect professional & storage expert level certified professional experience.

  • A solid understanding of high availability systems, redundant networking and multipathing solutions

  • Proven problem resolution related to network infrastructure, judgment, negotiating and decision-making skills along with excellent written and oral communication skills.

  • A Working experience in Object – Block – File storage Technologies

  • Experience in Backup Technologies and backup administration.

  • Dell/HP/Cisco UCS server’s administration is an additional advantage.

  • Ability to quickly learn and adopt new technologies.

  • Very strong experience with, and exposure to, open-source platforms.

  • Working experience with monitoring tools: Zabbix, Nagios, Datadog, etc.

  • Working experience with bare-metal services and OS administration.

  • Working experience with cloud networking, e.g. AWS IPsec, Azure ExpressRoute, GCP VPN tunnels, etc.

  • Working experience with software-defined networking (VMware NSX, SDN, Open vSwitch, etc.)

  • A working experience with systems engineering and Linux /Unix administration

  • A working experience with Database administration experience with PostgreSQL, MySQL, NoSQL

  • A working experience with automation/configuration management using either Puppet, Chef or an equivalent

  • A working experience with DevOps Operations Kubernetes, container, Docker, and git repositories

  • Experience in Build system process and Code-inspect and delivery methodologies.

  • Knowledge on creating Operational Dashboards and execution lane.

  • Experience and knowledge on DNS, DHCP, LDAP, AD, Domain-controller services and PXE Services

  • SRE experience: responsibility for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.

  • Vendor support – OEM upgrades, coordinating technical support and troubleshooting experience.

  • Experience in handling On-call Support and hierarchy process.

  • Knowledge on scale-out and scale-in architecture.

  • Working experience in ITSM / process Management tools like ServiceNow, Jira, Jira Align.

  • Knowledge on Agile and Scrum principles

  • Working experience with ServiceNow

  • Knowledge sharing, transition experience, and self-learning behaviors.

Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
+3 more
Job Description:

About BootLabs

https://www.bootlabs.in/

-We are a Boutique Tech Consulting partner, specializing in Cloud Native Solutions. 
-We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle, and modernize infrastructure and its associated applications.
-With a product mindset, we enable start-ups and enterprises on the cloud
transformation, cloud migration, end-to-end automation and managed cloud services. 
-We are eager to research, discover, automate, adapt, empower and deliver quality solutions on time.
-We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:

Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.

  • AWS
        Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
        Data: RDS, DynamoDB, Elastic Search
        Workload: EC2, EKS, Lambda, etc.
  • Azure
        Networking: VNET, VNET Peering
        Data: Azure MySQL, Azure MSSQL, etc.
        Workload: AKS, Virtual Machines, Azure Functions
  • GCP
        Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
        Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
        Workload: GKE, Instances, App Engine, Batch, etc.

Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
Kubernetes (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking, and service mesh; has used a package manager like Helm.
Scripting experience (Bash/Python), automation in pipelines when required, system services.
Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines, and version the code.

Optional:

Experience in any programming language is not required but is appreciated.
Good experience in Git, SVN, or any other code management tool is required.
DevSecOps tools like Qualys/SonarQube/Black Duck for security scanning of artifacts, infrastructure, and code.
Observability tools (open source: Prometheus, Elasticsearch, Open Telemetry; paid: Datadog, 24/7, etc.)
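The "automation in pipelines" scripting skill above often amounts to small glue scripts like the following sketch: a post-deploy step that polls a health check until it passes or retries are exhausted. The retry policy and the simulated probe are invented for illustration.

```python
import time

# Sketch of a post-deploy pipeline step: poll a health check until it
# passes or attempts run out. The check is injected as a callable so the
# same logic works for HTTP probes, CLI probes, etc. (illustrative only).

def wait_until_healthy(check, attempts: int = 5, delay_s: float = 0.01) -> bool:
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay_s)  # a real pipeline step would back off exponentially
    return False

# Simulated service that becomes healthy on the third probe.
probes = iter([False, False, True])
print(wait_until_healthy(lambda: next(probes)))  # True
```

Returning a boolean (and letting the pipeline runner turn `False` into a failed stage via exit code) keeps the script reusable across CI systems.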
FinTech NBFC dedicated to driving Finance sector
Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹10L / yr
CI/CD
Amazon Web Services (AWS)
Kubernetes
Git
YAML
+2 more
Technical Skills:

- Knowledge of infrastructure and cloud (preferably AWS); experience with infrastructure-as-code (preferably Terraform).
- Experience with one or more scripting languages: YAML, Python, Ruby, Bash, and/or NodeJS.
- Experience with web services standards and related technologies.
- Experience working with Git or other source control and CI/CD technologies, following Agile development methodology and related Agile practices, with exposure to Agile tools.
- Preferred: experience in development associated with Kafka or big data technologies; understands essential Kafka components like ZooKeeper and brokers, and the optimization of Kafka client applications (producers & consumers).
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting.
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, Pipelines, CloudWatch, Glue, and other related services.
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers.
- Good, hands-on knowledge of various AWS services like EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform root cause analysis (RCA) on production deployments and container errors in CloudWatch.
- Working on ways to automate and improve deployment and release processes.
- High understanding of the serverless architecture concept.
- Good with deployment automation tools; investigates to resolve technical issues.
- Sound knowledge of APIs, databases, and container-based ETL jobs.
- Planning out projects and being involved in project management decisions.

Soft Skills:

- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
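The log-analysis skill for RCA listed above can be sketched as follows: group error lines by endpoint and status code so the dominant failure stands out. The log line format here is invented for illustration.

```python
import re
from collections import Counter

# Sketch of RCA-style log triage: count 5xx responses per endpoint so the
# dominant failure mode is immediately visible. The log format is made up.

LOG_LINE = re.compile(r'(?P<method>\S+) (?P<path>\S+) -> (?P<status>\d{3})')

def top_failures(lines, n=3):
    counts = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if m and m.group("status").startswith("5"):  # server-side errors only
            counts[(m.group("path"), m.group("status"))] += 1
    return counts.most_common(n)

logs = [
    "GET /orders -> 200",
    "POST /payments -> 502",
    "POST /payments -> 502",
    "GET /orders -> 500",
]
print(top_failures(logs))  # [(('/payments', '502'), 2), (('/orders', '500'), 1)]
```

In practice the same aggregation would run as a CloudWatch Logs Insights or ELK query, but the grouping idea is identical.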
Future Group
3 recruiters
Posted by Siddhi Desai
Bengaluru (Bangalore)
4 - 8 yrs
₹18L - ₹22L / yr
DevOps
Docker
Kubernetes
Terraform
Amazon Web Services (AWS)
+1 more

About the company:

Tathastu, the next-generation innovation lab, is Future Group’s initiative to provide a new-age retail experience – combining the physical with the digital and enhancing it with data. We are creating next-generation consumer interactions by combining AI/ML, data science, and emerging technologies with consumer platforms.

 

The E-Commerce vertical under Tathastu has developed online consumer platforms for Future Group’s portfolio of retail brands – Easyday, Big Bazaar, Central, Brand Factory, aLL, Clarks, and Coverstory. Backed by our network of offline stores, we have built a new retail platform that merges our online & offline retail streams. We use data to power all our decisions across our products and build internal tools to help us scale our impact with a small, closely-knit team.

 

Our widespread store network, robust logistics, and technology capabilities have made it possible to launch a ‘2-Hour Delivery Promise’ on every product across fashion, food, FMCG, and home products for orders placed online through the Big Bazaar mobile app and portal. This makes Big Bazaar the first retailer in the country to offer instant home delivery on almost every consumer product ordered online.

 

Job Responsibilities:

  • You’ll streamline and automate the software development and infrastructure management processes and play a crucial role in executing high-impact initiatives and continuously improving processes to increase the effectiveness of our platforms.
  • You’ll translate complex use cases into discrete technical solutions in platform architecture, design and coding, functionality, usability, and optimization.
  • You will drive automation in repetitive tasks, configuration management, and deliver comprehensive automated tests to debug/troubleshoot Cloud AWS-based systems and BigData applications.
  • You’ll continuously discover, evaluate, and implement new technologies to maximize the development and operational efficiency of the platforms.
  • You’ll determine the metrics that will define technical and operational success and constantly track such metrics to fine-tune the technology stack of the organization.

 

Experience: 4 to 8 Yrs

 

Qualification: B.Tech / MCA

 

Required Skills:

  • Experience with Linux/UNIX systems administration and Amazon Web Services (AWS).
  • Infrastructure as code (Terraform), Kubernetes and container orchestration, web servers (Nginx, Apache), application servers (Tomcat, Node.js, …), document stores, and relational databases (AWS RDS MySQL).
  • Site Reliability Engineering patterns and visibility /performance/availability monitoring (Cloudwatch, Prometheus)
  • Background in and happy to work hands-on with technical troubleshooting and performance tuning.
  • Supportive and collaborative personality - ability to influence and drive progress with your peers

 

Our Technology Stack:

  • Docker/Kubernetes
  • Cloud (AWS)
  • Python/GoLang Programming
  • Microservices
  • Automation Tools
Navi Technologies
Agency job
via CareerNet by Pradeep Balakrishnan (CareerNet)
Bengaluru (Bangalore)
3 - 7 yrs
₹15L - ₹40L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)

As an Infrastructure Engineer at Navi, you will be building a resilient infrastructure platform, using modern Infrastructure engineering practices.

 

You will be responsible for the availability, scaling, security, performance, and monitoring of the Navi cloud platform. You’ll be joining a team that follows best practices in infrastructure as code.

 

Your Key Responsibilities

  • Build out infrastructure components such as an API gateway, service mesh, service discovery, and a container orchestration platform like Kubernetes.
  • Develop reusable infrastructure code and testing frameworks.
  • Build meaningful abstractions that hide the complexity of provisioning modern infrastructure components.
  • Design a scalable centralized logging and metrics platform.
  • Drive solutions that reduce Mean Time To Recovery (MTTR) and enable high availability.
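Reducing MTTR starts with measuring it. A minimal, hypothetical sketch in Python that computes MTTR from incident open/resolve timestamps (the data shape is illustrative; real incident records would come from a paging or ticketing system):

```python
from datetime import datetime, timedelta

def mean_time_to_recovery(incidents):
    """MTTR = average of (resolved - opened) across incidents."""
    durations = [resolved - opened for opened, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

incidents = [
    (datetime(2022, 3, 1, 10, 0), datetime(2022, 3, 1, 10, 30)),  # 30 min
    (datetime(2022, 3, 5, 2, 15), datetime(2022, 3, 5, 3, 45)),   # 90 min
]
print(mean_time_to_recovery(incidents))  # → 1:00:00 (i.e., 60 minutes)
```

Tracking this number per service over time is what makes "drive down MTTR" an actionable goal rather than a slogan.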

What to Bring

  • Experience managing large-scale cloud infrastructure, preferably AWS and Kubernetes.
  • Experience developing applications in programming languages such as Java, Python, and Go.
  • Experience handling logs and metrics at high scale.
  • A systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
Directi
Posted by Richa Pancholy
Bengaluru (Bangalore)
2 - 8 yrs
₹10L - ₹40L / yr
Amazon Web Services (AWS)
Python
Linux/Unix
DevOps
MongoDB
+8 more
What is the Job like?

We are looking for a talented individual to join our DevOps and Platforms Engineering team. You will play an important role in helping build and run our globally distributed infrastructure stack and platforms. Technologies you can expect to work on every day include Linux, AWS, MySQL/PostgreSQL, MongoDB, Hadoop/HBase, ElasticSearch, FreeSwitch, Jenkins, Nagios, and CFEngine, amongst others.

Responsibilities:

  • Troubleshoot and fix production outages and performance issues in our AWS/Linux infrastructure stack
  • Build automation tools for provisioning and managing our cloud infrastructure by leveraging the AWS API for EC2, S3, CloudFront, RDS, and Route53, amongst others
  • Contribute to enhancing and managing our continuous delivery pipeline
  • Proactively seek out opportunities to improve monitoring and alerting of our hosts and services, and implement them in a timely fashion
  • Code scripts and tools to collect and visualize metrics from Linux hosts and JVM applications
  • Enhance and maintain our log collection, processing, and visualization infrastructure
  • Automate systems configuration by writing policies and modules for configuration management tools
  • Write both frontend (HTML/CSS/JS) and backend code (Python, Ruby, Perl)
  • Participate in periodic on-call rotations for DevOps

Skills:

  • DevOps/system administration experience ranging between 3-4 years
  • In-depth Linux/Unix knowledge; a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.)
  • DNS, TCP/IP, routing, HA, and load balancing
  • Configuration management using tools like CFEngine, Puppet, or Chef
  • SQL and NoSQL databases like MySQL, PostgreSQL, MongoDB, and HBase
  • Build and packaging tools like Jenkins and RPM/Yum
  • HA and load balancing using tools like the Elastic Load Balancer and HAProxy
  • Monitoring tools like Nagios, Pingdom, or similar
  • Log management tools like Logstash, Fluentd, syslog, Elasticsearch, or similar
  • Metrics collection tools like Ganglia, Graphite, OpenTSDB, or similar
  • Programming in a high-level language like Python or Ruby
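Several of the metrics tools named above (Graphite in particular) accept a simple plaintext line protocol: `metric.path value timestamp`, one metric per line. A minimal, hypothetical formatter in Python (the metric names are illustrative):

```python
def graphite_line(path, value, timestamp):
    """Format one metric in Graphite's plaintext protocol:
    '<metric.path> <value> <unix_timestamp>' followed by a newline."""
    return f"{path} {value} {timestamp}\n"

# Sending would normally be a TCP write to Carbon's plaintext port
# (2003 by default); here we just build the payload.
payload = graphite_line("web01.nginx.requests_per_sec", 142, 1650000000)
print(payload)  # → web01.nginx.requests_per_sec 142 1650000000
```

The simplicity of this protocol is why so many collection scripts for Linux hosts and JVM applications emit Graphite lines directly.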