Vectorworks Jobs in Bangalore (Bengaluru)


Apply to 11+ Vectorworks jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Vectorworks job opportunities across top companies like Google, Amazon & Adobe.

InfoGrowth Private Limited
Bengaluru (Bangalore)
6 - 10 yrs
₹22L - ₹30L / yr
bootloader
Embedded C
AUTOSAR
Adobe Flash
Vectorworks

🚀 We're Hiring: AUTOSAR Bootloader Developer at InfoGrowth! 🚀

Join InfoGrowth as an AUTOSAR Bootloader Developer and play a key role in the development and integration of cutting-edge automotive technologies!

Job Role: AUTOSAR Bootloader Developer

Mandatory Skills:

  • Bootloader Development
  • Embedded C Programming
  • AUTOSAR Framework
  • Hands-on experience with ISO 14229 (UDS Protocol)
  • Experience with Flash Bootloader topics
  • Proficiency in software development tools like CAN Analyzer, CANoe, and Debugger
  • Strong problem-solving skills and ability to work independently
  • Familiarity with Vector Flash Boot Loader is a plus
  • Exposure to Over-The-Air (OTA) updates is advantageous
  • Knowledge of the ASPICE Process is a plus
  • Excellent analytical and communication skills

Job Responsibilities:

  • Collaborate on the integration and development of Flash Bootloader (FBL) features and conduct testing activities.
  • Work closely with counterparts in Germany to understand requirements and develop FBL features.
  • Create test specifications and document the results of testing activities.
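The ISO 14229 (UDS) and Flash Bootloader topics above can be made concrete with a small sketch. This is a hedged illustration, not InfoGrowth's codebase: the service IDs (DiagnosticSessionControl 0x10, programming-session sub-function 0x02, positive-response offset 0x40) come from the standard, and ISO-TP transport segmentation and CAN I/O are deliberately omitted:

```python
# Sketch of ISO 14229 (UDS) request/response framing as used during
# flash-bootloader (FBL) reprogramming. Transport-layer (ISO-TP)
# segmentation and CAN I/O are omitted.

DIAGNOSTIC_SESSION_CONTROL = 0x10   # UDS service ID
PROGRAMMING_SESSION = 0x02          # sub-function entered before flashing
POSITIVE_RESPONSE_OFFSET = 0x40     # positive response SID = request SID + 0x40
NEGATIVE_RESPONSE_SID = 0x7F

def session_control_request(session: int) -> bytes:
    """Build a DiagnosticSessionControl request payload."""
    return bytes([DIAGNOSTIC_SESSION_CONTROL, session])

def is_positive_response(request: bytes, response: bytes) -> bool:
    """A positive UDS response echoes the request SID plus 0x40."""
    return (response[0] != NEGATIVE_RESPONSE_SID
            and response[0] == request[0] + POSITIVE_RESPONSE_OFFSET)

req = session_control_request(PROGRAMMING_SESSION)
print(req.hex())
print(is_positive_response(req, b"\x50\x02"))
```

A tester entering the programming session sends `10 02` and expects `50 02` back; a frame starting with `7F` signals a negative response with a condition code.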


Dori AI

Posted by Nitin Gupta
Bengaluru (Bangalore)
3 - 8 yrs
₹3L - ₹13L / yr
DevOps
Docker
PyTorch
Bash
Perl
As a DevOps Engineer and Architect in Dori AI, you will be responsible for streamlining and executing Site Reliability Engineering and DevOps activities with a charter to reduce cost while improving observability, scalability, and reliability.  In this role, you will also work closely with the Service Development team and contribute to the Service Platform design.

Key responsibilities include, but are not limited to:
Help identify and drive speed, performance, scalability, and reliability optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment in creating, maintaining, monitoring, and automation of the overall solution-deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments

Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process with heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technology (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more of the cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science or equivalent related field experience.

Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building & running systems to drive high availability, performance and operational improvements
Excellent written & oral communication skills; to ask pertinent questions, and to assess/aggregate/report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.

Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to multi-task and balance multiple priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.

Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.

As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, write code, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
A leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS or equivalent experience in programming on enterprise or department servers or systems.
Quber Technologies Limited
Posted by Manish Singh
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹25L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Microsoft Windows Azure

As a SaaS DevOps Engineer, you will be responsible for providing automated tooling and process enhancements for SaaS deployment, application and infrastructure upgrades and production monitoring.

  • Development of automation scripts and pipelines for deployment and monitoring of new production environments.

  • Development of automation scripts for upgrades, hotfix deployments, and maintenance.

  • Work closely with Scrum teams and product groups to support the quality and growth of the SaaS services.

  • Collaborate closely with SaaS Operations team to handle day-to-day production activities - handling alerts and incidents.

  • Assist the SaaS Operations team with customer-focused projects: migrations, feature enablement.

  • Write Knowledge articles to document known issues and best practices.

  • Conduct regression tests to validate solutions or workarounds.

  • Work in a globally distributed team.

 

What achievements should you have so far?

  • Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent.

  • Experience with containerization, deployment, and operations.

  • Strong knowledge of CI/CD processes (Git, Jenkins, Pipelines).

  • Good experience with Linux systems and Shell scripting.

  • Basic cloud experience, preferably oriented on MS Azure.

  • Basic knowledge of containerized solutions (Helm, Kubernetes, Docker).

  • Good Networking skills and experience.

  • Having Terraform or CloudFormation knowledge will be considered a plus.

  • Ability to analyze a task from a system perspective.

  • Excellent problem-solving and troubleshooting skills.

  • Excellent written and verbal communication skills; mastery of English and the local language.

  • Must be organized, thorough, autonomous, committed, flexible, customer-focused and productive.

Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹20L / yr
Docker
Kubernetes
DevOps
Amazon Web Services (AWS)
Windows Azure
Job Description:

About BootLabs

https://www.bootlabs.in/

- We are a boutique tech consulting partner, specializing in cloud-native solutions.
- We are obsessed with anything "CLOUD". Our goal is to seamlessly automate the development lifecycle and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises with cloud transformation, cloud migration, end-to-end automation, and managed cloud services.
- We are eager to research, discover, automate, adapt, empower, and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.




Technical Skills:


Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
  • AWS
      Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
      Data: RDS, DynamoDB, Elasticsearch
      Workload: EC2, EKS, Lambda, etc.
  • Azure
      Networking: VNET, VNET Peering
      Data: Azure MySQL, Azure MSSQL, etc.
      Workload: AKS, Virtual Machines, Azure Functions
  • GCP
      Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
      Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
      Workload: GKE, Instances, App Engine, Batch, etc.

Experience with any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
Kubernetes (EKS/AKS/GKE) or Ansible experience; basics like pods, deployments, networking, and service mesh; use of a package manager like Helm.
Scripting experience (Bash/Python), automation in pipelines when required, and system services.
Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.
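To ground the Kubernetes basics listed above (pods, deployments), here is a minimal sketch that generates a Deployment manifest programmatically. The name and image are placeholders, and `kubectl apply -f` accepts JSON as well as YAML:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 3) -> dict:
    """Build a minimal apps/v1 Kubernetes Deployment manifest as a dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Placeholder name/image; pipe the output to `kubectl apply -f -` if desired.
print(json.dumps(deployment_manifest("web", "nginx:1.25"), indent=2))
```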

Optional:

Experience in any programming language is not required but is appreciated.
Good experience in Git, SVN, or any other code management tool is required.
DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure, and code.
Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.).
CEDRETO MARKETING PRIVATE LIMITED
Posted by Ankit Agarwal
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹9L / yr
DevOps
Amazon Web Services (AWS)
Google Cloud Platform (GCP)

Do Your Thng 

https://doyourthng.com/ 

 

DYT - Do Your Thng is an app where all social media users can share brands they love with their followers and earn money while doing so! We believe everyone is an influencer. Our aim is to democratise social media and allow people to be rewarded for the content they post. How does DYT help you? It accelerates your career through collaboration opportunities with top brands and gives you access to a community full of experts in the influencer space.



Role: DevOps
 


Job Description:

We are looking for experienced DevOps Engineers to join our Engineering team. The candidate will work with our engineers and interact with the tech team to build high-quality web applications for a product.


Required Experience

  • DevOps engineer with 2+ years of experience in development and production operations supporting Linux- and Windows-based applications and cloud deployments (AWS/GC stack)
  • Experience working with Continuous Integration and Continuous Deployment pipelines
  • Exposure to managing LAMP stack-based applications
  • Experience with resource-provisioning automation using tools such as CloudFormation, Terraform, and ARM templates
  • Experience working closely with clients, understanding their requirements, and designing and implementing quality solutions to meet their needs
  • Ability to take ownership of the work carried out
  • Experience coordinating with the rest of the team to deliver well-architected, high-quality solutions
  • Experience deploying Docker-based applications
  • Experience with AWS services
  • Excellent verbal and written communication skills

Desired Experience

  • Exposure to AWS, Google Cloud, and Azure
  • Experience with Jenkins, Ansible, and Terraform
  • Experience building monitoring tools and responding to alarms triggered in the production environment
  • Willingness to quickly become a member of the team and to do what it takes to get the job done
  • Ability to work well in a fast-paced environment and to listen and learn from stakeholders
  • Demonstrate a strong work ethic and incorporate company values in your everyday work.
Synapsica Technologies Pvt Ltd

Posted by Human Resources
Bengaluru (Bangalore)
6 - 10 yrs
₹15L - ₹40L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure

Introduction

Synapsica (http://www.synapsica.com/) is a series-A funded HealthTech startup (https://yourstory.com/2021/06/funding-alert-synapsica-healthcare-ivycap-ventures-endiya-partners/) founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body; they shouldn't have to rely on cryptic two-liners given to them as a diagnosis.

Towards this aim, we are building an artificial-intelligence-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endiya Partners, Y Combinator, and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here's a small sample of what we're building: https://www.youtube.com/watch?v=FR6a94Tqqls

 

Your Roles and Responsibilities

The Lead DevOps Engineer will be responsible for the management, monitoring and operation of our applications and services in production. The DevOps Engineer will be a hands-on person who can work independently or with minimal guidance and has the ability to drive the team’s deliverables by mentoring and guiding junior team members. You will work with the existing teams very closely and build on top of tools like Kubernetes, Docker and Terraform and support our numerous polyglot services.

Introducing a strong DevOps ethic into the rest of the team is crucial, and we expect you to lead the team on best practices in deployment, monitoring, and tooling. You'll work collaboratively with software engineering to deploy and operate our systems, help automate and streamline our operations and processes, build and maintain tools for deployment, monitoring, and operations and troubleshoot and resolve issues in our development, test and production environments. The position is based in our Bangalore office.

 

 

Primary Responsibilities

  • Providing strategies and creating pathways in support of product initiatives in DevOps and automation, with a focus on the design of systems and services that run on cloud platforms.
  • Optimization and execution of the CI/CD pipelines of multiple products and timely promotion of releases to production environments
  • Ensuring that mission-critical applications are deployed and optimised for high availability, security & privacy compliance, and disaster recovery.
  • Strategize, implement, and verify secure coding techniques; integrate code security tools for Continuous Integration
  • Ensure analysis, efficiency, responsiveness, scalability, and cross-platform compatibility of applications through captured metrics, testing frameworks, and debugging methodologies.
  • Technical documentation through all stages of development
  • Establish strong relationships, and proactively communicate, with team members as well as individuals across the organisation

 

Requirements

  • Minimum of 6 years of experience with DevOps tools.
  • Working experience with Linux, container orchestration and management technologies (Docker, Kubernetes, EKS, ECS …).
  • Hands-on experience with "infrastructure as code" solutions (Cloudformation, Terraform, Ansible etc).
  • Background of building and maintaining CI/CD pipelines (Gitlab-CI, Jenkins, CircleCI, Github actions etc).
  • Experience with the Hashicorp stack (Vault, Packer, Nomad etc).
  • Hands-on experience in building and maintaining monitoring/logging/alerting stacks (ELK stack, Prometheus stack, Grafana etc).
  • DevOps mindset and experience with Agile/Scrum methodology
  • Basic knowledge of storage and databases (SQL and NoSQL)
  • Good understanding of networking technologies, HAProxy, firewalling and security.
  • Experience in Security vulnerability scans and remediation
  • Experience in API security and credentials management
  • Worked on Microservice configurations across dev/test/prod environments
  • Ability to quickly adapt to new languages and technologies
  • A strong team player attitude with excellent communication skills.
  • Very high sense of ownership.
  • Deep interest and passion for technology
  • Ability to plan projects, execute them and meet the deadline
  • Excellent verbal and written English communication.
Banyan Data Services

Posted by Sathish Kumar
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹20L / yr
DevOps
Jenkins
Puppet
Terraform
Docker

DevOps Engineer 

Notice Period: 45 days / Immediate Joining

 

Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on standard DevOps practice and the SRE model.

We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer, addressing next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, software as a service, and cloud services to create a niche in the market.

 

Key Qualifications

· 4+ years of experience as a DevOps Engineer monitoring, troubleshooting, and diagnosing infrastructure systems.

· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc

· Strong experience in Linux/Unix administration.

· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.

· Expertise in multiple coding and scripting languages including Shell, Python, and Perl

· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm/Mesos/Kubernetes/OpenStack)

· Exposure to any of the relational database technologies (MySQL/Postgres/Oracle) or any NoSQL database

· Worked on open-source tools for logging, monitoring, search engine, caching, etc.

· Professional certification in AWS or any other cloud is preferable

· Excellent problem solving and troubleshooting skills

· Must have good written and verbal communication skills

Key Responsibilities

Ambitious individuals who can work under their own direction towards agreed targets/goals.

Must be flexible with office timings to accommodate multinational clients' time zones.

Will be involved in solution design from the conceptual stages through the development cycle and deployment.

Be involved in development operations and support internal teams.

Improve infrastructure uptime, performance, resilience, and reliability through automation.

Willing to learn new technologies and work on research-oriented projects.

Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.

Scope and deliver solutions, with the ability to design solutions independently based on high-level architecture.

Independent thinking and the ability to work in a fast-paced environment with creativity and brainstorming.

www.banyandata.com

Karkinos Healthcare Pvt Ltd
Posted by Sajal Somani
Bengaluru (Bangalore)
3 - 8 yrs
₹7L - ₹14L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
It is estimated that there are 2.25 million cases of cancer in India every year, a number that doubles every 10 years. Three-quarters of these cancers are detected in late stages, and mortality rates are devastatingly high because of lack of access to standardized cancer care. Whilst Indians are at the forefront of medical research in the West, India as a country is a laggard in researching and curing the condition.

Karkinos Healthcare Pvt. Ltd.

The fundamental principle of Karkinos healthcare is democratization of cancer care in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments and have India become a leader in oncology research. Karkinos will be with the patient every step of the way, to advise them, connect them to the best specialists, and to coordinate their care.

Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.

Roles and Responsibilities:

- Critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses, and a data lake.

- Demonstrate technical leadership with incident handling and troubleshooting.

- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.

- Build automated deployments for consistent software releases with zero downtime

- Deploy new modules, upgrades and fixes to the production environment.

- Participate in the development of contingency plans including reliable backup and restore procedures.

- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high quality and predictable delivery

- Work on implementing DevSecOps and GitOps practices

- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions

- Build platform tools that the rest of the engineering teams can use.
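One hypothetical shape for the zero-downtime releases mentioned above is a blue/green promotion gate: traffic moves to the candidate version only after it passes a run of consecutive health checks. The function names and the three-check threshold are illustrative assumptions, not Karkinos's actual tooling:

```python
def should_promote(health_checks, required_passes=3):
    """True only if the candidate's last `required_passes` health checks all passed."""
    if len(health_checks) < required_passes:
        return False
    return all(health_checks[-required_passes:])

def next_active(current, candidate, health_checks):
    """Blue/green-style switch: keep serving `current` until the candidate is healthy."""
    return candidate if should_promote(health_checks) else current

# Candidate v2 is promoted only after a clean run of checks.
print(next_active("v1", "v2", [True, True, True]))   # healthy candidate
print(next_active("v1", "v2", [True, False, True]))  # flaky candidate stays dark
```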

Apply only if you have:

- 2+ years of software development/technical support experience.

- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.

- 2+ years of experience in public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry) - preferably in GCP.

- Experience managing infra for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment and service orchestration using Kubernetes.

- Experience and a good understanding of Kubernetes, Service Mesh (Istio preferred), API Gateways, network proxies, etc.

- Experience in setting up infra for central monitoring of infrastructure; ability to debug and trace

- Experience and deep understanding of Cloud Networking and Security

- Experience in Continuous Integration and Delivery (Jenkins/Maven, GitHub/GitLab).

- Strong knowledge of scripting languages such as Python and Shell.

- Experience in Agile development methodologies and release management techniques.

- Excellent analytical and troubleshooting skills.

- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.

Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
VUMONIC

Posted by Simran Bhullar
Bengaluru (Bangalore)
1 - 3 yrs
₹5L - ₹7.5L / yr
Docker
Kubernetes
DevOps
Google Cloud Platform (GCP)
Elasticsearch

Designation: DevOps Engineer

Location: HSR, Bangalore


About the Company


Making impact driven by Data. 


Vumonic Datalabs is a data-driven startup providing business insights to e-commerce & e-tail companies to help them make data-driven decisions to scale up their business and understand their competition better. As one of the EU's fastest-growing (and coolest) data companies, we believe in revolutionizing the way businesses make their most important decisions by providing first-hand, transaction-based insights in real time.



About the Role

 

We are looking for an experienced and ambitious DevOps engineer who will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. As a DevOps engineer at Vumonic Datalabs, you will have the opportunity to work with a thriving global team to help us build functional systems that improve customer experience. If you have a strong background in software engineering, are hungry to learn, are passionate about your work, and are familiar with the technical skills mentioned, we'd love to speak with you.



What you’ll do


  • Optimize and engineer the DevOps infrastructure for high availability, scalability, and reliability.
  • Monitor logs on servers and manage cloud resources
  • Build and set up new development tools and infrastructure to reduce occurrences of errors
  • Understand the needs of stakeholders and convey them to developers
  • Design scripts to automate and improve development and release processes
  • Test and examine code written by others and analyze results
  • Ensure that systems are safe and secure against cybersecurity threats
  • Identify technical problems, perform root cause analysis of production errors, and develop software updates and 'fixes'
  • Work with software developers and engineers to ensure that development follows established processes and actively communicate with the operations team.
  • Design procedures for system troubleshooting and maintenance.
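The log monitoring and root-cause items above can be sketched as a tiny scanner that computes the 5xx error rate from access-log lines and flags a breach. The whitespace-separated log format and the 5% threshold are assumptions for illustration:

```python
def error_rate(log_lines, status_field=-1):
    """Fraction of requests whose status code (a whitespace-separated field) is 5xx."""
    statuses = [int(line.split()[status_field]) for line in log_lines if line.strip()]
    if not statuses:
        return 0.0
    return sum(1 for s in statuses if 500 <= s <= 599) / len(statuses)

def should_alert(log_lines, threshold=0.05):
    """Raise an alert when the 5xx rate exceeds the threshold."""
    return error_rate(log_lines) > threshold

logs = [
    "GET /api/orders 200",
    "GET /api/orders 502",
    "POST /api/login 200",
    "GET /health 200",
]
print(error_rate(logs))     # one 5xx out of four requests
print(should_alert(logs))
```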


What you need to have


TECHNICAL SKILLS

  • Experience working with the following tools: Google Cloud Platform, Kubernetes, Docker, Elasticsearch, Terraform, Redis
  • Experience with the following tools preferred: Python, Node.js, MongoDB, Rancher, Cassandra
  • Experience with real-time monitoring of cloud infrastructure using publicly available tools and servers
  • 2 or more years of experience as a DevOps engineer (startup/technical experience preferred)

You are

  • Excited to learn, are a hustler and “Do-er”
  • Passionate about building products that create impact.
  • Updated with the latest technological developments & enjoy upskilling yourself with market trends.
  • Willing to experiment with novel ideas & take calculated risks.
  • Someone with a problem-solving attitude with the ability to handle multiple tasks while meeting expected deadlines.
  • Interested to work as part of a supportive, highly motivated and fun team.
Olacabs.com

Posted by Roshni Pillai
Bengaluru (Bangalore)
5 - 9 yrs
₹8L - ₹21L / yr
DevOps
Amazon Web Services (AWS)
Kubernetes
Linux/Unix
We are looking for a Site Reliability Engineer/Sr. Site Reliability Engineer to help us build and enhance platforms to achieve availability, scalability, and operational effectiveness. The right individual will embrace the opportunity to tackle challenging problems and use their influence to drive continual improvement. You will also work on the cutting edge of technology, leveraging Kong, Repose, Docker, Mesos/Kubernetes, Jenkins, Chef, HAProxy, Nginx, GitLab, MySQL, Scylla, Aerospike, Service Mesh (Istio/Linkerd), Prometheus, etc.

Roles and Responsibilities
● Managing Availability, Performance, Capacity of infrastructure and applications.
● Building and implementing observability for applications health/performance/capacity.
● Optimizing On-call rotations and processes.
● Documenting “tribal” knowledge.
● Managing Infra-platforms like
- Mesos/Kubernetes
- CICD
- Observability(Prometheus/New Relic/ELK)
- Cloud Platforms ( AWS/ Azure )
- Databases
- Data Platforms Infrastructure
● Providing help in onboarding new services with the production readiness review process.
● Providing reports on services SLO/Error Budgets/Alerts and Operational Overhead.
● Working with Dev and Product teams to define SLO/Error Budgets/Alerts.
● Working with the Dev team to have an in-depth understanding of the application architecture and its bottlenecks.
● Identifying observability gaps in product services and infrastructure, and working with stakeholders to fix them.
● Managing Outages and doing detailed RCA with developers and identifying ways to avoid that situation.
● Managing/Automating upgrades of the infrastructure services.
● Automate toil work.
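The SLO/error-budget reporting in the responsibilities above reduces to simple arithmetic: a 99.9% availability SLO over a 30-day window leaves roughly 43.2 minutes of allowed downtime. A minimal sketch (the 30-day window is a common convention, not Ola's actual policy):

```python
def error_budget_minutes(slo, window_days=30):
    """Total downtime allowed by an availability SLO over the window, in minutes."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo, downtime_minutes, window_days=30):
    """Fraction of the error budget still unspent; negative means the budget is blown."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(round(error_budget_minutes(0.999), 1))    # ~43.2 minutes for 99.9%
print(round(budget_remaining(0.999, 21.6), 2))  # half the budget spent
```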

Experience & Skills
● 3+ Years of experience as an SRE/DevOps/Infrastructure Engineer on large scale microservices and infrastructure.
● A collaborative spirit with the ability to work across disciplines to influence, learn, and deliver.
● A deep understanding of computer science, software development, and networking principles.
● Demonstrated experience with languages such as Python, Java, Golang, etc.
● Extensive experience with Linux administration and a good understanding of the various Linux kernel subsystems (memory, storage, network, etc.).
● Extensive experience in DNS, TCP/IP, UDP, gRPC, routing, and load balancing.
● Expertise in GitOps and Infrastructure as Code tools such as Terraform, and Configuration Management tools such as Chef, Puppet, Saltstack, Ansible.
● Expertise of Amazon Web Services (AWS) and/or other relevant Cloud Infrastructure solutions like Microsoft Azure or Google Cloud.
● Experience in building CI/CD solutions with tools such as Jenkins, GitLab, Spinnaker, Argo etc.
● Experience in managing and deploying containerized environments using Docker, Mesos/Kubernetes is a plus.
● Experience with multiple datastores is a plus (MySQL, PostgreSQL, Aerospike, Couchbase, Scylla, Cassandra, Elasticsearch).
● Experience with data platform tech stacks like Hadoop, Hive, Presto, etc. is a plus
Bengaluru (Bangalore)
5 - 10 yrs
₹2L - ₹8L / yr
Kubernetes
Mesos
AngularJS (1.x)
Docker
Go Programming (Golang)
Build, deploy, and release automation engineer. Will be part of the product development team, supporting the team in formulating workflows for build, deploy, and release. The product uses new-age technologies such as Golang, AngularJS, and Docker.
- 5 years of experience managing SCM & build environments.
- Experience with Git tooling and scripting.
- Programming/scripting experience in Python, Bash, etc.
- Branching, versioning management, and process automation.
- Experience with Docker image management as part of product build and release.
- Deployment on cloud and on virtual machines.
- Experience with service deployment on Kubernetes, Mesos, etc. will be an added advantage.