Vectorworks Jobs in Bangalore (Bengaluru)


Apply to 11+ Vectorworks Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Vectorworks Job opportunities across top companies like Google, Amazon & Adobe.

InfoGrowth Private Limited
Bengaluru (Bangalore)
6 - 10 yrs
₹22L - ₹30L / yr
bootloader
Embedded C
AUTOSAR
Adobe Flash
Vectorworks

🚀 We're Hiring: Autosar Bootloader Developer at InfoGrowth! 🚀

Join InfoGrowth as an Autosar Bootloader Developer and play a key role in the development and integration of cutting-edge automotive technologies!

Job Role: Autosar Bootloader Developer

Mandatory Skills:

  • Bootloader Development
  • Embedded C Programming
  • Autosar Framework
  • Hands-on experience with ISO 14229 (UDS Protocol)
  • Experience with Flash Bootloader topics
  • Proficiency in software development tools like CAN Analyzer, CANoe, and Debugger
  • Strong problem-solving skills and ability to work independently
  • Familiarity with Vector Flash Boot Loader is a plus
  • Exposure to Over-The-Air (OTA) updates is advantageous
  • Knowledge of the ASPICE Process is a plus
  • Excellent analytical and communication skills

Job Responsibilities:

  • Collaborate on the integration and development of Flash Bootloader (FBL) features and conduct testing activities.
  • Work closely with counterparts in Germany to understand requirements and develop FBL features.
  • Create test specifications and document the results of testing activities.
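For context on the ISO 14229 (UDS) requirement above, here is a small, hedged illustration in Python of how a host-side tool might frame a DiagnosticSessionControl request for the programming session and send it over CAN using the python-can library (the library, CAN IDs, channel and interface are assumptions, not taken from the posting). A real flash sequence (security access, data transfer, checksums) on the ECU side would of course be implemented in Embedded C.

```python
# Hypothetical host-side sketch: sending a UDS (ISO 14229) request over CAN
# with python-can. CAN ID, interface and channel are placeholders; a real
# flash bootloader sequence is far more involved than this single request.
import can

UDS_REQUEST_ID = 0x7E0          # assumed diagnostic request CAN ID
DIAG_SESSION_CONTROL = 0x10     # UDS service: DiagnosticSessionControl
PROGRAMMING_SESSION = 0x02      # sub-function: programmingSession

def build_single_frame(payload: bytes) -> bytes:
    """Wrap a short UDS payload in an ISO-TP single frame (length <= 7)."""
    assert len(payload) <= 7
    frame = bytes([len(payload)]) + payload
    return frame.ljust(8, b"\x00")   # pad to 8 data bytes

def request_programming_session(bus: can.BusABC) -> None:
    # 0x10 0x02 = DiagnosticSessionControl, programmingSession
    data = build_single_frame(bytes([DIAG_SESSION_CONTROL, PROGRAMMING_SESSION]))
    msg = can.Message(arbitration_id=UDS_REQUEST_ID, data=data, is_extended_id=False)
    bus.send(msg)
    reply = bus.recv(timeout=1.0)
    # Positive response SID = request SID + 0x40 (here 0x50)
    if reply and reply.data[1] == DIAG_SESSION_CONTROL + 0x40:
        print("Positive response: ECU switched to programming session")
    else:
        print("No or negative response:", reply)

if __name__ == "__main__":
    # 'socketcan'/'can0' are assumptions; use whatever interface your CAN analyzer exposes.
    with can.Bus(interface="socketcan", channel="can0") as bus:
        request_programming_session(bus)
```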


VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Bengaluru (Bangalore)
12 - 18 yrs
₹10L - ₹40L / yr
DevOps
Architecture

Job Description

Role: Sr. DevOps – Architect

Location: Bangalore

 

Who are we looking for?

A senior-level DevOps consultant with deep DevOps expertise. The individual should be passionate about technology and demonstrate depth and breadth of expertise from similar roles, covering enterprise systems and enterprise architecture frameworks.

 

Technical Skills:

• 8+ years of relevant DevOps/Operations/Development experience working in an Agile DevOps culture on large-scale distributed systems.

• Experience in building a DevOps platform by integrating the DevOps toolchain using REST/SOAP/ESB technologies.

• Hands-on programming skills in developing automation modules using one of these scripting languages: Python/Perl/Ruby/Bash.

• Hands-on experience with public clouds such as AWS, Azure, OpenStack, Pivotal Cloud Foundry, etc.; Azure experience is a must.

• Experience working with more than one configuration management tool (Chef/Puppet/Ansible) and building your own cookbooks/manifests is required.

• Experience with Docker and Kubernetes.

• Experience in building CI/CD pipelines using continuous integration tools like Jenkins, Bamboo, etc.

• Experience with planning tools like Jira, Rally, etc.

• Hands-on experience with continuous integration and build tools (Jenkins, Bamboo, CruiseControl, etc.), version control systems (Git, SVN, GitHub, TFS, etc.), build automation tools (Maven/Gradle/Ant) and dependency management tools (Artifactory/Nexus).

• Experience with more than one deployment automation tool, such as IBM UrbanCode, CA Automic, XL Deploy, etc.

• Experience in setting up and managing DevOps tools for repositories, monitoring, log analysis, etc., using tools like New Relic, Splunk, AppDynamics, etc.

• Understanding of applications, networking and open-source tools.

• Experience on the security side of DevOps, i.e. DevSecOps.

• Good to have an understanding of microservices architecture.

• Experience working with remote/offshore teams is a huge plus.

• Experience in building dashboards based on the latest JS technologies like Node.js.

• Experience with NoSQL databases like MongoDB.

• Experience in working with REST APIs.

• Experience with tools like NPM, Gulp
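The automation-module requirement above is typically met with small toolchain-integration scripts. Below is a minimal, hedged sketch in Python that triggers a Jenkins job and polls its result through Jenkins' JSON REST API; the server URL, job name and credentials are placeholders, and production code would also handle CSRF crumbs and queue tracking.

```python
# Minimal sketch of a toolchain-integration module: trigger a Jenkins job and
# poll its status over the REST API. Server URL, job name and credentials are
# assumptions; adapt to whatever CI tool your platform integrates.
import time
import requests

JENKINS_URL = "https://jenkins.example.com"   # placeholder
JOB_NAME = "platform-smoke-test"              # placeholder
AUTH = ("automation-user", "api-token")       # placeholder credentials

def trigger_build() -> None:
    resp = requests.post(f"{JENKINS_URL}/job/{JOB_NAME}/build", auth=AUTH, timeout=10)
    resp.raise_for_status()

def wait_for_result(poll_seconds: int = 10) -> str:
    """Poll the last build until Jenkins reports a result (SUCCESS/FAILURE/...).

    Note: a robust version would track the queue item returned by the trigger
    rather than assuming lastBuild is the one we started.
    """
    while True:
        info = requests.get(
            f"{JENKINS_URL}/job/{JOB_NAME}/lastBuild/api/json", auth=AUTH, timeout=10
        ).json()
        if info.get("result"):
            return info["result"]
        time.sleep(poll_seconds)

if __name__ == "__main__":
    trigger_build()
    print("Build finished with result:", wait_for_result())
```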

 

Process Skills:

• Ability to perform rapid assessments of clients' internal technology landscapes and identify use cases and deployment targets.

• Develop program blueprints, case studies and supporting technical documentation so that DevOps work can be commercialized and replicated across different business customers.

• Compile, deliver, and evangelize roadmaps that guide the evolution of services.

• Grasp and communicate big-picture, enterprise-wide issues to the team.

• Experience working in an Agile/Scrum/SAFe environment is preferred.

 

Behavioral Skills :

• Should have directly worked on creating enterprise-level operating models and architecture options.

• Model as-is and to-be architectures based on business requirements

• Good communication & presentation skills

• Self-driven + Disciplined + Organized + Result Oriented + Focused & Passionate about work

• Flexible for short term travel

Primary Duties / Responsibilities:

• Build automations and modules for the DevOps platform.

• Build integrations between various DevOps tools.

• Interface with other teams to provide support and understand the overall vision of the transformation platform.

• Understand customer deployment scenarios, and continuously improve and update the platform based on agile requirements.

• Prepare HLDs and LLDs.

• Present status to leadership and key stakeholders at regular intervals.

 

Qualification:

• 12+ years of work experience in software development.

• 5+ years of industry experience in DevOps architecture covering Continuous Integration/Delivery solutions and platform automation, including technology consulting experience.

• Education qualification: B.Tech, BE, BCA, MCA, M.Tech or an equivalent technical degree from a reputed college.

We are hiring for one of the top MNCs in Bangalore

Agency job
via Vysystem Pvt Ltd by Hari Prasath
Bengaluru (Bangalore), Hyderabad, Pune
5 - 8 yrs
₹5L - ₹13L / yr
DevOps
CI/CD
GitHub
  • Development/technical support experience, preferably in DevOps.
  • Looking for an engineer to be part of GitHub Actions support, with experience in CI/CD tools like Bamboo, Harness, Ansible and Salt scripting.
  • Hands-on expertise with GitHub Actions and CI/CD tools like Bamboo and Harness; CI/CD pipeline stages; build tools; SonarQube, Artifactory, NuGet, ProGet, Veracode and LaunchDarkly; GitHub/Bitbucket repos; and monitoring tools.
  • Handling xMatters, Techlines and incidents.
  • Strong scripting skills (PowerShell, Python, Bash/shell scripting) for implementing automation scripts and tools to streamline administrative tasks and improve efficiency.
  • Experience as an Atlassian tools administrator, managing and maintaining Atlassian products such as Jira, Confluence, Bitbucket, and Bamboo.
  • Expertise in Bitbucket/GitHub for version control and collaboration at a global level.
  • Good experience with Linux/Windows system activities and databases.
  • Aware of SLA and error concepts and their implementation; provide support and participate in incident management and Jira stories. Continuously monitor system performance and availability, and respond to incidents promptly to minimize downtime.
  • Well-versed with observability tools such as Splunk for monitoring, alerting and logging solutions to identify and address potential issues, especially in infrastructure.
  • Expert at troubleshooting production issues and bugs, identifying and resolving issues in production environments.
  • Experience in providing 24x5 support.
  • GitHub Actions
  • Atlassian Tools (Bamboo, Bitbucket, Jira, Confluence)
  • Build Tools (Maven, Gradle, MS Build, NodeJS)
  • SonarQube, Veracode.
  • Nexus, JFrog, Nuget, Proget
  • Harness
  • Salt Services, Ansible
  • PowerShell, Shell scripting
  • Splunk
  • Linux, Windows
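As a flavour of the GitHub Actions work above, here is a hedged Python sketch that lists a repository's recent workflow runs via GitHub's REST API and flags failures, the kind of quick check a support engineer might script. The owner, repository and token are placeholders.

```python
# Sketch: list recent GitHub Actions workflow runs for a repository and flag
# failures, using the public GitHub REST API. Owner, repo and token are
# placeholders; a token with access to the repository is assumed.
import os
import requests

OWNER, REPO = "example-org", "example-repo"      # placeholders
TOKEN = os.environ.get("GITHUB_TOKEN", "")       # assumed to be set

def recent_runs(per_page: int = 10) -> list[dict]:
    resp = requests.get(
        f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Accept": "application/vnd.github+json"},
        params={"per_page": per_page},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["workflow_runs"]

if __name__ == "__main__":
    for run in recent_runs():
        status = run["conclusion"] or run["status"]
        marker = "FAILED -> " if run["conclusion"] == "failure" else ""
        print(f"{marker}{run['name']} #{run['run_number']}: {status}")
```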



Thinqor
Ravikanth Dangeti
Posted by Ravikanth Dangeti
Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹22L / yr
Amazon Web Services (AWS)
EKS
Terraform
Splunk

General Description: Owns all technical aspects of software development for assigned applications.

Participates in the design and development of systems and application programs.

Functions as a senior member of an agile team and helps drive consistent development practices around tools, common components, and documentation.



Required Skills:

In-depth experience configuring and administering EKS clusters in AWS.

In-depth experience configuring Splunk SaaS in AWS environments, especially in EKS.

In-depth understanding of OpenTelemetry and configuration of OpenTelemetry Collectors.

In-depth knowledge of observability concepts and strong troubleshooting experience.

Experience in implementing comprehensive monitoring and logging solutions in AWS using CloudWatch.

Experience in Terraform and infrastructure as code.

Experience in Helm.

Strong scripting skills in Shell and/or Python.

Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.

Must have a good understanding of cloud concepts (storage/compute/network).

Experience collaborating with several cross-functional teams to architect observability pipelines for various GCP services like GKE, Cloud Run, BigQuery, etc.

Experience with Git and GitHub. Experience with code build and deployment using GitHub Actions and Artifact Registry.

Proficient in developing and maintaining technical documentation, ADRs, and runbooks.
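Tying a few of the points above together, here is a hedged sketch using the boto3 SDK (an assumption, not named in the posting) that checks an EKS cluster's status and publishes a simple custom health metric to CloudWatch. The cluster name, region and metric namespace are placeholders; a real setup would rely on the OpenTelemetry/Splunk pipeline described above rather than an ad-hoc script.

```python
# Sketch: check an EKS cluster's status with boto3 and publish a simple custom
# health metric to CloudWatch. Cluster name and region are placeholders; IAM
# permissions for eks:DescribeCluster and cloudwatch:PutMetricData are assumed.
import boto3

REGION = "us-east-1"            # placeholder
CLUSTER_NAME = "observability"  # placeholder

def cluster_is_active() -> bool:
    eks = boto3.client("eks", region_name=REGION)
    cluster = eks.describe_cluster(name=CLUSTER_NAME)["cluster"]
    return cluster["status"] == "ACTIVE"

def publish_health_metric(healthy: bool) -> None:
    cloudwatch = boto3.client("cloudwatch", region_name=REGION)
    cloudwatch.put_metric_data(
        Namespace="Custom/EKS",   # placeholder namespace
        MetricData=[{
            "MetricName": "ClusterHealthy",
            "Dimensions": [{"Name": "ClusterName", "Value": CLUSTER_NAME}],
            "Value": 1.0 if healthy else 0.0,
            "Unit": "Count",
        }],
    )

if __name__ == "__main__":
    publish_health_metric(cluster_is_active())
```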


Hybrid Cloud Environments

Agency job
via The Hub by Sridevi Viswanathan
Bengaluru (Bangalore)
5 - 8 yrs
₹10L - ₹12L / yr
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
CloudFront
Installation
What do we do?

We are a boutique IT services & solutions firm headquartered in the Bay Area with offices in India. Our offering includes custom-configured hybrid cloud solutions backed by our managed services. We combine best-in-class DevOps and IT infrastructure management practices to manage our clients' hybrid cloud environments.
In addition, we build and deploy our private cloud solutions using OpenStack to provide our clients with a secure, cost-effective and scalable hybrid cloud solution. We work with start-ups as well as enterprise clients.
This is an exciting opportunity for an experienced Cloud Engineer to work on challenging projects and expand their knowledge working on adjacent technologies as well.

Must have skills

• Provisioning skills on IaaS cloud computing for platforms such as AWS, Azure, GCP.

• Strong working experience in the AWS space with various AWS services and implementations (e.g., VPCs, SES, EC2, S3, Route 53, CloudFront, etc.)

• Ability to design solutions based on client requirements.

• Some experience with various LAN/WAN network appliances (Cisco routers and ASA systems, Barracuda, Meraki, SilverPeak, Palo Alto, Fortinet, etc.)

• Understanding of networked storage (NFS/SMB/iSCSI/Storage Gateway/Windows Offline)

• Linux / Windows server installation, maintenance, monitoring, data backup and recovery, security, and administration.

• Good knowledge of TCP/IP protocol & internet technologies.

• Passion for innovation and problem solving, in a start-up environment.

• Good communication skills.

Good to have

• Remote Monitoring & Management.

• Familiarity with Kubernetes and Containers.

• Exposure to DevOps automation scripts & experience with tools like Git, Bash scripting, PowerShell, AWS CloudFormation, Ansible, Chef or Puppet will be a plus.

• Architect / Practitioner certification from OEM with hands-on capabilities.

What you will be working on

• Troubleshoot and handle L2/L3 tickets.

• Design and architect Enterprise Cloud systems and services.

• Design, Build and Maintain environments primarily in AWS using EC2, S3/Storage, CloudFront, VPC, ELB, Auto Scaling, Direct Connect, Route53, Firewall, etc.

• Build and deploy in GCP/ Azure as needed.

• Architect cloud solutions keeping performance, cost and BCP considerations in mind.

• Plan cloud migration projects as needed.

• Collaborate & work as part of a cohesive team.

• Help build our private cloud offering on OpenStack.
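As a small, hedged illustration of the AWS build-out work listed above, the following boto3 sketch (boto3 itself is an assumption) creates a VPC and a subnet and tags them. The region and CIDR blocks are placeholders; in practice this would be expressed as infrastructure as code (CloudFormation/Terraform) rather than an imperative script.

```python
# Sketch: provision a VPC and one subnet with boto3, a small slice of the AWS
# build-outs described above. Region and CIDR blocks are placeholders; in
# practice this would live in CloudFormation/Terraform rather than a script.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")   # placeholder region

def create_vpc_with_subnet() -> tuple[str, str]:
    vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "demo-vpc"}])
    subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")["Subnet"]["SubnetId"]
    return vpc_id, subnet_id

if __name__ == "__main__":
    vpc, subnet = create_vpc_with_subnet()
    print(f"Created VPC {vpc} with subnet {subnet}")
```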
HappyFox

Lindsey A
Posted by Lindsey A
Chennai, Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+12 more

About us:

HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.

 

We serve over 12,000 companies in 70+ countries. HappyFox is used by companies that span across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government and many other verticals that have an internal or external support function.

 

To know more, visit https://www.happyfox.com/

 

Responsibilities:

  • Build and scale production infrastructure in AWS for the HappyFox platform and its products.
  • Research and build/implement systems, services and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
  • Manage and patch servers running Unix-based operating systems like Ubuntu Linux.
  • Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
  • Implement consistent observability, deployment and IaC setups
  • Patch production systems to fix security/performance issues
  • Actively respond to escalations/incidents in the production environment from customers or the support team
  • Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
  • Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
  • Participate in infrastructure security audits

 

Requirements:

  • At least 5 years of experience in handling/building Production environments in AWS.
  • At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
  • Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
  • Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
  • Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
  • Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
  • Proficient in writing automation scripts using any scripting language such as Python, Ruby, Bash etc.,
  • Experience in setting up and managing test/staging environments, and CI/CD pipelines.
  • Experience in IaC tools such as Terraform or AWS CDK
  • Passion for making systems reliable, maintainable, scalable and secure.
  • Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
  • Bonus points if you have experience with Nginx, Postgres, Redis, and Mongo systems in production.
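To make the automation-script requirement above concrete, here is a hedged Python sketch that probes a service health endpoint, compares its latency against an assumed SLO threshold, and exits non-zero so a CI job or monitor can alert. The URL and threshold are placeholders.

```python
# Sketch: a small reliability check of the kind mentioned above - probe a
# service health endpoint, compare latency against an assumed SLO threshold,
# and exit non-zero so CI/monitoring can alert. URL and threshold are placeholders.
import sys
import time
import requests

HEALTH_URL = "https://app.example.com/health"   # placeholder
LATENCY_SLO_SECONDS = 0.5                       # assumed SLO threshold

def check() -> bool:
    start = time.monotonic()
    try:
        resp = requests.get(HEALTH_URL, timeout=5)
    except requests.RequestException as exc:
        print(f"Health check failed: {exc}")
        return False
    latency = time.monotonic() - start
    ok = resp.status_code == 200 and latency <= LATENCY_SLO_SECONDS
    print(f"status={resp.status_code} latency={latency:.3f}s ok={ok}")
    return ok

if __name__ == "__main__":
    sys.exit(0 if check() else 1)
```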

 

Acceldata

Richa  Kukar
Posted by Richa Kukar
Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹30L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+2 more

Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data platforms at petabyte scale. Our customers are Fortune 500 companies, including Asia's largest telecom company, a unicorn fintech startup in India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.

 

We are building software that provides insights into companies' data operations and allows them to focus on delivering data reliably, with speed and effectiveness. Join us in building an industry-leading data observability platform that ensures data reliability across every layer (compute, data and pipeline) of a cloud or on-premise data platform.

 

Position Summary

 

This role will support the customer implementation of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation and production issues. The role will have significant interaction with the client's data engineering team, so the candidate is expected to have good communication skills.

 

Required experience

  1. 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
  2. Experience in troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
  3. Experience setting up enterprise security solutions, including setting up Active Directory, firewalls, SSL certificates, Kerberos KDC servers, etc.
  4. Basic understanding of SQL.
  5. Experience working with technologies like S3; Kubernetes experience preferred.
  6. Databricks/Hadoop/Kafka experience preferred but not required
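As a hedged illustration of the data quality and reliability checks this role supports, the sketch below runs a row-count and null-rate assertion over a table using plain SQL. sqlite3 keeps it self-contained; the table, column and threshold are placeholders and are not taken from the product.

```python
# Sketch: the flavour of data-quality checks behind a reliability product -
# row count and null-rate assertions over a table. sqlite3 keeps the example
# self-contained; table/column names and thresholds are placeholders.
import sqlite3

TABLE, COLUMN = "orders", "customer_id"   # placeholders
MAX_NULL_RATE = 0.01                      # assumed threshold

def run_checks(conn: sqlite3.Connection) -> dict:
    cur = conn.cursor()
    total = cur.execute(f"SELECT COUNT(*) FROM {TABLE}").fetchone()[0]
    nulls = cur.execute(
        f"SELECT COUNT(*) FROM {TABLE} WHERE {COLUMN} IS NULL"
    ).fetchone()[0]
    null_rate = nulls / total if total else 0.0
    return {
        "rows": total,
        "null_rate": null_rate,
        "passed": total > 0 and null_rate <= MAX_NULL_RATE,
    }

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:   # placeholder database
        print(run_checks(conn))
```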

 

 

 

 

 

 

 

FinTech NBFC dedicated to driving the finance sector

Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
2 - 4 yrs
₹8L - ₹10L / yr
CI/CD
Amazon Web Services (AWS)
Kubernetes
Git
YAML
+2 more
Technical Skills:
- Knowledge of infrastructure and cloud (preferably AWS); experience with infrastructure as code (preferably Terraform).
- Experience with one or more scripting languages: YAML, Python, Ruby, Bash, and/or NodeJS.
- Experience with web services standards and related technologies.
- Experience working with Git or other source control and CI/CD technologies, following Agile development methodology for software development and related Agile practices, and exposure to Agile tools.
- Preferred: experience in development associated with Kafka or big data technologies; understanding of essential Kafka components like ZooKeeper and brokers, and optimization of Kafka client applications (producers and consumers).
- Experience with automation of infrastructure, testing, DB deployment automation, and logging/monitoring/alerting.
- AWS services experience with CloudFormation, ECS, Elastic Container Registry, pipelines, CloudWatch, Glue, and other related services.
- AWS Elastic Kubernetes Service (EKS): managing and auto-scaling Kubernetes and containers.
- Good, hands-on experience with various AWS services like EC2, RDS, EKS, S3, Lambda, API, CloudWatch, etc.
- Good and quick with log analysis to perform root cause analysis (RCA) on production deployments and container errors in CloudWatch.
- Working on ways to automate and improve deployment and release processes.
- High understanding of the serverless architecture concept.
- Good with deployment automation tools and investigating to resolve technical issues.
- Sound knowledge of APIs, databases, and container-based ETL jobs.
- Planning out projects and being involved in project management decisions.

Soft Skills:
- Adaptability
- Collaboration with different teams
- Good communication skills
- Team player attitude
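Since the list above calls out Kafka producer/consumer optimization, here is a hedged sketch using the kafka-python client (an assumption; confluent-kafka or others would work equally well) with a JSON-serializing producer and a bounded consumer. The broker address and topic are placeholders; a production setup would tune batching, acks and offsets per workload.

```python
# Sketch: a minimal Kafka producer and consumer using the kafka-python client,
# illustrating the producer/consumer tuning points mentioned above. Broker
# address and topic name are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["localhost:9092"]   # placeholder
TOPIC = "payments-events"      # placeholder

def produce_event(event: dict) -> None:
    producer = KafkaProducer(
        bootstrap_servers=BROKERS,
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
        acks="all",            # durability over latency; tune per use case
    )
    producer.send(TOPIC, value=event)
    producer.flush()

def consume_events(limit: int = 10) -> None:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=BROKERS,
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
        consumer_timeout_ms=5000,   # stop iterating if no messages arrive
    )
    for i, message in enumerate(consumer):
        print(message.value)
        if i + 1 >= limit:
            break

if __name__ == "__main__":
    produce_event({"type": "loan_disbursed", "amount": 100000})
    consume_events()
```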










GoodWorker

Sunder E
Posted by Sunder E
Bengaluru (Bangalore)
7 - 10 yrs
₹12L - ₹15L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Terraform
+1 more

Why you should join us

 

- You will join the mission to create a positive impact on millions of people's lives

- You get to work on the latest technologies in a culture which encourages experimentation

- You get to work with super humans (Psst: look up these super human1, super human2, super human3, super human4)

- You get to work in an accelerated learning environment

 

 

What you will do

 

- You will provide deep technical expertise to your team in building future ready systems.

- You will help develop a robust roadmap for ensuring operational excellence

- You will setup infrastructure on AWS that will be represented as code

- You will work on several automation projects that provide great developer experience

- You will setup secure, fault tolerant, reliable and performant systems

- You will establish clean and optimised coding standards for your team that are well documented

- You will set up systems in a way that are easy to maintain and provide a great developer experience

- You will actively mentor and participate in knowledge sharing forums

- You will work in an exciting startup environment where you can be ambitious and try new things :)

 

 

You should apply if

 

- You have a strong foundation in Computer Science concepts and programming fundamentals

- You have been working on cloud infrastructure setup, especially on AWS, for 8+ years

- You have set up and maintained reliable systems that operate at high scale

- You have experience in hardening and securing cloud infrastructures

- You have a solid understanding of computer networking, network security and CDNs

- Extensive experience in AWS, Kubernetes and optionally Terraform

- Experience in building automation tools for code build and deployment (preferably in JS)

- You understand the hustle of a startup and are good with handling ambiguity

- You are curious, a quick learner and someone who loves to experiment

- You insist on highest standards of quality, maintainability and performance

- You work well in a team to enhance your impact

Semiconductor based industry
Bengaluru (Bangalore)
8 - 14 yrs
₹10L - ₹50L / yr
DevOps
Red Hat Linux
redhat
EDA
Reporting
Your challenge
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment
for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA applications, high-performance compute, and storage environments, consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the company's innovation. The following figures give an idea of the scale: the landscape has more than 75,000 cores and 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are globally organized to provide 24/7 support to chip hardware design and software design projects.
Since the landscape is too complex to manage the traditional way, our strategy is to transform our R&D IT design infrastructure into "software-defined datacenters". This transformation entails a different way of working and a different mindset (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That's why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software-defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.

Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team in designing, developing and maintaining automation products with playbooks (YAML) and modules (Python/PowerShell) in tools like Ansible Tower and ServiceNow
• Test and verify the code produced by the team (including your own) to continuously improve and refactor it
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with ServiceNow developers in your team to provide the most intuitive end-user experience possible for virtual machine OS deployments
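The responsibilities above mention building Ansible automation with Python modules. As an illustration only, here is a minimal sketch of a custom Ansible module that reports whether a host's /etc/redhat-release matches an expected base-image string; the module and its parameter are hypothetical, not part of an existing collection.

```python
# Sketch of a custom Ansible module (Python), of the kind the responsibilities
# above mention: it checks that a host's /etc/redhat-release matches an
# expected base-image string. Module and parameter names are illustrative.
from ansible.module_utils.basic import AnsibleModule

def main() -> None:
    module = AnsibleModule(
        argument_spec=dict(
            expected_release=dict(type="str", required=True),
        ),
        supports_check_mode=True,
    )
    try:
        with open("/etc/redhat-release") as fh:
            release = fh.read().strip()
    except OSError as exc:
        module.fail_json(msg=f"Could not read /etc/redhat-release: {exc}")
    matches = module.params["expected_release"] in release
    module.exit_json(changed=False, release=release, matches=matches)

if __name__ == "__main__":
    main()
```

Dropped into a role's library/ directory, a module like this can be invoked from a playbook task and its returned facts used to gate patching steps.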

We are looking for a DevOps engineer/consultant with the following characteristics:
• Master or Bachelor degree
• You are a technical, creative, analytical and open-minded engineer who is eager to learn and not afraid to take initiative.
• Your favorite t-shirt has "Linux" or "RedHat" printed on it at least once.
• Linux guru: you have great knowledge of Linux servers (RedHat), RedHat Satellite 6 and other RedHat products
• Experience in infrastructure services, e.g., networking, DNS, LDAP, SMTP
• DevOps mindset: you are a team player who is eager to develop and maintain cool products to automate/optimize processes in a complex IT infrastructure, and you are able to build and maintain productive working relationships
• You have great English communication skills, both verbally and in writing.
• No issue working outside business hours to support the platform for critical R&D applications

Other competences we value, but are not strictly mandatory:
• Experience with agile development methods, like Scrum, and conviction of their power to deliver products with immense (business) value.
• "Security" is your middle name, and you are always challenging yourself and your colleagues to design and develop new solutions as security-tight as possible.
• Being a master in automation and orchestration with tools like Ansible Tower (or comparable) and feeling comfortable with developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version control and branching strategies with Git, and preferably have worked with GitLab before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python and tools like Selenium.
• An enthusiast of cloud platforms like Azure & AWS.
• Background in and affinity with R&D
Statusbrew

Tushar Mahajan
Posted by Tushar Mahajan
Amritsar, NCR (Delhi | Gurgaon | Noida), Chandigarh, Ludhiana, Bengaluru (Bangalore)
2 - 7 yrs
₹5L - ₹23L / yr
Amazon Web Services (AWS)
DevOps
Git
Do you breathe and drink DevOps? Do you keep redesigning and reviewing your deployment until you can optimize it no more? If yes, and you are good at it, extremely high-quality work and a great work-life balance await you.

StatusBrew is one of the few companies in India that have built a massively successful product at the global level. Ranked under 5,000 by Alexa for a high volume of traffic and with about 1,000,000 monthly active users, we are ready to make it even bigger.

As the person in charge of DevOps at StatusBrew, you will be managing the deployment of a global product with:

1. 1 million monthly active users, 200K daily active users, and 2,000 concurrent users at any point in time
2. 1 TB of data generated in 1 week, 1 billion database rows in just 1 day
3. $20,000 spent on AWS every month after many optimizations

We have an extremely cozy office in Amritsar. We have a tight-knit team that enjoys working and having fun together. Talk to us if you want to join us in our journey to the next 100 million users.