11+ Vectorworks Jobs in Bangalore (Bengaluru) | Vectorworks Job openings in Bangalore (Bengaluru)
🚀 We're Hiring: Autosar Bootloader Developer at InfoGrowth! 🚀
Join InfoGrowth as an Autosar Bootloader Developer and play a key role in the development and integration of cutting-edge automotive technologies!
Job Role: Autosar Bootloader Developer
Mandatory Skills:
- Bootloader Development
- Embedded C Programming
- Autosar Framework
- Hands-on experience with ISO 14229 (UDS Protocol)
- Experience with Flash Bootloader topics
- Proficiency in software development tools like CAN Analyzer, CANoe, and Debugger
- Strong problem-solving skills and ability to work independently
- Familiarity with Vector Flash Boot Loader is a plus
- Exposure to Over-The-Air (OTA) updates is advantageous
- Knowledge of the ASPICE Process is a plus
- Excellent analytical and communication skills
Job Responsibilities:
- Collaborate on the integration and development of Flash Bootloader (FBL) features and conduct testing activities.
- Work closely with counterparts in Germany to understand requirements and develop FBL features.
- Create test specifications and document the results of testing activities.
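Purely as an illustration of the ISO 14229 (UDS) and CAN-tooling skills listed above, here is a minimal Python sketch that sends a DiagnosticSessionControl (0x10) request framed as an ISO-TP single frame using the python-can library. The SocketCAN channel and the 0x7E0/0x7E8 tester/ECU IDs are assumptions for the example, not details from the posting; a real flash sequence would sit behind a full ISO-TP and UDS stack (or a tool such as CANoe).

```python
# Hedged sketch: request a UDS programming session over CAN with python-can.
# The channel name and diagnostic CAN IDs below are illustrative assumptions.
import can

def request_programming_session(channel: str = "can0") -> None:
    bus = can.Bus(interface="socketcan", channel=channel)
    try:
        # ISO-TP single frame: [PCI length=2, SID=0x10, sub-function=0x02 (programming session)]
        request = can.Message(
            arbitration_id=0x7E0,  # assumed tester -> ECU diagnostic ID
            data=[0x02, 0x10, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00],
            is_extended_id=False,
        )
        bus.send(request)

        # Expect a positive response (0x50 0x02) on the assumed ECU -> tester ID.
        response = bus.recv(timeout=1.0)
        if response and response.arbitration_id == 0x7E8 and len(response.data) > 1 \
                and response.data[1] == 0x50:
            print("ECU entered the programming session")
        else:
            print("No positive response:", response)
    finally:
        bus.shutdown()

if __name__ == "__main__":
    request_programming_session()
```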
Job Title : Kafka Admin
Experience : 5+ Years
Location : Hyderabad / Bangalore / Pune
Work Mode : Hybrid (3 Days Work From Office)
Shift Timings :
- 07:00 AM – 04:00 PM IST
- 02:00 PM – 11:00 PM IST
- 10:30 PM – 07:30 AM IST
Notice Period : Immediate to 15 Days
Job Description :
We are seeking an experienced Kafka Administrator with 8+ years of hands-on experience.
The ideal candidate should have deep expertise in managing Kafka environments, particularly in Confluent Kafka and EEH Kafka, and should be capable of working independently on configuration, integration, and administration.
Key Responsibilities :
- Administer and manage Kafka clusters, brokers, topics, and security configurations
- Design and implement Kafka-based integrations
- Configure, monitor, and troubleshoot Kafka environments
- Collaborate with development and infrastructure teams to ensure high availability and performance
- Work on Confluent Cloud and EEH Kafka environments
Must-Have Skills :
- Kafka Administration
- EEH Kafka
- Confluent Kafka / Confluent Cloud
- Kafka Configuration and Integration
- Strong troubleshooting and monitoring skills
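As a hedged illustration of the day-to-day Kafka administration called out above (brokers, topics, configuration), the sketch below uses the confluent-kafka Python AdminClient to create a topic and inspect cluster metadata. The bootstrap address, topic name, and partition/replication settings are placeholders, not values from the posting.

```python
# Hedged sketch: basic Kafka administration with the confluent-kafka AdminClient.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder broker

# Create a topic with explicit partition and replication settings.
topic = NewTopic("orders.events", num_partitions=6, replication_factor=3)
for name, future in admin.create_topics([topic]).items():
    try:
        future.result()  # raises if creation failed (e.g. topic already exists)
        print(f"Created topic {name}")
    except Exception as err:
        print(f"Topic {name} not created: {err}")

# Inspect brokers and existing topics from cluster metadata.
metadata = admin.list_topics(timeout=10)
print("Brokers:", list(metadata.brokers))
print("Topics:", list(metadata.topics))
```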
- Need 8+ years of experience in DevOps CI/CD
- Managing large-scale AWS deployments using Infrastructure as Code (IaC) and Kubernetes developer tools
- Managing build/test/deployment of very large-scale systems, bridging between developers and live stacks
- Actively troubleshoot issues that arise during development and production
- Owning, learning, and deploying software in support of customer-facing applications
- Help establish DevOps best practices
- Actively work to reduce system costs (a small cost-hygiene sketch follows this list)
- Work with open-source technologies, helping to ensure their robustness and security
- Actively work with CI/CD, Git, and other components of the build and deployment system
- Leading skills with the AWS cloud stack
- Proven implementation experience with Infrastructure as Code (Terraform, Terragrunt, Flux, Helm charts) at scale
- Proven experience with Kubernetes at scale
- Proven experience with cloud management tools beyond the AWS console (k9s, Lens)
- Strong communicator whom people want to work with; must be thought of as the ultimate collaborator
- Solid team player
- Strong experience with Linux-based infrastructures and AWS
- Strong experience with databases such as MySQL, Redshift, Elasticsearch, MongoDB, and others
- Strong knowledge of JavaScript and Git
- Agile practitioner
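As promised in the list above, here is a small cost-hygiene sketch: it uses boto3 to list unattached EBS volumes, one common source of avoidable AWS spend. The region is an assumption, and what to do with the volumes (snapshot, delete, tag) is left open.

```python
# Hedged sketch: find unattached EBS volumes with boto3 as a cost-reduction check.
import boto3

def find_unattached_volumes(region: str = "us-east-1"):  # region is an assumption
    ec2 = boto3.client("ec2", region_name=region)
    unattached = []
    for page in ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]
    ):
        for volume in page["Volumes"]:
            unattached.append((volume["VolumeId"], volume["Size"]))
    return unattached

if __name__ == "__main__":
    for volume_id, size_gib in find_unattached_volumes():
        print(f"{volume_id}: {size_gib} GiB unattached")
```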
As a DevOps Engineer, you’ll play a key role in managing our cloud infrastructure, automating deployments, and ensuring high availability across our global server network. You’ll work closely with our technical team to optimize performance and scalability.
Responsibilities
✅ Design, implement, and manage cloud infrastructure (primarily Azure)
✅ Automate deployments using CI/CD pipelines (GitHub Actions, Jenkins, or equivalent)
✅ Monitor and optimize server performance & uptime (100% uptime goal)
✅ Work with cPanel-based hosting environments and ensure seamless operation
✅ Implement security best practices & compliance measures
✅ Troubleshoot system issues, scale infrastructure, and enhance reliability
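As a small, hedged illustration of the uptime-monitoring responsibility above, the sketch below polls HTTP health endpoints and reports status and latency. The URLs are placeholders; a real setup would push these results into the monitoring stack rather than print them.

```python
# Hedged sketch: probe HTTP health endpoints and report status/latency.
import time
import requests

ENDPOINTS = [
    "https://example.com/healthz",   # placeholder URL
    "https://api.example.com/ping",  # placeholder URL
]

def probe(url: str, timeout: float = 5.0) -> dict:
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=timeout)
        return {
            "url": url,
            "ok": response.status_code < 400,
            "latency_ms": round((time.monotonic() - start) * 1000, 1),
        }
    except requests.RequestException as err:
        return {"url": url, "ok": False, "error": str(err)}

if __name__ == "__main__":
    for endpoint in ENDPOINTS:
        print(probe(endpoint))
```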
Requirements
🔹 3-7 years of DevOps experience in cloud environments (Azure preferred)
🔹 Hands-on expertise in CI/CD tools (GitHub Actions, Jenkins, etc.)
🔹 Proficiency in Terraform, Ansible, Docker, Kubernetes
🔹 Strong knowledge of Linux system administration & networking
🔹 Experience with monitoring tools (Prometheus, Grafana, ELK, etc.)
🔹 Security-first mindset & automation-driven approach
Why Join Us?
🚀 Work at a fast-growing startup backed by Microsoft
💡 Lead high-impact DevOps projects in a cloud-native environment
🌍 Hybrid work model with flexibility in Bangalore, Delhi, or Mumbai
💰 Competitive salary ₹12-30 LPA based on experience
How to Apply?
📩 Apply now & follow us for future updates:
🔗 X (Twitter): https://x.com/CygenHost
🔗 LinkedIn: https://www.linkedin.com/company/cygen-host/
🔗 Instagram: https://www.instagram.com/cygenhost
The key responsibilities include, but are not limited to:
Help identify and drive speed, performance, scalability, and reliability optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment, creating, maintaining, monitoring, and automating the overall solution deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments
Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process with heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technology (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science, or equivalent related field experience.
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building & running systems to drive high availability, performance and operational improvements
Excellent written & oral communication skills; to ask pertinent questions, and to assess/aggregate/report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.
Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to balance multiple tasks and competing priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.
Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
Leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS or equivalent experience in programming on enterprise or department servers or systems.
- Develop and Deploy Software:
- Architect and create an effective build and release process using industry best practices and tools
- Create and manage build scripts to deploy software in a multi-cloud environment
- Look for opportunities to automate as much of the deployment process as possible to provide for repeatability, auditability, scalability and build in process enforcement
- Manage Release Schedule:
- Act as a "gatekeeper" for all releases into production
- Work closely with business stakeholders, development managers and developers to prepare a release schedule
- Help prioritize deployment requests for version upgrades, patches and hot-fixes
- Continuous Delivery of Software:
- Implement Continuous Integration (CI) practices to drive development teams to implement smaller changes and commit code to the version control repo frequently
- Implement Continuous Delivery (CD) practices that automate deployment of the application to several environments: Dev, Test, and Production
- Implement Continuous Testing (functional and non-functional) to execute tests in the CI/CD pipeline
- Manage Version Control:
- Define and implement branching policies to efficiently manage source-code
- Implement business rules as a part of source control standards
- Resolve Software Issues:
- Assist technical support and development teams to troubleshoot issues and identify areas that need improvement
- Address deployment related issues
- Maintain Release Documentation:
- Maintain release notes (features available in stable versions and known issues) and other documents for both internal and external end users
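As a hedged sketch of the release-documentation duty above, the snippet below drafts release notes from the Git history between two tags. The tag names and output file are illustrative, and the known-issues section would still be curated by hand.

```python
# Hedged sketch: draft release notes from commits between two Git tags.
import subprocess
from pathlib import Path

def draft_release_notes(prev_tag: str, new_tag: str, out_file: str = "RELEASE_NOTES.md") -> None:
    log = subprocess.run(
        ["git", "log", "--pretty=format:- %s", f"{prev_tag}..{new_tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    notes = (
        f"# Release {new_tag}\n\n"
        f"## Changes since {prev_tag}\n{log}\n\n"
        "## Known issues\n- TBD\n"
    )
    Path(out_file).write_text(notes)

if __name__ == "__main__":
    draft_release_notes("v1.4.0", "v1.5.0")  # placeholder tags
```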
- Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management:
  - AWS
    - Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
    - Data: RDS, DynamoDB, Elasticsearch
    - Workload: EC2, EKS, Lambda, etc.
  - Azure
    - Networking: VNET, VNET Peering
    - Data: Azure MySQL, Azure MSSQL, etc.
    - Workload: AKS, Virtual Machines, Azure Functions
  - GCP
    - Networking: VPC, VPC Peering, Firewall, Flow Logs, Routes, Static and External IP Addresses
    - Data: Cloud Storage, Dataflow, Cloud SQL, Firestore, Bigtable, BigQuery
    - Workload: GKE, Instances, App Engine, Batch, etc.
- Experience with any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
- Kubernetes experience (EKS/AKS/GKE) or Ansible experience: basics like pods, deployments, networking, and service mesh; has used a package manager like Helm.
- Scripting experience (Bash/Python): automation in pipelines when required, system services.
- Infrastructure automation (Terraform/Pulumi/CloudFormation): writing modules, setting up pipelines, and versioning the code.
Optional:
- Experience in any programming language is not required but is appreciated.
- Good experience in Git, SVN, or any other code management tool is required.
- DevSecOps tools (Qualys/SonarQube/Black Duck) for security scanning of artifacts, infrastructure, and code.
- Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
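For the Kubernetes items above, here is a short, hedged example of the kind of scripted cluster inspection this role involves: it lists deployments and their replica readiness with the official Kubernetes Python client, assuming a kubeconfig that already points at the target EKS/AKS/GKE cluster.

```python
# Hedged sketch: list deployments and replica readiness via the Kubernetes client.
from kubernetes import client, config

def list_deployments() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        ready = dep.status.ready_replicas or 0
        print(f"{dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{dep.spec.replicas} replicas ready")

if __name__ == "__main__":
    list_deployments()
```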
Engineering Leader, Cloud Infrastructure.
Bengaluru, Karnataka, India
Do you thrive on solving complex technical problems? Do you want to be at the cutting edge of technology? If so, we're interested in speaking with you!
Your Impact:
We're looking for a seasoned engineering leader for the Cloud team, which is responsible for building, operating, and maintaining a customer-facing DBaaS service in multiple public clouds (AWS, GCP, and Azure). The service supports unified multiverse management of YugabyteDB, including fault-domain-aware provisioning, rolling upgrades, security, networking, monitoring, and day-2 operations (backups, scaling, billing, etc.). If you're a strong leader who exemplifies collaboration, who is driven and thrives in a fast-paced startup environment, and who has a strong desire to build an internet-scale, extensible cloud-based service with a strong emphasis on simplicity and user experience, this job is for you.
You Will:
Lead, inspire, and influence to make sure your team is successful
Partner with the recruiting team to attract and retain high-quality and diverse talent
Establish great rapport with other development teams, Product Managers, Sales, and Customer Success to maintain high levels of visibility, efficiency, and collaboration
Ensure teams have appropriate technical direction, leadership, and balance between short-term impact and long-term architectural vision.
Occasionally contribute to development tasks such as coding and feature verification to assist teams with release commitments, to gain an understanding of the deeply technical product, and to keep your technical acumen sharp.
You'll need:
BS/MS degree in CS or a related field, with 5+ years of engineering management experience leading productive, high-functioning teams
Strong fundamentals in distributed systems design and development
Ability to hire while ensuring a high hiring bar, keep engineers motivated, coach/mentor, and handle performance management
Experience running production services in Public Clouds such as AWS, GCP, and Azure
Experience with running large stateful data systems in the Cloud
Prior knowledge of Cloud architecture and implementation features (multi-tenancy, containerization, orchestration, elastic scalability)
A great track record of shipping features and hitting deadlines consistently; able to move fast, build in increments, and iterate; a sense of urgency, an aggressive mindset towards achieving results, and excellent prioritization skills; able to anticipate future technical needs for the product and craft plans to realize them
Ability to influence the team, peers, and upper management using effective communication and collaborative techniques; focused on building and maintaining a culture of collaboration within the team.
- 7+ years of experience in System Administration, Networking, Automation, Monitoring
- Excellent problem solving, analytical skills and technical troubleshooting skills
- Experience managing systems deployed in public cloud platforms (Microsoft Azure, AWS or Google Cloud)
- Experience implementing and maintaining CI/CD pipelines (Jenkins, Concourse, etc.)
- Linux experience with flavours such as Ubuntu, Red Hat, and CentOS (sysadmin, Bash scripting)
- Experience setting up monitoring (Datadog, Splunk, etc.)
- Experience in Infrastructure Automation tools like Terraform
- Experience in Package Manager for Kubernetes like Helm Charts
- Experience with databases and data storage (Oracle, MongoDB, PostgreSQL, ELK stack)
- Experience with Docker
- Experience with orchestration technologies (Kubernetes or DC/OS)
- Familiar with Agile Software Development
About the job
👉 TL;DR: We at Sarva Labs Inc. are looking for experienced Site Reliability Engineers to join our team. As a Protocol Developer, you will handle assets in data centers across Asia, Europe, and the Americas for the World's First Context-Aware Peer-to-Peer Network enabling Web4.0. We are looking for the person who will take over ownership of DevOps, establish proper deployment processes, work with engineering teams, and hustle through the Main Net launch.
About Us 🚀
Imagine if each user had their own chain with each transaction being settled by a dynamic group of nodes who come together and settle that interaction with near immediate finality without a volatile gas cost. That’s MOI for you, Anon.
Visit https://www.sarva.ai/ to know more about who we are as a company
Visit https://www.moi.technology/ to know more about the technology and team!
Visit https://www.moi-id.life/ , https://www.moibit.io/ , https://www.moiverse.io/ to know more
Read our developer documentation at https://apidocs.moinet.io/
What you'll do 🛠
- You will take over the ownership of DevOps, establish proper deployment processes and work with engineering teams to ensure an appropriate degree of automation for component assembly, deployment, and rollback strategies in medium to large scale environments
- Monitor components to proactively prevent system component failure, and guide the engineering team on system characteristics that require improvement
- You will ensure the uninterrupted operation of components through proactive resource management and activities such as security/OS/Storage/application upgrades
You'd fit in 💯 if you...
- Familiar with any of these providers: AWS, GCP, DO, Azure, RedSwitches, Contabo, Hetzner, Server4you, Velia, Psychz, Tier and so on
- Experience in virtualizing bare metals using Openstack / VMWare / Similar is a PLUS
- Seasoned in building and managing VMs, Containers and clusters across the continents
- Confident in making the best use of Docker and Kubernetes, including StatefulSet deployments, autoscaling, rolling updates, the UI dashboard, replication, persistent volumes, and ingress
- Must have experience deploying in multi-cloud environments
- Working knowledge on automation tools such as Terraform, Travis, Packer, Chef, etc.
- Working knowledge on Scalability in a distributed and decentralised environment
- Familiar with Apache, Rancher, Nginx, SELinux/Ubuntu 18.04 LTS/CentOS 7 and RHEL
- Monitoring tools like PM2, Grafana and so on
- Hands-on with ELK stack/similar for log analytics
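As a hedged example of the ELK-based log analytics mentioned above, the sketch below counts recent error-level log lines per service in Elasticsearch. It assumes the 8.x Python client, and the endpoint, index pattern, and field names are made up for the illustration.

```python
# Hedged sketch: aggregate error logs per service with the Elasticsearch 8.x client.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

result = es.search(
    index="logs-*",                                   # assumed index pattern
    size=0,
    query={"match": {"level": "error"}},              # assumed field name
    aggs={"by_service": {"terms": {"field": "service.keyword", "size": 10}}},
)
for bucket in result["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```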
🌱 Join Us
- Flexible work timings
- We’ll set you up with your workspace. Work out of our Villa which has a lake view!
- Competitive salary/stipend
- Generous equity options (for full-time employees)
As an Infrastructure Engineer at Navi, you will be building a resilient infrastructure platform, using modern Infrastructure engineering practices.
You will be responsible for the availability, scaling, security, performance, and monitoring of the Navi Cloud platform. You'll be joining a team that follows best practices in infrastructure as code.
Your Key Responsibilities
- Build out infrastructure components such as an API gateway, service mesh, service discovery, and a container orchestration platform like Kubernetes.
- Developing reusable Infrastructure code and testing frameworks
- Build meaningful abstractions to hide the complexities of provisioning modern infrastructure components
- Design a scalable Centralized Logging and Metrics platform
- Drive solutions to reduce Mean Time To Recovery (MTTR) and enable high availability.
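Since the responsibilities above call out reducing Mean Time To Recovery (MTTR), here is a toy illustration of the metric itself: the mean of detection-to-resolution durations across incidents. The incident records are made up for the example.

```python
# Toy illustration: MTTR = total recovery time / number of incidents.
from datetime import datetime, timedelta

incidents = [  # (detected_at, resolved_at) -- hypothetical records
    (datetime(2024, 1, 3, 10, 0), datetime(2024, 1, 3, 10, 42)),
    (datetime(2024, 1, 9, 22, 15), datetime(2024, 1, 9, 23, 5)),
    (datetime(2024, 1, 17, 6, 30), datetime(2024, 1, 17, 6, 48)),
]

def mttr(records):
    total = sum((end - start for start, end in records), timedelta())
    return total / len(records)

print("MTTR:", mttr(incidents))  # 0:36:40 for the sample data above
```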
What to Bring
- Good to have experience in managing large-scale cloud infrastructure, preferably AWS and Kubernetes
- Experience in developing applications using programming languages like Java, Python and Go
- Experience in handling logs and metrics at a high scale.
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.