REVOS is a smart micro-mobility platform that works with enterprises across the automotive shared-mobility value chain to enable and accelerate their smart vehicle journeys. Founded in 2017, it aims to empower all 2- and 3-wheeler vehicles through AI-integrated IoT solutions that make them smart, safe, and connected. We are backed by investors such as USV and Prime Venture.
Duties and Responsibilities:
- Automating various tasks in cloud operations, deployment, monitoring, and performance optimization for the big data stack.
- Build, release, and configuration management of production systems.
- System troubleshooting and problem-solving across platform and application domains.
- Suggesting architecture and process improvements.
- Evaluate new technology options and vendor products.
- Function well in a fast-paced, rapidly-changing environment
- Communicate effectively with people at all levels of the organization
Qualifications and Required Skills:
- Overall 3+ years of experience in various software engineering roles.
- 3+ years of experience building applications and tools in any tech stack, preferably deployed on the cloud.
- The most recent 3 years of experience must be in serverless/cloud-native development on AWS (preferred) or Azure.
- Expertise in at least one programming language (NodeJS or Python preferred).
- Must have hands-on experience using AWS/Azure SDKs and APIs (a minimal example follows this list).
- Must have experience deploying, releasing, and managing production systems.
- MCA or an engineering degree in Computer Science, IT, or Electronics.
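To make the SDK/API expectation above concrete, here is a minimal sketch of the kind of cloud-ops automation this role involves, using Python and boto3. The region and the Environment=dev tagging convention are illustrative assumptions, not details from the posting.

```python
"""Example cloud-ops automation: stop EC2 instances tagged Environment=dev.

Assumes AWS credentials are already configured (environment variables or an
instance profile); the tag key/value and region are illustrative only.
"""
import boto3

REGION = "ap-south-1"                       # hypothetical region
TAG_KEY, TAG_VALUE = "Environment", "dev"   # hypothetical tagging convention

ec2 = boto3.client("ec2", region_name=REGION)

def stop_dev_instances() -> list[str]:
    """Find running instances carrying the dev tag and stop them."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{TAG_KEY}", "Values": [TAG_VALUE]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"] for r in reservations for inst in r["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

if __name__ == "__main__":
    print("Stopped:", stop_dev_instances())
```

A script like this would typically run on a schedule (a cron job or a Lambda trigger) rather than by hand.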

Primary Skills:
Linux (Ubuntu) administration, Git, Gerrit, Jenkins administration, cloud services (AWS preferred), Apache, Ansible, Python, PostgreSQL, RabbitMQ, AWS CloudWatch, and CloudFormation templates (CFT) in AWS
Additional Skills Required:
- Should have experience working with Jenkins, Git, Gerrit, Jira, and Confluence
- Should have a good understanding of AWS security and execution
- Should have good Python skills
- Exposure to messaging systems such as RabbitMQ
- Exposure to HTML, Groovy, JavaScript, and shell scripting
- Exposure to Kibana; provisioning, capacity planning, and performance analysis at various levels
- Exposure to Android development
- Should have experience working with cloud-native architecture
- Experience with Logstash and Elasticsearch
- Expert in full-stack design techniques, with experience working across large environments with multiple operating systems/infrastructure for large-scale programs
- Recognized as a leader in Agile, cultivating teams working in Agile frameworks
- Strong understanding of techniques such as Continuous Integration, Continuous Delivery, Test-Driven Development, cloud development, resiliency, and security
- Stays abreast of cutting-edge technologies/trends and uses experience to influence the application of those technologies/trends to support the business
- Experience modelling and provisioning cloud infrastructure using AWS CloudFormation (a minimal sketch follows this list)
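As a concrete illustration of the CloudFormation point above, the following is a minimal, hypothetical sketch of modelling and provisioning a stack with boto3. The template (a single S3 bucket), stack name, bucket name, and region are placeholders, not requirements from this posting.

```python
"""Illustrative sketch: provisioning a stack with AWS CloudFormation via boto3."""
import boto3

# Hypothetical minimal template: one S3 bucket for build artifacts.
TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-devops-artifacts   # placeholder, must be globally unique
"""

cfn = boto3.client("cloudformation", region_name="ap-south-1")  # placeholder region

def provision_stack(stack_name: str = "example-devops-stack") -> None:
    """Create the stack and block until CloudFormation reports completion."""
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=TEMPLATE_BODY,
        OnFailure="DELETE",          # roll back cleanly if creation fails
    )
    waiter = cfn.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)

if __name__ == "__main__":
    provision_stack()
```

In practice the template would live in version control and be deployed through the CI pipeline (Jenkins) rather than from a developer machine.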
Key Responsibilities:
- Perform a Technical Lead role for DevOps development and support teams.
- Communicate and coordinate with both offshore and onsite teams.
- Translate business requirements into project plans and workable items/activities.
- Have a thorough understanding of the software development lifecycle and the ability to implement software following a structured approach.
- Perform in-depth technical reviews of project deliverables and ensure they are defect-free (minimizing post-release defects).
- Understand the current applications and technical architecture, and improve them as needed.
- Stay abreast of new technologies and methods to optimize the development process, as well as the latest SDKs, testing tools, etc.
Staff DevOps Engineer with Azure
EGNYTE YOUR CAREER. SPARK YOUR PASSION.
Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you’re not just landing a new career, you become part of a team of Egnyters that are doers, thinkers, and collaborators who embrace and live by our values:
Invested Relationships
Fiscal Prudence
Candid Conversations
ABOUT EGNYTE
Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com.
Our Production Engineering team enables Egnyte to provide customers access to their data 24/7 by providing best-in-class infrastructure.
ABOUT THE ROLE
We store billions of files and multiple petabytes of data, and we observe more than 11K API requests per second on average. To make that possible and to provide the best possible experience, we rely on great engineers. For us, people who own their work, from start to finish, are integral. Our engineers are part of the process from design to code, to test, to deployment, and back again for further iterations. You can, and will, touch every level of the infrastructure depending on the day and what project you are working on. The ideal candidate can take a complex problem and execute it end to end, and will mentor and set higher standards for the rest of the team and for new hires.
WHAT YOU’LL DO:
• Design, build, and maintain self-hosted and cloud environments to serve our own applications and services.
• Collaborate with software developers to build stable, scalable, and high-performance solutions.
• Take part in big projects such as migrating solutions from self-hosted environments to the cloud, from virtual machines to Kubernetes, and from monolith to microservices.
• Proactively make our organization and technology better!
• Advise others on how DevOps can make a positive impact on their work.
• Share knowledge and mentor more junior team members while also still learning and gaining new skills.
• Maintain consistently high standards of communication, productivity, and teamwork across all teams.
YOUR QUALIFICATIONS:
• 5+ years of proven experience in a DevOps Engineer, System Administrator or Developer role, working on infrastructure or build processes.
• Expert knowledge of Microsoft Azure.
• Programming prowess (Python, Golang).
• Knowledge of and experience with deploying and maintaining Java and Python apps using application and web servers (Tomcat, Nginx, etc.).
• Ability to solve complex problems with simple, elegant and clean code.
• Practical knowledge of CI/CD solutions, GitLab CI or similar.
• Practical knowledge of Docker as a tool for testing and building an environment.
• Knowledge of Kubernetes and related technologies.
• Experience with metric-based monitoring solutions.
• Solid English skills to effectively communicate with other team members.
• Good understanding of the Linux Operating System on the administration level.
• Drive to grow as a DevOps Engineer (we value open-mindedness and a can-do attitude).
• Strong sense of ownership and ability to drive big projects.
BONUS SKILLS:
• Work experience as a Microsoft Azure architect.
• Experience in cloud migration projects.
• Leadership skills and experience.
COMMITMENT TO DIVERSITY, EQUITY, AND INCLUSION:
At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
This person MUST have:
- B.E Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting and setting up Linux environments, including the ability to write shell scripts for any given requirement.
- 1+ Years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ Years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ years of hands-on experience setting up CI/CD from SCRATCH in Jenkins & GitLab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred.
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
Experience:
- Minimum 3 years of experience as an SRE/automation engineer building, running, and maintaining production sites. We are not looking for candidates whose experience is limited to L1/L2 support.
Location:
- Remotely, anywhere in India
Timings:
- The person is expected to deliver with both high speed and high quality, working 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead to identify and establish DevOps practices in the company. You will also help us build scalable, efficient cloud infrastructure. You'll implement monitoring for automated system health checks. Lastly, you'll build our CI pipeline, and train and guide the team in DevOps practices.
Responsibilities
- Deployment, automation, management, and maintenance of production systems.
- Ensuring availability, performance, security, and scalability of production systems.
- Evaluation of new technology alternatives and vendor products.
- System troubleshooting and problem resolution across various application domains and platforms.
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on the AWS platform (a minimal sketch follows this list).
- Manage the establishment and configuration of SaaS infrastructure in an agile way by storing infrastructure as code and employing automated configuration management tools, with the goal of being able to re-provision environments at any point in time.
- Be accountable for proper backup and disaster recovery procedures.
- Drive operational cost reductions through service optimizations and demand-based auto-scaling.
- Have on-call responsibilities.
- Perform root cause analysis for production errors.
- Use open-source technologies and tools to accomplish specific use cases encountered within the project.
- Use coding languages or scripting methodologies to solve problems with custom workflows.
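To illustrate the metrics and monitoring responsibility above, here is a small sketch of publishing a custom CloudWatch metric and wiring an alarm with boto3. The namespace, metric name, thresholds, and SNS topic ARN are all hypothetical placeholders, not values from this posting.

```python
"""Sketch of publishing a custom metric and wiring an alarm in CloudWatch."""
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")  # placeholder region

def publish_queue_depth(depth: int) -> None:
    """Push a custom 'QueueDepth' datapoint under an app-specific namespace."""
    cloudwatch.put_metric_data(
        Namespace="ExampleApp/Worker",   # hypothetical namespace
        MetricData=[{"MetricName": "QueueDepth", "Value": depth, "Unit": "Count"}],
    )

def ensure_backlog_alarm() -> None:
    """Alarm when the 5-minute average backlog stays above 1000 messages."""
    cloudwatch.put_metric_alarm(
        AlarmName="example-worker-backlog-high",   # hypothetical alarm name
        Namespace="ExampleApp/Worker",
        MetricName="QueueDepth",
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=1000,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder ARN
    )
```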
Requirements
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Prior experience as a software developer in a couple of high-level programming languages.
- Extensive experience in any JavaScript-based framework, since we will be deploying services to NodeJS on AWS Lambda (Serverless).
- Strong Linux system administration background.
- Ability to present and communicate the architecture in a visual form.
- Strong knowledge of AWS (e.g. IAM, EC2, VPC, ELB, ALB, Autoscaling, Lambda, NAT gateway, DynamoDB).
- Experience maintaining and deploying highly-available, fault-tolerant systems at scale (~1 lakh users a day).
- A drive towards automating repetitive tasks (e.g. scripting via Bash, Python, Ruby, etc.).
- Expertise with Git.
- Experience implementing CI/CD (e.g. Jenkins, TravisCI).
- Strong experience with databases such as MySQL, NoSQL, Elasticsearch, Redis, and/or Mongo.
- Stellar troubleshooting skills with the ability to spot issues before they become problems.
- Current with industry trends, IT ops, and industry best practices, and able to identify the ones we should implement.
- Time and project management skills, with the capability to prioritize and multitask as needed.
- Experience working on Linux based infrastructure
- Strong hands-on knowledge of setting up production, staging, and dev environments on AWS/GCP/Azure
- Strong hands-on knowledge of technologies like Terraform, Docker, Kubernetes
- Strong understanding of continuous testing environments such as Travis-CI, CircleCI, Jenkins, etc.
- Experience configuring and managing databases such as MySQL and Mongo
- Excellent troubleshooting skills
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
• Bachelor's or Master's degree in Computer Science or Software Engineering from a reputed university.
• 5 - 8 Years of experience in building scalable, secure and compliant systems.
• More than 2 years of experience working with GCP deployments serving millions of daily visitors
• 5+ years hosting experience in a large heavy-traffic environment
• 5+ years production application support experience in a high uptime environment
• Software development and monitoring knowledge with Automated builds
• Technology:
o Cloud: AWS or Google Cloud
o Source Control: Gitlab or Bitbucket or Github
o Container Concepts: Docker, Microservices
o Continuous Integration: Jenkins, Bamboo
o Infrastructure Automation: Puppet, Chef or Ansible
o Deployment Automation: Jenkins, VSTS or Octopus Deploy
o Orchestration: Kubernetes, Mesos, Swarm
o Automation: Node JS or Python
o Linux environment network administration, DNS, firewall and security management
• Ability to adapt to the startup culture, handle multiple competing priorities, meet deadlines, and troubleshoot problems.

We are looking for a System Engineer who can manage requirements and data in Rational DOORS and Siemens Polarion. You will be part of a global development team with resources in China, Sweden, and the US.
Responsibilities and tasks
- Import of requirement specifications to DOORS module
- Create module structure according to written specification (e-mail, Word, etc.)
- Formats: ReqIF, Word, Excel, PDF, CSV
- Make adjustments to data as required to be able to import it into the tool
- Review that the result is readable and possible to work with
- Import information into new or existing modules in DOORS
- Feed back compliance status from an Excel compliance matrix to a module in DOORS
- Import requirements from one module to another based on baseline/filter…
- Import lists of items (test cases, documents, etc.) in Excel or CSV to a module
- Provide guidance on format to information holder at client
- Link information/attribute data from one module to others
- Status, test results, comments
- Link requirements according to information from the client in any given format
- Export data and reports
- Assemble reports based on data from one or several modules, according to filters/baselines/written requests, in any given format
- Export statistics from data in DOORS modules
- Create filters in DOORS modules
Note: Polarion activities same as DOORS activities, but process, results and structure may vary
Requirements – Must-have list (short, real musts, in no particular order)
- 10+ years of overall experience in the automotive industry
- Requirements management experience in the automotive industry
- 3+ years of experience as a Rational DOORS user
- Knowledge of Siemens Polarion; working knowledge is a plus
- Experience in offshore delivery for more than 7 years
- Able to lead a team of 3 to 5 people and manage temporary additions to the team
- Working knowledge of ASPICE and handling requirements according to ASPICE L2
- Experience setting up offshore delivery that best fits the expectations of the customer
- Experience setting up quality processes and ways of working
- Experience in metrics management – proposing, capturing, and sharing metrics with internal/external stakeholders
- Good communication skills in English
Requirements – Good-to-have list, strictly sorted in descending priority order
- Experience in DevOps framework of delivery
- Interest in learning new languages
- Handling requirements according to ASPICE L3
- Willingness to travel; travel to Sweden may be needed (approx. 1-2 trips per year)
Soft skills
- The candidate must be a driven and proactive person, able to work with minimal supervision, and will be asked to give example situations in the coming interviews.
- Good team player with attention to detail, self-disciplined, able to manage their own time and workload, proactive and motivated.
- Strong sense of responsibility and commitment, innovative thinking.
- Solve complex Cloud Infrastructure problems.
- Drive DevOps culture in the organization by working with engineering and product teams.
- Be a trusted technical advisor to developers and help them architect scalable, robust, and highly-available systems.
- Frequently collaborate with developers to help them learn how to run and maintain systems in production.
- Drive a culture of CI/CD. Find bottlenecks in the software delivery pipeline. Fix bottlenecks with developers to help them deliver working software faster. Develop and maintain infrastructure solutions for automation, alerting, monitoring, and agility.
- Evaluate cutting edge technologies and build PoCs, feasibility reports, and implementation strategies.
- Work with engineering teams to identify and remove infrastructure bottlenecks enabling them to move fast. (In simple words you'll be a bridge between tech, operations & product)
Skills required:
Must have:
- Deep understanding of open source DevOps tools.
- Scripting experience in one or more among Python, Shell, Go, etc.
- Strong experience with AWS (EC2, S3, VPC, Security, Lambda, CloudFormation, SQS, etc.)
- Knowledge of distributed system deployment.
- Deployed and orchestrated applications with Kubernetes.
- Implemented CI/CD for multiple applications.
- Set up monitoring and alert systems for services using the ELK stack or similar.
- Knowledge of Ansible, Jenkins, Nginx.
- Worked with queue-based systems (a minimal worker sketch follows this list).
- Implemented batch jobs and automated recurring tasks.
- Implemented caching infrastructure and policies.
- Implemented central logging.
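As a sketch of the queue-based systems item above, the following shows a minimal SQS worker in Python with boto3. The queue URL and message format are assumptions for illustration; a production worker would add structured logging, retries, and a dead-letter queue.

```python
"""Minimal sketch of a queue-based worker on SQS, assuming an existing queue."""
import json
import boto3

# Hypothetical queue URL; real deployments would read this from configuration.
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/example-jobs"

sqs = boto3.client("sqs", region_name="ap-south-1")

def handle(job: dict) -> None:
    """Stand-in for the real business logic."""
    print("processing", job)

def poll_forever() -> None:
    """Long-poll the queue, process each message, delete it only on success."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,       # long polling reduces empty receives
        )
        for msg in resp.get("Messages", []):
            handle(json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_forever()
```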
Good to have:
- Experience dealing with PI information security.
- Experience conducting internal Audits and assisting External Audits.
- Experience implementing solutions on-premise.
- Experience with blockchain.
- Experience with Private Cloud setup.
Required Experience:
- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.
- You need to have 2-4 years of DevOps & Automation experience.
- Need to have a deep understanding of AWS.
- Need to be an expert with Git or similar version control systems.
- Deep understanding of at least one open-source distributed system (Kafka, Redis, etc.)
- Ownership attitude is a must.
We offer a suite of memberships and subscriptions to spice up your lifestyle. We believe in practicing an ultimate work-life balance and satisfaction. Working hard doesn't mean clocking in extra hours; it means having a zeal to contribute the best of your talents. Our people culture helps us inculcate measures and benefits which help you feel confident and happy each and every day. Whether you'd like to skill up, go off the grid, attend your favourite events, or be an epitome of fitness, we have you covered round and about.
- Health Memberships
- Sports Subscriptions
- Entertainment Subscriptions
- Key Conferences and Event Passes
- Learning Stipend
- Team Lunches and Parties
- Travel Reimbursements
- ESOPs
That's what we think will brighten up your personal life, as a gesture of thanks for helping us with your talents.
Join us to be a part of our exciting journey to build one Digital Identity Platform!
- Mandatory: Docker, AWS, Linux, Kubernetes or ECS
- Prior experience provisioning and spinning up AWS Clusters / Kubernetes
- Production experience building scalable systems (load balancers, memcached, master/slave architectures)
- Experience supporting a managed cloud services infrastructure
- Ability to maintain, monitor and optimise production database servers
- Prior work with Cloud Monitoring tools (Nagios, Cacti, CloudWatch etc.)
- Experience with Docker, Kubernetes, Mesos, NoSQL databases (DynamoDB, Cassandra, MongoDB, etc)
- Other Open Source tools used in the infrastructure space (Packer, Terraform, Vagrant, etc.)
- In-depth knowledge of the Linux environment.
- Prior experience leading technical teams through the design and implementation of systems infrastructure projects.
- Working knowledge of Configuration Management (Chef, Puppet, or Ansible preferred) and Continuous Integration tools (Jenkins preferred)
- Experience in handling large production deployments and infrastructure.
- DevOps based infrastructure and application deployments experience.
- Working knowledge of the AWS network architecture including designing VPN solutions between regions and subnets
- Hands-on knowledge with the AWS AMI architecture including the development of machine templates and blueprints
- He/she should be able to validate that the environment meets all security and compliance controls.
- Good working knowledge of AWS services such as Messaging, Application Services, Migration Services, Cost Management Platform.
- Proven written and verbal communication skills.
- Understands and can serve as the technical team lead to oversee the build of the Cloud environment based on customer requirements.
- Previous NOC experience.
- Client-facing experience with excellent customer communication and documentation skills

2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands it very clearly
3. Deep understanding of microservice design, and aware of centralized caching (Redis) and centralized configuration (Consul/Zookeeper); a minimal caching sketch follows this list
4. Hands-on experience working with containers and their orchestration using Kubernetes
5. Hands-on experience with Linux and Windows operating systems
6. Has worked on NoSQL databases like Cassandra, Aerospike, Mongo, or Couchbase, and on central logging, monitoring, and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Has good knowledge of network security, security architecture, and secure SDLC practices
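To make the centralized caching point in item 3 concrete, here is a minimal read-through cache sketch using Redis from Python. The Redis endpoint, TTL, and the fetch_user_from_db helper are hypothetical stand-ins, not part of the requirement list.

```python
"""Sketch of a centralized read-through cache with Redis."""
import json
import redis

# Hypothetical shared Redis endpoint for all service instances.
cache = redis.Redis(host="redis.internal.example", port=6379, decode_responses=True)
TTL_SECONDS = 300  # hypothetical cache lifetime

def fetch_user_from_db(user_id: str) -> dict:
    """Placeholder for the real database lookup."""
    return {"id": user_id, "name": "example"}

def get_user(user_id: str) -> dict:
    """Return the cached record if present, otherwise load it and cache it."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = fetch_user_from_db(user_id)
    cache.setex(key, TTL_SECONDS, json.dumps(user))  # store with expiry
    return user
```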








