● Responsible for the development and implementation of cloud solutions
● Responsible for automation and orchestration using tools such as Puppet/Chef
● Monitoring the product's security and health (Datadog/New Relic)
● Managing and maintaining databases (MongoDB and PostgreSQL)
● Automating infrastructure using AWS services such as CloudFormation
● Providing evidence in infrastructure security audits
● Migrating to container technologies (Docker/Kubernetes)
● Should have knowledge of serverless concepts (AWS Lambda)
● Should be able to work with AWS services such as EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
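As a sketch of the CloudFormation-based automation mentioned above, a minimal template that provisions a versioned S3 bucket might look like the following (the resource and bucket names are illustrative placeholders, not from this posting):

```yaml
# Minimal CloudFormation sketch: one versioned S3 bucket defined as code
AWSTemplateFormatVersion: "2010-09-09"
Description: Example S3 bucket provisioned as infrastructure as code
Resources:
  AppDataBucket:                      # logical ID is a placeholder
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-app-data-bucket   # placeholder bucket name
      VersioningConfiguration:
        Status: Enabled
```

Such a template would typically be rolled out with `aws cloudformation deploy --template-file template.yml --stack-name example-stack`.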
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Rs. 2,500 credited to a Sodexo meal card every month
About Aviso Inc
Aviso is the AI Compass that guides Sales and Go-to-Market teams to close more deals, accelerate revenue growth, and find their True North. Aviso delivers true revenue intelligence, nudges team-wide actions, and gives precise guidance so sellers and teams don’t get lost in the fog of CRM, scattered data lakes, and human biases.
We are a global company with offices in Redwood City, San Francisco, Hyderabad, and Bangalore. Our customers are innovative leaders in their market. We are proud to count Dell, Honeywell, MongoDB, Glassdoor, Splunk, FireEye, and RingCentral as our customers, helping them drive revenue, achieve goals faster, and win in bold new frontiers.
Aviso is backed by Storm Ventures, Shasta Ventures, Scale Venture Partners and leading Silicon Valley technology investors.
DESIRED SKILLS AND EXPERIENCE
Strong analytical and problem-solving skills
Ability to work independently, learn quickly and be proactive
3-5 years overall and at least 1-2 years of hands-on experience in designing and managing DevOps Cloud infrastructure
Experience must include a combination of:
o Experience working with configuration management tools – Ansible, Chef, Puppet, SaltStack (expertise in at least one tool is a must)
o Ability to write and maintain code in at least one scripting language (Python preferred)
o Practical knowledge of shell scripting
o Cloud knowledge – AWS, VMware vSphere
o Good understanding and familiarity with Linux
o Networking knowledge – Firewalls, VPNs, Load Balancers
o Web/Application servers, Nginx, JVM environments
o Virtualization and containers - Xen, KVM, Qemu, Docker, Kubernetes, etc.
o Familiarity with logging systems - Logstash, Elasticsearch, Kibana
o Git, Jenkins, Jira
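As an illustration of the scripting expectations above, here is a minimal Python sketch that summarizes HTTP status codes from web-server access-log lines. The log format and field positions are assumptions for illustration (Apache/Nginx common log format), not something specified in the posting:

```python
# Summarize HTTP status codes from access-log lines.
# Assumes the status code is the 9th whitespace-separated field,
# as in the common/combined log format.
from collections import Counter

def status_summary(log_lines):
    """Return a dict mapping status code -> occurrence count."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) > 8 and fields[8].isdigit():
            counts[fields[8]] += 1
    return dict(counts)

sample = [
    '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 612',
    '127.0.0.1 - - [10/Oct/2023:13:55:37 +0000] "GET /missing HTTP/1.1" 404 153',
    '127.0.0.1 - - [10/Oct/2023:13:55:38 +0000] "GET / HTTP/1.1" 200 612',
]
print(status_summary(sample))  # → {'200': 2, '404': 1}
```

The same task is a classic one-liner in shell (`awk '{print $9}' access.log | sort | uniq -c`), which is why postings often ask for both Python and shell.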
Electrum is looking for an experienced and proficient DevOps Engineer. This role will provide you with an opportunity to explore what’s possible in a collaborative and innovative work environment. If your goal is to work with a team of talented professionals that is keenly focused on solving complex business problems and supporting product innovation with technology, you might be our new DevOps Engineer. In this position, you will build out systems for our rapidly expanding team, enabling the whole engineering group to operate more effectively and iterate at top speed in an open, collaborative environment. The ideal candidate has a solid background in software engineering, proven experience in deploying product updates, identifying production issues, and implementing integrations, a willingness to take on challenges and calculated risks, and a strong belief in efficiency and innovation, backed by exceptional communication and documentation skills.
YOU WILL:
- Plan for future infrastructure as well as maintain & optimize the existing infrastructure.
- Conceptualize, architect, and build:
- 1. Automated deployment pipelines in a CI/CD environment like Jenkins;
- 2. Infrastructure using Docker, Kubernetes, and other serverless platforms;
- 3. Secured network utilizing VPCs with inputs from the security team.
- Work with developers & the QA team to institute a policy of Continuous Integration with automated testing.
- Architect, build, and manage dashboards to provide visibility into delivery and into the functional and performance status of production applications.
- Work with developers to institute systems, policies, and workflows which allow for a rollback of deployments.
- Triage releases of applications/hotfixes to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
- Assist developers and other teams' on-call engineers with postmortems, follow-up, and review of issues affecting production availability.
- Scale the Electrum platform to handle millions of requests concurrently.
- Reduce Mean Time To Recovery (MTTR), and enable High Availability and Disaster Recovery.
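The automated deployment pipeline described above can be sketched as a Jenkins declarative pipeline. The stage contents, image names, and deployment target here are illustrative placeholders, not Electrum's actual setup:

```groovy
// Illustrative Jenkins declarative pipeline: build, test, deploy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'docker build -t example-app:${BUILD_NUMBER} .' }
        }
        stage('Test') {
            steps { sh 'docker run --rm example-app:${BUILD_NUMBER} pytest' }
        }
        stage('Deploy') {
            when { branch 'main' }   // deploy only from the main branch
            steps {
                sh 'kubectl set image deployment/example-app app=example-app:${BUILD_NUMBER}'
            }
        }
    }
}
```

Tagging images with `BUILD_NUMBER` gives every deployment an identifiable artifact, which is also what makes the rollback workflows mentioned below practical.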
PREREQUISITES:
- Bachelor’s degree in engineering, computer science, or related field, or equivalent work experience.
- Minimum of six years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other AWS services.
- At least 2 years of experience in building and owning serverless infrastructure.
- At least 2 years of scripting experience in Python (preferred) and shell, with experience in web application deployment systems and continuous integration tools (e.g., Ansible).
- Experience building a multi-region highly available auto-scaling infrastructure that optimizes performance and cost.
- Experience in automating the provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have prior experience automating deployments to production and lower environments.
- Experience delivering major automation solutions through scripts or infrastructure as code.
- Experience with APM tools such as DataDog and log management tools.
- Experience in designing and implementing system architecture processes for essential functions; establishing and enforcing network security policy (AWS VPC, Security Groups) & ACLs.
- Experience establishing and enforcing:
- 1. System monitoring tools and standards
- 2. Risk Assessment policies and standards
- 3. Escalation policies and standards
- Excellent DevOps engineering, team management, and collaboration skills.
- Advanced knowledge of programming languages such as Python, and experience writing code and scripts.
- Experience or knowledge of Application Performance Monitoring (APM); prior experience as an open-source contributor is preferred.
- Should be able to write sample programs using tools such as Bash, PowerShell, or Python.
- Analytical/logical reasoning
- GitHub Actions: good working experience, including repository/workflow dispatch and writing reusable workflows
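A reusable workflow of the kind referred to above might be sketched as follows. The file path, input name, and deploy script are hypothetical, not from the posting:

```yaml
# .github/workflows/deploy.yml — reusable workflow invoked via workflow_call
name: deploy
on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # deploy.sh is a placeholder for the project's actual deploy step
      - run: ./deploy.sh "${{ inputs.environment }}"
```

A caller workflow can then reference it with `uses: ./.github/workflows/deploy.yml` and pass `environment` under `with:`, or be triggered manually via `workflow_dispatch`.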
- Azure CLI: hands-on experience with az CLI commands
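Typical az CLI usage of the kind mentioned above looks like the following; the resource group, VM name, and region are placeholders:

```shell
# Illustrative az CLI session: create a resource group and a VM
az group create --name demo-rg --location eastus
az vm create --resource-group demo-rg --name demo-vm \
    --image Ubuntu2204 --admin-username azureuser --generate-ssh-keys
az vm list --resource-group demo-rg --output table
```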
Designation: DevOps Engineer
Location: HSR, Bangalore
About the Company
Making an impact, driven by data.
Vumonic Datalabs is a data-driven startup providing business insights to e-commerce & e-tail companies to help them make data-driven decisions, scale up their business, and understand their competition better. As one of the EU's fastest growing (and coolest) data companies, we believe in revolutionizing the way businesses make their most important decisions by providing first-hand, transaction-based insights in real time.
About the Role
We are looking for an experienced and ambitious DevOps engineer who will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. As a DevOps engineer at Vumonic Datalabs, you will have the opportunity to work with a thriving global team to help us build functional systems that improve customer experience. If you have a strong background in software engineering, are hungry to learn, passionate about your work, and familiar with the technical skills mentioned below, we’d love to speak with you.
What you’ll do
- Optimize and engineer the DevOps infrastructure for high availability, scalability, and reliability.
- Monitor logs on servers and manage cloud resources
- Build and set up new development tools and infrastructure to reduce occurrences of errors
- Understand the needs of stakeholders and convey this to developers
- Design scripts to automate and improve development and release processes
- Test and examine code written by others and analyze results
- Ensure that systems are safe and secure against cybersecurity threats
- Identify technical problems, perform root cause analysis for production errors and develop software updates and ‘fixes’
- Work with software developers and engineers to ensure that development follows established processes, and communicate actively with the operations team.
- Design procedures for system troubleshooting and maintenance.
What you need to have
TECHNICAL SKILLS
- Experience working with the following tools: Google Cloud Platform, Kubernetes, Docker, Elasticsearch, Terraform, Redis
- Experience with the following tools preferred: Python, Node.js, MongoDB, Rancher, Cassandra
- Experience with real-time monitoring of cloud infrastructure using publicly available tools and servers
- 2 or more years of experience as a DevOps engineer (startup/technical experience preferred)
You are
- Excited to learn, are a hustler and “Do-er”
- Passionate about building products that create impact.
- Updated with the latest technological developments & enjoy upskilling yourself with market trends.
- Willing to experiment with novel ideas & take calculated risks.
- Someone with a problem-solving attitude with the ability to handle multiple tasks while meeting expected deadlines.
- Interested to work as part of a supportive, highly motivated and fun team.
• Design cloud infrastructure that is secure, scalable, and highly available on AWS
• Define infrastructure and deployment requirements
• Provision, configure and maintain AWS cloud infrastructure defined as code
• Ensure configuration and compliance with configuration management tools
• Troubleshoot problems across a wide array of services and functional areas
• Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
• Perform infrastructure cost analysis and optimization
Qualifications:
• At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
• Strong understanding of how to secure AWS environments and meet compliance requirements
• Expertise in configuration management
• Hands-on experience deploying and managing infrastructure with Terraform
• Solid foundation of networking and Linux administration
• Experience with Docker, GitHub, Jenkins, ELK and deploying applications on AWS
• Ability to learn/use a wide variety of open source technologies and tools
• Strong bias for action and ownership
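The Terraform experience listed above might look like the following minimal sketch. The region, AMI ID, and resource names are placeholders, not from the posting:

```hcl
# Minimal Terraform sketch: a security group and an EC2 instance as code
provider "aws" {
  region = "us-east-1"   # placeholder region
}

resource "aws_security_group" "web" {
  name = "example-web-sg"
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # HTTPS open to the world; tighten as needed
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]
}
```

Running `terraform init`, then `terraform plan` and `terraform apply`, would provision these resources; the plan step is what makes infrastructure changes reviewable before they happen.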
• Work with the engineering group to plan ongoing feature development and product maintenance.
• Familiar with virtualization, containers (Kubernetes), core networking, cloud-native development, Platform as a Service (Cloud Foundry), Infrastructure as a Service, distributed systems, etc.
• Implementing tools and processes for deployment, monitoring, alerting, automation, scalability, and ensuring maximum availability of server infrastructure
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB, Elasticsearch, and Cassandra
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and hardware
• Monitor the performance of computer systems and networks, and coordinate computer network access and use
• Design, configure, and test computer hardware, networking software, and operating system software
• Recommend changes to improve systems and network configurations, and determine hardware or software requirements related to such changes
DevOps Engineer Skills
- Building a scalable and highly available infrastructure for data science
- Knows data science project workflows
- Hands-on with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, SageMaker
- Working knowledge of Jenkins or a similar tool
Responsibilities
- Owns all the ML cloud infrastructure (AWS)
- Help build out an entire CI/CD ecosystem with auto-scaling
- Work with a testing engineer to design testing methodologies for ML APIs
- Ability to research & implement new technologies
- Help with cost optimization of infrastructure
- Knowledge sharing
Nice to Have
- Develop APIs for machine learning
- Can write Python servers for ML systems with API frameworks
- Understanding of task queue frameworks like Celery