● Responsible for the development and implementation of cloud solutions.
● Responsible for automation & orchestration using tools (Puppet/Chef)
● Monitoring the product's security & health (Datadog/New Relic)
● Managing and maintaining databases (MongoDB & PostgreSQL)
● Automating infrastructure using AWS services like CloudFormation
● Providing evidence in infrastructure security audits
● Migrating to container technologies (Docker/Kubernetes)
● Should have knowledge of serverless concepts (AWS Lambda)
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc. (a minimal sketch follows this list)
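For illustration only, here is a minimal sketch of the kind of AWS automation these responsibilities imply, written with the boto3 SDK. The region and the stack name are hypothetical placeholders, not details taken from this posting.

```python
# Minimal sketch: inventorying AWS resources with boto3.
# The region and stack name are placeholder assumptions.
import boto3

session = boto3.Session(region_name="us-east-1")  # assumed region

# List S3 buckets in the account.
s3 = session.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print("bucket:", bucket["Name"])

# List running EC2 instances.
ec2 = session.client("ec2")
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for res in reservations:
    for inst in res["Instances"]:
        print("instance:", inst["InstanceId"], inst["InstanceType"])

# Inspect a CloudFormation stack (stack name is hypothetical).
cfn = session.client("cloudformation")
for stack in cfn.describe_stacks(StackName="my-app-stack")["Stacks"]:
    print("stack:", stack["StackName"], stack["StackStatus"])
```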
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Rupees 2,500 credited to your Sodexo meal card every month
About Aviso:
Aviso is the AI Compass that guides Sales and Go-to-Market teams to close more deals, accelerate revenue growth, and find their True North. Aviso delivers true revenue intelligence, nudges team-wide actions, and gives precise guidance so sellers and teams don’t get lost in the fog of CRM, scattered data lakes, and human biases.
We are a global company with offices in Redwood City, San Francisco, Hyderabad, and Bangalore. Our customers are innovative leaders in their market. We are proud to count Dell, Honeywell, MongoDB, Glassdoor, Splunk, FireEye, and RingCentral as our customers, helping them drive revenue, achieve goals faster, and win in bold new frontiers.
Aviso is backed by Storm Ventures, Shasta Ventures, Scale Venture Partners and leading Silicon Valley technology investors.
Please Apply - https://zrec.in/GzLLD?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap. We identify the technical and cultural issues in an organization's journey to successfully implementing DevOps practices, and work with the respective teams to fix those issues and increase overall productivity. We also run training sessions for developers to convey the importance of DevOps.
We provide these services: DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance.
We assess a technology company's architecture, security, governance, compliance, and DevOps maturity model, and help them optimize their cloud cost, streamline their technology architecture, and set up processes that improve the availability and reliability of their website and applications. We set up tools for monitoring, logging, and observability, and we focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: DevOps Engineer (AWS)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 2-4 years
Education: B.Tech/MCA/BCA
Notice Period: Immediate
Infra360.io is searching for a DevOps Engineer to lead our group of IT specialists in maintaining and improving our software infrastructure. You'll collaborate with software engineers, QA engineers, and other IT professionals in deploying, automating, and managing the software infrastructure. As a DevOps engineer, you will also be responsible for setting up CI/CD pipelines, monitoring programs, and cloud infrastructure.
Below is a detailed description of the role's responsibilities and expectations.
Tech Stack:
- Kubernetes: Deep understanding of Kubernetes clusters, container orchestration, and their architecture (a cluster-inspection sketch follows this list).
- Terraform: Extensive hands-on experience with Infrastructure as Code (IaC) using Terraform for managing cloud resources.
- ArgoCD: Experience in continuous deployment and using ArgoCD to maintain GitOps workflows.
- Helm: Expertise in Helm for managing Kubernetes applications.
- Cloud Platforms: Expertise in AWS; GCP or Azure will be an added advantage.
- Debugging and Troubleshooting: The DevOps Engineer must be proficient in identifying and resolving complex issues in a distributed environment, ranging from networking issues to misconfigurations in infrastructure or application components.
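As a hedged illustration of the Kubernetes item above, the sketch below inspects a cluster with the official Python kubernetes client; the kubeconfig context and the "default" namespace are assumptions, not details from the posting.

```python
# Minimal sketch: inspecting a cluster with the official Python "kubernetes"
# client (pip install kubernetes). Kubeconfig and namespace are assumed.
from kubernetes import client, config

config.load_kube_config()  # local kubeconfig; in-cluster code would use load_incluster_config()

v1 = client.CoreV1Api()
apps = client.AppsV1Api()

# List pods and their phases in an assumed namespace.
for pod in v1.list_namespaced_pod(namespace="default").items:
    print("pod:", pod.metadata.name, pod.status.phase)

# List deployments and their ready-replica counts.
for dep in apps.list_namespaced_deployment(namespace="default").items:
    print("deployment:", dep.metadata.name, dep.status.ready_replicas)
```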
Key Responsibilities:
- CI/CD and configuration management
- Doing RCA of production issues and providing resolutions
- Setting up failover, DR, backups, logging, monitoring, and alerting (a backup sketch follows this list)
- Containerizing different applications on the Kubernetes platform
- Capacity planning of each environment's infrastructure
- Ensuring zero outages of critical services
- Database administration of SQL and NoSQL databases
- Infrastructure as Code (IaC)
- Keeping infrastructure costs to a minimum
- Setting up the right set of security measures
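To illustrate the backup/DR responsibility above, here is a minimal, hedged sketch that takes a manual RDS snapshot with boto3; the instance identifier and region are hypothetical placeholders.

```python
# Minimal sketch: taking a manual RDS snapshot with boto3 as part of a backup
# routine. Instance identifier and region are placeholder assumptions.
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

instance_id = "prod-postgres"  # hypothetical DB instance identifier
snapshot_id = f"{instance_id}-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

resp = rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier=instance_id,
)
print("snapshot status:", resp["DBSnapshot"]["Status"])

# Block until the snapshot is available (a natural alerting hook point).
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
print("snapshot ready:", snapshot_id)
```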
Ideal Candidate Profile:
- A graduate/postgraduate degree in Computer Science or a related field
- 2-4 years of strong DevOps experience with the Linux environment.
- Strong interest in working in our tech stack
- Excellent communication skills
- Ability to work with minimal supervision as a self-starter
- Hands-on experience with at least one scripting language - Bash, Python, Go, etc.
- Experience with version control systems like Git
- Strong experience with Amazon Web Services (EC2, RDS, VPC, S3, Route 53, IAM, etc.)
- Strong experience managing production systems day in and day out
- Experience finding and fixing issues across the different layers of a production architecture
- Knowledge of SQL and NoSQL databases, Elasticsearch, Solr, etc.
- Knowledge of networking, firewalls, load balancers, Nginx, Apache, etc.
- Experience with automation tools like Ansible/SaltStack and Jenkins
- Experience with the Docker/Kubernetes platform and managing OpenStack (desirable)
- Experience with HashiCorp tools, e.g. Vault, Vagrant, Terraform, Consul, VirtualBox, etc. (desirable)
- Experience managing/mentoring a small team of 2-3 people (desirable)
- Experience with monitoring tools like Prometheus/Grafana/Elastic APM
- Experience with logging tools like ELK/Loki
Job Description:
• Drive end-to-end automation from GitHub/GitLab/BitBucket to deployment and observability, and enable SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of digital platforms and applications
• Solid understanding of DevSecOps workflows that support CI, CS, CD, CM, CT
• Deploy, configure, and manage SaaS and PaaS cloud platforms and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
• DevOps programming: writing scripts and building operations/server instance/app/DB monitoring tools
• Set up and manage the continuous build and dev project management environment: Jenkins X/GitHub Actions/Tekton, Git, Jira
• Design secure networks, systems, and application architectures
• Collaborating with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Planning, researching, and developing security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and best practices
• Installation and use of firewalls, data encryption, and other security products and procedures
• Maturity in understanding compliance, policy, and cloud governance, and the ability to identify and execute automation
• At Wesco, we discuss solutions more than problems. We celebrate innovation and creativity.
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle
- We have both cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra-fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps and CI/CD
- Deep experience in GitLab, GitOps, Ansible, Docker, Grafana, and Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially with GPU processing. This doesn't need to include model training, data gathering, etc.; we're looking more for experience with model deployment and inferencing tasks at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and SQLAlchemy (we have MongoDB and PostgreSQL databases on our backend; a connection sketch follows this list)
- Expertise in Docker-based virtualization, including creating & maintaining custom Docker images, deploying Docker images on cloud and on-premise services, and monitoring production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, availability
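As a hedged sketch of the DB integration mentioned above, the snippet below opens connections with pymongo and SQLAlchemy and runs trivial health checks; both connection URLs are placeholder assumptions.

```python
# Minimal sketch: connecting to MongoDB (pymongo) and PostgreSQL (SQLAlchemy).
# Both connection URLs are placeholder assumptions, not real endpoints.
from pymongo import MongoClient
from sqlalchemy import create_engine, text

# MongoDB health check.
mongo = MongoClient("mongodb://localhost:27017", serverSelectionTimeoutMS=3000)
print("mongo ping:", mongo.admin.command("ping"))

# PostgreSQL health check via SQLAlchemy (psycopg2 driver assumed installed).
engine = create_engine("postgresql+psycopg2://app:secret@localhost:5432/appdb")
with engine.connect() as conn:
    print("postgres says:", conn.execute(text("SELECT version()")).scalar())
```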
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g. OpenCV, Pillow, etc.)
- Experience with PostgreSQL and MongoDB (or SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelor's or Master's degree in computer science or a related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux/Ubuntu system administration
Seniority: We are looking for a mid-to-senior-level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years
Task:
- Run our software products in different international environments (on-premise and with cloud providers)
- Support the developers while debugging issues
- Analyse and monitor software at runtime to find bugs and performance issues, and plan the growth of the system (a monitoring sketch follows this list)
- Integrate new technologies to support our products as they grow in the market
- Develop Continuous Integration and Continuous Deployment pipelines
- Maintain our on-premise hosted servers and applications: operating system upgrades, software upgrades, introducing new database versions, etc.
- Automate tasks to reduce human error and parallelize work
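As a hedged illustration of the runtime-monitoring task above, here is a minimal, stdlib-only Python sketch that polls an HTTP health endpoint and logs its latency; the URL, interval, and threshold are hypothetical.

```python
# Minimal sketch: polling an HTTP health endpoint and logging latency.
# URL, polling interval, and slow threshold are placeholder assumptions.
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

URL = "http://localhost:8080/health"  # hypothetical endpoint
SLOW_MS = 500

while True:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            level = logging.WARNING if elapsed_ms > SLOW_MS else logging.INFO
            logging.log(level, "status=%s latency=%.0fms", resp.status, elapsed_ms)
    except Exception as exc:  # keep the loop alive on transient failures
        logging.error("health check failed: %s", exc)
    time.sleep(30)
```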
We wish:
- Basic OS knowledge (Debian, CentOS, SUSE Linux Enterprise)
- Web server administration and optimization (Apache, Traefik)
- Database administration and optimization (MySQL/MariaDB, Oracle, Elasticsearch)
- JVM administration and optimization; application server administration and optimization (ServiceMix, Karaf, GlassFish, Spring Boot)
- Scripting experience (Perl, Python, PHP, Java)
- Monitoring experience (Icinga/Nagios, AppDynamics, Prometheus, Grafana) - an exporter sketch follows this list
- Knowledge of container management (Docker/containerd, DC/OS, Kubernetes)
- Experience with automated deployment processes (Ansible, GitLab CI, Helm)
- Defining and optimizing processes for system maintenance, continuous integration, and continuous delivery
- Excellent communication skills & proficiency in English are necessary
- Leadership skills with a team-motivating approach
- Good team player
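To make the monitoring item above concrete, here is a hedged sketch that exposes custom application metrics with the official prometheus_client Python library; the metric names and port are assumptions.

```python
# Minimal sketch: exposing application metrics for Prometheus to scrape
# (pip install prometheus-client). Metric names and port are assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Current work queue depth")

start_http_server(8000)  # metrics served at http://localhost:8000/metrics

while True:
    REQUESTS.inc()
    QUEUE_DEPTH.set(random.randint(0, 10))  # stand-in for a real measurement
    time.sleep(1)
```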
We Offer:
- Freedom to realise your own ideas & individual career & development opportunities.
- A motivating work environment with a flat hierarchy, memorable company events, fun at the workplace, and flexibility.
- Professional challenges and career development opportunities.
Your contact for this position is Janki Raval.
Would you like to become part of this highly innovative, dynamic, and exciting world?
We look forward to receiving your resume.
Skills required:
Strong knowledge and experience of cloud infrastructure (AWS, Azure, or GCP), systems, network design, and cloud migration projects.
Strong knowledge and understanding of CI/CD processes and tools (Jenkins/Azure DevOps) is a must.
Strong knowledge and understanding of Docker & Kubernetes is a must.
Strong knowledge of Python, along with one more language (Shell, Groovy, or Java).
Strong prior experience using automation tools like Ansible and Terraform.
Ability to architect systems, infrastructure & platforms using cloud services.
Strong communication skills, with a demonstrated ability to collaborate across teams and organizations.
Benefits of working with OpsTree Solutions:
Opportunity to work on the latest cutting-edge tools/technologies in DevOps
Knowledge-focused work culture
Collaboration with very enthusiastic DevOps experts
High growth trajectory
Opportunity to work with big shots in the IT industry
Projects you'll be working on:
- We're focused on enhancing our product for our clients and their users, as well as streamlining operations and improving our technical foundation.
- Writing scripts for the procurement, configuration, and deployment of instances (infrastructure automation) on GCP
- Managing Kubernetes clusters
- Managing products and services like VPC, Elasticsearch, Cloud Functions, RabbitMQ, Redis servers, Postgres infrastructure, App Engine, etc. (a health-check sketch follows this list)
- Supporting developers in setting up infrastructure for services
- Managing and improving the microservices infrastructure
- Managing high-availability, low-latency applications
- Focusing on security best practices to assist in security and compliance activities
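As a hedged sketch of the day-to-day service management in the list above, the snippet below health-checks Redis and RabbitMQ with the redis and pika client libraries; the hostnames are placeholder assumptions.

```python
# Minimal sketch: health-checking backing services (pip install redis pika).
# Hostnames are placeholder assumptions, not real endpoints.
import pika
import redis

# Redis: PING returns True when the server is healthy.
r = redis.Redis(host="localhost", port=6379, socket_timeout=3)
print("redis ping:", r.ping())

# RabbitMQ: opening and closing a connection verifies broker availability.
params = pika.ConnectionParameters(host="localhost")
conn = pika.BlockingConnection(params)
print("rabbitmq open:", conn.is_open)
conn.close()
```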
Requirements
- Minimum 3 years' experience as a DevOps engineer
- Minimum 1 year's experience with Kubernetes clusters (Infrastructure as Code, maintenance, and scalability).
- Bash expertise; professional programming experience in Node or Python
- Experience with setting up, configuring, and using Jenkins or other CI tools, and building CI/CD pipelines
- Experience setting up a microservices architecture
- Experience with package management and deployments
- Thorough understanding of networking.
- Understanding of all common services and protocols
- Experience in web server configuration, monitoring, network design, and high availability
- Thorough understanding of DNS, VPN, and SSL (a certificate-check sketch follows this list)
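To ground the DNS/SSL item above, here is a hedged, stdlib-only Python sketch that resolves a hostname and reports the days until its TLS certificate expires; the hostname is a placeholder.

```python
# Minimal sketch: DNS resolution plus TLS certificate expiry check, using only
# the standard library. The hostname is a placeholder assumption.
import socket
import ssl
from datetime import datetime, timezone

HOST = "example.com"

# DNS: resolve the hostname.
addr = socket.gethostbyname(HOST)
print("resolved:", HOST, "->", addr)

# TLS: fetch the peer certificate and compute days to expiry.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
days_left = (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days
print("certificate expires in", days_left, "days")
```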
Technologies you'll work with:
- GKE, Prometheus, Grafana, Stackdriver
- ArgoCD and GitHub Actions
- NodeJS Backend
- Postgres, ElasticSearch, Redis, RabbitMQ
- Whatever else you decide - we're constantly re-evaluating our stack and tools
- Having prior experience with the technologies is a plus, but not mandatory for skilled candidates.
Benefits
- Remote option - you can work from the location of your choice :)
- Reimbursement of Home Office Setup
- Competitive Salary
- Friendly atmosphere
- Flexible paid vacation policy
Job Brief:
We are looking for candidates who have development experience and have delivered CI/CD-based projects. You should have good hands-on experience with the Jenkins master/agent architecture and have used AWS-native services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. You should also have experience setting up cross-platform CI/CD pipelines that span different cloud platforms, or on-premise and cloud platforms together. (A minimal pipeline sketch follows.)
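For illustration, here is a hedged sketch that triggers and inspects an AWS CodePipeline execution with boto3; the pipeline name and region are hypothetical placeholders.

```python
# Minimal sketch: triggering and inspecting an AWS CodePipeline with boto3.
# Pipeline name and region are placeholder assumptions.
import boto3

cp = boto3.client("codepipeline", region_name="us-east-1")  # assumed region

PIPELINE = "app-release-pipeline"  # hypothetical pipeline name

# Kick off a new execution.
execution_id = cp.start_pipeline_execution(name=PIPELINE)["pipelineExecutionId"]
print("started execution:", execution_id)

# Report the latest status of each stage.
state = cp.get_pipeline_state(name=PIPELINE)
for stage in state["stageStates"]:
    status = stage.get("latestExecution", {}).get("status", "unknown")
    print("stage:", stage["stageName"], "->", status)
```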
Job Location:
Pune.
Job Description:
- Hands-on with AWS (Amazon Web Services) cloud, its DevOps services, and CloudFormation.
- Experience interacting with customers.
- Excellent communication skills.
- Hands-on in creating and managing Jenkins jobs, plus Groovy scripting.
- Experience in setting up cloud-agnostic and cloud-native CI/CD pipelines.
- Experience in Maven.
- Experience in scripting languages like Bash, PowerShell, and Python.
- Experience in automation tools like Terraform, Ansible, Chef, and Puppet.
- Excellent troubleshooting skills.
- Experience with Docker and Kubernetes, including writing Dockerfiles.
- Hands-on with version control systems like GitHub, GitLab, TFS, BitBucket, etc.
Requirements:
- Must have a good understanding of Python and shell scripting with industry-standard coding conventions
- Must possess good coding and debugging skills
- Experience in the design & development of test frameworks (a test sketch follows this list)
- Experience in automation testing
- Good to have experience with the Jenkins framework tool
- Good to have exposure to the Continuous Integration process
- Experience in Linux and Windows OS
- Build & release process knowledge is desirable
- Experience in automating manual test cases
- Experience automating OS/FW-related tasks
- An understanding of BIOS/FW QA is a strong plus
- OpenCV experience is a plus
- Good to have platform exposure
- Must have good communication skills
- Good leadership and collaboration capabilities, as the individual will have to work with multiple teams, single-handedly maintain the automation framework, and enable the manual validation team
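As a hedged sketch of the test-framework and manual-test-automation work above, the snippet below automates two simple OS-level checks in pytest style; it assumes a Linux or macOS host, and the free-space threshold is a placeholder assumption.

```python
# Minimal sketch: automating OS-level checks as pytest tests
# (pip install pytest; run with `pytest test_os_tasks.py`).
# Assumes a Linux/macOS host; the free-space threshold is an assumption.
import shutil
import subprocess


def test_uname_reports_kernel():
    # Automates a manual check: the OS should report a known kernel name.
    result = subprocess.run(["uname", "-s"], capture_output=True, text=True)
    assert result.returncode == 0
    assert result.stdout.strip() in {"Linux", "Darwin"}


def test_root_filesystem_has_free_space():
    # Automates a manual check: at least 1 GiB free on the root filesystem.
    usage = shutil.disk_usage("/")
    assert usage.free > 1 * 1024**3
```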