● Responsible for design, development, and implementation of Cloud solutions.
● Responsible for achieving automation & orchestration of tools (Puppet/Chef)
● Monitoring the product's security & health (Datadog/New Relic)
● Managing and maintaining databases (MongoDB & Postgres)
● Automating Infrastructure using AWS services like CloudFormation
● Participating in Infrastructure Security Audits
● Migrating to Container technologies (Docker/Kubernetes)
● Should be able to work on serverless concepts such as AWS Lambda (a minimal handler sketch follows this list)
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
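For the serverless item above, here is a minimal sketch of what a Python AWS Lambda handler can look like. It is illustrative only, not Aviso's actual code: the bucket name, default value, and event shape are hypothetical assumptions.

```python
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Minimal Lambda handler: list a few objects in a bucket named in the event.

    The 'bucket' key and its default are hypothetical; adapt to your trigger.
    """
    bucket = event.get("bucket", "example-bucket")  # hypothetical default
    response = s3.list_objects_v2(Bucket=bucket, MaxKeys=10)
    keys = [obj["Key"] for obj in response.get("Contents", [])]
    return {
        "statusCode": 200,
        "body": json.dumps({"bucket": bucket, "keys": keys}),
    }
```

A handler like this would typically be wired to an API Gateway route or an S3 event notification, depending on the use case.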
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshot projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Every month, Rupees 2,500 will be credited to your Sodexo meal card
About Aviso Inc
Aviso is the AI Compass that guides Sales and Go-to-Market teams to close more deals, accelerate revenue growth, and find their True North. Aviso delivers true revenue intelligence, nudges team-wide actions, and gives precise guidance so sellers and teams don’t get lost in the fog of CRM, scattered data lakes, and human biases.
We are a global company with offices in Redwood City, San Francisco, Hyderabad, and Bangalore. Our customers are innovative leaders in their market. We are proud to count Dell, Honeywell, MongoDB, Glassdoor, Splunk, FireEye, and RingCentral as our customers, helping them drive revenue, achieve goals faster, and win in bold new frontiers.
Aviso is backed by Storm Ventures, Shasta Ventures, Scale Venture Partners and leading Silicon Valley technology investors.
Similar jobs
Apprication Pvt Ltd - Work from office - Goregaon (E)
DevOps Engineer:
Preferred experience: 1-4 years
Automate the deployment process from Bitbucket to servers.
Implement autoscaling servers.
Create containers and server images.
Possess comprehensive AWS knowledge beyond just EC2 instances.
Utilize a broad range of AWS services, including but not limited to:
Elastic Load Balancing (ELB) and Amazon Route 53 for traffic management.
Amazon RDS and DynamoDB for database management.
AWS IAM for secure access management.
Optimize AWS infrastructure for security, scalability, and cost-effectiveness.
Use Jenkins or similar CI/CD tools.
Ensure seamless and reliable code deployments with minimal downtime.
Deploy and manage infrastructure using IaC tools such as Terraform, AWS CloudFormation, or Ansible.
Implement autoscaling solutions to automatically adjust server capacity based on demand, ensuring optimal performance and cost-efficiency.
Create, manage, and optimize container images using Docker.
Deploy and orchestrate containers using Kubernetes or other orchestration tools to ensure scalability.
Implement comprehensive monitoring solutions using tools like Prometheus, Grafana, ELK Stack, etc.
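For the monitoring item above, the sketch below shows one common pattern: exposing application metrics with the Python prometheus_client library so a Prometheus server can scrape them. The metric names and the simulated work are hypothetical placeholders, not this team's actual instrumentation.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric names are hypothetical; pick ones that match your service.
REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    """Simulated unit of work, instrumented for Prometheus scraping."""
    with LATENCY.time():          # records how long the block takes
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.inc()                # counts completed requests

if __name__ == "__main__":
    start_http_server(8000)       # serves /metrics for a Prometheus scrape job
    while True:
        handle_request()
```

Grafana dashboards and alerts would then be built on top of whatever Prometheus scrapes from this endpoint.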
Key Responsibilities:
- Cloud Infrastructure Management: Oversee the deployment, scaling, and management of cloud infrastructure across platforms like AWS, GCP, and Azure. Ensure optimal configuration, security, and cost-effectiveness.
- Application Deployment and Maintenance: Responsible for deploying and maintaining web applications, particularly those built on Django and the MERN stack (MongoDB, Express.js, React, Node.js). This includes setting up CI/CD pipelines, monitoring performance, and troubleshooting.
- Automation and Optimization: Develop scripts and automation tools to streamline operations. Continuously seek ways to improve system efficiency and reduce downtime.
- Security Compliance: Ensure that all cloud deployments comply with relevant security standards and practices. Regularly conduct security audits and coordinate with security teams to address vulnerabilities.
- Collaboration and Support: Work closely with development teams to understand their needs and provide technical support. Act as a liaison between developers, IT staff, and management to ensure smooth operation and implementation of cloud solutions.
- Disaster Recovery and Backup: Implement and manage disaster recovery plans and backup strategies to ensure data integrity and availability (a minimal snapshot sketch follows this list).
- Performance Monitoring: Regularly monitor and report on the performance of cloud services and applications. Use data to make informed decisions about upgrades, scaling, and other changes.
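For the disaster recovery and backup responsibility above, here is a small, hedged sketch of one possible backup step: taking a manual Amazon RDS snapshot with boto3. The region, instance name, and naming scheme are assumptions for illustration, not this role's actual tooling.

```python
import datetime

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

def snapshot_instance(db_instance_id: str) -> str:
    """Create a manual RDS snapshot as one part of a backup routine.

    The instance identifier is hypothetical; wire this into your scheduler.
    """
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{db_instance_id}-backup-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=db_instance_id,
    )
    return snapshot_id

if __name__ == "__main__":
    print(snapshot_instance("app-prod-db"))  # hypothetical instance name
```

In practice this would run on a schedule (cron, EventBridge, or similar) alongside automated snapshots and restore testing.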
Required Skills and Experience:
- Proven experience in managing cloud infrastructure on AWS, GCP, and Azure.
- Strong background in deploying and maintaining Django-based and MERN stack web applications.
- Expertise in automation tools and scripting languages.
- Solid understanding of network architecture and security protocols.
- Experience with continuous integration and deployment (CI/CD) methodologies.
- Excellent problem-solving abilities and a proactive approach to system optimization.
- Good communication skills for effective collaboration with various teams.
Desired Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in AWS, GCP, or Azure are highly desirable.
- Minimum 5 years of experience in a DevOps or similar role, with a focus on cloud computing and web application deployment.
What you’ll be doing at Novo:
● Systems thinking
● Creating best practices, templates, and automation for build, test, integration and deployment pipelines on multiple projects
● Designing and developing tools for easily creating and managing dev/test infrastructure and services in AWS cloud
● Providing expertise and guidance on CI/CD, GitHub, and other development tools via containerization
● Monitoring and supporting systems in Dev, UAT and production environments
● Building mock services and production-like data sources for use in development and testing (a minimal mock-service sketch follows this list)
● Managing GitHub integrations, feature flag systems, code coverage tools, and other development & monitoring tools
● Participating in support rotations to help troubleshoot infrastructure issues
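For the mock-services item above, here is a minimal sketch of a stand-in HTTP service built with Python's standard library. The endpoint path and canned payload are hypothetical; a real mock would mirror whatever production API the tests depend on.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical canned payload standing in for a production-like data source.
FAKE_USERS = [{"id": 1, "name": "Test User"}, {"id": 2, "name": "Another User"}]

class MockAPIHandler(BaseHTTPRequestHandler):
    """Serves fixed JSON so dev/test environments don't need the real backend."""

    def do_GET(self):
        if self.path == "/users":                     # hypothetical endpoint
            body = json.dumps(FAKE_USERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MockAPIHandler).serve_forever()
```

A mock like this is usually containerized so it can be dropped into the same compose file or cluster namespace as the services under test.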
Stacks you eat every day (for DevOps Engineer)
● Creating and working with containers, as well as using container orchestration tools (Kubernetes / Docker)
● AWS: S3, EKS, EC2, RDS, Route53, VPC etc.
● Fair understanding of Linux
● Good knowledge of CI/CD : Jenkins / CircleCI / Github Actions
● Basic level of monitoring
● Support for deployment along with various web servers and Linux environments, both backend and frontend.
DevOps Engineer, Plum & Empuls
As a DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Working on Docker images and maintaining Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters and patch them when necessary (a minimal node health-check sketch follows this list).
- Work with cloud security tools to keep applications secure.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
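For the cluster monitoring item above, the sketch below uses the official Kubernetes Python client to report node readiness, which is one small building block of cluster health checks. It assumes a local kubeconfig; the function name is hypothetical.

```python
from kubernetes import client, config

def report_node_health():
    """Print the Ready condition of each node in the current kube context.

    Assumes a local kubeconfig; inside a pod, use config.load_incluster_config().
    """
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{node.metadata.name}: Ready={ready}")

if __name__ == "__main__":
    report_node_health()
```

A real monitoring setup would push results like these into the alerting stack rather than printing them, but the API calls are the same.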
Required Technical and Professional Expertise
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell, Python, Terraform, Ansible, Puppet, or Chef.
- Experience with and a good understanding of at least one cloud platform such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient in troubleshooting skills with proven abilities in resolving complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Familiarity with Jira is a plus.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, San Francisco, and Dublin. Empuls works with over 100 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. Candidates will be kept informed and updated on their feedback and application status.
DevOps Engineer Position - 3+ years
Kubernetes, Helm - 3+ years (dev & administration)
Monitoring platform setup experience - Prometheus, Grafana
Azure/ AWS/ GCP Cloud experience - 1+ years.
Ansible/Terraform/Puppet - 1+ years
CI/CD - 3+ years
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, Containers, etc. As part of our core development crew, you'll be figuring out how to deploy applications ensuring high availability and fault tolerance, along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can go up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination (a minimal ingestion sketch follows this list).
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PostgreSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
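For the pipeline item above, here is a tiny, hedged sketch of one ingestion step using boto3 against S3: list raw objects, filter them, and write the kept records to a processed bucket. Bucket names, the prefix, and the 'valid' field are hypothetical placeholders, not Blue Sky's actual schema.

```python
import json

import boto3

s3 = boto3.client("s3")

def ingest(bucket: str, prefix: str, dest_bucket: str):
    """Read raw JSON objects, keep the valid ones, and write them back out."""
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            raw = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            record = json.loads(raw)
            if record.get("valid"):  # hypothetical filter condition
                s3.put_object(
                    Bucket=dest_bucket,
                    Key=f"processed/{obj['Key'].split('/')[-1]}",
                    Body=json.dumps(record).encode(),
                )

if __name__ == "__main__":
    ingest("raw-data-bucket", "incoming/", "processed-data-bucket")
```

At real scale this step would usually run inside ECS tasks or Lambda functions fanned out over partitions, but the object-level flow is the same.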
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code - CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline or GitHub Actions (a deployment-step sketch follows this list).
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced Containerization - Docker, Kubernetes, ECS.
- Experience with managed services like database cluster, distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
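For the CI/CD and containerization items above, the sketch below shows one possible deployment step: forcing a rolling redeploy of an ECS service with boto3 and waiting until it stabilizes. The region, cluster, and service names are assumptions for illustration only.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

def redeploy(cluster: str, service: str):
    """Force a rolling redeploy of an ECS service, e.g. after pushing a new image tag.

    Cluster and service names are hypothetical placeholders.
    """
    ecs.update_service(cluster=cluster, service=service, forceNewDeployment=True)
    waiter = ecs.get_waiter("services_stable")
    waiter.wait(cluster=cluster, services=[service])
    print(f"{service} on {cluster} is stable again")

if __name__ == "__main__":
    redeploy("app-cluster", "web-service")
```

A pipeline in CodePipeline or GitHub Actions would typically run a step like this after the image build and push stages succeed.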
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work fun too, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
- 3-6 years of relevant work experience in a DevOps role.
- Deep understanding of Amazon Web Services or equivalent cloud platforms.
- Proven record of infra automation and programming skills in any of these languages - Python, Ruby, Perl, JavaScript (a small automation sketch follows this list).
- Implement DevOps Industry best practices and the application of procedures to achieve a continuously deployable system
- Continuously improve and increase the capabilities of the CI/CD pipeline
- Support engineering teams in the implementation of life-cycle infrastructure solutions and documentation operations in order to meet the engineering department's quality standards
- Participate in production outages, handle complex issues, and work towards resolution
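For the infra-automation item above, here is a small, hedged example in Python of the kind of housekeeping script such a role might write: stopping running EC2 instances that are missing an environment tag. The tag convention and region are hypothetical, not a stated policy of this team.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

def stop_untagged_dev_instances():
    """Stop running instances that lack an 'env' tag (hypothetical convention)."""
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    to_stop = []
    for res in reservations:
        for inst in res["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if "env" not in tags:
                to_stop.append(inst["InstanceId"])
    if to_stop:
        ec2.stop_instances(InstanceIds=to_stop)
    return to_stop

if __name__ == "__main__":
    print(stop_untagged_dev_instances())
```

Scripts like this usually run on a schedule and log what they touched, so cost savings and surprises are both traceable.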
Location – Pune
Experience - 1.5 to 3 YR
Payroll: Direct with Client
Salary Range: 3 to 5 Lacs (depending on existing salary)
Role and Responsibility
• Good understanding of and experience with AWS CloudWatch for EC2, other Amazon Web Services resources, and other sources:
Collect and store logs
Monitor logs
Analyze logs
Configure alarms (a minimal alarm sketch follows this list)
Configure dashboards
• Prepare and follow SOPs and documentation.
• Good understanding of AWS in a DevOps context.
• Experience with AWS services ( EC2, ECS, CloudWatch, VPC, Networking )
• Experience with a variety of infrastructure, application, and log monitoring tools such as Prometheus and Grafana
• Familiarity with Docker, Linux, and Linux security
• Knowledge and experience with container-based architectures like Docker
• Experience troubleshooting AWS services.
• Experience in configuring services in AWS like EC2, S3, ECS
• Experience with Linux system administration and engineering skills on Cloud infrastructure
• Knowledge of Load Balancers, Firewalls, and network switching components
• Knowledge of Internet-based technologies - TCP/IP, DNS, HTTP, SMTP & Networking concepts
• Knowledge of security best practices
• Comfortable 24x7 supporting Production environments
• Strong communication skills
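For the CloudWatch alarm item in the list above, here is a minimal sketch of creating a CPU alarm on an EC2 instance with boto3. The threshold, periods, instance id, region, and SNS topic ARN are hypothetical placeholders, not values from this role.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # region assumed

def create_cpu_alarm(instance_id: str):
    """Alarm when average EC2 CPU stays above 80% for two 5-minute periods."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
    )

if __name__ == "__main__":
    create_cpu_alarm("i-0123456789abcdef0")  # hypothetical instance id
```

The same call pattern covers dashboards and other metrics; only the namespace, metric name, and dimensions change.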
Job Description: (8-12 years)
○ Develop best practices for the team and take responsibility for architecture, solutions, and documentation operations in order to meet the engineering department's quality standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Deep understanding of Kernel, Networking and OS fundamentals
○ Strong experience in writing Helm charts (a minimal release-automation sketch follows this list).
○ Deep understanding of K8s.
○ Good knowledge of service meshes.
○ Good understanding of databases
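For the Helm item above, a common way to automate releases from tooling is to drive the Helm CLI from a script. Below is a hedged Python sketch using "helm upgrade --install" so the release is created or updated idempotently; the release name, chart path, namespace, and values file are hypothetical.

```python
import subprocess

def deploy_chart(release: str, chart_path: str, namespace: str, values_file: str):
    """Install or upgrade a Helm release and wait for resources to become ready."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart_path,
            "--namespace", namespace, "--create-namespace",
            "-f", values_file, "--wait",
        ],
        check=True,  # raise if the helm command fails
    )

if __name__ == "__main__":
    deploy_chart("web-app", "./charts/web-app", "staging", "values-staging.yaml")
```

In a CI/CD pipeline the values file and image tag would come from the build stage, and the same command runs unchanged for first installs and upgrades.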
Notice Period: 30 days max