Job Description:
Mandatory Skills:
Should have strong working experience with Cloud technologies like AWS and Azure.
Should have strong working experience with CI/CD tools like Jenkins and Rundeck.
Must have experience with configuration management tools like Ansible.
Must have working knowledge on tools like Terraform.
Must be proficient in scripting languages like shell scripting and Python.
Should have expertise in DevOps practices and have demonstrated the ability to apply that knowledge across diverse projects and teams.
Preferable skills:
Experience with tools like Docker, Kubernetes, Puppet, JIRA, GitLab, and JFrog.
Experience in scripting languages like Groovy.
Experience with GCP
Summary & Responsibilities:
Write build pipelines and IaC (ARM templates, Terraform, or CloudFormation); a minimal sketch follows this list.
Develop Ansible playbooks to install and configure various products.
Implement Jenkins and Rundeck jobs (and pipelines).
Must be a self-starter and able to work well in a fast-paced, dynamic environment.
Work independently and resolve issues with minimal supervision.
Strong desire to learn new technologies and techniques.
Strong communication (written/verbal) skills.
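For illustration, a minimal sketch of one such pipeline step is shown below, as referenced in the list above. It assumes a hypothetical repository layout with an infra/ Terraform module, environment-named workspaces, and an Ansible site.yml playbook; paths and names would need to be adapted to the actual project.

```bash
#!/usr/bin/env bash
# Minimal sketch of a pipeline step: provision with Terraform, then configure with Ansible.
# Paths, workspace names, inventory, and playbook files are hypothetical placeholders.
set -euo pipefail

ENVIRONMENT="${1:-dev}"   # e.g. dev, staging, prod

# 1. Provision infrastructure with Terraform.
cd infra/
terraform init -input=false
terraform workspace select "$ENVIRONMENT" || terraform workspace new "$ENVIRONMENT"
terraform plan -input=false -out=tfplan -var="environment=$ENVIRONMENT"
terraform apply -input=false -auto-approve tfplan
cd ..

# 2. Configure the provisioned hosts with Ansible.
ansible-playbook -i "inventories/$ENVIRONMENT/hosts.ini" site.yml \
  --extra-vars "env=$ENVIRONMENT"
```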
Qualification:
Bachelor's degree in Computer Science or equivalent.
4+ years of experience in DevOps and AWS.
2+ years of experience in Python, Shell scripting and Azure.
Similar jobs
We are looking for an excellent, experienced person in the DevOps field. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You'll also be involved in building tools and establishing processes that empower developers to deploy and release their code seamlessly.
Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
Understanding accessibility and security compliance (depending on the specific project)
User authentication and authorization between multiple systems, servers, and environments
Integration of multiple data sources and databases into one system
Understanding fundamental design principles behind a scalable application
Configuration management tools (Ansible/Chef/Puppet), Cloud Service Providers (AWS/DigitalOcean), Docker+Kubernetes ecosystem is a plus.
Should be able to make key decisions for our infrastructure, networking and security.
Manipulation of shell scripts during migrations and DB connection setup.
Monitor production server health across different parameters (CPU load, physical memory, swap memory) and set up a monitoring tool (e.g., Nagios) to monitor production server health.
Create alerts and configure monitoring of specified metrics to manage cloud infrastructure efficiently.
Set up and manage VPCs and subnets; establish connectivity between different zones; block suspicious IPs/subnets via network ACLs.
Create and manage AMIs, snapshots, and volumes; upgrade/downgrade AWS resources (CPU, memory, EBS).
The candidate will be responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams.
Strong knowledge of configuration management tools like Ansible, Chef, Puppet.
Extensively worked with change-tracking tools like JIRA and log analysis; maintained documentation of production server error log reports.
Experienced in Troubleshooting, Backup, and Recovery
Excellent knowledge of cloud service providers like AWS and DigitalOcean.
Good knowledge of the Docker and Kubernetes ecosystem.
Proficient understanding of code versioning tools, such as Git
Must have experience working in an automated environment.
Good knowledge of Amazon Web Services architecture components such as Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC, and Amazon CloudWatch.
Scheduling jobs using crontab; creating swap memory (see the sketch after this list).
Proficient knowledge of Identity and Access Management (IAM).
Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
The candidate should have good knowledge of GCP.
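To illustrate the crontab/swap and network ACL items above (as referenced in the list), here is a hedged shell sketch; the schedule, swap size, network ACL ID, and CIDR block are placeholder values, not recommendations.

```bash
#!/usr/bin/env bash
# Sketch only: the schedule, swap size, ACL ID, and CIDR block below are placeholders.
set -euo pipefail

# Schedule a nightly job at 02:00 by appending to the current user's crontab.
( crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/nightly-backup.sh" ) | crontab -

# Create and enable a 2 GB swap file.
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Block a suspicious subnet with a DENY rule on a VPC network ACL (AWS CLI).
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 90 \
  --protocol=-1 \
  --rule-action deny \
  --cidr-block 203.0.113.0/24
```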
Educational Qualifications
B.Tech (IT)/M.Tech/MBA (IT)/BCA/MCA or any degree in a relevant field
Experience: 2-6 years
- Experience using AWS (that’s just common sense)
- Experience designing and building web environments on AWS, which includes working with services like EC2, ELB, RDS, and S3
- Experience building and maintaining cloud-native applications
- A solid background in Linux/Unix and Windows server system administration
- Experience using DevOps tools in a cloud environment, such as Ansible, Artifactory, Docker, GitHub, Jenkins, Kubernetes, Maven, and SonarQube
- Experience installing and configuring different application servers such as JBoss, Tomcat, and WebLogic
- Experience using monitoring solutions like CloudWatch, ELK Stack, and Prometheus
- An understanding of writing Infrastructure-as-Code (IaC), using tools like CloudFormation or Terraform
- Knowledge of one or more of the most-used programming languages available for today's cloud computing (e.g., SQL and XML for data, R and Clojure for math, Haskell and Erlang for functional, and Python and Go for procedural programming)
- Experience in troubleshooting distributed systems
- Proficiency in script development and scripting languages
- The ability to be a team player
- The ability and skill to train other people in procedural and technical topics
- Strong communication and collaboration skills
As a special aside, an AWS engineer who works in DevOps should also have experience with:
- The theory, concepts, and real-world application of Continuous Delivery (CD), which requires familiarity with tools like AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline
- An understanding of automation
Excellent understanding of the SDLC, patching, releases, and software development at scale.
Excellent knowledge of Git.
Excellent knowledge of Docker.
Good understanding of enterprise standards and enterprise building principles.
In-depth knowledge of Windows OS.
Knowledge of Linux OS.
Theoretical and practical skills in web environments based on .NET technologies, e.g., IIS, Kestrel, .NET Core, C#.
Strong scripting skills in one or any combination of CMD shell, Bash, PowerShell, Python.
Good understanding of web environment architecture approaches.
Strong knowledge of cloud provider offerings, Azure or AWS.
Good knowledge of configuration management tools: Ansible, Chef, SaltStack, Puppet (good to have).
Good knowledge of cloud infrastructure orchestration tools like Kubernetes or cloud-based orchestration.
Good knowledge of one or any combination of cloud infrastructure provisioning tools like ARM Templates, Terraform, Pulumi.
In-depth knowledge of one or any combination of software delivery orchestration tools like Azure Pipelines, Jenkins Pipelines, Octopus Deploy, etc.
Strong practical knowledge of CI tools, i.e., Azure DevOps, Jenkins.
Excellent knowledge of Continuous Integration and Delivery approaches.
Good knowledge of integrating code quality tools like SonarQube and application or container security tools like Veracode, Checksum, Checkov, Trivy (see the sketch after this list).
In-depth knowledge of Azure DevOps build infrastructure setup, Azure DevOps administration, and access management.
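As a hedged example of the quality and security gate integration mentioned above, the snippet below runs the SonarQube scanner and a Trivy image scan in a CI stage; the project key, server URL, token variable, and image tag are placeholders.

```bash
#!/usr/bin/env bash
# Sketch of CI quality/security gates; project key, server URL, token, and image tag are placeholders.
set -euo pipefail

IMAGE="registry.example.com/myapp:${BUILD_NUMBER:-local}"

# Code quality gate: run the SonarQube scanner against the checked-out sources.
sonar-scanner \
  -Dsonar.projectKey=myapp \
  -Dsonar.sources=. \
  -Dsonar.host.url=https://sonarqube.example.com \
  -Dsonar.login="$SONAR_TOKEN"

# Container security gate: build the image and fail on HIGH/CRITICAL findings.
docker build -t "$IMAGE" .
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"
```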
Kubernetes Admin-Security
at client of people first consultant
Experience: 5-9 years
Job Responsibilities
Primary:
- Responsible for the security roadmap for the EPDM application
- Train the CI/CD team on the required technologies and security adoption
- Lead the upskilling program within the team
- Support the application architect with the right inputs on security processes and tools
- Help set up DevSecOps for EPDM.
- Find security vulnerabilities in the development process and manage sealed secrets
- Support in defining the three-tier architecture.
Secondary:
- Coordination with different IT stakeholders as and when needed
- Suggest and implement further toolchains towards DevOps and GitOps
- Responsible for training peer colleagues
Skills:
Mandatory skill:
- Expert knowledge of container solutions. Must have >3 years of experience working with networking & debugging within Docker and Kubernetes.
- Hands-on experience with Kubernetes workload deployments using Kustomize & Helm (see the sketch after this list).
- Good understanding of Bitnami, HashiCorp, and other secret management tools
- SAST/DAST integration in the CI/CD pipeline - design and implementation. Expert knowledge of source control systems and build & integration tools (e.g., Git, Jenkins & Maven).
- Hands-on experience with designing CI/CD architecture & building pipelines (on-prem, cloud & hybrid infrastructure services).
- Experience with security log management tools (e.g., Splunk, ELK/EFK stack, Azure Monitor or similar).
- Experience with monitoring tools like Prometheus-Grafana & Dynatrace.
- Experience with Infrastructure as a Service / Cloud computing (preferably Azure).
- Expert in writing automation scripts in YAML and Unix/Linux shell.
- Pulumi would be an added advantage.
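A minimal sketch of the Kustomize/Helm deployment flow referenced in the list above; the chart path, overlay directory, release name, and namespace are hypothetical.

```bash
#!/usr/bin/env bash
# Sketch only: chart path, overlay directory, release and namespace names are placeholders.
set -euo pipefail

NAMESPACE="epdm-prod"

# Helm: install or upgrade the application release with environment-specific values.
helm upgrade --install epdm-app ./charts/epdm-app \
  --namespace "$NAMESPACE" --create-namespace \
  --values ./charts/epdm-app/values-prod.yaml

# Kustomize: apply an overlay for supporting manifests (config, network policies, etc.).
kubectl apply -k ./k8s/overlays/prod

# Wait for the rollout to finish before the pipeline proceeds.
kubectl -n "$NAMESPACE" rollout status deployment/epdm-app --timeout=300s
```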
Software Architect/Solution Architect/CTO
at Enquero
- 8+ years of experience in software development
- 4+ years of recent hands-on experience in architecting and building complex solutions that run in SaaS/PaaS environments, especially on AWS cloud leveraging SaaS based Microservices development coupled with Distributed Caching & Message Queuing.
- Should have experience in developing solution architecture and/or evaluating architectural alternatives for private, public, and hybrid cloud models, including IaaS, PaaS, and other cloud services
- Should have excellent knowledge of cloud architecture and implementation features (OS, multi-tenancy, virtualization, orchestration, elastic scalability)
- Should have experience with full-stack development in technology stacks/frameworks like Java, Spring Boot, Python, Redis, SQL, NoSQL, and graph DBs
- Good to have: architected solutions that handle Big Data; should be proficient in data analytics
- Must have expert-level proficiency in design/architectural patterns, data structures, and algorithms
- Must demonstrate knowledge of DevOps tool chains and processes
- Experience in web-based application migration from on-premise to SaaS model is a big plus.
- Experience with integration patterns and associated best practices (e.g., Web Services, REST APIs, Pub/Sub, MOM)
- Excellent knowledge and hands-on experience in Web services related, functionally decomposed architecture, Load Balancing of Web Services and applications, designing multi-tenant systems, Clustering and sharding of data, microservices architecture / design patterns, and throttling and performance management of such services
The expectation is to set up complete automation of the CI/CD pipeline and monitoring, and ensure high availability of the pipeline. The automated deployment environment can be on-prem or cloud (virtual instances, containerized, and serverless). Complete test automation and ensure security of the application as well as the infrastructure.
ROLES & RESPONSIBILITIES
- Configure Jenkins with load distribution between master/slave nodes
- Set up the CI pipeline with Jenkins and cloud (AWS or Azure): code build and static tests (quality & security); a minimal pipeline-stage sketch follows the desired skills list below
- Set up dynamic test configuration with Selenium and other tools
- Set up application and infrastructure scanning for security; post-deployment security plan including PEN testing; usage of a RASP tool
- Configure and ensure HA of the pipeline and monitoring
- Set up composition analysis in the pipeline
- Set up the SCM and artifacts repository and management for branching, merging, and archiving
- Must work in an Agile environment using an ALM tool like Jira
DESIRED SKILLS
Extensive hands-on Continuous Integration and Continuous Delivery technology experience with .NET, Node, Java, and C++ based projects (web, mobile, and standalone). Experience configuring and managing:
- ALM tools like Jira, TFS, etc.
- SCM such as GitHub, GitLab, CodeCommit
- Automation tools such as Terraform, CHEF, or Ansible
- Package repo configuration (Artifactory/Nexus), package managers like NuGet & Chocolatey
- Database configuration (SQL & NoSQL), web/proxy setup (IIS, Nginx, Varnish, Apache).
- Deep knowledge of multiple monitoring tools and how to mine them for advanced data
- Prior work with Helm, Postgres, MySQL, Redis, Elasticsearch, microservices, message queues, and related technologies
- Test automation with Selenium/Cucumber; setting up test simulators
- AWS Certified Architect and/or Developer; Associate considered, Professional preferred
- Proficient in Bash, PowerShell, Groovy, YAML, Python, NodeJS; web concepts such as REST APIs; aware of MVC and SPA application design
- TDD experience and quality control with SonarQube or Checkmarx, TIOBE TICS, and Coverity
- Thorough with Linux (Ubuntu, Debian, CentOS), Docker (Dockerfile/compose/volumes), Kubernetes cluster setup
- Expert in workflow tools: Jenkins (declarative, plugins)/TeamCity and build server configuration
- Experience with AWS CloudFormation/CDK and delivery automation
- Ensure end-to-end deployments succeed and resources come up in an automated fashion
- Good to have: ServiceNow configuration experience for collaboration
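As referenced in the roles above, a hedged sketch of one pipeline stage (build, unit test, and artifact publication) follows; it assumes a Maven project and an Artifactory-style repository, and the repository URL, artifact coordinates, and credential variables are placeholders.

```bash
#!/usr/bin/env bash
# Sketch of a build/test/publish pipeline stage; the repository URL, artifact
# coordinates, and credential environment variables are placeholders.
set -euo pipefail

VERSION="${BUILD_NUMBER:-0.0.1-SNAPSHOT}"
REPO_URL="https://artifactory.example.com/artifactory/libs-release-local"

# Build and run unit tests.
mvn -B clean verify

# Publish the resulting artifact to the binary repository (Artifactory/Nexus-style HTTP PUT).
curl --fail -u "$ARTIFACTORY_USER:$ARTIFACTORY_TOKEN" \
  -T target/app.jar \
  "$REPO_URL/com/example/app/$VERSION/app-$VERSION.jar"
```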
What you will get:
- To be a part of the Core-Team 💪
- A Chunk of ESOPs 🚀
- Creating High Impact by Solving a Problem at Large (No one in the World has a similar product) 💥
- High Growth Work Environment ⚙️
What we are looking for:
- An 'Exceptional Executioner' -> Leader -> Create an Impact & Value 💰
- Ability to take Ownership of your work
- Past experience in leading a team
Department: Engineering
Reports to: Project Manager
Location: Remote | India
Employment Type: Full-time
Start Date: ASAP
Who We are
Fabric is the new commerce infrastructure for the Internet. Our mission is to accelerate the GMV of the Internet by providing a platform and ecosystem to fundamentally change the way commerce happens in a multi-channel world.
We're building a future where Direct-to-Consumer Brands, Retailers, and B2B Businesses (wholesalers, manufacturers, and distributors) have the commerce capabilities that today are only afforded by Marketplace organizations with billions of dollars in R&D. We’re building a future where the customer experience of discovery, shopping, or replenishment is individualized, delightful, and seamless in all channels. We’re building a future where merchandising, marketing, and commerce operations teams have intelligent, powerful, and practical tools to best serve their customers and grow every channel of commerce. We’re building a future where developers have a platform that is highly secure, scalable, the most adaptable, and simplest to build upon.
We are a team of passionate people who love what we do. Join us to build the new commerce fabric for the internet.
Job Description
The DevOps Engineer partners with Product, Engineering and Design teams to deliver new features and enhancements for YDV's new eCommerce platform. This position focuses on providing eCommerce and related technology expertise for the design, development, and support of online, customer-facing eCommerce business solutions.
The successful candidate will be a strong, hands-on technologist who is comfortable juggling multiple priorities in a fast-paced environment. You will work with other engineers, managers, Product Management, QA, and Operations teams to develop innovative solutions that meet market needs with respect to functionality, performance, reliability, realistic implementation schedules, and adherence to development goals and principles.
Primary Responsibilities:
- Work on our client's e-commerce solution. Our eCommerce solution runs on AWS with a serverless architecture and a ReactJS front-end. We use several kinds of in-memory and persistent data storage.
- Manage deployment of code components through a Continuous Integration and Continuous Deployment pipeline.
- Lead a small team dedicated to management and projects, supporting a high-performing, cloud infrastructure
- Work with a small but experienced tech team, providing an unparalleled continuous learning environment with growth potential
- Able to move the needle / contribute in a significant manner
Qualifications
- 4+ years experience with CI/CD using git, Webhooks, Jenkins/CircleCI or similar
- 1+ years experience of working with cloud-native infrastructure on major cloud players like AWS, Heroku, MS Azure, GCP, with hands-on experience of managing services like:
- Function As A Service, like AWS Lambda
- API Gateways
- Cloud Servers, Like EC2
- Container orchestration using AWS ECS/Kubernetes or similar, serverless/clusterless container offerings like AWS Fargate on ECS
- Network and Security: VPC, Security Groups, Route53, CloudFront
- IaC: AWS CloudFormation or Terraform (see the sketch after this list)
- Infrastructure monitoring and log management with ELK, Datadog or similar
- Proficiency in at least one scripting language
- Have worked closely with the QA Automation engineering team for automatic segmented deployment.
- Know and understand how to configure high-availability, high-performing applications
- Server and service monitoring with Nagios or similar
- Knowledge of firewall, switch and network configuration & debugging (DHCP, DNS, IPv4, IP routing)
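A hedged sketch of the serverless/IaC items above, deploying a small stack with the AWS CLI; the stack name, function name, template file, and code bundle are placeholders.

```bash
#!/usr/bin/env bash
# Sketch only: stack name, function name, template file, and code bundle are placeholders.
set -euo pipefail

STACK="ecommerce-api-dev"
FUNCTION="ecommerce-api-handler"

# Deploy or update the serverless stack (Lambda, API Gateway, roles) from a CloudFormation template.
aws cloudformation deploy \
  --stack-name "$STACK" \
  --template-file template.yml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides Environment=dev

# Ship a new build of the function code without touching the rest of the stack.
zip -r function.zip dist/
aws lambda update-function-code \
  --function-name "$FUNCTION" \
  --zip-file fileb://function.zip
```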
Brownie points for:
- Experience of owning CI/CD pipelines and creating branching strategies including release tagging and versioning.
- Experience deploying JS applications managed by npm/yarn/serverless
- Experience with multiple IaC tools like Terraform, Ansible, Chef, Puppet.
DevOps Engineer
Company Introduction
CometChat harnesses the power of chat by helping thousands of businesses around the world create customized in-app messaging experiences. Our products allow developers to seamlessly add voice, video and text chat to their websites and mobile apps so that their users can communicate with each other, resulting in a unified customer experience, increased engagement and retention, and revenue growth.
In 2019, CometChat was selected into the exclusive Techstars Boulder Accelerator. CometChat (Industry CPaaS: communication-platform-as-a-service) has also been listed among the top 10 best SaaS companies by G2 Crowd. With solid financials, strong organic growth and increasing interest in developer tool-focused companies (from the market and with top technical talent), we’re heading into an exciting period of growth and acceleration. CometChat is backed by seasoned investors such as iSeed Ventures, Range Ventures, Silicon Badia, eonCapital and Matchstick Ventures.
A global business from the start, we have 60+ team members across our Denver and Mumbai offices serving over 50,000 customers around the world. We’ve had an exciting journey so far, and we know this is just the beginning!
CometChat’s Mission
Enable meaningful connections between real people in an increasingly digital world.
CometChat’s Products
CometChat offers a robust suite of cloud-hosted text, voice, and video options that meet businesses where they are, whether they need drag-and-drop plugins that can be ready within 30 minutes or want more advanced features and can invest development resources to launch the experience that will best serve their users.
● Quickly build a reliable & full featured chat experience into any mobile or web app
● Fully customizable SDKs and API designed to help companies ship faster
At every step, CometChat helps customers solve complex infrastructure, performance and security challenges, regardless of the platform. But there is so much more! With over 20 ready to use extensions, customers can build an experience and get the data, analysis and insights they need to drive their business forward.
CometChat’s solutions are perfect for every kind of chat including:
● Social community – Allowing people in online communities to interact without moving the conversation to another platform
● Marketplace – Enabling communications between buyers and sellers
● Events – Bringing thousands of users together to interact without diminishing the quality of the experience
● Telemedicine – Making connections between patients and providers more accessible
● Dating – Keeping people engaged while they connect with one another
● And more!
CometChat is committed to fostering a culture of innovation & collaboration. Our people are our strength so we respect and nurture their individual talent and potential. Join us if you are looking to be a part of a high growth team!
Position Overview & Priorities:
The DevOps Engineer will be responsible for effective provisioning, installation/configuration, operation, and maintenance of systems and software using Infrastructure as Code. This can include provisioning cloud instances, streamlining deployments, configuring virtual instances, and scaling out DB servers.
Primary responsibility would be:
- Oversight of all server environments, from Dev through Production.
- Work on an infrastructure that is 100% on AWS.
- Work on CI/CD tooling which is used to build and deploy code to our cloud.
- Assist with day-to-day issue management.
- Work on internal tooling which simplifies workflows.
- Research, design and implement solutions for fault tolerance, monitoring, performance enhancement, capacity optimization, and configuration management of systems and applications.
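For example, basic monitoring and alerting for a cloud instance could be wired up roughly as below; this is a sketch, and the instance ID, SNS topic ARN, and threshold are placeholder values.

```bash
#!/usr/bin/env bash
# Sketch only: instance ID, SNS topic ARN, and threshold values are placeholders.
set -euo pipefail

# Alarm when average CPU on an EC2 instance stays above 80% for two consecutive 5-minute periods.
aws cloudwatch put-metric-alarm \
  --alarm-name "high-cpu-app-server" \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```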
Work Location:
We operate on a Hybrid model – you choose where you work from! Remotely or from our offices. Currently, our talent is spread across 14 different cities globally.
Prioritized Experiences and Capabilities:
- 2-4 years of experience working as a DevOps Engineer/currently practicing DevOps methodology
- Experience in AWS Infrastructure
- Hands-on experience with Infrastructure as Code (Cloud Formation / Terraform, Puppet / Chef / Ansible)
- Strong background in Linux/Unix Administration
- DevOps automation with CI/CD, a pipeline that enforces proper versioning and branching practices
- Experience in Docker and Kubernetes.
- Proficient in Java, Node or Python
- Experience with NewRelic, Splunk, SignalFx, DataDog etc.
- Monitoring and alerting experience
- Full stack development experience
- Hands-on with building and deploying microservices in the cloud (AWS/Azure)
- Experience with Terraform for Infrastructure as Code
- Should have experience troubleshooting live production systems using monitoring/log analytics tools
- Should have experience leading a team (2 or more engineers)
- Experienced using Jenkins or similar deployment pipeline tools
- Understanding of distributed architectures
DevOps Engineer
at Product-centric market leader in building loyalty solutions.
2. Has done infrastructure coding using CloudFormation/Terraform and configuration management, and understands it very clearly
3. Deep understanding of microservice design; aware of centralized caching (Redis) and centralized configuration (Consul/ZooKeeper) (see the sketch at the end of this list)
4. Hands-on experience working with containers and their orchestration using Kubernetes
5. Hands-on experience with Linux and Windows operating systems
6. Worked on NoSQL databases like Cassandra, Aerospike, Mongo, or Couchbase; central logging, monitoring, and caching using stacks like ELK (Elastic) on the cloud, Prometheus, etc.
7. Has good knowledge of Network Security, Security Architecture and Secured SDLC practices
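To illustrate the centralized configuration and caching items above (referenced in item 3), here is a small hedged example using the Consul and Redis CLIs; the key path and host names are placeholders.

```bash
#!/usr/bin/env bash
# Sketch only: key paths and host names are placeholders.
set -euo pipefail

# Centralized configuration: store and read a config value in Consul's KV store.
consul kv put loyalty/config/cache_host cache.internal.example.com
consul kv get loyalty/config/cache_host

# Centralized caching: confirm the Redis cache endpoint is reachable.
redis-cli -h cache.internal.example.com -p 6379 ping
```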