Job Description:
- Hands-on experience with Ansible and Terraform.
- A scripting language such as Python, Bash, or PowerShell is required, along with a willingness to learn and master others.
- Troubleshooting and resolving automation, build, and CI/CD-related issues in cloud environments such as AWS or Azure.
- Experience with Kubernetes is mandatory.
- Develop and maintain tooling for test and production environments.
- Assist team members in developing and maintaining tooling for integration testing, performance testing, security testing, and source control systems (including CI systems such as Azure DevOps and TeamCity, and orchestration tools such as Octopus).
- Comfortable working in Linux environments.
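The scripting requirement above is the kind of skill best shown with a small standalone tool. As an illustration (not part of the posting), here is a minimal Python sketch of a service reachability check of the sort such roles automate; the host and port list are placeholders:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical services to probe; replace with real hosts/ports.
    for name, port in [("ssh", 22), ("http", 80), ("https", 443)]:
        status = "open" if port_open("localhost", port, timeout=0.5) else "closed"
        print(f"{name:>5} ({port}): {status}")
```

A real troubleshooting script would add retries and logging, but the pattern — probe, classify, report — is the core of most health-check automation.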
GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell); Python is a must.
- Collaborate with teams across the company (i.e., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Experience working on Linux-based infrastructure.
- Excellent problem-solving and troubleshooting skills.
Role: Principal DevOps Engineer
About the Client
The client is a product-based company building an AI- and ML-powered platform for transportation and logistics, with a presence in the global market.
Responsibilities and Requirements
• Experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
• Knowledge in Linux/Unix Administration and Python/Shell Scripting
• Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
• Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools such as Zabbix, CloudWatch, and Nagios
• Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
• Experience in enterprise application development, maintenance and operations
• Knowledge of best practices and IT operations in an always-up, always-available service
• Excellent written and oral communication, judgment, and decision-making skills
Requirements
Core skills:
● Strong background in Linux/Unix administration and troubleshooting
● Experience with AWS (ideally including some of the following: VPC, Lambda, EC2, ElastiCache, Route53, SNS, CloudWatch, CloudFront, Redshift, OpenSearch, ELK, etc.)
● Experience with infra automation and orchestration tools, including Terraform, Packer, Helm, and Ansible
● Hands-on experience with container technologies such as Docker and Kubernetes/EKS, and with GitLab and Jenkins as pipelines
● Experience in one or more of Groovy, Perl, Python, and Go, or scripting experience in Shell
● Good understanding of Continuous Integration (CI) and Continuous Deployment (CD) pipelines using tools such as Jenkins, FlexCD, ArgoCD, and Spinnaker
● Working knowledge of key-value stores and database technologies (SQL and NoSQL), e.g., MongoDB and MySQL
● Experience with application monitoring tools such as Prometheus and Grafana, and with APM tools such as New Relic, Datadog, and Pinpoint
● Good exposure to middleware components such as ELK, Redis, and Kafka, and to IoT-based systems involving Akamai, Apache/Nginx, New Relic, Grafana, Prometheus, etc.
Good to have:
● Prior experience with logistics, payment, and IoT-based applications
● Experience with unmanaged MongoDB clusters, automation and operations, and analytics
● Experience writing backup and disaster-recovery procedures
Core Experience
● 3-5 years of hands-on DevOps experience
● 2+ years of hands-on Kubernetes experience
● 3+ years of cloud platform experience, with special focus on Lambda, Route53, SNS, CloudFront, CloudWatch, Elastic Beanstalk, RDS, OpenSearch, EC2, and security tools
● 2+ years of scripting experience in Python/Go and shell
● 2+ years of familiarity with CI/CD, Git, IaC, and monitoring and logging tools
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments, troubleshooting integration issues across different systems, and building new features for the next generation of cloud recovery and managed services.
· You will directly guide the technical strategy for our clients and build out a new capability within the company for DevOps to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configuration required for each production server, and propose scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices, and for the packaging and deployment of applications.
To be the right fit, you'll need:
· Expertise in cloud services such as AWS.
· Experience with Terraform scripting.
· Experience with container technologies such as Docker and orchestration tools such as Kubernetes.
· Good knowledge of CI/CD tools such as Jenkins and Bamboo.
· Experience with version control systems such as Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible).
Internshala is a dot com business with the heart of dot org.
We are a technology company on a mission to equip students with relevant skills & practical exposure through internships, fresher jobs, and online trainings. Imagine a world full of freedom and possibilities. A world where you can discover your passion and turn it into your career. A world where your practical skills matter more than your university degree. A world where you do not have to wait till 21 to taste your first work experience (and get a rude shock that it is nothing like you had imagined it to be). A world where you graduate fully assured, fully confident, and fully prepared to stake a claim on your place in the world.
At Internshala, we are making this dream a reality!
👩🏻💻 Your responsibilities would include-
- Building and maintaining operational tools for monitoring and analysis of AWS infrastructure and systems
- Actively monitoring the health and performance of all systems and performing benchmarking and tuning of system applications and operating systems
- Setting up container orchestration using Kubernetes or other orchestration system for a monolithic application
- Continually working with development engineers to design the best system architectures and solutions
- Troubleshooting and resolving issues in our development, test, and production environments
- Maintaining reliability of the system and being on-call for mission-critical systems
- Performing infrastructure cost analysis and optimization
- Ensure systems’ compliance with operational risk standards (e.g. network, firewall, OS, logging, monitoring, availability, resiliency)
- Building, mentoring and leading a team of young professionals, if the need arises
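The infrastructure cost analysis responsibility above can be sketched with a small, purely hypothetical example (the data is made up, not Internshala's): aggregate per-service costs from billing rows and rank the top spenders:

```python
from collections import defaultdict

def cost_by_service(rows):
    """Aggregate (service, cost) rows into per-service totals, highest first."""
    totals = defaultdict(float)
    for service, cost in rows:
        totals[service] += cost
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical billing rows, e.g. parsed from an AWS Cost and Usage Report.
rows = [("EC2", 120.5), ("S3", 14.2), ("EC2", 80.0), ("RDS", 60.0)]
for service, total in cost_by_service(rows):
    print(f"{service:<5} ${total:,.2f}")
```

In practice the rows would come from a billing export or the Cost Explorer API, and optimization decisions (rightsizing, reserved capacity) start from exactly this kind of ranked breakdown.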
🍒 You will get-
- A chance to build and lead an awesome team working on one of the best recruitment and online trainings products in the world that impact millions of lives for the better
- Awesome colleagues & a great work environment
- Loads of autonomy and freedom in your work
💯 You fit the bill if-
- You are proficient with Bash, Git, and Git workflows
- You have 3-5 years of experience as a DevOps Engineer or similar software engineering role
- You have excellent attention to detail
- AWS certification preferred but not mandatory
Must Haves: Openshift, Kubernetes
Location: Currently in India (also willing to relocate to UAE)
Immediate joiners with a notice period of two weeks to one month are preferred.
Add-on skills: Terraform, GitOps, Jenkins, ELK
DevOps Engineer, Plum & Empuls
As DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Build Docker images and maintain Kubernetes clusters.
- Develop and maintain the automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes Clusters and patching when necessary.
- Work with cloud security tools to keep applications secure.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
Required Technical and Professional Expertise
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools; well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell or Python, and of Terraform, Ansible, Puppet, or Chef.
- Experience with and a good understanding of at least one cloud platform: AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Strong troubleshooting skills, with a proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Familiarity with the Jira tool is a plus.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, San Francisco, and Dublin. Empuls works with over 100 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take some time to review this opportunity, we will wait a reasonable period of around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager. We will, however, aim to keep the time window for closing this requirement reasonable, and candidates will be kept informed and updated on their feedback and application status.
You will work on:
You will be working on some of our clients' massive-scale infrastructure and DevOps requirements, designing for microservices and large-scale data analytics. You will be working on enterprise-scale problems, but as part of an agile team that delivers like a startup. You will also have the opportunity to join the team that is building and managing a large private cloud.
What you will do (Responsibilities):
- Work on cloud marketplace enablements for some of our clients products
- Write Kubernetes Operators to automate custom PaaS solutions
- Participate in cloud projects to implement new technology solutions and proofs of concept to improve cloud technology offerings.
- Work with developers to deploy to private or public cloud/on-premise services, debug and resolve issues.
- On-call responsibilities to respond to emergency situations and scheduled maintenance.
- Contribute to and maintain documentation for systems, processes, procedures and infrastructure configuration
What you bring (Skills):
- Experience administering and debugging Linux-based systems, with programming skills in scripting, Golang, and Python, among others
- Expertise with Git repositories, specifically on GitHub, GitLab, Bitbucket, and Gerrit
- Comfortable with DevOps for big-data databases such as Teradata, Netezza, Hadoop-based ecosystems, BigQuery, and Redshift, among others
- Comfortable interfacing with SQL and NoSQL databases such as MySQL, Postgres, MongoDB, ElasticSearch, and Redis
Great if you know (Skills):
- Understanding of various build and CI/CD systems: Maven, Gradle, Jenkins, GitLab CI, Spinnaker, or cloud-based build systems
- Exposure to deploying and automating on any public cloud: GCP, Azure, or AWS
- Private cloud experience: VMware or OpenStack
- Big DataOps experience: managing infrastructure and processes for Apache Airflow, Beam, and Hadoop clusters
- Containerized applications: Docker-based image builds and maintenance
- Kubernetes applications: deploying and developing operators, Helm charts, manifests, and other artifacts
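As an illustration of working with Kubernetes manifests programmatically (a sketch, not Cognologix's actual tooling), the snippet below builds a minimal Deployment manifest as a plain dict using only the standard library; real tooling would typically use PyYAML or the official Kubernetes client, and the name and image below are placeholders:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# kubectl accepts JSON as well as YAML, so this output can be piped
# to `kubectl apply -f -`.
print(json.dumps(deployment_manifest("web", "nginx:1.25"), indent=2))
```

Operators and Helm charts generate structures like this at scale; keeping the manifest as data (rather than string templates) makes it easy to validate and test.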
Advantage Cognologix:
- Higher degree of autonomy, startup culture & small teams
- Opportunities to become expert in emerging technologies
- Remote working options for the right maturity level
- Competitive salary & family benefits
- Performance based career advancement
About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.
We are a DevOps-focused organization, helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration, and delivery.
Benefits Working With Us:
- Health & Wellbeing
- Learn & Grow
- Evangelize
- Celebrate Achievements
- Financial Wellbeing
- Medical and accidental cover
- Flexible working hours
- Sports club & much more
- Automate deployments of infrastructure components and repetitive tasks.
- Drive changes strictly via the infrastructure-as-code methodology.
- Promote the use of source control for all changes including application and system-level changes.
- Design & Implement self-recovering systems after failure events.
- Participate in system sizing and capacity planning of various components.
- Create and maintain technical documents such as installation/upgrade MOPs.
- Coordinate & collaborate with internal teams to facilitate installation & upgrades of systems.
- Support 24x7 availability for corporate sites & tools.
- Participate in rotating on-call schedules.
- Actively involved in researching, evaluating & selecting new tools & technologies.
- Cloud computing – AWS, OCI, OpenStack
- Automation/Configuration management tools such as Terraform & Chef
- Atlassian tools administration (JIRA, Confluence, Bamboo, Bitbucket)
- Scripting languages - Ruby, Python, Bash
- Systems administration experience – Linux (Red Hat), Mac, Windows
- SCM systems - Git
- Build tools - Maven, Gradle, Ant, Make
- Networking concepts - TCP/IP, Load balancing, Firewall
- High-Availability, Redundancy & Failover concepts
- SQL scripting & queries - DML, DDL, stored procedures
- Decisiveness and the ability to work under pressure
- Ability to prioritize workload and multi-task
- Excellent written and verbal communication skills
- Database systems – Postgres, Oracle, or other RDBMS
- Mac automation tools - JAMF or other
- Atlassian Datacenter products
- Project management skills
Qualifications
- 3+ years of hands-on experience in the field or related area
- Requires MS or BS in Computer Science or equivalent field
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have a deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.