- GCP Cloud experience mandatory
- CI/CD – Azure DevOps
- IaC tools – Terraform
- Experience with IAM / Access Management within cloud
- Networking / Firewalls
- Kubernetes / Helm / Istio
Similar jobs
Role: DevOps engineer
We are looking for an experienced DevOps engineer who will work closely with our client’s team to establish a DevOps practice.
Your role will include establishing configuration management, automating the infrastructure, implementing continuous integration, and training the team in DevOps best practices to achieve a continuously deployable system.
You will be part of a continually growing team. You will have the chance to be creative and think of new ideas. You might also get the opportunity to work on some open-source projects, ranging from small to large.
Required Skills & Experience
- Degree in IT/Software Engineering or similar, or equivalent practical experience
- 1+ years of experience in DevOps supporting cloud environments, specifically AWS or GCP
- 1+ years of experience with automation tools and refactoring existing infrastructure
- Excellent technical problem-solving skills which you can quickly draw on in unfamiliar situations
- Willingness to roll up your sleeves and get things done in a fast-paced environment
- Ability to work with minimal supervision
- Ability to take instructions and constructive guidance
- Exposure to Agile/DevOps principles and CI/CD tools
- Good written communication skills and demonstrated experience documenting work
- Excellent collaboration skills with a positive and helpful attitude towards your coworkers
- Good capabilities in source control technologies such as Git
- Good capabilities with Python/Bash/PowerShell
- Familiarity with multiple operating systems, particularly Linux
Desired
- AWS certification or similar Cloud certification and working cloud support experience
- Experience with Docker or Kubernetes
- Experience with Terraform
- Experience with CDN tools
- Experience with using cloud security tools
- Experience with multiple public cloud providers (AWS/GCP/Azure)
- Experience with SQL / MongoDB / PostgreSQL / GraphQL
- Experience with content management tools and front-end applications
What you’ll be doing – your role
Key Responsibilities
1. To support the DevOps process for web-based products hosted on cloud infrastructure, specifically:
- To respond to and complete tickets, meet SLAs, and manage reporters' expectations.
- To collaborate with assigned tribe change streams to deliver project/change objectives:
a. Understand requirements, and support services before they go live through activities such as system design consulting, developing software platforms and frameworks, capacity planning, and launch reviews.
b. Build software and systems to manage infrastructure and applications through automation deployment.
c. Scale systems sustainably through mechanisms like automation, and evolve systems by pushing for changes that improve performance, reliability, scalability, security, and velocity.
- Maintain services once they are live by measuring and monitoring availability, latency, and overall system health (a minimal monitoring sketch follows this list).
- To monitor and respond to alerts, issues, and incidents affecting cloud infrastructure (and corporate infrastructure as required).
- Practice sustainable incident response and provide appropriate communications and blameless post-mortems.
2. To drive DevOps process and Cloud infrastructure improvements as part of service and security improvement roadmap, specifically:
- Engage in and improve the whole lifecycle of services—from inception and design, through deployment, operation, and refinement.
- To support training and learning by sharing knowledge with the Tech team and taking responsibility for own professional development.
- Explore and evaluate new technologies and solutions to push our capabilities forward.
- To articulate and escalate risks and issues, provide recommended solutions to problems, and implement them.
- Document procedures, configuration changes, and guidelines.
3. To maintain cloud infrastructure and networking as per Cloud policy, standards, and governance requirements.
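For illustration only (not part of the role requirements), a minimal sketch of the kind of availability/latency probe referenced in the list above, assuming the `requests` package; the endpoint URL, SLO threshold, and polling interval are hypothetical placeholders.

```python
"""Minimal availability/latency probe (illustrative sketch only).

Assumes the `requests` package; the endpoint URL, latency threshold,
and polling interval are hypothetical placeholders.
"""
import time

import requests

ENDPOINT = "https://example.com/healthz"   # hypothetical health endpoint
LATENCY_BUDGET_S = 0.5                     # hypothetical SLO threshold
POLL_INTERVAL_S = 30

def probe(url: str) -> tuple[bool, float]:
    """Return (is_available, latency_seconds) for a single check."""
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=5)
        latency = time.monotonic() - start
        return response.status_code == 200, latency
    except requests.RequestException:
        return False, time.monotonic() - start

if __name__ == "__main__":
    while True:
        ok, latency = probe(ENDPOINT)
        status = "UP" if ok else "DOWN"
        print(f"{status} latency={latency:.3f}s slow={latency > LATENCY_BUDGET_S}")
        time.sleep(POLL_INTERVAL_S)
```

In practice a probe like this would feed an alerting system (e.g. Prometheus Alertmanager or PagerDuty) rather than printing to stdout.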
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this position. Duties, responsibilities, and activities may change at any time without notice.
After-hours support may be required as agreed, e.g. on-call support, releases that must be performed outside business hours due to the potential risk of disruption to the business, and incident work that needs to be resolved as a priority.
Preferred Education & Experience:
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: release orchestration & automation, source code & build management, code quality & security management, behaviour-driven development, test-driven development, continuous integration, continuous delivery, continuous deployment, and operational monitoring & management; extra points if you can demonstrate your knowledge with working examples (a minimal instrumentation sketch follows this list).
• Hands-on, demonstrable working experience with DevOps tools and platforms, viz. Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance (or equivalent demonstrable cloud platform experience).
• Well-versed, with demonstrable working experience, in API management, API gateways, service mesh, identity & access management, and data protection & encryption tools and platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers and lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, networks, and storage networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as a business applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
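For illustration only, a minimal sketch of the operational monitoring instrumentation referenced above (Prometheus/Grafana), assuming the `prometheus_client` package; the metric names and scrape port are hypothetical placeholders.

```python
"""Illustrative Prometheus instrumentation sketch (not part of the job spec).

Assumes the `prometheus_client` package; metric names and the scrape port
are hypothetical placeholders.
"""
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS_TOTAL = Counter("demo_requests_total", "Requests handled by the demo job")
QUEUE_DEPTH = Gauge("demo_queue_depth", "Simulated work-queue depth")

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for a Prometheus scrape
    while True:
        REQUESTS_TOTAL.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(5)
```

A Prometheus server would scrape the exposed /metrics endpoint, and Grafana dashboards or alert rules would then be built on the resulting series.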
Required Experience: 5+ Years
Job Location: Remote/Pune
- Strong DevOps experience of at least 4+ years
- Strong experience in Unix/Linux/Python scripting
- Strong networking knowledge; vSphere networking stack knowledge desired
- Experience with Docker and Kubernetes
- Experience with cloud technologies (AWS/Azure)
- Exposure to continuous integration/delivery tools such as Jenkins or Spinnaker
- Exposure to configuration management systems such as Ansible
- Knowledge of resource monitoring systems
- Ability to scope and estimate
- Strong verbal and written communication skills
- Advanced knowledge of Docker and Kubernetes
- Exposure to Blockchain-as-a-Service (BaaS) platforms such as Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, VMware, etc.
- Capable of provisioning and maintaining local enterprise blockchain platforms for Development and QA (Hyperledger Fabric/BaaS/Corda/Ethereum)
Location: Amravati, Maharashtra 444605, INDIA
We are looking for a Kubernetes Cloud Engineer with experience in the deployment and administration of Hyperledger Fabric blockchain applications. You will work on the Truscholar blockchain-based platform (both Hyperledger Fabric and Indy versions); however, if you combine rich Kubernetes experience with strong DevOps skills but lack Fabric experience, we will still be keen to talk to you.
Responsibilities
● Deploy Hyperledger Fabric (Bevel setup) applications on Kubernetes
● Monitor the Kubernetes system (a minimal monitoring sketch follows this list)
● Implement and improve monitoring and alerting
● Build and maintain highly available blockchain systems on Kubernetes
● Implement an auto-scaling system for our Kubernetes nodes
● Produce detailed designs and develop the SSI & ZKP solution
● Act as a liaison between the Infra, Security, Business & QA teams for end-to-end integration and DevOps pipeline adoption
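For illustration only, a minimal sketch of the Kubernetes monitoring referenced in the list above, assuming the official `kubernetes` Python client and a working kubeconfig; the namespace name is a hypothetical placeholder.

```python
"""Sketch: flag non-running pods in a namespace (illustrative only).

Assumes the official `kubernetes` Python client and a working kubeconfig;
the namespace is a hypothetical placeholder for where Fabric components run.
"""
from kubernetes import client, config

NAMESPACE = "fabric-network"  # hypothetical namespace

def unhealthy_pods(namespace: str) -> list[str]:
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(namespace)
    return [p.metadata.name for p in pods.items if p.status.phase != "Running"]

if __name__ == "__main__":
    bad = unhealthy_pods(NAMESPACE)
    if bad:
        print("Pods needing attention:", ", ".join(bad))
    else:
        print(f"All pods in {NAMESPACE} are Running")
```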
Technical Skills
● Experience with AWS EKS Kubernetes Service, Container Instances, Container Registry and microservices (or similar experience on AZURE)
● Hands on with automation tools like Terraform, Ansible
● Ability to deploy Hyperledger Fabric in Kubernetes environment is highly desirable
● Hyperledger Fabric/INDY (or other blockchain) development, architecture, integration, application experience
● Distributed consensus systems such as Raft
● Continuous Integration and automation skills including GitLab Actions
● Microservices architectures, Cloud-Native architectures, Event-driven architectures, APIs, Domain-Driven Design
● Being a Certified Hyperledger Fabric Administrator would be an added advantage
Skill Set
● Understanding of blockchain networks
● Docker Products
● Amazon Web Services (AWS)
● Go (Programming Language)
● Hyperledger Fabric/INDY
● Gitlab
● Kubernetes
● Smart Contracts
Who We are:
Truscholar is a state-of-the-art digital credential issuance and verification platform running as blockchain infrastructure, as an instance of the Hyperledger Indy framework. Our solution helps universities, institutes, edtech and e-learning platforms, professional training academies, corporate employee training and certification programmes, and event management organisations running exhibitions, trade fairs, sporting events, seminars, and webinars to issue credentials to their learners, employees, or participants. The digital certificates, badges, or transcripts generated are immutable, shareable, and verifiable, thereby building an individual's Knowledge Passport. Our platform has been architected to function as a single self-sovereign identity wallet for the next decade, keeping personal data privacy guidelines in mind.
Why Now?
The Startup venture, which was conceived as an idea while two founders were pursuing a Blockchain Technology Management Course, has received tremendous applause and appreciation from mentors and investors, and has been able to roll out the product within a year and comfortably complete the product market fit stage. Truscholar has entered a growth stage, and is searching for young, creative, and bright individuals to join the team and make Truscholar a preferred global product within the next 36 months.
Our Work Culture:
With our innovation, open communication, agile thinking, and will to achieve, we are a very passionate group of individuals driving the company's growth. In return for that commitment to the company's growth story, we believe in offering a work environment with clear metrics that supports each employee's individual progress and networking within the fraternity.
Our Vision:
To become the "Intel Inside" of the education world by powering all academic credentials across the globe and assisting students in charting their digital academic passports.
Advantage Location Amravati, Maharashtra, INDIA
Amid businesses in India realising the advantages of the work-from-home (WFH) concept in the backdrop of the coronavirus pandemic, there has been a major shift of the workforce towards tier-2 cities.
Amravati, also called Ambanagri, is a city of immense cultural and religious importance and a beautiful tier-2 city of Maharashtra; it is also called the cultural capital of the Vidarbha region. The cost of living is lower, the work-life balance is better, the air is more breathable, there are fewer traffic bottlenecks, and housing remains affordable compared to the congested metro cities of India. We firmly believe that tier-2 cities are the future talent hubs and job-creation centres. Our conviction has been borne out by the fact that tier-2 cities have made great strides in salary levels thanks to significant investments in excellent physical and social infrastructure.
A.P.T Portfolio is a high-frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high-performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement best security practices and a data isolation policy between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize the latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backup of all crucial data/binaries/code, etc., in a secure manner at the frequency warranted by the use case. Ensure that recovery from backup is tested and seamless (a minimal backup sketch follows this list).
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third-party tools to put in place a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - this shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, build tools like Jenkins/Bamboo, etc.
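For illustration only, a minimal sketch of the kind of backup automation referenced in the list above, assuming boto3 with valid AWS credentials; the bucket name and source directory are hypothetical placeholders, and a production backup would also need encryption, retention, and restore testing.

```python
"""Sketch: archive a directory and push it to S3 (illustrative only).

Assumes boto3 with valid AWS credentials; the bucket name and source
directory are hypothetical placeholders.
"""
import datetime
import tarfile

import boto3

SOURCE_DIR = "/srv/critical-data"     # hypothetical path to back up
BUCKET = "example-backup-bucket"      # hypothetical S3 bucket

def backup_to_s3(source_dir: str, bucket: str) -> str:
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = f"/tmp/backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname="data")
    key = f"backups/{stamp}.tar.gz"
    boto3.client("s3").upload_file(archive, bucket, key)
    return key

if __name__ == "__main__":
    print("Uploaded", backup_to_s3(SOURCE_DIR, BUCKET))
```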
Qualifications
- Bachelor's or Master's level degree, preferably in CSE/IT
- 10+ years of relevant experience in a sysadmin function
- Must have strong knowledge of IT infrastructure, Linux, networking, and grid computing
- Must have a strong grasp of automation & data management tools
- Proficient in scripting languages, including Python
Desirables
- Professional attitude; a cooperative and mature approach to work; must be focused, structured, and well considered, with strong troubleshooting skills
- Exhibit a high level of individual initiative and ownership, and effectively collaborate with other team members
APT Portfolio is an equal opportunity employer
- Design cloud infrastructure that is secure, scalable, and highly available on AWS, Azure and GCP
- Work collaboratively with software engineering to define infrastructure and deployment requirements
- Provision, configure and maintain AWS, Azure, GCP cloud infrastructure defined as code
- Ensure configuration and compliance with configuration management tools
- Administer and troubleshoot Linux based systems
- Troubleshoot problems across a wide array of services and functional areas
- Build and maintain operational tools for deployment, monitoring, and analysis of AWS, Azure Infrastructure and systems
- Perform infrastructure cost analysis and optimization (a minimal cost-reporting sketch follows this list)
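For illustration only, a minimal sketch of the cost-analysis task referenced above, assuming boto3 and that Cost Explorer is enabled on the account; the date range is a hypothetical example.

```python
"""Sketch: monthly AWS cost per service via Cost Explorer (illustrative only).

Assumes boto3, credentials with ce:GetCostAndUsage permission, and that
Cost Explorer is enabled; the date range is a hypothetical example.
"""
import boto3

def cost_by_service(start: str, end: str) -> dict[str, float]:
    ce = boto3.client("ce")
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )
    totals: dict[str, float] = {}
    for period in resp["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            totals[service] = totals.get(service, 0.0) + amount
    return totals

if __name__ == "__main__":
    report = cost_by_service("2024-01-01", "2024-02-01")
    for service, amount in sorted(report.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{service}: ${amount:,.2f}")
```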
As a DevOps Engineer, you are responsible for setting up and maintaining the Git repository and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters (a minimal image-build sketch follows this list).
- Develop and maintain automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters, patching them when necessary.
- Work with cloud security tools to keep applications secure.
- Participate in software development lifecycle, specifically infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
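For illustration only, a minimal sketch of the Docker image work referenced in the list above, assuming the Docker SDK for Python (`docker` package) and a reachable Docker daemon; the build context path and image tag are hypothetical placeholders.

```python
"""Sketch: build and tag a Docker image with the Docker SDK (illustrative only).

Assumes the `docker` Python package and a reachable Docker daemon; the
build context path and image tag are hypothetical placeholders.
"""
import docker

CONTEXT_PATH = "./app"                        # hypothetical context containing a Dockerfile
IMAGE_TAG = "registry.example.com/app:dev"    # hypothetical image tag

def build_image(path: str, tag: str) -> str:
    client = docker.from_env()
    image, build_logs = client.images.build(path=path, tag=tag)
    for chunk in build_logs:                  # stream the build output
        if "stream" in chunk:
            print(chunk["stream"], end="")
    return image.id

if __name__ == "__main__":
    print("Built image:", build_image(CONTEXT_PATH, IMAGE_TAG))
```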
Required Technical and Professional Expertise:
- Minimum 4-6 years of experience in IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools, and very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using Shell, Python, Terraform, and Ansible, Puppet, or Chef.
- Experience with, and a good understanding of, at least one cloud platform such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient troubleshooting skills with a proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Experience with, and familiarity with, the Jira tool is a plus.
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. However, we assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
Objectives of this Role
Improve reliability, quality, and time-to-market of our suite of software solutions
- Run the production environment by monitoring availability and taking a holistic view of system health
- Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating to continually improve
- Provide primary operational support and engineering for multiple large distributed software applications
- Participate in system design consulting, platform management, and capacity planning
- Languages: Python, Java, Ruby DSL, Bash
- Databases: MySQL, Cassandra, Elasticsearch
- Deployment: AWS CloudFormation (a minimal deployment sketch follows this list)
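For illustration only, a minimal sketch of a CloudFormation-based deployment as referenced above, assuming boto3 with suitable permissions; the stack name and template file are hypothetical placeholders.

```python
"""Sketch: create a CloudFormation stack and wait for it (illustrative only).

Assumes boto3 with credentials allowing cloudformation:CreateStack; the
stack name and template file are hypothetical placeholders.
"""
import boto3

STACK_NAME = "demo-service"       # hypothetical stack name
TEMPLATE_FILE = "template.yaml"   # hypothetical CloudFormation template

def create_stack(name: str, template_path: str) -> None:
    cfn = boto3.client("cloudformation")
    with open(template_path) as fh:
        body = fh.read()
    cfn.create_stack(
        StackName=name,
        TemplateBody=body,
        Capabilities=["CAPABILITY_NAMED_IAM"],  # needed only if the template creates IAM resources
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=name)
    print("Stack", name, "created")

if __name__ == "__main__":
    create_stack(STACK_NAME, TEMPLATE_FILE)
```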
Essential Criteria:
- 8 or more years administering production Linux systems in a 24x7 environment
- 3 or more years’ experience in a DevOps/ SRE role as an engineer or technical lead
- At least 1 year of team leadership experience
- Significant knowledge of Amazon Web Services (CLI/APIs, EC2, EBS, S3, VPCs, IAM, AWS Lambda)
- Experience deploying services into containerized orchestration environments such as Kubernetes
- Experience with infrastructure automation tools like CloudFormation, Terraform, etc.
- Experience with at least one of Python, Bash, Ruby, or equivalent
- Experience creating and managing CI/CD pipelines with tools like Jenkins or Spinnaker
- Familiar with version control using Git
- Solid understanding of common security principles
Nice to Have:
- Preference for hands-on experience with serverless architecture, Kubernetes, and Docker
- Strong experience with open-source configuration management tools
- Managing distributed systems spanning multiple AWS regions / data-centers
- Experience with bootstrapping solutions
- Open source contributor
- We’re committed to client success: There are over 6,200 brand and retail websites in the Bazaarvoice network. Our clients represent some of the world’s leading companies across a wide range of industries including retail, apparel, automotive, consumer electronics and travel.
- We’re leaders in consumer-generated content: Each month, more than one billion consumers view and share authentic consumer-generated content, such as ratings and reviews, curated photos, social posts, and videos, about products in our network. Thousands upon thousands of reviews are added to the Bazaarvoice network every day.
- Our network delivers: Network analytics provide insights that help marketers and advertisers provide more engaging experiences that drive brand awareness, consideration, sales, and loyalty.
- We’re a great place to work: We pride ourselves on our unique culture. Join a company that values passion, innovation, authenticity, generosity, respect, teamwork, and performance.
We are looking for an experienced DevOps engineer who will help our team establish a DevOps practice. You will work closely with the technical lead (and/or CTO) to identify and establish DevOps practices in the company.
You will help us build scalable, efficient cloud infrastructure. You’ll implement monitoring for automated system health checks. Lastly, you’ll build our CI pipeline, and train and guide the team in DevOps practices.
Responsibilities
- Implement and own the CI.
- Manage CD tooling.
- Implement and maintain monitoring and alerting.
- Build and maintain highly available production systems.
Qualification: B.Tech in IT