- Responsible for building, managing, and maintaining deployment pipelines and developing self-service tooling for managing Git, Linux, Kubernetes, Docker, CI/CD pipelines, etc., in cloud infrastructure
- Responsible for building and managing the DevOps agile toolchain.
- Responsible for working as an integrator between developer teams and various cloud infrastructures.
- Responsibilities include helping the development team with best practices, provisioning, monitoring, troubleshooting, optimizing and tuning, and automating and improving deployment and release processes.
- Responsible for maintaining application security with periodic tracking and upgrading of package dependencies in coordination with the respective developer teams.
- Responsible for packaging and containerization of deploy units and strategizing this in coordination with the developer team
- Setting up tools and required infrastructure; defining and setting development, test, release, update, and support processes for DevOps operations.
- Responsible for documentation of the process.
- Responsible for leading projects with end-to-end execution
Qualification: Bachelor of Engineering / MCA, preferably with AWS Cloud certification
Ideal Candidate -
- has 2-4 years of experience, with AWS certification and DevOps experience.
- is less than 30 years of age, self-motivated, and enthusiastic.
- is interested in building a sustainable DevOps platform with maximum automation.
- is interested in learning and being challenged on a day-to-day basis.
- can take ownership of tasks and is willing to take the necessary action to get them done.
- can solve complex problems.
- is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.
Responsibilities
● Work with application development teams to identify and understand their operational pain points.
● Document these challenges and define goals to be achieved by the infrastructure team.
● Prototype and evaluate multiple solutions, often by experimenting with various vendors and tools available, to achieve the goals undertaken.
● Rollout tools and processes with heavy focus on automation.
● Evangelize and help onboard application development teams on the platforms provided by the infrastructure team.
● Co-own the responsibility with application development teams to ensure the reliability of services.
● Design and implement solutions around observability to ensure ease of maintenance and quick debugging of services
● Establish and implement administrative and operational best practices in the application development teams.
● Find avenues to reduce infrastructure costs and drive optimization in all services.
Qualifications
● 5+ years of experience as a DevOps / Infrastructure engineer with cloud platforms (preferably AWS)
● Experience with Git, CI/CD, Docker, etc.
● Experience in working with infrastructure as code (Terraform, etc).
● Strong Linux Shell scripting experience
● Experience with a programming language such as Python, Java, Kotlin, etc.
● Improve CI/CD tooling using GitLab.
● Implement and own the CI pipeline; manage CD tooling.
● Implement and maintain monitoring and alerting.
● Build and maintain highly available production systems.
● Lead and guide the team in identifying and implementing new technologies.
Skills
● Experience with configuration management and orchestration tools such as Kubernetes, Ansible, or similar.
● Managing production infrastructure with Terraform, CloudFormation, etc.
● Strong Linux, system administration background.
● Ability to present and communicate the architecture in a visual form.
● Strong knowledge of AWS, Azure, GCP.
● Bachelor's degree or 5+ years of professional experience.
● 2+ years of hands-on experience programming in languages such as Python, Ruby, Go, Swift, Java, .Net, C++, or a similar object-oriented language.
● Experience with automating cloud native technologies, deploying applications, and provisioning infrastructure.
● Hands-on experience with Infrastructure as Code, using CloudFormation, Terraform, or other tools.
● Experience developing cloud native CI/CD workflows and tools, such as Jenkins, Bamboo, TeamCity, CodeDeploy (AWS), and/or GitLab.
● Hands-on experience with microservices and distributed application architecture, such as containers, Kubernetes, and/or serverless technology.
● Hands-on experience in building/managing data pipelines, reporting & analytics.
● Experience with the full software development lifecycle and delivery using Agile practices.
● Preferable (bonus points if you know these):
○ AWS cloud management
○ Kafka
○ Databricks
○ Gitlab CI/CD hooks
○ Python notebooks
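The data-pipeline experience listed above typically starts with small, testable transform steps. As a rough, hypothetical illustration (none of these names or fields come from the posting), such a step in Python might look like:

```python
# Minimal sketch of a pipeline transform step; function and field
# names are hypothetical, not taken from any stack named above.
from typing import Iterable


def clean_records(rows: Iterable[dict]) -> list[dict]:
    """Drop rows missing an 'id' and normalize the 'amount' field."""
    cleaned = []
    for row in rows:
        if not row.get("id"):
            continue  # skip records without a primary key
        cleaned.append({"id": row["id"], "amount": float(row.get("amount", 0))})
    return cleaned


raw = [{"id": 1, "amount": "3.5"}, {"amount": "9"}, {"id": 2}]
result = clean_records(raw)
```

Keeping each step a pure function like this makes it straightforward to unit-test before wiring it into a scheduled pipeline.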
As a MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
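The pytest-style automated testing mentioned above amounts to plain functions with assertions that the test runner discovers. A tiny hypothetical example (the helper and its behavior are illustrative, not from the posting) shows the shape:

```python
# Illustrative pytest-style tests; in a real project, pytest would
# discover and run the test_* functions automatically.
def slugify(title: str) -> str:
    """Hypothetical helper: lowercase a title and join words with hyphens."""
    return "-".join(title.lower().split())


def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"


def test_slugify_collapses_spaces():
    assert slugify("  MLOps   Engineer ") == "mlops-engineer"


# Call the checks directly so the example is self-contained.
test_slugify_basic()
test_slugify_collapses_spaces()
```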
- Essential Skills:
- Docker
- Jenkins
- Python dependency management using conda and pip
- Base Linux System Commands, Scripting
- Docker Container Build & Testing
- Common knowledge of minimizing container size and layers
- Inspecting containers for unused / underutilized systems
- Multiple Linux OS support for virtual system
- Has experience as a user of Jupyter / JupyterLab to test and fix usability issues in workbenches
- Templating out various configurations for different use cases (we use Python Jinja2 but are open to other languages / libraries)
- Jenkins Pipeline
- Understanding of the GitHub API to trigger builds, tags, and releases
- Artifactory Experience
- Nice to have: Kubernetes, ArgoCD, other deployment automation tool sets (DevOps)
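The configuration-templating work above (the team mentions Python Jinja2 but is open to alternatives) boils down to rendering one template per use case. This sketch uses the standard library's `string.Template` to keep the example dependency-free; Jinja2's `Template(...).render(...)` follows the same idea with a richer syntax. All values shown are made up:

```python
# Dependency-free sketch of config templating; field names and the
# example image/replica values are illustrative only.
from string import Template

CONFIG_TEMPLATE = Template(
    "image: $image\n"
    "replicas: $replicas\n"
)


def render_config(image: str, replicas: int) -> str:
    """Fill the template for one deployment use case."""
    return CONFIG_TEMPLATE.substitute(image=image, replicas=replicas)


cfg = render_config("registry.example/app:1.0", 3)
```

Generating many per-environment configs from one template like this keeps the variants consistent and reviewable.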
Location: Amravati, Maharashtra 444605, INDIA
We are looking for a Kubernetes Cloud Engineer with experience in the deployment and administration of Hyperledger Fabric blockchain applications. While you will work on the Truscholar blockchain-based platform (both the Hyperledger Fabric and INDY versions), if you combine rich Kubernetes experience with strong DevOps skills, we will still be keen on talking to you.
Responsibilities
● Deploy Hyperledger Fabric (BEVEL SETUP) applications on Kubernetes
● Monitor the Kubernetes system
● Implement and improve monitoring and alerting
● Build and maintain highly available blockchain systems on Kubernetes
● Implement an auto-scaling system for our Kubernetes nodes
● Detailed design & development of SSI & ZKP solutions
● Act as a liaison between the Infra, Security, Business & QA teams for end-to-end integration and DevOps pipeline adoption.
Technical Skills
● Experience with AWS EKS Kubernetes Service, Container Instances, Container Registry and microservices (or similar experience on AZURE)
● Hands on with automation tools like Terraform, Ansible
● Ability to deploy Hyperledger Fabric in Kubernetes environment is highly desirable
● Hyperledger Fabric/INDY (or other blockchain) development, architecture, integration, application experience
● Distributed consensus systems such as Raft
● Continuous Integration and automation skills, including GitLab CI
● Microservices architectures, Cloud-Native architectures, Event-driven architectures, APIs, Domain Driven Design
● Being a Certified Hyperledger Fabric Administrator would be an added advantage
Skill Set
● Understanding of Blockchain Networks
● Docker Products
● Amazon Web Services (AWS)
● Go (Programming Language)
● Hyperledger Fabric/INDY
● Gitlab
● Kubernetes
● Smart Contracts
Who We are:
Truscholar is a state-of-the-art Digital Credential Issuance and Verification Platform running on blockchain infrastructure as an instance of the Hyperledger Indy framework. Our solution helps universities, institutes, edtech and e-learning platforms, professional training academies, corporate employee training and certification programs, and event management organisations (managing exhibitions, trade fairs, sporting events, seminars, and webinars) issue credentials to their learners, employees, or participants. The digital certificates, badges, or transcripts generated are immutable, shareable, and verifiable, thereby building an individual's Knowledge Passport. Our platform has been architected to function as a single Self-Sovereign Identity Wallet for the next decade, keeping personal data privacy guidelines in mind.
Why Now?
The Startup venture, which was conceived as an idea while two founders were pursuing a Blockchain Technology Management Course, has received tremendous applause and appreciation from mentors and investors, and has been able to roll out the product within a year and comfortably complete the product market fit stage. Truscholar has entered a growth stage, and is searching for young, creative, and bright individuals to join the team and make Truscholar a preferred global product within the next 36 months.
Our Work Culture:
With our innovation, open communication, agile thought process, and will to achieve, we are a very passionate group of individuals driving the company's growth. As a result of their commitment to the company's development narrative, we believe in offering a work environment with clear metrics to support workers' individual progress and networking within the fraternity.
Our Vision:
To become the "Intel Inside" of the education world by powering all academic credentials across the globe and assisting students in charting their digital academic passports.
Location Advantage: Amravati, Maharashtra, INDIA
Amid businesses in India realising the advantages of the work-from-home (WFH) concept in the backdrop of the Coronavirus pandemic, there has been a major shift of the workforce towards tier-2 cities.
Amravati, also called Ambanagri, is a city of immense cultural and religious importance and a beautiful Tier 2 city of Maharashtra. It is also called the cultural capital of the Vidarbha region. The cost of living is lower, the work-life balance is better, the air is more breathable, there are fewer traffic bottlenecks, and housing remains affordable compared to India's congested metro cities. We firmly believe that tier-2 cities are the future talent hubs and job-creation centres. Our conviction has been borne out by the fact that tier-2 cities have made great strides in salary levels due to substantial investments in building excellent physical and social infrastructure.
This company is a network of the world's best developers, offering full-time, long-term remote software jobs with better compensation and career growth. We enable our clients to accelerate their cloud offerings and capitalize on the cloud. We have our own IoT/AI platform, and we provide professional services on that platform to build custom clouds for our clients' IoT devices. We also build mobile apps and run 24x7 DevOps/site reliability engineering for our clients.
We are looking for very hands-on SRE (Site Reliability Engineering) engineers with 3 to 6 years of experience. The person will be part of a team responsible for designing and implementing automation from scratch for medium- to large-scale cloud infrastructure and providing 24x7 services to our North American / European customers. This also includes ensuring ~100% uptime for 50+ internal sites. The person is expected to deliver with both high speed and high quality, and to work 40 hours per week (~6.5 hours per day, 6 days per week) in shifts that rotate every month.
This person MUST have:
- B.E. in Computer Science or equivalent
- 2+ years of hands-on experience troubleshooting/setting up Linux environments; can write shell scripts for any given requirement.
- 1+ Years of hands-on experience setting up/configuring AWS or GCP services from SCRATCH and maintaining them.
- 1+ Years of hands-on experience setting up/configuring Kubernetes & EKS and ensuring high availability of container orchestration.
- 1+ Years of hands-on experience setting up CICD from SCRATCH in Jenkins & Gitlab.
- Experience configuring/maintaining one monitoring tool.
- Excellent verbal & written communication skills.
- Candidates with certifications (AWS, GCP, CKA, etc.) will be preferred
- Hands-on experience with databases (Cassandra, MongoDB, MySQL, RDS).
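The monitoring and uptime responsibilities above ultimately reduce to checks against a target, such as alerting when a success ratio drops below a service-level objective. As a toy sketch (the 99.9% target and counts are illustrative values, not from the posting):

```python
# Toy sketch of an alerting threshold check; the 0.999 SLO default
# is an illustrative value, not taken from the posting.
def should_alert(successes: int, total: int, slo: float = 0.999) -> bool:
    """Alert when the success ratio over a window drops below the SLO."""
    if total == 0:
        return False  # no data in the window; nothing to alert on
    return successes / total < slo


# 9,985 successes out of 10,000 requests is a 99.85% success rate,
# which is below a 99.9% SLO, so this window would page.
alert = should_alert(successes=9_985, total=10_000)
```

Real monitoring tools evaluate rules of exactly this shape over metrics collected from each service.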
Experience:
- Minimum 3 years of experience as an SRE automation engineer building, running, and maintaining production sites. Not looking for candidates who have experience only in L1/L2 or Build & Deploy roles.
Location:
- Remote, anywhere in India
Timings:
- 40 hours per week (~6.5 hours per day, 6 days per week), in shifts that rotate every month.
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO leaves per year, annual increments, Diwali bonus, spot bonuses and other incentives etc.
- We dont believe in locking in people with large notice periods. You will stay here because you love the company. We have only a 15 days notice period.
We are looking for an experienced DevOps (Development and Operations) professional to join our growing organization. In this position, you will be responsible for finding and reporting bugs in web and mobile apps and assisting the Sr. DevOps engineer in managing infrastructure projects and processes. Keen attention to detail, problem-solving abilities, and a solid knowledge base are essential.
As a DevOps engineer, you will work in a Kubernetes-based microservices environment.
Experience in Microsoft Azure cloud and Kubernetes is preferred but not mandatory.
Ultimately, you will ensure that our products, applications, and systems work correctly.
Responsibilities:
- Detect and track software defects and inconsistencies
- Apply quality engineering principles throughout the Agile product lifecycle
- Handle code deployments in all environments
- Monitor metrics and develop ways to improve
- Consult with peers for feedback during testing stages
- Build, maintain, and monitor configuration standards
- Maintain day-to-day management and administration of projects
- Manage CI and CD tools with team
- Follow all best practices and procedures as established by the company
- Provide support and documentation
Required Technical and Professional Expertise
- Minimum 2+ years of DevOps experience
- Have experience in SaaS infrastructure development and Web Apps
- Experience in delivering microservices at scale; designing microservices solutions
- Proven Cloud experience/delivery of applications on Azure
- Proficient in configuration management tools such as Ansible, Terraform, Puppet, Chef, Salt, etc.
- Hands-on experience in Networking/network configuration, Application performance monitoring, Container performance, and security.
- Understanding of Kubernetes and Python, along with scripting languages like bash/shell
- Good to have experience in Linux internals, Linux packaging, Release Engineering (Branching, versioning, tagging), Artifact repository, Artifactory, Nexus, and CI/CD tooling (Concourse CI, Travis, Jenkins)
- Must be a proactive person
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting edge technologies
- An ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.
- An intuitive individual with an ability to manage change and proven time management
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
We are looking for a Sr. Engineer, DevOps and SysOps, responsible for managing AWS and Azure cloud computing. Your primary focus will be to help multiple projects with various cloud service implementations, create and manage CI/CD pipelines for deployment, and explore new cloud services and help projects implement them.
Technical Requirements & Responsibilities
- Have 4+ years’ experience as a DevOps and SysOps Engineer.
- Apply cloud computing skills to deploy upgrades and fixes on AWS and Azure (GCP is optional / Good to have).
- Design, develop, and implement software integrations based on user feedback.
- Troubleshoot production issues and coordinate with the development team to streamline code deployment.
- Implement automation tools and frameworks (CI/CD pipelines).
- Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects.
- Collaborate with team members to improve the company’s engineering tools, systems and procedures, and data security.
- Optimize the company’s computing architecture.
- Conduct systems tests for security, performance, and availability.
- Develop and maintain design and troubleshooting documentation.
- Expert in code deployment tools (Puppet, Ansible, and Chef).
- Can maintain Java / PHP / Ruby on Rails / DotNet web applications.
- Experience in network, server, and application-status monitoring.
- Possess a strong command of software-automation production systems (Jenkins and Selenium).
- Expertise in software development methodologies.
- You have working knowledge of known DevOps tools like Git and GitHub.
- Possess a problem-solving attitude.
- Can work independently and as part of a team.
Soft Skills Requirements
- Strong communication skills
- Agility and quick learner
- Attention to detail
- Organizational skills
- Understanding of the Software development life cycle
- Good Analytical and problem-solving skills
- Self-motivated with the ability to prioritize, meet deadlines, and manage changing priorities
- Should have a high level of energy working as an individual contributor and as part of a team.
- Good command over verbal and written English communication