• Develop and maintain CI/CD tools to build and deploy scalable, responsive web applications in a production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software
• Develop self-service solutions for the engineering team to deliver sites and software with speed and quality
o Automate infrastructure creation (see the sketch after this list)
o Provide easy-to-use solutions to the engineering team
• Research, test, and implement new metrics-collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
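As one illustration of the self-service infrastructure automation described above, here is a minimal sketch that provisions a team-tagged S3 bucket with boto3; it assumes AWS credentials are already configured, and the naming convention, tag keys, and region are hypothetical choices, not a prescribed standard.

```python
"""Illustrative self-service "infra creation" helper; a minimal sketch
assuming boto3 is installed and AWS credentials are configured. The
bucket-naming convention and tag keys are hypothetical."""
import boto3


def create_team_bucket(team: str, env: str) -> str:
    """Create an S3 bucket tagged by team and environment."""
    s3 = boto3.client("s3", region_name="us-east-1")
    bucket = f"{team}-{env}-artifacts"  # hypothetical naming convention
    s3.create_bucket(Bucket=bucket)     # us-east-1 needs no LocationConstraint
    s3.put_bucket_tagging(
        Bucket=bucket,
        Tagging={"TagSet": [
            {"Key": "team", "Value": team},
            {"Key": "env", "Value": env},
        ]},
    )
    return bucket


if __name__ == "__main__":
    print(create_team_bucket("platform", "dev"))
```

Exposed behind a small CLI or chat command, a helper like this is one way to let engineers create infrastructure without filing a ticket.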
What you will bring
• The desire to work in a fast-paced environment
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration-management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipelines (GitHub, Bitbucket, Artifactory, etc.)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud infrastructure architectures, designs, and recommendations
o AWS services such as S3, CloudFront, EKS (Kubernetes), RDS, and data warehouses, used to propose architectures and suggestions for new use cases
• Test system integrity, implemented designs, application deployments, and other infrastructure-related processes, making improvements as needed
Good to have
• Experience with code-quality tools and static or dynamic code analysis and compliance, including undertaking and resolving issues identified by vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (a Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity (see the sketch after this list).
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
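The list above maps to a fairly mechanical sequence of commands. Below is a hedged sketch of one possible migration path, driven from Python as the posting suggests; it assumes Git with the bundled git-p4, git-lfs 3.2 or newer (for the --above flag), and Perforce connection settings (P4PORT, P4USER) already in the environment. The depot path, directory, and remote URL are placeholders.

```python
"""A hedged sketch of one Perforce-to-GitHub migration path, assuming Git
(with the bundled git-p4), git-lfs >= 3.2, and Perforce connection
variables (P4PORT, P4USER) already set. All names are placeholders."""
import subprocess

DEPOT = "//depot/project/main"                     # placeholder depot path
WORKDIR = "project-git"                            # local clone directory
REMOTE = "git@github.com:example-org/project.git"  # placeholder remote


def run(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


# 1. Clone the full Perforce history into a new Git repository.
run(["git", "p4", "clone", f"{DEPOT}@all", WORKDIR])

# 2. Rewrite history so blobs over GitHub's 100 MB limit move to Git LFS
#    (--above requires a recent git-lfs).
run(["git", "lfs", "migrate", "import", "--everything", "--above=100MB"],
    cwd=WORKDIR)

# 3. Push all rewritten branches to the new GitHub remote.
run(["git", "remote", "add", "origin", REMOTE], cwd=WORKDIR)
run(["git", "push", "--all", "origin"], cwd=WORKDIR)
```

A real migration would add incremental `git p4 sync` runs, branch mapping, and integrity checks around these steps.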
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce (Helix Core), DevOps tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Job Role : DevOps Engineer (Python + DevOps)
Experience : 4 to 10 Years
Location : Hyderabad
Work Mode : Hybrid
Mandatory Skills : Python, Ansible, Docker, Kubernetes, CI/CD, Cloud (AWS/Azure/GCP)
Job Description :
We are looking for a skilled DevOps Engineer with expertise in Python, Ansible, Docker, and Kubernetes.
The ideal candidate will have hands-on experience automating deployments, managing containerized applications, and ensuring infrastructure reliability.
Key Responsibilities :
- Design and manage containerization and orchestration using Docker & Kubernetes.
- Automate deployments and infrastructure tasks using Ansible & Python.
- Build and maintain CI/CD pipelines for streamlined software delivery.
- Collaborate with development teams to integrate DevOps best practices.
- Monitor, troubleshoot, and optimize system performance (see the sketch after this list).
- Enforce security best practices in containerized environments.
- Provide operational support and contribute to continuous improvements.
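As a small example of the monitoring side of this role, the sketch below uses the official `kubernetes` Python client to list pods that are not healthy; it assumes a valid kubeconfig, and the namespace is illustrative.

```python
"""A minimal monitoring sketch, assuming the official `kubernetes` Python
client (pip install kubernetes) and a valid kubeconfig; the namespace is
illustrative."""
from kubernetes import client, config


def unhealthy_pods(namespace: str = "default") -> list[str]:
    """Return names of pods not in the Running or Succeeded phase."""
    config.load_kube_config()  # use load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    return [
        pod.metadata.name
        for pod in v1.list_namespaced_pod(namespace).items
        if pod.status.phase not in ("Running", "Succeeded")
    ]


if __name__ == "__main__":
    for name in unhealthy_pods():
        print("needs attention:", name)
```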
Required Qualifications :
- Bachelor’s in Computer Science/IT or related field.
- 4+ years of DevOps experience.
- Proficiency in Python and Ansible.
- Expertise in Docker and Kubernetes.
- Hands-on experience with CI/CD tools and pipelines.
- Experience with at least one cloud provider (AWS, Azure, or GCP).
- Strong analytical, communication, and collaboration skills.
Preferred Qualifications :
- Experience with Infrastructure-as-Code tools like Terraform.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, or ELK.
- Understanding of Agile/Scrum practices.
We seek a skilled and motivated Azure DevOps engineer to join our dynamic team. The ideal candidate will design, implement, and manage CI/CD pipelines, automate deployments, and optimize cloud infrastructure using Azure DevOps tools and services. You will collaborate closely with development and IT teams to ensure seamless integration and delivery of software solutions in a fast-paced environment.
Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning and deployments using Infrastructure as Code (IaC) tools like Terraform, ARM templates, or the Azure CLI (see the sketch after this list).
- Monitor and optimize Azure environments to ensure high availability, performance, and security.
- Collaborate with development, QA, and IT teams to streamline the software development lifecycle (SDLC).
- Troubleshoot and resolve issues related to build, deployment, and infrastructure.
- Implement and manage version control systems, primarily using Git.
- Manage containerization and orchestration using tools like Docker and Kubernetes.
- Ensure compliance with industry standards and best practices for security, scalability, and reliability.
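To make the IaC responsibility concrete, here is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-resource); it assumes AZURE_SUBSCRIPTION_ID is set, and the resource-group name and region are placeholders.

```python
"""A hedged IaC-style sketch using the Azure SDK for Python, assuming
azure-identity and azure-mgmt-resource are installed and the environment
variable AZURE_SUBSCRIPTION_ID is set; names and region are placeholders."""
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, os.environ["AZURE_SUBSCRIPTION_ID"])

# Idempotently create (or update) a resource group, typically the first
# thing a pipeline provisions before deploying templates into it.
rg = client.resource_groups.create_or_update(
    "rg-devops-demo", {"location": "eastus"}
)
print(rg.name, rg.location)
```

The create-or-update call is idempotent, which is why pipelines can safely run it on every deployment.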
● Good understanding of how the web works
● Experience with at least one language such as Java or Python
● Good with shell scripting
● Experience with Kubernetes (k8s) and containers
● Experience with *nix-based operating systems
● Fairly good understanding of AWS/GCP/Azure
● Troubleshoot and fix outages and performance issues in the infrastructure stack
● Identify gaps and design automation tools for all feasible functions in the infrastructure
● Good verbal and written communication skills
● Drive the team’s SLAs/SLOs
Benefits
This is an opportunity to work on a fairly complex set of systems and improve them. You will get a chance to learn things like “how to think about code simplicity”, “how to write for maintainability”, and several other things.
● Comprehensive health insurance policy.
● Flexible working hours and a very friendly work environment.
● Flexibility to work either in the office (post Covid) or remotely.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
With a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which comprises the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented DevOps Engineers with a focus on the Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.
The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer’s employees and customers alike.
Responsibilities:
Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, the APM server, APM agents, and interface configuration.
Create and develop regular “Default Dashboards” for visualizing metrics from various sources such as Apache web servers, application servers, and databases.
Improve and fix bugs in installation and automation routines.
Monitor CPU usage, security findings, and AWS alerts.
Develop and extend “Default Alerting” for issues such as OOM errors, datasource issues, and LDAP errors (see the sketch after this list).
Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).
Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.
Integrate data from AWS CloudWatch.
Document all relevant information and train involved personnel in the used technologies.
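As an example of what a “Default Alerting” check might query, the hedged sketch below counts recent OutOfMemoryError log entries with the official `elasticsearch` Python client; the endpoint, credentials, and index pattern are placeholders, and a production setup would use Kibana alerting rules rather than a standalone script.

```python
"""A hedged alerting sketch, assuming the `elasticsearch` Python client
(8.x); the endpoint, API key, and index pattern are placeholders."""
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://elastic.example.internal:9200",
    api_key="PLACEHOLDER",
)

# Count OutOfMemoryError log entries from the last 15 minutes.
resp = es.count(
    index="logs-*",
    query={
        "bool": {
            "must": [{"match_phrase": {"message": "OutOfMemoryError"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
)
if resp["count"] > 0:
    print(f"ALERT: {resp['count']} OOM events in the last 15 minutes")
```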
Requirements:
Experience with Elastic Stack (ELK) components and related technologies.
Proficiency in automation tools like Ansible and CloudFormation.
Strong knowledge of AWS Cloud services.
Experience in creating and managing dashboards and alerts.
Familiarity with IAM roles and rights management.
Ability to document processes and train team members.
Excellent problem-solving skills and attention to detail.
Skills & Requirements
Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.
● Explore
○ As a DevOps engineer, you will have multiple ways, tools, and technologies to solve a particular problem. We want you to take things into your own hands and figure out the best way to solve it.
● PDCT
○ Plan, design, code, and write test cases for the problems you are solving
● Tuning
○ Help tune performance and ensure high availability of the infrastructure, including reviewing system and application logs (see the sketch after this list)
● Security
○ Work on code-level application security
● Deploy
○ Deploy, manage, and operate scalable, highly available, and fault-tolerant systems in client environments
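As one example of the log review mentioned under Tuning, this minimal sketch summarizes severity levels in a plain-text log file; the log path is hypothetical and the severity markers assume a conventional log format.

```python
"""A minimal log-review sketch; the log path is hypothetical and the
severity markers assume a conventional plain-text log format."""
from collections import Counter
from pathlib import Path

LOG = Path("/var/log/app/application.log")  # hypothetical location

counts: Counter[str] = Counter()
for line in LOG.read_text(errors="replace").splitlines():
    for level in ("ERROR", "WARN", "FATAL"):
        if level in line:
            counts[level] += 1

print("log summary:", dict(counts))
```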
Technologies (4 out of 5 are required) :
● Terraform*
● Docker*
● Kubernetes*
● Bash Scripting
● SQL
(items marked * are a must)
The challenges are great (as are the rewards). If you are looking to take these DevOps challenges head-on, wish to learn a great deal, and want to contribute to the company along the way, this is the role for you.
Ready?
If developing an impactful product for an early-stage startup sounds appealing to you, let’s have a conversation. (Confidential, of course.)
Contract review and lifecycle management is no longer a niche idea. It is one of the fastest-growing sectors within legal operations automation, with a market size of $10B growing at 15% YoY. InkPaper helps corporations and law firms optimize their contract workflow and lifecycle management by providing workflow automation, process transparency, efficiency, and speed. Automation and blockchain have the power to transform legal contracts as we know them today; if you are interested in being part of that journey, keep reading!
InkPaper.AI is looking for a passionate DevOps Engineer who can drive and build next-generation AI-powered products in legal technology: document workflow management and e-signature platforms. You will be a part of the product engineering team based out of Gurugram, India, working closely with our team in Austin, USA.
If you are a highly skilled DevOps Engineer with expertise in GCP, Azure, AWS ecosystems, and Cybersecurity, and you are passionate about designing and maintaining secure cloud infrastructure, we would love to hear from you. Join our team and play a critical role in driving our success while ensuring the highest standards of security.
Responsibilities:
- Solid experience in building enterprise-level cloud solutions on one of the big 3 (AWS/Azure/GCP)
- Collaborate with development teams to automate software delivery pipelines, utilizing CI/CD tools and technologies.
- Responsible for configuring and overseeing cloud services, including virtual machines, containers, serverless functions, databases, and networking components, ensuring their effective management and operation.
- Responsible for implementing robust monitoring, logging, and alerting solutions to ensure optimal system health and performance.
- Develop and maintain documentation for infrastructure, deployment processes, and security procedures.
- Troubleshoot and resolve infrastructure and deployment issues, ensuring system availability and reliability.
- Conduct regular security assessments, vulnerability scans, and penetration tests to identify and address potential threats (see the sketch after this list).
- Implement security controls and best practices to protect systems, data, and applications in compliance with industry standards and regulations.
- Stay updated on emerging trends and technologies in DevOps, cloud, and cybersecurity. Recommend improvements to enhance system efficiency and security.
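As one example of an automatable security assessment, the hedged sketch below uses boto3 to flag S3 buckets whose public-access block is missing or incomplete; it assumes AWS credentials with list and read permissions.

```python
"""A hedged sketch of one automatable security check, assuming boto3 and
AWS credentials: flag buckets whose S3 public-access block is missing or
not fully enabled."""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        flags = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
        if not all(flags.values()):
            print(f"review: {name} leaves some public access enabled")
    except ClientError:
        # Buckets with no configuration at all are also worth flagging.
        print(f"review: {name} has no public-access block configured")
```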
An ideal candidate would credibly demonstrate various aspects of the InkPaper Culture code –
- We solve for the customer
- We practice good judgment
- We are action-oriented
- We value deep work over shallow work
- We reward work over words
- We value character over only skills
- We believe the best perk is amazing peers
- We favor autonomy
- We value contrarian ideas
- We strive for long-term impact
You Have:
- B.Tech in Computer Science.
- 2 to 4 years of relevant experience in DevOps.
- Proficiency in GCP, Azure, AWS ecosystems, and Cybersecurity
- Experience with: CI/CD automation, cloud service configuration, monitoring, troubleshooting, security implementation.
- Familiarity with blockchain would be an advantage.
- Excellent verbal communication skills.
- Good problem-solving skills.
- Attention to detail
At InkPaper, we hire people who will help us change the future of legal services. Even if you do not think you check off every bullet point on this list, we still encourage you to apply! We value both current experience and future potential.
Benefits
- Hybrid environment to work from our Gurgaon Office and from the comfort of your home.
- Great compensation package!
- Tools you need on us!
- Our insurance plan offers medical, dental, vision, short- and long-term disability coverage, plus supplemental for all employees and dependents
- 15 planned leaves + 10 Casual Leaves + Company holidays as per government norms
InkPaper is committed to creating a welcoming and inclusive workplace for everyone. We value and celebrate our differences because those differences are what make our team shine. We hire great people from diverse backgrounds, not just because it is the right thing to do, but because it makes us stronger. We are an equal opportunity employer and do not discriminate against candidates based on race, ethnicity, religion, sex, gender, sexual orientation, gender identity, or disability.
Location: Gurugram or remote
Job Description:
Responsibilities
· End-to-end responsibility for the Azure landscape of our customers
· Managing code releases and operational tasks within a global team, with a focus on automation, maintainability, security, and customer satisfaction
· Using the CI/CD framework to rapidly support lifecycle management of the platform
· Acting as L2-L3 support for incidents, problems, and service requests
· Working with various Atos and third-party teams to resolve incidents and implement changes
· Implementing and driving automation and self-healing solutions to reduce toil
· Managing error budgets and taking a hands-on role in designing and developing solutions to address reliability issues and risks (see the sketch after this list)
· Supporting ITSM processes and collaborating with service management representatives
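For the error-budget item above, the underlying arithmetic is simple; the sketch below shows it for an illustrative 99.9% availability SLO over a 30-day window (all numbers are hypothetical).

```python
"""A small sketch of the error-budget arithmetic behind an SLO; the target,
window, and incident time are purely illustrative."""
SLO = 0.999                             # 99.9% availability target
WINDOW_MINUTES = 30 * 24 * 60           # 30-day window

budget_minutes = (1 - SLO) * WINDOW_MINUTES   # total allowed downtime
consumed_minutes = 12.0                       # hypothetical incident time

remaining = budget_minutes - consumed_minutes
print(f"budget: {budget_minutes:.1f} min, remaining: {remaining:.1f} min "
      f"({remaining / budget_minutes:.0%} left)")
```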
Job Requirements
· Azure Associate certification or equivalent knowledge level
· 5+ years of professional experience
· Experience with Terraform and/or native Azure automation
· Knowledge of CI/CD concepts and toolsets (e.g., Jenkins, Azure DevOps, Git)
· Must be adaptable to work in a varied, fast-paced, exciting, ever-changing environment
· Good analytical and problem-solving skills to resolve technical issues
· Understanding of Agile development and Scrum concepts a plus
· Experience with Kubernetes architecture and tools a plus
DevOps Engineer
The DevOps team is one of the core technology teams of Lumiq.ai and is responsible for managing network activities, automating Cloud setups and application deployments. The team also interacts with our customers to work out solutions. If you are someone who is always pondering how to make things better, how technologies can interact, how various tools, technologies, and concepts can help a customer or how you can use various technologies to improve user experience, then Lumiq is the place of opportunities.
Job Description
- Explore the newest innovations in scalable and distributed systems.
- Help design the architecture of the project, solutions to existing problems, and future improvements.
- Make the cloud infrastructure and services smart by implementing automation and trigger-based solutions (see the sketch after this list).
- Interact with Data Engineers and Application Engineers to create continuous integration and deployment frameworks and pipelines.
- Play with large clusters on different clouds to tune your jobs or to learn.
- Research new technologies, prove concepts, and plan how to integrate or upgrade.
- Be part of discussions on other projects to learn or to help.
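Since the posting later name-drops #airflow, here is a minimal sketch of the kind of trigger-based automation meant above, written as an Apache Airflow DAG; it assumes Airflow 2.4+ (for the `schedule` argument), and the DAG id, schedule, and command are illustrative.

```python
"""A minimal Apache Airflow sketch of scheduled, trigger-based automation,
assuming Airflow 2.4+; the DAG id, schedule, and command are illustrative."""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_infra_audit",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Scheduled automation step; a real DAG would chain scan, report,
    # and cleanup tasks here.
    BashOperator(
        task_id="cleanup_stale_resources",
        bash_command="echo 'scan and remove stale cloud resources here'",
    )
```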
Requirements
- 2+ years of experience as a DevOps Engineer.
- You understand everything from traditional networking to software-defined networking.
- You like containers and open-source orchestration systems like Kubernetes and Mesos.
- Experience securing systems by creating robust access policies and enforcing network restrictions.
- Knowledge of how applications work, which is essential for designing distributed systems.
- Experience with open-source projects, including discussing shortcomings or problems with the community on several occasions.
- You understand that provisioning a virtual machine is not DevOps.
- You know you are not a SysAdmin but a DevOps Engineer: the person who develops operations so that the system runs efficiently and scalably.
- Exposure to private clouds, subnets, VPNs, peering, and load balancers, and hands-on experience with them.
- You check the logs before screaming about an error.
- Multiple screens make you more efficient.
- You are a doer who doesn’t say the word “impossible”.
- You understand the value of documenting your work.
- You understand the Big Data ecosystem and how you can leverage the cloud for it.
- You know these buddies: #airflow, #aws, #azure, #gcloud, #docker, #kubernetes, #mesos, #acs
DevOps Engineer
Skills
- Building scalable and highly available infrastructure for data science
- Knows data science project workflows
- Hands-on with deployment patterns for online/offline predictions (server/serverless)
- Experience with either Terraform or Kubernetes
- Experience with ML deployment frameworks like Kubeflow, MLflow, or SageMaker (see the sketch below)
- Working knowledge of Jenkins or a similar tool
Responsibilities
- Own all the ML cloud infrastructure (AWS)
- Help build out an entire CI/CD ecosystem with auto-scaling
- Work with a testing engineer to design testing methodologies for ML APIs
- Research and implement new technologies
- Help with cost optimization of infrastructure
- Knowledge sharing
Nice to Have
- Develop APIs for machine learning
- Can write Python servers for ML systems with API frameworks
- Understanding of task-queue frameworks like Celery
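As a small illustration of working with an ML deployment framework from this list, the hedged sketch below logs a parameter and a metric to MLflow, the way a deployment pipeline might record model metadata; the tracking URI and experiment name are placeholders.

```python
"""A hedged MLflow sketch, assuming the `mlflow` package and a reachable
tracking server; the URI and experiment name are placeholders."""
import mlflow

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")
mlflow.set_experiment("deployment-smoke-tests")

with mlflow.start_run():
    # Record metadata the way a deployment pipeline might before
    # registering a model version.
    mlflow.log_param("model_version", "2024-01-01")
    mlflow.log_metric("smoke_test_latency_ms", 42.0)
```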










