
Role: Full-Time, Long-Term
Required: Docker, GCP, CI/CD
Preferred: Experience with ML pipelines
OVERVIEW
We are seeking a DevOps engineer to join as a core member of our technical team. This is a long-term position for someone who wants to own infrastructure and deployment for a production machine learning system. You will ensure our prediction pipeline runs reliably, deploys smoothly, and scales as needed.
The ideal candidate thinks about failure modes obsessively, automates everything possible, and builds systems that run without constant attention.
CORE TECHNICAL REQUIREMENTS
Docker (Required): Deep experience with containerization. Efficient Dockerfiles, layer caching, multi-stage builds, debugging container issues. Experience with Docker Compose for local development.
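To make the multi-stage-build expectation concrete, here is a minimal sketch; the base image, file layout, and entrypoint are illustrative assumptions, not our actual service:

```dockerfile
# Build stage: install dependencies into a throwaway layer
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Cache-friendly: this layer rebuilds only when requirements.txt changes
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Runtime stage: ship only what the service needs
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
USER nobody
CMD ["python", "-m", "app"]
```

Copying the dependency manifest before the application code is what makes layer caching effective: routine code changes reuse the cached dependency layer.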
Google Cloud Platform (Required): Strong GCP experience: Cloud Run for serverless containers, Compute Engine for VMs, Artifact Registry for images, Cloud Storage, IAM. You can navigate the console but prefer scripting everything.
CI/CD (Required): Build and maintain deployment pipelines. GitHub Actions required. You automate testing, building, pushing, and deploying. You understand the difference between continuous integration and continuous deployment.
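As a hypothetical illustration of the commit-to-production flow this role owns, a trimmed GitHub Actions workflow might look like the following; the job names, registry path placeholders, and deploy command are assumptions, not our actual pipeline:

```yaml
name: ci-cd
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt && pytest   # CI gate: tests must pass
  deploy:
    needs: test          # CD step runs only after CI succeeds
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}    # secrets never in code
      - run: |
          docker build -t REGION-docker.pkg.dev/PROJECT/repo/app:${{ github.sha }} .
          docker push REGION-docker.pkg.dev/PROJECT/repo/app:${{ github.sha }}
          gcloud run deploy app --image REGION-docker.pkg.dev/PROJECT/repo/app:${{ github.sha }}
```

The `needs: test` edge is the integration/deployment distinction in miniature: CI proves the commit, CD promotes it.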
Linux Administration (Required): Comfortable on the command line. SSH, diagnose problems, manage services, read logs, fix things. Bash scripting is second nature.
PostgreSQL (Required): Database administration basics—backups, monitoring, connection management, basic performance tuning. Not a DBA, but comfortable keeping a production database healthy.
Infrastructure as Code (Preferred): Terraform, Pulumi, or similar. Infrastructure should be versioned, reviewed, and reproducible—not clicked together in a console.
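To make "versioned, reviewed, reproducible" concrete, a Terraform sketch of a Cloud Run service might look like this; the service name, region, image path, and scaling cap are placeholder values, not our real configuration:

```hcl
resource "google_cloud_run_v2_service" "app" {
  name     = "app"                  # hypothetical service name
  location = "us-central1"

  template {
    containers {
      image = "us-central1-docker.pkg.dev/my-project/repo/app:v1"
    }
    scaling {
      max_instance_count = 4        # cost cap, reviewed like any other code change
    }
  }
}
```

A change to the instance cap then arrives as a pull request with a plan diff, not as an undocumented console click.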
WHAT YOU WILL OWN
Deployment Pipeline: Maintaining and improving deployment scripts and CI/CD workflows. Code moves from commit to production reliably with appropriate testing gates.
Cloud Run Services: Managing deployments for model fitting, data cleansing, and signal discovery services. Monitor health, optimize cold starts, handle scaling.
VM Infrastructure: PostgreSQL and Streamlit on GCP VMs. Instance management, updates, backups, security.
Container Registry: Managing images in GitHub Container Registry and Google Artifact Registry. Cleanup policies, versioning, access control.
Monitoring and Alerting: Building observability. Logging, metrics, health checks, alerting. Know when things break before users tell us.
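One way to make "know when things break before users tell us" concrete is a threshold check over a rolling window of probe results; the window size and alert ratio below are illustrative, not our production values:

```python
from collections import deque


class HealthMonitor:
    """Tracks recent probe results and flags degradation before a full outage."""

    def __init__(self, window: int = 10, alert_ratio: float = 0.3):
        self.results = deque(maxlen=window)  # rolling window of pass/fail probes
        self.alert_ratio = alert_ratio       # failure fraction that triggers an alert

    def record(self, ok: bool) -> None:
        self.results.append(ok)

    def should_alert(self) -> bool:
        if not self.results:
            return False
        failures = sum(1 for r in self.results if not r)
        return failures / len(self.results) >= self.alert_ratio


monitor = HealthMonitor(window=5, alert_ratio=0.4)
for ok in [True, True, False, False, True]:  # 2 failures out of 5 probes
    monitor.record(ok)
print(monitor.should_alert())  # → True
```

The rolling window is the point: a single failed probe stays quiet, a sustained failure rate pages someone.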
Environment Management: Configuration across local and production. Secrets management. Environment parity where it matters.
WHAT SUCCESS LOOKS LIKE
Deployments are boring—no drama, no surprises. Systems recover automatically from transient failures. Engineers deploy with confidence. Infrastructure changes are versioned and reproducible. Costs are reasonable and resources scale appropriately.
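"Recover automatically from transient failures" usually reduces to retry-with-backoff somewhere in the stack; a generic sketch, with illustrative delays and attempt counts:

```python
import time


def retry(fn, attempts: int = 3, base_delay: float = 0.01):
    """Call fn, retrying on exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                              # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1x, 2x, 4x, ... backoff


# Simulated flaky dependency: fails twice, then succeeds
calls = {"n": 0}


def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


print(retry(flaky))  # → ok
```

The final re-raise matters as much as the retries: a wrapper that swallows permanent failures turns a boring deployment into a silent one.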
ENGINEERING STANDARDS
Automation First: If you do something twice, automate it. Manual processes are bugs waiting to happen.
Documentation: Runbooks, architecture diagrams, deployment guides. The next person can understand and operate the system.
Security Mindset: Secrets never in code. Least-privilege access. You think about attack surfaces.
Reliability Focus: Design for failure. Backups are tested. Recovery procedures exist and work.
CURRENT ENVIRONMENT
GCP (Cloud Run, Compute Engine, Artifact Registry, Cloud Storage), Docker, Docker Compose, GitHub Actions, PostgreSQL 16, Bash deployment scripts with a Python wrapper.
WHAT WE ARE LOOKING FOR
Ownership Mentality: You see a problem, you fix it. You do not wait for assignment.
Calm Under Pressure: When production breaks, you diagnose methodically.
Communication: You explain infrastructure decisions to non-infrastructure people. You document what you build.
Long-Term Thinking: You build systems maintained for years, not quick fixes creating tech debt.
EDUCATION
University degree in Computer Science, Engineering, or related field preferred. Equivalent demonstrated expertise also considered.
TO APPLY
Include: (1) CV/resume, (2) Brief description of infrastructure you built or maintained, (3) Links to relevant work if available, (4) Availability and timezone.

About Capital Squared
What You’ll Do:
We’re looking for a skilled DevOps Engineer to help us build and maintain reliable, secure, and scalable infrastructure. You will work closely with our development, product, and security teams to streamline deployments, improve performance, and ensure cloud infrastructure resilience.
Responsibilities:
● Deploy, manage, and monitor infrastructure on Google Cloud Platform (GCP)
● Build CI/CD pipelines using Jenkins and integrate them with Git workflows
● Design and manage Kubernetes clusters and Helm-based deployments
● Manage infrastructure as code using Terraform
● Set up logging, monitoring, and alerting (Stackdriver, Prometheus, Grafana)
● Ensure security best practices across cloud resources, networks, and secrets
● Automate repetitive operations and improve system reliability
● Collaborate with developers to troubleshoot and resolve issues in staging/production environments
What We’re Looking For:
Required Skills:
● 1–3 years of hands-on experience in a DevOps or SRE role
● Strong knowledge of GCP services (IAM, GKE, Cloud Run, VPC, Cloud Build, etc.)
● Proficiency in Kubernetes (deployment, scaling, troubleshooting)
● Experience with Terraform for infrastructure provisioning
● CI/CD pipeline setup using Jenkins, GitHub Actions, or similar tools
● Understanding of DevSecOps principles and cloud security practices
● Good command over Linux, shell scripting, and basic networking concepts
Nice to have:
● Experience with Docker, Helm, ArgoCD
● Exposure to other cloud platforms (AWS, Azure)
● Familiarity with incident response and disaster recovery planning
● Knowledge of logging and monitoring tools like ELK, Prometheus, Grafana
Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions.
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.
Please Apply - https://zrec.in/L51Qf?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in the journey of implementing DevOps practices and work with the respective teams to fix them, increasing overall productivity. We also run training sessions for developers on the importance of DevOps.

Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance.

We assess technology architecture, security, governance, compliance, and DevOps maturity for technology companies, helping them optimize cloud costs, streamline their architecture, and set up processes that improve the availability and reliability of their websites and applications. We also set up tools for monitoring, logging, and observability, and focus on bringing a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: Senior DevOps Engineer (Infrastructure/SRE)
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediate
About Us
At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.
Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.
We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.
Role Summary
We are looking for a Senior DevOps Engineer (Infrastructure) to design, automate, and manage cloud-based and datacentre infrastructure for diverse projects. The ideal candidate will have deep expertise in a public cloud platform (AWS, GCP, or Azure), with a strong focus on cost optimization, security best practices, and infrastructure automation using tools like Terraform and CI/CD pipelines.
This role involves designing scalable architectures (containers, serverless, and VMs), managing databases, and ensuring system observability with tools like Prometheus and Grafana. Strong leadership, client communication, and team mentoring skills are essential. Experience with VPN technologies and configuration management tools (Ansible, Helm) is also critical. Multi-cloud experience and familiarity with APM tools are a plus.
Ideal Candidate Profile
- A solid 4-6 years of experience as a DevOps engineer, with a proven track record of architecting and automating solutions in the cloud
- Experience in troubleshooting production incidents and handling high-pressure situations.
- Strong leadership skills and the ability to mentor team members and provide guidance on best practices.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive experience with Kubernetes, Terraform, ArgoCD, and Helm.
- Strong with at least one public cloud (AWS/GCP/Azure)
- Strong with cost optimization and security best practices
- Strong with infrastructure automation using Terraform and CI/CD automation
- Strong with configuration management using Ansible, Helm, etc.
- Good with designing architectures (containers, serverless, VMs, etc.)
- Hands-on experience working on multiple projects
- Strong client communication and requirements-gathering skills
- Database management experience
- Good experience with Prometheus, Grafana & Alert Manager
- Able to manage multiple clients and take ownership of client issues.
- Experience with Git and coding best practices
- Proficiency in cloud networking, including VPCs, DNS, VPNs (OpenVPN, OpenSwan, Pritunl, Site-to-Site VPNs), load balancers, and firewalls, ensuring secure and efficient connectivity.
- Strong understanding of cloud security best practices, identity and access management (IAM), and compliance requirements for modern infrastructure.
Good to have
- Multi-cloud experience with AWS, GCP & Azure
- Experience with APM & Observability tools like - Newrelic, Datadog, and OpenTelemetry
- Proficiency in scripting languages (Python, Go) for automation and tooling to improve infrastructure and application reliability.
Key Responsibilities
- Design and Development:
- Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
- Collaborate with product and engineering teams to translate business requirements into technical specifications.
- Write clean, maintainable, and efficient code, following best practices and coding standards.
- Cloud Infrastructure:
- Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
- Implement and manage CI/CD pipelines for automated deployment and testing.
- Ensure the security, reliability, and performance of cloud infrastructure.
- Technical Leadership:
- Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
- Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
- Lead technical discussions and contribute to architectural decisions.
- Problem Solving and Troubleshooting:
- Identify, diagnose, and resolve complex software and infrastructure issues.
- Perform root cause analysis for production incidents and implement preventative measures.
- Continuous Improvement:
- Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
- Contribute to the continuous improvement of development processes, tools, and methodologies.
- Drive innovation by experimenting with new technologies and solutions to enhance the platform.
- Collaboration:
- Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
- Communicate effectively with stakeholders, including technical and non-technical team members.
- Client Interaction & Management:
- Will serve as a direct point of contact for multiple clients.
- Able to handle the unique technical needs and challenges of two or more clients concurrently.
- Involves both direct interaction with clients and internal team coordination.
- Production Systems Management:
- Must have extensive experience in managing, monitoring, and debugging production environments.
- Will work on troubleshooting complex issues and ensure that production systems are running smoothly with minimal downtime.
Contract review and lifecycle management is no longer a niche idea. It is one of the fastest-growing sectors within legal operations automation, with a market size of $10B growing at 15% YoY. InkPaper helps corporations and law firms optimize their contract workflow and lifecycle management by providing workflow automation, process transparency, efficiency, and speed. Automation and blockchain have the power to transform legal contracts as we know them today; if you are interested in being part of that journey, keep reading!
InkPaper.AI is looking for a passionate DevOps Engineer who can drive and build next-generation AI-powered products in legal technology: document workflow management and e-signature platforms. You will be part of the product engineering team based out of Gurugram, India, working closely with our team in Austin, USA.
If you are a highly skilled DevOps Engineer with expertise in GCP, Azure, AWS ecosystems, and Cybersecurity, and you are passionate about designing and maintaining secure cloud infrastructure, we would love to hear from you. Join our team and play a critical role in driving our success while ensuring the highest standards of security.
Responsibilities:
- Solid experience building enterprise-level cloud solutions on one of the big 3 (AWS/Azure/GCP)
- Collaborate with development teams to automate software delivery pipelines, utilizing CI/CD tools and technologies.
- Responsible for configuring and overseeing cloud services, including virtual machines, containers, serverless functions, databases, and networking components, ensuring their effective management and operation.
- Responsible for implementing robust monitoring, logging, and alerting solutions to ensure optimal system health and performance
- Develop and maintain documentation for infrastructure, deployment processes, and security procedures.
- Troubleshoot and resolve infrastructure and deployment issues, ensuring system availability and reliability.
- Conduct regular security assessments, vulnerability scans, and penetration tests to identify and address potential threats.
- Implement security controls and best practices to protect systems, data, and applications in compliance with industry standards and regulations
- Stay updated on emerging trends and technologies in DevOps, cloud, and cybersecurity. Recommend improvements to enhance system efficiency and security.
An ideal candidate would credibly demonstrate various aspects of the InkPaper Culture code –
- We solve for the customer
- We practice good judgment
- We are action-oriented
- We value deep work over shallow work
- We reward work over words
- We value character over only skills
- We believe the best perk is amazing peers
- We favor autonomy
- We value contrarian ideas
- We strive for long-term impact
You Have:
- B.Tech in Computer Science.
- 2 to 4 years of relevant experience in DevOps.
- Proficiency in GCP, Azure, AWS ecosystems, and Cybersecurity
- Experience with: CI/CD automation, cloud service configuration, monitoring, troubleshooting, security implementation.
- Familiarity with Blockchain will be an edge.
- Excellent verbal communication skills.
- Good problem-solving skills.
- Attention to detail
At InkPaper, we hire people who will help us change the future of legal services. Even if you do not think you check off every bullet point on this list, we still encourage you to apply! We value both current experience and future potential.
Benefits
- Hybrid environment to work from our Gurgaon Office and from the comfort of your home.
- Great compensation package!
- Tools you need on us!
- Our insurance plan offers medical, dental, vision, short- and long-term disability coverage, plus supplemental for all employees and dependents
- 15 planned leaves + 10 Casual Leaves + Company holidays as per government norms
InkPaper is committed to creating a welcoming and inclusive workplace for everyone. We value and celebrate our differences because those differences are what make our team shine. We hire great people from diverse backgrounds, not just because it is the right thing to do, but because it makes us stronger. We are an equal opportunity employer and do not discriminate against candidates based on race, ethnicity, religion, sex, gender, sexual orientation, gender identity, or disability.
Location: Gurugram or remote
JOB DETAILS
What You'll Do
Responsibilities
- Building and maintenance of resilient and scalable production infrastructure
- Improvement of monitoring systems
- Creation and support of development automation processes (CI/CD)
- Participation in infrastructure development
- Detection of problems in the architecture and proposal of solutions
- Creation of tasks for improvements to system scalability, performance, and monitoring
- Analysis of product requirements from a DevOps perspective
- Managing the DevOps team and controlling task delivery
- Incident analysis and remediation
Technology stack
Linux, Bash, Salt/Ansible, LXC, libvirt, IPsec, VXLAN, Open vSwitch, OpenVPN, OSPF, BIRD, Cisco NX-OS, Multicast, PIM, LVM, software RAID, LUKS, PostgreSQL, nginx, haproxy, Prometheus, Grafana, Zabbix, GitLab, Capistrano
Skills and Experience
- Understanding of distributed systems principles
- Understanding of principles for building resilient network infrastructure
- Experience administering Ubuntu Linux (other Debian-like distributions a plus)
- Strong knowledge of Bash
- Experience working with LXC containers
- Understanding of and experience with the infrastructure-as-code approach
- Experience developing idempotent Ansible roles
- Experience with relational databases (PostgreSQL); ability to create simple SQL queries
- Experience with Git
- Experience with monitoring and metrics collection systems (Prometheus, Grafana, Zabbix)
- Understanding of dynamic routing (OSPF)
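The idempotent-Ansible-roles item above can be made concrete with tasks that converge on a state rather than re-execute commands; the module names are real Ansible built-ins, but the package, paths, and handler name are illustrative:

```yaml
- name: Ensure nginx is installed
  ansible.builtin.apt:
    name: nginx
    state: present          # no-op if already installed

- name: Deploy site config
  ansible.builtin.template:
    src: site.conf.j2       # hypothetical template
    dest: /etc/nginx/conf.d/site.conf
  notify: reload nginx      # handler fires only when the file actually changes
```

Running the play a second time should report `changed=0`; that convergence is what makes a role idempotent.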
Preferred experience
- Experience working with high-load, zero-downtime environments
- Experience coding in Python
- Experience working with IPsec, VXLAN, Open vSwitch
- Knowledge of and experience with Cisco network equipment
- Experience working with Cisco NX-OS
- Knowledge of multicast protocol principles (IGMP, PIM)
- Experience setting up multicast on Cisco equipment
- Experience working with Solarflare Onload
- Experience administering Atlassian products

We are looking for a System Engineer who can manage requirements and data in IBM Rational DOORS and Siemens Polarion. You will be part of a global development team with resources in China, Sweden, and the US.
Responsibilities and tasks
- Import of requirement specifications to DOORS module
- Create the module structure according to a written specification (e-mail, Word, etc.)
- Formats: ReqIF, Word, Excel, PDF, CSV
- Adjust data as required for import into the tool
- Review that the result is readable and possible to work with
- Import information into new or existing modules in DOORS
- Feed back compliance status from an Excel compliance matrix to a module in DOORS
- Import requirements from one module to another based on baseline/filter…
- Import lists of items (test cases, documents, etc.) in Excel or CSV to a module
- Provide guidance on format to information holder at client
- Link information/attribute data from one module to others
- Status, test results, comment
- Link requirements according to information from the client in any given format
- Export data and reports
- Assemble report based on data from one or several modules according to filters/baseline/written requests in any given format
- Export statistics from data in DOORS modules
- Create filters in DOORS modules
Note: Polarion activities mirror the DOORS activities, though the process, results, and structure may vary
Requirements – Must-have list (short; genuine musts, in no particular order)
- 10+ years of overall experience in the automotive industry
- Requirements management experience in the automotive industry
- 3+ years of experience as a Rational DOORS user
- Knowledge of Siemens Polarion; working knowledge is a plus
- More than 7 years of experience in offshore delivery
- Able to lead a team of 3 to 5 people and manage temporary additions to the team
- Working knowledge of ASPICE and handling requirements according to ASPICE L2
- Experience in setting up offshore delivery that best fits the expectations of the customer
- Experience in setting up quality processes and ways of working
- Experience in metrics management – propose, capture and share metrics with internal/ external stakeholders
- Good Communication skills in English
Requirements – Good-to-have list, strictly sorted in descending priority order
- Experience in DevOps framework of delivery
- Interest in learning new languages
- Handling requirements according to ASPICE L3
- Willingness to travel; travel to Sweden may be needed (approx. 1-2 trips per year)
Soft skills
- The candidate must be a driven and proactive person, able to work with minimal supervision, and will be asked to give example situations in upcoming interviews.
- Good team player with attention to detail, self-disciplined, able to manage their own time and workload, proactive and motivated.
- Strong sense of responsibility and commitment, innovative thinking.
- Extensive expertise in the following areas of AWS development
- Amazon DynamoDB, Amazon RDS, Amazon APIs, AWS Elastic Beanstalk, and AWS CloudFormation
- Lambda, Kinesis, CodeCommit, CodePipeline
- Leveraging AWS SDKs to interact with AWS services from the application
- Writing code that optimizes the performance of the AWS services used by the application
- Developing with RESTful API interfaces
- Code-level application security (IAM roles, credentials, encryption, etc.)
- Programming language: Python or .NET; programming with AWS APIs
- General troubleshooting and debugging









