
We are looking for a DevOps / Site Reliability Engineer (L5) to own and scale the production reliability of a large-scale, AI-first platform. You will be responsible for running mission-critical workloads on cloud infrastructure, hardening Kubernetes-based systems, and ensuring high availability, performance, and cost efficiency across platform and AI services.
This role is deeply hands-on and ownership-driven. You will be trusted to run day-2 production systems end-to-end, lead incident response, and continuously raise the reliability bar for AI and data-intensive workloads.
At Proximity, you won’t just keep systems running — you’ll shape how reliability, observability, and operational excellence are built into the platform from the ground up.
Responsibilities
- Own day-2 production operations of a large-scale, AI-first platform running on cloud infrastructure
- Run, scale, and harden Kubernetes-based workloads integrated with a broad set of managed cloud services across data, messaging, AI, networking, and security
- Define, implement, and operate SLIs, SLOs, and error budgets across core platform and AI services (see the error-budget sketch after this list)
- Build and own observability end-to-end, including APM, infrastructure monitoring, logs, alerts, and operational dashboards
- Improve and maintain CI/CD pipelines and Terraform-driven infrastructure automation
- Operate and integrate AI platform services for LLM deployments and model lifecycle management
- Lead incident response, conduct blameless postmortems, and drive systemic reliability improvements
- Optimize cost, performance, and autoscaling for AI, ML, and data-intensive workloads
- Partner closely with backend, data, and ML engineers to ensure production readiness and operational best practices
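For the SLO item above, here is a minimal sketch of the error-budget arithmetic this role would operate day to day. The 99.9% target, window, and request counts are assumed values for illustration, not figures from this posting:

```python
# Hypothetical illustration of error-budget arithmetic for an availability SLO.
# The 99.9% target, window, and request counts are assumed, not from this posting.
SLO_TARGET = 0.999        # availability objective over the window
WINDOW_DAYS = 30

total_requests = 120_000_000   # assumed traffic over the window
failed_requests = 84_000       # assumed error (5xx) responses

availability = 1 - failed_requests / total_requests
error_budget = 1 - SLO_TARGET                           # allowed failure ratio
budget_consumed = (failed_requests / total_requests) / error_budget

print(f"availability:    {availability:.4%}")
print(f"error budget:    {error_budget:.3%} of requests may fail")
print(f"budget consumed: {budget_consumed:.1%} of the {WINDOW_DAYS}-day budget")
```

Multiplied out, a 99.9% objective over 30 days allows roughly 43 minutes of full downtime; burn-rate alerts page when consumption outpaces the window.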
What Matters (Non-Negotiable Alignment)
Infra owners, not operators.
This role is for engineers who design, build, and own infrastructure, not those limited to ticket-based operations.
- Built and operated production-grade cloud infrastructure end-to-end
- Strong Kubernetes experience in real, high-traffic production environments
- AWS experience is mandatory, with GCP as a strong plus
- Experience operating AI/ML workloads in production, including GPU-based systems
- Strong ownership of CI/CD systems and Infrastructure as Code
- End-to-end observability ownership: monitoring, logging, alerting, dashboards
- Comfortable making infrastructure decisions under ambiguity
- Proven ability to collaborate deeply with ML and backend teams to take systems from design → production → scale
Requirements
- 6+ years of hands-on experience in DevOps, SRE, or Platform Engineering roles.
- Strong, production-grade experience with cloud platforms: AWS required; GCP strongly preferred, especially Kubernetes and managed services
- Proven expertise running Kubernetes at scale in live production environments.
- Deep hands-on experience with New Relic in complex, distributed systems.
- Experience operating AI/ML or LLM-driven platforms in production environments.
- Solid background in Terraform, CI/CD systems, cloud networking, and security fundamentals.
- Strong understanding of reliability engineering principles, including capacity planning, failure modes, and resilience patterns (see the capacity sketch after this list).
- Comfortable owning production systems end-to-end with minimal supervision.
- Strong communication skills and the ability to operate calmly and effectively during incidents.
- Experience building internal platform tooling for developer productivity.
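As a companion to the capacity-planning requirement above, here is a hypothetical back-of-the-envelope sketch of sizing replicas for peak load plus zone-failure headroom; every input value is assumed:

```python
# Hypothetical capacity-planning sketch: how many pod replicas are needed
# to absorb peak traffic while surviving the loss of one availability zone.
# All inputs are assumed values for illustration.
import math

peak_rps = 12_000            # assumed peak requests per second
rps_per_pod = 250            # assumed sustainable throughput per replica
target_utilization = 0.6     # headroom so pods are not run at the redline
zones = 3                    # replicas are spread across three zones

# Replicas needed at peak, derated by target utilization.
base = math.ceil(peak_rps / (rps_per_pod * target_utilization))

# N+1 zone resilience: the surviving zones must still carry the base load,
# so provision base * zones / (zones - 1), rounded up to a multiple of zones.
resilient = math.ceil(base * zones / (zones - 1))
resilient += -resilient % zones  # round up so zones stay balanced

print(f"replicas at peak: {base}, with zone-failure headroom: {resilient}")
```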
Desired Skills
- Experience managing multi-cloud environments or cross-cloud integrations.
- Familiarity with cost optimization strategies for large-scale Kubernetes and AI workloads.
- Exposure to service meshes, advanced traffic management, or zero-trust security models.
Benefits
- Best-in-class compensation: We hire only the best, and we pay accordingly.
- Proximity Talks: Learn from senior engineers, platform leaders, and industry experts.
- Work on real-world AI systems: Operate and scale production AI platforms used at meaningful scale.
- Continuous learning: Grow alongside a high-caliber team that values operational excellence and engineering rigor.
About us
We are Proximity — a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge technology at scale.
Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be significant. You’ll work with experienced leaders who have built and led high-performing tech and platform teams.

About Proximity Works
Born in 2019, Proximity Works is a global, fully distributed tech firm headquartered in San Francisco - with hubs across Mumbai, Dubai, Toronto, Stockholm, and Bengaluru. We’re in the business of solving high-stakes engineering challenges with AI-powered solutions tailored for industries like sports, media & entertainment, fintech, and enterprise platforms. From real-time game analytics and ticketing workflows to creative content generation, we help build software that serves millions every day.
About the Founders
At the helm is Hardik Jagda, CEO, a technologist with startup DNA who brings clarity to complexity and a passion for building delightful experiences.
Milestones & Impact
- Trusted by some of the world’s biggest players - from major media & entertainment giants to one of the world’s largest cricket websites and the second-largest stock exchange in the world.
- Delivered game-changing tech: slashing content creation by 90%, doubling performance metrics for NASDAQ clients, and accelerating speed/performance wins for platforms like Dream11.
Culture & Why It Matters
- Fully distributed and flexible: work 100% remotely, design your own schedule, build habits that work for you and not the other way around.
- People-first culture: Community events, “Proxonaut battles,” monthly off-sites, and a liberal referral policy keep us connected even when we’re apart.
- High-trust environment: autonomy is encouraged. You’re empowered to act, learn fast, and iterate boldly. We know great work comes when talented people have space to think and create.
Similar jobs
ROLE & RESPONSIBILITIES:
We are hiring a Senior DevSecOps / Security Engineer with 8+ years of experience securing AWS cloud, on-prem infrastructure, DevOps platforms, MLOps environments, CI/CD pipelines, container orchestration, and data/ML platforms. This role is responsible for creating and maintaining a unified security posture across all systems used by DevOps and MLOps teams — including AWS, Kubernetes, EMR, MWAA, Spark, Docker, GitOps, observability tools, and network infrastructure.
KEY RESPONSIBILITIES:
1. Cloud Security (AWS)
- Secure all AWS resources consumed by DevOps/MLOps/Data Science: EC2, EKS, ECS, EMR, MWAA, S3, RDS, Redshift, Lambda, CloudFront, Glue, Athena, Kinesis, Transit Gateway, VPC Peering.
- Implement IAM least privilege, SCPs, KMS, Secrets Manager, SSO & identity governance.
- Configure AWS-native security: WAF, Shield, GuardDuty, Inspector, Macie, CloudTrail, Config, Security Hub.
- Harden VPC architecture, subnets, routing, SG/NACLs, multi-account environments.
- Ensure encryption of data at rest and in transit across all cloud services (a minimal audit sketch follows below).
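To make the cloud-security scope above concrete, here is a minimal, hypothetical audit sketch using boto3 (the AWS SDK for Python) that flags S3 buckets lacking default encryption or a complete public access block; it assumes AWS credentials are already configured:

```python
# Minimal, hypothetical sketch of the kind of AWS audit this role implies:
# flag S3 buckets without default encryption or a public access block.
# Assumes boto3 credentials are already configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no default encryption configured")
    try:
        block = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(block.values()):
            print(f"{name}: public access block is incomplete")
    except ClientError:
        print(f"{name}: no public access block configured")
```

In practice such checks would run on a schedule or via AWS Config rules rather than ad hoc.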
2. DevOps Security (IaC, CI/CD, Kubernetes, Linux)
Infrastructure as Code & Automation Security:
- Secure Terraform, CloudFormation, Ansible with policy-as-code (OPA, Checkov, tfsec).
- Enforce misconfiguration scanning and automated remediation (a toy scanner sketch follows below).
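As an illustration of the misconfiguration-scanning item above, here is a toy policy check in plain Python over Terraform's documented plan-JSON format; a real pipeline would use OPA, Checkov, or tfsec, and the single rule shown (no SSH open to the world) is just an example:

```python
# Toy policy-as-code sketch: scan a Terraform plan JSON (produced by
# `terraform show -json plan.out > plan.json`) for security groups that
# open SSH to the world. Resource shapes follow the documented plan format.
import json

with open("plan.json") as fh:
    plan = json.load(fh)

for change in plan.get("resource_changes", []):
    if change["type"] != "aws_security_group":
        continue
    after = (change.get("change") or {}).get("after") or {}
    for rule in after.get("ingress") or []:
        if rule.get("from_port") == 22 and "0.0.0.0/0" in (rule.get("cidr_blocks") or []):
            print(f"VIOLATION: {change['address']} allows SSH from 0.0.0.0/0")
```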
CI/CD Security:
- Secure Jenkins, GitHub, and GitLab pipelines with SAST, DAST, SCA, secrets scanning, and image scanning (a toy secrets-scan sketch follows below).
- Implement secure build, artifact signing, and deployment workflows.
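To illustrate the secrets-scanning item above, here is a toy pre-commit-style check; production pipelines would use a dedicated tool such as gitleaks, and the two patterns shown are illustrative, not exhaustive:

```python
# Toy secrets-scanning sketch: grep staged files for patterns that look like
# AWS access key IDs or private key headers. Real pipelines would use a
# dedicated scanner; these patterns are illustrative only.
import re
import subprocess

PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.split()

for path in staged:
    try:
        text = open(path, errors="ignore").read()
    except OSError:
        continue
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            print(f"BLOCK: possible {label} in {path}")
```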
Containers & Kubernetes:
- Harden Docker images, private registries, runtime policies.
- Enforce EKS security: RBAC, IRSA, PSP/PSS, network policies, runtime monitoring (a minimal audit sketch follows below).
- Apply CIS Benchmarks for Kubernetes and Linux.
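As a concrete taste of the Kubernetes-hardening items above, here is a minimal audit sketch using the official kubernetes Python client; it assumes a cluster reachable through the local kubeconfig and checks only two of the many controls a real policy engine (PSS, OPA Gatekeeper) would enforce:

```python
# Minimal, hypothetical audit using the official `kubernetes` Python client:
# flag containers that may run as root or allow privilege escalation.
# Pod-level securityContext is ignored here for brevity.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        sc = container.security_context
        run_as_non_root = bool(sc and sc.run_as_non_root)
        no_priv_esc = sc is not None and sc.allow_privilege_escalation is False
        if not (run_as_non_root and no_priv_esc):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}/"
                  f"{container.name}: weak security context")
```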
Monitoring & Reliability:
- Secure observability stack: Grafana, CloudWatch, logging, alerting, anomaly detection.
- Ensure audit logging across cloud/platform layers.
3. MLOps Security (Airflow, EMR, Spark, Data Platforms, ML Pipelines)
Pipeline & Workflow Security:
- Secure Airflow/MWAA connections, secrets, DAGs, execution environments.
- Harden EMR, Spark jobs, Glue jobs, IAM roles, S3 buckets, encryption, and access policies.
ML Platform Security:
- Secure Jupyter/JupyterHub environments, containerized ML workspaces, and experiment tracking systems.
- Control model access, artifact protection, model registry security, and ML metadata integrity.
Data Security:
- Secure ETL/ML data flows across S3, Redshift, RDS, Glue, Kinesis.
- Enforce data versioning security, lineage tracking, PII protection, and access governance.
ML Observability:
- Implement drift detection (data drift/model drift), feature monitoring, and audit logging (a PSI sketch follows below).
- Integrate ML monitoring with Grafana/Prometheus/CloudWatch.
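For the drift-detection item above, here is a self-contained sketch of the population stability index (PSI), one common data-drift statistic; the data is synthetic and the 0.2 alert threshold is a rule of thumb, not a value from this posting:

```python
# Hypothetical data-drift sketch: population stability index (PSI) between
# a training baseline and live traffic for one feature. Data is synthetic.
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    # Bin edges come from the baseline's quantiles; live values are clipped
    # into the baseline's range so every observation lands in a bin.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    expected, _ = np.histogram(baseline, bins=edges)
    actual, _ = np.histogram(np.clip(live, edges[0], edges[-1]), bins=edges)
    # Epsilon guards against empty bins (log of zero / division by zero).
    e = np.maximum(expected / expected.sum(), 1e-6)
    a = np.maximum(actual / actual.sum(), 1e-6)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 50_000)  # training-time feature values (synthetic)
prod = rng.normal(0.4, 1.1, 50_000)   # shifted production values (synthetic)
print(f"PSI = {psi(train, prod):.3f}  (> 0.2 is a common alert threshold)")
```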
4. Network & Endpoint Security
- Manage firewall policies, VPN, IDS/IPS, endpoint protection, secure LAN/WAN, Zero Trust principles.
- Conduct vulnerability assessments, penetration test coordination, and network segmentation.
- Secure remote workforce connectivity and internal office networks.
5. Threat Detection, Incident Response & Compliance
- Centralize log management (CloudWatch, OpenSearch/ELK, SIEM).
- Build security alerts, automated threat detection, and incident workflows.
- Lead incident containment, forensics, RCA, and remediation.
- Ensure compliance with ISO 27001, SOC 2, GDPR, HIPAA (as applicable).
- Maintain security policies, procedures, runbooks, and audits.
IDEAL CANDIDATE:
- 8+ years in DevSecOps, Cloud Security, Platform Security, or equivalent.
- Proven ability securing AWS cloud ecosystems (IAM, EKS, EMR, MWAA, VPC, WAF, GuardDuty, KMS, Inspector, Macie).
- Strong hands-on experience with Docker, Kubernetes (EKS), CI/CD tools, and Infrastructure-as-Code.
- Experience securing ML platforms, data pipelines, and MLOps systems (Airflow/MWAA, Spark/EMR).
- Strong Linux security (CIS hardening, auditing, intrusion detection).
- Proficiency in Python, Bash, and automation/scripting.
- Excellent knowledge of SIEM, observability, threat detection, monitoring systems.
- Understanding of microservices, API security, serverless security.
- Strong understanding of vulnerability management, penetration testing practices, and remediation plans.
EDUCATION:
- Master’s degree in Cybersecurity, Computer Science, Information Technology, or related field.
- Relevant certifications (AWS Security Specialty, CISSP, CEH, CKA/CKS) are a plus.
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits
Key Responsibilities:
- Develop and Maintain CI/CD Pipelines: Design, implement, and manage CI/CD pipelines using GitOps practices (a toy reconciliation loop follows this list).
- Kubernetes Management: Deploy, manage, and troubleshoot Kubernetes clusters to ensure high availability and scalability of applications.
- Cloud Infrastructure: Design, deploy, and manage cloud infrastructure on AWS, utilizing services such as EC2, S3, RDS, Lambda, and others.
- Infrastructure as Code: Implement and manage infrastructure using IaC tools like Terraform, CloudFormation, or similar.
- Monitoring and Logging: Set up and manage monitoring, logging, and alerting systems to ensure the health and performance of the infrastructure.
- Automation: Identify and automate repetitive tasks to improve efficiency and reliability.
- Security: Implement security best practices and ensure compliance with industry standards.
- Collaboration: Work closely with development, QA, and operations teams to ensure seamless integration and delivery of products.
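To ground the GitOps item in this list, here is a toy reconciliation loop showing the core idea (Git as the single source of truth, continuously applied); real teams would use Argo CD or Flux, and the repo path and interval here are assumptions:

```python
# Toy GitOps reconciliation loop: poll a manifests repo and apply whatever
# is committed, so the cluster converges on what Git declares.
import subprocess
import time

REPO_DIR = "/opt/gitops/manifests"   # assumed local clone of the config repo
INTERVAL_SECONDS = 60

def sync_once() -> None:
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    # `kubectl apply` is idempotent: unchanged manifests are a no-op.
    subprocess.run(["kubectl", "apply", "-R", "-f", REPO_DIR], check=True)

if __name__ == "__main__":
    while True:
        sync_once()
        time.sleep(INTERVAL_SECONDS)
```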
Required Skills and Qualifications:
- Experience: 2-5 years of experience in a DevOps role.
- AWS: In-depth knowledge of AWS services and solutions.
- CI/CD Tools: Experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar.
- GitOps Expertise: Proficient in GitOps methodologies and tools.
- Kubernetes: Strong hands-on experience with Kubernetes and container orchestration.
- Scripting and Automation: Proficient in scripting languages such as Bash, Python, or similar.
- Infrastructure as Code (IaC): Hands-on experience with IaC tools like Terraform, CloudFormation, or similar.
- Monitoring Tools: Familiarity with monitoring and logging tools like Prometheus, Grafana, ELK stack, or similar.
- Version Control: Strong understanding of version control systems, primarily Git.
- Problem-Solving: Excellent problem-solving and debugging skills.
- Collaboration: Ability to work in a fast-paced, collaborative environment.
- Education: Bachelor’s or master’s degree in computer science or a related field.
We are looking for a DevOps Lead to join our team.
Responsibilities
• A technology professional who understands software development and can solve IT operations and deployment challenges using software engineering tools and processes. This position requires an understanding of both software development (Dev) and deployment operations (Ops)
• Identify manual processes and automate them using various DevOps automation tools
• Maintain the organization’s growing cloud infrastructure
• Monitor and maintain DevOps environment stability
• Collaborate with distributed Agile teams to define technical requirements and resolve technical design issues
• Orchestrating builds and test setups using Docker and Kubernetes.
• Participate in designing and building Kubernetes, Cloud, and on-prem environments for maximum performance, reliability and scalability
• Share business and technical learnings with the broader engineering and product organization, while adapting approaches for different audiences
Requirements
• Candidates working for this position should possess at least 5 years of work experience as a DevOps Engineer.
• Candidate should have experience in ELK stack, Kubernetes, and Docker.
• Solid experience in the AWS environment.
• Should have experience in monitoring tools like Datadog or New Relic.
• Minimum of 5 years experience with code repository management, code merge and quality checks, continuous integration, and automated deployment & management using tools like Jenkins, SVN, Git, Sonar, and Selenium.
• Candidates must possess ample knowledge and experience in system automation, deployment, and implementation.
• Candidates must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
• The candidates should also possess experience in the software development process and in tools and languages like SaaS, Python, Java, MongoDB, shell scripting, PostgreSQL, and Git.
• Candidates should demonstrate knowledge in handling distributed data systems, for example Elasticsearch, Cassandra, Hadoop, and others.
• Should have experience in GitLab CI.
Roles and Responsibilities
Experience: 8-10 years
Notice Period: 15 days maximum
Must-haves:
1. Knowledge about Database/NoSQL DB hosting fundamentals (RDS multi-AZ, DynamoDB, MongoDB, and such)
2. Knowledge of different storage platforms on AWS (EBS, EFS, FSx) - mounting persistent volumes with Docker Containers
3. In-depth knowledge of security principles on AWS (WAF, DDoS protection, Security Groups, NACLs, IAM groups, and SSO)
4. Knowledge of CI/CD platforms (Jenkins, GitHub Actions, etc.) is required, including migration of AWS CodePipeline to GitHub Actions
5. Knowledge of a wide variety of AWS services (SNS, SES, SQS, Athena, Kinesis, S3, ECS, EKS, etc.) is required
6. Knowledge of an Infrastructure as Code tool is required. We use CloudFormation (Terraform is a plus); ideally, we would like to migrate from CloudFormation to Terraform
7. Setting up CloudWatch alarms and SMS/email/Slack alerts (a minimal alarm sketch follows this list)
8. Some knowledge of configuring a monitoring tool such as Prometheus, Dynatrace, etc. (we currently use Datadog and CloudWatch)
9. Experience with any CDN provider configurations (Cloudflare, Fastly, or CloudFront)
10. Experience with either Python or Go as a scripting language
11. Experience with Git branching strategies
12. Experience hosting containers on both Windows and Linux
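As a sketch of must-have 7, here is a hypothetical boto3 call that creates a CloudWatch alarm on ALB 5xx errors and routes it to an SNS topic (which can fan out to SMS, email, or a Slack webhook); the topic ARN, load balancer dimension, and thresholds are placeholders:

```python
# Hypothetical CloudWatch alarm: page when ALB 5xx responses spike.
# ARN, dimension value, and thresholds are assumed placeholder values.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="alb-5xx-spike",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=60,                      # evaluate one-minute windows
    EvaluationPeriods=5,
    Threshold=50,                   # >50 5xx responses per minute
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```

Routing AlarmActions through an SNS topic keeps the alarm decoupled from the notification channel; email, SMS, and webhook subscribers can be attached to the topic independently.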
Nice to have:
1. Integration experience with Code Quality tools (SonarQube, NetSparker, etc) with CI/CD
2. Kubernetes
3. CDN's other than CloudFront (Cloudflare, Fastly, etc)
4. Collaboration with multiple teams
5. GitOps
Ask any CIO about corporate data and they’ll happily share all the work they’ve done to make their databases secure and compliant. Ask them about other sensitive information, like contracts, financial documents, and source code, and you’ll probably get a much less confident response. Few organizations have any insight into business-critical information stored in unstructured data.
There was a time when that didn’t matter. Those days are gone. Data is now accessible, copious, and dispersed, and it includes an alarming amount of business-critical information. It’s a target for both cybercriminals and regulators but securing it is incredibly difficult. It’s the data challenge of our generation.
Existing approaches aren’t doing the job. Keyword searches produce a bewildering array of possibly relevant documents that may or may not be business critical. Asking users to categorize documents requires extensive training and constant vigilance to make sure users are doing their part. What’s needed is an autonomous solution that can find and assess risk so you can secure your unstructured data wherever it lives.
That’s our mission. Concentric’s semantic intelligence solution reveals the meaning in your structured and unstructured data so you can fight off data loss and meet compliance and privacy mandates.
Check out our core cultural values and behavioural tenets here: https://concentric.ai/the-concentric-tenets-daily-behavior-to-aspire-to/
Title: Cloud DevOps Engineer
Role: Individual Contributor (4-8 yrs)
Requirements:
- Energetic self-starter, a fast learner, with a desire to work in a startup environment
- Experience working with public clouds like AWS.
- Operating and monitoring cloud infrastructure on AWS.
- Primary focus on building, implementing, and managing operational support.
- Design, develop, and troubleshoot automation scripts (configuration/infrastructure as code or others) for managing infrastructure.
- Expert in one of the scripting languages: Python, shell, etc.
- Experience with Nginx/HAProxy, ELK Stack, Ansible, Terraform, the Prometheus-Grafana stack, etc.
- Handling load monitoring, capacity planning, and services monitoring.
- Proven experience with CI/CD pipelines and handling database upgrade-related issues.
- Good understanding of and experience working with containerized environments like Kubernetes and datastores like Cassandra, Elasticsearch, MongoDB, etc.
About BootLabs
https://www.bootlabs.in/
- We are a boutique tech consulting partner, specializing in cloud-native solutions.
- We are obsessed with anything “CLOUD”. Our goal is to seamlessly automate the development lifecycle and modernize infrastructure and its associated applications.
- With a product mindset, we enable start-ups and enterprises on cloud transformation, cloud migration, end-to-end automation, and managed cloud services.
- We are eager to research, discover, automate, adapt, empower, and deliver quality solutions on time.
- We are passionate about customer success. With the right blend of experience and exuberant youth in our in-house team, we have significantly impacted customers.
Technical Skills:
• Expertise in any one hyperscaler (AWS/Azure/GCP), including basic services like networking, data, and workload management.
- AWS
Networking: VPC, VPC Peering, Transit Gateway, Route Tables, Security Groups, etc.
Data: RDS, DynamoDB, Elasticsearch
Workload: EC2, EKS, Lambda, etc.
- Azure
Data: Azure MySQL, Azure MSSQL, etc.
Workload: AKS, Virtual Machines, Azure Functions
- GCP
Data: Cloud Storage, DataFlow, Cloud SQL, Firestore, BigTable, BigQuery
Workload: GKE, Instances, App Engine, Batch, etc.
• Experience in any one of the CI/CD tools (GitLab/GitHub/Jenkins), including runner setup, templating, and configuration.
• Kubernetes experience (EKS/AKS/GKE) or Ansible experience, covering basics like pods, deployments, networking, and service mesh; has used a package manager like Helm.
• Scripting experience (Bash/Python), automation in pipelines when required, system services.
• Infrastructure automation (Terraform/Pulumi/CloudFormation): write modules, set up pipelines, and version the code.
Optional:
• Experience in any programming language is not required but is appreciated.
• Good experience in Git, SVN, or any other code management tool is required.
• DevSecOps tools (Qualys/SonarQube/BlackDuck) for security scanning of artifacts, infrastructure, and code.
• Observability tools (open source: Prometheus, Elasticsearch, OpenTelemetry; paid: Datadog, 24/7, etc.)
DevOps Engineer
Job Description:
The position requires a broad set of technical and interpersonal skills that includes deployment technologies, monitoring, and scripting, from networking to infrastructure. The candidate should be well versed in troubleshooting production issues and able to drive them through to root cause analysis (RCA).
Skills:
- Manage VMs across multiple datacenters and AWS to support dev/test and production workloads.
- Strong hands-on experience with Ansible is preferred.
- Strong knowledge and hands-on experience in Kubernetes Architecture and administration.
- Should have core knowledge in Linux and System operations.
- Proactively and reactively resolve incidents as escalated from monitoring solutions and end users.
- Conduct and automate audits for network and systems infrastructure.
- Do software deployments, per documented processes, with no impact to customers.
- Follow existing devops processes while having flexibility to create and tweak processes to gain efficiency.
- Troubleshoot connectivity problems across network, systems or applications.
- Follow security guidelines, both policy and technical to protect our customers.
- Ability to automate recurring tasks to increase velocity and quality.
- Should have worked with at least one database (Postgres/Mongo/CockroachDB/Cassandra).
- Should have knowledge and hands-on experience in managing ELK clusters.
- Scripting knowledge in Shell/Python is an added advantage.
- Hands-on experience with K8s-based microservice architecture is an added advantage.
- You will manage all elements of the post-sale program relationship with your customers, starting with customer on-boarding and continuing throughout the customer relationship.
- As the primary customer interface, you engage with customer teams to educate, identify needs, develop designs, set goals, manage and execute on plans that unlock continuous, incremental value from their investments in the CloudPassage Halo platform.
- You are hands-on during execution and thoroughly enjoy seeing your security projects come to life and supporting them afterwards. You are a trusted adviser.
Responsibilities:
- Manage a portfolio of 5+ Enterprise customer accounts with complex needs (typical enterprise customers invest between $500k and $4m+ per year with CloudPassage, have hundreds to tens of thousands of individual public cloud infrastructure deployments, and protect hundreds of thousands of cloud infrastructure assets with Halo).
- Provide level-3 technical support on your customer's most complex issues
- Lead implementation of low-level security controls in Cloud environments, for services, server and containers
- Remotely diagnose and resolve DevSecOps issues in customer environments, including DevOps issues that may be interfering with CloudPassage processing.
- Interact with CloudPassage Engineering team by providing customer issue reproduction and data capture, technical diagnostics and validating fixes. QA experience preferred.
- Establish and program manage proactive, value-driven, high-touch relationships with your customers to understand, document and align customer strategies, business objectives, designs, processes and projects with Halo platform capabilities and broader CloudPassage services.
- Develop a trusted advisor relationship by building and maintaining appropriate relationships at all levels with your customer accounts, creating a premium and high-caliber experience.
- Ensure continued satisfaction, identify & confirm unaddressed customer needs that can be value-add opportunities for up-sell and cross-sell, and communicate those needs to the CloudPassage sales team. Identify any early CSAT issues and renewal risks and work with the internal team to remediate and ensure strong CSAT and a successful renewal.
- Be a strong customer advocate within CloudPassage and identify and support areas for improvement in the customer experience, both in our product and processes.
- Be team-oriented, but with a bias towards action to get things done for your customers.
Requirements: Strong cloud security knowledge & experience, including:
- End-to-end enterprise security processes
- Cloud security - cloud migrations & shift in security requirements, tooling & approach
- Hands-on DevOps, DevSecOps architecture & automation (critical)
- 4+ years experience in security consulting and project/program management serving cybersecurity customers.
- Complex, level 3 technical support
- Remotely diagnosing & resolving DevSecOps issues in customer environments
- Interacting with CloudPassage Engineering team with customer issue reproduction
- Experience working in a security SaaS company in a startup environment.
- Experience working with Executive and C-Level teams.
- Ability to build and maintain strong relationships with internal and external constituents.
- Excellent organization, project management, time management, and communication skills.
- Understand and document customer requirements, map to product, track & report metrics, identify up-sell and cross-sell opportunities.
- Analytical both quantitatively and qualitatively.
- Excellent verbal and written communication skills.
- Security certifications (Security+, CISSP, etc.).
Expert Technical Skills:
- Consulting and project management: documenting project charters, project plans, executing delivery management, status reporting. Executive-level presentation skills.
- Security best practices expertise: software vulnerabilities, configuration management, intrusion detection, file integrity.
- System administration (including Linux and Windows) of cloud environments: AWS, Azure, GCP; strong networking/proxy skills.
Proficient Technical Skills:
- Configuration/orchestration (Chef, Puppet, Ansible, SaltStack, CloudFormation, Terraform).
- CI/CD processes and environments.
Familiar Technical Skills & Knowledge: Python scripting & REST APIs, Docker containers, Zendesk & JIRA.
What we are looking for
- Work closely with product & engineering groups to identify and document infrastructure requirements.
- Design infrastructure solutions balancing requirements, operational constraints, and architecture guidelines.
- Implement infrastructure including network connectivity, virtual machines, and monitoring.
- Implement and follow security guidelines, both policy and technical, to protect our customers.
- Resolve incidents as escalated from monitoring solutions and lower tiers.
- Identify root causes for issues and develop long-term solutions to fix recurring issues.
- Automate recurring tasks to increase velocity and quality.
- Partner with the engineering team to build software tolerance for infrastructure failure or issues.
- Research emerging technologies, trends, and methodologies and enhance existing systems and processes.
Qualifications
- Master’s/Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field, and two years of experience in software/systems or related.
- 5+ years overall experience.
Work experience must have included:
- Proven track record in deploying, configuring, and maintaining Ubuntu server systems on premise and in the cloud.
- Minimum of 4 years’ experience designing, implementing, and troubleshooting TCP/IP networks, VPNs, load balancers, and firewalls.
- Minimum of 3 years of experience working in public clouds like AWS & Azure.
- Hands-on experience with any of the configuration management tools like Ansible, Chef, and Puppet.
- Strong in performing production operations activities.
- Experience with containers and container orchestration tools like Kubernetes and Docker Swarm is a plus.
- Good at source code management tools like Bitbucket and Git.
- Configuring and utilizing monitoring and alerting tools.
- Scripting to automate infrastructure and operational processes.
- Hands-on work to secure networks and systems.
- Sound problem resolution, judgment, negotiating, and decision-making skills.
- Ability to manage and deliver multiple project phases at the same time.
- Strong analytical and organizational skills.
- Excellent written and verbal communication skills.
Interview focus areas
- Networks, systems, monitoring
- AWS (EC2, S3, VPC)
- Problem solving, scripting, network design, systems administration, and troubleshooting scenarios
- Culture fit, agility, bias for action, ownership, communication