11+ Routers Jobs in Chennai | Routers Job openings in Chennai
Apply to 11+ Routers jobs in Chennai on CutShort.io. Explore the latest Routers job openings across top companies like Google, Amazon & Adobe.
Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
• 2–5 years of experience in any programming language (polyglot preferred) and system operations
• Awareness of DevOps & Agile methodologies
• Proficient in leveraging CI and CD tools to automate testing and deployment
• Experience working in an agile, fast-paced environment
• Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure)
• Cloud networking knowledge: should understand VPCs, NATs, and routers (see the sketch after this list)
• Contributions to open source are a plus
• Good written communication skills are a must; contributions to technical blogs / whitepapers will be an added advantage
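For the cloud-networking item above, here is a minimal, illustrative sketch (assuming the boto3 library and already-configured AWS credentials; the region is a placeholder) of inspecting VPCs, NAT gateways, and route tables:

```python
# Minimal sketch: inspect VPCs, NAT gateways, and route tables with boto3.
# Assumes AWS credentials are configured (env vars or ~/.aws/credentials);
# the region is illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# List VPCs and their CIDR blocks.
for vpc in ec2.describe_vpcs()["Vpcs"]:
    print(f"VPC {vpc['VpcId']}: {vpc['CidrBlock']}")

# List NAT gateways and their state.
for nat in ec2.describe_nat_gateways()["NatGateways"]:
    print(f"NAT {nat['NatGatewayId']}: {nat['State']} in {nat['SubnetId']}")

# Show each route table's routes (destination -> target).
for table in ec2.describe_route_tables()["RouteTables"]:
    print(f"Route table {table['RouteTableId']}:")
    for route in table["Routes"]:
        dest = route.get("DestinationCidrBlock", "?")
        target = route.get("GatewayId") or route.get("NatGatewayId") or "local"
        print(f"  {dest} -> {target}")
```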

AI & ML (45 Days – Live Hands‑On)
Program Fee: ₹25,000
Duration: 45 Days
Mode: Hybrid (Online Sessions + Live Lab Access)
Eligibility: Freshers, Final‑Year Students, and Career Switchers
About the Internship
This intensive 45‑day internship program is designed for freshers who want to build strong, industry‑relevant skills in Cloud Infrastructure, Cybersecurity, and AI/ML model development. The program offers hands‑on training on live production systems, allowing interns to work on real-world architectures, security workflows, and AI/ML deployments.
Participants will receive end‑to‑end exposure to modern cloud platforms, DevOps practices, security operations, and machine learning deployment, making them job‑ready for roles like:
- Cloud/Infra Engineer
- DevOps Engineer
- Security Analyst
- AI/ML Engineer
- Site Reliability Engineer (SRE)
Key Highlights
- 45 days of practical, mentor-led training
- Live production-style projects and deployments
- Hands‑on experience with:
  - AWS / Azure Cloud
  - Terraform, CI/CD, Docker, Kubernetes
  - Security Hardening & IAM
  - Python ML pipelines & model deployment
- Architect & deploy real systems using best practices
- Build portfolio-ready projects
- Receive an industry-recognized Internship Certificate
What You Will Learn
1️⃣ Cloud Infrastructure & DevOps
- Cloud fundamentals (AWS/Azure/GCP)
- Linux administration & scripting
- VPC, Subnets, Routing, NAT, Firewalls
- EC2 provisioning & autoscaling (see the sketch after this list)
- Load balancing & High Availability
- Terraform for Infrastructure as Code
- CI/CD pipelines using Jenkins / GitHub Actions
- Docker containerization & Kubernetes basics
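As a taste of the EC2 provisioning item above, here is a minimal boto3 sketch (the AMI ID, region, and tag values are placeholders, not part of the program's actual labs):

```python
# Minimal sketch of programmatic EC2 provisioning with boto3.
# The AMI ID, region, and tag values are placeholders, not real resources.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "intern-lab-demo"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# Wait until the instance is running before using it.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```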
2️⃣ Cybersecurity & Cloud Security
- IAM roles, policies, access control
- Server security & hardening
- SSL/TLS, encryption, key management (see the sketch after this list)
- Secure VPC & subnet design
- Threat detection & logging
- Secrets management
- Network segmentation & firewall best practices
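For the encryption and key-management item above, a minimal sketch using the `cryptography` package (the plaintext is illustrative; a real setup would fetch the key from a secrets manager rather than generating it in code):

```python
# Minimal sketch of symmetric encryption with the `cryptography` package
# (pip install cryptography). The secret value is illustrative.
from cryptography.fernet import Fernet

# In production the key comes from a secrets manager, never from source code.
key = Fernet.generate_key()
fernet = Fernet(key)

token = fernet.encrypt(b"database-password-example")
print(token)

# Decryption fails loudly (InvalidToken) if the key or token is wrong.
print(fernet.decrypt(token))
```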
3️⃣ AI & Machine Learning
- Python for ML
- Supervised and unsupervised algorithms
- Data preprocessing & model training
- Model evaluation & optimization
- Build ML inference APIs using FastAPI/Flask (see the sketch after this list)
- Containerize and deploy ML models to cloud
- Integrate monitoring for ML workflows
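For the FastAPI item above, a minimal, self-contained inference-API sketch (the inline scikit-learn model is a stand-in for whatever model is actually trained during the program):

```python
# Minimal sketch of an ML inference API with FastAPI. A tiny scikit-learn
# classifier is trained inline so the example is self-contained; in practice
# you would load a serialized model instead.
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
model = LogisticRegression(max_iter=1000).fit(iris.data, iris.target)

app = FastAPI()

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: Features):
    pred = model.predict([[f.sepal_length, f.sepal_width,
                           f.petal_length, f.petal_width]])[0]
    return {"class": str(iris.target_names[pred])}

# Run with: uvicorn app:app --reload
```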
✅ Live Hands‑On Projects
Interns will work on real-world, production-grade projects such as:
- Deploy a secure 3‑tier web application on cloud
- Automate infra provisioning using Terraform
- Build CI/CD pipelines for automated deployments
- Harden servers & configure security groups, IAM
- Develop and deploy an ML model as a cloud API
- Create monitoring dashboards with Prometheus/Grafana (a metrics sketch follows below)
- End-to-end system deployment with logging and alerting
Each intern will complete a Capstone Project and present it during the final evaluation.
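For the Prometheus/Grafana item above, a minimal sketch exposing custom metrics with the `prometheus_client` library (metric names and values are illustrative; Prometheus would scrape the endpoint and Grafana would chart it):

```python
# Minimal sketch of exposing custom metrics for Prometheus/Grafana using
# prometheus_client (pip install prometheus-client). Metric names are examples.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("demo_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("demo_queue_depth", "Items waiting in the queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 10))  # simulated workload
        time.sleep(1)
```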
✅ Internship Deliverables
- Internship Completion Certificate
- 3+ Production‑level projects
- GitHub portfolio with all code and deployments
- Cloud & ML documentation
- Resume enhancement and guidance
- Career mentoring + interview preparation
✅ Who Should Apply
This internship is ideal for:
- Fresh graduates
- Final‑year engineering or IT students
- BSc, BCA, MCA, B.Tech learners
- Professionals switching careers to Cloud/DevOps/AI
- Anyone seeking hands‑on, real‑time industry experience
✅ Program Fee
₹25,000/- (includes training, labs, live‑project access, certificate, and mentorship)
✅ Certificate Provided
All participants will receive a verified Internship Certificate, including:
- Candidate Name
- Internship Duration & Dates
- Skills Covered
- Project Evaluation Score
- Authorized Signatory & Company Seal
Job Title: AWS DevOps Engineer
Experience Level: 5+ Years
Location: Bangalore, Pune, Hyderabad, Chennai and Gurgaon
Summary:
We are looking for a hands-on Platform Engineer with strong execution skills to provision and manage cloud infrastructure. The ideal candidate will have experience with Linux, AWS services, Kubernetes, and Terraform, and should be capable of troubleshooting complex issues in cloud and container environments.
Key Responsibilities:
- Provision AWS infrastructure using Terraform (IaC).
- Manage and troubleshoot Kubernetes (EKS) and container (ECS) clusters (see the sketch after this list).
- Work with core AWS services: VPC, EC2, S3, RDS, Lambda, ALB, WAF, and CloudFront.
- Support CI/CD pipelines using Jenkins and GitHub.
- Collaborate with teams to resolve infrastructure and deployment issues.
- Maintain documentation of infrastructure and operational procedures.
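For the Kubernetes troubleshooting item above, a minimal sketch using the official Kubernetes Python client (assumes a kubeconfig already pointing at the cluster, e.g. one written by `aws eks update-kubeconfig`):

```python
# Minimal troubleshooting sketch with the official Kubernetes Python client
# (pip install kubernetes). Assumes ~/.kube/config targets your EKS cluster.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
v1 = client.CoreV1Api()

# Flag pods that are not Running or Succeeded, across all namespaces.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```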
Required Skills:
- 3+ years of hands-on experience in AWS infrastructure provisioning using Terraform.
- Strong Linux administration and troubleshooting skills.
- Experience managing Kubernetes clusters.
- Basic experience with CI/CD tools like Jenkins and GitHub.
- Good communication skills and a positive, team-oriented attitude.
Preferred:
- AWS Certification (e.g., Solutions Architect, DevOps Engineer).
- Exposure to Agile and DevOps practices.
- Experience with monitoring and logging tools.
Job Description
What does a successful Senior DevOps Engineer do at Fiserv?
This role focuses on contributing to and enhancing our DevOps environment within the Issuer Solutions group, where our cross-functional Scrum teams deliver solutions built on cutting-edge mobile technology and products. You will be expected to provide support across the wider business unit, leading DevOps practices and initiatives.
What will you do:
• Build, manage, and deploy CI/CD pipelines.
• Work with Helm charts, Rundeck, and OpenShift.
• Strive for continuous improvement and build out the continuous integration, continuous delivery, and continuous deployment pipeline.
• Implement various development, testing, and automation tools, as well as IT infrastructure.
• Optimize and automate release/development cycles and processes.
• Be part of and help promote our DevOps culture.
• Identify and implement continuous improvements to the development practice
What you must have:
• 3+ years of experience in DevOps, with hands-on experience in the following:
- Writing automation scripts for deployments and housekeeping using shell scripts (Bash) and Ansible playbooks
- Building Docker images and running/managing Docker instances (see the sketch after this list)
- Building Jenkins pipelines using Groovy scripts
- Working knowledge of Kubernetes, including application deployments, managing application configurations, and persistent volumes
• A good understanding of infrastructure as code
• Ability to write and update documentation
• A logical, process-oriented approach to problems and troubleshooting
• Ability to collaborate with multiple development teams
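For the Docker item above, a minimal sketch using the Docker SDK for Python (assumes a local Docker daemon and a Dockerfile in the working directory; the image tag is illustrative):

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# Assumes a running local Docker daemon and a Dockerfile in ".".
import docker

client = docker.from_env()

# Build an image from ./Dockerfile.
image, build_logs = client.images.build(path=".", tag="demo-app:latest")

# Run it detached and inspect its state, as when managing instances.
container = client.containers.run("demo-app:latest", detach=True)
container.reload()
print(container.name, container.status)

container.stop()
container.remove()
```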
What we would prefer you to have:
• 8+ years of development experience
• Jenkins administration experience
• Hands-on experience in building and deploying helm charts
Process Skills:
• Should have worked on Agile projects
Behavioral Skills:
• Good communication skills
Skills
PRIMARY COMPETENCY: Cloud Infra | PRIMARY SKILL: DevOps | PRIMARY SKILL PERCENTAGE: 100
The candidates should have:
· Strong knowledge of Windows and Linux OS
· Experience working with version control systems like Git
· Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK
· Basic understanding of SQL commands (see the sketch after this list)
· Experience working with Azure DevOps
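For the SQL item above, a minimal sketch of the basic commands using Python's built-in sqlite3, so it runs anywhere (table and data are illustrative):

```python
# Minimal sketch of basic SQL commands via Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE builds (id INTEGER PRIMARY KEY, status TEXT)")
cur.executemany("INSERT INTO builds (status) VALUES (?)",
                [("passed",), ("failed",), ("passed",)])
conn.commit()

cur.execute("SELECT status, COUNT(*) FROM builds GROUP BY status")
print(cur.fetchall())  # e.g. [('failed', 1), ('passed', 2)]

conn.close()
```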

Technology-based supply chain management
Strong DevOps experience of at least 4 years
Strong experience in Unix/Linux/Python scripting
Strong networking knowledge; vSphere networking stack knowledge desired
Experience with Docker and Kubernetes
Experience with cloud technologies (AWS/Azure)
Exposure to continuous integration/delivery tools such as Jenkins or Spinnaker
Exposure to configuration management systems such as Ansible
Knowledge of resource monitoring systems (see the sketch after this list)
Ability to scope and estimate
Strong verbal and written communication skills
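For the resource-monitoring item above, a minimal sketch using the `psutil` library (thresholds are illustrative; a real system would ship these values to a monitoring backend instead of printing):

```python
# Minimal resource-monitoring sketch with psutil (pip install psutil).
import psutil

cpu = psutil.cpu_percent(interval=1)     # % CPU over a 1-second sample
mem = psutil.virtual_memory().percent    # % RAM in use
disk = psutil.disk_usage("/").percent    # % of root filesystem used

# Illustrative alert thresholds.
for name, value, limit in [("cpu", cpu, 80), ("mem", mem, 90), ("disk", disk, 85)]:
    state = "ALERT" if value > limit else "ok"
    print(f"{name}: {value:.1f}% [{state}]")
```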
Advanced knowledge of Docker and Kubernetes.
Exposure to Blockchain-as-a-Service (BaaS) platforms such as Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, or VMware.
Capable of provisioning and maintaining local enterprise blockchain platforms for development and QA (Hyperledger Fabric, BaaS, Corda, Ethereum).
Intuitive is the fastest-growing top-tier Cloud Solutions and Services company, supporting global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting practice for Cloud & DevOps. This is an excellent opportunity to join Intuitive’s world-class global technology teams, working with some of the best and brightest engineers, while developing your skills and furthering your career working with some of the largest customers.
Job Description :
- Extensive experience with K8s (EKS/GKE) and k8s ecosystem tooling, e.g., Prometheus, ArgoCD, Grafana, Istio, etc.
- Extensive AWS/GCP Core Infrastructure skills
- Infrastructure/IaC automation and integration - Terraform
- Kubernetes resources engineering and management
- Experience with DevOps tools, CICD pipelines and release management
- Good at creating documentation (runbooks, design documents, implementation plans)
Linux Experience:
- Namespace
- Virtualization
- Containers
Networking Experience
- Virtual networking (see the sketch after this list)
- Overlay networks
- VXLANs, GRE
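For the virtual-networking item above, a minimal Linux sketch creating a network namespace joined to the host by a veth pair (requires root; names and addresses are illustrative, and it simply wraps the standard `ip` CLI):

```python
# Minimal sketch of Linux virtual networking: a network namespace connected
# to the host via a veth pair. Requires root on Linux.
import subprocess

def ip(*args):
    subprocess.run(["ip", *args], check=True)

ip("netns", "add", "demo")                                   # new namespace
ip("link", "add", "veth-host", "type", "veth",
   "peer", "name", "veth-demo")                              # veth pair
ip("link", "set", "veth-demo", "netns", "demo")              # move one end
ip("addr", "add", "10.10.0.1/24", "dev", "veth-host")
ip("link", "set", "veth-host", "up")
ip("netns", "exec", "demo", "ip", "addr", "add", "10.10.0.2/24", "dev", "veth-demo")
ip("netns", "exec", "demo", "ip", "link", "set", "veth-demo", "up")

# The namespace should now answer pings from the host.
subprocess.run(["ping", "-c", "1", "10.10.0.2"], check=True)

# Cleanup: deleting the namespace also removes the veth pair.
ip("netns", "del", "demo")
```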
Kubernetes Experience:
Should have experience bringing up a Kubernetes cluster manually, without using the kubeadm tool.
Observability:
Experience in observability is a plus.
Cloud automation:
Familiarity with cloud platforms, specifically AWS, and DevOps tools like Jenkins, Terraform, etc.
As an MLOps Engineer at QuantumBlack you will:
- Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
- Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code and release to production.
- Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
- Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects.
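Since the stack above includes MLflow, here is a minimal experiment-tracking sketch (the experiment name, parameters, and metric value are illustrative; by default runs land in a local ./mlruns directory):

```python
# Minimal sketch of MLflow experiment tracking (pip install mlflow).
import mlflow

mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)
    # In a real pipeline this would be a computed validation score.
    mlflow.log_metric("val_accuracy", 0.93)

# Inspect runs in a browser with: mlflow ui
```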
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
Job description
The ideal candidate is self-motivated, a multi-tasker, and a demonstrated team player. You will be a lead developer responsible for developing new software security policies and enhancing security on existing products. You should excel at working with large-scale applications and frameworks and have outstanding communication and leadership skills.
Responsibilities
- Consulting with management on the operational requirements of software solutions.
- Contributing expertise on information system options, risk, and operational impact.
- Mentoring junior software developers in gaining experience and assuming DevOps responsibilities.
- Managing the installation and configuration of solutions.
- Collaborating with developers on software requirements, as well as interpreting test stage data.
- Developing interface simulators and designing automated module deployments.
- Completing code and script updates, as well as resolving product implementation errors.
- Overseeing routine maintenance procedures and performing diagnostic tests.
- Documenting processes and monitoring performance metrics.
- Conforming to best practices in network administration and cybersecurity.
Qualifications
- Minimum of 2 years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other AWS services.
- Experience building a multi-region, highly available, auto-scaling infrastructure that optimises performance and cost; planning for future infrastructure as well as maintaining and optimising existing infrastructure.
- Conceptualise, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
- Conceptualise, architect, and build containerised infrastructure using Docker, Mesosphere, or similar platforms.
- Conceptualise, architect, and build a secure network utilising VPCs, with inputs from the security team.
- Work with developers and QA to institute a policy of continuous integration with automated testing. Architect, build, and manage dashboards that provide visibility into delivery and into the functional and performance status of production applications.
- Work with developers to institute systems, policies, and workflows that allow for rollback of deployments. Triage the release of applications to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Assist the developers and on calls for other teams with post mortem, follow up and review of issues affecting production availability.
- Minimum 2 years’ experience in Ansible.
- Must have written playbooks to automate provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have had prior experience automating deployments to production and lower environments.
- Experience with APM tools like New Relic and log management tools.
- Our entire platform is hosted on AWS, comprising web applications, web services, RDS, Redis, and Elasticsearch clusters, and several other AWS resources like EC2, S3, CloudFront, Route 53, and SNS.
- Essential functions: system architecture, process design, and implementation.
- Minimum of 2 years of scripting experience in Ruby/Python (preferred) and shell; web application deployment systems; continuous integration tools (Ansible); establishing and enforcing network security policy (AWS VPC, security groups) and ACLs.
- Establishing and enforcing systems monitoring tools and standards
- Establishing and enforcing Risk Assessment policies and standards
- Establishing and enforcing Escalation policies and standards
- Hands-on experience provisioning AWS services like EC2, S3, EBS, AMI, VPC, ELB, RDS, Auto Scaling groups, and CloudFormation (see the sketch after this list).
- Good experience with the build and release process; extensively involved in CI/CD using Jenkins.
- Experienced with configuration management tools like Ansible.
- Designing, implementing, and supporting fully automated Jenkins CI/CD pipelines.
- Extensively worked on Jenkins for continuous integration and for end-to-end automation of all builds and deployments.
- Proficient with Docker-based container deployments to create self-service environments for dev teams and to containerise environment delivery for releases.
- Experience working with Docker Hub, creating Docker images, and handling multiple images, primarily for middleware installations and domain configuration.
- Good knowledge of version control with Git and GitHub.
- Good experience with build tools.
- Implemented CI/CD pipelines using Jenkins, Ansible, Docker, Kubernetes, YAML, and manifests.
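For the AWS provisioning item flagged above, a minimal boto3 sketch creating an encrypted, non-public S3 bucket (the bucket name and region are placeholders; bucket names must be globally unique):

```python
# Minimal sketch of provisioning a locked-down S3 bucket with boto3.
# Bucket name and region are placeholders.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

s3.create_bucket(
    Bucket="example-team-artifacts-bucket",
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Enforce default server-side encryption for all objects.
s3.put_bucket_encryption(
    Bucket="example-team-artifacts-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)

# Block all public access, a common security baseline.
s3.put_public_access_block(
    Bucket="example-team-artifacts-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```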
Job Description:
○ Develop best practices for the team; responsible for architecture solutions and documentation operations in order to meet the engineering department's quality standards
○ Participate in production outages, handle complex issues, and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Good knowledge of Terraform; someone who has worked on large TF code bases (see the sketch after this list).
○ Deep understanding of Terraform, with best practices for writing TF modules.
○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), as well as EKS, S3, and IAM. A cost-aware mindset towards cloud services.
○ Deep understanding of kernel, networking, and OS fundamentals
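For the large-Terraform-codebase item above, a minimal hygiene sketch that walks a hypothetical module tree and runs `terraform fmt -check` and `terraform validate` in each module (the directory layout is an assumption; requires the terraform CLI on PATH):

```python
# Minimal sketch for keeping a large Terraform codebase healthy: treat any
# directory containing .tf files as a module and check each one.
import pathlib
import subprocess

ROOT = pathlib.Path("infrastructure")  # hypothetical repo layout

modules = sorted({p.parent for p in ROOT.rglob("*.tf")})

for module in modules:
    print(f"== {module} ==")
    subprocess.run(["terraform", "fmt", "-check"], cwd=module, check=True)
    # -backend=false lets validate run without remote-state credentials.
    subprocess.run(["terraform", "init", "-backend=false", "-input=false"],
                   cwd=module, check=True, capture_output=True)
    subprocess.run(["terraform", "validate"], cwd=module, check=True)
```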
NOTICE PERIOD: Max 30 days



