
What You Can Expect from Us:
Here at Nomiso, we work hard to provide our team with the best opportunities to grow their careers. You can expect to be a pioneer of ideas, a student of innovation, and a leader of thought. Innovation and thought leadership are at the center of everything we do, at all levels of the company. Let's make your career great!
Position Overview:
The Principal Cloud Network Engineer is a key interface to client teams and is responsible for developing convincing technical solutions. This requires working closely with clients and multiple partner-vendor teams to architect the solution.
This position requires sound technical knowledge, proven business acumen, and a differentiating client-facing ability. You are required to anticipate, create, and define innovative solutions that match the customer's needs and the client's tactical and strategic requirements.
Roles and Responsibilities:
- Design and implement next-generation networking technologies
- Deploy/support large-scale production networks
- Track, analyze, and trend capacity on the broadcast network and datacenter infrastructure
- Provide Tier 3 escalated network support
- Perform fault management and problem resolution
- Work closely with other departments, vendors, and service providers
- Perform network change management, support modifications, and maintenance
- Perform network upgrade, maintenance, and repair work
- Lead implementation of new systems
- Perform capacity planning and management
- Suggest opportunities for improvement
- Create and support network management objectives, policies, and procedures
- Ensure network documentation is kept up-to-date
- Train and assist junior engineers.
Must Have Skills:
Candidates should have 10+ years of overall experience in the following:
- Hands-on: routers/switches, firewalls (Palo Alto or similar), load balancers (LTM, GTM), AWS networking (VPC, API Gateway, CloudFront, Route 53, Cloud WAN, Direct Connect, PrivateLink, Transit Gateway), wireless.
- Strong hands-on coding/scripting experience in one or more programming languages such as Python, Golang, Java, Bash, etc.
- Networking technologies: routing protocols (BGP, EIGRP, OSPF), VRFs, VLANs, HSRP/VRRP, LACP, MLAG, TACACS/RANCID/Git, IPsec VPN, DNS/DHCP, NAT/SNAT, IP multicast, VPC, Transit Gateway, NAT Gateway, ALB/ELB, security groups, ACLs, SNMP.
- Managing hardware and IOS, and coordinating with vendors/partners for support.
- Managing CDNs, links, VPN technologies, SDN/Cisco ACI (design and implementation), and Network Function Virtualization (NFV).
- Reviewing technology designs and architecture, taking local and regional regulatory requirements into account, for voice and video solutions, routing, switching, VPN, LAN, WAN, network security, firewalls, NGFW, NAT, IPS, botnet protection, application control, DDoS mitigation, and web filtering.
- Using Palo Alto firewalls/Panorama, BIG-IQ, and NetBrain tools and technology standards in daily support to enhance performance and improve reliability.
- Creating a real-time, contextual living map of the client's network with detailed network specifications, including diagrams and equipment configurations that follow defined standards.
- Improving service reliability and proactively identifying and preventing customer impact by eliminating single points of failure (SPOFs); a minimal scripted check is sketched after this list.
- Capturing critical forensic data and providing complete visibility across the enterprise as soon as a threat is detected, by implementing tools such as NetBrain.
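To give a concrete, hedged illustration of the scripting and SPOF-elimination work described above, the sketch below uses boto3 (assuming AWS credentials and a region are already configured; the VPC ID is a hypothetical placeholder) to flag a VPC whose outbound traffic depends on a single NAT gateway or a single Availability Zone:

```python
# Minimal sketch: flag a NAT-gateway single point of failure in one VPC.
# Assumes boto3 is installed and AWS credentials/region are configured;
# the VPC ID below is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2")
VPC_ID = "vpc-0123456789abcdef0"  # hypothetical placeholder

# Collect the available NAT gateways in the VPC and the AZs they live in.
nat_gateways = []
paginator = ec2.get_paginator("describe_nat_gateways")
for page in paginator.paginate(Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]):
    nat_gateways.extend(n for n in page["NatGateways"] if n["State"] == "available")

zones = set()
subnet_ids = [n["SubnetId"] for n in nat_gateways]
if subnet_ids:
    for subnet in ec2.describe_subnets(SubnetIds=subnet_ids)["Subnets"]:
        zones.add(subnet["AvailabilityZone"])

# One NAT gateway, or all gateways in a single AZ, is a single point of failure.
if len(nat_gateways) <= 1 or len(zones) <= 1:
    print(f"SPOF risk: {len(nat_gateways)} NAT gateway(s) in {len(zones)} AZ(s) for {VPC_ID}")
else:
    print(f"OK: {len(nat_gateways)} NAT gateways spread across {sorted(zones)}")
```

In practice a check like this would run across all VPCs and feed a reporting or alerting pipeline, but the structure stays the same.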
Good to Have Skills:
- Industry certifications in switching, routing, and security.
- Elastic Load Balancing (ALB/ELB), DNS/DHCP, IPsec VPN, multicast, TACACS/RANCID/Git
- AWS Control Tower
- Experience leading a team of 5 or more.
- Strong analytical and problem-solving skills.
- Experience implementing / maintaining Infrastructure as Code (IaC)
- Certifications: CCIE, AWS Certified Advanced Networking

Similar jobs
- A strong proficiency in at least one scripting language (e.g., Python, Bash, PowerShell) is required.
- Candidates must possess an in-depth ability to design, write, and implement complex automation logic, not just basic scripts.
- Proven experience in automating DevOps processes, environment provisioning, and configuration management is essential.
Cloud Platform (AWS Preferred):
- Extensive hands-on experience with Amazon Web Services (AWS) is highly preferred.
- Candidates must be able to demonstrate expert-level knowledge of core AWS services and articulate their use cases.
- Excellent debugging and problem-solving skills within the AWS ecosystem are mandatory. The ability to diagnose and resolve issues efficiently is a key requirement.
Infrastructure as Code (IaC - Terraform Preferred):
- Expert-level knowledge and practical experience with Terraform are required.
- Candidates must have a deep understanding of how to write scalable, modular, and reusable Terraform code.
Containerization and Orchestration (Kubernetes Preferred):
- Advanced, hands-on experience with Kubernetes is mandatory.
- Candidates must be proficient in solving complex, production-level issues related to deployments, networking, and cluster management (a small triage sketch follows this list).
- A solid foundational knowledge of Docker is required.
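As a hedged illustration of the production-level Kubernetes troubleshooting called out above, the sketch below uses the official kubernetes Python client (assuming it is installed and a valid kubeconfig is available; the restart threshold is an arbitrary assumption) to list pods that are unhealthy or restarting repeatedly:

```python
# Minimal triage sketch: list unhealthy pods and their restart counts.
# Assumes the official `kubernetes` Python client is installed and a
# valid kubeconfig (or in-cluster config) is available.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    # Flag pods that are not Running/Succeeded, or that restart repeatedly.
    if phase not in ("Running", "Succeeded") or restarts > 5:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
              f"phase={phase}, restarts={restarts}")
```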
Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments
Bito is a startup that is using AI (ChatGPT, OpenAI, etc.) to create game-changing productivity experiences for software developers in their IDE and CLI. Already, over 100,000 developers are using Bito to increase their productivity by 31%, performing more than 1 million AI requests per week.
Our founders have previously started, built, and taken a company public (NASDAQ: PUBM) worth well over $1B. We are looking to take our learnings, learn a lot along with you, and do something more exciting this time. This journey will be incredibly rewarding, and incredibly difficult!
We are building this company with a fully remote approach, with our main teams in the US and India for time-zone coverage. The founders happen to be in Silicon Valley and India.
We are hiring a DevOps Engineer to join our team.
Responsibilities:
- Collaborate with the development team to design, develop, and implement Java-based applications
- Perform analysis and provide recommendations for Cloud deployments and identify opportunities for efficiency and cost reduction
- Build and maintain clusters for various technologies such as Aerospike, Elasticsearch, RDS, Hadoop, etc
- Develop and maintain continuous integration (CI) and continuous delivery (CD) frameworks
- Provide architectural design and practical guidance to software development teams to improve resilience, efficiency, performance, and costs
- Evaluate and define/modify configuration management strategies and processes using Ansible
- Collaborate with DevOps engineers to coordinate work efforts and enhance team efficiency
- Take on leadership responsibilities to influence the direction, schedule, and prioritization of the automation effort
Requirements:
- Minimum 4+ years of relevant work experience in a DevOps role
- At least 3+ years of experience in designing and implementing infrastructure as code within the AWS/GCP/Azure ecosystem
- Expert knowledge of any cloud core services, big data managed services, Ansible, Docker, Terraform/CloudFormation, Amazon ECS/Kubernetes, Jenkins, and Nginx
- Expert proficiency in at least two scripting/programming languages such as Bash, Perl, Python, Go, Ruby, etc.
- Mastery in configuration automation tool sets such as Ansible, Chef, etc
- Proficiency with Jira, Confluence, and Git toolset
- Experience with automation tools for monitoring and alerts such as Nagios, Grafana, Graphite, CloudWatch, New Relic, etc
- Proven ability to manage and prioritize multiple diverse projects simultaneously
What we offer:
At Bito, we strive to create a supportive and rewarding work environment that enables our employees to thrive. Join a dynamic team at the forefront of generative AI technology.
· Work from anywhere
· Flexible work timings
· Competitive compensation, including stock options
· A chance to work in the exciting generative AI space
· Quarterly team offsite events
The DevOps Engineer's core responsibilities include automated configuration and management of infrastructure, and continuous integration and delivery of distributed systems at scale in a hybrid environment.
Must-Have:
● You have 4-10 years of experience in DevOps
● You have experience in managing IT infrastructure at scale
● You have experience in automation of deployment of distributed systems and in infrastructure provisioning at scale
● You have in-depth, hands-on experience with Linux and Linux-based systems, including Linux scripting
● You have experience with server hardware, networking, and firewalls
● You have experience in source code management, configuration management, continuous integration, continuous testing, and continuous monitoring
● You have experience with CI/CD and related tools
● You have experience with monitoring tools like ELK, Grafana, and Prometheus (a minimal exporter sketch follows this list)
● You have experience with containerization, container orchestration, and management
● You have a penchant for solving complex and interesting problems
● You have worked in startup-like environments with high levels of ownership and commitment
● You have a BTech, MTech, or Ph.D. in Computer Science or a related technical discipline
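As a hedged example of the monitoring-tool experience listed above, this minimal sketch uses the prometheus_client library (the metric name, port, and disk-usage check are illustrative assumptions, not a prescribed setup) to expose a custom gauge for Prometheus to scrape:

```python
# Minimal sketch: expose a custom health gauge for Prometheus to scrape.
# Assumes the `prometheus_client` package is installed; the metric name,
# port, and disk-usage check are illustrative only.
import shutil
import time

from prometheus_client import Gauge, start_http_server

ROOT_DISK_USED_RATIO = Gauge(
    "root_disk_used_ratio", "Fraction of the root filesystem in use"
)

def collect() -> None:
    usage = shutil.disk_usage("/")
    ROOT_DISK_USED_RATIO.set(usage.used / usage.total)

if __name__ == "__main__":
    start_http_server(9109)  # metrics served at http://localhost:9109/metrics
    while True:
        collect()
        time.sleep(15)
```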
This person MUST have:
- A minimum of 3-5 years of prior experience as a DevOps Engineer.
- Expertise in CI/CD pipeline maintenance and enhancement, specifically Jenkins-based pipelines.
- Working experience with engineering tools such as Git, Git workflows, Bitbucket, Jira, etc.
- Hands-on experience deploying and managing infrastructure with CloudFormation/Terraform
- Experience managing AWS infrastructure
- Hands-on experience with Linux administration.
- Basic understanding of Kubernetes/Docker orchestration
- Works closely with the engineering team on day-to-day activities
- Manages existing infrastructure, pipelines, and engineering tools (on-prem or AWS) for the engineering team (build servers, Jenkins nodes, etc.)
- Works with the engineering team on new configuration required for the infrastructure, such as replicating setups, adding new resources, etc.
- Works closely with the engineering team to improve existing build pipelines.
- Troubleshoots problems across infrastructure/services
Experience:
- Minimum 5-7 years of experience
Location
- Remotely, anywhere in India
Timings:
- 40 hours a week (11 AM to 7 PM).
Position:
- Full time/Direct
- We have great benefits such as PF, medical insurance, 12 annual company holidays, 12 PTO days per year, annual increments, a Diwali bonus, spot bonuses, and other incentives.
- We don't believe in locking people in with long notice periods. You will stay here because you love the company. We have only a 15-day notice period.
Position: DevOps Engineer
Job Description
The candidate should have the following Skills:
- Hands-on experience with DevOps and CI/CD open-source tools (e.g., Jenkins), including AWS DevOps services (CodePipeline, CloudFormation, etc.).
- Experience in building and deploying Java/Python/Node.js on cloud infrastructure (Docker or Kubernetes containers, or Lambda); a minimal handler sketch follows this list.
- Exposure to cloud operations, releases, and configuration management
- Experience in implementing non-functional requirements for microservices, including performance, security, compliance, HA, and disaster recovery.
- Good soft skills, a great attitude, and a passion for working in a product startup environment
Total experience of 2-5 years post BE, BTech, or MCA in Computer Science/Engineering.
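To illustrate the Python-on-Lambda deployment experience mentioned in the skills list, here is a minimal, hedged handler sketch (the function name, event shape, and response format are assumptions modeled on an API Gateway proxy integration):

```python
# Minimal sketch of a Python AWS Lambda handler returning an
# API Gateway proxy-style response. Names and payloads are illustrative.
import json

def lambda_handler(event, context):
    # `event` is whatever the trigger passed in; we read an optional query parameter.
    params = (event or {}).get("queryStringParameters") or {}
    who = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {who}"}),
    }

if __name__ == "__main__":
    # Local smoke test without any AWS infrastructure.
    print(lambda_handler({"queryStringParameters": {"name": "devops"}}, None))
```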
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLA and business objectives through whole-lifecycle design of services, from inception to implementation.
- Ensuring availability, performance, security, and scalability of AWS production systems
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
- Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
- Participate in a 24x7 on-call rotation (in shifts) for Level 2 and higher escalations
- Respond to incidents and write blameless RCAs/postmortems
- Implement and practice proper security controls and processes
- Provide recommendations for architecture and process improvements.
- Define and deploy systems for metrics, logging, and monitoring on the platform (a minimal metric-publishing sketch follows this list).
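As a hedged sketch of the metrics work in the last responsibility above, the snippet below publishes a custom metric to CloudWatch with boto3 (assuming boto3 is installed and the caller has cloudwatch:PutMetricData permission; the namespace, metric name, and dimension are placeholders):

```python
# Minimal sketch: publish a custom application metric to CloudWatch.
# Assumes boto3 is installed and the caller has cloudwatch:PutMetricData;
# the namespace, metric name, and dimension values are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_queue_depth(depth: int, queue_name: str = "loyalty-events") -> None:
    cloudwatch.put_metric_data(
        Namespace="Platform/Example",  # placeholder namespace
        MetricData=[{
            "MetricName": "QueueDepth",
            "Dimensions": [{"Name": "Queue", "Value": queue_name}],
            "Value": float(depth),
            "Unit": "Count",
        }],
    )

if __name__ == "__main__":
    publish_queue_depth(42)
```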
Must have:
- Minimum 3 Years of Experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong inter-personal skills.
- Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
- Must have experience in Docker, Kubernetes, Amazon ECS or Mesos
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
- Proficient in shell scripting, and most importantly, know when to stop scripting and start developing.
- Experience in the creation of highly automated infrastructure with configuration management tools such as Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and administration.
- Production experience with a major cloud provider such as Amazon AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcache, or one of the many in-memory data stores.
- Experience with various load-balancing technologies such as Amazon ALB/ELB, HAProxy, F5.
- Comfortable with large-scale, highly-available distributed systems.
Good to have:
- Understanding of Web Standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with Hashicorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as CAP Theorem, Concurrency Control, etc.
- Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing and helping the organization control infrastructure costs.
- Experience in Kafka, RabbitMQ or any messaging bus.
We are looking for an experienced software engineer with a strong background in DevOps and handling traffic & infrastructure at scale.
Responsibilities:
Work closely with product engineers to implement scalable and highly reliable systems.
Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery
Build & operate infrastructure to support website, backend cluster, ML projects in the organization.
Monitor and track performance and reliability of our services and software to meet promised SLAs
Requirements:
2+ years of experience working on distributed systems and shipping high-quality product features on schedule
Intimate knowledge of the whole web stack (front end, APIs, databases, networks, etc.)
Ability to build highly scalable, robust, and fault-tolerant services and stay up-to-date with the latest architectural trends
Experience with container based deployment, microservices, in-memory caches, relational databases, key-value stores
Hands-on experience with cloud infrastructure provisioning, deployment, and monitoring (we are on AWS and use ECS, RDS, ELB, EC2, ElastiCache, Elasticsearch, S3, CloudWatch)










