
We are looking for an experienced Senior DevOps Consultant Engineer to join our team. The ideal candidate has at least five years of experience.
We are retained by a promising Silicon Valley startup backed by a Fortune 50 firm, with veterans from companies such as Zscaler, Salesforce, and Oracle. The founding team has been part of three unicorns and two successful IPOs and is well funded by Dell Technologies and Westwave Capital. The company is widely recognized as an industry innovator in the data privacy and security space and is being built by proven cybersecurity executives who have successfully built and scaled high-growth security companies and led privacy programs.
Responsibilities:
- Develop and maintain infrastructure as code using tools like Terraform, CloudFormation, and Ansible
- Manage and maintain Kubernetes clusters on EKS and EC2 instances
- Implement and maintain automated CI/CD pipelines for microservices
- Optimize AWS costs by identifying cost-saving opportunities and implementing cost-effective solutions
- Implement best security practices for microservices, including vulnerability assessments, SOC2 compliance, and network security
- Monitor the performance and availability of our cloud infrastructure using observability tools such as Prometheus, Grafana, and Elasticsearch
- Implement backup and disaster recovery solutions for our microservices and databases
- Stay up to date with the latest AWS services and technologies and provide recommendations for improving our cloud infrastructure
- Collaborate with cross-functional teams, including developers and product managers, to ensure the smooth operation of our cloud infrastructure
- Experience with large scale system design and scaling services is highly desirable
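The cost-optimization bullet above is the kind of task that often starts as a small script. As a rough sketch (the instance IDs and CPU samples below are made-up illustrative data, not any real AWS API), flagging under-utilized instances might look like:

```python
# Hypothetical sketch of cost-optimization automation: flag instances whose
# average CPU utilization stays below a threshold as downsizing candidates.
# Instance names and metrics here are made-up sample data.

def find_idle_instances(cpu_samples, threshold=5.0):
    """Return sorted instance IDs whose average CPU % is below `threshold`."""
    idle = []
    for instance_id, samples in cpu_samples.items():
        if samples and sum(samples) / len(samples) < threshold:
            idle.append(instance_id)
    return sorted(idle)

if __name__ == "__main__":
    metrics = {
        "i-web-01": [42.0, 55.3, 47.8],   # busy
        "i-batch-02": [1.2, 0.8, 2.1],    # idle candidate
        "i-cache-03": [3.9, 4.4, 2.7],    # idle candidate
    }
    print(find_idle_instances(metrics))  # -> ['i-batch-02', 'i-cache-03']
```

In practice the samples would come from a metrics source such as CloudWatch rather than a hard-coded dict.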
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in AWS DevOps and infrastructure engineering
- Expertise in Kubernetes management, Docker, EKS, EC2, queues, Python threads, Celery optimization, load balancers, AWS cost optimization, Elasticsearch, container management, and observability best practices
- Experience with SOC2 compliance and vulnerability assessment best practices for microservices
- Familiarity with AWS services such as S3, RDS, Lambda, and CloudFront
- Strong scripting skills in languages like Python, Bash, and Go
- Excellent communication skills and the ability to work in a collaborative team environment
- Experience with agile development methodologies and DevOps practices
- AWS certification (e.g. AWS Certified DevOps Engineer, AWS Certified Solutions Architect) is a plus.
Notice period: able to join within a month

About Eitacies Inc
EITACIES Inc is a product development and IT services company, providing pioneering services in digital transformation, cloud and cyber security, DevSecOps, AI & ML, business intelligence, and enterprise integration. We have been supporting multiple Bay Area start-ups and Fortune 500 companies in different industry verticals since 2008.
Our pool of experts excels at delivering tailored solutions for businesses, irrespective of their scale. EITACIES has enabled multiple start-ups and mid-size and large enterprises to enhance their service delivery and strengthen their technical capabilities. We have collaborated with promising organizations from their nascent phase and helped them grow phenomenally.
Key Responsibilities
DevOps Strategy & Leadership
- Define and execute the end-to-end DevOps strategy for high-frequency trading and fintech platforms.
- Lead, mentor, and scale a high-performing DevOps team focused on automation, reliability, and performance.
- Partner closely with engineering and product leaders to ensure infrastructure strategy supports business and technical goals.
CI/CD & Infrastructure Automation
- Architect, implement, and optimize enterprise-grade CI/CD pipelines for ultra-low-latency trading systems.
- Drive Infrastructure as Code (IaC) adoption using Terraform, Helm, Kubernetes, and advanced automation toolsets.
- Establish robust release management, deployment workflows, and versioning best practices for mission‑critical environments.
Cloud & On‑Prem Infrastructure Management
- Design and manage hybrid infrastructures across AWS, GCP, and on-premise data centers ensuring high availability and fault tolerance.
- Implement sophisticated networking strategies for low-latency workloads including routing optimization and performance tuning.
- Lead multi‑cloud scalability, cost optimization, and environment standardization initiatives.
Performance Monitoring & Optimization
- Oversee large-scale monitoring systems using Prometheus, Grafana, ELK, and related observability tools.
- Implement predictive alerting, automated remediation, and system‑wide health checks for zero‑downtime operations.
- Conduct root-cause analyses and performance tuning for systems processing millions of transactions per second.
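One simple way to approximate the predictive alerting mentioned above is a baseline-plus-deviation check. This is an illustrative sketch with arbitrary window and threshold choices, not a prescribed alerting policy:

```python
import statistics

# Illustrative sketch of predictive alerting: fire when the newest latency
# sample deviates from the recent baseline by more than k standard
# deviations. The value of k is an arbitrary assumption.

def should_alert(history, latest, k=3.0):
    """Return True if `latest` exceeds mean(history) + k * stdev(history)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return latest > mean + k * stdev

baseline = [10.1, 9.8, 10.4, 10.0, 9.9]  # recent p99 latencies (ms)
print(should_alert(baseline, 10.5))  # small wiggle: no alert
print(should_alert(baseline, 25.0))  # large spike: alert
```

A production system would compute the baseline over a sliding window and feed alerts into a remediation pipeline rather than printing them.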
Security & Compliance
- Champion DevSecOps practices and embed security across the entire development and deployment lifecycle.
- Ensure adherence to financial regulatory standards (SEBI and global frameworks) with strong audit and compliance mechanisms.
- Lead security automation efforts, vulnerability management, and advanced IAM policy implementation.
Required Skills & Qualifications
- 10+ years of DevOps experience, with 5+ years in a leadership capacity.
- Deep hands-on expertise in CI/CD tools such as Jenkins, GitLab CI/CD, and ArgoCD.
- Strong command of AWS, GCP, and hybrid cloud infrastructures.
- Expert-level knowledge of Kubernetes, Docker, and large-scale container orchestration.
- Advanced proficiency in Terraform, Helm, and overall IaC workflows.
- Strong Linux administration, networking fundamentals (TCP/IP, DNS, Firewalls), and system internals.
- Experience with monitoring and observability platforms (Prometheus, Grafana, ELK).
- Excellent scripting skills in Python, Bash, or Go for automation and tooling.
- Deep understanding of security principles, encryption, IAM, and compliance frameworks.
Good to Have
- Experience with ultra-low-latency or high-frequency trading systems.
- Knowledge of FIX protocol, FPGA acceleration, or network‑level optimizations.
- Familiarity with Redis, Nginx, or other high‑throughput systems.
- Exposure to micro‑second‑level performance tuning or network acceleration technologies.
Why Join Us?
- Be part of a team that consistently raises the bar and delivers exceptional engineering outcomes.
- A culture where innovation, ownership, and bold thinking are valued.
- Exceptional growth opportunities—ideal for someone who thrives in fast-paced, high-impact environments.
- Build systems that influence markets and redefine the fintech landscape.
This isn’t just a role—it’s a challenge, a platform, and a proving ground.
Ready to step up? Apply now.
Senior DevOps Engineer
Experience: Minimum 5 years of relevant experience
Key Responsibilities:
• Hands-on experience with AWS tools, CI/CD pipelines, and Red Hat Linux
• Strong expertise in DevOps practices and principles
• Experience with infrastructure automation and configuration management
• Excellent problem-solving skills and attention to detail
Nice to Have:
• Red Hat certification
Role : Senior Engineer Infrastructure
Key Responsibilities:
● Infrastructure Development and Management: Design, implement, and manage robust and scalable infrastructure solutions, ensuring optimal performance, security, and availability. Lead transition and migration projects, moving legacy systems to cloud-based solutions.
● Develop and maintain applications and services using Golang.
● Automation and Optimization: Implement automation tools and frameworks to optimize operational processes. Monitor system performance, optimizing and modifying systems as necessary.
● Security and Compliance: Ensure infrastructure security by implementing industry best practices and compliance requirements. Respond to and mitigate security incidents and vulnerabilities.
Qualifications:
● Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
● Good understanding of prominent backend languages like Golang, Python, Node.js, or others.
● In-depth knowledge of network architecture, system security, and infrastructure scalability.
● Proficiency with development tools, server management, and database systems.
● Strong experience with cloud services (AWS), deployment, scaling, and management.
● Knowledge of Azure is a plus
● Familiarity with containers and orchestration services, such as Docker, Kubernetes, etc.
● Strong problem-solving skills and analytical thinking.
● Excellent verbal and written communication skills.
● Ability to thrive in a collaborative team environment.
● Genuine passion for backend development and keen interest in scalable systems.
We are a boutique IT services and solutions firm headquartered in the Bay Area with offices in India. Our offering includes custom-configured hybrid cloud solutions backed by our managed services. We combine best-in-class DevOps and IT infrastructure management practices to manage our clients' hybrid cloud environments.
In addition, we build and deploy our private cloud solutions using OpenStack to provide our clients with a secure, cost-effective, and scalable hybrid cloud solution. We work with start-ups as well as enterprise clients.
This is an exciting opportunity for an experienced Cloud Engineer to work on exciting projects and have an opportunity to expand their knowledge working on adjacent technologies as well.
Must have skills
• Provisioning skills on IaaS cloud computing for platforms such as AWS, Azure, GCP.
• Strong working experience in the AWS space with various AWS services and implementations (i.e. VPCs, SES, EC2, S3, Route 53, CloudFront, etc.)
• Ability to design solutions based on client requirements.
• Some experience with various network LAN/WAN appliances (Cisco routers and ASA systems, Barracuda, Meraki, SilverPeak, Palo Alto, Fortinet, etc.)
• Understanding of networked storage (NFS / SMB / iSCSI / Storage Gateway / Windows Offline)
• Linux / Windows server installation, maintenance, monitoring, data backup and recovery, security, and administration.
• Good knowledge of TCP/IP protocol & internet technologies.
• Passion for innovation and problem solving, in a start-up environment.
• Good communication skills.
Good to have
• Remote Monitoring & Management.
• Familiarity with Kubernetes and Containers.
• Exposure to DevOps automation scripts & experience with tools like Git, bash scripting, PowerShell, AWS Cloud Formation, Ansible, Chef or Puppet will be a plus.
• Architect / Practitioner certification from OEM with hands-on capabilities.
What you will be working on
• Troubleshoot and handle L2/L3 tickets.
• Design and architect Enterprise Cloud systems and services.
• Design, Build and Maintain environments primarily in AWS using EC2, S3/Storage, CloudFront, VPC, ELB, Auto Scaling, Direct Connect, Route53, Firewall, etc.
• Build and deploy in GCP/ Azure as needed.
• Architect cloud solutions keeping performance, cost and BCP considerations in mind.
• Plan cloud migration projects as needed.
• Collaborate & work as part of a cohesive team.
• Help build our private cloud offering on OpenStack.
• Act as a DevOps/build-and-release engineer, with the maturity to help define and automate processes.
• Configure, install, and manage source control tools such as AWS CodeCommit / GitHub / Bitbucket.
• Automate implementation/deployment of code in the cloud-based infrastructure (AWS Preferred).
• Setup monitoring of infrastructure and applications with alerting frameworks
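Deployment automation like the items above usually needs to tolerate transient failures. A minimal retry-with-exponential-backoff helper can be sketched as follows (the attempt count, delays, and the `flaky_deploy` step are illustrative assumptions, not a real deployment API):

```python
import time

# Hedged sketch of a retry helper commonly used in deployment automation:
# retry a flaky step with exponentially growing delays between attempts.

def retry(step, attempts=4, base_delay=0.1):
    """Run `step()` up to `attempts` times, doubling the delay between tries."""
    for i in range(attempts):
        try:
            return step()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** i))

# Simulated deploy step that fails twice before succeeding (made-up example).
calls = {"n": 0}
def flaky_deploy():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(retry(flaky_deploy, base_delay=0.01))  # -> deployed
```

Real pipelines typically cap the total wait, add jitter, and retry only on errors known to be transient.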
Requirements:
• Able to code in Python.
• Extensive experience building and supporting Docker and Kubernetes in production.
• Understanding of AWS (Amazon Web Services) and the ability to jump right into our environment.
• Security clearance will be required.
• Lambda used in conjunction with S3, CloudTrail, and EC2.
• CloudFormation (infrastructure as code)
• CloudWatch and CloudTrail
• Version control (SVN, Git, Artifactory, Bitbucket)
• CI/CD (Jenkins or similar)
• Docker Compose or other orchestration tools
• REST APIs
• Databases (Postgres/Oracle/SQL Server, NoSQL, or graph databases)
• Bachelor’s degree in Computer Science, Computer Engineering, or a closely related field.
• Server orchestration using tools like Puppet, Chef, Ansible, etc.
Please send your CV to priyanka.sharma@neotas.com
Neotas.com
This role requires a balance between hands-on infrastructure-as-code deployments as well as involvement in operational architecture and technology advocacy initiatives across the Numerator portfolio.
Responsibilities
Technical Skills
DevOps
Engineers: minimum 3 to 5 years
Tech Leads: minimum 6 to 10 years
- Implementing & supporting CI/CD/CT pipelines at scale.
- Knowledge and experience using Chef, Puppet or Ansible automation to deploy and be able to manage Linux systems in production and CI environments.
- Extensive experience with Shell scripts (bash).
- Knowledge and practical experience of Jenkins for CI.
- Experienced in build & release management.
- Experience deploying JVM-based applications.
- Enterprise AWS deployment with sound knowledge of AWS and AWS security.
- Knowledge of encryption technologies: IPSec, SSL, SSH.
- Minimum of 2 years of experience as a Linux Systems Engineer (CentOS/Red Hat), ideally supporting highly available, 24x7 production environments.
- DNS provisioning and maintenance.
- Helpful skills: knowledge of applications relying on Maven, Ant, Gradle, Spring Boot.
- Knowledge of app and server monitoring tools such as ELK/AppEngine.
- Excellent written and oral communication and interpersonal skills.
Role : DevOps Engineer
Experience: 1 to 2 years and 2 to 5 years as a DevOps Engineer (2 positions)
Location: Bangalore. 5 days working.
Education qualification: B.Tech/B.E. from Tier-1/Tier-2/Tier-3 colleges or equivalent institutes
Skills :- DevOps Engineering, Ruby On Rails or Python and Bash/Shell skills, Docker, rkt or similar container engine, Kubernetes or similar clustering solutions
As DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities:
- Build and operate infrastructure to support the website, backend cluster, and ML projects in the organization.
- Helping teams become more autonomous and allowing the Operation team to focus on improving the infrastructure and optimizing processes.
- Delivering system management tooling to the engineering teams.
- Working on your own applications which will be used internally.
- Contributing to open source projects that we are using (or that we may start).
- Be an advocate for engineering best practices in and out of the company.
- Organizing tech talks and participating in meetups and representing Box8 at industry events.
- Sharing pager duty for the rare instances of something serious happening.
- Collaborate with other developers to understand and set up the tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements:
- 1+ years of industry experience.
- Scale existing back-end systems to handle ever-increasing amounts of traffic and new product requirements.
- Ruby On Rails or Python and Bash/Shell skills.
- Experience managing complex systems at scale.
- Experience with Docker, rkt or similar container engine.
- Experience with Kubernetes or similar clustering solutions.
- Experience with tools such as Ansible or Chef.
- Understanding of the importance of smart metrics and alerting.
- Hands on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in working on linux based servers.
- Managing large scale production grade infrastructure on AWS Cloud.
- Good knowledge of scripting languages like Ruby, Python, or Bash.
- Experience creating deployment pipelines from scratch.
- Expertise in any of the CI tools, preferably Jenkins.
- Good knowledge of Docker containers and their usage.
- Using infra/app monitoring tools like CloudWatch, New Relic, or Sensu.
Good to have:
- Knowledge of Ruby on Rails based applications and its deployment methodologies.
- Experience working on Container Orchestration tools like Kubernetes/ECS/Mesos.
- Extra points for experience with front-end development, New Relic, GCP, Kafka, or Elasticsearch.
Job Location: Jaipur
Experience Required: Minimum 3 years
About the role:
As a DevOps Engineer for Punchh, you will be working with our developers, SRE, and DevOps teams implementing our next-generation infrastructure. We are looking for a self-motivated, responsible team player who loves designing systems that scale. Punchh provides a rich engineering environment where you can be creative, learn new technologies, and solve engineering problems, all while delivering business objectives. The DevOps culture here is one of immense trust and responsibility. You will be given the opportunity to make an impact, as there are no silos here.
Responsibilities:
- Deliver SLA and business objectives through whole-lifecycle design of services, from inception to implementation.
- Ensuring availability, performance, security, and scalability of AWS production systems
- Scale our systems and services through continuous integration, infrastructure as code, and gradual refactoring in an agile environment.
- Maintain services once a project is live by monitoring and measuring availability, latency, and overall system and application health.
- Write and maintain software that runs the infrastructure that powers the Loyalty and Data platform for some of the world’s largest brands.
- Participate in 24x7 on-call shifts for Level 2 and higher escalations
- Respond to incidents and write blameless RCAs/postmortems
- Implement and practice proper security controls and processes
- Providing recommendations for architecture and process improvements.
- Definition and deployment of systems for metrics, logging, and monitoring on platform.
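The metrics-and-logging item above often starts with structured log output, so that log lines can be shipped to a central store and queried. A minimal sketch (the field names are an illustrative choice, not a required schema):

```python
import json
import logging

# Minimal sketch of structured logging: emit each record as one JSON line
# so a shipper (e.g. an ELK pipeline) can index it. Field names are
# illustrative, not a required schema.

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()           # defaults to stderr
handler.setFormatter(JsonFormatter())
log = logging.getLogger("platform")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("service started")  # emits a single JSON line
```

From here, a real platform would add timestamps and request IDs to the schema and point the handler at the log-shipping agent.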
Must have:
- Minimum 3 Years of Experience in DevOps.
- BS degree in Computer Science, Mathematics, Engineering, or equivalent practical experience.
- Strong interpersonal skills.
- Must have experience in CI/CD tooling such as Jenkins, CircleCI, TravisCI
- Must have experience in Docker, Kubernetes, Amazon ECS or Mesos
- Experience in code development in at least one high-level programming language from this list: Python, Ruby, Golang, Groovy
- Proficient in shell scripting, and most importantly, know when to stop scripting and start developing.
- Experience in creating highly automated infrastructures with configuration management tools such as Terraform, CloudFormation, or Ansible.
- In-depth knowledge of the Linux operating system and administration.
- Production experience with a major cloud provider such as Amazon AWS.
- Knowledge of web server technologies such as Nginx or Apache.
- Knowledge of Redis, Memcache, or one of the many in-memory data stores.
- Experience with various load balancing technologies such as Amazon ALB/ELB, HA Proxy, F5.
- Comfortable with large-scale, highly-available distributed systems.
Good to have:
- Understanding of Web Standards (REST, SOAP APIs, OWASP, HTTP, TLS)
- Production experience with Hashicorp products such as Vault or Consul
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems.
- Experience in a PCI environment
- Experience with Big Data distributions from Cloudera, MapR, or Hortonworks
- Experience maintaining and scaling database applications
- Knowledge of fundamental systems engineering principles such as CAP Theorem, Concurrency Control, etc.
- Understanding of network fundamentals: OSI, TCP/IP, topologies, etc.
- Understanding of infrastructure auditing and helping the organization control infrastructure costs.
- Experience in Kafka, RabbitMQ or any messaging bus.
Required Skills and Experience
- 4+ years of relevant experience with DevOps tools such as Jenkins, Ansible, Chef, etc.
- 4+ years of experience in continuous integration/deployment and software tools development with Python, shell scripts, etc.
- Building and running Docker images and deployment on Amazon ECS
- Working with AWS services (EC2, S3, ELB, VPC, RDS, Cloudwatch, ECS, ECR, EKS)
- Knowledge and experience working with container technologies such as Docker and Amazon ECS, EKS, Kubernetes
- Experience with source code and configuration management tools such as Git, Bitbucket, and Maven
- Ability to work with and support Linux environments (Ubuntu, Amazon Linux, CentOS)
- Knowledge and experience in cloud orchestration tools such as AWS CloudFormation/Terraform, etc.
- Experience with implementing "infrastructure as code", “pipeline as code” and "security as code" to enable continuous integration and delivery
- Understanding of IAM, RBAC, NACLs, and KMS
- Good communication skills
Good to have:
- Strong understanding of security concepts and methodologies, such as SSH, public-key encryption, access credentials, certificates, etc.
- Knowledge of database administration such as MongoDB.
- Knowledge of maintaining and using tools such as Jira, Bitbucket, Confluence.
- Work with leads and architects in designing and implementing technical infrastructure, platforms, and tools to support modern best practices and facilitate the efficiency of our development teams through automation, CI/CD pipelines, and ease of access and performance.
- Establish and promote DevOps thinking, guidelines, best practices, and standards.
- Contribute to architectural discussions, Agile software development process improvement, and DevOps best practices.
