11+ Enterprise Storage Jobs in Bangalore (Bengaluru)
JOB DESCRIPTION
- Lead and manage the IT team, including DevOps engineers, cloud system administrators, and desktop support analysts, and assist in procuring and managing assets.
- Design and develop a scalable IT infrastructure that benefits the organization.
- Take part in IT strategic planning activities that reflect the future vision of the organization.
- Introduce cost-effective best practices tailored to the business needs of the organization.
- Research and recommend solutions that prevent or work around potential technical issues.
- Provide high levels of customer service as it pertains to enterprise infrastructure.
- Review and document key performance metrics and indicators to ensure high performance of IT service delivery systems.
- Take charge of available client databases, networks, storage, servers, directories, and other technology services.
- Collaborate with the network engineer to design infrastructure improvements and changes and to troubleshoot any issues that arise.
- Plan, design, and manage infrastructure technologies that can support complex and heterogeneous corporate data and voice infrastructure.
- Execute, test, and roll out innovative solutions to keep pace with the growing competition.
- Create and document proper installation and configuration procedures.
- Assist in handling software distributions and software updates and patches.
- Oversee deployment of systems and network integration in association with clients, business partners, suppliers, and subsidiaries.
- Create, update, and manage IT policies.
- Manage and drive assigned vendors. Perform cost-benefit analyses and provide recommendations to management
Key Proficiencies
* Bachelor’s or Master’s degree in computer science, information technology, electronics, telecommunications or any related field.
* Minimum 10 years of experience in the above-mentioned fields.
About Us:
Tradelab Technologies Pvt Ltd is not for those seeking comfort—we are for those hungry to make a mark in the trading and fintech industry.
Key Responsibilities
CI/CD and Infrastructure Automation
- Design, implement, and maintain CI/CD pipelines to support fast and reliable releases
- Automate deployments using tools such as Terraform, Helm, and Kubernetes
- Improve build and release processes to support high-performance and low-latency trading applications
- Work efficiently with Linux/Unix environments
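The build-and-release automation described above can be sketched, at its simplest, as a stage runner that halts on the first failure (the stage names and shell commands below are placeholders, not a real pipeline):

```python
import subprocess

def run_pipeline(stages):
    """Run pipeline stages in order; stop at the first failure.

    `stages` maps a stage name to a shell command (hypothetical examples).
    Returns the list of stages that completed successfully.
    """
    completed = []
    for name, cmd in stages.items():
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"stage '{name}' failed; aborting pipeline")
            break
        completed.append(name)
    return completed

if __name__ == "__main__":
    # Placeholder commands stand in for real build/test/deploy steps.
    stages = {
        "build": "echo building",
        "test": "echo testing",
        "deploy": "echo deploying",
    }
    print(run_pipeline(stages))
```

Real CI/CD systems (Jenkins, GitLab CI) express the same idea declaratively, with isolation, caching, and parallelism layered on top.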
Cloud and On-Prem Infrastructure Management
- Deploy, manage, and optimize infrastructure on AWS, GCP, and on-premises environments
- Ensure system reliability, scalability, and high availability
- Implement Infrastructure as Code (IaC) to standardize and streamline deployments
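Infrastructure as Code boils down to reconciling a declared desired state against what actually exists. A minimal, provider-agnostic sketch of that plan step (resource names and fields are invented for illustration, not a real provider schema):

```python
def plan(desired, actual):
    """Compute an IaC-style plan: what to create, update, or delete
    so that `actual` converges to `desired`.

    Both arguments map resource names to configuration dicts.
    """
    create = sorted(set(desired) - set(actual))
    delete = sorted(set(actual) - set(desired))
    update = sorted(
        name for name in set(desired) & set(actual)
        if desired[name] != actual[name]
    )
    return {"create": create, "update": update, "delete": delete}

# Hypothetical resources for illustration only.
desired = {
    "web-server": {"size": "m5.large", "count": 3},
    "database": {"size": "db.r5.xlarge"},
}
actual = {
    "web-server": {"size": "m5.large", "count": 2},
    "old-cache": {"size": "cache.t3.micro"},
}
print(plan(desired, actual))
# A second run against the converged state yields an empty plan (idempotence).
```

This is the same diff-then-apply loop that `terraform plan` / `terraform apply` performs against a real provider.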
Performance Monitoring and Optimization
- Monitor system performance and latency using Prometheus, Grafana, and ELK stack
- Implement proactive alerting and fault detection to ensure system stability
- Troubleshoot and optimize system components for maximum efficiency
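A hand-rolled illustration of threshold-based latency alerting (in practice a Prometheus alerting rule would do this; the nearest-rank percentile and the threshold below are just for demonstration):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def check_latency(samples, p99_threshold_ms):
    """Return an alert string if p99 latency breaches the threshold, else None.

    The threshold is an assumed value; real systems would drive this from
    Prometheus alerting rules rather than a hand-rolled check.
    """
    p99 = percentile(samples, 99)
    if p99 > p99_threshold_ms:
        return f"ALERT: p99 latency {p99}ms exceeds {p99_threshold_ms}ms"
    return None
```

For proactive alerting, the same check would run continuously over a sliding window and page on sustained breaches rather than single spikes.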
Security and Compliance
- Apply DevSecOps principles to ensure secure deployment and access management
- Maintain compliance with financial industry regulations such as SEBI
- Conduct vulnerability assessments and maintain logging and audit controls
Required Skills and Qualifications
- 2+ years of experience as a DevOps Engineer in a software or trading environment
- Strong expertise in CI/CD tools (Jenkins, GitLab CI/CD, ArgoCD)
- Proficiency in cloud platforms such as AWS and GCP
- Hands-on experience with Docker and Kubernetes
- Experience with Terraform or CloudFormation for IaC
- Strong Linux administration and networking fundamentals (TCP/IP, DNS, firewalls)
- Familiarity with Prometheus, Grafana, and ELK stack
- Proficiency in scripting using Python, Bash, or Go
- Solid understanding of security best practices including IAM, encryption, and network policies
Good to Have (Optional)
- Experience with low-latency trading infrastructure or real-time market data systems
- Knowledge of high-frequency trading environments
- Exposure to FIX protocol, FPGA, or network optimization techniques
- Familiarity with Redis or Nginx for real-time data handling
Why Join Us?
- Work with a team that expects and delivers excellence.
- A culture where risk-taking is rewarded, and complacency is not.
- Limitless opportunities for growth—if you can handle the pace.
- A place where learning is currency, and outperformance is the only metric that matters.
- The opportunity to build systems that move markets, execute trades in microseconds, and redefine fintech.
This isn’t just a job—it’s a proving ground. Ready to take the leap? Apply now.
Skills and Qualifications:
Strong experience with continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, TravisCI, or GitLab CI.
Proficiency in scripting languages such as Python, Bash, or Ruby.
Knowledge of infrastructure automation tools such as Ansible, Puppet, or Terraform.
Experience with cloud platforms such as AWS, Azure, or GCP.
Knowledge of container orchestration tools such as Docker, Kubernetes, or OpenShift.
Experience with version control systems such as Git.
Familiarity with Agile methodologies and practices.
Understanding of networking concepts and principles.
Knowledge of database technologies such as MySQL, MongoDB, or PostgreSQL.
Good understanding of security and data protection principles.
Roles and responsibilities:
● Building and setting up new development tools and infrastructure
● Working on ways to automate and improve development and release processes
● Deploy updates and fixes
● Helping to ensure information security best practices
● Provide Level 2 technical support
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
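For root cause analysis of production errors, a useful first step is grouping recurring error signatures out of logs. A minimal sketch, assuming a simple "LEVEL message" log format (real log schemas vary):

```python
import re
from collections import Counter

def summarize_errors(log_lines):
    """Group ERROR log lines by message signature to spot recurring failures.

    Digits are masked so 'timeout after 30s' and 'timeout after 31s'
    count as the same signature.
    """
    signatures = Counter()
    for line in log_lines:
        match = re.match(r"ERROR\s+(.*)", line)
        if match:
            signatures[re.sub(r"\d+", "N", match.group(1))] += 1
    return signatures.most_common()
```

Running this over an incident window surfaces the dominant failure mode first, which is usually where the root cause investigation should start.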
Role – Sr. DevOps Engineer
Location – Bangalore
Experience – 5+ Years
Responsibilities
- Implementing various development, testing, automation tools, and IT infrastructure
- Planning the team structure, activities, and involvement in project management activities.
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Troubleshooting and fixing code bugs
- Monitoring processes throughout the entire lifecycle for adherence, and updating or creating new processes to drive improvement and minimize waste
- Encouraging and building automated processes wherever possible
- Incident management and root cause analysis.
- Selecting and deploying appropriate CI/CD tools
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
- Mentoring and guiding the team members
- Monitoring and measuring customer experience and KPIs
Requirements
- 5-6 years of relevant experience in a DevOps role.
- Good knowledge in cloud technologies such as AWS/Google Cloud.
- Familiarity with container orchestration services, especially Kubernetes experience (Mandatory) and good knowledge in Docker.
- Experience administering and deploying development CI/CD tools such as Git, Jira, GitLab, or Jenkins.
- Good knowledge of complex debugging mechanisms (especially the JVM) and Java programming experience (mandatory)
- Significant experience with Windows and Linux operating system environments
- A team player with excellent communication skills.
We are hiring for a Lead DevOps Engineer in Cloud domain with hands on experience in Azure / GCP.
- Expertise in managing Cloud/VMware resources and good exposure to Docker/Kubernetes
- Working knowledge of operating systems (Unix, Linux, IBM AIX)
- Experience in installation, configuration, and management of the Apache web server and Tomcat/JBoss
- Good understanding of JVM, troubleshooting and performance tuning through thread dump and log analysis
- Strong expertise in DevOps tools:
- Deployment (Chef/Puppet/Ansible/Nebula/Nolio)
- SCM (TFS, Git, ClearCase)
- Build tools (Ant, Maven, Make, Gradle)
- Artifact repositories (Nexus, JFrog Artifactory)
- CI tools (Jenkins, TeamCity)
- Experienced in scripting languages: Python, Ant, Bash, and shell
What will be required of you?
- Responsible for implementation and support of application/web server infrastructure for complex business applications
- Server configuration management, release management, deployments, automation & troubleshooting
- Set-up and configure Development, Staging, UAT and Production server environment for projects and install/configure all dependencies using the industry best practices
- Manage Code Repositories
- Manage, Document, Control and Innovate Development and Release procedure.
- Configure automated deployment on multiple environments
- Hands-on working experience of Azure or GCP.
- Transfer knowledge of the implementation to the support team, and support any production issues until the handover is complete
Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data platforms at petabyte scale. Our customers are Fortune 500 companies, including Asia's largest telecom company, a unicorn fintech startup in India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that can provide insights into companies' data operations and allows them to focus on delivering data reliably with speed and effectiveness. Join us in building an industry-leading data observability platform that focuses on ensuring data reliability from every spectrum (compute, data and pipeline) of a cloud or on-premise data platform.
Position Summary-
This role will support the customer implementation of a data quality and reliability product. The candidate is expected to install the product in the client environment, manage proofs of concept with prospects, become a product expert, and troubleshoot post-installation production issues. The role will have significant interaction with the client's data engineering team, and the candidate is expected to have good communication skills.
Required experience
- 6-7 years of experience providing engineering support to data domains/pipelines/data engineers.
- Experience in troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues
- Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL
- Experience working with technologies like S3; Kubernetes experience preferred.
- Databricks/Hadoop/Kafka experience preferred but not required
- Design, Develop, deploy, and run operations of infrastructure services in the Acqueon AWS cloud environment
- Manage uptime of Infra & SaaS Application
- Implement application performance monitoring to ensure platform uptime and performance
- Building scripts for operational automation and incident response
- Handle schedule and processes surrounding cloud application deployment
- Define, measure, and meet key operational metrics including performance, incidents and chronic problems, capacity, and availability
- Lead the deployment, monitoring, maintenance, and support of operating systems (Windows, Linux)
- Build out lifecycle processes to mitigate risk and ensure platforms remain current, in accordance with industry standard methodologies
- Run incident resolution within the environment, facilitating teamwork with other departments as required
- Automate the deployment of new software to cloud environment in coordination with DevOps engineers
- Work closely with Presales, understand customer requirement to deploy in Production
- Lead and mentor a team of operations engineers
- Drive the strategy to evolve and modernize existing tools and processes to enable highly secure and scalable operations
- AWS infrastructure management, provisioning, cost management and planning
- Prepare RCA incident reports for internal and external customers
- Participate in product engineering meetings to ensure product features and patches comply with cloud deployment standards
- Troubleshoot and analyse performance issues and customer-reported incidents, working to restore services within the SLA
- Prepare monthly SLA performance reports
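An SLA uptime report reduces to simple arithmetic over the reporting period. A sketch (the 99.9% target is an assumed figure for illustration, not one stated in the posting):

```python
def monthly_uptime(total_minutes, downtime_minutes):
    """Uptime percentage for an SLA reporting period."""
    return round(100 * (total_minutes - downtime_minutes) / total_minutes, 3)

def sla_met(uptime_pct, target_pct=99.9):
    """Whether the period met the SLA target (99.9% is an assumed default)."""
    return uptime_pct >= target_pct

# A 30-day month is 43,200 minutes; 99.9% allows roughly 43 minutes of downtime.
print(monthly_uptime(43200, 43), sla_met(monthly_uptime(43200, 43)))
```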
As a Cloud Operations Manager in Acqueon you will need….
- 8 years’ progressive experience managing IT infrastructure and global cloud environments such as AWS, GCP (must)
- 3-5 years management experience leading a Cloud Operations / Site Reliability / Production Engineering team working with globally distributed teams in a fast-paced environment
- 3-5 years’ experience in IaC (Terraform, K8s)
- 3+ years end-to-end incident management experience
- Experience with communicating and presenting to all stakeholders
- Experience with Cloud Security compliance and audits
- Detail-oriented. The ideal candidate is one who naturally digs as deep as they need to understand the why
- Knowledge on GCP will be added advantage
- Manage and monitor customer instances for uptime and reliability
- Staff scheduling and planning to ensure 24x7x365 coverage for cloud operations
- Customer facing, excellent communication skills, team management, troubleshooting
A.P.T Portfolio is a high-frequency trading firm that specialises in quantitative trading & investment strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor, or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement security best practices and a data isolation policy between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize the latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backups of all crucial data/binaries/code, etc., in a secure manner, at intervals warranted by the use case. Ensure that recovery from backup is tested and seamless.
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test, and roll out new operating systems for all production, simulation, and desktop environments. Work closely with developers to highlight the new performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef, etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third party tools to put in a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - This shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, build tools like Jenkins/Bamboo, etc.
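The backup automation item above might start from something like this minimal sketch (archive naming and scope are illustrative; a production job would also encrypt, ship off-host, prune old archives, and verify restores, as the posting requires):

```python
import os
import tarfile
from datetime import datetime, timezone

def backup_directory(src_dir, dest_dir):
    """Create a timestamped, compressed archive of `src_dir` in `dest_dir`.

    Returns the path of the archive. This covers only the archiving step;
    secure transfer and tested recovery are deliberately out of scope here.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = os.path.join(dest_dir, f"backup-{stamp}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return archive
```

Scheduling this via cron or a workflow engine at the cadence the use case warrants, and periodically restoring an archive into a scratch directory, covers the "tested and seamless recovery" requirement.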
Qualifications
- Bachelor's or Master's degree, preferably in CSE/IT
- 10+ years of relevant experience in sys-admin function
- Must have strong knowledge of IT infrastructure, Linux, networking, and grid computing.
- Must have a strong grasp of automation & data management tools.
- Proficient in scripting languages, especially Python
Desirables
- Professional attitude; a cooperative and mature approach to work; focused, structured, and well-considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
- You have a Bachelor's degree in computer science or equivalent
- You have at least 7 years of DevOps experience.
- You have deep understanding of AWS and cloud architectures/services.
- You have expertise within the container and container orchestration space (Docker, Kubernetes, etc.).
- You have experience working with infrastructure provisioning tools like CloudFormation, Terraform, Chef, Puppet, or others.
- You have experience enabling CI/CD pipelines using tools such as Jenkins, AWS Code Pipeline, Gitlab, or others.
- You bring a deep understanding and application of computer science fundamentals: data structures, algorithms, and design patterns.
- You have a track record of delivering successful solutions and collaborating with others.
- You take security into account when building new systems.
We are looking for an experienced software engineer with a strong background in DevOps and handling traffic & infrastructure at scale.
Responsibilities :
Work closely with product engineers to implement scalable and highly reliable systems.
Scale existing backend systems to handle ever-increasing amounts of traffic and new product requirements.
Collaborate with other developers to understand & set up the tooling needed for Continuous Integration/Delivery.
Build & operate infrastructure to support website, backend cluster, ML projects in the organization.
Monitor and track the performance and reliability of our services and software to meet promised SLAs
2+ years of experience working on distributed systems and shipping high-quality product features on schedule
Intimate knowledge of the whole web stack (Front end, APIs, database, networks etc.)
Ability to build highly scalable, robust, and fault-tolerant services and stay up-to-date with the latest architectural trends
Experience with container based deployment, microservices, in-memory caches, relational databases, key-value stores
Hands-on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, RDS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch)
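When provisioning cloud infrastructure or calling cloud APIs at scale, transient failures are routine, and a common pattern is retry with exponential backoff. A generic sketch (attempt counts and delays are illustrative, not tuned values):

```python
import time

def retry(op, attempts=4, base_delay=0.01):
    """Retry a flaky zero-argument callable with exponential backoff.

    `op` stands in for any transient-failure-prone call (e.g. a cloud API
    request). The delay doubles after each failed attempt; the final
    failure is re-raised so callers can handle or page on it.
    """
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

AWS SDKs such as boto3 ship equivalent retry logic built in; a hand-rolled version like this is mainly useful for wrapping tools that do not.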


