
Job Title - Sr. Administrator, IT Infrastructure Storage
Job Duties -
- Administer IT storage recovery and backup systems. Perform complex provisioning, advanced maintenance, data replication, disaster recovery, data migration, and documentation.
- Participate in ongoing maintenance, utilization, availability, and security of storage infrastructure.
- Perform IT implementations, performance analysis and optimization, monitoring, problem resolution, upgrade planning and execution, and process creation and documentation.
- Analyze and work to improve the quality of services offered by IT. Participate in ongoing technology evaluations to keep up with technology trends and industry standards.
- Script in Perl or Python; perform capacity planning and growth projections.
- Resolve complex IT issues as they pertain to the environment and keep abreast of storage and backup technology.
- Experience working with Isilon storage is a must-have skill.
- Experience with ransomware protection, anomaly detection, air-gap, and vaulting technologies is a plus.
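The capacity-planning and growth-projection duty above can be sketched with a few lines of Python. All figures here (current usage, growth rate, cluster capacity) are hypothetical examples, not numbers from the posting:

```python
# Project how long a storage cluster lasts under compound monthly growth.
def months_until_full(used_tb: float, capacity_tb: float, monthly_growth: float) -> int:
    """Return how many whole months until usage exceeds capacity,
    assuming compound growth of `monthly_growth` (e.g. 0.04 = 4%/month)."""
    months = 0
    while used_tb <= capacity_tb:
        used_tb *= 1 + monthly_growth
        months += 1
    return months

# Example: 600 TB used on an 800 TB cluster, growing 4% per month.
print(months_until_full(600, 800, 0.04))  # months of headroom remaining
```

In practice the growth rate would come from historical utilization data rather than a fixed constant, but the arithmetic is the same.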
Education and Experience Requirements -
The position requires a Bachelor's degree in Computer Science, Computer Engineering, or a related field, plus 5 years of post-baccalaureate progressive experience in IT storage environments.
Skills Requirements - Experience must include:
- Expertise in Linux and Windows; cloud knowledge is a plus
- Utilizing OS-native data-copy tools such as rsync and robocopy
- Setting up NFS, SMB, snapshots, replication, SnapMirror, and SyncIQ
- NAS storage technologies such as NetApp, PowerScale, Isilon, Qumulo, and Weka
- Dell EMC InsightIQ, DataIQ, and ESRS; NetApp tools (OCUM/AIQUM)
- Rubrik, Cohesity, Veeam, and other comparable backup technologies
- Monitoring and alerting tools such as Prometheus, SolarWinds, Telegraf, and Grafana are a plus
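The incremental-copy idea behind tools like rsync and robocopy can be illustrated with a small Python sketch: copy a file only when the destination is missing, a different size, or older. This is a simplification for illustration; real rsync also does delta transfer and checksum comparison:

```python
# Minimal sketch of mtime/size-based incremental copy (rsync-style sync).
import os
import shutil

def incremental_copy(src_dir: str, dst_dir: str) -> list[str]:
    """Copy files from src_dir to dst_dir, skipping up-to-date ones.
    Returns the list of relative paths actually copied."""
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, src_dir)
            dst = os.path.join(dst_dir, rel)
            src_stat = os.stat(src)
            # Copy when destination is missing, a different size, or older.
            if (not os.path.exists(dst)
                    or os.stat(dst).st_size != src_stat.st_size
                    or os.stat(dst).st_mtime < src_stat.st_mtime):
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves mtime, like rsync -t
                copied.append(rel)
    return copied
```

Running it twice against an unchanged source copies nothing on the second pass, which is the whole point of incremental sync.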

Similar jobs
A DevSecOps Staff Engineer integrates security into DevOps practices, designing secure CI/CD
pipelines, building and automating secure cloud infrastructure and ensuring compliance across
development, operations, and security teams.
Responsibilities
• Design, build and maintain secure CI/CD pipelines utilizing DevSecOps principles and
practices to increase automation and reduce human involvement in the process
• Integrate SAST, DAST, SCA, and similar tools into pipelines to enable automated application building, testing, securing, and deployment.
• Implement security controls for cloud platforms (AWS, GCP), including IAM, container
security (EKS/ECS), and data encryption for services like S3 or BigQuery, etc.
• Automate vulnerability scanning, monitoring, and compliance processes by collaborating
with DevOps and Development teams to minimize risks in deployment pipelines.
• Suggest architecture and process improvements.
• Review cloud deployment architectures and implement required security controls.
• Mentor other engineers on security practices and processes.
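The scanner-integration responsibility above usually boils down to a severity gate in the pipeline: collect findings from the SAST/DAST/SCA step and fail the build when anything crosses a threshold. The findings format below is a hypothetical illustration; real tools (Semgrep, SonarQube, etc.) each have their own report schemas:

```python
# Hedged sketch of a CI security gate over scanner findings.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(findings: list[dict], fail_at: str = "high") -> bool:
    """Return True if the build may proceed (no finding at or above fail_at)."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

findings = [
    {"rule": "sql-injection", "severity": "high"},
    {"rule": "weak-hash", "severity": "medium"},
]
print(gate(findings))               # False: the high-severity finding blocks
print(gate(findings, "critical"))   # True: nothing reaches critical
```

In a real pipeline this check would run as a step after the scan and exit non-zero to stop the deployment stage.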
Requirements
• Bachelor's degree, preferably in CS or a related field, or equivalent experience
• 10+ years of overall industry experience with the AWS Certified Security - Specialty certification
• Must have implementation experience with security tools and processes related to SAST, DAST, and pen testing
• AWS-specific: 5+ years’ experience with using a broad range of AWS technologies (e.g.
EC2, RDS, ELB, S3, VPC, CloudWatch) to develop and maintain an Amazon AWS based
cloud solution, with an emphasis on best practice cloud security.
• Experienced with CI/CD tool chain (GitHub Actions, Packages, Jenkins, etc.)
• Passionate about solving security challenges and staying informed of emerging security threats and available security technologies.
• Must be familiar with the OWASP Top 10 Security Risks and Controls
• Proficiency in one or more scripting languages: Python, Bash
• Good knowledge of Kubernetes, Docker Swarm, or other cluster management software.
• Willing to work in shifts as required
Good to Have
• AWS Certified DevOps Engineer
• Observability: Experience with system monitoring tools (e.g. CloudWatch, New Relic,
etc.).
• Experience with Terraform/Ansible/Chef/Puppet
• Operating Systems: Windows and Linux system administration.
Perks:
● Day off on the 3rd Friday of every month (one long weekend each month)
● Monthly Wellness Reimbursement Program to promote health and well-being
● Monthly Office Commutation Reimbursement Program
● Paid paternity and maternity leaves
Role Summary
Our CloudOps/DevOps teams are distributed across India, Canada, and Israel.
As a Manager, you will lead teams of Engineers and champion configuration management, cloud technologies, and continuous improvement. The role involves close collaboration with global leaders to ensure our applications, infrastructure, and processes remain scalable, secure, and supportable. You will work closely with Engineers across Dev, DevOps, and DBOps to design and implement solutions that improve customer value, reduce costs, and eliminate toil.
Key Responsibilities
- Guide the professional development of Engineers and support teams in meeting business objectives
- Collaborate with leaders in Israel on priorities, architecture, delivery, and product management
- Build secure, scalable, and self-healing systems
- Manage and optimize deployment pipelines
- Triage and remediate production issues
- Participate in on-call escalations
Key Qualifications
- Bachelor’s in CS or equivalent experience
- 3+ years managing Engineering teams
- 8+ years as a Site Reliability or Platform Engineer
- 5+ years administering Linux and Windows environments
- 3+ years programming/scripting (Python, JavaScript, PowerShell)
- Strong experience with OS internals, virtualization, storage, networking, and firewalls
- Experience maintaining On-Prem (90%) and Cloud (10%) environments (AWS, GCP, Azure)
Job Description:
• Drive end-to-end automation from GitHub/GitLab/BitBucket to Deployment,
Observability and Enabling the SRE activities
• Guide operations support (setup, configuration, management, troubleshooting) of
digital platforms and applications
• Solid understanding of DevSecOps Workflows that support CI, CS, CD, CM, CT.
• Deploy, configure, and manage SaaS and PaaS cloud platform and applications
• Provide Level 1 (OS, patching) and Level 2 (app server instance troubleshooting) support
• DevOps programming: writing scripts and building monitoring tools for operations, server instances, applications, and databases
• Set up and manage continuous-build and project-management environments: Jenkins X/GitHub Actions/Tekton, Git, Jira
• Design secure networks, systems, and application architectures
• Collaborating with cross-functional teams to ensure secure product development
• Disaster recovery, network forensics analysis, and pen-testing solutions
• Planning, researching, and developing security policies, standards, and procedures
• Awareness training of the workforce on information security standards, policies, and
best practices
• Installation and use of firewalls, data encryption and other security products and
procedures
• Maturity in understanding compliance, policy and cloud governance and ability to
identify and execute automation.
• At Wesco, we focus more on solutions than on problems. We celebrate innovation
and creativity.
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Job Description
The client’s department DPS (Digital People Solutions) offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which comprises the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented DevOps-Engineers with focus on Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.
The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for customers and employees alike.
Responsibilities:
Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, APM server, APM agents, and interface configuration.
Create and develop regular "Default Dashboards" for visualizing metrics from sources such as the Apache web server, application servers, and databases.
Improve and fix bugs in installation and automation routines.
Monitor CPU usage, security findings, and AWS alerts.
Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.
Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).
Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.
Integrate data from AWS CloudWatch.
Document all relevant information and train involved personnel in the used technologies.
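The "Default Alerting" responsibility above can be sketched as signature matching over log messages: scan incoming lines for known failure patterns (OOM, datasource, LDAP) and emit alert events. The patterns and message formats below are hypothetical examples, not the client's actual rules; in the real stack this logic would live in Elasticsearch alerting rules rather than application code:

```python
# Illustrative sketch of signature-based log alerting (OOM/datasource/LDAP).
import re

ALERT_PATTERNS = {
    "oom": re.compile(r"java\.lang\.OutOfMemoryError"),
    "datasource": re.compile(r"cannot get connection from datasource", re.I),
    "ldap": re.compile(r"LDAP.*(timeout|bind failed)", re.I),
}

def classify(messages: list[str]) -> list[tuple[str, str]]:
    """Return (alert_name, message) pairs for every matching log line."""
    hits = []
    for msg in messages:
        for name, pattern in ALERT_PATTERNS.items():
            if pattern.search(msg):
                hits.append((name, msg))
    return hits
```

A matched pair would then be forwarded to the alerting channel (email, JIRA ticket, etc.) by the surrounding pipeline.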
Requirements:
Experience with Elastic Stack (ELK) components and related technologies.
Proficiency in automation tools like Ansible and CloudFormation.
Strong knowledge of AWS Cloud services.
Experience in creating and managing dashboards and alerts.
Familiarity with IAM roles and rights management.
Ability to document processes and train team members.
Excellent problem-solving skills and attention to detail.
Skills & Requirements
Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.
The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
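The SLA work described above starts from simple error-budget arithmetic: an availability target converts directly into the downtime the telemetry and alerting platforms must police. A minimal sketch:

```python
# Convert an uptime SLA percentage into an allowed-downtime error budget.
def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime permitted over `days` at `sla_percent` uptime."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

# A 99.9% SLA over a 30-day month leaves roughly 43.2 minutes of budget.
print(round(allowed_downtime_minutes(99.9), 1))
```

Alerting thresholds are then typically set as burn rates against this budget rather than as raw error counts.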
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working in HIPAA / Hi-Trust frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.
Senior Devops Engineer
Who are we?
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the “Next” in the “Now” for our Clients. We specialise in Cloud Data Engineering, AI/Machine Learning and Advanced Cloud infra tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to cloud.
What do we believe?
- Best practices are overrated
- Implementing best practices can only make one ‘average’.
- Honesty and Transparency
- We believe in naked truth. We do what we tell and tell what we do.
- Client Partnership
- Client - Vendor relationship: No. We partner with clients instead.
- And our sales team comprises 100% of our clients.
How do we work?
It’s all about being Happier first. And rest follows. Searce work culture is defined by HAPPIER.
- Humble: Happy people don’t carry ego around. We listen to understand; not to respond.
- Adaptable: We are comfortable with uncertainty. And we accept changes well. As that’s what life's about.
- Positive: We are super positive about work & life in general. We love to forget and forgive. We don’t hold grudges. We don’t have time or adequate space for it.
- Passionate: We are as passionate about the great street-food vendor across the street as about Tesla’s new model and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
- Innovative: Innovate or Die. We love to challenge the status quo.
- Experimental: We encourage curiosity & making mistakes.
- Responsible: Driven. Self motivated. Self governing teams. We own it.
Are you the one? Quick self-discovery test:
- Love for cloud: When was the last time your dinner entailed an act on “How would ‘Jerry Seinfeld’ pitch Cloud platform & products to this prospect” and your friend did the ‘Sheldon’ version of the same thing.
- Passion for sales: When was the last time you went at a remote gas station while on vacation, and ended up helping the gas station owner saasify his 7 gas stations across other geographies.
- Compassion for customers: You listen more than you speak. When you do speak, people feel the need to listen.
- Humor for life: When was the last time you told a concerned CEO, ‘If Elon Musk can attempt to take humanity to Mars, why can’t we take your business to run on cloud?’
Introduction
When was the last time you thought about rebuilding your smart phone charger using solar panels on your backpack OR changed the sequencing of switches in your bedroom (on your own, of course) to make it more meaningful OR pointed out an engineering flaw in the sequencing of traffic signal lights to a fellow passenger, while he gave you a blank look? If the last time this happened was more than 6 months ago, you are a dinosaur for our needs. If it was less than 6 months ago, did you act on it? If yes, then let’s talk.
We are quite keen to meet you if:
- You eat, dream, sleep and play with Cloud Data Store & engineering your processes on cloud architecture
- You have an insatiable thirst for exploring improvements, optimizing processes, and motivating people.
- You like experimenting, taking risks and thinking big.
3 things this position is NOT about:
- This is NOT just a job; this is a passionate hobby for the right kind.
- This is NOT a boxed position. You will code, clean, test, build and recruit & energize.
- This is NOT a position for someone who likes to be told what needs to be done.
3 things this position IS about:
- Attention to detail matters.
- Roles, titles, ego does not matter; getting things done matters; getting things done quicker & better matters the most.
- Are you passionate about learning new domains & architecting solutions that could save a company millions of dollars?
Roles and Responsibilities
This is an entrepreneurial Cloud/DevOps Lead position that evolves into the Director - Cloud Engineering role. This position requires fanatic iterative improvement ability: architect a solution, code, research, understand customer needs, research more, rebuild and re-architect; you get the drift. We are seeking hard-core-geeks-turned-successful-techies who are interested in seeing their work used by millions of users the world over.
Responsibilities:
- Consistently strive to acquire new skills on Cloud, DevOps, Big Data, AI and ML technologies
- Design, deploy and maintain Cloud infrastructure for Clients – Domestic & International
- Develop tools and automation to make platform operations more efficient, reliable and reproducible
- Create Container Orchestration (Kubernetes, Docker), strive for full automated solutions, ensure the up-time and security of all cloud platform systems and infrastructure
- Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools
- Provide business, application, and technology consulting in feasibility discussions with technology team members, customers and business partners
- Take initiatives to lead, drive and solve during challenging scenarios
Requirements:
- 3+ years of experience in Cloud Infrastructure and Operations domains
- Experience with Linux systems, RHEL/CentOS preferred
- Specialize in one or two cloud deployment platforms: AWS, GCP, Azure
- Hands on experience with AWS services (EC2, VPC, RDS, DynamoDB, Lambda)
- Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Knowledge on Configuration Management tools such as Ansible, Terraform, Puppet, Chef
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
- Deep experience in customer facing roles with a proven track record of effective verbal and written communications
- Dependable and good team player
- Desire to learn and work with new technologies
Key Success Factors
- Are you
- Likely to forget to eat, drink or pee when you are coding?
- Willing to learn, re-learn, research, break, fix, build, re-build and deliver awesome code to solve real business/consumer needs?
- An open source enthusiast?
- Absolutely technology agnostic and believe that business processes define and dictate which technology to use?
- Ability to think on your feet, and follow-up with multiple stakeholders to get things done
- Excellent interpersonal communication skills
- Superior project management and organizational skills
- Logical thought process; ability to grasp customer requirements rapidly and translate the same into technical as well as layperson terms
- Ability to anticipate potential problems, determine and implement solutions
- Energetic, disciplined, with a results-oriented approach
- Strong ethics and transparency in dealings with clients, vendors, colleagues and partners
- Attitude of ‘give me 5 sharp freshers and 6 months and I will rebuild the way people communicate over the internet’.
- You are customer-centric, and feel strongly about building scalable, secure, quality software. You thrive and succeed in delivering high quality technology products in a growth environment where priorities shift fast.
Mandatory Skills Sets
- Excellent problem-solving skills in technical challenges
- Deep knowledge of at least one cloud platform (AWS Preferred)
- Understanding of Latest cloud computing technologies
- Experience in architecting solutions based on knowledge of infrastructure & application architectures including the integration approaches
- Complete hands-on with ability to grasp evolving technologies and coding languages
- Excellent communication skills which would involve customer facing role
- Design thinking
- Customer facing skills and strong technical capabilities to review the teams work as well as guide the team
- Experience working/building/contributing to proposals for architecture, estimations
Preferred Skills Sets
- Experience architecting infrastructure solutions using both Linux/Unix and Windows with specific recommendations on server, load balancing, HA/DR, & storage architectures.
- Experience architecting or deploying Cloud/Virtualization solutions in enterprise customers.
- Person must have performed Application Architect Role for 3+ years
- AWS platform specific experience a bonus.
- Enterprise application and database architecture a bonus.
Experience: 12 - 20 years
Responsibilities :
The Cloud Solution Architect/Engineer specializing in migrations is a cloud role in the project delivery cycle with hands-on experience migrating customers to the cloud.
Demonstrated experience in cloud infrastructure project deals for hands-on migration to public clouds such as Azure.
Strong background in Linux/Unix and/or Windows administration
Ability to use wide variety of open source technologies.
Closely work with Architects and customer technical teams in migrating applications to Azure cloud in Architect Role.
Mentor and monitor the junior developers and track their work.
Design as per best practices and industry-standard coding practices
Ensure services are built for performance, scalability, fault tolerance and security with reusable patterns.
Recommend best practices and standards for Azure migrations
Define coding best practices for high performance and guide the team in adopting the same
Skills:
Mandatory:
Experience with cloud migration technologies such as Azure Migrate
Azure trained / certified architect – Associate or Professional Level
Understanding of hybrid cloud solutions and experience of integrating public cloud into traditional hosting/delivery models
Strong understanding of cloud migration techniques and workflows (on premise to Cloud Platforms)
Configuration, migration and deployment experience in Azure apps technologies.
High Availability and Disaster recovery implementations
Experience architecting and deploying multi-tiered applications.
Experience building and deploying multi-tier, scalable, and highly available applications using Java, Microsoft and Database technologies
Experience in performance tuning, including load balancing, web servers, content delivery networks, and caching (content and API)
Experience in large scale data center migration
Experience of implementing architectural governance and proactively managing issues and risks throughout the delivery lifecycle.
Good familiarity with the disciplines of enterprise software development such as configuration & release management, source code & version controls, and operational considerations such as monitoring and instrumentation
Experience of consulting or service provider roles (internal, or external);
Experience using database technologies like Oracle, MySQL and understanding of NoSQL is preferred.
Experience in designing or implementing data warehouse solutions is highly preferred.
Experience in automation/configuration management using Puppet, Chef, Ansible, Saltstack, Bosh, Terraform or an equivalent.
Experience with source code management tools such as GitHub, GitLab, Bitbucket or equivalent
Experience with SQL and NoSQL databases such as MySQL.
Solid understanding of networking and core Internet Protocols such as TCP/IP, DNS, SMTP, HTTP and routing in distributed networks.
A working understanding of code and script such as: PHP, Python, Perl and/or Ruby.
A working understanding with CI/CD tools such as Jenkins or equivalent
A working understanding of scheduling and orchestration with tools such as Kubernetes, Mesos, Swarm, or equivalent.
