Job Title - Sr. Administrator, IT Infrastructure Storage
Job Duties -
- Administer IT storage recovery and backup systems. Perform complex provisioning, advanced maintenance, data replication, disaster recovery, data migration, and documentation.
- Participate in ongoing maintenance, utilization, availability, and security of storage infrastructure.
- Perform IT implementations, performance analysis and optimization, monitoring, problem resolution, upgrade planning and execution, and process creation and documentation.
- Analyze and work to improve the quality of services offered by IT. Participate in ongoing technology evaluations to keep up with technology trends and industry standards.
- Script in Perl or Python; perform capacity planning and growth projections.
- Resolve complex IT issues as they pertain to the environment and keep abreast of storage and backup technology.
- Experience working with Isilon storage is a must-have skill.
- Experience with ransomware protection, anomaly detection, air-gap, and vaulting technologies is a plus.
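The capacity-planning and growth-projection duty above can be sketched in Python. This is an illustrative example only; the sample figures, the 500 TB pool, and the linear-growth assumption are invented, not taken from the posting.

```python
# Hypothetical capacity-planning sketch: fit a linear trend to monthly
# storage-usage samples and project how long until a pool fills up.
# All figures below are made-up illustrations, not real sizing guidance.

def months_until_full(usage_tb, capacity_tb):
    """Estimate months until capacity using a least-squares linear fit."""
    n = len(usage_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(usage_tb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, usage_tb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage flat or shrinking; no projected fill date
    return (capacity_tb - usage_tb[-1]) / slope

# Example: 6 monthly samples growing roughly 10 TB/month toward a 500 TB pool.
samples = [400, 410, 421, 429, 440, 450]
print(round(months_until_full(samples, 500), 1))  # ~5.0 months remaining
```

In practice a storage administrator would feed this from monitoring exports (e.g. InsightIQ or OCUM reports) rather than hard-coded samples.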
Education and Experience Requirements -
The position requires a Bachelor's degree in Computer Science, Computer Engineering, or a related field, plus 5 years of post-baccalaureate progressive experience in IT storage environments.
Skills Requirements - Experience must include:
- Expertise in Linux and Windows; cloud knowledge is a plus
- Using OS-level data-copy tools such as rsync and Robocopy
- Setting up NFS, SMB, snapshots, replication, SnapMirror, and SyncIQ
- NAS storage technologies such as NetApp, PowerScale, Isilon, Qumulo, and Weka
- Dell EMC InsightIQ, DataIQ, and ESRS; NetApp tools (OCUM/AIQUM)
- Rubrik, Cohesity, Veeam, and other comparable backup technologies
- Monitoring and alerting tools such as Prometheus, SolarWinds, Telegraf, and Grafana are a plus
Roles and Responsibilities:
· Support the design and implementation of performance management systems, including goal-setting, performance evaluations, and employee development plans.
· Conduct talent mapping aligned with the MHFA action plan.
· Handle the performance management cycle process from start to end and monitor timely and accurate completion of the appraisals (e.g. forms and templates, communications).
· Analyse performance data and generate reports to identify trends, opportunities, and areas for improvement.
· Provide guidance and support to the team on skill development and competencies, and assess skill gaps.
· Develop and work with new staff on Key Result Areas (KRAs) and Key Performance Indicators (KPIs) aligned with organizational goals.
· Develop and maintain job descriptions and specifications, keeping them up to date, accurate, and compliant for all positions.
· Conduct and coordinate the interview process, collaborating with hiring managers to understand their specific needs.
· Monitor the progress of tasks and key HR metrics.
· Develop individual career maps for each staff member, working closely with them to coach and support their development.
· Provide regular updates to management on the team's progress with their tasks and link performance management accordingly.
· Build HR systems and take charge of automating HR activities, processes, and information.
· Maintain a thorough understanding of the HRMS portal.
Job Requirements:
- Minimum of 5 to 7 years of relevant work experience in Human Resources with proven ability to manage a set of processes and to manage teams.
- Minimum of a Bachelor's degree in a relevant field (e.g., Human Resources, Business Administration) is required.
· Strong communication, interpersonal, and organizational skills.
· Strong problem-solving and decision-making abilities.
· Demonstrated ability to handle confidential information with discretion.
· Proficiency with Microsoft Office / Google Docs / Sheets
· Strong knowledge of Windows and Linux
· Experience with version control systems such as Git
· Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK
· Basic understanding of SQL commands
· Experience working with Azure DevOps
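As a rough illustration of the "basic SQL commands" requirement above, here is a minimal sketch using Python's built-in sqlite3 module. The table, names, and salary figures are invented for the example.

```python
import sqlite3

# Minimal sketch of basic SQL commands (CREATE, INSERT, SELECT with an
# aggregate) using Python's built-in sqlite3. Data is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Asha", "HR", 50000), ("Ravi", "IT", 70000), ("Meera", "IT", 65000)],
)

# SELECT with a GROUP BY aggregate: headcount and average salary per dept.
rows = conn.execute(
    "SELECT dept, COUNT(*), AVG(salary) FROM employees GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)  # [('HR', 1, 50000.0), ('IT', 2, 67500.0)]
conn.close()
```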
🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐
Hello
We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!
Position: Data Engineer
Location: Gurugram (Gurgaon)
Experience: 5+ years
Key Skills:
- Python
- Spark, Pyspark
- Data Governance
- Cloud (AWS/Azure/GCP)
Main Responsibilities:
- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.
- Implement ETL processes for telemetry-based and stationary test data.
- Support in defining data governance, including data lifecycle management.
- Develop large-scale data processing engines and real-time search and analytics based on time series data.
- Ensure technical, methodological, and quality aspects.
- Support CI/CD processes.
- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.
- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.
Qualification Requirements:
- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.
- Proficiency in Python and the PyData stack (Pandas/NumPy).
- Experience in high-level programming languages (C#/C++/Java).
- Familiarity with scalable processing environments like Dask (or Spark).
- Proficiency in Linux and scripting languages (Bash scripts).
- Experience in containerization and orchestration of containerized services (Kubernetes).
- Education in database technologies (SQL/OLAP and NoSQL).
- Interest in Big Data storage technologies (Elastic, ClickHouse).
- Familiarity with Cloud technologies (Azure, AWS, GCP).
- Fluent English communication skills (speaking and writing).
- Ability to work constructively with a global team.
- Willingness to travel for business trips during development projects.
Preferable:
- Working knowledge of vehicle architectures, communication, and components.
- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).
- Experience in time-series processing.
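The time-series processing mentioned above can be illustrated with a short pure-Python sketch; a trailing rolling mean is a common first smoothing step for telemetry data. The window size and the speed samples are assumptions for the example, not values from the posting.

```python
# Illustrative time-series smoothing: a trailing rolling mean over
# telemetry samples (e.g. vehicle speed). Window and data are made up.

def rolling_mean(series, window):
    """Trailing rolling mean; emits a value once the window is full."""
    out = []
    for i in range(window - 1, len(series)):
        out.append(sum(series[i - window + 1 : i + 1]) / window)
    return out

speed_kmh = [60, 62, 61, 65, 70, 68]
print([round(v, 2) for v in rolling_mean(speed_kmh, 3)])
# [61.0, 62.67, 65.33, 67.67]
```

Production pipelines would typically use Pandas' `rolling()` or Spark window functions instead of a hand-rolled loop, but the idea is the same.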
How to Apply:
Interested candidates, please share your updated CV/resume with me.
Thank you for considering this exciting opportunity.
Type, Location: Full Time @ Anywhere in India
Desired Experience: 2+ years
Job Description
What You’ll Do
● Deploy, automate and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, Digital Ocean & Google Cloud Platform.
● Take charge of DevOps activities for CI/CD with the latest tech stacks.
● Acquire industry-recognized professional cloud certifications (AWS/Google) in the capacity of developer or architect.
● Devise multi-region technical solutions.
● Implement the DevOps philosophy and strategy across different domains in the organisation.
● Build automation at various levels, including code deployment, to streamline the release process.
● Be responsible for the architecture of cloud services.
● Provide 24×7 monitoring of the infrastructure.
● Use programming/scripting in your day-to-day work
● Have shell experience - for example, PowerShell on Windows or Bash on *nix
● Use a version control system, preferably Git
● Hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO)
● Scalability, HA and troubleshooting of web-scale applications.
● Infrastructure-As-Code tools like Terraform, CloudFormation
● CI/CD systems such as Jenkins, CircleCI
● Container technologies such as Docker, Kubernetes, OpenShift
● Monitoring and alerting systems: e.g. NewRelic, AWS CloudWatch, Google StackDriver, Graphite, Nagios/ICINGA
What you bring to the table
● Hands-on experience with cloud compute services, Cloud Function, networking, load balancing, and autoscaling.
● Hands-on with GCP/AWS compute and networking services, e.g. Compute Engine, App Engine, Kubernetes Engine, Cloud Function, networking (VPC, Firewall, Load Balancer), Cloud SQL, Datastore.
● DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB, or other NoSQL systems
● Configuration management tools such as Ansible/Chef/Puppet
Bonus if you have…
● Basic understanding of networking (routing, switching, DNS) and storage
● Basic understanding of protocols such as UDP/TCP
● Basic understanding of cloud computing and service models like SaaS and PaaS
● Basic understanding of Git or any other source-code repository
● Basic understanding of databases (SQL/NoSQL)
● Great problem-solving skills
● Good communication skills
● Adaptable and eager to learn
At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation.
Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.
F5 is looking for a Sr. Security Engineer with experience in building, integrating, operating, and maintaining robust security monitoring and auditing systems. F5’s Edge 2.0 platform provides a global, scalable, and secure way to deploy applications. In this position, you will build and maintain monitoring and audit systems across the platform that provide the visibility and alerts needed to defend it effectively.
Responsibilities:
- Collaborate with software architects, security defenders, Operations, SRE, compliance experts, and business leaders to understand the logical boundaries of the systems and identify the events to monitor, audits to maintain, alerts to tweak, as well as systems to integrate with
- You will continuously hunt for areas and metrics to be added into monitoring systems for better operational visibility, incident response capability, availability, and forensics capability of the overall platform
- You will participate in the definition of processes around change and inventory management and develop solutions to audit the changes
- You will work with other teams within the security organization to define communication and alerting protocols for effective and timely actions
- You will participate in defining and executing the Incident Response Plan for the platform and be responsible for providing necessary information during the response and forensics
- Demonstrate technical leadership in multiple domain areas, providing mentorship to other team members
Minimum qualifications:
- BS degree in Computer Science or equivalent with 5+ years of security operation and monitoring experience
- Experience with logging, monitoring, SIEM, dashboarding tools like AWS GuardDuty, Sumo, Grafana, SolarWinds, DataDog, Splunk, etc.
- Working knowledge of at least one Cloud Computing platform (e.g. Amazon AWS, Microsoft Azure, Google Compute etc.)
- Good understanding of how to handle logs from various systems, integrate with systems handling logs and metrics, and how to set up and tune alerts based on thresholds and policies
- Hands on experience with computer programming languages and/or scripting languages such as Python, Java, Shell
- Good understanding of complexities and security challenges in large-scale distributed systems
- Working knowledge of cloud orchestration systems such as Kubernetes, OpenStack, etc.
- Self-motivated and willing to delve into new areas and take on new challenges in an enthusiastic manner
- Excellent written and verbal communication skills
- Strong interpersonal, team building, and mentoring skills
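The threshold-based alerting described in the qualifications above can be sketched briefly. This is a hypothetical illustration, not F5's implementation: the sustained-duration rule (only page when a metric stays above a threshold for N consecutive samples) is a common way to tune out one-off spikes.

```python
# Hypothetical threshold-alert sketch: fire an alert only when a metric
# stays above a threshold for `sustained` consecutive samples, so a
# single transient spike does not page anyone. Values are illustrative.

def evaluate(samples, threshold, sustained=3):
    """Return sample indices at which an alert would fire."""
    alerts, run = [], 0
    for i, value in enumerate(samples):
        run = run + 1 if value > threshold else 0
        if run == sustained:  # fire once, when the breach is first sustained
            alerts.append(i)
    return alerts

# CPU% samples: one isolated spike (ignored) and one sustained breach.
cpu = [40, 95, 50, 91, 92, 96, 97, 60]
print(evaluate(cpu, 90))  # [5]
```

Real SIEM and monitoring tools (e.g. Prometheus alerting rules with a `for:` duration) express the same idea declaratively.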
Position Summary
The Cloud Engineer helps design solutions for, enable, migrate, and onboard clients to a secure cloud platform, offloading the heavy lifting so that clients can focus on their own business value creation.
Job Description
- Assessing existing customer systems and/or cloud environment to determine the best migration approach and supporting tools used
- Build a secure and compliant cloud environment, with a proven enterprise operating model, on-going cost optimisation, and day-to-day infrastructure management
- Provide and implement cloud solutions to reduce operational overhead and risk, and automates common activities, such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support their infrastructure
- Collaborate with internal service teams to meet the clients’ needs for their infrastructure and application deployments
- Troubleshoot complex infrastructure deployments, recreate customer issues, and build proof of concept environments that abide by cloud-best-practices & well architecture frameworks
- Apply advanced troubleshooting techniques to provide unique solutions to our customers’ individual needs
- Work on critical, highly complex customer problems that will span across multiple cloud platforms and services
- Identify and drive improvements on process and technical related issues. Act as an escalation point of contact for the clients
- Drive clients meetings & communication during reviews
Requirement:
- Degree in computer science or a similar field.
- At least 2 years of experience in the field of cloud computing.
- Experience with CI/CD systems.
- Strong in cloud services.
- Exposure to AWS/GCP and other cloud-based infrastructure platforms.
- Experience with AWS configuration and management - EC2, S3, EBS, ELB, IAM, VPC, RDS, CloudFront, etc.
- Exposure to architecting, designing, developing, and implementing cloud solutions on AWS or other cloud platforms such as Azure and Google Cloud.
- Proficient in the use and administration of all versions of MS Windows Server
- Experience with Linux and Windows system administration and web server configuration and monitoring
- Solid programming skills in Python, Java, Perl
- Good understanding of software design principles and best practices
- Good knowledge of REST APIs
- Should have hands-on experience with a deployment orchestration tool (Jenkins, UrbanCode, Bamboo, etc.)
- Experience with Docker, Kubernetes, and Helm charts
- Hands on experience in Ansible and Git repositories
- Knowledge in Maven / Gradle
- Azure, AWS, and GCP certifications are preferred.
- Troubleshooting and analytical skills.
- Good communication and collaboration skills.
- We will provide a competitive hike on the candidate's current or offered CTC, up to 25-30 LPA.
The ideal person for the role will:
Possess a keen mind for solving tough problems by partnering effectively with various teams and stakeholders
Be comfortable working in a fast-paced, dynamic, and agile framework
Focus on implementing an end-to-end automated chain
Responsibilities
_____________________________________________________
Strengthen the application and environment security by applying standards and best practices and providing tooling to make development workflows more secure
Identify systems that can benefit from automation, monitoring and infrastructure-as-code and develop and scale products and services accordingly.
Implement sophisticated alerts and escalation mechanisms using automated processes
Help increase production system performance with a focus on high availability and scalability
Continue to keep the lights on (day-to-day administration)
Programmatically create infrastructure in AWS, leveraging Autoscaling Groups, Security Groups, Route53, S3 and IAM with Terraform and Ansible.
Enable our product development team to deliver new code daily through Continuous Integration and Deployment Pipelines.
Create a secure production infrastructure and protect our customer data with continuous security practices and monitoring. Design, develop and scale infrastructure-as-code
Establish SLAs for service uptime, and build the necessary telemetry and alerting platforms to enforce them
Architect and build continuous data pipelines for data lakes, Business Intelligence and AI practices of the company
Remain up to date on industry trends, share knowledge among teams and abide by industry best practices for configuration management and automation.
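The SLA responsibility above comes down to simple error-budget arithmetic: an availability target implies a fixed allowance of downtime per period, which telemetry and alerting can then be built to track. The 99.9%/99.99% targets and the 30-day month below are assumed for illustration, not taken from the posting.

```python
# Illustrative SLA error-budget arithmetic: allowed downtime per period
# for a given availability SLO. Targets and period length are assumptions.

def error_budget_minutes(slo_percent, days=30):
    """Allowed downtime, in minutes, per `days`-day period for an SLO."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo_percent / 100)

print(round(error_budget_minutes(99.9), 1))   # 43.2 minutes per 30 days
print(round(error_budget_minutes(99.99), 2))  # 4.32 minutes per 30 days
```

Alerting platforms can compare measured downtime against this budget and page when the budget burns down faster than expected.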
Qualifications and Background
_______________________________________________________
Graduate degree in Computer Science and Engineering or related technologies
Work or research project experience of 5-7 years, with a minimum of 3 years of experience directly related to the job description
Prior experience working within HIPAA / HITRUST frameworks will be given preference
About Witmer Health
_________________________________________________________
We exist to make mental healthcare more accessible, affordable, and effective. At Witmer, we are on a mission to build a research-driven, global mental healthcare company to work on developing novel solutions - by harnessing the power of AI/ML and data science - for a range of mental illnesses like depression, anxiety, OCD, and schizophrenia, among others. Our first foray will be in the space of workspace wellness, where we are building tools to help individual employees and companies improve their mental wellness and raise productivity levels.