Deploying Data Immutability with the Veeam Hardened Repository ISO (Part 1 of 3)

The History of the Veeam Hardened Repository ISO

If you’ve been in the Veeam game over the past two years, you’re probably aware that the Product Development team has been working hard on a turn-key solution for deploying a Veeam Hardened Repository (VHR) from an ISO. Historically, if you wanted a VHR, you had to deploy a Linux server and set up a repository backed by the XFS filesystem to take advantage of both reflink cloning and the native immutability capability provided by the filesystem’s immutability flags. Setting up a VHR by hand is a fairly manual process, especially once you harden the repo on top of building it, and folks (myself included) have long awaited this turn-key solution that Hannes Kasparick has been working on.
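For reference, the manual approach looked roughly like the sketch below. The device and mount point are placeholders for illustration, and in practice the Veeam transport service sets and clears the immutability flags itself; the `chattr` lines just demonstrate the underlying XFS mechanism:

```shell
# Placeholder device and mount point -- adjust for your environment.
DEV=/dev/sdb1
MNT=/mnt/veeam-repo

# Create an XFS filesystem with a 4KB block size and reflink enabled,
# which is what makes fast clone (block cloning) possible.
mkfs.xfs -b size=4096 -m reflink=1,crc=1 "$DEV"

mkdir -p "$MNT"
mount "$DEV" "$MNT"

# The immutability capability is the filesystem's immutable attribute.
# Veeam manages this on backup files automatically; shown manually here:
touch "$MNT/example.vbk"
chattr +i "$MNT/example.vbk"   # file can no longer be modified or deleted
lsattr "$MNT/example.vbk"      # the 'i' flag should appear in the listing
chattr -i "$MNT/example.vbk"   # clearing the flag requires root
```

While the immutable attribute is set, even root cannot overwrite or delete the file without first removing the flag, which is why hardening the server (restricting who can run `chattr` at all) matters so much.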

When the VHR was first showcased at VeeamON 2023 by Hannes and Rick Vanover, it was clearly still a work in progress. The backing OS was Ubuntu Server LTS; however, one of the best ways to harden a server is via a DISA STIG, and at the time Ubuntu couldn’t be hardened via STIG without manual work and scripting, though that has since changed. There were other bugs to work through as well, but this was still in development and not for production use. After the initial public preview in 2023, the next preview at VeeamON 2024 showcased a transition to Rocky Linux. In the months that followed, a private beta of the new VHR ISO was released to testers, and on September 30th, 2024, Anton Gostev announced in the Veeam R&D Forums that a public beta was available. Only a month later, it moved on to a public release with Experimental Support, meaning you can contact Veeam support for assistance, though you may not want to put this release into production yet: SLAs are slower, and hotfixes and patches are a lower priority.

Our Backing VHR Hardware

For a bit of background on the hardware we’re using here: a Dell PowerEdge R550 with a BOSS-S2 480GB M.2 RAID 1 card for the Linux OS, and 8x 12TB 7.2k 12Gb SAS disks in a RAID 5 volume behind a PERC H755 SAS controller, providing approximately 80TB of local storage for the VHR repository. Connectivity is provided by a Broadcom 57414 10/25Gb OCP 3.0 NIC. To preserve the existing 25Gb links for future host additions, the server connects to the top-of-rack switching via a pair of 40Gb QSFP+ to 4x 10Gb SFP+ breakout cables. These can later be replaced with their 100Gb QSFP28 to 4x 25Gb SFP28 counterparts, since the 10Gb links that originally connected the ToR switches to the core switch stack are no longer in use. The primary array is a Dell PowerStore 500T, paired with two Dell PowerEdge R650 VMware vSphere ESXi 7.0.3 hosts using redundant 25Gb iSCSI links. This server has been sitting in the rack patiently awaiting the ISO release for over a year, but it finally gets its day in action and will eventually replace a dated Synology RackStation that currently serves as the primary backup repository over multiple 1Gb iSCSI links.

In part two of this three-part blog series, we’ll download the VHR ISO and get started by deploying our first VHR from it.

Why couldn’t the little boy go see the pirate movie?
Because it was rated “Arrrgh!”
