Deploying FreeNAS as iSCSI SAN for VMware vSphere

Joe Perks · Technical Tips

Many features in VMware’s vSphere require shared storage among vSphere/ESXi hosts. Traditional solutions for shared storage include VMware vSAN, VMware VSA, and commercial SANs.

VMware vSAN, VMware’s latest offering (currently in beta), requires at least three hosts, each with an SSD/HDD pair dedicated to vSAN. Built into the vSphere kernel, the vSAN beta requires at least vSphere 5.5 and vCenter Server 5.5 and is targeted at enterprise customers looking for an alternative to traditional commercial SAN systems.

VMware vSphere Storage Appliance (VSA) is targeted towards small to medium businesses and is deployed on either two or three vSphere hosts. Each host in the VSA cluster runs the VSA virtual machine and the cluster offers replicated datastores across the VSA VMs.

A commercial SAN is at the upper end of shared storage in terms of reliability and performance. Many enterprises use their existing SAN systems, or invest in a new one, when deploying VMware in production. But if the VMware infrastructure is used solely for testing and development, as this blog post’s architecture is, the high cost of a commercial SAN may not be worth it.

Below we step through installing FreeNAS to enable a single shared datastore across any number of vSphere hosts. FreeNAS is an open source project sponsored by iXsystems, based on an optimized version of FreeBSD that runs from a USB drive or CD, leaving as much hard drive space as possible available for your VMs. For our install we boot FreeNAS from a USB drive on every boot; it is never installed to a hard drive. Enable USB boot on the machine and use the instructions found here for burning the IMG available on the FreeNAS website.
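On a FreeBSD or Linux workstation, writing the image typically looks like the sketch below. The image filename and the USB device node are placeholders for your own, and dd overwrites the target device completely, so verify the device name before running it:

    # Write the FreeNAS image to the USB stick.
    # The filename and /dev/da0 are examples; double-check your device node.
    dd if=FreeNAS-9.x-RELEASE-x64.img of=/dev/da0 bs=64k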

Once you have a USB drive that will boot on your FreeNAS host machine, boot from it and follow the console instructions to adjust the IP settings. A static IP address is required, as it would be with any other SAN solution. Once the IP address is set, browse to that address in your web browser and set a root password. All administration of FreeNAS is done through this web portal, so make sure to set a secure password.

Once logged in, navigate to Storage and click “View Disks”. Here you can see the names and serial numbers of the detected disks. If the number of disks differs from what you expected, check your motherboard/HDD connections as well as your hardware compatibility. Follow the illustrated steps below to create a ZFS volume and connect it to each vSphere host.
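The same information is available from the FreeNAS shell if you prefer; camcontrol is part of the FreeBSD base system:

    # List the disks FreeBSD has detected, with model and serial numbers.
    camcontrol devlist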


Select “ZFS Volume Manager” under the Storage tab.

You can see I have selected three 3.0 TB HDDs and the RaidZ volume layout. RaidZ with three disks implies RaidZ1: all drives are consolidated into a single pool that can withstand the failure of one HDD without the ZFS volume failing. One disk’s worth of capacity goes to parity, which is why my grouping of three 3.0 TB HDDs displays “Capacity: 5.45 TiB” (two disks of usable space, or 6.0 TB in decimal units, comes to roughly 5.45 TiB in binary units). I have also selected a single 64 GB SSD to use as “Cache”. This drive becomes a Level 2 Adaptive Replacement Cache (L2ARC), which holds read-cache data that no longer fits in RAM. FreeNAS suggests 1 GB of RAM for every 1 TB of storage, and the bigger your read cache, the better your overall performance. This positions our FreeNAS with 32 GB of RAM for the first-level cache (ARC) and 64 GB of SSD L2ARC, which will suffice for our non-production workload. A rough command-line equivalent of this pool layout is sketched below.

Next, select “Services” from the top-level menu and click the configure icon next to iSCSI. The first page is “Target Global Configuration”; here we will not change any settings except “Discovery Auth Method”, which we set to “None”. For this blog post, we won’t examine setting up CHAP or Mutual CHAP.
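Here is that sketch. The device names (ada0 through ada3) and the pool name “tank” are placeholders, and in practice you should let the FreeNAS GUI build the pool so its configuration database stays consistent; the commands only illustrate what the ZFS Volume Manager does on your behalf:

    # Create a RaidZ1 pool across three disks, with one SSD as L2ARC.
    # Device names and the pool name are examples, not FreeNAS defaults.
    zpool create tank raidz ada0 ada1 ada2 cache ada3

    # Confirm the layout and the usable capacity.
    zpool status tank
    zpool list tank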


Select “Create zvol” from the lower menu under Storage

Create a single zvol under our ZFS volume. Separate zvols within a ZFS volume let us partition the pool of data if we wish. We will use 4 TB of the 5.45 TiB for VMware.
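The command-line equivalent, again with placeholder pool and zvol names, would be roughly:

    # Carve a 4 TB block device (zvol) out of the pool to back the iSCSI extent.
    zfs create -V 4T tank/vmware

    # Verify the zvol exists.
    zfs list -t volume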


Select “Portals” from the iSCSI tab

A portal is the network endpoint our vSphere hosts will contact when looking for the iSCSI service. Select the static IP address you set during setup and the default port 3260, unless you desire another port.
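Later, once the iSCSI service has been turned on, you can confirm the portal is listening from the FreeNAS shell; sockstat ships with FreeBSD:

    # Verify the iSCSI service is listening on the portal's port (3260).
    sockstat -4 -l | grep 3260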


Select “Initiators” from the iSCSI tab

Initiators are the IP addresses allowed to access the iSCSI resource. For this demonstration we allow all initiators, but it would be best practice to add your vSphere hosts individually.


Select “Targets” from the iSCSI tab

A target is a named storage resource on the iSCSI server. Create a target, giving it whatever name you desire.


Select “Extents” from the iSCSI tab

An extent represents the file, or in this case the device, that provides the storage behind an iSCSI target. Enter an extent name, choose “Device” under “Extent Type”, and select the zvol we created earlier.


Select “Associated Targets” from the iSCSI tab

Now we associate the target and the extent that we have created. After this step, click on the top-level “Services” icon and ensure that iSCSI is on. If it is already on, turn it off and back on to ensure all changes are current.
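At this point the FreeNAS side is done. As an optional sanity check before touching vSphere, any Linux machine with open-iscsi installed can discover the target; the address below is a placeholder for your FreeNAS portal:

    # Ask the portal which targets it advertises.
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260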

Now we move to the VMware vCenter Server Web Client; we will be using version 5.5. Select the host to which you want to add the iSCSI resource and navigate to the Manage tab, Storage section, Storage Adapters subsection. Click the green plus sign and select “Software iSCSI Adapter”. After the adapter is added, we can see the device list is empty.


Empty iSCSI Software Adapter device list
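If you prefer the ESXi command line (via SSH or the ESXi Shell), the software adapter can also be enabled with esxcli; this is a rough equivalent of the GUI step above, not a required extra step:

    # Enable the software iSCSI initiator on this ESXi host.
    esxcli iscsi software set --enabled=true

    # The software adapter (typically named vmhba3x) should now be listed.
    esxcli iscsi adapter list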

 
While the iSCSI software adapter is selected, select the Targets tab, choose the Dynamic Discovery option, and click “Add”. Enter the same IP address and port we entered earlier for the portal. VMware will prompt you to rescan the adapter; if you click the icon displaying a host with a green progress bar underneath, the host rescans all storage adapters for new devices. You should then see an available, but still unattached, device in the iSCSI software adapter’s device list.


iSCSI device detected, but unattached
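The same discovery and rescan can be done with esxcli; the adapter name vmhba33 and the portal address below are placeholders for your environment:

    # Point dynamic discovery at the FreeNAS portal.
    esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.50:3260

    # Rescan the adapter so the new device appears.
    esxcli storage core adapter rescan --adapter=vmhba33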

 
Click the datastore icon with a green circle to attach the iSCSI LUN we set up earlier. We will then see the active device. Rescan the host’s storage adapters before you either create a new datastore on the iSCSI device or attach the datastore if another host has already created one on the device. Remember, all hosts see the same data on that iSCSI device, so the datastore needs to be created only once and can then be attached on whatever other hosts you want to share it.
 

iSCSI device detected and attached
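To double-check from the ESXi command line that the host sees the device:

    # List SCSI devices on the host; look for the new ~4 TB iSCSI disk
    # backed by the FreeNAS zvol.
    esxcli storage core device list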

4 Comments on “Deploying FreeNAS as iSCSI SAN for VMware vSphere”

  1. While it is good that you are using FreeNAS, using RaidZ is inappropriate, especially in this scenario where you want performance.
    You should be using mirrors that are striped (i.e., RAID 10).

  2. I’m deploying a FreeNAS 11 server as an iSCSI SAN for a VMware vSphere 6.5 cluster (2 nodes). It has 4 x 1 TB HDDs. I’m not sure whether to use a hardware RAID controller, RAID-Z1, or mirrors in ZFS. I’ve read that mirrors are almost always faster than RAID-Z groups, especially in the cases that matter for iSCSI storage for virtualization. So, what would be better in my scenario?
