In my lab, I often need different types of storage to test various scenarios. For example, just last week someone asked about using New-Datastore with a specific version of VMFS and I needed to do a quick syntax check. I've found that having a nested storage appliance, like OpenFiler or FreeNAS, available is helpful. However, these appliances offer far more features than I need and typically have higher resource requirements. Setting up specific storage protocols like NFS or iSCSI is often essential for testing and development, but existing solutions can be overly complex or resource-heavy for lab environments. In this post I'll outline how I solved this problem with a few utilities added to an existing Ubuntu 24.04 template.
Storage Protocols
With this project I wanted a single storage target that could provide both NFS and iSCSI storage protocols. For nearly all of my storage testing, the 'client' system will be ESXi, so this post will provide output/examples in that context. ESXi supports block storage (such as iSCSI) and file storage, specifically NFS 3 and NFS 4.1. Ideally, I want to provide all three of these options with this single appliance, so we'll show examples of using the appliance in all three of those ways.
Setting Up the Test Appliance
I deployed an Ubuntu 24.04 VM, using the image/customization spec described here: https://enterpriseadmins.org/blog/scripting/ubuntu-24-04-packer-and-vcenter-server-customization-specifications/
The template VM has a single 50GB disk, so I added an additional 15GB disk to use as the backing for the storage server. We'll format this disk as btrfs and mount it as /data, as shown in the following code block:
sudo mkdir /data
sudo mkfs.btrfs /dev/sdb
echo "/dev/sdb /data btrfs defaults 0 0" | sudo tee -a /etc/fstab
sudo systemctl daemon-reload
sudo mount /data
The above code block creates a folder, formats our second disk, adds an entry to the fstab file so the filesystem mounts when the system boots, and finally mounts our new disk. After the above is complete, running df -h /data should return the mounted disk and its size, confirming that everything worked successfully.
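If you want a little more detail than df provides, a couple of optional checks (illustrative) will confirm the mount and the btrfs device behind it:
findmnt /data
# shows the source device (/dev/sdb) and filesystem type (btrfs) for the mount
sudo btrfs filesystem show /data
# lists the btrfs device backing the mount and its size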
Configuring NFS on Ubuntu 24.04
I'll start with NFS, as this is a problem I've previously solved using Photon OS (https://enterpriseadmins.org/blog/virtualization/vmware-workstation-lab-photon-os-container-host-and-nfs-server/). The only difference this time is that I planned to use Ubuntu 24.04, which has a slightly different package name for the NFS server components.
sudo apt install nfs-kernel-server -y
sudo mkdir /data/nfs
echo "/data/nfs *(rw,async,no_root_squash,insecure_locks,sec=sys,no_subtree_check)" | sudo tee -a /etc/exports
sudo systemctl daemon-reload
sudo systemctl reload nfs-server
The above code block installs our NFS server package, creates a subfolder to export over NFS, adds an entry to the NFS exports configuration file, and then reloads the NFS server so the new export takes effect.
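Before moving to the client, the export can optionally be sanity-checked on the appliance itself; a quick illustrative verification:
sudo exportfs -v
# lists the active exports and their options; /data/nfs should appear with the rw,async,no_root_squash settings we configured
On our client system (ESXi), we can confirm that our work was successful by creating a datastore. I'll complete that task in PowerCLI below: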
New-Datastore -VMHost h316-vesx-64* -Name 'nfs326' -Nfs -NfsHost 192.168.10.26 -Path /data/nfs -Confirm:$false
Name FreeSpaceGB CapacityGB
---- ----------- ----------
nfs326 14.994 15.000
As we can see, the test was a success and returned our mount point size in the command results. The above example resulted in an NFS 3 mount of the exported folder. I created a subfolder (test41, shown below) and executed a similar test to confirm this could work for NFS 4.1 as well.
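Creating that subfolder is a single command on the appliance, under the export we already defined:
sudo mkdir /data/nfs/test41
With the folder in place, the NFS 4.1 datastore can be created in much the same way: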
New-Datastore -VMHost h316-vesx-64* -Name 'nfs326-41' -Nfs -FileSystemVersion 4.1 -NfsHost 192.168.10.26 -Path /data/nfs/test41 -Confirm:$false
Name FreeSpaceGB CapacityGB
---- ----------- ----------
nfs326-41 14.994 15.000
As we can see in the vCenter web interface, one of these datastores is NFS 3 and the other is NFS 4.1, both showing the same capacity and free space.

This confirms that we were able to successfully connect to our NFS Server service using ESXi with both NFS 3 and NFS 4.1 connections. Next, we’ll look at setting up iSCSI storage, which requires a slightly different approach.
iSCSI
The iSCSI testing was a bit more interesting. Looking around, I found a couple of ways to create an iSCSI target and ended up using targetcli. There are plenty of tutorials available for this, including a video (https://www.youtube.com/watch?v=OIpxwX6pTIU) and an Ubuntu 24.04-specific article (https://www.server-world.info/en/note?os=Ubuntu_24.04&p=iscsi&f=1) that were very helpful. I'll document the steps below for completeness.
In this first code block we'll install the service and create the folder where we'll store some backing images.
sudo apt install targetcli-fb -y
sudo mkdir /data/iscsi
The targetcli command can create image files to use as backing for our disks, but in my testing the sparse=true switch does not create sparse files, so we'll do this in two steps. You'll note that I'm specifying one image as having a 2T file size... but as you may have noticed in our NFS example, we only have 15GB of disk for our /data mount. This doesn't result in some 'magic beans' sort of free storage: once we write 15GB of data to this disk we'll be out of capacity and run into problems. This is only being done for illustration/simulation purposes; sometimes you'll want the UI to show 'normal' sizes that you'd see with actual datastores. One reason the /data mount was configured with btrfs instead of something like ext4 is so we can support image files larger than 16T. This btrfs filesystem will allow files over 62TB in size (62TB being the maximum supported size for VMFS 6). In the code block output below, we'll use du to show that these disks are using 0 bytes on the filesystem, but have larger apparent sizes.
sudo truncate -s 10G /data/iscsi/disk0.img
sudo truncate -s 10G /data/iscsi/disk1.img
sudo truncate -s 2T /data/iscsi/disk2.img
du -h /data/iscsi/*.img
0 /data/iscsi/disk0.img
0 /data/iscsi/disk1.img
0 /data/iscsi/disk2.img
du -h /data/iscsi/*.img --apparent-size
10G /data/iscsi/disk0.img
10G /data/iscsi/disk1.img
2.0T /data/iscsi/disk2.img
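As a side note, ls can show both numbers at once; with the -s switch the first column is the allocated size, while the normal size column shows the apparent size (purely illustrative):
ls -lsh /data/iscsi/*.img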
Once we have our files pre-staged, we can start working with targetcli
.
sudo targetcli
# this should enter the targetcli shell
cd /backstores/fileio
create disk0 /data/iscsi/disk0.img sparse=true write_back=false
cd disk0
set attribute is_nonrot=1
cd ..
create disk1 /data/iscsi/disk1.img sparse=true write_back=false
cd disk1
set attribute is_nonrot=1
cd ..
create disk2 /data/iscsi/disk2.img sparse=true write_back=false
cd disk2
set attribute is_nonrot=1
The above code block creates the fileio references to each of our disks and also sets the is_nonrot flag to tell the system that these are non-rotational (i.e., flash) devices.
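As an optional check, the targetcli shell includes an ls command that we can use (while still inside the shell) to confirm the backstores were created:
ls /backstores/fileio
This should list disk0, disk1, and disk2 along with their backing image files and sizes.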
Still in the targetcli
shell, we’ll start our iSCSI configuration.
cd /iscsi
create iqn.2025-02.com.example.iscsi:target01
cd iqn.2025-02.com.example.iscsi:target01/tpg1/luns
create /backstores/fileio/disk0
create /backstores/fileio/disk1
create /backstores/fileio/disk2
This will create LUNs for each of our disks. Finally, still in the targetcli shell, we'll create an ACL to allow a specific host to access the target, then delete it. This puts the correct syntax in our command history so we can refer back to it in the future (I plan to use this as a template for future tests).
cd /iscsi/iqn.2025-02.com.example.iscsi:target01/tpg1/acls
create iqn.1998-01.com.vmware:host.lab.enterpriseadmins.org:3:65
delete iqn.1998-01.com.vmware:host.lab.enterpriseadmins.org:3:65
exit
The exit will cause targetcli to save our changes so they'll persist across a reboot. For testing, we'll go back into targetcli and add a specific entry to allow our test host to access the iSCSI target.
sudo targetcli
cd /iscsi/iqn.2025-02.com.example.iscsi:target01/tpg1/acls
create iqn.1998-01.com.vmware:h316-vesx-64.lab.enterpriseadmins.org:394284478:65
exit
On our test client system, we can add a dynamic target for the IP/name & port 3260 of our storage appliance and then rescan for storage. We should see the three disks that we created, with the sizes specified.
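For reference, here is a sketch of doing the same thing from the ESXi shell with esxcli. The adapter name (vmhba65) is a placeholder for your software iSCSI adapter, which is assumed to already be enabled, and the address matches the appliance IP used earlier:
# add a dynamic (send targets) discovery address pointing at the appliance
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.10.26:3260
# rescan so the new LUNs are discovered
esxcli storage core adapter rescan --all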

As another confirmation, we may want to make each of these disks a VMFS volume. We can do that using syntax similar to the below code block:
Get-ScsiLun -VmHost h316* |?{$_.Vendor -eq 'LIO-ORG'} | %{
New-Datastore -VMHost h316* -Name "ds-vmfs-$($_.model)" -Path $_.CanonicalName -Vmfs -FileSystemVersion 6
}
Name FreeSpaceGB CapacityGB
---- ----------- ----------
ds-vmfs-disk1 8.345 9.750
ds-vmfs-disk2 2,046.312 2,047.750
ds-vmfs-disk0 8.345 9.750
Looking in the vCenter web interface, we can see that all our storage has been presented.

Once we’ve placed filesystems on these disks, we can go back to the shell and see how much space is being used on disk.
du -h /data/iscsi/*.img
29M /data/iscsi/disk0.img
29M /data/iscsi/disk1.img
62M /data/iscsi/disk2.img
We can see that the creation of a filesystem on these disks does consume some of the blocks (we are using several MB of disk, instead of the previous 0 bytes).
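Because this filesystem is so heavily thin-provisioned, it is also worth keeping an eye on overall usage on the appliance; an illustrative check:
sudo btrfs filesystem usage /data
This summarizes the device size, allocated space, and remaining free space for the underlying btrfs filesystem, which is what will actually run out long before the LUNs report as full.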
Adding extra LUNs to the iSCSI target is a straightforward process, requiring just a handful of commands. An example can be found in the code block below:
sudo truncate -s 10T /data/iscsi/disk3.img
sudo targetcli
# this should enter the targetcli shell
cd /backstores/fileio
create disk3 /data/iscsi/disk3.img sparse=true write_back=false
cd /iscsi/iqn.2025-02.com.example.iscsi:target01/tpg1/luns
create /backstores/fileio/disk3
exit
The above code block shows the creation of a 10TB disk image, entering the targetcli shell, adding the newly created disk as a 'fileio' backstore, and mapping that disk to our iSCSI target. Finally, we exit, which by default saves the configuration and makes it persistent. Refreshing storage on ESXi should cause the new LUN to appear. Since we didn't set the is_nonrot attribute, this device will appear as an HDD instead of a Flash device.
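If we later decide this LUN should report as flash as well, the attribute can be set after the fact using the same pattern as before (a quick sketch; the change is saved when we exit):
sudo targetcli
cd /backstores/fileio/disk3
set attribute is_nonrot=1
exit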
Growing the /data btrfs filesystem
Our filesystem is currently backed by a 15GB disk. We've allocated about 12TB of that, so it is grossly oversubscribed. For a production system this would be a terrible idea, but for our lab/illustration purposes it is probably fine. At some point we may need to extend this filesystem to accommodate some growth. I've grown ext3 and ext4 filesystems before, but wanted to document how to do the extension for the btrfs filesystem used in this example. I chose btrfs because it supports larger files, allowing us to create images as large as ESXi supports (62TB). The following code block shows how to extend this filesystem in the guest OS. It assumes we have already increased the size of the disk in the vCenter web client; for illustration purposes, we've extended the disk from 15GB to 20GB.
df -h
# shows that the filesystem did not autogrow
echo 1 | sudo tee /sys/class/block/sdb/device/rescan
# rescans for disk changes
sudo lsblk
# confirms disk is now seen as 20gb
sudo btrfs device usage /data
# shows that device size is 20gb
sudo btrfs filesystem resize max /data
# results in:
# Resize device id 1 (/dev/sdb) from 15.00GiB to max
df -h /data
# confirm filesystem is now 20gb
The above commands rescanned our disk to be aware of the new size, then resized the filesystem to the size we defined in the hypervisor (20GB).
To confirm this works as expected, we can refresh storage information for one of our NFS mounts. The capacity should increase from 15GB to 20GB, as seen in the following screenshot.

Conclusion
Creating this storage server to support NFS 3, NFS 4.1, and iSCSI targets is relatively straightforward. Having this pre-configured storage appliance can greatly streamline the process of testing various storage protocols, especially in virtual environments where quick deployment is key.