ZFS root on Linux, two pools

I installed my old notebook following the Debian Stretch Root on ZFS Guide. This time I’ll follow the same guide again, with a few modifications.

What and Why

My new notebook has an NVMe SSD and an HDD. I’d like to use the NVMe for system files, swap and a cache for the HDD; the HDD in turn should hold the user files. My old notebook was installed in CSM/BIOS mode, the new one boots via UEFI. In case you’re wondering why I run ZFS on a notebook at all: it’s pretty much a because-we-can philosophy. In fact there are a few reasons NOT to use ZFS on this system: no mirror/redundancy (so I won’t be able to use all the nice ZFS features), no ECC RAM, and rescuing such a system takes extra steps and requirements (you need a ZFS-capable live/rescue system). RAM is also usually limited in a notebook (I do have 16 GB, which should be plenty for my workload). On the other hand, I never had any trouble with ZFS on my old notebook:

root@christine:/home/jean# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   464G   111G   353G         -    16%    23%  1.00x  ONLINE  -
root@christine:/home/jean# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool                            120G   330G    96K  /
rpool/ROOT                      3.37G   330G    96K  none
rpool/ROOT/debian               3.37G   330G  2.71G  /
rpool/home                       106G   330G   104K  /home
rpool/home/jean                  106G   330G  93.0G  /home/jean
rpool/home/jean/VirtualBox VMs  11.8G   330G  11.8G  /home/jean/VirtualBox VMs
rpool/home/root                  128K   330G   120K  /root
rpool/srv                         96K   330G    96K  /srv
rpool/swap                      8.50G   338G   121M  -
rpool/tmp                        168K   330G   168K  /tmp
rpool/var                       1.55G   330G    96K  /var
rpool/var/cache                 1.55G   330G  1.55G  /var/cache
rpool/var/games                   96K   330G    96K  /var/games
rpool/var/log                   6.13M   330G  6.13M  /var/log
rpool/var/mail                    96K   330G    96K  /var/mail
rpool/var/nfs                    104K   330G   104K  /var/lib/nfs
rpool/var/spool                  164K   330G   164K  /var/spool
rpool/var/tmp                    152K   330G   152K  /var/tmp
root@christine:/home/jean# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h6m with 0 errors on Sun Mar 11 00:30:47 2018
config:
 
        NAME                            STATE     READ WRITE CKSUM
        rpool                           ONLINE       0     0     0
          pci-0000:00:1f.2-ata-1-part1  ONLINE       0     0     0
 
errors: No known data errors
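
Regarding the extra rescue steps mentioned above: from a ZFS-capable live system, getting back into such an installation looks roughly like this (a minimal sketch, using the pool and dataset names of the old notebook):

zpool import -N -R /mnt rpool        # import without mounting anything yet
zfs mount rpool/ROOT/debian          # the root dataset is usually canmount=noauto
zfs mount -a                         # mount the remaining datasets
mount --rbind /dev  /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys  /mnt/sys
chroot /mnt /bin/bash --login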

Another reason I’m installing ZFS is that I’d like to use the NVMe for caching to speed up the hard disc. dm-cache had terrible results (tested both in a notebook and on a server), EnhanceIO had good results but looks dead, and flashcache, from a try-to-compile-it-and-install-its-required-libraries point of view, is nothing I ever want to work with again. bcache had superior results in my tests, and I was using it until a few days ago. Currently there seems (I’m not exactly sure) to be an issue that keeps the disks from entering standby, and that is a feature I really want on a notebook to save power.
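
For completeness, standby behaviour on the HDD can be checked and tuned with hdparm; a quick sketch (the device path is just an example):

hdparm -C /dev/sda        # report the current power state
hdparm -S 120 /dev/sda    # spin down after 10 minutes idle (120 * 5 s)
hdparm -y /dev/sda        # force the drive into standby right now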

Partitioning

My partition scheme differs from the guide: the guide uses whole discs, while on the NVMe I use partitions. I also kept the existing EFI partition instead of deleting it. On the NVMe there is a partition for EFI (260 MB), one for swap (20 GB; I chose 20 GB to allow hibernation in any case, even if I’ll never use that much swap), one for the HDD cache (96 GB) and one for the root pool (the rest). The HDD goes to ZFS as a whole disc:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          534527   260.0 MiB   EF00  EFI system partition
   2          534528        42477567   20.0 GiB    8200  Linux swap
   3        42477568       243804159   96.0 GiB    BF01  Solaris /usr & Mac ZFS
   4       243804160       500101119   122.2 GiB   BF00  Solaris root
   9       500101120       500117503   8.0 MiB     BF07  Solaris Reserved 1
 
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      1953507327   931.5 GiB   BF01  zfs-646841c4755888d4
   9      1953507328      1953523711   8.0 MiB     BF07
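
The NVMe layout above can roughly be reproduced with sgdisk; a sketch under the assumption that the device is /dev/nvme0n1 and that the existing EFI partition (1) is left alone. The HDD was not partitioned by hand at all; zpool create takes the whole disc and creates partitions 1 and 9 itself.

sgdisk -n9:-8M:0  -t9:BF07 /dev/nvme0n1                    # small Solaris reserved area at the end
sgdisk -n2:0:+20G -t2:8200 -c2:"Linux swap" /dev/nvme0n1   # swap
sgdisk -n3:0:+96G -t3:BF01 /dev/nvme0n1                    # cache partition for hpool
sgdisk -n4:0:0    -t4:BF00 /dev/nvme0n1                    # the rest: rpool
mkswap /dev/nvme0n1p2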

How

Created both pools (note that both have / as mountpoint, though canmount=off is set):

root@debian:~# zpool create -o ashift=12 -O atime=off -O canmount=off \
    -O compression=lz4 -O mountpoint=/ -R /mnt rpool /dev/disk/by-id/nvme-SAMSUNG_MZVLW256HEHP-000H1_xxx-part4 
 
root@debian:~# zpool create -o ashift=12 -O atime=off -O canmount=off \
    -O compression=lz4 -O mountpoint=/ -R /mnt hpool /dev/disk/by-id/ata-ST1000LM049-xxx \
    cache /dev/disk/by-id/nvme-SAMSUNG_MZVLW256HEHP-000H1_xxx-part3

That looks like this:

root@debian:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
hpool   928G   336K   928G         -     0%     0%  1.00x  ONLINE  /mnt/hpool
rpool   122G   324K   122G         -     0%     0%  1.00x  ONLINE  /mnt
 
root@debian:~# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
hpool   276K   899G    96K  /mnt/hpool
rpool   264K   118G    96K  /mnt
 
root@debian:~# zpool status
  pool: hpool
 state: ONLINE
  scan: none requested
config:
 
	NAME                                                    STATE     READ WRITE CKSUM
	hpool                                                   ONLINE       0     0     0
	  ata-ST1000LM049-xxxx                                  ONLINE       0     0     0
	cache
	  nvme-SAMSUNG_MZVLW256HEHP-000H1_xxxx-part3            ONLINE       0     0     0
 
errors: No known data errors
 
  pool: rpool
 state: ONLINE
  scan: none requested
config:
 
	NAME                                                    STATE     READ WRITE CKSUM
	rpool                                                   ONLINE       0     0     0
	  nvme-SAMSUNG_MZVLW256HEHP-000H1_xxxx-part4            ONLINE       0     0     0
 
errors: No known data errors

Create the datasets

root@debian:~# zfs create -o canmount=off -o mountpoint=none rpool/ROOT
root@debian:~# zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/debian
root@debian:~# zfs mount rpool/ROOT/debian 
root@debian:~# zpool set bootfs=rpool/ROOT/debian rpool
root@debian:~# zfs create -o setuid=off hpool/home
root@debian:~# zfs create -o mountpoint=/root hpool/home/root
root@debian:~# zfs create -o canmount=off -o setuid=off -o exec=off rpool/var
root@debian:~# zfs create -o com.sun:auto-snapshot=false rpool/var/cache
root@debian:~# zfs create rpool/var/log
root@debian:~# zfs create rpool/var/spool
root@debian:~# zfs create -o com.sun:auto-snapshot=false -o exec=on rpool/var/tmp
root@debian:~# zfs create rpool/srv
root@debian:~# zfs create -o mountpoint=/var/games hpool/games
root@debian:~# zfs create rpool/var/mail
root@debian:~# zfs create -o com.sun:auto-snapshot=false -o mountpoint=/var/lib/nfs rpool/var/nfs
root@debian:~# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
hpool               552K   899G    96K  /mnt
hpool/games          96K   899G    96K  /mnt/var/games
hpool/home           96K   899G    96K  /mnt/home
hpool/home/root      96K   899G    96K  /mnt/root
rpool              1.53M   118G    96K  /mnt
rpool/ROOT          200K   118G    96K  none
rpool/ROOT/debian   104K   118G   104K  /mnt
rpool/srv            96K   118G    96K  /mnt/srv
rpool/var           672K   118G    96K  /mnt/var
rpool/var/cache      96K   118G    96K  /mnt/var/cache
rpool/var/log        96K   118G    96K  /mnt/var/log
rpool/var/mail       96K   118G    96K  /mnt/var/mail
rpool/var/nfs        96K   118G    96K  /mnt/var/lib/nfs
rpool/var/spool      96K   118G    96K  /mnt/var/spool
rpool/var/tmp        96K   118G    96K  /mnt/var/tmp
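
Not listed yet at this point: the dataset for my user’s home directory, hpool/home/jean, which shows up in the final listing further down. It inherits its mountpoint from hpool/home, so it boils down to:

zfs create hpool/home/jean     # chown it to the user once the account exists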

I was wondering whether I should place var/cache on the HDD instead of the NVMe. I’m still not sure, so I’m keeping it on the NVMe for now. From there I just continued following the guide. Further modifications include issuing zfs set devices=off hpool as well as installing console-setup and keyboard-configuration to get a German keyboard layout. Another modification that was required to make the system boot was editing /etc/default/zfs so that it contains:

ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
ZFS_POOL_IMPORT="rpool;hpool"
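
As far as I understand, these defaults get copied into the initramfs by the zfs-initramfs hook, so the change only takes effect after rebuilding it:

update-initramfs -u -k all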

Lastly I switched over from stretch to buster; while doing so I had to re-install zfs-dkms (carefully monitor the update/upgrade/dist-upgrade process) and run zpool upgrade on rpool as well as on hpool. That was it.
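
The upgrade itself was the usual apt dance; roughly (a sketch, the exact point at which zfs-dkms needs the re-install depends on how the dist-upgrade goes):

sed -i 's/stretch/buster/g' /etc/apt/sources.list
apt update
apt upgrade
apt dist-upgrade
apt install --reinstall zfs-dkms     # make sure the module is rebuilt for the new kernel
zpool upgrade rpool                  # enable the new pool features
zpool upgrade hpool

After some time running the system, it looks like this: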

root@asuna:/home/jean# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
hpool               107G   792G    96K  /
hpool/games        14.8G   792G  14.8G  /var/games
hpool/home         91.9G   792G    96K  /home
hpool/home/jean    91.9G   792G  91.9G  /home/jean
hpool/home/root     220K   792G   140K  /root
rpool              3.30G   115G    96K  /
rpool/ROOT         2.40G   115G    96K  none
rpool/ROOT/debian  2.40G   115G  2.01G  /
rpool/srv           160K   115G    96K  /srv
rpool/var           922M   115G    96K  /var
rpool/var/cache     919M   115G   883M  /var/cache
rpool/var/log      1.98M   115G  1.76M  /var/log
rpool/var/mail      160K   115G    96K  /var/mail
rpool/var/nfs       160K   115G    96K  /var/lib/nfs
rpool/var/spool     160K   115G    96K  /var/spool
rpool/var/tmp       224K   115G   128K  /var/tmp
root@asuna:/home/jean# zpool iostat -v
                                                          capacity     operations     bandwidth 
pool                                                    alloc   free   read  write   read  write
------------------------------------------------------  -----  -----  -----  -----  -----  -----
hpool                                                    107G   821G      0     32  20.4K  2.60M
  ata-ST1000LM049-xxxx                                   107G   821G      0     32  20.4K  2.60M
cache                                                       -      -      -      -      -      -
  nvme-SAMSUNG_MZVLW256HEHP-000H1_xxxx-part3            15.8G  80.2G      0     24  25.0K  2.37M
------------------------------------------------------  -----  -----  -----  -----  -----  -----
rpool                                                   3.30G   119G      2      6  96.8K   170K
  nvme-SAMSUNG_MZVLW256HEHP-000H1_xxxx-part4            3.30G   119G      2      6  96.8K   170K
------------------------------------------------------  -----  -----  -----  -----  -----  -----
root@asuna:/home/jean# zpool status
  pool: hpool
 state: ONLINE
  scan: none requested
config:
 
	NAME                                                    STATE     READ WRITE CKSUM
	hpool                                                   ONLINE       0     0     0
	  ata-ST1000LM049-xxxx                                  ONLINE       0     0     0
	cache
	  nvme-SAMSUNG_MZVLW256HEHP-000H1_xxxx-part3            ONLINE       0     0     0
 
errors: No known data errors
 
  pool: rpool
 state: ONLINE
  scan: none requested
config:
 
	NAME                                                    STATE     READ WRITE CKSUM
	rpool                                                   ONLINE       0     0     0
	  nvme-SAMSUNG_MZVLW256HEHP-000H1_xxxx-part4            ONLINE       0     0     0
 
errors: No known data errors
