Building my own NAS – From the ground up

I guess everyone with more than one computer knows the problem: data (documents, music, etc.) sooner or later ends up spread across all available machines. In my case my data is spread over my netbook, notebook, desktop and server. Some of it is even on my wife’s desktop and notebook, because I had to work with her equipment when mine wasn’t around. Hence I decided to build my own NAS to store my data on. I went for a low-power system, but the mainboard I bought was defective, so I got some replacement hardware (mainboard + CPU) from a colleague (thx hubi!). I am not sure how power efficient that hardware is, as I did not measure it, but it certainly draws less than my old SLI desktop.

  • 3x HDD Seagate 2 TB, ST2000VX000
  • 3x HDD Toshiba 2 TB, DT01ACA200
  • 1x SSD ADATA 64 GB, SP900
  • 1x 3ware Inc 9650SE
  • 2x 4 GB Crucial 1600 DDR3
  • 1x Intel(R) Celeron(R) CPU G1620T @ 2.40 GHz
  • 1x Some Asrock
  • 1x bequiet 300W

The RAID controller wasn’t strictly necessary, but I wanted to play around with one, and since it gives me two additional SATA ports I assembled it anyway. I got it cheap (thx nico!) and from what I can tell it is doing a good job – right after a firmware upgrade :^) I will probably replace it with some SATA 3 controller eventually. Not sure. You might wonder why I bought Seagate and Toshiba discs – currently I am not buying Western Digital discs (wrong reporting of 4K drives…), otherwise those would have been my first choice. I always wanted to try the pretty cheap Toshiba discs, and to reduce the risk that all discs die at the same time I picked a second manufacturer. Given the number of discs, you can already guess how my ZFS pool will look:

  • mirror
    • toshiba0
    • seagate0
  • mirror
    • toshiba1
    • seagate1
  • mirror
    • toshiba2
    • seagate2

That will give me 6 TB of usable space (a little less in fact), and one disc can die per vdev – or all discs from one manufacturer can die at the same time :^). I considered two raidz1 vdevs, one big raidz2, or even raidz3, but the resilver speeds and the performance drop with a degraded array made me go for the striped mirror sets. One day I’ll just replace the discs with 4 TB models and add more memory.


I only measured using hdparm.

hdparm -tT --direct /dev/sdX
Disk        Cached reads    Disk reads     Notes
ADATA SSD   474.69 MB/s     383.06 MB/s    primary onboard controller
Seagate 1   235.70 MB/s     211.60 MB/s    primary onboard controller
Toshiba 1   260.75 MB/s     192.74 MB/s    primary onboard controller
Seagate 2   188.34 MB/s     188.46 MB/s    connected to the RAID controller, hence only SATA 2
Toshiba 2   189.34 MB/s     191.07 MB/s    connected to the RAID controller, hence only SATA 2
Seagate 3   318.12 MB/s     219.01 MB/s    secondary onboard controller
Toshiba 3   368.32 MB/s     193.47 MB/s    secondary onboard controller

I am not sure whether the different speeds across vdevs are bad or a limiting factor, since reads and writes will be issued in parallel and I am not mixing speeds WITHIN mirrors.
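For reference, the numbers in the table came from running hdparm against each drive in turn; a small loop like the following collects them all at once (the device range sda..sdg is just how my drives happened to enumerate – adjust it for your system):

```shell
#!/bin/sh
# Measure cached and direct (O_DIRECT) read speeds for every drive.
# The device list is an example – substitute your own device nodes.
for dev in /dev/sd[a-g]; do
    echo "== $dev =="
    hdparm -tT --direct "$dev"
done
```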


Make sure you are using GPT even if it is just a 64 GB disc. With GPT you can give your partitions names – which is nice. I missed that and had to convert my SSD from DOS (MBR) to GPT.
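If you end up in the same situation, the conversion can be done in place with gdisk – this is just a sketch, /dev/sdX is a placeholder, and you should back up the disc first:

```shell
# gdisk detects an existing MBR (DOS) table and converts it to GPT in
# memory; writing the table back out ('w') makes the change permanent.
# A slip here destroys the partition table – back up first.
gdisk /dev/sdX

# Afterwards, verify the disc now carries a GPT label:
parted /dev/sdX print | grep "Partition Table"
```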

  • ssd
    • 14 GB /
    • 8 GB swap
    • 8 GB zfs-log (half of the physically available memory – I am going to upgrade to 16 GB soon)
    • 29.7 GB zfs-cache (remaining space of the ssd)
  • hdd1-6
    • 2 TB zfs (I left ~64 MB free at the end of each disc; I used GPT as disk label and the disc’s model + serial as partition label. The partition type is Solaris root.)
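Roughly, the data-disc partitioning can be scripted with sgdisk. The partition name below is just an example following my model + serial pattern, /dev/sdX is a placeholder, and the negative end value leaves the safety margin at the end of the disc:

```shell
# Create a single GPT partition for ZFS, leaving ~64 MB free at the
# end of the disc, and name it after the drive's model + serial.
sgdisk --zap-all /dev/sdX                  # wipe any old partition tables
sgdisk --new=1:2048:-64M \
       --typecode=1:BF00 \
       --change-name=1:"DT01ACA200-xxxx" \
       /dev/sdX                            # BF00 = Solaris root
sgdisk --print /dev/sdX                    # verify the layout
```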

Pool / Datasets

zpool create -f -o ashift=12 storage mirror /dev/disk/by-partlabel/DT01ACA200-xxxx /dev/disk/by-partlabel/ST2000VX000-1ES164-xxxx
zpool set autoexpand=on storage

-f because I had ZFS on these discs before (so I am forcing the creation)
-o ashift=12 because of the 4K drives
storage is my pool name
mirror is the first set, consisting of the two discs which I select by partition label

The second command simply enables auto-expansion of the pool, as the name implies.
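If you want to double-check that the ashift really ended up at 12, zdb can show the cached pool configuration (output details vary a bit between ZFS versions):

```shell
# zdb dumps the pool configuration; each top-level vdev reports its
# ashift there. Expect lines like "ashift: 12" for 4K-aligned vdevs.
zdb -C storage | grep ashift
```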

  • storage (compression lz4, noatime, copies=1)
    • storage/backups (compression off)
    • storage/netboot (nfs)
    • storage/documents (copies=2, nfs)
    • storage/music (nfs)
    • storage/videos (compression off, nfs)
    • storage/pictures (copies=2, nfs)
zfs set compression=lz4 storage
zfs set atime=off storage
zfs set copies=1 storage
zfs create -o compression=off storage/backups
zfs create storage/netboot
zfs create -o copies=2 storage/documents
zfs create storage/music
zfs create -o compression=off storage/videos
zfs create -o copies=2 storage/pictures
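Several of the datasets above are marked (nfs); ZFS can export them directly through the sharenfs property. The options below (network range, root mapping) are example values, not necessarily what I use:

```shell
# Export a dataset over NFS straight from ZFS. The access options are
# placeholders – adjust network and permissions to your environment.
zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" storage/music

# Check what is currently exported:
zfs get sharenfs storage/music
```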

As you can see, I am not playing around with deduplication. Deduplication is for sure one of the best features ZFS offers, but at the same time it’s the most costly one. I simply do not have enough memory for it, so I am not using it. My pictures and documents are very important to me, hence I want two copies of them. Since videos aren’t very compressible, and neither are already compressed backups, I disabled compression for those datasets.
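Whether lz4 actually pays off on a dataset can be checked later through the compressratio property – a harmless read-only query:

```shell
# Show the effective compression settings and the achieved ratio per
# dataset (read-only, safe to run any time).
zfs get compression,copies,compressratio storage/documents storage/videos
```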

Let’s add the remaining discs (I could have done that earlier, but I was moving my data off a disc that I am reusing here, so I had to copy it over first):

zpool add storage mirror /dev/disk/by-partlabel/... /dev/disk/by-partlabel/...
zpool add storage mirror /dev/disk/by-partlabel/... /dev/disk/by-partlabel/...
zpool add storage log /dev/disk/by-partlabel/zfs-log cache /dev/disk/by-partlabel/zfs-cache

Later I am going to add a second SSD so that I can mirror the ZFS log and add another, larger cache to the pool (renaming zfs-cache to zfs-cache-1 and adding zfs-cache-2, renaming zfs-log to zfs-log-1 and adding zfs-log-2).
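That plan would look roughly like this – the partition labels for the second SSD are hypothetical, and note that attaching a device to an existing log vdev turns it into a mirror, while cache (L2ARC) devices cannot be mirrored at all:

```shell
# Mirror the log: attach the new SSD's log partition to the existing
# log device (labels are placeholders for the future second SSD).
zpool attach storage zfs-log /dev/disk/by-partlabel/zfs-log-2

# Cache devices cannot be mirrored – a second one is simply added,
# and reads stripe across both cache devices.
zpool add storage cache /dev/disk/by-partlabel/zfs-cache-2
```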

What does it look like now?

root@janice:/home/jean# zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

        NAME                             STATE     READ WRITE CKSUM
        storage                          ONLINE       0     0     0
          mirror-0                       ONLINE       0     0     0
            DT01ACA200-xxxx              ONLINE       0     0     0
            ST2000VX000-1ES164-xxxx      ONLINE       0     0     0
          mirror-1                       ONLINE       0     0     0
            DT01ACA200-xxxx              ONLINE       0     0     0
            ST2000DM001-1ER164-xxxx      ONLINE       0     0     0
          mirror-2                       ONLINE       0     0     0
            DT01ACA200-xxxx              ONLINE       0     0     0
            ST2000VX000-1ES164-xxxx      ONLINE       0     0     0
          zfs-log                        ONLINE       0     0     0
          zfs-cache                      ONLINE       0     0     0
errors: No known data errors
root@janice:/home/jean# zpool iostat -v
                                    capacity     operations    bandwidth
pool                             alloc   free   read  write   read  write
-------------------------------  -----  -----  -----  -----  -----  -----
storage                          2.46T  2.97T  1.05K  1.09K   130M   133M
  mirror                         1.39T   435G  1.05K    224   130M  26.1M
    DT01ACA200-xxxx                  -      -    522    223  63.5M  26.1M
    ST2000VX000-1ES164-xxxx          -      -    550    223  66.8M  26.1M
  mirror                          555G  1.27T      0    450      1  54.1M
    DT01ACA200-xxxx                  -      -      0    447     49  54.1M
    ST2000DM001-1ER164-xxxx          -      -      0    447     50  54.1M
  mirror                          547G  1.28T      0    457      0  55.0M
    DT01ACA200-xxxx                  -      -      0    454      2  55.0M
    ST2000VX000-1ES164-xxxx          -      -      0    454      2  55.0M
logs                                 -      -      -      -      -      -
  zfs-log                            0  7.94G      0      0      2    239
cache                                -      -      -      -      -      -
  zfs-cache                      29.6G  24.0M      0    469     13  55.4M
-------------------------------  -----  -----  -----  -----  -----  -----
root@janice:/home/jean# iostat -dm
Linux 3.16.0-4-amd64 (janice)   06/21/2015      _x86_64_        (2 CPU)
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda             453.37         0.01        53.49         70     562681
sdc             704.53        63.47        26.11     667697     274626
sdg             441.31         0.00        53.34         10     561105
sdd             441.13         0.00        53.34         13     561105
sdb             708.76        66.82        26.11     702911     274661
sdf             447.95         0.00        54.05          6     568535
sde             448.00         0.00        54.05          5     568535
root@janice:/home/jean# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
storage                    2.05T  3.30T  1.14T  /storage
storage/backups            57.5K  3.30T  57.5K  /storage/backups
storage/documents           300K  3.30T  89.5K  /storage/documents
storage/music              37.4G  3.30T  57.5K  /storage/music
storage/netboot            57.5K  3.30T  57.5K  /storage/netboot
storage/pictures           6.27G  3.30T  6.27G  /storage/pictures
storage/videos              884G  3.30T   368G  /storage/videos

I’ve removed some datasets from the above output.

How about speed?

I just collected some statistics; take these numbers with a grain of salt.

test                                                    result

dd if=/dev/zero of=testfile bs=4K count=4194304
  (17 GB) copied, 44.8458 s, 383 MB/s

dd if=/dev/zero of=testfile1 bs=4K count=4194304 &
dd if=/dev/zero of=testfile2 bs=4K count=4194304 &
  (17 GB) copied, 62.1723 s, 276 MB/s
  (17 GB) copied, 62.2113 s, 276 MB/s

dd if=/dev/zero of=testfile3 bs=4K count=4194304 &
dd if=/dev/zero of=testfile4 bs=4K count=4194304 &
dd if=/dev/zero of=testfile5 bs=4K count=4194304 &
dd if=/dev/zero of=testfile6 bs=4K count=4194304 &
  (17 GB) copied, 122.486 s, 140 MB/s
  (17 GB) copied, 125.304 s, 137 MB/s
  (17 GB) copied, 127.148 s, 135 MB/s
  (17 GB) copied, 124.981 s, 137 MB/s

dd if=/dev/zero of=testfile bs=128K count=131072
  (17 GB) copied, 16.5719 s, 1.0 GB/s

dd if=/dev/zero of=testfile1 bs=128K count=131072 &
dd if=/dev/zero of=testfile2 bs=128K count=131072 &
  (17 GB) copied, 29.9564 s, 573 MB/s
  (17 GB) copied, 30.4172 s, 565 MB/s

dd if=/dev/zero of=testfile1 bs=128K count=131072 &
dd if=/dev/zero of=testfile2 bs=128K count=131072 &
dd if=/dev/zero of=testfile3 bs=128K count=131072 &
dd if=/dev/zero of=testfile4 bs=128K count=131072 &
  (17 GB) copied, 49.5753 s, 347 MB/s
  (17 GB) copied, 50.9746 s, 337 MB/s
  (17 GB) copied, 57.5112 s, 299 MB/s
  (17 GB) copied, 57.2647 s, 300 MB/s

dd if=/dev/zero of=testfile oflag=sync bs=128K count=131072
  (17 GB) copied, 80.5647 s, 213 MB/s

dd if=/dev/zero of=t5 oflag=sync bs=128K count=131072 &
dd if=/dev/zero of=t6 oflag=sync bs=128K count=131072 &
  (17 GB) copied, 145.753 s, 118 MB/s
  (17 GB) copied, 146.372 s, 117 MB/s

dd if=/dev/zero of=t8 oflag=sync bs=128K count=131072 &
dd if=/dev/zero of=t9 oflag=sync bs=128K count=131072 &
dd if=/dev/zero of=t10 oflag=sync bs=128K count=131072 &
dd if=/dev/zero of=t11 oflag=sync bs=128K count=131072 &
  (17 GB) copied, 243.832 s, 70.5 MB/s
  (17 GB) copied, 250.981 s, 68.5 MB/s
  (17 GB) copied, 250.737 s, 68.5 MB/s
  (17 GB) copied, 249.155 s, 69.0 MB/s

dd if=/storage/videos/somevideo.mkv of=/dev/null bs=4K
  (37 GB) copied, 80.8317 s, 455 MB/s

dd if=/storage/videos/anothervideo.mkv of=/dev/null bs=128K
  (28 GB) copied, 55.6786 s, 507 MB/s

dd if=/storage/videos/somevideo.mkv of=/dev/null bs=4K &
dd if=/storage/videos/anothervideo.mkv of=/dev/null bs=4K &
  (28 GB) copied, 110.335 s, 256 MB/s
  (37 GB) copied, 129.626 s, 284 MB/s
