Root on ZFS FreeBSD 9 (non-legacy mountpoint – 4K optimized)


In this guide I will demonstrate how to install a fully functional root-on-ZFS FreeBSD 9 system using a GPT scheme, with a non-legacy ZFS root mountpoint optimized for 4K-sector drives. We will also use ZFS for swap.

You can use this as a reference for either a single-disk or a mirrored installation.

(1) Boot from a FreeBSD 9 installation DVD or memstick and choose “Live CD”.

(2) Create the necessary partitions on the disk(s) and add ZFS-aware boot code.

a) For a single disk installation.

gpart create -s gpt ada0
gpart add -b 34 -s 94 -t freebsd-boot ada0
gpart add -t freebsd-zfs -l disk0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

b) Repeat the procedure for the second drive if you want a mirror installation.

gpart create -s gpt ada1
gpart add -b 34 -s 94 -t freebsd-boot ada1
gpart add -t freebsd-zfs -l disk1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
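
Before moving on, it is worth sanity-checking the partitioning; gpart can print both the layout and the GPT labels:

gpart show ada0       # boot partition at sector 34, freebsd-zfs following it
gpart show -l ada0    # the freebsd-zfs partition should carry the disk0 label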

(3) Align the disks for 4K sectors and create the pool.

a) For a single disk installation.

gnop create -S 4096 /dev/gpt/disk0
zpool create -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot /dev/gpt/disk0.nop
zpool export zroot
gnop destroy /dev/gpt/disk0.nop
zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot

b) For a mirror installation.

gnop create -S 4096 /dev/gpt/disk0
gnop create -S 4096 /dev/gpt/disk1
zpool create -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot mirror /dev/gpt/disk0.nop /dev/gpt/disk1.nop
zpool export zroot
gnop destroy /dev/gpt/disk0.nop
gnop destroy /dev/gpt/disk1.nop
zpool import -o altroot=/mnt -o cachefile=/var/tmp/zpool.cache zroot
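
The gnop trick only matters at pool creation: it makes ZFS record an ashift of 12 (2^12 = 4096 bytes) in the pool metadata, and that setting survives the export/import. If you want to confirm it took effect, zdb can be pointed at the temporary cache file we used above:

zdb -U /var/tmp/zpool.cache zroot | grep ashift    # should report ashift: 12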

(4) Set the bootfs property and checksums.

zpool set bootfs=zroot zroot
zfs set checksum=fletcher4 zroot

(5) Create appropriate filesystems (feel free to improvise!).

zfs create zroot/usr
zfs create zroot/usr/home
zfs create zroot/var
zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp
zfs create -o compression=lzjb -o setuid=off zroot/usr/ports
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/distfiles
zfs create -o compression=off -o exec=off -o setuid=off zroot/usr/ports/packages
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/usr/src
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/crash
zfs create -o exec=off -o setuid=off zroot/var/db
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/db/pkg
zfs create -o exec=off -o setuid=off zroot/var/empty
zfs create -o compression=lzjb -o exec=off -o setuid=off zroot/var/log
zfs create -o compression=gzip -o exec=off -o setuid=off zroot/var/mail
zfs create -o exec=off -o setuid=off zroot/var/run
zfs create -o compression=lzjb -o exec=on -o setuid=off zroot/var/tmp
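
To review the resulting hierarchy and confirm that every dataset picked up a mountpoint under the /mnt altroot:

zfs list -r zroot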

(6) Add swap space and disable checksums on the swap zvol. In this case I add 4 GB of swap.

zfs create -V 4G zroot/swap
zfs set org.freebsd:swap=on zroot/swap
zfs set checksum=off zroot/swap
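
After the first boot, you can confirm that the zvol is actually being used for swap; the device should show up as /dev/zvol/zroot/swap:

swapinfo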

(7) Create a symlink to /home and fix some permissions.

chmod 1777 /mnt/tmp
cd /mnt ; ln -s usr/home home
chmod 1777 /mnt/var/tmp
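
To verify: both tmp directories should show the sticky bit (drwxrwxrwt), and /mnt/home should be a symlink to usr/home:

ls -ld /mnt/tmp /mnt/var/tmp /mnt/home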

(8) Install FreeBSD.

# switch from the Live CD’s csh to a Bourne shell so that export and the loop work
sh
cd /usr/freebsd-dist
export DESTDIR=/mnt
for file in base.txz lib32.txz kernel.txz doc.txz ports.txz src.txz; do
    tar --unlink -xpJf $file -C ${DESTDIR:-/}
done

(9) Copy zpool.cache (very important!!!)

cp /var/tmp/zpool.cache /mnt/boot/zfs/zpool.cache

(10) Create the rc.conf, loader.conf and an empty fstab (otherwise the system will complain).

echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
touch /mnt/etc/fstab
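
Since a typo in loader.conf will leave the new system unbootable, it is worth eyeballing both files before rebooting:

cat /mnt/etc/rc.conf
cat /mnt/boot/loader.conf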

Reboot, adjust time zone info, add a password for root, add a user and enjoy!!!
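
The post-reboot chores can all be done with the stock base-system tools; a minimal sketch (run as root; tzsetup and adduser are interactive):

tzsetup    # pick the time zone
passwd     # set the root password
adduser    # create a user; invite it into the wheel group if it should be able to su to root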
 
 

59 Responses

  1. Kostya Berger says:

    Thanks for the tutorials :) With the legacy mountpoint it worked perfectly well. Only, because of the multiple mountpoints, it was very tricky to mount elsewhere…

    So I decided to copy the whole installation to another zpool on another HDD, which I created according to this tutorial here. Then I just copied over all the files from the original zpool to the new one (had to do that from the running system).

    Only, it can’t boot now from the new zpool (mountroot error 2) for some reason. It loads kernel & modules etc. OK, also the zpool root IS visible to the loader (checked from the prompt) — only mountroot can’t mount root at zfs:zroot.

    Is it, perhaps, because there is no mountpoint defined for the zpool here in this tutorial? In the previous one we defined a legacy mountpoint, but for this one we gave no definition at all. Can this be the problem?

      • Kostya Berger says:

        Yes, I did. I did it originally after the copying; then, after the first failure, I once again imported the zpool (from my Linux install) with the -o cachefile=/var/tmp/zpool.cache option (and altroot too), and then copied the cache file to /boot/zfs on the zpool.

        • gkontos says:

          I would suggest that you boot from a FreeBSD 9.1 iso and copy zpool.cache after importing the pool. Possible reasons for a root pool not mounting:

          missing zpool.cache (not necessary in 9.1-STABLE)
          missing bootfs property
          missing bootcode
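
          All three are quick to check from the Live CD after importing the pool; for example, with the layout from this guide:

          zpool get bootfs zroot                   # should say zroot
          ls -l /mnt/boot/zfs/zpool.cache          # should exist and be non-empty
          gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0    # rewrite the boot code if unsure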

  2. Kostya Berger says:

    OK, I don’t know what the problem was; everything looked OK, including what you’ve mentioned. It finally got to boot, but with certain things missing…

    So I did it again: destroyed the new pool and created another one. Then I did the following:
    imported the old one, saving the zpool.cache outside. Then the trickiest thing was to mount it.
    So after importing, I unmounted what got mounted (usr, var, tmp). Then I mounted the “zroot” (legacy) at /mnt, after which I defined the mountpoints for zroot/usr, var, and tmp relative to /mnt, for the time being.
    The problem was that it proved necessary to mount ALL the datasets we created in step 5 manually, one by one; only THEN did the whole system get mounted properly.
    After which I created a snapshot:
    zfs snapshot -r zroot@complete
    And then sent this snapshot to the new pool:
    zfs send -R zroot@complete | zfs receive newpool

    That indeed copied everything to the newpool. Then I redefined the mountpoints properly for newpool and also changed the definition in loader.conf. And of course, once again exported/imported with all these zpool.cache manipulations.

    Now it finally boots. BTW: I’m booting it from a GPT disk managed by GRUB2. For that purpose I copied the BTX boot code with ‘dd’ from the original FreeBSD installation to my Linux partition:

    dd if=/dev/sdf of=/boot/freebsd.boot

    This file I now use as ‘chainloader’ from GRUB2.

  3. Kostya Berger says:

    and sending a full snapshot, since I mentioned it, looks exactly like this:
    zfs send -R zroot@complete | zfs receive -dF newpool

    This proved to be by far the best way to create an exact copy of an existing pool. It recursively “restores” the snapshot to the new pool, except that everywhere “zroot” appeared, the definitions now carry the new pool name (“newpool” in my case).

  4. developej says:

    Hey,
    I was wondering if this is still the guide to go with for trying ZFS on root today. Has anything changed in the meantime? New stuff, different approaches, anything?
    Thanks

  5. Marcello says:

    What would be the procedure to replace a failed disk in this setup? More specifically, which of the above steps should be done manually before issuing the zpool replace command to reestablish the mirror?
