Cloning a ZFS bootable system


I have always thought that ZFS was the most exciting addition to FreeBSD in the ten years that I have been involved with it.

Generally speaking, cloning a live system has been the cause of many headaches in the past. You can find many tools, commercial and open source, that try to ghost full file systems, but there are always limitations that arise from the way file systems are laid out. RAID controllers, software RAID and disk geometry are only some of the things you have to consider. Until today…

We are going to clone a live RAID 1 system over SSH to a new system with a single disk of smaller capacity. On top of that, the source system will stay online during the operation; we will shut it down only after the clone has completed.

Imagine having to do this a few years ago.

First of all, we will use Martin Matuška's mfsBSD ISO images to boot our target system. Download the ISO for your platform and boot from it. The target system will load a fully workable FreeBSD image into memory, which is much better than the FIXIT environment. Assuming that you have a DHCP server around, your system will already have an IP address:
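A quick way to confirm that the target picked up an address is to check the interface from the mfsBSD console (the interface name, em0 here, is just an example and will vary with your hardware):

```shell
mfsbsd# ifconfig em0
```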

Now, on the new system, we have to initialize the disk and create our pool. We will use a GPT partition scheme.

mfsbsd# gpart create -s gpt ad0
mfsbsd# gpart add -b 34 -s 64k -t freebsd-boot ad0
mfsbsd# gpart add -t freebsd-zfs -l disk0 ad0
mfsbsd# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad0
mfsbsd# zpool create zroot /dev/gpt/disk0
mfsbsd# zpool set bootfs=zroot zroot
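If you would rather keep swap on a dedicated partition instead of inside the pool, a sketch of the extra step, inserted between the boot and ZFS partitions above (the 4G size and the swap0 label are illustrative, not part of the original setup):

```shell
mfsbsd# gpart add -s 4G -t freebsd-swap -l swap0 ad0
# The cloned system would then need a matching /etc/fstab entry:
# /dev/gpt/swap0  none  swap  sw  0  0
```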

In this scenario, swap is assumed to reside inside the ZFS pool. If we were using swap space on a dedicated partition, we would have to create that as well. Now it is time to prepare our original system for cloning. First, create a recursive snapshot of the whole pool:

zfs snapshot -r zroot@bck

Now send the snapshot to our new system. The password for the root user on mfsBSD is mfsroot:

zfs send -R zroot@bck | ssh root@10.10.10.141 zfs recv -Fdv zroot
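Because the source stays online while the transfer runs, anything written after the @bck snapshot will not be in the copy. One way to catch up just before shutting the source down, sketched here with a hypothetical second snapshot name, is an incremental send:

```shell
# Take a second recursive snapshot covering changes made during the transfer,
# then send only the difference between @bck and @bck2 to the target.
zfs snapshot -r zroot@bck2
zfs send -R -i zroot@bck zroot@bck2 | ssh root@10.10.10.141 zfs recv -Fdv zroot
```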

That will take a while. When it finishes, run the last steps on the target machine.

mfsbsd# zfs destroy -r zroot@bck
mfsbsd# zfs set mountpoint=/zroot zroot
mfsbsd# zpool export -f zroot
mfsbsd# zpool import -f zroot
mfsbsd# cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
mfsbsd# zfs umount -a
mfsbsd# zfs set mountpoint=legacy zroot
mfsbsd# zfs set mountpoint=/tmp zroot/tmp
mfsbsd# zfs set mountpoint=/usr zroot/usr
mfsbsd# zfs set mountpoint=/var zroot/var
mfsbsd# reboot
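Before the final reboot, while the pool is still mounted under /zroot (i.e. before the zfs umount -a step), it can be worth confirming that the cloned loader configuration still loads ZFS; the values shown in the comment are the usual ones for a ZFS-booted system, not verified output:

```shell
mfsbsd# grep -E 'zfs_load|vfs.root.mountfrom' /zroot/boot/loader.conf
# expected along the lines of: zfs_load="YES", vfs.root.mountfrom="zfs:zroot"
```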

That’s it. Make sure that you power off your first system so that you don’t get an IP conflict.
This works regardless of the source and destination zpool types: they can be single-disk, mirrored, or RAIDZ1. Disk capacity is also irrelevant, meaning that you can transfer a 240 GB system to an 80 GB one, as long as the latter has enough room to hold the data.
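To check in advance that the data will fit, compare the space used on the source pool with the size of the destination pool; a minimal sketch:

```shell
# On the source: total space used by the pool's datasets
zfs list -H -o used zroot
# On the target (from mfsBSD): capacity and free space of the new pool
zpool list -H -o size,free zroot
```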

Here you can see a bare-metal FreeBSD 9 system being cloned to a Linux KVM server:


 
 

13 Responses

  1. BSD says:

    Excellent article! I was wondering, can you restore an encrypted ZFS root from (a zfs send) file on another machine using zfs receive?

    I’m assuming the source server has completely failed and you now need to do a full restore from the zfs send file you have (on an external USB drive, say). Is this possible? The only differences from the above are that a) it’s all encrypted on root and b) you’re not using SSH but a file on an external drive.

  2. Eric says:

    This worked wonderfully! It failed a few times using VirtualBox, but once I switched to VMware it worked flawlessly! Thanks!!

  3. Bram Pieters says:

    Worked like a charm the first time.
    Used this procedure to create a test system from a live production system.
    Thx!

  4. Alex says:

    Isn’t working for me. =/

    When I try “zfs snapshot -r zroot@bck” this message appears:
    cannot open ‘zroot’: dataset does not exist
    no snapshots were created

    What am I doing wrong?

    thanks :)

  5. bobby says:

    Thanks for the manual, I will test it if I have to recover my server from death next time :) !
    Umm… how about writing an article on how to install mfsBSD with ZFS? It has its own “zfsinstall” script, BUT it doesn’t provide many options, such as how many ZFS datasets to create, which mountpoints to set, ZFS compression/setuid/exec options, etc. I am not so good at editing shell scripts. Can you make a tutorial for ZFS root with mfsBSD as you made for FreeBSD? If you could make an mfsBSD ISO image with a modified zfsinstall script to automate the whole process, it would be SO F** GREAT :)

  6. bobby says:

    I (maybe) will have a successful clone of my FreeBSD 10/ZFS/custom kernel virtual machine on a remote PC. The difference is that I am using the FreeBSD 9.0 (live) install disc. Here is how I do it:

    (boot the live FreeBSD CD)
    (create the GPT scheme and the necessary partitions)
    zpool create -f tank raidz1 /dev/gpt/disk0 /dev/gpt/disk1 (as I have 2x40 GB IDE disks)
    mkdir /tmp/etc
    mount_unionfs /tmp/etc /etc
    vi /etc/ssh/sshd_config
    (PermitRootLogin yes)
    /etc/rc.d/sshd onestart
    passwd root
    (and send the snapshot from the VM to the PC)
    zfs send -R tank@bck | ssh root@192.168.100.106 zfs recv -Fdv tank

  8. bobby says:

    I skipped the line (I didn’t find the zpool.cache file)
    “cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache”
    …and added this one:
    zfs set org.freebsd:swap=on zroot/swap
    It works f***ing great! Now I can install a perfectly configured FreeBSD on any computer in minutes!!!

  9. Jur van der Burg says:

    This almost worked for me, except that the boot CD has a different ZFS version than the system that I cloned. This caused a failure after the restore: I could not run any zfs or zpool command because the libraries could not be initialized. The reason was that the restore caused the new disk to replace the running root filesystem…

    I used this when creating the new pool to get around that:

    zpool create -R /a -m legacy zroot /dev/gpt/disk0

    That caused the new zroot to be mounted under /a, after which I could run the rest of the commands without a problem.

    I hope this may save someone else some time.

    • gkontos says:

      zfs send will not work if you are transferring to a lower version. In any case, it should be understood that the destination must have a ZFS version greater than or equal to the source’s.
