
#1 2020-02-02 14:25:50

Registered: 2013-01-26
Posts: 67

Cloned Arch Linux ZFS Root Installation - Now I can't boot the clone

I'm sorry if this is the wrong forum for this question; I'm just looking for guidance.

My host system is FreeNAS, which creates VMs with the bhyve hypervisor. The VM was created with UEFI boot using systemd-boot. Partition structure:

sda      8:0    0   40G  0 disk
├─sda1   8:1    0  512M  0 part /boot
└─sda2   8:2    0 39.5G  0 part

Within FreeNAS I created an Arch Linux VM with ZFS on root. The VM boots and I'm overall pleased.

I wanted to duplicate or clone this VM to avoid setting things up all over again.

Within FreeNAS, I snapshotted my installation, then cloned it via a zfs send/zfs receive. I then had to create a new VM, but used the cloned dataset as the data source. Unfortunately this means the new VM was created with all-new hardware IDs.
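The clone step can be sketched roughly as follows. The dataset names here are hypothetical, not the actual FreeNAS layout; the commands are printed rather than executed so the sketch is safe to run anywhere:

```shell
#!/bin/sh
# Sketch of the FreeNAS-side clone. Dataset names are made up --
# adjust tank/vms/archzfs to wherever your VM's backing dataset lives.
src=tank/vms/archzfs            # dataset backing the original VM
snap="$src@clone-base"          # snapshot to replicate
dst=tank/vms/archzfs-clone      # destination dataset for the new VM

# Printed instead of run; drop the leading "echo" on a real FreeNAS host.
echo "zfs snapshot -r $snap"
echo "zfs send -R $snap | zfs receive $dst"
```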

I then went to boot the new Arch installation, but ran into a problem:

ZFS: importing pool tank
cannot import 'tank': no such pool or dataset

I thought this was strange, so I booted the VM using the remastered archiso with the zfs modules.

Once booting the archiso, I saw the disk partition structure was preserved.

Following the instructions on the wiki (though the wiki didn't exactly address this problem):

modprobe zfs
zpool import -a -R /mnt
mount /dev/sda2 /mnt/boot
arch-chroot /mnt /bin/bash

Once inside the arch-chroot, the zfs datasets were appropriately preserved.

I tried exporting/re-importing the pool and then rebooting, but faced the same problem: the tank pool could not be found.

What exactly do I do at this point? Regenerate the initramfs, or something different?

In creating the new VM, the hardware IDs have changed (which might be part of the problem)?
(old installation)

zpool status                                                         
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:04 with 0 errors on Sat Feb  1 09:42:25 2020

	NAME                                              STATE     READ WRITE CKSUM
	tank                                              ONLINE       0     0     0
	  ata-BHYVE_SATA_DISK_BHYVE-866B-88CE-A25F-part2  ONLINE       0     0     0

(new installation)

# zpool status                                                                    :(
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:00:04 with 0 errors on Sat Feb  1 15:42:25 2020

	NAME                                              STATE     READ WRITE CKSUM
	tank                                              ONLINE       0     0     0
	  ata-BHYVE_SATA_DISK_BHYVE-34DB-4C4D-2B38-part2  ONLINE       0     0     0


#2 2020-02-02 14:31:11

Forum Moderator
From: Scotland
Registered: 2010-06-16
Posts: 9,397

Re: Cloned Arch Linux ZFS Root Installation - Now I can't boot the clone

Mod note: Moving to AUR Issues.

Mobo: MSI X299 TOMAHAWK ARCTIC // Processor: Intel Core i7-7820X 3.6GHz // GFX: nVidia GeForce GTX 970 // RAM: 32GB (4x 8GB) Corsair DDR4 (@ 3000MHz) // Storage: 1x 3TB HDD, 5x 1TB HDD, 2x 120GB SSD, 1x 275GB M2 SSD

Making lemonade from lemons since 2015.


#3 2020-02-02 19:42:41

Registered: 2013-01-26
Posts: 67

Re: Cloned Arch Linux ZFS Root Installation - Now I can't boot the clone

I'm going to answer my own question here, hoping it will help someone else (or probably me) when they encounter this error in the future.

Conclusion: the initramfs needs to be regenerated.

Steps to do this:
1. Boot the VM from alternative media (i.e. a remastered Arch Linux ISO with the zfs modules). The remastered archiso is probably the same media you used to create ZFS on root for Arch Linux in the first place. Details on how to generate this specific archiso: … an_archiso

2. (Optional step -- I prefer this, since without it I normally have to log into the VM through VNC.) Once booted, you will be automatically logged into the archiso as root. Set a password for root with passwd, and optionally start sshd so you can log into the system remotely:

systemctl start sshd.service

    Log in to the system from a remote ssh terminal as root.

3. Manually import and mount the zpool(s), and manually mount the /boot partition (if required)

    First confirm the partitions for your cloned VM exist and appear correct. The example below shows two partitions, sda1 and sda2. As created, sda1 was the /boot partition and sda2 was the main zpool partition.

# lsblk
loop0    7:0    0 546.8M  1 loop /run/archiso/sfs/airootfs
sda      8:0    0    40G  0 disk
├─sda1   8:1    0   512M  0 part
└─sda2   8:2    0  39.5G  0 part
sr0     11:0    1   661M  0 rom  /run/archiso/bootmnt

      Find the ID of the zpool through zpool import (in the example below, the id is 12523338641105440463):

root@archiso ~ # zpool import
   pool: tank
     id: 12523338641105440463
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.

	tank                                              ONLINE

     Import the zpool manually and perform a mount. In the example below the zpool is named "tank". Also mount the /boot partition if you originally configured your setup with one, plus any other partitions your setup requires.

zpool import -d /dev/disk/by-id -R /mnt 12523338641105440463 tank
mount /dev/sda1 /mnt/boot

If needed, you can verify correct mounting of the zpool with "zfs mount" and/or "zpool status".

3a.  Bring the zpool.cache file into your new system

In the example below, the zpool on root is named tank. Adjust according to your zpool name.

# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache

If you do not have /etc/zfs/zpool.cache, create it:

# zpool set cachefile=/etc/zfs/zpool.cache tank

4. chroot into the installation to regenerate the hostid and initial ramdisk

arch-chroot /mnt /bin/bash
mv /etc/hostid /etc/hostid.bak
zgenhostid $(hostid)
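For background: ZFS records the importing system's hostid in the pool, and the initramfs will refuse to import a pool it believes belongs to another host, which is why regenerating the hostid matters here. A small runnable illustration of the /etc/hostid format (the value 0x00bab10c is made up for the example; zgenhostid writes the real one):

```shell
#!/bin/sh
# /etc/hostid holds the 4-byte hostid in native byte order (little-endian
# on x86); zgenhostid writes exactly this. 0x00bab10c is a made-up value.
demo=/tmp/hostid.demo
printf '\014\261\272\000' > "$demo"   # bytes 0c b1 ba 00 = hostid 0x00bab10c
bytes=$(od -A n -t x1 "$demo")
echo "file bytes:$bytes"
```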

My /boot/loader/entries/arch.conf file appears as the following:

title     Arch Linux
linux     /vmlinuz-linux
initrd    /initramfs-linux.img
options   zfs=tank/ROOT/default rw
options   zfs_import_dir=/dev/disk/by-id

I'm not sure if zfs_force=1 is necessary.

mkinitcpio -p linux
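Before rebooting it's also worth confirming the zfs hook is still listed in the HOOKS array of /etc/mkinitcpio.conf, since without it the regenerated initramfs cannot import the pool. A runnable sketch of the check (the sample HOOKS line below is an assumption, not necessarily yours; on the real system, inspect /etc/mkinitcpio.conf):

```shell
#!/bin/sh
# Check that the zfs hook appears in a HOOKS line. The sample line is an
# example ZFS-on-root configuration; substitute the contents of your
# actual /etc/mkinitcpio.conf.
hooks='HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)'
case $hooks in
  *" zfs "*) result="zfs hook present" ;;
  *)         result="zfs hook MISSING" ;;
esac
echo "$result"
```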

5. Unmount all partitions and export the zpool. In the example below the name of the zpool is tank; adjust accordingly.

umount /mnt/boot
zfs umount -a
zpool export tank

6. Eject the archiso boot media and reboot the VM. It should boot cleanly at this point; however, networking may not be functioning.

7. Additional steps that might be needed

- Set a new hostname (newarchlinuxvm = the new hostname):

   hostnamectl set-hostname newarchlinuxvm

- Network parameter adjustment: likely needed because the MAC address of the network card changed when migrating VMs. It's difficult for me to make exact recommendations here, but I'll give an example below. Note that I use systemd-networkd to manage my network interfaces. If you use a different network manager, as discussed here: … figuration, please consult the appropriate Arch Wiki page for assistance.

The MAC address can be discovered with the ip link command. Below, the MAC address is shown as 00:a0:98:34:70:23:

# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:a0:98:34:70:23 brd ff:ff:ff:ff:ff:ff

My configuration file for systemd-networkd lives under /etc/systemd/network/.
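For reference, a minimal systemd-networkd DHCP configuration looks like the following; the file name and interface name here are assumptions, not necessarily what my setup uses:

```ini
# /etc/systemd/network/20-wired.network  (hypothetical file name)
[Match]
Name=eth0

[Network]
DHCP=yes
```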

To bind the interface name eth0 to the new MAC address, I need to create /etc/udev/rules.d/10-network.rules with the following (this file needed to be created from scratch -- it does not exist in a base system):

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:a0:98:34:70:23", NAME="eth0"

8. Reboot and enjoy! The beauty of working with a hypervisor is that it's possible to create and set up a base system once (create the VM), then clone that base setup (clone the VM) as many times as needed and configure each VM accordingly. The procedure above applies to VMs created with the ZFS file system on root; with these systems, the initramfs needs to be regenerated for the clone to boot.

Last edited by kevdog (2020-02-10 04:18:40)

