So I have Arch Linux on my laptop with UEFI, four partitions on GPT (/boot/efi, swap, / and /home) with systemd-boot and I want to convert that installation into a VirtualBox VM.
The idea was to make a full disk backup using Clonezilla, then create the VM, boot Clonezilla in that VM and restore the backup in that VM.
After I had finished that process and "EFI" was activated in the VirtualBox settings for that VM, nothing happened: I only get the UEFI Interactive Shell v2.1 with the "Shell>" prompt, and that's it.
Has anybody here ever tried that successfully and can provide some hints?
Offline
Once the system and the boot loader are installed, VirtualBox will first attempt to run /EFI/BOOT/BOOTX64.EFI from the ESP.
Did you copy the bootloader to /EFI/BOOT/BOOTX64.EFI ?
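If not, this is roughly what the copy looks like. A minimal sketch, run here against a scratch directory standing in for the mounted ESP; substitute your real mount point (e.g. /boot) on the actual system:

```shell
# Scratch directory standing in for the mounted ESP; on the real system
# use your actual mount point (e.g. /boot) instead.
ESP=$(mktemp -d)

# systemd-boot's own copy normally lands here after "bootctl install":
mkdir -p "$ESP/EFI/systemd"
: > "$ESP/EFI/systemd/systemd-bootx64.efi"  # stand-in for the real binary

# Copy it to the removable-media default path the firmware tries first:
mkdir -p "$ESP/EFI/BOOT"
cp "$ESP/EFI/systemd/systemd-bootx64.efi" "$ESP/EFI/BOOT/BOOTX64.EFI"
ls "$ESP/EFI/BOOT"
```

On Arch, `bootctl install` normally creates this fallback copy itself, so check whether it is already there before copying by hand.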
--
saint_abroad
Offline
I was able to do it 2-3 years ago, but VirtualBox's implementation of EFI is buggy. Please see this link from the Arch wiki.
oops. Looks like someone beat me to it.
Last edited by kermit63 (2019-11-26 11:42:25)
Never argue with an idiot, they will drag you down to their level and then beat you with experience.
It is better to light a candle than curse the darkness.
A journey of a thousand miles begins with a single step.
Offline
saint_abroad wrote:
Once the system and the boot loader are installed, VirtualBox will first attempt to run /EFI/BOOT/BOOTX64.EFI from the ESP.
Did you copy the bootloader to /EFI/BOOT/BOOTX64.EFI ?
It's a recent installation on my laptop, so the EFI partition (/dev/nvme0n1p1) gets mounted to /boot
$ ll /boot/EFI/BOOT/
total 100
drwxr-xr-x 2 root root  4096 Nov 21 07:20 ./
drwxr-xr-x 5 root root  4096 Oct  3 17:05 ../
-rwxr-xr-x 1 root root 91072 Nov 19 21:24 BOOTX64.EFI*
which is why I thought this should just work, but it didn't.
Last edited by Master One (2019-11-26 12:02:48)
Offline
Hello. I had the same problem as you, and you have 2 ways to solve it:
1 (Not Recommended):
From the live USB, mount your EFI partition and rename the grub directory to BOOT and grubx64.efi to BOOTX64.EFI. This is not recommended, since every GRUB update will break it again.
2 (Recommended):
Mount your EFI partition and create a script called startup.nsh. It should not be inside the EFI directory but at the root of the partition. Open it with a text editor and put EFI/GRUB/GRUBX64.EFI in it.
For now you can just boot your Arch installation by typing EFI/GRUB/GRUBX64.EFI in the UEFI shell.
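As a sketch (using a scratch directory in place of the mounted ESP; the GRUB path is from my setup, adjust it for yours):

```shell
# Scratch directory standing in for the root of the mounted ESP;
# replace it with your real mount point on the actual system.
ESP=$(mktemp -d)

# startup.nsh goes at the top level of the ESP, not inside EFI/.
# The UEFI shell runs it automatically at startup.
printf 'EFI/GRUB/GRUBX64.EFI\n' > "$ESP/startup.nsh"
cat "$ESP/startup.nsh"
```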
Last edited by Kosmas12 (2019-11-26 13:35:17)
Offline
Kosmas12 wrote:
Mount your EFI partition and create a script called startup.nsh. It shouldn't be inside EFI. It should be alone on the beginning of the partition. Open it with a text editor and type EFI/GRUB/GRUBX64.EFI
For now you can just boot your Arch installation by typing EFI/GRUB/GRUBX64.EFI in the UEFI shell.
I am not using grub but systemd-boot and EFI/BOOT/BOOTX64.EFI is already the proper location which should not require the startup.nsh script.
I'm not really sure what's going on here. This is what happens:
I start the VM and the UEFI Interactive Shell comes up with a countdown for pressing ESC to skip startup.nsh or any other key to continue. Then it drops to the Shell> prompt.
The mapping table only shows the following:
BLK0: Alias(s):
  PciRoot(0x0)/Pci(0x1,0x1)/Ata(0x0)
BLK1: Alias(s):
  PciRoot(0x0)/Pci(0xD,0x0)/Sata(0x0,0x0,0x0)
I have tried to play around with the UEFI shell but something does not seem right, because none of the recommendations I found work.
The mapping table does not show any names of available file systems (like FS0) but only storage devices (BLK0), so things like
Shell> FS0:
'FS0' is not a correct mapping.
don't work.
Maybe something went wrong with creating or restoring the Clonezilla backup, so I will start over as soon as I find the time. Should anyone have any more ideas or hints for me in the meantime, please tell.
Offline
OK, I have now created two new backups with Clonezilla, one disk backup of the whole system (nvme0n1) and one partition backup only covering the two partitions /boot (nvme0n1p1) and root (nvme0n1p3).
nvme0n1 has a size of 512 GB and the original plan was to restore the full backup to a VM of that size. Since the whole system currently occupies only around 40 GB (14 GB in root, 26 GB in /home), I'd like to try restoring to a VM with a total size of only 120 GB (as well as to an external SSD of that size), but it seems Clonezilla can't restore to a smaller disk, only to a larger one.
Is there another way to make this happen? Restore only the two partitions /boot and root and add /home afterwards by adding that partition in new size and just copy its content over?
It would be great to get some input from someone who has done that before or has experience with that kind of backup/restore, because I'm completely in the dark here and playing trial & error is extremely time-consuming.
Offline
Master One wrote:
I'd like to try to restore to a VM with a total size of 120 GB only (as well as to an external SSD with that size), but as it seems it's not possible with Clonezilla to restore to a smaller disk, only to a larger one.
Not entirely true. Check out this link from the clonezilla website. At the bottom of the page, it is possible to use the -icds parameter to clone from a larger disk to a smaller one.
P.S. I was able to do it before (clone existing install to vm), albeit using grub instead of systemd boot. Since I'm not conversant with systemd-boot, I'll let other users help with the EFI boot problems.
Last edited by kermit63 (2019-11-27 04:39:30)
Offline
Master One wrote:
I'd like to try to restore to a VM with a total size of 120 GB only (as well as to an external SSD with that size), but as it seems it's not possible with Clonezilla to restore to a smaller disk, only to a larger one.
Not entirely true. Check out this link from the clonezilla website. At the bottom of the page, it is possible to use the -icds parameter to clone from a larger disk to a smaller one.
I already tried that option; it still leads to the following error and aborts:
Last LBA specified by script is out of range.
Failed to apply script headers, disk label not created.
Leaving.
This was done by: LC_ALL=C sfdisk --wipe always --force /dev/sda < /tmp/2019-11-26-17-img-full-tmp-cnvted/sda-pt.sf 2>&1
Error with creation of a partition table on this disk: /dev/sda
Is this disk too small: /dev/sda?
Program aborted!!
I have taken a look at that file (/tmp/2019-11-26-17-img-full-tmp-cnvted/sda-pt.sf) but I don't know how to rewrite that partition table to fit the smaller drive (and I guess that file gets overwritten on a rerun anyway?).
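From what I can tell, the dump is plain text, so it could be adapted along these lines before restoring (a sketch on a made-up dump; the start sectors, sizes and GUIDs here are placeholders, not my real ones). Dropping the last-lba header lets sfdisk recompute it for the new disk, and dropping the size of the final partition extends it to the end of the smaller disk:

```shell
# Sample sfdisk dump standing in for the Clonezilla-saved sda-pt.sf;
# all numbers and GUIDs below are placeholders.
cat > sda-pt.sf <<'EOF'
label: gpt
label-id: 11111111-2222-3333-4444-555555555555
device: /dev/sda
unit: sectors
first-lba: 2048
last-lba: 1000215182
/dev/sda1 : start=2048, size=1048576, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B
/dev/sda2 : start=1050624, size=33554432, type=0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
/dev/sda3 : start=34605056, size=965609472, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4
EOF

# Drop the last-lba header (sfdisk then uses the target disk's own limit)
# and the size of the last partition (it then extends to the end of disk):
sed -i -e '/^last-lba:/d' -e '$ s/, size=[0-9]*//' sda-pt.sf
cat sda-pt.sf
```

This only works if the shrunken last partition can still hold its data, and the edited file would have to be fed to sfdisk manually rather than through the Clonezilla rerun that regenerates it.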
Offline
There are other alternatives:
1. use rsync to back up individual partitions (since your current install is very sparse), then use the rsync backup to populate the smaller-sized VM partitions. The Arch wiki has an excellent write-up on how to back up using rsync.
2. if you still want to use Clonezilla, an option (although a bit dangerous) is to shrink your current partitions to sizes that your VM partitions can accommodate, back up the shrunk partitions, then expand them to their original state after the Clonezilla backup.
Offline
@kermit63, Clonezilla may indeed not be the right way to go. My next thought was to restore just the /boot and root partitions from the backup and go from there, but even that wouldn't have been the best solution (if I had managed to get that approach to work at all). I think I'll do a basic Arch Linux installation in the VM instead and then copy over the content of my partitions. That way I can downsize the root and /home partitions to make everything fit, without having to shrink them afterwards.
Last edited by Master One (2019-11-28 15:22:45)
Offline