Hi everyone,
I'm trying to fix my Home Assistant installation, which has been running for many years. I think one of the Arch updates, perhaps for Docker, caused issues.
Unfortunately I keep getting this error when trying to pull the latest HA version:
docker pull ghcr.io/home-assistant/qemux86-64-homeassistant:2025.12.1
Error response from daemon: error creating temporary lease: Unimplemented: unknown service containerd.services.leases.v1.Leases
Googling, the closest I got was this:
https://forum.openmediavault.org/index. … n-service/
Quoting:
("Leases" is in the error message. Usually that's a DHCP function, but Docker has its own internal IP network and I've never looked that far into it. "Leases not implemented" in the Docker context might mean that Docker has no control over running containers or there's something wrong with the IP stack.)
I have the latest version installed as of writing: extra/containerd 2.2.0-1 [installed]
What could be causing this?
I have this set in my docker/daemon.json file:
{
"storage-driver": "overlay2",
"log-driver": "journald"
}
There's another file called key.json which has stuff in it, but I'm not sure what that's for?
Would anyone be able to help me track down what's going on here?
Many thanks!
Ok some further information....
Tracing this thing to network issues.... I am not that familiar with Docker, but I'm checking here in case there are any Arch-specific things that need changing:
https://wiki.archlinux.org/title/Docker
Had a look under this header: docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd
However, that wasn't the issue....
The exact problem seems to be no network connectivity at all, so nothing will pull from any registry.
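If it helps to narrow things down: one way to separate daemon-side problems from container networking is to test from a throwaway container, assuming one will still start at all (both tools ship in the stock alpine image):
docker run --rm alpine ping -c 1 1.1.1.1
docker run --rm alpine nslookup registry-1.docker.io
If the pull fails before any container can even start, the fault is on the daemon/containerd side rather than in the container network.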
Running a quick ps, this seems to be how Docker is running:
/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Both the containerd and docker.socket services are up and running.
I wonder what the issue could be?
I have the docker0 network with IP address:
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether d6:f4:50:7f:3e:a8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 2 overruns 0 carrier 0 collisions 0
docker_gwbridge: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::3819:9ff:fec5:9ace prefixlen 64 scopeid 0x20<link>
ether 3a:19:09:c5:9a:ce txqueuelen 0 (Ethernet)
RX packets 3 bytes 84 (84.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 25 bytes 2740 (2.6 KiB)
TX errors 0 dropped 6 overruns 0 carrier 0 collisions 0
Not sure why the bridge started?
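For what it's worth, docker_gwbridge is, as far as I know, only created when the node is (or once was) part of a swarm, so it may just be a leftover. The swarm state can be checked with a docker info template:
docker info --format '{{.Swarm.LocalNodeState}}'
It should print inactive on a plain non-swarm node.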
Hmm... I'm totally lost!
Offline
Hi, I ran into the same error message after upgrading Docker 28.5 to 29.1 (on Ubuntu though, please stone me; this is the only relevant search result for this error message at the time of writing) and finally had some time to dig into it. I think the link you found pointed you in the wrong direction. The important part of the error message is not the lease part, but the "Unimplemented: unknown service containerd.services.leases.v1.Leases".
This looked more like a containerd problem than a Docker problem to me. This comment prompted me to run ctr plugin list | grep -i cri on my servers, both on one where Docker works as it should and on the one where I get the error message.
This was the result:
user@host1:/etc/containerd$ sudo ctr plugin list | grep -i cri
io.containerd.cri.v1 images - error
io.containerd.cri.v1 runtime linux/amd64 ok
user@host2:~$ sudo ctr plugin list | grep -i cri
io.containerd.cri.v1 images - ok
io.containerd.cri.v1 runtime linux/amd64 ok
Checking journalctl -u containerd on the problematic host, I found these messages:
Dec 09 08:49:42 host1 containerd[35142]: time="2025-12-09T08:49:42.863239732+01:00" level=warning msg="failed to load plugin"
error="/var/lib/containerd/io.containerd.snapshotter.v1.erofs does not support d_type. If the backing filesystem is xfs, please reformat with ftype=1 to enable d_type support"
id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Dec 09 08:49:42 host1 containerd[35142]: time="2025-12-09T08:49:42.866451070+01:00" level=warning msg="failed to load plugin"
error="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs does not support d_type. If the backing filesystem is xfs, please reformat with ftype=1 to enable d_type support"
id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Investigating this further:
user@host1:/etc/containerd$ sudo xfs_info /dev/sda2
meta-data=/dev/sda2 isize=256 agcount=9, agsize=626304 blks
= sectsz=512 attr=2, projid32bit=0
= crc=0 finobt=0, sparse=0, rmapbt=0
= reflink=0 bigtime=0 inobtcount=0
data = bsize=4096 blocks=5126912, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Well, ftype=0. Checking the same on the host where Docker runs fine, xfs is formatted with ftype=1 there. I guess I will migrate the VM to a different disk that is formatted with ftype=1, but that will have to wait until my next maintenance window next week.
For the record, along with the Docker upgrade, containerd was upgraded from v1.7.28 to v2.2.0; I guess this is relevant for the xfs compatibility. So, check whether your filesystem has been formatted properly for containerd use. With a different filesystem the error messages could be different, but with a similar background.
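A quick way to do that check on an xfs volume, assuming /var/lib/containerd is the relevant path (xfs_info wants a mount point, so df resolves the containing one first):
sudo xfs_info "$(df --output=target /var/lib/containerd | tail -1)" | grep -o 'ftype=[01]'
On ext4 this doesn't apply; d_type support has been the default there for as long as I can remember.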
Some more relevant quotes from the Docker docs:
Docker Engine 29.0 and later uses the containerd image store by default. The overlay2 driver is a legacy storage driver that is superseded by the overlayfs containerd snapshotter.
The overlay2 driver is supported on xfs backing filesystems, but only with d_type=true enabled.
If you are unable to migrate your system to a properly formatted partition you could work around this by selecting a different storage backend ... but this will mean recreating all images, containers etc.
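If anyone needs that workaround: my understanding (please verify against the docs for your exact version) is that Docker 29 still lets you opt back out of the containerd image store via the features key in /etc/docker/daemon.json:
{
"features": {
"containerd-snapshotter": false
}
}
After restarting the daemon, images pulled under the other store are no longer visible, which is the recreating-images caveat from the quote above.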
Last edited by GeraldS (2025-12-09 09:10:04)
That's really interesting!
Actually, checking things out... my Docker installation is on an ext4 filesystem.
But I think you are correct about this being a "containerd" issue....
I have managed to get my installation up and running without containerd, and without using systemd at all.
I simply ran:
dockerd -D
then started the Home Assistant Supervisor service manually.
I'll see if overlayfs works and report back when I have the chance to test, probably tomorrow now.
If that runs on ext4 then that might be the way forward...?
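When you test, the quickest check I know of for which backend the daemon actually picked is the Driver field of docker info:
docker info --format '{{.Driver}}'
That should print overlayfs when the containerd image store is active, or overlay2 for the legacy graph driver.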
Glad that you got it working. I'm not aware of issues with containerd and ext4, but that doesn't mean much, just that I didn't encounter any.
I did run the xfs migration I planned for today, and it worked as expected.
A summary of what I did:
I added a new disk to the VM, with the same size as sda and partitioned it exactly the same way. I formatted the new root partition with xfs and made sure that it is formatted with ftype=1, which is the default nowadays. This issue should only come up on machines that have been formatted with xfs some years ago.
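For reference, forcing it explicitly should look like the line below (current mkfs.xfs defaults to ftype=1 anyway; the device name matches the new partition used further down):
mkfs.xfs -n ftype=1 /dev/sde2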
I mounted both the new root and the old root via bind (to prevent copying of other mount points) into separate mount points and rsynced everything a first time in the running system to reduce downtime to a minimum.
mount /dev/sde2 /mnt/sde2
mount -o bind / /mnt/sda2
rsync -a /mnt/sda2/ /mnt/sde2
Then I rebooted into a live image, mounted both partitions again and ran rsync a second time to transfer any changes that occurred after the initial rsync.
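For the second pass, something like this (adding --delete so files removed since the first pass don't linger; mount points assumed the same in the live environment):
rsync -a --delete /mnt/sda2/ /mnt/sde2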
Then I edited /mnt/sde2/etc/fstab and replaced the UUIDs of the sda partitions with the UUIDs of their sde counterparts. Then I chrooted into the new system and installed grub onto the new disk:
mount -t proc proc /mnt/sde2/proc/
mount -o bind /dev /mnt/sde2/dev/
mount -o bind /sys /mnt/sde2/sys/
chroot /mnt/sde2 /bin/bash
grub-install /dev/sde
Then I shut down the VM, removed sda from the configuration (without deleting the disk) and changed the SCSI address of the new disk to replace it (SCSI 0:0 in my case). After turning the VM on, it booted directly into the system from the new disk.
A final check that ftype=1 is now set:
user@host1:~$ sudo xfs_info /dev/sda2
meta-data=/dev/sda2 isize=512 agcount=4, agsize=1281728 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=0 inobtcount=0
data = bsize=4096 blocks=5126912, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Now is a good time to make a final snapshot of the VM, just in case. Then upgrade Docker and check that it works.
user@host1:~$ sudo apt-mark unhold containerd.io docker-ce docker-ce-cli docker-ce-rootless-extras
# [...]
user@host1:~$ sudo apt upgrade
# [...]
user@host1:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2a1722878ac8 harbor.example.com/it-dep/fisheye:4.9.6 "/__cacert_entrypoin…" 41 minutes ago Up 6 minutes (healthy) 0.0.0.0:8060->8080/tcp fisheye
7e4ca5478724 harbor.example.com/dockerhub/library/mysql:5.7 "docker-entrypoint.s…" 23 months ago Up 6 minutes 3306/tcp, 33060/tcp mysql_fisheye
user@host1:~$ sudo ctr plugin list | grep -i cri
io.containerd.cri.v1 images - ok
io.containerd.cri.v1 runtime linux/amd64 ok
That looks good, everything runs fine.
Last edited by GeraldS (2025-12-16 13:59:40)