Trying to move my bind9 nameserver into a systemd-nspawn container. I thought it'd be easiest to use host networking, but it turns out there's a four-year-old bug in systemd-nspawn that prevents binding to privileged ports (like 53).
Ok, the virtual ethernet method is more work but more secure anyway. Here's a convenience link to the wiki section I'm following:
Wiki: Systemd-nspawn#Use_a_virtual_Ethernet_link
I started and enabled systemd-networkd and systemd-resolved on the host and container. Then I created this /etc/systemd/nspawn/bind.nspawn file:
[Network]
VirtualEthernet=yes
Port=udp:53:53
Port=tcp:53:53
Port=tcp:953:953
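As I understand it, those Port= forwards only take effect with private networking, which VirtualEthernet=yes provides. To check whether nspawn actually installed them once the container is up, something like this should show the NAT rules (which of the two applies depends on whether nspawn uses the iptables or nftables backend):
# iptables -t nat -L -n -v
# nft list ruleset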
I also ran this iptables command from the wiki which, as I understand it, allows the container to reach the DHCP server that systemd-networkd runs on the host:
# iptables -A INPUT -i ve-+ -p udp -m udp --dport 67 -j ACCEPT
EDIT: I had the wrong iptables command posted here
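To see whether the DHCP requests from the container are actually reaching the host (just a diagnostic idea; the interface name ve-bind is from the output further down), the rule counters or a packet capture should tell:
# iptables -L INPUT -v -n
# tcpdump -ni ve-bind port 67 or port 68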
I also did the other things in the wiki, like ensuring that IPMasquerade=both is set in /usr/lib/systemd/network/80-container-ve.network on the host.
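For reference, the relevant part of the stock file looks roughly like this (abbreviated, and it differs between systemd versions; the important lines are DHCPServer=yes and IPMasquerade=both):
[Match]
Name=ve-*
Driver=veth
[Network]
Address=0.0.0.0/28
LinkLocalAddressing=yes
DHCPServer=yes
IPMasquerade=both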
When I first start the container, the interface has no IP... but after a few seconds, systemd-networkd performs its magic and I get a link-local address in the container:
2: host0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 76:ab:f4:9f:96:19 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.179.233/16 metric 2048 brd 169.254.255.255 scope link host0
valid_lft forever preferred_lft forever
inet6 fe80::74ab:f4ff:fe9f:9619/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
On the host, the ve-bind end of the link gets a link-local address and two private addresses:
11: ve-bind@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 36:64:a5:1c:9b:8e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.18.251/16 metric 2048 brd 169.254.255.255 scope link ve-bind
valid_lft forever preferred_lft forever
inet 192.168.11.129/28 brd 192.168.11.143 scope global ve-bind
valid_lft forever preferred_lft forever
inet 192.168.142.145/28 brd 192.168.142.159 scope global ve-bind
valid_lft forever preferred_lft forever
inet6 fe80::3464:a5ff:fe1c:9b8e/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
Shouldn't I be getting a private address on the host0 interface inside the container? I guess DHCP isn't completing. Inside the container, I see this warning in the systemd-networkd logs... it might be unrelated.
Starting Network Configuration...
Failed to increase receive buffer size for general netlink socket, ignoring: Operation not permitted
lo: Link UP
lo: Gained carrier
host0: Configuring with /usr/lib/systemd/network/80-container-host0.network.
Enumeration completed
Started Network Configuration.
host0: Link UP
host0: Gained carrier
host0: Gained IPv6LL
Operation not permitted? I thought we were root! I googled that and got two results and no solutions. Could this be another major bug in systemd-nspawn, like the one with privileged ports on host networking? Is systemd-nspawn usable at all? I mean, with Poettering working for micro$oft, these bugs might never get fixed. Ok, getting a little subjective here, but that's what forums are for, right?
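In case it helps with diagnosing, networkctl should show whether the DHCP exchange ever completes. Inside the container:
# networkctl status host0
and on the host, to see whether the DHCP server is actually running on the veth and handed out a lease:
# networkctl status ve-bind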
Well, ultimately all I want is for the container to be able to access the internet so bind9 can talk to other nameservers. But I get this:
# ping -c1 1.1.1.1
ping: connect: Network is unreachable
Which is obviously caused by having no default route (still in the container):
# ip route
169.254.0.0/16 dev host0 proto kernel scope link src 169.254.179.233 metric 2048
And I could solve that by creating a default route, but I guess I need an IP for that (not link-local):
ip route add default via $MYIP dev host0
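For a quick manual test (a sketch only; the values come from the host-side /28 above and the address would have to be unused), something like this should give the container a usable address and default route:
# ip addr add 192.168.11.130/28 dev host0
# ip route add default via 192.168.11.129 dev host0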
Just curious, does anyone actually use systemd-nspawn? It seems well-documented (unlike lxc), works great with IPv6 (unlike docker), and plays so nicely with systemd, the pre-installed, dominant init system... but it seems to have multiple major bugs that might never be fixed, and when I google around for solutions, there are almost no solved Stack Overflow/forum questions about it. So I get the impression that it's a kind of zombie project that nobody actually uses.
Last edited by ki9 (2023-07-09 17:56:12)
Offline
Well, I never figured this out. But I did get static addressing to work, using this guide:
https://gist.github.com/ALTinners/c174b … t-a-bridge
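The basic idea (a sketch reusing the /28 from my earlier output; exact names and values will differ) is to give host0 a static address and gateway instead of relying on DHCP, using a .network file in the container that sorts before, or overrides, the stock 80-container-host0.network, e.g. /etc/systemd/network/10-host0.network:
[Match]
Name=host0
[Network]
Address=192.168.11.130/28
Gateway=192.168.11.129
DNS=192.168.11.129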
I was still seeing the "Operation not permitted" error with the working static addresses, so it's probably nothing to worry about.
Ultimately, I decided it'd be too much work to get the veth to connect to all my VPN interfaces, setting up static IPs and routing for everything. Too many moving parts. It would be fine with host networking, but that doesn't work, so... in the end I went with the tried-and-true chroot method from the wiki:
https://wiki.archlinux.org/title/BIND#R … nvironment
To anyone trying to make systemd-nspawn work... I recommend using lxc instead. The documentation has gotten better over the years and there are lots of solved questions in forums and on the Stack Exchanges. Also, it's actually maintained, which counts for a lot.
Offline
Hi ki9,
I just found your thread while running into an issue playing with mkosi. How do you create your container? Is it as a normal user? If so, have you tried creating the container as root?
My suggestion is to try creating the container as root first, without user namespaces involved, and then running it as root as well.
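For example, assuming the container tree lives under /var/lib/machines/bind (adjust the path), something like:
# systemd-nspawn -bD /var/lib/machines/bind --private-users=no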
If that does not work, run bind on a different port and route the host port to the container (probably with a dummy .socket).
Another pointer that may help, besides the link you posted, is https://github.com/nosada/mkosi-files/i … -706651468 .
Offline
I was creating this container manually (as root). I was planning on writing some mkosi files once I got it working.
This is the second or third time I've tried systemd-nspawn/mkosi. The last time, I kept getting I/O errors whenever I tried to start the container with mkosi and eventually gave up.
Although I got static addressing to work this time around, I ran into some other problem that I never solved. I probably could have routed port 53 into the container using iptables (or a .socket like you said), but it's not an ideal solution. I don't want to run into problems in the future that I can't solve because systemd-nspawn is not well-supported. So I abandoned nspawn again and went with chroot instead.
Offline
Isn't this a limitation on your system? You are mapping user 0 to 1000, and on your host user 1000 cannot listen on port 53. I think the easiest route is to add a firewall forward or masquerade from port 53 to a higher port in the container. veth will add even more complexity.
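Roughly something like this (an untested sketch; it assumes bind listens on 5353 inside the container and that the container's address is 192.168.11.130):
# iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to-destination 192.168.11.130:5353
# iptables -t nat -A PREROUTING -p tcp --dport 53 -j DNAT --to-destination 192.168.11.130:5353
Queries originating on the host itself would additionally need a matching rule in the nat OUTPUT chain.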
Offline
You are mapping user 0 to 1000
I am?
I think the easiest route is to add a firewall forward or masq from 53 to the container higher port.
This probably would have worked, but I went with the chroot.
Offline