I've had this working for years on various distros, so I'm utterly confused why it no longer works as intended. I was using Ubuntu on my server, and about a month ago my VMs started randomly losing network connectivity. After a few weeks of messing with it, I ditched Ubuntu last night and went back to my beloved Arch, expecting it to work perfectly... and it doesn't. The only thing that changed since it last worked is that I switched from the onboard 1G NIC to the onboard 10G NIC.
I have my bridge set up with systemd-networkd, following the wiki.
[bran@server network]$ cat vmbridge.netdev
[NetDev]
Name=br0
Kind=bridge
[bran@server network]$ cat bind.network
[Match]
Name=enp3s0
[Network]
Bridge=br0
[bran@server network]$ cat vmbridge.network
[Match]
Name=br0
[Network]
DNS=192.168.1.1
Address=192.168.1.7/24
Gateway=192.168.1.1
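To apply changes to those files, something like this should do it (a rough sketch; networkctl reload needs a fairly recent systemd, otherwise just restart the service):
sudo networkctl reload                      # re-read the .netdev/.network files
sudo systemctl restart systemd-networkd     # alternative on older systemd
networkctl status br0                       # confirm br0 came up with the expected address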
ip addr output (leaving out all the Docker interfaces because they're not relevant):
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether 70:85:c2:be:1a:12 brd ff:ff:ff:ff:ff:ff
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 86:a3:9a:e9:a0:23 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.7/24 brd 192.168.1.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::84a3:9aff:fee9:a023/64 scope link
valid_lft forever preferred_lft forever
[bran@server network]$ ip route
default via 192.168.1.1 dev br0 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.18.0.0/16 dev br-977785fb35b5 proto kernel scope link src 172.18.0.1
192.168.1.0/24 dev br0 proto kernel scope link src 192.168.1.7
(from server)
[bran@server network]$ ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.356 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.209 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.159 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=0.249 ms
^C
--- 192.168.1.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3095ms
rtt min/avg/max/mdev = 0.159/0.243/0.356/0.072 ms
(from laptop)
[bran@arch ~]$ ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
^C
--- 192.168.1.2 ping statistics ---
5 packets transmitted, 0 received, 100% packet loss, time 4091ms
03:00.0 Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)
Subsystem: ASRock Incorporation AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion]
Kernel driver in use: atlantic
Kernel modules: atlantic
04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
Subsystem: ASRock Incorporation I211 Gigabit Network Connection
Kernel driver in use: igb
Kernel modules: igb
The Aquantia NIC is the 10G NIC that I'm currently using, and the Intel NIC is the one I was using previously. Is there some sort of special config I need for the 10G NIC that I'm not aware of? I also have Docker running over the 10G NIC, and those connections work perfectly, which is even more baffling.
iptables
[bran@server network]$ sudo iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
LIBVIRT_INP all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
LIBVIRT_FWX all -- anywhere anywhere
LIBVIRT_FWI all -- anywhere anywhere
LIBVIRT_FWO all -- anywhere anywhere
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
LIBVIRT_OUT all -- anywhere anywhere
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.18.0.3 tcp dpt:58946
ACCEPT tcp -- anywhere 172.18.0.3 tcp dpt:58846
ACCEPT tcp -- anywhere 172.18.0.3 tcp dpt:8112
ACCEPT tcp -- anywhere 172.18.0.5 tcp dpt:http-alt
ACCEPT tcp -- anywhere 172.18.0.5 tcp dpt:https
ACCEPT tcp -- anywhere 172.18.0.5 tcp dpt:http
ACCEPT tcp -- anywhere 172.18.0.7 tcp dpt:intermapper
ACCEPT tcp -- anywhere 172.18.0.9 tcp dpt:owms
ACCEPT tcp -- anywhere 172.18.0.10 tcp dpt:5076
ACCEPT tcp -- anywhere 172.18.0.11 tcp dpt:ttat3lb
ACCEPT tcp -- anywhere 172.18.0.13 tcp dpt:cslistener
ACCEPT tcp -- anywhere 172.18.0.14 tcp dpt:mysql
ACCEPT tcp -- anywhere 172.18.0.16 tcp dpt:sunwebadmins
ACCEPT tcp -- anywhere 172.18.0.8 tcp dpt:radg
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain LIBVIRT_FWI (1 references)
target prot opt source destination
Chain LIBVIRT_FWO (1 references)
target prot opt source destination
Chain LIBVIRT_FWX (1 references)
target prot opt source destination
Chain LIBVIRT_INP (1 references)
target prot opt source destination
Chain LIBVIRT_OUT (1 references)
target prot opt source destination
Last edited by brando56894 (2019-10-09 09:52:41)
Have you checked your iptables configuration (or post it please)? Docker creates some rules and so does libvirt.
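For example, either of these dumps the full ruleset (just a suggestion):
sudo iptables -S      # rules in iptables-save format
sudo iptables -nvL    # rules with counters and numeric addresses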
Added it to the first post, thanks.
While the libvirt chains are sparse, I suspect they are mostly used for NAT mode, since forwarding appears to be allowed for everything (or I'm not reading it correctly).
Can you confirm forwarding is enabled?
sysctl net.ipv4.conf.all.forwarding
should be 1.
Maybe try adding
-A LIBVIRT_FWX -i br0 -o br0 -j ACCEPT
to your rules.
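On a live system that would be roughly the following (the -A line above is iptables-restore syntax, so prefix it with iptables to add it at runtime):
sysctl net.ipv4.conf.all.forwarding     # should print ...forwarding = 1
sudo iptables -C LIBVIRT_FWX -i br0 -o br0 -j ACCEPT 2>/dev/null || \
    sudo iptables -A LIBVIRT_FWX -i br0 -o br0 -j ACCEPT    # add only if not already present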
Last edited by Swiggles (2019-10-09 13:01:33)
IPv4 forwarding was already enabled.
That iptables rule seems to have done it; odd that it wasn't there by default. Thanks!
How can I make it permanent? I know there's a way to save the rules, but the iptables daemon itself isn't running. Should I enable it? IIRC there's a file I can throw that into...
Last edited by brando56894 (2019-10-09 21:39:19)
This is actually a good question. I guess it is set up for you when the bridge is handled through libvirt (i.e. created and started via virsh).
Anyway, there are two options for your setup. Either add a qemu hook; you can find an example here: https://wiki.libvirt.org/page/Networkin … onnections
Or use the iptables service and write your config into /etc/iptables/iptables.rules: https://wiki.archlinux.org/index.php/Ip … _and_usage
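A minimal qemu hook for the first option might look something like this (a sketch, not tested; libvirt calls /etc/libvirt/hooks/qemu with the guest name in $1 and the operation in $2, and the script has to be executable):
#!/bin/sh
# /etc/libvirt/hooks/qemu -- re-add the bridge forward rule whenever a guest starts
if [ "$2" = "start" ]; then
    iptables -C LIBVIRT_FWX -i br0 -o br0 -j ACCEPT 2>/dev/null || \
        iptables -A LIBVIRT_FWX -i br0 -o br0 -j ACCEPT
fi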
The odd thing is that I've always set up the bridge manually with systemd-networkd and never did it via libvirt. Maybe that's because when I first started messing with KVM/libvirt there weren't integrations for Arch, just RHEL/Ubuntu, so it never worked automagically; this was probably before systemd. Old habits die hard hahaha. I tried to add it earlier as the default network connection, since whenever I create a VM it complains that the default network isn't enabled and asks me to enable it, but my bridge wasn't listed in the dropdown and I didn't know which options to select. I always just define it manually in the VM settings.
I'll look into both of those options and will probably just enable the daemon, since I obviously need it anyway; I'm guessing Docker just sets up its own iptables rules when it starts. Thanks for the help, buddy.
edit: sudo iptables-save -f /etc/iptables/iptables.rules was enough to save it without having to start the daemon, which I feared would screw up my current rules. I may need to take out all the Docker-specific rules though, in case the IPs change.
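(Worth noting: the saved file is only applied at boot if the service that restores it is enabled, so assuming the Arch iptables package, something like the following is probably still needed; enabling it doesn't touch the currently loaded rules:)
sudo systemctl enable iptables.service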
Last edited by brando56894 (2019-10-10 00:13:18)
Check the first link I posted and scroll to the top. There is an explanation for how to (auto)start and handle the default network.
While it won't matter for your setup now, it could give some closure.
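If memory serves, the gist of it is something like:
sudo virsh net-start default         # start the default NAT network now
sudo virsh net-autostart default     # start it automatically with libvirtd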
Will do!