Hello,
Some time ago my libvirt+kvm+qemu network setup stopped working:
I was bridging the libvirt-created interfaces to my LAN interface using a bridge set up via systemd-networkd and the qemu-ifup and qemu-ifdown scripts (so connecting my VMs to the local network). It worked, and now it doesn't anymore.
The qemu-ifup and qemu-ifdown scripts:
[camille@CAMILLE_WORKPC ~]$ cat /etc/qemu-ifup
#!/bin/bash
set -x
switch=br0
if [ -n "$1" ]; then
    ip link set "$1" up promisc on
    sleep 0.5s
    ip link set "$1" master "$switch"
    exit 0
else
    echo "Error: no interface specified" >&2
    exit 1
fi
[camille@CAMILLE_WORKPC ~]$ cat /etc/qemu-ifdown
#!/bin/bash
set -x
switch=br0
if [ -n "$1" ]; then
    ip link set "$1" down
    sleep 0.5s
    exit 0
else
    echo "Error: no interface specified" >&2
    exit 1
fi
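For context (my addition, not part of the original setup description): QEMU calls scripts like these when a tap netdev is configured with explicit script paths. A sketch of such an invocation, with an illustrative disk image name:

```shell
# Illustrative only: vm.qcow2 and the memory size are made-up values.
# script= / downscript= point QEMU at the bridge hook scripts above.
qemu-system-x86_64 \
    -m 2048 -enable-kvm \
    -drive file=vm.qcow2,if=virtio \
    -netdev tap,id=net0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
    -device virtio-net-pci,netdev=net0
```
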
Systemd-networkd setup:
[camille@CAMILLE_WORKPC ~]$ for each in /etc/systemd/network/*;do printf \\n${each}\\n&&cat ${each}; done
/etc/systemd/network/br0.netdev
[NetDev]
Name=br0
Kind=bridge
/etc/systemd/network/br0.network
[Match]
Name=br0
[Network]
DHCP=ipv4
/etc/systemd/network/enp5s0.network
[Match]
Name=enp5s0
[Network]
Bridge=br0
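With those three files in place, systemd-networkd should enslave enp5s0 into br0 and run DHCP on the bridge. A quick sanity check (a sketch of my own, guarded so it only prints a message where networkctl is unavailable):

```shell
#!/bin/bash
# Sketch: confirm systemd-networkd actually configured the bridge.
if command -v networkctl >/dev/null 2>&1; then
    networkctl status br0 || true     # expect Type: bridge and a DHCP lease
    networkctl status enp5s0 || true  # expect it to show br0 as its master
    checked=yes
else
    echo "networkctl not found; is systemd-networkd installed and running?"
    checked=no
fi
```
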
The ip addr output in this situation (vnet0 is the VM interface):
[camille@CAMILLE_WORKPC ~]$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
link/ether bc:ae:c5:1a:06:3c brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3a:58:13:a8:8d:6e brd ff:ff:ff:ff:ff:ff
inet 192.168.5.54/24 brd 192.168.5.255 scope global dynamic br0
valid_lft 668650sec preferred_lft 668650sec
inet 172.0.5.1/32 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::3858:13ff:fea8:8d6e/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:06:19:44:ec brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:6ff:fe19:44ec/64 scope link
valid_lft forever preferred_lft forever
5: br-0545a54ff6ba: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:83:89:27:66 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 scope global br-0545a54ff6ba
valid_lft forever preferred_lft forever
74: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:91:08:a0 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe91:8a0/64 scope link
valid_lft forever preferred_lft forever
I couldn't get the VM to reach any IP of my machine. I tested this with all three NIC models I have available for VMs (virtio, rtl8139, e1000).
I couldn't spot anything wrong with my configuration, and the only odd log output I got is this:
[19556.394712] br0: port 2(vnet0) entered blocking state
[19556.394715] br0: port 2(vnet0) entered forwarding state
[19557.759736] kvm: zapping shadow pages for mmio generation wraparound
[19564.834715] kvm: zapping shadow pages for mmio generation wraparound
[19565.242024] kvm [20820]: vcpu0, guest rIP: 0xffffffff81060692 unhandled rdmsr: 0x34
[19659.689959] br0: port 2(vnet0) entered disabled state
[19683.247269] br0: port 2(vnet0) entered blocking state
[19683.247272] br0: port 2(vnet0) entered forwarding state
[19771.605259] sky2 0000:05:00.0: error interrupt status=0x40000008
[19771.605476] sky2 0000:05:00.0 enp5s0: rx error, status 0x7ffc0001 length 996
[19816.612858] sky2 0000:05:00.0: error interrupt status=0x40000008
[19816.613079] sky2 0000:05:00.0 enp5s0: rx error, status 0x7ffc0001 length 996
[20872.775933] sky2 0000:05:00.0: error interrupt status=0x40000008
[20872.776164] sky2 0000:05:00.0 enp5s0: rx error, status 0x7ffc0001 length 996
which seems to be related to my LAN interface (Marvell Technology Group Ltd. 88E8056 PCI-E Gigabit Ethernet Controller) according to: https://www.oxygenimpaired.com/marvell- … ver-broken
If anyone can help me diagnose why the bridge is not working, I'd appreciate it.
Thanks.
Last edited by Twen (2017-03-13 08:44:31)
Hi, I have a similar problem. I found this bug: https://github.com/systemd/systemd/issues/1967. Oddly, I can log in over IPv6, but for some reason it stopped working over IPv4.
cat /etc/os-release
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
ID_LIKE=archlinux
ANSI_COLOR="0;36"
HOME_URL="https://www.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://bugs.archlinux.org/"
➜ ~ ssh test1234@fda5:9bfc:5fbb::225
test1234@fda5:9bfc:5fbb::225's password:
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Mar 8 05:14:19 2017 from fda5:9bfc:5fbb::b1c
test1234@debian:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
test1234@debian:~$
I noticed this as well after a recent reboot. Check your FORWARD policy in iptables: if it is not set to ACCEPT, you'll need a rule to allow forwarding on the interface you are bridging to. In my case, I bridge to br0 like so:
vortex ~ # iptables -L
...
Chain FORWARD (policy DROP)
...
vortex ~ # iptables -A FORWARD -p all -i br0 -j ACCEPT
vortex ~ # iptables -v -L
...
Chain FORWARD (policy DROP 0 packets, 0 bytes)
...
0 0 ACCEPT all -- br0 any anywhere anywhere
...
Edit: also, either of you wouldn't happen to have Docker installed, would you? If so, this commit could be the culprit.
Alternatively, you could just change the policy back to ACCEPT:
iptables -P FORWARD ACCEPT
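A note on persistence (these options are assumptions on my part, not something tested in this thread): a rule or policy changed by hand is lost on reboot. Two common ways to make the fix stick on Arch are saving the ruleset for iptables.service to load, or telling the kernel not to pass bridged frames through iptables at all:

```shell
# Option 1: persist the current ruleset (the path iptables.service loads on Arch)
iptables-save > /etc/iptables/iptables.rules

# Option 2: stop iptables from filtering bridged frames entirely
# (only do this if you don't rely on filtering traffic on the bridge)
cat > /etc/sysctl.d/99-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
EOF
sysctl --system
```

Docker loads the br_netfilter module, which is what makes bridged VM traffic hit the FORWARD chain in the first place; option 2 reverses that behavior system-wide.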
Last edited by zimmedon (2017-03-10 16:03:01)
Thank you @zimmedon!
Indeed, I have Docker installed, and once I changed the iptables rules everything started working again.
That worked!
Thank you zimmedon!
It's 2022 now and this thread saved me. Thank you!!