Hi, I am using a ThinkPad P14s (AMD), and for about a week the network connection has quickly degraded, going from around 40 Mbps (consistent with what my phone registers on fast.com on the same wifi) to extremely slow, to the point that pacman sometimes has trouble syncing the databases.
Setting the correct regulatory domain, or disabling power saving with
iwconfig wlan0 power off
does not seem to help.
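For reference, both settings can also be checked and applied with the newer iw tool (a minimal sketch; the country code below is just an example and needs root):

```shell
# Show and set the wireless regulatory domain ("FI" is an example code)
iw reg get
iw reg set FI

# Disable power saving on the interface (same effect as the iwconfig call)
iw dev wlan0 set power_save off
iw dev wlan0 get power_save
```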
The output of some commands I've seen suggested follows:
lshw -C network
[....]
*-network
description: Wireless interface
product: MT7921 802.11ax PCI Express Wireless Network Adapter
vendor: MEDIATEK Corp.
physical id: 0
bus info: pci@0000:03:00.0
logical name: wlan0
version: 00
serial: b4:b5:b6:a7:74:a5
width: 64 bits
clock: 33MHz
capabilities: bus_master cap_list ethernet physical wireless
configuration: broadcast=yes driver=mt7921e driverversion=6.4.8-arch1-1 firmware=____010000-20230526130958 ip=192.168.1.212 latency=0 link=yes multicast=yes wireless=IEEE 802.11
resources: iomemory:40-3f iomemory:40-3f iomemory:40-3f irq:90 memory:470200000-4702fffff memory:470300000-470303fff memory:470304000-470304fff
[....]

iwconfig wlan0
wlan0 IEEE 802.11 ESSID:"....."
Mode:Managed Frequency:5.22 GHz Access Point: B8:D5:26:2C:90:06
Bit Rate=585 Mb/s Tx-Power=3 dBm
Retry short limit:7 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=44/70 Signal level=-66 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:1 Invalid misc:3 Missed beacon:0

Does anyone have any suggestions, or has anyone had similar issues before?
Last edited by arlort (2023-08-12 12:04:05)
Please review https://bbs.archlinux.org/viewtopic.php?id=287755 and its recommendations for logs, etc. It could be the same problem as described there.
Thanks, I have tried disabling docker and tailscale and I'll see if that makes any difference, but from my understanding of that thread I am missing its most likely error symptom ("WRT: Invalid buffer destination").
I do use bluetooth headphones, so some interference is possible; I'll try turning them off next time I notice a slowdown.
Below are my logs:
Result of the systemd grep:
bluetooth.service | bluetooth.target.wants
cpupower.service | multi-user.target.wants
dbus-org.bluez.service | system
dhcpcd.service | multi-user.target.wants
docker-compose@softserve.service | multi-user.target.wants
docker.socket | sockets.target.wants
gcr-ssh-agent.socket | sockets.target.wants
getty@tty1.service | getty.target.wants
gnome-keyring-daemon.socket | sockets.target.wants
iwd.service | multi-user.target.wants
libvirtd.socket | sockets.target.wants
openvpn-client@unikie-vpn.service | multi-user.target.wants
p11-kit-server.socket | sockets.target.wants
pipewire-pulse.socket | sockets.target.wants
pipewire-session-manager.service | user
pipewire.socket | sockets.target.wants
remote-fs.target | multi-user.target.wants
snap.auto-cpufreq.service.service | multi-user.target.wants
snapd.service | multi-user.target.wants
sshd.service | multi-user.target.wants
tailscaled.service | multi-user.target.wants
var-lib-snapd-snap-auto\x2dcpufreq-129.mount | multi-user.target.wants
var-lib-snapd-snap-auto\x2dcpufreq-129.mount | snapd.mounts.target.wants
var-lib-snapd-snap-bare-5.mount | multi-user.target.wants
var-lib-snapd-snap-bare-5.mount | snapd.mounts.target.wants
var-lib-snapd-snap-core18-2785.mount | multi-user.target.wants
var-lib-snapd-snap-core18-2785.mount | snapd.mounts.target.wants
var-lib-snapd-snap-core22-817.mount | multi-user.target.wants
var-lib-snapd-snap-core22-817.mount | snapd.mounts.target.wants
var-lib-snapd-snap-core22-858.mount | multi-user.target.wants
var-lib-snapd-snap-core22-858.mount | snapd.mounts.target.wants
var-lib-snapd-snap-gnome\x2d3\x2d34\x2d1804-93.mount | multi-user.target.wants
var-lib-snapd-snap-gnome\x2d3\x2d34\x2d1804-93.mount | snapd.mounts.target.wants
var-lib-snapd-snap-gtk\x2dcommon\x2dthemes-1535.mount | multi-user.target.wants
var-lib-snapd-snap-gtk\x2dcommon\x2dthemes-1535.mount | snapd.mounts.target.wants
var-lib-snapd-snap-slack-82.mount | multi-user.target.wants
var-lib-snapd-snap-slack-82.mount | snapd.mounts.target.wants
var-lib-snapd-snap-slack-83.mount | multi-user.target.wants
var-lib-snapd-snap-slack-83.mount | snapd.mounts.target.wants
var-lib-snapd-snap-snapd-19361.mount | multi-user.target.wants
var-lib-snapd-snap-snapd-19361.mount | snapd.mounts.target.wants
var-lib-snapd-snap-snapd-19457.mount | multi-user.target.wants
var-lib-snapd-snap-snapd-19457.mount | snapd.mounts.target.wants
wireplumber.service | pipewire.service.wants
The quality of the connection went down again. I haven't rebooted since the previous message, and I've linked the new logs.
I had the bluetooth headphones off and tailscale and docker disabled (docker the whole time; tailscale once I noticed the speed dropping, but with no effect).
I have also noticed that my ping to the speedtest servers is much higher than from another device on the same network (around 50 ms vs 5 ms).
tailscale/openvpn and docker are active in that journal? (docker gets disabled 15 minutes in, but the VPN doesn't)
Can you test the performance w/ a local iperf and w/o any VPN?
On a guess: VPNs often add some MTU overhead, so you want to limit the local MTU to well below 1500, https://tailscale.com/kb/1023/troublesh … -two-nodes
ip link set dev wlan0 mtu 1280
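A quick way to sanity-check a candidate MTU is a Don't-Fragment ping (a minimal sketch; the 1280-byte figure comes from the command above, and the gateway address is just an example):

```shell
# A 1280-byte MTU minus 28 bytes of IPv4 + ICMP headers leaves this
# much room for a ping payload that must fit in a single packet:
payload=$((1280 - 28))
echo "$payload"

# Probe with Don't-Fragment set; if this fails but a smaller payload
# succeeds, the path MTU is lower than assumed (gateway is an example):
#   ping -M do -s "$payload" -c 3 192.168.1.1
```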
Openvpn was active the whole time; I disabled tailscale once the issue appeared.
I have executed the command you gave, and I will see whether, with that command and no openvpn/tailscale, I get the same issue. I'll also check the tailscale page you linked, thanks.
I am not sure I used iperf correctly: I started a server on my machine and then gave my own IP address as the remote host with "-c".
Connecting to host 192.168.1.212, port 5201
[ 5] local 192.168.1.212 port 52424 connected to 192.168.1.212 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 4.21 GBytes 36.2 Gbits/sec 0 2.44 MBytes
[ 5] 1.00-2.00 sec 4.32 GBytes 37.1 Gbits/sec 0 2.44 MBytes
[ 5] 2.00-3.00 sec 4.29 GBytes 36.9 Gbits/sec 0 2.44 MBytes
[ 5] 3.00-4.00 sec 4.30 GBytes 36.9 Gbits/sec 0 2.44 MBytes
[ 5] 4.00-5.00 sec 8.54 GBytes 73.3 Gbits/sec 0 2.44 MBytes
[ 5] 5.00-6.00 sec 8.57 GBytes 73.6 Gbits/sec 0 2.44 MBytes
[ 5] 6.00-7.00 sec 7.98 GBytes 68.6 Gbits/sec 0 2.50 MBytes
[ 5] 7.00-8.00 sec 8.18 GBytes 70.3 Gbits/sec 0 2.50 MBytes
[ 5] 8.00-9.00 sec 8.74 GBytes 75.0 Gbits/sec 0 2.50 MBytes
[ 5] 9.00-10.00 sec 7.25 GBytes 62.3 Gbits/sec 0 2.50 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 66.4 GBytes 57.0 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 66.4 GBytes 57.0 Gbits/sec receiver
iperf only makes sense across the LAN, i.e. you start one instance on a different host (ideally wired) in your local network and the other on the machine being tested.
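A two-host run might look like this (a sketch only; the helper's address is an example, and iperf3 is assumed to be installed on both machines):

```shell
# On the wired helper machine (example address 192.168.1.10):
#   iperf3 -s
# On the laptop under test, point the client at the helper, not at itself:
#   iperf3 -c 192.168.1.10 -t 10
# Add -R to reverse direction and measure the download path as well:
#   iperf3 -c 192.168.1.10 -t 10 -R
```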
Hi, I'm just writing to say that I believe the problem was with openvpn, since after disabling it I have not encountered any more issues (it's also possible some update in the meantime fixed it).
I don't know if I should mark this as solved or leave it open in case the issue arises again.
Thank you for your help.
Have you tried altering the MTU?
You should tag the thread somehow ("solved" if that's the status quo) so others will know that there's no task left, but perhaps a solution to be found.
Thanks.