Hello.
I am experiencing strange behavior from my NAS: its download rate is dramatically reduced, while upload remains satisfactory. The system is home-built on an ASRock J5005 ITX platform, headless, runs 24/7, and provides file (Samba), DHCP, NTP and database services for the home LAN. The odd thing is that the faster the link, the more impaired the receiving rate is. The problem was first spotted on Samba file sharing and confirmed using iperf3.
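For reference, the server side in the logs below is just a plain iperf3 server; the exact client-side invocations are not shown here, so the following is only a sketch of how such a forward/reverse pair is typically run (server IP as given below):
# on the NAS (192.168.0.129)
iperf3 -s
# on a client: first client -> server (the NAS receives), then the reverse direction with -R
iperf3 -c 192.168.0.129 -V
iperf3 -c 192.168.0.129 -V -R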
Server IP is 192.168.0.129:
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Time: Fri, 03 Oct 2025 13:35:43 GMT
Accepted connection from 192.168.0.130, port 9544
Cookie: g2fpj4opao4fw4ubs7deb4dxdcpduv4kusbc
TCP MSS: 0 (default)
[ 5] local 192.168.0.129 port 5201 connected to 192.168.0.130 port 9545
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 1.00-2.00 sec 256 KBytes 2.10 Mbits/sec
[ 5] 2.00-3.00 sec 256 KBytes 2.10 Mbits/sec
[ 5] 3.00-4.00 sec 640 KBytes 5.24 Mbits/sec
[ 5] 4.00-5.00 sec 128 KBytes 1.05 Mbits/sec
[ 5] 5.00-6.00 sec 384 KBytes 3.15 Mbits/sec
[ 5] 6.00-7.00 sec 512 KBytes 4.19 Mbits/sec
[ 5] 7.00-8.00 sec 384 KBytes 3.15 Mbits/sec
[ 5] 8.00-9.00 sec 384 KBytes 3.15 Mbits/sec
[ 5] 9.00-10.00 sec 512 KBytes 4.19 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] (sender statistics not available)
[ 5] 0.00-10.01 sec 3.38 MBytes 2.83 Mbits/sec receiver
rcv_tcp_congestion cubic
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
Time: Fri, 03 Oct 2025 13:35:59 GMT
Accepted connection from 192.168.0.130, port 9639
Cookie: ihoedtrjcxvclbnht2yxyp4gxfqoeo3gmcrf
TCP MSS: 0 (default)
[ 5] local 192.168.0.129 port 5201 connected to 192.168.0.130 port 9640
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 110 MBytes 925 Mbits/sec 0 490 KBytes
[ 5] 1.00-2.00 sec 108 MBytes 904 Mbits/sec 0 543 KBytes
[ 5] 2.00-3.00 sec 105 MBytes 881 Mbits/sec 0 624 KBytes
[ 5] 3.00-4.00 sec 96.1 MBytes 806 Mbits/sec 0 686 KBytes
[ 5] 4.00-5.00 sec 108 MBytes 904 Mbits/sec 0 686 KBytes
[ 5] 5.00-6.00 sec 110 MBytes 918 Mbits/sec 0 686 KBytes
[ 5] 6.00-7.00 sec 109 MBytes 913 Mbits/sec 0 686 KBytes
[ 5] 7.00-8.00 sec 108 MBytes 908 Mbits/sec 0 686 KBytes
[ 5] 8.00-9.00 sec 106 MBytes 894 Mbits/sec 0 761 KBytes
[ 5] 9.00-10.00 sec 108 MBytes 907 Mbits/sec 0 807 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 1.04 GBytes 896 Mbits/sec 0 sender
[ 5] (receiver statistics not available)
CPU Utilization: local/sender 8.2% (0.2%u/7.9%s), remote/receiver 0.0% (0.0%u/0.0%s)
snd_tcp_congestion cubic
The most frequently used client is an i7 Windows 10 machine equipped with an onboard (Realtek) network adapter; its IP is 192.168.0.130.
Other wired clients (all Windows 10, gigabit Ethernet) see similar results. However, a distant one, connected through two switches, turned out to have only a Fast Ethernet link (the cable between the switches got damaged, which I found while investigating this problem) and had almost normal performance.
Wi-Fi is not as severely impaired; tests 3 and 4 were done with an iMac on an 802.11ac link:
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
-----------------------------------------------------------
Server listening on 5201 (test #3)
-----------------------------------------------------------
Time: Fri, 03 Oct 2025 14:30:59 GMT
Accepted connection from 192.168.0.144, port 49173
Cookie: ht6xtlwbl4o6ib2gojgqz2s746gzvbagkrxf
TCP MSS: 0 (default)
[ 5] local 192.168.0.129 port 5201 connected to 192.168.0.144 port 49174
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.62 MBytes 13.6 Mbits/sec
[ 5] 1.00-2.00 sec 1.50 MBytes 12.6 Mbits/sec
[ 5] 2.00-3.00 sec 2.00 MBytes 16.8 Mbits/sec
[ 5] 3.00-4.00 sec 1.38 MBytes 11.5 Mbits/sec
[ 5] 4.00-5.00 sec 1.75 MBytes 14.7 Mbits/sec
[ 5] 5.00-6.00 sec 2.00 MBytes 16.8 Mbits/sec
[ 5] 6.00-7.00 sec 2.38 MBytes 19.9 Mbits/sec
[ 5] 7.00-8.00 sec 1.62 MBytes 13.6 Mbits/sec
[ 5] 8.00-9.00 sec 1.50 MBytes 12.6 Mbits/sec
[ 5] 9.00-10.00 sec 1.75 MBytes 14.7 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] (sender statistics not available)
[ 5] 0.00-10.01 sec 17.5 MBytes 14.7 Mbits/sec receiver
rcv_tcp_congestion cubic
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
-----------------------------------------------------------
Server listening on 5201 (test #4)
-----------------------------------------------------------
Time: Fri, 03 Oct 2025 14:31:26 GMT
Accepted connection from 192.168.0.144, port 49175
Cookie: jkinvlokg4yi5y6zydxqmy4lxmisbe6ois2t
TCP MSS: 0 (default)
[ 5] local 192.168.0.129 port 5201 connected to 192.168.0.144 port 49176
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 59.1 MBytes 495 Mbits/sec 15 2.69 MBytes
[ 5] 1.00-2.00 sec 57.1 MBytes 479 Mbits/sec 0 2.92 MBytes
[ 5] 2.00-3.00 sec 60.2 MBytes 505 Mbits/sec 0 3.14 MBytes
[ 5] 3.00-4.00 sec 54.5 MBytes 457 Mbits/sec 0 3.31 MBytes
[ 5] 4.00-5.00 sec 56.0 MBytes 470 Mbits/sec 0 3.46 MBytes
[ 5] 5.00-6.00 sec 57.8 MBytes 484 Mbits/sec 0 3.57 MBytes
[ 5] 6.00-7.00 sec 61.4 MBytes 515 Mbits/sec 0 3.66 MBytes
[ 5] 7.00-8.00 sec 61.6 MBytes 517 Mbits/sec 0 3.73 MBytes
[ 5] 8.00-9.00 sec 61.5 MBytes 516 Mbits/sec 0 3.73 MBytes
[ 5] 9.00-10.00 sec 62.4 MBytes 523 Mbits/sec 0 3.81 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.01 sec 593 MBytes 497 Mbits/sec 15 sender
[ 5] (receiver statistics not available)
CPU Utilization: local/sender 3.9% (0.1%u/3.8%s), remote/receiver 0.6% (0.0%u/0.5%s)
snd_tcp_congestion cubic
and tests 5 and 6 were done with a Windows 10 laptop equipped with an 802.11n USB dongle:
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
-----------------------------------------------------------
Server listening on 5201 (test #5)
-----------------------------------------------------------
Time: Fri, 03 Oct 2025 14:34:05 GMT
Accepted connection from 192.168.0.132, port 7686
Cookie: leniuszek.1759502046.625839.62ecd394
TCP MSS: 0 (default)
[ 5] local 192.168.0.129 port 5201 connected to 192.168.0.132 port 7687
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 3.62 MBytes 30.4 Mbits/sec
[ 5] 1.00-2.00 sec 3.12 MBytes 26.2 Mbits/sec
[ 5] 2.00-3.00 sec 3.75 MBytes 31.5 Mbits/sec
[ 5] 3.00-4.00 sec 4.00 MBytes 33.6 Mbits/sec
[ 5] 4.00-5.00 sec 4.38 MBytes 36.7 Mbits/sec
[ 5] 5.00-6.00 sec 3.62 MBytes 30.4 Mbits/sec
[ 5] 6.00-7.00 sec 4.75 MBytes 39.8 Mbits/sec
[ 5] 7.00-8.00 sec 3.12 MBytes 26.2 Mbits/sec
[ 5] 8.00-9.00 sec 3.00 MBytes 25.2 Mbits/sec
[ 5] 9.00-10.00 sec 2.38 MBytes 19.9 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] (sender statistics not available)
[ 5] 0.00-10.02 sec 35.8 MBytes 29.9 Mbits/sec receiver
rcv_tcp_congestion cubic
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
-----------------------------------------------------------
Server listening on 5201 (test #6)
-----------------------------------------------------------
Time: Fri, 03 Oct 2025 14:34:22 GMT
Accepted connection from 192.168.0.132, port 7696
Cookie: leniuszek.1759502063.286785.411147a6
TCP MSS: 0 (default)
[ 5] local 192.168.0.129 port 5201 connected to 192.168.0.132 port 7697
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 6.88 MBytes 57.6 Mbits/sec 0 271 KBytes
[ 5] 1.00-2.00 sec 6.38 MBytes 53.5 Mbits/sec 0 271 KBytes
[ 5] 2.00-3.00 sec 6.25 MBytes 52.4 Mbits/sec 0 271 KBytes
[ 5] 3.00-4.00 sec 5.88 MBytes 49.3 Mbits/sec 0 271 KBytes
[ 5] 4.00-5.00 sec 6.38 MBytes 53.5 Mbits/sec 0 271 KBytes
[ 5] 5.00-6.00 sec 5.75 MBytes 48.2 Mbits/sec 0 271 KBytes
[ 5] 6.00-7.00 sec 5.88 MBytes 49.3 Mbits/sec 0 271 KBytes
[ 5] 7.00-8.00 sec 6.25 MBytes 52.4 Mbits/sec 0 271 KBytes
[ 5] 8.00-9.00 sec 6.88 MBytes 57.7 Mbits/sec 0 271 KBytes
[ 5] 9.00-10.00 sec 6.25 MBytes 52.4 Mbits/sec 0 271 KBytes
[ 5] 10.00-10.02 sec 128 KBytes 51.1 Mbits/sec 0 271 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.02 sec 63.4 MBytes 53.0 Mbits/sec 0 sender
[ 5] (receiver statistics not available)
CPU Utilization: local/sender 0.5% (0.1%u/0.5%s), remote/receiver 1.6% (0.6%u/0.9%s)
snd_tcp_congestion cubic
What is astonishing, however, is that speedtest-cli yields normal results (my ISP provides 150 Mbps):
Retrieving speedtest.net configuration...
Testing from UPC Polska (89.71.203.183)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by dg-net.pl (Katowice) [176.50 km]: 23.119 ms
Testing download speed................................................................................
Download: 142.04 Mbit/s
Testing upload speed......................................................................................................
Upload: 23.35 Mbit/s
The server was initially equipped with the onboard Realtek adapter, but I swapped it for an Intel I210-T1, without improvement. The cables between the server and the most frequently used client were exchanged (Cat 6); for testing purposes the two machines were even connected directly, to exclude a switch failure or external traffic overload. No progress. When the server was booted from a fresh, clean Linux (SystemRescue1202) it exhibited nearly gigabit speeds in both directions. TSO and ASPM are turned off. Clearing the firewall and traffic control rules makes no difference. On careful inspection, the speeds are slightly better immediately after boot.
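For completeness, the offload and traffic-control cleanup mentioned above was along these lines; this is a sketch rather than the exact commands (interface name enp1s0 as it appears in the ip a output further down; ASPM itself was handled separately, e.g. via the pcie_aspm=off kernel parameter):
# turn off TCP segmentation offload (and related offloads while testing)
sudo ethtool -K enp1s0 tso off gso off gro off
sudo ethtool -k enp1s0 | grep -E 'segmentation|receive-offload'
# remove any custom root qdisc (prints an error if none is installed)
sudo tc qdisc del dev enp1s0 root
tc qdisc show dev enp1s0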
Systemd services:
find /etc/systemd -type l -exec test -f {} \; -print | awk -F'/' '{ printf ("%-40s | %s\n", $(NF-0), $(NF-1)) }' | sort -f
airsaned.service | multi-user.target.wants
avahi-daemon.service | multi-user.target.wants
avahi-daemon.socket | sockets.target.wants
backuppc.service | multi-user.target.wants
brscan-skey.service | multi-user.target.wants
certbot.timer | multi-user.target.wants
certbot.timer | timers.target.wants
cups.path | multi-user.target.wants
cups.service | printer.target.wants
cups.socket | sockets.target.wants
dbus-org.freedesktop.Avahi.service | system
dbus-org.freedesktop.timesync1.service | system
dnsmasq.service | multi-user.target.wants
fcgiwrap.socket | sockets.target.wants
freedns-update.timer | timers.target.wants
getty@tty1.service | getty.target.wants
he-ipv6.service | multi-user.target.wants
imaginary.service | default.target.wants
ipp-usb.service | multi-user.target.wants
journal@tty1.service | getty.target.wants
kurdwanow.service | multi-user.target.wants
mariadb.service | multi-user.target.wants
netctl@intern0\x2dprofile.service | multi-user.target.wants
nextcloud-app-update-all.timer | timers.target.wants
nextcloud-cron.timer | timers.target.wants
nextcloud-files-scan-all.timer | timers.target.wants
nginx.service | multi-user.target.wants
nmb.service | multi-user.target.wants
ntpd.service | multi-user.target.wants
openvpn-server@server-domowy.service | multi-user.target.wants
openvpn-server@zasoby-upc.service | multi-user.target.wants
p11-kit-server.socket | sockets.target.wants
papermc.service | multi-user.target.wants
php-fpm-legacy.service | multi-user.target.wants
pulseaudio.socket | sockets.target.wants
redis.service | multi-user.target.wants
remote-fs.target | multi-user.target.wants
saned.socket | sockets.target.wants
shorewall6.service | basic.target.wants
shorewall.service | basic.target.wants
smb.service | multi-user.target.wants
speedtest.timer | multi-user.target.wants
sshd.service | multi-user.target.wants
stubby.service | multi-user.target.wants
systemd-timesyncd.service | sysinit.target.wants
transmission.service | multi-user.target.wants
wsdd2.service | multi-user.target.wants
MTU issue? Do the local Windows hosts (the only ones showing the abnormal behavior, right?) have jumbo frames enabled?
Can you boot https://grml.org or similar on the main host (192.168.0.130) and run iperf3 from there?
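For reference, checking whether jumbo frames are in play on both ends could look roughly like this (interface name is a placeholder; the Windows command needs an elevated prompt):
# Linux side: show the configured MTU
ip link show enp1s0 | grep mtu
# Windows side: per-interface MTU
netsh interface ipv4 show subinterfaces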
I have never used jumbo frames in my LAN, but tried them for a while for testing purposes. Now I remembered why:
[misio@norka2 ~]$ sudo ip link set dev enp1s0 mtu 9000
[misio@norka2 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc htb state UP group default qlen 1000
link/ether 00:1b:21:f2:0c:1d brd ff:ff:ff:ff:ff:ff
altname enx001b21f20c1d
inet 192.168.0.129/24 brd 192.168.0.255 scope global enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::21b:21ff:fef2:c1d/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
3: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/sit 0.0.0.0 brd 0.0.0.0
4: he-ipv6@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
link/sit 192.168.0.129 peer 216.66.80.162
inet6 2001:470:70:87::2/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::c0a8:81/64 scope link
valid_lft forever preferred_lft forever
5: tun1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN group default qlen 500
link/none
inet 10.209.125.161/28 scope global tun1
valid_lft forever preferred_lft forever
inet6 fe80::84d0:e5bc:13f0:a555/64 scope link stable-privacy proto kernel_ll
valid_lft forever preferred_lft forever
6: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc htb state UNKNOWN group default qlen 500
link/none
inet 10.180.32.209/28 scope global tun0
valid_lft forever preferred_lft forever
inet6 fe80::dacd:8d22:5907:dcc5/64 scope link stable-privacy proto kernel_ll
valid_lft forever preferred_lft forever
[misio@norka2 ~]$ iperf3 -c norka3 -V -R
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 8960
Time: Sat, 04 Oct 2025 10:33:37 GMT
Connecting to host norka3, port 5201
Reverse mode, remote host norka3 is sending
Cookie: wb3dygo6fig2amw6w6fwwgmnekfni4jodtpk
TCP MSS: 8960 (default)
[ 5] local 192.168.0.129 port 60600 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 1.00-2.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 2.00-3.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 3.00-4.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 4.00-5.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 5.00-6.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 6.00-7.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 7.00-8.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 8.00-9.00 sec 0.00 Bytes 0.00 bits/sec
[ 5] 9.00-10.00 sec 0.00 Bytes 0.00 bits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 256 KBytes 210 Kbits/sec sender
[ 5] 0.00-10.00 sec 0.00 Bytes 0.00 bits/sec receiver
rcv_tcp_congestion cubic
iperf Done.
[misio@norka2 ~]$ iperf3 -c norka3 -V
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 8960
Time: Sat, 04 Oct 2025 10:34:29 GMT
Connecting to host norka3, port 5201
Cookie: yvpm4fxzjlkei7bvbbrtdc3a3g4ewuf25or7
TCP MSS: 8960 (default)
[ 5] local 192.168.0.129 port 33422 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 113 MBytes 947 Mbits/sec 0 420 KBytes
[ 5] 1.00-2.00 sec 112 MBytes 943 Mbits/sec 0 464 KBytes
[ 5] 2.00-3.00 sec 112 MBytes 937 Mbits/sec 0 490 KBytes
[ 5] 3.00-4.00 sec 112 MBytes 937 Mbits/sec 0 542 KBytes
[ 5] 4.00-5.00 sec 112 MBytes 939 Mbits/sec 0 578 KBytes
[ 5] 5.00-6.00 sec 112 MBytes 936 Mbits/sec 0 674 KBytes
[ 5] 6.00-7.00 sec 113 MBytes 949 Mbits/sec 0 1006 KBytes
[ 5] 7.00-8.00 sec 111 MBytes 933 Mbits/sec 22 892 KBytes
[ 5] 8.00-9.00 sec 112 MBytes 938 Mbits/sec 0 892 KBytes
[ 5] 9.00-10.00 sec 112 MBytes 936 Mbits/sec 37 822 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.09 GBytes 940 Mbits/sec 59 sender
[ 5] 0.00-10.00 sec 1.09 GBytes 937 Mbits/sec receiver
CPU Utilization: local/sender 7.2% (0.2%u/6.9%s), remote/receiver 3.3% (1.4%u/1.9%s)
snd_tcp_congestion cubic
iperf Done.
Now the receiving speed is dead.
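(To separate an MTU black-hole problem from everything else, a don't-fragment ping is a handy sanity check; a sketch, assuming the client's MTU was raised to 9000 as well - 8972 payload bytes plus 28 bytes of IP/ICMP headers gives a 9000-byte packet:)
ping -M do -s 8972 -c 3 192.168.0.130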
Why do you assume it concerns only Windows clients? I supplied an iperf test from macOS with similar complaints above, and I also booted the "most often used client" with SystemRescue to test; nothing improved.
I doubt the exact issue is the MTU, although lowering it improves the results a bit:
[misio@norka2 ~]$ sudo ip link set dev enp1s0 mtu 1500; iperf3 -c norka3 -V -R; iperf3 -c norka3 -V
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 1460
Time: Sat, 04 Oct 2025 10:17:30 GMT
Connecting to host norka3, port 5201
Reverse mode, remote host norka3 is sending
Cookie: 7dt4cdjjsajvo6zewejcxjyjk37mfozjoqfn
TCP MSS: 1460 (default)
[ 5] local 192.168.0.129 port 57390 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 256 KBytes 2.09 Mbits/sec
[ 5] 1.00-2.00 sec 512 KBytes 4.19 Mbits/sec
[ 5] 2.00-3.00 sec 512 KBytes 4.19 Mbits/sec
[ 5] 3.00-4.00 sec 384 KBytes 3.15 Mbits/sec
[ 5] 4.00-5.00 sec 512 KBytes 4.19 Mbits/sec
[ 5] 5.00-6.00 sec 384 KBytes 3.15 Mbits/sec
[ 5] 6.00-7.00 sec 640 KBytes 5.24 Mbits/sec
[ 5] 7.00-8.00 sec 512 KBytes 4.19 Mbits/sec
[ 5] 8.00-9.00 sec 512 KBytes 4.19 Mbits/sec
[ 5] 9.00-10.00 sec 512 KBytes 4.19 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 4.88 MBytes 4.09 Mbits/sec sender
[ 5] 0.00-10.00 sec 4.62 MBytes 3.88 Mbits/sec receiver
rcv_tcp_congestion cubic
iperf Done.
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 1460
Time: Sat, 04 Oct 2025 10:17:40 GMT
Connecting to host norka3, port 5201
Cookie: vpqjzvzuswg36u6ecdwpcaieonnw534yybx5
TCP MSS: 1460 (default)
[ 5] local 192.168.0.129 port 37256 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 100 MBytes 841 Mbits/sec 0 646 KBytes
[ 5] 1.00-2.00 sec 104 MBytes 868 Mbits/sec 440 361 KBytes
[ 5] 2.00-3.00 sec 97.2 MBytes 816 Mbits/sec 0 433 KBytes
[ 5] 3.00-4.00 sec 68.6 MBytes 575 Mbits/sec 730 74.1 KBytes
[ 5] 4.00-5.00 sec 72.8 MBytes 611 Mbits/sec 0 325 KBytes
[ 5] 5.00-6.00 sec 82.1 MBytes 689 Mbits/sec 0 402 KBytes
[ 5] 6.00-7.00 sec 107 MBytes 896 Mbits/sec 0 412 KBytes
[ 5] 7.00-8.00 sec 108 MBytes 905 Mbits/sec 0 419 KBytes
[ 5] 8.00-9.00 sec 107 MBytes 899 Mbits/sec 0 423 KBytes
[ 5] 9.00-10.00 sec 106 MBytes 892 Mbits/sec 0 426 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 954 MBytes 800 Mbits/sec 1170 sender
[ 5] 0.00-10.00 sec 951 MBytes 797 Mbits/sec receiver
CPU Utilization: local/sender 6.0% (0.1%u/5.9%s), remote/receiver 5.2% (2.5%u/2.7%s)
snd_tcp_congestion cubic
iperf Done.
[misio@norka2 ~]$ sudo ip link set dev enp1s0 mtu 1300; iperf3 -c norka3 -V -R; iperf3 -c norka3 -V
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 1260
Time: Sat, 04 Oct 2025 10:20:32 GMT
Connecting to host norka3, port 5201
Reverse mode, remote host norka3 is sending
Cookie: qinlxeh5bn6soc64vpp44citiuub6c2x6zsz
TCP MSS: 1260 (default)
[ 5] local 192.168.0.129 port 59522 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 128 KBytes 1.05 Mbits/sec
[ 5] 1.00-2.00 sec 384 KBytes 3.15 Mbits/sec
[ 5] 2.00-3.00 sec 896 KBytes 7.34 Mbits/sec
[ 5] 3.00-4.00 sec 640 KBytes 5.24 Mbits/sec
[ 5] 4.00-5.00 sec 640 KBytes 5.24 Mbits/sec
[ 5] 5.00-6.00 sec 1.00 MBytes 8.39 Mbits/sec
[ 5] 6.00-7.00 sec 768 KBytes 6.29 Mbits/sec
[ 5] 7.00-8.00 sec 640 KBytes 5.24 Mbits/sec
[ 5] 8.00-9.00 sec 640 KBytes 5.24 Mbits/sec
[ 5] 9.00-10.00 sec 640 KBytes 5.24 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 6.50 MBytes 5.45 Mbits/sec sender
[ 5] 0.00-10.00 sec 6.25 MBytes 5.24 Mbits/sec receiver
rcv_tcp_congestion cubic
iperf Done.
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 1260
Time: Sat, 04 Oct 2025 10:20:42 GMT
Connecting to host norka3, port 5201
Cookie: m2lrbodbtn3mna3n6q3lsayrlu4mda7ovcxf
TCP MSS: 1260 (default)
[ 5] local 192.168.0.129 port 44570 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 112 MBytes 934 Mbits/sec 0 390 KBytes
[ 5] 1.00-2.00 sec 108 MBytes 910 Mbits/sec 0 390 KBytes
[ 5] 2.00-3.00 sec 109 MBytes 911 Mbits/sec 0 437 KBytes
[ 5] 3.00-4.00 sec 109 MBytes 911 Mbits/sec 0 437 KBytes
[ 5] 4.00-5.00 sec 108 MBytes 904 Mbits/sec 0 437 KBytes
[ 5] 5.00-6.00 sec 109 MBytes 915 Mbits/sec 0 458 KBytes
[ 5] 6.00-7.00 sec 107 MBytes 901 Mbits/sec 0 506 KBytes
[ 5] 7.00-8.00 sec 109 MBytes 912 Mbits/sec 0 530 KBytes
[ 5] 8.00-9.00 sec 108 MBytes 905 Mbits/sec 0 555 KBytes
[ 5] 9.00-10.00 sec 108 MBytes 909 Mbits/sec 0 651 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.06 GBytes 911 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 1.06 GBytes 908 Mbits/sec receiver
CPU Utilization: local/sender 4.6% (0.1%u/4.5%s), remote/receiver 3.4% (2.2%u/1.2%s)
snd_tcp_congestion cubic
iperf Done.
[misio@norka2 ~]$ sudo ip link set dev enp1s0 mtu 1100; iperf3 -c norka3 -V -R; iperf3 -c norka3 -V
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 1060
Time: Sat, 04 Oct 2025 10:26:36 GMT
Connecting to host norka3, port 5201
Reverse mode, remote host norka3 is sending
Cookie: r634wxhwhabuvjlxvnxrcyetwqa3kml6qw4s
TCP MSS: 1060 (default)
[ 5] local 192.168.0.129 port 36442 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 896 KBytes 7.33 Mbits/sec
[ 5] 1.00-2.00 sec 2.62 MBytes 22.0 Mbits/sec
[ 5] 2.00-3.00 sec 1.62 MBytes 13.6 Mbits/sec
[ 5] 3.00-4.00 sec 3.38 MBytes 28.3 Mbits/sec
[ 5] 4.00-5.00 sec 4.50 MBytes 37.8 Mbits/sec
[ 5] 5.00-6.00 sec 4.75 MBytes 39.8 Mbits/sec
[ 5] 6.00-7.00 sec 3.12 MBytes 26.2 Mbits/sec
[ 5] 7.00-8.00 sec 5.38 MBytes 45.1 Mbits/sec
[ 5] 8.00-9.00 sec 3.00 MBytes 25.2 Mbits/sec
[ 5] 9.00-10.00 sec 2.50 MBytes 21.0 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 32.0 MBytes 26.8 Mbits/sec sender
[ 5] 0.00-10.00 sec 31.8 MBytes 26.6 Mbits/sec receiver
rcv_tcp_congestion cubic
iperf Done.
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 1060
Time: Sat, 04 Oct 2025 10:26:46 GMT
Connecting to host norka3, port 5201
Cookie: bpwfves4upoqodon4q5hg2jutukcbuzy4ryl
TCP MSS: 1060 (default)
[ 5] local 192.168.0.129 port 44818 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 111 MBytes 931 Mbits/sec 0 424 KBytes
[ 5] 1.00-2.00 sec 106 MBytes 888 Mbits/sec 0 470 KBytes
[ 5] 2.00-3.00 sec 107 MBytes 897 Mbits/sec 0 542 KBytes
[ 5] 3.00-4.00 sec 107 MBytes 896 Mbits/sec 0 542 KBytes
[ 5] 4.00-5.00 sec 107 MBytes 900 Mbits/sec 0 542 KBytes
[ 5] 5.00-6.00 sec 106 MBytes 893 Mbits/sec 0 576 KBytes
[ 5] 6.00-7.00 sec 108 MBytes 908 Mbits/sec 0 605 KBytes
[ 5] 7.00-8.00 sec 107 MBytes 901 Mbits/sec 0 605 KBytes
[ 5] 8.00-9.00 sec 108 MBytes 903 Mbits/sec 0 605 KBytes
[ 5] 9.00-10.00 sec 108 MBytes 905 Mbits/sec 0 605 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.05 GBytes 902 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 1.05 GBytes 899 Mbits/sec receiver
CPU Utilization: local/sender 4.8% (0.0%u/4.8%s), remote/receiver 2.0% (1.2%u/0.8%s)
snd_tcp_congestion cubic
iperf Done.
[misio@norka2 ~]$ sudo ip link set dev enp1s0 mtu 900; sleep 4s; iperf3 -c norka3 -V -R; iperf3 -c norka3 -V
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 860
Time: Sat, 04 Oct 2025 10:28:16 GMT
Connecting to host norka3, port 5201
Reverse mode, remote host norka3 is sending
Cookie: jsanbqpisrovculvssojvprm53q447lffq4c
TCP MSS: 860 (default)
[ 5] local 192.168.0.129 port 38988 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 13.1 MBytes 110 Mbits/sec
[ 5] 1.00-2.00 sec 5.12 MBytes 43.0 Mbits/sec
[ 5] 2.00-3.00 sec 2.88 MBytes 24.1 Mbits/sec
[ 5] 3.00-4.00 sec 10.1 MBytes 84.9 Mbits/sec
[ 5] 4.00-5.00 sec 14.4 MBytes 121 Mbits/sec
[ 5] 5.00-6.00 sec 14.2 MBytes 120 Mbits/sec
[ 5] 6.00-7.00 sec 13.6 MBytes 114 Mbits/sec
[ 5] 7.00-8.00 sec 15.8 MBytes 132 Mbits/sec
[ 5] 8.00-9.00 sec 17.0 MBytes 143 Mbits/sec
[ 5] 9.00-10.00 sec 4.75 MBytes 39.8 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.00 sec 113 MBytes 94.8 Mbits/sec sender
[ 5] 0.00-10.00 sec 111 MBytes 93.1 Mbits/sec receiver
rcv_tcp_congestion cubic
iperf Done.
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
Control connection MSS 860
Time: Sat, 04 Oct 2025 10:28:26 GMT
Connecting to host norka3, port 5201
Cookie: u3scacqku5xsnywjfdolhlk347oor5lt3hiu
TCP MSS: 860 (default)
[ 5] local 192.168.0.129 port 59798 connected to 192.168.0.130 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 104 MBytes 876 Mbits/sec 0 608 KBytes
[ 5] 1.00-2.00 sec 103 MBytes 862 Mbits/sec 0 608 KBytes
[ 5] 2.00-3.00 sec 106 MBytes 891 Mbits/sec 0 608 KBytes
[ 5] 3.00-4.00 sec 107 MBytes 895 Mbits/sec 0 636 KBytes
[ 5] 4.00-5.00 sec 102 MBytes 854 Mbits/sec 0 685 KBytes
[ 5] 5.00-6.00 sec 106 MBytes 886 Mbits/sec 82 480 KBytes
[ 5] 6.00-7.00 sec 104 MBytes 876 Mbits/sec 1001 296 KBytes
[ 5] 7.00-8.00 sec 107 MBytes 894 Mbits/sec 0 377 KBytes
[ 5] 8.00-9.00 sec 105 MBytes 884 Mbits/sec 0 394 KBytes
[ 5] 9.00-10.00 sec 107 MBytes 897 Mbits/sec 0 394 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 1.03 GBytes 882 Mbits/sec 1083 sender
[ 5] 0.00-10.00 sec 1.02 GBytes 879 Mbits/sec receiver
CPU Utilization: local/sender 3.4% (0.1%u/3.3%s), remote/receiver 5.5% (1.9%u/3.6%s)
snd_tcp_congestion cubic
iperf Done.
I noticed another strange thing: right after changing the MTU, iperf3 fails with an error and needs a second or two to get going (hence the sleep in the last example). I had not seen that before (I had executed the commands consecutively, one after another), but with that one-liner above only the second iperf3 did its job, and I had to repeat it to get both the reverse and the normal run.
iperf 3.19.1
Linux norka2 6.12.48-1-lts #1 SMP PREEMPT_DYNAMIC Sun, 21 Sep 2025 17:47:58 +0000 x86_64
iperf3: error - unable to connect to server - server may have stopped running or use a different port, firewall issue, etc.: No route to host
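A minimal workaround sketch, if the link simply needs a moment to come back up after the MTU change, is to wait for the peer before starting the tests (host name as used above):
until ping -c 1 -W 1 norka3 >/dev/null 2>&1; do sleep 1; done
iperf3 -c norka3 -V -R; iperf3 -c norka3 -V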
I shall try GRML, but I wonder how it is better than SystemRescue (the latter is based on Arch). I will post the results later.
I also had the idea that maybe I am dealing with a memory problem: my system is over-provisioned with RAM (according to the specifications the board supports 8 GB, while my configuration runs with 32 GB). However, that has never caused trouble before and I have been using it like this for more than 4 years now. I will also run memtest or something similar.
It seems I have found the culprit's whereabouts.
I ran memtest, no errors.
As seth suggested, I ran GRML (whoa! I did it on the server, not the main client! Why did you want the client to switch to Linux?) and conducted tests.
For a couple of hours the live-CD Linux distro performed brilliantly: almost 1 Gbit/s in both directions.
I then reverted to my normal server setup and started trial and error, switching off systemd units one by one.
When I stopped shorewall, performance jumped back immediately!
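The bisection step itself was nothing fancy; roughly like this (a sketch, unit names as in the service list above):
# stop a suspect unit, re-test, then restore it before trying the next one
sudo systemctl stop shorewall.service shorewall6.service
iperf3 -c norka3 -V -R
sudo systemctl start shorewall.service shorewall6.service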
Silly me: while testing earlier I did not imagine that an advanced firewall configuration tool could survive my flush script:
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -t raw -F
iptables -t raw -X
iptables -t security -F
iptables -t security -X
although I checked (iptables -nvL) twice!
I'll inspect my firewall configuration and report the results, because this misconfiguration produces really fancy behaviour.
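One thing worth noting: iptables -nvL only shows the iptables view and says nothing about traffic shaping. The ip a output above shows qdisc htb attached to enp1s0 (and the tun devices), which an iptables flush never removes. A sketch of how to inspect and clear that, purely for illustration (shorewall typically re-installs its shaping when restarted):
tc qdisc show dev enp1s0
tc -s class show dev enp1s0
sudo tc qdisc del dev enp1s0 root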
Why did you want the client to switch to Linux?
The theory, given the atypical behavior, was that the problem might not be the Linux server itself but the server being excluded from network tunings the other systems got (notably jumbo frames, which would not cause problems when sending, because in that direction the server decides the MTU and the TCP ACKs take hardly any space anyway) - but apparently your firewall puts a quota on the inbound traffic (CPU peak on downloads? Though interesting that of all things it only affects your own LAN segment??)
Make sure shorewall doesn't bring its own https://wiki.archlinux.org/title/Nftables tables.
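A quick way to check for leftover rules that the iptables flush above would not touch (both views, as a sketch):
# nftables tables/chains, if shorewall installed any
sudo nft list ruleset
# the legacy iptables view, for completeness
sudo iptables-save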