Hello!
I have an old HP MicroServer N40L (AMD Turion II Neo N40L, 2 cores at 1.5 GHz, 2 GB of RAM) running Arch Linux and Deluge, connected to the internet over a 1 Gbps/250 Mbps fiber connection.
I have installed Netdata to monitor performance and errors.
Each time I'm leeching or seeding a torrent, I get emails from Netdata warning me mostly about "ipv4 udp send buffer errors". Most of the time it's between 1 and 10 errors per minute; sometimes it's around 100 errors per minute.
Last month I leeched a dozen torrents of around 500 MB each at the same time, and got a few "ipv4 udp receive buffer errors", at around 1000 errors per minute.
It seems that the alarms are raised for the whole duration of the transfers.
Sometimes I get "inbound packets dropped" notifications too.
From a Deluge perspective, torrents download fine at 10 to 20 MB/s, and the files aren't corrupted.
I also get "tcp syn queue cookies" alerts when the server is mostly idle.
What is happening, and how can I fix this, please? :-) Can I download even faster? (With 1 Gb/s I should come close to 100 MB/s, shouldn't I?)
I've read that I could tune the Linux network stack via the /etc/sysctl.d directory, but I don't know how much I should increase the queue and buffer sizes.
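For reference, a drop-in file under /etc/sysctl.d could look like the sketch below. The filename and the values are purely illustrative placeholders (roughly the kernel defaults), not a tuning recommendation; settings there are applied at boot, or immediately with `sysctl --system`.

```ini
# /etc/sysctl.d/99-network.conf -- illustrative placeholders only
# (roughly the kernel defaults), not tuned values.
net.core.rmem_max = 212992
net.core.wmem_max = 212992
net.ipv4.udp_rmem_min = 4096
```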
Thanks in advance!
Offline
Hello there,
with 1 Gb/s I should come close to 100 MB/s
Those would be theoretical speeds; you have to consider other variables, e.g. disk speed, CPU performance, NIC performance, network congestion, etc. Also, does every peer have a 1 Gbps uplink? You, for example, only have a 250 Mbps uplink. You could try adjusting the number of peers you connect to that have the pieces you need, to increase speed.
Cheers,
Regards
Offline
Those would be theoretical speeds; you have to consider other variables, e.g. disk speed, CPU performance, NIC performance, network congestion, etc. Also, does every peer have a 1 Gbps uplink? You, for example, only have a 250 Mbps uplink. You could try adjusting the number of peers you connect to that have the pieces you need, to increase speed.
Thank you for your reply.
The disks and the NIC are capable of 1 Gbps on my LAN. The CPU can sustain 1 Gbps with most file transfer protocols (SMB locally, SFTP and rsync over the internet). There is plenty of CPU headroom when my BitTorrent client downloads at 20 MB/s.
Regarding peers' uplinks, most torrents I download have lots of peers, so I think I should be able to exceed 20 MB/s (160 Mbps), at least now and then. It looks like there's a hard cap at 25 MB/s. Could it be related to the network stack configuration?
Offline
Hey,
so I think I should be able to exceed 20 MB/s (160 Mbps)
Why do you believe that?
Offline
Why shouldn't I?
Offline
Where do you believe the bottleneck is?
Offline
I wonder if the default network stack tuning is the bottleneck, as I get lots of network errors. That's my main request: I'd like to get rid of the buffer errors and dropped packets first.
Offline
Did you increase the buffer sizes? Did that make the buffer warnings stop? Does your download speed increase?
Did you measure your general downstream, both to /dev/null and to a file in ~?
A variance of 10 MB/s to 20 MB/s does not suggest a network issue, but a weak source. Did you try increasing or decreasing the number of peers? Does the actual speed depend on the torrent at hand?
Offline
Try some well-seeded Linux ISO files.
Online
The buffer errors are usually due to overflowing buffers. You can check and configure them via
sysctl net.core.rmem_max
sysctl net.ipv4.udp_mem
sysctl net.ipv4.udp_rmem_min
Check the man pages for "socket" and "udp" for further information.
What is most likely happening here is that the CPU just can't handle the torrent load: the traffic usually comes in bursts, and the system is waiting on I/O while the buffers fill up. Have you monitored your caches, I/O and CPU utilization? What about single-source downloads, e.g. over HTTP?
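If you want to quantify those errors outside Netdata, the kernel exposes the raw counters in /proc/net/snmp. Here is a minimal sketch; it parses a hard-coded sample in that file's format so it runs anywhere, but on the server itself you would grep the real file as noted in the comment.

```shell
# Extract the RcvbufErrors counter from output in /proc/net/snmp's
# format. The two "Udp:" lines below are a hard-coded sample; on a
# live system you would use: snmp=$(grep '^Udp:' /proc/net/snmp)
snmp='Udp: InDatagrams NoPorts InErrors OutDatagrams RcvbufErrors SndbufErrors
Udp: 5140792 121 3047 4250736 2891 184'

rcvbuf_errors=$(printf '%s\n' "$snmp" | awk '
    NR == 1 { for (i = 2; i <= NF; i++) if ($i == "RcvbufErrors") col = i }
    NR == 2 { print $col }')
echo "RcvbufErrors: $rcvbuf_errors"
# prints: RcvbufErrors: 2891
```

Watching that counter before and after a sysctl change tells you whether the change actually helped.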
Offline
Did you increase the buffer sizes? Did that make the buffer warnings stop? Does your download speed increase?
Not yet, because I don't know how much I should increase them. How should I determine the right buffer sizes?
Did you measure your general downstream, both to /dev/null and to a file in ~?
Using iperf3, I do get around 900 Mb/s in download and 250 Mb/s in upload.
The buffer errors are usually due to overflowing buffers. You can check and configure them via
sysctl net.core.rmem_max
sysctl net.ipv4.udp_mem
sysctl net.ipv4.udp_rmem_min
Check the man pages for "socket" and "udp" for further information.
There are no occurrences of "socket" or "udp" in sysctl's man page. Which man pages are you suggesting I read?
Can you suggest a way to determine the right buffer sizes for my hardware and my needs?
Thanks in advance!
Last edited by romano2k (2019-03-14 21:52:27)
Offline
Just append a "0" to every value - a factor of 10 should make some impact ;-)
If the error is gone, bisect your way down to the value you actually need.
As for the performance test:
Try https://linhost.info/2013/10/download-test-files/ and make sure to compare writing to /dev/null and to ~/file (to see whether your HDD is the limiting factor here).
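To take the network out of the equation entirely, you can also measure the raw write speed of the target filesystem with a synchronous dd run. A quick sketch; /tmp/ddtest is just a placeholder path, point it at the array Deluge actually writes to:

```shell
# Write 64 MiB with fdatasync so the reported rate reflects the disk,
# not the page cache. /tmp/ddtest is a placeholder path; use a file on
# the filesystem you actually download to.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
```

Remove the test file afterwards with `rm /tmp/ddtest`, and use a count large enough to exceed your RAM if you want a sustained-write figure.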
Offline
Thank you for your help.
Here's what I did:
sudo sysctl net.core.rmem_max=2129920
sudo sysctl net.ipv4.udp_mem="428220 570970 856440"
sudo sysctl net.ipv4.udp_rmem_min=40960
Downloading a torrent with 4000 seeders, I'm not exceeding 10 MB/s. I still get "udp receive buffer errors", "ram available" (less than 10%) and "accept queue overflows" emails from Netdata. Also, I just noticed that deluged was using up to 80% CPU according to htop. Should I add another 0? (factor 100!)
Regarding HTTP performance:
With Deluge mostly idle, wget -O /dev/null http://ipv4.rbx.proof.ovh.net/files/100Mio.dat gives me variable throughputs between 65 and 85 MB/s.
I only get around 40 MB/s while downloading to the mdadm array where Deluge stores its files (3 x 2 TB WD Red SATA hard drives) :-/
Offline
I'd rather try limiting the peers - and it does seem that the disk becomes the limiting factor (especially if you're seeding while leeching, since you'll have read I/O load as well).
Offline
Just adding a 0 won't help, because the overflowing buffer is an effect, not the cause. Honestly, it looks like the hardware is the limit here.
Limit the number of peers and monitor I/O closely as well.
Offline