Hey folks!
I have an Arch Linux machine running at my dad's house. It's only used as a backup server and all connections and files come in on SSH. All machines that backup to it are remote.
It's behaving kinda sluggish when receiving connections and files. If I SSH to it from any of the machines that back up to it, even the interactive SSH session feels slow: just pressing Enter takes about a full second before the next prompt appears. When my clients back up files to it, transfers go much slower than I would expect.
However.... when I SSH OUT from this backup server, speeds are great! No noticeable lag. This server also backs up its /etc, /var, and /home to one of the same clients that back up to it.... and that goes REALLY fast! SSH behaves as promptly as I would expect.
I've set up several Arch machines before... I like to think I have a pretty good idea of Linux. Most of my experience is with RHEL.
For the most part, I just followed the wiki for setup. The firewall is the iptables setup from the wiki's "Simple stateful firewall" article. The only thing I did differently with this machine from the others is that it runs the linux-hardened kernel.
ISP is Comcast. If anything, I would have thought outgoing connections would be more likely to be throttled than incoming. SSH hits the router on a high port, which is then forwarded internally to port 22.
All clients use borg backup (Linux), rsync (*nix), or duplicati (Windows) for backing up.... but all are slower than I would expect. I've also mostly ruled out disk IO as the problem, since even spamming Enter and typing ordinary commands in the SSH session lags.
If anyone has suggestions on what I could check.... I'd really like to improve the SSH latency. I only have a dozen or so machines backing up to it now.... but I'd like to grow that number if I can figure out why it's running so slow.
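One quick way to put numbers on the lag described above is to time a no-op SSH command and compare it to raw network round trips. A minimal sketch (the hostname "backup" is a placeholder for the server's address):

```shell
# measure_rtt: print the wall-clock milliseconds a single command takes.
# Pure shell + GNU date, so it works the same locally and over ssh.
measure_rtt() {
  local start end
  start=$(date +%s%N)
  "$@" >/dev/null 2>&1
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# Against the backup server ("backup" is a placeholder hostname):
#   measure_rtt ssh backup true   # full ssh round trip, no disk IO involved
#   ping -c 5 backup              # raw network RTT for comparison
# If the ssh number is large but ping is small, the network path is fine
# and the delay is server-side (auth, DNS lookups, shell startup, ...).
```

Running "ssh -vvv backup true" additionally shows which handshake step stalls (DNS, key exchange, authentication).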
Was it ever fast?
If so, what changed?
Edit: The reason I ask is the title, "Unusual Incoming SSH Latency". I was wondering: unusual compared to what?
Last edited by Zod (2020-01-14 23:16:49)
Thinking back, I'm really not sure. I remember it being fine during the install... but the install was done on my local network, not remotely. I'll try booting the standard linux kernel instead of linux-hardened and see if that makes any difference. That really seems like the biggest difference from my other Arch machines.
I'm still open to any suggestions in the mean time.
Well...what I would do
1) Drop the firewall on the central server.
2) Remove the port mapping(s) on the router.
3) Make sure there is only one (1) incoming SSH connection.
Make it as simple as possible and establish a baseline, then start adding the stuff back in (one piece at a time) to see what happens.
Isolate the cause and go from there.
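Roughly, steps 1 and 3 above might look like this (run as root; the backup file path is arbitrary, and saving the rules first lets you restore the firewall after the test):

```shell
# 1) Save the current rules, then open the firewall completely for the test:
iptables-save > /root/iptables.backup
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F

# 3) List established inbound ssh sessions, to confirm there is only one:
ss -tn state established '( sport = :22 )'

# When done testing, put the firewall back:
iptables-restore < /root/iptables.backup
```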
ISP is Comcast. If anything, I would have thought outgoing connections would be more likely to be throttled than incoming. SSH hits the router on a high port, which is then forwarded internally to port 22.
Inbound TCP connections, e.g.
ComcastHome:40572 <- RandomIP:39852
look a lot like BitTorrent seed traffic, and throttling of that doesn't show up in traditional (outbound TCP) speed tests.
To measure/test, try:
1) ip tcp_metrics on the host, paying attention to rtt; and/or
2) a low (<1024) port forward on the router.
Last edited by sabroad (2020-01-15 10:44:32)
--
saint_abroad