my contract upgrade just kicked in and I'm now up from 250 Mbit/s to 500 Mbit/s
testing with Steam I get the full bandwidth, but it seems I max out the mirror's capacity even with parallel downloads active
is there a way for pacman to use more than just one mirror at once so I can take advantage of my upgrade, or have I reached some limit where the mirror infrastructure is throttling me?
don't get me wrong - I fully understand that available bandwidth should be shared equally among all users - but splitting my load across multiple mirrors seems like a neat idea
Last edited by cryptearth (2026-03-18 12:27:03)
Offline
No
Paperclips in avatars? | Sometimes I seem a bit harsh — don’t get offended too easily!
Offline
Offline
Add that to the list of things that will probably never be implemented...
Offline
It's not a pacman problem; it would be easy to implement this in pacman. The problem is the non-atomic updates of the mirrors themselves: you cannot guarantee that all mirrors are in sync at all times.
Online
@cryptearth
So what download speeds do you achieve from the servers? Post a record. Are some servers faster than others?
Last edited by xerxes_ (2026-03-16 09:55:39)
Offline
Add that to the list of things that will probably never be implemented...
Thanks for this btw ^^
in /etc/pacman.conf
ParallelDownloads = 5
That probably solves your entire problem.
For me, just 5 parallel downloads is already faster than Windows updates by a LOT.
The most annoying packages are:
1) The Linux kernel sometimes, but not too much to be honest; that package takes like 3 minutes at most.
2) Nvidia CUDA. That one can keep me waiting up to 20 minutes when traffic gets high over here. ROCm updates sometimes do the same.
Apart from that, I think I can increase the amount to 7 or 10 maybe; with fiber optic it can obviously be more, I guess.
Last edited by Succulent of your garden (2026-03-16 10:27:30)
str( @soyg ) == str( @potplant ) btw!
Also now with avatar logo included!
Online
That probably solves your entire problem.
...snip...
Apart from that, I think I can increase the amount to 7 or 10 maybe; with fiber optic it can obviously be more, I guess.
It partly solves a problem, but not the problem of having parallel mirrors ;)
Offline
You could set up a load balancer like https://nginx.org/en/docs/http/load_balancing.html
Pacman would download from your nginx instance, which would distribute requests across the mirrors.
That said, I've never tried whether it actually works, and never had a situation where I wanted to find out. Pacman mirrors are plenty fast as is.
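A minimal sketch of what such a setup could look like, assuming the chosen mirrors share the same path layout under /archlinux/. The mirror hostnames are placeholders, not real mirrors, and name-based virtual hosts would additionally need correct Host headers, so treat this as an untested starting point:

```nginx
# /etc/nginx/nginx.conf fragment (sketch): round-robin proxy over two mirrors.
http {
    upstream archmirrors {
        server mirror1.example.org;
        server mirror2.example.org;
    }
    server {
        listen 8080;
        location / {
            # forward each request to the next mirror in the pool
            proxy_pass http://archmirrors;
        }
    }
}
```

Pacman's mirrorlist would then contain a single entry pointing at the proxy, e.g. `Server = http://localhost:8080/archlinux/$repo/os/$arch`.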
Offline
@frostschutz, it balances the load.
You would still need to check if every package on every mirror is the same version.
I wouldn't go down that slippery path, personally that is ;)
Offline
500 Mbit/s isn't really all that fast at this point; many mirrors should be able to saturate that. Just pick a better mirror.
Online
It partly solves a problem, but not the problem of having parallel mirrors wink
My bad, that's why I should wait a couple more hours after the coffee kicks in before writing stuff over here. Now I've read it again and, duh. Please just disregard my comment.
Thanks for letting me notice it, btw.
EDIT:
@frostschutz, it balances the load.
You would still need to check if every package on every mirror is the same version.
I wouldn't go down that slippery path, personally that is ;)
Yep, in order not to break the system and end up in dependency hell, I guess. But do mirrors exist, by any chance, that replicate their service behind more than one load balancer, i.e. with more than one IP to connect to? Maybe if a mirror does that, you could exploit it, provided the mirrors are at the same version 99% of the time. It also only makes sense if the mirrors are close to you: if they are regional ones, there's no point if the other load balancer is for Japan and you are in Europe ;)
How much disk space do the Arch Linux repos take? Maybe a local mirror for oneself? Not sure, to be honest.
Last edited by Succulent of your garden (2026-03-16 11:24:15)
Online
seth has already posted the relevant issue which should solve your query.
The problem are the non-atomic updates of the mirrors themselves. You cannot guarantee that all mirrors are in sync at all times.
You cannot blame them though. They are providing you this much bandwidth, thanklessly, for free, and now you want them to be completely synchronised like they are atomic clocks?!
As Scimmia has already said, pick a better mirror.
500 mbit/s is more than enough for pacman upgrades. I get 200 mbit/s of internet speed and get 160 mbit/s (20% less) from my chosen mirror but that is more than enough for me.
If changing mirrors does not help, then you should just become a mirror and let others share your bandwidth. Assuming that your mirror is geographically 0.005-0.010 km away from your computer, then you should expect a slight increase in your speeds. Around 100%-200% increment (estimate). I know it's not much but a small change times a million can become a large change.
How it feels to run shred/wipe in a COW system
Offline
I'm not accusing them of anything. This is the way Arch chose, by having a decentralized infrastructure and polling mirrors. I didn't mean synchronized by atomic clocks; it would be more like a decentralized database where all the mirrors are updated automatically (push-based) rather than them pulling the files every few hours themselves. I'm not arguing for this. This is a limitation that makes administration very easy, with no further synchronization necessary. But again, this is not pacman's problem; pacman could download from all mirrors. The implementation shouldn't be hard: parallelize download requests across the whole mirrorlist instead of a single mirror.
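As a rough illustration of the idea (not anything pacman actually does), here is a sketch that round-robins a download queue over a mirrorlist. All mirror URLs and package paths are made-up placeholders, and the fetch is shown as a dry run; real use would run curl in the background and `wait` after the loop:

```shell
#!/bin/sh
# Sketch only: round-robin a package queue over several mirrors so that
# parallel downloads hit different servers instead of a single one.

# pick_mirror INDEX MIRROR...: print the (INDEX mod count)-th mirror
pick_mirror() {
    idx=$1; shift
    shift $(( idx % $# ))
    printf '%s\n' "$1"
}

i=0
for pkg in core/os/x86_64/a-1-1-x86_64.pkg.tar.zst \
           extra/os/x86_64/b-2-1-x86_64.pkg.tar.zst; do
    m=$(pick_mirror "$i" https://mirror1.example.org/archlinux \
                         https://mirror2.example.org/archlinux)
    # In real use: curl -sfO "$m/$pkg" &   (then `wait` after the loop)
    echo "would fetch $m/$pkg"
    i=$(( i + 1 ))
done
```

This only spreads whole files over mirrors; it does nothing about the sync problem discussed above, since each mirror must still carry the exact package version pacman expects.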
Online
If the overhead of misses is as bad as the bug suggests, then what would realistically have to happen is to mutually sync the mirrors, i.e. introduce some fencing system - effectively turning the rolling release into a daily-release distro.
The implications of this complete infrastructure change would also be that if a packager screws up before the fence and notices after, the repo is (absent manual intervention) now broken for 24+n hours and if some more involved package update (python, haskell, the usual suspects) is due, that needs to start in time or allow for explicit fencing.
=> pick the fastest mirror you can get and use the rest of the bandwidth to watch online porn or some less useful traffic.
Offline
It would be more like a decentralized database where all the mirrors are updated automatically (push-based) rather than them pulling the files every few hours themselves.
Do you mean that the tier zero (archlinux.org) should push to tier 1 itself instead of tier 1 mirrors syncing from tier zero?
But again, this is not pacman's problem; pacman could download from all mirrors. The implementation shouldn't be hard: parallelize download requests across the whole mirrorlist instead of a single mirror.
If the mirrors were in almost perfect sync we could expect that to happen.
What if they set up some common time window of a few hours? During those hours, all mirrors would sync from their parent mirrors. That would bring all mirrors into sync.
Or the tier-zero mirror could send a message to tier 1 mirrors telling them to sync, and this message would then be forwarded to tier 2 mirrors. That could also cover the case where a packager screws up.
I am not a networking expert so I may be wrong on some parts.
Offline
Yes, I could imagine something like this.
Tier 0 knows about the tier 1s and would push a "dirty" bit so that they know they need to synchronize. I expect that distributed databases already offer a way to do just that.
And don't Debian or Fedora handle it that way, or similarly?
But this is out of scope and I wouldn't like it to have this managed structure.
Online
sorry for the late reply - got knocked out by food poisoning
guess the synchronization is the limiting factor
also sorry for bringing this up for the billionth time - wasn't aware this is already on the "likely never" list
as for providing a mirror myself: nah, unfortunately the 500 Mbit/s is downstream only; upstream I get merely 80 Mbit/s
Offline
The “likely never” part is not even because of technical impossibility. The reasons given above are obstacles, but not hard blockers. Yes, volunteer-run mirrors are not a CDN. With no state-consistency guarantee, implementing multi-server downloads is a risky and unpleasant task, facing much more complexity than it otherwise would.
Judging by the discussions carried out over the years, the main blocker is, however, no real benefit. Times change, conditions change, how that “no benefit” works also changes. But in the end it persists.
It seems like a great idea, and I fell into this trap myself a long, long time ago. There is a hope that downloading from multiple sources somehow magically makes things better. Be it parallel downloads, using BitTorrent, or multiple mirrors. The moment we dig into the idea, the beautiful picture falls apart. The proposal doesn’t bring great improvements, even on paper. The gains promised are not on par with the effort required either.
It has even been tested in practice. Pacman can use aria2 as XferCommand, and people did try that repeatedly. People even tried torrenting the packages. It’s 2025 and all such attempts are now of historical importance only. Despite high expectations, the real gains are negligible or none. Now pacman has parallel downloads from a single mirror, which most users should have enabled. But does it bring a multifold speed improvement? No, it basically smooths out delays induced by sending HTTP requests and transfer hiccups.
Just make a simple observation on something that is easy to miss: the fresher the packages you get, the fewer mirrors there are to serve you, and very likely they’re slower than your current one. If a parallel download on the faster server ends before the slower one, you end up waiting for the latter.
Of course anybody is welcome to offer and maintain code that brings this feature to pacman. Times change, conditions change, maybe this time it’s going to work. But routinely hearing the proposal for it to be introduced (by the “somebody” person) becomes boring and tedious, when many users in the community have been through this very talk “a billion times.”
Offline
I think pacman's parallel downloads from a single mirror are a REAL improvement if you have to download many packages and at least a few big ones: downloads start with the biggest, so you don't stall on them while the smaller packages are downloaded at the same time.
And BTW, why didn't torrent downloads of pacman packages via aria2 make sense? Were the torrents downloaded from the same server?
Offline
And BTW, why didn't torrent downloads of pacman packages via aria2 make sense? Were the torrents downloaded from the same server?
Torrents ended up using web seeds, so just using https was faster for the vast majority of packages.
Offline
And BTW, why didn't torrent downloads of pacman packages via aria2 make sense? Were the torrents downloaded from the same server?
What Allan wrote, but even without HTTP seeds there are issues with BitTorrent protocol itself.
With the exception of the initial, coördinated seeding, it doesn’t work well for distributing short-lived files. ISOs remain valid for up to a month, and see active downloads for the next few months.⁽¹⁾ That works fine. Packages on the other hand become almost completely useless within weeks after installation,⁽²⁾ sometimes even days.
While the mechanism remains unknown to me, even back in the early 2000s it was evident that BitTorrent doesn’t scale well for a large number of small torrents. This observation was repeated over the years.⁽³⁾ This is why we generally don’t see single, small files being shared as separate torrents.
Neither of the above implies there is zero advantage. But it’s nowhere near what one expects, when imagining BitTorrent-based distribution.
It’s further complicated by non-technical issues. The option would have to be opt-in for both seeders and downloaders, since it exposes their identity and machines to others. Many ISPs and VPS providers can be hostile towards the use of the protocol too, leading to unconditional termination of the service/subscription, possibly followed by punitive actions. If the user downloads, installs, configures and runs a BitTorrent client and downloads and opens torrent files or magnet links, they are nearly always aware of what they are doing. Switching an option in pacman would not be as clear, even if there were a huge warning in the config file.
____
⁽¹⁾ For example, here the October ’25 image was last downloaded a day ago, the November one just 7 hours ago, and the December one an hour ago.
⁽²⁾ Installation, not release, since it’s the installation that creates a new user seed.
⁽³⁾ This part is not a mere confirmation, but addresses a potential shortcoming of the earlier observations. The original client could only handle a single torrent and couldn’t resume downloads, which obviously impacted that metric. However, that remained true even after clients handling multiple torrents and capable of resuming arrived.
Offline
Just make a simple observation on something that is easy to miss: the fresher the packages you get, the fewer mirrors there are to serve you, and very likely they’re slower than your current one. If a parallel download on the faster server ends before the slower one, you end up waiting for the latter.
Technically this is solvable, since aria2 can download one file from multiple servers. After typing this, I realize it could be implemented as a custom download script set as pacman's XferCommand.
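For the curious, a sketch of how such a wrapper might look. This is untested, the mirror hostnames are placeholders, and it relies on aria2c accepting several URIs as alternative sources for the same file:

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/pacman-multifetch, wired up in pacman.conf as:
#   XferCommand = /usr/local/bin/pacman-multifetch %u %o
# The two mirror hostnames below are placeholders, not real mirrors.

# multifetch URL OUTFILE: rebuild the repo-relative path on each mirror
# and hand the whole URI list to aria2c as sources for one file.
multifetch() {
    url=$1 out=$2
    rel=${url#*/archlinux/}   # e.g. core/os/x86_64/foo-1-1-x86_64.pkg.tar.zst
    aria2c --allow-overwrite=true --continue -d / -o "$out" \
        "https://mirror1.example.org/archlinux/$rel" \
        "https://mirror2.example.org/archlinux/$rel"
}

# pacman invokes the script with %u and %o already substituted
if [ $# -eq 2 ]; then
    multifetch "$1" "$2"
fi
```

Whether aria2 actually splits one package across those mirrors (rather than just failing over) depends on file size and its `--split`/`--min-split-size` settings, and of course both mirrors must already carry the exact same package version, which brings back the sync problem discussed above.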
Last edited by JimmyZ (2026-04-07 05:41:00)
Offline