Hiya all,
AUR is getting DDoS'ed again...
I've been thinking: a way to mitigate these attacks is to make AUR's infrastructure more distributed.
I'm not talking about extreme distributed tech here, such as P2P or blockchain. Just a regular old-fashioned mirroring system that facilitates the following:
- Multiple mirrors that host copies of AUR's website and git repos
- Infrastructure that keeps aforementioned mirrors in sync
- A standard method for clients such as yay to discover and use a mirror of their choice
Such a system must, of course, be designed with security and resilience in mind. For example, it must provide facilities for authenticating mirrors, monitoring them, taking problematic ones offline, etc. But these are problems that have been solved before.
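For the mirror-discovery part, I imagine something in the spirit of pacman's mirrorlist would suffice; a purely hypothetical sketch (file name and mirror URL made up):
# /etc/aur-mirrorlist - hypothetical, modeled on pacman's mirrorlist format
Server = https://aur.archlinux.org
Server = https://aur-mirror.example.org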
So, are there any plans afoot for doing this?
Offline
Sakura:-
Mobo: MSI MAG X570S TORPEDO MAX // Processor: AMD Ryzen 9 5950X @4.9GHz // GFX: AMD Radeon RX 5700 XT // RAM: 32GB (4x 8GB) Corsair DDR4 (@ 3000MHz) // Storage: 1x 3TB HDD, 6x 1TB SSD, 2x 120GB SSD, 1x 275GB M2 SSD
Making lemonade from lemons since 2015.
Offline
I know about the GitHub mirror. And, if someone patches yay to make use of it, it will solve the problem of installing software when the main site is down.
But it's not a real distributed solution. And it doesn't support submitting packages either.
Offline
Probably because, if you really need the AUR available at all times, the simplest solution is to host the repo yourself. If you use yay or something similar, you can put the repo on another Linux machine and get updates from there - provided you set up the redirection to that machine as your main AUR repo, of course. If you don't want to have it on another computer on your LAN, you can create a container with docker or kubernetes [I think k8s is overkill for this tbh] and have it as your main AUR server: just add the github repo to the list of repos available in the container and use it as if it were the official AUR. That is probably easier to make happen, with less money and fewer resources involved. That's probably the main reason, but I'm not part of the Arch IT team, I'm just a regular plant user
Last edited by Succulent of your garden (2025-10-05 14:01:03)
str( @soyg ) == str( @potplant ) btw!
Offline
The existing Arch mirrors do not need to be trusted, as packages are all signed and all data transfers are one-way, from Arch's server to the mirrors. Submitting packages to AUR mirrors would require that each mirror either possess a copy of the AUR user database or be trusted to forward authentication requests. To me this limits AUR mirrors to being run by Arch themselves, though they would have to be hosted with different service providers: the current DDoS is being mitigated at the provider level, being far beyond Arch's ability to filter at the server level. This means increased costs for Arch and an increased workload dealing with multiple service providers.
Offline
But it's not a real distributed solution.
Yeah, but that's not a virtue in and of itself.
it will solve the problem of installing software when the main site is down
The point is to maintain the service, and the single redundancy largely achieves that - the only problems left are client-side access balancing and probably database queries.
For the latter:
curl https://api.github.com/repos/archlinux/aur/branches
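And for the package files themselves, the mirror keeps one branch per package base as far as I can tell, so a PKGBUILD can be pulled directly, e.g.:
curl -s https://raw.githubusercontent.com/archlinux/aur/yay/PKGBUILD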
Online
fun fact: for some reason I keep having issues with github specifically - as, for some reason still unknown, its dns resolution fails from time to time - so for me this could end up as "main aur down - but backup github unreachable"
I'm not sure if this is a me / my network issue - but it persistently happens only with github
as for my dns: I have cloudflare as primary with google as secondary - that should keep me covered
but it still could be github - it only has an ipv4 address, with an extremely low ttl of just one minute - so it could be just some cache misses I encounter rather frequently
Offline
github doesn't have an AAAA DNS entry, so if you (temporarily) require IPv6 you might not get there w/o a tunnel
dig -t AAAA github.com    # confirm there's no AAAA (IPv6) record
dig -t A github.com       # the A (IPv4) record - note the short TTL
ip a; ip r                # check local addresses and routes
resolvectl status         # check which DNS servers are actually in use
or it's late and you keep thinking of Pizza, and while genuinely trying to type github.com you always end up at grubhub.com, which you've blocked because your nightly Pizza feasts were getting out of control
Online
A few notes:
AUR's website provides an RPC interface for searching and querying packages. It also provides the git repositories and HTTP access to the packages (in .tar.gz format if I remember correctly, though I can't really check since AUR is still offline).
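For reference, the RPC can normally be queried like this (the standard v5 interface):
curl 'https://aur.archlinux.org/rpc/?v=5&type=info&arg[]=yay'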
According to its manual page, yay(8), yay can be configured with alternative URLs for both the website and the RPC interface.
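So, assuming a hypothetical mirror at aur-mirror.example.org, pointing yay at it should (if I read the man page right) be a matter of:
yay --aururl https://aur-mirror.example.org --save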
In order for tools like yay to work with it, a mirror will have to replicate both the RPC interface and the part of the website that supplies the packages, at the very least. And it must do this independently of the main AUR website, which may be down. As far as I know, no code for doing this currently exists.
Arch Linux does provide the source code for AUR's website. This is certainly helpful. However, building and installing it won't magically create a mirror of AUR, just an empty clone of the website. This is because the site's data will be missing (and Arch admins won't give it to us, since it contains all users' personal info, security credentials, etc.) However, one might be able to create automation that populates the mirror website with enough data from the GitHub mirror to make it functional enough to work with yay.
Or perhaps just writing some code that replicates the RPC and package access parts (using the GitHub mirror as its source) would be easier.
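As a rough sketch of that idea, assuming the mirror carries the usual .SRCINFO metadata file on each package's branch:
pkg=yay    # hypothetical query
curl -s "https://raw.githubusercontent.com/archlinux/aur/$pkg/.SRCINFO" |
  grep -E '^[[:space:]]*(pkgdesc|pkgver|pkgrel|depends) ='
A real service would cache this and wrap it in aurweb's JSON response format, but the raw data is all there.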
It would certainly be useful to take a look at yay's source code, and/or analyse its network traffic with Wireshark, in order to understand which parts of AUR's website the tool actually uses.
P.S. Or perhaps just asking its developer might be easier :-P
Last edited by plp (2025-10-05 22:46:22)
Offline
BTW, it looks like people have been asking for yay to support the GitHub mirror directly:
https://github.com/Jguer/yay/issues/2660
And also to support local repos other than AUR (which is essentially the same as my "mirror" idea):
Offline
This isn’t the first time somebody has suggested mirroring the AUR the way the official repos are mirrored. As in the other cases, it feels like there is a perception mismatch regarding what AUR is.
The person asking thinks of it as a bunch of PKGBUILDs. While not straightforward, and with its own issues (see below), it’s not a crazy idea. The replying regulars, however, think of AUR as a service and a community. And that can’t be reasonably replicated. It’s as if anybody wanted to make mirrors of this forum. Sure, you can copy existing posts,⁽¹⁾ but posters don’t come here just to read content.
As for the aforementioned issues:
Consistency. Repos generally keep strict temporal order. Unless somebody’s finger slipped, you never get a package that depends on another package (or a version of it) not already in the db. For an ultra-short time it’s possible that a mirror gives incompatible dbs from different repos, but that too resolves almost instantly. AUR has no such mechanisms: it relies on the publisher offering PKGBUILDs at the right time, in the right order, updating them in clusters, possibly in relation to entries from other users or the official repos. A snapshot may be taken in the in-between state, and you’ll be stuck with that until the next snapshot is distributed the next day.
Authenticity. There is a huge warning about not trusting PKGBUILDs, but the purpose of that warning is making people aware that they take programs from random people. There is nothing wrong in trusting some specific AUR user, and a TLS connection to AUR guarantees that you are obtaining a PKGBUILD from the user you trusted.⁽²⁾ This can no longer be provided with a distributed approach, unless each user is required to digitally sign all their content, the entire snapshot is authenticated against some trusted key, or each PKGBUILD carries its own AUR-provided signature (see the sketch below).
Neither is a hard blocker, but both are notable obstacles. In particular if one wants to think of “distributed AUR” as something more than a stop-gap solution for intermittent⁽³⁾ outages.
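To illustrate the first of those schemes: git already ships the needed plumbing, assuming a maintainer signed their commits and you have their key (package name hypothetical):
git clone https://aur.archlinux.org/somepkg.git
git -C somepkg verify-commit HEAD    # fails unless the signing key is in your gpg keyring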
____
⁽¹⁾ Which in fact does exist, with some degree of completeness, on the Wayback Machine.
⁽²⁾ Of course the standard risks like a server or account take-over scenarios apply, but they apply equally to upstreams.
⁽³⁾ Under a day of downtime in the past week, two instances of slightly reduced availability the week before that, and one short outage in the two weeks before that.
Paperclips in avatars?
NIST on password policies (PDF) — see §3.1.1.2
Sometimes I seem a bit harsh — don’t get offended too easily!
Offline
Surprising to see the DDoS is still going on. Can't even access the domain - it's just dead.
Offline
Now I can access the AUR ^^
str( @soyg ) == str( @potplant ) btw!
Offline
There already is a separate thread where people share their experiences connecting to Arch services.
Paperclips in avatars?
NIST on password policies (PDF) — see §3.1.1.2
Sometimes I seem a bit harsh — don’t get offended too easily!
Offline
The person asking thinks of it as a bunch of PKGBUILDs. While not straightforward, and with its own issues (see below), it’s not a crazy idea. The replying regulars, however, think of AUR as a service and a community.
Being an AUR contributor, I'm aware of that.
And that can’t be reasonably replicated.
There's no reason we couldn't create a distributed network of PKGBUILD repos and communities. I suppose we'd need the following:
A well-crafted web service, offering everything aurweb does (submissions, hosting, comments, RPC service etc.) but still simple enough to be easily deployed by anyone (aurweb is too complex, requiring several Docker images). To be run by anyone who wishes to host his/her own PKGBUILD repository and community.
A set of mirrors that aggregate all the independent repositories/communities, presenting a unified web and RPC interface to end-users. To be run by volunteers wishing to serve the ecosystem, and by sysadmins wishing to host a mirror for their organization.
A distributed protocol that specifies how everything comes together and enforces security. This protocol will definitely need to satisfy all the consistency and authenticity points you have raised, plus (I'm sure) many more.
Of course, creating something like this would be a monumental undertaking.
Perhaps, for now, we could just focus on a little "utility service" that replicates the RPC and PKGBUILD functionality of aurweb, pulling its data from the GitHub mirror. You know, something that's sufficient for tools like yay to remain functional in case of an outage. This would be much simpler, no?
Last edited by plp (2025-10-06 19:21:02)
Offline
A well-crafted web service, offering everything aurweb does (submissions, hosting, comments, RPC service etc.) but still simple enough to be easily deployed by anyone (aurweb is too complex, requiring several Docker images). To be run by anyone who wishes to host his/her own PKGBUILD repository and community.
Or we can just agree* on a source forge platform, such as GitLab, Codeberg, or Gitea. No need to reinvent the wheel.
These platforms are much more modern and mature than aurweb will ever be. They offer all required features. And people are already familiar with how to deploy them.
Anyway, I'm probably just mumbling at this stage.
(*) By "agree" I mean "agree on which source forge software to use", not "force everyone to host their code on GitLab or Codeberg". That wouldn't be very distributed...
Last edited by plp (2025-10-08 16:54:45)
Offline
You know, something that's sufficient for tools like yay to remain functional in case of an outage
Keep in mind that AUR helpers are not supported by Arch Linux and are not necessary to use the AUR.
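For reference, the helper-free workflow is just:
git clone https://aur.archlinux.org/yay.git    # yay purely as an example package
cd yay && makepkg -sirc    # -s pulls repo deps, -i installs, -r removes build deps, -c cleans up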
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough ?
Try clean chroot manager by graysky
Offline
not necessary to use the AUR.
maybe - but a helper makes it quite a lot easier (keyword: dependency management)
Offline
Keep in mind that AUR helpers are not supported by Arch Linux and are not necessary to use the AUR.
To be clear, I fully understand that the AUR ecosystem is community-driven, and I don't expect upstream Arch developers to do anything about this. I just want to discuss the downstream community's options on how to mitigate the problems caused by the DDoS, and how to make the platform more distributed.
Once we agree on what needs to be done, I'll be glad to help with the coding.
maybe - but a helper makes it quite a lot easier (keyword: dependency management)
Yes!
Last edited by plp (Yesterday 15:14:06)
Offline
Short term: implement a GitHub backend for your favorite AUR helper. The mirror will likely stay, because "why not" - and the "just" in "just agree* on a source forge platform" does a lot of heavy lifting.
Even if such an approach is taken, and even if there's ultimately some agreement, it will likely not be rushed under the pressure of the current attacks - and then it still needs to be executed.
As for "a little "utility service" that replicates the RPC and PKGBUILD functionality of aurweb": why would that not be subject to the same DDoS attacks?
Online
As for "a little "utility service" that replicates the RPC and PKGBUILD functionality of aurweb": why would that not be subject to the same DDoS attacks?
Several people will be running instances of it. Administrators and power users will even be able to run it locally, using their local copy of the GitHub repo as a backend. Such a setup would be completely offline and impervious to attacks.
Also, a utility service should allow us to use different AUR helpers without any (significant) modifications.
But perhaps modifying yay to use the GitHub repo directly might be easier. Though I'm not sure if the search functionality would be possible without some kind of intermediary index (in which case some form of background "service" will still be necessary).
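Note that a full clone of the mirror would be huge (one branch per package base), but grabbing a single package locally is cheap, if I'm reading the mirror's layout right:
git clone --depth 1 --branch yay https://github.com/archlinux/aur.git yay
The branch listing from the API call quoted earlier could then serve as the basis for a search index.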
Last edited by plp (Yesterday 19:36:15)
Offline
plp: I was not opposing you in the post from Monday. It’s about a general pattern that repeats in these talks.
But given you’re an AUR contributor and assuming you imply that matters, I do not see how your idea works.
Are mirrors read-only snapshots of PKGBUILDs? I covered this already, so I believe this is not your point.
Are mirrors a loose collection of independent repositories? Trivially implementable. How do we know this? Because it’s the current situation. PKGBUILDs are available from other services too, not only from AUR. It’s just that the automation tools are AUR-centered and unable to deal with this kind of distributed system.
Are mirrors a true distributed system, with write permissions etc.? Technically possible, of course, but it’s an entire project on a scale only slightly smaller than Arch Linux itself. And a lot of questions remain, like authentication across untrusted nodes, privacy concerns (both actual and legal), the ability to maintain consistency with deletions, legal responsibility etc. This isn’t a new attempt either, and we have some solid experience with such experiments: IRC was originally devised, and worked, as federated networks. 35 years later, networks either are not federated (like Libera) or have decentralized control (like EFnet).
Server donation for mirrors remaining under Arch’s control? The way e.g. Libera does it. Or simply money, so devops can deploy more addresses over multiple datacenters. Great idea and it’s already implemented through the donation system.
Paperclips in avatars?
NIST on password policies (PDF) — see §3.1.1.2
Sometimes I seem a bit harsh — don’t get offended too easily!
Offline
Lone_Wolf wrote: not necessary to use the AUR.
maybe - but a helper makes it quite a lot easier (keyword: dependency management)
A custom local repository that is added to pacman.conf solves that.
$ pacman -Qm | wc -l
0
$ pacman -Sl lonewolf | wc -l
86
Everything in the lonewolf repo is built by myself, with makepkg -rs or a clean chroot build.
Most of it is AUR packages; a few are local versions of packages from extra/core.
No need for aur helpers.
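Setting one up is simple enough; a minimal sketch, with the paths and repo name as examples only:
repo-add /srv/pkgs/lonewolf.db.tar.gz /srv/pkgs/*.pkg.tar.zst    # create/refresh the repo database
# then register it in /etc/pacman.conf:
#   [lonewolf]
#   SigLevel = Optional TrustAll
#   Server = file:///srv/pkgs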
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough ?
Try clean chroot manager by graysky
Offline
cryptearth wrote: Lone_Wolf wrote: not necessary to use the AUR.
maybe - but a helper makes it quite a lot easier (keyword: dependency management)
A custom local repository that is added to pacman.conf solves that.
and how does a local repo solve recursive aur dependencies?
some packages depend on others that are themselves only available in the aur - I don't see how a local repo, which is merely a cache of built packages and which also has to be kept up to date, helps here all by itself? unless you hunt down the dependencies yourself and keep everything up to date, a local repo is just a one-shot of the moment when you built a package
don't get me wrong - I don't say "one NEEDs an aur helper" - but the effort of managing a local repo and hunting for dependencies yourself is exactly why aur helpers exist - why not use them? isn't that what the linux philosophy is about? a tool should do one job and do that one job well
along with the point mentioned above: some (too few) devs go the way of properly signing their packages - and as I don't want to deal with reading PKGBUILDs, I don't want to deal with all that pgp nonsense either - yes, I'm fully aware that it is my duty to read them and make my own decision about which keys to trust - but given that pgp isn't used properly in the web-of-trust way it was once intended (c'mon everybody - when was the last time you were at a proper key party and physically checked fingerprints against a person's id?) the only thing a signature provides is merely attaching some e-mail address to a hash - I don't want to deal with that nonsense - if I go the windows-noob route and just use unknown untrusted software from god knows who, then please let me get the full experience and just not care at all about all that crap
these are just two points why I see an aur helper as a somewhat "you either use one or don't use the aur at all" kind of tool
and what's the difference to other distros? well, as I only know debian/ubuntu and suse: over there you add random repos instead - but it's the same in green
Offline
I don't see how a local repo, which is merely a cache of built packages and…
Mass deployment - it's essentially what the chaotic-aur repo is doing for you as well.
I don't want to deal with reading PKGBUILDs
AUR helpers do not exempt you from that, and it's also not related to any "pgp nonsense" - not even superficially checking what a PKGBUILD does is AUR-roulette
when was the last time you were at a proper key party and physically checked fingerprints against a person's id
The question isn't whether you have verified someone's ID but whether you decided to trust your OS to a particular signature or not.
You implicitly trust every key in the archlinux keyring (because they're building your OS), which is a decent start to a WOT - then there's multivectoral authentication: if a gazillion websites tell you that this is Linus Torvalds' signature, then there's an outstanding chance that it is in fact his.
it's the same in green
I'm not sure how much this is understood even in the Lowlands
https://www.merriam-webster.com/diction … difference
On the matter: it's perfectly ok to use any preferred AUR helper, unless you use that as an excuse not to read https://wiki.archlinux.org/title/Arch_User_Repository
They're an excuse to be lazy, not to be uninformed.
Online