I found that `pacman -Fy' and `pacman -Sy' share the same lock, `/var/lib/pacman/db.lck', even though they operate on different databases (core.db vs core.files). Therefore the files database and the package database cannot be updated simultaneously. Is this behaviour intended, or is it a bug?
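For context, libalpm's database lock is a single exclusive lock file: whichever process creates it first wins, and any other process fails immediately regardless of which database it wants to touch. A minimal Python sketch of that scheme (a hypothetical illustration, not libalpm's actual code; the path is made up):

```python
import os
import tempfile

# Hypothetical stand-in for /var/lib/pacman/db.lck
lock_path = os.path.join(tempfile.mkdtemp(), "db.lck")

def acquire_lock(path):
    """Try to take the lock by creating the file exclusively.
    Returns True on success, False if another process already holds it."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock(path):
    os.remove(path)

print(acquire_lock(lock_path))  # first taker succeeds: True
print(acquire_lock(lock_path))  # simulated second pacman fails: False
release_lock(lock_path)
```

Because there is one lock path for the whole local database directory, a concurrent `-Fy` and `-Sy` collide even though they write to different files.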
Offline
I would say: neither. It seems like a mismatch between the abstraction provided to the user through the `pacman` CLI binary and the concrete, underlying implementation in libalpm. Both pacman_files (option -F) and pacman_sync (option -S) are doing the same thing (-Fy, -Sy), just with different database names. The operations are conceptually the same. Since libalpm doesn’t support per-db lock granularity, the same lock is used for -Fy and -Sy in the very same manner as it is used for e.g. core and extra.
If you consider that to be a serious issue and can support such a motion with arguments, a feature request in the bugtracker may be opened for that purpose for the pacman project, requesting per-database lock granularity. That would, however, require redefining fragments of the current libalpm behavior and will make cleaning stale locks more complicated.
Paperclips in avatars? | Sometimes I seem a bit harsh — don’t get offended too easily!
Offline
mpan wrote: I would say: neither. Since libalpm doesn't support per-db lock granularity, the same lock is used for -Fy and -Sy in the very same manner as it is used for e.g. core and extra.
Thanks for your reply! I don't think this behaviour bothers me, since pacman_files is a relatively infrequently-used operation, and few would update files db every week or so.
Offline
Are you running the -Fy on a timer and coincidentally tried to update the system at the exact same moment the timer happened to be running? If not, what other use case could there possibly be for running both of these at the same time?
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
Are you running the -Fy on a timer and coincidentally tried to update the system at the exact same moment the timer happened to be running? If not, what other use case could there possibly be for running both of these at the same time?
With a slow network, `pacman -Fy' takes minutes to finish, so I execute `pacman -Sy' at the same time to make better use of the time.
Offline
If your network is the bottleneck, I'd argue that you gain nothing by parallelizing downloads.
Inofficial first vice president of the Rust Evangelism Strike Force
Offline
If the network is the bottleneck, parallel processes won't help.
Also:
I execute `pacman -Sy'
No you won't.
Because this does nothing but set you up for a partial update.
The files database is not necessary for regular use; you can sporadically update it and meanwhile do something completely different that doesn't require network activity at all (if in doubt, play hcraft or bring out the trash).
Edit: curses.
Last edited by seth (2021-12-02 08:23:19)
Online
Without harping on, you are aware that running `pacman -Sy` can be dangerous as it increases the probability of system breakage due to partial upgrades? See here
edit: ninja'd by seth. Again!
Last edited by skunktrader (2021-12-02 08:26:08)
Offline
ZeppLu:
What is the cause of the operation taking that long? If it really is a 1 Mb/s connection, as indicated by the time `pacman -Fy` takes, spreading downloads over time is the way to go; doing the opposite, as noted by others, is counterproductive. For that you have two approaches, both of which can be applied at the same time:
Use `checkupdates -d`, from the community/pacman-contrib package, to download the actual package files while the connection is otherwise unused and you expect to do a system update soon. Upon the next `pacman -Syu` invocation the cached files will be reused.
Reduce the frequency at which you invoke `pacman -Fy`. The files list being outdated does not affect your system in any way: even if it is months old, the worst that can happen is `-F` giving you a stale answer. The consequences are minor, so invoke `-Fy` only as needed.
But if it's not your network's limitation, then perhaps your mirror is too slow? Choose a faster one, either using the mirror list generator or community/reflector.
Last edited by mpan (2021-12-02 12:48:29)
Paperclips in avatars? | Sometimes I seem a bit harsh — don’t get offended too easily!
Offline
... The consequences are minor, so invoke `-Fy` only as needed.
'Dunno about that. It depends on whether you consider looking like a fool on these forums a minor consequence; this has happened to me several times when I used pacman -F as a source of information to answer a post. Apparently, for me the "as needed" interval is generally shortly after my ridiculously outdated files database has caused me to say something completely incorrect on these forums.
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
Thanks for all of your kind help! Let me answer some questions:
But if it's not your network's limitation, then perhaps your mirror is too slow? Choose a faster one.
Thanks, but I'm already using the fastest mirror located in the same city. Normally it's quite fast, but at the time I wrote this post it just slowed down perhaps due to some network congestion.
If your network is the bottleneck, I'd argue that you gain nothing by parallelizing downloads.
If the network is the bottleneck, parallel processes won't help.
I believe that in TCP where connections share bandwidth fairly, the more connections you have, the more bandwidth you gain.
Reduce the frequency at which you invoke `pacman -Fy`. The files list being outdated does not affect your system in any way.
Yes, that's exactly my practice. I'm using `pacman -Fy' just because this is a newly installed system.
Because this does nothing but set you up for a partial update.
Without harping on, you are aware that running `pacman -Sy` can be dangerous as it increases the probability of system breakage due to partial upgrades?
Thanks for pointing that out; I wasn't aware of partial updates until now. But when I encountered the problem, I was actually using `pacman -S <package>'. So why list `pacman -Sy' as an example in this post? Well, for symmetry with `pacman -Fy', I think...
Offline
I believe that in TCP where connections share bandwidth fairly, the more connections you have, the more bandwidth you gain.
Ah ... no.
The more connections you have, the more bandwidth you use. You still have the exact same finite bandwidth limit.
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
ZeppLu wrote:I believe that in TCP where connections share bandwidth fairly, the more connections you have, the more bandwidth you gain.
Ah ... no.
The more connections you have, the more bandwidth you use. You still have the exact same finite bandwidth limit.
Consider a situation where 5 persons share a 10M network and each of them has 1 connection; then it's a 2M network for you.
If you open another connection, the 10M network is shared fairly by 6 connections, and you effectively get 2*10/6 = 3.33M.
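Under the assumption of perfectly fair per-connection sharing, the arithmetic can be checked directly (a toy model, not how real TCP congestion control behaves in practice):

```python
# Toy model: total bandwidth is split equally across all active connections.
def my_share(total_mbit, my_connections, other_connections):
    """Bandwidth I receive if the link is divided fairly per connection."""
    per_connection = total_mbit / (my_connections + other_connections)
    return my_connections * per_connection

# 5 users, 1 connection each: everyone gets 10/5 = 2 Mbit.
print(my_share(10, 1, 4))  # 2.0

# I open a second connection: 6 connections total, I own 2 of them.
print(my_share(10, 2, 4))  # ≈ 3.33
```

The model reproduces the 2M -> 3.33M figure, but only under the fairness assumption being debated here.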
Offline
ZeppLu wrote: If you open another connection, then the 10M network is shared by 6 connections fairly, and you effectively get 2*10/6=3.33M network.
And if you open 10000 connections then you can get almost all of the bandwidth for yourself, huh? I'm going to cite this post as a microcosmic example the next time someone asks me what's wrong with society.
Even if that were the case, there would be no point in launching irrelevant operations to hog bandwidth, because you would reduce the bandwidth for your operation of interest (2M/s -> 1.66M/s). And what kind of network admin would implement QoS in such a way that all separate connections, even from the same user, receive the same bandwidth? Who has that much faith in humanity?
The number of wtfs/post is too high in this thread.
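The parenthetical numbers follow from the same toy fair-share model: adding a second, unrelated transfer shrinks the slice that the transfer you actually care about receives.

```python
# Toy fair-share model: each connection gets an equal slice of the link.
total_mbit = 10
users = 5  # one connection each to start

before = total_mbit / users        # my only transfer: 2.0 Mbit
after = total_mbit / (users + 1)   # after I add an unrelated transfer: ~1.67 Mbit

print(round(before, 2), round(after, 2))  # 2.0 1.67
```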
My Arch Linux Stuff • Forum Etiquette • Community Ethos - Arch is not for everyone
Offline
And if you open 10000 connections then you can get almost all of the bandwidth for yourself, huh?
In reality, nobody has 10000 files to download at the same time. Even if someone wanted to, no website would allow that many connections.
Even if that were the case, there would be no point in launching irrelevant operations to hog bandwidth because you would reduce the bandwidth for your operation of interest (2M/s -> 1.66M/s).
I would say both operations are of my interest. So it's 2M/s -> 3.33M/s.
And what kind of network admin would implement QoS in such a way that all separate connections, even from the same user, receive the same bandwidth.
Sorry, I don't know much about network administration, so correct me if needed. In my country a family usually has fixed-bandwidth broadband, like 100M. Therefore, when I use more connections, I'm competing with my family members for bandwidth on the LAN. Is any kind of QoS needed in such a small LAN consisting of only a few people?
Offline
Sorry, I don't know much about network administration, so correct me if needed. In my country a family usually has fixed-bandwidth broadband, like 100M. Therefore, when I use more connections, I'm competing with my family members for bandwidth on the LAN. Is any kind of QoS needed in such a small LAN consisting of only a few people?
Ah. When you mentioned sharing a network with a connection per user, I imagined a shared connection between different residences (e.g. student halls or houses in a rural area). In those cases a router is sometimes used to equally allot available bandwidth to each residence, which ensures that each residence receives at least <max bandwidth>/<number of residences>, but more if some residences are not using the bandwidth. However, the bandwidth is allotted per residence and not per connection to avoid the problem that I described.
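The difference between the two policies can be sketched numerically (a simplified model with hypothetical numbers; real QoS schedulers are more involved):

```python
# Simplified model: 10 Mbit link, 5 residences; residence 0 opens extra connections.
total = 10.0
residences = 5
my_connections = 3                    # everyone else keeps 1 connection each
other_connections = residences - 1

# Per-connection fairness: every connection gets an equal slice,
# so opening more connections buys residence 0 a bigger share.
per_conn_share = total / (my_connections + other_connections) * my_connections

# Per-residence fairness: each residence gets an equal slice,
# regardless of how many connections it opens.
per_residence_share = total / residences

print(round(per_conn_share, 2))       # 4.29 -- more connections, more bandwidth
print(round(per_residence_share, 2))  # 2.0  -- extra connections change nothing
```

Allotting per residence rather than per connection is exactly what removes the incentive to open extra connections.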
My Arch Linux Stuff • Forum Etiquette • Community Ethos - Arch is not for everyone
Offline
Pacman uses parallel downloads anyway, and on the other hand, browsers frequently create multiple connections for one webpage as well.
https://en.wikipedia.org/wiki/Arms_race
And if those are wireless connections, the connection quality will have a far greater impact than the number of connections.
If you wanted to give yourself an edge for sure, you'd actually implement a QoS rule that prefers your IP.
Online