
#26 2018-03-25 12:48:28

eschwartz
Trusted User/Bug Wrangler
Registered: 2014-08-08
Posts: 2,853

Re: pacman hooks timing

On top of which, IIRC, Allan tends to reject things automatically if they follow the pattern "here, do this because it partially fixes something. I know it is still horribly broken, but something is better than nothing".

Proper fixes exist: they're called btrfs and snapper. Papering over the problem, and only under certain conditions, doesn't really get us anywhere.


Managing AUR repos The Right Way -- aurpublish (now a standalone tool)


#27 2018-03-25 14:06:40

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

Not everyone uses btrfs. In fact, most don't. (I am not even sure btrfs is considered stable yet.)

And not everything is perfect. Pacman (or any other package) always keeps improving, so at present it is partially complete with respect to the future.

The Spectre/Meltdown patches are not perfect either, but the kernel developers did not reject them saying "fix everything first, or we won't fix anything at all".

And yes, I am not pressuring anyone to do anything.

I am just putting forward ideas, with due respect to all developers, their priorities, and their willingness.

Every time someone suggests something, it does not imply "here, do this". :)

PS: I disagree that my suggestions are "horribly broken". But yes, you may not agree with me, and I am completely fine with that. :)

Last edited by amish (2018-03-25 14:22:12)


Forum signature: I discuss. I put my thoughts strongly. But I definitely respect all developers and time they put in.


#28 2018-03-25 15:16:32

eschwartz
Trusted User/Bug Wrangler
Registered: 2014-08-08
Posts: 2,853

Re: pacman hooks timing

I don't really think pacman crashing during an update is quite the same as fixing meltdown/spectre...

This has a perfectly functional solution, so maybe people *should* use btrfs. Or... any filesystem that supports snapshotting. Quite frankly, "help my computer crashed during an update" is something that fundamentally seems to be more of a filesystem thing than a pacman thing anyway. Because how on earth do you properly solve this without any of the tools (snapshotting filesystems) you need?

Any pacman solution would just be btrfs bindings that created snapshots for you. And at that point, simply use the existing hooks. They're packaged in [community]/snap-pac.
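For reference, snap-pac works through pacman's hook mechanism. A pre-transaction hook in alpm-hooks(5) format looks roughly like this — an illustrative sketch only; the file name and the exact snapper invocation are assumptions, not snap-pac's actual shipped files:

```ini
# 05-hypothetical-pre-snapshot.hook -- illustrative, not snap-pac's real file
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Creating pre-transaction snapper snapshot...
When = PreTransaction
Exec = /usr/bin/snapper --no-dbus create --type pre --print-number
AbortOnFail
```

With a matching PostTransaction hook, every pacman transaction is bracketed by snapshots that the filesystem can roll back to after a crash.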

amish wrote:

PS: I disagree that my suggestions are "horribly broken". But yes, you may not agree with me, and I am completely fine with that. :)

That's fine, I never suggested they are either.

What I said is that pacman is horribly broken with respect to atomicity of updates in a system crash. And your solution does not fix it. And solutions that don't actually fix anything are just useless code churn that fails to achieve useful results.

Last edited by eschwartz (2018-03-25 15:20:32)



#29 2018-03-25 16:56:47

Trilby
Inspector Parrot
Registered: 2011-11-29
Posts: 22,380

Re: pacman hooks timing

amish wrote:

The Spectre/Meltdown patches are not perfect either, but the kernel developers did not reject them saying "fix everything first, or we won't fix anything at all".

No, a patch doesn't need to fix everything, but it does need to be a proper fix for the one little part it intends to fix. You are right that the meltdown/spectre issues are not fully fixed, but they are not a singular problem, rather a collection of related issues. Some of those individual issues have been properly fixed with changes to the kernel.

So the proper comparison is not "fix everything all at once or fix nothing" but rather "actually fix something properly or don't add code".

This is not to weigh in on whether or not your suggestion would be a proper fix - I have no input on that. I'd only highlight that "a fix for one thing need not fix everything" is not a valid defense against the claim that it's just not a good fix in the first place.

Last edited by Trilby (2018-03-25 16:58:53)


"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" -  Richard Stallman


#30 2018-03-25 17:51:33

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

Umm, I am not proposing a FIX. I am proposing a change in the algorithm (the process flow).

Sometimes, when there is no complete fix, one might want to pick an algorithm that is less destructive, or one that makes it less likely for something to go horribly wrong.

The current pacman algorithm *invites bad luck*: a power failure while the untar is in progress will *surely* leave one or more files missing from the system.

My algorithm is less destructive and more likely to survive a power failure.

Instead of deleting all files of a package first and then adding them all again, I am proposing an overwrite* (untar over the existing files, which also adds any new files) followed by deletion of the old package's leftover files (if any).

Since file names/paths remain the same in most update cases (especially minor version updates or pkgrel bumps), there will be no (direct) deletion, so at least one version of each file will remain.

PS: *overwrite here means copy-to-temp and rename, i.e. atomic replacement of a file: in case of power failure, either the old version or the new version of the file will be present, but the file will not be lost completely (as it would be in the current pacman process).

That's all I have to say on this. :)



#31 2018-03-25 20:01:42

eschwartz
Trusted User/Bug Wrangler
Registered: 2014-08-08
Posts: 2,853

Re: pacman hooks timing

amish wrote:

Umm, I am not proposing a FIX. I am proposing a change in the algorithm (the process flow).

Sometimes, when there is no complete fix, one might want to pick an algorithm that is less destructive, or one that makes it less likely for something to go horribly wrong.

But... less destructive is still destructive, at which point I still have zero faith in the integrity of my system. So how have your proposed changes helped me if I still have to revert to a btrfs snapshot and/or replay all updates properly from an archiso?

How does changing an algorithm to another still-broken algorithm fix anything? The root of the problem is still there, *and* the effects of the problem are still there in many, many cases. You have not systematically fixed anything; instead you have basically said "with my fix, you can now flip a coin and see if you're lucky. And if you're lucky, you'll just end up with mismatched package files from old versions that happen to be compatible with each other, so at least everything still somehow runs."

I would not really call that "fixed" in any way, shape, or form. Many people would go so far as to say your changes are actively bad, as they result in less trust in the system than a comparatively honest crash with missing files.

amish wrote:

PS: *overwrite here means copy-to-temp and rename, i.e. atomic replacement of a file: in case of power failure, either the old version or the new version of the file will be present, but the file will not be lost completely (as it would be in the current pacman process).

Just want to point out that, regardless of anyone's opinion on whether your suggestions make sense to implement, this will not even work the way you expect it to. The whole point of atomicity via renaming is that you can transfer an inode for an existing file a lot faster than writing a new file to disk... but using /tmp means using a tmpfs that is (usually) backed by RAM, and therefore crossing filesystems, which incurs the full write penalty again. To do this properly you'd have to reserve a directory backed by the same filesystem as / and guarantee cleanup yourself rather than relying on $TMP (ignoring for a moment the possibility that users have partitioned /usr, /var, etc. onto different physical devices for any of several reasons).
One might say this is a good idea anyway, as we have repo packages that are gigabytes in size, which could clobber your RAM in a way you didn't expect...

(And yes, you've saved the decompression penalty, but that still doesn't equal actual atomicity, so it is rather missing the point.)
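The cross-filesystem constraint can be checked up front: rename(2) fails with EXDEV when source and destination sit on different mounts, and whether two paths share a filesystem is visible in the st_dev field of stat(2). A minimal sketch (the function name is illustrative, not pacman code):

```c
/* Sketch: rename(2) is atomic only within one filesystem; across mount
 * points it fails with EXDEV. Whether two paths share a filesystem can
 * be checked up front by comparing st_dev from stat(2). */
#include <sys/stat.h>

/* Return 1 if both paths are on the same filesystem, 0 if not, -1 on error. */
int same_filesystem(const char *a, const char *b)
{
    struct stat sa, sb;
    if (stat(a, &sa) != 0 || stat(b, &sb) != 0)
        return -1;
    return sa.st_dev == sb.st_dev ? 1 : 0;
}
```

A staging directory for atomic replacement would have to pass this check against every target path, which is exactly why a RAM-backed $TMP does not work.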

Last edited by eschwartz (2018-03-25 20:12:31)



#32 2018-03-26 01:03:01

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

You took copy-to-temp literally. :)

But roughly, this is how one can do it.

If the file to replace is /path/to/file, then:

1) untar to /path/to/file.$$$$$$ # $$$$$$ is a unique name
2) mv /path/to/file.$$$$$$ /path/to/file # mv is atomic

Of course this uses external commands, but we can use the equivalent syscalls in programs.
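The equivalent syscall sequence can be sketched in C — a minimal illustration assuming POSIX; `atomic_replace` is a hypothetical name, not anything from pacman:

```c
/* Minimal sketch of atomic file replacement, assuming POSIX. The temp
 * file lives in the SAME directory as the target so that rename(2)
 * stays within one filesystem and is therefore atomic. (A fully durable
 * version would also fsync the containing directory.) */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Replace `path` with `data`: at any instant the path holds either the
 * complete old version or the complete new version, never a partial file. */
int atomic_replace(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.XXXXXX", path); /* temp beside target */

    int fd = mkstemp(tmp);                 /* create a unique temp file */
    if (fd < 0)
        return -1;

    if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    if (rename(tmp, path) != 0) {          /* the atomic step */
        unlink(tmp);
        return -1;
    }
    return 0;
}
```

A crash before the rename leaves the old file untouched (plus a stray temp file); a crash after it leaves the complete new file.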

Last edited by amish (2018-03-26 01:04:11)



#33 2018-03-26 01:18:16

Allan
Member
From: Brisbane, AU
Registered: 2007-06-09
Posts: 10,909

Re: pacman hooks timing

Ok - do you do that for an entire package? Extract all files to a temporary location, then move them all at once? An entire update? How much space do we need to reserve?

Our largest package is 3464.30MiB.


#34 2018-03-26 01:46:59

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

You need space equivalent to the size of the one file (not even the whole .pkg.xz) currently being extracted. Definitely not in GBs (unless there exists such a file!)

How does pacman untar files currently? I believe either by calling bsdtar or tar, or by using libarchive?

The same can possibly continue - it should be able to take care of atomic file replacements.

PS: I will have to check what tar or bsdtar does internally.

Last edited by amish (2018-03-26 02:13:47)



#35 2018-03-26 02:12:47

Trilby
Inspector Parrot
Registered: 2011-11-29
Posts: 22,380

Re: pacman hooks timing

amish wrote:

You need space equivalent to the size of the one file (not even the whole .pkg.xz) currently being extracted. Definitely not in GBs (unless there exists such a file!)

And how do you know how big the biggest file in a given archive will be prior to extraction? This would likely require a new bit of metadata for each package. If you don't know beforehand, it is not safe to assume that a given package isn't just one large file. So despite your claim that this would be "Definitely not in GBs", it is definitely in the GB range: 3.63 GB to be exact.



#36 2018-03-26 02:16:10

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

Huh??? Pacman already checks the disk space requirement before starting the installation, doesn't it?

There would be no change in that code, as the space required would definitely be less.

Update: OK, never mind; I think pacman calculates the space requirement after accounting for the files being deleted first, making sufficient space before re-adding the files.

PS: About the GBs - I did write in brackets "(unless there exists such a file!)".

Last edited by amish (2018-03-26 02:21:28)



#37 2018-03-26 02:21:23

Trilby
Inspector Parrot
Registered: 2011-11-29
Posts: 22,380

Re: pacman hooks timing

Yes, pacman checks whether there is space for the "Installed size" metadata. It does *not* check file-by-file - so your claim that the individual file size is relevant is false. And the suggestion that files be duplicated rather than removed and replaced would double the necessary space.

edit: this was cross-posted with your edit - we seem to be arriving at the same point now.

edit 2: RE your PS: yes, you said "unless there exists such a file", and that's exactly my point. The file size is not known beforehand, so pacman would have no way to reserve just enough space for the largest file; it would have to reserve enough space for the full package to be duplicated.

Last edited by Trilby (2018-03-26 02:23:59)



#38 2018-03-26 02:24:12

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

I was updating my post while you were replying. You may want to re-read it smile



#39 2018-03-26 02:27:06

Trilby
Inspector Parrot
Registered: 2011-11-29
Posts: 22,380

Re: pacman hooks timing

I have, and my point stands. Allan asked how much space must be reserved for such an operation, and your response referred to the size of a file. But you can't know that size. So back to the question: how much space must be reserved?

Imagine I had a 10GB root partition of which I was using just 4.5GB, then wanted to install a sizeable UFO game that required 3.3GB - I should be good, right? Under your plan I would be good, but not for long. I could install it, but then I could never subsequently update it. Pacman would error out for not having enough free space, despite none of the packages increasing in size and ~3GB sitting unused. I'd be pretty annoyed.

Last edited by Trilby (2018-03-26 02:37:39)



#40 2018-03-26 02:49:31

apg
Developer
Registered: 2012-11-10
Posts: 192

Re: pacman hooks timing

This is not a new feature request https://bugs.archlinux.org/task/8585.  So far nobody has come forward with a way to actually solve the problem in pacman, just half-solutions that involve adding significant complexity and overhead.  Even if we could make individual file updates atomic (and I'm not sure we can even do that without constantly sync(2)ing), that's such a small gain when the overall transaction is not.  This is a filesystem-level problem with a filesystem-level solution.  If you want atomic transactions, use a filesystem with snapshots.  If you choose not to, you get to deal with the fallout.  I don't see that changing any time soon.


#41 2018-03-26 02:58:50

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

How does pacman calculate the space requirement?

Let's say the system has 3 partitions: / (root), /usr, and /var.

Let's just pick /usr for example.

Pacman would calculate the total size of the existing files under /usr which are to be deleted before the untar (say the sum is X).
Pacman would go through each package and calculate the total size of every file under /usr (say the sum is Y).

So the space required under /usr will be (Y - X) plus some margin of safety.

So pacman must already be going through the contents of the archive to work out the space requirement for each mount point.

At that time pacman could also note down the maximum file size for that mount point. (It should just need one "off_t maxfilesize" variable in a struct and two additional lines of code: if (newfilesize > maxfilesize) maxfilesize = newfilesize; ... something like that.)

So there is no need for additional metadata (unless pacman does not follow the above process for calculating the disk space requirement).
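The bookkeeping described above can be sketched in a few lines of C. The struct and function names here are illustrative, not pacman's actual diskspace.c code; the extra-copy term in space_needed() reflects the proposed temp-file-per-replacement scheme:

```c
/* Sketch of per-mount-point accounting that also tracks the largest
 * single incoming file. Illustrative names, not pacman's diskspace.c. */
#include <sys/types.h>

struct mount_usage {
    off_t to_remove;    /* X: total size of files to be deleted */
    off_t to_install;   /* Y: total size of files to be installed */
    off_t max_filesize; /* largest single file to be installed */
};

static void account_removed(struct mount_usage *m, off_t size)
{
    m->to_remove += size;
}

static void account_installed(struct mount_usage *m, off_t size)
{
    m->to_install += size;
    if (size > m->max_filesize)  /* the proposed two extra lines */
        m->max_filesize = size;
}

/* Space needed on this mount point: (Y - X) plus room for one extra
 * copy of the largest file, held temporarily during an atomic replace. */
static off_t space_needed(const struct mount_usage *m)
{
    return (m->to_install - m->to_remove) + m->max_filesize;
}
```

Note Trilby's objection still applies in the worst case: if a package is one huge file, max_filesize equals the whole package.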

Trilby wrote:

Yes, pacman checks whether there is space for the "Installed size" metadata.

Checking just the "Installed size" metadata does not sound correct; we must check the space requirement for each mount point on the system.

Last edited by amish (2018-03-26 03:08:50)



#42 2018-03-26 03:43:57

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

Checking pacman git - the file named diskspace.c:

https://git.archlinux.org/pacman.git/tr … iskspace.c

It is indeed checking each file removed and each file added per mount point.

So it is not checking just the "installed size" metadata (of a package).

So finding the file of maximum size is trivial (as I mentioned above).

PS: It may not be going through the archive but rather the mtree file. Either way, it is checking the size of each file, so the size requirement would indeed be known.

Last edited by amish (2018-03-26 03:51:43)



#43 2018-03-26 04:51:29

Allan
Member
From: Brisbane, AU
Registered: 2007-06-09
Posts: 10,909

Re: pacman hooks timing

Updating one file at a time from a package can be as disastrous as files from a package not being present. So an "atomic" update would really need to handle all files from a package at once. Even then, a library update followed by a poweroff before other packages are updated can result in a non-bootable system.

Even more fun: updating the kernel without updating the initramfs (which occurs right at the end - we need to wait for module packages to update) results in no booting.


#44 2018-03-26 05:12:55

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

Allan wrote:

Updating one file at a time from a package can be as disastrous as files from a package not being present.

I would like to know how - can you mention a case? (All libraries missing completely is more disastrous, but in my case, if it was just a minor version update, it may save the day because the old version may still work.) Yes, it can be as disastrous, but not always.

Allan wrote:

Even more fun: updating the kernel without updating the initramfs (which occurs right at the end - we need to wait for module packages to update) results in no booting.

In the current (existing) method, on a power failure while the kernel package is being untarred, vmlinuz will be gone completely (because the older version was deleted before the untar). So the system will *definitely* not boot at all.

In my case, if a power failure occurs while the kernel package is being updated and vmlinuz has not been overwritten yet, the system will still boot, because the old module directories and the old initramfs have not been deleted yet.

So, as I said earlier, we are reducing the chances of disaster (maybe not eliminating them, but one method is sure to cause complete disaster, while the other retains some possibility of avoiding disaster even for important packages).

Last edited by amish (2018-03-26 05:30:29)



#45 2018-03-26 15:37:50

eschwartz
Trusted User/Bug Wrangler
Registered: 2014-08-08
Posts: 2,853

Re: pacman hooks timing

zOMG!!! (!!)

Eschwartz wrote:

But... less destructive is still destructive, at which point I still have zero faith in the integrity of my system. So how have your proposed changes helped me if I still have to revert to a btrfs snapshot and/or replay all updates properly from an archiso?

How does changing an algorithm to another still-broken algorithm fix anything? The root of the problem is still there, *and* the effects of the problem are still there in many, many cases. You have not systematically fixed anything; instead you have basically said "with my fix, you can now flip a coin and see if you're lucky. And if you're lucky, you'll just end up with mismatched package files from old versions that happen to be compatible with each other, so at least everything still somehow runs."

I would not really call that "fixed" in any way, shape, or form. Many people would go so far as to say your changes are actively bad, as they result in less trust in the system than a comparatively honest crash with missing files.

amish wrote:
Allan wrote:

Updating one file at a time from a package can be as disastrous as files from a package not being present.

I would like to know how - can you mention a case? (All libraries missing completely is more disastrous, but in my case, if it was just a minor version update, it may save the day because the old version may still work.) Yes, it can be as disastrous, but not always.

Literally any case with missing libraries, since one missing library can be just as dangerous as 400 missing libraries. You keep suggesting the same incredibly complex-but-still-not-working solution to a problem that is solved by using a snapshotting filesystem.

https://gist.github.com/vodik/5660494

Last edited by eschwartz (2018-03-26 15:43:53)



#46 2018-03-27 05:04:13

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

Incredibly complex?? It's not at all complex - it's just a re-arrangement of existing code.

OR

if you do not want a major change, you can just defer the deletion to one stage later.

Instead of deleting all files first, let libarchive handle the deletion (as it progresses) and then delete whatever is left over (i.e. whatever is not present in the new version but was present in the old one).

https://git.archlinux.org/pacman.git/tr … til.c#n379

Line 379 (currently):

int readret = archive_read_extract(archive, entry, 0);

to

int readret = archive_read_extract(archive, entry, ARCHIVE_EXTRACT_UNLINK);

Manpage: https://github.com/libarchive/libarchiv … WriteDisk3

PS: OK, this was my final proposal. If nothing else, I definitely learned good things about pacman. :)

Last edited by amish (2018-03-27 05:05:33)



#47 2018-03-27 12:02:21

eschwartz
Trusted User/Bug Wrangler
Registered: 2014-08-08
Posts: 2,853

Re: pacman hooks timing

amish wrote:

Incredibly complex?? It's not at all complex - it's just a re-arrangement of existing code.

I just looked at the date, and you're almost a week early. ;) ;)



#48 2018-03-27 12:35:12

Allan
Member
From: Brisbane, AU
Registered: 2007-06-09
Posts: 10,909

Re: pacman hooks timing

amish wrote:

Instead of deleting all files first, let libarchive handle the deletion (as it progresses) and then delete whatever is left over (i.e. whatever is not present in the new version but was present in the old one).

https://git.archlinux.org/pacman.git/tr … til.c#n379

Line 379 (currently):

int readret = archive_read_extract(archive, entry, 0);

to

int readret = archive_read_extract(archive, entry, ARCHIVE_EXTRACT_UNLINK);

Manpage: https://github.com/libarchive/libarchiv … WriteDisk3


Fun story...  that function is only used to extract .INSTALL scripts to temporary locations so we can run pre_install() functions.

The function that actually extracts the package has ARCHIVE_EXTRACT_UNLINK set - see perform_extraction() in lib/libalpm/add.c


#49 2018-03-27 13:25:18

amish
Member
Registered: 2014-05-10
Posts: 415

Re: pacman hooks timing

Oh, OK. I looked at the descriptions just above those functions.

There were two:
_alpm_unpack_single() - unpack a specific file in an archive
_alpm_unpack() - unpack a list of files in an archive (the list names the files within the archive to unpack, or NULL for all)

So I thought those were the primary functions used for unpacking the archive to disk.

Allan wrote:

Fun story...  that function is only used to extract .INSTALL scripts to temporary locations so we can run pre_install() functions.

Actually, it looks like the above functions are not used at all; extract_db_file() seems to be the one extracting the ".INSTALL" file, which in turn calls perform_extract().

And there is also another similar function, extract_single_file().

Last edited by amish (2018-03-27 13:33:34)



#50 2018-03-27 16:32:25

apg
Developer
Registered: 2012-11-10
Posts: 192

Re: pacman hooks timing

amish wrote:

Oh, OK. I looked at the descriptions just above those functions.

There were two:
_alpm_unpack_single() - unpack a specific file in an archive
_alpm_unpack() - unpack a list of files in an archive (the list names the files within the archive to unpack, or NULL for all)

So I thought those were the primary functions used for unpacking the archive to disk.

Allan wrote:

Fun story...  that function is only used to extract .INSTALL scripts to temporary locations so we can run pre_install() functions.

Actually, it looks like the above functions are not used at all; extract_db_file() seems to be the one extracting the ".INSTALL" file, which in turn calls perform_extract().

And there is also another similar function, extract_single_file().

Yes they are; they are used exactly as Allan described.

