I noticed that p7zip has been replaced with 7zip in Extra. Why is that? 7zip, in short, simply sucks, mostly because of its refusal to archive anything with a symlink in it. If you have 7zip, try to archive ~/.wine (it has dozens of symlinks for dosdevices) and you'll see what I mean. Fortunately for me, I had downloaded p7zip for an offline installation, so I fixed that easily. But the problem with 7zip and symlinks remains, which makes me wonder why you would put an inferior package in the repo.
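A quick repro, if you want to see it without touching your own wine prefix (the paths here are just an example, and I'm describing the behaviour as I understand it):
$ mkdir -p /tmp/symtest && cd /tmp/symtest
$ ln -s /nonexistent dangling   # ~/.wine/dosdevices is full of links like this
$ 7z a test.7z dangling         # 7z tries to follow the link and errors out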
RADO OS GTK3 (customized Arch), i7-12700F, RTX 3070 Ti 8GB, 64GB DDR5-4800 (OCed to 5200 MHz).
Just a guess
https://sourceforge.net/p/p7zip/bugs/
Last release was 2016.
https://aur.archlinux.org/cgit/aur.git/ … b28a1687ab
+ printf ">>> This software is dead. Works, but no new features will be added\n"
+ printf ">>> or no bugs will be repaired, and will detioriate over time.\n"
+ printf ">>> Start looking for alternatives.\n"
its refusal to archive anything with a symlink in it
https://sourceforge.net/p/sevenzip/bugs … ?q=symlink
Why are you *creating* 7z archives instead of tarballs?
You're not gonna be able to extract symlinks on non-POSIX systems.
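A plain tarball, for instance, keeps the links as links out of the box (the path is just an example):
$ tar -cf wine.tar -C ~ .wine   # tar stores symlinks as symlinks by default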
$ 7z --help | grep -i link
-snh : store hard links as links
-snl : store symbolic links as links
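So something like this should keep the wine symlinks intact (an untested sketch on my end):
$ 7z a -snl -snh wine.7z ~/.wine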
Why are you *creating* 7z archives instead of tarballs?
1. I only make tarballs for small directories - 50 MB tops. Anything beyond that - 7z.
2. Many of the dirs that need regular backing up start at 6 GB (the largest is 110 GB), like my heavily customized Wine dir.

I have no intention of waiting twice as long for the creation of a tarball, which is slow as f**k in both packing and unpacking. By the time such a big tarball gets created, I would have packed the directory 3 times in a row with p7zip.
stanczew, I'll check that -snl out, although that's only more work for me - editing all 72 of my backup scripts - which is all the more reason to stay with p7zip. It might be an abandoned project, but guess what - everything works just fine 9 years later, plus I don't have to add -snl because p7zip does that by default. I'm not saying "bring p7zip back" - I know that's not gonna happen, I was just curious. In my 10 years with it and with Linux (first Mint, then Arco for about a year, then vanilla Arch from 2019 to the present day) I have never seen a single bug in p7zip. If it didn't deteriorate in 10 years, I don't think it ever will. Not on my computer anyway.
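If I do end up switching, a one-liner along these lines should spare me the manual editing (a sketch - it assumes the scripts all sit in one directory, and the path is made up):
$ grep -rlZ '7z a ' ~/backup-scripts | xargs -0 sed -i 's/7z a /7z a -snl /g'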
RADO OS GTK3 (customized Arch), i7-12700F, RTX 3070 Ti 8GB, 64GB DDR5-4800 (OCed to 5200 MHz).
If it didn't deteriorate in 10 years, I don't think it ever will. Not on my computer anyway.
To be honest, I could see the majority of computers going ARM-based instead of x86 somewhere in the future. So probably at that time, if it happens of course, you will need to compile it by yourself.
It's deprecated, as you said - that's probably why it's no longer in the repos - but if you love it so much, why not use it? I mean, I don't know if that program has any CVEs.
The worst thing is when somebody sends you a .rar file that can only be opened with WinRAR; some days ago I had to do exactly that and download it for Linux, LoL. I tried all the compressor alternatives on Linux - all failed :C
Last edited by Succulent of your garden (2025-10-29 10:56:38)
str( @soyg ) == str( @potplant ) btw!
you will need to compile it by yourself
I used to be afraid of compiling from source but that time is long gone.
Atm I'm testing the new 7zip. I've added -snl wherever it's needed and I'm testing other stuff - mostly speed when compressing large files with -mx9 and -mmt20 (my CPU has 20 threads). For the moment it performs... unbelievably well. Sure, it fills half of the available RAM (50% of 64 GB) to complete the task with these parameters, but on this point alone 7zip outperforms p7zip by more than 10 times. So, for the moment it's great. I'll test it for one or two more weeks and if all goes well, I'll stick with it. p7zip will remain in my "museum" for obsolete programs.
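For reference, this is the kind of invocation I've been testing (reconstructed here - the archive and directory names are made up):
$ 7z a -snl -mx9 -mmt20 backup.7z ~/bigdir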
Currently it performs awesome when archiving; the only downside is that it's limited to using about 30-32 GB of RAM. It would be great if the user had an option to manually set the maximum amount of RAM used - I'd set it to 50-55 GB and still have more than enough for all daily tasks.
https://i.imgur.com/Qswxa48.png
Last edited by Valso (2025-10-29 20:47:44)
RADO OS GTK3 (customized Arch), i7-12700F, RTX 3070 Ti 8GB, 64GB DDR5-4800 (OCed to 5200 MHz).
That seems to be a nice improvement then. Probably another reason why the previous version is no longer in the repos.
str( @soyg ) == str( @potplant ) btw!
@Valso, you'll have to file an upstream bug, though it seems kinda weird that there's an artificial RAM limit imposed by it.
Can you allocate more RAM w/ stress or head|tail - https://unix.stackexchange.com/question … ree-memory ?
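Something along these lines, per the linked answer (50G is only an example size, and the head|tail trick is from memory):
$ stress --vm 1 --vm-bytes 50G --timeout 60
$ </dev/zero head -c 50G | tail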
p7zip will remain in my "museum" for obsolete programs.
Please always remember to mark resolved threads by editing your initial post's subject - so others will know that there's no task left, but maybe a solution to find.
Thanks.
There are/were some efforts to modernize p7zip in forks, but they seem to be struggling as well.
https://github.com/p7zip-project/p7zip
https://github.com/cielavenir/p7zip
Last edited by progandy (2025-10-30 18:31:40)
| alias CUTF='LANG=en_XX.UTF-8@POSIX ' |
Support for Linux was added a few years ago. Originally 7-Zip was written as a 32-bit program for Windows, which inherently limited the process to 2 GiB of address space (~3 GB with PAE enabled). As a consequence the compression window couldn't exceed 2 GiB.⁽¹⁾
Nowadays the size limitation is gone, so hypothetically the value can be bumped up. I don’t speak for Igor, but I can see at least a few reasons he may wish not to make the change:
1. Increasing it brings no real benefit. 2 GiB is already unreasonably high.
2. It breaks backward compatibility.
3. It inhibits adoption. Mind that an archive, if passed to other users, needs to be something they can decompress without facing an OOM.
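For scale, a current 64-bit build already lets you raise the dictionary well beyond the defaults yourself - a quick sketch, with invented file names:
$ 7z a -mx9 -md=1536m big.7z hugefile   # compressor memory use is a multiple of the dictionary size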
____
⁽¹⁾ Not without heavy disk I/O.
Paperclips in avatars? | Sometimes I seem a bit harsh — don’t get offended too easily!
@Valso, you'll have to file an upstream bug, though it seems kinda weird that there's an artificial RAM limit imposed by it.
Can you allocate more RAM w/ stress or head|tail - https://unix.stackexchange.com/question … ree-memory ?
p7zip will remain in my "museum" for obsolete programs.
Please always remember to mark resolved threads by editing your initial post's subject - so others will know that there's no task left, but maybe a solution to find.
Thanks.
So far the modern 7zip (the one that requires manually setting -snl for symlinks) performs better than expected and a lot better (and faster) than p7zip, so I'm gonna stay with it. However, you might be right - there is a limit of 30.7 GiB RAM in 7zip and I'll have to file a bug report or at least ask why that limit exists.
I'm not familiar with the stress command, so I had to ask an AI about it. I did the RAM allocation with stress (it didn't work with "head"):
$ stress --vm 1 --vm-bytes 50G --timeout 120
and it did allocate 51.2 GiB. I tried with "--timeout 30" as the AI suggested, but that wasn't enough time for conky to react and display the allocated RAM, so I set it to 120 seconds. This is consistent with the description of my CPU (i7-12700F) on Intel's website:
Memory Specifications
Max Memory Size (dependent on memory type): 128 GB
Memory Types: Up to DDR5 4800 MT/s, Up to DDR4 3200 MT/s
Max # of Memory Channels: 2
Max Memory Bandwidth: 76.8 GB/s
It inhibits adoption. Mind that an archive, if passed to other users, needs to be something they can decompress without facing an OOM.
Keyword is "IF". The archives I make on my computer are either for testing or for backup. Which means I don't share them with other people, therefore I don't really care about other people's decompressing issues. That's why in my report I suggested adding an option "--memlimit" (like "--memlimit=55" in my case, in GiB units), so that we can manually set higher limits.
Here's my bug report, if you wanna read it: https://sourceforge.net/p/sevenzip/bugs/2603/
---------------------
60 minutes later: I managed to make 7zip use more RAM, making the archiving of large files at max compression (-mx9) almost twice as fast as with the hard limit of 30.7 GiB. Took some testing, though. Fortunately, in out-of-memory situations the kernel reacts instantaneously and cuts the process in its tracks the moment it would exceed the available RAM, before the system crashes. With that I quickly found the perfect value for my computer: with -md=460m it uses exactly 50 GiB of RAM for the compression process. A 107.4 GB (not GiB) file was shrunk to 15.6 MB in under 2 minutes. That way the compression speeds up, and yet there's more than enough memory left for the rest of the system and the other tasks I do while the compression is running.
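Putting it all together, the final invocation looks roughly like this (reconstructed - the names are made up):
$ 7z a -snl -mx9 -mmt20 -md=460m backup.7z ~/bigdir   # ~50 GiB of RAM on this box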
Last edited by Valso (2025-11-02 16:47:34)
RADO OS GTK3 (customized Arch), i7-12700F, RTX 3070 Ti 8GB, 64GB DDR5-4800 (OCed to 5200 MHz).