
#1 2019-07-14 11:24:41

xerxes_
Member
Registered: 2018-04-29
Posts: 662

[SOLVED] ZRAM advisory

I need more main memory (I currently have 2GB), or some way to use it more effectively (for a web browser), but I can't add more RAM to this old computer.
So I set up and tested ZRAM according to this wiki: https://wiki.archlinux.org/index.php/Im … erformance in the section "Swap on zRAM using a udev rule", but instead of disksize=512M I set it to 1024M (because of the two-core CPU (old and slow, 1.60GHz) there is a 1GB zram0 and a 1GB zram1).

Everything is working fine for now, but I have some questions:
1. Can the ZRAM device(s) be bigger than RAM, and if so, how much bigger? And will it have a negative impact on performance if the data does not fit entirely in RAM?
2. Right now I use the default compression algorithm, which is lzo-rle, and I can choose from: lzo lzo-rle lz4 lz4hc 842 zstd. Which of these algorithms might be better in terms of CPU overhead and compression ratio (842 is very cryptic to me, I have never heard of it)?

If you have any other suggestions, feel free to share them.

Last edited by xerxes_ (2019-07-20 11:22:09)

Offline

#3 2019-07-14 12:49:18

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

xerxes_ wrote:

I need more main memory (I currently have 2GB), or some way to use it more effectively (for a web browser), but I can't add more RAM to this old computer.
So I set up and tested ZRAM according to this wiki: https://wiki.archlinux.org/index.php/Im … erformance in the section "Swap on zRAM using a udev rule", but instead of disksize=512M I set it to 1024M (because of the two-core CPU (old and slow, 1.60GHz) there is a 1GB zram0 and a 1GB zram1).

Since this commit there is no need for that. A single device will do, and it actually works faster than two (or more) zram devices:
https://git.kernel.org/pub/scm/linux/ke … 82fb1ddb26

cat /sys/block/zram0/mm_stat

The first two values:
orig_data_size/compr_data_size
33914880/6476502=5.2366 (lzo-rle)
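If you want the ratio printed directly instead of dividing by hand, a quick one-liner works (assuming the mm_stat layout above, where the first field is orig_data_size and the second compr_data_size):

awk '{ printf "ratio: %.4f\n", $1 / $2 }' /sys/block/zram0/mm_stat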
In my opinion the swap size on a zram device can be at least twice the size of RAM (although I do not know whether such a large value is recommended).
As to the choice of compression algorithm, trust the kernel developers: their default of lzo-rle is the best fit in most cases.

My compilation test during very intensive swapping:
lzo  34m03,673s
lz4  36m24,586s
zstd 35m13,827s
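If someone wants to repeat a comparison like this with a different algorithm: comp_algorithm can only be changed while the device is uninitialized, so zram0 has to be reset first. A rough sketch, run as root and assuming zram0 is your only zram swap device:

swapoff /dev/zram0
echo 1 > /sys/block/zram0/reset               # device must be uninitialized to change the algorithm
echo lz4 > /sys/block/zram0/comp_algorithm    # any algorithm listed in this file
echo 1024M > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon /dev/zram0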

edit:
Do not be misled by compression benchmarks run on typical files. Compression in ZRAM works differently: it generally operates on very small, page-sized chunks. Here is a brief description:
https://patchwork.kernel.org/patch/10307397/
The other thing is that the compression code in applications and in the kernel is not identical; in the kernel these are probably clean and very simple implementations.

Last edited by latalante1 (2019-07-14 15:43:45)

Offline

#4 2019-07-15 13:58:28

xerxes_
Member
Registered: 2018-04-29
Posts: 662

Re: [SOLVED] ZRAM advisory

Thank you all!
I ended up with a single 4GB zram0 device, and it works great. I noticed it only starts being used when RAM is getting full.
Now I want to check whether, while zram is in use, the normal swap is also being used. To see this I will use these commands:

zramctl
swapon -s
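A convenient way to keep an eye on both at once while memory fills up (just an idea, using the standard util-linux tools):

watch -n1 'zramctl; echo; swapon --show'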

I will test the compression algorithms later.

Last edited by xerxes_ (2019-07-15 13:58:59)

Offline

#5 2019-07-15 14:58:58

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

If this amount of memory is sufficient and you have a relatively fast disk (SSD, eMMC, NVMe), you can consider turning off the real swap partition and instead using it as backing storage for the zram device. In some cases the number of poorly compressible memory pages can be significant, and it is better to move them out of RAM.
https://bugs.chromium.org/p/chromium/is … ?id=887987

Add an additional attribute to the udev rule:

ATTR{backing_dev}="/dev/sda2"   # !!! your real SWAP partition !!!
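For orientation, a rough sketch of how the whole rule might then look (modelled on the wiki's single-rule layout; /dev/sda2 is just the partition named above, and it must no longer be mounted as swap via fstab). backing_dev has to appear before disksize, because writing disksize is what initializes the device:

# /etc/udev/rules.d/99-zram.rules -- sketch only, adjust device name and size
KERNEL=="zram0", ATTR{backing_dev}="/dev/sda2", ATTR{disksize}="4096M", RUN="/usr/bin/mkswap /dev/%k", TAG+="systemd"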

Last edited by latalante1 (2019-07-15 15:00:10)

Offline

#6 2019-07-15 18:53:48

xerxes_
Member
Registered: 2018-04-29
Posts: 662

Re: [SOLVED] ZRAM advisory

latalante1 wrote:

If this amount of memory is sufficient and you have a relatively fast disk (SSD, eMMC, NVMe)...

No, I have an old, slow 5400rpm HDD, so I will not turn off the real swap, but I will try the "backing_dev" option.

I don't fully understand how zram works: I have 2GB of RAM (in 2 modules) and I set up a 4GB zram0, but compression only starts when RAM is almost full (I checked this with zramctl). So where is the data stored then?

Last edited by xerxes_ (2019-07-15 19:14:00)

Offline

#7 2019-07-15 18:58:22

seth
Member
Registered: 2012-09-03
Posts: 49,943

Re: [SOLVED] ZRAM advisory

Thin air, it's a scam: https://en.wikipedia.org/wiki/SoftRAM

:-P

No, seriously: the zram/zswap area is in your RAM. The bottom line is that when you are running out of RAM, it is increasingly compressed (so the zram/zswap part of your RAM grows and occupies space formerly used by the data that is now stored in the zram/zswap).
Unlike the scam back then, this actually works (at the cost of CPU time).
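You can watch this on a live system with zramctl: DATA is how much has been swapped into the device, COMPR is its compressed size, and TOTAL is the physical RAM the device actually occupies; TOTAL is the part that grows as memory pressure rises (column names as printed by util-linux's zramctl):

zramctl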

Online

#8 2019-07-15 20:03:38

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

xerxes_ wrote:

No, I have an old, slow 5400rpm HDD, so I will not turn off the real swap, but I will try the "backing_dev" option.

In that case, this proposal is not for you.

xerxes_ wrote:

I don't fully understand how zram works: I have 2GB of RAM (in 2 modules) and I set up a 4GB zram0, but compression only starts when RAM is almost full (I checked this with zramctl). So where is the data stored then?

ZRAM is a compressed block device on which you created the swap area; everything stays in RAM. I would suggest reducing the size a little, to roughly 3GB of swap.
You can also make the kernel move unused memory pages to swap more eagerly:

echo 90 | sudo tee /proc/sys/vm/swappiness
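To make that setting survive a reboot, the usual place is a sysctl drop-in (the file name below is just an example):

# /etc/sysctl.d/99-swappiness.conf
vm.swappiness = 90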

Yes, compression and decompression take CPU time, but the payoff is a large amount of additional room in RAM for applications and cache. Swap in zram works many, many times faster than swap placed on an old, slow disk.

Offline

#9 2019-07-16 13:58:03

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

swap partition vs swap on ZRAM (Arch Linux x86_64, kernel 5.2.1)
A really old laptop (2006) with very little memory, a slow processor (Core 2, 1.73GHz) and an even slower hard drive.

>free -m
              total        used        free      shared  buff/cache   available
Mem:            919          86         644           0         188         801
Swap:          4095           0        4095

Only the necessary applications running on startup: systemd, stubby, haveged, Xorg, ion, urxvt, tmux, vim... Zram size = 2GB, swap partition 2GB.

The first example: compiling code with g++. The test case is from the attachments to this bug report; it needs roughly 1.5GB of memory.
https://gcc.gnu.org/bugzilla//show_bug.cgi?id=14179

bzip2 -dc ~/Downloads/array.C.bz2 | ( time g++ -O0 -x c++ -c - -o - 2>/dev/null )

zram 1m52,045s (vmstat swapd maximum ~1.7GB)
swap 5m26,497s (vmstat swapd maximum ~1.3GB)
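The "swapd maximum" figures are presumably read off vmstat while the benchmark runs (the column vmstat itself labels swpd); to reproduce them, something like this in a second terminal is enough:

vmstat 1    # watch the swpd column while the test runs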

The second example is the opening of a heavy page in Firefox 68.
https://chromium.googlesource.com/chrom … er&n=10000
zram ~1:55 (swapd ~780MB)
swap ~8:31 (swapd ~720MB)

In both cases the demand for RAM is significantly higher (about 2x) than what the system has.

Edit:
Arch Linux developers should start optimizing GCC; it's 2019 already. Fedora has been building GCC with PGO (make profiledbootstrap) since 2015. The difference is significant, around 15%.
https://src.fedoraproject.org/rpms/gcc/ … nch=master

The same version, GCC 9.1.0, launched from the Fedora chroot.

bzip2 -dc ~/Downloads/array.C.bz2 | ( time g++ -O0 -x c++ -c - -o - 2>/dev/null )

zram 1m36s (16 seconds faster than with Arch Linux's g++).

Edit2:
It's really sad how the requirements of compilers, browsers and systems keep increasing from year to year and from version to version.

>/opt/gcc49/bin/g++ --version
g++ (GCC) 4.9.4 (x86_64)

>time /opt/gcc49/bin/g++ -O0 -x c++ -c ~/src/array.C -o - 2>/dev/null
real    0m48,738s (zram, vmstat swapd maximum 1.17GB)

Edit3:
For comparison, here is how the same test looks under the i686 architecture (Void Linux). It does a bit better due to the much lower pressure on swap.

>g++ --version
g++ (GCC) 9.1.0
>time g++ -O0 -c ~/src/array.C -o - 2>/dev/null
1m25,202s (zram, vmstat swapd maximum ~768MB)
>time /opt/gcc484/bin/g++ -O0 -c ~/src/array.C -o - 2>/dev/null
0m28,496s (zram, vmstat swapd maximum ~306MB)

firefox 68.0
1m37s (zram, vmstat swapd maximum ~203MB)

Last edited by latalante1 (2019-07-16 20:58:50)

Offline

#10 2019-07-17 19:45:24

xerxes_
Member
Registered: 2018-04-29
Posts: 662

Re: [SOLVED] ZRAM advisory

Thanks latalante1, impressive benchmarks and findings!

latalante1 wrote:

Arch Linux developers should start optimizing GCC; it's 2019 already. Fedora has been building GCC with PGO (make profiledbootstrap) since 2015. The difference is significant, around 15%.

I hope they really do.

If there are no new posts for a few days, I will mark this thread as solved.

Offline

#11 2019-07-17 20:50:52

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

I will add that they do it even in Ubuntu. GCC is one of the pieces of software that profits most from profiling. I always profile it for my own needs, so I have also done the latest version (I did not include it in these tests).

Offline

#12 2019-07-17 20:57:44

loqs
Member
Registered: 2014-03-06
Posts: 17,192

Re: [SOLVED] ZRAM advisory

@latalante1 see https://lists.archlinux.org/pipermail/p … 20943.html for an explanation as to why PGO is not enabled by default.

Offline

#13 2019-07-17 21:06:16

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

Of course, the effect is best on the same processor, but even if the gain on other CPUs is "only" 7-10%, it is still worth it. Compiling software takes a lot of CPU time.
I only mentioned PGO, not PGO + LTO.

Last edited by latalante1 (2019-07-17 21:11:08)

Offline

#14 2019-07-17 21:11:45

loqs
Member
Registered: 2014-03-06
Posts: 17,192

Re: [SOLVED] ZRAM advisory

I think you may have missed the point: since the profiling data is not stored in the resulting package, a bit-identical binary cannot be reproduced, which means Alan will not add it.
Edit:
Arch does do LTO on a package-by-package basis.

Last edited by loqs (2019-07-17 21:13:40)

Offline

#15 2019-07-26 16:51:05

xerxes_
Member
Registered: 2018-04-29
Posts: 662

Re: [SOLVED] ZRAM advisory

Recently I tested zram and swap usage with a program compiled from this site: https://github.com/vovo/testing/blob/ma … m_thrash.c -> https://cdn.kernel.org/pub/linux/kernel … eLog-5.2.3 on kernel 5.2.2, with the arguments: ./mem_thrash 1000 1000 1000 1000 10.

According to swapon --show, peak zram usage was ~1.1GB and no real swap was used, but while mem_thrash was running I noticed greatly reduced responsiveness and could hear my old HDD working hard.

When mem_thrash finished, responsiveness and performance almost came back (some light swapping remained).

As a reminder, I set up a single 3GB zram0 and have 2GB of total RAM on this desktop.

So my conclusion is: although swapon shows that only zram is being used and no real swap, in reality the disk may sometimes still be hit hard, so performance may not be better than without zram.
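A rough way to check where that disk activity comes from is to snapshot a few counters from /proc/vmstat before and after a run (just a sketch; pswpin/pswpout count pages swapped in/out, while pgmajfault counts every fault that needed disk I/O, file re-reads included):

grep -E '^(pswpin|pswpout|pgmajfault) ' /proc/vmstat   # note the values
./mem_thrash 1000 1000 1000 1000 10
grep -E '^(pswpin|pswpout|pgmajfault) ' /proc/vmstat   # compare: large pgmajfault growth with small pswpin means file re-reads, not swap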

Next I will have to retry this test once kernel 5.2.3 is out (which should include "mm: vmscan: scan anonymous pages on file refaults").

Last edited by xerxes_ (2019-07-26 16:54:39)

Offline

#16 2019-07-26 19:25:44

seth
Member
Registered: 2012-09-03
Posts: 49,943

Re: [SOLVED] ZRAM advisory

This is not about using swap. The test code provokes thrashing, which means that file caches are dropped so that the files have to be re-read from disk on the next access. Swap space is used for anon pages, i.e. memory that can NOT be re-created by reading from the "regular" filesystem. Apparently the test code exploits a condition where the kernel prefers thrashing over reclaiming anon pages (and swapping them out).

Online

#17 2019-07-27 08:55:03

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

In your case the conditions are unrealistic, so do not be surprised that the system slowed to a crawl.
You allocated 1GB of memory (anonymous pages), and now 10 (!!!) processes are each reading, 1000 (!!!) times, a 1GB (!!!) executable file and a 1GB regular file.
A more reasonable invocation is in the description of this commit:

$ ./thrash 2000 100 2100 5 1 # ANON_MB FILE_EXEC FILE_NOEXEC ROUNDS PROCESSES

https://git.kernel.org/pub/scm/linux/ke … 9f84eda1fa

Last edited by latalante1 (2019-07-27 09:06:33)

Offline

#18 2019-07-27 14:16:53

latalante1
Member
Registered: 2018-08-30
Posts: 110

Re: [SOLVED] ZRAM advisory

As mentioned by @seth, this test is not about swapping. Do not forget to delete the created file 'large'.
If you are interested in synthetic benchmarks, see this one:
https://github.com/sergey-senozhatsky/zram-perf-test
Synthetic benchmarks are useful for developers; I'm not interested in them.

Maybe you are just looking for a way to fill your memory? Start Chrome or Firefox :)
If they are not effective enough, use this:
https://github.com/sergey-senozhatsky/z … m-hogger.c

./mem-hogger -m 2G

Offline
