I've built packages before without a problem. However, when I build firefox I keep getting an OOM exception which ends up crashing my computer. I've read it takes a lot of RAM to build, but I have 32 GB of RAM along with 16 GB of swap, so I feel this should be plenty.
To avoid crashing my main system, I reproduced the build inside a Docker container with limited RAM. Here are three relevant logs:
- journal output at the time of the crash
- docker log output
- docker log output with PKGBUILD changed to "./mach build --verbose"
I'm not very familiar with building packages, and I'm not sure where to start when debugging this sort of thing. Any direction would be appreciated.
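The memory cap was applied along these lines (a sketch only; the image name "archbuild" and the limit values are placeholders, not my actual setup):

```shell
#!/bin/sh
# Build the docker invocation that caps RAM so the OOM reproduces
# deterministically instead of taking the host down.
# --memory is the hard RAM cap; --memory-swap is RAM + swap combined.
MEM=8g
MEMSWAP=12g
CMD="docker run --rm --memory=$MEM --memory-swap=$MEMSWAP archbuild makepkg -s"
echo "$CMD"   # run this by hand once the image is built
```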
Thanks,
Phil
Last edited by olsonpm (2023-12-13 17:20:27)
Offline
Are you building in /tmp?
Offline
So originally I was building in the clean chroot via "The Classic Way". Not sure whether that builds in /tmp?
However, when that was crashing my computer I used this Dockerfile instead, which seems to build in /tmp/pkg.
Offline
So originally I was building in the clean chroot via "The Classic Way". Not sure whether that builds in /tmp?
It would if that is where you created the chroot.
However when that was crashing my computer I used this Dockerfile instead which seems to build in /tmp/pkg
So that build was in /tmp, which, if it is using tmpfs, would increase memory usage. Have you tried lowering the number of jobs mach uses?
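For example, the job count can be pinned in the mozconfig (4 is just an example value, not a recommendation):

```
# mozconfig fragment: cap the number of parallel compile jobs
mk_add_options MOZ_PARALLEL_BUILD=4
```

The same can be done ad hoc with ./mach build -j4.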
Online
I created the chroot at ~/chroot.
I'll try reducing the number of jobs in the morning.
edit: update. Based on the second docker log file I linked, the number of jobs spawned was 8. I reduced it to 4 and got an OOM, although it reached one step further in the build process. I further reduced it to 2 and that build is still running. I'll update this when I have time.
Last edited by olsonpm (2023-12-09 17:31:46)
Offline
Okay, I had the job number down to 2. The build took a good amount longer and still OOM'd.
I'd appreciate any other ideas for looking into this - or I can also ask on a Mozilla forum.
Offline
I can compile Firefox entirely in RAM with no issues, and without needing to create a chroot.
I have 32 GB of RAM and 8 GB of swap, and I use MOZ_PARALLEL_BUILD=16. It compiles in about 40 minutes.
Excuse my poor English.
Offline
thanks agapito, when you say "without creating a chroot", what's your process for building? Do you just run makepkg?
Offline
thanks agapito, when you say "without creating a chroot", what's your process for building? Do you just run makepkg?
Yes, in a ramdisk I created for this purpose.
Offline
I've built packages before without a problem. However when I build firefox I keep getting an OOM exception which ends up crashing my computer. I've read it takes a lot of ram to build but I have 32gb of ram along with 16 gb of swap so I feel this should be plenty.
I recently solved a similar problem, not with Firefox but with kernel compilation on a VPS with 2 GB of RAM. I wrote a script to find out how much RAM+swap is actually required during the task:
makepkg -m > makepkg.log &
while sleep 1; do sh memlog.sh makepkg.log >> mem.log; done &
tail -f mem.log # for monitoring progress
Where memlog.sh is:
#!/bin/sh
freem=$(free -m | tr -s ' ')
swpusage=$(echo $freem | cut -d' ' -f16)  # used swap, MB
memusage=$(echo $freem | cut -d' ' -f9)   # used RAM, MB
line=$(tail -1 "$1")                      # current build step from the log
echo "$memusage;$swpusage;$line"
So I found out that at least 8 GB of RAM+swap is needed to compile the kernel (thanks to btf.o; even 1.5 GB is enough for everything else), so I disabled zram and created a swap file, which makes it convenient to add another 1 GB when needed.
Your task is similar, and logging memory consumption during compilation will show how much memory is actually needed, so you should try this method.
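Once mem.log exists, the peak usage can be pulled out with a one-liner. A self-contained example, using fake log lines in the same "memMB;swapMB;step" format the script produces:

```shell
#!/bin/sh
# Fake sample data in the format written by memlog.sh:
printf '%s\n' '1200;0;compile' '7800;512;link' '300;0;package' > mem.log
# Track the running maximum of column 1 (RAM, MB) and column 2 (swap, MB):
awk -F';' '$1>m {m=$1} $2>s {s=$2} END {print "peak: " m " MB RAM, " s " MB swap"}' mem.log
# prints: peak: 7800 MB RAM, 512 MB swap
```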
You can make sure that zram is disabled with the zramctl command: it should produce empty output.
And of course you should avoid putting the build directory in RAM, and avoid running other applications that use a lot of RAM, such as browsers or databases, during the compile.
Last edited by Nebulosa (2023-12-12 08:53:57)
Offline
okay, apparently this line was causing my OOM. I'm not motivated enough to dig into why, because it was a pretty frustrating issue. Glad to finally be able to build.
ac_add_options --enable-lto=cross,full
Offline
okay, apparently this line was causing my OOM. I'm not motivated enough to dig into why, because it was a pretty frustrating issue. Glad to finally be able to build.
ac_add_options --enable-lto=cross,full
Oh well, I have "ac_add_options --enable-lto=cross" in my PKGBUILD; maybe that was the reason I could compile it just fine... I will try that line next Monday when FF 122 is released.
So I found out that at least 8 GB of RAM+swap is needed to compile the kernel (thanks to btf.o; even 1.5 GB is enough for everything else), so I disabled zram and created a swap file, which makes it convenient to add another 1 GB when needed.
ZRAM is for when you don't want or need any swap file/partition, but I strongly advise against it, especially in your case with only 2 GB of RAM. What you need in your case is zswap plus a big swap partition.
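Zswap is enabled via kernel parameters; a sketch of a typical configuration (all values are examples, not a tested recommendation):

```
# Kernel command line fragment enabling zswap
zswap.enabled=1 zswap.compressor=zstd zswap.zpool=zsmalloc zswap.max_pool_percent=20
```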
Offline
thanks agapito, please respond here when you test it out - if you don't mind.
Offline
I could not compile it, even with 32 gigabytes of swap and zswap.max_pool_percent=60. After an hour, memory is exhausted and oomd does its job. Don't even try it; it was not a nice experience.
The only thing I can think of to achieve this is to create a compressed zram drive as the build directory and save a few gigabytes of memory by compressing the source code, or to try the mold linker.
As I write this, I am compiling it for the third time, but this time with the old configuration that did allow me to finish the compilation. For the next stable release, I will try the ideas mentioned above.
Offline
Thanks much for the confirmation. I'm only slightly interested in how it works for the firefox repo, not enough to dig in and find out, ha. For now I'm glad it's not just me with the issue.
Offline
Has anybody tried the openSUSE Build Service for FF? That's what I use when I want large packages compiled without using my own electricity.
Their VMs are industrial-sized, judging by their speed, so they might be able to handle the troublesome options.
Para todos todo, para nosotros nada
Offline
The only thing I can think of to achieve this is to create a compressed zram drive as the build directory and save a few gigabytes of memory by compressing the source code.
And that's what allowed me, not without pain, to compile 121.0.1 with "ac_add_options --enable-lto=cross,full" enabled and "mk_add_options MOZ_PARALLEL_BUILD=28", entirely in RAM except for the FF tarball.
I've created a 24 GB zram ext4 device with no journal and lzo compression. Thanks to that, I saved almost 7 GB of RAM at the linking stage.
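A sketch of that setup (device node, size, and mount point are assumptions; needs root):

```
modprobe zram
zramctl /dev/zram0 --algorithm lzo --size 24G
mkfs.ext4 -O ^has_journal /dev/zram0   # ext4 without a journal
mount /dev/zram0 ~/build               # use this as the build directory
```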
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS
/dev/zram0 lzo 24G 12,7G 5,9G 6G 32
ZSWAP allowed me to save a lot of RAM too.
Compressor: zstd
Zpool: zsmalloc
Stored pages: 5189274
Pool size: 5,4G
Decompressed size: 20G
But all that was not enough; without a large amount of HDD swap, I would not have made it. I needed almost 30 GB of swap for this task. I had to close all my programs, and at some point I thought the OOM killer would be activated. Compilation time was...
real 66m12,907s
user 505m48,118s
sys 27m57,705s
Next time I'll try zstd on the zram disk to compare times.
Offline
Note that these statistics are mainly useful if you compile your programs in memory like I do; so far, Firefox is the only program that has given me problems with 32 GB of memory.
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS
/dev/zram0 lzo 24G 12,7G 5,9G 6G 32
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS
/dev/zram0 zstd 24G 12,9G 3,4G 3,5G 32
Zstd is a little slower, but compresses much better: 6 GB vs 3.5 GB.
At some point in the compilation, the data on the zram disk grew to 19.8 GB, but thanks to compression it only occupied 6.3 GB.
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS
/dev/zram0 zstd 24G 19,8G 6,2G 6,3G 32
Using the zram + zstd method I could save 13.5 GB of memory, and thanks to that I could finish the Firefox compilation process.
Offline