Just when it was about to finish, I got:
VOFFSET arch/x86/boot/voffset.h
LDS arch/x86/boot/compressed/vmlinux.lds
AS arch/x86/boot/compressed/head_32.o
CC arch/x86/boot/compressed/misc.o
CC arch/x86/boot/compressed/string.o
CC arch/x86/boot/compressed/cmdline.o
CC arch/x86/boot/compressed/early_serial_console.o
OBJCOPY arch/x86/boot/compressed/vmlinux.bin
HOSTCC arch/x86/boot/compressed/relocs
arch/x86/boot/compressed/relocs.c: In function \u2018print_absolute_symbols\u2019:
arch/x86/boot/compressed/relocs.c:405:14: warning: variable \u2018sh_symtab\u2019 set but not used [-Wunused-but-set-variable]
RELOCS arch/x86/boot/compressed/vmlinux.relocs
LZMA arch/x86/boot/compressed/vmlinux.bin.lzma
lzma: (stdin): Cannot allocate memory
make[2]: *** [arch/x86/boot/compressed/vmlinux.bin.lzma] Error 1
make[1]: *** [arch/x86/boot/compressed/vmlinux] Error 2
make: *** [bzImage] Error 2
==> ERROR: A failure occurred in build().
Aborting...
==> ERROR: Makepkg was unable to build linux-pf.
==> Restart building linux-pf ? [y/N]
==> ---------------------------------
==>
\u2018print_absolute_symbols\u2019
It seems there's some mess with character encoding in your sources...
Try building it outside of /tmp, as it seems that's where yaourt (or whatever helper you used) built it.
It seems there's some mess with character encoding in your sources...
You should at least pay attention, that was a WARNING
A kernel compile can't take 18h unless something is *seriously* screwed up. However, I can't help but LOL that the compile went through, but then there was not enough RAM to compress the final image.
ethail may be on to something: if you're using tmpfs for /tmp and yaourt does its compilations there, this totally won't work for large compilations. On the other hand, a kernel compile is not that large compared to the actual monsters like browsers or libreoffice.
You should at least pay attention, that was a WARNING
I did: I didn't mention the WARNING regarding the unused variable, but the escape characters, which reminded me of some encoding issues the OP has had recently.
A kernel compile can't take 18h unless something is *seriously* screwed up. However, I can't help but LOL that the compile went through, but then there was not enough RAM to compress the final image.
I guess it can on a Pentium II 400 MHz with 394 MB RAM.
So was it because of RAM?
I guess it can on a Pentium II 400 MHz with 394 MB RAM.
Not even there can I imagine 18h. A few hours tops, but that's far from 18h.
So was it because of RAM?
It's because you compile *everything* in RAM. The whole source tree gets copied into RAM (for the kernel, that's 510MB already), then all the intermediate files (which would otherwise be written to disk) pile up in RAM as well, so in the end there's not much RAM left for the actual compilation to happen.
If your /tmp is a tmpfs, don't do compilations there. Or at least not the big compilations, smaller stuff shouldn't be a problem.
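A minimal sketch of how to point makepkg at a disk-backed directory instead of the tmpfs /tmp (this assumes your makepkg honours BUILDDIR and that whatever helper you use respects makepkg.conf; the path is only an example):
# /etc/makepkg.conf
# build on a real filesystem instead of the tmpfs-mounted /tmp
BUILDDIR=/home/build/makepkg
Create the directory beforehand (mkdir -p /home/build/makepkg) and everything makepkg unpacks and compiles will land there instead of in RAM.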
Last edited by Gusar (2012-02-27 16:39:59)
Or just don't use yaourt or other helpers which default to building in RAM or /tmp.
IIRC it took a couple hours on my previous 1.6 GHz Pentium-M with 1GB RAM, so 18 hours on a 400 MHz P-II with 400 MB RAM doesn't sound impossible at all to me. It must have been swapping...
Maybe there's a way to add a "make localmodconfig" into linux-pf's PKGBUILD; it could help a lot with compilation time.
I guess there's also an option to compress the kernel with gzip instead of lzma, like CONFIG_KERNEL_GZIP or something.
Also if not already done, you probably want to change PKGEXT in /etc/makepkg.conf to:
PKGEXT='.pkg.tar.gz'
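For the gzip idea, a sketch of what the relevant .config lines would look like (these are the standard kernel compression options; double-check them under "Kernel compression mode" in menuconfig):
# compress the kernel image with gzip instead of lzma
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_LZMA is not set
gzip needs far less memory to compress than lzma, at the cost of a somewhat bigger image.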
IIRC it took a couple hours on my previous 1.6 GHz Pentium-M with 1GB RAM
No way. On my Pentium-M 1.73GHz, a fairly generic kernel takes about 22 minutes. The Arch kernel, which has everything including the kitchen sink in it, would take a few minutes more tops.
But wait a minute... Lockheed, are you saying you were actually compiling on an actual Pentium II? Can I have it?
stqn wrote: IIRC it took a couple hours on my previous 1.6 GHz Pentium-M with 1GB RAM
No way. On my Pentium-M 1.73GHz, a fairly generic kernel takes about 22 minutes. The Arch kernel, which has everything including the kitchen sink in it, would take a few minutes more tops.
No way. On my Core 2 Duo 2.1 GHz with 4GB RAM the Arch kernel compiles in 40-60 minutes (can't say for sure).
But wait a minute... Lockheed, are you saying you were actually compiling on an actual Pentium II? Can I have it?
No. It's mine. It's awesome and you can't have it. Seriously, I have an old lightweight LXbuntu 9.04 on it and it works like a charm.
I have a Pentium 133, too; unfortunately the mobo must be cracked somewhere as it rarely starts. What a pity.
Last edited by Lockheed (2012-02-28 08:14:38)
Just when it was about to finish, I got:
LZMA arch/x86/boot/compressed/vmlinux.bin.lzma
lzma: (stdin): Cannot allocate memory
make[2]: *** [arch/x86/boot/compressed/vmlinux.bin.lzma] Error 1
make[1]: *** [arch/x86/boot/compressed/vmlinux] Error 2
make: *** [bzImage] Error 2
==> ERROR: A failure occurred in build().
Aborting...
==> ERROR: Makepkg was unable to build linux-pf.
==> Restart building linux-pf ? [y/N]
==> ---------------------------------
==>
As has been pointed out, you are simply out of memory. You can work around this in several ways:
* don't use lzma, as it uses a lot of memory to compress; you will probably be better off with gzip anyway
* add some more swap, even if just a swapfile for the sake of allowing the compile to finish (see the sketch after this list)
* don't do the compile on /tmp
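For the swapfile option, a rough sketch (run as root; the 1 GiB size is only an example):
dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GiB file
chmod 600 /swapfile                             # swap files shouldn't be world-readable
mkswap /swapfile                                # format it as swap
swapon /swapfile                                # enable it for the duration of the build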
In principle a compile on tmpfs backed by swap should be faster than a compile on a regular block device, but I have never tried to measure the difference, so it would be interesting to know :-) The reason is that a tmpfs only writes to disk when it runs out of memory, whereas a regular fs writes to disk all the time.
The good news is that (unless whatever helper you used is stupid and overwrites everything, which, come to think of it, it probably does) you should be able to reuse all of the compiled files, so a recompile should be quick.
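As a hedged note on how that reuse might look in practice: if the extracted source tree is still sitting in the build directory, makepkg can be told not to re-extract (and thus not overwrite) it:
makepkg -e    # --noextract: reuse the existing src/ tree, keeping the objects built so far
This only helps if the helper didn't already wipe its build directory, of course.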
No way. On my Core 2 Duo 2.1 GHz with 4GB RAM the Arch kernel compiles in 40-60 minutes (can't say for sure).
Wtf, not possible. Really. On the weekend I'll be at the Pentium-M again and I'll compile a kernel using the Arch config file. I can't imagine it taking more than half an hour.
No. It's mine. It's awesome and you can't have it
Seriously, I have an old lightweight LXbuntu 9.04 on it and it works like a charm.
Whoa!
Though, of course it works like a charm; with 384MB you have plenty of RAM. Should be enough even for a kernel compile. Just don't compile in a tmpfs. And, like tomegun says, maybe switch from lzma to gzip compression. And a machine-specific config would also cut down on the compile time a lot.
I have a Pentium 133, too; unfortunately the mobo must be cracked somewhere as it rarely starts. What a pity.
My condolences
We had the habit of always giving away the old computer when we bought a new one. Sometimes I regret it. One of the masterpieces was a Pentium MMX 166 MHz with, get this, 3dfx Voodoo graphics. Yeah, the legendary accelerator. Oh how I wish I still had that. The oldest machine that's still at home is an AMD Duron 800 MHz, with I think a GeForce FX 5200.
Edit: Holy eff, I compiled a kernel with an Arch config on a Core i3-530. It took 24 minutes!! My fairly generic kernel takes 4:30 minutes, a machine-specific one 2:30 minutes. Man, the Arch kernel really does have everything including the kitchen sink in it. So I can now see it taking 60 minutes on a Core 2 Duo.
Compiling all that stuff on a slow machine makes no sense, so Lockheed I really suggest you make a machine-specific config. You'll get the kernel compiled in like 1/10 of the time.
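One possible way to get such a machine-specific config, sketched under the assumption that the running stock kernel exposes its config via /proc/config.gz and that every piece of hardware you need currently has its module loaded:
zcat /proc/config.gz > .config   # start from the running kernel's configuration
make localmodconfig              # drop everything not needed by the currently loaded modules
make menuconfig                  # optional: fine-tune by hand (e.g. pick gzip compression)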
Last edited by Gusar (2012-02-28 14:42:31)
Really, don't compile the kernel in a tmpfs mount.
The kernel is something you might want to keep anyway. I have all my "system" related packages in ~/pkg/sys (drivers and kernel).
Makes it easy to remember that anything in ~/pkg/sys goes hand in hand, has to be upgraded together, and that no reboot should happen until everything's done. (Especially if you use things like AUFS...)
Not only will it then finally work, it will also be much much faster, since you won't end up with a full tmpfs all the time.
And in reply to all the posts about how long it takes to compile a kernel:
Anywhere from a few minutes to a few hours, depending on your configuration. If you have a configuration made specifically for your hardware, i.e. only *one* network driver, SATA driver etc., then it will take minutes.
If you just use the standard config it'll take much, much longer, since that config is supposed to build modules for every kind of supported hardware.
Personally, I keep separate configs for every machine. A little more work to maintain, but worth the effort. Also: you tend to get to know your system better, and what kind of drivers you need.
The problem is I cannot tell yaourt to place the compilation anywhere else, and I also can't make it use the localconfigmod (?) parameter.
Personally, I keep separate configs for every machine. A little more work to maintain, but worth the effort. Also: you tend to get to know your system better, and what kind of drivers you need.
Apart from shorter compilation and marginally less memory occupied by the kernel, what other benefits are there?
Last edited by Lockheed (2012-02-28 19:05:43)
The problem is I cannot tell yaourt to place the compilation anywhere else, and I also can't make it use the localconfigmod (?) parameter.
The solution is very simple - don't use yaourt. The kernel is very self-contained: you have the kernel image itself and maybe an initramfs, which go to /boot, the modules go in /lib/modules/<kernel-version>/, and that's that. You don't really need a package manager to track that. You can still use package management though: just run the makepkg commands manually, after modifying the PKGBUILD to use your config. No yaourt necessary.
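A sketch of that manual workflow (the path is only an example, and it assumes you've already fetched and edited the linux-pf PKGBUILD and config):
cd ~/build/linux-pf              # somewhere on disk, not on a tmpfs
makepkg -s                       # build the package; -s pulls in missing build deps via pacman
pacman -U linux-pf-*.pkg.tar.gz  # install the result as root (adjust the extension to your PKGEXT)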
Apart from shorter compilation and marginally less memory occupied by the kernel, what other benefits are there?
Marginally faster boot time, maybe. But aren't these enough? Especially on an older machine, the difference in compile time is very significant. We're talking about a tenfold reduction in time here; that's quite something. Also, you get to know the kernel a lot better, as well as your hardware.
We had the habit of always giving away the old computer when we bought a new one. Sometimes I regret it. One of the masterpieces was a Pentium MMX 166 MHz with, get this, 3dfx Voodoo graphics. Yeah, the legendary accelerator. Oh how I wish I still had that.
eBay?
I was always more of a Riva guy.
The problem is I cannot tell yaourt to place the compilation anywhere else, and I also can't make it use the localconfigmod (?) parameter.
Personally, I keep separate configs for every machine. A little more work to maintain, but worth the effort. Also: you tend to get to know your system better, and what kind of drivers you need.
Apart from shorter compilation and marginally less memory occupied by the kernel, what other benefits are there?
Easier to mix in multiple additional patches for which you only find independent and separate packages, like -ck, and easier to keep provides=('linux-selinux') to use selinux, aufs, ...
Faster to restore the ALSA modules removed by OSS (since I sometimes use OSS)...
Easier to use git snapshots and merge in additional branches...
Last edited by Blµb (2012-02-28 22:03:37)