Will I get a significant speed boost from recompiling my system using CFLAGS tuned for my CPU? Including a possible boot-time bonus? NOTE: I'm not talking about recompiling the kernel to make it smaller, but about recompiling all packages, including the kernel, with my own system's CFLAGS.
Or should I select only certain packages for recompilation? Or should I not even bother with it at all and use the binary versions from the repos?
I'm mostly concerned with the responsiveness and speed of applications, not their RAM usage. I am on a 64-bit install.
Thank you for your response.
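For reference, the CFLAGS being discussed live in /etc/makepkg.conf; a minimal sketch with illustrative values (not a recommendation):

```shell
# /etc/makepkg.conf (excerpt) -- illustrative, adjust for your CPU
# -march=native makes gcc target the build machine's own instruction set
# (needs a reasonably recent gcc); -O2 and -pipe are the Arch defaults
CFLAGS="-march=native -O2 -pipe"
CXXFLAGS="${CFLAGS}"
# parallel make jobs, roughly one per core
MAKEFLAGS="-j2"
```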
People say that there's a performance gain, but I've never seen these claims backed up by benchmarks.
The main benefit, for me, is less troubleshooting. If I have fewer libraries linked to my executables, I know for sure that libfoo isn't the one crashing the app.
Before you recompile the whole system, think about how you are going to maintain a recompiled system in Arch.
It's relatively easy but pacman/makepkg/abs alone won't do, or at least not in a way that is practical and comfortable.
Last edited by gog (2009-10-28 02:07:37)
Even if things run slightly faster, you will never gain enough time to offset the time spent recompiling.
Compiling overnight, etc.
It's the same deal as people with slow connections: they download the packages while they're sleeping.
It's relatively easy but pacman/makepkg/abs alone won't do, or at least not in a way that is practical and comfortable.
What is this better way?
Even if things run slightly faster, you will never gain enough time to offset the time spent recompiling.
But how slightly is slightly? Is it noticeable? And if "it depends", as it most likely does, on what factors?
Thanks again for the prompt responses.
Compiling overnight, etc.
Still requires some setup or automation...
And if "it depends", as it most likely does, on what factors?
Well, what is your CPU?
gog wrote:It's relatively easy but pacman/makepkg/abs alone won't do, or at least not in a way that is practical and comfortable.
What is this better way?
You'll be downloading PKGBUILDs and patches, not packages. An abs sync does that for you, but updating with that alone is a nightmare. You need a script like pbget or a full blown wrapper like yaourt.
You also need a way to efficiently patch/sed the newer PKGBUILDs, so you'll probably end up writing your own scripts at one point. I don't recompile every package, though.
Last edited by gog (2009-10-28 02:22:49)
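The manual rebuild loop described above looks roughly like this (somepkg is a placeholder, and the repo directory under /var/abs depends on where the package lives):

```shell
# sync the ABS tree of PKGBUILDs for the official repos
abs
# copy the recipe out so local edits survive the next sync
cp -r /var/abs/extra/somepkg ~/builds/somepkg
cd ~/builds/somepkg
# edit the PKGBUILD here if you want custom flags, then build;
# -s installs the make-dependencies via pacman
makepkg -s
# install the resulting package
pacman -U somepkg-*.pkg.tar.gz
```

Repeating that by hand for every update is the impractical part, which is what the helper scripts automate.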
RedScare wrote:And if "it depends", as it most likely does, on what factors?
Well, what is your CPU?
Dual-core Intel Core 2 Duo. 2.6 GHz I think, definitely not lower.
You'll be downloading PKGBUILDs and patches, not packages. An abs sync does that for you, but updating with that alone is a nightmare. You need a script like pbget or a full blown wrapper like yaourt.
You also need a way to efficiently patch the newer PKGBUILDs with modified ones, so you'll probably end up writing your own scripts at one point. I don't recompile every package, though.
I used yaourt -Sb, but it was painfully slow when there were extra makedepends. Is that the "best" way?
Last edited by RedScare (2009-10-28 02:23:16)
I don't like yaourt, but it has customizepkg, which automatically modifies PKGBUILDs, provided there's a profile for the package in customizepkg.d.
I also haven't used yaourt in a while but I'm pretty sure that it has flags for not prompting.
Also look at pacbuilder.
If I were to do it, I'd much rather write my own stuff, since both of these programs do either more or less than they should.
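For the curious: a customizepkg profile is a file named after the package in /etc/customizepkg.d/. The line format below is from memory, so double-check it against customizepkg's own documentation; the package name and options are made up:

```
# /etc/customizepkg.d/somepkg -- hypothetical example
# format: action#context#pattern[#replacement]
remove#depends#libfoo
replace#global#--enable-bar#--disable-bar
```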
Allan wrote:Well, what is your CPU?
Dual-core Intel Core 2 Duo. 2.6 GHz I think, definitely not lower.
You might notice a difference if you are running i686; doubtful if you are running x86_64.
You might notice a difference in apps that take advantage of SSSE3 (not that there are many of those anyway), but other than that, not much.
But there is always the placebo effect, and the warm fuzzy feeling one gets from compiling stuff for no gain at all.
Last edited by Mr.Elendig (2009-10-28 12:14:06)
It's not the Arch way :) Try Gentoo for that.
And I don't think the speed difference will be noticeable. It isn't worth recompiling everything.
Arch is fast enough.
It's not worth it unless you think that kind of thing is fun. Certain applications can, however, be compiled with different options to make them faster. Firefox, for example, is worth recompiling if you dive into the mozconfig file and enable PGO, etc.
Also, recompiling the kernel with a patch and stripping the drivers you don't need can make your system boot faster and be more responsive (depending on the patch and your config). Try the rt-patch for snappiness.
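As a sketch of the Firefox case (option names from memory for that era's build system, so verify against the Mozilla build documentation before using):

```
# ~/.mozconfig -- hypothetical excerpt
. $topsrcdir/browser/config/mozconfig
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-ff
ac_add_options --disable-debug
ac_add_options --enable-optimize="-O2 -march=native"
# the PGO build was then driven by: make -f client.mk profiledbuild
```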
Just pick and choose what really will benefit. Some apps like x264 and FFmpeg will optimize at compile time. Most won't see much if any improvement.
Having both my own source-based spin-off and Arch installed on the same rig, I can tell you the difference is marginal in most day-to-day apps. There are exceptions when CPU features such as SSE3 are not compiled in, and some apps respond well to -Os (plus or minus a few flags here and there) for a smaller resulting binary instead of -O2, but then you get into a lot of work with per-package CFLAGS.
Honestly, the time you save by not having to maintain packages built from source will more than make up for any speed improvement gained. In my case it has become too much to maintain my own repo, write ebuilds and test packages, so it is time to drop my own blend and move on.
Last edited by ASOM (2009-10-28 15:56:55)
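The per-package CFLAGS mentioned above end up as overrides inside individual PKGBUILDs, roughly like this (hypothetical package, flags for illustration only):

```shell
# excerpt from a hypothetical PKGBUILD
build() {
  cd "$srcdir/$pkgname-$pkgver"
  # override the global makepkg.conf flags for just this one package
  export CFLAGS="-Os -pipe"
  export CXXFLAGS="$CFLAGS"
  ./configure --prefix=/usr
  make
}
```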
Thanks all for your detailed responses. I'm running x86_64, and compiling packages isn't exactly my idea of a good time, so I guess I'll just keep compiling the kernel and a few other apps.
Incidentally, has anyone who has used Gentoo seen a speed difference between it and Arch?
seen a speed difference
Depends on the measurement. Most users' PCs spend most of their time *idle*. The rare user who is performing e.g. video processing 24 hours a day will notice a difference.
This question has been asked countless times on the Gentoo forums, along with whether 64-bit is worth the hassle. The answer for roughly nine tenths of users is that neither is worth the maintenance hassle.
Speed differences can occur for many reasons which have nothing to do with CFLAGS, e.g. specific combinations of specific versions of software. I vaguely hear that gcc is getting faster (i.e. producing faster code), and glibc is getting *slower*.
Incidentally, has anyone who has used gentoo seen a speed difference between it and arch
I came from Gentoo, and haven't noticed any big difference on my laptop (Core 2 Duo, 1.8 GHz, x86) or my desktop (AMD 3200+, 2.2 GHz, 64-bit). While compiling from source can be fun sometimes, I don't miss having to compile each update, then realizing I left out a USE flag or something and having to recompile the whole package again.
Last edited by sctincman (2009-10-28 22:29:31)
Thanks again for everyone's answers. The only reason I would try Gentoo, or any other distro for that matter, over Arch is speed. If Gentoo isn't noticeably faster, then I'll just stick with wonderful Arch :)
Thanks again everyone.
Thanks again for everyone's answers. The only reason I would try Gentoo, or any other distro for that matter, over Arch is speed. If Gentoo isn't noticeably faster, then I'll just stick with wonderful Arch :)
Thanks again everyone.
Gentoo's advantage over Arch is its ultimate configurability (read: for control freaks only). You can precisely control which features you want through USE flags. The flip side is that you have to rebuild many packages if you later decide to change the USE flags.
Incidentally, has anyone who has used gentoo seen a speed difference between it and arch?
None at all. It was a waste of my time. But I had to, to get it out of my system: the "I wanna use a source-based distro and optimize my system" craze.
RedScare wrote:Thanks again for everyone's answers. The only reason I would try Gentoo, or any other distro for that matter, over Arch is speed. If Gentoo isn't noticeably faster, then I'll just stick with wonderful Arch :)
Thanks again everyone.
Gentoo's advantage over Arch is its ultimate configurability (read: for control freaks only). You can precisely control which features you want through USE flags. The flip side is that you have to rebuild many packages if you later decide to change the USE flags.
I think you can do that with Arch as well. I remember Allan mentioning somewhere about USE flags. Not sure which post.
roy_hu wrote:Gentoo's advantage over Arch is its ultimate configurability (read: for control freak only). You could precisely control what features you like through USE flags. The flip side of that is you have to rebuild many packages if you later decide to change the USE flags.
I think you can do that with Arch as well. I remember Allan mentioning somewhere about USE flags. Not sure which post.
How? I thought I had to fiddle with the PKGBUILD files. I don't think there's an infrastructure for such things in Arch, is there?
I remember Allan mentioning somewhere about USE flags. Not sure which post.
I'm fairly sure I did not... but you can use srcpac, which will apply a sed line to a PKGBUILD before building it during an update.
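To make the sed idea concrete, here is a toy, self-contained version of that step: rewrite one line of a PKGBUILD before handing it to makepkg (the PKGBUILD content is a stand-in, not a real package):

```shell
# fake PKGBUILD to operate on
cat > /tmp/PKGBUILD <<'EOF'
pkgname=demo
pkgver=1.0
arch=('i686')
EOF
# the kind of sed line srcpac applies before building,
# e.g. forcing the arch array to x86_64
sed -i "s/^arch=.*/arch=('x86_64')/" /tmp/PKGBUILD
grep '^arch=' /tmp/PKGBUILD    # prints: arch=('x86_64')
```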