Currently I'm using a dual-core Athlon 64 X2 6000+ (2x3GHz), but most programs use only one CPU core. For example, openmovieeditor used only one core, while avidemux used both when I was converting a video. I know some programs can use more cores with the proper config or arguments (make -j2, for example, I think), but even the kioslave uses only one CPU core during a file copy.
My question is: do I really need more than 2 CPU cores on a desktop computer? If the individual cores are slower, it can sometimes even be slower overall. Or is it just my lameness in configuration? Could it be better with a custom kernel?
Offline
Hi.
It always depends on what you do with your computer. Personally, I am very happy with my quad core here, because I do a lot of web development, where many applications run at once (e.g. application server, web server, browser, IDE) and each of them can run on a separate core. But those are different applications. If you used openmovieeditor for video editing and it only used one core, then it's the program that is limited to one core (apparently it only uses one thread for processing). So I guess a custom kernel won't bring many advantages.
But my guess is that even if you don't do things like web dev, a quad core would still bring performance improvements in daily work. For example, consider your torrent program, your backup, and Firefox all running at the same time (which shouldn't be such a rare situation). Clearly a CPU with more cores would be more efficient here.
Offline
You can check the AUR for versions of apps that can use multi-threading or SMP (yeah, some of them don't actually use SMP - they show up as false positives).
Swapping a 3GHz dual-core for a 2.5GHz quad-core with less cache memory may result in an overall slowdown.
Offline
It is surprising how little software is written to take advantage of multiple cores. This will change eventually, though. Multiple cores have only become mainstream in the past 4 years or so, and it takes a while to rewrite code. A custom kernel won't help, though, as long as you're already using an SMP kernel - and AFAIK all the pre-compiled ones are right now.
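(To verify you're on an SMP kernel, a quick check, assuming your kernel exposes its config at /proc/config.gz like the stock Arch one does:
    zgrep CONFIG_SMP /proc/config.gz    # should print CONFIG_SMP=y
    grep -c ^processor /proc/cpuinfo    # logical CPUs the kernel sees
)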
Last edited by brianhanna (2010-04-11 13:38:36)
Offline
> It is surprising how little software is written to take advantage of multiple cores.
My previous post was meant to show that there are mt/smp versions of some apps, but many others use multiple cores without sporting 'mt' in the name: Ardour comes to mind off the top of my head. IIRC it can use up to three cores, so a tri- or quad-core is a good idea if you're going to use it often.
https://www.archlinux.de/?page=PackageStatistics
Warning: it's a biiig page.
Submissions per architectures:
i686 54.01 %
x86_64 45.99 %
Not every x86_64 box is multicore and some i686 boxes are, but let's say half of them don't need mt-enabled apps because they can't use them anyway.
Offline
Same here. I have a dual-core laptop and it's a pain not being able to run any *heavy* programs or compile when needed.
Granted, I also need a bigger screen for that, but that's another matter.
Offline
Even if <application> does not do SMP, that doesn't mean the extra cores are wasted. E.g. you could run some CPU-heavy task that loads one core to 100%, and your desktop would still be fully responsive with no noticeable slowdowns.
@ gtklocker
The -j flag to make tells it how many workers to spawn; for the fastest compile time, the general rule of thumb is /n/ + 1, where /n/ is the number of cores.
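A hedged example of that rule, assuming a coreutils new enough to ship nproc:
    # one job per core, plus one extra to cover jobs stalled on I/O
    make -j$(($(nproc) + 1))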
Last edited by Mr.Elendig (2010-04-11 17:32:42)
Evil #archlinux@libera.chat channel op and general support dude.
. files on github, Screenshots, Random pics and the rest
Offline
FYI, the -j flag tells make how many jobs to run in parallel, resulting in a faster compile time; it does not make the compiler itself use more cores. For a program to utilize multiple cores, it must be written using threading and other parallel-processing techniques. The reason many programs only use one core is that it is much easier to implement, maintain, and troubleshoot a program that only uses one core (no worrying about synchronization, deadlocks, and other pitfalls).
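If you want to check whether a given program is actually threaded, something like this works (firefox is just an example process name):
    ps -eLf | grep firefox          # one line per thread (see the LWP column)
    top -H -p $(pidof firefox)      # live per-thread CPU usage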
Hofstadter's Law:
It always takes longer than you expect, even when you take into account Hofstadter's Law.
Offline
I don't think software designed for multiple cores is as important as running multiple apps. With a 4-core you can multi-task far more efficiently; that's the real benefit.
Personally, I'd rather be back in Hobbiton.
Offline
When you use software capable of multi-threading efficiently, it sure has benefits.
But having more cores is also useful for apps less capable of multi-threading. I'm never running just one thing on my desktop; nobody is. You always have multiple things running: the kernel, X, loggers, a browser, perhaps something for email and/or IM, terminals, you name it... Every process can be assigned to a separate core, which decreases the load on the other cores.
I have an i7 860; it has 8 threads (quad core with Hyper-Threading), and when I'm doing lots of things at the same time I really benefit from it over my old dual-core (E2180 @ 3GHz) configuration (I had a Q9550 in between for a short time as well). Ideally I see all the cores getting some work and their individual loads staying low. This i7 860 stays in its low-power state at 1200MHz most of the time, because it can divide the work over its threads so nicely.
Back to your dual core now. With compiling and encoding video, having extra cores is superb; it really speeds up operations. I use -j12 at the moment; -j9 is not enough to keep all the threads busy during a compile job.
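As an aside on assigning processes to cores: the scheduler does this placement automatically, but you can pin things by hand with taskset if you want to experiment (the PID and commands below are made-up examples):
    taskset -cp 0 1234        # move the (hypothetical) PID 1234 onto core 0
    taskset -c 2,3 make -j2   # start a build confined to cores 2 and 3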
Last edited by Ultraman (2010-04-11 19:05:48)
Offline
Ultraman wrote: I'm never running just one thing on my desktop; nobody is. You always have multiple things running: the kernel, X, loggers, a browser, perhaps something for email and/or IM, terminals, you name it... Every process can be assigned to a separate core, which decreases the load on the other cores.
Errr, I have only a P4 2GHz and I'm running all of the above on that single-core 6yo box. There's a thing called a scheduler, ya know ...
Most of the time the CPU waits for *me*, not the other way 'round.
Compiling and other heavy lifting is another story.
Offline
Ultraman wrote: I'm never running just one thing on my desktop; nobody is. You always have multiple things running: the kernel, X, loggers, a browser, perhaps something for email and/or IM, terminals, you name it... Every process can be assigned to a separate core, which decreases the load on the other cores.
Errr, I have only a P4 2GHz and I'm running all of the above on that single-core 6yo box. There's a thing called a scheduler, ya know ...
Most of the time the CPU waits for *me*, not the other way 'round. Compiling and other heavy lifting is another story.
But with multicore the system load will be lower and the system will be more responsive. It's like adding RAM: it doesn't make your PC faster, but it lets you run more programs before the system starts lagging.
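If you want to see the load being spread, mpstat from the sysstat package shows per-core utilisation, roughly like this:
    mpstat -P ALL 1    # one row per core, refreshed every second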
Offline
Ultraman wrote: I'm never running just one thing on my desktop; nobody is. You always have multiple things running: the kernel, X, loggers, a browser, perhaps something for email and/or IM, terminals, you name it... Every process can be assigned to a separate core, which decreases the load on the other cores.
Errr, I have only a P4 2GHz and I'm running all of the above on that single-core 6yo box. There's a thing called a scheduler, ya know ...
Most of the time the CPU waits for *me*, not the other way 'round. Compiling and other heavy lifting is another story.
You're not running 20 things at once on a single-core machine. Schedulers put things in a queue. Modern processors/RAM/disks are fast enough to give the illusion of continuity.
Offline
The problem is that writing applications so that they use multiple cores isn't easy, and many algorithms are hard or even impossible to parallelize. Others (like raytracing, for example) are easy and very efficiently parallelized. Whether you can use them depends entirely on your workload.
For example, compiling a kernel can easily scale quite well up to dozens of CPUs with make -j n, where n is the number of CPUs used. Or a MySQL server, POV-Ray, etc. can easily use many cores.
One of the biggest advantages, though, is that with 4 cores you can easily run a make -j 3 AND have your desktop stay responsive.
If you generally have many programs open at once you will profit from many cores.
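A minimal sketch of that "compile and stay responsive" idea, lowering the build's priority and leaving one core's worth of headroom:
    nice -n 19 make -j3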
Offline
karol wrote: Ultraman wrote: I'm never running just one thing on my desktop; nobody is. You always have multiple things running: the kernel, X, loggers, a browser, perhaps something for email and/or IM, terminals, you name it... Every process can be assigned to a separate core, which decreases the load on the other cores.
Errr, I have only a P4 2GHz and I'm running all of the above on that single-core 6yo box. There's a thing called a scheduler, ya know ...
Most of the time the CPU waits for *me*, not the other way 'round. Compiling and other heavy lifting is another story.
You're not running 20 things at once on a single-core machine. Schedulers put things in a queue. Modern processors/RAM/disks are fast enough to give the illusion of continuity.
That was exactly the thing I had in mind. I assumed people here know what a scheduler is.
Offline
The -j flag to make tells it how many workers to spawn; for the fastest compile time, the general rule of thumb is /n/ + 1, where /n/ is the number of cores.
Only with a crappy scheduler. That was definitely true before the O(1) scheduler in Linux, and probably even with it; it's one of the reasons BFS got written. But with CFS, kernel builds on my quad core are slightly faster with -j4 than with -j5; similarly, -j2 is faster than -j3 on my dual core. I don't think -j2 on a single core was ever a good idea.
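Anyone can repeat that experiment; a rough loop along these lines (run inside a kernel tree, and expect it to take a while):
    for j in 2 3 4 5; do
        make clean > /dev/null
        echo "jobs: $j"; time make -j$j > /dev/null
    done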
I'm building a dual-6-core (with Hyper-Threading) beast this summer, partly because I'm working on writing a multi-threaded raytracer, and partly so I win every computer-spec pissing contest I'm in for a while. I'm excited to type -j24.
Offline
-j12 on my Core i7 860 is no problem. Desktop is still responsive enough for my needs.
I didn't really test a lot of different settings; I just noticed that -j9 did not saturate my CPU, so I figured I should try -j set to n (number of threads) + c (number of physical cores), which makes 8 + 4 = 12. That keeps every core saturated and has a job ready whenever another job on a thread finishes. A kernel build finished a lot sooner than it did with -j9 (and the default -j2, for that matter).
And yes I know what a scheduler is.
I use the Arch kernel and its default scheduler. If you use the BFS scheduler, it's possible that setting n to just the number of threads would be best.
Offline
Sure, the scheduler allows you to switch between processes on the same CPU, but you still have the overhead of the context switch (http://en.wikipedia.org/wiki/Context_switch).
Try running HandBrake ripping a DVD and compiling the Linux kernel at the same time on a single-core CPU. Compare this to a dual-core/multi-core and you'll notice the difference.
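You can actually watch those switches pile up with vmstat - the 'cs' column is context switches per second:
    vmstat 1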
The major advantage of multi-core CPUs is that we can lower the clock frequency but still keep the responsiveness and increase throughput. With 3.x GHz CPUs we hit the power wall (http://edwardbosworth.com/My5155_Slides … erWall.htm). Lowering the clock and increasing the number of cores allows us to decrease power usage while still increasing performance (when thinking about the overall throughput of your system). The software guys just have to catch up if they want to increase the performance of a single app on a multi-core system.
edit: corrected an error in terminology
Last edited by rubend (2010-04-13 15:51:19)
Offline
[OT]
@rubend
The paper on the power wall mentions the IBM system z/11 - where did that come from? I've read only about z/9 and z/10. Do you have any sources or should I ask the author?
"The technique taken by IBM for implementation on their large z/10 and z/11 servers was to include sophisticated and costly cooling technologies."
[/OT]
Offline
[OT]
@karol
z/11 series is about to come out.
See http://www.thehotaisle.com/2009/09/25/b … mainframe/ and http://itknowledgeexchange.techtarget.c … t-3q-2010/ about the new z/11.
If you want more information you should look it up yourself or ask the author as you suggested. (I only found these links through a quick google myself)
[/OT]
Offline
For an example of a breathtakingly multicore app, see x264.
I believe the point where you stop seeing a > 95% performance increase per doubling of cores is somewhere around 16 or 32.
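Its thread count can also be forced from the command line if you want to test the scaling yourself (file names here are just placeholders):
    x264 --threads 8 -o output.264 input.y4m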
Last edited by Ranguvar (2010-04-13 21:39:32)
Offline
I have to say that I feel a notable difference when I run my CPU with one core versus two.
A friend of mine is using an Intel Core 2 Quad and his computer owns mine at light speed (I have a 2x2GHz Intel T4200), so maybe KIO doesn't use both cores for copying a file, but especially on KDE you can see a real speed increase.
Offline
I was shocked the other day to hear that ReiserFS is not a good fit for multicore CPUs.
Guess I have to reinstall Arch yet again. Should I go with ext4?
Offline
This is not Windows; you don't have to reinstall for something so trivial.
And what is wrong with reiserfs on multi-core? I've never had a problem and have run it on multi-core systems for years...
Offline