Hi All,
I have a general question about the Linux kernel that I am looking for an answer to. Please remember before you continue reading: I am not passing any kind of judgment, I am just looking for facts here.
Does the Linux kernel 'pretty much' lock a process to a physical CPU? I say 'pretty much' because, in drastic situations, I can see where it would be beneficial to move all of a process's data (heap info as well as thread info) from one CPU to another. I ask because I see issues with making sure that all of the process info that could be residing in the CPU caches gets relocated correctly. Or am I really in the weeds here?
Offline
If I understand the problem correctly, the thing you described is possible. There is a sched_setaffinity call that can tie a specific task to a given set of CPUs, but aside from some real-time and virtualisation software I have never seen it used.
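For anyone curious, here is a minimal sketch of what that call looks like; the CPU numbers are just for illustration:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);   /* allow CPU 0 */
        CPU_SET(1, &set);   /* ... and CPU 1 */

        /* pid 0 means "the calling task" */
        if (sched_setaffinity(0, sizeof(set), &set) == -1) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now restricted to CPUs 0 and 1\n");
        return 0;
    }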
Offline
What brought me to this question is the push for application developers (game developers, to be specific) to take advantage of multiple cores. Personally, I am a big fan of threads, but I don't believe that multi-threading is the best way to take advantage of multiple cores, if what I believe about processes is true.
Offline
What brought me to this question is the push for application developers (game developers, to be specific) to take advantage of multiple cores. Personally, I am a big fan of threads, but I don't believe that multi-threading is the best way to take advantage of multiple cores, if what I believe about processes is true.
To the Linux kernel, there is no concept of a 'thread'; rather, separate threads are all treated as standard processes. A thread is merely a process that shares resources with another. As a result, threads receive no special scheduling semantics over a normal process.
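You can see this at the syscall level with clone(2). Here is a minimal sketch; the flags are roughly what pthread_create asks for, and the values are just illustrative:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int shared = 0;

    static int worker(void *arg)
    {
        shared = 42;    /* visible to the parent: the VM is shared */
        return 0;
    }

    int main(void)
    {
        static char stack[64 * 1024];

        /* CLONE_VM | CLONE_FS | CLONE_FILES is roughly what a "thread"
           is: a new task that shares this task's resources. */
        pid_t pid = clone(worker, stack + sizeof(stack),
                          CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
        if (pid == -1) { perror("clone"); return 1; }

        waitpid(pid, NULL, 0);
        printf("parent sees shared = %d\n", shared);   /* prints 42 */
        return 0;
    }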
As a RULE, don't enforce or interfere with kernel scheduling within the application, and never try to force the kernel to do *anything*. Leave it to the kernel to determine which processor the application runs on.
Forcing policy on the kernel is broken application design and, down the track, nearly always causes problems when there are changes in the kernel.
James
Last edited by iphitus (2007-04-12 08:52:28)
Offline
iphitus wrote: To the Linux kernel, there is no concept of a 'thread'; rather, separate threads are all treated as standard processes. A thread is merely a process that shares resources with another. As a result, threads receive no special scheduling semantics over a normal process.
I think I understand this part. What I am curious about is whether the kernel will ever take a thread from, say, process A that is currently running on CPU #1 and schedule a second thread from process A on, say, CPU #2. I can see where this could be done quickly when the cores are on a single die, as we are seeing more and more these days. But with two physical chips this could be difficult since, as I said, the thread/process info could be spread across several physical caches. I can't believe the kernel can/should make a distinction between the two kinds of cores, hence my question/confusion.
iphitus wrote: Forcing policy on the kernel is broken application design and, down the track, nearly always causes problems when there are changes in the kernel.
This part I completely agree with, other than letting a process give some kind of hint to the kernel about how 'fast' the process/thread should be serviced for real-time-like activities.
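Something along the lines of nice(2), say, which is about the simplest such hint a process can give (the value below is just illustrative; raising priority would need privileges, so this only asks for less):

    #include <unistd.h>
    #include <stdio.h>
    #include <errno.h>

    int main(void)
    {
        errno = 0;
        int prio = nice(5);     /* be nicer: ask for less CPU time */
        if (prio == -1 && errno != 0) {
            perror("nice");
            return 1;
        }
        printf("new nice value: %d\n", prio);
        return 0;
    }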
Offline
iphitus wrote: To the Linux kernel, there is no concept of a 'thread'; rather, separate threads are all treated as standard processes. A thread is merely a process that shares resources with another. As a result, threads receive no special scheduling semantics over a normal process.
I think I understand this part. What I am curious about is whether the kernel will ever take a thread from, say, process A that is currently running on CPU #1 and schedule a second thread from process A on, say, CPU #2. I can see where this could be done quickly when the cores are on a single die, as we are seeing more and more these days. But with two physical chips this could be difficult since, as I said, the thread/process info could be spread across several physical caches. I can't believe the kernel can/should make a distinction between the two kinds of cores, hence my question/confusion.
The two points at which I remember the Linux scheduler switching CPUs are exec and fork: the scheduler performs some balancing of the load across the CPUs and selects the one to be used by the new process. You may want to check this out: http://lxr.free-electrons.com/source/kernel/sched.c
From the kernel's point of view, both processes and threads map to a task_struct.
iphitus wrote: Forcing policy on the kernel is broken application design and, down the track, nearly always causes problems when there are changes in the kernel.
This part I completely agree with, other than letting a process give some kind of hint to the kernel about how 'fast' the process/thread should be serviced for real-time-like activities.
It's not so clearly a design issue. Say you have a chip with 4 ARM cores and a task that does some heavy, critical processing which cannot be parallelised and has hard real-time constraints; then an obvious solution is to tie the task to one CPU and use some decent scheduling policy for it (FIFO, perhaps), while the other tasks use the remaining 3 CPUs.
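A sketch of that combination, reusing sched_setaffinity from above plus sched_setscheduler; the CPU number and priority are made up, and SCHED_FIFO needs root:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                   /* this task gets CPU 0 only */
        if (sched_setaffinity(0, sizeof(set), &set) == -1)
            perror("sched_setaffinity");

        struct sched_param sp = { .sched_priority = 50 };  /* FIFO range is 1..99 */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
            perror("sched_setscheduler");

        /* ... the hard real-time work would run here, alone on CPU 0 ... */
        return 0;
    }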
Similarly, say you have a huge machine with 32 CPUs that handles lots of web traffic, but at the same time you want to keep some processing power reserved for other tasks. It's possible to restrict the apache processes to some set of CPUs that they are allowed to run on, thus leaving some processing time for the others.
But unless you have lots of CPUs or some elevated timing requirements, there is no need to bother the scheduler.
Offline