"Be conservative in what you send; be liberal in what you accept." -- Postel's Law
"tacos" -- Cactus' Law
"t̥͍͎̪̪͗a̴̻̩͈͚ͨc̠o̩̙͈ͫͅs͙͎̙͊ ͔͇̫̜t͎̳̀a̜̞̗ͩc̗͍͚o̲̯̿s̖̣̤̙͌ ̖̜̈ț̰̫͓ạ̪͖̳c̲͎͕̰̯̃̈o͉ͅs̪ͪ ̜̻̖̜͕" -- -̖͚̫̙̓-̺̠͇ͤ̃ ̜̪̜ͯZ͔̗̭̞ͪA̝͈̙͖̩L͉̠̺͓G̙̞̦͖O̳̗͍
Offline
Has anyone tried this? As soon as I get home, I'm gonna screw around with this a bit and see if I can crash my computer. Do the nitro or mm kernels take care of this at all?
fffft!
Offline
You can help prevent fork bombs by limiting the number of processes per user using ulimit. Also, someone would have to hack into an account to be able to use this attack on your box.
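For a quick, non-persistent version of that idea, here's a sketch using bash's ulimit builtin (the limit only applies to the current shell and its children; actual defaults vary by system):

```shell
# Show the current soft limit on user processes for this shell
ulimit -S -u

# Lower it for this session only; a fork bomb started from here
# dies once the process count hits the limit
ulimit -S -u 100
```

The limits.conf approach is the persistent, per-login version of the same restriction.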
A slashdotter posted that typing
:(){ :|:& };:
on the bash prompt will test if you are vulnerable. Can't vouch for it, as I don't have access to my linux box right now.
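For the curious, the one-liner is just an obfuscated recursive function; ":" happens to be a legal function name in bash. A readable equivalent (defined here but deliberately never invoked):

```shell
# De-obfuscated version of :(){ :|:& };:  -- defining the function is
# harmless, CALLING it is the fork bomb, so the call stays commented out.
bomb() {
    bomb | bomb &   # each call spawns two more copies in the background
}
# bomb             # uncommenting this forks until some limit is hit
```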
I have nothing to say, and I am saying it.
Offline
Also, someone would have to hack into an account to be able to use this attack on your box.
Not everyone has just single user boxen.
Offline
Offline
formats:
bash:
$ :(){ :|:& };:
perl:
$ perl -e "fork while fork"
c:
#include <unistd.h>
int main(void) { while (1) fork(); }
Offline
I added the following to my /etc/security/limits.conf:
* soft nproc 50
* hard nproc 100
* soft rss 100000
* hard rss 150000
With the default limits X froze to death. Alt+SysRq+i/e/k (I think I tried them all, so I'm not sure which one did the actual trick) did work to get rid of it though. But because I also did Alt+SysRq+u to remount read-only, I needed to reboot (Alt+SysRq+b) anyway, as I couldn't log in anymore.
I think sane limits should be the default in Arch.
Offline
$ perl -e "fork while fork"
Sounds like a punk song to me!
DUH-DAH DUH-DAH DUH-DAH... FORK WHILE FORK! :twisted:
Offline
heh, yeah it works, but maybe I shouldn't have run it as root.
Offline
yeah the big deal is that it works as a non-root user... I'm trying it when I get home
Offline
I added the following to my /etc/security/limits.conf:
* soft nproc 50
* hard nproc 100
* soft rss 100000
* hard rss 150000
Looks good i3839... what is rss though? Max resident set size (KB)...
What is a resident set size for?
I think sane limits should be the default in Arch.
I agree.
Offline
yeah the big deal is that it works as a non-root user... I'm trying it when I get home
Yeah, I just went to my screen session, which happened to have a root terminal in it.
Offline
Tried as a non-root user, and it brought things to a stop fast. Setting some reasonable limits took care of things.
If you develop an ear for sounds that are musical it is like developing an ego. You begin to refuse sounds that are not musical and that way cut yourself off from a good deal of experience.
- John Cage
Offline
I use 75/150 for my limits... Maybe we should figure out some good default settings. Anyone willing to spend some time and figure out at what number it starts failing?
Offline
The max number of processes should be greater than 100. One hundred processes is not enough, especially if you compile, because make uses a lot of processes. I have 78 processes currently running and I am just using a few Konsoles in KDE. However, I don't know what that number should be.
Offline
RSS is the closest to the real RAM usage of a program. It still includes shared things like libc and other libraries, but that's one MB, or with bloated apps less than 10, and fixed in size from the start. So all in all it's the right variable to limit the real RAM usage of an app (an alternative would be to limit virtual memory, but asking for a lot of virtual memory doesn't hurt; actually using it all does).
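A quick way to see that difference in practice (assuming a procps-style ps where rss and vsz are reported in KB):

```shell
# Compare resident (RSS) and virtual (VSZ) size for the current shell;
# RSS is RAM actually in use, VSZ merely reserved address space.
ps -o pid,rss,vsz,comm -p $$
```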
What the proper limits are depends on your hardware, faster comps can have higher limits.
Compiling doesn't take that many processes; most are very short-lived, and more than 4 parallel compiles isn't very useful. But if you run big monsters like KDE or GNOME then the total number of user processes goes up, of course, so 100 is a bit low.
I'll try different settings on my PC and see what works. If it's OK for my 600 MHz box then it's probably OK for most Archers. I'll see how high I can go.
Offline
For my PC (600 MHz, 256 MB RAM), 2048 processes is the limit before it becomes unmanageable. In the console it's easier to recover than in X; there I needed to switch to a console with Ctrl+Alt+F2 and then kill the forkbomb from there.
As the limit is per user and not global, 1024 should be plenty for almost everyone. People who need more, like big servers, probably need to change other limits as well anyway.
Figuring out the max memory per process is much harder, as that depends on the amount of RAM, what programs are going to run, and what the user wants. Safest would of course be to use totalmem/(maxprocess*maxusers) as the limit, but then you can hardly run anything... So I think it's best to set no default rss limit.
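A quick back-of-the-envelope check of why that formula is useless as a default, using the 256 MB box and the 2048-process limit from above (a single user assumed):

```shell
# 256 MB expressed in KB, divided evenly over 2048 processes:
echo $(( 256 * 1024 / 2048 ))   # prints 128 -> 128 KB per process
```

128 KB per process wouldn't even fit a shell, which is exactly why a default rss limit makes little sense.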
So all in all I recommend the following limits:
* soft nproc 1024
* hard nproc 2048
Offline
I'll try those limits, forkbomb my system, and see how it goes...
Offline
Currently my proc limit is 2046 and I haven't changed any setting.
Offline
What exactly is the difference between soft and hard limits?
Offline
The soft limit is the one that's actually enforced. The user may always lower it, and may raise it again, but only up to the hard limit.
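Sketched with bash's ulimit builtin (-S for soft, -H for hard; this assumes the usual case where the hard limit is well above 512):

```shell
ulimit -H -u      # hard limit: the ceiling
ulimit -S -u      # soft limit: what is actually enforced
ulimit -S -u 512  # any user may move the soft limit, as long as it
                  # stays at or below the hard limit
```

A non-root user can also lower the hard limit itself, but that change is one-way for the rest of the session.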
Offline
With the default limits in limits.conf my system survived, but when I updated them like i3839 suggested, my box crashed...
Offline
1024 processes is a bit high. Even if I run KDE with OpenOffice, Mozilla, GIMP, Eclipse and some other applications, my number of processes does not exceed 55. So I think a limit of 100 or maybe 200 processes should be enough.
Offline
With the default limits in limits.conf my system survived, but when I updated them like i3839 suggested, my box crashed...
What were your default limits? I had none, so perhaps you edited it in the past? (Running pam-0.78-4 here.) By default Arch had no limit at all in /etc/security/limits.conf, and the system-wide default of 4096 was used, at least on my PC.
Also, what hardware do you have? Mainly CPU and RAM.
Of course 1024 is high, but the point is to prevent a fork bomb from doing any damage, not to limit the user's max number of processes. After testing, 2048 seemed to be the limit on my system, so taking half of that seemed reasonable to me. Desktop systems won't have more than a couple of hundred processes, but a busy server can.
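If it helps the comparison: on Linux that system-wide default isn't configured anywhere in /etc; the kernel derives the initial per-user process limit from its global thread cap at boot (roughly threads-max / 2, which is itself sized from RAM — which would explain different defaults on different boxes). Both are easy to inspect:

```shell
# Inspect the kernel's global thread cap and the resulting per-user limit
cat /proc/sys/kernel/threads-max
ulimit -u
```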
Offline
Desktop systems won't have more than a couple of hundred processes, but a busy server can.
Those processes are distributed among various users. Many daemons run as a special user (PostgreSQL processes are owned by the user postgres, MySQL by user mysql, Apache by user nobody, or whichever user you configure for them). Even if a single user ends up owning many of those service processes, I still think normal (real) users should be fine with 100 or at most a few hundred processes. Something like:
@users soft nproc 100
@users hard nproc 200
* soft nproc 1024
* hard nproc 1500
Does anybody know what the default/suggested/common limits on *BSD systems are? They would probably be a good reference.
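One practical note on the @users syntax above: those entries only apply to accounts that are actually members of the group named users, which is worth double-checking before relying on them:

```shell
# limits.conf's @users lines match members of group "users"
id -Gn                                         # groups of the current user
getent group users || echo "no 'users' group"  # members of that group, if any
```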
Offline