Hi.
EDIT: Oh, btw, this is just a rant...
I've just seen the release announcements for 2007.08, and I see the 128M memory statement there. Makes me wonder: is Linux (as a platform, together with things like KDE, FF, OOo) becoming more memory-hungry as we speak? It's mostly my gut feeling, but the system is getting more and more demanding. It's not as snappy as it used to be.
I switched to the CK patchset recently, so it's a bit better, but just a wee bit. The MM patchset and 2.6.23-rc1 are a bit better too. But still... more and more distros are requiring more memory to run.
Last edited by foxbunny (2007-08-08 12:02:39)
No, it's just the horrible way the installer works: everything is in the initrd except the packages. Thus the initrd grows, and the memory requirements grow with it.
James
If I'm not mistaken, you have to have at least 96 megs free for mkinitcpio to do its thing at boot time. That's just the way it works atm, but this is by no means an indicator of how much memory the system will use once the init and boot procedure is finished.
My experience is that Linux + userland is constantly improving, both in terms of memory usage and responsiveness. The CFS scheduler and other improvements in coming kernels will probably render the CK patches more or less useless (even though CK is in reality discontinued from now on...).
KDE is certainly getting speedier with every release, and so is GNOME. The KDE developers are even saying KDE4 will be even more memory efficient. That may depend on which new features you choose to utilize in KDE, of course. Much of this is thanks to optimizations done in GCC, Cairo, display drivers, etc.
I think a lot of people will agree with me. You are free to disagree, though.
I don't think you should worry too much about these things. The fact is that one day we might miss out on really good features just because we are too occupied with memory usage. There's a balance, you know. When hardware prices drop, the logical approach is to use that extra headroom to improve our systems, and that doesn't have to mean bloat (unnecessary, bad overhead)...
"Your beliefs can be like fences that surround you.
You must first see them or you will not even realize that you are not free, simply because you will not see beyond the fences.
They will represent the boundaries of your experience."
SETH / Jane Roberts
Hi.
EDIT: Oh, btw, this is just a rant...
I've just seen the release announcements for 2007.08, and I see the 128M memory statement there. Makes me wonder: is Linux (as a platform, together with things like KDE, FF, OOo) becoming more memory-hungry as we speak? It's mostly my gut feeling, but the system is getting more and more demanding. It's not as snappy as it used to be.
I switched to the CK patchset recently, so it's a bit better, but just a wee bit. The MM patchset and 2.6.23-rc1 are a bit better too. But still... more and more distros are requiring more memory to run.
Try recompiling the current 2.6.22 kernel with the SLUB allocator enabled - it has more impact in terms of responsiveness than the CK patch ever had, IMO. Personally, I'm very impressed with how things are unfolding in kernel land (and Linux in general) - at the moment my oldish laptop is faster and more responsive than ever before, and it definitely works better than when it was brand new.
In other words, my gut feeling disagrees with your gut feeling.
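For anyone wanting to try the suggestion above, here is a hedged sketch of what "recompiling 2.6.22 with SLUB" involves. The menu path and option names are from the 2.6.22 era; the source-tree path is illustrative, and this is a build/config outline rather than a tested recipe:

```shell
# In the kernel source tree, pick SLUB instead of the default SLAB:
#   make menuconfig -> General setup -> Choose SLAB allocator -> SLUB (Unqueued Allocator)
# That selection should leave these lines in .config:
#   CONFIG_SLUB=y
#   # CONFIG_SLAB is not set
cd /usr/src/linux-2.6.22          # illustrative path
grep -E 'CONFIG_SL[AU]B' .config  # verify the allocator choice before building
make && make modules_install      # then install the image and rebuild the initrd
```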
FYI, the ck kernels use the SLUB allocator.
My server has very little memory and really uses very little of it; try getting a Windows server running in that small a footprint.
[gary@server ~]$ free -m
                   total       used       free     shared    buffers     cached
Mem:                  58         56          1          0          6         27
-/+ buffers/cache:               22         35
Swap:                125          0        125
Last edited by gazj (2007-08-08 20:02:43)
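For readers puzzled by the "-/+ buffers/cache" row above: `free` derives it from the Mem row, since buffers and cache are reclaimable and shouldn't count as memory applications are stuck with. A small sketch using the figures from that output (the one-MiB discrepancies against the displayed 22/35 come from `free -m` rounding each column independently):

```python
# Figures taken from the `free -m` output above, in MiB.
total, used, free_mem, buffers, cached = 58, 56, 1, 6, 27

# "-/+ buffers/cache": treat buffers and page cache as available.
used_by_apps = used - buffers - cached       # memory applications actually hold
free_for_apps = free_mem + buffers + cached  # truly free + reclaimable

print(used_by_apps, free_for_apps)  # close to the displayed 22 / 35
```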
Depending on the options used, how much performance can one juice out of a kernel by (a) compiling a stock kernel and (b) compiling a stock kernel with patches? IOW, is it worth trying compared to just using the precompiled ck (or mm, or whatever) kernel from the repos?
Depending on the options used, how much performance can one juice out of a kernel by (a) compiling a stock kernel and (b) compiling a stock kernel with patches? IOW, is it worth trying compared to just using the precompiled ck (or mm, or whatever) kernel from the repos?
a) I don't think you'll notice significant performance improvement, if any.
b) Depends on the patches - the benefit here, as I see it, is not so much performance, but the ability to only include the patches you need.
I've used the 2.6.21ck kernel in the past, and my recompiled 2.6.22 (with SLUB) was perceptibly faster. I haven't tried 2.6.22ck, but given that it uses SLUB as well, it should be a really good performer. There are some other kernels in the AUR (fallen, kamikaze, pierlo, and so on) that might also be worth checking out.
Personally, I just use the standard Arch 2.6.22 kernel recompiled with the Suspend2 patch, SLUB, and some minor hardware-specific configuration changes, which is plenty fast for my needs and very stable.
Is there a way that, if I am running Fluxbox and have 2 GB of RAM, I could make the OS run from RAM? I know some live CDs do this, and it is super fast.
@Anonymo
You can make the OS load most of the things it uses into RAM, even the entire OS (I think Puppy can do that?).
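The mechanism behind this is a RAM-backed filesystem: live CDs copy the system into a tmpfs so reads never touch the disc. A minimal sketch of the same idea, using `/dev/shm` (a tmpfs mount present on most Linux systems; the directory and file names here are made up for illustration):

```shell
# Anything written under a tmpfs mount lives in RAM (and swap, if pushed out),
# so subsequent reads are served from memory rather than disk.
mkdir -p /dev/shm/ramcache
echo "served from RAM" > /dev/shm/ramcache/note
cat /dev/shm/ramcache/note
```

Mounting a dedicated tmpfs of a chosen size (`mount -t tmpfs -o size=512M tmpfs /mnt/ram`) works the same way but needs root.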
@gazj
Try reading
http://support.zenwalk.org/index.php/topic,5014.0.html
and see
- fluxbox 1.0rc2
- Artwiz fonts
- Conky
- 33.6 MiB RAM Usage (normal usage, with mpd, clipboard-daemon and other services)
Last edited by energiya (2007-08-22 13:14:41)
@gazj
Try reading
http://support.zenwalk.org/index.php/topic,5014.0.html
and see
http://pix.nofrag.com/b5/4f/cbcd4cd640d … 45632t.jpg
- fluxbox 1.0rc2
- Artwiz fonts
- Conky
- 33.6 MiB RAM Usage (normal usage, with mpd, clipboard-daemon and other services)
Wow, that is impressive.
Also, you're right about Puppy: it can load entirely into RAM. That disc should be in any admin's toolkit; it has saved the day countless times.
An 800x600 wallpaper at 24bpp uses at least 1.4MB x2 (~3MB) of RAM by itself. If your GFX card doesn't handle Render, even more is needed for the additional surfaces that get blitted into VRAM from system memory, plus more depending on whether you use DBE or not. That said, it's no wonder we need that many resources these days.
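The wallpaper figure above is easy to verify with back-of-the-envelope arithmetic (the "x2" assumes one pixmap held by the client and one by the X server, as described in the post):

```python
# Memory cost of an 800x600 wallpaper at 24 bits per pixel.
width, height = 800, 600
bytes_per_pixel = 3  # 24bpp

one_copy = width * height * bytes_per_pixel
mib = one_copy / (1024 * 1024)
print(f"one copy: {one_copy} bytes (~{mib:.1f} MiB)")
print(f"two copies: ~{2 * mib:.1f} MiB")  # matches the ~3MB estimate above
```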
Actually, it's not the applications themselves that use the RAM. It's more the candy we all want and have grown accustomed to, i.e. graphics and sound. Buffering of files and careless developers do the rest. Then again, we live in days where 1GB of RAM is the de facto standard in newer machines, and 256MB is the bare minimum everyone has in their machine. mkinitcpio merely assumes it can expect that much in order to run as quickly as possible. If we were to really optimize it for memory usage, things would get a lot more complicated and probably slower, as more garbage cleanup would have to be done here and there.
I also want to point out that Arch is not made for low-end machines. We expect at least an i686 processor. Even in the days of the Pentium II, the <96MB era was already over for most of us. It's so easy to pick up old SD-RAM modules; I happen to have more in stock than I need. heh
Last edited by kth5 (2007-08-24 22:36:08)
I recognize that while theory and practice are, in theory, the same, they are, in practice, different. -Mark Mitchell