Why do we need klibc? What's the reason to use it instead of the standard glibc, which has to be installed anyway? Is there any benefit?
You need klibc for early userspace. glibc is simply too big for that.
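For context, early userspace is the tiny environment inside the initramfs that runs before the real root filesystem is mounted. A rough sketch of what an /init script there does (simplified and illustrative, not Arch's actual script; the device and module names are placeholders, run-init is a real klibc utility):

#!/bin/sh
# Minimal early-userspace /init sketch, using klibc's shell and tools.
mount -t proc proc /proc         # pseudo-filesystems needed below
mount -t sysfs sysfs /sys
modprobe sd_mod                  # load whatever driver exposes the root disk
mount -o ro /dev/sda3 /new_root  # /dev/sda3 is just an example device
# run-init switches to the real root, frees the initramfs memory,
# and execs the real init.
exec run-init /new_root /sbin/init

Everything in that script has to come from somewhere, and statically linking it all against glibc would make the image huge; that's the niche klibc fills.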
klibc is not meant for userland applications - in fact, it has quite a few memory issues if an application runs for more than a few seconds.
phrakture, but why are we using it? It is possible to start Linux without klibc (I was always running it that way), so what exactly do we gain by using klibc?
> phrakture, but why are we using it? It is possible to start Linux without klibc (I was always running it that way), so what exactly do we gain by using klibc?
This is all described here: http://wiki.archlinux.org/index.php/Mkinitcpio
In short: no, it's not _needed_, but it allows us to distribute a kernel that is optimal for all users, at the cost of some minor added complexity.
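In practice that complexity is mostly hidden behind one config file and one command (the preset name below matches the current kernel26 package and may change):

# Early userspace is configured in /etc/mkinitcpio.conf;
# the HOOKS= line controls what ends up in the image.
# After editing it (or after a kernel upgrade), regenerate with:
mkinitcpio -p kernel26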
> phrakture, but why are we using it? It is possible to start Linux without klibc (I was always running it that way), so what exactly do we gain by using klibc?
Most old initrd tools use a static busybox build, which is not a very good environment for handling more or less complicated setups. Especially now that we use some nice udev features in our initramfs, using busybox wouldn't be possible anymore.
phrakture, the reason I am asking is the complexity: I realized that a kernel upgrade has become something very complex. I never used an initrd on my previous distributions, but I guess that's the standard today. At least it works; I don't need to configure/compile the kernel on Arch at all.
If you think the kernel26 package is too complex, you can make your own package or just compile your own kernel that doesn't need klibc or any ramdisk at all.
The stock kernel really needs to be as complex as it is, since it has to work on every computer on the planet. I think it's really impressive work.
My laptop (an Asus W5A) is extremely slow with the stock kernel (it must be some strange conflicting module or something; I haven't looked into it), so I use the stock kernel only for convenience at first install, then I compile my own with everything I need built into the kernel. No need for loading modules or ramdisks etc., and it halves the boot time.
I would think that most Arch users would be interested in compiling kernels for their own systems, since a kernel made to run on any system is needlessly complex once you know exactly which system it will run on.
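For anyone who wants to try, the classic sequence looks roughly like this (assuming a 2.6 source tree unpacked under /usr/src/linux and everything you need selected as built-in, so no ramdisk is required; the image path and name differ per architecture and taste):

cd /usr/src/linux
make menuconfig        # mark needed drivers as built-in (<*>), not modules (<M>)
make                   # build the kernel image
make modules_install   # only needed if anything was left as a module
cp arch/i386/boot/bzImage /boot/vmlinuz26-custom
# then point your bootloader at /boot/vmlinuz26-custom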
Is there a basic rundown of what the pros and cons are for each libc?
uclibc
klibc
glibc
dietlibc
etc...?
To be honest I don't fully understand the purpose of each.
"The ecological crisis is a moral issue."
Offline
> Is there a basic rundown of what the pros and cons are for each libc?
> uclibc
> klibc
> glibc
> dietlibc
> etc...?
> To be honest I don't fully understand the purpose of each.
They all pretty much just vary in size and level of features.
klibc is small and limited, and is designed for the kernel's early userspace.
glibc, on the other hand, is complete and fully featured.
uclibc is smaller than glibc but still intended to be general-purpose; you could build a whole system with it if you wanted. It's very suitable for embedded usage too.
dietlibc looks like it's similar to uclibc, maybe a tiny bit smaller, again implementing only the most useful functions and features. It's aimed at embedded systems as well.
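One way to get a feel for the size differences is to build the same trivial program against each of them; klibc ships a klcc compiler wrapper and dietlibc a diet wrapper for exactly this (a rough sketch; uclibc usually wants its own toolchain, so I left it out):

echo 'int main(void){return 0;}' > tiny.c
gcc -static -o tiny-glibc tiny.c   # static glibc binary: hundreds of KB
klcc -static -o tiny-klibc tiny.c  # klibc binary: a few KB
diet gcc -o tiny-diet tiny.c       # dietlibc binary: also tiny
ls -l tiny-*                       # compare the sizes yourself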
I suppose someone might be able to give a more detailed explanation, but that's how I understand it.
Do you believe there would be any speed gains from using a cut-down libc?
Do the libraries provide different routines to improve speed? This is another aspect I have thought about recently, mainly because I am performance conscious.
I find that there is a lot of doubling up in Linux. Having separate widgets, libs, compilers, etc. just seems to make things much more complex in the end and has the side effect of creating a lot of bloat. It would be great to merge some features, such as widget toolkits, to improve the speed and size ratio of applications.
"The ecological crisis is a moral issue."
Offline
I don't think you will see a huge performance boost when using another C library, probably just more problems (and you'd need to recompile most of your system, if not all of it).
Of course a smaller library has a smaller linkage table (faster function lookups) and allocates less memory when an application relocates the library, but glibc is very well written and Ulrich Drepper (the glibc maintainer) has _a lot_ of skill in optimizing libraries. So maybe an application starts a couple of nanoseconds faster, but it definitely won't 'run' faster. Another point: glibc is a collection of about 30 small libraries, and a program is only linked against the ones it needs.
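You can check that per-program linking yourself with ldd, which lists only the shared libraries a given binary actually pulls in:

ldd /bin/ls
# prints a handful of entries like libc.so.6 and librt.so.1,
# but not libm, libnsl, etc. unless the program really uses them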
Other libraries are only interesting when you're doing embedded work and want to save every bit of memory.
klibc is more like a small C library + utilities for moving kernel tasks to userspace, so it's not _only_ a C library:
pacman -Ql klibc | grep bin/
As for compilers: there's only one real Linux compiler, and that is gcc. The Intel compiler produces better code, but not all applications will compile with it (and it's not free/open source).
Widgets: use one toolkit (GTK+, Qt) when possible, and never use bindings, because when an application uses a binding it allocates the binding + the base toolkit + the programming-language backend. And bindings/libraries are not the definition of bloat, because you're using them all the time whether you like it or not.
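You can see that layering concretely by running ldd on a binding's shared object (the path below is just an illustration for the Python GTK+ bindings; it will differ per system and version):

ldd /usr/lib/python2.5/site-packages/gtk-2.0/gtk/_gtk.so
# shows the whole GTK+ stack (libgtk, libgdk, glib, pango, cairo, ...)
# being pulled in on top of the usual glibc libraries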
Yes, but is the Intel compiler any more stable than gcc?
Also, is the Intel compiler capable of optimizing for AMD CPUs?
A binding, is that when an application binary includes all the pieces needed to make it work, say, in case something in the OS is missing?
"The ecological crisis is a moral issue."
Offline
> Yes, but is the Intel compiler any more stable than gcc?
Compilers are always stable; they don't crash an application. If a compiler has regressions, it shouldn't be released.
> Also, is the Intel compiler capable of optimizing for AMD CPUs?
Dunno, but when you optimize for athlon-xp (for example) it won't run noticeably faster compared to plain i686. The power of the Intel compiler is that it can optimize at the whole-binary level, while gcc only optimizes per file.
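With gcc, that per-architecture tuning is just a compile flag; e.g. (app.c is a placeholder for whatever you're building, and for most programs the difference is barely measurable):

gcc -O2 -march=i686 -o app-generic app.c      # baseline i686 build
gcc -O2 -march=athlon-xp -o app-athlon app.c  # scheduled/tuned for Athlon XP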
> A binding, is that when an application binary includes all the pieces needed to make it work, say, in case something in the OS is missing?
A binding is a bridge between two programming languages.
Well, if that's the case then I guess using the Intel compiler can only be a good thing. I have read somewhere that there is a good 20% speed increase when using it, depending on the application of course, and on whether the code written for gcc is compatible with the Intel compiler.
"The ecological crisis is a moral issue."
Offline
Pages: 1