a lot of (if not most) linux kernel features can be compiled as modules. i usually compile the things i need into the kernel; things i might need are compiled as modules.
this gives me quite a monolithic kernel: a big fat 2.3 mb kernel with 5 modules (3 of them, such as nvidia, cannot be built into the kernel).
basically my kernel is monolithic because i believe it's easier to manage (no modprobing). but my concern is that my kernel is 30% bigger than the archlinux stock kernel, and i wonder if that size can be a performance issue.
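For anyone not familiar with the distinction: in the kernel's .config file it just comes down to =y (built in) versus =m (module). A minimal sketch that counts each kind, using a made-up five-line fragment (the option names are real kconfig symbols, but the selection is invented for illustration):

```shell
# write a tiny invented .config fragment, then count built-in vs modular options
cat > config.fragment <<'EOF'
CONFIG_EXT4_FS=y
CONFIG_SATA_AHCI=y
CONFIG_USB_SERIAL=m
CONFIG_FUSE_FS=m
CONFIG_SND_HDA_INTEL=m
EOF
builtin=$(grep -c '=y$' config.fragment)   # options compiled into the kernel image
modular=$(grep -c '=m$' config.fragment)   # options built as loadable modules
echo "built-in: $builtin, modules: $modular"
```

On a real system you would run the same greps against /usr/src/linux/.config, which has thousands of such lines.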
to make it short: what's best, modular or monolithic? what do you think?
what goes up must come down
Either way, is 2.3 mb out of your RAM really that big a deal nowadays? ;-)
Moreover, if you modprobe a module, it usually stays in memory; as far as I know, modules aren't automatically removed after some idle period or anything (CMIIW). So your modular kernel will tend to grow to look like your monolithic one.
There is overhead either way; the time it takes to load a kernel module versus the space it takes to store the monolithic kernel. The space-time tradeoff is a long-standing tradition in computing, but my personal opinion (which will certainly be contested ;-)) is that time is more important than space, for interactive or serving environments.
Your method seems like an ideal balance; compile what you need in, leave the rest as modules. Personally, I don't compile my kernel because of the time it takes to do so (compared to the space taken on the hard disk by modules I don't need, or extra compiled-in features loaded into memory that I don't need).
This question is a matter of personal preference; there are no hard and fast rules. I highly doubt you could change anything to create a noticeable (or even measurable) performance increase.
There is no performance boost from using a monolithic kernel. The only difference will be seen at the initial load: it takes udev some time to figure out the proper modules you need and load them. So, if boot speed is absolutely crucial, then you'll be happy. Otherwise, you're gaining almost nothing, except the need to recompile every time you buy a new piece of hardware.
thank you both for your opinions. this indeed makes me think i've got a good compromise.
sure, it requires some hd space to compile a kernel, but, as a former gentoo user, i'm used to keeping some spare hd space...
what goes up must come down
The biggest bottleneck is disk IO. I reckon monolithic will be (negligibly) quicker, as loading one contiguous file tends to be faster than seeking several smaller files (that ultimately make up the same whole).
Still, any time saved would probably be best measured in microseconds and therefore the time taken to simply compose your post has offset any potential time saved for the next 10 years!
I compile in stuff like IDE drivers and root filesystems, leaving things like sound drivers, usb-serial, fuse and commonly unused features as modules. I also prune the config to leave out anything that is never going to be required on my machine.
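That approach might look something like this in a .config — the symbols are real kconfig options, but which ones you set to y, m, or prune is of course specific to your hardware:

```
# needed to mount the root filesystem, so built in:
CONFIG_ATA=y
CONFIG_EXT4_FS=y
# might be needed sometimes, so modules:
CONFIG_SND_HDA_INTEL=m
CONFIG_USB_SERIAL=m
CONFIG_FUSE_FS=m
# hardware this machine will never have, pruned entirely:
# CONFIG_INFINIBAND is not set
```

If you'd rather not prune by hand, newer kernel trees also have a `make localmodconfig` target that generates a config limited to the modules currently loaded on your machine.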