GoboLinux is a modular Linux distribution: it organizes the programs in your system in a new, logical way. Instead of having parts of a program thrown at /usr/bin, other parts at /etc and yet more parts thrown at /usr/share/something/or/another, each program gets its own directory tree, keeping them all neatly separated and allowing you to see everything that's installed in the system and which files belong to which programs in a simple and obvious way.
They also seem to have a clear direction, or philosophy, like Arch does:
http://gobo.kundor.org/wiki/The_GoboLinux_way
GoboLinux's FS layout has some advantages, like:
* Easy package management: no need to handle and track multiple files scattered across the system
* Easy coexistence of multiple versions of the same package
* Easy rollbacks: just change symlinks
* Easy third-party application distribution: just make a bundle with everything it needs
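To make the idea concrete, here is a minimal sketch (hypothetical names, built in a throwaway temp directory) of the kind of per-program tree plus central link farm GoboLinux uses; the real distro puts these under /Programs and /System/Links:

```shell
# Sketch of a GoboLinux-style layout; paths are illustrative, not the real root.
root=$(mktemp -d)

# Each program lives in its own versioned tree...
mkdir -p "$root/Programs/Foo/1.0/bin"
printf '#!/bin/sh\necho foo 1.0\n' > "$root/Programs/Foo/1.0/bin/foo"
chmod +x "$root/Programs/Foo/1.0/bin/foo"

# ...and a central link farm makes its files visible system-wide.
mkdir -p "$root/System/Links/Executables"
ln -s "$root/Programs/Foo/1.0/bin/foo" "$root/System/Links/Executables/foo"

# "Which package owns this file?" is answered by following the symlink.
owner=$(readlink "$root/System/Links/Executables/foo")
out=$("$root/System/Links/Executables/foo")
```

Deleting the program then reduces to removing one directory tree and its dangling links.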
In fact, this layout is very close to what OS X does. There's an obvious con: some packages might duplicate files, increasing disk usage. On the other hand, it would make Linux much friendlier for shipping applications, as it's easier to ship a bundle that installs itself correctly with everything it needs. I think the historical UNIX FS layout is holding back a lot of innovation Linux could have on the desktop, by making it harder for developers to target and ship binaries for the platform.
No more distro teams duplicating effort to put source tarballs together. We have a lot of distros, but they're really doing small variations on the same thing: building and packaging stuff from source, and making sure one package doesn't break another. The result is that if a package doesn't come from your distro, you're at high risk of breaking your system. This holds back third parties from targeting the platform, since they can't control package building and distribution themselves.
As I said, OS X already uses a similar package concept, and look how many good applications it attracts from both big companies and independent developers. And despite the core being an open kernel and BSD userland, all the relevant stack (core libraries, UI) is closed source! So I ask myself: why is Linux, free and with a ton of open-source core libraries and no vendor lock-in, NOT kicking ass in this respect? The only fault I see in Linux is the lack of cohesive package distribution. GoboLinux's approach seems like a step toward fixing this.
So... I want your opinions, not so much about GoboLinux itself as about its approach. Might we ever see an Arch spin-off using this approach, or another approach with similar results: effectively, package compartmentalization at the FS-layout level? Using this layout could ease a bit of the forced-upgrade burden that a rolling release brings.
And for those who didn't know Gobolinux, give it a try.
I hope this post gives a glimpse of new ideas and solutions to some of the incredibly talented people here. I think the Linux landscape is too immersed in inertia and old traditions that don't cope with today's needs; Arch was a refreshing oasis of innovation I found in that landscape.
Last edited by freakcode (2008-09-30 22:39:47)
Offline
Gobolinux is one of my favourite distributions, mainly because it chooses to do something completely different.
How do you think the filesystem layout would decrease the burden of rolling release? Easier downgrades, for sure (you can keep old packages on your filesystem and just change symlinks; that's what I used to do in LFS, btw). But to me this is the same as having a good package manager and keeping a package cache.
Offline
How do you think the filesystem layout would decrease the burden of rolling release?
Besides the easy rollback, I think there's also the possibility that one package won't force a version bump on another. A pretty generic example: package A depends on LIB version X, and package B depends on LIB version Y. I don't need to wait for package A to be rebuilt against LIB Y before installing package B. I just install B; it brings LIB Y with it, but LIB Y lives alongside LIB X for as long as A still needs X. Once a new version of A depends on LIB Y, LIB X can be safely removed.
Shipping KDE or GNOME packages would be easy as 1-2-3, as we wouldn't need to carefully plan version bumps of their dependencies: every package can progress at its own rhythm. I see this pretty much like the paradigm shift from CVS to newer DVCSes: no need for the WHOLE system to be in the same "snapshot"; each package can live at its own version.
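As a rough illustration of how a rollback reduces to re-pointing one symlink (directory names below are invented, in the spirit of GoboLinux's /Programs/&lt;Name&gt;/&lt;Version&gt; trees plus a "Current" link):

```shell
root=$(mktemp -d)
mkdir -p "$root/Programs/Lib/1.0" "$root/Programs/Lib/2.0"
echo "v1" > "$root/Programs/Lib/1.0/VERSION"
echo "v2" > "$root/Programs/Lib/2.0/VERSION"

# "Current" is just a symlink to one of the installed versions.
ln -s "$root/Programs/Lib/2.0" "$root/Programs/Lib/Current"
v_before=$(cat "$root/Programs/Lib/Current/VERSION")

# Rollback: re-point Current at the old tree
# (-n stops ln from descending into the existing symlinked directory).
ln -sfn "$root/Programs/Lib/1.0" "$root/Programs/Lib/Current"
v_after=$(cat "$root/Programs/Lib/Current/VERSION")
```

Both version trees stay on disk the whole time; only the link moves.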
Last edited by freakcode (2008-10-01 00:25:41)
Offline
Are all binaries added to the path? If no, how do programs find them? If yes, how do programs distinguish between different versions? I suspect that this is accomplished through symlink magic from what I read on the pages linked above and I realize that I should probably just go scour the gobolinux site, but I'm hoping to get a simple clarification here.
Overall, if it were possible to have different versions installed in parallel, it sounds like the rolling release system could benefit considerably. It would make each package's dependencies independent of others', re-using when possible but not conflicting when not.
How difficult would it be to change the FS paradigm? I expect that makepkg could handle this easily but that it would make PKGBUILDs a bit more complicated.
Last edited by Xyne (2008-10-01 00:37:22)
My Arch Linux Stuff • Forum Etiquette • Community Ethos - Arch is not for everyone
Offline
I would really like to see an Arch spinoff, just as a proof of concept, to find out how simple/complicated, good/bad it would be...
I'll have to try GoboLinux when I have time; I already knew about it and read the concept, but haven't really tried it.
Offline
Are all binaries added to the path? If no, how do programs find them? If yes, how do programs distinguish between different versions? I suspect that this is accomplished through symlink magic from what I read on the pages linked above and I realize that I should probably just go scour the gobolinux site, but I'm hoping to get a simple clarification here.
http://www.gobolinux.org/?page=at_a_glance (Scroll down to "How can this possibly work?")
Overall, if it were possible to have different versions installed in parallel, it sounds like the rolling release system could benefit considerably. It would make each package's dependencies independent of others', re-using when possible but not conflicting when not.
I too share this opinion.
How difficult would it be to change the FS paradigm? I expect that makepkg could handle this easily but that it would make PKGBUILDs a bit more complicated.
Well, GoboLinux already did it with reasonable success, so it isn't impossible. In theory you're not breaking the FHS; you're just sandboxing each application inside its own micro-FHS, and then wiring up paths system-wide. In practice, the brick wall is working around hardcoded paths (which smell bad: bad code).
Any spare time to invest on an Arch spin-off like that?
Offline
See section 6.3.2.3 on http://www.linuxfromscratch.org/lfs/vie … kgmgt.html
This is very easy to do with Arch, given that PKGBUILDs all install into $pkgdir anyway... In fact you could adjust makepkg to set pkgdir=/usr/pkg/$pkgname/$pkgver and not tar everything up. Then you would just need something to put the symlinks into the / hierarchy; there are tools floating around to do this. Note this causes big problems for people who have a separate /usr partition... but that makes little sense under this scheme anyway.
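A sketch of that two-step idea (install into a per-package prefix, then farm symlinks into the legacy hierarchy); the package name and $pkgdir layout are hypothetical:

```shell
root=$(mktemp -d)
pkgname=hello
pkgver=1.0
pkgdir="$root/usr/pkg/$pkgname/$pkgver"

# Step 1: pretend makepkg installed straight into the per-package prefix.
mkdir -p "$pkgdir/bin"
printf '#!/bin/sh\necho hello\n' > "$pkgdir/bin/hello"
chmod +x "$pkgdir/bin/hello"

# Step 2: a separate pass links each installed file into the / hierarchy.
mkdir -p "$root/usr/bin"
for f in "$pkgdir"/bin/*; do
  ln -s "$f" "$root/usr/bin/${f##*/}"
done

out=$("$root/usr/bin/hello")
```

Uninstalling would then mean deleting $pkgdir and sweeping away any symlinks that now dangle.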
Offline
http://www.gobolinux.org/?page=at_a_glance (Scroll down to "How can this possibly work?")
Thanks, that cleared things up a bit, but I'm still confused about something. Let's say you have two programs, x and y, which require different versions of some basic executable, foo. Internally, x and y both exec "foo", but x needs to exec "foo-2.4" and y needs to exec "foo-3.2". How is that resolved? It almost seems to indicate that the symlinks would need to be changed dynamically during runtime in order to fix that. What am I missing?
Well, GoboLinux already did it with reasonable success, so it isn't impossible. In theory you're not breaking the FHS; you're just sandboxing each application inside its own micro-FHS, and then wiring up paths system-wide. In practice, the brick wall is working around hardcoded paths (which smell bad: bad code).
Any spare time to invest on an Arch spin-off like that?
*wonders how difficult it would be to persuade changes upstream to avoid the hardcoded-paths issue*
Time is something that I don't have much of at the moment. Also, still being a Linux noob, I'm still clueless when it comes to some basic things (e.g. I still don't know my way around makefiles (haven't really looked), and I've never coded anything in C/C++ beyond a few tutorial examples). This would be a great reason to learn more about how things work and an interesting way to contribute back to the community, but I'm not sure how useful I would be. I definitely wouldn't be able to be a driving force in such a project, but I might be able to pick up momentum as it went along. The interest is there... I just need time, both for the project itself, and to develop the necessary skills for it.
Offline
Who else would kill for an arch spinoff?
Offline
This does seem to be an interesting distro. I think I may load it up in a vm and give it a spin.
archlinux - please read this and this — twice — then ask questions.
--
http://rsontech.net | http://github.com/rson
Offline
I just don't like the idea that much...
The day Microsoft makes a product that doesn't suck, is the day they make a vacuum cleaner.
--------------------------------------------------------------------------------------------------------------
But if they tell you that I've lost my mind, maybe it's not gone just a little hard to find...
Offline
See section 6.3.2.3 on http://www.linuxfromscratch.org/lfs/vie … kgmgt.html
This is very easy to do with Arch, given that PKGBUILDs all install into $pkgdir anyway... In fact you could adjust makepkg to set pkgdir=/usr/pkg/$pkgname/$pkgver and not tar everything up. Then you would just need something to put the symlinks into the / hierarchy; there are tools floating around to do this. Note this causes big problems for people who have a separate /usr partition... but that makes little sense under this scheme anyway.
That's a good approach; PKGBUILDs are flexible enough to enable an installation style like this. It should just be a matter of finding a good method of symlinking into the legacy "/" hierarchy.
Thanks, that cleared things up a bit, but I'm still confused about something. Let's say you have two programs, x and y, which require different versions of some basic executable, foo. Internally, x and y both exec "foo", but x needs to exec "foo-2.4" and y needs to exec "foo-3.2". How is that resolved? It almost seems to indicate that the symlinks would need to be changed dynamically during runtime in order to fix that. What am I missing?
If you knew beforehand that some app needs a different version of an executable, it would just be a matter of aliasing or changing PATH before running it. But this scheme is less applicable to executables and more to dynamic libraries, as binaries link against names like "lib-1.0.so.0.9.1", so those can be symlinked distinctly.
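For the executable case, per-app version selection really is just PATH manipulation; a toy sketch with a made-up tool "foo" in two versions:

```shell
root=$(mktemp -d)
for v in 2.4 3.2; do
  mkdir -p "$root/foo-$v/bin"
  printf '#!/bin/sh\necho foo %s\n' "$v" > "$root/foo-$v/bin/foo"
  chmod +x "$root/foo-$v/bin/foo"
done

# Whichever directory comes first in PATH wins, so each app (or wrapper
# script) can prepend the version it needs before exec'ing "foo".
x_out=$(env PATH="$root/foo-2.4/bin:$PATH" foo)
y_out=$(env PATH="$root/foo-3.2/bin:$PATH" foo)
```

In a GoboLinux-like layout the per-version bin directories already exist, so a wrapper only has to pick one.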
Who else would kill for an arch spinoff?
Chances are that if this thread brings enough ideas and good feedback, I'll feel motivated enough to kick this off myself.
@moljac024: I would appreciate it if you shared your reasons for thinking so. Bring the cons you find to the discussion.
Last edited by freakcode (2008-10-01 02:18:13)
Offline
I've run it in a virtual machine before and it's a very nice distro. I don't remember having any path errors, but I also kept a very light system for testing purposes. It didn't really end up being easier to navigate the FS, though.
Offline
I really don't see the point in it; all I really need to know is where my home folder is, and that's all I worry about.
But that's just my opinion
Last edited by molom (2008-10-01 02:39:26)
Offline
I think the advantages would come more from the modularity of dependencies than from the navigability of the file system.
Chances are that if this thread brings enough ideas and good feedback, I'll feel motivated enough to kick this off myself.
I'll post ideas as they pop up then... maybe they'll lead to something productive.
One criticism mentioned above was the eventual redundancy of files across the hierarchy. I think such a system would benefit from a package manager that could compare files and transparently handle symlinking to avoid this; for instance, if package B would install files x, y and z that are identical to package A's, B would just symlink to them. The comparison could be optional, for instance via an "--optimize" flag passed to the package manager (e.g. to avoid installation slowdown). Removing a package hierarchy that contains the symlinked targets would of course need to move them and reconfigure the symlinks (e.g. move x, y, and z to where B would have put them), but it should be easy to keep track of locations with a database.
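The core of that "--optimize" pass could be as simple as comparing candidate files byte-for-byte and linking duplicates; a toy sketch with invented package trees:

```shell
root=$(mktemp -d)
mkdir -p "$root/Programs/A/1.0" "$root/Programs/B/1.0"
echo "shared data" > "$root/Programs/A/1.0/data.txt"
echo "shared data" > "$root/Programs/B/1.0/data.txt"

a="$root/Programs/A/1.0/data.txt"
b="$root/Programs/B/1.0/data.txt"

# If B's copy is byte-identical to A's, replace it with a symlink.
if cmp -s "$a" "$b"; then
  ln -sf "$a" "$b"
fi
content=$(cat "$b")
```

The bookkeeping problem the post mentions (what happens when A is removed while B still links to its files) is exactly what the proposed database would have to solve.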
Maybe this system would also require a script that can reconfigure aliases on the fly. Thinking of the issues brought up in the Python 2.6 & 3.0 thread (i.e. the existence of parallel Python packages and the absence of compatibility), there will be times when "python" should invoke one or the other. Having some app that lets you do something like "some_app --configure python-3.0" would be nice. This could trace the dependency hierarchy and change all relevant bin aliases to the appropriately versioned command along the way.
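That hypothetical "some_app --configure" could boil down to re-pointing a default-alias symlink (all names below are invented; the dependency tracing described above would sit on top of this):

```shell
root=$(mktemp -d)
mkdir -p "$root/bin"
for v in 2.6 3.0; do
  mkdir -p "$root/Programs/Python/$v/bin"
  printf '#!/bin/sh\necho Python %s\n' "$v" > "$root/Programs/Python/$v/bin/python"
  chmod +x "$root/Programs/Python/$v/bin/python"
done

# configure_default <version>: re-point the "python" alias symlink.
configure_default() {
  ln -sfn "$root/Programs/Python/$1/bin/python" "$root/bin/python"
}

configure_default 2.6
before=$("$root/bin/python")
configure_default 3.0
after=$("$root/bin/python")
```

Both interpreters stay installed; only the default alias moves.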
EDIT
I really don't see the point in it; all I really need to know is where my home folder is, and that's all I worry about.
But that's just my opinion
The point, as I understand it anyway, is the modularity of dependencies. If you want to use app_y, which needs dep_a-4.5, but you use app_x, which uses dep_a-3.5 and breaks with dep_a-4.5, you can't have both app_x and app_y; you have to wait for app_x to be updated to dep_a-4.5. This would solve that. Also, GoboLinux might be right in claiming that the current directory structure is an improvable Unix relic.
As already mentioned, package groups like KDE would be easier to maintain.
Although that does lead to another possible issue... the repos. What should be held in the repos? Everything that is either the latest version or required by something else?
Last edited by Xyne (2008-10-01 03:12:20)
Offline
The point, as I understand it anyway, is the modularity of dependencies. If you want to use app_y, which needs dep_a-4.5, but you use app_x, which uses dep_a-3.5 and breaks with dep_a-4.5, you can't have both app_x and app_y; you have to wait for app_x to be updated to dep_a-4.5. This would solve that. Also, GoboLinux might be right in claiming that the current directory structure is an improvable Unix relic.
As already mentioned, package groups like KDE would be easier to maintain.
Although that does lead to another possible issue... the repos. What should be held in the repos? Everything that is either the latest version or required by something else?
Oh. That's more understandable and a lot more beneficial. Thanks!
Last edited by molom (2008-10-01 03:17:13)
Offline
Although that does lead to another possible issue... the repos. What should be held in the repos? Everything that is either the latest version or required by something else?
It might as well work like a rolling release such as Arch: the repos hold the most recent packages, but dependencies are managed as a tree per package instead of horizontally spanning all packages. For the client, as long as it keeps the old versions installed, new packages only add to the system rather than replacing things. Once a package that was installed as a dependency becomes outdated, and no other package depends on it anymore, it can be safely removed.
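In a link-farm layout, detecting that "no other package depends on it anymore" could amount to checking whether anything still points into a version's tree; a crude garbage-collection sketch (paths invented):

```shell
root=$(mktemp -d)
mkdir -p "$root/Programs/Lib/1.0" "$root/Programs/Lib/2.0" "$root/System/Links"
touch "$root/Programs/Lib/1.0/lib.so" "$root/Programs/Lib/2.0/lib.so"

# Only version 2.0 is still referenced from the link farm.
ln -s "$root/Programs/Lib/2.0/lib.so" "$root/System/Links/lib.so"

# GC pass: a version tree that nothing links into can be removed safely.
for ver in "$root/Programs/Lib"/*/; do
  if ! find "$root/System/Links" -type l -exec readlink {} \; | grep -qF "$ver"; then
    rm -rf "$ver"
  fi
done
```

A real implementation would also have to consider links made outside the farm (user scripts, hardcoded paths), which is where a database helps again.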
Easier said than done, but let's see if we can come up with a proof-of-concept.
Good, keep the ideas coming!
Last edited by freakcode (2008-10-01 03:37:49)
Offline
Interesting: someone already thought about that file hierarchy layout issue. See GNU Stow.
Offline
This should be rule 35: For any new computing idea, there is a GNU subproject that (at least tries to) implement it.
What does not kill you will hurt a lot.
Offline
This should be rule 35: For any new computing idea, there is a GNU subproject that (at least tries to) implement it.
Imagine the consequences of rule 34.5.
My Arch Linux Stuff • Forum Etiquette • Community Ethos - Arch is not for everyone
Offline
But this scheme is less applicable to executables and more to dynamic libraries, as binaries link against names like "lib-1.0.so.0.9.1", so those can be symlinked distinctly.
Wouldn't you then need to link all your executables to specifically numbered library .so's? Most executables, by default, are linked to /usr/lib/libfoo.so.1 where libfoo.so.1 is a symlink to libfoo.so.1.5.2 or whatever (where most libfoo.so.1.* are more or less backward compatible, so most executables linked to libfoo.so.1 will be just fine with the upgrade). I don't see how this would work if package A needs libfoo 1.4.3 and package B needs libfoo 1.5.2 but both are linked to libfoo.so.1. So you would need executable X linked to libfoo.so.1.4.3 and executable Y linked to libfoo.so.1.5.2. And then when you upgrade the library, you need to re-link every package that links to it to take advantage of the new lib (throwing away the whole point of linking to libfoo.so.1 by default).
In addition, if you are linking to specifically numbered .so's, then you don't necessarily need the different package directories, as both .so's can live in /usr/lib (you'll need both symlinks in the lib directory in the new FS layout anyway). If you only do it for major library revisions (i.e. libfoo.so.1 to libfoo.so.2), then this is already the status quo (for example, I have /usr/lib/libstdc++.so.5 -> libstdc++.so.5.0.7 and /usr/lib/libstdc++.so.6 -> libstdc++.so.6.0.10).
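The soname chain described above, spelled out (empty placeholder files stand in for real libraries):

```shell
lib=$(mktemp -d)

# The real file plus the two conventional symlinks:
touch "$lib/libfoo.so.1.5.2"
ln -s libfoo.so.1.5.2 "$lib/libfoo.so.1"  # soname link: what binaries load at runtime
ln -s libfoo.so.1     "$lib/libfoo.so"    # linker name: what -lfoo resolves at build time

# A second major revision coexists without conflict, as with libstdc++ 5 and 6:
touch "$lib/libfoo.so.2.0.1"
ln -s libfoo.so.2.0.1 "$lib/libfoo.so.2"

runtime=$(readlink "$lib/libfoo.so.1")
```

Within one major revision, upgrading means re-pointing libfoo.so.1 at the new minor version; across majors, both chains simply sit side by side.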
So I must be missing something, as I don't see how this new directory layout would improve the modularity of the distro's packages. The cleanliness of the filesystem is arguably improved, but I agree with earlier posts that a good package manager like we have in Arch is just as effective. In Arch I can instantly see which package a file belongs to, see every file installed by a package, and easily add and remove packages. So in terms of stuff I can do easily, what do separate directories per package get me?
Last edited by jcasper (2008-10-01 15:43:11)
Offline
freakcode wrote: But this scheme is less applicable to executables and more to dynamic libraries, as binaries link against names like "lib-1.0.so.0.9.1", so those can be symlinked distinctly.
Wouldn't you then need to link all your executables to specifically numbered library .so's? Most executables, by default, are linked to /usr/lib/libfoo.so.1 where libfoo.so.1 is a symlink to libfoo.so.1.5.2 or whatever (where most libfoo.so.1.* are more or less backward compatible, so most executables linked to libfoo.so.1 will be just fine with the upgrade). I don't see how this would work if package A needs libfoo 1.4.3 and package B needs libfoo 1.5.2 but both are linked to libfoo.so.1. So you would need executable X linked to libfoo.so.1.4.3 and executable Y linked to libfoo.so.1.5.2. And then when you upgrade the library, you need to re-link every package that links to it to take advantage of the new lib (throwing away the whole point of linking to libfoo.so.1 by default).
Yes, my example wasn't clear. Executables X and Y are linked against lib-A.so, which in turn is a symlink to a minor version like lib-A.so.aa. When this lib gets updated and breaks compatibility, the maintainers bump the major version number, so it becomes lib-B.so. Executable X can be readily updated to work with the new lib-B.so, while executable Y still works because lib-A.so and lib-B.so can coexist in the system. In this case I don't need to wait for both X and Y to be updated to the new library before installing one of them. Minor versions (lib-A.so.aa, lib-A.so.ab, ...) don't affect most builds, as they normally link to a major release (lib-A.so, a symlink to the most recent minor version) that guarantees to maintain compatibility.
For instance, this happened with libstdc++ too, though it's an exception in that both branches (5 & 6) coexisted for some time.
But don't take my word for it; see how GoboLinux actually manages that: http://www.gobolinux.org/index.php?page=k5 (What is it all about?)
So I must be missing something, as I don't see how this new directory layout would improve the modularity of the distro's packages. The cleanliness of the filesystem is arguably improved, but I agree with earlier posts that a good package manager like we have in Arch is just as effective. In Arch I can instantly see which package a file belongs to, see every file installed by a package, and easily add and remove packages. So in terms of stuff I can do easily, what do separate directories per package get me?
(Just to make it clear, I'm not directly comparing with what exists in Arch, nor saying that what we have today doesn't work, which also happens to be the case with all the other 300 distros. The fact that one thing works doesn't mean it is the only, or the best, possible solution. That's where the interest in discussing GoboLinux's approach comes from.)
The point is, if the layout is clean and sandboxed, you don't need a package manager. A package manager is an abstraction over a database relating which files belong to which packages, trying ultimately to solve a fundamental flaw of the traditional Unix hierarchy: files from one package are scattered across the filesystem, and then you have no simple means to track them anymore. That's hardly KISS. In GoboLinux's approach, you can track files to their packages by simply inspecting symlinks.
/System/Links/Libraries] ls -l | cut -b 49-
...
libgtk-1.2.so.0 -> /Programs/GTK+/1.2.10/lib/libgtk-1.2.so.0.9.1
libgtk-1.2.so.0.9.1 -> /Programs/GTK+/1.2.10/lib/libgtk-1.2.so.0.9.1
libgtk.a -> /Programs/GTK+/1.2.10/lib/libgtk.a
libgtk.la -> /Programs/GTK+/1.2.10/lib/libgtk.la
libgtk.so -> /Programs/GTK+/1.2.10/lib/libgtk-1.2.so.0.9.1
libgtk-x11-2.0.la -> /Programs/GTK+/2.6.7/lib/libgtk-x11-2.0.la
libgtk-x11-2.0.so -> /Programs/GTK+/2.6.7/lib/libgtk-x11-2.0.so.0.600.7
libgtk-x11-2.0.so.0 -> /Programs/GTK+/2.6.7/lib/libgtk-x11-2.0.so.0.600.7
...
On a side note now: having /bin and /usr/bin, /usr and /usr/local... all that made sense once, with different hardware and a different purpose. Does it make sense today, on a modern desktop system? There's enough room and flexibility to rethink and improve those ideas further *without* actually breaking compatibility, as GoboLinux and OS X have already shown. The only reason most Linux systems stick to the old standards is that Linux systems were originally intended as cheap Unix replacements. Funnily enough, OS X, which doesn't follow the Unix hierarchy standard so closely, is regarded as fully Unix compatible and marketed using the UNIX trademark, whereas Linux isn't.
Last edited by freakcode (2008-10-01 20:16:09)
Offline
The point is, if the layout is clean and sandboxed, you don't need a package manager.
You will still need some tool anyway to update all the symlinks and make this sandboxed layout usable (i.e. to resolve ambiguities between two versions of the same program or library).
all that made sense once
It still makes sense for many people. Read hier(7).
Funnily enough, OS X, which doesn't follow the Unix hierarchy standard so closely, is regarded as fully Unix compatible and marketed using the UNIX trademark
I'd appreciate some links on this topic. IMO, Apple products are more 'marketed' and 'claimed to be' than they really 'are'.
The Linux filesystem layout represents the concept of a solid environment, where all components are (or at least can be) tightly integrated, and you don't need to worry about _how_ your (or your distro-mate's), say, Python is going to find, say, the Tcl interpreter, or _what version_ of Tcl it finds, etc. You either have Tcl in your environment or you don't. And the same goes for bash, Java, ALSA, vim and so on. You seldom use several versions of one tool at a time. You just use the tool.
And the package manager's job is to decide whether you would prefer (but not 'must') to have Tcl... or, better said, to offer you Tcl. That's why there are so many Linux distros: they are all possible ways of organizing these solid environments.
Moreover, to me it just doesn't matter how the system is organized hierarchically, provided the system *is* organized. I just don't care whether the config file is in /etc/tool/tool.conf or in /tools/tool/tool.conf.
And as for having different versions of libraries and software installed alongside each other... I don't really think this feature is so essential that it has to be implemented as core system functionality. How many people use more than one version of bash at once? Of Java (well, not a fair example, since newer versions have an 'emulate older version' mode)? Of ALSA? In fact, needing several versions of the same thing is more an exception than a rule. Usually it occurs when two major releases of the same product co-exist, for example gcc 3 and gcc 4, or Python 2.x and Python 3. But in the Linux world this problem admits an ad-hoc solution: you just mark the more widespread release as 'main' to satisfy the majority of users. For example, 'python' still stands for 'python 2.5', and zealous 'python 3' users need to adapt somehow. As an option, you have no default 'python' but explicit 'python2' and 'python3' branches, and you and all other tools must take that into consideration.
Yes, GoboLinux brings extra functionality. But the real need for this functionality is questionable. And I believe that all features have their drawbacks, even those that seem 'innocent', 'just extra', 'useful at no cost' and so on.
PS: This all is _totally_ IMO.
Last edited by Mr.Cat (2008-10-01 20:31:39)
Offline
Yes, my example wasn't clear. Executables X and Y are linked against lib-A.so, which in turn is a symlink to a minor version like lib-A.so.aa. When this lib gets updated and breaks compatibility, the maintainers bump the major version number, so it becomes lib-B.so. Executable X can be readily updated to work with the new lib-B.so, while executable Y still works because lib-A.so and lib-B.so can coexist in the system. In this case I don't need to wait for both X and Y to be updated to the new library before installing one of them. Minor versions (lib-A.so.aa, lib-A.so.ab, ...) don't affect most builds, as they normally link to a major release (lib-A.so, a symlink to the most recent minor version) that guarantees to maintain compatibility.
For instance, this happened with libstdc++ too, but is an exception because both branches (5 & 6) coexisted for some time.
But don't take my word on it, see how Gobolinux actually manages that http://www.gobolinux.org/index.php?page=k5 (What is it all about?)
So I still don't see how GoboLinux's directory layout makes the packages more modular. What you described above is exactly how library versioning works on a normal system. What the GoboLinux link describes is how they use this same system, but with the libraries separated into different directories and symlinks pointing to them. It seems to me that it would be just as easy (or hard) to have executable X linked against lib-A.so and executable Y linked against lib-B.so with either directory scheme.
(Just to make it clear, I'm not directly comparing with what exists in Arch, nor saying that what we have today doesn't work, which also happens to be the case with all the other 300 distros. The fact that one thing works doesn't mean it is the only, or the best, possible solution. That's where the interest in discussing GoboLinux's approach comes from.)
Yeah, I understand. And I'm not trying to bash GoboLinux's approach; I'm just trying to understand what its advantages are. I understand the possible advantage of having things "clean and sandboxed" (although, as Mr.Cat pointed out, you still need a tool to manage the symlinks, so the "package manager" isn't entirely gone). What I don't see is how this approach makes it easier to have multiple library versions installed on your system. I think I see now that it isn't a fundamental difference in how you handle multiple library versions (it's the exact same technique), but that the method of handling packages is maybe more conducive to having multiple versions of a single package installed at once. Instead of having a package called "libfoo" and a package "libfoo2" or "libfoo-compat", you have a single package "libfoo" that has two different versions.
The point is, if the layout is clean and sandboxed, you don't need a package manager. A package manager is an abstraction over a database relating which files belong to which packages, trying ultimately to solve a fundamental flaw of the traditional Unix hierarchy: files from one package are scattered across the filesystem, and then you have no simple means to track them anymore. That's hardly KISS. In GoboLinux's approach, you can track files to their packages by simply inspecting symlinks.
<snip>
On a side note now, having /bin and /usr/bin, /usr and /usr/local,... all that made sense once, with a different hardware, with a different purpose. Does it make sense today, on a modern desktop system?
But what about the other side of the coin? In GoboLinux and OS X I have executables in 200 directories spread across the system, and libraries in another 200 directories. "That's hardly KISS." How can I easily get a list of all the executables on my system? In the traditional Unix approach, essential system executables are in /bin, user executables are in /usr/bin, libraries are in /usr/lib. Simple. (Just trying to point out that things aren't as cut and dried as they may appear.)
Offline
The point, as I understand it anyway, is the modularity of dependencies. If you want to use app_y which needs dep_a-4.5, but you use app_x which uses dep_a-3.5 and which breaks with dep_a-4.5, you can't have both app_x and app_y and you have to wait for app_x to update to using dep_a-4.5. This would solve that.
If dep_a-4.5 would break programs depending on dep_a-3.5, then dep_a-4.5 should be released as a different package/software release, imho.
symlinking your shit just to have a "good-to-browse" filesystem just seems like an ugly hack to me
☃ Snowman ☃
Offline