This idea may be stupid, or difficult to implement, or both, but I wanted to share it, and have some of the more experienced users give their opinion on the matter.
Imagine this directory structure:
/
|-- boot
|-- dev
|-- sys
|-- app
Obviously, there are things missing, but I think the new "app" directory can compensate for everything I omitted, because app contains application directories, each of which contain everything that a particular application requires to function.
So, instead of relying on something as complicated as a full-fledged package manager, you could just have an application that would download the compressed stand-alone directory of the application you want to install, and then simply extract it into /app.
In essence, the install/uninstall process would reduce to moving the relevant application directory into/out of the app directory.
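To make the idea concrete, here's a minimal sketch of what that install/uninstall cycle could look like. Everything here is hypothetical: the /app location, the function names, and the assumption that each application ships as a single appname.tar.gz containing one top-level directory.

```shell
#!/bin/sh
# Hypothetical sketch of the proposed model: installing is extracting
# an archive into /app, uninstalling is removing the app's directory.
# The /app location and archive layout are assumptions of the idea itself.
set -e
APP_ROOT=${APP_ROOT:-/app}           # one subdirectory per application

install_app() {
    # $1: a previously downloaded appname.tar.gz with one top-level dir
    tar -xzf "$1" -C "$APP_ROOT"
}

uninstall_app() {
    # the app is fully contained in one directory, so removal is enough
    rm -rf "${APP_ROOT:?}/$1"
}
```

Whether real applications could actually live inside such a layout is exactly the question discussed below.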
This arrangement seems like it would simplify things in non-trivial ways, because everything the given application requires would reside in /app/appname, instead of being scattered all over the filesystem, or having shared dependencies which cannot be updated without updating everything on the system.
I'm guessing you couldn't simply re-arrange most of the existing linux programs to fit into something like the model I describe (most assume/require standard linux directory structure to operate properly), but wouldn't it be nice if that was possible?
Or is this basically a stupid idea?
Correct me if I'm wrong, but isn't that the way Mac OS X works? You move the app to the Applications folder (which sits at the root of the filesystem), and you have a Library folder which works like ~/.config in your home. You also have the permissions to modify the Applications folder, so you can open or edit the app, or remove it by deleting the icon (which is in fact not an icon but a folder).
"Dream as if you'll live forever, live as if you'll die today" - James Dean
More like http://www.gobolinux.org/?page=faq
jasonwryan wrote:More like http://www.gobolinux.org/?page=faq
Yes, they seem similar in terms of their approach to the filesystem.
The Haiku operating system does it a bit differently: You just download the HPKG file and drop it into one of the special directories for packages, and the contents get automatically loaded into a read-only directory tree. Uninstallation is just doing the opposite.
A LOT of work went into the (relatively) new Haiku package management system. The details are here: https://dev.haiku-os.org/wiki/PackageManagement
Congrats, you invented Windows! j/k
What makes Linux different from Windows is that every program can share a single copy of a library, so each *app* doesn't need to have its own copy included.
IMHO package managers are THE killer feature of Linux. Why in the world would you want to update each app on its own when a package manager can do it for you and keep track of libraries, without you worrying all the time? Of course, if the other way around is more appealing to you, check the links above for distros aiming at such a structure.
This arrangement seems like it would simplify things in non-trivial ways, because everything the given application requires would reside in /app/appname, instead of being scattered all over the filesystem,
Correct me if I'm wrong, but isn't that the way Mac OS X works?
These are both illusions of abstraction. In the first case, sure, 'ls' shows fewer top-level directories, but the complexity hasn't disappeared. It's just been displaced, replicated in dozens (or hundreds) of individual application directories rather than consolidated at the root of the filesystem. In *nix systems the files are not "scattered": they're organized according to purpose and function, and their location dictates what may be done to them and by what/whom (including automated system processes).
In the case of OS X this impression comes from the fact that users drag an icon into a location, but behind the scenes that icon represents a package archive like any other, and the files are installed where needed. It's simply a way to create an install/uninstall process that doesn't require the typical Mac user to learn anything new, while allowing advanced users a little leeway. There are indeed .app archives that act as stand-alone applications no matter where they're located, but those are more the exception, in my experience. The usefulness of both the drag-and-drop .dmg archive and .app archives is somewhat obviated now that OS X has a de facto package manager for casual users, and things like Homebrew and MacPorts for advanced users.
There's one big problem I can see with this model, and on a lark I looked up some Haskell and Python packages in the NixOS online package browser. Sure enough, there are several builds of each release. The model isn't so simple once you have to deal with the possibility of having multiple copies of the same versions of interpreters, compilers, and libraries installed.
That's interesting, but not quite what I meant; the whole idea was to simplify the system so that applications could be managed without complicated package managers and all the "magic" they typically perform.
More like http://www.gobolinux.org/?page=faq
Yeah, something like that.
Unfortunately, it seems like a dead project (the links to their wiki and forums are broken).
The Haiku operating system does it a bit differently: You just download the HPKG file and drop it into one of the special directories for packages, and the contents get automatically loaded into a read-only directory tree. Uninstallation is just doing the opposite.
I've heard of Haiku, but I never tried it.
Since the system requires things in a special package format, it doesn't really strike me as being simpler.
IMHO package managers are THE killer feature of Linux. Why in the world would you want to update each app on its own when a package manager can do it for you and keep track of libraries, without you worrying all the time...
In my experience, package managers don't live up to their promise of easy, flexible, and "worry free". Each seems to be a world of its own, requiring custom-tailored packages, with specific quirks, files, and even whole languages to do something which could (in theory) be no more complicated than moving a directory tree.
Things can, and do, break more often than they should, for reasons that only those who are intimately familiar with that package manager can grok well enough to properly rectify. In general, attempting to do anything slightly more involved (like installing multiple versions of the same software) seems to require really deep knowledge, because there are so many potential pitfalls (library conflicts, incompatible files, some kind of dependency trap, etc.).
The "killer feature of linux" is, in my opinion, repositories, not package managers - Given the architecture I described, and a place that stores most of the relevant application archives, the "package manager" could be nothing more than a simple script that fetches the requested archive, and extracts it into the /app directory.
the complexity hasn't disappeared. It's just been displaced, replicated in dozens (or hundreds) of individual application directories rather than consolidated at the root of the filesystem.
In the system I described, where everything the application needs to function is located in its application directory, there's no need for a package manager, and no need to learn its quirks in intimate detail. You could have multiple versions of the same software, with different configurations, without the need to update absolutely everything, and without having to worry about issues I mentioned above.
It's true that you need to have additional copies of certain files, because different applications (and different versions of the same application) would require that, but you get a system in which one directory represents one application, and that seems quite a bit simpler to me (conceptually, and implementation wise).
In *nix systems the files are not "scattered": they're organized according to purpose and function, and their location dictates what may be done to them and by what/whom (including automated system processes).
Ok, scattered was the wrong word, but in "consolidating" all application specific files, to a set of specific directories at the root of the filesystem, you create an architecture in which the processes of installation, uninstallation, and updating require a fairly involved piece of software to manage, in ways that differ across systems and managers, where many things can go wrong for a wide variety of reasons.
I'm sure the arrangement has certain benefits for applications that were designed to fit into traditional *nix patterns/assumptions, but it seems to needlessly complicate things for virtually everything else.
In the system I described, where everything the application needs to function is located in its application directory, there's no need for a package manager, and no need to learn its quirks in intimate detail. You could have multiple versions of the same software, with different configurations, without the need to update absolutely everything, and without having to worry about issues I mentioned above.
It's true that you need to have additional copies of certain files, because different applications (and different versions of the same application) would require that, but you get a system in which one directory represents one application, and that seems quite a bit simpler to me (conceptually, and implementation wise).
Simpler, maybe, but at the cost of most of the benefits of a *nix system. For example, what happens when a problem is found and fixed in a lib used by nearly every application? Every one of them issues a new package? What about abandoned ones? I could point to many other similar issues.
Last edited by Scimmia (2015-04-25 01:26:40)
Slightly -- but not much -- similar?
http://0install.net/install-linux.html#arch
https://en.wikipedia.org/wiki/Zero_Install
In the system I described, where everything the application needs to function is located in its application directory, there's no need for a package manager, and no need to learn its quirks in intimate detail.
You've missed my point. Again, the overall complexity is no less than it was before; it's just been displaced. Your envisioned system is simpler from the perspective of an end user with no interest in learning about system internals, but the system itself has more quirks of its own that developers then need to work around. You've just offloaded the effort of the user onto the developers, is all. You can actually get a general idea of what you're asking for right here on Arch. Just do the following:
- Create a ~/app directory.
- Run 'pacman -Sw bash' to download the bash package to /var/cache/pacman/pkg
- Copy the bash *.pkg.tar.xz archive from the package cache to ~/app, and extract it in-place.
Do that, and you'll have a single directory with a subdirectory for bash, in turn containing subdirectories with all the files distributed in the package. You'd then just need to take any dependencies for Bash and copy them into the ~/app/bash subdirectory. Then run the 'tree' command on ~/app to see what you've got. Even doing this for a single application will give you an idea of what's required. This must be done: applications written for Unix-like operating systems conform to the customs of those systems, and will expect to find files in certain locations. You can't simply dump all the files in one place and run the executable binary.
Now multiply that result by, say, 250, which is a very slim install for a desktop user. There may be reasonable arguments for something like this, but "simplicity" isn't one of them.
And I'm still not clear on where you're getting the idea that a "package manager" of sorts wouldn't be necessary. You'd still need some method of fetching all the software, and a mechanism to make the system aware of it and make it accessible to users.
And then every program that uses bash has to do the same thing for their own directory. You end up with a bunch of horribly inefficient containers that are a nightmare to keep current.
what happens when a problem is found and fixed in a lib used by nearly every application? Every one of them issues a new package?
Assuming that the developers of those applications decide to link to a new version of that library, and make a new release, then yes. Why not?
What about abandoned ones?
I don't really understand the question - As long as you have the archive (on your system, or in a repository), you can simply continue to use the application, because it's completely self-contained. The update cycle depends on the developers of the application in question, and/or the people who push their changes into repositories.
I could point to many other similar issues.
If you can, please do - I'm interested in this.
Slightly -- but not much -- similar?
http://0install.net/install-linux.html#arch
https://en.wikipedia.org/wiki/Zero_Install
The concept doesn't seem that different from Nix - I'm thinking about something that's more like GoboLinux.
You've just offloaded the effort of the user onto the developers
How? I mean, as a developer, if I just need to put the compiled binary, the relevant libraries, and all the required files in a single archive, isn't that simpler than having to twist everything so it fits into traditional unix conventions, and then trying to push that into some specific format required by some package manager?
Then run the 'tree' command on ~/app to see what you've got. Even doing this for a single application will give you an idea of what's required.
The directory tree for bash doesn't seem that large, and I don't think you would need to include the complete structure for all dependencies (you could probably just include the required .so files) - The fact that GoboLinux exists implies that it's possible to optimize these things, but even if not: I don't think that "size on disk" is a proper measure of simplicity.
And I'm still not clear on where you're getting the idea that a "package manager" of sorts wouldn't be necessary. You'd still need some method of fetching all the software, and a mechanism to make the system aware of it and make it accessible to users.
I don't think you would need anything that complicated. If a linux distro could be configured to assume that all required files for a given application are located in a single directory, you could just have a script that downloads the archive, puts it in /app, and then makes a symlink (in a directory that's already in your path) to the binary. I guess you could call that a "package manager", but I'm not sure the label fits (considering what people typically think of when they hear the term).
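As a sketch of that script, it might look like the following. The repository URL, the /app layout, and the assumption that each archive contains a bin/ subdirectory holding the main binary are all invented for illustration; nothing here reflects a real distribution.

```shell
#!/bin/sh
# Minimal sketch of the "barely a package manager" script described above.
# The repository URL is hypothetical, and the archive is assumed to unpack
# to /app/<name> with the main binary at /app/<name>/bin/<name>.
set -e
APP_ROOT=/app
BIN_DIR=/usr/local/bin               # assumed to already be in $PATH
REPO=https://example.com/apps        # imaginary archive repository

install_app() {
    name=$1
    # fetch the stand-alone archive and unpack it into /app
    curl -sL "$REPO/$name.tar.gz" | tar -xz -C "$APP_ROOT"
    # expose the main binary through a directory already in $PATH
    ln -sf "$APP_ROOT/$name/bin/$name" "$BIN_DIR/$name"
}
```

Uninstalling would then be removing the directory and the symlink, which is exactly the point where, as noted below, this starts to look like a (very small) package manager after all.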
This topic made me think of Snappy Ubuntu. It was unveiled recently that they plan on using Snappy packages over the current use of .deb: https://plus.google.com/+WillCooke/posts/AxfoU3N1Ezo
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
This topic made me think of Snappy Ubuntu. It was unveiled recently that they plan on using Snappy packages over the current use of .deb: https://plus.google.com/+WillCooke/posts/AxfoU3N1Ezo
Snappy seems to be the 1:1 translation of this systemd draft:
http://0pointer.net/blog/revisiting-how … stems.html
How? I mean, as a developer, if I just need to put the compiled binary, the relevant libraries, and all the required files in a single archive, isn't that simpler than having to twist everything so it fits into traditional unix conventions, and then trying to push that into some specific format required by some package manager?
I think your /app is the same as /opt, so it isn't really new. There are already enough applications using this approach.
Last edited by progandy (2015-04-25 11:26:09)
| alias CUTF='LANG=en_XX.UTF-8@POSIX ' |
Scimmia wrote:what happens when a problem is found and fixed in a lib used by nearly every application? Every one of them issues a new package?
Assuming that the developers of those applications decide to link to a new version of that library, and make a new release, then yes. Why not?
What about abandoned ones?
I don't really understand the question - As long as you have the archive (on your system, or in a repository), you can simply continue to use the application, because it's completely self-contained. The update cycle depends on the developers of the application in question, and/or the people who push their changes into repositories.
Perhaps an example will help:
There's a security hole in a library (e.g. a buffer overflow exploit). Currently, once you download the fixed lib, every application is fixed. Under your scenario, you have to wait for every application to be updated before your system is secured, so potentially your system will be vulnerable for much longer. Worse, abandoned applications will not be updated and will remain vulnerable, so your system will never be secured for as long as those applications remain installed.
Unia wrote:This topic made me think of Snappy Ubuntu. It was unveiled recently that they plan on using Snappy packages over the current use of .deb: https://plus.google.com/+WillCooke/posts/AxfoU3N1Ezo
Snappy seems to be the 1:1 translation of this systemd draft:
http://0pointer.net/blog/revisiting-how … stems.html
How? I mean, as a developer, if I just need to put the compiled binary, the relevant libraries, and all the required files in a single archive, isn't that simpler than having to twist everything so it fits into traditional unix conventions, and then trying to push that into some specific format required by some package manager?
I think your /app is the same as /opt, so it isn't really new. There are already enough applications using this approach.
Right, I meant to link to those too but forgot. There is also GNOME's xdg-app.
Last edited by Unia (2015-04-25 12:48:58)
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
I don't much care for this approach personally, so I've never learned much about it. But I'd be very surprised if the app bundles included all the .so files for dependencies. I imagine (hope) they'd be statically linked binaries. There are a lot of pros and cons to balance when choosing between static and dynamic linking. On most systems the balance seems to fall in favor of dynamic (though projects like stali would disagree). However, once every app ships with all its own libraries and nothing is actually shared, *all* of the benefits of dynamic linking vanish. Static linking app bundles seem like a no-brainer to me - and so this approach might be better suited to an already statically linked base (like stali).
But the following point is the biggest one for me here:
There's a security hole in a library ... so your system will never be secured for as long as those applications remain installed.
To put a finer point on this, each end user would then be fully responsible for keeping track of any and all security issues that may arise in any and all of their programs. And "upstream" will likely never fix things, as I doubt many GUI app developers keep fully informed on every low-level security threat. I'm very glad we have a division of labor among devs; when the devs who focus their energy on keeping informed of the security holes in low-level libs catch something, they fix it, and that fix is "automagically" employed by every program that uses that library, without the app developer needing to know anything about it.
"UNIX is simple and coherent..." - Dennis Ritchie, "GNU's Not UNIX" - Richard Stallman
Scimmia wrote:what happens when a problem is found and fixed in a lib used by nearly every application? Every one of them issues a new package?
Assuming that the developers of those applications decide to link to a new version of that library, and make a new release, then yes. Why not?
Why not? So when Heartbleed was found, it would be up to the individual application developer to decide if they should include the fix and issue a new package? How long would that take, considering how many programs link to openssl? Or, as I originally asked, what happens to abandoned programs? Yes, it continues to run, but it's still vulnerable.
Scimmia wrote:I could point to many other similar issues.
If you can, please do - I'm interested in this.
Let's just hit few of the bigger ones:
Nearly every executable on your system uses glibc and/or gcc-libs. How many copies of it do you need on your machine?
In the same vein, how many copies do you need in memory? In your system, every process would have its own copy of these libs in memory. Horribly inefficient.
Again in the same vein, does a simple Python script now include the entire Python interpreter? Does that seem sane?
If you're distributing binaries of all of these libs, you MUST provide a way to get the sources. This is required by the license. For an individual developer, this would quickly become unmanageable.
Speaking of unmanageable, keeping track of every single lib you link to *and the libs they then link to* becomes unmanageable in all but the simplest programs. Check the output of ldd on a program some time; do you, as a developer, want to keep track of what every one of those libs' upstreams is doing?
Should I continue?
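The ldd check suggested above is easy to try. Under the bundled-app model, every line of its output would be a library the application developer has to track and ship themselves; the exact output varies from system to system.

```shell
# List the shared libraries that even a small program pulls in.
# Output differs per system; on glibc systems /bin/sh links against
# at least libc and the dynamic loader.
ldd /bin/sh
```

Run it against something larger (a browser, an editor) and the list grows into dozens of entries, each with its own upstream and release schedule.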
How? I mean, as a developer, if I just need to put the compiled binary, the relevant libraries, and all the required files in a single archive, isn't that simpler than having to twist everything so it fits into traditional unix conventions, and then trying to push that into some specific format required by some package manager?
I don't understand what the roadblock is here. You don't have to put them into a single archive; you have to put them in every archive that depends on them. All those bits and pieces are designed to interact with each other. Taking my system as an example: I have vim, cmus, ranger, vit, mutt, htop, emacs, and irssi installed. Every one of those depends on ncurses. So rather than installing the files distributed with ncurses at the root of the filesystem and allowing all packages to access that one copy, you want to create 8 copies in 8 different places that all need to be updated simultaneously. How exactly is that simpler, and how does doing more stuff amount to less effort? Like I said, there may be valid reasons for wanting something like this, but simplicity most certainly is not one of them.
The directory tree for bash doesn't seem that large...
I used it as an example because a) it's rather small, yet b) it's a fundamental component of most (all?) GNU/Linux distributions. There's a reason I chose Haskell packages as the example to look into in my first post.
I don't think you would need anything that complicated. If a linux distro could be configured to assume that all required files for a given application are located in a single directory, you could just have a script that downloads the archive, puts it in /app, and then makes a symlink (in a directory that's already in your path) to the binary.
Again: simple? No. Now you've got two files called "bash" residing in two different parent directories called "bin" in different parts of the filesystem, and you need a mechanism to reverse that multi-step installation process. That's what a package manager does.
Ok, there seem to be some common/core points here; I'll try to address them:
Security:
Let's assume that security is fairly low on my priority list. I know that sounds insane to many (or probably most), but let's just start from there.
Resources:
Let's assume that I have gigabytes of ram, terabytes of storage, and that I'm not shy about using either to store redundant bits, if that's truly required to achieve what I want.
What I want:
I want a Linux distribution in which applications are just simple archives, which can be extracted to /app (or something like that), and used as completely self-contained software, which can't be broken in the manner by which package managers could typically break installed software (or fail to install it properly in the first place).
If I hinted at a specific implementation of this idea (it seems I did), I would like to clearly label that as a mistake: I don't know how best to implement this, and I don't want to give that impression. However, looking at something like GoboLinux, it seems like it's possible, and that it has the potential to make things simpler for both users (straightforward install/uninstall; multiple versions; no conflicts), and developers (no need to learn package manager formats; able to simply ship the relevant files).
Static linking app bundles seem like a no-brainer to me - and so this approach might be better suited to an already statically linked base (like Stali).
I think you're right. Stali looks interesting - Thanks for the reference!
I should have figured this out earlier. This is nothing more than an exercise in futility.
looking at something like GoboLinux, it seems like it's possible, and that it has the potential to make things simpler
GoboLinux actually uses a standard FHS layout but conceals it from the end-user with a custom kernel module, and then symlinks the underlying directory structure to the "fake" directories that the user can access.
Not simple at all...
GoboLinux actually uses a standard FHS layout but conceals it from the end-user with a custom kernel module, and then symlinks the underlying directory structure to the "fake" directories that the user can access.
Not simple at all...
On their "at a glance" page, they state:
This is for aesthetic purposes only and purely optional, though: GoboLinux does not require modifications in the kernel or any other system components.
Perhaps the implementation (of this conceptually simpler approach) could be made extremely simple, by starting from something like Stali, and going with the static linking approach mentioned by Trilby?