ngoonee wrote:
tonythed wrote:
I did say "more risky than normal" which implied some constant level of risk. What I look for in a distro is a package manager that makes actual sense, which is why I use arch/pacman.. it seems to do that better than most these days. What I hate to see happen to a distro is for the developers to decide that dangling by a thread over the bleeding edge is an important goal.. rather than just operating close to the bleeding edge with a safety margin in between. There is no need for a stable repository to be at high risk; just keep the software in testing long enough to be tested to a point where the risk is low enough to allow the software to be useful in important applications. Anyway, my best wishes and thanks go to the Arch developers for their work and excellent distro.
You misunderstand the use of [testing]. Refer here
[core], [extra], and [community] are not 'stable' in the sense you're thinking of (which is: my business can depend on this). The packages there are only guaranteed not to cause system breakage on systems that match those used by [testing] users (devs and others). That's how Arch runs; nothing is kept in [testing] any longer than strictly necessary.
Doesn't your last paragraph contain an oxymoron? Breakage protection is guaranteed only for the particular hardware used in testing.. so wouldn't it be a logical assumption that the longer the packages stay in testing, the more opportunity they will have to be run on varied and numerous hardware.. and therefore be less likely to break when moved to core and extra? This assumes there are growing numbers of users on the testing repo and that their needs require using more and different packages as time progresses. Or are the users on the testing repo pretty much static in package needs and number? I wouldn't think so, but I don't really know either way.
There IS no 'breakage protection' per se, and [testing] users normally update fairly regularly. You seem to be suggesting leaving packages in [testing] for extended periods of time (weeks?), which is unreasonable, since dependencies and various general libraries would quickly make things unworkable.
Some packages ARE left for a longer time, but that's mostly because it's known that a problem could potentially cause serious harm. In general, though, once a package has been in [testing] for a full 2 days there are unlikely to be any further problem reports from [testing] users (in my experience).
tonythed wrote:
sitquietly wrote:
It seems to me that you yourself have given us the proof that Archlinux is risky.
...
What I've been noticing as well as the "show stoppers" lately is that more and more guys seem to be installing Arch and NOT accepting that it is a testing distro which is only appropriate for linux experts. Have you noticed that there's a lot of whining in the forum? A lot of users seem to expect their problems to be fixed by someone other than themselves.
I did say "more risky than normal" which implied some constant level of risk. ...
ngoonee wrote:
You misunderstand the use of [testing]. Refer here
[core], [extra], and [community] are not 'stable' in the sense you're thinking of (which is: my business can depend on this). The packages there are only guaranteed not to cause system breakage on systems that match those used by [testing] users (devs and others). That's how Arch runs; nothing is kept in [testing] any longer than strictly necessary.
Doesn't your last paragraph contain an oxymoron? Breakage protection is guaranteed only for the particular hardware used in testing.. so wouldn't it be a logical assumption that the longer the packages stay in testing, the more opportunity they will have to be run on varied and numerous hardware.. and therefore be less likely to break when moved to core and extra? This assumes there are growing numbers of users on the testing repo and that their needs require using more and different packages as time progresses. Or are the users on the testing repo pretty much static in package needs and number? I wouldn't think so, but I don't really know either way.
Sorry to hear that you've been experiencing bumps in the road, but this should act as a motivator rather than anything else.
I've just pulled 5 months of updates, a very big no-no in rolling-release land, and they all applied without issue. I recall more updates breaking in the past, surely.
sitquietly wrote:
It seems to me that you yourself have given us the proof that Archlinux is risky.
...
What I've been noticing as well as the "show stoppers" lately is that more and more guys seem to be installing Arch and NOT accepting that it is a testing distro which is only appropriate for linux experts. Have you noticed that there's a lot of whining in the forum? A lot of users seem to expect their problems to be fixed by someone other than themselves.
tonythed wrote:
I did say "more risky than normal" which implied some constant level of risk. What I look for in a distro is a package manager that makes actual sense, which is why I use arch/pacman.. it seems to do that better than most these days. What I hate to see happen to a distro is for the developers to decide that dangling by a thread over the bleeding edge is an important goal.. rather than just operating close to the bleeding edge with a safety margin in between. There is no need for a stable repository to be at high risk; just keep the software in testing long enough to be tested to a point where the risk is low enough to allow the software to be useful in important applications. Anyway, my best wishes and thanks go to the Arch developers for their work and excellent distro.
You misunderstand the use of [testing]. Refer here
[core], [extra], and [community] are not 'stable' in the sense you're thinking of (which is: my business can depend on this). The packages there are only guaranteed not to cause system breakage on systems that match those used by [testing] users (devs and others). That's how Arch runs; nothing is kept in [testing] any longer than strictly necessary.
sitquietly wrote:
It seems to me that you yourself have given us the proof that Archlinux is risky.
...
What I've been noticing as well as the "show stoppers" lately is that more and more guys seem to be installing Arch and NOT accepting that it is a testing distro which is only appropriate for linux experts. Have you noticed that there's a lot of whining in the forum? A lot of users seem to expect their problems to be fixed by someone other than themselves.
I did say "more risky than normal" which implied some constant level of risk. What I look for in a distro is a package manager that makes actual sense, which is why I use arch/pacman.. it seems to do that better than most these days. What I hate to see happen to a distro is for the developers to decide that dangling by a thread over the bleeding edge is an important goal.. rather than just operating close to the bleeding edge with a safety margin in between. There is no need for a stable repository to be at high risk; just keep the software in testing long enough to be tested to a point where the risk is low enough to allow the software to be useful in important applications. Anyway, my best wishes and thanks go to the Arch developers for their work and excellent distro.
sitquietly wrote:
I've dropped out of the testing team and started building a more perfect system for myself.
..... If you have managed to understand the logic of updates and the mutual dependency of various subsystems, then you get the best system you can dream of. Updating is no longer a nightmare resulting in frustration, because you know how to do it right, what first and what then (and what if).....
You've touched on one of the great advantages of Arch for me. Some packages are touchy to install from source code because of cyclic dependencies, e.g. the toolchain (linux-api-headers->glibc->binutils->gcc->binutils->glibc) and the ghc chain. It's hard to do a source-based install from a bare hard drive. With Arch it's simple! Just install the Arch Linux binary packages and THEN recompile from source code with local changes. Arch makes a great bootstrap system.
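The bootstrap pattern described above can be sketched roughly as follows. This is only a sketch: it assumes an Arch system of the era of this thread, with the abs and base-devel packages installed, and glibc is just an illustrative choice of package.

```shell
# Sketch: rebuild one toolchain package from source on top of the
# binary install. 'abs' was the PKGBUILD-sync tool of this era;
# glibc is only an example package name.
abs core/glibc                       # sync the PKGBUILD tree for glibc
cp -r /var/abs/core/glibc ~/build/   # work on a local copy
cd ~/build/glibc
# ...apply local changes to the PKGBUILD or sources here...
makepkg -s                           # build, pulling build deps via pacman
sudo pacman -U glibc-*.pkg.tar.xz    # replace the binary package
```

Repeating this around the dependency cycle (binutils, gcc, then binutils and glibc again) rebuilds the toolchain without ever leaving a working system.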
I've dropped out of the testing team and started building a more perfect system for myself.
That's probably a logical way to go if you have become aware of your needs and realized that your *nix literacy has matured enough to do yet a bit more on your own. Nowadays, I wouldn't be able to revert to a distro that I would have to use and accept passively. They are mostly fine and I can work with them if necessary, but as my hardware is my kingdom, I feel much more comfortable with a system that I can fully control from the first to the last second of its operation. I set the limits of its novelty and at the same time make it stable wherever I wish. If you have managed to understand the logic of updates and the mutual dependency of various subsystems, then you get the best system you can dream of. Updating is no longer a nightmare resulting in frustration, because you know how to do it right, what first and what then (and what if). You make your update schedule appropriate for yourself and life is beautiful.
A lot of users seem to expect their problems to be fixed by someone other than themselves.
That's the drawback of Arch becoming more popular than it should be. One can successfully use Arch as their first distro ever, as long as they understand the consequences of Arch's policy. However, many come from other *nices bringing in habits that aren't looked upon well here. That's why they are doomed to fail. Still, I'd say it has more to do with the art of reading and understanding than with any distribution in particular, but that's my skeptical assumption about some general tendencies, which doesn't have to be valid of course...
Is it just me, or has anyone else noticed over the last few months that upgrading a system seems to be a little more risky than normal? I have experienced several show stoppers after performing an upgrade lately..
It seems to me that you yourself have given us the proof that Archlinux is risky. It's like a mathematical existence proof: if one person can show us a properly installed and updated Archlinux system that has "show stopper" bugs then Archlinux is risky. It's a characteristic Archer response to say that there is no problem because "it works for me". If you read the forums every day you'll see that you aren't alone, many people report very frustrating problems almost daily.
The beauty of Arch to me is that I've always been able to resolve my problems within a few days by waiting for the devs to come up with an update, or by "researching" the problem and fixing it myself, sometimes by recompiling packages that I can see need to be recompiled against lower-level updates.
I too got the impression over the last year that Arch was getting more buggy but it probably only takes one personal frustration to give me that impression. It is not Arch developers' fault. I don't think that it's possible to "roll" together every upstream stable release, as they're released every day, and have a smooth functioning system. We Archers are the testing team for upstream (Arch "stable" is about the equivalent of Debian "experimental").
I've dropped out of the testing team and started building a more perfect system for myself. I build everything from source code using the Arch abs/aur and freeze packages by group; e.g. right now I have my "compiler" group frozen to keep me at gcc 4.6.3. I like my semi-rolling system because I'm running a stable system with latest releases of my important apps and I'm learning a lot.
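For comparison, pacman itself can approximate this kind of per-package or per-group freeze without leaving the binary repos; a sketch of the relevant /etc/pacman.conf lines (the package list here is illustrative, not the poster's actual "compiler" group, which is his own local grouping):

```ini
# /etc/pacman.conf (fragment) -- hold these back during pacman -Syu.
# IgnorePkg and IgnoreGroup are real pacman options; the names listed
# are only an example of freezing a toolchain.
[options]
IgnorePkg   = gcc gcc-libs binutils glibc
IgnoreGroup = base-devel
```

pacman will then skip these packages on upgrades and only warn that they are being ignored.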
What I've been noticing as well as the "show stoppers" lately is that more and more guys seem to be installing Arch and NOT accepting that it is a testing distro which is only appropriate for linux experts. Have you noticed that there's a lot of whining in the forum? A lot of users seem to expect their problems to be fixed by someone other than themselves.
My last "showstopper" was when a kernel upgrade failed to correctly update the initramfs, leading to a non-booting system. It was an easy fix, and it got me into the habit of manually running mkinitcpio after a kernel upgrade, before the reboot. In the past 3-4 weeks I've had no breakages at all that I've noticed, even running on testing. I was reaching the point where I felt I should break something myself, just to give me something to do.
Something similar happened to me as well. I could run mkinitcpio while logged in and, though it would say it had successfully built the image, my machine wouldn't boot. It would boot after I'd used a LiveCD to chroot and build the image from there. Turns out I hadn't properly merged my mkinitcpio.conf after a recent update. When working with some problem in Arch, the null hypothesis should always be "PEBCAK." I rarely come across a serious problem I haven't caused myself.
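The habit described above amounts to a short ritual after any kernel upgrade, before rebooting. A sketch, assuming the stock kernel package (the preset name 'linux' is the stock default; adjust for custom kernels):

```shell
# Regenerate the initramfs for the stock kernel preset before rebooting.
sudo mkinitcpio -p linux
# Sanity check: confirm the images in /boot were just rewritten.
ls -l /boot/initramfs-linux.img /boot/initramfs-linux-fallback.img
```

If mkinitcpio reports a missing hook or module here, it is far cheaper to fix it now than from a LiveCD chroot after the reboot fails.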
But for the most part, if you are careful about updating anything involved in the boot process, updating Arch has been pretty uneventful lately.
Unlike most of the others above, I run Arch only on my personal machine (my only machine) and haven't experienced any problems ([testing] user here) that weren't almost immediately queried on [arch-general] or the like. Since my timezone puts me after the majority of upgraders, I generally know of potential problems even before they occur. Having bog-standard hardware (not-brand-new nvidia GC for example) helps as well.
Ditto :-)
tomk wrote:
Standard advice: don't run Arch on a server or other important system unless you know what you're doing.
tonythed wrote:
Yes, that sounds like some possibly good advice. But I have been using Linux almost exclusively since the mid-to-late 90's with great success, so I probably know what I am doing by now. I do like the Arch distro and use it on several important servers, and so far it has worked very well.. but regardless of the server admin's experience, if a routine upgrade is prone to break something, that risk can cause some stress, to say the least. Just checking to see if anyone else had noticed anything similar.. probably just a bit of bad luck for me, having the right combination of hardware to be affected.. I hope :-).
Dangerous assumption. You'd likely not make any newbie errors, but that just means the inevitable errors you DO make (all of us do) will be complicated enough to be hard to solve.
Unlike most of the others above, I run Arch only on my personal machine (my only machine) and haven't experienced any problems ([testing] user here) that weren't almost immediately queried on [arch-general] or the like. Since my timezone puts me after the majority of upgraders, I generally know of potential problems even before they occur. Having bog-standard hardware (not-brand-new nvidia GC for example) helps as well.
As long as there is some type of monitoring going on in [testing] that sets some kind of minimum error-free user-hours number on the updates before they are allowed to be moved to stable.. then all should be well.
If you have "unusual" hardware or are concerned about the amount of time before updates move from [testing], you could always enable the testing repository on your machine and provide valuable feedback to the devs before the updates are released.
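Enabling it is a small pacman.conf change. A sketch (in the era of this thread the repo was simply named [testing]; it must be listed above [core] so its newer packages take precedence):

```ini
# /etc/pacman.conf (fragment) -- [testing] must appear before [core]
# so its packages win when both repos carry the same name.
[testing]
Include = /etc/pacman.d/mirrorlist

[core]
Include = /etc/pacman.d/mirrorlist
```

After that, a full 'pacman -Syu' moves the system onto the testing track, and feedback can go to the bug tracker or [arch-general].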