Why hasn't this been created yet? A daemon to handle the web browsing tasks on a computer seems like a feature that would be very much appreciated. Think of all the browsers that share so much code; wouldn't it be smart for a daemon to take the best of all worlds and become the fastest possible way to render HTML?
This is an idea I had, and I'd just like to hear some thoughts, or find out whether it has already been dismissed as an absolute pipe dream.
What would be the pros and cons of something like this? If you guys think this is a good idea, I'm going to invest some time and money into it.
I really enjoy having a daemon for playing my music; I don't see why the same model shouldn't work for something like a browser client.
Thanks for keeping this community top of the line, Archers; I can't wait to hear your thoughts. If this is a dumb idea, please just say so with a short explanation of why. I'm really curious, and I haven't been able to find much discussion about this kind of idea.
Moving to GNU/Linux Discussion...
One con that immediately comes to mind is that the daemon would be specific to the libraries used by the browser. All WebKit browsers could be supported by one daemon (and even then only if they use the same version of WebKit), but what about non-WebKit browsers?
Then if you say the daemon should handle both, someone who genuinely doesn't use the other type of browser would just see it as bloat.
And if one browser depends on a newer version of WebKit (or any library) while another on your system is not yet compatible with the newer version, what do you do? Effectively one of your browsers becomes dead weight.
There's no such thing as a stupid question, but there sure are a lot of inquisitive idiots!
If you care about saving space, you can use https://aur.archlinux.org/packages.php?ID=21627 :-)
I don't think browsers actually share that much HTML-rendering code; check http://en.wikipedia.org/wiki/Web_browser_engine
Richard Stallman already does this!
> Richard Stallman already does this!
That's an awesome idea. I think it would take a lot of self-control at first, but it would ultimately cull the unnecessary, theoretically leaving more room and time for productive endeavors in the long run (although I guess that depends on what you do and how your productivity is tied to the internet). Regardless, on today's web I think that if I implemented this style of internet activity, I would end up wget'ting more than I should (perhaps the daemon would need a fetch quota), and I would require more and more web functionality for certain sites, such as purchases or banking, so that I'd ultimately be back on a graphical web browser for those few, and then find myself leaving it open to browse everything else. Hmph. Cyclical webs.
This probably isn’t what you had in mind, but I’d like to set up a Squid proxy to do things like block ads, redirect to SSL à la HTTPS Everywhere, and so forth… basically, pull out functionality that is typically found in browser addons so that all applications (like browsers without the addons) can benefit. Something in the same vein is Adsuck, a DNS server that blocks content from ad domains.
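For what it's worth, the ad-blocking part should only take a couple of lines of squid.conf. A minimal sketch (the blocklist path and filename are just examples I made up; the file would hold one ad domain per line):

    # /etc/squid/squid.conf (fragment)
    # Match any request whose destination domain is listed in the blocklist file
    acl ads dstdomain "/etc/squid/ad-domains.txt"
    # Refuse those requests before the normal allow rules are reached
    http_access deny ads

Then browsers (and anything else) just point at the proxy, e.g. http_proxy=http://localhost:3128, and everything behind it gets the filtering for free.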
> Richard Stallman already does this!
That is so comical, and yet so wise if the internet detracts from your productivity.