Hi guys!
I know there's a package that contains the Arch wiki, but the pages in that package are ugly and seem to lack graphics. Either way, it's not adequate. Is there any way I can download the Arch wiki completely, just as it is on the Web, and save it, say, on a CD-R to use as help when doing Arch installations on other machines? That way I'd be able to click on links in those pages and everything would work and look like I'm surfing the Web.
Offline
> Any normal way to download Arch wiki?
Not yet: https://bbs.archlinux.org/viewtopic.php?id=94201&p=1
Offline
I bet there is some kind of Firefox plugin that can save websites that way, but I don't know if it is appropriate to do so.
Offline
sand_man wrote:I bet there is some kind of Firefox plugin that can save websites that way, but I don't know if it is appropriate to do so.
There is a way to scrape the pages one by one, keeping the 'Note' and 'Tip' etc. pretty formatting, but it's long and painful. In the near future there may be a new download provided.
Offline
wget can mirror, but it would be inappropriate to put so much load on the Arch server by requesting EVERY wiki page when there's already a package available with the information you need.
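For reference, a mirror invocation would look roughly like this. It's only a sketch (the URL and flags assume a reasonably recent wget), and --wait and --limit-rate are there specifically to go easy on the server:

# mirror the wiki for offline browsing, throttled to be polite
wget --mirror --convert-links --page-requisites --adjust-extension \
     --no-parent --wait=2 --limit-rate=100k \
     --domains wiki.archlinux.org \
     https://wiki.archlinux.org/

--convert-links rewrites the links afterwards so they still work when browsing from disk.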
Are you familiar with our Forum Rules, and How To Ask Questions The Smart Way?
BlueHackers // fscanary // resticctl
Offline
sand_man wrote:I bet there is some kind of Firefox plugin that can save websites that way, but I don't know if it is appropriate to do so.
"Scrapbook+" should do the trick. If I remember it correctly, there was an option in "capture page as" to follow links and save these pages too (four should be enough). Even if all other web sites except the arch wiki are blocked, it will result in a huge chunk of data, because everything would be saved, like all languages, all discussions, all revisions, even all obsolete media files and so on. Not to mention the amount of generated traffic, which should be a lot higher than the official docs with 7.5 MB. It could work (maybe), but without asking if this is okay, it would make you a dick.
Last edited by mento (2010-10-04 09:46:21)
Offline
sand_man wrote:I bet there is some kind of Firefox plugin that can save websites that way, but I don't know if it is appropriate to do so.
mento wrote:"Scrapbook+" should do the trick. If I remember correctly, there was an option in "capture page as" to follow links and save those pages too (a depth of four should be enough). Even if you block every site except the Arch wiki, it will result in a huge chunk of data, because everything would be saved: all languages, all discussions, all revisions, even all obsolete media files and so on. Not to mention the amount of generated traffic, which should be a lot higher than the 7.5 MB of the official docs. It could work (maybe), but doing it without asking if it's okay would make you a dick.
1. It would be nice if the links were converted for local use.
2. How do you search this thing? You can browse it from cover to cover, use the index page, or 'html2text + grep' it (rough sketch below), but none of that is very user-friendly ;P
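If you did end up with a local mirror, a crude full-text search could be hacked together along these lines. Only a sketch: it assumes the pages were saved as .html files under a hypothetical ./archwiki directory and that html2text is installed.

# print every saved page whose rendered text mentions the keyword
find ./archwiki -name '*.html' | while read -r f; do
    html2text "$f" | grep -qi 'keyword' && echo "$f"
done

Slow and dumb, but it beats reading cover to cover.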
Offline
Meh, because of you I installed it again.
1. They are converted. Just recall the page via the Scrapbook+ menu.
2. Yeah, without search it is really hard to use, but probably all client-side wiki downloads lack this feature.
Offline
I'd suggest you get http://tuxtraining.com/files/arch_linux_handbook.pdf - it's from 2009 but has really nice formatting (I think there's even a printed version available on Amazon)
"A computer lets you make more mistakes faster than any invention in human history - with the possible exceptions of handguns and tequila."
(Mitch Ratcliffe)
Offline
Try HTTrack, it could work...
http://www.httrack.com/
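Something along these lines might do it. Only a sketch, not tested against the wiki: -r4 limits the link depth, -A25000 caps the transfer rate at roughly 25 KB/s so the server isn't hammered, and the "+" filter keeps the crawl inside wiki.archlinux.org.

# mirror the wiki into ./archwiki with HTTrack, rate-limited
httrack "https://wiki.archlinux.org/" -O ./archwiki \
        "+wiki.archlinux.org/*" -r4 -A25000 -v

-A is HTTrack's bandwidth cap, the closest analogue to wget's --limit-rate.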
Offline
Try HTTrack, it could work...
http://www.httrack.com/
It still doesn't solve the question of how to search the wiki, and it will still download page by page, putting unnecessary load on the server.
Offline