Running a 40 GB ext3 filesystem with ~200 packages installed. I ran the defrag, and pacman is much quicker for updates and searching... even after multiple reboots. Nice job.
Edit: forgot to mention, my install was six months old before the defrag. I also noticed a significant decrease in hard drive thrashing when running pacman.
I can't believe we are still talking about defragging a filesystem in this day and age... *sigh*
Does this only happen with reiserfs? I was led to believe that ext3, JFS, and others don't need defragging, because they place consecutively accessed blocks closer together as a general rule.
No, that's not the problem. The problem is keeping several hundred files (none of them fragmented individually) close together, so pacman can read them quickly instead of seeking all over the disc. It's more of a dog-keeper problem.
It's a tad different here... under Windows, fragmentation happens when an individual file gets its blocks scattered across the disk.
That doesn't happen with the filesystems you mentioned: a file's blocks stay consecutive, or as close as possible. However, that holds for an individual file. A single file will rarely get fragmented, but a series of files that are accessed in order can suffer the same kind of fragmentation that a single file on a FAT partition can.
So, like i3839 said, if we were to tar the whole bloody thing and read from the tar file, it would never become fragmented, because the FS sees it as one file.
Hmmm, it might be possible to archive the thing up and mount it with some funky loop-back device at /var/lib/pacman.... *scratches chin*
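For illustration, the tar-it-up-and-lay-it-back-down idea might look something like this. This is only a sketch on a scratch directory with made-up file names; the real db lives at /var/lib/pacman and you'd want root, a backup, and no pacman running:

```shell
# demo on a scratch copy of a pacman-style db tree (paths are made up)
db=$(mktemp -d)/pacman
mkdir -p "$db/local/foo-1.0"
echo "demo" > "$db/local/foo-1.0/desc"

tar -cf "$db.tar" -C "$(dirname "$db")" pacman   # snapshot the whole tree
rm -rf "$db"                                     # drop the scattered copy
tar -xf "$db.tar" -C "$(dirname "$db")"          # re-extract in one sequential pass
```

Re-extracting writes all the small files back out in one go, so the filesystem tends to place them close together on disk.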
A loop-mounted filesystem will work for this; no need to use tar.
cd /var/lib
dd if=/dev/zero of=Paclib.img bs=1024k count=60
mke2fs -m 0 -b 1024 -i 1024 -O dir_index -F -v -L VLPimg Paclib.img   # -F: target is a file, not a block device
mv pacman pacman.old
mkdir pacman
mount -o loop Paclib.img pacman
cp -a pacman.old/. pacman/   # the trailing /. also picks up any dotfiles
(edit) nvm about inode size, I was remembering something unrelated.
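One caveat if you keep the loopback image: the mount won't survive a reboot on its own. An /etc/fstab line along these lines (untested, using the same names as the commands above; mke2fs without -j makes ext2) should bring it back at boot:

```
/var/lib/Paclib.img  /var/lib/pacman  ext2  loop  0 0
```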
A loopback filesystem for storing application-specific information?
This is what databases were built for, people...
Simply... amazing...
Well, yes, it's not the ideal solution, but it can be done today, not when (if) pacman DB support is ready. YMMV.
Thanks for all the responses to my initial question about Linux filesystem defragmentation. Now I see that the files don't get fragmented; they are just scattered around the HDD in a more or less random manner.
Still, I dare say Linux needs some kind of tool that would deal with the scattered files and put them close to each other in a logical way. All the libraries (/usr/lib), for example, could be reordered the same way the pacman database files are. All the binaries, headers, and icons could be reorganised similarly. The problem with this kind of tool is that it would need to know which groups of files should be put together, so the files would need some kind of tag (lib, icon, header, etc.) that groups them into meaningful chunks. Anyway, I don't know if that makes any sense to you. I also doubt the speed increase would make up for all the hard work; I mean, Linux filesystems are already fast.
Hmmm, that won't help anything... libraries (ideally) are only looked up on disk once, then cached (ld.so.cache or something); binaries are usually only run one at a time, and smaller programs usually stay loaded in memory anyway...
Think about the binary case this way: if I run Firefox and then run Thunderbird, it doesn't matter where they sit on disk; they still have to be found individually because they're not being loaded at the same time (pacman, by contrast, is basically saying "open /var/lib/pacman/*"). It'd be like driving to Bill's house, driving home, then driving to Fred's house: it doesn't matter that Bill and Fred live next door to each other, you still make the whole trek twice.
I've made this script available as a package with a PKGBUILD. It and my other projects can be found here
I've now tried the current version from your website and it still doesn't work for me.
The packages tar, coreutils, and bash were re-installed.
I just don't know what to do. I had errors on my filesystem, but I resolved them recently.
Damn, dude, I have no clue what's wrong. Have you tried reinstalling diffutils? What kernel are you using?
Has anyone else had this problem?
Didn't help.
Linux lanrules 2.6.9 #1 Thu Oct 28 02:15:28 CEST 2004 i686 AMD Athlon(tm) XP 1900+ AuthenticAMD GNU/Linux
(self-built kernel 2.6.9)
In the script, where it says UNCOMMENT TO SEE SUMS, what exactly does it show?
It may be worth a try to install the stock Arch kernel (and reboot) to eliminate that from the picture.
If nothing else works, you can comment out lines 90-91, 95-116, and 118-124. That will take out the integrity check altogether. That's based on my latest version (1.6-2). I haven't tested it, so it'd be a good idea to back up /var/lib/pacman/local in case it craps out.
EDIT: do a diff on the same file like so:
diff foo.file foo.file; [ $? -ne 0 ] && echo "You shouldn't see this"
tell me if you see the message.
Uncommenting doesn't change anything (i.e. nothing additional is shown).
My diff works correctly.
I won't start installing and booting other kernel images (too lazy, sorry)...
I think it's risky to try without the check, since it's failing precisely because of differing checksums.
I'd rather try other things first.
It must not be extracting the correct data to diff against. Try this:
replace lines 98 and 99 with
olddb=$(cat old.sums | sed 's|\s*old.tar||g')
newdb=$(cat new.sums | sed 's|\s*new.tar||g')
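For what it's worth, those sed calls just strip the trailing filename off md5sum's output so that only the raw checksums get compared. With GNU sed (which understands \s), for example:

```shell
# md5sum output is "<checksum>  <filename>"; strip the "  old.tar" part
echo "d41d8cd98f00b204e9800998ecf8427e  old.tar" | sed 's|\s*old.tar||g'
# prints: d41d8cd98f00b204e9800998ecf8427e
```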
Didn't change anything.
The same differing checksums are produced with both the old and the new lines 98/99.
I thought you said earlier on page 1 that the sums appeared to be identical but the script thought they were wrong? If that's the case, there's very little chance it's this; more likely there's a major problem elsewhere.
Anyway, I've reviewed and played around with my code many times and I still can't see how it could be falsely crapping out on you with the integrity check. I'm sorry, but I'm completely out of ideas.
In the process of all this I've found some minor cosmetic and coding improvements, which I'll finish up and probably upload to the server later today.
Mmm, I don't know if you guys are aware, but Judd has already included such a script in the newest pacman, probably inspired by and based on Penguin's work.
Just pointing it out, as I see you're still fighting with it ;-)
I had a problem with pacman-optimize, but I just did manually what the script does and it worked, no worries. I've done some mucking around with my pacman DB files, though, so I wasn't going to report it as a bug unless other people got similar results.
Well, that figures. I see it's very similar to mine, except he tarred the folder and piped the sums in a much more sensible fashion.
Cam: just curious, what problems were you having with pacman-optimize? Did my script do the same?
At any rate, people should not use my script anymore. I'll remove it from my site and the wiki when I get a chance.
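Piping the tar straight into the checksum, rather than going through intermediate files, presumably looks roughly like this. This is a guess at the idea, not the actual pacman-optimize code, and the directory here is a scratch stand-in:

```shell
# checksum a directory tree in one pipe, no temp tar or .sums files on disk
dir=$(mktemp -d)
echo "demo" > "$dir/desc"
sum=$(tar -cf - -C "$dir" . | md5sum | cut -d' ' -f1)
echo "$sum"   # a 32-digit hex md5 of the tar stream
```

Running the same pipe over the tree before and after the copy and comparing the two sums gives you the integrity check in one line per side.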
I never tried yours; I planned on doing it myself, since it's not really a physically demanding job, but I never got around to it. Seeing that pacman 2.9.6 came with a similar script, I gave it a shot, but it crapped out on the integrity check. I manually ran diff /tmp/pacsums* and they do differ, so beats me what's happening. But yeah, I just cp'd, rm'd, and mv'd it myself this arvo, with the same results.
OK. I removed this script from my website and will do the same to the wiki entry once it's up and running again.
I'm going to lock this thread and advise everyone to use the pacman-optimize included in pacman 2.9.2-2 instead.
Congratulations!
Nice to see a user contribution make it into the official release.