Hi,
I already posted this on the Arch mailing list, but the subject was maybe a bit daunting..
Since my friend can't reproduce this problem on Gentoo, I would like to know if someone else has the same problem. The steps are easy:
1) create a dummy file (size: 5GB or less)
dd if=/dev/zero of=dummy bs=1M count=5000
2) create a loopback device on the dummy file
losetup /dev/loop/0 dummy
3) write on the loopback device or create a filesystem
dd if=/dev/zero of=/dev/loop/0 bs=1M
mke2fs -m 0 /dev/loop/0
Both commands (dd and mke2fs) should take ages to complete while hanging around in WCHAN `congestion_wait' most of the time.
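If you want to watch where dd sits while it runs, something along these lines works for me (assuming the loop device is /dev/loop/0; the count is just to keep the run short):
# dd if=/dev/zero of=/dev/loop/0 bs=1M count=500 &
# watch -n1 'ps -o pid,stat,wchan:25,cmd -C dd'
With the problem present, the WCHAN column shows congestion_wait almost the whole time.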
Your reply would be perfect if you could back it with a
# dumpe2fs -h /dev/<mypartition>
of the partition on which you've created the dummy file.
Thanks in advance,
pebo
Last edited by pebo (2007-11-04 23:44:13)
Offline
I get the "congestion wait" with the dd command on the loopback device, but not the mke2fs.
$ sudo dumpe2fs -h /dev/loop/0
dumpe2fs 1.40.2 (12-Jul-2007)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: cbcbc915-f68a-4ba4-9908-6af5dbc2afd7
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: resize_inode dir_index filetype sparse_super
Filesystem flags: signed directory hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 128016
Block count: 512000
Reserved block count: 0
Free blocks: 493526
Free inodes: 128005
First block: 1
Block size: 1024
Fragment size: 1024
Reserved GDT blocks: 256
Blocks per group: 8192
Fragments per group: 8192
Inodes per group: 2032
Inode blocks per group: 254
Filesystem created: Sun Nov 4 23:53:35 2007
Last mount time: n/a
Last write time: Sun Nov 4 23:53:36 2007
Mount count: 0
Maximum mount count: 26
Last checked: Sun Nov 4 23:53:35 2007
Check interval: 15552000 (6 months)
Next check after: Sat May 3 00:53:35 2008
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Default directory hash: tea
Directory Hash Seed: a0f65075-e117-4a57-944f-c2623d58ae60
What do you need this output for anyway?
And why do you need to use the loopback device? You can use mke2fs directly on a file.
Last edited by retsaw (2007-11-04 23:59:38)
Offline
retsaw wrote:I get the "congestion wait" with the dd command on the loopback device, but not the mke2fs.
Thanks for your output. When you say you get ``congestion_wait'' -- does that also mean it takes a long time to complete? I ask because I've realized that ``congestion_wait'' is not a good measure. (Sometimes dd apparently hangs in c_w even though it performs quite well.)
retsaw wrote:And why do you need to use the loopback device? You can use mke2fs directly on a file.
I need a block device because I would like to be able to mount the file later on.
Offline
Eh, sorry -- I just saw that you posted sudo dumpe2fs -h /dev/loop/0, while I need dumpe2fs -h /dev/<filesystem-on-which-youve-created-the-loopback-file>..
Offline
pebo wrote:retsaw wrote:I get the "congestion wait" with the dd command on the loopback device, but not the mke2fs.
Thanks for your output. When you say you get ``congestion_wait'' -- does that also mean it takes a long time to complete? I ask because I've realized that ``congestion_wait'' is not a good measure. (Sometimes dd apparently hangs in c_w even though it performs quite well.)
That depends on what you consider a long time, but yes, it was a lot slower. Typically about 7MB/s, but one run was quick at 44MB/s.
pebo wrote:I need a block device because I would like to be able to mount the file later on.
You'll still be able to mount the filesystem later even if it's not created on a block device.
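Something like this should do it (off the top of my head, so treat it as a sketch; -F is needed because dummy is a regular file, and /mnt/dummy is just a placeholder mount point):
$ dd if=/dev/zero of=dummy bs=1M count=5000
$ mke2fs -F -m 0 dummy
$ sudo mount -o loop dummy /mnt/dummy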
Sorry, I didn't read the part about dumpe2fs properly; here is the output from the filesystem the loopback file was on:
sudo dumpe2fs -h /dev/mapper/lvm-stuff
dumpe2fs 1.40.2 (12-Jul-2007)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: af10e4bc-332a-44b0-b95d-1f6e3bfa3e80
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags: signed directory hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 53542912
Block count: 107061248
Reserved block count: 0
Free blocks: 1770338
Free inodes: 53540829
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1024
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Wed Jun 20 10:56:42 2007
Last mount time: Fri Nov 2 15:03:48 2007
Last write time: Fri Nov 2 15:03:48 2007
Mount count: 55
Maximum mount count: 60
Last checked: Sat Sep 1 18:13:59 2007
Check interval: 15552000 (6 months)
Next check after: Thu Feb 28 17:13:59 2008
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 440aa176-97fa-475a-8209-d96eff4a0c90
Journal backup: inode blocks
Journal size: 128M
Offline
pebo wrote:retsaw wrote:I get the "congestion wait" with the dd command on the loopback device, but not the mke2fs.
Thanks for your output. When you say you get ``congestion_wait'' -- does that also mean it takes a long time to complete? I ask because I've realized that ``congestion_wait'' is not a good measure. (Sometimes dd apparently hangs in c_w even though it performs quite well.)
That depends on what you consider a long time, but yes, it was a lot slower. Typically about 7MB/s, but one run was quick at 44MB/s.
The quick one wasn't the first one, was it?
I straced the mke2fs process and saw that it always hangs after a write() -- and while it does, my firefox & vi hang too.
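For reference, I'm watching it with something like this (-T prints the time spent in each syscall, and the trace filter limits the output to write calls):
# strace -T -e trace=write mke2fs -m 0 /dev/loop/0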
BTW, here is my dd output:
# dd if=/dev/zero of=/dev/loop/0 bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 547.585 s, 957 kB/s
^^^^^^^^
Last edited by pebo (2007-11-05 11:19:04)
Offline
pebo wrote:The quick one wasn't the first one, was it?
No, it was the second. I thought I'd test it again after the mke2fs went through quickly. I tried it a few more times after that and the quick dd didn't repeat itself.
Offline
Problem disappeared on 2.6.24-rc2:
http://lkml.org/lkml/2007/11/6/110
Last edited by pebo (2007-11-13 09:38:44)
Offline