
#1 2010-01-11 23:14:50

awayand
Member
Registered: 2009-09-25
Posts: 398

[SOLVED] cloning with cpio - running out of disk space

Hi,
I am trying to migrate my root partition to lvm, so I followed the advice in the lvm2 howto, but I have a strange problem:

Even though my destination partition is the same size as my source partition (same filesystem for both), I keep running out of space when using the following command:

# find / -xdev -print0 | cpio -dvmp --null /root/mnt

Here is the df -h output:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             7.3G  6.7G  211M  98% /
none                  501M  220K  501M   1% /dev
none                  501M     0  501M   0% /dev/shm
/dev/sda6             494M   12M  457M   3% /boot
/dev/mapper/vg-root   8.1G  8.1G     0 100% /root/mnt

Any ideas?
thanks!

EDIT: I thought I had solved it: the block size on my destination partition was 1024 bytes, while the block size on my source partition was 4096 bytes (see the tune2fs outputs below). So I recreated the destination filesystem with a matching 4096-byte block size and recopied, but I am still running out of space. What am I doing wrong? Is cpio not the best command for migrating a filesystem?
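For anyone following along, here is a quick way to compare block sizes. This is a hedged sketch: the `stat` check is non-destructive, while the tune2fs/mkfs.ext3 lines mirror the device path from this post and are commented out because mkfs wipes the target.

```shell
#!/bin/sh
# Print the block size of the filesystem backing / (non-destructive).
stat -fc 'block size: %s' /
# For a specific ext2/3 device, tune2fs reports the same figure:
#   tune2fs -l /dev/mapper/vg-root | grep 'Block size'
# Recreating the destination with a matching 4096-byte block size
# destroys its contents, so only run this on the empty target:
#   mkfs.ext3 -b 4096 /dev/mapper/vg-root
```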

Here are my tune2fs outputs for both partitions:

Source:

tune2fs 1.41.9 (22-Aug-2009)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          a30771be-e16f-4fab-b999-01d75fd79798
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              484272
Block count:              1925784
Reserved block count:     96302
Free blocks:              158443
Free inodes:              295913
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      999
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8208
Inode blocks per group:   513
Filesystem created:       Sat Nov  7 13:52:58 2009
Last mount time:          Mon Jan 11 22:32:01 2010
Last write time:          Mon Jan 11 22:26:10 2010
Mount count:              2
Maximum mount count:      26
Last checked:             Mon Jan 11 22:26:10 2010
Check interval:           15552000 (6 months)
Next check after:         Sat Jul 10 23:26:10 2010
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
First orphan inode:       141069
Default directory hash:   half_md4
Directory Hash Seed:      3d933bbd-2ce1-4584-80bb-119c520ab4f2
Journal backup:           inode blocks

And here my destination partition:

tune2fs 1.41.9 (22-Aug-2009)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          fbab80a0-dbd4-420a-a474-0c7b04b123b7
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              524288
Block count:              8388608
Reserved block count:     419430
Free blocks:              0
Free inodes:              376569
First block:              1
Block size:               1024
Fragment size:            1024
Reserved GDT blocks:      256
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         512
Inode blocks per group:   128
Filesystem created:       Tue Jan 12 00:21:55 2010
Last mount time:          Tue Jan 12 00:23:07 2010
Last write time:          Tue Jan 12 00:23:07 2010
Mount count:              1
Maximum mount count:      31
Last checked:             Tue Jan 12 00:21:55 2010
Check interval:           15552000 (6 months)
Next check after:         Sun Jul 11 01:21:55 2010
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:              256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      94c5fcdd-6591-4ff1-96f1-3bf7f424525d
Journal backup:           inode blocks

SOLVED: I ended up ditching cpio in favor of: # rsync -axSX / /destination, which worked perfectly.

Last edited by awayand (2010-01-12 13:28:03)


#2 2010-01-12 02:03:25

perbh
Member
From: Republic of Texas
Registered: 2005-03-04
Posts: 765

Re: [SOLVED] cloning with cpio - running out of disk space

Personally, I always use:

(cd "$SOURCE" && tar cf - .) | (cd "$TARGET" && tar xvpf -)

Mind you - I would _never_ copy a 'live' filesystem - use a live CD instead.
/proc can hold up to a gigabyte of virtual files while the system is running, and you surely do not want the current /dev devices, as they are dynamically created.
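The same tar pipe, exercised on scratch directories ($SOURCE/$TARGET under /tmp are illustrative stand-ins for the real mount points):

```shell
#!/bin/sh
# tar in "create to stdout" mode on one side of the pipe,
# "extract from stdin, preserving permissions" (-p) on the other.
SOURCE=/tmp/tar-src
TARGET=/tmp/tar-dst
mkdir -p "$SOURCE/sub" "$TARGET"
echo hello > "$SOURCE/sub/file"
(cd "$SOURCE" && tar cf - .) | (cd "$TARGET" && tar xpf -)
cat "$TARGET/sub/file"   # → hello
```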

Last edited by perbh (2010-01-12 02:05:55)


#3 2010-01-12 10:41:07

awayand
Member
Registered: 2009-09-25
Posts: 398

Re: [SOLVED] cloning with cpio - running out of disk space

/proc isn't being copied, as I am using the -xdev option in find (which excludes other filesystems), and the /dev files shouldn't be a big problem.

The reason for my overflowing filesystem must be a different one...

