Hi everyone,
Is there a way to use rsync to recursively sync a directory to a destination without recreating the source's subdirectories at the destination? I've written a (very simple) shell script that works, but it takes about 1.5 hours to complete.
#!/bin/bash
normal="/Volumes/export/images/AssetPushQueue/Pipeline195x195"
overview="/Volumes/export/images/AssetPushQueue/Pipeline138x85"
small="/Volumes/export/images/AssetPushQueue/Pipeline62x62"
thumb="/Volumes/export/images/AssetPushQueue/Pipeline34x34"
zoom="/Volumes/export/images/AssetPushQueue/Pipeline500x500"

# Pick out the regular files (skipping the subdirectories themselves)
# and copy each one into a flat destination directory.
for f in $(find "$normal" -type f); do
    rsync -vz "$f" /tmp/test/normal
done
for f in $(find "$overview" -type f); do
    rsync -vz "$f" /tmp/test/overview
done
for f in $(find "$small" -type f); do
    rsync -vz "$f" /tmp/test/small
done
for f in $(find "$thumb" -type f); do
    rsync -vz "$f" /tmp/test/thumb
done
for f in $(find "$zoom" -type f); do
    rsync -vz "$f" /tmp/test/zoom
done
exit
Thanks,
Jules
rsync -a /source/directory/to/sync/. /destination
Note the . at the end of the source path: it makes rsync copy the directory's contents without recreating the directory itself. You can also make rsync recreate only part of the directory structure, for example:
rsync -aR /source/./directory/to/sync /destination
(the /./ marks where the reproduced path starts; it requires --relative, i.e. the -R flag; see man rsync).
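To illustrate (a sketch with made-up paths): given a file /source/directory/to/sync/sub/file.jpg,
rsync -a /source/directory/to/sync/. /destination
creates /destination/sub/file.jpg, whereas
rsync -aR /source/directory/./to/sync /destination
creates /destination/to/sync/sub/file.jpg. Everything left of the marker is omitted at the destination; everything right of it is recreated.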
Also you might want to use different options (--hard-links and --delete are useful if you want to mirror a directory).
As for the time to complete, it depends almost exclusively on the speed of your hard drive (or of the network if you sync remotely). It is faster if the source and destination directories reside on different physical drives. Also, if you don't sync remotely, don't use the -z option: it will compress and decompress the files within the same machine, which is completely useless and slows down the process.
Hi Olive,
I'm digging through the rsync manual but I can't find exactly what I'm looking for.
The source contains a directory structure with a lot of subdirectories; every file is placed in a subdirectory derived from the last four characters of its name before the extension, so
testfile.jpg is placed in ./fi/le/testfile.jpg and archlinux.jpg is placed in ./in/ux/archlinux.jpg.
I would like to synchronize those files into a single directory, so that
./fi/le/testfile.jpg
./in/ux/archlinux.jpg
would end up as
/tmp/test/testfile.jpg
/tmp/test/archlinux.jpg
on the destination.
Thanks,
Jules
If you have many small files, try creating archives of them. Why do you need the one-file-per-dir thing?
The first run took 1.5h - and the next?
I'm not sure rsync alone can do what you need.
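One workaround sketch (paths taken from the script above, untested here): let find pick out the plain files; the destination stays flat because the copy command never sees the directories:
find /Volumes/export/images/AssetPushQueue/Pipeline195x195 -type f -exec cp {} /tmp/test/normal/ \;
This runs one cp per file, which is slow for 55,000 files; see the batched variant further down.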
The software on the server creates those sub-dirs automatically; I have no influence on that.
The first run was between 2 local machines and took 1.5 hours; the 2nd run took a few minutes less (roughly 55,000 files, about 350 MB).
That's about 6 kB per file on average (350 MB / 55,000 files).
Do you mean that the regular rsync is much faster?
rsync -v -- $(find "$normal" -type f) /tmp/test/normal
might help: it drops the loop, and why compress jpg files anyway? If you transfer some compressible files, use
--skip-compress=LIST    skip compressing files with a suffix found in LIST
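One caveat with the $(find ...) form (an assumption, given the roughly 55,000 files mentioned above): the expanded file list can exceed the shell's argument-length limit. A batching sketch with xargs avoids that:
find "$normal" -type f -print0 | xargs -0 sh -c 'rsync -v "$@" /tmp/test/normal/' sh
xargs splits the file list into chunks that fit, and the sh -c wrapper keeps the destination as the last argument of every rsync call.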
Hi Karol,
They're all jpg files, so I'll remove the -z switch; I'll also try to drop the loop.
Many thanks,
Jules
I've run some tests on 3k files, each about 6 kB, each in its own dir, and it took 6 minutes - both the first and the following runs. Rsyncing files one by one is not a smart move, it seems :-) Rsyncing the whole dir structure and moving the files to a single dir afterwards was much faster. If you can create a tar archive on that server and grab that archive, you should save a lot of time.
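A sketch of that two-step approach with the paths from this thread (untested here): mirror one tree as-is, then flatten the staged copy locally:
rsync -a /Volumes/export/images/AssetPushQueue/Pipeline195x195/ /tmp/stage/normal/
find /tmp/stage/normal -type f -exec mv {} /tmp/test/normal/ \;
For the tar variant, the fixed two-level aa/bb/ layout means every file sits at a constant depth, so extraction can flatten directly (assuming GNU or BSD tar and ssh access to a hypothetical host named server):
ssh server 'tar -C /Volumes/export/images/AssetPushQueue -cf - Pipeline195x195' | tar -xf - --strip-components=3 -C /tmp/test/normal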
Hi Karol,
Thanks for your help. I'll try to copy all the files to a single directory and sync that whole directory with the server.
Jules