Hi Guys,
I've searched these forums and the web for people with similar problems, but haven't found a solution.
I took the script provided in this wiki page and saved it as arch_backup.sh in my bin directory.
Then I ran it as follows:
Infernus:discourse(master!*) $ sudo ~/bin/arch_backup.sh /media/Backup/Arch
sending incremental file list
file has vanished: "/proc/10/exe"
file has vanished: "/proc/10/task/10/exe"
ERROR: destination must be a directory when copying more than 1 file
rsync error: errors selecting input/output files, dirs (code 3) at main.c(622) [Receiver=3.1.0]
total time: 0 minutes, 0 seconds
Infernus:discourse(master!*) $
Here is the relevant line from my mount -l output:
/dev/sdd1 on /media/Backup type ext4 (rw,nosuid,nodev,noexec,relatime,data=ordered)
I get the same error when I run the provided rsync command directly:
rsync -aAXv /* /media/Backup/Arch --exclude /dev/* --exclude /proc/* --exclude /sys/* --exclude /tmp/* --exclude /run/* --exclude /mnt/* --exclude /media/* --exclude /lost+found --exclude /var/lib/pacman/sync/*
I keep getting the same error message over and over again. The destination is a directory and is writable, otherwise the provided backup script would exit with an error message.
Any help?
Last edited by Balaji Sivaraman (2013-12-01 18:05:24)
Offline
EXIT VALUES
0   Success
1   Syntax or usage error
2   Protocol incompatibility
3   Errors selecting input/output files, dirs
4   Requested action not supported: an attempt was made to manipulate 64-bit files on a platform that cannot support them; or an option was specified that is supported by the client and not by the server.
5   Error starting client-server protocol
6   Daemon unable to append to log-file
10  Error in socket I/O
11  Error in file I/O
12  Error in rsync protocol data stream
13  Errors with program diagnostics
14  Error in IPC code
20  Received SIGUSR1 or SIGINT
21  Some error returned by waitpid()
22  Error allocating core memory buffers
23  Partial transfer due to error
24  Partial transfer due to vanished source files
25  The --max-delete limit stopped deletions
30  Timeout in data send/receive
35  Timeout waiting for daemon connection
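Given that table, a backup wrapper can treat code 24 (vanished source files) as benign, since files under pseudo-filesystems like /proc routinely disappear mid-transfer. A minimal sketch; the do_backup function here is a hypothetical stand-in for the real rsync invocation and just simulates exit code 24:

```shell
# Hypothetical sketch: map rsync exit codes from the table above to a
# human-readable verdict. do_backup stands in for the real rsync call;
# here it simply simulates exit code 24 (vanished source files).
do_backup() {
    return 24
}

do_backup
status=$?
case $status in
    0)  verdict="backup completed" ;;
    24) verdict="backup completed (some source files vanished)" ;;
    *)  verdict="backup failed with rsync exit code $status" ;;
esac
echo "$verdict"
```

In a real script you would replace the body of do_backup with the actual rsync command.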
ERROR: destination must be a directory when copying more than 1 file
Backing up is easy.
While the system is running, open up a terminal and run (as root):
# /home/user/Scripts/backup.sh /some/destination
Now, it would seem to me that all those error messages should indicate you missed that "/some/destination" bit.
Edit: Well, no you really didn't. Sorry. Does that directory exist, and can you write to it?
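Both of those conditions can be checked from the shell before running anything. A self-contained sketch; a temporary directory stands in here so the example is runnable, but in practice you would substitute /media/Backup/Arch:

```shell
# Hedged sketch: verify a destination exists and is writable before
# invoking rsync. A temp dir stands in for the real destination so
# this example is self-contained.
dest=$(mktemp -d)
if [ -d "$dest" ] && [ -w "$dest" ]; then
    result="destination ok"
else
    result="destination missing or not writable"
fi
echo "$result"
rmdir "$dest"
```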
Last edited by ewaller (2013-12-01 19:29:39)
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
Sometimes it is the people no one can imagine anything of who do the things no one can imagine. -- Alan Turing
---
How to Ask Questions the Smart Way
Offline
ewaller: Yup! Since I am using the Arch Wiki script directly, it has conditionals that check for both cases. When I ran it without a destination, I got the "No destination defined." error message, and with a non-writable directory I got the script's "Directory not writable." message.
I really have no idea what I am doing wrong here.
Infernus:~(master!*?) $ cd /media/Backup
Infernus:Backup() $ l   # alias for ls -lrt
total 32
drwxrwxrwx 5 balaji users 4096 Dec 1 21:39 .
drwxr-xr-x 8 root root 4096 Nov 23 04:06 ..
drwxrwxrwx 2 balaji users 4096 Dec 1 23:36 Arch
drwx------ 7 balaji users 4096 Dec 1 15:32 Dropbox
drwxrwxrwx 2 balaji users 16384 Dec 1 14:29 lost+found
That's what the Arch directory looks like.
Last edited by Balaji Sivaraman (2013-12-01 19:38:24)
Offline
In case it's useful to others, here is an rsync-based backup script that I wrote and have been using for some time.
It simply requires an external USB drive to be plugged in and mounted at /run/media/mike/ (easy to adapt for a username other than mike).
The script auto-detects the USB drive the backup will be written to, and assumes the drive has a (preferably ext4) partition containing a directory named BACKUPS, within which is a directory named after the hostname.
#!/bin/bash
#
# Back up stuff to backup disc
#
USE_AUTO_FS=0
if [ -z "$1" ]; then
    # Define variable EXT_DRIVE
    EXT_DRIVE=$(ls /run/media/mike/)
    echo "Mounted EXT_DRIVE is $EXT_DRIVE"
else
    EXT_DRIVE=$1
fi
ME_HOST=$(hostname)
ME_HOST=${ME_HOST%%.*}
BAKDIR=/run/media/mike/$EXT_DRIVE/BACKUPS
echo "Backing up $ME_HOST onto $BAKDIR"
list="/etc /var /boot /root /opt"
ToDir="$BAKDIR/$ME_HOST"
#mount $BAKDIR
# echo "Doing ls in $ToDir"
# ls $ToDir
TIMER=$(date)
TIMER1=$(date +%s)
echo "Started at $TIMER"
for i in $list
do
    echo " Doing $i ..."
    rsync --delete -aH --exclude 'lost+found' --exclude '/var/cache/*' "$i" "$ToDir"
done
TIMER=$(date)
TIMER2=$(date +%s)
echo "Finished at $TIMER"
TDIFF=$((TIMER2-TIMER1))
TDIFFM=$((TDIFF/60))
TDIFFR=$((TDIFF%60))
echo
echo "Process time elapsed is $TDIFF seconds = $TDIFFM min $TDIFFR sec"
#list directories processed
echo
echo "Directories processed in $ME_HOST:"
/usr/bin/ls -l "/run/media/mike/$EXT_DRIVE/BACKUPS/$ME_HOST/"
exit 0
Once the script has executed then just safely remove the external drive.
Feel free to adapt this script and maybe publish an even better modified version on the wiki for others to use?
Last edited by mcloaked (2013-12-01 20:10:09)
Mike C
Offline
Rather than running rsync in a loop, wouldn't you be better off passing all the directories to a single invocation? It would be more efficient.
You could then use the -P (--progress) switch to print progress, rather than echoing. Also, the -z switch enables compression, which is a nice feature.
Offline
I guess I could certainly use the progress flag instead of the echo lines - and thanks for the suggestion to use compression. Maybe my script will act as a starting point for a better version; I'm sure it can be improved. I wrote it quickly and have adapted it a couple of times since, but never really tried to optimise it fully. I may modify it according to your suggestions for my own use. I hadn't thought about the ls problem either! All good points.
Last edited by mcloaked (2013-12-01 21:19:37)
Mike C
Offline
jasonwryan: I had a go at rewriting the first section to avoid parsing ls output - that was not too difficult - so the start of the script becomes:
#!/bin/bash
#
# Back up stuff to backup disc
#
USE_AUTO_FS=0
if [ -z "$1" ]; then
    USBNAME=/run/media/$USER
    DIRCOUNT=$(find "$USBNAME"/* -maxdepth 0 -type d | wc -l)
    if [ "$DIRCOUNT" -gt "1" ]; then
        echo "Found $DIRCOUNT directories in $USBNAME"
        echo "rerun choosing a specific backup drive as parameter"
        echo "quitting"
        exit
    fi
    DIRCOUNT=0
    for full_path in "$USBNAME"/*/; do
        if ! [ -d "$full_path" ]; then continue; fi
        EXT_DRIVE=${full_path#"$USBNAME/"}
        EXT_DRIVE=${EXT_DRIVE%/}
        DIRCOUNT=$((DIRCOUNT + 1))
        if [ "$DIRCOUNT" -gt "1" ]; then
            echo "Found $DIRCOUNT directories in $USBNAME"
            echo "rerun choosing a specific backup drive as parameter"
            echo "quitting"
            exit
        fi
        echo "$EXT_DRIVE"
    done
    echo "Mounted EXT_DRIVE is $EXT_DRIVE"
else
    EXT_DRIVE=$1
fi
This is overkill on the check for more than one mounted drive - I know!
However, when I looked at using a single rsync command instead of a loop: it is certainly possible, but I wanted the script to report when it started each top-level directory, rather than print progress for every file in the recursive tree, and I could not find a way to do that. I could get a "total" progress indicator using the new --info=progress2 flag introduced in rsync 3.1.0, but that was not what I wanted. A single progress line for each item in the list variable was not something I could achieve. If it is possible, I would love to know how.
Anyway the code for doing this is:
list=( /etc /var /boot /root /opt )
echo "Directories to be backed up are ${list[@]}"
ToDir="$BAKDIR/$ME_HOST"
TIMER=$(date)
TIMER1=$(date +%s)
echo "Started at $TIMER"
rsync --info=progress2 --delete -aH --exclude 'lost+found' --exclude '/var/cache/*' "${list[@]}" "$ToDir"
TIMER=$(date)
TIMER2=$(date +%s)
echo "Finished at $TIMER"
TDIFF=$((TIMER2-TIMER1))
TDIFFM=$((TDIFF/60))
TDIFFR=$((TDIFF%60))
echo
echo "Process time elapsed is $TDIFF seconds = $TDIFFM min $TDIFFR sec"
Anyway, this seems to work whichever option one chooses. Also, the -z compression flag is not necessarily faster: with a lot of small files over a gigabit link it can actually be slower, so it depends on the details of how the backup is being executed.
Last edited by mcloaked (2013-12-02 21:23:36)
Mike C
Offline
I don't think you can print the progress as you would wish (at least not as I have read the man page). I run rsync as a cron job and log to a file that I can check later, so I haven't really played much with the progress and info options.
Offline
Why is that an improvement over parsing ls, given the discussion in that link about find and globbing?
CLI Paste | How To Ask Questions
Arch Linux | x86_64 | GPT | EFI boot | refind | stub loader | systemd | LVM2 on LUKS
Lenovo x270 | Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz | Intel Wireless 8265/8275 | US keyboard w/ Euro | 512G NVMe INTEL SSDPEKKF512G7L
Offline
I guess that since I was passed a link which basically said that parsing ls output was a very bad thing to do, I wrote a bit of code that avoided that way of doing it - so in that sense maybe it is better (?) even though my ls parsing in the original script has been working for me for some years without a problem!
Mike C
Offline
But did you read the link? Especially the find example which it says is just as bad? I wasn't sure why your find was supposed to avoid the issue.
Offline
But did you read the link? Especially the find example which it says is just as bad? I wasn't sure why your find was supposed to avoid the issue.
I could not see an alternative way to do what I needed - though I suppose I could simply remove the first section with the find command and just leave the second part. Unless you can help with a more efficient method of achieving the same outcome? Thanks for your help.
Last edited by mcloaked (2013-12-03 22:32:51)
Mike C
Offline
Can't you find without globbing?
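One way that seems to fit this suggestion: let find enumerate the entries itself with -mindepth 1 -maxdepth 1, so the shell never expands a glob like "$USBNAME"/*. A sketch; a temporary directory stands in for /run/media/$USER so the example is self-contained:

```shell
# Sketch: count mounted-drive directories without a shell glob.
# -mindepth 1 makes find list the entries of $base rather than $base
# itself, replacing the "$USBNAME"/* expansion in the script above.
base=$(mktemp -d)           # stands in for /run/media/$USER
mkdir "$base/BackupDrive"
DIRCOUNT=$(find "$base" -mindepth 1 -maxdepth 1 -type d | wc -l)
echo "DIRCOUNT=$DIRCOUNT"
rm -r "$base"
```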
Offline
Maybe this is a dumb suggestion, but couldn't the whole issue be avoided by using the -print0 switch of find?
As for the original topic:
Why are you trying to back up the /proc directory? There is not really anything of value in there that should be backed up; it is just information about running processes, etc. Also, its contents change frequently, which might be why you are getting the I/O error from rsync.
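For completeness, here is a sketch of how -print0 pairs with a NUL-delimited read loop, so filenames with spaces (or even newlines) survive intact. Temporary directories stand in for the real mount point:

```shell
# Sketch of the -print0 idea: NUL-delimited output is safe for any
# filename. One entry with a space in its name demonstrates this.
base=$(mktemp -d)
mkdir "$base/plain" "$base/with space"
count=0
while IFS= read -r -d '' d; do
    count=$((count + 1))
done < <(find "$base" -mindepth 1 -maxdepth 1 -type d -print0)
echo "found $count directories"
rm -r "$base"
```

Note that the `< <(...)` process substitution is bash-specific, which matches the #!/bin/bash scripts earlier in the thread.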
Offline