Recently, I tried Arch for the first time, and one of the first things I came across was the question of how to set up my partitions. After years of mostly ignoring it, I finally decided to also give btrfs a try. This post is both a howto that explains my layout, which I think is a very good one, and a request for comment, in case there's a better layout available or there is something I overlooked.
Objective: Use btrfs to create periodic snapshots of the system that make it easy to recover individual files, or the entire system should you accidentally screw everything up.
The quick answer: Anyone who has done any research into this knows that snapper is a tool that allows easy setup of a system that does periodic snapshots of any btrfs subvolume you want.
The problem: Snapper seems like a great tool for doing automated daily/weekly/monthly/yearly snapshots. I say seems like, because it sounds good in theory - I have not been using it long enough to figure out how good it is in practice. The problem is that snapper appears to mainly be used to recover files from snapshots - not to recover your entire system using snapshots. Snapper actually does have this feature - it's called snapper rollback - but instead of replacing the running system, it leaves it in place (even though it's presumably screwed up) and ignores it, using one of its nested subvolumes to boot from instead.
Example:
Let’s say this is your system:
subvolid=0
└── /usr, /bin, /sbin, /.snapshots, ...
Your running system is subvolid=0, and all snapshots snapper takes go into /.snapshots.
Assuming you have configured snapper properly, recovering files you accidentally deleted from yesterday's snapshot of the system is fairly easy.
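For example, something like this (the snapshot number 42 and the file paths are just made-up examples - snapper list shows your real snapshots):
# snapper -c root list
# snapper -c root undochange 42..0 /home/tal/important_file
Or just copy the file straight back out of the read-only snapshot:
# cp -a /.snapshots/42/snapshot/home/tal/important_file /home/tal/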
Unfortunately, if you happen to screw up your entire system, say with a system update, and want to revert the entire system to yesterday's snapshot, things get a little more difficult.
Let's say that you messed up your system badly enough that it can no longer boot. You know that everything was fine yesterday, and you have a snapshot that snapper took that day that you would like to go back to.
The official procedure appears to be this:
reboot
edit grub's kernel line and add this to it
rootflags=subvol=.snapshots/#/snapshot
Note: You are booting a read-only snapshot that snapper created
Note 2: Because the /.snapshots at the top of the btrfs filesystem is a subvolume, and not just a folder, snapper's snapshots do not descend into it, so booting into one of snapper's read-only snapshots shows an empty /.snapshots
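For example, the edited kernel line might end up looking something like this (the UUID is a placeholder, and 42 stands in for whatever snapshot number you pick):
linux /boot/vmlinuz-linux root=UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx rw rootflags=subvol=.snapshots/42/snapshot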
mount /.snapshots with
# mount /dev/sdX /.snapshots -o subvol=.snapshots
Note: You could also just have fstab always mount subvol=.snapshots to /.snapshots explicitly, which would make this step unnecessary
Tell snapper to rollback your system with:
snapper rollback
This does 3 things. It:
1. Creates a read-only snapshot of the default btrfs subvolume. This is the one you messed up.
2. Creates a read-write snapshot of the currently mounted btrfs subvolume. This is the "good" snapshot that we chose to boot into, and the one we are restoring the system to.
3. Sets the btrfs default subvolume to the read-write snapshot it created in step 2.
At this point, your system looks like this:
subvolid=0 (still screwed up)
└── /usr, /bin, /sbin, /.snapshots, etc
    ├── 1 (ro)
    ├── 2 (ro)
    ├── …
    ├── 5 (ro)
    ├── 6 (ro - snapshot of broken system)
    └── 7 (rw - clone of 5)
And 7, or more accurately (subvolid=0)/.snapshots/7/snapshot, is set to be the default btrfs subvolume, which means that when grub mounts the btrfs device at boot, it's no longer dealing with subvolid=0 but with (subvolid=0)/.snapshots/7/snapshot.
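You can see this with btrfs itself (the IDs in this example output are made up):
# btrfs subvolume get-default /
ID 260 gen 1234 top level 257 path .snapshots/7/snapshot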
While this results in a system that does indeed boot, and has indeed been restored to the way things were, you are now left with a subvolid=0 that is still messed up; your system simply ignores it and boots from a subvolume nested inside the broken one. For snapper to keep working, your fstab now MUST mount the (subvolid=0)/.snapshots subvolume onto /.snapshots, which in reality is (subvolid=0)/.snapshots/7/snapshot/.snapshots. This is very unclean, and convoluted.
The solution:
subvolid=0
├── subvol_root
│ └── /usr, /bin, /sbin, /.snapshots, etc
├── subvol_snapshots
├── subvol_home
└── subvol_opt
Note: Here, I create a subvolume for /home, and one for /opt as examples. You can basically create a subvolume for any folder on the filesystem that you do NOT want to be part of the daily snapshots that snapper takes. subvol_root and subvol_snapshots are the only 2 important subvolumes here.
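In case it's useful, creating this layout from the live environment boils down to something like this (/dev/sdX is a placeholder for your btrfs partition):
# mount /dev/sdX /mnt
# btrfs subvolume create /mnt/subvol_root
# btrfs subvolume create /mnt/subvol_snapshots
# btrfs subvolume create /mnt/subvol_home
# btrfs subvolume create /mnt/subvol_opt
# umount /mnt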
My fstab:
UUID=... / btrfs OPTIONS 0 0
UUID=... /.snapshots btrfs OPTIONS,subvol=subvol_snapshots 0 0
UUID=... /home btrfs OPTIONS,subvol=subvol_home 0 0
UUID=... /opt btrfs OPTIONS,subvol=subvol_opt 0 0
IMPORTANT NOTE: When initially configuring snapper, /.snapshots must NOT already exist when you run
# snapper -c root create-config /
or snapper will error out. To get around this problem, do this:
Unmount /.snapshots if it is mounted
Delete /.snapshots
Run
# snapper -c root create-config /
Now that snapper is happily configured, delete /.snapshots. Note that it is a subvolume - not a folder
# btrfs subvolume delete /.snapshots
Create the mountpoint again
# mkdir /.snapshots
# chmod 750 /.snapshots
Mount the subvol_snapshots subvolume to /.snapshots
# mount /.snapshots
Note: Since you should already have an entry for /.snapshots in your fstab at this point, you don't need to specify what to mount - just the mountpoint is fine.
Now, everything should work properly.
Snapper will be taking regular snapshots of your system and storing them under /.snapshots, so if you ever delete a file accidentally, you can use the regular snapper commands to retrieve it from some snapshot.
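Side note: the periodic snapshots come from snapper's timeline and cleanup jobs. Assuming your snapper package ships the systemd timer units (recent versions do; older ones use cron), you turn them on with:
# systemctl enable snapper-timeline.timer snapper-cleanup.timer
# systemctl start snapper-timeline.timer snapper-cleanup.timer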
If your system dies however, and you need to restore it completely to some snapshot that snapper took, do this:
Boot arch live cd
Mount your btrfs volume. Note that here, we are mounting the default subvolume, which is the top level (subvolid=0)
# mount /dev/sdX /mnt
Find the snapshot you want to restore the system to:
# vi /mnt/subvol_snapshots/*/info.xml
Note: Here, we are reading the <description> tag and/or the <date> tag for the different snapshots, and when you find the one you want, remember the <num> number.
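A quick way to skim all of them at once (just a sketch):
# grep -E '<date>|<description>' /mnt/subvol_snapshots/*/info.xml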
Delete or move the root subvolume
# mv /mnt/subvol_root /mnt/subvol_root.broken
Create a read-write snapshot of the read-only snapshot snapper took (the one we just found)
# btrfs subvolume snapshot /mnt/subvol_snapshots/X/snapshot /mnt/subvol_root
Note: Where X is the <num> we discovered earlier
Now unmount /mnt and reboot
The point of moving /.snapshots, and anything else you don't want snapper to back up, outside of the subvol_root subvolume is that when the system crashes, you can easily replace the entire subvol_root subvolume without accidentally deleting all the snapshots, or your /home, or whatever other directories you want to keep.
Final words:
After everything is setup, this seems like the simplest and cleanest way to have a system that can easily use snapper, and be able to quickly restore the entire system should the need arise. Does anyone see a flaw in this? I got this running earlier today, and both snapper and system restoration using a live cd works great. Is there a better way to do this?
Last edited by tal (2015-03-06 08:22:11)
Offline
Not SA, Moving to Installation.
Are you familiar with our Forum Rules, and How To Ask Questions The Smart Way?
Offline
I haven't gone deeply into the subject, but openSUSE has quite a bit of articles and documentation about doing what you're doing.
Tim
Offline
I haven't gone deeply into the subject, but openSUSE has quite a bit of articles and documentation about doing what you're doing.
Tim
I believe they created snapper - so I imagine they would, but I haven't come across any that went into this much detail. I'll see if I can find any relevant ones - maybe they have something interesting to say.
Offline
This post is both a howto that explains my layout...
Welcome to Arch, tal. You'll notice that this forum has no "How-To" subforum. There's a reason for that: In six months this thread will be buried too deep to do anyone any good. Luckily, we have an excellent wiki that documents all kinds of things, including Snapper. If you're interested in showing people how to get the most out of Snapper I'd recommend adding this to the wiki page on it, since that's the first place every good Archer should look for information.
Offline
tal wrote:This post is both a howto that explains my layout...
Welcome to Arch, tal. You'll notice that this forum has no "How-To" subforum. There's a reason for that: In six months this thread will be buried too deep to do anyone any good. Luckily, we have an excellent wiki that documents all kinds of things, including Snapper. If you're interested in showing people how to get the most out of Snapper I'd recommend adding this to the wiki page on it, since that's the first place every good Archer should look for information.
That is an excellent point.
For now, I'm hoping to get some feedback on my layout to make sure that I'm not missing anything obvious. After all - I've only been using snapper, btrfs, and arch, for about a week, and this layout for about 24 hours.
If any improvements are suggested, or problems brought to light, I can take them into consideration and perhaps modify the layout to address them.
If I get some feedback that people are successfully using this layout on the other hand, or at the very least see no flaws in it, I might look into the procedure for adding this page to the Arch wiki, as that would indeed be a place where more people could find it.
Last edited by tal (2015-03-06 22:50:30)
Offline
I've been using this setup for a few days now, and have recovered the system several times as I try out different stuff in Arch, with no problems.
Anyone else want to try this or have any thoughts?
Offline
I thought I'd add that there may be a built-in solution on the way https://btrfs.wiki.kernel.org/index.php/Autosnap
The wiki page hasn't been updated since 2014 though.
Another thing: a slightly different layout is suggested (it's mentioned as a possibility, not a recommendation as such) on the btrfs wiki https://btrfs.wiki.kernel.org/index.php … _snapshots
I can't help but think that snapper could easily be substituted by a handful of scripts and systemd timers; I may have a surprise coming for me, though.
Offline
I thought I'd add that there may be a built-in solution on the way https://btrfs.wiki.kernel.org/index.php/Autosnap
The wiki page hasn't been updated since 2014 though.
I haven't heard of autosnap before, but if there's a built-in way to take regular btrfs snapshots, that might be interesting.
For some reason, my "btrfs-progs v3.19" does NOT have the autosnap feature.
Either it is a very new feature that hasn't made it into the Arch repos (which would be strange since Arch uses the latest packages for pretty much everything), the feature is being worked on outside the main branch of the btrfs git repo, or it's not a native feature and just looks like one, kind of like how "mkfs -t ntfs" makes it look like ntfs is a native feature of mkfs, but it's not.
Another thing: a slightly different layout is suggested (it's mentioned as a possibility, not a recommendation as such) on the btrfs wiki https://btrfs.wiki.kernel.org/index.php … _snapshots
That layout is the basic btrfs layout for easily creating and restoring snapshots. Mine is a combination of that, and snapper. My layout uses that idea, but extends it to make snapper compatible with it.
I can't help but think that snapper could easily be substituted by a handful of scripts and systemd timers; I may have a surprise coming for me, though.
I would argue that this is true for a lot of the smaller software out there in Linux repos. Often, the only difference between using your own script and an official program or script that a development team made would be this: when you write a script for yourself, you make sure it works for you. You do just enough testing to make sure it works in your case. You add just enough features to make sure it meets your basic requirements. You implement it, and you leave it there, in most cases never to touch it again unless it breaks. If it breaks a few months later, in a lot of cases you didn't document how you wrote the script properly and you end up fixing it by using a workaround rather than taking the time to make a proper fix.

A dedicated team (even if it's just one person) knows that their software will be used by many different users in many different environments and configurations, and that it needs to be as reliable, portable, and feature-rich as possible. This naturally improves the quality of the software.

It's nice to have the ability to write your own script when you can't find something that's already written to do what you want, but when you have a choice, using software that someone has actually spent some time designing, and is actually getting maintained, is usually preferable - at least that's what I think. In snapper's case, the team that designed it is the openSUSE team - not just some guy in his free time, so you know they've thought everything through before making it work a certain way, or adding a certain feature.
Offline
I haven't heard of autosnap before, but if there's a built-in way to take regular btrfs snapshots, that might be interesting.
For some reason, my "btrfs-progs v3.19" does NOT have the autosnap feature.
Either it is a very new feature that hasn't made it into the Arch repos (which would be strange since Arch uses the latest packages for pretty much everything), the feature is being worked on outside the main branch of the btrfs git repo, or it's not a native feature and just looks like one, kind of like how "mkfs -t ntfs" makes it look like ntfs is a native feature of mkfs, but it's not.
It's outside of the main branch.
I don't disagree with you on any particular point, I'm just considering how much it would take to avoid having to adapt to snapper.
It might be easier still to make snapper's snapshot paths configurable. They seem to be accepting pull requests.
Offline
I don't disagree with you on any particular point, I'm just considering how much it would take to avoid having to adapt to snapper.
If you have the basic btrfs setup (the one you linked to), having a script that a systemd timer triggers to create periodic btrfs snapshots should be fairly simple to make. It would be more difficult to have:
each snapshot have a note describing either when the snapshot was taken, or why it was taken if it was taken manually
have some sort of interface that makes it easy to see what snapshots are available, and show info about them
have the system delete the oldest snapshots at a certain limit one by one
All that comes with snapper. You could make your script do that too, but at that point you're just recreating snapper because you'd rather code than figure out a program that already exists. I admit that I've done that plenty of times over the years, but it's not exactly the best approach. If you learn how a program works that everyone uses, when you come across some machine that uses that program, you know exactly how it works. Coding your own solution might improve your scripting skills a bit, but ultimately you still won't know how the program that everyone uses works. Sorry - I'm getting a bit philosophical again.
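Just to illustrate the "fairly simple" part, the bare snapshot-taking piece could be little more than this (an untested sketch - the unit names and the /path/to/snapshots location are made up):
/usr/local/bin/btrfs-autosnap (don't forget to chmod +x it):
#!/bin/sh
# take a read-only, timestamped snapshot of the root subvolume
btrfs subvolume snapshot -r / "/path/to/snapshots/$(date +%Y%m%d-%H%M%S)"
/etc/systemd/system/btrfs-autosnap.service:
[Unit]
Description=Take a read-only btrfs snapshot of /

[Service]
Type=oneshot
ExecStart=/usr/local/bin/btrfs-autosnap
/etc/systemd/system/btrfs-autosnap.timer:
[Unit]
Description=Hourly btrfs snapshot of /

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
Then:
# systemctl enable btrfs-autosnap.timer
# systemctl start btrfs-autosnap.timer
The real work is all the bookkeeping in the list above, not the snapshot itself.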
It might be easier still to make snapper's snapshot paths configurable. They seem to be accepting pull requests.
If you had the option of configuring snapper's snapshot paths, where would you tell it to save them? Outside of your /? You can't. At least not directly. You need to mount subvolid 0 (top level btrfs subvolume) to see outside of your root subvolume. More accurately, you don't actually need to see EVERYTHING in subvolid 0, just the subvolume where you want to store the snapper snapshots. You can just mount that some place and tell snapper to save your snapshots there.

That's kind of exactly what I did, except I mounted the snapshot subvolume in the place where snapper already saves its snapshots (/.snapshots). It would become only slightly more convenient in this scenario if you could specify the location that snapper snapshots get saved to, since it would let you use some other directory as a mount point, but it wouldn't really change or simplify anything else. Am I wrong?
Offline
It would become only slightly more convenient in this scenario if you could specify the location that snapper snapshots get saved to, since it would let you use some other directory as a mount point, but it wouldn't really change or simplify anything else. Am I wrong?
It could allow users to make their own layouts, without having to tip-toe around snapper.
I wouldn't call that a "slight" convenience.
each snapshot have a note describing either when the snapshot was taken, or why it was taken if it was taken manually
have some sort of interface that makes it easy to see what snapshots are available, and show info about them
have the system delete the oldest snapshots at a certain limit one by one
I didn't know snapper had snapshot descriptions. I'm not sure I'd want that unless it was built into btrfs, but essentially, it is nothing more than a key/value structure.
I guess a short description could be used in the snapshot name.
Deleting old snapshots could be managed by something like logrotate. Which, although specializing in log files, does exactly that.
I've used it for rotating sql dumps previously.
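Or even just a small shell loop, since snapshots have to be removed with btrfs subvolume delete anyway. A rough sketch that keeps the 30 newest, assuming the snapshot names sort chronologically and /path/to/snapshots is a made-up location:
# ls -1d /path/to/snapshots/* | sort | head -n -30 | while read snap; do btrfs subvolume delete "$snap"; done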
Offline
It could allow users to make their own layouts, without having to tip-toe around snapper.
Name one, and maybe I'll agree with you.
I didn't know snapper had snapshot descriptions. I'm not sure I'd want that unless it was built into btrfs, but essentially, it is nothing more than a key/value structure.
Without a description, all you have is the subvolume name to tell apart tens (or even more) of subvolumes (snapshots). Organization without a description becomes a nightmare.
I guess a short description could be used in the snapshot name.
I assume you mean the subvolume name, since that's the only thing you've got when using a pure btrfs layout. Then the name of the subvolume needs to have:
1. A prefix that allows all the snapshots to be in order when you sort them alphabetically (which is the most common type of sort)
2. A very short description, limited to something like 5 words max, or you end up with confusing names all over the place
Is it possible? Yes. Is it convenient? Not entirely.
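Something along these lines works, for example (the /path/to/snapshots location is just a placeholder):
# btrfs subvolume snapshot -r / "/path/to/snapshots/$(date +%Y%m%d-%H%M%S)_pre-kernel-upgrade"
But it gets unwieldy fast.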
Deleting old snapshots could be managed by something like logrotate. Which, although specializing in log files, does exactly that.
Yes - you probably could. But the amount of thought we're putting into designing a snapshot system from scratch, and the actual coding and testing, has already been done by a team of professionals - we're recreating the wheel. If you have improvements for snapper that benefit everyone, that would be awesome. But I'm pretty sure the idea of recreating the wheel clashes with the idea behind Linux's dynamically linked libraries - you write a library or program once, and reuse and improve it over time instead of everyone recreating the same code over and over again.
Offline
tal, you can't really disprove an opinion.
I would be perfectly fine with defining a snapshot solely by timestamp, as most of my snapshots are going to be taken automatically.
Descriptions, beyond timestamps, will therefore be meaningless.
This may not be the case for you, and in connection with this discussion I am interested in hearing your use cases, but please don't try to tell me my own mind.
Linking of libraries has nothing to do with this. I assume you are instead referring to the UNIX maxim, "Write programs that do one thing and do it well"?
That's another thing: it doesn't seem to me that snapper adheres to that principle, in that it forces a layout on you.
It might be prudent instead to have rotation and snapshot automation as separate services?
With regard to recreating the wheel: I would add that you are not required to humor me.
I am simply stating my thoughts, curious about what thoughts and ideas others (yourself included) might have on the subject.
Last edited by Bladtman242 (2015-05-25 14:17:34)
Offline
Building off of this, I really want to be able to run "snapper rollback ID", which: (1) makes a read-only snapshot of your current system so you can get back to it if you want; (2) makes a read-write snapshot of the ID you gave; and (3) sets the default subvolume to the read-write snapshot just made in number 2.
The trouble is, as you mentioned in your original post, /boot/grub/grub.cfg gives the kernel option "rootflags=subvol=root", so even if /etc/fstab doesn't give a subvol command, the default subvolume isn't used.
I've seen it claimed in quite a few places that grub.cfg has to have the subvol flag, or it won't boot. (Appears to no longer be true.)
Just to try it, I made a new installation and, while booted off the net install ISO, I edited /etc/grub.d/10_linux and went to line 59, which is in a case statement for btrfs. I removed the part that says "rootflags=subvol=${rootsubvol} ". (I'm sure more could be removed, but I left the rest there for now.)
Ran "grub-mkconfig -o /boot/grub/grub.cfg", and "btrfs subvolume set-default 257 /mnt" {257 is my ID for the /root subvolume}
Rebooted without the ISO, expecting it to fail by trying to mount subvolid=0 as /, but it worked just fine. Apparently grub (at least the current version) is able to pull the default subvolume.
This allowed me to run "snapper create -d PostInstall", create a file, and run "snapper rollback 1". After rebooting, the test file I created is gone, because grub defaulted to snapper ID 3, which was created by "snapper rollback" as a branch off ID 1, and made default. (Well, it's not gone, it's of course in /.snapshots/2/snapshot/home/username). (Exact directory going by memory.)
The only trouble I see is that /boot/grub/grub.cfg winds up with references to /root/boot, to find vmlinuz-linux and initramfs.
grub-mkconfig does look at the default subvolume!
I guess one could run "grub-mkconfig -o /boot/grub/grub.cfg" after any "snapper rollback". I don't think this is necessary, because I think vmlinuz-linux and initramfs are only automatically changed when installing a new kernel. I'm assuming arch automatically runs "grub-mkconfig" after installing a new kernel, or perhaps that's left to the user. (New to arch, haven't upgraded kernels yet.) But the point is, I think "grub-mkconfig" winds up getting run somehow any time that /boot/grub/grub.cfg actually has to point to a different subvolume.
What I think I really want is a subvolume like /root-snapManager (or one could use the arch ISO, but I'd prefer it be on the hard drive) to boot into, so that I can create snapshots and issue rollback commands while the main filesystem is not in active use.
Offline
P.S. A "/outside-snapshots" subvolume is nice, if you need temporary storage that won't be wiped out by a rollback.
Offline
@tal:
In your layout, how does the system know that your root is subvol_root?
Do you set the default subvolume to subvol_root? If so, can't you then no longer see the other subvolumes?
Or do you use the
rootflags=subvol=subvol_root
option in your grub? But then shouldn't your fstab entry be
UUID=... / btrfs OPTIONS,subvol=subvol_root 0 0
?
Offline
@tal:
In your layout, how does the system know that your root is subvol_root?
UUID=... / btrfs OPTIONS,subvol=subvol_root 0 0
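The other way is to pass it on the kernel command line instead, e.g. in /etc/default/grub (a sketch, regenerating grub.cfg afterwards):
GRUB_CMDLINE_LINUX="rootflags=subvol=subvol_root"
# grub-mkconfig -o /boot/grub/grub.cfg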
Offline
This is very confusing. I have the following subvolumes:
ID 257 gen 3778 top level 5 path ROOT
ID 258 gen 3785 top level 257 path home
ID 259 gen 3711 top level 257 path root
ID 260 gen 3782 top level 257 path etc
ID 261 gen 3651 top level 257 path mnt
ID 262 gen 3197 top level 257 path opt
ID 263 gen 3785 top level 257 path var
ID 264 gen 3785 top level 257 path tmp
ID 266 gen 3182 top level 263 path var/lib/machines
Now, according to your description:
snapper -c root create-config /
does create a subvolume /.snapshots
which I delete with:
btrfs subvolume delete /.snapshots
Create the mountpoint again
# mkdir /.snapshots
# chmod 750 /.snapshots
Mount the subvol_snapshots subvolume to /.snapshots
# mount /.snapshots
Now, how can I mount a subvolume if I have already deleted it?
btrfs subvolume create /.snapshots
fails because the folder already exists.
Am I missing something?
And what about /etc and /opt ?
Offline
OK I've got it: you have a subvolume snapshots which you mount at /.snapshots
But the question is: what happens to /opt, /root, /etc, and so on? If I create a config for those subvolumes, the snapshots end up under e.g. /etc/.snapshots. Can I specify another location for .snapshots?
Offline
It's been a while since I've done any work with any of this (I don't use Arch, BTRFS or Snapper anymore), but from what I remember, the idea was fairly simple:
Snapper clones subvolumes. Cloning a subvolume does NOT clone any subvolumes within that subvolume.
Ex:
── /Subvol 1
   ├── Subvol 2
   └── Subvol 3
If you clone Subvol 1, Subvol 2 and Subvol 3 will just appear as empty folders in your clone.
That means that if you are cloning your entire system, but want to exclude /home/tal/Downloads from the cloning process because you know it fills up with tons of junk all the time and you don't want to back up that junk, turn /home/tal/Downloads into a subvolume. Snapper will not back it up.
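If the directory already exists as a plain folder, converting it goes roughly like this (a sketch, best done while nothing is writing to it):
# mv /home/tal/Downloads /home/tal/Downloads.old
# btrfs subvolume create /home/tal/Downloads
# cp -a --reflink=auto /home/tal/Downloads.old/. /home/tal/Downloads/
# rm -rf /home/tal/Downloads.old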
Not sure what you are referring to with /etc/.snapshots - I don't think I ever dealt with such a file.
Offline
Has anyone used this suggested layout with the "grub-btrfs" tool (https://github.com/Antynea/grub-btrfs) and been able to easily boot into a snapshot? I think this has been alluded to in this thread, but such a tool doesn't seem to be useful in this case since /etc/fstab has to be manually modified as well to mount the snapshot of interest.
Offline
Has anyone used this suggested layout with the "grub-btrfs" tool (https://github.com/Antynea/grub-btrfs) and been able to easily boot into a snapshot? I think this has been alluded to in this thread, but such a tool doesn't seem to be useful in this case since /etc/fstab has to be manually modified as well to mount the snapshot of interest.
You don't need to have an entry for "/" in your /etc/fstab. You can mount it on your kernel command line. You put the mount options into the "rootflags=..." parameter. That's how it should work.
If you open the source code for the script on that github page, you will see this happening in line 161.
Last edited by Ropid (2016-05-01 04:50:08)
Offline
I have been using the suggestions in the OP for more than 1 year, since I first started using btrfs and snapper. I just wanted to post my experiences. While this proposed solution is fine, if I knew what I know now, I would have simply stuck with snapper's rollback feature. This post is really making a problem where there is none.
The OP claims, "This is very unclean, and convoluted." He convinced me when I read this the first time. After a year of using snapper and btrfs snapshots extensively, there is absolutely nothing wrong with snapper's default method. It's not unclean or convoluted.
As one example of a benefit of snapper's default approach, it has the advantage of preserving history so that someone (you) looking at the system will know it was rolled back.
The other big advantage of the default approach is that it is widely used and understood.
This whole post is an example of fixing something that isn't broken.
See the btrfs wiki for more information on typical structures for managing snapshots. There are plenty of good ways to do it if you don't like snapper's default way.
Offline
Hello all, I have a quick question with regards to this setup (also the one described in the wiki https://wiki.archlinux.org/index.php/Sn … em_layout).
I would also like to snapshot my root subvolume with snapper, but keep the .snapshots subvolume out of the root subvolume itself. So, I have the following subvolumes after setting up snapper successfully:
arch
arch/.snapshots
archhome
First off, I delete the subvolume for .snapshots
btrfs subvol delete /mnt/btrfsroot/arch/.snapshots
Then I create some subvolumes to house my snapper snapshots:
arch
arch/.snapshots
archhome
snappersnaps
snappersnaps/arch
The intention is that snappersnaps/arch would end up at /.snapshots, so it can be used by snapper. In my understanding, this should be achieved by simply mounting the subvolume at that location.
rm -r /.snapshots # clean slate
mkdir /.snapshots # otherwise the mount command complains about the mount point not existing
mount -o subvol=snappersnaps/arch /dev/nvme0n1p3 /.snapshots
Now, after creating a snapshot with snapper, I get the following error:
IO Error (.snapshots is not a btrfs subvolume).
I guess I'm missing some step in the process, or I'm misunderstanding some concept. I tried linking from .snapshots to the mounted subvolume, but that leads to too many levels of nested links.
Offline