
#26 2017-11-04 14:18:46

Registered: 2011-10-09
Posts: 912

Re: Snapper/BTRFS layout for easily restoring files, or entire system

Correct. You are right in wanting to have the snapshot subvolume outside the subvolume you want to snapshot, but you are missing a step.

You need to temporarily mount your /mnt/btrfsroot with subvolid=0. Then create the .snapshots subvolume there. Then you should be able to make your snapshots successfully.

After making the snapshots, I always umount the /mnt/btrfsroot (or, my equivalent), but that is up to you. Also, if you have multiple drives, each drive will have to have its own root-level subvolume for snapshots.
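A minimal sketch of those steps (the device path and mount point are illustrative, not from this thread; adjust to your system):

```shell
# Mount the top level of the filesystem (subvolid=0 is an alias for
# the top-level id 5), regardless of which subvolume is the default.
mount -o subvolid=0 /dev/nvme0n1p3 /mnt/btrfsroot

# Create the snapshot subvolume at the top level, outside the
# subvolumes being snapshotted.
btrfs subvolume create /mnt/btrfsroot/.snapshots

# ...take snapshots...

umount /mnt/btrfsroot
```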

This is all shown in the btrfs wiki: … ide#Layout



#27 2017-11-04 17:43:08

Registered: 2017-11-04
Posts: 48

Re: Snapper/BTRFS layout for easily restoring files, or entire system

Hi Tim,

Well, I have created the `snappersnaps` subvolume within `/mnt/btrfsroot`, so as far as I understand I did create the subvolume in the right spot? I can verify this by the output of `btrfs subvol list /` (only showing the relevant subvolumes here):

ID 285 gen 5441 top level 5 path arch
ID 303 gen 5441 top level 5 path archhome
ID 371 gen 5326 top level 5 path snappersnaps
ID 372 gen 5353 top level 371 path snappersnaps/arch

I understand that, under Arch, root `/` is actually the subvolume `arch`. So I really want to mount `snappersnaps/arch` under the mount point `/.snapshots`. Which works, but snapper does not see it as a subvolume. I am probably still missing the point, I'm afraid :)

- edit - okay, I found the point. I mounted `snappersnaps/arch` under `/`, but I should have mounted it under `/mnt/btrfsroot/arch` (

mount -o subvol="/snappersnaps/arch" /dev/nvme0n1p3 /mnt/btrfsroot/arch/.snapshots

). This seems to please snapper; it's making the snapshots as expected. Feels a bit funny, though: if I list `/.snapshots` it is empty, but if I list `/mnt/btrfsroot/arch/.snapshots` it contains the actual snapshots.

Last edited by CountZukula (2017-11-04 17:52:44)


#28 2018-02-09 08:08:21

Registered: 2016-02-08
Posts: 371

Re: Snapper/BTRFS layout for easily restoring files, or entire system

I received a PM about my message here:

I know you feel that the OP is making a problem out of nothing, but I still don't understand what issue the OP is highlighting and why you think it's not a problem. Could you explain what the OP is trying to say and then why you think it's not relevant? Will the snapshot actually mount over the broken filesystem when it's stored like the OP said, and if so, why is this not a problem?

The only thing I find weird about the situation is that neither Snapper nor the Btrfs devs seem to have really dealt with this in the Snapper tool, let alone even mentioned it. This leads me to think the problem is unfounded, but I still want to try to understand it, to get a better understanding of ways to set up my filesystem.

Here's my response:

tal wrote:

The problem is that snapper appears to mainly be used to recover files from snapshots - not recover your entire system using snapshots.

Snapper does a fine job recovering an entire volume, whether root or other. (If I understand the OP correctly, what he is calling a filesystem is what BTRFS calls a volume.)

tal wrote:

snapper rollback...leaves [broken filesystem] in place (even though it’s presumably screwed up) and ignores it

It is often desirable to have access to a "broken" (or otherwise screwed up) volume so that one can examine log files or other information to learn what the problem was. That doesn't mean it has to be kept around forever. One is not "stuck" with it. (In the situations under consideration here, the issue is probably not a broken BTRFS filesystem, or we would not be having a discussion about restoring a snapshot -- it would be an entirely different discussion.)

tal wrote:

using one of it’s subfolders to boot instead.

I think this is a misunderstanding. BTRFS volumes can be mounted almost anywhere in the directory tree. However, that doesn't mean the new volume (snapshot) is actually a permanent "subfolder" of the "broken" volume. It is possible to remove the "broken" volume completely while keeping the new volume. They can become independent very easily.

Some layouts may make this more clear than others, but whether the layout makes this explicit or not doesn't change the fundamental features of BTRFS. In hindsight, I think the OP simply did not understand BTRFS very well and created a problem where there wasn't one, as far as I can tell.

However, it is possible to come up with alternative layouts, like the OP suggested. Many people have done this. As I said, I used his layout for about a year and, based on that experience, I created another layout that I prefer. However, using an alternative layout is simply an option for convenience or personal preference. It does not really solve any fundamental problem with BTRFS or Snapper in my opinion.

I will share some info about the layout I'm using now. For context, the OP said:

tal wrote:

Note: Here, I create a subvolume for /home, and one for /opt as examples. You can basically create a subvolume for any folder on the filesystem that you do NOT want to be part of the daily snapshots that snapper takes. subvol_root and subvol_snapshots are the only 2 important subvolumes here.

In other discussions on this topic, it was recommended that /var/log be placed in its own subvolume. See, for example,

It's recommended to create a subvolume for /var/log so that snapshots of / exclude it. That way, if a snapshot of / is restored, your log files will not also be reverted to the previous state. This makes it easier to troubleshoot.

I followed that recommendation along with the OP's recommendations from the day I started using BTRFS and Snapper.
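For reference, here is a sketch of how such a /var/log subvolume can be set up (the subvolume name and device path are illustrative assumptions, not from this thread):

```shell
# Mount the btrfs top level (device path is an assumption).
mount -o subvolid=5 /dev/sdX2 /mnt/btrfsroot

# Create a dedicated subvolume for logs.
btrfs subvolume create /mnt/btrfsroot/@vlog

# Preserve the existing logs, then mount the subvolume over /var/log.
cp -a /var/log/. /mnt/btrfsroot/@vlog/
mount -o subvol=/@vlog /dev/sdX2 /var/log
# ...and add a matching line to /etc/fstab to make it permanent.
```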

I have noticed two minor issues that may or may not be related to that:

1. on each shutdown, the following message appears in the journal:

systemd[1]: Failed unmounting /var/log.

It does not seem to cause any problems* and I do like having the log files in a separate snapshot. That way, if I roll back to a prior root snapshot, I can retain the prior logs with the new snapshot, should I choose.

*I do sometimes find corrupted journal files, but there are many posts from systemd users seeing the same thing even without BTRFS and snapper, so I don't think these issues are related to my layout. But I can't be 100% sure. However, the Arch Wiki does recommend creating a separate subvolume for /var/log; therefore, if doing so were a problem, I think the wiki would mention it.

2. on each boot I see this warning in the systemd-journald status:

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

As far as I can tell, the journal output is complete. But just to be sure, I always do this after a reboot:

systemctl restart systemd-journald

See more here:

If I was sure these issues were not related to the layout, I could probably recommend the layout I'm using now.

Since I started using BTRFS at the same time I read … 9#p1722289, I never used the standard / stock layout, so I can't compare.

Overall, it is possible that the OP's advice is unnecessary and it is also possible that it is harmful. It is also educational and not totally worthless. But for someone starting with BTRFS, I would not advise following this advice. I think it is best to just ignore the whole thread.

For someone with more BTRFS experience and who is confident they have a valid reason for changing the layout, doing so is an option. Here is the layout I am currently using. I am not recommending it. I'm just sharing it.

In an empty partition, I create a top_level BTRFS volume. In it, I only create 3 BTRFS subvolumes: @roottop, @hometop, and @vlogtop.

Snapper will eventually put snapshots into each of those BTRFS subvolumes (inside sequentially numbered directories). However, when I set up a new system, I create one directory named "live". Into "live" I either send an existing snapshot I want to use, or I create a new BTRFS subvolume called snapshot.
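A sketch of that initial setup for one of the three subvolumes (the device path is an illustrative assumption):

```shell
# Mount the top level of the filesystem.
mount -o subvolid=5 /dev/sdX2 /mnt/top_level

# "live" is a plain directory, at the same level as Snapper's
# numbered snapshot directories will later be...
mkdir /mnt/top_level/@roottop/live

# ...and inside it lives a subvolume named "snapshot", either created
# fresh (as here) or received from an existing snapshot via
# btrfs send/receive.
btrfs subvolume create /mnt/top_level/@roottop/live/snapshot
```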

If I mount top_level (which I usually do not**), here is what it looks like with 6 Snapper snapshots:

	# tree -d -L 2 /mnt/top_level/
	├── @hometop
	│   ├── 1
	│   ├── 2
	│   ├── 3
	│   ├── 4
	│   ├── 5
	│   ├── 6
	│   └── live
	├── @roottop
	│   ├── 1
	│   ├── 2
	│   ├── 3
	│   ├── 4
	│   ├── 5
	│   ├── 6
	│   └── live
	└── @vlogtop
	    ├── 1
	    ├── 2
	    ├── 3
	    ├── 4
	    ├── 5
	    ├── 6
	    └── live

My "live" volume is at the same level as the snapshots. If I want to run a different snapshot as my live system, I can rename any of the numbered directories to "live" and rename "live" to something else, such as "prior". The entries shown as 1, 2, ... 6, live are regular directories and they can be moved (renamed). Within each is a BTRFS subvolume named snapshot (which is created by Snapper, except in the case of "live").

	# tree -d -L 3 /mnt/top_level/
	├── @roottop
	│   ├── 1
	│   │   └── snapshot
	│   ├── 2
	│   │   └── snapshot
	│   ├── 3
	│   │   └── snapshot
	│   ├── 4
	│   │   └── snapshot
	│   ├── 5
	│   │   └── snapshot
	│   ├── 6
	│   │   └── snapshot
	│   └── live
	│       └── snapshot

This layout shows that all the snapshots are fundamentally equivalent in status. Any one of them could be live. Any one of them could be deleted, if desired.

Here are the contents of the live snapshot:

	# tree -d -L 1 /mnt/top_level/@roottop/live/snapshot/
	├── bin -> usr/bin
	├── boot
	├── dev
	├── etc
	├── home
	├── lib -> usr/lib
	├── lib64 -> usr/lib
	├── mnt
	├── opt
	├── proc
	├── root
	├── run
	├── sbin -> usr/bin
	├── srv
	├── sys
	├── tmp
	├── usr
	└── var

As you see, it is a normal root filesystem.

To me, this layout is very clean and very intuitive. It makes rolling back to any snapshot very easy and very intuitive.

It does have one characteristic that could be mind-bending, however. Look at this:

	# tree -d -L 1 /.snapshots/
	├── 1
	├── 2
	├── 3
	├── 4
	├── 5
	├── 6
	└── live

Is that a recursive loop? No. The whole snapshot tree is NOT again mounted under /.snapshots/live/snapshot:

	# ls -la /.snapshots/live/snapshot/.snapshots/live/snapshot
	ls: cannot access '/.snapshots/live/snapshot/.snapshots/live/snapshot': No such file or directory

Furthermore, in my experience, this aspect of my layout is a feature, not a bug. However, I do not wish to defend my current layout. While I like it, I want to better understand the two minor issues mentioned above.

** Here is my fstab entry for mounting my "live" root filesystem:

	UUID=xxxxx  /  btrfs  rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@roottop/live/snapshot  0 0

The way I mount the snapshots is both a little mind-bending at first and simultaneously very intuitive and flexible once it is understood. It is this simple:

	UUID=xxxxx  /.snapshots  btrfs  rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@roottop/  0 0

I repeat that pattern for @hometop and @vlogtop.

Here's my complete /etc/fstab:

UUID=eeeeee       none                    swap            defaults        0 0

UUID=aaaaaa                                  /boot                   vfat            noauto,rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro  0 2

UUID=xxxxxx       /                       btrfs           rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@roottop/live/snapshot        0 0
UUID=xxxxxx       /.snapshots             btrfs           rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@roottop/                0 0

UUID=xxxxxx       /home                   btrfs           rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@hometop/live/snapshot  0 0
UUID=xxxxxx       /home/.snapshots        btrfs           rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@hometop                 0 0

UUID=xxxxxx       /var/log                btrfs           rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@vlogtop/live/snapshot  0 0
UUID=xxxxxx       /var/log/.snapshots     btrfs           rw,noatime,nodiratime,compress=lzo,space_cache,defaults,subvol=/@vlogtop/                0 0

My main point is not that anyone should be using my layout or the OP's layout or any other alternative layout. I think the stock layout used by Snapper is probably fine for most people, and this whole thread is probably not helpful for people new to Snapper and BTRFS.

I'm also not saying that BTRFS is perfect in its current state. In fact, if I did not find snapshots so incredibly useful (they have saved my butt a number of times), I probably would not use BTRFS. It requires a lot of attention ("like a young child", to quote someone from the BTRFS mailing list). If you want to be able to set up your filesystem once and then forget about it, BTRFS isn't the right choice. There is a reason the team that created Snapper doesn't use BTRFS for all filesystems (e.g., /home). On that note, I do like BTRFS for /home, but I make /home/<user>/.cache a separate BTRFS subvolume and I mark that directory as nodatacow. But that's really a separate discussion.

I have another variation of my layout where all Snapper snapshots reside on a separate volume. (Because you can't snapshot directly to a separate volume, this is a two-step process.) With that layout, I use the autodefrag mount option for @roottop, @hometop, @vlogtop.


#29 2018-04-01 14:07:30

Registered: 2018-01-26
Posts: 9

Re: Snapper/BTRFS layout for easily restoring files, or entire system


Sorry for this newb question, but are your .snapshots volumes still nested subvolumes of @roottop, @hometop, etc., created by snapper during the configuration? If I use the .snapshots volumes that snapper created, I don't have to mount them explicitly!?

Based on your explanations, I assume snapper rollback works just fine, right? I have the following setup and I just can't get snapper rollback to work:

$ sudo btrfs subvolume list /
ID 257 gen 1285 top level 5 path @
ID 258 gen 1281 top level 5 path @home
ID 259 gen 1278 top level 5 path @var
ID 261 gen 25 top level 257 path boot/grub/x86_64-efi
ID 262 gen 1278 top level 259 path @var/cache
ID 263 gen 1286 top level 259 path @var/log
ID 266 gen 973 top level 259 path @var/lib/machines
ID 273 gen 1281 top level 258 path @home/.snapshots
ID 274 gen 1281 top level 259 path @var/.snapshots
ID 277 gen 1281 top level 257 path .snapshots
ID 280 gen 896 top level 277 path .snapshots/1/snapshot
ID 281 gen 905 top level 277 path .snapshots/2/snapshot
ID 282 gen 906 top level 277 path .snapshots/3/snapshot
ID 283 gen 912 top level 273 path @home/.snapshots/1/snapshot
ID 284 gen 913 top level 277 path .snapshots/4/snapshot
ID 285 gen 914 top level 274 path @var/.snapshots/1/snapshot
ID 289 gen 921 top level 277 path .snapshots/5/snapshot
ID 295 gen 947 top level 273 path @home/.snapshots/3/snapshot
ID 297 gen 950 top level 274 path @var/.snapshots/3/snapshot
ID 300 gen 968 top level 273 path @home/.snapshots/4/snapshot
ID 302 gen 971 top level 274 path @var/.snapshots/4/snapshot
ID 303 gen 990 top level 273 path @home/.snapshots/5/snapshot
ID 305 gen 993 top level 274 path @var/.snapshots/5/snapshot
ID 306 gen 1008 top level 273 path @home/.snapshots/6/snapshot
ID 308 gen 1011 top level 274 path @var/.snapshots/6/snapshot
ID 311 gen 1023 top level 273 path @home/.snapshots/7/snapshot
ID 313 gen 1026 top level 274 path @var/.snapshots/7/snapshot
ID 314 gen 1031 top level 273 path @home/.snapshots/8/snapshot
ID 316 gen 1034 top level 274 path @var/.snapshots/8/snapshot
ID 317 gen 1055 top level 273 path @home/.snapshots/9/snapshot
ID 319 gen 1058 top level 274 path @var/.snapshots/9/snapshot
ID 320 gen 1092 top level 273 path @home/.snapshots/10/snapshot
ID 322 gen 1095 top level 274 path @var/.snapshots/10/snapshot
ID 324 gen 1109 top level 274 path @var/.snapshots/11/snapshot
ID 325 gen 1109 top level 273 path @home/.snapshots/11/snapshot
ID 327 gen 1112 top level 274 path @var/.snapshots/12/snapshot
ID 328 gen 1112 top level 273 path @home/.snapshots/12/snapshot
ID 329 gen 1119 top level 273 path @home/.snapshots/13/snapshot
ID 331 gen 1122 top level 274 path @var/.snapshots/13/snapshot
ID 332 gen 1203 top level 277 path .snapshots/6/snapshot
ID 333 gen 1132 top level 274 path @var/.snapshots/14/snapshot
ID 335 gen 1135 top level 277 path .snapshots/7/snapshot
ID 336 gen 1137 top level 274 path @var/.snapshots/15/snapshot
ID 338 gen 1192 top level 273 path @home/.snapshots/16/snapshot
ID 339 gen 1193 top level 277 path .snapshots/8/snapshot
ID 340 gen 1195 top level 274 path @var/.snapshots/16/snapshot
ID 341 gen 1202 top level 277 path .snapshots/9/snapshot
ID 342 gen 1207 top level 277 path .snapshots/10/snapshot
ID 343 gen 1260 top level 273 path @home/.snapshots/17/snapshot
ID 344 gen 1261 top level 277 path .snapshots/11/snapshot
ID 345 gen 1263 top level 274 path @var/.snapshots/17/snapshot
ID 346 gen 1269 top level 277 path .snapshots/12/snapshot
ID 347 gen 1271 top level 274 path @var/.snapshots/18/snapshot
ID 348 gen 1271 top level 273 path @home/.snapshots/18/snapshot
ID 349 gen 1272 top level 277 path .snapshots/13/snapshot
ID 350 gen 1274 top level 274 path @var/.snapshots/19/snapshot
ID 351 gen 1274 top level 273 path @home/.snapshots/19/snapshot

My /etc/fstab entries:
/dev/sda1 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 2

/dev/mapper/Swap none swap defaults,pri=-2 0 0

/dev/mapper/System / btrfs rw,noatime,nodiratime,compress=lzo,ssd,discard,space_cache,subvolid=257,subvol=/@,subvol=@ 0 0

/dev/mapper/System /home btrfs rw,noatime,nodiratime,compress=lzo,discard,space_cache,subvolid=258,subvol=/@home,subvol=@home 0 0

/dev/mapper/System /var btrfs rw,noatime,nodiratime,compress=lzo,discard,space_cache,subvolid=259,subvol=/@var,subvol=@var 0 0

I can boot into the read-only snapshots (e.g. snapshot #6 with an older kernel), and the snapper rollback output also looks fine, but if I boot the new read-write snapshot, nothing has changed (e.g. the new kernel is still loaded)..!?

$ sudo snapper ls
Type   | # | Pre # | Date                          | User | Cleanup  | Description                                                                 | Userdata
single | 0 |       |                               | root |          | current                                                                     |        
single | 1 |       | Fri 30 Mar 2018 23:31:44 CEST | root |          | test123                                                                     |        
pre    | 2 |       | Fri 30 Mar 2018 23:46:15 CEST | root | number   | pacman -Ud /home/vagrant/.cache/pacaur/grub-btrfs/grub-btrfs-2.0.1-1-any... |        
post   | 3 | 2     | Fri 30 Mar 2018 23:46:17 CEST | root | number   | grub-btrfs                                                                  |        
single | 4 |       | Sat 31 Mar 2018 00:00:27 CEST | root | timeline | timeline                                                                    |        
single | 5 |       | Sat 31 Mar 2018 13:10:45 CEST | root | timeline | timeline                                                                    |        
pre    | 6 |       | Sun 01 Apr 2018 13:14:23 CEST | root | number   | pacman -Syu                                                                 |        
post   | 7 | 6     | Sun 01 Apr 2018 13:15:22 CEST | root | number   | gtk-update-icon-cache gtk3 linux linux-headers mpg123 openssl srt unrar ... |        
single | 8 |       | Sun 01 Apr 2018 14:00:39 CEST | root | timeline | timeline                                                                    |
$ sudo snapper rollback 6
Creating read-only snapshot of current system. (Snapshot 9.)
Creating read-write snapshot of snapshot 6. (Snapshot 10.)
Setting default subvolume to snapshot 10.

Does anybody know if I have to implement the openSUSE layout ( … ) if I want to be able to use all snapper features like snapper rollback?


#30 2018-04-01 14:39:20

Registered: 2011-10-09
Posts: 912

Re: Snapper/BTRFS layout for easily restoring files, or entire system

My understanding is that the snapshot subvolumes should not be subvolumes of the subvolumes being snapshotted. However, they must be on the same mounted volume. See the btrfs wiki section on flat subvolume layout. … Guide#Flat



#31 2018-04-01 14:48:30

Registered: 2018-01-26
Posts: 9

Re: Snapper/BTRFS layout for easily restoring files, or entire system

@Tim: Yes, that was my first implementation. I had @snapshots as a subvolume of root (/) for the snapshots of the root (@) subvolume, but the Arch wiki says that this layout shouldn't be used with snapper rollback.

Last edited by zed123 (2018-04-01 14:49:58)


#32 2018-04-01 20:16:20

Registered: 2016-02-08
Posts: 371

Re: Snapper/BTRFS layout for easily restoring files, or entire system

zed123 wrote:

...I want to be able to use all snapper features like snapper rollback...

With my layout, I do not use snapper's rollback feature. I simply use the "mv" command to mv any snapshot number I want to use to the "live" directory.
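The rename step itself can be simulated with plain directories (this sketch uses a temp directory in place of a real mounted top level, so the paths are illustrative only):

```shell
# Stand-in for the mounted top_level volume.
top=$(mktemp -d)

# Layout as Snapper would leave it: numbered snapshot directories
# plus "live", each containing a subvolume named "snapshot".
mkdir -p "$top/@roottop/live/snapshot" "$top/@roottop/5/snapshot"

# Roll back to snapshot 5: keep the old live system as "prior",
# then promote snapshot 5 to "live".
mv "$top/@roottop/live" "$top/@roottop/prior"
mv "$top/@roottop/5" "$top/@roottop/live"

ls "$top/@roottop"    # now shows: live  prior
```

On a real system, the fstab entry `subvol=/@roottop/live/snapshot` then picks up the promoted snapshot on the next boot.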

With your goals, why not just use the standard (default) snapper layout? As I said, I don't see a problem with the default snapper layout (especially for someone new to snapper and btrfs). That's even more true in your case since you want to use "all snapper features".

As I said in my earlier post, I think the OP imagined a problem where there is none. However, BTRFS newbies should focus on learning the regular required maintenance for BTRFS. After every pacman update, I perform the following BTRFS maintenance tasks:

First, temporarily disable hourly snapshots (or any similar BTRFS scheduled tasks).

df -h /
btrfs fi df /
btrfs balance start -dusage=10 -dlimit=2..20 -musage=10 -mlimit=2..20 /
btrfs balance start -dusage=25 -dlimit=2..10 -musage=25 -mlimit=2..10 /
btrfs scrub start /
btrfs scrub status -d /

Re-enable hourly snapshots.

(Repeat these steps for other physical volumes, if any.)

First I check free space. Then I run balance twice. Because I run balance regularly, these two balance commands complete very quickly, but I always run the 10% usage command first, to be sure the process will finish quickly before I run the 25% command. On most of my machines the scrub command completes very quickly too. I can do all these maintenance commands in just a minute or two. However, on some 8 TB disks where it takes longer, I try to run scrub off-hours.

On volumes where I don't have many snapshots, I will run the defrag tool. Also, don't ever accumulate more than around 100 snapshots on a volume as a rule of thumb. Set sane limits and use the snapper cleanup algorithm.
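For example, limits can be set per config in /etc/snapper/configs/<name>. The key names below are real snapper configuration keys; the values are just illustrative, not a recommendation:

```shell
# Keep at most 10 number-cleanup (pre/post) snapshot pairs.
NUMBER_LIMIT="10"

# Thin out timeline snapshots so they cannot pile up.
TIMELINE_LIMIT_HOURLY="5"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="0"
TIMELINE_LIMIT_MONTHLY="0"
TIMELINE_LIMIT_YEARLY="0"
```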

Be sure you know what to do if you get an ENOSPC error.

The only reason I am mentioning BTRFS maintenance in this thread is because I am of the opinion that this layout discussion is largely a waste of time unless you are already fairly experienced with BTRFS and have specific and valid reasons for not using the default layout. Instead of worrying about layout, most of us should be concerned with mastering basic BTRFS maintenance. Compared to ext4, BTRFS is high maintenance. If anything I wrote above about BTRFS maintenance doesn't make sense to you, I suggest forgetting about layout and learning about maintenance instead. The best place to learn is the BTRFS mailing list in my experience.

Last edited by MountainX (2018-04-01 20:18:10)


#33 2018-04-01 21:15:01

Registered: 2018-01-26
Posts: 9

Re: Snapper/BTRFS layout for easily restoring files, or entire system

Thanks for the tip about the maintenance.

What is the default snapper layout, btw? I read and tested so much today that I'm confused. Btrfs is not an easy topic, unfortunately. Is it just X subvolumes under / with .snapshots created by snapper as nested subvolumes in each of them? I assume it's not the layout mentioned in the wiki ( … tem_layout) with a dedicated @snapshots subvolume..!?


#34 2019-02-28 08:43:18

Registered: 2014-04-02
Posts: 14

Re: Snapper/BTRFS layout for easily restoring files, or entire system

Hi, I'm new to BTRFS and want to install Arch with BTRFS on a new system. I think I understood enough, but I just want to verify that my assumptions are right.

It's a simple microserver with one SSD, no RAID.
My partition layout:
1: FAT32 - EFI
2: SWAP (for hibernate)

and a flat BTRFS-Layout:

ID 5
   ├── arch-root
   ├── arch-home
   ├── arch-snapshots
   └── arch-live

Now to the point where I don't trust myself enough :-)
I want to set arch-root as the default subvolume (with btrfs sub set-default /mnt/arch-root) so I can boot "directly" without a rootflags entry.
Next I want to add 2 boot entries: one normal, without subvol information, and one pointing to arch-live.

After all that, I should be able to roll back just by setting a snapshot as the default for the btrfs volume (via a still-booting system or the arch-live backup)?
Or is it better not to work with a default subvolume, and to mount explicitly instead?

Edit: Okay, I missed some points. The snapshots are read-only, so I would have to manage that afterwards too. Also, it may be easier to modify the bootloader configuration, because that only needs access to the FAT32 partition and a text editor; changing the default subvolume would need btrfs access and the btrfs tools.
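A sketch of the set-default approach described above (the subvolume id and paths are illustrative; note that a read-only snapper snapshot has to be cloned read-write first):

```shell
# Clone a read-only snapshot into a writable subvolume.
btrfs subvolume snapshot \
    /mnt/btrfsroot/arch-snapshots/6/snapshot \
    /mnt/btrfsroot/arch-root-new

# Find its id, then make it the default for the next boot.
btrfs subvolume list /mnt/btrfsroot
btrfs subvolume set-default 284 /mnt/btrfsroot   # 284 is illustrative
```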

Last edited by qupfer (2019-02-28 08:57:08)


#35 2019-03-01 00:19:00

Registered: 2007-09-28
Posts: 6,217

Re: Snapper/BTRFS layout for easily restoring files, or entire system

qupfer, welcome to the forums. This thread is almost 12 months old; please let the dead rest in peace :) … bumping.22


