Since I split my 2TB HDD (a Samsung M9T, which I bought precisely to make it a BTRFS RAID1 array for data security) into a 2 x 1TB btrfs RAID1 array, I keep having horrible mounting problems.
There are two subvolumes on that array:
@data
@sys_backup
fstab entries for them look like so:
UUID=a0555768-a37f-4bd8-95e5-66ebd6a09c75 /mnt/Disk_D btrfs device=/dev/mapper/2TB-1TB_RAID1a,device=/dev/mapper/2TB-1TB_RAID1b,rw,noatime,compress=lzo,autodefrag,space_cache,nofail,commit=180,subvol=@data 0 0
UUID=a0555768-a37f-4bd8-95e5-66ebd6a09c75 /mnt/arch_backup btrfs device=/dev/mapper/2TB-1TB_RAID1a,device=/dev/mapper/2TB-1TB_RAID1b,rw,noatime,compress=zlib,autodefrag,space_cache,nofail,x-systemd.automount,commit=250,subvol=@sys_backup 0 0
Now, I rarely access @sys_backup, but I have never found it unmounted. @data, on the other hand, hardly ever mounts.
Most of the time after boot, when I try to access the /mnt/Disk_D mountpoint, it either
A – turns out empty, and running the “mount” command confirms there's nothing mounted there. Trying to mount it manually has absolutely no effect. Not even an error message.
or
B – every program used to access that mountpoint (terminal, file manager, conky displaying free space, or anything else) freezes and I have to kill it. Even typing “/mnt/D” and pressing TAB for autocompletion crashes the terminal.
I then need to reboot anywhere from 1 to 15 more times to get it to mount properly.
I have had this problem since I started using RAID1 3 months ago. It has occurred on kernels 3.17, 3.18 and 3.19.
Other things that made no difference include:
➤ adding/removing “device=” option
➤ adding/removing x-systemd.automount option
➤ changing UUIDs to /dev/mapper/... entries
➤ completing scrub on that partition
➤ running dry run btrfs check (no errors indicated)
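For reference, a diagnostic sequence worth running after a failed boot (as root; the UUID is the one from my fstab above — `btrfs device scan` re-registers member devices in case the kernel missed one at boot, which is a common cause of multi-device btrfs mounts hanging or silently failing):

```shell
# Re-register all btrfs member devices with the kernel:
btrfs device scan
# Verify that both halves of the RAID1 show up under the filesystem UUID:
btrfs filesystem show a0555768-a37f-4bd8-95e5-66ebd6a09c75
# Retry the mount and check the kernel log for btrfs errors:
mount /mnt/Disk_D
dmesg | tail -n 20
```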
This is horrible. Is there any solution to it, or is this just the current state of BTRFS? And if it is the latter, what solution other than btrfs RAID1 could be implemented?
Last edited by Lockheed (2015-02-01 07:27:35)
Offline
No, you understand it correctly.
Offline
firecat53 wrote:Am I understanding correctly that you're using two partitions on a single drive in Raid 1?? I hope I'm misunderstanding you...
No, you understand it correctly.
There's no benefit to that. RAID works across separate physical disks; that's what it's designed for. Its advantages are at best mitigated (at worst, and more probably, non-existent) when used on a single disk. And in the event of some problem, using one disk in a RAID array would actually make recovering the data more difficult.
Offline
There's no benefit to that. RAID works across separate physical disks; that's what it's designed for. Its advantages are at best mitigated (at worst, and more probably, non-existent) when used on a single disk. And in the event of some problem, using one disk in a RAID array would actually make recovering the data more difficult.
That is the case with mdadm and hardware RAID1. However, in the case of btrfs RAID1 you get the benefit of data self-healing:
http://askubuntu.com/questions/406096/h … id-1-array
I had btrfs corrupt my data too many times to pass on that benefit, and I can't think of a better solution - if someone knows about one, please share.
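To be clear, the self-healing is exercised by a scrub, which reads every block, verifies checksums, and repairs corrupted copies from the intact mirror. A sketch using the mountpoint from my fstab above (run as root):

```shell
# Start a scrub; on RAID1 it rewrites bad blocks from the good copy:
btrfs scrub start /mnt/Disk_D
# Check progress and the number of corrected errors:
btrfs scrub status /mnt/Disk_D
```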
Offline
There's nothing in that linked article about doing a RAID 1 on a single disk. If you think about it, you are going to absolutely kill your read/write times by having the drive head trying to keep up a RAID 1 mirror on two partitions of the same drive. My guess is that's why you are having issues. You're going to be much better off buying two 1TB drives and using those in RAID 1!! Or just a single 2TB btrfs drive....with a UPS and another drive as a backup drive. I've been using btrfs on my laptop SSD and on my server for almost a year now with only one problem (corruption from a power loss to the server before I got the UPS).
Scott
Offline
There's nothing in that linked article about doing a RAID 1 on a single disk.
The whole discussion I linked is about doing a RAID 1 on a single disk. And the article linked from that discussion is only to prove the point of btrfs self-healing.
If you think about it, you are going to absolutely kill your read/write times by having the drive head trying to keep up a RAID 1 mirror on two partitions of the same drive.
The read performance is unchanged. The write performance is 50% of that of a standard partition, but since it is a data drive, that is not a serious issue. Certainly less serious than data security.
My guess is that's why you are having issues.
Maybe so, but I don't see a reason to assume it, and even if it is so, I don't think those issues should be there.
You're going to be much better off buying two 1TB drives and using those in RAID 1!! Or just a single 2TB btrfs drive....with a UPS and another drive as a backup drive.
I can't do that because it is a laptop SSD with system + HDD with data.
I've been using btrfs on my laptop SSD and on my server for almost a year now with only one problem (corruption from a power loss to the server before I got the UPS).
I have been using btrfs on my laptop SSD + HDD and on my RPi server for over a year, and btrfs check was detecting minor issues every few months.
I now run SSD (non-RAID) + HDD (2 partitions in RAID1) on the laptop, and SD (2 partitions in RAID1) + HDD (2 partitions in RAID1) on the RPi (which, by the way, has no such problems mounting either the SD or the HDD in RAID1), and I have not yet had any error with btrfs check.
Offline
firecat53 wrote:There's nothing in that linked article about doing a RAID 1 on a single disk.
The whole discussion I linked is about doing a RAID 1 on a single disk. And the article linked from that discussion is only to prove the point of btrfs self-healing.
One post by someone on Ask Ubuntu does not constitute anything like reliable documentation...
Offline
One post by someone on Ask Ubuntu does not constitute anything like reliable documentation...
I didn't say it's reliable documentation. I said the author raises a valid and logical point. If there is something wrong with that point, please point it out.
Offline
Lockheed, even if this is possible, there's no point to it. A self-repairing filesystem? Yeah, that's cool, except it requires cutting my performance by 50%, my disk space by 75%, and still ending up with total data loss when the drive inevitably fails anyway. Pointless.
Offline
Lockheed, even if this is possible, there's no point to it. A self-repairing filesystem? Yeah, that's cool, except it requires cutting my performance by 50%, my disk space by 75%, and still ending up with total data loss when the drive inevitably fails anyway. Pointless.
This is hardly correct.
The write performance is cut by 50%, the read performance remains 100%.
The space is cut by 50%.
The data is fully protected against bitrot or normal disk corruption.
While this balance of costs and benefits might not be worthwhile for you, calling it pointless is ridiculous, as it obviously has significant advantages which cannot be achieved in any other way under these conditions (laptop, one HDD, etc.).
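For context, the setup being debated is simply btrfs RAID1 pointed at two partitions of the same physical disk. A sketch using the device names from my fstab (note this destroys any existing data on those partitions):

```shell
# Mirror both metadata (-m) and data (-d) across two partitions
# of the same physical disk:
mkfs.btrfs -m raid1 -d raid1 \
    /dev/mapper/2TB-1TB_RAID1a /dev/mapper/2TB-1TB_RAID1b
```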
Offline
Since the start of this year, BTRFS allows the block group profile for data on a single device to be switched to DUP, which provides the poor man's data duplication/redundancy Lockheed is looking for in a slightly more "native"/elegant way. BTRFS documentation hasn't been updated everywhere with this information, but you can read about it here: https://btrfs.wiki.kernel.org/index.php … GLE_DEVICE
New volumes with the option can be created by pointing mkfs at the block device (here /dev/sdX as a placeholder), not at a mountpoint:
mkfs.btrfs -d dup /dev/sdX
Existing volumes can be converted using:
sudo btrfs balance start -dconvert=dup /mnt
The documentation notes that this option can only be enabled in single device setups.
Presumably, this can be reverted using: [I've not tested.]
sudo btrfs balance start -dconvert=single /mnt
---
You can check the status of your block group profile, file system usage and the progress of the balance action using:
btrfs filesystem df /mnt
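For illustration, after a successful conversion the Data line should report the DUP profile; the output looks along these lines (the numbers here are made up):

```shell
$ btrfs filesystem df /mnt
Data, DUP: total=232.00GiB, used=230.21GiB
System, DUP: total=32.00MiB, used=48.00KiB
Metadata, DUP: total=2.00GiB, used=1.10GiB
```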
Sorry for the necro bump! I was looking for the same information myself and found this thread before finding the above solution.
Offline
@Tonurics, thanks for the heads-up.
I wonder if:
- this parameter caused 50% drop in write performance, as Raid1 on a single HDD did
- it can be set independently on different subvolumes of the same pool
Offline
I tried it out on an external USB drive: it took about an hour to convert ~250GB to the DUP profile. Afterwards, I confirmed new data was being duplicated and performed a rudimentary write comparison with rsync; surprisingly, I saw no write-performance drop in my use case [~70 MB/s, unchanged].
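If anyone wants to sanity-check sequential write speed themselves, here is a crude `dd` sketch (not the rsync comparison I ran; the target here is just a temp file standing in for a file on the volume under test):

```shell
# Crude sequential write test: write 64 MiB and flush it to disk.
# conv=fdatasync makes dd include the final flush in its timing.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET"
```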
The only other things of note: the reported free space was cut in half [expected], a 1GB "single" profile block group [unused, 0 bytes] remained on the drive, and the scrub command felt like it took a lot longer to run than I remembered (but that could just be me being hypersensitive and not paying attention to previous runs).
Offline