I'm building a NAS.
The image below shows my initial plan for its storage devices. The RAID 6 array would hold a single logical volume (LV) cached by the two mirrored SSDs. The LV would then hold a BTRFS filesystem (--data single --metadata dup) mounted at /srv/nfs and holding all the network shares.
I have a UPS, so I want the SSDs to act as a write cache too (--cachemode writeback).
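Concretely, what I had in mind was roughly this (device names and sizes are just placeholders):

# four HDDs (sdb-sde) in md RAID 6, two SSDs (nvme0n1/nvme1n1) as a mirrored writeback cache
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[b-e]
pvcreate /dev/md0 /dev/nvme0n1 /dev/nvme1n1
vgcreate nas /dev/md0 /dev/nvme0n1 /dev/nvme1n1
lvcreate -n shares -l 100%PVS nas /dev/md0
# mirrored cache pool on the SSDs, attached in writeback mode
lvcreate --type raid1 -m1 -L 400G -n cpool nas /dev/nvme0n1 /dev/nvme1n1
lvcreate --type raid1 -m1 -L 1G -n cpool_meta nas /dev/nvme0n1 /dev/nvme1n1
lvconvert --type cache-pool --poolmetadata nas/cpool_meta nas/cpool
lvconvert --type cache --cachepool nas/cpool --cachemode writeback nas/shares
mkfs.btrfs --data single --metadata dup /dev/nas/shares
mount /dev/nas/shares /srv/nfs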
Then I learned that letting BTRFS manage the drives itself, instead of sitting on top of a RAID controller layer, enables advanced features such as file self-healing. Accordingly, my revised plan is to replace the RAID 6 with BTRFS's RAID1c3, which is similar to RAID 6 but more stable than BTRFS's own RAID 6 implementation.
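The revised layout would then be roughly just this (placeholder device names again; RAID1c3 needs at least three drives and kernel/btrfs-progs 5.5 or newer):

mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mount /dev/sdb /srv/nfs
btrfs filesystem df /srv/nfs   # should report raid1c3 for both data and metadata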
The problem: how can I keep my cache from before?
The only thing I can think of involves buying two additional SSDs and making an LV on each SSD that caches a corresponding LV on each HDD. Unfortunately, I don't think my motherboard can support eight SSDs, which is the number of HDDs I would like to eventually expand to.
Any thoughts?
Last edited by TankieTanuki (2024-11-21 12:30:29)
Offline
first of all: don't layer a filesystem/volume manager like btrfs or zfs on top of dumb layers like md or lvm - that's asking for data corruption and loss
if you use these special filesystems at all, put them directly on the drives - they are specifically designed to overcome the limitations of dumb raid and should manage the low-level drive access themselves
why?
your lvm can't tell if a drive returns garbage - it relies on the drive to report that it faulted
btrfs and zfs have block-level checksums, so they can not only tell which drive crapped itself but also restore the faulty data from a good copy
layering btrfs on top of lvm loses you most of what btrfs was designed to do better than lvm - get rid of it
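as an example - with btrfs managing the drives directly (and a redundant profile like raid1/raid1c3) a scrub verifies every checksum and repairs bad blocks from a good copy (mount point just taken from your post):

btrfs scrub start /srv/nfs
btrfs scrub status /srv/nfs      # progress and how many errors were corrected
btrfs device stats /srv/nfs      # per-device read/write/corruption error counters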
as for how to set up caching with btrfs? I don't know - but btrfs also doesn't implement raid6 properly and it's advised not to use btrfs' raid6 implementation
my recommendation: use zfs - it's designed for exactly this kind of application:
the pool can be set up with the big drives in a raidz2 vdev (effectively raid6) and the ssds can be added as l2arc (read cache) devices - l2arc can't be mirrored, but it doesn't need to be, since it only holds copies of data that's already in the pool
also zfs doesn't really need a write cache, as a separate zil/slog device only helps synchronous writes - which most writes are not
https://www.45drives.com/community/arti … s-caching/
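a rough sketch of that layout with placeholder names (pool "tank", four hdds sdb-sde, two nvme ssds):

zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool add tank cache /dev/nvme0n1 /dev/nvme1n1   # l2arc read cache - the devices are used individually, not mirrored
zfs create -o mountpoint=/srv/nfs tank/shares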
another option could be to ditch linux and use windows with storage spaces and refs
in the msdn docs there's a step-by-step copy'n'paste explanation of how to set up tiered storage (using ssds to buffer hdds)
Offline
ZFS with Arch = some hoops to jump through. See the ZFS article on the Arch wiki. Short version: it's not in the kernel and requires DKMS or a custom kernel, and - according to the wiki - ZFS is not always up to speed with the current Arch kernel, so additional care is required during updates.
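If an update ever gets ahead of the ZFS module, one stopgap is to hold the kernel back until ZFS catches up - for example in /etc/pacman.conf (just an illustration, check the wiki for the current recommendation):

# /etc/pacman.conf
IgnorePkg = linux linux-headers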
btrfs's RAID1c3 is not like RAID6. https://btrfs.readthedocs.io/en/latest/ … l#profiles.
Offline
requires DKMS or a custom kernel
that's just false - please have a look at https://github.com/archzfs/archzfs
I'm running 2.3.0-rc3 on standard 6.11.9 right now (available via my fork https://github.com/n0xena/archzfs )
the main project also provides pre-built packages for the lts, hardened and zen kernels as well as the standard one
dkms is only needed if you use a custom kernel
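roughly, using the repo looks like this (the exact Server line and signing key are in the archzfs README - treat this as a sketch):

# /etc/pacman.conf
[archzfs]
Server = https://archzfs.com/$repo/$arch

# then install the prebuilt module matching your kernel
pacman -Sy zfs-linux        # or zfs-linux-lts / zfs-dkms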
as for lagging behind: that's also solved by the github auto-build setup that's now in place, refreshing every 24h - so unless you build your own like I do, you wait at most 23h59m for an update
and yes - with the upcoming 2.2.7, kernel 6.11 will get official support, and zfs is already working on 6.12 support asap as that's the new lts
so zfs is quite a viable option even on arch - and as said: it fits the OP's requirements perfectly
Offline
Yeah, my brain filters out every instance of "third party repository" when it comes to trusting a component with all my data, but if that's not an instant no for you, try ZFS.
Offline
Thanks for the replies. I'll use ZFS.
I meant RAID1c3 was similar to RAID6 in terms of device fault tolerance.
Last edited by TankieTanuki (2024-11-21 23:08:55)
Offline
well - the difference between a classic raid10 and raid6 is: in a raid6 ANY two drives can fail - a raid10 can survive UP TO half its drives failing, but ONLY if at most one drive per mirror fails - it can also fail completely from just two drive failures if both are in the same mirror
I'm not quite sure about the actual implementation of raid1c3 - all I found was "it stores 3 copies on 3 disks" - well, a raid1 should do that anyway, no matter how wide the mirror is
in the end it comes down to: can you afford downtime to rebuild and restore from backup - or do you take the risk and hope you're quick enough to replace two drives before a third fails?
fun fact: I use a direct-attached 8-drive raidz2 pool and just recently replaced a few drives
two drives had their issues for quite some time - then a third failed, and that's when I jumped and got new drives
then I replaced one drive - during that first rebuild a second drive failed - luckily I was able to rebuild the first drive successfully
when I replaced and started rebuilding the second failed drive, a third drive failed, and due to my HBA that caused quite some issues
so I had to gamble and disconnect the third drive to get the previously failed second drive rebuilt
only then was I able to rebuild the third drive
but - looking at smart - there are at least two more of the old drives that will fail soon, and I only have one new drive left
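each replacement itself is just something like this (placeholder pool/device names):

zpool replace tank failed-disk-id /dev/sdX   # swap the failed disk for the new one
zpool status -v tank                         # watch the resilver and any remaining errors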
I searched for several years for my go-to solution and finally settled on ZFS as it fits >my< needs best - I keep recommending it because it covers a lot of use cases, even some exotic requirements - and from what you describe, zfs is a good fit for what you want
whether you go zfs is up to you - but no matter what solution you end up with, don't go the dumb way of old-school simple MD or LVM - it has too many cons, which are exactly the reasons ZFS and btrfs were created - use one of those instead, as the old ones will fail you with silent bit rot and other crap that can be avoided
you could also go with something more advanced like TrueNAS, which uses zfs plus a bit of proprietary magic with a neat graphical interface
Offline
Be advised that TrueNAS is based on FreeBSD and not Linux.
Offline
Be advised that TrueNAS is based on FreeBSD and not Linux.
hate to say it - but wrong again: there's the linux-based TrueNAS SCALE
Offline
If I go to the TrueNAS website and download the official image, will this be Linux based or FreeBSD based?
Offline
depends on whether you download truenas core (bsd-based) or truenas scale (linux-based)
it's right on their page
https://www.truenas.com/compare/
truenas scale
base os: debian
Last edited by cryptearth (2024-11-22 13:20:15)
Offline