Last night I created a RAID5 array using mdadm and 5 x 3TB HDDs. I let the sync happen overnight, and on returning home this evening
watch -d cat /proc/mdstat
returned:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda1[0] sdc1[2] sde1[5] sdb1[1] sdd1[3]
11720536064 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
bitmap: 2/22 pages [8KB], 65536KB chunk
unused devices: <none>
which to me pretty much looks like the array sync has completed.
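For reference, the create step was along these lines (a sketch only; the exact options are assumptions reconstructed from the --detail output below):
mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=512 --bitmap=internal \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1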
I then updated the config file, assembled the array and formatted it using:
mdadm --detail --scan >> /etc/mdadm.conf
mdadm --assemble --scan
mkfs.ext4 -v -L offsitestorage -b 4096 -E stride=128,stripe-width=512 /dev/md0
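For what it's worth, those -E values follow directly from the array geometry (a quick check using the 512K chunk, 4K block size and 5 drives above):
# stride       = chunk size / filesystem block size      = 512 KiB / 4 KiB = 128
# stripe-width = stride x data disks (RAID5: n - 1)      = 128 x 4         = 512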
Running
mdadm --detail /dev/md0
returns:
/dev/md0:
Version : 1.2
Creation Time : Thu Apr 17 01:13:52 2014
Raid Level : raid5
Array Size : 11720536064 (11177.57 GiB 12001.83 GB)
Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Apr 17 18:55:01 2014
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : audioliboffsite:0 (local to host audioliboffsite)
UUID : aba348c6:8dc7b4a7:4e282ab5:40431aff
Events : 11306
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
5 8 65 4 active sync /dev/sde1
So I'm now left wondering why the state of the array isn't "clean". Is it normal for arrays to show a state of "active" instead of "clean" under Arch?
Clean and active mean different things according to this, but it could be out of date.
^^^ yep, hence my question.
I'm fast beginning to think the simplest course of action for me is to abandon Arch and revert to Debian. Too many idiosyncrasies in just about everything I've attempted thus far. Even getting an NFS share accessible is a rigmarole.
These aren't idiosyncrasies, Arch or otherwise. They're the relatively current state of affairs upstream.
This is completely normal for an array that uses a write intent bitmap. Nothing to worry about.
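One way to see that the bitmap is what makes the difference (a sketch; /dev/md0 assumed, and removing the bitmap is entirely optional):
mdadm --detail /dev/md0 | grep -i bitmap     # confirm the internal bitmap is there
mdadm --grow --bitmap=none /dev/md0          # temporarily remove the bitmap
mdadm --detail /dev/md0 | grep -i state      # should now report "clean"
mdadm --grow --bitmap=internal /dev/md0      # put the bitmap back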
Thanks for the responses.
alphaniner wrote:
    These aren't idiosyncrasies, Arch or otherwise. They're the relatively current state of affairs upstream.
Could you elaborate on what you mean by upstream? I'm assuming you're referring to changes in recent mdadm and/or the Linux kernel.
frostschutz wrote:
    This is completely normal for an array that uses a write intent bitmap. Nothing to worry about.
So this array is not degraded or synchronizing, it's good to go, and the server can be shut down/restarted at will?
audiomuze wrote:
    Could you elaborate on what you mean by upstream? I'm assuming you're referring to changes in recent mdadm and/or the Linux kernel.
Yes, you're exactly right. Arch Linux is much closer to what the actual developers of the Linux kernel or mdadm created than most other distros, which modify these programs for their distribution. In this case it's incorrect to say "Arch has too many idiosyncrasies", because Arch has zero idiosyncrasies for 99.99% of the packages you install; they are exactly what the developer of each package created.
audiomuze wrote:
    So this array is not degraded or synchronizing, it's good to go, and the server can be shut down/restarted at will?
Systems with mdadm arrays are always safe to shut down or restart; mdadm will simply continue a pending rebuild afterwards. On your system mdadm is simply done and the array is working fine.
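If you want to double-check before a reboot, something like this works (a sketch; md0 assumed):
cat /proc/mdstat                        # no resync/recovery line means nothing is rebuilding
cat /sys/block/md0/md/sync_action       # reports "idle" when no sync is running
mdadm --wait /dev/md0                   # optional: blocks until any running resync finishes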
From the kernel's md documentation on array_state:

    clean - no pending writes, but otherwise active.
        When written to an inactive array, starts without resync.
        If a write request arrives then:
            if metadata is known, mark 'dirty' and switch to 'active';
            if not known, block and switch to write-pending.
        If written to an active array that has pending writes, then fails.
    active - fully active: IO and resync can be happening.
        When written to an inactive array, starts with resync.
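The state the documentation refers to can also be read straight from sysfs (md0 assumed):
cat /sys/block/md0/md/array_state       # prints e.g. "active" or "clean"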
^^^ that clean vs. active description is consistent with my understanding, which is why I have no idea why the array status should be "active" given that no data has been written to it and the build process has completed. What writes could be pending?
Contrasting two RAID5 arrays - the first created in Arch, the second in Ubuntu Server. Both were created using the same command set (the only differences being the number of drives and the stride optimised for that number of drives). Why the difference in status out of the starting blocks? I've not been able to find anything in the documentation or in mdadm's manpage to explain this. If additional info is required in order to assist, please let me know and I'll provide it.
Thanks in advance for your consideration and assistance.
/dev/md0:
Version : 1.2
Creation Time : Thu Apr 17 01:13:52 2014
Raid Level : raid5
Array Size : 11720536064 (11177.57 GiB 12001.83 GB)
Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon May 5 05:35:28 2014
State : active
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : audioliboffsite:0
UUID : aba348c6:8dc7b4a7:4e282ab5:40431aff
Events : 11307
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
5 8 65 4 active sync /dev/sde1
/dev/md0:
Version : 1.2
Creation Time : Sun Feb 2 21:40:15 2014
Raid Level : raid5
Array Size : 8790400512 (8383.18 GiB 9001.37 GB)
Used Dev Size : 2930133504 (2794.39 GiB 3000.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon May 5 06:45:45 2014
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : fileserver:0 (local to host fileserver)
UUID : 8389cd99:a86f705a:15c33960:9f1d7cbe
Events : 208
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
4 8 49 3 active sync /dev/sdd1
FYI, I checked with the developer on the mdadm mailing list - the status should be "clean", not "active". Unfortunately a Microserver issue caused one of the drives to fall out of the array, so it is now degraded; I haven't yet resolved that, but will post an update when done.
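For reference, a dropped member is usually returned to the array along these lines (a sketch; the device name here is only a placeholder):
mdadm --manage /dev/md0 --re-add /dev/sde1   # try re-adding the dropped partition
# if --re-add is refused, add it back as a fresh member (triggers a full rebuild):
mdadm --manage /dev/md0 --add /dev/sde1
cat /proc/mdstat                             # watch the recovery progress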
audiomuze wrote:
    checked with the developer on the mdadm mailing list - status should be clean, not active.
Well, but it's not. And it's normal that it's not; I can reproduce it - an array with a bitmap stays "active". So I think whoever told you it should be "clean" missed something there.
Of course, this could be some obscure bug in the kernel or in mdadm, but for the moment, that just seems to be how it works.
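The two test arrays below were set up roughly like this (a sketch; the image sizes, file names and loop device numbers are assumptions):
for i in 0 1 2 3 4 5; do truncate -s 100M disk$i.img; losetup /dev/loop$i disk$i.img; done
mdadm --create /dev/md42 --level=5 --raid-devices=3 /dev/loop0 /dev/loop1 /dev/loop2
mdadm --create /dev/md43 --level=5 --raid-devices=3 --bitmap=internal /dev/loop3 /dev/loop4 /dev/loop5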
without bitmap:
# mdadm --detail /dev/md42
/dev/md42:
Version : 1.2
Creation Time : Tue Jun 17 10:29:12 2014
Raid Level : raid5
Array Size : 161792 (158.03 MiB 165.68 MB)
Used Dev Size : 80896 (79.01 MiB 82.84 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Jun 17 10:29:12 2014
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : EIS:42 (local to host EIS)
UUID : c7915b45:d38b7922:cf2ad1ec:fab8f4cb
Events : 19
Number Major Minor RaidDevice State
0 7 0 0 active sync /dev/loop0
1 7 1 1 active sync /dev/loop1
3 7 2 2 active sync /dev/loop2
with bitmap:
# mdadm --detail /dev/md43
/dev/md43:
Version : 1.2
Creation Time : Tue Jun 17 10:29:23 2014
Raid Level : raid5
Array Size : 161792 (158.03 MiB 165.68 MB)
Used Dev Size : 80896 (79.01 MiB 82.84 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Tue Jun 17 10:29:23 2014
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : EIS:43 (local to host EIS)
UUID : 481a9da8:6a31d009:c94245fa:469ec7fb
Events : 19
Number Major Minor RaidDevice State
0 7 3 0 active sync /dev/loop3
1 7 4 1 active sync /dev/loop4
3 7 5 2 active sync /dev/loop5
Turns out it was a minor bug in the mdadm code. The underlying array is fine; the call that returns the array status contained a small logic error. Subsequent releases should see the status returned correctly.