
#1 2026-03-10 10:23:40

dimich
Member
From: Kharkiv, Ukraine
Registered: 2009-11-03
Posts: 549

md_update_sb: can't update sb for read-only array with 6.19.6 kernel

After updating to linux-6.19.6.arch1-1, I noticed a kernel warning at boot:

Mar 09 10:27:28 dimich kernel: md127: echo current LBS to md/logical_block_size to prevent data loss issues from LBS changes.
                                       Note: After setting, array will not be assembled in old kernels (<= 6.18)
Mar 09 10:27:28 dimich kernel: md/raid1:md127: active with 2 out of 2 mirrors
Mar 09 10:27:28 dimich kernel: md127: detected capacity change from 0 to 3906762880
Mar 09 10:27:28 dimich kernel: md_update_sb: can't update sb for read-only array md127

(I use RAID1 for /home)

No real issues with the md array observed (yet), and this seems to be a forward-compatibility fix.
I just want to clarify: do I understand correctly that if I update the LBS as suggested, e.g.:

# cat /sys/block/md127/queue/logical_block_size > /sys/block/md127/md/logical_block_size

this warning should disappear, but I will not be able to assemble this RAID device with kernels < 6.19, e.g. with linux-lts (until it is also updated to 6.19)?


#2 2026-03-10 11:05:45

frostschutz
Member
Registered: 2013-11-15
Posts: 1,636

Re: md_update_sb: can't update sb for read-only array with 6.19.6 kernel

See man mdadm, --logical-block-size, regarding the new blocksize feature. I would ignore it for now, as long as linux-lts does not support it. Leave it as is until mdadm complains about it (it might not allow you to replace a disk due to a wrong sector size).

Your cat command should not work: logical_block_size, if unset, is simply 0. You have to choose 512 or 4096 yourself, or query it with blockdev.

$ head /sys/block/md*/md/logical_block_size

==> /sys/block/md101/md/logical_block_size <==
0

==> /sys/block/md102/md/logical_block_size <==
0

==> /sys/block/md103/md/logical_block_size <==
0

# blockdev --getss /dev/md*
512
512
512
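
If you do decide to set it, a small wrapper keeps you from writing a bogus value. Untested on a real array, and set_md_lbs is just a name I made up for this post; the demo at the bottom runs against a mock sysfs tree so nothing real is touched:

```shell
#!/bin/sh
# set_md_lbs is a made-up helper, not anything mdadm ships.
# It copies the LBS that the queue directory already reports into
# md metadata, refusing anything other than 512 or 4096.
set_md_lbs() {
    sysfs=$1 dev=$2
    lbs=$(cat "$sysfs/block/$dev/queue/logical_block_size") || return 1
    case "$lbs" in
        512|4096) echo "$lbs" > "$sysfs/block/$dev/md/logical_block_size" ;;
        *) echo "unexpected logical block size: $lbs" >&2; return 1 ;;
    esac
}

# real usage would be:  set_md_lbs /sys md127
# demo on a mock sysfs tree:
mock=$(mktemp -d)
mkdir -p "$mock/block/md127/queue" "$mock/block/md127/md"
echo 512 > "$mock/block/md127/queue/logical_block_size"
echo 0   > "$mock/block/md127/md/logical_block_size"
set_md_lbs "$mock" md127
cat "$mock/block/md127/md/logical_block_size"   # prints 512
```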

Once set, it seems not possible to change it again (echo: write error: Device or resource busy). This very much looks like a one-way road.

It would be nice if the mdadm utility itself offered a sane way to set the block size, but it only does that when you --create... that's mdadm for you. It can't change things after the fact, or only via --assemble --update, and for logical block size apparently not even that.

Might make a lot of sense to just set it to 4096 eventually, since most devices have that physical sector size.

Not sure however why you get the md_update_sb message. Arrays are usually assembled in auto-read-only (read-auto) mode. This allows incremental assembly and is simply a safety measure so that arrays are not modified pointlessly while nobody is actually using them.

It goes into read-write once it's actually mounted and a regular write request comes in. That's also when any ongoing reshape or resync operations should resume.

Before that there should not be any need to update the superblock. I don't have this message for any of my arrays, so not sure what's going on with your array there. It should be unrelated to the logical block message.
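
If you want to see which mode the array is actually in, sysfs will tell you (md127 taken from your logs; md_status is just a throwaway helper for this post, and attributes that don't exist simply show n/a):

```shell
#!/bin/sh
# array_state shows "read-auto" until the first write, then "active"/"clean";
# "mdadm --readwrite /dev/md127" forces the transition by hand.
md_status() {
    dev=$1
    for f in array_state level raid_disks; do
        printf '%s: ' "$f"
        cat "/sys/block/$dev/md/$f" 2>/dev/null || echo n/a
    done
}
md_status md127
```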

----

Hmmm, you can't even set it to 512 for new arrays if your drives report 4096...

# mdadm --create --logical-block-size=512 ...
mdadm: Defaulting to version 1.2 metadata
mdadm: failed to write '512' to '/sys/block/md42/md//logical_block_size' (Invalid argument)
mdadm: Failed to set logical_block_size 512

Doesn't work for devices with block size of 512 either...

So you can only let 512 be picked by default, or specify 4096 yourself.

That's just... well. Quite strange. Does anybody ever test this stuff...

Last edited by frostschutz (2026-03-10 11:19:32)


#3 2026-03-10 11:43:48

dimich
Member
From: Kharkiv, Ukraine
Registered: 2009-11-03
Posts: 549

Re: md_update_sb: can't update sb for read-only array with 6.19.6 kernel

frostschutz wrote:

I would ignore it for now, as long as linux-lts does not support it. Leave as is until mdadm complains about it

Thank you for fast reply. Yep, I'm going to ignore it, at least for now.

frostschutz wrote:

Your cat command should not work: logical_block_size, if unset, is simply 0.

There is an actual value in /sys/block/md127/queue/logical_block_size:

$ cat /sys/block/md127/queue/logical_block_size 
512
frostschutz wrote:

Might make a lot of sense to just set it to 4096 eventually, since most devices have that physical sector size.

Hm, my HDDs have a physical block size of 4096 but a logical one of 512:

$ grep -H '' /sys/block/sd{a,b}/queue/{logical,physical}_block_size 
/sys/block/sda/queue/logical_block_size:512
/sys/block/sda/queue/physical_block_size:4096
/sys/block/sdb/queue/logical_block_size:512
/sys/block/sdb/queue/physical_block_size:4096
frostschutz wrote:

Not sure however why you get the md_update_sb message. Arrays are usually assembled in auto-read-only (read-auto) mode.

This puzzled me too. The message sounds like the md driver is trying to update the field during the initial assembly phase.


#4 2026-03-10 13:55:29

frostschutz
Member
Registered: 2013-11-15
Posts: 1,636

Re: md_update_sb: can't update sb for read-only array with 6.19.6 kernel

Anything special in mdadm --examine?

Are you using a systemd- or busybox-based initcpio? Assembly should be mostly identical though, since it is triggered in the background by udev rules. Oh, since you mention the RAID was for /home only, do you have the mdadm hook in initcpio at all?

You could also, like, clone the metadata headers (grab the first 64K of each drive, create loop or zram devices in a VM, assemble there) just to see whether it also occurs during manual assembly, and whether mdadm --assemble --verbose prints anything strange.
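
Roughly like this. Untested sketch; clone_md_headers is a name I made up, the device names come from this thread, and the assemble part must only ever run in a throwaway VM, as root:

```shell
#!/bin/sh
# clone_md_headers is a made-up helper for this post.
clone_md_headers() {
    out=$1; shift
    mkdir -p "$out"
    for dev in "$@"; do
        img="$out/$(basename "$dev").img"
        # 1.2 metadata sits near the start (superblock at 8 sectors,
        # bitmap and bad-block log right after), so 64K covers it
        dd if="$dev" of="$img" bs=64K count=1 status=none
        # sparse-pad so mdadm sees a device at least as big as the original
        truncate -s 2T "$img"
    done
}

# on the real machine:  clone_md_headers /tmp/mdclone /dev/sda1 /dev/sdb1
# then in the VM:
#   losetup -f --show /tmp/mdclone/sda1.img
#   losetup -f --show /tmp/mdclone/sdb1.img
#   mdadm --assemble --verbose /dev/md42 /dev/loop0 /dev/loop1
```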

There was another thread recently about /bin/sh missing in initcpio, but it doesn't seem like it could be related... that one was about an outright failure to assemble or to proceed in the boot process... https://bbs.archlinux.org/viewtopic.php … 0#p2290000


#5 2026-03-10 15:09:30

dimich
Member
From: Kharkiv, Ukraine
Registered: 2009-11-03
Posts: 549

Re: md_update_sb: can't update sb for read-only array with 6.19.6 kernel

frostschutz wrote:

Anything special in mdadm --examine?

Nothing suspicious:

$ sudo mdadm --detail /dev/md127 
/dev/md127:
           Version : 1.2
     Creation Time : Wed Dec  1 15:30:48 2021
        Raid Level : raid1
        Array Size : 1953381440 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953381440 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Mar 10 16:36:21 2026
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : dimich:0  (local to host dimich)
              UUID : dcfb19d2:0235f29f:855f4c89:31d04b43
            Events : 17093

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8        1        1      active sync   /dev/sda1
$ sudo mdadm --examine /dev/sd{a,b}1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : dcfb19d2:0235f29f:855f4c89:31d04b43
           Name : dimich:0  (local to host dimich)
  Creation Time : Wed Dec  1 15:30:48 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906762895 sectors (1862.89 GiB 2000.26 GB)
     Array Size : 1953381440 KiB (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906762880 sectors (1862.89 GiB 2000.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=15 sectors
          State : clean
    Device UUID : 8e8a25ed:861c7922:7d538f18:7045eef8

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Mar 10 16:33:42 2026
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 3fb1a2a8 - correct
         Events : 17093


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : dcfb19d2:0235f29f:855f4c89:31d04b43
           Name : dimich:0  (local to host dimich)
  Creation Time : Wed Dec  1 15:30:48 2021
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3906762895 sectors (1862.89 GiB 2000.26 GB)
     Array Size : 1953381440 KiB (1862.89 GiB 2000.26 GB)
  Used Dev Size : 3906762880 sectors (1862.89 GiB 2000.26 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=15 sectors
          State : clean
    Device UUID : 953ebea1:16135982:7ab05e1c:3548bbdb

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Mar 10 16:33:42 2026
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 3ac6ad00 - correct
         Events : 17093


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
frostschutz wrote:

Are you using systemd or busybox based initcpio? ... do you have mdadm hook in initcpio at all?

I use a busybox-based initcpio. Indeed, there is an mdadm_udev hook because the rootfs was also on this RAID. A few years ago I moved the rootfs to another device but forgot to remove the hook.

$ sed -e 's/^#.*//' -e '/^$/d' /etc/mkinitcpio.conf
MODULES=(ext4 dm_mod)
BINARIES=()
FILES=()
HOOKS=(base udev autodetect microcode modconf kms keyboard block mdadm_udev lvm2 filesystems resume_encswap fsck)

There is one custom hook 'resume_encswap' for resuming from dm-encrypted swap, but it has nothing to do with RAID.

Let me try to remove mdadm_udev and see what changes.

frostschutz wrote:

You could also, like, clone the metadata headers (grab first 64K of each drive, create loop or zram devices in a VM, assemble there) just to see if it also occurs during manual assembly, and if mdadm --assemble --verbose prints anything strange.

Good idea. Actually I can boot with a premount interactive shell and try to assemble the array manually. Or wait until an Arch ISO with a 6.19 kernel is released.


#6 2026-03-10 15:31:58

dimich
Member
From: Kharkiv, Ukraine
Registered: 2009-11-03
Posts: 549

Re: md_update_sb: can't update sb for read-only array with 6.19.6 kernel

dimich wrote:

Let me try to remove mdadm_udev and see what changes.

Well, without the mdadm_udev hook (and without lvm2), md127 became md0 and there is no "md_update_sb" message anymore:

$ LC_ALL=C journalctl -b --grep="md0"
Mar 10 17:12:40 dimich kernel: md0: echo current LBS to md/logical_block_size to prevent data loss issues from LBS changes.
                                       Note: After setting, array will not be assembled in old kernels (<= 6.18)
Mar 10 17:12:40 dimich kernel: md/raid1:md0: active with 2 out of 2 mirrors
Mar 10 17:12:40 dimich systemd[1]: Started Timer to wait for more drives before activating degraded array md0..
Mar 10 17:12:40 dimich kernel: md0: detected capacity change from 0 to 3906762880
Mar 10 17:12:41 dimich systemd[1]: mdadm-last-resort@md0.timer: Deactivated successfully.
Mar 10 17:12:41 dimich systemd[1]: Stopped Timer to wait for more drives before activating degraded array md0..
Mar 10 17:12:41 dimich mdadm[525]: mdadm: NewArray event detected on md device /dev/md0
Mar 10 17:12:42 dimich kernel: EXT4-fs (md0): mounted filesystem 85134262-01f5-44eb-a164-5e816995d8d5 r/w with ordered data mode. Quota mode: none.

Hm, I really don't like that RAID assembly is already being done through systemd and some timers.

