
#1 2017-03-19 13:15:26

grechk
Member
Registered: 2017-03-19
Posts: 11

[SOLVED] Raid 5 corrupted

Hello everyone, sorry for my English (Google Translate helps).
I have a serious problem to solve: a software RAID 5 built from 4 x 2TB and 2 x 4TB drives. I replaced the controller, and the 4TB drives were recognized as only 2.2TB; after updating the controller firmware they are finally recognized as 4TB again.
After doing this, the GPT partition tables of the 4TB drives were gone and the disks appeared blank.
I restored the partition tables with TestDisk, but I cannot assemble the array.

root@MS-7623:~# mdadm --verbose --assemble /dev/md1 --uuid=de63e8b0:3370b7da:40ac5b6b:2f5e5950
mdadm: looking for devices for /dev/md1
mdadm: no RAID superblock on /dev/sdf5
mdadm: no RAID superblock on /dev/sdf2
mdadm: /dev/sdf1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdf
mdadm: no RAID superblock on /dev/sde5
mdadm: no RAID superblock on /dev/sde2
mdadm: /dev/sde1 has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: no RAID superblock on /dev/sdg5
mdadm: no RAID superblock on /dev/sdg2
mdadm: /dev/sdg1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdg
mdadm: no RAID superblock on /dev/sdd5
mdadm: no RAID superblock on /dev/sdd2
mdadm: /dev/sdd1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: cannot open device /dev/sr0: No medium found
mdadm: no RAID superblock on /dev/sdc4
mdadm: no RAID superblock on /dev/sdc3
mdadm: /dev/sdc2 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc1
mdadm: no RAID superblock on /dev/sdc
mdadm: no RAID superblock on /dev/sdb3
mdadm: /dev/sdb2 has wrong uuid.
mdadm: no RAID superblock on /dev/sdb1
mdadm: no RAID superblock on /dev/sdb
mdadm: no RAID superblock on /dev/sda5
mdadm: no RAID superblock on /dev/sda4
mdadm: no RAID superblock on /dev/sda3
mdadm: no RAID superblock on /dev/sda2
mdadm: no RAID superblock on /dev/sda1
mdadm: no RAID superblock on /dev/sda
mdadm: /dev/sdf6 is identified as a member of /dev/md1, slot 2.
mdadm: /dev/sde6 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdg6 is identified as a member of /dev/md1, slot 3.
mdadm: /dev/sdd6 is identified as a member of /dev/md1, slot -1.
mdadm: /dev/sdc5 is identified as a member of /dev/md1, slot 4.
mdadm: /dev/sdb4 is identified as a member of /dev/md1, slot 0.
mdadm: added /dev/sde6 to /dev/md1 as 1
mdadm: added /dev/sdf6 to /dev/md1 as 2
mdadm: added /dev/sdg6 to /dev/md1 as 3
mdadm: failed to add /dev/sdc5 to /dev/md1: Invalid argument
mdadm: added /dev/sdd6 to /dev/md1 as -1
mdadm: failed to add /dev/sdb4 to /dev/md1: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
root@MS-7623:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Mon May  2 11:22:43 2011
     Raid Level : raid5
  Used Dev Size : 1946518528 (1856.34 GiB 1993.23 GB)
   Raid Devices : 5
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Mar 13 07:51:17 2017
          State : active, FAILED, Not Started 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : server0:1
           UUID : de63e8b0:3370b7da:40ac5b6b:2f5e5950
         Events : 574877

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       6       8       70        1      active sync   /dev/sde6
       7       8       86        2      active sync   /dev/sdf6
       4       8      102        3      active sync   /dev/sdg6
       4       0        0        4      removed

       5       8       54        -      spare   /dev/sdd6

Last edited by grechk (2017-03-24 21:16:06)


#2 2017-03-19 13:16:09

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

Other info:

root@MS-7623:~# mdadm --examine /dev/sdb4
/dev/sdb4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de63e8b0:3370b7da:40ac5b6b:2f5e5950
           Name : server0:1
  Creation Time : Mon May  2 11:22:43 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3893037056 (1856.34 GiB 1993.23 GB)
     Array Size : 7786074112 (7425.38 GiB 7972.94 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 84af0dbd:f7688332:6f6cd8b0:b1bb7c5c

    Update Time : Mon Mar 13 17:23:59 2017
       Checksum : b3fa334c - correct
         Events : 574877

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)
root@MS-7623:~# mdadm --examine /dev/sdc5
/dev/sdc5:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de63e8b0:3370b7da:40ac5b6b:2f5e5950
           Name : server0:1
  Creation Time : Mon May  2 11:22:43 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3893037056 (1856.34 GiB 1993.23 GB)
     Array Size : 7786074112 (7425.38 GiB 7972.94 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 4ca624f6:a7056c71:6f4c2154:00694d05

    Update Time : Mon Mar 13 17:23:59 2017
       Checksum : 78135412 - correct
         Events : 574877

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing)
root@MS-7623:~# mdadm --examine /dev/sdd6
/dev/sdd6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de63e8b0:3370b7da:40ac5b6b:2f5e5950
           Name : server0:1
  Creation Time : Mon May  2 11:22:43 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3893037056 (1856.34 GiB 1993.23 GB)
     Array Size : 7786074112 (7425.38 GiB 7972.94 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 8ad26f38:9ff47984:a239197b:ad9b53e6

    Update Time : Mon Mar 13 07:51:17 2017
       Checksum : d56a08eb - correct
         Events : 574877

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : spare
   Array State : AAAAA ('A' == active, '.' == missing)
root@MS-7623:~# mdadm --examine /dev/sde6
/dev/sde6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de63e8b0:3370b7da:40ac5b6b:2f5e5950
           Name : server0:1
  Creation Time : Mon May  2 11:22:43 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3893037056 (1856.34 GiB 1993.23 GB)
     Array Size : 7786074112 (7425.38 GiB 7972.94 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 72aa1ef6:9cc746b9:fab7cb3e:9e993938

    Update Time : Mon Mar 13 17:23:59 2017
       Checksum : dd76b60c - correct
         Events : 574877

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAAA ('A' == active, '.' == missing)
root@MS-7623:~# mdadm --examine /dev/sdf6
/dev/sdf6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de63e8b0:3370b7da:40ac5b6b:2f5e5950
           Name : server0:1
  Creation Time : Mon May  2 11:22:43 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3893037056 (1856.34 GiB 1993.23 GB)
     Array Size : 7786074112 (7425.38 GiB 7972.94 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 0b761720:aa4e41de:9ae95f61:d2b7fda4

    Update Time : Mon Mar 13 17:23:59 2017
       Checksum : bbca58d0 - correct
         Events : 574877

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAAA ('A' == active, '.' == missing)
root@MS-7623:~# mdadm --examine /dev/sdg6
/dev/sdg6:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de63e8b0:3370b7da:40ac5b6b:2f5e5950
           Name : server0:1
  Creation Time : Mon May  2 11:22:43 2011
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 3893037056 (1856.34 GiB 1993.23 GB)
     Array Size : 7786074112 (7425.38 GiB 7972.94 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : f621b605:1f2a55af:c562e8bc:9d664a95

    Update Time : Mon Mar 13 17:23:59 2017
       Checksum : be520823 - correct
         Events : 574877

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAAA ('A' == active, '.' == missing)

Please help me ... Thanks

Last edited by grechk (2017-03-19 13:16:43)


#3 2017-03-19 13:50:45

frostschutz
Member
Registered: 2013-11-15
Posts: 1,417

Re: [SOLVED] Raid 5 corrupted

From the metadata alone, it looks fine. Something else must be going on.

What does your /proc/mdstat look like? Also, your mdadm.conf? dmesg? blockdev --getsize64 for these devices?
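
In case it helps, a minimal sketch of gathering all of that in one go (standard commands; the /dev/sd{b..g} device list matches the drives that appear later in this thread and is an assumption for any other setup):

# Array and config state
cat /proc/mdstat
cat /etc/mdadm/mdadm.conf
# Kernel messages related to md/raid
dmesg | grep -iE 'md|raid'
# Raw sizes the kernel sees for each disk
for d in /dev/sd{b,c,d,e,f,g}; do echo -n "$d: "; blockdev --getsize64 "$d"; done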

Last edited by frostschutz (2017-03-19 13:51:32)


#4 2017-03-19 13:58:25

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

The original mdadm.conf is lost because it was on md0, which lives on the same hard disks as md1.
As soon as I am back at the computer, I will answer the other questions.

Last edited by grechk (2017-03-19 14:00:08)


#5 2017-03-19 17:04:17

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

mdadm --assemble /dev/md1
mdadm: failed to add /dev/sdc5 to /dev/md1: Invalid argument
mdadm: failed to add /dev/sdb4 to /dev/md1: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md1 : inactive sdd6[5](S) sdg6[4] sdf6[7] sde6[6]
      7786074112 blocks super 1.2
       
unused devices: <none>

The mdadm.conf is not the original one:

cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=18809096:864de900:d1b4709b:a70f671c name=server0:0
   spares=1
ARRAY /dev/md/0 metadata=1.2 UUID=122c0f1e:eb42374e:731efedb:0b7dc38f name=ubuntu-gnome:0
ARRAY /dev/md/1 metadata=1.2 UUID=de63e8b0:3370b7da:40ac5b6b:2f5e5950 name=server0:1
   spares=1

# This file was auto-generated on Sat, 18 Mar 2017 22:06:15 +0100
# by mkconf $Id$


#6 2017-03-19 17:07:25

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

dmesg:

[   23.787541] atl1c 0000:03:00.0: irq 43 for MSI/MSI-X
[   23.787628] atl1c 0000:03:00.0: atl1c: eth0 NIC Link is Up<1000 Mbps Full Duplex>
[   35.699803] audit_printk_skb: 15 callbacks suppressed
[   35.699807] type=1400 audit(1489871096.797:17): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/sbin/dhclient" pid=1038 comm="apparmor_parser"
[   35.699814] type=1400 audit(1489871096.797:18): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1038 comm="apparmor_parser"
[   35.699818] type=1400 audit(1489871096.797:19): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=1038 comm="apparmor_parser"
[   35.700235] type=1400 audit(1489871096.797:20): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1038 comm="apparmor_parser"
[   35.700238] type=1400 audit(1489871096.797:21): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=1038 comm="apparmor_parser"
[   35.700453] type=1400 audit(1489871096.797:22): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=1038 comm="apparmor_parser"
[   35.847957] type=1400 audit(1489871096.945:23): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/telepathy/mission-control-5" pid=1041 comm="apparmor_parser"
[   35.847965] type=1400 audit(1489871096.945:24): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/telepathy/telepathy-*" pid=1041 comm="apparmor_parser"
[   35.847969] type=1400 audit(1489871096.945:25): apparmor="STATUS" operation="profile_load" profile="unconfined" name="pxgsettings" pid=1041 comm="apparmor_parser"
[   35.847972] type=1400 audit(1489871096.945:26): apparmor="STATUS" operation="profile_load" profile="unconfined" name="sanitized_helper" pid=1041 comm="apparmor_parser"
[   36.756232] init: alsa-restore main process (1135) terminated with status 99
[   45.088081] init: plymouth-upstart-bridge main process ended, respawning
[   45.094845] init: plymouth-upstart-bridge main process (1290) terminated with status 1
[   45.094858] init: plymouth-upstart-bridge main process ended, respawning
[   49.648445] audit_printk_skb: 114 callbacks suppressed
[   49.648450] type=1400 audit(1489871110.764:65): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/cups/backend/cups-pdf" pid=1404 comm="apparmor_parser"
[   49.648456] type=1400 audit(1489871110.764:66): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/cupsd" pid=1404 comm="apparmor_parser"
[   49.648902] type=1400 audit(1489871110.768:67): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/cupsd" pid=1404 comm="apparmor_parser"
[   57.702846] type=1400 audit(1489871118.832:68): apparmor="DENIED" operation="open" profile="/usr/lib/telepathy/mission-control-5" name="/etc/dconf/profile/gdm" pid=1567 comm="mission-control" requested_mask="r" denied_mask="r" fsuid=115 ouid=0
[  120.406585] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
[  120.456297] JFS: nTxBlock = 8192, nTxLock = 65536
[  120.495012] NTFS driver 2.1.30 [Flags: R/O MODULE].
[  120.539423] QNX4 filesystem 0.2.3 registered.
[  120.604457] xor: measuring software checksum speed
[  120.642689]    prefetch64-sse:  3268.000 MB/sec
[  120.682627]    generic_sse:  3088.000 MB/sec
[  120.682633] xor: using function: prefetch64-sse (3268.000 MB/sec)
[  120.766539] raid6: sse2x1    2602 MB/s
[  120.834431] raid6: sse2x2    3966 MB/s
[  120.902347] raid6: sse2x4    4758 MB/s
[  120.902350] raid6: using algorithm sse2x4 (4758 MB/s)
[  120.902352] raid6: using intx1 recovery algorithm
[  120.970340] bio: create slab <bio-1> at 1
[  120.972566] Btrfs loaded
[  370.155695] md: md0 stopped.
[  370.162935] md: bind<sdd1>
[  474.947528] md: md1 stopped.
[  474.956106] md: bind<sde6>
[  474.956595] md: bind<sdf6>
[  474.957027] md: bind<sdg6>
[  474.957205] md: sdc5 does not have a valid v1.2 superblock, not importing!
[  474.957210] md: md_import_device returned -22
[  474.957681] md: bind<sdd6>
[  474.957839] md: sdb4 does not have a valid v1.2 superblock, not importing!
[  474.957844] md: md_import_device returned -22
[  474.985356] async_tx: api initialized (async)
[  474.996743] md: raid6 personality registered for level 6
[  474.996750] md: raid5 personality registered for level 5
[  474.996752] md: raid4 personality registered for level 4
[  474.997877] md/raid:md1: device sdg6 operational as raid disk 3
[  474.997888] md/raid:md1: device sdf6 operational as raid disk 2
[  474.997891] md/raid:md1: device sde6 operational as raid disk 1
[  474.998526] md/raid:md1: allocated 0kB
[  474.999254] md/raid:md1: not enough operational devices (2/5 failed)
[  474.999275] RAID conf printout:
[  474.999277]  --- level:5 rd:5 wd:3
[  474.999279]  disk 1, o:1, dev:sde6
[  474.999281]  disk 2, o:1, dev:sdf6
[  474.999282]  disk 3, o:1, dev:sdg6
[  474.999736] md/raid:md1: failed to run raid set.
[  474.999738] md: pers->run() failed ...
[  608.845248] md: md1 stopped.
[  608.845269] md: unbind<sdd6>
[  608.861328] md: export_rdev(sdd6)
[  608.861390] md: unbind<sdg6>
[  608.877290] md: export_rdev(sdg6)
[  608.877371] md: unbind<sdf6>
[  608.893268] md: export_rdev(sdf6)
[  608.893308] md: unbind<sde6>
[  608.905273] md: export_rdev(sde6)
[  611.940713] md: md1 stopped.
[  611.945374] md: bind<sde6>
[  611.945744] md: bind<sdf6>
[  611.945965] md: bind<sdg6>
[  611.946128] md: sdc5 does not have a valid v1.2 superblock, not importing!
[  611.946134] md: md_import_device returned -22
[  611.946346] md: bind<sdd6>
[  611.946504] md: sdb4 does not have a valid v1.2 superblock, not importing!
[  611.946509] md: md_import_device returned -22
[  611.990310] md/raid:md1: device sdg6 operational as raid disk 3
[  611.990316] md/raid:md1: device sdf6 operational as raid disk 2
[  611.990318] md/raid:md1: device sde6 operational as raid disk 1
[  611.990817] md/raid:md1: allocated 0kB
[  611.990984] md/raid:md1: not enough operational devices (2/5 failed)
[  611.991816] RAID conf printout:
[  611.991824]  --- level:5 rd:5 wd:3
[  611.991826]  disk 1, o:1, dev:sde6
[  611.991828]  disk 2, o:1, dev:sdf6
[  611.991830]  disk 3, o:1, dev:sdg6
[  611.992294] md/raid:md1: failed to run raid set.
[  611.992296] md: pers->run() failed ...
[ 1125.399016] md: md1 stopped.
[ 1125.399037] md: unbind<sdd6>
[ 1125.401536] md: export_rdev(sdd6)
[ 1125.401629] md: unbind<sdg6>
[ 1125.413435] md: export_rdev(sdg6)
[ 1125.413481] md: unbind<sdf6>
[ 1125.421472] md: export_rdev(sdf6)
[ 1125.421497] md: unbind<sde6>
[ 1125.433449] md: export_rdev(sde6)
[ 4010.146451] md: md0 stopped.
[ 4010.146463] md: unbind<sdd1>
[ 4010.160095] md: export_rdev(sdd1)
[ 4484.621719] md: md0 stopped.
[ 4484.807186] md: bind<sde1>
[ 4484.807904] md: bind<sdf1>
[ 4484.808446] md: bind<sdg1>
[ 4484.861407] md: bind<sdc2>
[ 4484.861759] md: bind<sdb2>
[ 4484.879015] md/raid:md0: device sdb2 operational as raid disk 0
[ 4484.879021] md/raid:md0: device sdc2 operational as raid disk 4
[ 4484.879023] md/raid:md0: device sdg1 operational as raid disk 3
[ 4484.879026] md/raid:md0: device sdf1 operational as raid disk 2
[ 4484.879028] md/raid:md0: device sde1 operational as raid disk 1
[ 4484.879654] md/raid:md0: allocated 0kB
[ 4484.879858] md/raid:md0: raid level 5 active with 5 out of 5 devices, algorithm 2
[ 4484.879861] RAID conf printout:
[ 4484.879862]  --- level:5 rd:5 wd:5
[ 4484.879864]  disk 0, o:1, dev:sdb2
[ 4484.879866]  disk 1, o:1, dev:sde1
[ 4484.879868]  disk 2, o:1, dev:sdf1
[ 4484.879869]  disk 3, o:1, dev:sdg1
[ 4484.879871]  disk 4, o:1, dev:sdc2
[ 4484.879907] md0: detected capacity change from 0 to 19973275648
[ 4484.889779]  md0: unknown partition table
[ 5399.262786] md0: detected capacity change from 19973275648 to 0
[ 5399.262800] md: md0 stopped.
[ 5399.262811] md: unbind<sdb2>
[ 5399.275211] md: export_rdev(sdb2)
[ 5399.365420] md: unbind<sdc2>
[ 5399.375089] md: export_rdev(sdc2)
[ 5399.375121] md: unbind<sdg1>
[ 5399.383105] md: export_rdev(sdg1)
[ 5399.509496] md: unbind<sdf1>
[ 5399.518975] md: export_rdev(sdf1)
[ 5399.645197] md: unbind<sde1>
[ 5399.654866] md: export_rdev(sde1)
[35503.755360] type=1400 audit(1489906598.227:69): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/cups/backend/cups-pdf" pid=14402 comm="apparmor_parser"
[35503.755372] type=1400 audit(1489906598.227:70): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/cupsd" pid=14402 comm="apparmor_parser"
[35503.756046] type=1400 audit(1489906598.227:71): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/sbin/cupsd" pid=14402 comm="apparmor_parser"
[58126.595958] mptbase: ioc0: LogInfo(0x31123000): Originator={PL}, Code={Abort}, SubCode(0x3000) cb_idx mptbase_reply
[58127.130724] mptbase: ioc0: LogInfo(0x31123000): Originator={PL}, Code={Abort}, SubCode(0x3000) cb_idx mptscsih_io_done
[58127.130745] mptbase: ioc0: LogInfo(0x31123000): Originator={PL}, Code={Abort}, SubCode(0x3000) cb_idx mptscsih_io_done
[71757.996733] md: md1 stopped.
[71758.000642] md: bind<sde6>
[71758.001095] md: bind<sdf6>
[71758.001555] md: bind<sdg6>
[71758.001947] md: sdc5 does not have a valid v1.2 superblock, not importing!
[71758.001977] md: md_import_device returned -22
[71758.002482] md: bind<sdd6>
[71758.002851] md: sdb4 does not have a valid v1.2 superblock, not importing!
[71758.002871] md: md_import_device returned -22
[71758.695610] md/raid:md1: device sdg6 operational as raid disk 3
[71758.695621] md/raid:md1: device sdf6 operational as raid disk 2
[71758.695627] md/raid:md1: device sde6 operational as raid disk 1
[71758.696752] md/raid:md1: allocated 0kB
[71758.696825] md/raid:md1: not enough operational devices (2/5 failed)
[71758.696845] RAID conf printout:
[71758.696848]  --- level:5 rd:5 wd:3
[71758.696853]  disk 1, o:1, dev:sde6
[71758.696856]  disk 2, o:1, dev:sdf6
[71758.696860]  disk 3, o:1, dev:sdg6
[71758.697565] md/raid:md1: failed to run raid set.
[71758.697569] md: pers->run() failed ...
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdd
2000398934016
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sde
2000398934016
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdf
2000398934016
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdg
2000398934016
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdb
4000787030016
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdc
4000787030016
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdd6
1993236021248
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sde6
1993236021248
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdf6
1993236021248
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdg6
1993236021248
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdb4
1993234976768
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdc5
1993234976768

Last edited by grechk (2017-03-19 17:13:19)


#7 2017-03-21 16:07:39

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

TestDisk deeper search:

TestDisk 6.14, Data Recovery Utility, July 2013
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org

Disk /dev/sdc - 4000 GB / 3726 GiB - CHS 486401 255 63

The harddisk (4000 GB / 3726 GiB) seems too small! (< 14968622 TB / 13613882 TiB)
Check the harddisk size: HD jumpers settings, BIOS detection...

The following partitions can't be recovered:
     Partition               Start        End    Size in sectors
>  MS Data                100634622 12749246461 12648611840 [multimedia]
   MS Data                100634624 12749246463 12648611840 [multimedia]
   MS Data                104566782 12753178621 12648611840 [multimedia]
   MS Data                104566784 12753178623 12648611840 [multimedia]
   MS Data                838276048 13486887887 12648611840 [multimedia]
   MS Data               2355582242 29235589848940926 29235587493358684 [~AU ~[P^EM-'ֲHM-8]





[ Continue ]
ext3 blocksize=4096 Large file Sparse superblock Backup superblock, 6476 GB / 6031 GiB


#8 2017-03-22 08:58:46

dminca
Member
From: Bucharest, RO
Registered: 2017-03-05
Posts: 6

Re: [SOLVED] Raid 5 corrupted

@grechk, a quick DuckDuckGo search turned up this post on superuser. You could try:

mdadm --stop /dev/md0
mdadm --assemble --scan

If that doesn't work, all that's left is to troubleshoot by following the steps from our wiki.

Last edited by dminca (2017-03-22 08:59:16)


#9 2017-03-22 09:40:57

frostschutz
Member
Registered: 2013-11-15
Posts: 1,417

Re: [SOLVED] Raid 5 corrupted

[71758.001947] md: sdc5 does not have a valid v1.2 superblock, not importing!
[71758.002851] md: sdb4 does not have a valid v1.2 superblock, not importing!

That appears to be the key problem here.

root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdf6
1993236021248
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdg6
1993236021248
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdb4
1993234976768
root@MS-7623:~/Scaricati# blockdev --getsize64 /dev/sdc5
1993234976768

How come sdb4, sdc5 are smaller than the others by 1044480 bytes?

Compare the partitioning with `parted /dev/disk unit s print free`.

If there is free space, use parted's resizepart to extend those two partitions to the correct size.
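
A sketch of what that could look like, using the partition numbers from this thread (the <end_sector> value is a placeholder to be read off the print output, not a real number):

# Show the layout in sectors, including free space
parted /dev/sdb unit s print free
parted /dev/sdc unit s print free
# If free space follows the member partition, grow it to match the 2TB members
parted /dev/sdb resizepart 4 <end_sector>s
parted /dev/sdc resizepart 5 <end_sector>s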

Last edited by frostschutz (2017-03-22 09:42:23)


#10 2017-03-23 06:58:08

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

I noticed the difference in size. I used TestDisk on both 4TB disks; could it have detected them wrongly? I'm now cloning 4 of the 5 hard drives so that I can run even destructive tests. Any ideas on what to do?
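
For the cloning step, GNU ddrescue is one common choice; the thread doesn't say which tool was actually used, and the target device below is a placeholder:

# Clone a member disk to a spare of at least the same size before any destructive test.
# /dev/sdX is the spare disk (placeholder); the map file lets an interrupted clone resume.
ddrescue -f -n /dev/sdb /dev/sdX /root/sdb_clone.map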


#11 2017-03-23 10:07:00

frostschutz
Member
Registered: 2013-11-15
Posts: 1,417

Re: [SOLVED] Raid 5 corrupted

grechk wrote:

Ideas on what to do?

Just make the partitions the correct size? If you're not familiar with partitioning, show the parted output (the command is in the previous post).


#12 2017-03-23 10:19:06

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

Fixing the partitions is not a problem, but I was wondering whether TestDisk might have detected the wrong partitions.


#13 2017-03-23 11:08:34

frostschutz
Member
Registered: 2013-11-15
Posts: 1,417

Re: [SOLVED] Raid 5 corrupted

Well, TestDisk is just guessing, so it can be wrong. And it doesn't have an easy job: the start of a partition is usually easy to locate (that's where most things put their metadata magic headers), but the exact size is much harder to determine (with LUKS, for example, there is no indication of the size whatsoever). On top of that, there may be stale headers and remnants of previous partitions floating around.

Fixing the partitions should make the RAID work again. If you want to investigate why TestDisk didn't get it right (maybe it is a bug, or an improvement is possible, since the MD header essentially knows the size), you could try to reproduce the issue and open a bug report with TestDisk.
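
As a sanity check, the size the member partitions need can be read off the mdadm --examine output earlier in the thread (Data Offset plus Avail Dev Size, both in 512-byte sectors):

# (2048 + 3893037056) sectors * 512 bytes/sector
echo $(( (2048 + 3893037056) * 512 ))   # 1993236021248 bytes, matching sdd6/sde6/sdf6/sdg6
# sdb4 and sdc5 report 1993234976768 bytes, i.e. 1044480 bytes (2040 sectors) too small.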


#14 2017-03-23 11:22:53

mich41
Member
Registered: 2012-06-22
Posts: 796

Re: [SOLVED] Raid 5 corrupted

Attach these 4TB disks to a known-good controller and see whether they show the same capacity there as on this controller. The fact that they aren't truncated to 2.2TB doesn't mean they aren't truncated at all.


#15 2017-03-23 11:29:53

frostschutz
Member
Registered: 2013-11-15
Posts: 1,417

Re: [SOLVED] Raid 5 corrupted

mich41 wrote:

Attach these 4TB disks to a known-good controller and see whether they show the same capacity there as on this controller. The fact that they aren't truncated to 2.2TB doesn't mean they aren't truncated at all.

smartctl or hdparm should report the capacity the disks themselves believe they have, and you can compare that with `blockdev --getsize64 /dev/disk`; no need to go plugging and unplugging disks.

If the disks aren't in some USB enclosure or behind a fake-RAID controller, you're probably safe. USB bridges are known to do strange things (such as reporting a disk as 4K-sector when it really uses 512-byte sectors, or the other way around). RAID controllers sometimes eat a few sectors for their own metadata, even when they are supposed to just pass the disks through.
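
A quick way to cross-check without moving any hardware (standard smartctl/hdparm options; the device name is just one of the 4TB disks from this thread):

smartctl -i /dev/sdb           # "User Capacity" as reported by the drive itself
hdparm -N /dev/sdb             # native max sectors; reveals a Host Protected Area if one is set
blockdev --getsize64 /dev/sdb  # what the kernel currently sees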


#16 2017-03-23 12:01:45

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

@frostschutz: I say that because maybe I made a mistake when I created the partitions. The hard drives were new and had no previous partitions. In any case, I have just finished cloning the hard disks and will try to resize the partitions.


#17 2017-03-24 21:13:17

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

I did it: I corrected the partitions and started the degraded array. Now I'm transferring all the files to a backup drive, and then I will restore the operating system.
Thanks, thanks, thanks!

Last edited by grechk (2017-03-24 22:31:43)


#18 2017-03-24 22:45:11

dminca
Member
From: Bucharest, RO
Registered: 2017-03-05
Posts: 6

Re: [SOLVED] Raid 5 corrupted

OK, and how did you manage to solve the issue?


#19 2017-03-24 22:53:51

grechk
Member
Registered: 2017-03-19
Posts: 11

Re: [SOLVED] Raid 5 corrupted

I deleted the partitions on the 4TB hard disks and recreated them with the same size as those on the 2TB disks. TestDisk had recovered partitions, but they were incorrect. At that point I started the degraded array, and now I'm copying the data.
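
For anyone landing here later, a rough outline of the fix as described above (the exact mdadm invocation isn't given in the thread, so the options below are an assumption; device names are the ones from this thread):

# 1. Recreate sdb4 and sdc5 with the same size as the members on the 2TB disks
#    (see the parted example earlier in the thread).
# 2. Stop any half-assembled array and reassemble it; --force may be needed if
#    the event counts have drifted apart.
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sdb4 /dev/sde6 /dev/sdf6 /dev/sdg6 /dev/sdc5 /dev/sdd6
# 3. Confirm it is running (possibly degraded) and copy the data off before anything else.
cat /proc/mdstat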

