I partitioned the drives: the first partition as swap, the second for later use as a RAID1 /boot, and the third (nearly 1TB) partition as a RAID5 member holding the LVM volumes.
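For concreteness, the layout was along these lines. This sfdisk sketch uses modern syntax and illustrative sizes, not my exact original values:

    # Hypothetical sfdisk script (sizes illustrative, not the originals).
    # sdb1: swap, sdb2: future RAID1 /boot, sdb3: RAID5 member for LVM.
    sfdisk /dev/sdb <<'EOF'
    label: dos
    ,4GiB,S
    ,512MiB,fd
    ,,fd
    EOF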
Unfortunately, Linux (Ubuntu 10.04.3) would no longer boot properly with this setup. Instead, it would get stuck at the initramfs prompt, because the LVM root partition, /dev/alpha/root, was not present.
I learned that it was possible to finish booting when I manually stopped and reassembled the RAID arrays (mdadm --stop --scan; mdadm --assemble --scan). By doing this, my LVM volumes would become available.
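For the record, the whole recovery sequence at the (initramfs) prompt looks roughly like this; the explicit vgchange is belt-and-suspenders, since in my case the volumes appeared after reassembly alone:

    # Tear down the misassembled array(s), then rescan and reassemble
    # from the correct component devices:
    mdadm --stop --scan
    mdadm --assemble --scan
    # If the volume group doesn't appear on its own, activate it explicitly
    # (in the busybox initramfs this may need to be spelled "lvm vgchange"):
    vgchange -ay alpha
    # resume the normal boot:
    exit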
For some reason, I initially assumed this was a race condition between LVM and RAID (after all, /dev/md0 was present, so it seemed RAID had come up and LVM simply hadn't noticed yet), and looked for ways to defer LVM activation until later in boot. In retrospect, this theory made no sense, but it sent me on a wild goose chase.
Much later, after an inconvenient series of events involving international phone-tag, a friend was able to issue the necessary magic commands (which I had not even written down; and my formulation was worse, since as I recall it specified the drives manually instead of using --scan), and the system was up and running again after the unexpected power failure. But this underscored the need to make the system rebootable without manual intervention!
Finally, I spotted the significant detail in the logs. When misdetecting the array, the kernel logged "raid5: device sdb operational as raid disk 0". When correctly detecting the array, the kernel logged "raid5: device sdb3 operational as raid disk 0".
For a number of good reasons, Linux mdadm puts the "raid superblock" at the end of the disk or partition (true of the 0.90 and 1.0 metadata formats). But since my last partition runs to the end of the drive, its superblock is at the end of the last partition AND at the end of the disk-as-a-whole. For whatever reason, the very earliest detection snatches the whole disk, while a later detection finds the partitions instead.
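The ambiguity is easy to see with mdadm --examine: since the (presumably 0.90-format) superblock sits at the end of the device, both of these report valid metadata for the same array:

    # The superblock at the end of sdb3 is also at the end of sdb itself,
    # so both scans find "valid" RAID metadata:
    mdadm --examine /dev/sdb
    mdadm --examine /dev/sdb3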
I experimented with kernel boot options like raid=noautodetect and md=0=/dev/sdb3,… (anything happening at a timestamp of 3 seconds into boot seemed likely to be kernel autodetection), but they had no effect; possibly those options only matter when mdadm --auto-detect is run. I did end up with the option scsi_mod.scan=sync, which may or may not be relevant (hopefully it fixes a slightly related problem in which the /dev/sdX names are not stable from boot to boot).
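(For reference: on Ubuntu 10.04, kernel options like these go on the GRUB 2 command line. A sketch, assuming the stock /etc/default/grub mechanism:)

    # /etc/default/grub -- add the option to the kernel command line:
    GRUB_CMDLINE_LINUX="scsi_mod.scan=sync"

    # then regenerate /boot/grub/grub.cfg:
    sudo update-grub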
But the real final solution was this: In /etc/mdadm/mdadm.conf, disable scanning of whole disks for RAID volumes with the line
    DEVICE /dev/sd??

After making this change, it's necessary to update the initramfs.
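In full, with a note on what the glob matches, plus the Ubuntu command to rebuild the initramfs (update-initramfs -u refreshes the current kernel's image; -k all would cover every installed kernel):

    # /etc/mdadm/mdadm.conf: the glob matches partitions like /dev/sdb3
    # (drive letter plus partition number) but not whole disks like /dev/sdb.
    DEVICE /dev/sd??

    # Rebuild the initramfs so the boot-time mdadm sees the new DEVICE line:
    sudo update-initramfs -u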
Now my system reboots properly after lost power is restored. Hooray.
Entry first conceived on 13 August 2011, 17:58 UTC, last modified on 15 January 2012, 3:46 UTC