I thought I would document this for reference, for myself and anyone else who might benefit. I recently upgraded my main workstation, replacing my 1.2TB LVM made up of five 250GB drives with a 2TB raid array of five 500GB drives. The additional challenges were that I would have to transfer the data off of the old LVM onto the new raid, and I would have to do it over the network, as neither motherboard has 10 SATA connections. To further complicate things, I wanted to use one of the current LVM drives as my new system drive. Here is the process.
After setting up the hardware for both systems, I booted the new raid system with a Fedora Core 7 DVD and entered rescue mode by entering linux rescue at the ISOLINUX prompt. Normally, I would set up the raid during the Fedora installation itself. However, I wanted to use one of the LVM members for the new installation, and I still needed to pull the data off of it first.
Once in rescue mode, the first thing to do was to partition each raid member with one big Linux Raid Autodetect (type fd) partition; I sketch that step below. With the partitions in place, I could create a level 5 raid with this command:
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[abcde]1
When I ran this command the first time, mdadm told me that /dev/sda1 was too small to be part of the array. For some reason, the kernel had picked up the partition table changes on the other drives, but not on the first one. I rebooted the system, as partprobe is not available in rescue mode, and then the command worked like a charm.
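The write-up above glosses over the actual partitioning commands, so here is a hedged sketch. One way to do it non-interactively is with sfdisk; this assumes each disk is blank and should become a single whole-disk partition of type fd (fdisk works just as well interactively):

for d in a b c d e; do echo ',,fd' | sfdisk /dev/sd$d; done

On a normally booted system, partprobe would then ask the kernel to re-read the new partition tables; that is the step I had to substitute a reboot for in rescue mode.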
The next step was to put a file system on the new raid device. I used XFS because the files that would live on this file system would be very large, ranging from about 100MB to several gigabytes. I formatted /dev/md0 with this command, which is the same way you would format any other drive with XFS:
mkfs.xfs /dev/md0
Next, I mounted the raid, set up an NFS export on the old system, and mirrored about a terabyte of data. This took about 16 hours over gigabit ethernet.
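I did not record the exact commands for this part, so here is a hedged sketch of the shape of it. The mount points, the exported path, and the old machine's hostname (oldbox) are placeholders of mine, not the real ones. On the new system, still in rescue mode, mount the fresh raid:

mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

On the old system, export the LVM's mount point by adding a line like this to /etc/exports and re-exporting:

/data 192.168.1.0/24(ro,no_root_squash,sync)
exportfs -ra

Back on the new system, mount the export and mirror everything over (cp -a works too if rsync is not in the rescue environment):

mkdir -p /mnt/old
mount -t nfs oldbox:/data /mnt/old
rsync -a /mnt/old/ /mnt/raid/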
The next day, after the copy had completed, I installed Fedora Core 7 on one of the old members of the LVM. After installing, I would then configure the raid as my /home. I had to assemble the raid before I could use it. You could use this method to assemble a raid moved over from another system, as well as in a situation like mine.
Before I could assemble it, I needed the raid's UUID. This is easy to get. All you need to know is the device file of one of the raid members, the first of which on my system is /dev/sdb1, and you can ask mdadm:
mdadm --examine /dev/sdb1
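Since --examine prints quite a lot, a quick way to pull out just the UUID line (a small convenience of my own, not part of the original steps) is to pipe it through grep:

mdadm --examine /dev/sdb1 | grep -i uuid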
In the cascade of output, you will see the UUID presented as a string of hexadecimal characters. Now, we run this command to assemble the raid device:
mdadm --assemble /dev/md0 --uuid=PUT_UUID_HERE /dev/sd[bcdef]1
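Before going any further, it is worth confirming that the array came up with all five members. This check is my addition, but both commands are standard:

cat /proc/mdstat
mdadm --detail /dev/md0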
The raid device is now ready to mount. Next, I want this raid device to be assembled automatically at every boot. I do this by creating /etc/mdadm.conf and inserting this:
DEVICE /dev/sd[bcdef]1
ARRAY /dev/md0 UUID=PUT_UUID_HERE
MAILADDR root@localhost
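If you would rather not copy the UUID by hand, mdadm can generate the ARRAY line for you; print it and paste it into /etc/mdadm.conf, checking that the device names match what you expect:

mdadm --examine --scan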
That's it. Remember to update your /etc/fstab if you want your raid to be mounted at every boot.
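For completeness, the fstab entry for mounting the raid as /home would look something like this; the options shown are just the defaults, so adjust to taste:

/dev/md0   /home   xfs   defaults   0 0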