First, create a RAID5 array:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
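The initial sync starts immediately, and its progress can be watched at any time; a quick check, assuming the array came up as /dev/md0:

```shell
# Overall array state and rebuild progress:
cat /proc/mdstat
# Extract just the completion percentage from the recovery line:
awk -F'[ =]+' '/recovery/ {for (i = 1; i <= NF; i++) if ($i ~ /%/) print $i}' /proc/mdstat
```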
The array is now created and begins its initial sync in the background. This takes a while, but you can proceed immediately with creating a new LVM2 physical volume:
pvcreate /dev/md0
Now let’s create a new volume group:
vgcreate vd_raid /dev/md0
Then create a new logical volume inside that volume group. First, find the exact size of the volume group:
vgdisplay vd_raid
The size, in physical extents, is shown on the “Total PE” row. Let’s imagine it is 509. Now create a new logical volume that takes all available space:
lvcreate -l 509 vd_raid -n lv_raid
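Reading “Total PE” by eye can be scripted away; a sketch that pulls the extent count out of vgdisplay, or lets LVM compute it with 100%FREE:

```shell
# Grab the extent count from vgdisplay and feed it to lvcreate:
PE=$(vgdisplay vd_raid | awk '/Total PE/ {print $3}')
lvcreate -l "$PE" -n lv_raid vd_raid
# Or skip the lookup entirely:
# lvcreate -l 100%FREE -n lv_raid vd_raid
```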
Finally we can create a file system on top of that logical volume:
mkfs.xfs /dev/mapper/vd_raid-lv_raid
To be able to use our newly created RAID array, we need to create a directory and mount it:
mkdir -p /mnt/raid
mount /dev/mapper/vd_raid-lv_raid /mnt/raid
Now it is ready to use. But for the array to be assembled automatically after a reboot, we need to save its geometry to mdadm’s configuration file:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
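It is worth verifying what was appended. On Debian/Ubuntu the initramfs should also be refreshed so the array is assembled early at boot (the file path and command differ on other distributions):

```shell
# Show the ARRAY line that was just appended:
tail -n 1 /etc/mdadm/mdadm.conf
# Debian/Ubuntu: rebuild the initramfs so the array assembles at boot:
update-initramfs -u
```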
Then add the following line to /etc/fstab so the array is mounted automatically (the fsck pass field is 0, since XFS does not use boot-time fsck):
/dev/mapper/vd_raid-lv_raid /mnt/raid xfs defaults,noatime,nodiratime,logbufs=8 0 0
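Before rebooting, the entry can be sanity-checked: each non-comment fstab line should have exactly six whitespace-separated fields, and `mount -a` will mount anything listed but not yet mounted:

```shell
# Flag any fstab line that does not have six fields:
awk 'NF && $1 !~ /^#/ && NF != 6 {print "bad line: " $0; bad = 1} END {exit bad}' /etc/fstab
# Mount everything in fstab that is not already mounted:
mount -a
```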
Now the RAID array is ready to use, and it is mounted automatically at /mnt/raid after every boot.
Let’s imagine that you now have a new drive, /dev/sde, which you want to add to the previously created array without losing any data.
Add to the RAID array:
mdadm --add /dev/md0 /dev/sde1
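Note that the command adds /dev/sde1, a partition, so the new disk must be partitioned first. One common approach (assuming the layout should be copied verbatim from existing member /dev/sdb) is:

```shell
# Copy the partition table of an existing member onto the new disk:
sfdisk -d /dev/sdb | sfdisk /dev/sde
# After the add, the new partition should appear (initially as a spare):
mdadm --detail /dev/md0 | grep sde1
```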
Now the RAID5 array includes four drives, of which only three are currently in use. The array needs to be expanded to include all four:
mdadm --grow /dev/md0 --raid-devices=4
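The reshape runs in the background and can take hours; it should be allowed to finish before touching LVM. `mdadm --wait` blocks until any resync or reshape completes:

```shell
# Block until the reshape is done; then it is safe to resize LVM:
mdadm --wait /dev/md0
# A reshape in progress is also visible in /proc/mdstat:
grep -q reshape /proc/mdstat && echo "still reshaping" || echo "no reshape running"
```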
Once the reshape has finished, the LVM2 physical volume needs to be expanded:
pvresize /dev/md0
By default, pvresize grows the physical volume to cover all available space in the RAID array. Next, find out the new size in physical extents:
vgdisplay vd_raid
Let’s imagine that the new size is 764 (again shown on the “Total PE” row). Now expand the logical volume to cover it:
lvextend /dev/mapper/vd_raid-lv_raid -l 764
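The same growth can be expressed as a delta: the group grew from 509 to 764 extents, so 255 extents were gained, and lvextend also accepts a relative size (or +100%FREE to take whatever is free):

```shell
OLD_PE=509
NEW_PE=764
ADD=$((NEW_PE - OLD_PE))   # 255 extents gained by the fourth disk
lvextend -l +"$ADD" /dev/mapper/vd_raid-lv_raid
# Equivalent shortcut:
# lvextend -l +100%FREE /dev/mapper/vd_raid-lv_raid
```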
Then expand the XFS file system. This must be done while the file system is online and mounted:
xfs_growfs /mnt/raid
By default it grows to cover all available space. Finally, the RAID array geometry needs to be updated, because the array now includes a new disk. First delete the previously added line from /etc/mdadm/mdadm.conf, then append the new one:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
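The stale line can also be removed non-interactively; a sketch using sed, assuming the configuration lives at /etc/mdadm/mdadm.conf:

```shell
# Delete the old ARRAY line for md0, then append the fresh geometry:
sed -i '/^ARRAY \/dev\/md0/d' /etc/mdadm/mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```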