====== Easy to expand Linux software RAID 5 with XFS ======

Source: [[https://serverfault.com/questions/114445/easy-to-expand-linux-software-raid-5-with-xfs-best-practices|Easy to expand Linux software RAID 5 with XFS. Best practices?]]

===== Creating the initial 3-drive array =====

Create a RAID5 array:

<code>mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1</code>

The RAID5 array is now created and starts building immediately. The build takes time, but you can already proceed with creating a new LVM2 physical volume:

<code>pvcreate /dev/md0</code>

Next, create a new volume group:

<code>vgcreate vd_raid /dev/md0</code>

Then create a new logical volume inside that volume group. First, determine the exact size of the volume group:

<code>vgdisplay vd_raid</code>

The size can be read from the "Total PE" row, which reports it in physical extents. Suppose it is 509. Now create a new logical volume that takes all available space:

<code>lvcreate -l 509 vd_raid -n lv_raid</code>

Finally, create a file system on top of that logical volume:

<code>mkfs.xfs /dev/mapper/vd_raid-lv_raid</code>

To use the newly created RAID array, create a mount point and mount it:

<code>mkdir -p /mnt/raid</code>
<code>mount /dev/mapper/vd_raid-lv_raid /mnt/raid</code>

The array is now ready to use. For it to mount automatically after a reboot, save the RAID geometry to mdadm's configuration file:

<code>mdadm --detail --scan >> /etc/mdadm/mdadm.conf</code>

Then add the following line to /etc/fstab so the RAID array is mounted automatically:

<code>/dev/mapper/vd_raid-lv_raid /mnt/raid auto auto,noatime,nodiratime,logbufs=8 0 1</code>

The RAID array is now ready to use and is mounted automatically at /mnt/raid after every boot.
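Rather than reading "Total PE" off the screen and typing the number into ''lvcreate'' by hand, the extent count can be extracted in a script. A minimal sketch, assuming the standard ''vgdisplay'' output format (the echoed line stands in for real ''vgdisplay vd_raid'' output, so the snippet runs without an actual volume group):

```shell
# Extract the extent count from the "Total PE" row of vgdisplay output.
# In a real script: total_pe=$(vgdisplay vd_raid | awk '/Total PE/ {print $3}')
total_pe=$(echo "  Total PE              509" | awk '/Total PE/ {print $3}')
echo "$total_pe"   # → 509

# The count can then be fed straight to lvcreate:
# lvcreate -l "$total_pe" vd_raid -n lv_raid
```

Recent LVM versions also accept ''lvcreate -l 100%FREE'', which fills the volume group without any extent arithmetic.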
====== Debian note ======

On Debian, the initramfs also needs to be updated so the array is assembled at boot:

<code>update-initramfs -u</code>

===== Adding a new drive to the array =====

Suppose you now have a new drive, /dev/sde, which you want to add to the previously created array without losing any data. Add it to the RAID array:

<code>mdadm --add /dev/md0 /dev/sde1</code>

The RAID5 array now includes four drives, of which only three are currently in use. Grow the array to include all four:

<code>mdadm --grow /dev/md0 --raid-devices=4</code>

Then resize the LVM2 physical volume:

<code>pvresize /dev/md0</code>

By default the physical volume is resized to cover all available space in the RAID array. Find out the new size in physical extents:

<code>vgdisplay vd_raid</code>

Suppose the new size is 764 (again read from "Total PE"). Extend the logical volume to cover it:

<code>lvextend /dev/mapper/vd_raid-lv_raid -l 764</code>

Then grow the XFS file system. This must be done while the file system is online and mounted:

<code>xfs_growfs /mnt/raid</code>

By default it is grown to cover all available space. Finally, the saved RAID array geometry needs to be updated, because the array now includes a new disk. First delete the previously added ARRAY line from /etc/mdadm/mdadm.conf, then append a new one:

<code>mdadm --detail --scan >> /etc/mdadm/mdadm.conf</code>
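The "delete the old line, append a new one" step can be done with ''sed'' instead of a text editor. A minimal sketch, using a scratch file in place of the real /etc/mdadm/mdadm.conf so it can be tried safely (the sample ARRAY line and its UUID are made up for illustration):

```shell
# Scratch copy standing in for /etc/mdadm/mdadm.conf
conf=mdadm.conf.test
printf 'DEVICE partitions\nARRAY /dev/md0 metadata=1.2 UUID=old\n' > "$conf"

# Drop the stale ARRAY line for /dev/md0 ...
sed -i '/^ARRAY \/dev\/md0/d' "$conf"
cat "$conf"   # → DEVICE partitions

# ... then re-append the current geometry (needs the real array, so shown commented out):
# mdadm --detail --scan >> "$conf"
```

Deleting only the matching ARRAY line, rather than truncating the file, keeps any DEVICE lines and other settings intact.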