I've been using mdadm to learn about software RAID configurations on Linux, and I find it a great tool to play around with and get some hands-on experience. How I wish we were taught using these tools in college, rather than just being bombarded by all possible combinations of RAID.
Having the ability to set up and tear down VMs at the snap of a finger makes me think how fortunate CS students are nowadays. We had it tough. At the risk of digressing, I sound like one of the Yorkshiremen from the classic Monty Python sketch - The Four Yorkshiremen
Statistics have shown that 85% of the audience, after clicking the above link, have spent at least 45 minutes visiting the various sketches of Monty Python, including, but not limited to, Woody and tinny words, the Argument Clinic and others.
And.... we are back. :)
This tutorial gives a gentle introduction to RAID, builds on the concepts, and walks the reader through the mdadm tool. Since that HOWTO already covers the details, I will not repeat them here. The aim of this post is to describe the prerequisite setup for building RAID arrays.
I am using VirtualBox with a VM running Ubuntu 10.10. I have installed the mdadm tool (the installation steps are mentioned in the tutorial page).
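For reference, the tool comes straight from the standard Ubuntu repositories, so the install is a one-liner (run as root):

# apt-get install mdadm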
As you are going to play with different RAID configurations, you will need to add multiple disks. In VirtualBox, go to the main window, right-click on the VM, open Settings and then Storage. There, add 2 more SATA disks of 1 GB each - I used this value because I am going to split each disk into 2 partitions of 512 MB each. If no other disks had been added to your system earlier, these disks will show up as /dev/sdb and /dev/sdc on your Linux box.
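If you prefer the command line over the GUI, the same disks can be created and attached from the host using VBoxManage. This is only a sketch - the VM name and the controller name "SATA Controller" below are assumptions, so check yours with VBoxManage showvminfo first:

$ VBoxManage createhd --filename raid-disk1.vdi --size 1024
$ VBoxManage createhd --filename raid-disk2.vdi --size 1024
$ VBoxManage storageattach "vm1-ubuntu" --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium raid-disk1.vdi
$ VBoxManage storageattach "vm1-ubuntu" --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium raid-disk2.vdi

The --size value is in MB, and the port numbers just need to be SATA ports that are not already in use.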
Boot up your VM. Become root. Execute the following command:
# fdisk -l

You will see a partition table for /dev/sda - containing /dev/sda1, /dev/sda2, etc. - this is the first disk, on which your OS is installed. We are not going to touch this disk.
We are going to work with /dev/sdb and /dev/sdc - the 2 new disks that have been added. If you had already added other disks, the device names may be different in your case - but one thing is for sure: the new disks will not have a partition table yet. Hence, for the new disks the fdisk output will be something like this:
Disk /dev/sdb: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

As mentioned earlier, the new disks will not contain any partition tables - which is visible in the above output.
Please make sure that you use the new disks. If you follow the steps below without knowing which partition you are working on, then you, sir, are not only shooting yourself in the foot, you are also blowing your VM's brains out in the process.
So in this case, I am assuming (notice the bold font, imagine the font size to be 20, the color to be a gory red, and the word flashing every other second) that you added 2 new disks to your system: /dev/sdb and /dev/sdc - both of size 1 GB. Verify that the partition table does not exist by running:
# fdisk -l /dev/sdb

and

# fdisk -l /dev/sdc

Both should say "Disk /dev/sdx doesn't contain a valid partition table".
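As an extra sanity check (not part of the original steps), /proc/partitions lists every block device the kernel currently knows about along with its size in 1 KB blocks, so you can confirm that sdb and sdc really are the two 1 GB disks you just added before you touch them:

# cat /proc/partitions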
Follow these steps to create the partition tables:
- Create a partition table for /dev/sdb - create 2 partitions of 512 MB each
- Verify the partitions created
- Follow steps 1 and 2 with /dev/sdc instead of /dev/sdb (a scripted shortcut is sketched below, after the fdisk output)
# fdisk /dev/sdb

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-130, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-130, default 130): +512M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (67-130, default 67):
Using default value 67
Last cylinder, +cylinders or +size{K,M,G} (67-130, default 130):
Using default value 130

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
# fdisk -l /dev/sdb

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1          66      530113+  83  Linux
/dev/sdb2              67         130      514080   83  Linux
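For step 3, you can of course just repeat the interactive session above against /dev/sdc. If you'd rather not retype the answers, here is a scripted sketch that feeds the same keystrokes to fdisk - the blank lines in the printf string accept fdisk's defaults for the first and last cylinders. fdisk is interactive by design, so treat this as a convenience hack and verify the result afterwards:

# printf 'n\np\n1\n\n+512M\nn\np\n2\n\n\nw\n' | fdisk /dev/sdc
# fdisk -l /dev/sdc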
The really cool part of mdadm is that you can mark one of the disks as "faulty" and then, in the case of RAID 1 (which does mirroring without any striping), keep using the surviving disk while you remove the faulty one. For the sake of completeness, I am going to dump the steps I followed to create a simple RAID 1 setup using /dev/sdc1 and /dev/sdc2.
- Create a RAID 1 setup using /dev/sdc1 and /dev/sdc2. Remember that /dev/md0 is a device created by combining /dev/sdc1 and /dev/sdc2 - you cannot mount these partitions individually.
- Create an ext3 filesystem on the device /dev/md0
- Mount the device
- Check the disk status
- Set one of the disks faulty. Before doing this, keep a separate terminal tab open with the command:
- Do it
- At this point my kernel log shows the message:
- Check the mdadm output
- Remove the faulty disk.
- Check the details once again:
- Add the disk back. We are re-adding the same disk, as if it were a new one.
- At this point, the new disk will be rebuilt with the data from the working disk.
- Check the details once again. Once the second disk is rebuilt, both disks will be in active sync.
root@vm1-ubuntu:/var/tmp# mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdc1 /dev/sdc2
mdadm: size set to 513984K
mdadm: largest drive (/dev/sdc1) exceed size (513984K) by more than 1%
Continue creating array? y
mdadm: array /dev/md0 started.
root@vm1-ubuntu:/var/tmp# mke2fs -j /dev/md0
root@vm1-ubuntu:/var/tmp# mount /dev/md0 /mnt/
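As an optional sanity check (not part of the original steps), df confirms the mount and shows the usable size of the mirror - roughly the size of a single 512 MB member, since RAID 1 only gives you the capacity of one disk:

# df -h /mnt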
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | tail -3    # you should remove tail to see more output
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       34        1      active sync   /dev/sdc2
root@vm1-ubuntu:/var/tmp# tail -f /var/log/kern.log
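Besides tailing the kernel log, /proc/mdstat is the other obvious thing to watch - it shows the array state and, later on, the rebuild progress. This is just an optional alternative to the kern.log tab, not part of the original steps:

# watch -n 1 cat /proc/mdstat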
root@vm1-ubuntu:/var/tmp# mdadm --manage --set-faulty /dev/md0 /dev/sdc2
mdadm: set /dev/sdc2 faulty in /dev/md0
md/raid1:md0: Disk failure on sdc2, disabling device.
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | tail -5
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       0        0        1      removed

       2       8       34        -      faulty spare   /dev/sdc2
root@vm1-ubuntu:/var/tmp# mdadm /dev/md0 -r /dev/sdc2
mdadm: hot removed /dev/sdc2
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | tail -3
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       0        0        1      removed
root@vm1-ubuntu:/var/tmp# mdadm /dev/md0 -a /dev/sdc2
mdadm: re-added /dev/sdc2
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | grep -i rebuild
 Rebuild Status : 1% complete
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | grep -i rebuild
 Rebuild Status : 53% complete
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | grep -i rebuild
 Rebuild Status : 94% complete
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | grep -i rebuild
# above command gave no output => check the disk states to verify the rebuild is done
root@vm1-ubuntu:/var/tmp# mdadm --detail /dev/md0 | tail -3
    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       34        1      active sync   /dev/sdc2
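Two optional loose ends once you are done playing - both are standard mdadm usage rather than part of the steps above, and the config file path below is the Debian/Ubuntu default, so adjust it if yours differs. To make the array survive a reboot:

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Or, to tear the experiment down completely:

# umount /mnt
# mdadm --stop /dev/md0
# mdadm --zero-superblock /dev/sdc1 /dev/sdc2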