
How to remotely convert live 1xHDD/LVM Linux server to 2xHDD RAID1/LVM (GRUB2, GPT)

Assumptions:

  • current HDD is /dev/sda; it has a GPT (with a bios_grub partition as /dev/sda1), a separate /boot partition (/dev/sda2), and an LVM physical volume (/dev/sda3), where LVM holds all the remaining filesystems (root, /home, /srv, …) as logical volumes; LVM is properly configured, and the system reboots with no problems
  • your new drive is /dev/sdb, it is identical to /dev/sda, and it comes empty from the manufacturer (this is important! wipe the drive if it is not empty, especially if it used to be a part of another RAID)
  • your system is Debian or Debian-based; in this exact example I’ve been using Ubuntu Server 10.04
  • your LVM volume group is named vg0
  • make sure you understand what each command does before executing it
  • you do have an external backup of all your important data, and you do understand that the following operations are potentially dangerous to your data integrity

Inspired by: Debian Etch RAID guide, serverfault question.

  1. Create the GPT on the new drive:
    parted /dev/sdb mklabel gpt
  2. Get the list of partitions on /dev/sda:
    parted -m /dev/sda print
  3. Create /dev/sdb partitions similarly to what you have on /dev/sda (my example numbers follow, use your numbers here):
    parted /dev/sdb mkpart bios_grub 1049kB 2097kB
    parted /dev/sdb mkpart boot 2097kB 258MB
    parted /dev/sdb mkpart lvm 258MB 2000GB
  4. Set proper flags on partitions:
    parted /dev/sdb set 1 bios_grub on (a GPT disk has no post-MBR gap to embed GRUB2’s core image, so you create a ~1 MB bios_grub partition to hold it instead)
    (possibly optional) parted /dev/sdb set 2 raid on
    (possibly optional) parted /dev/sdb set 3 raid on
  5. (possibly optional) To make sure /dev/sdb1 (the bios_grub) indeed contains grub’s boot code, I did dd if=/dev/sda1 of=/dev/sdb1
  6. apt-get install mdadm
  7. Note: at this point, older tutorials suggest adding a bunch of raid* kernel modules to /etc/modules and to grub’s list of modules to load. I’m not sure this is really necessary, but do see the tutorials mentioned at the top for more information. If you do modify the lists of modules – don’t forget to run update-initramfs -u (a sketch of this follows the list).
  8. Create two initially-degraded RAID1 devices (one for /boot, another for LVM):
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 missing
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 missing
  9. Store the configuration of your RAID1 to the mdadm.conf file (important! this is not done automatically!)
    mdadm -Es >> /etc/mdadm/mdadm.conf
  10. Verify the contents of your mdadm.conf (roughly what to expect is shown after this list):
    cat /etc/mdadm/mdadm.conf
    Then run dpkg-reconfigure mdadm and enable booting in degraded mode when asked.
  11. Copy your current /boot (/dev/sda2) to the new /boot partition on /dev/md0:
    (one can use dd here as well, but for some reason my attempt at dd failed to write the last byte of data; note that /dev/md0 must contain a filesystem before it can be mounted – either the one brought over by dd, or one you create yourself with mkfs, matching the filesystem type of your current /boot)
    mkdir /mnt/md0
    mount /dev/md0 /mnt/md0
    cp -a /boot/* /mnt/md0/
    umount /dev/md0
    rmdir /mnt/md0
  12. Now extend your existing volume group to include the newly-created /dev/md1:
    pvcreate /dev/md1
    vgextend vg0 /dev/md1
  13. Verify the list of logical volumes you currently have: enter the lvm shell and type lvs. Here’s what I had:
    LV    VG   Attr    LSize    Origin  Snap%  Move  Log  Copy%  Convert
    home  vg0  -wi-ao    1.70t
    logs  vg0  -wi-ao    4.66g
    root  vg0  -wi-ao   10.24g
    srv   vg0  -wc-ao  100.00g
    swap  vg0  -wi-ao    1.86g
    tmp   vg0  -wi-ao    4.66g
  14. Now you can move all the logical volumes to the new physical volume in one command: pvmove /dev/sda3 /dev/md1. Personally, remembering the problem I had with dd from /dev/sda2 to /dev/md0, I decided to move the logical volumes one by one; as this takes time, you may consider joining these operations with ; or && (see the chained example after this list), putting /tmp last (as the easiest one to re-create if it fails to move):
    pvmove --name home /dev/sda3 /dev/md1
    pvmove --name srv /dev/sda3 /dev/md1
    pvmove --name logs /dev/sda3 /dev/md1
    pvmove --name swap /dev/sda3 /dev/md1
    pvmove --name root /dev/sda3 /dev/md1
    pvmove --name tmp /dev/sda3 /dev/md1
  15. To be safer, I ran a filesystem check on the few volumes I could unmount:
    umount /dev/mapper/vg0-srv
    fsck -f /dev/mapper/vg0-srv
    mount /dev/mapper/vg0-srv
    umount /dev/mapper/vg0-tmp
    fsck -f /dev/mapper/vg0-tmp
    mount /dev/mapper/vg0-tmp
  16. Remove /dev/sda3 from the physical space available to your volume group:
    vgreduce vg0 /dev/sda3
  17. Install grub2 to both drives, so that either drive remains bootable if the other one fails:
    grub-install '(hd0)'
    grub-install '(hd1)'
  18. Edit /etc/fstab, pointing /boot to /dev/md0 (a sample line is shown after this list). You may use UUIDs here, but please do not use the UUIDs from mdadm.conf – those are array UUIDs, which differ from filesystem UUIDs; instead, run ls -l /dev/disk/by-uuid to find the UUID of /dev/md0. Personally, I had no problems just using /dev/md0.
  19. Now is the time to add your original /dev/sda to the RAID1; be absolutely sure you have moved all the data off that drive, because these commands will destroy it:
    mdadm --manage --add /dev/md0 /dev/sda2
    mdadm --manage --add /dev/md1 /dev/sda3
    Re-syncing the arrays will take some time; you can watch the progress as shown after this list.
  20. To be on the safe side, you may want to run update-initramfs -u and update-grub once more. I have also edited /etc/grub.d/40_custom, adding two more boot options there: one from /dev/sda2 and one from /dev/sdb2 (/boot on both drives) – I have no idea whether that will actually work, but having more boot options didn’t hurt (a sketch of such an entry follows the list).
  21. Reboot into your new system. Actually, at this point a reboot is only necessary to verify that your system is bootable – you may delay it as long as you want.
  22. Many tutorials also suggest testing your RAID1 by manually “degrading” it, trying to boot, and then rebuilding it; I haven’t done that, but you may want to (a sketch of the usual commands follows the list).
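
For step 7, here is a minimal sketch of what those older tutorials do, should you decide to follow them; the module names below (md_mod, raid1, dm_mod) are my assumption of what such guides typically list, so double-check against the tutorials linked at the top before copying this.
    # (optional) pre-load the RAID/LVM modules at boot; module names are an assumption
    echo md_mod >> /etc/modules
    echo raid1 >> /etc/modules
    echo dm_mod >> /etc/modules
    update-initramfs -u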
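
For step 10, this is roughly what the appended lines look like with the old 0.90-style metadata that older mdadm versions use by default; the UUIDs below are made-up placeholders (yours will differ), and with 1.x metadata the lines carry metadata= and name= fields instead.
    # sample /etc/mdadm/mdadm.conf entries; UUIDs are placeholders
    ARRAY /dev/md0 level=raid1 num-devices=2 UUID=00000000:00000000:00000000:00000000
    ARRAY /dev/md1 level=raid1 num-devices=2 UUID=11111111:11111111:11111111:11111111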
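
For step 14, here are the same one-by-one moves chained with &&, so that each move only starts if the previous one succeeded, with /tmp last; the volume names match my lvs output above, so substitute your own.
    pvmove --name home /dev/sda3 /dev/md1 \
      && pvmove --name srv /dev/sda3 /dev/md1 \
      && pvmove --name logs /dev/sda3 /dev/md1 \
      && pvmove --name swap /dev/sda3 /dev/md1 \
      && pvmove --name root /dev/sda3 /dev/md1 \
      && pvmove --name tmp /dev/sda3 /dev/md1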
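
For step 18, a sketch of the /boot line in /etc/fstab; the ext2 filesystem type and the UUID below are placeholders of mine – keep whatever type your current /boot line already uses, and if you prefer UUIDs, take the real value from ls -l /dev/disk/by-uuid or blkid /dev/md0.
    # device   mountpoint  fstype  options   dump  pass
    /dev/md0   /boot       ext2    defaults  0     2
    # or, using the filesystem UUID of /dev/md0 (placeholder shown):
    # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /boot  ext2  defaults  0  2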
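
For step 19, the re-sync progress can be checked at any time; this is read-only and safe:
    cat /proc/mdstat                # shows per-array sync progress and an estimated finish time
    watch -n 10 cat /proc/mdstat    # refresh the view every 10 seconds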
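
For step 20, this is the shape of the extra entry I mean in /etc/grub.d/40_custom (shown for /dev/sdb2; the /dev/sda2 entry is analogous with (hd0,2)). The kernel and initrd file names are placeholders, the ext2 module assumes an ext2/3/4 /boot, and – as said above – I haven’t verified that such an entry actually boots. Run update-grub after editing the file.
    #!/bin/sh
    exec tail -n +3 $0
    # custom entry booting from /boot on the second drive (/dev/sdb2);
    # replace the kernel/initrd names with the files actually present in /boot
    menuentry "Ubuntu (fallback /boot on /dev/sdb2)" {
        insmod part_gpt
        insmod ext2
        set root='(hd1,2)'
        linux /vmlinuz-<your-kernel-version> root=/dev/mapper/vg0-root ro
        initrd /initrd.img-<your-kernel-version>
    }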
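
For step 22, the usual test looks roughly like this (I haven’t run it myself, so treat it as a sketch); do one array and one disk at a time, and let the re-sync finish completely before touching anything else.
    mdadm --manage /dev/md0 --fail /dev/sda2      # mark the /dev/sda half as faulty
    mdadm --manage /dev/md0 --remove /dev/sda2    # remove it from the array
    cat /proc/mdstat                              # md0 should now show up as degraded
    # reboot here to check that the system still comes up from the remaining half
    mdadm --manage /dev/md0 --add /dev/sda2       # re-add the disk
    cat /proc/mdstat                              # wait until the rebuild completes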

Improvement suggestions, criticism and thank-you are welcome in the comments.
