Migrate RAID disks to LVM volumes
Goal: Migrate data from an existing RAID5 array to LVM partitions.
Action: The current RAID5 array (/dev/md0) has 4 disks. We will remove the disks from the array one by one and reuse them for the LVM partitioning scheme. Once an LVM volume is created from the first freed RAID disk (/dev/sdb8), we will copy our data from the existing RAID array to the LVM mount point.
Copy the RAID5 data to the newly created LVM partition (/dev/vg1/lv1), mounted on /lv1.
Once the whole data set is copied from RAID5 to LVM, we'll wipe the remaining RAID devices and use them in the newly created LVM.
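Before starting, it can help to confirm the current layout of the RAID member partitions (a quick check; the device names are the ones used throughout this setup):
[root@node1 ~]# lsblk /dev/sdb
[root@node1 ~]# cat /proc/mdstat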
First, we need to convert the partition type of all RAID member devices from 'Linux raid autodetect' to 'Linux LVM'
Then create a PV on each of the three remaining devices
Then vgextend the existing volume group vg1
Then lvextend the existing logical volume mounted on /lv1
Resize the filesystem
Put an entry inside the /etc/fstab file (see the example entry right after this list)
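For that last fstab step, the entry for the new logical volume would look roughly like this (a sketch; XFS is the filesystem created later in this walkthrough):
/dev/vg1/lv1    /lv1    xfs    defaults    0 0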
[root@node1 ~]# mdadm -D /dev/md0
Check the mount point and RAID array name, and check the disk sizes as well.
[root@node1 ~]# df -h
Comment out the RAID5 mount point entry in /etc/fstab so it won't be a problem if we need to reboot to update the partition table, since we are going to reuse the existing disks for the LVM partitions.
[root@node1 ~]# vim /etc/fstab
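The commented-out line would look roughly like this (a sketch; the filesystem type is assumed to be ext4, matching the resize2fs/e2fsck usage below):
#/dev/md0    /raid5    ext4    defaults    0 0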
Mark as failed the device you want to remove from the existing RAID5 array (/dev/sdb8).
[root@node1 ~]# mdadm /dev/md0 --fail /dev/sdb8
Unmount the RAID5 mount point.
[root@node1 ~]# umount /dev/md0
Remove the disk that was marked failed in the last command (/dev/sdb8) from the array.
[root@node1 ~]# mdadm /dev/md0 --remove /dev/sdb8
Check the existing array; it should report the failed/removed device in the RAID5 array /dev/md0.
[root@node1 ~]# mdadm -D /dev/md0
Print the estimated minimum size of the filesystem (-P), since one disk has been removed from the array and we need to know how far it can shrink.
[root@node1 ~]# resize2fs -P /dev/md0
Check the RAID status; no rebuild or resync should be in progress, and it should show 1 removed device (you can see there are only 3 U's inside the brackets, [UUU_]).
[root@node1 ~]# cat /proc/mdstat
Run a forced filesystem check on the RAID5 array and fix it if needed; resize2fs requires this before an offline resize.
[root@node1 ~]# e2fsck -f /dev/md0
Shrink the filesystem to its minimum size (-M) and print the progress (-p) of the shrinking, so that once the shrink completes we can go ahead with the next operation.
[root@node1 ~]# resize2fs -p -M /dev/md0
To check the array size, show the /dev/md0 details with the command below and record the array size.
[root@node1 ~]# mdadm -D /dev/md0
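To pull out just that figure, the relevant line of the output can be filtered; the 3139584 used below is the array size reported on this particular setup, so your value will differ:
[root@node1 ~]# mdadm -D /dev/md0 | grep -i 'array size'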
We need to adjust the /dev/md0 array size, so copy the array size from the command above.
[root@node1 ~]# mdadm --grow /dev/md0 --array-size 3139584
Run resize2fs once again so the filesystem grows back to the updated array size.
[root@node1 ~]# resize2fs /dev/md0
The active array size is now reduced to that of a 3-disk array.
[root@node1 ~]# mdadm -D /dev/md0
Reboot to update the partition table
[root@node1 ~]# reboot
Mount the RAID5 array and check the data inside the RAID5 mount point; we will go ahead and copy this data to the LVM partition.
[root@node1 ~]# mount /dev/md0 /raid5/
[root@node1 ~]# ls /raid5
Wipe the old RAID signature from the removed partition /dev/sdb8.
[root@node1 ~]# wipefs -a /dev/sdb8
Change the partition type of /dev/sdb8 from 'Linux raid autodetect' to 'Linux LVM' with fdisk.
[root@node1 ~]# fdisk /dev/sdb
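Inside fdisk, the dialog looks roughly like this (a sketch; prompts vary with the fdisk version, and 8e is the MBR hex code for Linux LVM):
Command (m for help): t
Partition number: 8
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux raid autodetect' to 'Linux LVM'
Command (m for help): w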
Create a new LVM volume from the wiped RAID disk and copy the data from the /raid5 array to the newly created and mounted LVM partition /lv1.
[root@node1 ~]# pvcreate /dev/sdb8
[root@node1 ~]# vgcreate vg1 /dev/sdb8
[root@node1 ~]# lvcreate -L +900M -n lv1 vg1
[root@node1 ~]# mkfs.xfs /dev/vg1/lv1
[root@node1 ~]# mkdir /lv1
[root@node1 ~]# mount /dev/vg1/lv1 /lv1
[root@node1 ~]# cp -r /raid5/* /lv1
[root@node1 ~]# ls /lv1
We can see the RAID5 data is now on the newly created LVM partition. If we are confident that all the data from RAID5 has been copied to the new LVM partition, then it is safe to remove the RAID array from the system.
Be careful: double check the data and its size on the newly created LVM partition before doing the operations below (see the check right after this note).
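A quick way to compare the two copies before destroying the array (a sketch, using the paths from this walkthrough; diff prints nothing if the two trees match):
[root@node1 ~]# du -sh /raid5 /lv1
[root@node1 ~]# diff -r /raid5 /lv1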
Remove all the disks from the existing array.
[root@node1 ~]# mdadm /dev/md0 --fail /dev/sdb7
[root@node1 ~]# mdadm /dev/md0 --remove /dev/sdb7
Check the RAID5 array and the disks still participating in it by running a filesystem check.
[root@node1 ~]# e2fsck -f /dev/md0
Note: RAID5 does not let us simply pull every remaining disk out of a running array, and the filesystem check now reports read errors on the degraded array.
We have to fail and remove all the existing disks, stop the RAID5 array, and zero the superblock on each of the remaining RAID devices.
[root@node1 ~]# mdadm /dev/md0 --fail /dev/sdb5
[root@node1 ~]# mdadm /dev/md0 --fail /dev/sdb6
[root@node1 ~]# mdadm /dev/md0 --remove /dev/sdb5
[root@node1 ~]# mdadm /dev/md0 --remove /dev/sdb6
As we have copied the data from the RAID5 array, we can stop the array, wipe all of its disks, and use them for the newly created LVM volume. We'll extend the volume group and then the logical volume.
First of all, unmount /dev/md0, stop the array, and zero the superblock on all the failed and removed disks.
[root@node1 ~]# umount /dev/md0
[root@node1 ~]# mdadm --stop /dev/md0
[root@node1 ~]# mdadm --zero-superblock /dev/sdb5
[root@node1 ~]# mdadm --zero-superblock /dev/sdb6
[root@node1 ~]# mdadm --zero-superblock /dev/sdb7
[root@node1 ~]# reboot
Change the partition type of /dev/sdb5, /dev/sdb6 and /dev/sdb7 from 'Linux raid autodetect' to 'Linux LVM', repeating the same fdisk dialog as before for each partition, then run partprobe so the kernel re-reads the partition table.
[root@node1 ~]# fdisk /dev/sdb
[root@node1 ~]# partprobe
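Before creating the physical volumes, it may be worth confirming that the three partitions now show the 'Linux LVM' type (a quick check):
[root@node1 ~]# fdisk -l /dev/sdb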
Create physical volumes on the three freed partitions, extend the volume group vg1 with them, and verify.
[root@node1 ~]# pvcreate /dev/sdb5 /dev/sdb6 /dev/sdb7
[root@node1 ~]# vgextend vg1 /dev/sdb5 /dev/sdb6 /dev/sdb7
[root@node1 ~]# vgdisplay
Extend the logical volume by 3G into the new space, then grow the XFS filesystem. Note that xfs_growfs only works on a mounted filesystem, so the grow here takes effect only once /lv1 is mounted again a few commands below.
[root@node1 ~]# lvextend -L +3G /dev/vg1/lv1
[root@node1 ~]# xfs_growfs /dev/vg1/lv1
[root@node1 ~]# lvdisplay
[root@node1 ~]# df -h
Mount the logical volume again, grow the XFS filesystem, and confirm the new size.
[root@node1 ~]# mount /dev/vg1/lv1 /lv1
[root@node1 ~]# xfs_growfs /dev/vg1/lv1
[root@node1 ~]# df -h
LVM snapshot and restore of data on the lv1 logical volume
[root@node1 ~]# pvs
[root@node1 ~]# lvdisplay /dev/vg1/lv1
[root@node1 ~]# df -h | grep lv1
Create an LVM snapshot of /dev/vg1/lv1 to back up the lv1 data.
[root@node1 ~]# lvcreate --size 100M --snapshot --name snap_lv1 /dev/vg1/lv1
The files currently on the /lv1 mount point are now also available in the LVM snapshot.
We can use dmsetup status to check the snapshot's metadata sectors, as shown below.
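A minimal check, assuming the device-mapper name follows the usual vg-lv naming (vg1-snap_lv1 here):
[root@node1 ~]# dmsetup status vg1-snap_lv1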
To check the data on the snapshot, unmount the actual mount point first, then mount the snapshot volume.
[root@node1 ~]# df -h | grep lv1
[root@node1 ~]# umount /lv1
[root@node1 ~]# mount /dev/vg1/snap_lv1 /mnt
[root@node1 ~]# umount /mnt
[root@node1 ~]# mount /dev/vg1/lv1 /lv1
[root@node1 ~]# umount /lv1
[root@node1 ~]# mount /dev/vg1/lv1 /mnt
[root@node1 ~]# df -h | grep lv1
[root@node1 ~]# tar -cvzf snap_lv1.tar.gz /mnt/
[root@node1 ~]# ls
Restore the snapshot to the original logical volume.
[root@node1 ~]# lvconvert --mergesnapshot /dev/vg1/snap_lv1
We can refresh the logical volume so that it is reactivated with the latest metadata, using lvchange.
[root@node1 ~]# lvchange --refresh vg1/lv1
[root@node1 ~]# ls /mnt
Now the snapshot has been merged into the original logical volume and removed.
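To confirm, lvs should no longer list snap_lv1 under vg1 (a quick check):
[root@node1 ~]# lvs vg1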