After performing complex LVM operations, you need specific commands to verify success and troubleshoot any potential problems.
Here are the most valuable verification and troubleshooting commands for the LVM-Thin and Directory setup you just created:
Verification & Status Commands
These are the first commands you should run to confirm your work and check disk usage.
| Command | Purpose | Expected Output |
| --- | --- | --- |
| `sudo pvs` | Physical Volume status: verify the new disk is a PV. | Should show `/dev/sdb` as a PV belonging to `newvg`. |
| `sudo vgs newvg` | Volume Group status: verify the VG size and free space. | `VSize` should be around 3.0T; `VFree` should be very close to 0 (since you used `98%FREE`). |
| `sudo lvs newvg` | Logical Volume status: check the LVs created inside the VG. | Should list both `vm_thin_pool` (LSize 2.5T, Attr `twi-a...`) and `dir_store_lv` (LSize ~500G, Attr `-wi-a...`). |
| `df -h /mnt/directory_store` | Filesystem check: verify the Directory LV is formatted and mounted. | Should show `/dev/mapper/newvg-dir_store_lv` mounted at `/mnt/directory_store` with a size of ~500G. |
| `cat /etc/fstab` | Mount persistence check: verify the entry that auto-mounts the LV on reboot. | Should contain the line: `/dev/newvg/dir_store_lv /mnt/directory_store xfs defaults 0 2` |
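To run all of these checks in one pass, you can wrap them in a short script. This is a minimal sketch that assumes the names used above (`newvg`, `dir_store_lv`, `/mnt/directory_store`); adjust them to your own layout.

```bash
#!/bin/bash
# Post-setup verification sketch for the LVM-Thin + Directory layout above.
# Assumes: VG "newvg", LV "dir_store_lv", mount point /mnt/directory_store.
set -u

echo "== Physical volumes =="
sudo pvs

echo "== Volume group =="
sudo vgs newvg

echo "== Logical volumes =="
sudo lvs newvg

echo "== Directory store mount =="
df -h /mnt/directory_store || echo "WARNING: /mnt/directory_store is not mounted"

echo "== fstab entry =="
grep dir_store_lv /etc/fstab || echo "WARNING: no fstab entry for dir_store_lv"
```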
Troubleshooting Commands
If something doesn’t look right, these commands help diagnose common LVM or filesystem errors.
1. LVM Component Diagnostics
| Command | Purpose | Context/Fix |
| --- | --- | --- |
| `sudo pvscan` | Scans for all PVs visible to the system. | Use this if `pvs` does not show your new PV (`/dev/sdb`). |
| `sudo vgck newvg` | Performs integrity checks on the Volume Group. | Use this if you suspect VG metadata corruption or inconsistencies. |
| `sudo lvchange -ay newvg/dir_store_lv` | Manually activates a Logical Volume. | Use this if `dir_store_lv` shows a state other than `a` (active) in the `lvs` Attr column. |
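For example, if the Directory LV has not come up after a reboot, a typical diagnostic sequence might look like this sketch (same assumed names as above):

```bash
# Rescan for physical volumes if /dev/sdb is missing from the pvs output.
sudo pvscan

# Check the volume group metadata for inconsistencies.
sudo vgck newvg

# Activate the inactive LV (or use "vgchange -ay newvg" to activate them all).
sudo lvchange -ay newvg/dir_store_lv

# Confirm the Attr column now shows the LV as active (fifth character is "a").
sudo lvs newvg
```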
2. Filesystem & Mount Diagnostics
| Command | Purpose | Context/Fix |
| --- | --- | --- |
| `sudo umount /mnt/directory_store` then `sudo mount -a` | Test the fstab entry live. | If this fails, there is an error in the `/etc/fstab` entry (typo, wrong device path) or the filesystem is corrupt (see `xfs_repair` below). |
| `dmesg \| tail` | Check kernel logs for I/O errors. | Use this when mounts fail or commands hang; hardware-level disk errors appear here. |
| `sudo xfs_repair -n /dev/newvg/dir_store_lv` | Check the integrity of the XFS filesystem (read-only; replaces the deprecated `xfs_check`). | If you get read/write errors or the mount fails, run this against the unmounted LV to pinpoint corruption. |
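Putting those together, the sketch below exercises the fstab entry and then runs a read-only XFS check. It assumes the mount point and LV path used above; note that `xfs_repair` needs the filesystem unmounted.

```bash
# Re-test the /etc/fstab entry: unmount, then let mount -a re-read fstab.
sudo umount /mnt/directory_store
sudo mount -a && echo "fstab entry OK" || echo "fstab entry failed; check /etc/fstab"

# Look for recent kernel I/O errors on the underlying disk.
dmesg | tail -n 20

# Read-only XFS integrity check (the filesystem must be unmounted first).
sudo umount /mnt/directory_store
sudo xfs_repair -n /dev/newvg/dir_store_lv
sudo mount /mnt/directory_store
```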
3. Thin Pool Health
For the LVM-Thin pool specifically, you must monitor its metadata health.
```bash
sudo lvs -o +lv_size,data_percent,metadata_percent
```

- `Data%`: shows how full the 2.5TB data area is.
- `Metadata%`: shows how full the metadata area (which tracks all the individual VM disks/snapshots) is.
If Metadata% starts getting high (e.g., over 80%), it means you have created too many snapshots or thin volumes, and you may need to extend the pool’s metadata space to avoid future I/O failures.
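If metadata usage does climb, the usual remedy is to grow the pool's metadata LV. A minimal sketch, assuming the pool is `newvg/vm_thin_pool`; note that this requires free extents in the VG, which may mean freeing space or adding another PV first, since the setup above allocated `98%FREE`.

```bash
# Show current data and metadata utilisation for the thin pool.
sudo lvs -o lv_name,lv_size,data_percent,metadata_percent newvg

# Grow the pool's metadata area by 1 GiB (requires free space in the VG).
sudo lvextend --poolmetadatasize +1G newvg/vm_thin_pool

# Verify the new metadata size and utilisation.
sudo lvs -o lv_name,lv_metadata_size,metadata_percent newvg
```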