Activating second disk on ubuntu
I have two disks installed in my Ubuntu Dapper machine, each 250GB, but only one of them appears to be mounted and usable. This has been the case since my initial install of Ubuntu on this machine. I'm not sure how I confused the installer so badly, but I clearly did not understand the subtleties of LVM.
Collected data
 root@weasel:/var/log# df -kh
 Filesystem               Size  Used Avail Use% Mounted on
 /dev/mapper/Ubuntu-root  224G   87G  125G  42% /
 varrun                  1007M  208K 1007M   1% /var/run
 varlock                 1007M  4.0K 1007M   1% /var/lock
 udev                    1007M  108K 1007M   1% /dev
 devshm                  1007M     0 1007M   0% /dev/shm
 lrm                     1007M   22M  986M   3% /lib/modules/2.6.15-26-amd64-k8/volatile
 /dev/sda1                228M  102M  115M  48% /boot
The logical volume manager (LVM) has something to do with this.
 root@weasel:/var/log# fdisk -l

 Disk /dev/sda: 250.0 GB, 250059350016 bytes
 255 heads, 63 sectors/track, 30401 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
 /dev/sda1   *           1          31      248976   83  Linux
 /dev/sda2              32       30401   243947025    5  Extended
 /dev/sda5              32       30401   243946993+  8e  Linux LVM

 Disk /dev/sdb: 250.0 GB, 250059350016 bytes
 255 heads, 63 sectors/track, 30401 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
 /dev/sdb1   *           1       30401   244196001   8e  Linux LVM
The end of the LVM HOWTO lists some common tasks and how to perform them. I noticed the pvdisplay command:
 root@weasel:/etc/lvm# pvdisplay
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   Couldn't find device with uuid 'Y67lyo-mEdN-F0un-yPLl-HCTa-hWLC-mCentd'.
   --- Physical volume ---
   PV Name               unknown device
   VG Name               VolGroup00
   PV Size               232.78 GB / not usable 0
   Allocatable           yes (but full)
   PE Size (KByte)       32768
   Total PE              7449
   Free PE               0
   Allocated PE          7449
   PV UUID               Y67lyo-mEdN-F0un-yPLl-HCTa-hWLC-mCentd

   --- Physical volume ---
   PV Name               /dev/sdb1
   VG Name               VolGroup00
   PV Size               232.88 GB / not usable 0
   Allocatable           yes
   PE Size (KByte)       32768
   Total PE              7452
   Free PE               2
   Allocated PE          7450
   PV UUID               mh0UHf-UYhT-NOj8-Ddlv-3NVL-hKBW-enhOXu

   --- Physical volume ---
   PV Name               /dev/sda5
   VG Name               Ubuntu
   PV Size               232.64 GB / not usable 0
   Allocatable           yes (but full)
   PE Size (KByte)       4096
   Total PE              59557
   Free PE               0
   Allocated PE          59557
   PV UUID               hzY3In-XHIh-RGwS-1ilj-nL6P-Ajts-KmySkw
There's a volume group display command as well:
 root@weasel:/etc/lvm# vgdisplay
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   Couldn't find device with uuid 'Y67lyo-mEdN-F0un-yPLl-HCTa-hWLC-mCentd'.
   Couldn't find all physical volumes for volume group VolGroup00.
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   Couldn't find device with uuid 'Y67lyo-mEdN-F0un-yPLl-HCTa-hWLC-mCentd'.
   Couldn't find all physical volumes for volume group VolGroup00.
   Volume group "VolGroup00" doesn't exist
   --- Volume group ---
   VG Name               Ubuntu
   System ID
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  3
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                2
   Open LV               2
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               232.64 GB
   PE Size               4.00 MB
   Total PE              59557
   Alloc PE / Size       59557 / 232.64 GB
   Free  PE / Size       0 / 0
   VG UUID               85vnO6-FtRR-SZnS-s0UQ-aFNf-OY5X-ZiwMm0
Physical volume scan (pvscan) shows this:
 root@weasel:/etc/lvm# pvscan
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   Couldn't find device with uuid 'Y67lyo-mEdN-F0un-yPLl-HCTa-hWLC-mCentd'.
   PV unknown device   VG VolGroup00   lvm2 [232.78 GB / 0    free]
   PV /dev/sdb1        VG VolGroup00   lvm2 [232.88 GB / 64.00 MB free]
   PV /dev/sda5        VG Ubuntu       lvm2 [232.64 GB / 0    free]
   Total: 3 [698.30 GB] / in use: 3 [698.30 GB] / in no VG: 0 [0   ]
Analysis
It looks like I have two volume groups, named VolGroup00 and Ubuntu. The Ubuntu volume group is working, but VolGroup00 is not. Device /dev/sda5 belongs to Ubuntu, and /dev/sdb1 belongs to VolGroup00.
The cunning plan
I think the steps toward a working configuration are these:
- remove logical volumes in VolGroup00?
 lvremove /dev/???
- remove volume group VolGroup00
 vgchange -a n VolGroup00
 vgremove VolGroup00
The above commands failed with an error saying that VolGroup00 couldn't be found, but the output suggested considering the command vgreduce --removemissing. I did the following:
 vgreduce --test --removemissing VolGroup00
 vgreduce --removemissing VolGroup00
 vgreduce --test --removemissing VolGroup00
Luckily, my existing data in the Ubuntu volume group survived intact.
Now vgscan says this:
 root@weasel:/# vgscan
   Reading all physical volumes.  This may take a while...
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   Found volume group "VolGroup00" using metadata type lvm2
   Found volume group "Ubuntu" using metadata type lvm2
- initialize /dev/sdb1?
 pvcreate -M2 /dev/sdb
Unfortunately, I got this back:
 root@weasel:/# pvcreate -M2 /dev/sdb
   Device /dev/sdb not found (or ignored by filtering).
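My guess about that failure: the whole-disk device /dev/sdb carries a partition table, so LVM's device filter ignores it; the 8e partition /dev/sdb1 is what would be initialized. Had it been needed, the command would presumably have looked like this (hypothetical, and as it turns out unnecessary, since pvscan below shows /dev/sdb1 already carries an LVM label):

```shell
# Hypothetical: initialize the partition, not the whole disk.
# The plan above said /dev/sdb1; the command I actually typed
# targeted /dev/sdb, which LVM filters out.
pvcreate -M2 /dev/sdb1
```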
But pvscan says this:
 root@weasel:/# pvscan
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   PV /dev/sda5   VG Ubuntu   lvm2 [232.64 GB / 0    free]
   PV /dev/sdb1               lvm2 [232.88 GB]
   Total: 2 [465.53 GB] / in use: 1 [232.64 GB] / in no VG: 1 [232.88 GB]
So I guess I don't need to pvcreate anything. But how do I go from here to having a usable mount point?
I did a vgcreate like so:
 root@weasel:/# vgcreate vg00 /dev/sdb1
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   Volume group "vg00" successfully created
 root@weasel:/# pvscan
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   PV /dev/sdb1   VG vg00     lvm2 [232.88 GB / 232.88 GB free]
   PV /dev/sda5   VG Ubuntu   lvm2 [232.64 GB / 0    free]
   Total: 2 [465.53 GB] / in use: 2 [465.53 GB] / in no VG: 0 [0   ]
I then used vgdisplay to find the Total PE of vg00 (59618). I then fed this to lvcreate like so:
 root@weasel:/# lvcreate -l 59618 vg00 -n big
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   Logical volume "big" created
I hope that I/O error goes away after a reboot or something...
Now lvscan says this:
 root@weasel:/# lvscan
   /dev/evms/lvm2/VolGroup00/LogVol00: read failed after 0 of 4096 at 0: Input/output error
   ACTIVE            '/dev/vg00/big' [232.88 GB] inherit
   ACTIVE            '/dev/Ubuntu/root' [226.77 GB] inherit
   ACTIVE            '/dev/Ubuntu/swap_1' [5.88 GB] inherit
I formatted this logical volume like this:
 root@weasel:/# mkfs -t ext3 /dev/vg00/big
 mke2fs 1.38 (30-Jun-2005)
 Filesystem label=
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 30539776 inodes, 61048832 blocks
 3052441 blocks (5.00%) reserved for the super user
 First data block=0
 1864 block groups
 32768 blocks per group, 32768 fragments per group
 16384 inodes per group
 Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
         2654208, 4096000, 7962624, 11239424, 20480000, 23887872

 Writing inode tables: done
 Creating journal (32768 blocks): done
 Writing superblocks and filesystem accounting information: done

 This filesystem will be automatically checked every 32 mounts or
 180 days, whichever comes first.  Use tune2fs -c or -i to override.
Then I copied the line for my root partition in /etc/fstab, adjusting it for the new volume:
 /dev/mapper/Ubuntu-root /     ext3    defaults,errors=remount-ro 0       1
 /dev/mapper/vg00-big    /big  ext3    defaults,errors=remount-ro 0       1
and then issue the command "mount /big" and it worked.
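In summary, the sequence from bare physical volume to mounted filesystem went like this (a condensed sketch of the steps above; the extent count 59618 comes from vgdisplay and will differ for other disks):

```shell
vgcreate vg00 /dev/sdb1           # new volume group on the spare LVM partition
vgdisplay vg00 | grep 'Total PE'  # note the extent count (59618 here)
lvcreate -l 59618 vg00 -n big     # one logical volume spanning the whole VG
mkfs -t ext3 /dev/vg00/big        # put a filesystem on it
mkdir -p /big                     # mount point, plus an fstab entry
echo '/dev/mapper/vg00-big /big ext3 defaults,errors=remount-ro 0 1' >> /etc/fstab
mount /big
```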
- I no longer intend to assign this new disk to the existing volume group. Instead, I'm going to make a different mount point and filesystem for editing video. Hopefully, this will allow me to partition my disk failures.
Kernel update causes disk failure
Perhaps it was just the reboot following the kernel update, but my /big filesystem disappeared. I didn't notice the problem until my root filesystem filled up: with /big gone, my daily synch between root and /big turned into a synch between root and root, which filled root.
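A guard in the synch script would have prevented that root-fills-root failure mode: check that /big is actually a mounted filesystem before rsyncing into it. A sketch (the paths and rsync flags are illustrative, not my actual cron job):

```shell
#!/bin/sh
# safe_synch DIR: rsync into DIR only if DIR is a mounted filesystem.
# Checking /proc/mounts avoids depending on a mountpoint(1) binary.
safe_synch() {
    target="$1"
    if grep -qs " $target " /proc/mounts; then
        rsync -a --delete /home/ "$target"/home-backup/
    else
        echo "$target is not mounted; skipping synch" >&2
        return 1
    fi
}
```

Called from cron as `safe_synch /big`, the job fails loudly when the disk is missing instead of quietly filling the root filesystem.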
I tried wiggling the wires and unplugging the disk that hosts the failing filesystem. But that caused the system to be unbootable. Ugh. (My guess here is that the BIOS renumbered the disks when it saw that there was a change and tried to boot from the wrong disk. I had to go into the BIOS utility to change the disk boot priority to fix this.)
I had an old Ubuntu install CD on hand that let me boot the machine and poke at the LVM configuration, which was probably unnecessary; luckily, I didn't make any substantive changes. Eventually I decided that tweaking the LVM config was getting me nowhere and went back to the BIOS to get things working again. At least it boots now.
Now I'm at the point of having a bootable system with both disks recognized by the BIOS, but the non-root disk is not visible to the operating system.
df -k shows only filesystems from /dev/sda; the one big filesystem on /dev/sdb is missing. Curiously, my fstab did not have an entry for the /big filesystem. Perhaps Ubuntu helpfully removed it for me. I added it back and now all is well again.
However, all those reboots convinced Comcast to give me a new IP address, so now I have to reconfigure for it. I need to make a wiki page on how to do that.
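Until that page exists, the core of the job is just: find the new address, then find every config file still naming the old one. A sketch (1.2.3.4 stands in for the previous Comcast address; I haven't verified this covers everything):

```shell
# What address did DHCP hand us this time?
ifconfig eth0 | grep 'inet addr'
# Which config files still reference the old address?
grep -rl '1.2.3.4' /etc 2>/dev/null
```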
Can't mount second disk
For a time I had my backup disk (/big) disconnected and commented out of fstab. When I plugged it back in and uncommented the fstab entry, I found that I couldn't mount the disk.
When I try to mount it I get this:
 root@weasel:~# mount /big
 mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg00-big,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail  or so
In /var/log/syslog, I see this:
 Jun 22 22:51:17 weasel kernel: [ 1427.167819] EXT3-fs error (device dm-0): ext3_check_descriptors: Block bitmap for group 0 not in group (block 3858766779)!
 Jun 22 22:51:17 weasel kernel: [ 1427.178580] EXT3-fs: group descriptors corrupted!
This appears to be working to recover the corruption:
 root@weasel:/dev/vg00# fsck /dev/vg00/big
 fsck 1.40.8 (13-Mar-2008)
 e2fsck 1.40.8 (13-Mar-2008)
 fsck.ext3: Group descriptors look bad... trying backup blocks...
 ext3 recovery flag is clear, but journal has data.
 Recovery flag not set in backup superblock, so running journal anyway.
 /dev/vg00/big: recovering journal
 Pass 1: Checking inodes, blocks, and sizes
It is taking a pretty long time. I should probably walk away so I'm not tempted to interrupt it or poke at it.
Eventually, it started to ask permission to fix group numbers, free inodes, etc.
About 1800 groups needed repair. In the end it looked like this:
 Free inodes count wrong for group #1855 (16383, counted=16384).
 Fix<y>? yes

 Free inodes count wrong for group #1860 (16381, counted=16382).
 Fix<y>? yes

 Free inodes count wrong (30459026, counted=30443879).
 Fix<y>? yes

 /dev/vg00/big: ***** FILE SYSTEM WAS MODIFIED *****
 /dev/vg00/big: 95897/30539776 files (4.2% non-contiguous), 50861505/61048832 blocks
Now I get this:
 root@weasel:/dev/vg00# fsck /dev/vg00/big
 fsck 1.40.8 (13-Mar-2008)
 e2fsck 1.40.8 (13-Mar-2008)
 /dev/vg00/big: clean, 95897/30539776 files, 50861505/61048832 blocks
Yay.
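If this happens again, answering roughly 1800 Fix&lt;y&gt;? prompts by hand can be skipped: e2fsck's -y option assumes yes to every question. A sketch for next time (unmount first; run against the logical volume as above):

```shell
# Unmount, then let fsck repair everything it finds without
# prompting (-y answers yes to all questions).
umount /big
fsck -y /dev/vg00/big
```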
Four disks under LVM, untangling the mess
My system has crashed about once a week since I installed the maximum of four SATA disks in it. I've tried unmounting the new disks, but I actually require three of the four for the basic system; the fourth could be unplugged, since it only stores movies.
With everything plugged in and mounted, it works fine most of the time. But once a week or so the hard disk activity light stops blinking and stays on solid. Then the machine becomes unresponsive: pressing Caps Lock doesn't change the state of the LED, moving the mouse doesn't move the cursor, and no network traffic is passed.
lvscan broken?
I figure the problem is probably in my LVM configuration, so I'm starting to go back and verify that I did everything right. But my first step is to look at the output of lvscan, and there seems to be some small problem with it:
 root@weasel:~# lvscan
 File descriptor 4 left open
 File descriptor 5 left open
 File descriptor 7 left open
 File descriptor 8 left open
   Setting log/indent to 1
   Setting log/prefix to
   Setting log/command_names to 0
   Setting global/test to 0
   Setting log/overwrite to 0
   Setting log/file to /var/log/lvm2.log
   log/activation not found in config: defaulting to 0
   Logging initialised at Tue Sep 23 21:01:05 2008
   Setting global/umask to 63
   Set umask to 0077
   Setting devices/dir to /dev
   Setting global/proc to /proc
   Setting global/activation to 1
   global/suffix not found in config: defaulting to 1
   Setting global/units to h
   devices/preferred_names not found in config file: using built-in preferences
   Setting devices/ignore_suspended_devices to 0
   Setting devices/cache_dir to /etc/lvm/cache
   Setting devices/write_cache_state to 1
   Setting activation/reserved_stack to 256
   Setting activation/reserved_memory to 8192
   Setting activation/process_priority to -18
   Initialised format: lvm1
   Initialised format: pool
   Initialised format: lvm2
   global/format not found in config: defaulting to lvm2
   Initialised segtype: striped
   Initialised segtype: zero
   Initialised segtype: error
   Initialised segtype: snapshot
   Initialised segtype: mirror
   Setting backup/retain_days to 30
   Setting backup/retain_min to 10
   Setting backup/archive_dir to /etc/lvm/archive
   Setting backup/backup_dir to /etc/lvm/backup
   global/fallback_to_lvm1 not found in config: defaulting to 0
   Setting global/locking_type to 1
   File-based locking selected.
   Setting global/locking_dir to /var/lock/lvm
   Finding all logical volumes
   /dev/ram0: size is 131072 sectors
   /dev/ram0: size is 131072 sectors
   /dev/ram0: No label detected
   /dev/loop0: size is 0 sectors
   /dev/sda: size is 488397168 sectors
   /dev/netstore/netstore: size is 526385152 sectors
   /dev/netstore/netstore: size is 526385152 sectors
   /dev/netstore/netstore: No label detected
   /dev/ram1: size is 131072 sectors
   /dev/ram1: size is 131072 sectors
   /dev/ram1: No label detected
   /dev/loop1: size is 0 sectors
   /dev/sda1: size is 497952 sectors
   /dev/sda1: size is 497952 sectors
   /dev/sda1: No label detected
   /dev/netstore/store: size is 450379776 sectors
   /dev/netstore/store: size is 450379776 sectors
   /dev/netstore/store: No label detected
   /dev/ram2: size is 131072 sectors
   /dev/ram2: size is 131072 sectors
   /dev/ram2: No label detected
   /dev/loop2: size is 0 sectors
   /dev/sda2: size is 2 sectors
   /dev/vg00/big: size is 488390656 sectors
   /dev/vg00/big: size is 488390656 sectors
   /dev/vg00/big: No label detected
   /dev/ram3: size is 131072 sectors
   /dev/ram3: size is 131072 sectors
   /dev/ram3: No label detected
   /dev/loop3: size is 0 sectors
   /dev/netback/netback: size is 976764928 sectors
   /dev/netback/netback: size is 976764928 sectors
   /dev/netback/netback: No label detected
   /dev/ram4: size is 131072 sectors
   /dev/ram4: size is 131072 sectors
   /dev/ram4: No label detected
   /dev/loop4: size is 0 sectors
   /dev/Ubuntu/root: size is 475570176 sectors
   /dev/Ubuntu/root: size is 475570176 sectors
   /dev/Ubuntu/root: No label detected
   /dev/ram5: size is 131072 sectors
   /dev/ram5: size is 131072 sectors
   /dev/ram5: No label detected
   /dev/loop5: size is 0 sectors
   /dev/sda5: size is 487893987 sectors
   /dev/sda5: size is 487893987 sectors
   /dev/sda5: lvm2 label detected
   /dev/Ubuntu/swap_1: size is 12320768 sectors
   /dev/Ubuntu/swap_1: size is 12320768 sectors
   /dev/Ubuntu/swap_1: No label detected
   /dev/ram6: size is 131072 sectors
   /dev/ram6: size is 131072 sectors
   /dev/ram6: No label detected
   /dev/loop6: size is 0 sectors
   /dev/ram7: size is 131072 sectors
   /dev/ram7: size is 131072 sectors
   /dev/ram7: No label detected
   /dev/loop7: size is 0 sectors
   /dev/ram8: size is 131072 sectors
   /dev/ram8: size is 131072 sectors
   /dev/ram8: No label detected
   /dev/ram9: size is 131072 sectors
   /dev/ram9: size is 131072 sectors
   /dev/ram9: No label detected
   /dev/ram10: size is 131072 sectors
   /dev/ram10: size is 131072 sectors
   /dev/ram10: No label detected
   /dev/ram11: size is 131072 sectors
   /dev/ram11: size is 131072 sectors
   /dev/ram11: No label detected
   /dev/ram12: size is 131072 sectors
   /dev/ram12: size is 131072 sectors
   /dev/ram12: No label detected
   /dev/ram13: size is 131072 sectors
   /dev/ram13: size is 131072 sectors
   /dev/ram13: No label detected
   /dev/ram14: size is 131072 sectors
   /dev/ram14: size is 131072 sectors
   /dev/ram14: No label detected
   /dev/ram15: size is 131072 sectors
   /dev/ram15: size is 131072 sectors
   /dev/ram15: No label detected
   /dev/sdb: size is 488397168 sectors
   /dev/sdb1: size is 488392002 sectors
   /dev/sdb1: size is 488392002 sectors
   /dev/sdb1: lvm2 label detected
   /dev/sdc: size is 976773168 sectors
   /dev/sdc: size is 976773168 sectors
   /dev/sdc: lvm2 label detected
   /dev/sdd: size is 976773168 sectors
   /dev/sdd: size is 976773168 sectors
   /dev/sdd: lvm2 label detected
   Locking /var/lock/lvm/V_netback RB
   /dev/sdd: lvm2 label detected
   /dev/sdd: lvm2 label detected
   ACTIVE            '/dev/netback/netback' [465.76 GB] inherit
   Unlocking /var/lock/lvm/V_netback
   Locking /var/lock/lvm/V_netstore RB
   /dev/sdc: lvm2 label detected
   /dev/sdc: lvm2 label detected
   ACTIVE            '/dev/netstore/netstore' [251.00 GB] inherit
   ACTIVE            '/dev/netstore/store' [214.76 GB] inherit
   Unlocking /var/lock/lvm/V_netstore
   Locking /var/lock/lvm/V_vg00 RB
   /dev/sdb1: lvm2 label detected
   /dev/sdb1: lvm2 label detected
   ACTIVE            '/dev/vg00/big' [232.88 GB] inherit
   Unlocking /var/lock/lvm/V_vg00
   Locking /var/lock/lvm/V_Ubuntu RB
   /dev/sda5: lvm2 label detected
   /dev/sda5: lvm2 label detected
   ACTIVE            '/dev/Ubuntu/root' [226.77 GB] inherit
   ACTIVE            '/dev/Ubuntu/swap_1' [5.88 GB] inherit
   Unlocking /var/lock/lvm/V_Ubuntu
   Dumping persistent device cache to /etc/lvm/cache/.cache
   Locking /etc/lvm/cache/.cache (F_WRLCK, 1)
   /dev/disk/by-uuid/1e46b23b-a03a-405e-b6a7-00beeb570375: stat failed: No such file or directory
   /dev/disk/by-id/dm-name-Ubuntu-root--snap: stat failed: No such file or directory
   /dev/disk/by-id/dm-name-Ubuntu-root--snap-cow: stat failed: No such file or directory
   /dev/disk/by-id/usb-WD_5000AAV_External_57442D574341535533303031393637-0:0-part1: stat failed: No such file or directory
   /dev/mapper/Ubuntu-root-real: stat failed: No such file or directory
   /dev/disk/by-id/dm-uuid-LVM-85vnO6FtRRSZnSs0UQaFNfOY5XZiwMm0Lcx9O2JaqSiw7NvRMWEhiSu0vJP1Hw7W-real: stat failed: No such file or directory
   /dev/disk/by-id/dm-uuid-LVM-85vnO6FtRRSZnSs0UQaFNfOY5XZiwMm027YVkbLEKc6uOykfqByOG8xSKAZT3SBn-cow: stat failed: No such file or directory
   /dev/disk/by-id/dm-name-Ubuntu-root-real: stat failed: No such file or directory
   /dev/mapper/Ubuntu-root--snap: stat failed: No such file or directory
   /dev/mapper/Ubuntu-root--snap-cow: stat failed: No such file or directory
   /dev/disk/by-id/dm-uuid-LVM-85vnO6FtRRSZnSs0UQaFNfOY5XZiwMm0YWlbzX9RoUdJ3shI80GwYOBRN6FOO9MQ-cow: stat failed: No such file or directory
   /dev/sdc1: stat failed: No such file or directory
   /dev/sdd1: stat failed: No such file or directory
   /dev/disk/by-uuid/1EF5-31A5: stat failed: No such file or directory
   /dev/Ubuntu/root-snap: stat failed: No such file or directory
   /dev/disk/by-id/dm-uuid-LVM-85vnO6FtRRSZnSs0UQaFNfOY5XZiwMm027YVkbLEKc6uOykfqByOG8xSKAZT3SBn: stat failed: No such file or directory
   /dev/disk/by-label/My\x20Book: stat failed: No such file or directory
   /dev/disk/by-path/pci-0000:00:02.1-usb-0:5:1.0-scsi-0:0:0:0-part1: stat failed: No such file or directory
   /dev/disk/by-id/dm-uuid-LVM-85vnO6FtRRSZnSs0UQaFNfOY5XZiwMm0YWlbzX9RoUdJ3shI80GwYOBRN6FOO9MQ: stat failed: No such file or directory
   Loaded persistent filter cache from /etc/lvm/cache/.cache
   Unlocking fd 5
   Wiping internal VG cache
The pvscan output is similar; it has the same noise wrapped around the real data. There is no visible difference between pvscan and pvscan --verbose, nor between lvscan and lvscan --verbose.
Oops, is this because I turned up logging in LVM while I was trying to get more information on the failure?
OK, all that confusing stuff was from me setting the /etc/lvm/lvm.conf logging verbosity to 3 instead of 2. After I set it back, the output is much more familiar:
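For reference, the knob lives in the log section of /etc/lvm/lvm.conf; the lower level keeps commands quiet, while higher levels produce the trace seen above. A sketch of the relevant fragment (values from memory, so treat them as approximate):

```
log {
    file = "/var/log/lvm2.log"
    level = 2        # higher levels flood every command with debug trace
    verbose = 0
}
```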
 root@weasel:/etc/lvm# lvscan
 File descriptor 4 left open
 File descriptor 5 left open
 File descriptor 7 left open
 File descriptor 8 left open
   Logging initialised at Tue Sep 23 21:37:15 2008
   Set umask to 0077
   Finding all logical volumes
   ACTIVE            '/dev/netback/netback' [465.76 GB] inherit
   ACTIVE            '/dev/netstore/netstore' [251.00 GB] inherit
   ACTIVE            '/dev/netstore/store' [214.76 GB] inherit
   ACTIVE            '/dev/vg00/big' [232.88 GB] inherit
   ACTIVE            '/dev/Ubuntu/root' [226.77 GB] inherit
   ACTIVE            '/dev/Ubuntu/swap_1' [5.88 GB] inherit
   Wiping internal VG cache
 root@weasel:/etc/lvm# pvscan
 File descriptor 4 left open
 File descriptor 5 left open
 File descriptor 7 left open
 File descriptor 8 left open
   Logging initialised at Tue Sep 23 21:37:19 2008
   Set umask to 0077
   Wiping cache of LVM-capable devices
   Wiping internal VG cache
   Walking through all physical volumes
   PV /dev/sdd    VG netback    lvm2 [465.76 GB / 0    free]
   PV /dev/sdc    VG netstore   lvm2 [465.76 GB / 0    free]
   PV /dev/sdb1   VG vg00       lvm2 [232.88 GB / 0    free]
   PV /dev/sda5   VG Ubuntu     lvm2 [232.64 GB / 0    free]
   Total: 4 [1.36 TB] / in use: 4 [1.36 TB] / in no VG: 0 [0   ]
   Wiping internal VG cache
lvm dumpconfig
Here is the output of my lvm dumpconfig:
 root@weasel:/etc/lvm# lvm dumpconfig
 File descriptor 4 left open
 File descriptor 5 left open
 File descriptor 7 left open
 File descriptor 8 left open
   Logging initialised at Tue Sep 23 21:42:11 2008
   Set umask to 0077
   Dumping configuration to stdout
 devices {
        dir="/dev"
        scan="/dev"
        preferred_names=[]
        filter="a/.*/"
        cache_dir="/etc/lvm/cache"
        cache_file_prefix=""
        write_cache_state=1
        sysfs_scan=1
        md_component_detection=1
        ignore_suspended_devices=0
 }
 activation {
        missing_stripe_filler="/dev/ioerror"
        reserved_stack=256
        reserved_memory=8192
        process_priority=-18
        mirror_region_size=512
        mirror_log_fault_policy="allocate"
        mirror_device_fault_policy="remove"
 }
 global {
        umask=63
        test=0
        units="h"
        activation=1
        proc="/proc"
        locking_type=1
        fallback_to_clustered_locking=1
        fallback_to_local_locking=1
        locking_dir="/var/lock/lvm"
 }
 shell {
        history_size=100
 }
 backup {
        backup=1
        backup_dir="/etc/lvm/backup"
        archive=1
        archive_dir="/etc/lvm/archive"
        retain_min=10
        retain_days=30
 }
 log {
        verbose=1
        syslog=1
        file="/var/log/lvm2.log"
        overwrite=0
        level=5
        indent=1
        command_names=0
        prefix=" "
 }
   Wiping internal VG cache
lvdisplay -C
 root@weasel:/etc/lvm# lvdisplay -C
 File descriptor 4 left open
 File descriptor 5 left open
 File descriptor 7 left open
 File descriptor 8 left open
   Logging initialised at Tue Sep 23 21:49:22 2008
   Set umask to 0077
   Finding all logical volumes
   LV       VG       Attr   LSize   Origin Snap%  Move Log Copy%
   root     Ubuntu   -wi-ao 226.77G
   swap_1   Ubuntu   -wi-ao   5.88G
   netback  netback  -wi-ao 465.76G
   netstore netstore -wi-ao 251.00G
   store    netstore -wi-ao 214.76G
   big      vg00     -wi-ao 232.88G
   Wiping internal VG cache
Hmm, that could be a problem. LVM thinks I have 5.88G of swap, but free thinks I have over 6G:
 root@weasel:/etc/lvm# free -m
              total       used       free     shared    buffers     cached
 Mem:          3024       1149       1875          0         99        524
 -/+ buffers/cache:        525       2499
 Swap:         6015          0       6015
I suppose they could just be counting bytes differently, but I'll have to trace that down to make sure.
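The two numbers do reconcile with a little arithmetic. The verbose lvscan above reported the swap LV as 12320768 sectors; lvdisplay prints binary gigabytes while free -m prints binary megabytes. (The one-MiB shortfall against free's 6015 is presumably the page-sized header that mkswap reserves; that part is my assumption.)

```python
sectors = 12_320_768        # /dev/Ubuntu/swap_1 size from the verbose lvscan
size_bytes = sectors * 512  # 512-byte sectors

gib = size_bytes / 2**30    # what lvdisplay calls "G"
mib = size_bytes // 2**20   # the unit free -m reports in

print(f"{gib:.2f} GiB")     # -> 5.88 GiB, matching lvdisplay
print(f"{mib} MiB")         # -> 6016 MiB; free says 6015 (header overhead)
```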
After looking closer, this doesn't seem to be a problem: free and lvscan are looking at the same thing, my swap partition, and just reporting its size in different units. If I were really paranoid, I could move the swap out of a partition and into a file. Then I'd have one less logical volume to fiddle with, and perhaps it would simplify the overall configuration.
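If I ever do that, moving swap into a file is a short procedure; a sketch, with the size and path chosen arbitrarily and the old LV left in place until the new swap is proven:

```shell
# Create a 6 GB swap file, format it, and enable it.
# (Size and path are arbitrary; keep the LV swap until
# this is tested and fstab is updated.)
dd if=/dev/zero of=/swapfile bs=1M count=6144
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
# later: swapoff /dev/Ubuntu/swap_1 && lvremove Ubuntu/swap_1
```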