aports/main/lvm2/lvm.initd
Dermot Bradley 53faabaf0f main/lvm2: prevent LVM error during shutdown
Whenever a machine using LVM is shut down, the
"/etc/init.d/lvm2 stop" command, run as part of an orderly shutdown,
will output an error:

  * Shutting down the Logical Volume Manager
  * ERROR: lvm failed to stop

If the vgchange command in the init.d script is modified so that it no
longer hides its output, the following is observed instead:

  * Shutting down the Logical Volume Manager
   Logical volume vg0/root contains a filesystem in use.
   Can't deactivate volume group "vg0" with 1 open logical volume(s)
  * ERROR: lvm failed to stop

At this point in shutdown the majority of filesystems have already been
unmounted; obviously, however, / has not been. The above error occurs
because the LV containing the rootfs cannot be deactivated while the
rootfs is still mounted. This is therefore a "normal" error whenever
the rootfs is on LVM, and it should be ignored.
2023-03-23 20:52:47 +00:00
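The underlying condition is easy to confirm: by the time stop() runs, the root filesystem is always still mounted, so /proc/mounts still lists an entry whose mount point is "/". A minimal illustrative check (not part of the init script itself):

```shell
# The rootfs is never unmounted before lvm2's stop() runs, so
# /proc/mounts always still contains an entry mounted on "/".
# Deactivating the VG that backs "/" therefore necessarily fails.
if grep -q ' / ' /proc/mounts; then
    echo "rootfs still mounted"
fi
```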


#!/sbin/openrc-run

depend() {
	before checkfs fsck swap
	after hwdrivers modules device-mapper
}

dm_in_proc() {
	local rc=0 i=
	for i in devices misc; do
		grep -qs 'device-mapper' /proc/$i
		rc=$(($rc + $?))
	done
	return $rc
}

start() {
	local rc=0 msg=
	ebegin "Setting up the Logical Volume Manager"
	if [ -e /proc/modules ] && ! dm_in_proc; then
		modprobe dm-mod 2>/dev/null
	fi
	if [ -d /proc/lvm ] || dm_in_proc; then
		vgscan --mknodes --ignorelockingfailure
		vgchange --sysinit --activate y
		rc=$?
	else
		rc=1
	fi
	eend $rc
}

stop() {
	ebegin "Shutting down the Logical Volume Manager"
	vgchange --ignorelockingfailure -a n >/dev/null 2>&1
	# At this stage all filesystems except rootfs have been
	# unmounted. A "standard" error here is failure to deactivate
	# the VG containing the rootfs (as it is still obviously in use)
	# so why bother giving a non-zero error code?
	eend 0
}
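For reference, the exit-status accumulation in dm_in_proc works because grep exits 0 on a match and non-zero otherwise, so the summed statuses are 0 only if the pattern was found in every file checked. A standalone sketch of that pattern, using throw-away files in place of /proc/devices and /proc/misc (the file names are made up for illustration):

```shell
# Same accumulation pattern as dm_in_proc, run against two
# temporary files instead of /proc entries.
rc=0
for f in /tmp/dmtest1 /tmp/dmtest2; do
    printf 'device-mapper\n' > "$f"   # both files contain the pattern
    grep -qs 'device-mapper' "$f"
    rc=$((rc + $?))                   # add this grep's exit status
    rm -f "$f"
done
echo "rc=$rc"   # 0 only if every file matched
```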