[Bug 1814389] Re: Second extend of second lvmraid mirror does not sync

Bug Watch Updater 1814389 at bugs.launchpad.net
Sat Feb 2 23:02:16 UTC 2019


Launchpad has imported 1 comment from the remote bug at
https://bugzilla.redhat.com/show_bug.cgi?id=1671964.

If you reply to an imported comment from within Launchpad, your comment
will be sent to the remote bug automatically. Read more about
Launchpad's inter-bugtracker facilities at
https://help.launchpad.net/InterBugTracking.

------------------------------------------------------------------------
On 2019-02-02T20:36:33+00:00 steved424 wrote:

Description of problem:

Extending an lvmraid(7) raid1 mirror for the second time seems to result
in the mirror legs not getting synced, *if* there is another raid1
mirror in the VG.

Version-Release number of selected component (if applicable):

2.02.176 (4.1ubuntu3)

How reproducible:

Seems to be reliably reproducible here on Ubuntu 18.04 at least.

Steps to Reproduce:

# quickly fill two 10G files with random data
openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd bs=$((1024*1024*1024)) count=10 of=pv1.img iflag=fullblock
openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd bs=$((1024*1024*1024)) count=10 of=pv2.img iflag=fullblock

# change loop devices if you have loads of snaps in use
losetup /dev/loop10 pv1.img
losetup /dev/loop11 pv2.img
pvcreate /dev/loop10
pvcreate /dev/loop11
vgcreate testvg /dev/loop10 /dev/loop11

lvcreate --type raid1 -L2G -n test testvg
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

# wait for sync
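
# The "wait for sync" steps can be scripted instead of eyeballing `watch`.
# A minimal sketch, assuming the sync_percent field format shown by lvs;
# the helper names and the 5-second interval are illustrative, not part
# of the original report:

```shell
# synced: true when a raw sync_percent field (e.g. "  100.00") reads 100.00
synced() {
    [ "$(printf '%s' "$1" | tr -d ' ')" = "100.00" ]
}

# wait_for_sync: poll lvs until the given LV reports a finished sync
# (illustrative helper; needs root and the testvg set up above)
wait_for_sync() {
    until synced "$(lvs --noheadings -o sync_percent "$1")"; do
        sleep 5
    done
}

# e.g.: wait_for_sync testvg/test
```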

lvcreate --type raid1 -L2G -n test2 testvg
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

# wait for sync

# the following will sync OK; watch the kernel log for md subsystem output noting the time taken
#
lvextend -L+2G testvg/test2
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

# wait for sync

# the following will FAIL to sync: the sync will appear to complete instantly, e.g.:
# Feb 02 15:22:50 asr-host kernel: md: resync of RAID array mdX
# Feb 02 15:22:50 asr-host kernel: md: mdX: resync done.
#
lvextend -L+2G testvg/test2

lvchange --syncaction check testvg/test2
watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

# observe error count
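
# For scripted checking rather than reading `watch` output, the mismatch
# count can be parsed from lvs directly. A minimal sketch, assuming
# raid_mismatch_count is a plain integer field; the helper name is
# illustrative, not part of the original report:

```shell
# mismatches_found: succeeds (exit 0) when a raw raid_mismatch_count
# field such as "  128" holds a nonzero number; fails for "  0" or empty
mismatches_found() {
    count=$(printf '%s' "$1" | tr -d ' ')
    [ "${count:-0}" -gt 0 ] 2>/dev/null
}

# usage against a live VG (needs root):
#   mismatches_found "$(lvs --noheadings -o raid_mismatch_count testvg/test2)" \
#       && echo "nonzero mismatch count: the mirror legs differ"
```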

Actual results:

The sync after the final lvextend completes instantly, and a subsequent
lvchange --syncaction check reports a high number for
raid_mismatch_count

Expected results:

The sync after the final lvextend should take at least a few seconds,
and a subsequent lvchange --syncaction check should not report any
mismatches in raid_mismatch_count (unless the underlying hardware has
failed).

Additional info:

Launchpad bug:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1814389

Reply at:
https://bugs.launchpad.net/ubuntu/+source/lvm2/+bug/1814389/comments/2


** Changed in: lvm2
       Status: Unknown => Confirmed

** Changed in: lvm2
   Importance: Unknown => Undecided

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to lvm2 in Ubuntu.
https://bugs.launchpad.net/bugs/1814389

Title:
  Second extend of second lvmraid mirror does not sync

Status in lvm2:
  Confirmed
Status in lvm2 package in Ubuntu:
  New

Bug description:
  This is a weird corner case. Extending an lvmraid(7) raid1 mirror for
  the second time seems to result in the mirror legs not getting synced,
  *if* there is another raid1 mirror in the VG. This reliably reproduces
  for me:

  # quickly fill two 10G files with random data
  openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd bs=$((1024*1024*1024)) count=10 of=pv1.img iflag=fullblock
  openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero | dd bs=$((1024*1024*1024)) count=10 of=pv2.img iflag=fullblock

  # change loop devices if you have loads of snaps in use
  losetup /dev/loop10 pv1.img
  losetup /dev/loop11 pv2.img
  pvcreate /dev/loop10
  pvcreate /dev/loop11
  vgcreate testvg /dev/loop10 /dev/loop11

  lvcreate --type raid1 -L2G -n test testvg
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

  # wait for sync

  lvcreate --type raid1 -L2G -n test2 testvg
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

  # wait for sync

  # the following will sync OK; watch the kernel log for md subsystem output noting the time taken
  #
  lvextend -L+2G testvg/test2
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

  # wait for sync

  # the following will FAIL to sync: the sync will appear to complete instantly, e.g.:
  # Feb 02 15:22:50 asr-host kernel: md: resync of RAID array mdX
  # Feb 02 15:22:50 asr-host kernel: md: mdX: resync done.
  #
  lvextend -L+2G testvg/test2

  lvchange --syncaction check testvg/test2
  watch lvs -o +raid_sync_action,sync_percent,raid_mismatch_count testvg

  # observe error count

  This may cause unnecessary administrator alarm ... :/

  For some reason, the precise sizes with which the LVs are created, and
  are then extended by, do appear to matter.

  ProblemType: Bug
  DistroRelease: Ubuntu 18.04
  Package: lvm2 2.02.176-4.1ubuntu3
  ProcVersionSignature: Ubuntu 4.15.0-43.46-generic 4.15.18
  Uname: Linux 4.15.0-43-generic x86_64
  ApportVersion: 2.20.9-0ubuntu7.5
  Architecture: amd64
  Date: Sat Feb  2 15:33:16 2019
  ProcEnviron:
   TERM=screen
   PATH=(custom, no user)
   LANG=en_GB.UTF-8
   SHELL=/bin/bash
  SourcePackage: lvm2
  UpgradeStatus: No upgrade log present (probably fresh install)
  mtime.conffile..etc.lvm.lvm.conf: 2018-07-22T18:30:15.470358

To manage notifications about this bug go to:
https://bugs.launchpad.net/lvm2/+bug/1814389/+subscriptions


