[Bionic][PATCH v2 00/21] blk-mq scheduler fixes

joserz at linux.ibm.com
Wed May 16 15:58:00 UTC 2018


Hello Kleber!

I've sent the 3rd version with the BugLink tag included. I'll also check
whether I can remove some commits from the set to make it less complex.

Thank you!

On Wed, May 16, 2018 at 12:01:03PM +0200, Kleber Souza wrote:
> On 05/10/18 18:23, Jose Ricardo Ziviani wrote:
> > From: Jose Ricardo Ziviani <joserz at linux.ibm.com>
> > 
> > Hello team!
> > 
> > Weeks ago I sent a patchset with:
> >  * genirq/affinity: Spread irq vectors among present CPUs as far as possible
> >  * blk-mq: simplify queue mapping & schedule with each possible CPU
> > 
> > Unfortunately they broke some cases, in particular the hpsa driver, so they
> > had to be reverted. However, the bugs in blk-mq lead to very unstable
> > KVM/QEMU virtual machines whenever CPU hotplug/unplug or SMT changes are
> > performed, and they also impact live migration.
> > 
> > So, this is a new attempt to have the patches included. This new version
> > includes all related fixes available upstream.
> > 
> > It's based on Ubuntu-4.15.0-21.22 tag.
> > 
> > Thank you!
> > 
> > Jose Ricardo Ziviani
> 
> Hi Ziviani,
> 
> The patches are missing the BugLink reference. Based on the history, I
> assume they are the follow-up fixes for LP: #1759723.
> 
> This is a large number of patches to be applied to some sensitive
> subsystems, which makes it hard to guarantee that there are no
> regressions. Would you be able to identify a smaller subset of these
> patches that would fix the problem you are facing?
> 
> 
> Thanks,
> Kleber
> 
> 
> > 
> > Bart Van Assche (1):
> >   blk-mq: Avoid that blk_mq_delay_run_hw_queue() introduces unintended
> >     delays
> > 
> > Christoph Hellwig (2):
> >   genirq/affinity: assign vectors to all possible CPUs
> >   blk-mq: simplify queue mapping & schedule with each possible CPU
> > 
> > Jens Axboe (1):
> >   blk-mq: fix discard merge with scheduler attached
> > 
> > Ming Lei (16):
> >   genirq/affinity: Rename *node_to_possible_cpumask as *node_to_cpumask
> >   genirq/affinity: Move actual irq vector spreading into a helper
> >     function
> >   genirq/affinity: Allow irq spreading from a given starting point
> >   genirq/affinity: Spread irq vectors among present CPUs as far as
> >     possible
> >   blk-mq: make sure hctx->next_cpu is set correctly
> >   blk-mq: make sure that correct hctx->next_cpu is set
> >   blk-mq: don't keep offline CPUs mapped to hctx 0
> >   blk-mq: avoid to write intermediate result to hctx->next_cpu
> >   blk-mq: introduce blk_mq_hw_queue_first_cpu() to figure out first cpu
> >   blk-mq: don't check queue mapped in __blk_mq_delay_run_hw_queue()
> >   nvme: pci: pass max vectors as num_possible_cpus() to
> >     pci_alloc_irq_vectors
> >   scsi: hpsa: fix selection of reply queue
> >   scsi: megaraid_sas: fix selection of reply queue
> >   scsi: core: introduce force_blk_mq
> >   scsi: virtio_scsi: fix IO hang caused by automatic irq vector affinity
> >   scsi: virtio_scsi: unify scsi_host_template
> > 
> > Thomas Gleixner (1):
> >   genirq/affinity: Don't return with empty affinity masks on error
> > 
> >  block/blk-core.c                            |   2 +
> >  block/blk-merge.c                           |  29 +++-
> >  block/blk-mq-cpumap.c                       |   5 -
> >  block/blk-mq.c                              |  65 +++++---
> >  drivers/nvme/host/pci.c                     |   2 +-
> >  drivers/scsi/hosts.c                        |   1 +
> >  drivers/scsi/hpsa.c                         |  73 ++++++---
> >  drivers/scsi/hpsa.h                         |   1 +
> >  drivers/scsi/megaraid/megaraid_sas.h        |   1 +
> >  drivers/scsi/megaraid/megaraid_sas_base.c   |  39 ++++-
> >  drivers/scsi/megaraid/megaraid_sas_fusion.c |  12 +-
> >  drivers/scsi/virtio_scsi.c                  | 129 ++-------------
> >  include/scsi/scsi_host.h                    |   3 +
> >  kernel/irq/affinity.c                       | 166 +++++++++++++-------
> >  14 files changed, 296 insertions(+), 232 deletions(-)
> > 
> 
