[PATCH 0/1][Focal/Impish/Jammy linux-azure] PCI: hv: Do not set PCI_COMMAND_MEMORY to reduce VM boot time
Tim Gardner
tim.gardner at canonical.com
Mon May 9 15:50:07 UTC 2022
BugLink: https://bugs.launchpad.net/bugs/1972662
SRU Justification
[Impact]
A VM on Azure can have 14 GPUs, and each GPU may have a huge MMIO BAR,
e.g. 128 GB. Currently the boot time of such a VM can be 4+ minutes, and
most of that time is spent by the host unmapping/mapping the vBARs from/to
the pBARs whenever the VM clears or sets the PCI_COMMAND_MEMORY bit: each
unmap/map operation for a 128 GB BAR takes about 1.8 seconds, and the pci-hyperv
driver and the Linux PCI subsystem flip the PCI_COMMAND_MEMORY bit
eight times (see pci_setup_device() -> pci_read_bases() and
pci_std_update_resource()), increasing the boot time by 1.8 * 8 = 14.4
seconds per GPU, i.e. 14.4 * 14 = 201.6 seconds in total.
Fix the slowness by not turning on the PCI_COMMAND_MEMORY bit in
pci-hyperv.c, so the bit stays off until the PCI device driver calls
pci_enable_device(): while the bit is off, pci_read_bases() and
pci_std_update_resource() don't cause Hyper-V to unmap/map the vBARs.
Only the single map performed when the driver finally enables the device
remains, so the boot time of such a VM is reduced by
1.8 * (8-1) * 14 = 176.4 seconds.
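For illustration, below is a minimal user-space sketch (not the actual
patch) of the decode-disable pattern that drives these flips, modelled
loosely on the BAR-sizing path under pci_read_bases(). The cfg_read()/
cfg_write() helpers, the toggle counter and the pass count of four are
illustrative stand-ins, not kernel APIs or exact call counts; the point
is that each sizing pass costs two PCI_COMMAND writes (two host
unmap/map operations) when memory decode is already on, and none when
the bit is left off.

/*
 * Hypothetical, simplified model: a fake PCI_COMMAND register plus a
 * counter for how often the memory-decode bit changes, i.e. how often
 * the Hyper-V host would unmap/map the vBAR (~1.8 s for a 128 GB BAR).
 */
#include <stdio.h>
#include <stdint.h>

#define PCI_COMMAND        0x04
#define PCI_COMMAND_MEMORY 0x2	/* enable response in memory space */

static uint16_t command_reg;		/* emulated PCI_COMMAND register */
static unsigned int decode_toggles;	/* host unmap/map operations */

static uint16_t cfg_read(int reg)
{
	(void)reg;
	return command_reg;
}

static void cfg_write(int reg, uint16_t val)
{
	(void)reg;
	if ((command_reg ^ val) & PCI_COMMAND_MEMORY)
		decode_toggles++;	/* host unmaps or maps the vBAR */
	command_reg = val;
}

/* One BAR-sizing pass: disable memory decode, size the BAR, restore. */
static void size_one_bar(void)
{
	uint16_t orig = cfg_read(PCI_COMMAND);

	if (orig & PCI_COMMAND_MEMORY)
		cfg_write(PCI_COMMAND, orig & ~PCI_COMMAND_MEMORY);

	/* ... read/write the BAR register here to determine its size ... */

	if (orig & PCI_COMMAND_MEMORY)
		cfg_write(PCI_COMMAND, orig);	/* restore decode */
}

int main(void)
{
	/* Old behaviour: pci-hyperv pre-sets PCI_COMMAND_MEMORY. */
	command_reg = PCI_COMMAND_MEMORY;
	decode_toggles = 0;
	for (int pass = 0; pass < 4; pass++)
		size_one_bar();
	printf("bit pre-set:  %u decode toggles\n", decode_toggles);

	/* New behaviour: the bit stays off until pci_enable_device(). */
	command_reg = 0;
	decode_toggles = 0;
	for (int pass = 0; pass < 4; pass++)
		size_one_bar();
	printf("bit left off: %u decode toggles\n", decode_toggles);
	return 0;
}

Compiled as a plain C program this prints 8 decode toggles for the old
behaviour and 0 for the new one; the one remaining map then happens when
the GPU driver later calls pci_enable_device().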
[Test Case]
Tested by Microsoft.
[Where things could go wrong]
PCI BAR setup could fail or be incorrect: since pci-hyperv no longer
pre-sets PCI_COMMAND_MEMORY, devices depend on pci_enable_device() to
turn on memory decode before any MMIO access.
[Other Info]
SF: #00336342