NAK: [PATCH v2][impish/linux-azure] UBUNTU: SAUCE: azure: Swiotlb: Add swiotlb_alloc_from_low_pages switch

Tim Gardner tim.gardner at canonical.com
Mon Apr 4 12:02:35 UTC 2022



On 3/30/22 12:41, Tim Gardner wrote:
> 
> 
> On 3/30/22 12:16, Thadeu Lima de Souza Cascardo wrote:
>> On Wed, Mar 30, 2022 at 12:09:56PM -0600, Tim Gardner wrote:
>>>
>>>
>>> On 3/30/22 12:00, Thadeu Lima de Souza Cascardo wrote:
>>>> On Wed, Mar 30, 2022 at 11:50:32AM -0600, Tim Gardner wrote:
>>>>> From: Tianyu Lan <Tianyu.Lan at microsoft.com>
>>>>>
>>>>> v2 - added proper subject
>>>>>
>>>>> BugLink: https://bugs.launchpad.net/bugs/1967166
>>>>>
>>>>> Hyper-V Isolation VMs and AMD SEV VMs use the swiotlb bounce buffer
>>>>> to share memory with the hypervisor. Currently the swiotlb bounce
>>>>> buffer is only allocated from 0 to ARCH_LOW_ADDRESS_LIMIT, which
>>>>> defaults to 0xffffffffUL. Isolation VMs and AMD SEV VMs need up to 1G
>>>>> of bounce buffer. The allocation will fail when there is not enough
>>>>> memory in the 0-4G address space, and devices may also use memory
>>>>> above the 4G address space as DMA memory.
>>>>> Expose swiotlb_alloc_from_low_pages so the platform may set it to
>>>>> false when it is not necessary to limit the bounce buffer to 0-4G
>>>>> memory.
>>>>>
>>>>> Signed-off-by: Tianyu Lan <Tianyu.Lan at microsoft.com>
>>>>> Signed-off-by: Tim Gardner <tim.gardner at canonical.com>
>>>>> ---
>>>>>
>>>>> This patch depends on a previous pull request:
>>>>> [Pull Request] [impish/linux-azure] Azure: Update Hyperv to 5.17
>>>>>
>>>>> ---
>>>>>    include/linux/swiotlb.h |  1 +
>>>>>    kernel/dma/swiotlb.c    | 17 +++++++++++++++--
>>>>>    2 files changed, 16 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
>>>>> index 2356da25c3b9..037356d57abf 100644
>>>>> --- a/include/linux/swiotlb.h
>>>>> +++ b/include/linux/swiotlb.h
>>>>> @@ -38,6 +38,7 @@ enum swiotlb_force {
>>>>>    extern void swiotlb_init(int verbose);
>>>>>    int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
>>>>>    unsigned long swiotlb_size_or_default(void);
>>>>> +void swiotlb_set_alloc_from_low_pages(bool low);
>>>>>    extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
>>>>>    extern int swiotlb_late_init_with_default_size(size_t default_size);
>>>>>    extern void __init swiotlb_update_mem_attributes(void);
>>>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>>>>> index c57e78071143..e2c1395740de 100644
>>>>> --- a/kernel/dma/swiotlb.c
>>>>> +++ b/kernel/dma/swiotlb.c
>>>>> @@ -67,6 +67,7 @@ enum swiotlb_force swiotlb_force;
>>>>>    struct io_tlb_mem *io_tlb_default_mem;
>>>>>    phys_addr_t swiotlb_unencrypted_base;
>>>>> +static bool swiotlb_alloc_from_low_pages = true;
>>>>>    /*
>>>>>     * Max segment that we can provide which (if pages are contingous) will
>>>>> @@ -109,6 +110,11 @@ void swiotlb_set_max_segment(unsigned int val)
>>>>>            max_segment = rounddown(val, PAGE_SIZE);
>>>>>    }
>>>>> +void swiotlb_set_alloc_from_low_pages(bool low)
>>>>> +{
>>>>> +    swiotlb_alloc_from_low_pages = low;
>>>>> +}
>>>>> +
>>>>
>>>> This looks to be missing the user of this function. Under that
>>>> condition, this doesn't look like it changes anything. Is it called by
>>>> some other commit in the pull request you mentioned? And, in that
>>>> case, isn't that breaking bisection?
>>>>
>>>
>>> IMHO this patch could use some work. I'm sure it won't get merged
>>> intact. I agree that there is no user for
>>> swiotlb_set_alloc_from_low_pages() yet. The major change that this
>>> patch makes is to use memblock_alloc_low() by default for the Azure
>>> and Azure-CVM kernels.
>>>
>>> rtg
>>>
>>>> Cascardo.
>>>>
>>>>>    unsigned long swiotlb_size_or_default(void)
>>>>>    {
>>>>>        return default_nslabs << IO_TLB_SHIFT;
>>>>> @@ -253,8 +259,15 @@ swiotlb_init(int verbose)
>>>>>        if (swiotlb_force == SWIOTLB_NO_FORCE)
>>>>>            return;
>>>>> -    /* Get IO TLB memory from the low pages */
>>>>> -    tlb = memblock_alloc_low(bytes, PAGE_SIZE);
>>
>> ^^^
>>
>> But isn't memblock_alloc_low being used by default already?
>>
> 
> Indeed it is. I looked right past that. Let's consider this NAK'd for
> now since it doesn't do anything constructive. In the meantime I'll get
> with MSFT and figure out what it is they really want to do.
> 
> rtg
> 

The real issue is that I forgot to attach the 2nd patch, which actually 
makes use of swiotlb_set_alloc_from_low_pages(). Since these patches are 
dependent on the Hyperv update to 5.17, I've attached them to the pull 
request '[Pull Request] [impish/linux-azure] Azure: Update Hyperv to 5.17'.

rtg
-- 
-----------
Tim Gardner
Canonical, Inc


