[PATCH v2][impish/linux-azure] UBUNTU: SAUCE: azure: Swiotlb: Add swiotlb_alloc_from_low_pages switch

Tim Gardner tim.gardner at canonical.com
Wed Mar 30 17:50:32 UTC 2022


From: Tianyu Lan <Tianyu.Lan at microsoft.com>

v2 - added proper subject

BugLink: https://bugs.launchpad.net/bugs/1967166

Hyper-V Isolation VMs and AMD SEV VMs use the swiotlb bounce buffer to
share memory with the hypervisor. The swiotlb bounce buffer is currently
allocated only from 0 to ARCH_LOW_ADDRESS_LIMIT, which defaults to
0xffffffffUL. Isolation VMs and AMD SEV VMs need up to a 1G bounce
buffer. This allocation can fail when there is not enough memory in the
0-4G address space, and devices may also use memory above the 4G address
space as DMA memory. Expose swiotlb_set_alloc_from_low_pages() so that a
platform may set the flag to false when it is not necessary to limit the
bounce buffer to 0-4G memory.
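For illustration only (not part of this patch), a platform init path
might clear the flag before swiotlb_init() runs; the setter below is the
one added by this patch, while the surrounding init function is a
hypothetical sketch:

```c
/* Hypothetical platform early-boot sketch, not in this patch. */
#include <linux/swiotlb.h>

static void __init example_platform_mem_init(void)
{
	/*
	 * This platform's devices can DMA above 4G, so the bounce
	 * buffer need not come from low pages; let memblock allocate
	 * it anywhere. Must run before swiotlb_init().
	 */
	swiotlb_set_alloc_from_low_pages(false);
}
```

The call must happen before swiotlb_init() is reached in setup_arch(),
since the flag is only consulted at bounce-buffer allocation time.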

Signed-off-by: Tianyu Lan <Tianyu.Lan at microsoft.com>
Signed-off-by: Tim Gardner <tim.gardner at canonical.com>
---

This patch depends on a previous pull request:
[Pull Request] [impish/linux-azure] Azure: Update Hyperv to 5.17

---
 include/linux/swiotlb.h |  1 +
 kernel/dma/swiotlb.c    | 17 +++++++++++++++--
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 2356da25c3b9..037356d57abf 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -38,6 +38,7 @@ enum swiotlb_force {
 extern void swiotlb_init(int verbose);
 int swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose);
 unsigned long swiotlb_size_or_default(void);
+void swiotlb_set_alloc_from_low_pages(bool low);
 extern int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs);
 extern int swiotlb_late_init_with_default_size(size_t default_size);
 extern void __init swiotlb_update_mem_attributes(void);
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c57e78071143..e2c1395740de 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -67,6 +67,7 @@ enum swiotlb_force swiotlb_force;
 struct io_tlb_mem *io_tlb_default_mem;
 
 phys_addr_t swiotlb_unencrypted_base;
+static bool swiotlb_alloc_from_low_pages = true;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -109,6 +110,11 @@ void swiotlb_set_max_segment(unsigned int val)
 		max_segment = rounddown(val, PAGE_SIZE);
 }
 
+void swiotlb_set_alloc_from_low_pages(bool low)
+{
+	swiotlb_alloc_from_low_pages = low;
+}
+
 unsigned long swiotlb_size_or_default(void)
 {
 	return default_nslabs << IO_TLB_SHIFT;
@@ -253,8 +259,15 @@ swiotlb_init(int verbose)
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
 		return;
 
-	/* Get IO TLB memory from the low pages */
-	tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+	/*
+	 * Get IO TLB memory from the low pages if swiotlb_alloc_from_low_pages
+	 * is set.
+	 */
+	if (swiotlb_alloc_from_low_pages)
+		tlb = memblock_alloc_low(bytes, PAGE_SIZE);
+	else
+		tlb = memblock_alloc(bytes, PAGE_SIZE);
+
 	if (!tlb)
 		goto fail;
 	if (swiotlb_init_with_tbl(tlb, default_nslabs, verbose))
-- 
2.35.1



