[SRU][F][PATCH 1/1] drm/panfrost: Fix the error path in panfrost_mmu_map_fault_addr()

Hui Wang hui.wang at canonical.com
Wed Sep 25 04:01:10 UTC 2024


From: Boris Brezillon <boris.brezillon at collabora.com>

If the pages or sgt allocation fails, we shouldn't release the
pages ref we got earlier; otherwise we end up with unbalanced
get/put_pages() calls. Instead, leave everything in place and let
the BO release function deal with the extra cleanup when the object
is destroyed, or let the fault handler try again the next time it is
called.

Fixes: 187d2929206e ("drm/panfrost: Add support for GPU heap allocations")
Cc: <stable at vger.kernel.org>
Reviewed-by: Steven Price <steven.price at arm.com>
Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno at collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon at collabora.com>
Co-developed-by: Dmitry Osipenko <dmitry.osipenko at collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko at collabora.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240105184624.508603-18-dmitry.osipenko@collabora.com
(backported from commit 1fc9af813b25e146d3607669247d0f970f5a87c3)
[hui: This fix commit can't be cleanly applied to J and F because they
are missing the prerequisite commit 21aa27ddc582 ("drm/shmem-helper:
Switch to reservation lock"). That prerequisite introduces a
significant change, so it can't be brought into J and F. I therefore
edited the fix commit accordingly, changing "goto err_unlock" to
"goto err_bo".]
CVE-2024-35951
Signed-off-by: Hui Wang <hui.wang at canonical.com>
---
 drivers/gpu/drm/panfrost/panfrost_mmu.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c
index b17f3022db5a..d59cfad77bfe 100644
--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c
+++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c
@@ -498,12 +498,19 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	mapping_set_unevictable(mapping);
 
 	for (i = page_offset; i < page_offset + NUM_FAULT_PAGES; i++) {
+		/* Can happen if the last fault only partially filled this
+		 * section of the pages array before failing. In that case
+		 * we skip already filled pages.
+		 */
+		if (pages[i])
+			continue;
+
 		pages[i] = shmem_read_mapping_page(mapping, i);
 		if (IS_ERR(pages[i])) {
 			mutex_unlock(&bo->base.pages_lock);
 			ret = PTR_ERR(pages[i]);
 			pages[i] = NULL;
-			goto err_pages;
+			goto err_bo;
 		}
 	}
 
@@ -513,7 +520,7 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 	ret = sg_alloc_table_from_pages(sgt, pages + page_offset,
 					NUM_FAULT_PAGES, 0, SZ_2M, GFP_KERNEL);
 	if (ret)
-		goto err_pages;
+		goto err_bo;
 
 	if (!dma_map_sg(pfdev->dev, sgt->sgl, sgt->nents, DMA_BIDIRECTIONAL)) {
 		ret = -EINVAL;
@@ -534,8 +541,6 @@ static int panfrost_mmu_map_fault_addr(struct panfrost_device *pfdev, int as,
 
 err_map:
 	sg_free_table(sgt);
-err_pages:
-	drm_gem_shmem_put_pages(&bo->base);
 err_bo:
 	drm_gem_object_put_unlocked(&bo->base.base);
 	return ret;
-- 
2.34.1



