ACK: [PATCH 1/1] vfio/type1: Limit DMA mappings per container

Colin Ian King colin.king at canonical.com
Thu Apr 18 08:48:35 UTC 2019


On 18/04/2019 08:28, Tyler Hicks wrote:
> From: Alex Williamson <alex.williamson at redhat.com>
> 
> Memory backed DMA mappings are accounted against a user's locked
> memory limit, including multiple mappings of the same memory.  This
> accounting bounds the number of such mappings that a user can create.
> However, DMA mappings that are not backed by memory, such as DMA
> mappings of device MMIO via mmaps, do not make use of page pinning
> and therefore do not count against the user's locked memory limit.
> These mappings still consume memory, but the memory is not well
> associated to the process for the purpose of oom killing a task.
> 
> To add bounding on this use case, we introduce a limit to the total
> number of concurrent DMA mappings that a user is allowed to create.
> This limit is exposed as a tunable module option where the default
> value of 64K is expected to be well in excess of any reasonable use
> case (a large virtual machine configuration would typically only make
> use of tens of concurrent mappings).
> 
> This fixes CVE-2019-3882.
> 
> Reviewed-by: Eric Auger <eric.auger at redhat.com>
> Tested-by: Eric Auger <eric.auger at redhat.com>
> Reviewed-by: Peter Xu <peterx at redhat.com>
> Reviewed-by: Cornelia Huck <cohuck at redhat.com>
> Signed-off-by: Alex Williamson <alex.williamson at redhat.com>
> 
> CVE-2019-3882
> 
> (backported from commit 492855939bdb59c6f947b0b5b44af9ad82b7e38c)
> [tyhicks: Backport to 4.4:
>  - Minor context differences due to missing blocking notifier from commit
>    c086de818dd8 ("vfio iommu: Add blocking notifier to notify DMA_UNMAP")
>  - vfio_dma_do_map() doesn't yet have an out_unlock label which was added in
>    commit 8f0d5bb95f76 ("vfio iommu type1: Add task structure to vfio_dma")]
> Signed-off-by: Tyler Hicks <tyhicks at canonical.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 2fa280671c1e..875634d0d020 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -53,10 +53,16 @@ module_param_named(disable_hugepages,
>  MODULE_PARM_DESC(disable_hugepages,
>  		 "Disable VFIO IOMMU support for IOMMU hugepages.");
>  
> +static unsigned int dma_entry_limit __read_mostly = U16_MAX;
> +module_param_named(dma_entry_limit, dma_entry_limit, uint, 0644);
> +MODULE_PARM_DESC(dma_entry_limit,
> +		 "Maximum number of user DMA mappings per container (65535).");
> +
>  struct vfio_iommu {
>  	struct list_head	domain_list;
>  	struct mutex		lock;
>  	struct rb_root		dma_list;
> +	unsigned int		dma_avail;
>  	bool			v2;
>  	bool			nesting;
>  };
> @@ -382,6 +388,7 @@ static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma)
>  	vfio_unmap_unpin(iommu, dma);
>  	vfio_unlink_dma(iommu, dma);
>  	kfree(dma);
> +	iommu->dma_avail++;
>  }
>  
>  static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> @@ -582,12 +589,18 @@ static int vfio_dma_do_map(struct vfio_iommu *iommu,
>  		return -EEXIST;
>  	}
>  
> +	if (!iommu->dma_avail) {
> +		mutex_unlock(&iommu->lock);
> +		return -ENOSPC;
> +	}
> +
>  	dma = kzalloc(sizeof(*dma), GFP_KERNEL);
>  	if (!dma) {
>  		mutex_unlock(&iommu->lock);
>  		return -ENOMEM;
>  	}
>  
> +	iommu->dma_avail--;
>  	dma->iova = iova;
>  	dma->vaddr = vaddr;
>  	dma->prot = prot;
> @@ -903,6 +916,7 @@ static void *vfio_iommu_type1_open(unsigned long arg)
>  
>  	INIT_LIST_HEAD(&iommu->domain_list);
>  	iommu->dma_list = RB_ROOT;
> +	iommu->dma_avail = dma_entry_limit;
>  	mutex_init(&iommu->lock);
>  
>  	return iommu;
> 

Backport looks good. A couple of small notes below.
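
One deployment note: because the parameter is registered with mode 0644,
the limit can also be raised after load through the standard sysfs path
for a module parameter, /sys/module/vfio_iommu_type1/parameters/dma_entry_limit,
not just with dma_entry_limit=N at module load time. Since dma_avail is
snapshotted from dma_entry_limit in vfio_iommu_type1_open(), a runtime
change only affects containers opened afterwards.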

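If it helps with verification, below is an untested sketch (mine, not
from the patch) of how the new limit should surface to userspace. The
expectation is that VFIO_IOMMU_MAP_DMA starts failing with ENOSPC once
a container holds dma_entry_limit mappings. Container and group wiring
is environment-specific and elided here, and RLIMIT_MEMLOCK needs to be
raised so the existing locked-memory accounting doesn't trip first:

/*
 * Untested sketch, not part of the patch: map the same page at
 * successive IOVAs until the container refuses.  With this change
 * applied, VFIO_IOMMU_MAP_DMA should fail with ENOSPC once the
 * container holds dma_entry_limit mappings.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (unsigned long)page,
		.size  = 4096,
	};
	unsigned long i;

	if (container < 0 || page == MAP_FAILED)
		return 1;

	/* ... VFIO_GROUP_SET_CONTAINER + VFIO_SET_IOMMU elided; a bound
	 * device group is needed before MAP_DMA will succeed ... */

	for (i = 0;; i++) {
		map.iova = i * 4096;	/* distinct IOVA per mapping */
		if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map) < 0) {
			printf("map %lu: %s\n", i, strerror(errno));
			break;	/* expect ENOSPC at dma_entry_limit */
		}
	}
	return 0;
}
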
Acked-by: Colin Ian King <colin.king at canonical.com>


