ACK/Cmnt: [SRU][F:linux-bluefield][PATCH v3 1/1] UBUNTU: SAUCE: tmfifo: Fix a memory barrier issue

Tim Gardner tim.gardner at canonical.com
Fri May 7 14:35:52 UTC 2021


Acked-by: Tim Gardner <tim.gardner at canonical.com>

It's unlikely to make things any worse. Should this patch be submitted 
upstream?

On 5/6/21 6:30 AM, Liming Sun wrote:
> From: Liming Sun <lsun at mellanox.com>
> 
> BugLink: https://bugs.launchpad.net/bugs/1927262
> 
> The virtio framework uses wmb() when updating avail->idx. That
> guarantees the write order, but not the load order for the code
> that accesses the memory. This commit adds a load barrier after
> reading avail->idx to make sure all the data in the descriptor is
> visible. It also adds a barrier when returning the packet to the
> virtio framework to make sure the reads/writes are visible to the
> virtio code.
> 
> Signed-off-by: Liming Sun <limings at nvidia.com>
> ---
>   drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
>   1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
> index 5739a966..92bda873 100644
> --- a/drivers/platform/mellanox/mlxbf-tmfifo.c
> +++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
> @@ -294,6 +294,9 @@ static irqreturn_t mlxbf_tmfifo_irq_handler(int irq, void *arg)
>   	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
>   		return NULL;
>   
> +	/* Make sure 'avail->idx' is visible already. */
> +	virtio_rmb(false);
> +
>   	idx = vring->next_avail % vr->num;
>   	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
>   	if (WARN_ON(head >= vr->num))
> @@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
>   	 * done or not. Add a memory barrier here to make sure the update above
>   	 * completes before updating the idx.
>   	 */
> -	mb();
> +	virtio_mb(false);
>   	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
>   }
>   
> @@ -730,6 +733,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
>   		desc = NULL;
>   		fifo->vring[is_rx] = NULL;
>   
> +		/*
> +		 * Make sure the load/store are in order before
> +		 * returning back to virtio.
> +		 */
> +		virtio_mb(false);
> +
>   		/* Notify upper layer that packet is done. */
>   		spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
>   		vring_interrupt(0, vring->vq);
> 
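
For reference, a minimal sketch of how the two barriers pair up. This is
not part of the patch; the function names below are placeholders rather
than the real mlxbf-tmfifo symbols. virtio_rmb(false) on the consumer
path orders the avail->idx load before the descriptor loads, and
virtio_mb(false) on the completion path orders the descriptor updates
before the used->idx store that hands the buffer back to virtio.

#include <linux/errno.h>
#include <linux/types.h>
#include <linux/virtio_config.h>	/* virtio16_to_cpu(), cpu_to_virtio16() */
#include <linux/virtio_ring.h>		/* virtio_rmb(), virtio_mb(), struct vring */

/*
 * Consumer side: read the index the other side published, then make
 * sure the descriptor contents it points at are only loaded afterwards.
 */
static int example_get_next_head(struct virtio_device *vdev,
				 struct vring *vr, u16 next_avail)
{
	u16 head, idx;

	if (next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
		return -EAGAIN;		/* nothing new published yet */

	/*
	 * Pairs with the producer's wmb(): the avail->idx load above must
	 * complete before any load from avail->ring[] or the descriptor.
	 */
	virtio_rmb(false);

	idx = next_avail % vr->num;
	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
	return head < vr->num ? head : -EINVAL;
}

/*
 * Completion side: the descriptor/data updates done earlier must be
 * visible before used->idx tells virtio that the buffer is finished.
 */
static void example_release_desc(struct virtio_device *vdev,
				 struct vring *vr, u16 vr_idx)
{
	virtio_mb(false);
	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
}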

-- 
-----------
Tim Gardner
Canonical, Inc


