[SRU][J][PATCH 0/4] NVMe TCP - Host fails to reconnect to target after link down/link up sequence

Michael Reed michael.reed at canonical.com
Fri Nov 11 01:26:22 UTC 2022


From: Michael Reed <Michael.Reed at canonical.com>

[Impact]
An Ubuntu 22.04 host fails to reconnect to an NVMe TCP target after a link down event if the number of queues offered by the target has changed while the link was down.

[Fix]
The following upstream patch set addresses the issue.

1.
nvmet: Expose max queues to configfs
https://git.infradead.org/nvme.git/commit/2c4282742d049e2a5ab874e2b359a2421b9377c2

2.
nvme-tcp: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/516204e486a19d03962c2757ef49782e6c1cacf4

3.
nvme-rdma: Handle number of queue changes
https://git.infradead.org/nvme.git/commit/e800278c1dc97518eab1970f8f58a5aad52b0f86

The patch in point 2 above addresses the failure to reconnect in the NVMe TCP scenario.

In addition, the following patch fixes an error-code parsing issue in the reconnect sequence.

nvme-fabrics: parse nvme connect Linux error codes
https://git.infradead.org/nvme.git/commit/ec9e96b5230148294c7abcaf3a4c592d3720b62d

[Test Plan]
On a host connected to an NVMe TCP (or NVMe-RDMA) target, change the number of I/O queues the target offers, then force a link down/link up sequence. Verify that the host reconnects successfully with the new queue count and that I/O resumes.

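The steps above can be sketched with a soft target, assuming a kernel with the nvmet TCP target (CONFIG_NVME_TARGET_TCP), an existing subsystem named testnqn, and the max-queues attribute added by patch 1; the attribute path, addresses, and interface names here are illustrative assumptions, not part of the patch set.

```shell
# On the target: lower the maximum number of queues the subsystem offers
# (attribute name assumed from the "expose max queues to configfs" patch).
echo 4 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_qid_max

# On the host: connect, then bounce the link to force a reconnect.
nvme connect -t tcp -a 10.0.0.1 -s 4420 -n testnqn
ip link set eth1 down
sleep 5
ip link set eth1 up

# Without the fix the reconnect fails; with it, the host should come back
# with the reduced queue count. Check the kernel log for the result.
dmesg | grep -i 'nvme.*reconnect'
```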
[Where problems could occur]
The changes are confined to the NVMe fabrics host drivers (tcp, rdma), the fabrics error-code parsing, and the nvmet target configfs interface. Any regression would most likely surface during controller reset or reconnect handling on NVMe over Fabrics; local PCIe NVMe devices are unaffected.

[Other Info]

Test Kernel Source

https://code.launchpad.net/~mreed8855/ubuntu/+source/linux/+git/jammy/+ref/lp_1989990_nvme_tcp

Amit Engel (1):
  nvme-fabrics: parse nvme connect Linux error codes

Daniel Wagner (3):
  nvme-tcp: handle number of queue changes
  nvme-rdma: handle number of queue changes
  nvmet: expose max queues to configfs

 drivers/nvme/host/fabrics.c    |  6 ++++++
 drivers/nvme/host/rdma.c       | 26 +++++++++++++++++++++-----
 drivers/nvme/host/tcp.c        | 26 +++++++++++++++++++++-----
 drivers/nvme/target/configfs.c | 29 +++++++++++++++++++++++++++++
 4 files changed, 77 insertions(+), 10 deletions(-)

-- 
2.34.1