[azure][PATCH v1 8/9] UBUNTU: SAUCE: netvsc: add documentation
Marcelo Henrique Cerri
marcelo.cerri at canonical.com
Fri Aug 4 14:54:43 UTC 2017
From: stephen hemminger <stephen at networkplumber.org>
Add some background documentation on netvsc device options
Signed-off-by: Stephen Hemminger <sthemmin at microsoft.com>
Signed-off-by: David S. Miller <davem at davemloft.net>
(cherry picked from net-next commit a5050c61036859e6fd7924f25cc6a97e7462039d)
Signed-off-by: Marcelo Henrique Cerri <marcelo.cerri at canonical.com>
Documentation/networking/netvsc.txt | 63 +++++++++++++++++++++++++++++++++++++
MAINTAINERS | 1 +
2 files changed, 64 insertions(+)
create mode 100644 Documentation/networking/netvsc.txt
diff --git a/Documentation/networking/netvsc.txt b/Documentation/networking/netvsc.txt
new file mode 100644
@@ -0,0 +1,63 @@
+Hyper-V network driver
+This driver is compatible with Windows Server 2012 R2, 2016 and
+Windows 10.
+ Checksum offload
+ The netvsc driver supports checksum offload as long as the
+ Hyper-V host version does. Windows Server 2016 and Azure
+ support checksum offload for TCP and UDP for both IPv4 and
+ IPv6. Windows Server 2012 only supports checksum offload for TCP.
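As a concrete illustration (not part of this patch), here is a minimal C sketch that queries the current RX/TX checksum offload state through the legacy ethtool ioctl, the same state that "ethtool -k eth0" reports; the interface name eth0 is an assumption:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_value eval;
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed name */
    ifr.ifr_data = (void *)&eval;

    eval.cmd = ETHTOOL_GRXCSUM;  /* receive checksum offload state */
    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("rx-checksumming: %s\n", eval.data ? "on" : "off");

    eval.cmd = ETHTOOL_GTXCSUM;  /* transmit checksum offload state */
    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("tx-checksumming: %s\n", eval.data ? "on" : "off");

    close(fd);
    return 0;
}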
+ Receive Side Scaling
+ Hyper-V supports receive side scaling. For TCP, packets are
+ distributed among available queues based on IP address and port
+ number. Current versions of the Hyper-V host only distribute UDP
+ packets based on the IP source and destination address.
+ The port number is not used as part of the hash value for UDP.
+ Fragmented IP packets are not distributed between queues;
+ all fragmented packets arrive on the first channel.
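To observe this difference from inside a guest, a small C sketch (illustrative only, not from this patch) can read the configured hash fields per flow type with ETHTOOL_GRXFH; on hosts that hash UDP by address only, the port bits should be absent for udp4. The interface name eth0 is assumed, and the driver may simply not report this:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

static void show_hash_fields(int fd, struct ifreq *ifr, __u32 flow, const char *name)
{
    struct ethtool_rxnfc nfc;

    memset(&nfc, 0, sizeof(nfc));
    nfc.cmd = ETHTOOL_GRXFH;   /* get RX flow hash configuration */
    nfc.flow_type = flow;
    ifr->ifr_data = (void *)&nfc;

    if (ioctl(fd, SIOCETHTOOL, ifr) != 0) {
        printf("%s: not reported by this driver\n", name);
        return;
    }
    printf("%s: addresses %s, ports %s\n", name,
           (nfc.data & (RXH_IP_SRC | RXH_IP_DST)) ? "yes" : "no",
           (nfc.data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) ? "yes" : "no");
}

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed name */

    show_hash_fields(fd, &ifr, TCP_V4_FLOW, "tcp4");
    show_hash_fields(fd, &ifr, UDP_V4_FLOW, "udp4");

    close(fd);
    return 0;
}

The same information is available from the command line with "ethtool -u eth0 rx-flow-hash tcp4" (and udp4).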
+ Generic Receive Offload, aka GRO
+ The driver supports GRO and it is enabled by default. GRO coalesces
+ like packets and significantly reduces CPU usage under heavy Rx load.
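A short sketch in the same vein (illustrative only; ETHTOOL_GGRO is the legacy ioctl behind the "ethtool -k" GRO flag, and eth0 is again an assumed name) for checking whether GRO is currently on:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_value eval = { .cmd = ETHTOOL_GGRO };  /* GRO state */
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed name */
    ifr.ifr_data = (void *)&eval;

    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("generic-receive-offload: %s\n", eval.data ? "on" : "off");

    close(fd);
    return 0;
}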
+ SR-IOV support
+ Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
+ is enabled in both the vSwitch and the guest configuration, then the
+ Virtual Function (VF) device is passed to the guest as a PCI
+ device. In this case, both a synthetic (netvsc) and VF device are
+ visible in the guest OS and both NICs have the same MAC address.
+ The VF is enslaved by the netvsc device. The netvsc driver will transparently
+ switch the data path to the VF when it is available and up.
+ Network state (addresses, firewall, etc.) should be applied only to the
+ netvsc device; the slave device should not be accessed directly in
+ most cases. The exception is when a special queue discipline or
+ flow direction is desired; such settings should be applied directly
+ to the VF slave device.
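Because the synthetic and VF devices share one MAC address, a guest can pair them by hardware address. A self-contained C sketch of that idea (an illustration, not from this patch; a real tool would need extra filtering, e.g. of all-zero MACs on virtual interfaces):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <ifaddrs.h>
#include <netpacket/packet.h>

int main(void)
{
    struct ifaddrs *ifas, *a, *b;

    if (getifaddrs(&ifas) != 0)
        return 1;

    for (a = ifas; a; a = a->ifa_next) {
        struct sockaddr_ll *la;

        if (!a->ifa_addr || a->ifa_addr->sa_family != AF_PACKET)
            continue;
        la = (struct sockaddr_ll *)a->ifa_addr;
        if (la->sll_halen == 0)
            continue;  /* skip interfaces without a hardware address */

        for (b = a->ifa_next; b; b = b->ifa_next) {
            struct sockaddr_ll *lb;

            if (!b->ifa_addr || b->ifa_addr->sa_family != AF_PACKET)
                continue;
            lb = (struct sockaddr_ll *)b->ifa_addr;
            if (lb->sll_halen == la->sll_halen &&
                memcmp(la->sll_addr, lb->sll_addr, la->sll_halen) == 0)
                printf("%s and %s share a MAC (likely netvsc + VF pair)\n",
                       a->ifa_name, b->ifa_name);
        }
    }
    freeifaddrs(ifas);
    return 0;
}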
+ Receive Buffer
+ Packets are received into a receive area which is created when the
+ device is probed. The receive area is broken into MTU-sized chunks and
+ each may contain one or more packets. The number of receive sections
+ may be changed via ethtool Rx ring parameters.
+ There is a similar send buffer which is used to aggregate packets for
+ sending. The send area is broken into chunks of 6144 bytes, each of
+ which may contain one or more packets. The send buffer is an
+ optimization; the driver will fall back to a slower method to handle
+ very large packets or when the send buffer area is exhausted.
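A final minimal sketch (illustrative, not part of the patch) reading the Rx ring parameters mentioned above via ETHTOOL_GRINGPARAM, the ioctl behind "ethtool -g eth0"; eth0 is an assumed name, and setting values would use ETHTOOL_SRINGPARAM the same way:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_ringparam ring;
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ring, 0, sizeof(ring));
    ring.cmd = ETHTOOL_GRINGPARAM;  /* get current and max ring sizes */

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed name */
    ifr.ifr_data = (void *)&ring;

    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("rx ring: %u of %u max\n",
               ring.rx_pending, ring.rx_max_pending);

    close(fd);
    return 0;
}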
diff --git a/MAINTAINERS b/MAINTAINERS
index 8618e6b21458..6f260f64dc05 100644
@@ -6086,6 +6086,7 @@ M: Haiyang Zhang <haiyangz at microsoft.com>
M: Stephen Hemminger <sthemmin at microsoft.com>
L: devel at linuxdriverproject.org
+F: Documentation/networking/netvsc.txt