[Bug 1671951] Re: networkd should allow configuring IPV6 MTU
Balint Reczey
balint.reczey at canonical.com
Thu Oct 31 19:25:38 UTC 2019
In my test I modified a multipass-launched VM's netplan config to:
multipass@safe-hornet:~$ cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ens3:
            dhcp4: true
            mtu: 1496
            match:
                macaddress: 52:54:00:4c:5b:ac
            set-name: ens3
            dhcp4-overrides:
                use-mtu: false
            dhcp6-overrides:
                use-mtu: false
            ipv6-mtu: 1284
    version: 2
I also disabled cloud-init from overwriting it:
multipass@safe-hornet:~$ cat /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
network: {config: disabled}
IPv6 MTU is properly set on Eoan after reboot:
multipass@safe-hornet:~$ sysctl net.ipv6.conf.ens3.mtu
net.ipv6.conf.ens3.mtu = 1284
... but not on Disco or Bionic. To be continued...
--
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to systemd in Ubuntu.
https://bugs.launchpad.net/bugs/1671951
Title:
  networkd should allow configuring IPV6 MTU

Status in cloud-init package in Ubuntu:
  Confirmed
Status in netplan.io package in Ubuntu:
  Fix Released
Status in systemd package in Ubuntu:
  Fix Released
Status in cloud-init source package in Bionic:
  Confirmed
Status in netplan.io source package in Bionic:
  Fix Released
Status in systemd source package in Bionic:
  Triaged
Status in cloud-init source package in Disco:
  New
Status in netplan.io source package in Disco:
  Fix Released
Status in systemd source package in Disco:
  Triaged
Bug description:
= netplan.io =
[Impact]
* IPv6 traffic failing to send/receive due to an incompatible/low MTU
setting. Specifically, IPv6 traffic may have higher MTU requirements
than IPv4 traffic, so the IPv6 MTU may need to be overridden and/or
set to a higher value than the MTU used for IPv4 traffic.
[Test Case]
* Apply a netplan configuration that specifies ipv6-mtu:

    network:
        version: 2
        ethernets:
            eth0:
                dhcp4: true
                dhcp6: true
                ipv6-mtu: 6000
* Check that the interface's IPv6 MTU is at least the configured ipv6-mtu value:
$ sysctl net.ipv6.conf.eth0.mtu
net.ipv6.conf.eth0.mtu = 6000
[Regression Potential]
* This is a future-compatible backport of an additional keyword that is
not used by default. It may result in the MTU changing to a higher
value, which should not cause loss of connectivity.
[Other Info]
* Original bug report below
= end of netplan.io =
= systemd =
[Impact]
* IPv6 traffic failing to send/receive due to an incompatible/low MTU
setting. Specifically, IPv6 traffic may have higher MTU requirements
than IPv4 traffic, so the IPv6 MTU may need to be overridden and/or
set to a higher value than the MTU used for IPv4 traffic.
[Test Case]
* Use the IPv6MTUBytes= setting in a .network unit (a sketch follows this list)
* Restart systemd-networkd
* Check that there are no error messages / warnings about not recognizing this option
* Check that the interface's IPv6 MTU is at least the configured IPv6MTUBytes value
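A minimal sketch of such a .network unit; the interface name, file path,
and MTU value are illustrative assumptions, not taken from the bug:

    # /etc/systemd/network/10-eth0.network (hypothetical path)
    [Match]
    Name=eth0

    [Network]
    DHCP=yes
    # Request an IPv6 MTU of 6000 bytes on this link
    IPv6MTUBytes=6000

After restarting with systemctl restart systemd-networkd, the applied
value can be read back with sysctl net.ipv6.conf.eth0.mtu.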
[Regression Potential]
* This is a future-compatible backport of an additional keyword that is
not used by default. It may result in the MTU changing to a higher
value, which should not cause loss of connectivity.
[Other Info]
* Original bug report below
= end of systemd =
1) Zesty
2) systemd-232-19
3) I need to configure the IPv6 MTU for tunneling by adding an IPv6MTUBytes=1480 value to the [Network] section of the .network file for an interface with a static IPv6 address (see the sketch after this list).
4) networkd does not parse or read the value, and does not apply this configuration to the interface.
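For illustration, the reported configuration would look roughly like the
following .network file; the interface name and the documentation-prefix
addresses are assumptions for the sketch:

    # /etc/systemd/network/20-static.network (hypothetical path)
    [Match]
    Name=eth0

    [Network]
    # Static IPv6 address (2001:db8::/32 is the documentation prefix)
    Address=2001:db8::2/64
    Gateway=2001:db8::1
    # The setting networkd ignored at the time of this report
    IPv6MTUBytes=1480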
Upstream has discussed this issue here:
https://github.com/systemd/systemd/pull/1533
But it was closed in favor of setting the IPv6 MTU only via Router Advertisements (RA).
However, we know of multiple use cases which are currently supported in
ifupdown where we want to retain control over IPv6 MTU values outside
of PMTU Discovery configurations.

Some context from those discussions:
>> Client systems that route their ipv6 packets to a 6in4 router also
>> have to have their ipv6 mtu lowered. They could lower their link mtu,
>> so their ipv6 packets are small enough, but that reduces performance
>> of their ipv4 network.
Yes. Anything that creates a PMTUD black hole can result in
situations where the higher header overhead of IPv6 will cause IPv4
traffic to pass but IPv6 traffic to be dropped.
One example here is egress from an ipsec tunnel wherein the next
hop MTU is too low for IPv6 datagrams to pass. Another is VM ->
whatever -> host bridge -> tunnel ingress. If the datagram cannot enter
the tunnel due to size, it is dropped, and an ICMP response uses the
tunnel address as a source, which may not be routable back to the
origin. This one is an issue with IPv4 as well, and is one case where
manually setting the IPv6 MTU lower than the (also manually set) device
MTU is of benefit.
In essence, any of these sorts of cases that require an explicit
setting of the device MTU will likely require setting the IPv6 MTU as
well, to account for its larger header overhead.
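To make that concrete, a netplan snippet in the style of the test
configuration above could pin both values, keeping the IPv6 MTU below
the manually set device MTU (interface name and values are illustrative):

    network:
        version: 2
        ethernets:
            eth0:
                dhcp4: true
                # Manually set device (link) MTU
                mtu: 1500
                # Lower IPv6 MTU to absorb tunnel/header overhead (must be >= 1280)
                ipv6-mtu: 1480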
To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/1671951/+subscriptions