[Bug 785668] Re: bonding inside a bridge does not update ARP correctly when bridged net accessed from within a VM

Louis Bouchard louis.bouchard at canonical.com
Fri May 20 13:06:11 UTC 2011


** Description changed:

  Binary package hint: qemu-kvm
  
  Description: Ubuntu 10.04.2
  Release: 10.04
  
- 
- When setting a KVM host with a bond0 interface made of eth0 and eth1 and using this bond0 interface for a bridge to KVM VMs, the ARP tables do not get updated correctly and
+ When setting up a KVM host with a bond0 interface made of eth0 and
+ eth1 and using this bond0 interface as a bridge port for KVM VMs, the
+ ARP tables do not get updated correctly, so a VM cannot reach an IP on
+ the bridged network until that remote system has first pinged the VM
+ itself.
  
  Reproducible: 100%, with any of the load-balancing modes
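
  To confirm which bonding mode is in effect on the host, the kernel's
  bonding status file can be inspected (a minimal check, not part of the
  original report):

  $ grep -i mode /proc/net/bonding/bond0
  # for balance-rr this should report: Bonding Mode: load balancing (round-robin)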
  
  How to reproduce the problem
  
  - Prerequisites:
  1. One KVM system running 10.04.2 server, configured as a virtualization host. The following /etc/network/interfaces was used for the test (a quick verification sketch follows the listing):
  
  # grep -v ^# /etc/network/interfaces
  auto lo
  iface lo inet loopback
  
- 
  auto bond0
  iface bond0 inet manual
- 	post-up ifconfig $IFACE up
- 	pre-down ifconfig $IFACE down
- 	bond-slaves none
- 	bond_mode balance-rr
- 	bond-downdelay 250
- 	bond-updelay 120
+  post-up ifconfig $IFACE up
+  pre-down ifconfig $IFACE down
+  bond-slaves none
+  bond-mode balance-rr
+  bond-downdelay 250
+  bond-updelay 120
  auto eth0
  allow-bond0 eth0
  iface eth0 inet manual
- 	bond-master bond0
+  bond-master bond0
  auto eth1
  allow-bond0 eth1
  iface eth1 inet manual
- 	bond-master bond0
+  bond-master bond0
  
  auto br0
  iface br0 inet dhcp
- 	# dns-* options are implemented by the resolvconf package, if installed
- 	bridge-ports bond0
- 	bridge-stp off
- 	bridge-fd 9
- 	bridge-hello 2
- 	bridge-maxage 12
- 	bridge_max_wait 0
+  # dns-* options are implemented by the resolvconf package, if installed
+  bridge-ports bond0
+  bridge-stp off
+  bridge-fd 9
+  bridge-hello 2
+  bridge-maxage 12
+  bridge-maxwait 0
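
  Before running the tests it may help to verify that both NICs have
  joined the bond and that bond0 is attached to br0; a quick check, not
  part of the original report:

  $ grep 'Slave Interface' /proc/net/bonding/bond0   # should list eth0 and eth1
  $ brctl show br0                                   # bond0 should appear as a port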
  
  2. One VM running Maverick 10.10 server, standard installation, using
  the following /etc/network/interfaces file:
  
  $ grep -v ^# /etc/network/interfaces
  
  auto lo
  iface lo inet loopback
  
  auto eth0
  iface eth0 inet static
-         address 10.153.107.92
-         netmask 255.255.255.0
-         network 10.153.107.0
-         broadcast 10.153.107.255
+         address 10.153.107.92
+         netmask 255.255.255.0
+         network 10.153.107.0
+         broadcast 10.153.107.255
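
  Once the VM is up, the bridge's forwarding table on the host shows
  whether the VM's MAC address has been learned; a quick check, assuming
  bridge-utils is installed:

  $ brctl showmacs br0   # the VM's MAC (52:54:00:8c:e0:3c) should be listed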
  
  --------------
  Initial ARP state. On a remote bridged network system:
  $ arp -an
  ? (10.153.107.188) at 00:1c:c4:6a:c0:dc [ether] on tap0
  ? (16.1.1.1) at 00:17:33:e9:ee:3c [ether] on wlan0
  ? (10.153.107.52) at 00:1c:c4:6a:c0:de [ether] on tap0
  
  On the KVM host
  $ arp -an
  ? (10.153.107.80) at ee:99:73:68:f0:a5 [ether] on br0
  
  On VM
  $ arp -an
  ? (10.153.107.61) at <incomplete> on eth0
  
  1) Test #1: ping from VM (10.153.107.92) to remote bridged network
  system (10.153.107.80):
  
  - On remote bridged network system:
  caribou@marvin:~$ arp -an
  ? (10.153.107.188) at 00:1c:c4:6a:c0:dc [ether] on tap0
  ? (16.1.1.1) at 00:17:33:e9:ee:3c [ether] on wlan0
  ? (10.153.107.52) at 00:1c:c4:6a:c0:de [ether] on tap0
  
  - On the KVM host
  ubuntu@VMhost:~$ arp -an
  ? (10.153.107.80) at ee:99:73:68:f0:a5 [ether] on br0
  
  - On VM
  ubuntu@vm1:~$ ping 10.153.107.80
  PING 10.153.107.80 (10.153.107.80) 56(84) bytes of data.
  From 10.153.107.92 icmp_seq=1 Destination Host Unreachable
  From 10.153.107.92 icmp_seq=2 Destination Host Unreachable
  From 10.153.107.92 icmp_seq=3 Destination Host Unreachable
  ^C
  --- 10.153.107.80 ping statistics ---
  4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3010ms
  pipe 3
  ubuntu@vm1:~$ arp -an
  ? (10.153.107.61) at <incomplete> on eth0
  ? (10.153.107.80) at <incomplete> on eth0
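
  To see whether the VM's ARP requests actually leave the bond and
  whether any replies come back, ARP traffic can be captured on the host
  while the ping runs; a minimal sketch (the report's actual tcpdump
  invocations are not shown):

  $ tcpdump -eni bond0 arp   # ARP as seen on the bonded uplink
  $ tcpdump -eni br0 arp     # ARP as seen on the bridge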
  
  2) Test #2: ping from remote bridged network system (10.153.107.80) to
  VM (10.153.107.92):
  
  - On remote bridged network system:
  $ ping 10.153.107.92
  PING 10.153.107.92 (10.153.107.92) 56(84) bytes of data.
  64 bytes from 10.153.107.92: icmp_req=1 ttl=64 time=327 ms
  64 bytes from 10.153.107.92: icmp_req=2 ttl=64 time=158 ms
  64 bytes from 10.153.107.92: icmp_req=3 ttl=64 time=157 ms
  ^C
  --- 10.153.107.92 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2003ms
  rtt min/avg/max/mdev = 157.289/214.500/327.396/79.831 ms
  caribou@marvin:~$ arp -an
  ? (10.153.107.188) at 00:1c:c4:6a:c0:dc [ether] on tap0
  ? (16.1.1.1) at 00:17:33:e9:ee:3c [ether] on wlan0
  ? (10.153.107.52) at 00:1c:c4:6a:c0:de [ether] on tap0
  ? (10.153.107.92) at 52:54:00:8c:e0:3c [ether] on tap0
  
  - On the KVM host
  $ arp -an
  ? (10.153.107.80) at ee:99:73:68:f0:a5 [ether] on br0
  
  - On VM
  ubuntu@vm1:~$ arp -an
  ? (10.153.107.61) at <incomplete> on eth0
  ? (10.153.107.80) at ee:99:73:68:f0:a5 [ether] on eth0

  Note that the VM's entry for 10.153.107.80 is now complete, even
  though the VM itself never successfully reached that host.
  
- 
  3) Test #3: new ping from VM (10.153.107.92) to remote bridged network system (10.153.107.80):
  - On remote bridged network system:
  $ arp -an
  ? (10.153.107.188) at 00:1c:c4:6a:c0:dc [ether] on tap0
  ? (16.1.1.1) at 00:17:33:e9:ee:3c [ether] on wlan0
  ? (10.153.107.52) at 00:1c:c4:6a:c0:de [ether] on tap0
  ? (10.153.107.92) at 52:54:00:8c:e0:3c [ether] on tap0
  
  - On the KVM host
  ubuntu@VMhost:~$ arp -an
  ? (10.153.107.80) at ee:99:73:68:f0:a5 [ether] on br0
  
  - On VM
  ubuntu@vm1:~$ ping 10.153.107.80
  PING 10.153.107.80 (10.153.107.80) 56(84) bytes of data.
  64 bytes from 10.153.107.80: icmp_req=1 ttl=64 time=154 ms
  64 bytes from 10.153.107.80: icmp_req=2 ttl=64 time=170 ms
  64 bytes from 10.153.107.80: icmp_req=3 ttl=64 time=154 ms
  ^C
  --- 10.153.107.80 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2003ms
  rtt min/avg/max/mdev = 154.072/159.465/170.058/7.504 ms
  
  tcpdump traces are available for these tests. The test system is
  available upon request.
  
  Workaround:
  
  Use the bonded device in "active-backup" mode.
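
  For reference, this amounts to changing the bond-mode line in the
  bond0 stanza above. A minimal sketch of the modified stanza, mirroring
  the original options (bond-primary is an optional addition, not taken
  from the report):

  auto bond0
  iface bond0 inet manual
      post-up ifconfig $IFACE up
      pre-down ifconfig $IFACE down
      bond-slaves none
      bond-mode active-backup
      bond-primary eth0
      bond-downdelay 250
      bond-updelay 120

  In active-backup mode only one slave transmits at a time, so the
  bridge and the upstream switch see the VM's MAC on a single port;
  this is presumably why the ARP problem does not appear in that mode.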
  
  ProblemType: Bug
  DistroRelease: Ubuntu 10.04.2
  Package: qemu-kvm-0.12.3+noroms-0ubuntu9.6
  Uname: Linux 2.6.35-25-server x86_64
  Architecture: amd64

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to qemu-kvm in Ubuntu.
https://bugs.launchpad.net/bugs/785668

Title:
  bonding inside a bridge does not update ARP correctly when bridged net
  accessed from within a VM


