[Bug 889423] Re: 802.3ad bonding not configured correctly

Stéphane Graber stgraber at stgraber.org
Fri Dec 2 04:13:19 UTC 2011


Wow, that's quite a lot of things happening on that system :)
So indeed, looking at the number of CPUs, network cards and disks showing up, there's enough there to flood udev and upstart, likely making things start a bit slower than usual and therefore out of order.

Essentially, the fallback networking script starts before the network cards actually get set up and announced by udev.
That's the one case where you end up trying to add the bond to the bridge just before the bond actually gets created (missing it by less than a second, apparently).

Just for testing's sake, can you add:
pre-up sleep 2

to your bridge stanza to confirm that it's indeed a race condition
happening there?
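For reference, that line would go into the bridge stanza in /etc/network/interfaces, roughly like this (br0 and the addressing are placeholders, your actual names will differ):

```
auto br0
iface br0 inet static
    address 10.191.62.2
    netmask 255.255.255.0
    bridge_ports bond0
    pre-up sleep 2
```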

What it shows, at least, is that we definitely can't rely on the
fallback networking job running after all the kernel events have been
processed. I guess the easiest way out of this problem will be to add
the same hack to bridge-utils that I added to ifenslave: essentially,
waiting for up to a minute for the slaves/members to appear before
giving up and continuing without them.

In your case, that'd wait for around 200ms, then find bond0, move it
into the bridge and continue.
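To illustrate, the hack amounts to something like the following. This is a hypothetical sketch (not the actual ifenslave or bridge-utils code), assuming a simple poll of /sys/class/net for the member interface:

```shell
#!/bin/sh
# Hypothetical sketch of the wait-for-members hack: poll
# /sys/class/net until the interface appears, for up to 60 seconds
# by default, then give up and carry on without it.
wait_for_iface() {
    iface="$1"
    tries="${2:-300}"   # 300 polls * 200ms = 60 seconds
    while [ ! -e "/sys/class/net/$iface" ] && [ "$tries" -gt 0 ]; do
        sleep 0.2
        tries=$((tries - 1))
    done
    [ -e "/sys/class/net/$iface" ]
}

# On a system where the bond shows up within ~200ms, the loop exits
# after one poll and the bridge setup proceeds almost immediately.
wait_for_iface lo && echo "lo is present"
```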

At least it looks like the proposed ifenslave isn't at fault; it's
just an extra change that'll need to happen in bridge-utils.


Thanks for the tests; it's good to have someone with that kind of hardware around :)

-- 
You received this bug notification because you are a member of Ubuntu
Foundations Bugs, which is subscribed to ifupdown in Ubuntu.
https://bugs.launchpad.net/bugs/889423

Title:
  802.3ad bonding not configured correctly

Status in “ifupdown” package in Ubuntu:
  Confirmed

Bug description:
  Configuring an 802.3ad bond doesn't appear to work correctly. The following entry in /etc/network/interfaces should configure an 802.3ad bond between interfaces eth2 and eth3:
  #auto bond0
  iface bond0 inet static
    address 10.191.62.2
    netmask 255.255.255.0
    broadcast 10.191.62.255
    bond-slaves eth2 eth3
    bond-primary eth2 eth3
    bond-mode 802.3ad
    bond-lacp_rate fast
    bond-miimon 100

  However, after booting the system, we have:
    # ifconfig -a
    bond0     Link encap:Ethernet  HWaddr 00:1b:21:b7:21:ea
              inet addr:10.191.62.2  Bcast:10.191.62.255 Mask:255.255.255.0
              inet6 addr: fe80::21b:21ff:feb7:21ea/64 Scope:Link
              UP BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    eth2      Link encap:Ethernet  HWaddr 00:1b:21:b7:21:ea
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Memory:b2420000-b2440000

    eth3      Link encap:Ethernet  HWaddr 00:1b:21:b7:21:ea
              UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Memory:b2400000-b2420000
    # cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer2 (0)
    MII Status: down
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    802.3ad info
    LACP rate: fast
    Aggregator selection policy (ad_select): stable
    bond bond0 has no active aggregator

    Slave Interface: eth2
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 1
    Permanent HW addr: 00:1b:21:b7:21:ea
    Aggregator ID: N/A
    Slave queue ID: 0

    Slave Interface: eth3
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 1
    Permanent HW addr: 00:1b:21:b7:21:eb
    Aggregator ID: N/A
    Slave queue ID: 0

  If I do the following:
    # ip link set dev bond0 up
    # ifenslave bond0 eth2 eth3
    # ifconfig bond0 10.191.62.2 netmask 255.255.255.0
  I get:
    # ifconfig bond0
    bond0     Link encap:Ethernet  HWaddr 00:1b:21:b7:21:ea
              inet addr:10.191.62.2  Bcast:10.191.62.255 Mask:255.255.255.0
              inet6 addr: fe80::21b:21ff:feb7:21ea/64 Scope:Link
              UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
              RX packets:17 errors:0 dropped:17 overruns:0 frame:0
              TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:2108 (2.1 KB)  TX bytes:3126 (3.1 KB)

    # cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: IEEE 802.3ad Dynamic link aggregation
    Transmit Hash Policy: layer2 (0)
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0

    802.3ad info
    LACP rate: fast
    Aggregator selection policy (ad_select): stable
    Active Aggregator Info:
            Aggregator ID: 1
            Number of ports: 2
            Actor Key: 17
            Partner Key: 24
            Partner Mac Address: 00:04:96:18:54:d5

    Slave Interface: eth2
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:1b:21:b7:21:ea
    Aggregator ID: 1
    Slave queue ID: 0

    Slave Interface: eth3
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:1b:21:b7:21:eb
    Aggregator ID: 1
    Slave queue ID: 0

  I can ping 10.191.62.2 after making the above changes. So, either I am
  configuring /etc/network/interfaces incorrectly or ifupdown/ifenslave
  is doing the wrong thing.

  Note also the number of dropped packets on bond0. Why should I see
  any dropped packets on the bond0 interface?

  DistroRelease: Ubuntu 11.10
  Package: ifupdown 0.7~alpha5.1ubuntu5
  PackageArchitecture: amd64
  ProcEnviron:
   LANG=en_US.UTF-8
   SHELL=/bin/bash
  ProcVersionSignature: Ubuntu 3.0.0-12.20-server 3.0.4
  SourcePackage: ifupdown
  Uname: Linux 3.0.0-12-server x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ifupdown/+bug/889423/+subscriptions




More information about the foundations-bugs mailing list