Bonding


FMADIO management network ports use standard Linux software and hardware stacks. We use Intel-based NIC chipsets and the standard Linux network stack. This allows wide flexibility in how the management network can be set up.

NOTE: PTPv2 and LACP on the 10G Management interfaces are mutually exclusive. Contact support@fmad.io for further details.

LACP Link Bonding

LACP, or Link Bonding, is critical for failover / redundancy planning. FMADIO Packet Capture devices run a standard Linux kernel, thus we support LACP/Bonding on the management interfaces.


LACP on 1G management

Step 1) configure network config file

Edit the network configuration file

/opt/fmadio/etc/network.lua

Create a “bond0” interface within the configuration file as shown below

["bond0"] =
{
    ["Mode"]    = "bond",
    ["Address"] = "192.168.1.2",
    ["Netmask"] = "255.255.255.0",
    ["Gateway"] = "192.168.1.1",
    ["DNS0"]    = "",
    ["DNS1"]    = "",
    ["Slave"]  = { "phy0", "phy1" },
    ["BondMode"] = "active-backup",
},

This creates a “bond0” interface on Linux with the above static IP address. The physical interfaces are listed in the “Slave” field, in this case the two 1G RJ45 interfaces phy0 and phy1.

Step 2) Save the file

Save the file and confirm no syntax errors on the file by running

fmadiolua /opt/fmadio/etc/network.lua

If the file is correct, the command completes without reporting any errors.

If it shows any syntax errors, please fix the configuration file before proceeding.

Step 3) Reboot

Reboot the system. It is recommended to start the HTML KVM interface first as a backup, in case there is a problem with the configuration. This allows updating the network configuration if the system cannot be reached using the management interfaces.

To reboot run the command

sudo reboot

Step 4) Confirm settings are correct

Once the system is reachable again, run the command

ifconfig

The output looks similar to below. Things to note:

  • bond0 has the static IP and netmask

  • man0 is not listed

fmadio@fmadio200v4-636:~$ ifconfig
bond0     Link encap:Ethernet  HWaddr 74:56:3C:0E:85:A1
          inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::7656:3cff:fe0e:85a1/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:231329 errors:0 dropped:0 overruns:0 frame:0
          TX packets:237070 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16911240 (16.1 MiB)  TX bytes:121576600 (115.9 MiB)

fmad0     Link encap:Ethernet  HWaddr B6:56:51:58:0D:D4
          inet addr:192.168.255.2  Bcast:192.168.255.255  Mask:255.255.255.0
          inet6 addr: fe80::b456:51ff:fe58:dd4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:2496 (2.4 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:448959 errors:0 dropped:0 overruns:0 frame:0
          TX packets:448959 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:119022540 (113.5 MiB)  TX bytes:119022540 (113.5 MiB)

man10     Link encap:Ethernet  HWaddr B4:96:91:CF:89:A0
          inet addr:192.168.91.215  Bcast:192.168.91.255  Mask:255.255.255.0
          inet6 addr: fe80::b696:91ff:fecf:89a0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:191255 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256129 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24691190 (23.5 MiB)  TX bytes:35293894 (33.6 MiB)

phy0      Link encap:Ethernet  HWaddr 74:56:3C:0E:85:A1
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:231329 errors:0 dropped:0 overruns:0 frame:0
          TX packets:237070 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16911240 (16.1 MiB)  TX bytes:121576600 (115.9 MiB)
          Memory:d7420000-d743ffff

phy1      Link encap:Ethernet  HWaddr 74:56:3C:0E:85:A1
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
          Memory:d7400000-d741ffff

phy10     Link encap:Ethernet  HWaddr B4:96:91:CF:89:A0
          inet6 addr: fe80::b696:91ff:fecf:89a0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:191255 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256157 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:27368760 (26.1 MiB)  TX bytes:35302046 (33.6 MiB)

phy11     Link encap:Ethernet  HWaddr B4:96:91:CF:89:A1
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

fmadio@fmadio200v4-636:~$
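
As an additional quick check, the Linux bonding driver exposes its state through sysfs. A minimal sketch, assuming the standard bonding driver sysfs layout:

# slave interfaces currently enrolled in bond0
cat /sys/class/net/bond0/bonding/slaves

# link state of the bond itself
cat /sys/class/net/bond0/operstate

The slaves file should list phy0 and phy1, and operstate should read "up" once the bond is active.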

LACP on the 10G/25G/100G management interfaces

In most cases the high-speed management interface is used for access. Depending on the hardware SKU this could be 10G/25G/40G/100G.

The setup is the same as above, except the “Slave” list now contains phy10 and phy11. An example is shown below.

["bond0"] =
{
    ["Mode"]    = "bond",
    ["Address"] = "192.168.1.2",
    ["Netmask"] = "255.255.255.0",
    ["Gateway"] = "192.168.1.1",
    ["DNS0"]    = "",
    ["DNS1"]    = "",
    ["Slave"]  = { "phy10", "phy11" },
    ["BondMode"] = "active-backup",
},

All other steps are the same as above.


LACP Bonding Mode

By default the 802.3ad bonding mode is used; the full list of Linux bonding modes can be found on kernel.org. The "BondMode" setting specifies which Linux bonding mode to use.

The setting is highlighted below

 ["BondMode"] = "active-backup",

Linux bonding mode options (details copied from kernel.org)

Round-robin (balance-rr)

Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.

Active-backup (active-backup)

Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.

XOR (balance-xor)

Transmit network packets based on a hash of the packet's source and destination. The default algorithm only considers MAC addresses (layer2). Newer versions allow selection of additional policies based on IP addresses (layer2+3) and TCP/UDP port numbers (layer3+4). This selects the same NIC slave for each destination MAC address, IP address, or IP address and port combination, respectively. This mode provides load balancing and fault tolerance.

Broadcast (broadcast)

Transmit network packets on all slave network interfaces. This mode provides fault tolerance.

IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP), the default mode

Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification. This mode is similar to the XOR mode above and supports the same balancing policies. The link is set up dynamically between two LACP-supporting peers.

Adaptive transmit load balancing (balance-tlb)

Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Adaptive load balancing (balance-alb)

Includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network peers use different MAC addresses for their network packet traffic.

Debugging

To check the current bonding mode

cat /sys/devices/virtual/net/bond0/bonding/mode 

Example output

fmadio@fmadio200v4-636:~$ cat /sys/devices/virtual/net/bond0/bonding/mode
balance-rr 0
fmadio@fmadio200v4-636:~$

To get more detail on the bonding

 cat /proc/net/bonding/bond0

Example output is shown below

fmadio@fmadio200v4-636:~$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v6.1.92-tinycore64

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 5000
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: phy0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 74:56:3c:0e:85:a1
Slave queue ID: 0

Slave Interface: phy1
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 74:56:3c:0e:85:a2
Slave queue ID: 0
fmadio@fmadio200v4-636:~$
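
To watch for link failures and failover events, the kernel log and sysfs entries can also be checked. A minimal sketch, assuming standard Linux tooling is available on the system:

# bonding driver events (slave link up/down, active slave changes)
dmesg | grep -i bond

# currently active slave (most relevant for active-backup mode)
cat /sys/class/net/bond0/bonding/active_slave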