Network port configuration can be performed either a) through the web interface or b) through the SSH command line interface (CLI). The web interface is the easiest route; however, in highly constrained network environments a pure CLI-based configuration can be simpler.
WEB INTERFACE: NETWORK CONFIG
From the dashboard page, start by selecting the configuration menu option, as shown below (highlighted in green).
Then edit the network configuration's IP / Netmask / Gateway / DNS settings as shown in the image below. After each field is edited, the system automatically saves and applies the setting (no save button is required). After completing the update, refresh the web page to confirm the new settings.
Select the tools menu from the top toolbar, as shown in the image below.
Finally, select the Power Cycle / Reboot button to restart the system.
FW: 7167+
When using an FMADIO Packet Capture system for analytics processing, SSD resources can be split between capture devices and scratch disk space. As scratch disk space, 1-16TB of SSD can be mounted as a general purpose file system used to store temporary/intermediate network packet processing results.
The system should have scratch disks set up and visible on the GUI as follows. If this has not been configured, contact support@fmad.io for instructions on how to configure it.
In the above example there are 2 disks, SCR0 and SCR1, enabled as scratch disks. These are seen on the file system as:
NOTE: the /dev/* device path may change from time to time; please use the /opt/fmadio/disk/scr* path for all operations.
Start by creating a /dev/md1 RAID0 array as follows:
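A minimal sketch of the array creation, assuming the two scratch devices shown above are the ones used; adjust the device list to match your system:

```
# Sketch only: build a 2-disk RAID0 array from the scratch devices,
# using the /opt/fmadio/disk/scr* paths noted above
mdadm --create /dev/md1 --level=0 --raid-devices=2 \
      /opt/fmadio/disk/scr0 /opt/fmadio/disk/scr1
```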
This creates the /dev/md1 device, as shown with the lsblk command below.
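For example (output omitted, as it varies with the disk population of each system):

```
# An md1 entry of TYPE raid0 should appear, with the scratch SSDs as its members
lsblk
```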
More detail is available via the mdadm --detail command:
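For example:

```
# Reports the RAID level, array size, state and member devices of the array
mdadm --detail /dev/md1
```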
The /dev/md1 block device is block level only; it contains no mountable file system. Next, create a BTRFS filesystem on the device as follows:
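A minimal sketch of the filesystem creation:

```
# Create a BTRFS filesystem directly on the RAID0 block device
# (-f overwrites any previous filesystem signature on the device)
mkfs.btrfs -f /dev/md1
```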
By default, FMADIO Packet Capture systems mount BTRFS at boot time with lzo disk compression. Compression can be enabled or disabled on the fly with BTRFS. In this case we will mount it the same way the capture system does at boot time:
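A sketch of the mount, using the same lzo compression option and the /mnt/store1 mount point referenced below:

```
mkdir -p /mnt/store1
# Mount with transparent lzo compression, matching the boot-time behaviour
mount -o compress=lzo /dev/md1 /mnt/store1
```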
Then check the mount point with lsblk. Below we can see /dev/md1 mounted on /mnt/store1.
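For example:

```
# The md1 line should show btrfs as the filesystem and /mnt/store1 as the mount point
lsblk -f
```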
Checking the compression level with BTRFS requires comparing the raw (uncompressed) data size with the compressed storage actually used.
In the above example we see /mnt/store1 holds 3.0 GB worth of data (using du).
In the above example we see /mnt/store1 has used 751 MiB of actual storage capacity (using btrfs fi show).
Based on the above numbers (3112 MB / 751 MB), the compression rate is ~ x4.1.
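A sketch of the two measurements, using the example figures from above:

```
# Apparent (uncompressed) data size as seen by applications
du -sh /mnt/store1              # e.g. ~3.0 GB

# Actual space consumed on the underlying devices (compressed)
btrfs fi show /mnt/store1       # e.g. ~751 MiB used

# Compression ratio = apparent size / used size = 3112 MB / 751 MB = ~4.1
```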
Modifying the network configuration settings in a restricted colocation environment can be far easier to achieve via the command line. The first step is to SSH into the system, change to the configuration directory, and view the current network settings, as shown below.
In the example configuration file above, the network ports are mapped as follows
The FMADIO40 Gen3 packet capture device is our entry-level, full sustained line rate 40Gbit capture-to-cache packet capture / packet sniffer device. It is a compact 1U, 650mm deep chassis featuring 3.2 nanosecond resolution hardware packet timestamps and sub-100ns world time accuracy via PTPv2 + PPS.
In addition, there is 1-4TB of high-bandwidth SSD flash storage which is written back to 16-64TB of raw magnetic disk drives. The system is unique in combining a hybrid SSD / HDD storage architecture to gain maximum cost savings with maximum disk storage, while still being capable of sustained multi-TB line rate capture without any packet drops.
When using an FMADIO Packet Capture system for analytics processing, SSD resources can be split between capture devices and scratch disk space. As scratch disk space, 1-16TB of SSD can be mounted as a general purpose file system used to store temporary/intermediate network packet processing results.
The system should have scratch disks set up and visible on the GUI as follows. If this has not been configured, contact support@fmad.io for instructions on how to configure it.
In the above example there are 2 disks, SCR0 and SCR1, enabled as scratch disks. These are seen on the file system as:
NOTE: the /dev/* device path may change from time to time; please use the /opt/fmadio/disk/scr* path for all operations.
NOTE: After any change to the scratch disk configuration, it is recommended to run a Quick Format to ensure the SSD and capture disk configurations are consistent.
By default, all SSDs are dedicated to capture. This is specified in the configuration file /opt/fmadio/etc/disk.lua.
Capture disks are specified here
In the above example there are 4 x SSDs used for capture. To use half for capture and half for scratch disk, modify the file as follows:
This assigns the SSD serial numbers to the mount points /opt/fmadio/disk/scr0 and /opt/fmadio/disk/scr1. The actual serial numbers will be different for each system; the mount points (scr0/scr1) are the same.
After updating, confirm there are no syntax errors in the config file by running fmadiolua /opt/fmadio/etc/disk.lua as follows:
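For example; a syntax error aborts with a message identifying the offending line, while a correctly formatted file parses without an error (exact output format not shown here):

```
# Parse the disk configuration; a Lua syntax error is reported with the offending line
fmadiolua /opt/fmadio/etc/disk.lua
```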
A clean run, as above, indicates a correctly formatted file. The output below shows a configuration file with a syntax error (in that example, line 30 has incorrect formatting).
After confirming the configuration file syntax is correct, reboot the system. The mount points scr0 and scr1 should be visible as shown below.
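For example:

```
# Both scratch device links should be present after the reboot
ls -l /opt/fmadio/disk/scr0 /opt/fmadio/disk/scr1
```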
After /opt/fmadio/disk/scr[0-1] have been created, the next step is creating the /dev/md1 RAID0 array, using the same mdadm invocation sketched in the section above.
This creates the /dev/md1 device, as shown with the lsblk command.
More detail is available via the mdadm --detail command.
The /dev/md1 block device is block level only; it contains no mountable file system. Next, create a BTRFS filesystem on the device as described above.
By default, FMADIO Packet Capture systems will start the RAID0 array at boot time and mount the /dev/md1 scratch disk on /mnt/store1.
If it fails to mount, please issue the following command:
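A hedged sketch of a manual assemble and mount, mirroring the lzo compression option used in the scratch disk section above; the exact options used by the boot scripts may differ:

```
# Assemble the array if it is not already running, then mount it manually
mdadm --assemble /dev/md1 /opt/fmadio/disk/scr0 /opt/fmadio/disk/scr1
mount -o compress=lzo /dev/md1 /mnt/store1
```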
Hardware
FMADIO 40G Gen3 1U System
FMADIO 40G Gen3 2U System
The default 1G management port is labeled "L1" and is closest to the power supply.
The L1 port can bridge the IPMI/BMC port (a single RJ45 connection for both the server and the BMC).
The L2 port can not bridge the IPMI/BMC port.
The IPMI port is a dedicated, segmented network port used for out-of-band communication with the system. It allows power on/off and KVM capabilities; connecting it is highly recommended. The default IP address is 192.168.0.93/24.
The management ports are 1G SFP / 10G SFP+ (Capture systems) or 10G/40G QSFP+ (Analytics systems). These can be run in a standard or link-bonded / redundant setup.
FMADIO40Gv3 Capture systems
FMADIO40Gv3 Gen3 Analytics Systems
Capture ports can be configured in the following way.
The PPS connector is a 1 Pulse Per Second time synchronization input. It runs on a 3.3V trigger signal, and the physical interface is an SMA coaxial cable.
FMADIO Systems have multiple 1G, 10G and 40G management interfaces, depending on the ordered SKU.
Management interfaces are all bridged by default per the following block diagram
Using the above configuration allows:
- LXC containers with full pass-thru IP addresses (no NAT)
- Bonded management mode for redundancy (Hot-Standby)
- Bonded management mode for throughput (LAG)
An example ifconfig of the system is as follows:
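For example (output omitted; interface names such as man10 and phy10, referenced later on this page, will appear, with the exact set depending on the SKU):

```
# List all interfaces, including the management (man*) and physical (phy*) ports
ifconfig -a
```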
And the bridge settings:
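For example, using the standard bridge-utils tool (assuming it is installed on the system):

```
# Show which interfaces are attached to each bridge
brctl show
```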
By default the MTU size is set to 1500B for maximum compatibility. This can be configured for 9200B jumbo frame support to maximize download throughput by setting the MTU for both the man10 and phy10 network interfaces in the network configuration script below. The MTU has to be set on both the man10 and phy10 interfaces (and optionally phy11, if used) to be fully effective, as per the example below.
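As a quick runtime check, separate from the persistent network configuration script (whose syntax is not reproduced here), the MTU can be inspected and changed with standard Linux tooling, using the man10/phy10 interface names from the text above:

```
# Inspect the current MTU on the management and physical interfaces
ip link show man10
ip link show phy10

# Temporarily set 9200B jumbo frames (non-persistent; the persistent setting
# belongs in the FMADIO network configuration script)
ip link set dev man10 mtu 9200
ip link set dev phy10 mtu 9200
```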
Requires FW: 6508+
LACP or link bonding is critical for failover / redundancy planning. FMADIO Packet Capture devices run on Linux, and thus we support LACP/bonding on the management interfaces.
Add a bonded interface "bond0" as follows
In the above example the "Slave" field contains the list of physical interfaces the bonding runs on. This example bonds the two 1G RJ45 interfaces on the system. To bond the 10G interfaces on a separate LACP link (bond1), use the following:
Requires FW: 6633+
Link bonding mode options (details from kernel.org)
Round-robin (balance-rr)
Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.
Active-backup (active-backup)
Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.
XOR (balance-xor)
Transmit network packets based on a hash of the packet's source and destination. The default algorithm only considers MAC addresses (layer2). Newer versions allow selection of additional policies based on IP addresses (layer2+3) and TCP/UDP port numbers (layer3+4). This selects the same NIC slave for each destination MAC address, IP address, or IP address and port combination, respectively. This mode provides load balancing and fault tolerance.
Broadcast (broadcast)
Transmit network packets on all slave network interfaces. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP) - default mode
Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification. This mode is similar to the XOR mode above and supports the same balancing policies. The link is set up dynamically between two LACP-supporting peers.
Adaptive transmit load balancing (balance-tlb)
Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Adaptive load balancing (balance-alb)
Includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface, such that different network peers use different MAC addresses for their network packet traffic.
NOTE: PTPv2 and LACP on the 10G management interfaces are mutually exclusive.
By default the 802.3ad bonding mode is used; the full list of Linux bonding modes can be seen on kernel.org. Note that "BondMode" specifies the Linux bonding mode to be used.
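Once a bond is up, its state can be verified through the standard Linux bonding driver interfaces, independent of the FMADIO configuration:

```
# Shows the bonding mode (e.g. IEEE 802.3ad Dynamic link aggregation),
# LACP partner details and the state of each slave interface
cat /proc/net/bonding/bond0

# The short mode name and its numeric id (802.3ad is mode 4)
cat /sys/class/net/bond0/bonding/mode
```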
| Port Count | Port Speed | Interface |
| --- | --- | --- |
| 2 | 10G | SFP+ |
| 2 | 1G | SFP |
| Port Count | Port Speed | Interface |
| --- | --- | --- |
| 2 | 40Gbps | QSFP+ |
| 4 | 10Gbps | QSFP+ Breakout Cables |
| Port Count | Port Speed | Interface |
| --- | --- | --- |
| 2 | 100Gbps | QSFP28 (FEC or no FEC) |
| 2 | 40Gbps | QSFP+ |
| 8 | 25Gbps | QSFP28 (Not released yet) |
| 8 | 10Gbps | QSFP+ Breakout Cables |