Hardware
The FMADIO 20G packet capture device is our entry-level, full sustained line rate 10Gbit capture-to-cache packet capture / packet sniffer device. It is a compact 1U, 650mm deep chassis featuring 3.2 nanosecond resolution hardware packet timestamps and sub-100ns world time accuracy via PTPv2 and PPS.
In addition there is 1-4TB of high bandwidth SSD flash storage which is written back to 16-64TB of raw magnetic disk drives. The system is unique in combining a hybrid SSD / HDD storage architecture to achieve maximum cost savings and maximum disk storage while still being capable of sustained line rate capture of up to 4TB without any packet drops.
FW: 7167+
When using an FMADIO Packet Capture system for analytics processing, the SSD resources can be split into Capture devices and Scratch disk space. As scratch disk space, 1-16TB of SSD can be mounted as a general purpose file system used to store temporary/intermediate network packet processing results.
The system should have scratch disks set up and visible on the GUI as follows. If this has not been configured, contact support@fmad.io for how to configure it.
In the above example there are 2 disks, SCR0 and SCR1, enabled as scratch disks. These are seen on the file system as
NOTE: the /dev/* mount point may change from time to time, please use the /opt/fmadio/disk/scr* path name for all operations.
Start by creating a /dev/md1 RAID0 partition as follows
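A minimal sketch of the creation step, assuming the two scratch devices from the example above (/opt/fmadio/disk/scr0 and /opt/fmadio/disk/scr1); adjust --raid-devices and the member list to match your system:

    # build a 2-device RAID0 array using the stable scratch disk path names
    mdadm --create /dev/md1 --level=0 --raid-devices=2 \
          /opt/fmadio/disk/scr0 /opt/fmadio/disk/scr1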
This creates a /dev/md1 partition, which can be seen as the /dev/md1 device in the lsblk output.
More detail via the mdadm --detail command
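The verification commands themselves (output omitted here):

    # confirm the new md device is visible
    lsblk
    # show RAID level, member devices and array state
    mdadm --detail /dev/md1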
The block device /dev/md1 is block level only; it contains no mountable file system. Next, create a btrfs filesystem on the device as follows
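For example, a plain btrfs filesystem with default options:

    # create a btrfs filesystem directly on the RAID0 block device
    mkfs.btrfs -f /dev/md1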
By default, FMADIO Packet Capture systems mount BTRFS at boot time with lzo disk compression. Compression can be enabled or disabled with BTRFS on-the-fly. In this case we will mount it the same way the capture system does at boot time.
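A sketch of the equivalent manual mount, assuming the same /mnt/store1 mount point the capture system uses:

    # mount with lzo transparent compression, matching the boot-time behaviour
    mkdir -p /mnt/store1
    mount -o compress=lzo /dev/md1 /mnt/store1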
Then check the mount point with lsblk. Below we can see /dev/md1 is mounted on /mnt/store1
Checking the compression ratio with BTRFS requires comparing the raw data size with the compressed storage actually used.
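The two measurements can be taken roughly as follows:

    # raw (uncompressed) size of the data as seen by the filesystem
    du -sh /mnt/store1
    # actual space consumed on the device after compression
    btrfs fi show /mnt/store1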
In the above example we see /mnt/store1 has 3.0 GB worth of data (using du)
In the above example we see /mnt/store1 has used 751 MiB of actual storage capacity (using btrfs fi show)
Based on the above numbers (3112 MB of data stored in 751 MiB), the compression ratio is approximately 4x.
Modifying the network configuration settings in a restricted colocation environment can be far easier via the command line. The first step is to SSH into the system, change to the specified directory, and view the current network settings, as shown below
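A sketch of the CLI steps; the configuration directory is assumed to be /opt/fmadio/etc (as used elsewhere in this guide), and the exact network configuration file name may differ on your firmware:

    # log in to the system (replace <user> and <system-ip> with your details)
    ssh <user>@<system-ip>
    # change to the configuration directory and view the current network settings
    cd /opt/fmadio/etc
    ls
    cat <network-config-file>    # use the network configuration file present on your system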
In the example configuration file above, the network ports are mapped as follows
SKU       | Description                   | Average | Max
Capture   | System Idle                   | 80W     | 150W
Capture   | Full 20Gbps Sustained Capture | 90W     | 150W
Analytics | Full 20Gbps Sustained Capture |         |
Analytics | 36 CPUs maximum processing    |         |
There are various tunables available only via configuration file editing.
Then, in the ["capture"] section, edit each field; an example is shown below
When the total daily capture size starts exceeding 10TB / day, file sizes can get a bit too large and difficult to work with. This setting sets the maximum size of a single capture, rolling (losslessly) to a new capture when the limit is reached.
The example below rolls the capture file every 1TB
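A hypothetical sketch of the relevant entry; the key name "SplitSizeByte" is a placeholder, not the confirmed field name, so use the key actually present in the ["capture"] section of your configuration file:

    # Hypothetical example only: the real key name may differ on your firmware.
    #
    #   SplitSizeByte = 1e12,   -- roll to a new capture roughly every 1 TB
    #
    # Stop and start capture after editing so the new limit takes effect.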
In addition to a maximum size, for large (10TB+ daily) capture rates a simpler approach is to roll the capture every 1 hour. This reduces the size of each capture to something more manageable.
The example below rolls the capture every 1 hour (units are in nanoseconds)
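Again a hypothetical sketch; "SplitTimeNS" is a placeholder key name:

    # Hypothetical example only: the real key name may differ on your firmware.
    #
    #   SplitTimeNS = 3600e9,   -- roll to a new capture every 1 hour (3600 s expressed in ns)
    #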
This setting is debug only, as it (potentially) reduces capture performance, specifically on 100G and higher capture systems. Only enable this if directed by FMADIO Support.
Confirm the setting by checking log file /mnt/store0/log/stream_capture f20.cur where the following log entries will be seen.
This is the number of packets to send (per pipeline) when the capture pipeline has to be flushed. Default is 2000
When in continuous output flush mode, this is the period (in nanoseconds) between flushes. To disable constant period flushing, set this to 0. Default is 0 (disabled)
This is the idle packet activity timeout. If no new packets are received within this period, the pipeline gets flushed. Default value is 1e9 (1 second)
The performance of a capture system can be characterized in a number of different ways. We provide the following performance dimensions
This is the short-time burst capture rate of the system. For the 100G Gen2 system this is the burst capture rate that fills up the DDR buffer on the FMAD FPGA Capture card.
For the 100G Gen2 FMADIO Packet Capture system, all storage is on high speed NVMe SSDs, so the Burst Capture Speed is the same as the Sustained Capture Speed.
FMADIO 20G and 40G systems use a mixture of SSD and magnetic disk storage, so for those systems the Burst Capture Speed is higher than the Sustained Capture Speed.
We indicate this as the sustained capture rate, i.e. the capture rate that a system can sustain 24/7 without any packet loss. As mixing capture with downloads affects the capture speed, this performance metric is Capture Only with no simultaneous/concurrent downloads.
This performance metric assumes no bottlenecks on the egress (download client) side and measures the capture performance while simultaneously downloading.
The other metric is Download only speed. This metric is used to calculate the maximum rate data can be moved off the device over 10G or 40G ethernet.
FMADIO 20G 2U Packet Capture system has 4TB of SSD Cache and 48TB-216TB worth of HDD Magnetic storage.
The default setting has Compression and CRC checks enabled. It is designed to get maximum total storage capacity via the use of compression, with CRC checks for data integrity. This specific dataset is incompressible, thus the writeback performance is the raw hardware performance.
Testing: FMADIO20Gv3-2U-48TB System
Compression and CRC checks are enabled and the downloads hit the SSD cache, i.e. the download data is on the SSDs and does not require access to the HDDs. The download uses localhost to remove network performance from the test.
Testing: FMADIO20Gv3-2U-48TB System
There is a difference when downloading from SSD cache vs HDD storage, as seen below. When a download has to fetch data from HDD magnetic storage, it dramatically affects the throughput of the HDD writeback. This is a physical limitation of magnetic storage, as it is a physical spinning disk with poor random IO access performance. This is clearly seen in the significantly lower writeback and download speeds, as shown below.
Testing: FMADIO20Gv3-2U-48TB System
This shows the writeback-performance-optimized mode, configured per the settings below.
With this setup, both Burst Capture and Sustained Capture rates @ 10Gbps are possible across the entire storage. However, above 10Gbps, Burst Capture is limited to the SSD size (4TB), as beyond that the magnetic storage performance becomes a bottleneck.
Testing: FMADIO20Gv3-2U-48TB System
FW: 7219+
Configuration options are in the specified config file. Please note all options require capture to be stopped and started before the settings are applied.
Specifically, the Writeback block; an example is shown below
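A hypothetical sketch of the Writeback block using the documented default values (see the Setting/Description tables at the end of this section); the key names are placeholders and the actual keys in your configuration file may differ:

    # Hypothetical layout; values shown are the documented defaults
    #
    #   ["Writeback"] = {
    #       Priority    = 11,     -- writeback to magnetic storage is lowest priority
    #       Compression = true,   -- disk compression enabled
    #       CRCCheck    = true,   -- verify SSD CRC before writing to magnetic storage
    #       ECC         = true,   -- calculate RAID5 parity
    #   },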
This setting changes the default writeback IO priority, allowing a preference for faster downloads (default) or faster sustained writeback to magnetic storage.
This setting enables or disables disk compression. For faster sustained writeback-to-disk speeds, disable compression.
For 1U systems, disabling compression makes little difference due to the lower HDD write bandwidth.
For 2U systems, disabling compression improves sustained writeback-to-HDD performance, as the system has 12 spinning disks with an aggregate 10Gbps-20Gbps (depending on spindle position) write throughput.
This function checks the CRC when reading data from the SSDs. It calculates the CRC and checks for a match against the original captured CRC value, before writing the block to magnetic storage. This adds additional CPU overhead.
Disabling this improves the sustained write performance on 2U systems. On 1U systems there is little performance advantage
This is mostly a debug setting. Disabling ECC removes the RAID5 parity calculation and the additional IO writeback, turning the system into a RAID0 configuration. This is for debug testing only and is not recommended for production systems.
The default settings are recommended unless there is a specific use case.
For maximum sustained capture rate we recommend the following settings. This disables compression and prioritizes magnetic storage writeback over download performance.
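Continuing the hypothetical sketch above (placeholder key names), the recommended settings would look roughly like:

    # Writeback gets priority over downloads, compression disabled
    #
    #   Priority    = 30,      -- writeback to disk has higher priority than download/push
    #   Compression = false,   -- disable disk compression for faster sustained writeback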
In the unlikely event of a complete boot failure, the system can be recovered by booting via the Virtual CDROM interface over an HTML BMC connection.
Start by going to the BMC interface (the default IP is 192.168.0.93); contact us for the default login/password.
Start the Remote HTML KVM
It will look like this. Select Browse Files, select an ISO image, and Start the Media.
The system will boot the selected ISO (Ubuntu, for example); in this case we are using systemrescue 8.01 amd64.
The system will boot as follows; it may take several minutes depending on the speed of the HTML <-> FMADIO system connection. The closer the HTML KVM client is to the FMAD device, the better.
If a particular boot stage is taking too long Ctrl-C can skip it
After SystemRescue CD has booted, the above is seen. Note the total number of bytes transferred over the Virtual ISO.
The first step is to find the FMADIO OS and persistent storage devices. Use the "lsblk" tool.
Look for a small (15GB) partition as the OS boot disk (in this case it is sda1) and a large (224GB or larger) partition for the persistent storage.
Sometimes it is easier to work over SSH. To access the system, find or assign an IP address on a reachable interface.
SystemRescueCD by default has iptables rules set up. Disable all iptables rules as follows
Then set up a password for the root account
Then ssh access to the system is possible
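A sketch of these steps from the SystemRescue shell; the interface name and IP address below are examples, so substitute values that match your management network:

    # assign a temporary IP address to a reachable interface and bring it up
    ip addr add 192.168.0.50/24 dev eth0
    ip link set eth0 up

    # clear the default SystemRescue iptables rules
    iptables -F
    iptables -P INPUT ACCEPT

    # set a root password so SSH logins are accepted
    passwd root

    # then, from another machine on the same network
    ssh root@192.168.0.50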
Next, mount the FMAD OS and persistent storage disks. They may be sda* or nvme0n1p*; in this example they are mapped to sda.
Next check the contents; it should look roughly like this
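A sketch of the mount step, assuming sda1 is the small OS partition and sda2 the larger persistent storage partition identified with lsblk (the mount point names here are arbitrary):

    # create temporary mount points and mount both partitions
    mkdir -p /mnt/os /mnt/persist
    mount /dev/sda1 /mnt/os        # ~15GB FMADIO OS boot partition
    mount /dev/sda2 /mnt/persist   # larger persistent storage partition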
Replacement of SSDs is straightforward but requires unracking the system and removing PCIe devices.
Once unracked, remove the top cover by unscrewing the system as follows.
Once removed, the system looks like the following. The SSDs are located at the position shown in RED, with the GIGABYTE thermal heatsink.
To remove the M.2 SSD drives, remove the screws shown as follows
The PCIe riser is detached from the motherboard
SSD PCIe card is removed from the Riser
SSD PCIe black heatsink is removed from the board
After the PCIe card has been removed, the SSDs are accessible as follows
Remove the screws highlighted in RED above to remove and replace the SSDs
After completing the SSD replacement, reverse the above steps to complete the installation
FMADIO Systems have multiple 1G, 10G and 40G management interfaces, depending on the ordered SKU.
Management interfaces are all bridged by default per the following block diagram
Using the above configuration allows
LXC containers full pass-thru IP address (no NAT)
Bonded management mode for Redundancy (Hot-Standby)
Bonded management mode for Throughput ( LAG )
Example ifconfig of the system is as follows
And bridge settings
By default the MTU size is set to 1500B for maximum compatibility. This can be configured for 9200B jumbo frame support to maximize download throughput. This is done by setting
For both man10 and phy10 network interfaces in the network configuration script below.
This has to be set on both the man10 and phy10 (optionally phy11 if used) interfaces to be fully effective, as per the example below.
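The persistent setting lives in the FMADIO network configuration script; for reference only, the equivalent runtime change (which does not survive a reboot) looks like this:

    # temporary, non-persistent illustration of the jumbo frame setting
    ip link set dev man10 mtu 9200
    ip link set dev phy10 mtu 9200
    ip link set dev phy11 mtu 9200   # only if phy11 is used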
Requires FW:6508+
LACP or Link Bonding is critical for failover / redundancy planning. FMADIO Packet Capture devices run on Linux, thus we support LACP/bonding on the management interfaces.
Add a bonded interface "bond0" as follows
In the above example the "Slave" field contains the list of physical interfaces the bonding runs on. This example is bonding the two 1G RJ45 interfaces on the system. To bond the 10G interfaces on a separate LACP link (bond1), use the following:
Requires FW: 6633+
By default the 802.3ad bonding mode is used; the full list of Linux bonding modes can be seen on kernel.org. Note that "BondMode" specifies the Linux bonding mode to be used.
Link bonding mode options (details from kernel.org)
Round-robin (balance-rr): Transmit network packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance.
Active-backup (active-backup): Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface's MAC address is externally visible on only one NIC (port) to avoid distortion in the network switch. This mode provides fault tolerance.
XOR (balance-xor): Transmit network packets based on a hash of the packet's source and destination. The default algorithm only considers MAC addresses (layer2). Newer versions allow selection of additional policies based on IP addresses (layer2+3) and TCP/UDP port numbers (layer3+4). This selects the same NIC slave for each destination MAC address, IP address, or IP address and port combination, respectively. This mode provides load balancing and fault tolerance.
Broadcast (broadcast): Transmit network packets on all slave network interfaces. This mode provides fault tolerance.
IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP), the default mode: Creates aggregation groups that share the same speed and duplex settings. Utilizes all slave network interfaces in the active aggregator group according to the 802.3ad specification. This mode is similar to the XOR mode above and supports the same balancing policies. The link is set up dynamically between two LACP-supporting peers.
Adaptive transmit load balancing (balance-tlb): Linux bonding driver mode that does not require any special network-switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
Adaptive load balancing (balance-alb): Includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface, such that different network peers use different MAC addresses for their network packet traffic.
NOTE: PTPv2 and LACP on the 10G Management interfaces are mutually exclusive.
Network port configuration can be achieved using a) the web interface, or b) the SSH command line interface (CLI). Using the web interface is the easiest route; however, in highly constrained network environments a pure CLI based configuration can be easier.
WEB INTERFACE: NETWORK CONFIG
From the dashboard page, start by selecting the configuration menu option as shown below (highlighted in green).
Then edit the network configuration's IP/Netmask/Gateway/DNS settings as shown in the image below. After each field has been edited, the system automatically saves and updates the system settings (a save button is not required). After completing the update, refresh the web page to confirm the new settings.
Select the tools menu from the top toolbar, as shown in the image below.
And finally select the Power Cycle / Reboot button to restart the system
When using an FMADIO Packet Capture system for analytics processing, the SSD resources can be split into Capture devices and Scratch disk space. As scratch disk space, 1-16TB of SSD can be mounted as a general purpose file system used to store temporary/intermediate network packet processing results.
The system should have scratch disks set up and visible on the GUI as follows. If this has not been configured, contact support@fmad.io for how to configure it.
In the above example there are 2 disks, SCR0 and SCR1, enabled as scratch disks. These are seen on the file system as
NOTE: the /dev/* mount point may change from time to time, please use the /opt/fmadio/disk/scr* path name for all operations.
NOTE: After any change to the scratch disk configuration, it is recommended to run a Quick Format to ensure the SSD and Capture disk configuration is consistent.
By default all SSDs are dedicated to capture. This is specified in the configuration file
Capture disks are specified here
In the above example we have 4 x SSDs for capture. To convert half to capture and half to scratch disk, modify it as follows
This assigns the SSD serial numbers to the mount points /opt/fmadio/disk/scr0 and /opt/fmadio/disk/scr1. The actual serial numbers for each system will be different; the mount points (scr0/scr1) are the same.
After updating, confirm there are no syntax errors in the config file by running fmadiolua /opt/fmadio/etc/disk.lua as follows
The output above shows a correctly formatted file; the output below shows a configuration file with a syntax error (line 30 has some incorrect formatting).
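The check itself is simply:

    # parse the disk configuration; a syntax error is reported with its line number
    fmadiolua /opt/fmadio/etc/disk.lua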
After confirming the configuration file syntax is correct, reboot the system. The mount points scr0 and scr1 should be visible as shown below.
After /opt/fmadio/disk/scr[0-1] have been created, the next step is creating a /dev/md1 RAID0 partition as follows
This creates a /dev/md1 partition, which can be seen as the /dev/md1 device in the lsblk output.
More detail via the mdadm --detail command
The block device /dev/md1 is block level only; it contains no mountable file system. Next, create a btrfs filesystem on the device as follows
By default, FMADIO Packet Capture systems will start the RAID0 partition at boot time and mount the /dev/md1 scratch disk to /mnt/store1.
If it fails to mount, please issue the following command
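A sketch of the manual mount, matching the boot-time behaviour described earlier (lzo compression, /mnt/store1 mount point):

    # manually mount the scratch RAID0 volume if the automatic mount failed
    mount -o compress=lzo /dev/md1 /mnt/store1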
Setting | Description
11      | Writeback to magnetic storage is lowest priority (default value)
30      | Writeback to disk has higher priority than download or push speed
Setting | Description
true    | Enable compression (default setting)
false   | Disable disk compression; faster sustained writeback performance on 2U systems
Setting | Description
true    | Enable SSD CRC checks before writing to magnetic storage (default setting)
false   | Do not perform the SSD CRC data check; this improves sustained writeback performance
Setting | Description
true    | Calculate ECC RAID5 parity (default setting)
false   | No ECC calculation (RAID0 mode)