FW:10790+
The FMADIO system has a massive amount of CPU and RAM available for LXC containers. One problem this raises is how to isolate one container's resources from another's.
CPU Resources
The general splitting of CPUs is shown below; there are two categories.
Isolated CPUs
These are CPUs removed from the kernel's run queue using the isolcpus= kernel boot parameter. These CPUs are used for the FMADIO realtime processing functions, typically latency sensitive capture processes. The selection of these CPUs is carefully chosen and baked into the system.
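Both the boot parameter and the resulting isolated set can be inspected on a running system:
cat /proc/cmdline
cat /sys/devices/system/cpu/isolated
The first shows the isolcpus= setting the kernel booted with, the second shows the CPU list the kernel has actually isolated.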
Linux Kernel Run Queue CPUs
All other CPUs end up in the Linux kernel's general purpose run queue. This allows the kernel to schedule workloads appropriately based on demand, and typically on the location of any connected device / proximity to memory and cache.
When adding LXC containers to the system, by default all CPUs can be used by the container. For example, the LXC config below has no CPU configuration set:
# lxc config generated by ubuntu install.lua
# set cpu to default
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = x86_64
lxc.rootfs.path = dir:/opt/fmadio/lxc/ubuntu22-202312061042/rootfs
lxc.uts.name = fmadio200v4-636-ubuntu22
lxc.net.0.type = veth
lxc.net.0.link = fmad0
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 192.168.255.172/24
lxc.net.1.type = veth
lxc.net.1.link = man0
lxc.net.1.flags = up
lxc.net.1.ipv4.address = 192.168.2.216/24
lxc.net.1.ipv4.gateway = 192.168.2.1
lxc.prlimit.nofile = 65535
lxc.prlimit.memlock = unlimited
In this case the LXC's processes will be scheduled on the general purpose Linux kernel run queue, i.e. there is no CPU isolation of the LXC. This is shown below.
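A simple way to confirm this is to check the CPU affinity of a process running inside the container; <name> and <pid> below are placeholders for the container name and the process ID returned by lxc-info:
lxc-info -n <name> -p
grep Cpus_allowed_list /proc/<pid>/status
With no CPU configuration set, the allowed list typically spans all of the general purpose CPUs.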
Isolating LXC CPU resources
CPUs can be isolated and assigned to a specific LXC using the Linux cgroup v2 functionality. This effectively creates a new Linux run queue with a specific set of CPUs, segmenting the FMADIO host system's run queue from the LXC's run queue and partitioning the CPUs entirely. This is depicted below.
This is achieved by adding the following lines to the LXC config file:
lxc.cgroup2.cpuset.cpus=7,8,9
lxc.cgroup2.cpuset.cpus.partition=root
In this case we are dedicating CPUs 7, 8 and 9 to be used only by this LXC. Setting the CPU partition to “root” effectively creates its own kernel run queue, on which only processes attached to that cgroup (e.g. the entire LXC) can be scheduled.
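One way to observe the partition taking effect, assuming cgroup v2 is mounted at the standard /sys/fs/cgroup location, is to read the effective CPU list of the cgroup that is the parent of the container's cgroup (for a container placed directly under the root cgroup this is simply):
cat /sys/fs/cgroup/cpuset.cpus.effective
Once the LXC starts with the above settings, CPUs 7,8,9 are taken away from the parent's effective CPU list, so they should no longer appear in this output.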
Isolation on older firmware
Unfortunately the above is only possible on recent firmware (FW:10790+). For systems running older firmware, some manual modifications are required.
Step 1) Update kernel boot parameter
By default the Linux kernel will use a hybrid cgroup v1 / v2 mode, which is not ideal for the above setup. We need to use cgroup v2 only to take advantage of all the features.
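A quick way to check the current mode, assuming /sys/fs/cgroup is mounted, is the filesystem type reported for it: cgroup2fs indicates a pure v2 hierarchy, while tmpfs indicates v1 or hybrid mode.
stat -fc %T /sys/fs/cgroup/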
Edit the boot configuration file
/mnt/system/EFI/syslinux/syslinux.cfg
find the kernel boot command, and append:
cgroup_no_v1=all
This disables all cgroup v1 functions, so only cgroup v2 is used.
The result looks similar to the sketch below; the label, kernel image and other parameters are placeholders and will differ per system:
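LABEL fmadio
    KERNEL bzImage
    APPEND <existing boot parameters> cgroup_no_v1=all
All existing parameters (including any isolcpus= setting) must be kept; only cgroup_no_v1=all is added.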
Save and reboot the system.
Step 2) Mount cgroup sysfs manually
Modify the automatic boot script
/opt/fmadio/etc/boot.lua
Paste in the following:
-- mount the cgroup v2 hierarchy
os.execute([[sudo mount -t cgroup2 none /sys/fs/cgroup/ ]])
-- enable the cpuset controller for child cgroups
os.execute([[echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control]])
This mounts the cgroup v2 sysfs interface and enables the cpuset controller, so LXC can now access and utilize it.
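To verify, check that cpuset appears both in the list of available controllers and in the set delegated to child cgroups (the other controllers listed will vary per system):
cat /sys/fs/cgroup/cgroup.controllers
cat /sys/fs/cgroup/cgroup.subtree_control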
Step 3) Configure the LXC config
Per the above, modify the LXC config to isolate specific CPUs and ensure it is a root partition:
lxc.cgroup2.cpuset.cpus=7,8,9
lxc.cgroup2.cpuset.cpus.partition=root
Step 4) Done
Confirm CPUs are isolated as expected.
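A minimal check, assuming the container's cgroup lands at the usual LXC location (the exact path, e.g. /sys/fs/cgroup/lxc/<name> or /sys/fs/cgroup/lxc.payload.<name>, depends on the LXC version; <name> is the container name):
cat /sys/fs/cgroup/lxc/<name>/cpuset.cpus
cat /sys/fs/cgroup/lxc/<name>/cpuset.cpus.partition
The first should report 7-9 and the second should report root. If the kernel could not activate the partition, it reports "root invalid" instead.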