Can IRQ affinity be set for a device that uses MSI-X (Linux)?

I've set IRQ affinity in the past on Linux by writing values to the proc files. [1] However, I noticed that when I do this on a system where the PCIe device I want to set affinity for (e.g. a NIC) uses MSI-X, the /proc/interrupts counters for that IRQ increment on every core, not just on the single core I set it to. On a non-MSI-X system, the specified core services the interrupts.
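
For illustration, this is roughly what I mean (the IRQ number 45, the device name eth0, and the mask below are placeholders, not my actual values):

# List the interrupt lines the device registered; an MSI-X NIC
# typically registers several vectors, one per queue, each with its own IRQ.
grep eth0 /proc/interrupts

# Pin one IRQ to CPU0 (smp_affinity takes a hex CPU bitmask; bit 0 = CPU0).
echo 1 > /proc/irq/45/smp_affinity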

I'm using Linux kernel 3.11.

Short: Can IRQ affinity be set for devices that use MSI-X interrupts?

[1] https://www.kernel.org/doc/Documentation/IRQ-affinity.txt

Tarnish answered 11/10, 2013 at 21:02 Comment(1)
good freakin question – Bertrando

Unburying this thread: I am trying to set IRQ (MSI-X) CPU affinity for my SATA controller in order to avoid CPU-switching delays. So far, I got the currently used IRQ via:

IRQ=$(awk -F':' '/ahci/ {gsub(/ /, "", $1); print $1}' /proc/interrupts)

Just looking at cat /proc/interrupts shows that multiple CPUs are involved in handling my SATA controller's interrupts.

I then set the IRQ affinity via the hexadecimal bitmask 02, which sets bit 1 and thus binds the IRQ to CPU1 (the core I chose):

echo 02 > /proc/irq/$IRQ/smp_affinity
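
Side note: since smp_affinity is a hexadecimal CPU bitmask, the mask for a single target CPU can be computed like this (a quick sketch; cpu=1 is just the example value matching the mask above):

cpu=1                        # CPU number as shown in /proc/interrupts
printf '%x\n' $((1 << cpu))  # prints 2, i.e. the mask 02 written above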

I can test the effective affinity with

cat /proc/irq/$IRQ/effective_affinity

After a while of disk benchmarking, I noticed that the affinity stays as configured. Example:

Before the benchmark, having bound IRQ 134 to CPU1:

 cat /proc/interrupts | egrep "ahci|CPU"
             CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
  134:   12421581          1          0         17       4166          0          0          0  IR-PCI-MSI 376832-edge      ahci[0000:00:17.0]

After the benchmark:

 cat /proc/interrupts | egrep "ahci|CPU"
            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
 134:   12421581    2724836          0         17       4166          0          0          0  IR-PCI-MSI 376832-edge      ahci[0000:00:17.0]

So in my case, the affinity I set up stayed as it should: only the bound CPU's counter grew. I can only imagine that you have irqbalance running as a service. Have you checked that? In my case, running irqbalance redistributes the interrupts and overrides the affinity I set up.
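
To rule irqbalance out, something along these lines should work on a systemd-based distro such as my CentOS 8 (a sketch; stopping the service suspends balancing for all IRQs):

# Check whether irqbalance is running.
systemctl is-active irqbalance

# Stop it temporarily, then re-apply and re-check the affinity.
systemctl stop irqbalance
echo 02 > /proc/irq/$IRQ/smp_affinity
cat /proc/irq/$IRQ/effective_affinity

If stopping the whole service is too heavy-handed, irqbalance also has a --banirq option to exclude individual IRQs from balancing.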

My test system: CentOS 8.2 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10 11:09:32 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

In the end, I did not achieve better disk utilization or performance. My original problem is that fio benchmarks do not drive the disk to 100% utilization, merely to values between 75% and 85% (and sometimes 97%, without me knowing why).

Proxy answered 16/7, 2020 at 9:27 Comment(2)
"My initial problem is that fio benchmarks do not use 100% disk". For problems like that you would at least need to see your full fio job but that would be more of a new question (and probably better on a site like superuser). If you're using a file in a filesystem as opposed to a block device I would not be surprised...Resistant
Thanks, but IRQ CPU switching was on my suspect list, so I needed to rule it out, hence my unburying of this thread ;) – Proxy
