www.infradead.org Git - users/jedix/linux-maple.git/commit
Revert "x86/apic/x2apic: set affinity of a single interrupt to one cpu"
author	Mridula Shastry <mridula.c.shastry@oracle.com>
	Wed, 6 Mar 2019 21:29:55 +0000 (13:29 -0800)
committer	Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
	Wed, 13 Mar 2019 23:31:30 +0000 (16:31 -0700)
commit	45d6ad0e5a83c8b9378fc4e991ddb81a432495ca
tree	918bdc8330432aba0915d06bbb7edbc1032cb7f7
parent	8e87dd822e2c248c4df1c2b07eec3ba952068ec3
Revert "x86/apic/x2apic: set affinity of a single interrupt to one cpu"

Commit 092aa78c11f0
("x86/apic/x2apic: set affinity of a single interrupt to one cpu")
caused a performance regression on a block storage server on X5.

On the OCI X5 server, irqs were not bound to CPUs 1:1; the irq-to-cpu
affinity was set to multiple cpus
(/proc/$irq/smp_affinity: 00,003ffff0,0003ffff, i.e. cpus 0-17 and 36-53).
This is not the default behavior of the bnxt_en driver, which sets
the irq affinity itself when the NIC link comes up; OCI assumed that
most of the bnxt_en interrupts would go to cpu3.
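The smp_affinity value above is a comma-separated list of 32-bit hex
words, most significant word first. A minimal Python sketch (not part
of the kernel sources, just an illustration of the format) decodes the
mask quoted above into the cpu list it names:

```python
def parse_smp_affinity(mask):
    """Decode a /proc/irq/$N/smp_affinity string into a list of cpu numbers.

    The file holds comma-separated 32-bit hex words, most significant
    word first; bit b of word i (counting words from the least
    significant end) corresponds to cpu i*32 + b.
    """
    words = mask.split(",")
    cpus = []
    for i, word in enumerate(reversed(words)):
        value = int(word, 16)
        for bit in range(32):
            if value & (1 << bit):
                cpus.append(i * 32 + bit)
    return cpus

# "00,003ffff0,0003ffff" decodes to cpus 0-17 and 36-53
print(parse_smp_affinity("00,003ffff0,0003ffff"))
```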

After the patch "x86/apic/x2apic:
set affinity of a single interrupt to one cpu", setting an irq's
affinity to a single cpu works fine, but setting the affinity to
multiple cpus programs irq_cfg->domain/cpumask with only the first
online cpu in the affinity list. With the setting that caused the
perf issue, although /proc/$irq/smp_affinity was set to multiple
cpus, the irq_cfg->domain cpumask contained only cpu 0. This routed
all ens4f0-TxRx interrupts to cpu0, and the iscsi target application
was also running on cpu0 during the testing, which led to the
performance issue. The issue is no longer seen after the patch is
reverted.
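The first-online-cpu behavior described above can be modeled with a
short Python sketch. This is a hypothetical illustration of the
effect, not the kernel's C implementation, which operates on cpumasks
in arch/x86/kernel/apic/:

```python
def first_online_cpu(affinity, online):
    """Model of the reverted patch's effect: instead of spreading an
    interrupt across its whole affinity mask, route it to the single
    lowest-numbered online cpu in that mask (or None if no overlap)."""
    for cpu in sorted(affinity):
        if cpu in online:
            return cpu
    return None

# With affinity cpus 0-17 and 36-53 and all of them online, every
# interrupt lands on cpu 0 -- matching the regression described above.
affinity = set(range(0, 18)) | set(range(36, 54))
online = set(range(0, 72))
print(first_online_cpu(affinity, online))
```

With cpu 0 also hosting the iscsi target application, concentrating
every ens4f0-TxRx interrupt there explains the observed regression.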

Orabug: 29449976
Signed-off-by: Mridula Shastry <mridula.c.shastry@oracle.com>
Reviewed-by: Joe Jin <joe.jin@oracle.com>
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
arch/x86/kernel/apic/x2apic_cluster.c