x86/pae: use 64 bit atomic xchg function in native_ptep_get_and_clear
author    Juergen Gross <jgross@suse.com>
          Tue, 21 Aug 2018 15:37:55 +0000 (17:37 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Sat, 15 Sep 2018 07:45:35 +0000 (09:45 +0200)
commit    d85c2999a7b5e924172f0cc21f5804bbb1f34d69
tree      e43ac1c8c2c60efa2cb63fa012a0087598800dfd
parent    685a452ce3bfd030700a0360483ce59515facc41
x86/pae: use 64 bit atomic xchg function in native_ptep_get_and_clear

commit b2d7a075a1ccef2fb321d595802190c8e9b39004 upstream.

Using only 32-bit writes for the pte will result in an intermediate,
L1TF-vulnerable PTE. When running as a Xen PV guest, this will
immediately switch the guest to shadow mode, resulting in a loss of
performance.
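
For reference, the PAE version of native_ptep_get_and_clear() used to
clear the two 32-bit halves of the pte separately, roughly as follows
(a sketch of the pre-patch helper in pgtable-3level.h; exact details
may differ slightly):

  static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
  {
          pte_t res;

          /* xchg acts as a barrier before the setting of the high bits */
          res.pte_low = xchg(&ptep->pte_low, 0);
          res.pte_high = ptep->pte_high;
          /*
           * At this point pte_low is already 0 (not present) while
           * pte_high still holds the old PFN bits: exactly the
           * intermediate state L1TF can exploit, and what a Xen PV
           * hypervisor reacts to by switching to shadow mode.
           */
          ptep->pte_high = 0;

          return res;
  }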

Use arch_atomic64_xchg() instead, which performs the requested
operation atomically on all 64 bits.
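
With the fix, the whole 64-bit pte is read and cleared in a single
atomic exchange, roughly (a sketch matching the upstream change; the
backport renames the function as noted below):

  static inline pte_t native_ptep_get_and_clear(pte_t *ptep)
  {
          pte_t res;

          /* Read the old pte and clear all 64 bits in one atomic step. */
          res.pte = (pteval_t)arch_atomic64_xchg((atomic64_t *)ptep, 0);

          return res;
  }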

Some performance considerations according to:

https://software.intel.com/sites/default/files/managed/ad/dc/Intel-Xeon-Scalable-Processor-throughput-latency.pdf

The main number should be the latency, as there is no tight loop around
native_ptep_get_and_clear().

"lock cmpxchg8b" has a latency of 20 cycles, while "lock xchg" (with a
memory operand) isn't mentioned in that document. "lock xadd" has a
latency of 11 cycles, and plain xadd has 3 cycles less latency than
plain xchg, so we can assume a latency of about 14 cycles (11 + 3) for
"lock xchg".

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
[ Atomic operations gained an arch_ prefix in 8bf705d13039
("locking/atomic/x86: Switch atomic.h to use atomic-instrumented.h") so
s/arch_atomic64_xchg/atomic64_xchg/ for backport.]
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/x86/include/asm/pgtable-3level.h