There are three atomic_t members, each 4 bytes in size. This leaves a
4-byte hole on x86_64 according to pahole. Move map_count into the first
cacheline to fill this hole.
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
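
For illustration only, a minimal userspace sketch of the layout effect (this
is not mm_struct; the member names and C11 atomics below are made up for the
example): three 4-byte members followed by an 8-byte member force 4 bytes of
alignment padding, which pahole reports as a hole, and moving a plain int into
that slot reclaims it.

	#include <stdio.h>
	#include <stdatomic.h>

	/* Before: three 4-byte counters are followed by an 8-byte member,
	 * so the compiler inserts 4 bytes of padding after the third
	 * counter (pahole reports this as a 4-byte hole). */
	struct before {
		atomic_int users;		/* 4 bytes */
		atomic_int count;		/* 4 bytes */
		atomic_int other;		/* 4 bytes */
		/* 4-byte hole to align the next member to 8 bytes */
		atomic_long pgtables_bytes;	/* 8 bytes */
		int map_count;			/* 4 bytes, later in the struct */
	};

	/* After: moving the plain int up next to the three 4-byte counters
	 * gives four naturally packed 4-byte members, so the 8-byte member
	 * is already aligned and the hole disappears. */
	struct after {
		atomic_int users;
		atomic_int count;
		atomic_int other;
		int map_count;			/* fills the former 4-byte hole */
		atomic_long pgtables_bytes;
	};

	int main(void)
	{
		/* On x86_64 this typically prints 32 and 24. */
		printf("before: %zu bytes, after: %zu bytes\n",
		       sizeof(struct before), sizeof(struct after));
		return 0;
	}
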
* &struct mm_struct is freed.
*/
atomic_t mm_count;
+ int map_count; /* number of VMAs */
#ifdef CONFIG_MMU
atomic_long_t pgtables_bytes; /* PTE page table pages */
#endif
- int map_count; /* number of VMAs */
spinlock_t page_table_lock; /* Protects page tables and some
* counters