For an A+A configuration, device memory should be mapped as
cacheable from the CPU side, so have the kernel pre-map GPU device
memory using ioremap_cache().

Signed-off-by: Oak Zeng <Oak.Zeng@amd.com>
Reviewed-by: Christian Koenig <Christian.Koenig@amd.com>
Tested-by: Amber Lin <Amber.Lin@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
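
For illustration only, the mapping selection added by the diff below can be sketched as standalone user-space C. The helper names (`map_visible_vram`, `ioremap_cache_stub`, `ioremap_wc_stub`) are hypothetical stand-ins, not driver or kernel APIs; the real `ioremap_cache()`/`ioremap_wc()` establish cacheable vs. write-combined MMIO mappings and cannot run outside the kernel:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel's ioremap_cache()/ioremap_wc().
 * Tagging the low bits lets us observe which path was taken. */
static uintptr_t ioremap_cache_stub(uintptr_t base, size_t size)
{
	(void)size;
	return base | 1;	/* pretend: cacheable mapping */
}

static uintptr_t ioremap_wc_stub(uintptr_t base, size_t size)
{
	(void)size;
	return base | 2;	/* pretend: write-combined mapping */
}

/* Mirrors the branch in the patch: an A+A part (GPU connected to the
 * CPU over XGMI) gets a cacheable mapping of visible VRAM; a discrete
 * part keeps the write-combined mapping. */
static uintptr_t map_visible_vram(int connected_to_cpu,
				  uintptr_t aper_base,
				  size_t visible_vram_size)
{
	if (connected_to_cpu)
		return ioremap_cache_stub(aper_base, visible_vram_size);
	return ioremap_wc_stub(aper_base, visible_vram_size);
}
```

The design point is that only the mapping attribute changes based on `adev->gmc.xgmi.connected_to_cpu`; the base address and size passed to the mapping call are identical on both paths.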
        /* Change the size here instead of the init above so only lpfn is affected */
        amdgpu_ttm_set_buffer_funcs_status(adev, false);
 #ifdef CONFIG_64BIT
-       adev->mman.aper_base_kaddr = ioremap_wc(adev->gmc.aper_base,
-                                               adev->gmc.visible_vram_size);
+       if (adev->gmc.xgmi.connected_to_cpu)
+               adev->mman.aper_base_kaddr = ioremap_cache(adev->gmc.aper_base,
+                               adev->gmc.visible_vram_size);
+       else
+               adev->mman.aper_base_kaddr = ioremap_wc(adev->gmc.aper_base,
+                               adev->gmc.visible_vram_size);
 #endif
 
        /*