www.infradead.org Git - users/dwmw2/linux.git/commitdiff
iommu/amd: make sure TLB to be flushed before IOVA freed
authorZhen Lei <thunder.leizhen@huawei.com>
Wed, 6 Jun 2018 02:18:46 +0000 (10:18 +0800)
committerGreg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 3 Oct 2018 23:59:04 +0000 (16:59 -0700)
[ Upstream commit 3c120143f584360a13614787e23ae2cdcb5e5ccd ]

Although the mapping has already been removed from the page table, it may
still exist in the TLB. If the freed IOVA is reused by another caller before
the flush operation has completed, the new user cannot correctly access its
memory.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Fixes: b1516a14657a ("iommu/amd: Implement flush queue")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/iommu/amd_iommu.c

index 596b95c50051dcb75adb67496579af4b501d0ace..d77c97fe4a236dc1a527bfdb35e151177ace678c 100644 (file)
@@ -2405,9 +2405,9 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
        }
 
        if (amd_iommu_unmap_flush) {
-               dma_ops_free_iova(dma_dom, dma_addr, pages);
                domain_flush_tlb(&dma_dom->domain);
                domain_flush_complete(&dma_dom->domain);
+               dma_ops_free_iova(dma_dom, dma_addr, pages);
        } else {
                pages = __roundup_pow_of_two(pages);
                queue_iova(&dma_dom->iovad, dma_addr >> PAGE_SHIFT, pages, 0);