i40e: Temporary workaround for DMA map issue
This is a quick temporary workaround for Bug
22107931.
An iommu DMA map failure occurs when i40e saturates the iommu with a large
number of DMA map requests. For example, a system with 128 CPUs can have at
most 256K-1 entries in the iommu table, given an 8K page size and a 32-bit
iommu (i.e. 2^31/PAGE_SIZE). On such a system, the i40e driver by default has
128 Queue Pairs (QPs) per interface. For each Rx queue, i40e by default
allocates 512 Rx buffers, which generates 64K DMA map requests per interface.
Four i40e interfaces therefore generate a total of 256K DMA map requests.
That is more than the iommu can accommodate and thus results in DMA map
failures.
The correct fix would be for the i40e driver not to saturate iommu
resources and to gracefully bail out when a DMA map failure occurs.
However, due to the severity of the issue and the complexity involved in
implementing the correct resolution, this patch provides a quick temporary
workaround by simply limiting the number of QPs to no more than 32.
For the record, QP = 32 was chosen because the QP count has to be a power
of 2, and QP = 64 cannot be used because in that case the number of DMA map
requests for Rx and Tx combined would be 256K, while the iommu can only
accommodate 256K-1.
i.e.
64 Rx queues * 512 Rx buffers = 32K, for 4 interfaces = 128K
64 Tx queues * 512 Tx buffers = 32K, for 4 interfaces = 128K
When an appropriate fix (as mentioned above) is ready, this quick temporary
workaround will be removed.
Note: this temporary workaround can have a negative impact on i40e network
performance.
Signed-off-by: Tushar Dave <tushar.n.dave@oracle.com>