i40e: optimise prefetch page refcount

[ Upstream commit 1fa5cef283420b3dad93cd6ab04d7125bc1562de ]

Originally the refcount of the rx_buffer page was incremented here, so
prefetchw was needed. But after commit 1793668c3b ("i40e/i40evf: Update
code to better handle incrementing page count") the refcount is no
longer incremented on every pass, so change prefetchw to prefetch.
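
For context, a rough sketch of the batched refcounting scheme that
commit introduced (names simplified for illustration, not the exact
driver code): the page refcount is charged once in bulk, and each reuse
decrements a plain per-buffer counter instead of doing an atomic write
to struct page -- which is why the write-intent prefetch is no longer
justified.

#include <linux/kernel.h>	/* USHRT_MAX */
#include <linux/page_ref.h>	/* page_ref_add() */

/* Illustrative only: one atomic refcount write covers many reuses. */
struct rx_buffer_sketch {
	struct page *page;
	unsigned short pagecnt_bias;	/* local credit against the refcount */
};

static void rx_buffer_charge(struct rx_buffer_sketch *bi)
{
	page_ref_add(bi->page, USHRT_MAX - 1);	/* single atomic op up front */
	bi->pagecnt_bias = USHRT_MAX;
}

static void rx_buffer_reuse(struct rx_buffer_sketch *bi)
{
	bi->pagecnt_bias--;	/* hot path: no write to struct page */
}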

Now the prefetch mainly serves page_address(), which accesses struct
page only when WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL is defined;
otherwise it derives the address from the page pointer arithmetically,
so prefetch the page only in those configurations.
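
For reference, in the common configuration where neither macro is
defined, page_address() reduces to lowmem_page_address() and never
dereferences the struct page at all; a simplified sketch of the
include/linux/mm.h path (renamed here to mark it as an illustration):

#include <linux/mm.h>	/* page_to_virt() */

/* Simplified: with neither WANT_PAGE_VIRTUAL nor HASHED_PAGE_VIRTUAL
 * defined, the mapping is computed from the page pointer alone, so
 * prefetching the struct page would pull in a cache line never read.
 */
static inline void *lowmem_page_address_sketch(const struct page *page)
{
	return page_to_virt(page);	/* pfn -> phys -> virt arithmetic */
}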

Jakub suggested defining prefetch_page_address() in a common header.

Reported-by: kernel test robot <lkp@intel.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Li RongQing 2020-08-18 15:07:57 +08:00 committed by Greg Kroah-Hartman
parent 405bfd36f0
commit b05fdd74ff
2 changed files with 9 additions and 1 deletion

drivers/net/ethernet/intel/i40e/i40e_txrx.c

@@ -1971,7 +1971,7 @@ static struct i40e_rx_buffer *i40e_get_rx_buffer(struct i40e_ring *rx_ring,
 	struct i40e_rx_buffer *rx_buffer;
 
 	rx_buffer = i40e_rx_bi(rx_ring, rx_ring->next_to_clean);
-	prefetchw(rx_buffer->page);
+	prefetch_page_address(rx_buffer->page);
 
 	/* we are reusing so sync this buffer for CPU use */
 	dma_sync_single_range_for_cpu(rx_ring->dev,

include/linux/prefetch.h

@@ -15,6 +15,7 @@
 #include <asm/processor.h>
 #include <asm/cache.h>
 
+struct page;
 /*
 	prefetch(x) attempts to pre-emptively get the memory pointed to
 	by address "x" into the CPU L1 cache.
@@ -62,4 +63,11 @@ static inline void prefetch_range(void *addr, size_t len)
 #endif
 }
 
+static inline void prefetch_page_address(struct page *page)
+{
+#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
+	prefetch(page);
+#endif
+}
+
 #endif