net: dwc_eth_qos: Invalidate RX packet DMA buffer

This patch prevents an issue where the RX packet might have been
accessed by the CPU, which then holds cached copies of the packet data
in its caches and possibly in various write buffers. That stale data
may later be evicted from the caches into DRAM while the same buffer
is being written by the DMA, corrupting the received packet.

By invalidating the buffer after the CPU has accessed it and before
the DMA populates it, it is ensured that the buffer will not be
corrupted.

Reviewed-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Joe Hershberger <joe.hershberger@ni.com>
Cc: Patrice Chotard <patrice.chotard@st.com>
Cc: Patrick Delaunay <patrick.delaunay@st.com>
Cc: Ramon Fried <rfried.dev@gmail.com>
Cc: Stephen Warren <swarren@nvidia.com>
commit a83ca0c280
parent 738ee270fe
Author: Marek Vasut <marex@denx.de>
Date:   2020-03-23 02:09:55 +01:00

@@ -1476,6 +1476,9 @@ static int eqos_free_pkt(struct udevice *dev, uchar *packet, int length)
 	}
 
 	rx_desc = &(eqos->rx_descs[eqos->rx_desc_idx]);
+
+	eqos->config->ops->eqos_inval_buffer(packet, length);
+
 	rx_desc->des0 = (u32)(ulong)packet;
 	rx_desc->des1 = 0;
 	rx_desc->des2 = 0;
@@ -1538,6 +1541,9 @@ static int eqos_probe_resources_core(struct udevice *dev)
 	}
 	debug("%s: rx_pkt=%p\n", __func__, eqos->rx_pkt);
 
+	eqos->config->ops->eqos_inval_buffer(eqos->rx_dma_buf,
+			EQOS_MAX_PACKET_SIZE * EQOS_DESCRIPTORS_RX);
+
 	debug("%s: OK\n", __func__);
 	return 0;
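
For context, the two hunks above sit at different points in the RX
lifecycle: the probe-time call invalidates the freshly allocated DMA
region before any descriptor points at it, while the per-packet call in
eqos_free_pkt runs once the CPU has finished reading a packet and the
buffer is about to be handed back to the hardware. The following is a
minimal sketch of one receive iteration, assuming U-Boot's standard
eth_ops recv/free_pkt contract; the polling helper itself is
illustrative, not the actual net core code:

#include <common.h>
#include <dm.h>
#include <net.h>

/*
 * Illustrative single RX poll (hypothetical helper): shows why the
 * per-packet invalidation belongs in the free_pkt path, after the CPU
 * has consumed the packet and before the descriptor is re-armed for DMA.
 */
static int eqos_rx_poll_once(struct udevice *dev)
{
	struct eth_ops *ops = eth_get_ops(dev);
	uchar *packet;
	int length;

	length = ops->recv(dev, 0, &packet);	/* buffer is now CPU-owned */
	if (length <= 0)
		return length;

	net_process_received_packet(packet, length);	/* CPU reads fill the cache */

	/* Hand the buffer back to the DMA; eqos_free_pkt invalidates it first. */
	return ops->free_pkt(dev, packet, length);
}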