Question about low-level ETH init

Hi everyone,

We integrated FreeRTOS+TCP V2.3.3 on our STM32H7 platform. Occasionally we ran into an assertion originating from the low-level ETH RX polling function HAL_ETH_IsRxDataAvailable (in stm32hxx_hal_eth.c, line 1156):

/* FreeRTOS+TCP only handles packets that fit in 1 descriptor. */
configASSERT( ( ( ulDesc3 & ETH_DMARXNDESCWBF_FD ) != 0U ) && ( ( ulDesc3 & ETH_DMARXNDESCWBF_LD ) != 0U ) );

Upon closer inspection, received Ethernet frames carrying more than 1498 bytes of payload triggered this assertion, i.e. the DMA no longer delivered them in a single descriptor with both the FD (first descriptor) and LD (last descriptor) flags set.
This surprised me, since the maximum should be

#define ipconfigNETWORK_MTU    ( 1500 )

and no packets exceeding that size were observed on our testing network.

The ETH peripheral/its DMA seemed to cut off the last two bytes when such large frames were received.
So I tracked down the ported init function xNetworkInterfaceInitialise and saw this (NetworkInterface.c, line 229):

xEthHandle.Init.RxBuffLen = ( ETH_RX_BUF_SIZE - ipBUFFER_PADDING ) & ~( ( uint32_t ) 3U );

Where ETH_RX_BUF_SIZE is 1524 bytes and ipBUFFER_PADDING has its default value of 10 bytes. The AND rounds the result down to a multiple of 4 for alignment, so the RxBuffLen member becomes 1512 and the effective MTU is 1498.
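
Spelled out with the default values (a worked example, not library code):

/* ( ETH_RX_BUF_SIZE - ipBUFFER_PADDING ) & ~3U
 *     = ( 1524 - 10 ) & ~3U
 *     = 1514 & ~3U
 *     = 1512  -> bytes available per RX DMA buffer
 * 1512 - 14 ( Ethernet header ) = 1498  -> maximum IP payload,
 * 2 bytes short of ipconfigNETWORK_MTU ( 1500 ).
 */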

My question is: why does the ETH peripheral need to take the padding into account? As far as I understand, the padding is only used in a copied version of the frame on the heap (in some kind of linked list between the heap copy and the descriptors), and not in the buffer accessed by the ETH hardware.
The heap copy mechanism in pxGetNetworkBufferWithDescriptor seems to take the “raw” received byte count from the HAL and add the padding to it:

 pxReturn->pucEthernetBuffer = ( uint8_t * ) pvPortMalloc( xRequestedSizeBytes + ipBUFFER_PADDING );

Thanks in advance for your help.

The peripheral should be set to a DMA buffer size that covers the full packet and matches the peripheral's (DMA) requirements. For the STM32F4 I found that 1536 is a good size (as far as I remember), supporting the standard MTU of 1500.
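
A minimal sketch of that idea for the H7, assuming the STM32H7 HAL's ETH_InitTypeDef and a buffer size that is decoupled from ipBUFFER_PADDING (the name niRX_BUFFER_SIZE is mine):

/* Illustrative only: size the RX DMA buffers for a complete Ethernet frame,
 * independent of ipBUFFER_PADDING. 1536 = 0x600 is a multiple of 4 and of
 * 32- and 64-byte cache lines. */
#define niRX_BUFFER_SIZE    1536U

/* A 1500-byte MTU plus the 14-byte Ethernet header ( 1514 bytes ) fits easily. */
xEthHandle.Init.RxBuffLen = niRX_BUFFER_SIZE;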

Edit: Found one of the helpful posts from @htibosch

Related to your question.

Hi,

Thanks for your reply. From my naive point of view this just seems like a waste of memory, since the buffers are statically allocated, but ipBUFFER_PADDING artificially decreases their effective size.

Hartmut remembers well: 1536 is a perfect size for a DMA buffer.
“MTU” is a confusing term, because the Ethernet header ( 14 bytes ) is not included: an MTU of 1500 bytes corresponds to a 1514-byte Ethernet frame.

In many places in the FreeRTOS+TCP library, a network buffer is passed to a function as a pointer to its protocol payload, for instance when sending or receiving zero-copy UDP packets.

When the IP-task receives a pointer to a payload, it will find a pointer to the containing network buffer just before the beginning of the buffer, at an offset of -10 bytes.
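
As a sketch of that hidden header (my illustration, not the library's literal code; it assumes the default ipBUFFER_PADDING of 10 bytes and a simplified descriptor type):

#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"    /* for pvPortMalloc() */

#define ipBUFFER_PADDING    10U

typedef struct xNETWORK_BUFFER
{
    uint8_t * pucEthernetBuffer;    /* Points just past the hidden header. */
    size_t xDataLength;
} NetworkBufferDescriptor_t;

/* Allocate a buffer and hide a back-pointer to its descriptor in front of it. */
static uint8_t * prvAllocatePaddedBuffer( NetworkBufferDescriptor_t * pxDescriptor,
                                          size_t xRequestedSizeBytes )
{
    uint8_t * pucBuffer = ( uint8_t * ) pvPortMalloc( xRequestedSizeBytes + ipBUFFER_PADDING );

    if( pucBuffer != NULL )
    {
        /* Store the descriptor pointer in the first bytes of the padding. */
        *( ( NetworkBufferDescriptor_t ** ) pucBuffer ) = pxDescriptor;
        /* Hand out the address just past the padding. */
        pucBuffer += ipBUFFER_PADDING;
    }

    return pucBuffer;
}

/* Recover the descriptor by walking back over the padding. */
static NetworkBufferDescriptor_t * prvBufferToDescriptor( const uint8_t * pucEthernetBuffer )
{
    return *( ( NetworkBufferDescriptor_t * const * ) ( pucEthernetBuffer - ipBUFFER_PADDING ) );
}

The value 10 is 8 bytes reserved for that pointer plus 2 filler bytes, so that the IP header, which starts 14 bytes into the frame, ends up on a 32-bit boundary.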

Here is an older post about this subject.

Summary: we want to reserve 1514 bytes for the largest packet, and 10 bytes for a hidden header.

That makes 1524 bytes, which is 0x5F4. If you put the DMA buffers in a static array, that only gives an alignment of 4 bytes. Now if you make each buffer 12 bytes longer, you get a great alignment: 0x600. That is good both for any DMA handler and for any data caching! Data caches often have cache lines of 32 or 64 bytes.

So, if your DMA is happy with an alignment of 4 bytes, and data caching is disabled for the network buffers, then you can use a size of 1524 bytes.
Otherwise I would recommend a size of 1536 bytes.
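
For illustration, a statically allocated pool of such buffers (my sketch; the count, the names, and the GCC alignment attribute are assumptions, with 32 bytes matching a Cortex-M7 cache line):

#define niBUFFER_SIZE     1536U    /* 0x600: a multiple of 32- and 64-byte cache lines. */
#define niBUFFER_COUNT    8U

/* Every ucNetworkBuffers[ n ] starts on a 32-byte boundary, because both
 * the array base and the row size are multiples of 32 bytes. */
static uint8_t ucNetworkBuffers[ niBUFFER_COUNT ][ niBUFFER_SIZE ]
    __attribute__( ( aligned( 32 ) ) );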

Got it, thanks for your help and comments @hs2 and @htibosch.