We integrated FreeRTOS+TCP V2.3.3 on our STM32H7 platform. Occasionally we ran into an assertion originating from the low-level ETH RX polling function HAL_ETH_IsRxDataAvailable (in stm32hxx_hal_eth.c, line 1156):
```c
/* FreeRTOS+TCP only handles packets that fit in 1 descriptor. */
configASSERT( ( ( ulDesc3 & ETH_DMATXNDESCWBF_FD ) != 0U ) &&
              ( ( ulDesc3 & ETH_DMATXNDESCWBF_LD ) != 0U ) );
```
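If I read the assert correctly, it requires the First Descriptor (FD) and Last Descriptor (LD) write-back flags to be set on the same descriptor, i.e. the whole frame must have fit into a single RX buffer. A self-contained toy sketch of that condition (the flag values here are illustrative stand-ins, not the real HAL masks):

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the HAL's descriptor write-back flag masks. */
#define DESC_FLAG_FD    ( 1UL << 29 )   /* first descriptor of a frame */
#define DESC_FLAG_LD    ( 1UL << 28 )   /* last descriptor of a frame  */

/* Nonzero only when a frame both starts and ends in this descriptor. */
static int xFitsInOneDescriptor( uint32_t ulDesc3 )
{
    return ( ( ulDesc3 & DESC_FLAG_FD ) != 0U ) &&
           ( ( ulDesc3 & DESC_FLAG_LD ) != 0U );
}

int main( void )
{
    /* A frame longer than one RX buffer is split across descriptors:
     * the first carries FD only, the last carries LD only, so neither
     * satisfies the assert's condition. */
    printf( "first part of split frame: %d\n", xFitsInOneDescriptor( DESC_FLAG_FD ) );                /* 0 */
    printf( "last part of split frame : %d\n", xFitsInOneDescriptor( DESC_FLAG_LD ) );                /* 0 */
    printf( "frame in one descriptor  : %d\n", xFitsInOneDescriptor( DESC_FLAG_FD | DESC_FLAG_LD ) ); /* 1 */
    return 0;
}
```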
Upon closer inspection, received Ethernet frames carrying more than 1498 bytes of payload (i.e. an effective MTU > 1498) triggered this assertion.
This surprised me, since the configured maximum is

```c
#define ipconfigNETWORK_MTU    ( 1500 )
```

and no packets with a larger MTU were observed on our test network.
The ETH peripheral/its DMA appeared to truncate the last two bytes when such large frames were received.
So I tracked down the ported init function xNetworkInterfaceInitialise and saw this (NetworkInterface.c, line 229):
```c
xEthHandle.Init.RxBuffLen = ( ETH_RX_BUF_SIZE - ipBUFFER_PADDING ) & ~( ( uint32_t ) 3U );
```
Here ETH_RX_BUF_SIZE is 1524 bytes and ipBUFFER_PADDING has its default value of 10 bytes. The AND mask rounds the result down to a multiple of four, so the RxBuffLen member ends up as 1512; subtracting the 14-byte Ethernet header leaves an effective MTU of 1498.
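For reference, a minimal standalone sketch of that arithmetic (the constants are copied from our configuration rather than pulled from the real HAL/stack headers):

```c
#include <stdint.h>
#include <stdio.h>

/* Values from our configuration; stand-ins for the real headers. */
#define ETH_RX_BUF_SIZE     1524U   /* per-descriptor RX buffer size       */
#define ipBUFFER_PADDING    10U     /* default FreeRTOS+TCP buffer padding */
#define ETH_HEADER_SIZE     14U     /* dst MAC + src MAC + EtherType       */

int main( void )
{
    /* Mirrors the RxBuffLen computation in xNetworkInterfaceInitialise. */
    uint32_t ulRxBuffLen = ( ETH_RX_BUF_SIZE - ipBUFFER_PADDING ) & ~( ( uint32_t ) 3U );

    printf( "RxBuffLen     = %lu\n", ( unsigned long ) ulRxBuffLen );                       /* 1512 */
    printf( "effective MTU = %lu\n", ( unsigned long ) ( ulRxBuffLen - ETH_HEADER_SIZE ) ); /* 1498 */
    return 0;
}
```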
My question is: why does the ETH peripheral need to take the padding into account at all? As far as I understand, the padding is only used in the heap copy of the frame (apparently to link the heap copy back to its network buffer descriptor), and not in the buffer that the ETH hardware actually writes to.
The heap copy mechanism in pxGetNetworkBufferWithDescriptor seems to take the "raw" received byte count from the HAL and add the padding on top:

```c
pxReturn->pucEthernetBuffer = ( uint8_t * ) pvPortMalloc( xRequestedSizeBytes + ipBUFFER_PADDING );
```
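For what it's worth, this is how I understand the padding being used. The following is a self-contained paraphrase of my reading of the FreeRTOS+TCP buffer allocation code, not a verbatim copy; the stand-in types and the helper vAttachBuffer are mine:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in so the sketch compiles on its own; the real type and
 * pvPortMalloc come from FreeRTOS+TCP. */
typedef struct xNETWORK_BUFFER
{
    uint8_t * pucEthernetBuffer;
} NetworkBufferDescriptor_t;

#define ipBUFFER_PADDING    10U

/* My reading of pxGetNetworkBufferWithDescriptor(): the padding in front of
 * the frame holds a back-pointer from the heap copy to its descriptor, plus
 * filler bytes for alignment. */
static void vAttachBuffer( NetworkBufferDescriptor_t * pxDescriptor, size_t xRequestedSizeBytes )
{
    uint8_t * pucBlock = malloc( xRequestedSizeBytes + ipBUFFER_PADDING );

    if( pucBlock != NULL )
    {
        /* Start of the block: pointer back to the owning descriptor. */
        *( ( NetworkBufferDescriptor_t ** ) pucBlock ) = pxDescriptor;

        /* The Ethernet frame itself starts only after the padding. */
        pxDescriptor->pucEthernetBuffer = &( pucBlock[ ipBUFFER_PADDING ] );
    }
}

int main( void )
{
    NetworkBufferDescriptor_t xDescriptor;
    vAttachBuffer( &xDescriptor, 1514 );

    /* Walking back from the frame pointer recovers the descriptor. */
    uint8_t * pucBlock = xDescriptor.pucEthernetBuffer - ipBUFFER_PADDING;
    printf( "descriptor recovered: %d\n",
            *( ( NetworkBufferDescriptor_t ** ) pucBlock ) == &xDescriptor );
    return 0;
}
```

If that reading is right, the padding is purely a heap-side bookkeeping area, which is why I don't see why RxBuffLen, i.e. the buffer size the DMA writes into, has to be reduced by it.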
Thanks in advance for your help!