FreeRTOS + TCP CRC error: 1234 location 0

Hi all,

While debugging why I sporadically get a very long RTT (>250 ms) while the normal RTT is around 5 ms, I stumbled upon the following messages on the console:

FreeRTOS_TCP/Source/FreeRTOS_IP.c:2740: CRC error: 1234 location 0
FreeRTOS_TCP/Source/FreeRTOS_IP.c:2740: CRC error: 4321 location 0

Looking at the source, I don’t see how the CRC error can be 4321 while the location remains 0. The only line of code that can set the checksum error to 4321 (ipUNHANDLED_PROTOCOL) is the one where xLocation gets set to 7, so I’d expect xLocation to be 7 and not 0…

I’m running FreeRTOS Kernel V10.4.0 with FreeRTOS+TCP V2.3.2 on a NIOS2 soft core with a simple driver (not zero copy). The Ethernet MAC has a built-in (and activated) function that should discard frames with an incorrect FCS, so I’m wondering why CRC errors show up in the IP stack at all…

By setting a breakpoint, I just discovered that the invalid length error (0x1234) is raised in the transmit path.

What could be the reason that the TCP stack generates a frame with an invalid length?

… and when I hit the breakpoint for ipUNHANDLED_PROTOCOL (0x4321), the protocol is set to TCP (ucProtocol is 6 => ipPROTOCOL_TCP).

Ah, I just noticed that the checksum itself may well actually be 0x1234 or 0x4321…
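
For reference: if I read FreeRTOS_IP_Private.h (V2.3.x) correctly, those two sentinel return values of usGenerateProtocolChecksum() are plain constants along these lines (copied here only to illustrate the collision; check your own header), so a legitimately computed 16-bit checksum can land on either value by chance:

    /* Assumed definitions from FreeRTOS_IP_Private.h -- verify against your copy.
     * Because these sentinels share the 16-bit value space with real checksums,
     * a correct frame can occasionally produce a false "CRC error" print. */
    #define ipUNHANDLED_PROTOCOL    0x4321U    /* protocol other than the handled ones */
    #define ipINVALID_LENGTH        0x1234U    /* inconsistent length fields */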

The patch below removed the false positives, but it has not solved my RTT problem yet:

diff --git a/FreeRTOS_IP.c b/FreeRTOS_IP.c
index 9b57525..799ac09 100644
--- a/FreeRTOS_IP.c
+++ b/FreeRTOS_IP.c
@@ -2736,8 +2736,9 @@ uint16_t usGenerateProtocolChecksum( const uint8_t * const pucEthernetBuffer,
         #endif /* ipconfigHAS_DEBUG_PRINTF != 0 */
     } while( ipFALSE_BOOL );

-    if( ( usChecksum == ipUNHANDLED_PROTOCOL ) ||
-        ( usChecksum == ipINVALID_LENGTH ) )
+    if( ( ( usChecksum == ipUNHANDLED_PROTOCOL ) ||
+          ( usChecksum == ipINVALID_LENGTH ) ) &&
+          ( xLocation > 0 ) )
     {
         /* NOP if ipconfigHAS_PRINTF != 0 */
         FreeRTOS_printf( ( "CRC error: %04x location %ld\n", usChecksum, xLocation ) );

Thanks so far for reporting, Stephan.

I think that you are right about the logging in usGenerateProtocolChecksum(): messages should only be printed when xLocation is not equal to zero. We will change that.
Within the library, when xOutgoingPacket is non-zero, the function result is ignored.

While debugging why I sporadically get very long RTT (>250 ms)
while the normal RTT is around 5 ms

Would it be possible to run tcpdump or Wireshark and see if there is packet loss?
You might be able to see who is slow, FreeRTOS+TCP or the peer device. Or maybe there is a resend from either side?
Is the LAN that you are using busy? Does it use switches?
Maybe the device on the other side is sporadically too busy to answer quickly? Is that a PC?


Hi Hein
Thanks for your inputs.
I do see with Wireshark that sometimes it just takes that long (eventually the packets return; it just takes a long time).
It’s a direct connection to a PC, without switches.
I noticed on another level that I have a memory leak somewhere. I think my network driver is not working well yet. At some point the malloc hook kicks in and my heap is exhausted.

One thing I don’t understand yet: the “Porting TCP/IP Ethernet drivers to a different MCU” guide explains how to allocate a network buffer, but when is that buffer freed again? Inside the stack? Or do I have to free it myself? I see that vReleaseNetworkBufferAndDescriptor( pxBufferDescriptor ); is only called in error cases, which confuses me a bit…

Hello Stephan!

Maybe you can try setting a global variable and a breakpoint in the malloc hook, to see what is being allocated when the heap runs out?
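
Something like the following minimal sketch (it assumes configUSE_MALLOC_FAILED_HOOK is set to 1 and a heap implementation that provides xPortGetFreeHeapSize(); the global name is just a placeholder):

    #include "FreeRTOS.h"
    #include "task.h"

    /* Hypothetical global to watch in the debugger. */
    volatile size_t uxLastFreeHeap = 0;

    /* Called by the kernel when pvPortMalloc() fails
     * (requires configUSE_MALLOC_FAILED_HOOK == 1 in FreeRTOSConfig.h). */
    void vApplicationMallocFailedHook( void )
    {
        /* Set a breakpoint here: the call stack shows which allocation failed,
         * and uxLastFreeHeap shows how much heap was left at that point. */
        uxLastFreeHeap = xPortGetFreeHeapSize();
        taskDISABLE_INTERRUPTS();

        for( ; ; )
        {
        }
    }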

Furthermore, are you releasing the network buffers in the xNetworkInterfaceOutput function? The network buffer should be released with vReleaseNetworkBufferAndDescriptor when the bReleaseAfterSend parameter passed to xNetworkInterfaceOutput is not pdFALSE.

Something like this in xNetworkInterfaceOutput:

    /* The buffer has been sent so can be released. */
    if( bReleaseAfterSend != pdFALSE )
    {
        vReleaseNetworkBufferAndDescriptor( pxNetworkBuffer );
    }

The release function that frees the descriptor is actually called in several places, not just in error cases. Whenever the buffer containing the data is no longer needed, it is freed. See the cases below:

  • In FreeRTOS_TCP_IP.c: the buffer is freed once it has been properly processed and handled.
  • In FreeRTOS_IP.c: the buffer is released either because an error occurred or because the data was processed successfully and the buffer is no longer needed.

So, in essence, you needn’t free the buffer yourself once you hand it over to the stack. (But you DO need to free the buffer in xNetworkInterfaceOutput based on the value of bReleaseAfterSend.)
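
For completeness, the receive path works the same way: the driver allocates a descriptor, hands it to the IP task, and from then on the stack owns and releases it. A rough sketch for a non zero copy driver, based on the porting guide (vHandleReceivedFrame and the way the frame data is obtained are placeholders for your own MAC/DMA code):

    #include <string.h>

    #include "FreeRTOS.h"
    #include "FreeRTOS_IP.h"
    #include "FreeRTOS_IP_Private.h"
    #include "NetworkBufferManagement.h"

    /* Hypothetical helper called by the driver for every received frame. */
    void vHandleReceivedFrame( const uint8_t * pucFrame, size_t xReceivedLength )
    {
        NetworkBufferDescriptor_t * pxBufferDescriptor;
        IPStackEvent_t xRxEvent;

        /* Allocate a descriptor plus payload buffer; the driver owns it for now. */
        pxBufferDescriptor = pxGetNetworkBufferWithDescriptor( xReceivedLength, 0 );

        if( pxBufferDescriptor != NULL )
        {
            /* Not zero copy: copy the frame out of the DMA/ring buffer. */
            memcpy( pxBufferDescriptor->pucEthernetBuffer, pucFrame, xReceivedLength );
            pxBufferDescriptor->xDataLength = xReceivedLength;

            xRxEvent.eEventType = eNetworkRxEvent;
            xRxEvent.pvData = ( void * ) pxBufferDescriptor;

            if( xSendEventStructToIPTask( &xRxEvent, 0 ) == pdFALSE )
            {
                /* The IP task did not accept the event, so ownership stays
                 * with the driver and the buffer must be released here. */
                vReleaseNetworkBufferAndDescriptor( pxBufferDescriptor );
            }
            /* else: ownership has passed to the IP task, which will release
             * the buffer itself once the frame has been processed. */
        }
    }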

Hope this helps.


Thank you very much for your input. Appreciated, and now I understand it!
It turned out that I had created a memory leak in my own ring buffer for the DMA.

Of course!
Yes, I have had to deal with such memory leak issues myself!

If you have any more questions, feel free to ask.