STM32F FreeRTOS+TCP Multiple Interfaces

I’m trying to get the ipv6_multi stack running on a Nucleo-F429ZI.

I started from a working TCP/IP stack project using the FreeRTOS-Plus-TCP main branch, and then replaced it with the ipv6_multi branch. (Thanks to @ActoryOu for updating the folder structure!)

While the main branch works fine, and after a few seconds an IP address gets assigned by DHCP, the ipv6_multi branch does not work: the IP address falls back to the manual one because the DHCP client times out. Moreover, it is not possible to ping the manual IP.

UDP printf broadcast seems to work.

Has anyone tried to get the multiple-interface stack working on an STM32?

Where would you start debugging?

Source: explore-multi-ip-forum-topic

( URL corrected by @htibosch )

FreeRTOS_printf UDP output:

R[]:    0.000.000 [IP-Task   ] prvIPTask started
R[]:    0.001.000 [IP-Task   ] PHY ID 7C130
R[]:    0.051.000 [IP-Task   ] xPhyReset: phyBMCR_RESET 0 ready
R[]:    0.101.000 [IP-Task   ] +TCP: advertise: 01E1 config 3100
R[]:    0.101.000 [IP-Task   ] prvEthernetUpdateConfig: LS mask 00 Force 1
R[]:    1.651.000 [IP-Task   ] Autonego ready: 00000004: full duplex 100 mbit high status
R[]:    1.651.000 [IP-Task   ] Link Status is high
R[]:    1.651.000 [IP-Task   ] vDHCPProcessEndPoint: enter 0
R[]:    1.653.000 [IP-Task   ] DHCP-socket[44-46]: DHCP Socket Create
R[]:    1.653.000 [IP-Task   ] prvInitialiseDHCP: start after 250 ticks
R[]:    1.653.000 [IP-Task   ] vIPReloadDHCP_RATimer: 250
R[]:    1.653.000 [IP-Task   ] vDHCPProcessEndPoint: exit 1
R[]:    1.903.000 [IP-Task   ] vDHCPProcessEndPoint: enter 1
R[]:    1.903.000 [IP-Task   ] vDHCPProcess: discover
R[]:    1.903.000 [IP-Task   ] vDHCPProcessEndPoint: exit 2
R[]:    1.903.000 [EMAC      ] Network buffers: 25 lowest 24
R[]:    2.153.000 [IP-Task   ] vDHCPProcessEndPoint: enter 2
R[]:    7.153.000 [IP-Task   ] vDHCPProcess: discover
R[]:    7.153.000 [IP-Task   ] vDHCPProcess: timeout 10000 ticks
R[]:   17.403.000 [IP-Task   ] vDHCPProcess: discover
R[]:   17.403.000 [IP-Task   ] vDHCPProcess: timeout 20000 ticks
R[]:   37.653.000 [IP-Task   ] vDHCPProcess: giving up 40000 > 30000 ticks
R[]:   37.653.000 [IP-Task   ] vIPSetDHCP_RATimerEnableState: Off
R[]:   37.653.000 [IP-Task   ] vApplicationIPNetworkEventHook: event 0
R[]:   37.653.000 [IP-Task   ] IP Address:
R[]:   37.653.000 [IP-Task   ] Subnet Mask:
R[]:   37.653.000 [IP-Task   ] Gateway Address:
R[]:   37.653.000 [IP-Task   ] DNS Server Address:
R[]:   37.653.000 [IP-Task   ] DHCP-socket[44-46]: closed, user count 0
R[]:   37.653.000 [ServerListener] Socket 0ip port 7 to 0ip port 0 State eCLOSED -> eTCP_LISTEN

Thank you Stefano for reporting this. @actoryou is doing a great job, merging all latest changes to the IPv4/single branch into IPv6/multi.
I test all changes by running integration tests. I will test DHCPv6 in the coming days and report back about it.

PS Have you also tried the RA (Router Advertisement) protocol instead of DHCPv6?

Hi @stefano, this week we have been working both on FreeRTOS+TCP /multi and on the STM32Fx driver. For a long time, this branch received little attention; now it is time to continue with IPv6/multi.

@ActoryOu continued his work on his split_dns branch.

Yesterday I discovered a problem with the STM32Fx driver: ipFRAGMENT_OFFSET_BIT_MASK was defined as a host-endian value, while it was compared against a network-endian value in this function:

static BaseType_t xMayAcceptPacket( uint8_t * pucEthernetBuffer )
{
    /* ... */
    if( ( pxIPHeader->usFragmentOffset & ipFRAGMENT_OFFSET_BIT_MASK ) != 0U )
    {
        /* Drop the packet. */
        return pdFALSE;
    }
}

Because of this, the driver dropped packets that were in fact correct. From now on, the value of ipFRAGMENT_OFFSET_BIT_MASK will depend on the endianness of the platform.

Besides that, I discovered some issues with the DNS (IPv6) driver. I will push the changes as soon as Actory’s PR 537 has been merged.

If you are in a hurry, please say so and I will forward you a version with all upcoming patches.

@htibosch Can you please apply this patch? I have run into the same issue.

Hello @tfeldoni , welcome to this forum.

What you can do is the following:
Clone my forked version of FreeRTOS-Plus-TCP
Check out the branch IPv6_multi_corrections_on_DNS, and use that source code.

The mentioned PR is still being reviewed; otherwise you could use the official repo.

It would be very helpful if you wrote your feedback in this post.

What platform are you using?

@htibosch Thank you for these options! I was not aware of your development forks.

I am on a different platform (SAME70) but found a similar bug. The driver is different, but the same check exists there, and there is likewise no apparent distinction between the endianness of the processor and that of the network. The symptom is the same (all IPv6 traffic is ignored), although I want to double-check, because I think the cause may be different.

At the moment I am on business travel, but as soon as next week I will be able to do some testing and provide feedback. If the cause turns out to be different, I can open another thread.

The thing of course is that this:

if( ( pxIPHeader->usFragmentOffset & ipFRAGMENT_OFFSET_BIT_MASK ) != 0U )

is easier to read than this:

if( ( FreeRTOS_ntohs( pxIPHeader->usFragmentOffset ) & ipFRAGMENT_OFFSET_BIT_MASK ) != 0U )

And that is why we made the value of ipFRAGMENT_OFFSET_BIT_MASK depend on the platform:

    #if ( ipconfigBYTE_ORDER == pdFREERTOS_LITTLE_ENDIAN )
        #define ipFRAGMENT_OFFSET_BIT_MASK    ( ( uint16_t ) 0xff1fU )
    #else
        #define ipFRAGMENT_OFFSET_BIT_MASK    ( ( uint16_t ) 0x1fffU )
    #endif

but unfortunately, we had not checked all network interfaces that may also use this macro.

DriverSAM: I see that NetworkInterface.c has already been adapted for IPv6/multi.
If you have any questions about it, you can ask them in this post, no matter what “the cause” is.