I want to have separate buffers per MAC for ZYNQ 7200

I want to have separate Rx and Tx buffers per MAC, so that buffer depletion can be monitored accurately per interface, and so that there is some level of memory segregation for my safety-critical tasks.

So there are separate Tx and Rx buffers per MAC in my prototype code, e.g.:

static NetworkBufferDescriptor_t xNetworkTxBuffers[ XPAR_XEMACPS_NUM_INSTANCES ][ ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS ];
static NetworkBufferDescriptor_t xNetworkRxBuffers[ XPAR_XEMACPS_NUM_INSTANCES ][ ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS ];

Then you will need separate semaphores as well:

/* The semaphores used to obtain network buffers. */
static xSemaphoreHandle xNetworkBufferSemaphore[ XPAR_XEMACPS_NUM_INSTANCES ][ RX_TX_DIRECTION ] = { NULL };
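For completeness, here is roughly how the per-pool semaphores get created. This is only a sketch; the prvInitialisePoolSemaphores() name is illustrative, and in my code RX_TX_DIRECTION is simply 2:

#include "FreeRTOS.h"
#include "semphr.h"

#ifndef RX_TX_DIRECTION
    #define RX_TX_DIRECTION    2 /* One Rx pool and one Tx pool per EMAC. */
#endif

/* Rough init sketch; the function name is illustrative, not from the stock stack. */
static void prvInitialisePoolSemaphores( void )
{
    BaseType_t xEMACUnit, xDirection;

    for( xEMACUnit = 0; xEMACUnit < XPAR_XEMACPS_NUM_INSTANCES; xEMACUnit++ )
    {
        for( xDirection = 0; xDirection < RX_TX_DIRECTION; xDirection++ )
        {
            /* A counting semaphore whose count mirrors the number of free
             * descriptors in this particular pool. */
            xNetworkBufferSemaphore[ xEMACUnit ][ xDirection ] =
                xSemaphoreCreateCounting( ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS,
                                          ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS );
            configASSERT( xNetworkBufferSemaphore[ xEMACUnit ][ xDirection ] != NULL );
        }
    }
}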

(I knew this was asking for trouble: by design, buffer ownership in the stack can be spread across tasks, mainly the EMAC task and the IP task, yet I took a punt.)

I tried this and it works for UDP, but not for fast ICMP pings. The problem is that, with separate pools, I have to duplicate the Rx buffer into a Tx buffer in the ICMP path before vReturnEthernetFrame() can send it, using pxDuplicateNetworkBufferWithDescriptor().
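Schematically, the extra step looks like this (only a sketch; pxRxBuffer is the received descriptor, and routing the duplicate into the Tx pool happens inside my modified allocator, not in the stock stack):

/* Sketch of the extra copy forced by split pools: the received buffer lives
 * in the Rx pool, so it cannot simply be handed to the driver for
 * transmission. */
NetworkBufferDescriptor_t * pxTxBuffer;

pxTxBuffer = pxDuplicateNetworkBufferWithDescriptor( pxRxBuffer, pxRxBuffer->xDataLength );

if( pxTxBuffer != NULL )
{
    /* pdTRUE: the driver releases the buffer once it has been sent. */
    vReturnEthernetFrame( pxTxBuffer, pdTRUE );
}

/* The original descriptor goes back to the Rx pool as usual. */
vReleaseNetworkBufferAndDescriptor( pxRxBuffer );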

Then there is contention in the free-list access for EMAC1: the list that the descriptor claims to be contained in differs from the expected free list, so this check fails:

listIS_CONTAINED_WITHIN( &xFreeBuffersList[ xEMACUnit ][ eBufferType ], &( pxReturn->xBufferListItem ) )

I think the semaphore for EMAC1 is not accessed correctly and fails, while EMAC0 is OK and continues working as normal.
This only occurs when both EMACs are pinged simultaneously, e.g.:

ping -f -c 83333 -s 1472 10.10.0.200 // EMAC1 - stops in like 2 sec
ping -f -c 83333 -s 1472 199.168.0.200 // EMAC0 - continues working

However, with UDP the data flow is different and works.

/* If there is a semaphore available, there is a network buffer available. */
if( xSemaphoreTake( xNetworkBufferSemaphore[ xEMACUnit ][ eBufferType ], uxBlockTimeTicks ) == pdPASS )
{
    /* Protect the structure as it is accessed from tasks and interrupts. */
    ipconfigBUFFER_ALLOC_LOCK();
    {
        pxReturn = ( NetworkBufferDescriptor_t * ) listGET_OWNER_OF_HEAD_ENTRY( &xFreeBuffersList[ xEMACUnit ][ eBufferType ] );

        if( ( bIsValidNetworkDescriptor( pxReturn ) != pdFALSE_UNSIGNED ) &&
            listIS_CONTAINED_WITHIN( &xFreeBuffersList[ xEMACUnit ][ eBufferType ], &( pxReturn->xBufferListItem ) ) )
        {
            ( void ) uxListRemove( &( pxReturn->xBufferListItem ) );
        }
        else
        {
            xInvalid = pdTRUE;
        }
    }

So, as the moral of my prototype :slight_smile: am I right in assuming that the multi-interface FreeRTOS IP stack is designed around a single buffer pool, and that this should be treated as a design constraint, i.e. stick to the existing design?

It is true that in the IPv6/multi branch there is only one pool of buffers, just like in the IPv4/single branch.

It will be a difficult job to maintain multiple buffer pools.

If you need it for debugging network buffers, I have a separate version that could be useful for you.

Otherwise I am not yet convinced of the advantages of having multiple pools of network buffers.

Maybe just adding some optional statistics could also help?
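For instance, the stack already exposes uxGetNumberOfFreeNetworkBuffers() and uxGetMinimumFreeNetworkBuffers(); a trivial monitor task could show whether and when the pool runs dry. A sketch, assuming FreeRTOS_printf() is configured in your project:

#include "FreeRTOS.h"
#include "task.h"
#include "FreeRTOS_IP.h"
#include "NetworkBufferManagement.h"

/* Minimal monitoring sketch using the existing buffer-pool statistics. */
static void prvBufferMonitorTask( void * pvParameters )
{
    ( void ) pvParameters;

    for( ;; )
    {
        FreeRTOS_printf( ( "Free buffers: %u (lowest ever: %u)\n",
                           ( unsigned ) uxGetNumberOfFreeNetworkBuffers(),
                           ( unsigned ) uxGetMinimumFreeNetworkBuffers() ) );
        vTaskDelay( pdMS_TO_TICKS( 1000U ) );
    }
}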

I'm not familiar with the Zynq, but generally the EMAC buffer chains must be linked per frame so that the DMA engine knows how to traverse the buffer chain. Could it be that the MCU does not support separate buffer chains for multiple EMACs? Or that you need to link the chains independently at buffer initialization time?

Just a shot in the dark; sorry if I'm way off.

The debug code would be nice to have. With the single buffer pool it works well; however, the second EMAC on the Zynq drops packets and eventually stops working. :frowning:

@RAc wrote:

Could it be that the MCU does not support separate buffer chains for multiple EMACs? Or that you need to link the chains independently at buffer initialization time?

Each Zynq EMAC expects 2 arrays of DMA descriptors. They are not linked, but they must be declared in an array. The last descriptor has a special bit set: WRAP.
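Schematically it looks like this; the struct and macro names below are only illustrative, the authoritative definitions (XEMACPS_TXBUF_WRAP_MASK and friends) live in the Xilinx emacps driver headers:

#include <stdint.h>

/* Illustrative layout of one GEM DMA descriptor. */
typedef struct
{
    uint32_t ulAddress; /* Buffer address (RX: bit 0 = owned, bit 1 = wrap). */
    uint32_t ulControl; /* Length/status (TX: bit 30 = wrap, bit 31 = used). */
} gem_dma_descriptor_t;

#define gemDESC_COUNT          32U           /* Hypothetical ring size. */
#define gemTXBUF_WRAP_MASK     0x40000000UL  /* Mirrors XEMACPS_TXBUF_WRAP_MASK. */

/* One flat, unlinked array per direction and per EMAC instance. */
static gem_dma_descriptor_t xTxDescriptors[ gemDESC_COUNT ] __attribute__( ( aligned( 32 ) ) );

static void prvInitTxRing( void )
{
    /* Setting WRAP on the last entry makes the DMA engine jump back to entry 0. */
    xTxDescriptors[ gemDESC_COUNT - 1U ].ulControl |= gemTXBUF_WRAP_MASK;
}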

I know of one project in which two EMACs are used along with the labs/ipv6_multi branch.

ping -f -c 83333 -s 1472 10.10.0.200 // EMAC1 - stops in like 2 sec

And so it stops because the network buffers are exhausted?

The function emacps_check_tx() will call vReleaseNetworkBufferAndDescriptor() for every packet sent.

When you ping to EMAC0, you will see that happening. But is it also called when you ping to EMAC1? Can you verify that?
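A simple way to check is to count the releases per EMAC; a sketch only, where xEMACIndex and the helper name are illustrative:

#include <stdint.h>

/* Per-EMAC release counters, to verify that emacps_check_tx() really
 * returns Tx buffers on both interfaces. */
static volatile uint32_t ulTxReleaseCount[ XPAR_XEMACPS_NUM_INSTANCES ];

static void prvCountedRelease( BaseType_t xEMACIndex, NetworkBufferDescriptor_t * pxBuffer )
{
    vReleaseNetworkBufferAndDescriptor( pxBuffer );
    ulTxReleaseCount[ xEMACIndex ]++;
}

Call prvCountedRelease() from emacps_check_tx() instead of releasing directly; if the counter for EMAC1 stops increasing while the flood ping is still running, the Tx buffers are leaking on that path.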


Now things are OK, but I wanted to have a VLAN, so I added some small code.
However, the Zynq has no hardware VLAN append or strip, and the extra 4 bytes were upsetting the buffer padding, so the buffer then fails to be released at emacps_check_tx().
I suspected some of the memmove()s I had used, yet they seem to be irrelevant.

And to my surprise it only happens a few times, not as a hard failure :frowning:

/* Example: append a VLAN tag in software (rough prototype).
 * Assumes ipSIZE_OF_VLAN_HEADER bytes of headroom in front of
 * pucEthernetBuffer, and that ipVLAN_TYPE holds the TPID (0x8100) in the
 * byte order the rest of the code expects. */
if( pxBuffer->pxEndPoint->xVlanEnabled )
{
    taskENTER_CRITICAL();
    {
        uint8_t * withVlan = ( uint8_t * ) pxBuffer->pucEthernetBuffer;
        uint8_t * moveTag = ( uint8_t * ) ( withVlan - ipSIZE_OF_VLAN_HEADER );
        const uint16_t VLAN_TPID = ipVLAN_TYPE;
        const uint16_t VLAN_VIDPRIORITY = FreeRTOS_htons( pxBuffer->pxEndPoint->xVlanID |
                ( ( pxBuffer->pxEndPoint->xVlanPriority & ipVLAN_TCI_PRIORITY_MASK )
                  << ipVLAN_TCI_PRIORITY_BIT_SHIFT ) );

        /* Shift the destination + source MAC addresses back by 4 bytes... */
        memmove( moveTag, withVlan, ipSIZE_OF_ETH_HEAD_ADDR );
        /* ...then write the TPID and TCI into the gap, just before the
         * original EtherType. */
        memmove( moveTag + ipSIZE_OF_ETH_HEAD_ADDR, &VLAN_TPID, 2U );
        memmove( moveTag + ipSIZE_OF_ETH_HEAD_ADDR + 2U, &VLAN_VIDPRIORITY, 2U );

        pxBuffer->pucEthernetBuffer = moveTag;
        pxBuffer->xDataLength += ipSIZE_OF_VLAN_HEADER;
    }
    taskEXIT_CRITICAL();
}
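For reception the mirror operation is needed: strip the tag (after checking the TPID) before the stack parses the frame. Roughly, with the same home-grown macros assumed, and keeping in mind that parts of the port map pucEthernetBuffer back to its descriptor, so the pointer must be consistent again before the buffer is released:

#include <string.h>

/* Rough Rx-side counterpart: remove the 4-byte 802.1Q tag so the stack sees
 * a plain Ethernet frame. Only do this after checking that the EtherType
 * field actually holds the TPID (0x8100). */
if( pxBuffer->xDataLength >= ( ipSIZE_OF_ETH_HEAD_ADDR + ipSIZE_OF_VLAN_HEADER + 2U ) )
{
    uint8_t * pucFrame = pxBuffer->pucEthernetBuffer;

    /* Slide the destination + source MAC addresses forward, over the
     * TPID/TCI words, so they end up just before the inner EtherType. */
    memmove( pucFrame + ipSIZE_OF_VLAN_HEADER, pucFrame, ipSIZE_OF_ETH_HEAD_ADDR );
    pxBuffer->pucEthernetBuffer = pucFrame + ipSIZE_OF_VLAN_HEADER;
    pxBuffer->xDataLength -= ipSIZE_OF_VLAN_HEADER;
}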

Finally found the issue: I did not handle VLAN for ARP correctly, so when an ARP broadcast or request occurred, the network buffer descriptors were misaligned and it failed at emacps_check_tx().

VLAN support seems to be an easy update for FreeRTOS. The following code reference can be used for Zynq GEM controllers, which cannot append/strip VLAN tags like a good NIC card can:
https://doc.dpdk.org/api/rte__ether_8h_source.html
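For reference, the relevant structure from rte_ether.h looks like this (adapted; in DPDK it is rte_vlan_hdr):

#include <stdint.h>

/* Adapted from DPDK's rte_vlan_hdr: when tagging, the Ethernet header's
 * type field is set to the TPID (0x8100) and this header, holding the TCI
 * plus the original EtherType, is inserted directly after it. */
struct vlan_hdr
{
    uint16_t vlan_tci;  /* Priority (3 bits), DEI (1 bit), VLAN ID (12 bits). */
    uint16_t eth_proto; /* EtherType of the encapsulated payload. */
} __attribute__( ( packed ) );

#define VLAN_TPID    0x8100U /* 802.1Q Tag Protocol Identifier. */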