Error when SMC instruction and IRQ are operated continuously in SMP code

Hi~

I’m using FreeRTOS kernel V11.0.0.
I’m testing SMP on a dual-core ARMv8-A Cortex-A53 (AArch64).

A task calls xSemaphoreTake( handle, 0x100 ) and blocks,
and the code is implemented so that the ISR then calls xSemaphoreGiveFromISR( handle, pxHigherPriorityTaskWoken ) to wake it.
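For reference, the pattern described above looks roughly like the following sketch (task and ISR names are placeholders; the semaphore is assumed to be created elsewhere with xSemaphoreCreateBinary()):

```c
#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xSem;   /* created elsewhere, e.g. xSemaphoreCreateBinary() */

void vWaitingTask( void *pvParameters )
{
    for( ;; )
    {
        /* Block for up to 0x100 ticks waiting for the ISR to give the semaphore. */
        if( xSemaphoreTake( xSem, 0x100 ) == pdTRUE )
        {
            /* ... handle the event ... */
        }
    }
}

void vExampleISR( void )
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    xSemaphoreGiveFromISR( xSem, &xHigherPriorityTaskWoken );

    /* Request a context switch on exception exit if a higher-priority
       task was unblocked by the give. */
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```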

In the actual test…

On Core0, xSemaphoreTake() calls vTaskYieldWithinAPI(), which invokes portYIELD().
While SWI_Handler() is running, an IRQ is taken on Core0 and xSemaphoreGiveFromISR() is called, which unintentionally changes the value of pxCurrentTCBs.

Should I disable IRQs while SWI_Handler() is running?

This portGET_ISR_LOCK should prevent any other FromISR API from accessing pxCurrentTCB while it is being updated. Are you running this code as-is, or did you make changes to bring it up to date with the latest mainline code?

Are you talking about portGET_ISR_LOCK() called by vTaskSwitchContext()?

FreeRTOS-Kernel-Partner-Supported-Ports/blob/dc3afc6e837426b4bda81bbb6cf45bfb6f34c7e9/TI/CORTEX_A53_64-BIT_TI_AM64_SMP/portASM.S

#define portGET_ISR_LOCK()  vPortRecursiveLock(ISR_LOCK, pdTRUE)

static inline void vPortRecursiveLock(uint32_t ulLockNum, BaseType_t uxAcquire)
{
    uint32_t ulCoreNum = portGET_CORE_ID();
    uint32_t ulLockBit = 1u << ulLockNum;

    /* Lock acquire */
    if (uxAcquire)
    {

        /* Check if the spinlock is available. */
        /* If it is not available, check whether this core already owns the lock. */
        /* If the core owns the lock, increment the lock's recursion count and return. */
        /* If the core does not own the lock, spin until the lock becomes free. */
        if( GateSmp_tryLock( &GateWord[ulLockNum] ) != 0)
        {
            /* Check if the core owns the spinlock */
            if( Get_64(&ucOwnedByCore[ulCoreNum]) & ulLockBit )
            {
                configASSERT( Get_64(&ucRecursionCountByLock[ulLockNum]) != 255u);
                Set_64(&ucRecursionCountByLock[ulLockNum], (Get_64(&ucRecursionCountByLock[ulLockNum])+1));
                return;
            }

            /* Preload the gate word into the cache */
            uint32_t dummy = GateWord[ulLockNum];
            dummy++;

            /* Wait for spinlock */
            while( GateSmp_tryLock(&GateWord[ulLockNum]) != 0);
        }

         /* Add barrier to ensure lock is taken before we proceed */
        __asm__ __volatile__ (
            "dmb sy"
            ::: "memory"
        );

        /* Assert the lock count is 0 when the spinlock is free and is acquired */
        configASSERT(Get_64(&ucRecursionCountByLock[ulLockNum]) == 0);

        /* Set lock count as 1 */
        Set_64(&ucRecursionCountByLock[ulLockNum], 1);
        /* Set ucOwnedByCore */
        Set_64(&ucOwnedByCore[ulCoreNum], (Get_64(&ucOwnedByCore[ulCoreNum]) | ulLockBit));
    }
...
}

→ According to the code above, if an IRQ occurs on the same core that already holds the lock, the ISR path only increments ucRecursionCountByLock[ulLockNum] and returns immediately; it does not actually block on the lock.

That may be the cause of the problem. Can you try disabling interrupts in SWI_Handler before calling vTaskSwitchContext and re-enabling them afterwards?
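At C level, the suggested experiment would look roughly like the sketch below. This is only an illustration: in this port the actual change would go into SWI_Handler in portASM.S, the function name here is hypothetical, and whether IRQs should be re-enabled explicitly or left masked until the exception return restores the saved PSTATE depends on how the handler is written:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical helper called from the SVC/SWI exception path. */
void vHandleSwiContextSwitch( void )
{
    /* Mask IRQs so no FromISR API can touch pxCurrentTCBs mid-switch. */
    portDISABLE_INTERRUPTS();

    /* In the V11 SMP kernel, vTaskSwitchContext() takes the core ID. */
    vTaskSwitchContext( portGET_CORE_ID() );

    portENABLE_INTERRUPTS();
}
```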