STM32F411 going into HardFault when a task is triggered from an interrupt


I am new to this forum, so feel free to correct me if I'm in the wrong section. Thanks in advance!

So basically I am using FreeRTOS on an STM32F411 with two tasks. One runs with a simple vTaskDelay of 10 ms, and the other is triggered from an interrupt running at 25 kHz. In the interrupt I perform the following:

void DMA2_Stream0_IRQHandler(void) {
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    DMA_ClearITPendingBit(DMA2_Stream0, DMA_FLAG_TCIF0);

    // Notify the task that the event has occurred
    vTaskNotifyGiveFromISR(xHandleControl, &xHigherPriorityTaskWoken);

    // Request a context switch if the notification woke a higher priority task
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

and in the task

_Noreturn void vTaskControl(void *pvParameters) {
    uint32_t ulNotifiedValue;

    for (;;) {
        GPIOC->ODR ^= GPIO_Pin_13;

        xTaskNotifyWait(0x00,             // Don't clear any notification bits on entry.
                        ULONG_MAX,        // Reset the notification value to 0 on exit.
                        &ulNotifiedValue, // Notified value passed out in ulNotifiedValue.
                        portMAX_DELAY);   // Block indefinitely.

        GPIOC->ODR ^= GPIO_Pin_13;

        // ... do some stuff or nothing
    }
}

I also tried vTaskSuspend in the task and vTaskResumeFromISR in the interrupt, with the same result.

The symptoms: both tasks work properly until a few seconds have passed, then I get a HardFault. I removed all my code, keeping just the time delay and the interrupt, to be sure the problem isn't coming from my code. Each task works fine when the other is not created.

The config is given below:

#define vPortSVCHandler SVC_Handler
#define xPortPendSVHandler PendSV_Handler
#define xPortSysTickHandler SysTick_Handler

#define configUSE_PREEMPTION                    1
#define configUSE_TICKLESS_IDLE                 0
#define configCPU_CLOCK_HZ                      100000000
#define configTICK_RATE_HZ                      100000
#define configMAX_PRIORITIES                    5
#define configMINIMAL_STACK_SIZE                ( ( unsigned short ) 120 )
#define configMAX_TASK_NAME_LEN                 16
#define configUSE_16_BIT_TICKS                  0
#define configIDLE_SHOULD_YIELD                 1
#define configUSE_TASK_NOTIFICATIONS            1
#define configUSE_MUTEXES                       1
#define configUSE_RECURSIVE_MUTEXES             1
#define configUSE_COUNTING_SEMAPHORES           0
#define configUSE_ALTERNATIVE_API               0
#define configQUEUE_REGISTRY_SIZE               10
#define configUSE_QUEUE_SETS                    0
#define configUSE_TIME_SLICING                  0
#define configUSE_NEWLIB_REENTRANT              0

/* Memory allocation related definitions. */
#define configSUPPORT_STATIC_ALLOCATION         1

The tasks are created as follows:

xHandleManager = xTaskCreateStatic(
        vTaskManager,       // Function that implements the task.
        "mngr",          // Text name for the task.
        STACK_SIZE,      // Number of indexes in the xStack array.
        NULL,    // Parameter passed into the task.
        2,// Priority at which the task is created.
        xStackManager,          // Array to use as the task's stack.
        &xTaskManager);  // Variable to hold the task's data structure.

xHandleControl = xTaskCreateStatic(
        vTaskControl,       // Function that implements the task.
        "ctrl",          // Text name for the task.
        STACK_SIZE,      // Number of indexes in the xStack array.
        NULL,    // Parameter passed into the task.
        2,// Priority at which the task is created.
        xStackControl,          // Array to use as the task's stack.
        &xTaskControl);  // Variable to hold the task's data structure.


I don’t know where to start in order to solve the problem. I would be glad if you can help me or give me any hints to relevant information.

Thanks in advance!


How / where are xStackControl and xStackManager defined ?
You should also define configASSERT and enable stack checking for development.
Don’t use vTaskSuspend/Resume for task synchronization !
BTW configUSE_PORT_OPTIMISED_TASK_SELECTION is fine for apps with no more than 32 priority levels.

Thanks for your reply! :slight_smile:
So they are defined as global variables,

StackType_t xStackManager[STACK_SIZE];
StackType_t xStackControl[STACK_SIZE];


#define STACK_SIZE 2000

I have to admit that it is not clear to me how to choose the stack size; I should look into it thoroughly. I also tried 500 and it seemed to work as well :confused: I read that there is a function to check the available stack size, but I'm not sure how to use it. Definitely something to master, I guess.

I thought that configASSERT was defined, but now I doubt it. I will have a look and tell you ASAP.

I'll give configUSE_PORT_OPTIMISED_TASK_SELECTION a try then.

Hey @dcmde, and welcome to the FreeRTOS Forum!

Sorry that you’re running into a hardfault issue like this. I don’t see anything glaringly wrong in the code that you’ve linked. Is there any chance you’d be able to provide a stack trace to assist us in helping you find the cause of your issue?

In your task code I don't see your call to vTaskDelay(), but you mention using it for 10 ms. Are you using pdMS_TO_TICKS to do something like vTaskDelay(pdMS_TO_TICKS(10))?

I find it interesting that you're able to run for a few seconds before seeing an issue occur. Do you have the ability to run this in a debugger? It might be worth checking your stack pointer each time you enter your ISR to see if you're leaking stack space.

Hi @skptak , thanks for your reply.

I have to admit that I don't know what a stack trace is or how to produce one. I'll have a look on the internet, but if you have any relevant link I'd be glad to look.

Indeed I didn’t provide the manager task, here it is:

void vTaskManager(void *pvParameters) {
    for (;;) {
        vTaskDelay(pdMS_TO_TICKS(10)); // 10 ms period, as described above
    }
}
Yes, I can run the debugger. I just did it with the advice from @hs2, and it turns out that I do fall into the callback. I used the following function:

void vAssertCalled(unsigned long ulLine, const char *const pcFileName) {
    volatile uint32_t ulSetToNonZeroInDebuggerToContinue = 0;

    pcFileName_ = pcFileName; // global kept for inspection in the debugger
    /* The line parameter is not otherwise used. */
    (void) ulLine;

    /* You can step out of this function to debug the assertion by using
       the debugger to set ulSetToNonZeroInDebuggerToContinue to a non-zero
       value. */
    while (ulSetToNonZeroInDebuggerToContinue == 0) {
    }
}

and in the FreeRTOSConfig.h

void vAssertCalled(unsigned long ulLine, const char *const pcFileName);

#define configASSERT(x)     if( ( x ) == 0 ) vAssertCalled( __FILE__, __LINE__ )

However, I cannot see the file name when I use the debugger at -O1 or -O0.
I got some weird strings, "\230B\034\320\003\365\200c\230B\030\320\003\365\200c\230B\024\320\003\365\200c\230B\020\320\003\365\200c\230B\f\320\003\365\200c\230B\b\320\003\365\200c\230B\004\320\003\365\200c\230B".

I slowly realize that I don't understand much about MCUs :smiling_face_with_tear: but thanks to your help I see what is missing. So the next step is to check the stack pointer and find what is triggering the configASSERT.

I'll keep you informed ASAP. :slight_smile:

Showing the stack trace or call stack is a basic feature of all debuggers. Once you have halted the target after it was hit by an assert, try to find "call stack" or something similar in your debugger IDE. It displays the chain of calling functions up to the highest level and often gives you an idea about the root cause of the assert, e.g. a parameter being invalid. The file name looks strange, indeed; maybe a formatting issue in the debugger? It should be plain ASCII (char), I guess. However, once you see the call stack it's not that important.

Note, your vAssertCalled definition and the call in configASSERT have __FILE__ (which should go to the const char * parameter) and __LINE__ (which should go to the unsigned long parameter) in different orders.

@hs2 @skptak I found the call stack on the debugger and here is the result.

Screenshot from 2024-01-30 19-10-38

From what I can read in the code, it seems that I had #define configMAX_SYSCALL_INTERRUPT_PRIORITY 191 while the priority of my interrupt was 32.
So I changed configMAX_SYSCALL_INTERRUPT_PRIORITY to 30, and now I don't have the configASSERT issue.

Could you confirm that it is the right way to do it?

Indeed it was inverted; now it's better.

So now things seem to have evolved, but I don't get what is wrong; here is the stack trace:
Screenshot from 2024-01-30 20-23-41

And here is the line causing the assert:

You’re on the right track :+1: But I can’t tell if values are correct.
Better have a look into the FreeRTOS docs regarding RTOS for ARM Cortex-M and
e.g. Understanding priority levels of ISR and FreeRTOS APIs - #16 by aggarg which contains a pretty good explanation of this rather confusing topic.
There are much more posts regarding interrupt priorities in the forum (search: configMAX_SYSCALL_INTERRUPT_PRIORITY).

Are you doing anything else in the task? Can you share complete task code?

So it seems that the issue is linked to the interrupt priorities, as @hs2 pointed out.

The problem is that I really don't understand how it should be configured, given that there are many settings.

First the priority of the task when created :

    xHandleControl = xTaskCreateStatic(
            vTaskControl,     // Function that implements the task.
            "ctrl",           // Text name for the task.
            CTRL_STACK_SIZE,  // Number of indexes in the xStack array.
            NULL,             // Parameter passed into the task.
            7,                // Priority at which the task is created.
            xStackControl,    // Array to use as the task's stack.
            &xTaskControl);   // Variable to hold the task's data structure.

then the priority of the interrupt :

    NVIC_SetPriority(DMA2_Stream0_IRQn, 5);

and finally the different settings present in the config:

#define configPRIO_BITS       		4        /* 15 priority levels */

/* The lowest interrupt priority that can be used in a call to a "set priority"
function. */

/* The highest interrupt priority that can be used by any interrupt service
routine that makes calls to interrupt safe FreeRTOS API functions.  DO NOT CALL
INTERRUPT SAFE FREERTOS API FUNCTIONS FROM ANY INTERRUPT THAT HAS A HIGHER
PRIORITY THAN THIS! (higher priorities are lower numeric values.) */

/* Interrupt priorities used by the kernel port layer itself.  These are generic
to all Cortex-M ports, and do not rely on any particular library functions. */
/* !!!! configMAX_SYSCALL_INTERRUPT_PRIORITY must not be set to zero !!!!
See */

I tried to set the interrupt to a lower priority (higher number) than configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY, but then I got the configASSERT issue in listLIST_ITEM_CONTAINER. If I try the opposite, of course the configASSERT( ucCurrentPriority >= ucMaxSysCallPriority ); throws an error.

If anyone knows what I am missing I would be glad to listen to what is wrong. :slight_smile:

It turns out that, as I said at the beginning, it takes some time for the MCU to get stuck at the configASSERT.

At the beginning ucMaxSysCallPriority has a value of 0x20, but after some time it changes to 0x90, and that doesn't make sense since the interrupt is still the same according to the call stack:

vPortValidateInterruptPriority port.c:742
vTaskGenericNotifyGiveFromISR tasks.c:5138
DMA2_Stream0_IRQHandler stm32f4xx_it.c:140
<signal handler called> 0x00000000ffffffed
prvPortStartFirstTask port.c:247
xPortStartScheduler port.c:347

You need to set the interrupt priority to a lower priority (higher number). We then need to find out the cause of the other assert. Can you examine what the TCB looks like when you hit that assert?

Hi @aggarg, I will be able to check the TCB later. Is there anything special to monitor in the TCB?

Regarding my previous post, I found that according to the reference manual DMA2_Stream0 has a priority of 63, whereas ucMaxSysCallPriority goes from 32 to 144, and literally no interrupt could have this value.

So the big question is why does it change?

I also checked that my stack arrays are not too big: with a stack size of 500 words and 4 bytes per word, they should easily fit into the 128 KB of RAM I have.

The idea is to figure out if TCB is corrupted.

What do you mean by ucMaxSysCallPriority goes from 32 to 144? ucMaxSysCallPriority is calculated once here when the scheduler is started and it should not change after that unless there is a memory corruption. Can you put a data breakpoint to find out if it is getting corrupted?

Try increasing your stack sizes to 2k (i.e. 2 * 1024 ) just for testing.

Thanks for your reply @aggarg

Regarding ucMaxSysCallPriority, I detailed the phenomenon in my previous post (number 15).

I don't understand what could cause the corruption, given that I make no dynamic allocations and my code is reduced to the FreeRTOS API and some basic peripheral configuration.

I don't know if it matters, but I also have another interrupt, TIM1_xxx, that starts the ADC DMA conversion; then the DMA2_Stream0 interrupt notifies the control task.

From a timing perspective, the ADC conversion is really fast, so it isn't possible for the interrupts to be called in a different order.

I should also point out that I have a version of my code based solely on interrupts, and it works just fine. I mean, with the same functions I never had data corruption.

Dynamic allocation is not the only possible cause of memory corruption. Still, can you use data breakpoints to find out what is causing the corruption?

Also, can you share your complete code?