Systick Priority vs all Cortex-M priorities

Hello everyone. I have a question about the SysTick interrupt. From what I read in old forum threads, specifically in this link:

The SysTick runs at the lowest possible priority. However, in my project I want greater temporal determinism, so I am measuring the SysTick frequency.

To check this in practice, I created a project with only one task that writes to the serial bus constantly, and over 8000 timer interrupt events I saw that the SysTick was not delayed.

So, does this “lowest possible priority” apply to the Cortex-M4’s native interrupt sources, such as USART, CAN and DMA for example, or does it apply only to ISRs programmed by the designer?


I’m not sure I’m understanding your post correctly, so let me know if this does not answer your question.

The kernel uses three Cortex-M interrupts. First, SVC, which is only used to start the scheduler (unless you are using the port that has memory protection unit (MPU) support). Second, SysTick, which by default generates the RTOS’s tick interrupt (you can override that to use any clock you like). Third, PendSV, which is used to perform context switches.
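As a sketch of that “use any clock you like” point: recent Cortex-M ports of the kernel define vPortSetupTimerInterrupt() as a weak symbol, so an application-supplied version overrides the default SysTick configuration. The timer choice (TIM7), its vector name, and the flag-clearing step below are illustrative STM32-style assumptions, not part of the kernel itself.

```c
#include "FreeRTOS.h"
#include "task.h"

/* Overrides the port's weak default, which would otherwise program SysTick.
   Configure a hardware timer (e.g. TIM7 on an STM32) to interrupt at
   configTICK_RATE_HZ here - the register writes are device specific. */
void vPortSetupTimerInterrupt( void )
{
    /* Device-specific timer setup goes here. */
}

/* Illustrative vector name for the chosen timer. */
void TIM7_IRQHandler( void )
{
    /* Clear the timer's interrupt flag (device specific), then drive the
       RTOS tick from this timer instead of SysTick. */
    extern void xPortSysTickHandler( void );
    xPortSysTickHandler();
}
```

The tick source changes, but the priority advice below still applies: the timer interrupt that drives the tick should normally remain at the lowest priority.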

Always try to ensure PendSV is the lowest possible priority. Things may still work if it's not, but they will be less efficient.

We recommend SysTick also be the lowest priority, although things will work fine if it's not. If SysTick is the lowest priority it will experience jitter in its execution whenever kernel code is inside a critical section or higher priority interrupts are executing. If you want very high temporal accuracy, you can measure time using any other timer your chip provides.
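One jitter-free time source that every Cortex-M4 already has is the DWT cycle counter, which counts core clock cycles regardless of interrupt activity. A minimal sketch, assuming CMSIS headers for your device (the stm32f4xx.h include is a placeholder for whatever device header you use):

```c
#include "stm32f4xx.h"  /* placeholder: include your device's CMSIS header */

/* Enable the Data Watchpoint and Trace unit's free-running cycle counter.
   Register and bit names below are standard CMSIS-Core definitions. */
static void vCycleCounterInit( void )
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable the trace block */
    DWT->CYCCNT = 0UL;                              /* reset the counter */
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;           /* start counting cycles */
}

/* Elapsed core cycles since ulStart; unsigned subtraction handles wrap. */
static uint32_t ulElapsedCycles( uint32_t ulStart )
{
    return DWT->CYCCNT - ulStart;
}
```

Reading DWT->CYCCNT around the code of interest gives cycle-accurate durations independent of the RTOS tick, which is usually a better tool for temporal measurements than raising the SysTick priority.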

Other interrupts, such as USART and CAN, can run at any priority you want provided they do not use any FreeRTOS API functions. If they do use FreeRTOS API functions then their priority must be at or below the maximum system call priority set by the configMAX_SYSCALL_INTERRUPT_PRIORITY setting in your FreeRTOSConfig.h file.
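To make the rule above concrete, here is a hedged sketch of a UART receive interrupt that calls a FreeRTOS API. The IRQ name, the DR register read, and the xRxQueue handle are illustrative (STM32-style); configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY is the unshifted form of the setting that STM32 FreeRTOSConfig.h templates provide for use with NVIC_SetPriority():

```c
#include "FreeRTOS.h"
#include "queue.h"

extern QueueHandle_t xRxQueue;   /* created elsewhere with xQueueCreate() */

void vConfigureUartInterrupt( void )
{
    /* Because the handler uses a FreeRTOS API, its priority must be
       numerically >= (i.e. logically at or below) the maximum system
       call priority configured in FreeRTOSConfig.h. */
    NVIC_SetPriority( USART1_IRQn, configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY );
    NVIC_EnableIRQ( USART1_IRQn );
}

void USART1_IRQHandler( void )
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    uint8_t ucByte = ( uint8_t ) USART1->DR;  /* read received byte (device specific) */

    /* Only the "FromISR" versions of the API may be called here. */
    xQueueSendFromISR( xRxQueue, &ucByte, &xHigherPriorityTaskWoken );

    /* Request a context switch if the send unblocked a higher priority task. */
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```

An interrupt that never touches the FreeRTOS API has no such constraint and can sit above configMAX_SYSCALL_INTERRUPT_PRIORITY, where the kernel will never mask it.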

The Cortex-M design makes this quite complex - we try and explain it here:

Sorry if I was not clear on my question.

I would like to know whether the RTOS tick interrupt can be delayed by any other interrupt, such as a button interrupt that I program, or even a write to the serial bus using the HAL libraries.


If any interrupt (including the RTOS tick interrupt) is at a relatively low priority when a relatively high priority interrupt occurs (which may be generated by a button push, if the button push generates an interrupt) then the lower priority interrupt will experience a delay. That is how the hardware functions, rather than being a characteristic of the RTOS.

I will point out that unless you are doing something in the tick hook function, changing the priority of the tick interrupt is unlikely to change the jitter perceived by the application. The tick interrupt will be delayed by critical sections and higher priority interrupts, but the effect on tasks won't be seen until all the pending interrupts clear, so it doesn't really matter.

One place where it could make a difference is if the tick interrupt could be delayed for a whole tick period; in that case raising its priority could help, but a system that spends that long in interrupts is likely misdesigned.

Which comes back to a point about the Cube HAL libraries. I have heard that some of the Cube HAL libraries have delays inside their ISRs, and some of them require that the HAL's tick interrupt be a higher priority than the ISRs that use it. I stand by my previous comment that I think that design is incorrect, and I will generally rewrite much of the HAL driver library to be cleaner.