If I use both Tickless Idle and a HW timer together, vTaskDelay does not respect timing; but if I use only one of them, it works

I am emulating an mps2-an385 board with QEMU.

I just can’t understand why, if I use both Tickless Idle and a HW timer (which continuously generates interrupts with a reload value of configCPU_CLOCK_HZ / 10000, i.e. a 10 kHz rate, so it constantly wakes the system out of Tickless Idle), vTaskDelay behaves in a really strange way: it is much slower and not in sync.

BUT:
if I instead disable the HW timer and leave Tickless Idle enabled, it works perfectly (and vice versa).

I’m just genuinely curious about what is breaking the timing.
It must be something related to the HW timer and the interrupts it generates, but I don’t see what is actually slowing things down.

If you want to use a HW timer to drive the FreeRTOS tick, you need to provide an implementation of portSUPPRESS_TICKS_AND_SLEEP which suppresses your HW timer (see “Tickless Low Power Features in FreeRTOS”). The default implementation suppresses SysTick, not your HW timer, which is why your MCU keeps waking up periodically.
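For reference, overriding the default tickless implementation looks roughly like this in FreeRTOSConfig.h (a sketch only; the body of the sleep routine, which must stop your HW timer, sleep, and fix up the tick count on wake, is up to your application):

```c
/* FreeRTOSConfig.h -- sketch only */
#define configUSE_TICKLESS_IDLE    2    /* 2 = application provides its own implementation */

/* Application-supplied routine: suppress the HW timer, enter low power,
 * then correct the tick count (e.g. via vTaskStepTick()) on wake. */
extern void vApplicationSleep( TickType_t xExpectedIdleTime );

#define portSUPPRESS_TICKS_AND_SLEEP( xExpectedIdleTime ) \
    vApplicationSleep( xExpectedIdleTime )
```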

I know this. My question was different: why does the HW timer, alongside Tickless Idle, give this strange vTaskDelay behavior?

I’m just curious about WHAT BREAKS under the hood.

Okay, apologies for not getting the question right. I assume by “strange vTaskDelay behavior” you mean that it is taking a long time to return? If so, can you make sure that your interrupt is firing at the correct rate? Also, can you share your ISR for the HW timer?

I suspect the behavior is the result of drift or slippage in the system tick rate. Drift occurs when other interrupts wake the system from tickless idle. Since you have a lot of those other interrupts (10 kHz), you are getting a lot of drift. You can play with portMISSED_COUNTS_FACTOR in port.c to tune the drift toward zero if your interrupt load is stable. Also note that if you change compiler optimization levels you will have to tweak portMISSED_COUNTS_FACTOR again.

Drift is expected with the default implementation of tickless idle because the tickless logic temporarily stops and restarts the timer.


Thanks for the answer.

Question: is the reason that vTaskDelay works “normally” when I disable the HW timer and leave Tickless Idle enabled, that the idle period ends exactly after xExpectedIdleTime (which is derived from xNextTaskUnblockTime)? So basically it works because the system exits idle at exactly the time when we want our delayed task to be resumed?

Other question:

Drift is expected with the default implementation of tickless idle because the tickless logic temporarily stops and restarts the timer.

when you say timer, you mean the SysTick, right? So basically the issue comes from “losing” some decrements in the period between disabling and re-enabling the SysTick?

The Tickless Idle code, when it is woken after the expected delay, can keep the system tick timing fairly accurately. The larger drift comes when it is woken unexpectedly, as the system doesn’t have a way to precisely determine how long it was idle, so it has to approximate it.

If the system is being woken up at 10 kHz, going into tickless idle doesn’t make sense (and on some processors it can actually COST power). The idea of tickless idle is that if the processor doesn’t have to do anything for a long time, it can enter modes optimized for doing nothing; but if you are going to be woken up again shortly anyway, that doesn’t pay off.


so basically what is causing the drift is the approximation ulCompleteTickPeriods = ulCompletedSysTickDecrements / ulTimerCountsForOneTick, right? Can such a small error really produce such strange vTaskDelay behavior? Isn’t the approximation error negligible, since each one can only be wrong by one tick? Or is it that, because there are so many interrupts, xTickCount keeps accumulating these approximation errors, and the growing error in xTickCount is what produces the strange vTaskDelay behavior?

You can accumulate up to a 1-tick error on each of those 10 kHz wake-ups, which adds up fast. Even at the fastest likely tick rate of 1 kHz, the error can accumulate at up to 10 times the actual tick rate, and it gets bigger if your tick is slower.
