vTaskNotifyGiveFromISR/ulTaskNotifyTake "glitch"

Hi,

I’ve noticed that in the debug configuration, ISR-to-task notification via vTaskNotifyGiveFromISR/ulTaskNotifyTake takes around 14us most of the time. But sometimes there is a “glitch” (once in 100 times or even more rarely) and this time jumps to 130us. In the release configuration I’m seeing the same thing, but the times are lower: 8us vs 74us. I’ve also noticed that it depends on what code my MCU is executing, so maybe it’s things like task contention or similar.

Does that mean that FreeRTOS is a “soft RTOS”, guaranteeing that vTaskNotifyGiveFromISR/ulTaskNotifyTake takes no longer than X, but not guaranteeing that it will always take exactly X (it can sometimes, well actually most of the time, take less)? Is it possible to make it fully deterministic? Is there some configXXX flag for that?

Or maybe my task/interrupt priorities are set wrong? I’ve set them according to something similar to “rate monotonic scheduling”, i.e. things that happen with greater frequency get greater priorities.

PS this happens with GCC/ARM_CM4F port on an STM32F4.

In fact it is deterministic in the sense that a worst-case response time can be guaranteed. Real-time, i.e. determinism, is not about speed, it’s about guaranteed deadlines.
There are small (and necessary) critical sections in the FreeRTOS code during which FreeRTOS-covered interrupts or the scheduler are disabled. These can cause a certain jitter.
Also, if you are referring to the response time of a task notified from an ISR, other interrupts might kick in before the notified task can take the appropriate action.
For minimal jitter you should configure FreeRTOS with preemption enabled, give the desired interrupt and the associated task the highest possible priority, and make use of the portYIELD_FROM_ISR() optimization available for e.g. Cortex-M3/4/7 MCUs.
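As a minimal sketch (the task function, name and stack depth below are made up; configUSE_PREEMPTION and configMAX_PRIORITIES are standard FreeRTOSConfig.h settings):

/* FreeRTOSConfig.h: preemption must be on so the notified task runs
   as soon as portYIELD_FROM_ISR() requests a context switch. */
#define configUSE_PREEMPTION    1

void vHandlerTask(void *pvParameters); /* your notified task */

/* Create the handler task at the highest task priority. */
xTaskCreate(vHandlerTask,             /* task function */
            "handler",                /* name, for debugging only */
            256,                      /* stack depth in words */
            NULL,                     /* no parameter */
            configMAX_PRIORITIES - 1, /* highest task priority */
            &taskHandle);
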
For absolute minimal jitter you have to do all the work in an ISR with an interrupt priority (logically) above configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY. It is then outside the scope of FreeRTOS, with the drawback that you can’t use any FreeRTOS API in that ISR.
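On a Cortex-M4 that could look like this (TIMx_IRQn is just a placeholder; remember that on Arm-M a lower numeric NVIC value means a logically higher priority, and this assumes the common ST-style FreeRTOSConfig.h where configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY holds the raw, un-shifted NVIC value):

/* Logically above the range FreeRTOS ever masks: this ISR is never
   delayed by FreeRTOS critical sections, but it must not call any
   FreeRTOS API at all. */
NVIC_SetPriority(TIMx_IRQn, configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY - 1);
NVIC_EnableIRQ(TIMx_IRQn);
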
Any remaining jitter is then caused by MCU execution behavior (running from flash or RAM, …).
BTW, if you don’t need FPU support, use the CM3 port to avoid the overhead of saving/restoring the FPU registers on each context switch.

Thanks! Can you please provide more info on portYIELD_FROM_ISR()? I’ve only used it before when I needed to call *FromISR() functions in my IRQ handlers. Can it be used by itself to notify a task when something is done? I cannot do all the work in the IRQ handler because there is some heavy FPU arithmetic involved on that IRQ event (200Hz timer).

I have two main timers in which I don’t want jitter:

  • 1MHz timer driving DMA->DAC, which I want to be very “jitter-free” for maximum DAC resolution, which is 1MHz on my MCU
  • 200Hz or similar (configurable) timer for driving cycles of the DMA->DAC stuff, which I also want to be very “jitter-free”

There are other timers/stuff but they can have jitter, I don’t care.

Following this “rate monotonic scheduling” idea I’ve set the first (1MHz timer) IRQ priority to be the highest one, and the second (200Hz timer) to be the lowest one, as it is the slowest timer in the whole code. Do I understand correctly that this is wrong? That is, should I:

  • set the 1MHz DMA->DAC timer to the highest priority, the 200Hz timer to a priority one less than the highest, and the other tasks/interrupts to lesser priorities, since I don’t care about jitter there?
  • or set the 200Hz timer to the highest priority, the 1MHz DMA->DAC timer one less, and the others to lesser priorities?
  • or give the 1MHz DMA->DAC timer and the 200Hz timer the same priority, since they both have to be jitter-free (so neither preempts the other), and lesser priorities to the rest of the tasks/interrupts?

First, portYIELD_FROM_ISR() will not ‘wake’ a task that is waiting for something. Basically, what it does is run the scheduler so that whoever is the current highest-priority ready task gets run. Normally it is called after a *FromISR function has indicated that it has woken a task that is now the highest-priority ready task, so we need to run the scheduler to make that switch happen.

For the 1 MHz timer, I would see if you could configure the timer to directly trigger the DAC and thus eliminate all the jitter. Then you only need to handle lower rate interrupts to service the DMA buffer getting near empty. If not, I would see if I could put the 1 MHz interrupt at a very high priority (very low value) that isn’t allowed to interact with FreeRTOS, so critical sections don’t interfere with it.
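If you are using the ST HAL, that self-running chain might look roughly like this (a sketch assuming CubeMX-generated hdac/htim6 handles, with TIM6 TRGO configured as the DAC trigger and the DMA stream in circular mode; waveBuffer and WAVE_SAMPLES are illustrative names):

/* TIM6 update events clock the DAC; DMA refills it from a circular
   buffer with no per-sample CPU involvement, hence no jitter. */
HAL_TIM_Base_Start(&htim6);
HAL_DAC_Start_DMA(&hdac, DAC_CHANNEL_1,
                  (uint32_t *)waveBuffer, WAVE_SAMPLES,
                  DAC_ALIGN_12B_R);
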

Rate Monotonic Scheduling doesn’t really apply to interrupt priorities; it is a method for scheduling tasks. Interrupts tend to be ordered by their required response time and related factors, and a ‘No Jitter’ requirement says those interrupts want to be high priority (a low numeric priority on Arm-M processors). Since the 200 Hz interrupt needs to trigger a task, it should be set to the value of configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY.
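For example (again assuming the ST-style FreeRTOSConfig.h where configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY is the raw, un-shifted NVIC value; TIMx_IRQn is a placeholder for your 200 Hz timer’s interrupt):

/* The highest priority from which FreeRTOS *FromISR APIs may still
   be called safely. */
NVIC_SetPriority(TIMx_IRQn, configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY);
NVIC_EnableIRQ(TIMx_IRQn);
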

I do use portYIELD_FROM_ISR() correctly then. The 1MHz DMA->DAC timer is working just fine; it is the 200Hz timer that is causing me trouble. The code is like so:

void TIM2_IRQHandler(void) { // this is the 200Hz timer
    ....
    BaseType_t higherPriorityTaskWoken = pdFALSE;
    vTaskNotifyGiveFromISR(taskHandle, &higherPriorityTaskWoken);
    portYIELD_FROM_ISR(higherPriorityTaskWoken);
}

And the task does this:

while (true) {
    if (ulTaskNotifyTake(pdTRUE, TICKS_BETWEEN_UPDATING_WATCHDOG_COUNTERS) != 0u) {
        // Do some heavy FPU stuff...
        // ...and here is the problem: mostly it happens 14us after the timer, but sometimes 130us
    }
}

I think I get your point. If I understood you correctly, I will set both the 1MHz and the 200Hz timer to configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY. This way they will have the highest priority, and the fact that they share the same priority means that neither will preempt the other, which is actually what I want.

BTW, if “rate monotonic scheduling” does not really apply to IRQ priorities, and the highest IRQ priorities should go to the handlers that must not have jitter (which I think I understand), what idea can be applied to setting task priorities? The same one, i.e. things that should happen as fast as possible get higher priorities? What is the idea of “rate monotonic scheduling” then; is it applicable in practice at all?

I haven’t used the STM32 DACs yet, but as far as I know the STM32F4 (and probably other derivatives) supports hardware trigger coupling of e.g. TIM6 + DMA + DAC without any software interaction needed. There should be something on the net or an application note.
So the only things to take care of would be the timely preparation of the DAC data and appropriate control of the DMA feeding the DAC.
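As a sketch of the DMA control side (HAL_DAC_ConvHalfCpltCallbackCh1 and HAL_DAC_ConvCpltCallbackCh1 are the stock HAL channel-1 DMA callbacks; prepTaskHandle is a hypothetical handle to your data-preparation task):

extern TaskHandle_t prepTaskHandle; /* hypothetical: your preparation task */

/* With a circular DMA buffer, the half- and full-transfer callbacks
   tell you which half of the buffer is free to refill. Notify the
   preparation task here instead of doing heavy FPU work in the ISR. */
void HAL_DAC_ConvHalfCpltCallbackCh1(DAC_HandleTypeDef *hdac)
{
    BaseType_t woken = pdFALSE;
    vTaskNotifyGiveFromISR(prepTaskHandle, &woken); /* first half free */
    portYIELD_FROM_ISR(woken);
}
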
To get interrupt priorities right see also https://www.freertos.org/RTOS-Cortex-M3-M4.html

Yes, exactly, TIM6 + DMA + DAC is working just fine at 1MHz. I think I get the point: I need to give the 200Hz TIM2 timer the highest priority (or the same as the TIM6 + DMA + DAC one), since I don’t want jitter in it (and yes, it is driving the task which prepares data for TIM6 + DMA + DAC). Maybe I should also give the data-preparing task a high priority.
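I guess something like this would do it (a sketch; it requires INCLUDE_vTaskPrioritySet to be enabled in FreeRTOSConfig.h):

/* Raise the data-preparation task to the highest task priority so it
   runs immediately after the 200Hz ISR notifies it. */
vTaskPrioritySet(taskHandle, configMAX_PRIORITIES - 1);
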

I was thinking that “rate monotonic scheduling” is like a solution for everything, and now I understand that it is probably not applicable here at all - priorities should be set on an individual basis according to jitter requirements.

PS Thank you for the link.

PPS I’ve just tried it: setting the 200Hz IRQ and task priorities to be the highest, and I don’t experience the jitter now. Thank you!

I suspect that the 1 MHz timer wants less jitter than the 200 Hz one does, so it probably makes sense for it to have a higher priority. And, as I mentioned, if it doesn’t need to interact with FreeRTOS, and at that rate I hope it doesn’t (at least not most of the time), then putting it at a priority even higher (a lower value) than configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY will cut down the jitter from critical sections.

Note that ‘Rate Monotonic’ is a simplification for setting priorities; it assumes that for the operations you don’t need to do often, you have the most time to get them done. On its face, it doesn’t handle things with a low rate but a tight deadline well. Perhaps a better method is to rank your tasks (and you can do this for interrupts too) by their allowed latency: how long after they get notified do they have to get their job done? Things with short allowed latencies are given a higher priority. Requirements like “No Jitter” tend to imply very short allowed latencies.

Thank you, I now know everything I need to know about that.