hydim wrote on Tuesday, April 17, 2018:
Hey everyone,
I would like to get some advice on a timing measurement I made that surprised me.
I run my application on a Cortex-M7 at 200MHz. I have configCHECK_FOR_STACK_OVERFLOW, configUSE_MALLOC_FAILED_HOOK, configUSE_TRACE_FACILITY, and configGENERATE_RUN_TIME_STATS all set to 0. The compiler is GCC, optimising for speed.
I have a task with the highest priority in my system that waits for a notification before running its code.
This notification comes from an interrupt that fires every 23us.
The scenario is the following:
- The interrupt fires (let's say at t=0)
- Some code is processed in the ISR (around 5us of processing, finishing at t=5us)
- A notification is sent to the highest-priority task (followed by portYIELD_FROM_ISR).
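To make the setup concrete, the interrupt/task interaction is essentially this (simplified sketch; the handler and task names are placeholders, not my exact code):

```c
#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t xHighPrioTaskHandle;   /* handle of the highest-priority task */

void vMyIRQHandler(void)                   /* fires every 23us */
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* ...around 5us of processing here... */

    /* Notify the waiting task, then request a context switch on ISR exit. */
    vTaskNotifyGiveFromISR(xHighPrioTaskHandle, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}

void vHighPrioTask(void *pvParameters)
{
    for (;;)
    {
        /* Block until the ISR sends a notification. */
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);

        /* ...the ~18us of work I expect to fit here... */
    }
}
```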
From what I understand, the context switch should be pretty fast: the task that gets notified has the highest priority and is the only one at that priority. I would therefore expect around 23-5=18us of processing time to be available for this task before the next interrupt.
But the time I measure between the notification being sent and the task waking up is around 2.2us, which represents 440 CPU cycles. Even though this can vary with multiple parameters, doesn't it seem a long way from the 84 cycles quoted in the FAQ?
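In case the measurement method matters: on a Cortex-M7 a natural way to take such timestamps is the DWT cycle counter, along these lines (a sketch with CMSIS register names, not my exact instrumentation):

```c
#include "core_cm7.h"   /* CMSIS: CoreDebug and DWT definitions */

/* Run once at startup to enable the cycle counter. */
static void vCycleCounterInit(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;   /* enable the trace block */
    DWT->LAR = 0xC5ACCE55;                            /* unlock DWT (needed on some M7 parts) */
    DWT->CYCCNT = 0;
    DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk;              /* start counting cycles */
}

/* One stamp taken in the ISR just before the notification, the other as
 * the first statement of the task after ulTaskNotifyTake() returns.
 * At 200MHz, the difference in cycles divided by 200 gives microseconds. */
volatile uint32_t ulNotifyStamp, ulWakeStamp;
```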
Also, in this high-priority task, I run some code that at some point takes a mutex and gives it back. I set the timeout of the take operation to 0, since I don't want to block: if the mutex isn't available, I don't run that code. It appears that the time consumed by this simple take + give pair is 1.7us, which represents 340 CPU cycles. Again, I think that's a lot for such a simple operation.
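The mutex usage in the task amounts to this (sketch; xMyMutex is a placeholder, created elsewhere with xSemaphoreCreateMutex()):

```c
#include "FreeRTOS.h"
#include "semphr.h"

extern SemaphoreHandle_t xMyMutex;   /* created with xSemaphoreCreateMutex() */

void vDoGuardedWork(void)
{
    /* Zero timeout: don't block if the mutex is already held. */
    if (xSemaphoreTake(xMyMutex, 0) == pdTRUE)
    {
        /* ...protected work... */
        xSemaphoreGive(xMyMutex);
    }
    /* else: skip the work this cycle rather than wait. */
}
```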
In total, instead of having 18us available for my task between two interrupts, I end up with only 14.1us (if I use the mutex). This makes a noticeable difference in my application.
Can anyone provide an explanation for this? It would be greatly appreciated.
Best