Max interrupt response delay because of critical sections

Hello,

is there an easy way to estimate the maximum response delay for an interrupt running at or below the max syscall priority (I want to use the FreeRTOS API inside the ISR)?

Even if I give the interrupt the highest priority (but <= max syscall prio), there is an additional delay on top of the normal hardware-specific interrupt latency because of the critical sections inside FreeRTOS (or, in my case, the FreeRTOS TCP stack).

What is the best way to figure this out?

Regards

Naturally the maximum “calendar time” depends on the port, clock frequencies, etc. I’m not sure about the TCP stack, so the best thing to do is probably to try and measure it.

If you have a Cortex-M you can use the debug unit’s cycle counter as a fast clock: take a snapshot of its value on the way into and out of a critical section, within the critical section macros themselves, then look for the maximum difference. That also lets you set a breakpoint that fires when the maximum goes above a threshold, so you can see where in the code and under what conditions the worst case occurs. Make sure not to take the entry snapshot until after interrupts are masked, though.

Also note this technique only works for kernel ports that use asynchronous interrupts for context switches. Plus your measurement will include time spent executing any interrupts that occur within the critical section (which could only ever be interrupts with a priority above configMAX_SYSCALL_INTERRUPT_PRIORITY).
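To make the measurement idea concrete, here is a minimal sketch of the instrumented enter/exit hooks. The hook names (`crit_timing_enter` / `crit_timing_exit`) and the `read_cycle_counter()` stand-in are my own illustrative assumptions, not FreeRTOS APIs; on a real Cortex-M you would read `DWT->CYCCNT` instead (after enabling it via `CoreDebug->DEMCR` and `DWT->CTRL`), and call the hooks from inside your port's critical section macros, entry hook only after interrupts are masked:

```c
#include <stdint.h>

/* Stand-in for the Cortex-M DWT cycle counter (DWT->CYCCNT) so the
 * logic can be exercised on any host. On target, replace this with a
 * direct read of the cycle counter register. */
static uint32_t fake_cycles;
static uint32_t read_cycle_counter(void) { return fake_cycles; }

static uint32_t crit_enter_snapshot; /* counter value at section entry */
static uint32_t crit_max_cycles;     /* worst observed section length  */

/* Call at the end of the critical section entry macro, i.e. only
 * AFTER interrupts have been masked, so the snapshot is not skewed
 * by a preemption between the read and the mask. */
void crit_timing_enter(void)
{
    crit_enter_snapshot = read_cycle_counter();
}

/* Call at the start of the critical section exit macro, before
 * interrupts are unmasked again. Unsigned subtraction handles
 * counter wrap-around correctly. */
void crit_timing_exit(void)
{
    uint32_t elapsed = read_cycle_counter() - crit_enter_snapshot;
    if (elapsed > crit_max_cycles)
    {
        /* Put a conditional breakpoint here (elapsed > threshold)
         * to catch where and when the worst case occurs. */
        crit_max_cycles = elapsed;
    }
}
```

Dividing `crit_max_cycles` by the core clock frequency gives the worst-case critical section time, which is an upper-bound contribution to the response delay of any interrupt at or below configMAX_SYSCALL_INTERRUPT_PRIORITY.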

OK, I already suspected that trying and measuring is the best I can do :roll_eyes:

Thanks!