ISRs on a Cortex M3

wlauer wrote on Sunday, February 16, 2014:

I’m trying to figure out how to properly setup a few ISRs for an application running on an Arduino Due using the Atmel SAM3X chip. I think I’m missing something regarding the use of FromISR APIs.
In the manual “Using the FreeRTOS Real Time Kernel – A Practical Guide”, Chapter 3 “Interrupt Management”, Section 3.5 on interrupt nesting, the last two paragraphs state that for ISRs to use the FromISR APIs, the priority of the interrupt must be between configMAX_SYSCALL_INTERRUPT_PRIORITY and configKERNEL_INTERRUPT_PRIORITY. But then it goes on to say:
“Interrupts that make API calls can only use these priorities, but will be masked by critical sections” and “Interrupts that use these priorities are prevented from executing while the kernel or application is inside a critical section”.

Can I interpret this to mean that, while in a critical section, interrupts in the aforementioned priority range are masked and consequently missed? If that is the case, what’s the point of the FromISR APIs? To me the beauty of synchronizing a task with an ISR using a binary semaphore is lost if you’re going to miss interrupts.

Without going into the details my application loses interrupts every once in a while using Give and Take on binary semaphores.

richard_damon wrote on Sunday, February 16, 2014:

They won’t be “missed”, but delayed. Critical sections are supposed to be kept very short, and are used to guard the update of critical data shared between tasks and interrupts (like the lists used to manage which task is to run).

The FreeRTOS kernel itself is very good at keeping to a minimum the duration of a critical section, and uses other methods when it needs a longer protection that doesn’t block interrupts.

wlauer wrote on Sunday, February 16, 2014:

Ok, things are beautiful again: no missed interrupts!

My application is using a TC (Timer/Counter) to emit a single pulse. The sequence is to:

  1. Start the TC
  2. Wait in the application with a Take of a binary semaphore.
  3. When the ISR is called at the end of the pulse, stop the TC and Give the binary semaphore.

If the interrupt is delayed too much a second pulse will be emitted and a second interrupt will be generated. This is obviously a function of the frequency of the TC etc. and the maximum expected delay.
I need to know the delay in order to determine when this kind of situation is a problem.
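The sequence above might be sketched like this in FreeRTOS terms; the names xPulseDone, StartPulseTimer() and StopPulseTimer(), and the TC0_Handler vector are illustrative assumptions, not code from the actual application:

```c
#include "FreeRTOS.h"
#include "semphr.h"

extern void StartPulseTimer(void);     /* hypothetical TC helpers */
extern void StopPulseTimer(void);

static SemaphoreHandle_t xPulseDone;   /* created with xSemaphoreCreateBinary() at start-up */

void vPulseTask(void *pvParameters)
{
    for (;;)
    {
        StartPulseTimer();                                        /* 1. start the TC */
        if (xSemaphoreTake(xPulseDone, portMAX_DELAY) == pdTRUE)  /* 2. block until the ISR signals */
        {
            /* pulse complete - prepare the next one */
        }
    }
}

void TC0_Handler(void)                                            /* 3. runs at the end of the pulse */
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    StopPulseTimer();                                             /* stop the TC first */
    xSemaphoreGiveFromISR(xPulseDone, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);                 /* return straight to the waiting task */
}
```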

The delay comes from three software sources:

  1. The OS
    a. How do I determine the maximum length of time the OS can delay servicing an interrupt with a priority in the range between configMAX_SYSCALL_INTERRUPT_PRIORITY and configKERNEL_INTERRUPT_PRIORITY?
  2. The application
    a. If I use critical sections, a judicious design means this won’t be a problem.
  3. Other interrupt sources especially those with equal or higher priority compared to the TC
    a. Again, with a judicious design this won’t be a problem.

The delay is also a function of hardware sources such as CPU clock speed, etc.

Thanks in advance.

rtel wrote on Sunday, February 16, 2014:

If there is a chance that two interrupts are serviced before the binary semaphore is processed, then you might consider a counting semaphore instead of a binary semaphore.
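For example (a sketch, assuming the hypothetical handle name xPulseDone and an arbitrary maximum count of 10):

```c
#include "FreeRTOS.h"
#include "semphr.h"

SemaphoreHandle_t xPulseDone;

void vCreatePulseSemaphore(void)
{
    /* Each xSemaphoreGiveFromISR() increments the count, so an interrupt
       that fires before the task has performed its Take is counted
       rather than lost. */
    xPulseDone = xSemaphoreCreateCounting(10, 0);   /* max count 10, initial count 0 */
}
```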

a. How do I determine the maximum length of time the OS can delay servicing an interrupt with a priority in the range between configMAX_SYSCALL_INTERRUPT_PRIORITY and configKERNEL_INTERRUPT_PRIORITY?

The FreeRTOS Cortex-M port has a full interrupt nesting model, so if this is the most important interrupt make sure all the other interrupts have a priority below it.
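On the SAM3X that might look like the sketch below, using the CMSIS NVIC functions; TC0_IRQn and UART_IRQn are example IRQ numbers, and configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY is assumed to be the unshifted priority value as in the common Cortex-M FreeRTOSConfig.h layout:

```c
#include "FreeRTOS.h"
#include "sam3xa.h"   /* CMSIS device header for the SAM3X - name may vary with the toolchain */

void vConfigureInterruptPriorities(void)
{
    /* On Cortex-M a numerically LOWER value is a logically HIGHER priority.
       The time-critical TC interrupt gets the highest priority that is
       still allowed to call the FromISR APIs; everything else sits below it. */
    NVIC_SetPriority(TC0_IRQn, configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY);
    NVIC_SetPriority(UART_IRQn, configLIBRARY_MAX_SYSCALL_INTERRUPT_PRIORITY + 1);
    NVIC_EnableIRQ(TC0_IRQn);
}
```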

If there are other interrupts that have to be higher priority, then only you know how long they are going to take, as only you know the code that those interrupts are going to execute.

In most cases, the answer is simply to measure it.


wlauer wrote on Sunday, February 16, 2014:

I think you missed the point. I don’t want multiple interrupts.

In order to prevent multiple interrupts I need to know the maximum time the RTOS can delay servicing the first interrupt.

rtel wrote on Sunday, February 16, 2014:

If you are using the xHigherPriorityTaskWoken parameter along with portYIELD_FROM_ISR or portEND_SWITCHING_ISR, and the task that reads from the binary semaphore is the highest priority task, then you are guaranteed that the interrupt will return directly to that task - without any other tasks executing in between. If it is not the highest priority task then, again, only you know how long it will take the other tasks to execute first.

Also reference the context switch time on this page, and ensure you have configUSE_PORT_OPTIMISED_TASK_SELECTION set to 1 in FreeRTOSConfig.h for the best performance.


wlauer wrote on Monday, February 17, 2014:

We are getting closer but not quite there yet.

My question is not how long a context switch takes, but how long it is from when an interrupt occurs until its ISR starts running.

For interrupt priorities logically higher than configMAX_SYSCALL_INTERRUPT_PRIORITY, the critical sections of the OS and its APIs have no impact.
For interrupt priorities logically lower than configMAX_SYSCALL_INTERRUPT_PRIORITY, the OS and its APIs can mask interrupts during critical sections, and consequently delay the start of the ISR until the OS exits the critical section.
What is the maximum value of this delay?

The FAQ on context switch time says: “The ARM Cortex-M port performs all task context switches in the PendSV interrupt. The quoted time does not include interrupt entry time.”
What is the interrupt entry time? This is what’s killing my application. How can I measure it?

Thanks again.

rtel wrote on Monday, February 17, 2014:

Ok, so I think you are asking: what is the worst case time between an interrupt being asserted and the interrupt being processed, if the interrupt is asserted while in a critical section? Or, equivalently: what is the longest critical section?

I don’t have numbers for that - it would not be practical for us to attempt it either, as >30 architectures and >16 compilers, with each compiler having probably 4 or 5 different optimisation levels, would make approximately … a lot of different numbers to measure.

SafeRTOS does quote worst case times for each certified port, and they do it by finding the longest path through a critical section (no doubt in a queue send or receive function - although they use scheduler and queue locks to allow all interrupts to remain unmasked for the longest operations), taking the cycle count on entry and the cycle count on exit, then multiplying by the time per cycle.
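On the Cortex-M3 that measurement can be done with the DWT cycle counter; a sketch (register addresses from the ARMv7-M architecture, SystemCoreClock being the usual CMSIS core-clock variable):

```c
#include <stdint.h>

#define DEMCR       (*(volatile uint32_t *)0xE000EDFCUL)
#define DWT_CTRL    (*(volatile uint32_t *)0xE0001000UL)
#define DWT_CYCCNT  (*(volatile uint32_t *)0xE0001004UL)

void vCycleCounterInit(void)
{
    DEMCR |= (1UL << 24);   /* TRCENA: enable the DWT unit */
    DWT_CYCCNT = 0;
    DWT_CTRL |= 1UL;        /* CYCCNTENA: start the cycle counter */
}

/* Returns the number of CPU cycles the path under test took;
   seconds = cycles / SystemCoreClock. */
uint32_t ulMeasureCycles(void (*pfnPathUnderTest)(void))
{
    uint32_t ulStart = DWT_CYCCNT;
    pfnPathUnderTest();             /* e.g. the longest critical section path */
    return DWT_CYCCNT - ulStart;
}
```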


richard_damon wrote on Tuesday, February 18, 2014:

If you haven’t added any long critical sections, and the delay caused by FreeRTOS’s use of critical sections is giving you problems, that makes me think something isn’t being done right.

If the delay of the handful or so of instructions that a critical section might cause is a problem, then you need to put that processing into a very high priority interrupt (likely the highest possible) and perform the critical operations very rapidly (perhaps writing this routine in assembly). If it needs to communicate with a task, have it trigger a lower priority interrupt at a level that can talk to FreeRTOS.
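A sketch of that hand-off on the SAM3X, using the CMSIS NVIC_SetPendingIRQ() call; PIOB_IRQn stands in for any otherwise unused peripheral interrupt, and xPulseDone is a hypothetical semaphore handle:

```c
#include "FreeRTOS.h"
#include "semphr.h"
#include "sam3xa.h"   /* CMSIS device header - name may vary with the toolchain */

extern SemaphoreHandle_t xPulseDone;

/* Runs ABOVE configMAX_SYSCALL_INTERRUPT_PRIORITY, so it must not call
   any FreeRTOS API. It does the urgent work, then pends a
   lower-priority interrupt to do the talking. */
void TC0_Handler(void)
{
    /* ...time-critical work, e.g. stop the timer... */
    NVIC_SetPendingIRQ(PIOB_IRQn);
}

/* Runs at or below configMAX_SYSCALL_INTERRUPT_PRIORITY, so the FromISR
   APIs are safe here. Enabled at start-up with NVIC_SetPriority() and
   NVIC_EnableIRQ() like any other interrupt. */
void PIOB_Handler(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    xSemaphoreGiveFromISR(xPulseDone, &xHigherPriorityTaskWoken);
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}
```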

If the delay in getting the task rescheduled is the problem, perhaps you need to do a bit more in the ISR (whatever is that critical in timing) before switching to the task.

wlauer wrote on Thursday, February 20, 2014:

I got it working by adjusting the priorities of the various interrupts.
I was curious about your solution of having a high priority interrupt trigger a low priority interrupt to communicate with the OS. I assume you’re talking about a software generated interrupt. How do I set one of those up? How is the handler defined, etc.? Code snippets would be good.