Why doesn't FreeRTOS do the context switch inside the APIs ending with FromISR?

Hi,
the user must pay extra attention to the FreeRTOS APIs ending with FromISR. I mean, they must take care of the return value of the API. For example, the standard usage of xTaskResumeFromISR is like this:
BaseType_t needYield = xTaskResumeFromISR(TaskHandle);
portYIELD_FROM_ISR(needYield); // must use this API to request the context switch
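
For completeness, the full ISR I have in mind looks roughly like this (vExampleISR and xHandlingTask are just placeholder names):

    extern TaskHandle_t xHandlingTask;   /* placeholder task handle */

    void vExampleISR( void )
    {
        BaseType_t needYield = pdFALSE;

        /* Clear the interrupt source and do the minimal ISR work here ... */

        /* The return value says whether a context switch is required. */
        needYield = xTaskResumeFromISR( xHandlingTask );

        /* The switch must then be requested explicitly - this is the step
           that is easy to forget. */
        portYIELD_FROM_ISR( needYield );
    }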

but not all users have that knowledge, or they may simply forget to call portYIELD_FROM_ISR…

if the user forgets to request the task context switch, one of three things will happen.
case1: the task cannot be scheduled, even if it is the highest priority ready task, if no other task triggers the scheduler

case2: the task will be scheduled if some other task triggers the scheduler at a later time

case3: in some APIs, the OS will set the context switch flag itself, and the task will be scheduled when the next SysTick interrupt is triggered (see the sketch after this list)
1) e.g. in xTaskGenericNotifyFromISR, the OS checks the input parameter pxHigherPriorityTaskWoken; if this parameter is NULL, the OS sets xYieldPending to pdTRUE
2) e.g. xQueueGenericSendFromISR sets xYieldPending to pdTRUE inside xTaskRemoveFromEventList if the woken task has a higher priority than pxCurrentTCB
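
A rough sketch of the difference (xQueue and xData are placeholder names, and the exact internals depend on the kernel version):

    /* Variant A: pass NULL - the kernel records the pending yield itself
       (case 3 above) and the switch waits for the next tick. */
    xQueueSendFromISR( xQueue, &xData, NULL );

    /* Variant B: pass the flag and yield explicitly - the switch is
       requested as soon as the ISR completes. */
    BaseType_t needYield = pdFALSE;
    xQueueSendFromISR( xQueue, &xData, &needYield );
    portYIELD_FROM_ISR( needYield );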

No matter whether it is case 1, 2 or 3, a task that cannot run in time is not a good thing for the system's real-time requirements. So I have 2 questions:
Q1: Why doesn't FreeRTOS trigger the context switch inside the API itself? Why not do it like the task-level APIs, e.g. xQueueReceive, where the OS calls portYIELD_WITHIN_API() (via queueYIELD_IF_USING_PREEMPTION()) internally:
xQueueReceive
{

    if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToSend ) ) != pdFALSE )
    {
        queueYIELD_IF_USING_PREEMPTION(); // trigger the context switch here
    }
}
Q2: Why does FreeRTOS provide two kinds of API, one ending with FromISR and the other used at task level?
Why not provide just one kind of API and resolve the context attribute (task or ISR) inside the API?
For example, a portable-level API could check whether the context is task level or ISR level, such as:
#define xPortContextLevel ((SCB->ICSR & SCB_ICSR_VECTACTIVE_Msk) >> SCB_ICSR_VECTACTIVE_Pos) // taking the Cortex-M4 as an example
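
Purely as an illustration of what I mean (this is hypothetical code, not anything that exists in FreeRTOS), a single send API could branch on that macro internally:

    /* HYPOTHETICAL - not part of FreeRTOS: one send function that picks the
       task-level or ISR-level implementation by itself. */
    BaseType_t xQueueSendUnified( QueueHandle_t xQueue,
                                  const void * pvItem,
                                  TickType_t xTicksToWait )
    {
        if( xPortContextLevel != 0 )    /* running inside an interrupt handler */
        {
            BaseType_t needYield = pdFALSE;
            BaseType_t result = xQueueSendFromISR( xQueue, pvItem, &needYield );
            portYIELD_FROM_ISR( needYield );
            return result;
        }
        else                            /* running at task level */
        {
            return xQueueSend( xQueue, pvItem, xTicksToWait );
        }
    }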

One big part of it is that on some ports the scheduler is actually invoked directly by the portYIELD_FROM_ISR call, as opposed to setting an interrupt pending and the switch happening when the ISR returns. An effect of this is that any code after the yield won't get executed until the task that was switched out gets to run again. Not a problem if this is the very end of the ISR, which will just return to the task; not so good if there is more ISR code to run.

Hi, Richard,
thanks for your quick response.
Could you give more explanation of the comment: "Not a problem if this is the very end of the ISR that will just return to the task, not so good if there is more ISR to run."
Why is it not so good if there is more ISR code to run?

If I understand your question correctly, then you ask why the portYIELD_FROM_ISR is not done automatically by FreeRTOS.

I wasn’t involved in the design of FreeRTOS, but I do believe it’s good to leave the choice to the user (developer). Here’s a possible use case:

Take a system in which there are several interrupt-driven comm interfaces at different priority levels. Task A is being interrupted by an ISR with a low priority, say a serial UART. The ISR would typically buffer the received character, then signal a higher priority task to process the buffered character. If the yield were implied, there would be an immediate task switch after the return. So far, so good.

Now imagine that during the time the interrupted task would have had the CPU until its time slice ran out (which is now being used by the serial processing task B), another higher priority interrupt (say an SPI handler) interrupts task B and schedules task C to run after the ISR finishes. The net effect would be a superfluous context switch, because task B in every case has to wait until C is done, so it would have been more efficient for task A to finish its time slice.

I've worked with real-time OSes for almost 30 years now, and I can assure you that fine-tuning CPU scheduling can become a fine art. I do appreciate the flexibility that FreeRTOS provides!

As far as I can tell, it's a related issue: we are talking about very heavily used code, and the more cycles we can save, the better. There are processors (as has been discussed before) that allow you to determine what context you are in, but if for every single invocation of an OS function you need to make that query just for the convenience of the developer, it adds up to many CPU cycles, and the fewer CPU cycles an OS burns to do its work, the better the OS.

As I said, different ports handle the scheduling differently. In some ports, the call to portYIELD_FROM_ISR will IMMEDIATELY go to the scheduler, which will then switch to that task, stopping in the middle of the ISR. The ISR is really running in the context of the task that was interrupted, just as if it had a subroutine call at that point. This means the rest of the ISR code won't run until that task gets its next chance to run, and since the whole point of putting code into an ISR is to get it to run right away, that tends to not be what you want.

On machines where portYIELD_FROM_ISR just sets a pending interrupt bit at the lowest priority, then yes, the yield could have been done inside the FreeRTOS functions, but then FreeRTOS could not be run on machines which don't work that way, and one goal of FreeRTOS is to support a wide variety of platforms.
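
For example, on the Cortex-M ports the yield macro boils down to little more than pending the PendSV exception; a simplified sketch of that port-layer approach (details vary by port and compiler):

    /* Simplified sketch: portYIELD() just pends PendSV, and the actual
       context switch runs once all other ISRs have completed. */
    #define portNVIC_INT_CTRL_REG     ( *( ( volatile uint32_t * ) 0xe000ed04 ) )
    #define portNVIC_PENDSVSET_BIT    ( 1UL << 28UL )

    #define portYIELD()                                     \
    {                                                       \
        portNVIC_INT_CTRL_REG = portNVIC_PENDSVSET_BIT;     \
        __asm volatile ( "dsb" ::: "memory" );              \
        __asm volatile ( "isb" );                           \
    }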

There is more information in this FAQ: https://www.freertos.org/FAQ_API.html#IQRAPI although that doesn't provide answers to all the questions. Specifically, on why the decision whether or not to perform a context switch is left to the application writer: consider the scenario where you receive an interrupt each time a character is received, but there is no processing to do until the whole string has been received. In that case the ISR can opt to call portYIELD_FROM_ISR() only after the entire string has been received, and in so doing avoid thrashing the scheduler by continuously switching to a task that has nothing to do until the full string is buffered (you will still switch to that task when the next tick interrupt occurs anyway, if not before by another task yielding).
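
A minimal sketch of that scenario (the buffer, register and task handle names here are placeholders):

    #include "FreeRTOS.h"
    #include "task.h"

    extern TaskHandle_t xStringProcessingTask;   /* placeholder task handle */
    static char cRxBuffer[ 64 ];
    static size_t uxRxIndex = 0;

    void vUartRxISR( void )
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;
        char c = ( char ) UART_RX_DATA_REG;      /* placeholder register read */

        if( uxRxIndex < sizeof( cRxBuffer ) )
        {
            cRxBuffer[ uxRxIndex++ ] = c;
        }

        /* Only notify the processing task - and only then consider a
           yield - once the whole string has arrived. */
        if( c == '\n' )
        {
            uxRxIndex = 0;
            vTaskNotifyGiveFromISR( xStringProcessingTask, &xHigherPriorityTaskWoken );
            portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
        }
    }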

Hi,

You can safely release the CPU as shown:

ISR()
{
    // some code ...

    portYIELD_FROM_ISR(needYield); // it's ok to call it here
}

But here is a different story:

ISR()
{
    // some code ...

    portYIELD_FROM_ISR(needYield); // it's not ok to call it here

    // more important code here ...
}

I think that's the reason why the programmer is responsible for releasing the CPU when it's time to do it, rather than letting the _FromISR() functions do the job.

And yes, that could be a problem for newbies as well as seasoned programmers (I’ve been there), but there is also a wise saying: READ THE MANUAL.

@Xavier, it's a bit more complicated. On many ports, especially those with nestable interrupts, it is generally safe to do the portYIELD_FROM_ISR anywhere in the ISR, as all it does is software-trigger a lowest-priority interrupt to make the scheduler run.

Some non-nestable ports invoke the scheduler directly on the call, so you have to do it at the end (and some spell the macro differently).

As was said, you need to read the directions for your port and follow the rules, especially since some require special declarations for the ISRs.

The APIs are designed in a device-independent way, so they all use the flag to allow the yield to be triggered at the end (there is rarely any reason NOT to do it at the end). I don't know why xTaskResumeFromISR doesn't take a pointer to the variable. Personally I would write the line as

wasWoken |= xTaskResumeFromISR(handle);

so I could use just a single flag. Though actually, I wouldn’t expect to be using Resume because of its issues.
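
For instance (the handles here are just placeholders), several FromISR calls in one handler can share a single flag, with a single yield at the end:

    #include "FreeRTOS.h"
    #include "queue.h"
    #include "task.h"

    extern QueueHandle_t xQueueA;   /* placeholder handles */
    extern TaskHandle_t xTaskB;

    void vCombinedISR( void )
    {
        BaseType_t wasWoken = pdFALSE;
        uint32_t data = 0;

        /* Pointer-style APIs accumulate into the flag themselves - they
           only ever set it to pdTRUE, never clear it ... */
        xQueueSendFromISR( xQueueA, &data, &wasWoken );

        /* ... while xTaskResumeFromISR returns its result, so OR it in. */
        wasWoken |= xTaskResumeFromISR( xTaskB );

        /* One yield at the end covers everything above. */
        portYIELD_FROM_ISR( wasWoken );
    }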
