Starving threads?

grygorek wrote on Friday, March 04, 2016:

Hi,

I am using v8.2.3 on an ARM Cortex-M0 and have noticed some interesting behaviour with xSemaphoreTake and xSemaphoreGive. When two running threads of the same priority use the same semaphore, it can happen that the second thread never gets a chance to execute.

I think my problem is related to this (not sure):
https://sourceforge.net/p/freertos/discussion/382005/thread/e981974b/

My application is more complex but I think the code below can explain the issue…

SemaphoreHandle_t semH;

void Task1( void *pvParameters )
{
  for( ;; )
  {
    xSemaphoreTake( semH, 0xFFFFFFFF );
    debug_print("000 ");
    xSemaphoreGive( semH );
  }
}


void Task2( void *pvParameters )
{
  for( ;; )
  {
    xSemaphoreTake( semH, 0xFFFFFFFF );
    debug_print("111 ");
    xSemaphoreGive( semH );
  }
}

I can see only "000 " being printed to the output. While debugging, stopped at a breakpoint on the line that prints "000 " (in Task1), Task2 has the status BLOCKED. When xSemaphoreGive is executed from Task1, Task2 changes its status to READY. Because no context switch happens at that moment, Task1 keeps running and its xSemaphoreTake is executed again. Surprisingly, it enters the locked section, and the status of Task2 changes back to BLOCKED. This repeats forever, so Task2 is never executed.

I know that adding a call to vPortYield() after xSemaphoreGive solves the problem.
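For reference, this is roughly what I mean (a sketch only; taskYIELD() is the portable macro for requesting a yield, so I assume it has the same effect on this port as calling vPortYield() directly):

void Task1( void *pvParameters )
{
  for( ;; )
  {
    xSemaphoreTake( semH, 0xFFFFFFFF );
    debug_print("000 ");
    xSemaphoreGive( semH );

    /* Force a switch so the other equal-priority task waiting on semH
       gets a chance to run - at the cost of a context switch on every
       iteration. */
    taskYIELD();
  }
}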

I am not sure whether this is correct or incorrect behaviour, but a note in the documentation would help to avoid surprises.

I can provide more information about my configuration if needed.

Regards

rtel wrote on Friday, March 04, 2016:

This is expected and documented (in the book at least) behaviour if you
are using a semaphore in a tight loop from more than one task.

Older versions of FreeRTOS would yield after giving a semaphore if
another task of equal or higher priority was waiting for the semaphore,
but this had two perceived issues:

  1. Really it breaks the scheduling policy, because a task should only
    yield to a higher priority task.

  2. It can result in thrashing (rapidly switching back and forth) between
    tasks of equal priority with each task only performing a tiny amount of
    work in between each switch.

Now the behaviour has been ‘corrected’ in that a context switch is only
performed if the task waiting for the semaphore has a higher priority -
but this introduced the behaviour you have noticed.

If tasks of equal priority are using a semaphore in a tight loop then,
when the semaphore is given, the other task will be unblocked, but will
not start executing until the end of the time slice. However, when the
time slice ends the original task, if it is using the semaphore in a
tight loop, will be holding the semaphore again - so the unblocked task
simply re-enters the Blocked state, until by coincidence the time slice
ends at a moment when the semaphore is not being held.

You have control over this; if you know your tasks are using a
semaphore in this way then you can manually call taskYIELD() after the
semaphore is given, BUT that takes you back to a very inefficient
thrashing execution pattern. It is much better to yield manually only
if you notice that the time slice ended (the tick count incremented)
while the semaphore was being held.
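For example (an illustrative, untested sketch only, reusing the semH
handle from the code above):

void Task1( void *pvParameters )
{
    TickType_t xTicksOnEntry;

    for( ;; )
    {
        xSemaphoreTake( semH, portMAX_DELAY );
        xTicksOnEntry = xTaskGetTickCount();

        /* Work performed while holding the semaphore. */
        debug_print( "000 " );

        xSemaphoreGive( semH );

        /* Only yield if a tick (and therefore possibly the end of this
           task's time slice) occurred while the semaphore was held, so
           the task that was unblocked by the give gets to run without
           the two tasks thrashing on every iteration. */
        if( xTaskGetTickCount() != xTicksOnEntry )
        {
            taskYIELD();
        }
    }
}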