I think I have found a potential source of obscure, infrequent bugs in the task and queue API calls.
In a few places in the code, in the functions xTaskResumeAll, xQueueGenericReceive and xQueueGenericSend, the portYIELD_WITHIN_API macro is invoked before taskEXIT_CRITICAL is called. The latter macro runs only after portYIELD_WITHIN_API has executed.
Here is an example of one possible scenario that shows what may happen.
Suppose an application has created three independent tasks (task1, task2 and task3) and two binary semaphores: m2 for IPC between task1 and task2, and m3 for IPC between task2 and task3. Let's list the priorities in ascending order:
task1 - 1 (lowest)
task2 - 2
task3 - 3 (highest)
The scheduler is started. Both semaphores are taken during initialization.
Task1 is running. Task2 is blocked on m2 and task3 is blocked on m3 (each waiting for its semaphore to be given).
At some point task1's code invokes xSemaphoreGive, which is implemented by xQueueGenericSend. Inside xQueueGenericSend, task1 enters a critical section (macro taskENTER_CRITICAL).
In the vast majority of existing FreeRTOS ports, a nesting counter keeps track of how many times portENTER_CRITICAL has been called: it is incremented on every portENTER_CRITICAL and decremented on every portEXIT_CRITICAL. At this point the nesting counter is incremented to 1.
Then the task copies data to the queue (lines 465-468 of queue.c) and finally checks whether any task is blocked waiting for data on this queue (lines 472-473 of queue.c). Given the scenario's preconditions, task2 is currently blocked on m2. Since task1 gives the semaphore and task2 is the higher-priority task, task1 calls the portYIELD_WITHIN_API macro (line 480), which performs a context switch.
Note that task1 is still inside the critical section and tick interrupts are disabled.
Let's assume that task2 then does a few things, say sending some information via UART for argument's sake, and that within a 1 ms period it gives semaphore m3, which task3 is waiting for.
To give m3, task2 invokes xQueueGenericSend, where the first thing it does is enter a critical section, so the nesting counter is incremented again and its value is now 2. xQueueGenericSend then checks whether any task is already waiting for this semaphore; in our scenario task3 is. Task3 now runs, in a state where tick interrupts are enabled.
Let's assume that at some point the context switch inside the tick interrupt handler selects task1 again.
Task1 now resumes at the point where portYIELD_WITHIN_API was called. The next instruction is portEXIT_CRITICAL. In most realizations of this macro (for the vast majority of existing ports), the nesting counter is decremented and compared with 0: if its value is 0, tick interrupts are enabled again; otherwise only the counter is decremented. In our scenario the nesting counter currently holds 2, so task1 decrements it to 1 and returns from the API call in a state where tick interrupts are still disabled. Normally it should return with tick interrupts enabled, but here it does not!
Is this normal? Am I correct?
I suppose that portYIELD_WITHIN_API (line 480 of queue.c) must be preceded by taskEXIT_CRITICAL whenever taskENTER_CRITICAL was called above it, and that the taskEXIT_CRITICAL currently called below should be removed. See example below. The same change could be made in the function xTaskResumeAll (file tasks.c) and the function xQueueGenericReceive (file queue.c).
If my assumptions are not correct, please provide your feedback. Thank you for your attention in this matter.