Why isn't there a single function to query whether a queue is empty or full?

It’s not a priority inversion problem. Since the higher-priority producer task falls into a busy loop as soon as the queue becomes full, it completely grabs the CPU, spinning while waiting for free queue space — which can never appear, because you don’t give the lower-priority consumer task the chance to run.
There is no real reason to document anything in addition. It’s just the way a multi-tasking RTOS with (fixed) priorities works.
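
For illustration, here is a minimal sketch of that pattern (hypothetical task bodies and queue handle name — not the code discussed in this thread). A producer that busy-polls for queue space starves everything below its priority, whereas passing a block time to xQueueSend() lets the scheduler run the consumer:

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

extern QueueHandle_t g_queue_handler;   // queue created elsewhere (assumed)

// Higher-priority producer: busy-polling starves the lower-priority consumer.
void vProducerBusyPoll( void *pvParameters )
{
    int item = 0;

    for( ;; )
    {
        // Zero block time: spins as long as the queue is full, so the
        // lower-priority consumer never gets the CPU to drain it.
        while( xQueueSend( g_queue_handler, &item, 0 ) != pdPASS )
        {
            // busy wait
        }
        item++;
    }
}

// Better: block until space is available, yielding the CPU while waiting.
void vProducerBlocking( void *pvParameters )
{
    int item = 0;

    for( ;; )
    {
        xQueueSend( g_queue_handler, &item, portMAX_DELAY );
        item++;
    }
}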

Hi Hartmut,

I don’t think that the producer task is hogging the CPU, otherwise you wouldn’t see that many ‘C’s in Xavier’s output (they belong to the lower-priority consumer task).

It is my suspicion (backed by Xavier’s observation that reducing serial debugging output to single characters changes the behavior) that the factor that really changes the game is the serial output itself (reaffirming my “Heisenberg effect” theory). Possibly the serial output routine down the chain does something like claiming a mutex, which may indeed address priority inversion to some degree.

Again: for concurrency behavior studies, I believe that serial output debugging is evil. @Xavier: I’d recommend saving your characters into a ring buffer instead of outputting them, and examining/displaying the ring buffer post mortem.
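
Something along these lines (a rough sketch; the buffer name and size are placeholders, not taken from your program):

#include <Arduino.h>

#define LOG_SIZE 256u

static char     logBuf[ LOG_SIZE ];
static unsigned logHead = 0;

// Called from the tasks instead of Serial.print(): fast and non-blocking.
static void logChar( char c )
{
    noInterrupts();                      // protect the index if several tasks log
    logBuf[ logHead % LOG_SIZE ] = c;
    logHead++;
    interrupts();
}

// Called once, after the experiment, to dump the captured trace over serial.
static void logDump( void )
{
    // If the buffer wrapped around, start with the oldest surviving character.
    unsigned start = ( logHead > LOG_SIZE ) ? ( logHead - LOG_SIZE ) : 0;

    for( unsigned i = start; i < logHead; i++ )
    {
        Serial.print( logBuf[ i % LOG_SIZE ] );
    }
    Serial.println();
}

That way the per-event cost is a couple of memory writes rather than a serial transfer, so the timing of the tasks is barely disturbed.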


@RAc To be honest, I did not decode the output in detail, but I double-checked the source. I also think you’re right that the serial driver internally most likely waits a while for the output to complete (here it helps that serial output is rather slow), allowing the consumer task to run for a short period of time.
As you rightly said, it’s one of those nasty side effects that give the illusion the program is (almost) right, distracting from the root-cause problem. Even worse, some applications run seemingly correctly with debug code/output enabled. Later on, with release builds omitting all debug output, the program breaks, and the next wrong path is often taken: blaming the compiler (optimizer) for being buggy.
Therefore I regularly test interim release builds, so that I have a chance to remember my last code changes … and to verify that the build result isn’t broken by optimizer effects (especially LTO) :wink:

Serial printing in Arduino is disastrous, to say the least, but debugging (with GDB) is even worse!

That’s why I’ve shortened the messages to a single letter; however, the inversion problem has nothing to do with the serial printing.

Hi @hs2 and @RAc,

In here:

#if 1
      else{ // no data available:
         Serial.println( "ND" );
         vTaskDelay( pdMS_TO_TICKS( 200 ) );
      }
#endif  

the CPU is released by the consumer task when it calls vTaskDelay(), so the producer task (with higher priority) gets the CPU. The serial output has nothing to do with this misbehavior. In fact, the code works without it:

#if 1
      else{ // no data available:
         vTaskDelay( pdMS_TO_TICKS( 200 ) );
      }
#endif  

Here the question is: why is this low-priority task never preempted by a higher-priority task?

My bet is on the critical section inside the uxQueueMessagesWaiting() function. Let’s review a simplified version of the consumer code:

while( 1 )
{
  if( uxQueueMessagesWaiting( g_queue_handler ) > 0 ){
      // items available: consume them here
  }
  // program gets here when there are no items in the queue, which happens when the
  // program starts (there is a 500 ms delay inside the producer task)
}

It seems that the consumer task is spending all its time inside the critical section (with interrupts turned off) of the function uxQueueMessagesWaiting(); that’s why it can’t release the CPU.

I’m not entirely sure whether I can call this behavior “priority inversion”, but is there a better term?

A workaround might be to use a blocking function in the else branch of the if( uxQueueMessagesWaiting( g_queue_handler ) > 0 ){ expression.

Will my future self recall this advice? I don’t think so.
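
To make the idea concrete, here is a rough sketch of one way to avoid the busy polling altogether (the item type and the 200 ms timeout are assumptions, and it drops the uxQueueMessagesWaiting() call instead of keeping it): let xQueueReceive() itself block, so the CPU is released whenever the queue is empty.

#include <Arduino.h>
// FreeRTOS queue API assumed available through the port's headers.

extern QueueHandle_t g_queue_handler;   // the queue created in the original sketch

void vConsumerTask( void *pvParameters )
{
    int item;

    for( ;; )
    {
        // Blocks for up to 200 ms waiting for data. While blocked the task is
        // not runnable, so the other tasks can use the CPU.
        if( xQueueReceive( g_queue_handler, &item, pdMS_TO_TICKS( 200 ) ) == pdPASS )
        {
            Serial.println( item );      // data available
        }
        else
        {
            Serial.println( "ND" );      // no data within the timeout
        }
    }
}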

For reference purposes, here is the code of uxQueueMessagesWaiting():

UBaseType_t uxQueueMessagesWaiting( const QueueHandle_t xQueue )
{
    UBaseType_t uxReturn;

    configASSERT( xQueue );

    taskENTER_CRITICAL();
    {
        uxReturn = ( ( Queue_t * ) xQueue )->uxMessagesWaiting;
    }
    taskEXIT_CRITICAL();

    return uxReturn;
}

Again, if it didn’t release the CPU, you wouldn’t see all of those ‘C’s in your output log. The ‘C’s come from the lower-priority consumer task.

Is the code you posted complete? I may load it onto one of my eval boards to see whether it behaves the same way on a non-Arduino port.

As the source code for FreeRTOS is available, it would be easy to look at all the places where uxQueueMessagesWaiting() could possibly block. I don’t have access right now, though.

@Xavier I think this

is wrong. If println expects a (format) string and gets an arbitrary integer, anything might happen. Nowadays compilers can warn about such typos, which can easily slip in…

Hi,

The method .println() (reference here) is heavily overloaded, so you can print any primitive C data type directly:

Serial.println( "hello world" );
Serial.println( 100 );
Serial.println( 3.14 );

.println() is far from perfect (particularly with floating point values), but that’s not the problem in my program.

Hi,

The consumer task doesn’t hog the CPU, but the function uxQueueMessagesWaiting() does. I didn’t know that, and I didn’t expect that misbehavior.

As I mentioned in the simplified code for the consumer task,

the issue is that uxQueueMessagesWaiting() is queried so quickly that the consumer task seems to stay inside a permanent critical section. Only when I later added a blocking function in the else branch was the CPU released.

And yes, the program is complete.

Here is the code of uxQueueMessagesWaiting() again (quoted above). The function itself doesn’t block; it enters a critical section.

Greetings!