SMP: vTaskDelay() not respected for tasks pinned to core not running the tick

Hi there,

I am struggling to understand why such a simple piece of code does not work, see below…

I am using an RP2040 with SDK v2.1.1. The FreeRTOSConfig.h is the one from the pico-examples repository.

You can see that the BlinkTask is pinned to core 1, while the main task runs on core 0.

Let’s consider two cases: task creation at POSITION 1 or POSITION 2.

When POSITION 1 is active, stdout is flooded with “Blinking LED…”: the delay is not honored (the task does not wait 1000 ticks) and the main task never prints.

When POSITION 2 is active, the BlinkTask prints once and then never again. The main task prints as expected.

If I pin the BlinkTask to core 0 instead, everything is fine.

What is fundamentally wrong with my thinking? Should this work? Is this a bug?

I thank you very much already for any tips!

#include <stdio.h>

#include "FreeRTOS.h"
#include "task.h"

#include "pico/stdlib.h"

static constexpr UBaseType_t kCore0 = 1UL;  // affinity mask bit 0 → core 0
static constexpr UBaseType_t kCore1 = 2UL;  // affinity mask bit 1 → core 1

void blink_task(void* params) {
  while (true) {
    printf("Blinking LED...\n");
    vTaskDelay(1000);  // block for 1000 ticks
  }
}

void main_task(__unused void* params) {

  // POSITION 1
  xTaskCreateAffinitySet(blink_task, "BlinkTask", 2048, NULL, 1, kCore1, NULL);

  int count = 0;
  while (true) {
    printf("Hello from main task count=%u\n", count++);
    vTaskDelay(3000);
  }
}

void VLaunch(void) {
  xTaskCreateAffinitySet(main_task, "MainThread", 2048, NULL, 1, kCore0, NULL);

  // POSITION 2
  // xTaskCreateAffinitySet(blink_task, "BlinkTask", 2048, NULL, 1, kCore1, NULL);

  vTaskStartScheduler();
}

int main(void) {
  if (!stdio_init_all()) {
    return 1;
  }

  busy_wait_ms(1000);
  printf("Starting FreeRTOS SMP on both cores...\n");

  VLaunch();
  return 0;
}

I originally posted this question in the Pico forum, but maybe it comes down to a fundamental flaw in my understanding of the SMP concept.

First big question: is your printf “thread safe”?

Hello!

First of all, thanks for taking the time.

Yes, it is. It’s the one from the SDK. It uses a Pico mutex (not a FreeRTOS one) that locks out both cores, as per the documentation. On top of that, I see no interleaving. It is not ISR-safe, but that does not apply here.

Let me toggle a GPIO pin in the BlinkTask instead, just to be sure. Something like the sketch below.
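A minimal sketch of the GPIO variant (PICO_DEFAULT_LED_PIN is my assumption for the pin; the rest is the plain SDK GPIO API):

// GPIO variant of blink_task (assumption: PICO_DEFAULT_LED_PIN drives the
// on-board LED; gpio_xor_mask() toggles the pin).
void blink_task(void* params) {
  gpio_init(PICO_DEFAULT_LED_PIN);
  gpio_set_dir(PICO_DEFAULT_LED_PIN, GPIO_OUT);
  while (true) {
    gpio_xor_mask(1u << PICO_DEFAULT_LED_PIN);
    vTaskDelay(1000);
  }
}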

I’ll report back in a few minutes

I think we can rule out the influence of printf at least :wink:

The same thing happens when toggling a GPIO instead of printing with printf.

Cheers

Here are some traces:

The program has changed a bit in that I am no longer printing:

  • Main does a blocking wait of 50 ms, then vTaskDelay(200) (1 tick → 1 ms in my config)
  • Blink does a blocking wait of 50 ms, then vTaskDelay(50)

So their execution time is the same. A sketch of the modified tasks is below.
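A minimal sketch of the modified tasks, assuming configTICK_RATE_HZ = 1000 (so 1 tick = 1 ms) and busy_wait_ms() from the SDK for the blocking wait:

void blink_task(void* params) {
  while (true) {
    busy_wait_ms(50);  // 50 ms of simulated work (busy wait, never blocks)
    vTaskDelay(50);    // then block for 50 ticks = 50 ms
  }
}

void main_task(void* params) {
  while (true) {
    busy_wait_ms(50);  // same 50 ms of simulated work
    vTaskDelay(200);   // then block for 200 ticks = 200 ms
  }
}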

Time slicing is obviously active, otherwise one task would never show up in the “only on core 0” case (both tasks pinned to core 0), as both tasks have the same priority.
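For context, these are the SMP-related config options I believe to be in effect; the macro names are the standard FreeRTOS ones, but the values are inferred from the behavior above rather than copied from my file, so treat them as assumptions:

#define configNUMBER_OF_CORES    2     // schedule on both RP2040 cores
#define configUSE_CORE_AFFINITY  1     // enables xTaskCreateAffinitySet()
#define configUSE_TIME_SLICING   1     // equal-priority tasks share a core
#define configTICK_RATE_HZ       1000  // 1 tick = 1 ms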


I think I found the culprit:

#define configTASK_DEFAULT_CORE_AFFINITY 0x1  // Core 0

If I comment that line out, everything is fine and works as expected.
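If I read the kernel’s fallback correctly (an assumption on my part), commenting the line out is equivalent to setting:

#define configTASK_DEFAULT_CORE_AFFINITY tskNO_AFFINITY  // no default pinning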

It seems to be related to how the (passive) idle tasks are assigned to the cores.

If it is left uncommented, both IDLE tasks are pinned to core 0 and core 1 has nothing to run while idling, so things pile up. When it is commented out, both IDLE tasks have no affinity and I guess each ends up running on the appropriate core :wink:

Maybe something in the port code should pin these idle tasks to their respective cores instead of leaving them with no affinity? That would be safer, IMHO.

I would not know where to make these changes, though.

Cheers!

A couple of comments. “Time slicing” should have no effect on this program: all it does is force equal-priority tasks to yield to each other on the tick interrupt, but all of your tasks block, so the only thing it should do here is force the idle tasks to yield and switch back to themselves.

Second, if the “default” core affinity is affecting things, then there is perhaps a problem. The idle tasks either need to be created one per core with that core’s fixed affinity, or they should all have NO affinity (not the default affinity). The base code in tasks.c generates them with no affinity; if the port layer is changing them so they all run on only one core, that is a problem with the port. (I am not sure where that macro is supposed to be used.)

Assigning them NO affinity reduces the danger of something mistakenly blocking them, because as long as enough of them are running to cover the cores that are idle, things will still work. Pinning each one to a core leaves the danger that, if an idle hook function incorrectly blocks, you might not have an idle task available for that core; on the other hand, it reduces the overhead of letting idle tasks migrate between cores.

If the intent of configTASK_DEFAULT_CORE_AFFINITY is to make all tasks created without an explicit affinity (even the idle tasks) run only on those cores, then the function that creates the idle tasks should check that define and, when it is not tskNO_AFFINITY, explicitly assign each idle task to its own core.
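A purely hypothetical sketch of that check, not actual kernel code; xIdleTaskHandles[] stands in for the handles of the freshly created idle tasks, while vTaskCoreAffinitySet() and configNUMBER_OF_CORES are the existing SMP kernel API:

/* Hypothetical: after the idle tasks are created, pin one to each core
 * whenever a default affinity other than tskNO_AFFINITY is configured. */
#if ( configTASK_DEFAULT_CORE_AFFINITY != tskNO_AFFINITY )
    for( BaseType_t xCore = 0; xCore < configNUMBER_OF_CORES; xCore++ )
    {
        vTaskCoreAffinitySet( xIdleTaskHandles[ xCore ],
                              ( UBaseType_t ) ( 1u << xCore ) );
    }
#endif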


Hi,

Thanks for that very complete answer!

I had not thought of it in terms of “there should be enough idle tasks to feed all the cores that need one”. Put that way, it makes sense to leave them with no affinity and let the scheduler distribute them as needed.

This thread can definitely be closed!

Thanks again and have a wonderful day!