Can we share a FreeRTOS object between two processes (separate applications) via shared memory and expect everything to be safe still?

I have a general question that arose from some challenges I'm facing in using a FreeRTOS QueueHandle_t.

Here is some context:

  • I’m running FreeRTOS on a dual-core MCU
  • Each core needs to access the same QueueHandle_t
  • The accessing is done via shared memory
  • I pass a pointer of the QueueHandle_t from one core to the other and can access it from both cores
  • I can push and pop data from both sides

My question is the following. Can the QueueHandle_t still be thread safe despite being shared between separate applications via shared memory? If each core pushes or pops at the same time will they properly block each other? I understand that two threads running in the same application will block each other, but will two threads running in separate applications properly block each other?

Big question: is this an SMP application (with one copy of FreeRTOS) or an AMP application (with a separate copy of FreeRTOS per core)?

If the first, then you don’t have “two applications”, but one application running on multiple cores.

If the second, you have the problem that FreeRTOS can only properly handle queues and the like that THIS copy created.


Thanks for the response Richard.

Turns out it is an AMP application. In case it adds any meaningful context, I'm using a PSoC6 from Infineon.

One more question on this topic. I have confirmed that each core CAN push and pop from the same queue object via shared memory. From your response it sounds like, since there are separate copies of FreeRTOS, I can't rely on the blocking mechanisms to work properly. Is that the only mechanism that will fail?

I could just wrap access to the Queue with a semaphore (accessible by both sides) to lock and unlock and use the rest of the functionality as is. Would that be correct?
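For what it's worth, here is a hedged sketch of that wrapper idea, assuming a *hardware* inter-core lock rather than a FreeRTOS semaphore (a semaphore created by one kernel would have the same cross-kernel blocking problem as the queue itself). `HwSemTake()`, `HwSemGive()`, and `IPC_SEM_QUEUE_LOCK` are hypothetical wrappers around the device's inter-core semaphore hardware (on PSoC6, something like the PDL IPC semaphore driver):

```c
/* SKETCH ONLY -- assumes a hardware lock visible to both cores.
   HwSemTake()/HwSemGive() and IPC_SEM_QUEUE_LOCK are hypothetical
   wrappers around the device's inter-core semaphore hardware. */
BaseType_t xSharedQueueSend( QueueHandle_t xQueue, const void *pvItem )
{
    BaseType_t xResult;

    HwSemTake( IPC_SEM_QUEUE_LOCK );            /* spin on the hardware lock */

    /* Block time MUST be 0: neither kernel's blocking machinery can be
       safely engaged on a queue created by the other kernel. */
    xResult = xQueueSend( xQueue, pvItem, 0 );

    HwSemGive( IPC_SEM_QUEUE_LOCK );

    return xResult;                             /* pdPASS, or errQUEUE_FULL */
}
```

Because blocking across kernels can't work, callers on both cores would have to poll and retry on failure (or sleep and retry) rather than block on the queue itself.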

BTW there is a multi-core AMP messaging example explained here

Might be interesting for your use case.


Amazing, thanks for sharing!

Yes, putting/getting data from a queue will normally work, it is blocking/unblocking that won’t work. This means that putting data in an empty queue might cause problems if a task from a different copy of FreeRTOS is waiting on it.

As Hartmut points out, there ARE ways to make an AMP version of message buffers.


Are we limited to Message Buffers for AMP core-to-core communication? Or can we take advantage of Queues in a similar implementation?

I don’t think queues have the hooks built into them to allow cross-kernel communication.

The SMP ports, where both cores run the SAME copy of FreeRTOS (so it is all one application), work with queues, but I am not sure your usage would work with that, as it sounds like your processors aren’t “symmetric”. Inter-system communication can be a tough problem.


Stream/Message buffers were designed specifically for core to core communication. That is why their event model is different to the other primitives such as queues and event groups - although stream/message buffers get used a lot on single core devices too because their different design choices also make them leaner.

The event mechanism for stream/message buffers defaults to sending a task notification to another task on the same core, but that behaviour can be overridden to instead raise an interrupt on another core. The original implementation had a single override shared by all created stream/message buffers; the latest code (head revision in Git) enables each object to have its own override.

In theory queues could be given the same override mechanism, but it may restrict how the application writer could use them, and that restriction would be enforced only by the writer’s discipline rather than programmatically. That is because queues can have multiple tasks waiting to send to them, and multiple tasks waiting to receive from them, and those tasks are expected to be on the same core, where that core knows where their task control blocks are, where their stacks are, etc.
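As a concrete illustration of that override, here is a hedged sketch along the lines of the FreeRTOS AMP message-buffer demo. The `sbSEND_COMPLETED()` hook and `xStreamBufferSendCompletedFromISR()` are real FreeRTOS names; `GenerateCoreToCoreInterrupt()`, the ISR name, and `xSharedMessageBuffer` are hypothetical, device-specific pieces:

```c
/* FreeRTOSConfig.h fragment on the SENDING core (sketch).
   GenerateCoreToCoreInterrupt() is a hypothetical, device-specific
   function that raises an interrupt on the receiving core. */
#define sbSEND_COMPLETED( pxStreamBuffer ) GenerateCoreToCoreInterrupt( pxStreamBuffer )

/* On the RECEIVING core, the interrupt handler unblocks whichever task
   is waiting on the buffer via the FreeRTOS API
   xStreamBufferSendCompletedFromISR().  xSharedMessageBuffer is assumed
   to be the handle of the buffer placed in shared memory. */
extern StreamBufferHandle_t xSharedMessageBuffer;

void vCoreToCoreISR( void ) /* hypothetical ISR name */
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    xStreamBufferSendCompletedFromISR( xSharedMessageBuffer,
                                       &xHigherPriorityTaskWoken );
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```

The corresponding `sbRECEIVE_COMPLETED()` override and `xStreamBufferReceiveCompletedFromISR()` exist for the reverse direction, so the sender can also be unblocked when space becomes available.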
