EventBus recommendation?

I have five tasks.

1 and 2 are vying for a resource. They make requests to 3, then wait for 3 to tell them when it’s safe to proceed.

3 determines who has priority on that resource and will ask 4 and 5 to do their work to make it available.

4 and 5 will listen to commands from 3, then report their success back to 3.


I could do this with getters and setters. I could have extern memory objects all over. If you think about the arrows of communication, it puts 3 in the middle: everyone has to know 3, and 3 has to know everyone else… Not so bad with five tasks, but imagine I wanted an all-seeing logging task or added 5 more to each group.

I’m considering an EventBus instead, where 1, 2, 3, 4, and 5 all sub to the messages they want from the EB, and the EB pubs to each of the tasks on an event.

What is the best FreeRTOS way to do this? I’ve had ups and downs.

  1. It seems like using an eventGroup for each of the tasks is nice. I can sub by sending the EB my eventGroup pointer and the flags I’m interested in. Event groups limit me to 24 “data events”, which is probably enough, but it’s not leaving a ton of room to grow.

  2. I don’t really want 1 and 2 to have extern / direct pointer access to 3’s internal data because that’s not threadsafe… But this leads me to queues. And queues have an issue: I really don’t want a backlog of data updates in chronological order; an element size of 1 would be fine so I have the latest data when I need it. But now we’re talking 76 bytes for the queue plus the size of the object. Times 24 possible data events, that’s roughly 1800 bytes of overhead before I’ve done any work at all, which indicates I’d need to share these mailboxes (queues of 1 item) and that none of my tasks consume, only peek.

  3. I could pass a pointer from 1 or 2 to 3, but I’d also need to pass a mutex pointer. Now 3 is able to directly read the memory of another task, but it’s pretty important that it take the mutex first. In this case every write and read is mutex protected, so as long as everyone plays nice there are no extra memory or thread-safety issues. This is not without disadvantages. I wrote mutex, but those are just empty queues IIRC; a binary semaphore is probably what I want here to avoid the cost of queues, although binary semaphores lack the priority inheritance a mutex provides, so there is a questionable priority-inversion issue.

That’s it so far as I can see. I want to share 3’s output with 1 and 2, or with 4 and 5, so without knowing who already saw the data and when, there is really no way to know it’s safe to write… right? But read and write semaphores shared as part of the “event” should cover this.

I think there has to be a more elegant solution.

I don’t think your post gives enough information to offer a suggestion. Maybe break the problem in two to start with. Tasks 1 and 2 need to access the same resource, so the first question is: can you refactor the design so that whatever tasks 1 and 2 are doing is done from the same task? Second, what does accessing the resource cost in terms of time and data transfers? For example, does it just read from the shared object in a few microseconds, or is it performing some lengthy IO operation? The best critical-section method depends a lot on these kinds of factors.

Ok, fair enough.

It wouldn’t be impossible for 1 and 2 to be the same task, but they are separate state machines, and both have blocking requirements. They are trying to get 3 to tell 4 and 5 to set up their network hardware so they can do their higher-level communication, but 1 and 2 may have different speeds/settings and it’s 3’s job to figure out which we should be doing. 4 and 5 are hardware specific, whereas 3 is a constant that abstracts over the different hardware.

Accessing the resource should be fast enough, maybe milliseconds if the networking is busy with a specific packet. Time isn’t really the issue I care about much; it’s that 1 and 2 need separate things, so they ask 3, who asks 4 and 5, who have to report back to 3 in agreement so that 1 or 2 gets an OK. If you draw it out, that’s a lot of arrows, and a dependency problem if I were to trade 4 or 5 for a 6 that replaces both, or add a 7 that sits next to 3 just logging what is going on.

This is an example, and you’re right, not a great one. What I’m really getting at is which event-bus or messaging system works for this 5-task example, or for a 10- or 20-task system.

I’m leaning towards an event bus that everyone subs to and pubs to while supplying a pointer and a mutex. The receiving tasks will provide a pointer to some local memory that the event bus makes a copy into while respecting mutex/semaphore safety. So my tasks would have a pub/sub system and shadow data; it’s not ideal, but it seems lighter than mailboxes.

I’d use a task hierarchy with 3 being the main or parent task creating its children. It passes the child tasks its handle at least. Since I’m using C++ I have a task class and pass the this pointer. But you could also have a struct in C containing all the things needed for communication common to all tasks, with a struct (object) owned by each task and appropriate common functions or methods to access/use it.
Analogous to the FreeRTOS TCB that the task handles point to.
On top of that I prefer a simple REQUEST/RESPONSE protocol if possible, to avoid dealing with potentially unbounded events and the resulting event-queue overflow/backlog situations.
This could be a suitable design given that 3 controls the overall behavior of the application. The separation of the task-communication data/functions would allow changing the underlying FreeRTOS mechanism used.
I’m using a similar approach, with task notifications as the event mechanism and ‘ownership transfer’ of the result data buffers/structs provided by the REQUEST/RESPONSE mechanism, but also mutexed or double-buffered access to child-task data where needed.
Basically, when a child task has some information or data ready to report, it sends a REQUEST to the main task and waits for the RESPONSE. During that period the main task has ownership of the child task’s result data; it processes the data and sends back a RESPONSE (event) to the child, which continues to run.
This helped to reduce memory footprint since there is no additional buffering or queuing for (most of) the transactions.

I think the nature of your messages is going to be pivotal in your decision. If you need really low-latency responsiveness and the messages are tiny (a byte or two, or even 1 bit), then using event groups is probably a very good option. You could have one group to notify the bus that a task has just posted a message, and you can immediately notify whichever tasks need to receive it.

Another option is to use message queues. Tasks can wait on a queue, and you can post messages to the queue from any task. This, I think, will make things easier, but it has a lot more overhead than event groups. So if you have thousands of messages per second I would go with event groups like you are suggesting; if you have less frequent messaging then I think message queues will be a good option because of the reduction in code complexity.

In your expanded idea of essentially an event bus you have to be very careful because sharing the same bus between different priority problem spaces can cause all kinds of tangles.

For me, in similar situations, I usually just use message queues. Each task has one queue for incoming work and writes to the queues of its collaborators. T1 and T2 will both write requests to the queue on T3, which will sleep until it gets such a message, then arbitrate who gets the resource and send a message back with the result to the T1 or T2 queue. The unlucky task will continue sleeping until it gets its turn, while the other task runs; when it is done it tells T3 via a message, which wakes T3.

The next layer is just more of that pattern.

@cobusve

This is mostly where I ended up.

I was put off that queues are 76-84 bytes each before their own storage. So I thought up a complicated “fat pointer” system that included the safety semaphore… then I learned semaphores were 80 bytes each themselves, so it didn’t end up saving me a lot considering the extra work and dubious result.

I’ve decided to use mailboxes (queues of 1, with a single producer that always overwrites and multiple consumers that only ever peek). To signal new data, I send a notification value to each task so I can wake a task with a flag indicating NUM3_MAILBOX_NEW_DATA. This works great, with the one exception that my eventBus will need to make sure not to overwrite a notification value that is still pending for a task with a new notification value, which would cause the receiving task to miss an update signal. This is easily solvable.

I could use event groups to flag instead of notify, but I’m already making my tasks sleep on notify, and the cost of an event group was closer to 48+ bytes itself, whereas at least the first notification index is “already paid for”.