I’m working on a command parsing and IO module based on freeRTOS. My command parsing task receives a formatted hex string and converts it to a packet with address fields, etc. From there the parser needs to forward the packets on to the appropriate IO task.
I’m wondering what the best architecture is for sending the packets from the parser task to the other tasks?
A separate queue for each task?
A single queue guarded by a mutex, each task peeks to check the message address and only receives if the address matches?
Is there a way of having a stream of packets that are addressed to individual tasks?
Any input on good practice when architecting a system like this is greatly appreciated.
I’d recommend using a separate (input) queue or message buffer per task.
Both mechanisms let you send/push multiple items so they stay pending in the queue or message buffer until the I/O tasks process them.
Better not to mess around with a single queue plus extra mutex protection; that’s not really the intended use case.
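To make the per-task-queue approach concrete, here is a minimal sketch. The packet layout, task names, and queue sizes are all hypothetical (not from the original post); the parser simply uses the decoded address to pick the destination queue.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

#define NUM_IO_TASKS 4
#define QUEUE_LEN    5

/* Hypothetical packet: an address field plus payload. */
typedef struct {
    uint8_t address;        /* which IO task this packet is for */
    uint8_t payload[32];
} Packet_t;

/* One input queue per IO task, created at startup with
 * xQueueCreate(QUEUE_LEN, sizeof(Packet_t)). */
static QueueHandle_t xIoQueues[NUM_IO_TASKS];

/* Parser side: decode, then route to the matching queue. */
static void vParserTask(void *pvParameters)
{
    Packet_t xPacket;
    for (;;) {
        /* ... receive the hex string and decode it into xPacket ... */
        if (xPacket.address < NUM_IO_TASKS) {
            /* Wait briefly if the destination queue is full. */
            xQueueSend(xIoQueues[xPacket.address], &xPacket,
                       pdMS_TO_TICKS(10));
        }
    }
}

/* IO side: each task blocks on its own queue only. */
static void vIoTask(void *pvParameters)
{
    QueueHandle_t xMyQueue = (QueueHandle_t)pvParameters;
    Packet_t xPacket;
    for (;;) {
        if (xQueueReceive(xMyQueue, &xPacket, portMAX_DELAY) == pdPASS) {
            /* ... perform the IO for this packet ... */
        }
    }
}
```

Because each IO task owns its queue, no mutex or peeking is needed; the kernel’s queue already handles the synchronization.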
I agree with Hartmut here. My normal design methodology is to assign each task some resource it waits on (a queue, buffer, direct-to-task notification, or a semaphore, though the last can normally be replaced with a notification). Each queue or buffer will also normally have a defined sink, which can be a task or an interrupt. Rarely will a queue have multiple tasks waiting on it. The only exception I have had so far was multiple copies of the same task waiting on one queue for commands, for operations that took a long time but weren’t CPU intensive and could be overlapped (though often this could be replaced with timers).
Queue storage is pre-allocated at creation. Therefore the fill level doesn’t matter as far as memory/resource usage is concerned when queuing items by value/copy.
You could also queue items by reference/pointer and manage the memory needed for the items yourself. But this is a bit more complicated (you need your own memory management) and normally only useful with rather large items, to avoid the overhead of copying them into and out of the queue.
Note that the same applies to message buffers, which have slightly less internal overhead than queues.
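For illustration, a sketch of the queue-by-pointer idea mentioned above. The `BigPacket_t` type and function names are made up for this example; the key point is that only a pointer (a few bytes) is copied through the queue, and ownership of the heap block passes to the receiver, which must free it.

```c
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "queue.h"

/* A large item that would be expensive to copy by value. */
typedef struct {
    size_t  len;
    uint8_t data[255];
} BigPacket_t;

/* The queue holds pointers, not packets:
 * xPtrQueue = xQueueCreate(5, sizeof(BigPacket_t *)); */
static QueueHandle_t xPtrQueue;

/* Sender: allocate, fill, then send the pointer. */
static void vSendBigPacket(const uint8_t *pucData, size_t xLen)
{
    BigPacket_t *pxPkt = pvPortMalloc(sizeof(*pxPkt));
    if (pxPkt != NULL) {
        pxPkt->len = xLen;
        memcpy(pxPkt->data, pucData, xLen);
        if (xQueueSend(xPtrQueue, &pxPkt, portMAX_DELAY) != pdPASS) {
            vPortFree(pxPkt);   /* queue full: don't leak the block */
        }
    }
}

/* Receiver: owns the memory after receiving, so it must free it. */
static void vReceiveBigPacket(void)
{
    BigPacket_t *pxRx;
    if (xQueueReceive(xPtrQueue, &pxRx, portMAX_DELAY) == pdPASS) {
        /* ... process pxRx->data ... */
        vPortFree(pxRx);
    }
}
```

The cost is the extra memory-ownership discipline: every send must have exactly one matching free on the receive side.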
Is the memory allocation for the queue static or dynamic? If I have, say, 20 queues, each able to hold 5 messages with a maximum message length of 255 bytes, is that ~25 KB that is constantly tied up?
If I have multiple tasks all sending packets back to a serialiser task, would that then be a case for a single queue, as there’s a single destination, or still better to have one queue per task that needs to send messages out through the serialiser?
Does it make any difference to the memory allocation in the queues if I write my application in C or C++?
A queue can be created either statically or dynamically; I have linked to the documentation so you can read how rather than repeat it here. The memory is constantly tied up if you allocate statically. If you allocate dynamically, you have the option to free the memory when the queue is not in use. Whether you can afford that much RAM depends on your system, but you have other options: queue pointers to messages, or make linked lists out of your messages and send a direct-to-task notification to the receiving task, passing the head of the list, without using any additional memory. Your task can then block on or poll its notification to know when a message is waiting.
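The linked-list-plus-notification idea above can be sketched roughly as follows. The `Message_t` layout and function names are invented for illustration; a short critical section guards the shared list head, and the notification count tells the receiver that work is pending.

```c
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical message with an intrusive list link. */
typedef struct Message {
    struct Message *pxNext;
    uint8_t ucData[255];
} Message_t;

static Message_t  *pxHead = NULL;     /* shared list head */
static TaskHandle_t xReceiverTask;    /* set when the task is created */

/* Sender: push the message onto the list, then notify the receiver.
 * No queue storage is needed - the message itself carries the link. */
void vSendMessage(Message_t *pxMsg)
{
    taskENTER_CRITICAL();
    pxMsg->pxNext = pxHead;
    pxHead = pxMsg;
    taskEXIT_CRITICAL();
    xTaskNotifyGive(xReceiverTask);
}

/* Receiver: block on the notification, then drain the whole list. */
void vReceiverTask(void *pvParameters)
{
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);
        taskENTER_CRITICAL();
        Message_t *pxList = pxHead;   /* take the list atomically */
        pxHead = NULL;
        taskEXIT_CRITICAL();
        while (pxList != NULL) {
            Message_t *pxMsg = pxList;
            pxList = pxList->pxNext;
            /* ... process pxMsg, then free or recycle it ... */
        }
    }
}
```

Note that pushing onto the head makes the list LIFO; if message order matters, keep a tail pointer as well, or reverse the list before draining it.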
If there is a single destination I would probably use a single queue.
The C++ class would call the underlying C function to create the queue, so I don’t think it would make any difference.
Is it dynamic in that it allocates space for however many items you specify with xQueueCreate() and then you free it with xQueueDelete(), or dynamic in that if you have 2 items in the queue but you’ve created it to handle up to 10 items it only allocates space for the 2 items?
Dynamic, with vQueueDelete(). The queue, while it exists, ALWAYS uses the same amount of memory. It allocates space for its maximum when created, so you can never get an ‘Out of Memory’ error in operation, only a Queue Full.