jmmag wrote on Wednesday, December 12, 2012:
Thanks for the response, Richard.
As to your example, while I know it is just a simplified version of your code, I wouldn’t expect your function to take the message_struct as a parameter. It would take the input_message and the queue information (queue, mutex, circular buffer, perhaps wrapped into a single struct); write_to_cbuff copies the message into a slot in the buffer and returns a pointer to the message struct it copied into, which is then pushed onto the queue.
I originally did that because the data going into the circular buffer are just byte arrays, while the message_structs that go into the queue carry other information about the message (message type, source task, etc.) beyond just the message length and position in the circular buffer. That way I figured I could avoid dynamic arrays in structs. Though since the queue is really just a FIFO, having the start position in the message_struct is somewhat redundant, since I am only ever reading from the head of the circular buffer anyway. You are probably right that I should keep the queue stuff in one struct; it seems a bit messy my way.
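Something like this minimal sketch of what I mean (the names cbuff_t and the exact write_to_cbuff signature are just placeholders, and the mutex take/give and queue push are reduced to comments):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define CBUFF_SIZE 256

/* Descriptor that goes on the queue; placeholder fields. */
typedef struct {
    uint8_t type;     /* message type */
    uint8_t src;      /* source task id */
    size_t  start;    /* offset of first byte in the circular buffer */
    size_t  length;   /* number of bytes */
} message_struct;

/* Channel object: byte buffer plus write index. In the real version
   this struct would also hold the queue handle and the mutex, so the
   whole channel travels as one object. */
typedef struct {
    uint8_t data[CBUFF_SIZE];
    size_t  head;             /* next write position */
} cbuff_t;

/* Copy input_message into the buffer and fill in the descriptor.
   RTOS version: take the mutex here, do the copy, push *out onto the
   queue, then give the mutex back. Wrap-around is handled by
   splitting the copy at the end of the array. */
int write_to_cbuff(cbuff_t *cb, const uint8_t *input_message, size_t len,
                   uint8_t type, uint8_t src, message_struct *out)
{
    if (len > CBUFF_SIZE)
        return -1;                        /* message cannot fit */

    out->type   = type;
    out->src    = src;
    out->start  = cb->head;
    out->length = len;

    size_t first = CBUFF_SIZE - cb->head; /* bytes until wrap-around */
    if (first > len)
        first = len;
    memcpy(&cb->data[cb->head], input_message, first);
    memcpy(&cb->data[0], input_message + first, len - first);
    cb->head = (cb->head + len) % CBUFF_SIZE;
    return 0;
}
```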
I might be tempted to move the mutex into the circular buffer code, as long as it could handle messages coming out in a slightly different order than they were enqueued, if they come from different tasks.
Yeah, I originally thought it would be nice to leave any RTOS code out of simple functions like the circular buffer, and wasn’t quite sure of the order in which the locking, circular buffer transfer, and queue transfer should take place. Suppose I could always add an extra layer…
Also, what do you think about the potential synchronization issues between the queue and the circular buffer? The corresponding queue_read() function would ideally block on the queue rather than having to take the mutex every time before checking, but of course there is the chance that it could take the queue element and get interrupted by the write_to_cbuff() before it is able to take the mutex. Since right now it’s strictly FIFO I don’t think it should be a problem, but once I start wanting to prioritize messages to the front of the queue, then I will have to consider the synchronization issues.
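To make the strict-FIFO case concrete, here is a bare sketch of the reader side (placeholder names again, and the RTOS blocking/locking reduced to comments; the buffer struct is repeated with a tail index added). Since messages come out in write order, the reader always copies from the tail, and the writer only ever moves the head, so getting preempted between the queue receive and the mutex take shouldn’t corrupt anything as long as the buffer is never overfilled:

```c
#include <stdint.h>
#include <stddef.h>

#define CBUFF_SIZE 256

typedef struct {
    uint8_t data[CBUFF_SIZE];
    size_t  head;   /* writer-owned: next write position */
    size_t  tail;   /* reader-owned: next read position */
} cbuff_t;

/* RTOS version: block on the queue first (no mutex needed for the
   receive itself), then take the mutex only around the copy and the
   tail update. `length` would come from the received message_struct. */
size_t queue_read(cbuff_t *cb, uint8_t *dst, size_t length)
{
    for (size_t i = 0; i < length; i++)
        dst[i] = cb->data[(cb->tail + i) % CBUFF_SIZE];
    cb->tail = (cb->tail + length) % CBUFF_SIZE;  /* mutex held here */
    return length;
}
```

Once messages can jump the queue, the start offset in the message_struct stops being redundant, and this tail-only scheme no longer holds.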
Also, rather than copying the data into a circular buffer, I often find it better to have a list of buffers: tasks needing a buffer get one from the list and pass the buffer address as (part of) the message, and the consuming task then puts the buffer back on the free list. This way I never need to copy the data from one buffer to another, like your circular buffer seems to be doing.
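If I understand the free-list idea, it would look roughly like this (all names hypothetical; in FreeRTOS the free list itself could be a queue of pointers, so a task needing a buffer can block until one is returned):

```c
#include <stddef.h>
#include <stdint.h>

#define POOL_COUNT 4    /* number of fixed-size buffers */
#define BUF_SIZE   128  /* size of each buffer */

/* A fixed set of equal-sized buffers plus a stack of free pointers.
   Producers allocate, fill, and send just the pointer on the queue;
   the consumer returns the buffer when done. No data copying. */
static uint8_t  pool[POOL_COUNT][BUF_SIZE];
static uint8_t *free_list[POOL_COUNT];
static int      free_top;

void pool_init(void)
{
    for (int i = 0; i < POOL_COUNT; i++)
        free_list[i] = pool[i];
    free_top = POOL_COUNT;
}

/* Returns NULL when the pool is exhausted; an RTOS version would
   instead block on the free-list queue. */
uint8_t *buf_alloc(void)
{
    return free_top ? free_list[--free_top] : NULL;
}

/* Consumer puts the buffer back on the free list when finished. */
void buf_free(uint8_t *b)
{
    free_list[free_top++] = b;
}
```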
So it would essentially be a heap of fixed-length message arrays? Ideally, I wish I could just throw the message_struct and the message array into the queue; that would definitely save me a lot of trouble. But my message arrays vary anywhere from a few bytes to a hundred, so any fixed-length solution seemed inefficient to me. And I will probably end up with around 15 different tasks, each with their own message queue…